
ChatGPT's @Model Switch: Instant AI Switching

ChatGPT introduces an inline model-switching feature using @ mention syntax, allowing users to switch between the GPT-4o, o1, and o1-mini models mid-conversation.


What It Is

ChatGPT now supports inline model switching through a simple @ mention syntax. Instead of starting a new conversation or navigating through menus, users can type @gpt-4o, @o1, or @o1-mini directly in the message box to switch between available models while preserving the entire conversation history.

This feature operates similarly to mentioning a colleague in Slack or Teams. The @ symbol followed by the model name triggers an immediate switch, and the next response comes from the newly selected model. The conversation context remains intact, meaning the new model receives all previous messages and can continue the discussion without requiring a summary or re-explanation of what’s been covered.
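The same context-preserving pattern is easy to picture at the API level. The sketch below is illustrative only, not ChatGPT's internal mechanism: it builds Chat Completions-style request payloads (the `build_request` helper and the sample history are assumptions for demonstration) to show that the model name is just one field of a request, while the full message history travels with every call, so swapping the model keeps all prior context.

```python
# Sketch: mid-conversation model switching as it might look at the API level.
# Illustrative only -- this assembles request payloads, it does not call any API.

def build_request(model: str, history: list[dict], user_message: str) -> dict:
    """Assemble a Chat Completions-style payload for the chosen model."""
    messages = history + [{"role": "user", "content": user_message}]
    return {"model": model, "messages": messages}

# Hypothetical prior exchange with gpt-4o.
history = [
    {"role": "user", "content": "Here is my sorting function..."},
    {"role": "assistant", "content": "The bug is an off-by-one in the loop bound."},
]

# Switch to o1 without losing the earlier exchange: only the model field changes.
req = build_request("o1", history, "Now analyze the fixed version's complexity.")
assert req["model"] == "o1"
assert len(req["messages"]) == 3  # prior context plus the new question
```

Because the new model receives the same message list, it can pick up mid-discussion exactly as the article describes.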

The discovery of this feature highlights how OpenAI occasionally ships functionality without formal announcements, leaving users to stumble upon capabilities through experimentation or community sharing.

Why It Matters

This feature addresses a common friction point in AI-assisted workflows. Different models excel at different tasks: GPT-4o handles general conversation and creative work well, while the o1 models specialize in complex reasoning and multi-step problem solving. Previously, switching between them meant losing context or manually copying conversation threads.

Developers working on technical problems benefit significantly. When debugging code, a developer might start with GPT-4o for quick syntax fixes, then switch to @o1 when encountering a complex algorithmic challenge requiring deeper reasoning. The ability to maintain context means the reasoning model already understands the codebase structure, previous attempts, and constraints without repetition.

Research teams comparing model outputs can now conduct A/B testing within a single thread. Instead of maintaining parallel conversations, they can ask the same question to different models sequentially and directly compare reasoning approaches. This streamlines evaluation workflows and makes model selection more empirical.

The feature also changes how users think about model selection. Rather than committing to one model for an entire session, conversations can adapt dynamically. Start broad with GPT-4o, narrow down with o1-mini for specific calculations, then return to GPT-4o for synthesis.

Getting Started

To use the model switching feature, simply type the @ symbol followed by the model name in the ChatGPT message input:

@gpt-4o Can you explain this concept more simply?
@o1 Now solve this optimization problem step by step
@o1-mini Quick calculation: what's the time complexity here?

The model switch happens immediately when the message is sent. The response will come from the newly selected model, and all subsequent messages continue with that model until another switch command is issued.

For best results, include the actual question or prompt in the same message as the switch command. While typing just @o1 works, combining it with the request in one message keeps the conversation flowing naturally.
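The `@model prompt` shape described above can be sketched as a small parser. This is a guess at the behavior, not OpenAI's implementation: the model list and the fallback rule (an unrecognized @ mention is treated as ordinary text, and a bare `@model` yields an empty prompt) are assumptions based on the usage described in this article.

```python
import re

# Models assumed available per the article; the real set depends on subscription tier.
KNOWN_MODELS = {"gpt-4o", "o1", "o1-mini"}

def parse_switch(message: str, current_model: str) -> tuple[str, str]:
    """Return (model, prompt) for a message that may start with an @ mention.

    Falls back to the current model when there is no recognized mention.
    """
    match = re.match(r"@([\w.-]+)\s*(.*)", message, flags=re.DOTALL)
    if match and match.group(1) in KNOWN_MODELS:
        return match.group(1), match.group(2).strip()
    return current_model, message

assert parse_switch("@o1 Solve this step by step", "gpt-4o") == ("o1", "Solve this step by step")
assert parse_switch("Just a normal message", "gpt-4o") == ("gpt-4o", "Just a normal message")
assert parse_switch("@o1-mini", "gpt-4o") == ("o1-mini", "")  # bare switch, empty prompt
```

The last case mirrors the article's note: typing just `@o1-mini` switches the model, but bundling the request into the same message keeps the conversation flowing.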

Access to specific models depends on the ChatGPT subscription tier. ChatGPT Plus and Team subscribers can access all mentioned models, while free tier users have limited model availability. Check https://openai.com/chatgpt/pricing for current access levels.

Context

This inline switching differs from the model selector dropdown in the ChatGPT interface. The dropdown requires clicking away from the conversation, selecting a model, and often starting fresh. The @ mention approach keeps users in the flow of conversation.

Other AI platforms handle model switching differently. Claude from Anthropic requires starting new conversations for different model tiers. Google’s Gemini interface similarly lacks mid-conversation switching. This positions ChatGPT’s approach as notably more flexible for users who regularly work across model capabilities.

The feature has limitations worth noting. Switching models doesn’t retroactively change previous responses; it only affects subsequent messages. Additionally, some models have different context window sizes, which could theoretically cause issues in very long conversations, though this appears rare in practice.

The undocumented nature of this feature raises questions about stability. Features shipped without official announcements sometimes change or disappear in updates. Users building workflows around this capability should remain aware that OpenAI hasn’t formally committed to maintaining it.