Generative Artificial Intelligence (AI) is evolving beyond standalone tools into interconnected systems. One emerging trend is chaining Custom GPTs together using the @-mention syntax. This allows users to switch between different Custom GPTs within the same conversation.
What Is GPT Chaining with Mentions?
By typing “@” followed by a Custom GPT’s name, users can invoke its expertise mid-conversation. This allows different GPTs to contribute their specialised strengths in sequence.

For example, a workflow might involve:
| Chaining flow | Description |
| --- | --- |
| New Chat → @Summariser → @Task Planner | A long report is condensed into three bullet points by the Summariser GPT, then converted into an actionable project plan by the Task Planner GPT. |

Each @-mention passes the conversation context to the next GPT, allowing them to build on each other's outputs.
Advantages of GPT Mentions
Using the @-syntax creates opportunities for building flexible, multi-step workflows without needing to switch between chats or tools.
Key advantages include:
- Access to expertise: Tap into already-built GPTs without needing to create new ones from scratch.
- Seamless integration: Combine multiple specialised GPTs into a single, continuous conversation flow.
- Time-saving: Eliminate repetitive copy-paste steps and streamline complex processes.
- Specialised capabilities: Each GPT brings unique training and focus, which can be called upon to meet the specific needs of a task.
- Creative flexibility: Users can experiment by chaining different GPTs together to design innovative workflows tailored to their needs.
Limitations of GPT Mentions
Despite its potential, chaining GPTs with mentions also presents challenges:
- Inconsistency: Some GPTs fail to follow workflow instructions precisely. For example, when chaining two GPTs, one might skip steps or add extra details that were not requested. The next section of this article describes prompt-engineering techniques to mitigate this.
- Dependency: If the original GPT is removed or updated, workflows may break.
- Limited control: If a user did not build all the GPTs in the chain, they cannot fully customise behaviours.
- Performance issues: Invoking multiple GPTs can slow response times and lead to long, fragmented chat histories.
Improving GPT Chaining with Prompt Engineering
Micro-Custom GPT Design for Workflows Using Multiple GPTs
When multiple GPTs are mentioned together in a single conversation, inefficiencies can arise if each GPT tries to do too much or is not sure of its role. To mitigate this, complex workflows can be broken down into smaller steps, where each GPT only has one specific task to complete before passing the results along to the next, thereby preventing overlaps or confusion between different roles.
For example:
| Chaining flow | Description |
| --- | --- |
| New Chat → @Summariser → @Task Planner | The @Summariser GPT condenses a long report into exactly three bullet points capturing outcomes and next steps. The @Task Planner GPT then takes those bullets and converts them into an actionable project plan with tasks, owners, and deadlines. Each GPT focuses on one task, ensuring the handoff is clean and predictable. |
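The micro-GPT principle above can be sketched in plain Python: each stage is a single-responsibility function that receives the prior stage's output, mirroring how each @-mentioned GPT inherits the conversation context. This is an illustrative simulation, not a call to any OpenAI API; the function names, the three-bullet rule, and the plan fields are assumptions made for the example.

```python
# Illustrative sketch: each "GPT" is a single-responsibility function,
# and the shared conversation context is passed along the chain.

def summariser(report: str) -> list[str]:
    """Condense a long report into exactly three bullet points (simulated)."""
    sentences = [s.strip() for s in report.split(".") if s.strip()]
    # Keep the first three sentences as stand-in "key outcomes".
    return [f"- {s}" for s in sentences[:3]]

def task_planner(bullets: list[str]) -> list[dict]:
    """Turn summary bullets into an actionable plan (simulated)."""
    return [
        {"task": b.lstrip("- "), "owner": "TBD", "deadline": "TBD"}
        for b in bullets
    ]

report = (
    "Q3 revenue grew 12 percent. Churn rose in the SMB segment. "
    "The team proposes a retention pilot. Budget review is pending."
)
bullets = summariser(report)   # first GPT's contribution
plan = task_planner(bullets)   # second GPT builds on it
```

Because each function does exactly one thing before handing off, neither stage can overlap the other's role, which is the same property the micro-GPT design aims for.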
Ensuring Reliability in Multi-Task, Single GPTs
In addition to step-based sequencing across multiple GPTs, a single multi-task Custom GPT can also be designed to handle its own workflow reliably. This is achieved by giving it a system prompt that tells it to first validate the current conversation context before deciding which of its pre-defined steps to execute.
For example, a GPT might be designed to:
- Summarise input if a long report is detected
- Create a task list if it receives a summary
- Generate a project timeline if tasks are already defined
Before acting, the GPT checks the existing conversation to determine which stage has already been completed and then continues from the correct step instead of repeating or skipping tasks.
This way, the GPT adapts to the user’s flow without losing track of where it is in the process. Even if the conversation jumps ahead, the GPT can re-anchor itself by validating context and following its original step sequence accordingly.
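The validate-then-act pattern above can be sketched as a small router: it inspects the conversation so far, infers which stage has already been completed, and returns the next pre-defined step. The stage markers ("SUMMARY:", "TASKS:") are hypothetical conventions that such a GPT's system prompt might instruct it to emit at each stage; they are assumptions for this example only.

```python
# Illustrative sketch: decide the next step by validating conversation context.
# "SUMMARY:" and "TASKS:" are hypothetical markers the system prompt would
# tell the GPT to prepend to each stage's output.

def next_step(conversation: str) -> str:
    """Return the pre-defined step the GPT should run next."""
    if "TASKS:" in conversation:
        return "generate_timeline"   # tasks already defined
    if "SUMMARY:" in conversation:
        return "create_task_list"    # summary exists, no tasks yet
    return "summarise_input"         # nothing processed yet

next_step("User pasted a long report...")      # summarise first
next_step("SUMMARY: three key bullets...")     # summary done, plan tasks
next_step("SUMMARY: ... TASKS: 1. Pilot ...")  # tasks done, build timeline
```

Checking for the most advanced marker first is what lets the GPT re-anchor itself even when the conversation jumps ahead, rather than repeating an earlier stage.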