I Googled "AI tools for consultants" last week. The top results recommended between seven and fourteen separate tools: Auxi for slides, Otter for transcription, Wrike for project management, Tableau for data, Zapier for automation, ChatGPT for drafting, Granola for meeting notes.
Not one article asked the obvious question: where does your client context live across all of them?
That's the gap. Not a tool gap. A system gap.
Context engineering for consultants
The practice of structuring your professional reasoning, client knowledge, and engagement context into portable files that any AI tool can use. Instead of re-briefing every tool on every client, you build the context once and load it wherever you work.
- Every "AI tools for consultants" article lists 7-14 disconnected tools. None asks where your client context lives across all of them
- A consultant running 3-5 engagements needs three context layers: a role file (how you think), client files (per engagement), and project briefs (per task)
- Professional services showed the biggest AI adoption increase of any sector, jumping from 33% to 72% between 2023 and 2024 (McKinsey State of AI). Yet most adoption is tool-level, not system-level
- McKinsey's internal AI tool Lilli saved approximately 1.5 million consultant hours in 2025. The advantage was not the tool. It was the structured context behind it
- The tool changes. The context stays. That is the system consultants actually need
The listicle problem
The "AI tools for consultants" genre follows a pattern. Each article lists 7 to 14 tools, each solving one task: slide generation, transcription, scheduling, proposal drafting, data analysis. Some are genuinely useful. Auxi automates slide formatting. Granola captures meeting notes without joining the call as a visible bot. Alfred handles email triage. These are real productivity gains.
But here's what happens in practice. You use ChatGPT to draft a client proposal on Monday. On Tuesday, you switch to Claude to analyse a competitor dataset for the same client. On Wednesday, you open Gemini to research a market question that came up in a call. Each tool starts from zero. None of them knows the client's priorities, your evaluation framework, or the decisions you made last week.
You've added seven tools. You haven't built a system.
The articles from Auxi, Juma, Get Alfred, and others are optimised for affiliate revenue, not for solving the underlying workflow problem. They list tools without addressing the question a senior consultant actually needs answered: how do I make AI work across my entire practice, multiple clients, multiple platforms, multiple months, without re-explaining everything every session?
What consultants actually need
A strategy consultant running three to five client engagements doesn't need more tools. They need a system that makes any AI tool immediately useful for any client, on any day.
That system has three components:
A role file: your evaluation criteria, your communication standards, your quality thresholds, your decision-making heuristics. This is the file that ensures every AI output reflects your professional judgement, not a generic default. It's typically 500 to 1,500 words and rarely changes.
A client file per engagement: the client's priorities, stakeholder dynamics, key terminology, relevant constraints, past decisions. You update this after significant meetings or shifts. Each one is 300 to 800 words. A consultant running five engagements has five client files.
A project brief per task: the specific deliverable, audience, format, and data. This changes every session.
You load the role file and the relevant client file at the start of each AI session, regardless of which tool you're using. In Claude, they go into a Project. In ChatGPT, they upload to a conversation or Custom GPT. In Gemini, they paste into a Gem. Via MCP (the open protocol now adopted by OpenAI, Google, Microsoft, and Anthropic), they can be delivered automatically.
The tool changes. The context stays.
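The three layers above can be sketched as plain files in a folder, stitched together at the start of each session. This is a minimal illustration, not a prescribed implementation: the directory layout, file names, and `build_session_context` helper are all hypothetical.

```python
from pathlib import Path

# Hypothetical layout (names are illustrative):
#   context/role.md                  layer 1: how you think; rarely changes
#   context/clients/client_a.md      layer 2: one file per engagement
#   context/briefs/client_a_deck.md  layer 3: one brief per task

def build_session_context(client: str, brief: str = "") -> str:
    """Concatenate the context layers into one preamble that can be
    pasted into any AI tool, or uploaded as files where supported."""
    base = Path("context")
    parts = [
        (base / "role.md").read_text(),                    # role file
        (base / "clients" / f"{client}.md").read_text(),   # client file
    ]
    if brief:
        # Project brief changes every session, so it is optional here
        parts.append((base / "briefs" / f"{brief}.md").read_text())
    return "\n\n---\n\n".join(parts)
```

The same output works everywhere: pasted into a Perplexity system prompt, uploaded to a ChatGPT conversation, or dropped into a Claude Project.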
A week in the life
Monday morning: you open Claude with your role file and Client A's context loaded. You draft a stakeholder brief. The output reflects your structure preferences, the client's specific terminology, and the prioritisation framework you've used for a decade. No re-explaining required.
Tuesday: you switch to ChatGPT for a quick competitive analysis on Client B. You load your role file and Client B's context. ChatGPT doesn't know what you did in Claude yesterday. It doesn't need to. The context tells it everything relevant about this client and how you work.
Wednesday: you use Perplexity for research on Client C. You paste your role context into the system prompt. Even in a tool without file upload, your standards travel with you.
Thursday: after a significant client call, you update Client A's file. New priorities, a shifted timeline, a stakeholder who's moved from supportive to sceptical. Every future AI interaction for that client is now better informed.
That's compounding. Each update, each engagement, each conversation makes the system more useful. This is what separates a professional with a context architecture from someone switching between fourteen disconnected tools.
What the big firms already know
The major consulting firms have invested heavily in this problem, and the results validate the approach.
McKinsey's internal AI platform Lilli is used by 72% of its professionals, generating over 500,000 prompts monthly. In 2025, Lilli saved approximately 1.5 million consultant hours on research and knowledge synthesis, according to Global Managing Partner Bob Sternfels at CES in January 2026. The platform works not because McKinsey has a better AI model than what is available commercially. It works because McKinsey structured its institutional knowledge into a format the model can use.
Bain deployed ChatGPT Enterprise to every employee in 2024, and its teams have built thousands of custom GPTs for specific client challenges and internal workflows. PwC announced a $1 billion AI investment over three years in April 2023 and reports internal efficiency gains from systematic AI application across its 65,000-person US workforce. Accenture's generative AI bookings hit $5.9 billion in FY2025, nearly doubling year on year.
The pattern is consistent: the firms that are winning with AI have invested in structured context, not just tool access. An independent consultant cannot build Lilli. But the principle behind it (give the AI structured professional knowledge instead of starting from zero) scales down to a single practitioner with a text file.
Why this matters more for consultants than anyone else
Consultants face a specific version of the context problem that most professionals don't: multiple clients, each with their own domain, terminology, priorities, and sensitivities, all running simultaneously.
McKinsey's State of AI survey tracked organisational AI adoption jumping from roughly 33% in 2023 to 72% in 2024, with professional services showing the biggest increase of any sector. A LexisNexis survey found 76% of consulting firms are already implementing or trialling generative AI tools, with research and documentation as the primary use cases. Yet the productivity gains remain uneven: the firms that adopted tools without context infrastructure report marginal improvements, while those that built structured context capture the value.
| Approach | What happens | Outcome |
|---|---|---|
| Tool-only (no context system) | Each AI session starts from zero. Consultant re-explains the client, the project, and their standards every time. | 30+ min/day lost to re-briefing. Output requires heavy editing. AI feels like a toy. |
| Context-engineered system | Role file + client file loaded at session start. AI knows the engagement, the standards, and the history. | 5 min setup per session. Output starts at 80%. AI compounds across engagements. |
The consultant who builds a portable context system eliminates that tax. Not by using a better tool, but by building the layer that sits above all tools.
Getting started
Membership is designed specifically for this use case: a calibration system for role files, client files, and project briefs, with a setup path for loading them into Claude, ChatGPT, and Gemini. If you run multiple client engagements and want AI that knows which client it's working on without being told, this is the fastest path.
Or start here: pick your most active client engagement. Write down the five things you re-explain every time you use AI for that client. Their priorities. Their constraints. Their preferred structure. The terminology they use. The decisions that have already been made. Save it as a text file. Load it next time.
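In file form, those five things might look like the template below. The headings follow the list above; the client name, date, and example entries are placeholders, not a required format.

```markdown
# Client context — [client name]
Last updated: [date]

## Priorities
- [what the client is trying to achieve this quarter]

## Constraints
- [budget, timeline, regulatory, or political limits]

## Preferred structure
- [how they like deliverables framed, e.g. exec summary first]

## Terminology
- [the words they use internally, and what they mean]

## Decisions already made
- [choices that are settled and should not be reopened]
```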
That's your first client context file. The second one is easier.
