Last month, ChatGPT told me it remembered that I prefer British English, that I work at Stears, and that I'd been asking a lot about data infrastructure recently.
All true. None useful.
I'd asked it to help me think through a trade-off in our data architecture: whether to build a new ingestion pipeline in-house or extend the one we already had. The kind of decision where the answer depends entirely on how you weigh speed-to-market against long-term maintenance cost, and how much technical debt you're willing to carry for three months while the team ships something else. What came back was a competent pros-and-cons list. Generic. Balanced. The kind of analysis a smart intern produces on day one, before they've learned how you actually make decisions.
My actual reasoning starts somewhere different. When I evaluate a build-versus-extend trade-off at Stears, I ask: what does this cost us in focus, and does the team have the cycles to absorb the maintenance load in Q3? ChatGPT knew what I was working on. It had no model of how I weigh these things.
That gap, between recognition and understanding, is why I spent the last few months building a system to manage what my AI remembers.
Two-Layer Context Model: a framework that separates your profile (structured, portable cognitive context you build and govern) from platform memory (automatic, opaque informational context locked inside one tool). Layer 1 captures how you think. Layer 2 captures what you've told it. Most professionals have Layer 2 and no Layer 1.
- Platform memory (ChatGPT, Claude) captures information about you: preferences, facts, recurring topics. It does not capture how you think: your decision frameworks, quality standards, reasoning patterns.
- The professional unlock isn't turning memory off. It's building the layer that memory can't, then letting it govern what loads per task.
What your AI's memory actually remembers
OpenAI's memory feature works in two modes. There are explicit saves: things you ask ChatGPT to remember ("I prefer concise responses," "My team uses Notion for documentation"). And there are implicit inferences: patterns ChatGPT picks up from your conversations over time. Since April 2025, the system references all your past conversations. Plus and Pro users get long-term understanding across sessions. Free users get lighter continuity.
The feature solves a real irritation. Before memory, every conversation started from zero. You re-explained your role, your preferences, your context. Memory fixes the repetitive setup problem, and for straightforward tasks, it works well. Simon Willison, the developer and writer, has called the result a "dossier": a profile the system quietly builds about you over time. His concern was opacity: you can't fully see what it's inferred.
But there's a more fundamental issue than transparency.
Everything memory captures is informational.
Preferences. Facts. Recurring patterns. Things a competent assistant would jot down in a notebook. Now think about what actually makes your work good. It's not the facts about you. It's the frameworks you apply to those facts. How you evaluate a partnership. What "good enough" means for a client deliverable versus an internal memo. Which risks you tolerate and which ones you escalate. The three questions you always ask before committing resource to anything new.
Platform memory captures none of this. OpenAI's own documentation says memory is designed for "high-level preferences and details" and "should not be relied on to store exact templates or large blocks of verbatim text." It's a feature built for convenience. Not for the kind of structured professional context that determines whether the AI's output matches your judgement.
It knows what you work on. It has no model of how you think about what you work on.
The overload nobody planned for
You might assume this is a niche concern: a problem for people who overthink their AI setup. But the data says otherwise. ActivTrak's 2026 State of the Workplace report analysed 443 million hours of work activity across thousands of organisations. Their finding: after AI adoption, time spent on work didn't decrease. It increased. Across every category.
Focus efficiency fell to 60%, a three-year low. The average focus session now lasts 13 minutes and 7 seconds, down 9% since 2023. As Harvard Business Review put it in February 2026: AI doesn't reduce work, it intensifies it. Employees complete more tasks but don't work fewer hours, because AI expands the scope of what feels possible and expected.
Then there's what BCG's researchers are calling "AI brain fry." Their March 2026 study of nearly 1,500 US workers found that those with high levels of AI oversight reported 14% more mental effort, 12% more mental fatigue, and 19% greater information overload. The downstream effects are sharp: 33% more decision fatigue. 39% more major errors.
Here's the detail that stopped me: BCG found a curve. Going from one AI tool to two, perceived productivity rises. At three tools, it peaks. Beyond four, it drops again.
More AI doesn't mean better work. More unstructured AI means worse work.
Workday's 2026 research adds the punchline: 37 to 40% of time supposedly saved by AI gets consumed by reviewing, correcting, and verifying the output. The efficiency gain is real, but so is the tax. And when your AI's memory is an undifferentiated accumulation of everything you've ever discussed (preferences mixed with project context mixed with personal asides), the signal-to-noise problem compounds with every conversation.
The problem isn't that AI remembers too much. It's that it remembers without structure, and more context, unstructured, makes models worse, not better.
This isn't just an intuition. Stanford researchers demonstrated it empirically. In their "Lost in the Middle" study, Liu et al. found that language model performance is strongest when relevant information sits at the beginning or end of the input, and degrades significantly when it's buried in the middle: accuracy dropped over 30% depending on position. Chroma's 2025 follow-up tested 18 frontier models and found the pattern holds in every one. It's architectural, not temporary.
So when platform memory accumulates months of conversations into an ever-growing context layer, the model doesn't get smarter about you.
It gets noisier.
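There's a practical corollary for anyone assembling context by hand rather than relying on accumulated memory: put the pieces that matter most at the edges of the prompt. Here's a minimal sketch of that ordering, where the relevance scores stand in for whatever ranking you already have (the chunks and scores are invented for illustration):

```python
def order_for_edges(chunks: list[str], relevance: dict[str, float]) -> list[str]:
    """Arrange context chunks so the highest-ranked ones sit at the start
    and end of the prompt, where 'Lost in the Middle' found models attend
    best; the weakest material ends up buried in the middle."""
    ranked = sorted(chunks, key=lambda c: relevance[c], reverse=True)
    front, back = [], []
    for i, chunk in enumerate(ranked):
        # Alternate placement: best chunk at the front, second-best at
        # the back, and so on inward.
        (front if i % 2 == 0 else back).append(chunk)
    return front + back[::-1]

# Invented example: two high-signal chunks, two low-signal ones.
chunks = ["decision criteria", "old meeting notes", "risk tolerance", "travel prefs"]
scores = {"decision criteria": 0.9, "risk tolerance": 0.8,
          "old meeting notes": 0.3, "travel prefs": 0.1}
print(order_for_edges(chunks, scores))
# ['decision criteria', 'old meeting notes', 'travel prefs', 'risk tolerance']
```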
Two layers, one model
The framework I built after that failed trade-off analysis is what I call the Two-Layer Context Model. Not because ChatGPT's memory was broken: it was doing exactly what it was designed to do. But because what it was designed to do wasn't enough.
| | Layer 1: Your profile | Layer 2: Platform memory |
| --- | --- | --- |
| Contains | Reasoning architecture; voice & communication style; domain expertise; operational patterns | Preferences & settings; facts about your role; recurring topics; implicit inferences |
| Governed by | You | The platform |
| Portability | Fully portable across tools | Locked inside one tool |
| Loading | Per task (need-to-know) | Always on (everything-at-once) |
| Visibility | Readable, editable, auditable | Opaque: can't fully audit |
Layer 1 is your profile. It's structured context you build deliberately: your reasoning architecture (how you make decisions), your voice (how you communicate), your domain expertise (what you know and how it applies). It's portable, not locked inside ChatGPT or Claude. You own the file. You can read every line. You decide what's in it, and you update it when your thinking evolves.
Layer 2 is platform memory. It's the automatic context your AI tool infers from your conversations: your preferences, recurring topics, facts about your role and company. It's useful for convenience: remembering that you prefer British English, that you work in private capital markets, that your team is in Lagos and London.
The two layers do different things. Layer 1 captures cognition: how you think. Layer 2 captures information: what you've told it. Layer 1 is governed by you. Layer 2 is governed by the platform. Layer 1 is portable. Layer 2 is locked inside one tool.
Most professionals have Layer 2 and no Layer 1. Their AI knows their preferences but not their judgement.
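To make Layer 1 concrete, here is what a minimal profile might look like on disk. The sections and wording are illustrative, not a spec; the point is the medium: plain text you can read line by line, version, and move between tools. A sketch:

```python
from pathlib import Path

# An illustrative Layer-1 profile. Section names and contents are
# hypothetical examples of the kind of cognitive context described
# above, not a required format.
PROFILE = """\
## reasoning
Build-versus-extend calls: weigh the cost in focus first, then whether
the team can absorb the maintenance load this quarter.

## voice
British English. Direct. Short sentences over hedged paragraphs.

## domain
Data infrastructure; private capital markets context.

## operations
Weekly review on Fridays. Escalate any risk that touches a client
deliverable; tolerate the rest with a dated paydown plan.
"""

# Plain text on disk: inspectable, diffable, portable across tools.
Path("profile.md").write_text(PROFILE, encoding="utf-8")
```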
Why I built a system instead of turning it off
When I first noticed this gap, the obvious move was to turn ChatGPT's memory off entirely. Some people I respect have done exactly that: clean slate every session, no accumulated assumptions, full control.
I tried it.
It was too blunt.
Platform memory is genuinely useful for the small stuff. I don't want to re-explain my company every session. I don't want to re-specify my formatting preferences. The convenience layer has real value. It just shouldn't be the only layer.
So instead of turning memory off, I built the layer that was missing. I documented my decision-making criteria: the evaluation frameworks I actually use, the quality standards that matter in my domain, the reasoning patterns that make my analysis mine. I structured it so that different parts load for different tasks. A decision brief pulls my evaluation criteria and risk tolerance. A stakeholder email pulls my communication voice and the relationship history. A weekly review pulls my prioritisation principles.
Think of it as need-to-know access for an AI. Platform memory handles the small stuff: preferences, recurring facts, the informational layer. The profile layer handles the cognitive work, and it loads deliberately, per task, with governance.
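Here's a minimal sketch of that per-task loading. The section and task names mirror the examples above but are otherwise my invention; the mechanism, a small map from task type to profile sections, is the whole idea:

```python
# Hypothetical profile sections, keyed by name. In practice these would
# be read from the plain-text profile file rather than defined inline.
PROFILE_SECTIONS = {
    "evaluation_criteria": "Weigh cost in focus against this quarter's maintenance load.",
    "risk_tolerance": "Carry technical debt only with a dated paydown plan.",
    "voice": "British English. Direct. Short sentences.",
    "prioritisation": "Client deliverables first, then platform work.",
}

# Need-to-know routing: each task type names only the sections it needs.
TASK_CONTEXT = {
    "decision_brief": ["evaluation_criteria", "risk_tolerance"],
    "stakeholder_email": ["voice"],
    "weekly_review": ["prioritisation"],
}

def context_for(task: str) -> str:
    """Assemble only the profile sections relevant to the given task."""
    names = TASK_CONTEXT.get(task, [])
    return "\n\n".join(f"[{name}]\n{PROFILE_SECTIONS[name]}" for name in names)

# Prepend the result to the prompt. A decision brief loads criteria and
# risk tolerance; an email to a stakeholder loads voice and nothing else.
print(context_for("decision_brief"))
```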
The result was visible in the first output. That data architecture trade-off I mentioned at the start? With platform memory alone, I got a generic pros-and-cons list. With the profile loaded, the AI opened with my actual decision criteria ("what does this cost in focus, and can the team absorb the maintenance load in Q3?") and structured the analysis around the trade-offs I actually weigh when I make build-versus-extend calls.
Same model. Same prompt. Different context layer.
The difference isn't marginal. It's the difference between an AI that takes dictation and one that anticipates how you think.
What this means if you use AI for real work
You don't need to build an elaborate system to apply this. The framework is the point.
Separate information from cognition. Your preferences and your decision frameworks are different things. Platform memory handles the first. You need a separate layer for the second: something that captures how you evaluate, prioritise, and communicate, not just what you've told the AI about yourself. When you review what your AI "knows" about you, ask: is this a fact, or is this how I think?
Own the cognitive layer. Whatever captures your reasoning should be readable, editable, and portable. Not locked inside one platform's opaque memory system. A plain text file you can inspect, version, and move between tools is more valuable than a sophisticated but invisible memory graph you can't audit. If you switch from ChatGPT to Claude next month, your preferences reset to zero. Your profile shouldn't.
Load context per task, not per relationship. A decision brief needs your evaluation criteria. A client email needs your voice and the stakeholder's communication preferences. A meeting debrief needs your action-item patterns. The cognitive layer shouldn't dump everything into every session. It should load what's relevant to the task at hand. Need-to-know, not everything-all-the-time.
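One way to run that "is this a fact, or is this how I think?" audit in practice is to export what your tools have accumulated and route each entry by hand. A toy sketch with invented entries, where the labels are the human judgement, not something a script can infer:

```python
# Invented memory entries, each labelled by hand as information (a fact
# about you) or cognition (how you think). The labelling is the audit;
# the code just routes the results to the right layer.
MEMORY_EXPORT = [
    ("prefers British English", "information"),
    ("works in private capital markets", "information"),
    ("evaluates partnerships by data quality first, commercial terms second", "cognition"),
    ("escalates any risk touching a client deliverable", "cognition"),
]

keep_in_platform_memory = [entry for entry, kind in MEMORY_EXPORT if kind == "information"]
move_to_profile = [entry for entry, kind in MEMORY_EXPORT if kind == "cognition"]

print("Leave in platform memory:", keep_in_platform_memory)
print("Move to your profile:", move_to_profile)
```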
There's a reasonable counterargument here, and I want to give it its due: models will keep improving. OpenAI's memory will get smarter. The systems will eventually learn to distinguish between "Abdul prefers British English" and "Abdul evaluates partnerships by data quality first, commercial terms second." The inference layer will catch up to the cognitive layer.
Maybe. But the content of your context (how you actually think, what quality means in your specific domain, which reasoning patterns produce your best work) requires human authorship. Models can get better at managing context mechanics. They won't get better at knowing how you think unless you tell them.
I still use ChatGPT's memory. I haven't turned it off. But I've built the layer above it: the one that takes what memory captures and makes it useful for the work that actually matters.
Your AI knows your name. The question is whether it knows your judgement. And that part, for now, is still up to you.
Sources
- OpenAI. What is Memory? OpenAI Help Center, 2025.
- OpenAI. Memory FAQ. OpenAI Help Center, 2025.
- Simon Willison. I really don't like ChatGPT's new memory dossier. May 2025.
- ActivTrak Productivity Lab. 2026 State of the Workplace. March 2026.
- Harvard Business Review. AI Doesn't Reduce Work: It Intensifies It. February 2026.
- BCG / Harvard Business Review. When Using AI Leads to "Brain Fry." March 2026.
- Axios. Workday and Alix Partners data shows AI's productivity paradox is real. January 2026.
- Liu et al. Lost in the Middle: How Language Models Use Long Contexts. Transactions of the ACL, 2024.
- Fortune. 'AI brain fry' is real. March 2026.
