Same model. Same question. 1,875 tokens of structured context. Completely different output.

Learned Context
The Essay

I Built a Paul Graham AI Profile

Abdul Saka-Abdulrahim · Wed 15 Apr 2026 · 3 min read

I asked Claude to evaluate a startup idea. A productivity tool for remote teams, nothing remarkable. What came back was exactly what you'd expect: four paragraphs about "target market considerations" and "competitive differentiation." Competent. Bloodless. The kind of analysis that could be about any company in any sector.

Then I loaded a single file, 1,875 tokens of structured context describing how Paul Graham thinks, and asked the same question.

The response was unrecognisable.

Claude stopped talking about target markets and started asking whether the founders had personally experienced the problem. It flagged the unsexy filter: PG's observation that nobody wants to build boring tools, which is exactly why boring tools are fertile territory. It applied the narrow-but-deep versus broad-but-shallow test. It caught that the pitch assumed growth would come from marketing rather than from the product being genuinely useful, and called that out as test-hacking.

Same model. Same question. The only variable was a structured text file.
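To make the experiment concrete, here's a minimal sketch of the setup using Anthropic's Python SDK. The model id and file name are illustrative, not the ones I used; the point is that the only thing that changes between the two calls is the system context.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

QUESTION = "Evaluate this startup idea: a productivity tool for remote teams."

def ask(system_context: str | None = None) -> str:
    """Ask the same question, with or without the structured profile."""
    kwargs = {"system": system_context} if system_context else {}
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model id
        max_tokens=1024,
        messages=[{"role": "user", "content": QUESTION}],
        **kwargs,
    )
    return response.content[0].text

baseline = ask()                                    # generic market analysis
profiled = ask(open("pg_core_profile.md").read())   # file name is an assumption
```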

Why I built it

Context engineering was everywhere last year. Shopify's CEO coined the term in June 2025. Karpathy endorsed it a week later. Anthropic formalised it in September. MIT Technology Review called it the story of 2025 in software development.

But almost all of that discourse is about developer tooling. Agents, RAG pipelines, MCP servers. Nobody was applying context engineering to a structured representation of how a specific professional thinks, communicates, and operates.

I chose Paul Graham because his thinking is extensively documented: over 200 essays spanning two decades, plus interviews and talks. If you can't build a strong context profile from that corpus, the approach doesn't work.

The process followed a seven-stage calibration engine. Every observation had to pass three tests: distinctiveness (would this be true of most senior professionals? if so, it's out), contrastive depth (does it capture what the person does and what they deliberately don't?), and actionability (if removed, would the AI produce different output?).
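If it helps to picture the three tests as a filter, here's a hypothetical sketch. The field names and example observations are mine, not the actual calibration engine; in practice each judgement is made by a human reading the corpus.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    text: str
    distinctive: bool   # False if most senior professionals would share it
    contrastive: bool   # True if it captures what the person does AND avoids
    actionable: bool    # True if removing it would change the AI's output

def passes_calibration(obs: Observation) -> bool:
    # An observation enters the profile only if it survives all three tests.
    return obs.distinctive and obs.contrastive and obs.actionable

candidates = [
    Observation("Values clear writing", False, False, False),  # true of everyone: cut
    Observation("Treats boring problem spaces as a buy signal, "
                "and distrusts ideas that sound impressive", True, True, True),
]
profile = [o.text for o in candidates if passes_calibration(o)]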

What emerged was a two-file system: a core profile at 1,875 tokens that loads every session, and a context library at 3,627 tokens that loads alongside it when deeper framework application is needed. Together, just over 5,500 tokens. Recognisably different AI behaviour within seconds.
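As a sketch of how the two files might be assembled at session start (the file names and trigger flag are assumptions, not the actual system):

```python
from pathlib import Path

CORE_PROFILE = Path("pg_core_profile.md")        # ~1,875 tokens, every session
CONTEXT_LIBRARY = Path("pg_context_library.md")  # ~3,627 tokens, on demand

def build_system_prompt(needs_deep_frameworks: bool = False) -> str:
    """The core profile always loads; the library joins it only when
    the task calls for deeper framework application."""
    parts = [CORE_PROFILE.read_text()]
    if needs_deep_frameworks:
        parts.append(CONTEXT_LIBRARY.read_text())
    return "\n\n".join(parts)
```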

What platform memory misses

ChatGPT's memory was wiped twice in 2025. In February, months of accumulated context vanished for users across the platform. In November, it happened again. Claude's Memory Synthesis is more thoughtful, but it is still passive, still platform-controlled, still optimised for what the platform decides matters rather than for what matters to you.

These are memory features, not memory systems. A feature is something the platform adds. A system is something you build, own, and control.

The research backs this up. Chroma tested 18 frontier models and found that every single one degraded as context length increased. No safe threshold. Performance declined continuously. The fix isn't bigger windows. It's better structure.

Anthropic demonstrated this directly. Their multi-agent research system, which gives each agent a clean, focused context window, outperformed a single-agent setup by 90.2%. The improvement came from managing context, not from a better model.

What a demo proves and what it can't

The PG profile captures frameworks and communication style from published material. It can make an AI reason through problems in his patterns and apply his named frameworks to new situations.

It cannot be Paul Graham. It doesn't have his lived experience, the specific founders he's backed, or the private doubts that never made it into essays. What it produces is a thinking partner that reasons in his patterns, not a replacement for his judgement.

That's precisely the point. If structured context can produce recognisably different AI behaviour from public essays alone, material reconstructed from the outside, the question is what it produces when the source material is you.

Your own profile, built from your actual decisions, conversations, and stakeholders, captures things no external observer could reconstruct.

Same model. Same question. Different context. Different everything.


Learned Context helps you build this system. Start with a free audit to see where your AI setup stands.
