My nephew is 11. He doesn't have a product requirements document. He doesn't know what a WebSocket is. What he has is the same gut-level instinct that makes UNO work. That feeling of slapping a card down and watching someone's face change.
His game idea was simple. Cards that attack. Cards that block. Cards that reverse things suddenly. No formal rules. No edge cases. Just the raw energy of a table erupting.
I'm COO of Stears, a financial data company. I spend my days on OKRs, revenue operations, and cross-functional delivery. I don't write TypeScript. I don't configure WebSocket servers. But I recognised something in my nephew's idea. There was a structure underneath the chaos. A natural competitive loop. High interaction. The potential for those "wait, what just happened?" moments that make card games addictive.
Four days later, the game was live at drawndestiny.org.
53 commits. 152 tests. 105 sound effects. A full multiplayer experience running on WebSockets, deployed on Railway, accessible to anyone with a browser. Built by someone who has never written a line of production code in his life.
This is the story of how that happened, and why I think it matters for anyone who operates a business.
- You don't need to write code to ship software. You need to write specs.
- AI tools like Claude Code have shifted the bottleneck from "can you implement?" to "can you design?"
- The same operator skills that run a company (scoping, prioritising, triaging) are the skills that ship a product.
- Shipping something real, with real users, in under a week, was the most instructive thing I've done in years.
The design was the hard part
There's a persistent myth that building software is about code. It isn't. Code is the execution layer. The hard part is the same thing that's hard about running any operation. Defining what "done" looks like. Resolving ambiguity before it compounds. Making the smallest number of decisions that constrain the largest number of outcomes.
His first iterations were chaos. Not the fun kind. Every playtest ended with players asking the same questions. "Can I reverse this?" "Does this block that?" "Who does this go to now?" The game was fun if someone explained it live. That's not scalable. That's not a product.
The breakthrough was a triangle.
Attack beats Defence. Defence absorbs Attack. Reverse redirects Attack. A closed loop. Simple enough for an 11-year-old. Structured enough to build on. This single diagram anchored every decision that followed.
But the Reverse cards nearly broke the game. They created infinite loops. Players bounced attacks back and forth, nobody took damage, the game went nowhere. The fix was to split Reverse into three tiers, each with different power and constraints.
Standard Reverse handles routine redirects. Special Reverse escalates the stakes. Once played, only Special or Ultimate cards can continue the chain. Ultimate Reverse ends everything instantly, sending full accumulated damage back to the original attacker.
This solved the infinite loop problem without removing the excitement. Players could still redirect attacks, but there was a natural escalation curve. The chaos had rules.
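To make the escalation curve concrete, here's a rough TypeScript sketch of how those three Reverse tiers might be enforced. The names (`ReverseTier`, `canContinueChain`, `resolveChain`) are mine for illustration, not the game's actual code.

```typescript
// Illustrative sketch of the three-tier Reverse chain rules described above.

type ReverseTier = "standard" | "special" | "ultimate";

interface ReverseCard {
  tier: ReverseTier;
}

// Once a Special Reverse is played, only Special or Ultimate cards may
// continue the chain; an Ultimate ends the chain outright.
function canContinueChain(chain: ReverseCard[], next: ReverseCard): boolean {
  const last = chain[chain.length - 1];
  if (last?.tier === "ultimate") return false; // chain already over
  if (last?.tier === "special" && next.tier === "standard") return false;
  return true;
}

// An Ultimate Reverse sends the full accumulated damage back to the
// original attacker; otherwise the damage lands on the current defender.
function resolveChain(chain: ReverseCard[], accumulatedDamage: number) {
  const last = chain[chain.length - 1];
  return last?.tier === "ultimate"
    ? { target: "originalAttacker", damage: accumulatedDamage }
    : { target: "currentDefender", damage: accumulatedDamage };
}
```

Because a chain can only escalate, it must terminate: there are finitely many Special and Ultimate cards in any hand, so the infinite bounce disappears by construction.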
Designing these constraints was harder than any code that implemented them.
Three documents before a single line of code
Here's where the operator mindset kicks in. Before I opened Claude Code, before anything was built, I wrote three documents. This is the same approach I use at Stears when scoping a new data product or launching a quarterly initiative. Define the target. Remove ambiguity. Make the code follow the system, not the other way around.
The Product Requirements Document set the success metrics. 85 percent or better match completion rate. Twelve to twenty minute median game length. 99 percent bug-free sessions. The Developer Game Spec defined the technical contract. Authoritative server architecture, idempotency keys on every action, validation on every message. The Rulebook codified the game into transferable, testable logic.
None of this required knowing how to code. It required knowing how to think clearly about what a system needs to do, and writing that down with enough precision that someone, or something, could execute against it.
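One spec line deserves unpacking: "idempotency keys on every action." The idea is that if a flaky connection makes the client resend a message, the server must not apply the same card play twice. A minimal sketch of the pattern, with names that are my own invention rather than the project's real code:

```typescript
// Sketch: the server remembers which action IDs it has already processed,
// so a retried WebSocket message cannot apply the same action twice.

interface PlayerAction {
  idempotencyKey: string; // client-generated unique ID per action
  type: string;
  payload: unknown;
}

class ActionLog {
  private seen = new Set<string>();

  // Runs the handler only the first time a given key arrives.
  apply(action: PlayerAction, handler: (a: PlayerAction) => void): boolean {
    if (this.seen.has(action.idempotencyKey)) return false; // duplicate, ignore
    this.seen.add(action.idempotencyKey);
    handler(action);
    return true;
  }
}
```

Writing that requirement into the spec took one sentence. Knowing to write it is the operator skill.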
AI tools like Claude Code have shifted the bottleneck from implementation to design. The person who can write a clear spec now has more leverage than the person who can write a mediocre function. This is the biggest change in software in a decade, and most operators haven't noticed yet.
What Claude Code did was take those three documents and translate them into working software. It chose the architecture, a monorepo with shared types between client and server. It wrote the game engine using a command-event pattern. It set up WebSocket communication. It built the React frontend with Zustand state management.
I didn't tell it to use any of those technologies. I told it what the game needed to do. It figured out how.
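For readers curious what a "command-event pattern" means in practice, here's a stripped-down sketch of the shape Claude Code chose: clients send commands, the server validates them against its own state, and emits events that every client renders. All names here are illustrative assumptions, not the engine's real API.

```typescript
// Minimal command-event loop: validate a command against authoritative
// state, then emit an event describing what actually happened.

type Command = { kind: "playCard"; player: string; cardId: string };
type GameEvent =
  | { kind: "cardPlayed"; player: string; cardId: string }
  | { kind: "rejected"; player: string; reason: string };

interface GameState {
  hands: Record<string, string[]>; // player -> card IDs
}

function handleCommand(state: GameState, cmd: Command): GameEvent {
  const hand = state.hands[cmd.player] ?? [];
  if (!hand.includes(cmd.cardId)) {
    // The server, not the client, decides what is legal.
    return { kind: "rejected", player: cmd.player, reason: "card not in hand" };
  }
  state.hands[cmd.player] = hand.filter((id) => id !== cmd.cardId);
  return { kind: "cardPlayed", player: cmd.player, cardId: cmd.cardId };
}
```

The appeal of the pattern is that every rule in the Rulebook becomes one validation branch, which is also what made 152 tests feasible.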
Four days that felt like four weeks
The build happened between December 26 and 30, 2025. I was on holiday. The sprint was not planned. It was the kind of thing where you start "just getting the basics working" and then look up and it's 4 AM and you've committed 15 times.
- Day 1 · December 26
Zero to Playable
Six substantial commits. Monorepo structure, game engine, WebSocket server, React frontend, design system, audio system with 105+ sound effects. First commit at 08:19. By midnight, the game was functional.
- Day 2 · December 27
Deployment Hell
Started at 4 AM. Vercel rejected the monorepo. Pivoted to Railway. Five config commits in 80 minutes: root directory wrong, then missing dependency, then duplicate npm install. What should have taken 30 minutes ate three hours. Added haptic feedback and particle effects between deployment fixes.
- Day 3 · December 28
The Playtesting Reckoning
Real users broke everything. P0 bugs piled up: intermittent play button, unclickable poison pass, missing profanity filter. Three complete mobile UI redesigns in a single day. Added AI bots for solo testing. Wrote 152 tests. The gap between rules on paper and game in practice hit hard.
- Day 4 · December 29 to 30
Polish and Ship
Theme system, kid-friendly copy, WCAG accessibility, pause and resume, gameplay music, and post-deployment bug fixes from real players. Last commit at 00:35 on December 30: fixing Ultimate Reverse edge cases that only surfaced in production.
The timeline tells one story. The emotional arc was different. Day 1 was euphoria, the feeling of watching something emerge from nothing at an absurd pace. Day 2 was frustration. Deployment should be boring, and it was anything but. Day 3 was humility. Real users don't care about your elegant architecture. They care about whether the button works when they press it.
Day 4 was the quiet satisfaction of shipping something that worked.
What we actually built
I want to show you the architecture. Not because you need to understand every component, but because it's worth seeing what an AI tool produces when given a clear spec. This isn't a weekend hack. It's production-grade software.
The key architectural decision, and one I did make deliberately, was that the server is the single source of truth. Every card play, every damage calculation, every status effect is resolved server-side. The client just renders what the server tells it to render. No cheating. No desync bugs. No "he says he played a card but my screen shows something else."
I knew this from years of working with data systems at Stears. When you have multiple consumers of the same information, you need one authoritative source. It's the same principle whether you're running a financial data product or a card game.
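On the client side, "just render what the server says" reduces to something like the sketch below: the only way local state ever changes is by adopting the server's snapshot wholesale. I've hand-rolled a tiny store to keep the example self-contained (the real project uses Zustand), and every name here is an assumption.

```typescript
// Sketch of a server-authoritative client store: no local game logic,
// just adopt whatever snapshot the server broadcasts and notify the UI.

interface Snapshot {
  turn: string;
  health: Record<string, number>;
}

function createStore(initial: Snapshot) {
  let state = initial;
  const listeners = new Set<(s: Snapshot) => void>();
  return {
    getState: () => state,
    // The only mutation path: replace state with the server's snapshot.
    applyServerSnapshot(next: Snapshot) {
      state = next;
      listeners.forEach((l) => l(state));
    },
    subscribe(l: (s: Snapshot) => void) {
      listeners.add(l);
      return () => listeners.delete(l); // unsubscribe
    },
  };
}
```

Because the client never computes outcomes, two players can never disagree about the board: they are both rendering the same snapshot.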
Everything that went wrong
I could write this as a success story. It would be dishonest.
Day 2's deployment saga was the worst. I had a working game. Functional, tested, ready for players. All I needed was to put it on the internet. Vercel rejected the monorepo structure. Railway accepted it but then needed five configuration attempts in 80 minutes. Each fix revealed the next problem. Root directory wrong, then simplified to manual setup, then missing dependency, then Railway running npm install twice and breaking the build.
What should have taken 30 minutes took three hours. I wasn't debugging code. I was debugging infrastructure configuration. This is the part of shipping software that nobody tells you about and that AI tools don't yet handle well. The gap between "it works on my machine" and "it works on the internet."
The playtesting reckoning
Then real users arrived. They broke everything I thought was solid.
The mobile UX was redesigned three times on Day 3. The first layout was logical but awkward. Interactive elements outside the natural thumb zone. The second added drag-to-play, which was intuitive for some users and baffling for others. The third was the simplest. A compact horizontal status strip. It stuck because it got out of the way.
Simplicity won. It always does.
Why this matters if you run a business
I'm not going to pretend this changes the software industry. But I think something worth paying attention to happened here.
A non-technical operator, someone whose day job is managing cross-functional delivery, negotiating with DFI clients, and reviewing revenue pipelines, built and shipped a production multiplayer game in four days. Not a prototype. Not a mockup. A deployed, tested, accessible application with real-time WebSocket communication, AI opponents, 105 sound effects, and accessibility compliance.
The skills that made this possible weren't technical. They were operational.
- Scoping: knowing what to build and, more importantly, what not to build.
- Prioritising: the PRD set clear success metrics, so every decision had a benchmark.
- Triaging: when playtesting broke the game, I used the same P0 to P2 framework I use for production incidents at Stears.
- Writing clear specs: the single most valuable skill in this entire project.
This is the thesis I'm building around at Learned Context, a context engineering system for professionals.
AI-assisted development is not a replacement for software engineering. The code Claude Code wrote was good. Genuinely good. But a senior engineer would have caught the state race conditions that caused the intermittent play button bug. They would have known about Railway's duplicate npm install behaviour. They would have designed the mobile layout right the first time. The gap between "working" and "production-grade" still requires human expertise.
But the gap between "idea" and "working" has collapsed. And for operators, founders, and non-technical builders, that collapse is enormous. The bottleneck used to be implementation. Now it's design. The people who can write clear requirements, define precise constraints, and triage effectively are suddenly the people who can ship.
If you've run an operations function, you already have these skills. You've been writing specs. You just called them project briefs, or SOWs, or product requirements. You've been triaging. You just called it prioritisation. You've been debugging. You just called it root cause analysis.
What my nephew thinks
I showed my nephew the finished game on December 30. He played three matches against the AI bots. He found a bug within ten minutes, a status effect that wasn't displaying correctly on the compact mobile view.
Then he asked if his friends could play.
I sent them the link. Six kids, on their phones, in a real-time multiplayer match, playing a game that existed only as a feeling in an 11-year-old's head less than a week earlier.
That's the part no spec can capture.
The game is at drawndestiny.org if you want to try it. Fair warning: the bots on Hard difficulty are better than I am.
But the real thing I built wasn't a card game. It was proof, for myself mostly, that the gap between "person who understands what to build" and "person who can build it" is smaller than it has ever been. That gap will keep shrinking. And the operators who notice this first will have an advantage that the rest won't be able to catch up on by learning to code.
They'll catch up by learning to spec.
