Code Reading Is Dead: The Assembly Analogy and the AI Vampire
I spent a Saturday last month pair-programming with Claude Code. We knocked out three features in about two hours. I felt like a superhero. Then I tried to explain what we'd built to a colleague on Monday morning and realized I couldn't walk through the code line by line. Not because it was bad code. I just hadn't actually read most of it. And you know what? It still worked perfectly.
That rattled me a little. So I started digging into what other people are saying about this shift, and I found two pieces that nailed exactly what I was feeling, from completely opposite angles.
You Don't Read Assembly. Soon You Won't Read Code Either.
Quick question: when's the last time you cracked open assembly to double-check your C compiler did the right thing? Unless you're working on compilers or embedded systems, the answer is probably "literally never." You trust the toolchain. You trust the abstractions. It doesn't even occur to you to question them.
Ben Shoemaker makes a compelling case that we're hitting the same inflection point with high-level code. When AI agents are cranking out thousands of lines per session, reading every line isn't really verification. It's theater. OpenAI's engineering team built entire products where every line was written by Codex agents. Their investment went into infrastructure, not manual code review.
Shoemaker's argument is pretty simple: the spec, the tests, and the verification layer are what matter now. Not the code itself. Code is becoming an implementation detail: the machine writes it, and the machine can verify it.
The Harness Is the New Codebase
So if you're not reading code anymore, what are you doing? You're building harnesses.
Think about it: specifications with clear acceptance criteria. Layered automated testing. Architectural constraints that guide what gets generated. Cross-model review where multiple AI systems check each other's blind spots. It's basically the aviation model: you don't hand-fly the plane at 35,000 feet. You design the autopilot, keep an eye on the instruments, and step in when something feels off.
The role shift is real. You're not writing for loops anymore. You're defining what "correct" means with enough precision that machines can both produce it and prove it's right.
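To make that concrete, here's a minimal sketch of what "defining correct with enough precision that machines can prove it" can look like. Everything here is illustrative, not from either article: the spec is a toy table of input/output pairs, and `verify` stands in for a real acceptance-test layer.

```python
from typing import Callable

# Acceptance criteria as executable checks: (args, expected output).
# In a real harness this is your spec and test suite, not a toy table.
SPEC = [
    ((2, 3), 5),
    ((-1, 1), 0),
    ((0, 0), 0),
]

def verify(candidate: Callable[[int, int], int]) -> bool:
    """The harness decides correctness; no human reads the body."""
    return all(candidate(*args) == expected for args, expected in SPEC)

# Imagine `generated` came back from an agent. We never eyeball it;
# we only check whether it satisfies the spec.
generated = lambda a, b: a + b
assert verify(generated)
```

The point of the sketch: the spec and the verifier are the artifacts you author and maintain. The function body is interchangeable.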
But Here Comes the Vampire
Okay, so here's the twist. If the Shoemaker piece made me feel optimistic, Steve Yegge's The AI Vampire punched me right in the gut. Because it names the thing a lot of us feel but can't quite put into words.
AI coding is addictive. Full stop. It hands out dopamine in these unpredictable bursts: the thrill of watching an agent solve a problem in seconds, then the frustration of debugging its hallucinations, then another hit of productivity. Rinse and repeat. Before you know it, it's midnight and you've been telling yourself "just one more prompt" for six hours.
The result? You're 10x more productive, sure. But you're also 10x more drained. Yegge calls it the vampiric effect: AI extracts unsustainable levels of labor intensity while making you feel like you're choosing to go hard. You're not. The dopamine loop is making that choice for you.
His prescription is blunt, and I think he's right: 3-4 hours is the new workday for deep AI-assisted building. Not because you're slacking, but because the cognitive load of orchestrating AI agents (reviewing outputs, course-correcting, holding the whole context in your head) is genuinely exhausting in ways that old-school coding never was.
The Uncomfortable Middle
Here's where these two ideas crash into each other. Shoemaker says we don't need to read the code anymore. Yegge says the process of not reading the code but orchestrating everything around it is draining us dry.
And honestly? They're both right. The future is orchestration, not line-by-line coding. But orchestration at AI speed creates this new flavor of burnout, one where you're always shipping, always iterating, always one prompt away from the next feature. The treadmill doesn't stop because the machine never gets tired. But you do.
What We Actually Recommend
We work with AI agents every day at Heimdall. Here's what we've figured out the hard way:
Invest in harnesses, not reviews. Write better specs. Write more tests. Build feedback loops that catch problems before you ever need to eyeball the generated code.
Set hard stops. Three to four hours of focused AI-assisted work, then walk away. Not negotiable. The vampire can't drain you if you leave the room.
Resist the "one more prompt" urge. That feeling that you could ship just one more thing tonight? That's the addiction talking. I've been there. Close the laptop.
Remember that you're the pilot, not the engine. Autopilot doesn't mean you fly 24 hours straight. It means you fly smarter, with actual breaks.
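A feedback loop like the one in the first recommendation is usually layered: cheap checks run first and fail fast, expensive ones run last. Here's a minimal sketch under illustrative assumptions (the "no direct sqlite3" rule is a made-up architectural constraint, and the test-suite and cross-model layers are stubbed out):

```python
import ast

def layered_gate(code: str) -> list[str]:
    """Run cheap checks first, expensive ones later; return failures."""
    failures = []

    # Layer 1: does it even parse? (static, instant)
    try:
        ast.parse(code)
    except SyntaxError as e:
        failures.append(f"syntax: {e}")
        return failures  # no point running deeper layers

    # Layer 2: an architectural constraint (hypothetical example rule:
    # generated handlers must go through the data layer, not raw sqlite3)
    if "import sqlite3" in code:
        failures.append("constraint: handlers must go through the data layer")

    # Layer 3 would be the test suite and cross-model review; omitted here.
    return failures
```

An empty list means the generated code cleared every layer without a human reading it; a non-empty list goes straight back to the agent as feedback.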
The shift from coder to orchestrator is real and it's happening whether we like it or not. But surviving it means admitting that even orchestration has limits, and that sometimes the most productive thing you can do is just... stop.
Want help designing AI workflows that are sustainable, not just fast? Reach out at contact@heimdall.engineering.