Your coding agent is a slot machine. You're already pulling the lever.
There's a phrase going around right now: token anxiety.
It started as a Bluesky post this week — the creeping feeling that with so much that can be done with AI agents, you should always be doing something. Always prompting. Always shipping. Never just sitting.
If you've used Claude Code, Cursor, or any agentic coding tool in the last year, you've felt this. I have. You open your laptop at 11pm not because there's a production incident but because the agent could be running. There's something it could be grinding through. There's a ticket that's half-done and maybe if you just give it one more prompt...
That's not productivity. That's a compulsion loop.
The slot machine is the product
Here's what nobody wants to say plainly: AI coding agents share the same reward architecture as slot machines.
Think about how you actually use these tools. You type a prompt. You wait. The agent either nails it (and you get a little hit) or it goes sideways and you tweak and try again. One more prompt. One more pull. The output is non-deterministic by design. You know this. You've watched the same prompt produce wildly different results on consecutive runs. And yet you keep going.
This isn't a bug. It's the mechanism. Slot machines work because the reward schedule is variable and unpredictable. Same prompt, different result: sometimes you win, sometimes you get hallucinated imports and broken tests. The variance is the product.
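The dynamics are easy to see in a toy simulation (my own sketch, not drawn from any study cited here): model each prompt as a pull with a fixed success probability, and look at how long a "one more prompt" streak can run.

```python
import random

random.seed(0)

def pulls_until_success(p_success=0.3):
    """Keep prompting until the agent 'nails it'; count the attempts."""
    pulls = 1
    while random.random() > p_success:
        pulls += 1
    return pulls

# 10,000 simulated sessions of "one more prompt"
sessions = [pulls_until_success() for _ in range(10_000)]
mean = sum(sessions) / len(sessions)

# The average session resolves in ~1/p pulls, but the tail is long:
# some sessions drag on for 15, 20, 25+ attempts before a win lands.
print(f"mean pulls: {mean:.1f}, worst session: {max(sessions)}")
```

That long tail is the hook: most sessions resolve quickly enough to keep the reward feeling attainable, and the outliers are where the 11pm laptop-opening lives.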
Natasha Dow Schüll's book Addiction by Design describes what she calls "the zone": a dissociative state gamblers enter where the goal stops being winning and becomes just playing. You lose track of time. You're in the machine.
Check the timestamps on your last deep agentic session.
This is being used against you
US tech companies are already pushing 996 schedules (9am to 9pm, six days a week). AI has made this politically palatable. If the agent is "doing the work," is it even really work? You're just reviewing. You're just steering.
Except you are working. Babysitting a non-deterministic system for 12 hours, context-switching when it stalls, spot-checking outputs for subtle semantic bugs: that's high-cognitive-load work. It just doesn't look like coding, so the labor cost becomes invisible to management and often to the engineers themselves.
Meanwhile, companies are mandating AI usage without evidence it improves output. A 2025 RCT by Becker et al. (arxiv:2507.09089) ran 16 experienced open-source developers through 246 tasks, each randomly assigned to allow or block AI. The developers predicted AI would cut their completion time by 24%. The actual result: AI tools increased completion time by 19%. It slowed them down. A paper published January 2026 by Shen and Tamkin (arxiv:2601.20245) found something worse on top of that: AI use impairs conceptual understanding, code reading, and debugging abilities, without delivering significant efficiency gains. The tasks you stop doing manually, you get worse at.
No proven productivity gain. Confirmed skill erosion. Product design that triggers addictive behavior. But the mandate is rolling out anyway, because it looks like productivity. Managers see the agent running. They see commits. Tokens consumed. The throughput graph goes up.
I'm not anti-agent
I use Claude Code every day. I've shipped real things with it faster than I could have alone. I've written before about the LLM harness problem: how the edit tool format, not the model, is usually what's failing you. Agentic coding is genuinely powerful when you understand its failure modes.
But there's a difference between using a powerful tool deliberately and being conditioned to never put it down.
Engineers who get the most out of coding agents treat them like a fast, slightly unpredictable junior engineer: give a clear, bounded task; review the output critically; push back when it's wrong. You stay the principal. The agent is a multiplier on your judgment, not a replacement for it.
Engineers who burn out treat it like a slot machine: run it all day on everything, hoping this prompt is the one that lands.
Skill atrophy is real, and it's accelerating
The Shen and Tamkin paper deserves more attention than it's getting in the "but productivity though" discourse.
It's measuring a simple effect: when you outsource a cognitive task consistently, the neural pathways you built to do it weaken. This is how skill maintenance works. Practice it or lose it. The same thing happens to surgeons who stop doing a particular procedure, to musicians who stop playing a piece.
For senior engineers, the concern isn't whether you can still write a for-loop. It's whether your debugging instincts survive. The thing in your head that says "wait, this latency pattern usually means the connection pool is exhausted, not the query is slow" is built from doing the thing manually, repeatedly, in production. Reading stack traces. Writing queries. Watching systems fail.
If the agent always reads the stack trace and always writes the query, the intuition doesn't form. Or it erodes. Either way, you become more dependent on the tool, which accelerates the atrophy.
This is the loop nobody is tracking.
What I actually do about it
A few things that have helped me stay deliberate.
Time-box the sessions. No open-ended agentic runs. I set a specific goal ("refactor this service layer, stop when tests are green"), review the result, close it. The agent doesn't run while I sleep.
One manual day a week. No agents for coding tasks on Fridays. I write the code, I read the docs, I remember what it felt like to solve something without autocomplete. It sounds dramatic. It's grounding.
Review for understanding, not just correctness. The bad habit is approving agent output because tests pass and the logic looks right. Now I try to understand every change I merge, even if the agent wrote it. Especially if the agent wrote it.
Agent drafts, I decide. There's a version of this where the agent is a fast-typist for things you've already decided, and a version where it's making architectural choices you're rubber-stamping. The first is a tool. The second is a liability.
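The time-boxing rule, at least, can be enforced mechanically rather than by willpower. Here's a minimal sketch in Python, assuming your agent runs as a CLI process; the `claude` invocation in the comment is illustrative, not a command line I'm vouching for.

```python
import subprocess

SESSION_CAP_SECONDS = 45 * 60  # one bounded session, then review and close

def run_boxed(cmd, cap=SESSION_CAP_SECONDS):
    """Run a command with a hard wall-clock cap; kill it when the box closes."""
    try:
        return subprocess.run(cmd, timeout=cap).returncode
    except subprocess.TimeoutExpired:
        # subprocess.run kills the child before raising, so nothing keeps
        # grinding in the background while you "just check one more thing"
        print("Session cap hit. Review what you have, then close the laptop.")
        return None

# Illustrative usage -- substitute your actual agent CLI:
# run_boxed(["claude", "-p", "refactor the service layer; stop when tests pass"])
```

The point isn't the 45 minutes. It's that the session has an end state you chose before you started, instead of one the reward schedule chooses for you.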
The honest take
We're in the addictive product phase of AI coding tools. The companies building them need engagement metrics. Variable reward schedules maximize engagement. This isn't a conspiracy: it's product design. They're building what keeps people using it.
The healthiest thing I can say: notice when you're in "the zone." Notice when the goal stopped being shipping something good and became just running the agent. That's the slot machine talking.
You can use these tools without becoming dependent on them. But that requires noticing the dependency before it forms, not after.
Because once you're in the machine, it's much harder to walk out.