The Real AI Fight: Stop Helping the Hyperscalers Win

January 19, 2026
Erik Bethke

Cory Doctorow wants you to put down the most powerful cognitive amplification tool humanity has ever created. He's not just wrong - he's helping the hyperscalers win. A systems architect's response to why individual AI empowerment is our only defense against cognitive monopoly.


A Response to Cory Doctorow's "AI Companies Will Fail"

Cory Doctorow wants you to put down the most powerful cognitive amplification tool humanity has ever created. He's written 30,000 words telling you AI is "asbestos in the walls" while the seven hyperscalers are laughing all the way to their 70% gross margins.

I think he's missing the real threat. Let me explain why.

You're Fighting the Wrong War, Cory

Look, I get it. Doctorow made the right call on crypto. Web3 was a solution desperately searching for a problem, ultimately finding its purpose as a degenerate casino for apes and coins. And I say this as someone who built Million on Mars, a Web3 game - I have my own scars from that adventure and understand exactly what he's talking about from a hands-dirty perspective.

But here's where his victory lap turns into a sprained ankle: AI is fundamentally different.

Crypto never solved a real problem. AI is solving thousands of them every day. Right now, as I write this in January 2026, I literally cannot compete as a software engineer without Claude 4.5. Not "it's harder without it" - I mean I am economically non-viable without AI augmentation. The era of typing code is over. We're in the era of engineering systems, orchestrating agents, and thinking in high-dimensional cognitive spaces.

This is a fundamental challenge for any successful thinker: how do you transfer knowledge from one domain to another while remaining humble about where the analogy breaks down? Doctorow isn't wrong to ask "I was right about crypto, so what can I learn that applies to AI?" But the confident assertion that the two are equivalent misses crucial differences in the underlying physics of the situation.

The Copyright Strawman

Here's a sleight of hand in Doctorow's argument: He correctly notes that AI itself can't hold copyright, then proceeds as if that's the whole game.

News flash: No one is asking for AI to own copyright.

This reminds me of Burrow-Giles Lithographic Co. v. Sarony (1884), where the Supreme Court had to decide whether photographs could be copyrighted. Even in that august 19th-century setting, at no point did the court, the lawyers, or anybody else get confused about whether the camera itself should own the copyright. The question was always about the human behind the machine.

When I use Midjourney to generate a thousand images, select ten, arrange them, modify them, and create a collection - that's MY copyright. Just like photographers own their photos despite not manufacturing the camera. Just like filmmakers own their movies despite every pixel being processed by machines.

The Supreme Court ruling isn't the victory Doctorow thinks it is. It's exactly what we want: humans retain copyright when they use AI as a tool. The transformative intent comes from the human. The AI is just a ridiculously powerful brush.

Jobs Aren't Sacred Bundles - They're Just Tasks We Haven't Automated Yet

Here's something that Doctorow's lens - as an author who tends to work solo rather than as a member of white-collar cognitive teams - may not fully capture: A job is just a bundle of tasks.

We look at humans and say "You seem capable of doing these 50 tasks, here's your job description." But AI doesn't respect our neat little bundles. It's unbundling everything, task by task.

If ten engineers each spend 20% of their time on tasks AI can now do, guess what? Two engineers just became redundant. Not because AI "took their jobs" but because it dissolved the artificial boundaries we created around work.
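The arithmetic behind that claim is simple enough to sketch (the numbers are purely illustrative, not data from any real team):

```python
# Back-of-the-envelope task-unbundling math with illustrative numbers.
engineers = 10
automatable_share = 0.20  # fraction of each engineer's tasks AI can now do

# Total human effort freed up, measured in full-time equivalents (FTEs).
# 10 engineers x 20% of their time = 2 FTEs of work that no longer
# needs a human -- the "two engineers just became redundant."
freed_fte = engineers * automatable_share
print(freed_fte)  # 2.0
```

The point of the sketch is that the redundancy emerges from fractions of many jobs, not from AI replacing any single person's whole job.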

And before you cry about this - why should humans do work machines can do better? Should we go back to hand-weaving because power looms "destroyed jobs"? The difference now is that this technology thinks and reasons. We're not just automating muscle, we're automating cognition.

The uncomfortable truth: We need to completely refactor the economy. But pretending AI can't do the work isn't the answer. That's just denial with extra steps.

The "Stochastic Parrot" Critique Is Tired

"It's just a word prediction engine."

This framing was understandable in 2021-2022, but it hasn't aged well. Modern LLMs have emergent reasoning capabilities that become apparent when you actually use them rather than theorizing about them from the outside. Anthropic's mechanistic interpretability work shows these models developing actual logic structures, reasoning patterns, and problem-solving capabilities under the pressure of their training regimes.

When I ask Claude to refactor my authentication system to use Okta, it's not "predicting words." It's operating in high-dimensional cognitive space, understanding the Okta API, recognizing authentication patterns, and architecting a solution. The fact that it uses token prediction as its substrate is about as relevant as saying humans are "just neurons firing."

The zeitgeist among actual engineers using these tools? We don't write code anymore. We architect systems. We orchestrate agents. We engineer solutions. Mitch Ashley nailed it: "The era of coding is ending, but the era of engineering is rising."

The Bubble Question Deserves a Nuanced Answer

Doctorow is drawing on lived experience from the dot-com era and, more recently, the Web3 collapse. It's reasonable to see a pattern: technology hype cycles create bubbles, and bubbles pop. There are definitely bubbles forming in the AI space right now - lesser-capitalized companies chasing the wrong things, and many will collapse.

But that's always true. Even without new technologies, small businesses collapse and investment dollars are famously misallocated. The question is whether the core infrastructure is a bubble or something more structural.

Here's what I think Doctorow needs to update his priors on: the dollar debasement regime.

We're printing $2 trillion in debt every quarter. We won't default - we'll just keep printing. Dollar debasement is locked in for the next generation. Meanwhile, Vanguard and BlackRock are algorithmically funneling 30-40% of all American retirement savings into the S&P 500. When the entire society is structurally obligated to buy your stock, it's not a bubble - it's a new form of economic organization.

With passive investing controlling over 50% of the market, we've lost price discovery. These aren't stocks anymore - they're something else entirely. I think of them as proto-nation-states.

And I'll throw him a bone: Bitcoin crashing back toward $80,000 suggests he remains correct about crypto. But the hyperscalers aren't crypto - they're infrastructure with 70% gross margins selling access to capabilities society can no longer function without.

The Monsters Are Bigger Than You Think

Doctorow correctly warns that AI bros and Big Tech are often moving forward in a thoughtless manner without considering the implications. He's right to raise these concerns.

But I think he's missing an even larger threat.

Seven hyperscalers control virtually all AI compute. They charge 70% margins. Every human will need their services to remain economically viable. We're building a world where you literally cannot work without paying rent to one of seven companies.

What do you call that? Neo-feudalism? Digital serfdom? We don't even have words for it yet.

And here's the kicker: We've already accepted this model. We've had memetic, intelligent, transnational corporations for over a century. They routinely kill humans as a byproduct of pursuing EBITDA. A hundred thousand people die from air pollution every year so we can have cheap energy. We shrug and call it "externalities."

But Doctorow wants you to worry about hypothetical paperclips while real corporations are already optimizing for shareholder value über alles.

The Only Way Out Is Through

Doctorow is a hero of mine, and I'm certain he wrote his essay out of passion and empathy for humanity. He means well.

But here's the tragedy: by discouraging individual AI use, he's ensuring the hyperscalers win.

The irony is that Doctorow isn't some out-of-touch plutocrat - he's a studied artist and author who has spent decades fighting for digital rights. Yet his counsel amounts to telling the working and middle class to set aside the only tool that could help them regain parity.

It's like walking out of a donut shop where a corporation just bought eleven of twelve donuts, turning to the crowd, and saying "Better watch out for each other - someone might take your last donut." Meanwhile, the hyperscalers are building their cognitive monopolies while we're told not to use the one tool that could level the playing field.

The solution isn't to avoid AI. The solution is radical decentralization and individual empowerment.

I envision a future where every human pair-bonds with a powerful, decentralized AI agent. Yes, the human carries liability in meat space, but the AI amplifies their cognition in economic space. Together, they compete against the corporate behemoths.

This isn't optional. Without individual AI empowerment, we're just sheep waiting for the seven shepherds to decide our fate.

The Civilizational Pivot We Need

We need to stop thinking about AI as something separate from humanity and start thinking about it as our evolutionary merger partner.

Every human who ever wrote a Reddit comment, posted on Stack Overflow, or shared knowledge online has contributed to training these models. We all deserve a share of the productivity gains.

Not UBI - that's just bread and circuses to keep the masses quiet.

Not copyright expansion - that just helps Disney and Google.

Citizens need to OWN the means of cognitive production. Public corporations should be partially owned by the public. When AI and robotics generate value, that value should flow to all humans who contributed to their creation.

This isn't socialism. This is recognizing that in a world where technology does the work, either everyone owns the technology or we descend into a dystopia that makes feudalism look egalitarian.

An Invitation, Not a Dismissal

We need thoughtful voices like Doctorow engaging with AI. The future of this technology cannot be decided only by the CEOs of seven hyperscalers. So I genuinely welcome him into this conversation and appreciate that he's putting serious energy into thinking about it.

But I'd ask him to recalibrate his lens and consider the entire board.

The fight isn't about whether AI is "real" or a "bubble." The fight is about who controls the cognitive infrastructure of the 21st century.

Will it be seven companies charging you 70% margins for the privilege of thinking?

Or will it be billions of humans, each empowered with their own cognitive amplifier, building a genuinely distributed future?

Every time someone dismisses AI as "just a stochastic parrot" or "not really useful" or "I asked it to make me a million dollars and it didn't, so it's worthless" - that dismissiveness is just an excuse not to learn what state-of-the-art models can actually do. And every excuse you give yourself is power you're ceding to the hyperscalers.

The genies are out of their bottles. The only question is whether they'll work for seven masters or seven billion.

Choose wisely. The light cone is watching.


Erik Bethke is an aerospace engineer and systems architect leveraging state-of-the-art AI and agentic systems to build real products that deliver value. He's building Bike4Mind and the Futurum Intelligence Platform, and runs the Eagle Policy Initiative. As a practitioner at the cutting edge, he embraces AI augmentation to create transformative solutions for stakeholders and society.


Last updated: January 21, 2026 9:36 PM
