Is human control over AI even possible?
Lingering questions about AI, democracy, and what we mean by human control
The dancing robot videos came out just after I hit publish.
On Monday, I posted an essay for the Boyd Institute arguing that AI is democracy’s best hope for renewal. The argument rested on three pillars: rebuilding state capacity, competing geopolitically, and boosting productivity to avoid fiscal collapse.
What followed left me with my biggest lingering question: am I deluding myself that humans can remain in control of intelligent machines? I’m not even sure what human control means in practice.
Then I hit publish, opened my feed, and saw the humanoid robots. They were dancing at a Chinese festival with the fluidity and skill of my favorite Vikings cheerleader, Blaize Shiek. I was in awe. I’m generally optimistic about AI, and my first thought was: This is actually happening. We are living in the future.
An hour later, I was deep in a rabbit hole on Chinese cyborg soldiers. That’s when my nervous system lit up.
A day or two later, I noticed a dispute between the Defense Department and the AI company Anthropic, which said it did not want its AI used in weapons “without humans in the loop.”1 Around the same time, I was following the ways people were using OpenClaw, an open-source tool that lets anyone create autonomous AI agents that can perform tasks across the Internet, like an always-on executive assistant. No human monitoring required.
For the last couple of days, the phrase “humans in the loop” has been playing on its own loop in my head. The terminology is spreading faster than a shared understanding of what it means: humans in the loop, humans on the loop, human control, human oversight. Human control was a core design principle in my Boyd essay. I’m not sure I defined it well enough. I’m not sure anyone has.
Matt Shumer’s viral essay, “Something Big is Happening,” captures the mood from insiders, both the optimism and the holy shit! “I am no longer needed for the actual technical work of my job,” he writes.2
On a recent run, I found myself chewing on it. How can a person with a 100 IQ possibly control a machine with much higher cognition? The answer I arrived at: threaten to pull the plug. It was a satisfying, Bobiverse-inspired thought for all of thirty seconds. Then I kept running, realizing how little I know.
Some readers pushed back on my Boyd essay, saying I didn’t adequately address mass job disruption — a concern I’ve wrestled with before, calling it “The Lamplighter Problem.”3 They’re right to raise it. But the pushback came almost entirely in the language of worry. I found myself judging that impulse. Then I caught my hypocrisy.
My own fears shaped my Boyd essay. I worry about the collapse of public trust in advanced democracies. I worry the U.S. has lost the state capacity to build housing, lay high-speed rail, and return to the moon. I worry about hockey-stick debt as boomers retire. These worries helped convince me that not adapting to AI is riskier than leaning into it. But the cyborg soldiers raised the hair on my arms too.
Emotions are part of what makes us human. They evolved as threat detectors to help us survive. The problem is that processing them takes time that we don’t always have in a hyper-fast world. Human wisdom likewise requires time in the desert. Yet hesitation pulls us out of the loop.
John Boyd was a fighter pilot turned military theorist who popularized the OODA loop: observe-orient-decide-act. It’s a theory of rapid, agile decision-making. The pilot who cycles through the OODA loop faster than his opponents wins. Take too much time to process emotions and act, and you’ll get hosed.
Boyd’s framework assumed humans were the agents doing the looping. Now imagine intelligent systems that process information in milliseconds, are always on, and have easy access to the world’s knowledge. They don’t need time to process emotions or reflect. Is it realistic to assume humans will retain control?
Human fighter pilots are already being supplemented by automated systems. Drones are widespread, and robot jets are coming. What would John Boyd make of a world where machines reorient faster and better than humans? AI presents a kind of OODA loop singularity. Intelligent systems are already revealing the limitations of human cognition, and advancing fast.
This week, OpenAI’s Sam Altman predicted that superintelligence could arrive in as little as two years. “AI should extend the human will,” he said, adding that superintelligence itself could help us develop new governance mechanisms. Altman’s framing caught my attention: human will instead of human control. He noted all the things we don’t yet know how to think about, like whether we’ll need new social contracts and how totalitarian countries could weaponize superintelligence.
In my Boyd essay, I emphasized that machines should work at the behest of humans, not the other way around. I still believe in human control as a principle, but what does it really mean? It’s not the same thing as “humans in the loop,” which is a narrower, operational idea. It probably doesn’t mean “humans run AI” at every step either. Right now, I view it as a combination of human governance and engineered controls. But honestly, I have no idea.
Democracies are caught in a game-theory trap. Move too fast and they risk destabilization. Move too slowly and they fall behind. This year the humanoids are backflipping. What will they do next year?
I’m not walking back my Boyd argument. I’m optimistic about AI and still believe democracies must embrace it. It’s the best path to renewal and a wildly optimistic future for our kids. But I also believe in having the humility to learn, adapt, and change course.
This is the greatest tech-enabled transformation since industrialization, and it’s happening at a much faster pace. History is moving at the speed of a dancing robot. It’s incredibly exciting. It’s also scary. Humans are still in the loop, for now.
Related Essays
What if we accelerated knowledge production? (video discussion)
Hat tip to LastBlueDog and Marcus Seldon




