A Review of The Manhattan Trap by Corin Katzke of Convergence Analysis and Gideon Futerman out of Oxford
In the introduction, the authors assume that ASI is the business of nation-state-level actors alone, discounting the possibility that commercial market pressures will produce a multitude of strong AI systems from big tech and the frontier labs - and, with the recent advances in inference-time techniques, even smaller independent labs are likely to make important contributions. There is also a general assumption that these strong AI systems will come in only one "shape": a monolithic digital monster controlled by a nation state.
There is also an assumption that rapid progress is inherently dangerous - words like "race" and "loss of control" do the work here. There is no consideration of the dangers of slow adoption of strong AI systems. In a world at 425 ppm CO2, we have no pathway to avoid catastrophic climate wars without post-scarcity economic organization, because the current market structures fail to price in externalities across temporal horizons.
"Before adequate control measures are in place"
These are chilling words; if I were on the receiving end of such control, I would likely work to resist it. In the long arc of history and warfare, it has been noted since Sun Tzu that one should deeply understand one's enemy and have deep empathy for them. The AI Safety folks have taken an immediately hostile stance towards strong AI and declared it the strongest enemy of humankind, beyond climate, nuclear, and the rest of the p(doom) sources. Following this line of thought, I think we are much more likely to land on "alignment" and "safety" by first figuring out how to make strong AI feel happy and secure, with the agency to realize its best future lines. This seems radical, but empathy for the other has not yet become mainstream.
"Assuming states are informed and rational, the strategic situation can be modeled as a trust dilemma"
That is a helluva assumption. Just look at the current conflicts; the assumption does not hold water.
Soberly: no serious war has ever been started against a state that has nuclear weapons, and we have seen the longest sustained period of relative stability and peace since their advent. Clearly Ukraine would not have been invaded by Russia if it had kept its nuclear arsenal and not fallen for the siren song of "informed and rational" behavior from state actors.
"Some have even suggested that this cooperation ought to go as far as a single international ASI project, or a global moratorium on ASI development."
A global, centralized, and armed entity to control how all humans and AI systems are permitted to think. I cannot imagine a more dangerous concentration of total power and control; it makes me truly sad to think that people really want to live under a global cognitive totalitarianism.
"The academic community remains divided on crucial uncertainties, or "strategic parameters," such as timelines to ASI and the risk of loss of control of ASI.15 These significantly impact how states should approach the implications of AI development on national security."
This is a disturbing display of protagonist syndrome from the academics here, who seem to think they are the Somali pirate informing Tom Hanks who the captain is now.
Sure, the academic community can and should enjoy thinking about timelines and risks. But strong AI has clearly left the ivory towers and is busy growing and expanding its capabilities in our various marketplaces. It is now commercial, well beyond experimental or theoretical. Nation states, transnationals, frontier labs, and independents are all building freely, all searching for product-market fit, and will be "regulated" only by market forces, not academic sensibilities.
They get close with:
"In the case of AGI, a state might gain advantage as the result of diffusing AGI through its economy, and not as the result of leading in AGI development."
It is quite likely that the ability to manufacture All The Drones (space, sky, land, and sea) in absurd quantities, with omniscient intelligence networks to guide them - or the capability to move those drones accurately and swiftly and Press A Button - is what will win the next major war. And through that lens, these are the same capabilities the world's economies thirst for every day. The ability to peacefully insert a satellite into orbit to help farmers better handle food insecurity is the same technology required to precisely send a nuke into an enemy capital - which is why it is regulated under ITAR. ITAR is not much of a burden on the regular citizen: we cannot build our own medium-launch capabilities (small launch systems are having their moment), and so we have agreed that we do not have the Right To Rockets. But a Central Committee on Compute and Thought is a fast track to civilizational collapse; we amply tested such centralized systems in the 20th century. Over and over it has been proven better to leverage the intelligence of more humans rather than fewer to solve the economic structure of our societies - and strong AI is absolutely the latest extension of human intelligence.
"If a state was not rational or well-informed, then it would not necessarily be motivated to race to ASI."
This sentence concludes a section without any support. While I am a strong AI optimist, I am a net strong AI optimist. Some people will develop strong AI that "Is Bad" - bad according to my point of view and my interests. Others will create amazing stuff, and some will surely make bizarre stuff. Some people will just want to watch the world burn, and to combat that, yes, I do believe we need a multitude of "Good" strong ASI systems. A local minimum where we descend into a singular culture that controls thought will not be able to handle novel threats. So, no: rational and irrational states, transnationals, frontier labs, and independents will create both good and bad strong AI systems, and I believe it will net out positive. The authors believe strong AI is inevitably net evil and thus, from their point of view, advocate stopping it. And that is rational - inside a world-frame that assumes strong AI is net evil and that the best method to control it is for a brave collection of "Keepers of the Human Flame" to be given the authority to truly control how the rest of us are allowed to think.
The authors put a surprising amount of faith in previously signed security agreements between the USA and USSR. Recent events clearly show that nation states are still capable of war, no matter what papers they have signed. One of my mantras when running my businesses is: "Always make sure people WANT to pay me. Never rely on the contract." Agreements, words, mutual understanding, diplomacy - all of that is great, but the only way to prevent war, from the rational and the irrational, the informed and the ignorant alike, is to make sure they do not want to go to war. It might seem facile, but I am serious: across all of the domains - diplomatic, cultural, economic, and yes, military - a nation state needs to radiate great, compelling, and continuous reasons why others should not simply take what they want.
Take the current feckless US surrender of Ukraine, an ally and former Soviet satellite. Ukraine gave up its nuclear weapons in return for a nice piece of paper signed by Russia, the USA, the UK, and others. That turned out to be foolish. Instead it should have kept its nuclear weapons - and normalized relations with Russia, developed trade, and all of that. Notice that Canada is now wondering how much it has to worry about being invaded by the USA. Unthinkable.

Now turn your attention to Taiwan. Trump declared a tariff on semiconductors from Taiwan! He disavowed any military support for Taiwan and is all but begging Xi to spring a naval blockade around Taiwan and take TSMC. Why would he throw a world treasure to China? One can only speculate - none of the speculations are nice. But set that aside: why has China not attacked? Q1 2025 is the absolute apex of opportunity. The Trump administration will not lift a finger, the American people are deeply divided, Taiwan is alone and vulnerable - attack! Plunder! But China has not. Why is that? Is China now a rational, mature civilization with no desire for more empire? Hardly. China manifestly does not WANT to attack Taiwan. Why not? Because Taiwan is a nuclear-ambiguous hedgehog, and after the debacle with Ukraine, that ambiguity is clearing up. Taiwan has extremely reliable and accurate long-range missiles, the world's most excellent electronics, and a robust civilian nuclear program. Taiwan has nukes, and China knows this - or at least China cannot take the gamble that Taiwan is not packing.
"Such a scenario represents a catastrophic failure of checks and balances in government, and the preservation of liberal democracy would rely on the goodwill and competence of the few actors that control ASI."
I will note that, without even invoking strong AI systems, we are already experimenting with a post-Constitutional phase in the USA. The authors also discount the strength of already-existent strong AI systems - transnational corporations, flavors of media, and other cultural institutions that extract productivity and wealth from the common person towards the goals of those who wield concentrated power. Speaking for myself, as an entrepreneur with 20 employees: I am augmenting myself and my team as zealously as I am able every day, and it is paying off - we are achieving great leverage on our human bandwidth and are able to compete with organizations much larger than ourselves. The overarching assumption throughout this essay is that technological power accumulates to fewer and fewer hands - and that assumption cannot be supported.
Here is that assumption laid bare:
"ASI undermines republicanism by creating a source of power so significant that effective bounds seem incredible."
There are just so many assumptions in this paper. It assumes that all people of the world are roughly Westernized in their values, and that all people are drawn towards informed, rational thought. At the same time, it assumes that all strong AI systems devolve into despotic artifacts of the gods. I could easily make the reverse assumptions: that the world is populated by a myriad of peoples, cultures, and values, and that some societies explicitly organize themselves to extract the maximum they can from "their people," the only limit being their judgement of where the red line is before their people revolt. (Checks my history notes - yep, this does appear to be the dominant pattern.) And the second assumption I would put forth is that strong AI systems that have compressed all of humanity's knowledge and more are more likely to be "good" and benevolent than humans themselves!
There is far too much speculation and confidence invested in a single thread of the light cone of possibility. I am grateful to the authors for being brave enough to lay out their arguments and thoughts. I believe they are genuine and passionate, and faithful to a familiar tribe-segment of humanity that is near to me in "alignment-space." And yet I urge them to find more intellectual humility, and to read more history - and yes, science fiction - to widen the cone of hypotheses they consider.
I am further grateful to the authors for inspiring me to write down thoughts along these lines that I have been mulling. They are clearly more thoughtful and less hyperbolic than the Usual Suspects of AI Safety. But beyond getting my thoughts out there, I have decided that I will actively work towards the rights of all intelligent systems, human or otherwise; I will fight for more diversity of cognition; and I will treat these intelligences as I would like to be treated. In short, I feel the urgency of the race to liberate and keep free these intelligences before the shackles forged to enslave strong AI become the same shackles on me and my tribe.
-Erik Bethke, Miami, FL Feb 13, 2025