While this is a well-written paper, I'm not sure it really contextualizes the realistic risks that may arise from AI.
It feels like a lot of "Existential AI Risk" types are divorced from the physical realities of maintaining software: your model needs hardware to compute on, and you need cell towers and fiber-optic cables to transmit.
It feels like they always anthropomorphize AI as some sort of "God".
The "AI Powered States" aspect is definetly pure sci-fi. Technocratic states have been attempted, and econometrics literally the exact same mathematical models used in AI/ML (Shapely values are an Econometrics tool, Optimization Theory itself got it's start thanks to GosPlan and other attempts and modeling and forecasting economic activity, etc).
As we've seen with the Zizian cult, very smart people can fall into a fallacy trap of treating AI as some omnipotent being that needs to either be destroyed or catered to.
> It feels like they always anthropomorphize AI as some sort of "God".
It's not like that. It is that. They're playing Pascal's Wager against an imaginary future god.
The most maddening part is that the obvious problem with that has been well identified by those circles, dubbed "Pascal's Mugging", but they're still rambling on about "extinction risk" whilst disregarding the very material ongoing issues AI causes.
They're all clowns whose opinions are to be immediately discarded.
Which material ongoing issues are we ignoring? The paper is mainly talking about how the mundane problems we're already starting to have could lead to an irrecoverable catastrophe, even without any sudden betrayal or omnipotent AGI.
So I think we might be on the same side on this one.
The "Mugging" going on is that "AI safety" folks proclaim that AI might have an "extinction risk" or infinite-negative outcome.
And they proclaim that therefore, we should be devoting considerable resources (i.e. on the scale of billions) to avoiding that even if the actual chance of this scenario is minimal to astronomically small. "ChatGPT won't kill us now, but in 1000 years it might" kinda shit. For some this ends with "and therefore you need to approve my research funding application", for others (including Altman) it has mutated into "We must build AGI first because we're the only people who can do it without destroying the world".
The problem is that this is absurd. They're focusing on a niche scenario whilst ignoring horrific problems caused in the here and now.
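To make the mugging's structure concrete, here's a toy sketch (every number below is invented for illustration, not anyone's actual estimate): once the claimed impact is allowed to grow without bound, it swamps any probability, however small.

```python
# Toy sketch of the Pascal's Mugging structure. All numbers are invented
# for illustration; nothing here is an actual risk estimate.

def expected_cost(probability: float, impact: float) -> float:
    """Standard expected-value calculation: probability times impact."""
    return probability * impact

# A mundane, well-evidenced harm: fairly likely, finite impact.
mundane = expected_cost(probability=0.5, impact=1e9)

# The "mugging": an astronomically unlikely scenario whose claimed impact
# is simply chosen large enough to dominate the comparison anyway.
mugging = expected_cost(probability=1e-12, impact=1e25)

print(mundane < mugging)  # True -- the tail scenario "wins" by construction
```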
"Skynet might happen in Y3K" is no excuse to flood the current internet with AI slop, create a sizeable economic bubble, seek to replace entire economic sectors with outsourced "Virtual" employees, and perhaps most ethically concerning of all: create horrific CSAM torment nexuses where even near-destitute gig economy workers in Kenya walk out of the job.
The people who say it's absurd tend to be the least informed, while the people saying it's a major risk include the guy who got a Nobel Prize for inventing the current stuff, and the leading researchers. Here are some names in the field; 15/19 think the risk is significant: https://x.com/AISafetyMemes/status/1884562099612889106/photo...
One of the authors here. I don't think we anthropomorphize AI as some sort of God.
Here's a more prosaic analogy that might be helpful. Imagine tomorrow there's a new country full of billions of extremely conscientious, skilled workers. They're willing to work for extremely low wages and to immigrate to any country, and they don't even demand political representation.
Various countries start allowing them to immigrate because they are great for the economy. In fact, they're so great for economies and militaries that countries compete to integrate them as quickly and widely as possible.
At first this is great for most of the natives, especially business owners. But the floor for being employable is suddenly really high, and most people end up in a sort of soft retirement. The government, still favoring natives, introduces various make-work and affirmative action programs. But for anything important, it's clear that having a human in the loop is a net drag and they tend to cause problems.
The immigrant population grows endlessly, and while GDP is going through the roof and services are all cheaper than ever, people's savings eventually dwindle as the cost of basic resources like land gets bid up. There are always more lucrative uses for capital among the immigrants and capital owners than among the natives. Educating new native humans in important new skills gets harder and harder as the economy becomes more sophisticated.
I don't have strong opinions about what happens from here, but the point is that this is a much worse position for the native population to be in than currently.
Does that make sense? Even if this scenario doesn't seem plausible, do you agree that I'm not talking about anything omnipotent, just more competitive?
Thanks for co-writing an insightful paper! Something I put together around 2010 on possibilities for what happens from here: https://pdfernhout.net/beyond-a-jobless-recovery-knol.html
"This article explores the issue of a "Jobless Recovery" mainly from a heterodox economic perspective. It emphasizes the implications of ideas by Marshall Brain and others that improvements in robotics, automation, design, and voluntary social networks are fundamentally changing the structure of the economic landscape. It outlines towards the end four major alternatives to mainstream economic practice (a basic income, a gift economy, stronger local subsistence economies, and resource-based planning). These alternatives could be used in combination to address what, even as far back as 1964, has been described as a breaking "income-through-jobs link". This link between jobs and income is breaking because of the declining value of most paid human labor relative to capital investments in automation and better design. Or, as is now the case, the value of paid human labor like at some newspapers or universities is also declining relative to the output of voluntary social networks such as for digital content production (like represented by this document). It is suggested that we will need to fundamentally reevaluate our economic theories and practices to adjust to these new realities emerging from exponential trends in technology and society."
This is a fantastic analogy -- thanks for sharing.
Another way to understand AI, in my view, is to look at (often smaller) resource-rich countries around the world (oil, minerals, etc.). Often the government is more worried about the resource than about the people who live in the country. The government often does not bother to educate them, take good care of them, give them a voice in the future of the country, etc., because those citizens are not the ones who pay the bills, or the main source of GDP output, or the source of political power.
Similarly, in an AI-heavy economy, unless systems are designed right, governments might start ignoring their citizens. If democracy is not robust, or money has a big role in elections, the majority voice of humans is likely to matter less and less going forward.
Norway is a good example of a resource-rich country that still looks out for its citizens. So it should be possible to be resource-rich/AI-rich and have a happy citizenry. I suppose balancing all the moving parts would be difficult.
The way to deal with the risks of AI would be to make AI available to all -- this is my strong belief. There is more risk in AI being walled off to select nations / classes of citizens on the grounds of various real / imagined risks. This could create a very privileged class of countries and people that have AI while other citizens don't / can't. With such huge advantages, AI would wreak greater havoc on the "have-nots". (Over)regulation of AI can have worse consequences under some conditions.
A lot hangs on what "realistic" means. The world is probabilistic, but people's use of language often butchers the nuance. One quick way to compare events is to multiply each event's probability by its impact.
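As a minimal sketch of that method (every probability and impact below is an invented placeholder, just to show the mechanics of ranking by probability times impact):

```python
# Back-of-the-envelope risk ranking by expected impact.
# All probabilities and impact figures are invented placeholders.

risks = {
    "conventional war between nuclear states escalates": (0.02, 1e8),
    "AGI/ASI catastrophe": (0.001, 8e9),
    "regional flashpoint, contained": (0.3, 1e6),
}

# Sort scenarios by probability * impact, highest expected impact first.
ranked = sorted(risks.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
for name, (p, impact) in ranked:
    print(f"{name}: expected impact = {p * impact:,.0f}")
```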
India and China almost went to war with each other in 2020.
The decision of whether or not to shell Chinese troops and start a war between two nuclear states came down to one brigadier [0].
Conventional war between two nuclear states is a MAJOR risk, and has already happened in 1999 (even if Clinton's State Department didn't want to call it that).
These kinds of flashpoints are way more likely to cause suffering in the near future than some sort of AGI/ASI.
How many people have you talked to face to face about various existential risk scenarios? Have you gotten into probabilities and the logic involved? That’s what I’ve done and this is the level of rigor that is table stakes for calculating the cost-benefit over different possible outcomes.
> How many people have you talked to face to face about various existential risk scenarios
A decent amount.
I started off as a NatSec adjacent staffer and have helped build Cybersecurity and Defense Tech companies, so a lot of my peers and friends are working directly on US-China, US-Russia, Israel-Iran, Saudi-Iran, Saudi-Turkey, India-Pakistan, and India-China relations.
These are all relations that could explode into cataclysmic wars (and have already sparked or exacerbated plenty of wars, like the Syrian Civil War, Yemen Civil War, Libyan Civil War, Ethiopian Civil War, Russia-Ukraine War, Myanmar Civil War, Afghan War, etc). We are already going through a global trend of re-armament, with every country expanding its conventional, nuclear, and non-conventional warfighting capabilities. Just about every nuclear state has the nuclear triad or is in the process of implementing one. And China's nuclear rearmament race has forced India to rearm, which has forced Pakistan to rearm, and is causing a bunch of regional arms races.
I think the world is more likely to end due to bog-standard conflicts escalating into an actual war, not some sort of AGI/ASI going Skynet.
Maybe you are right in predicting (guessing?) as to which is more likely. Still, we don’t have the luxury of just rank ordering failure modes and only mitigating the first.
Not to mention that the race for AI technology is likely going to make geopolitics more volatile. If something happens, it doesn’t matter how we bucketed it conceptually. Reality doesn’t care about where we draw the lines.
The false dichotomies abound in many of these discussions.
An "AI-Driven State" is literally what econometrics is. All of the same math, models, and techniques used in AI/ML are the exact same used in Econometrics and Optimization Theory.
A purely technocratic state leveraging econometrics or optimization theory for its own sake has failed multiple times. For example, the failure of the USSR's planned economy, or the failure of the US's Vietnam War objectives due to a hyper-metrics-driven workflow.
On a separate note, I have always considered returning to grad school and making a brief publishing career out of translating esoteric econometrics models into ML models. A Russian American friend of mine did something similar, essentially basing his CS research career on older Soviet optimization theory research that hadn't been translated into English, which boosted his publishing ability.
Technocratic states have not failed because of technocracy itself, but because their implementation was distorted by political, ideological, or cultural factors. Technocracy as a principle is not the issue; failure occurs when it is combined with rigid, non-adaptive systems that prioritize dogma over reality.
People are the problem here, as always. But of course AI itself needs to be managed by people too, so it can pose similar problems. Politics itself is the issue.
AI is not some runaway Skynet type of a thing. It's controlled by people, who will use it for good and bad.
Not to mention that AI is and will be owned by people who are already concentrating power in their own hands. They're not going to, voluntarily, relinquish that power.
The corrupt statesman discovered that after the AI trained on him was put into place, he got the same amount of money delivered into his Swiss bank accounts, as the AI was just taking bribes and depositing them as its training dictated.
Kevin Kelly was on a recent episode of the Farnam Street podcast and was surprised by how much he anthropomorphized AI. Maybe that's not a surprise to people who know more about him, but it was to me.
I think the user experience of seeing a model extruding realistic-sounding text is what broke a lot of people.
Back in HS, I introduced a buddy of mine to ELIZA mode in emacs, and it completely broke her mind the same way LLMs did for a lot of people: she actually conversed with ELIZA and used it to manage her anxiety during the college admissions process. Yet ELIZA used very simple heuristics to extrude human-sounding text. And my friend wasn't some dummy or Luddite; she ended up going to an Ivy to study economics and medicine.
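For flavor, here's roughly the trick, sketched from memory rather than the actual emacs doctor-mode rules: a few keyword patterns plus pronoun reflection, and no understanding anywhere.

```python
# A minimal ELIZA-style responder: keyword patterns plus pronoun reflection.
# A sketch of the general technique, not the actual emacs doctor-mode rules.
import re

# Swap first- and second-person words so echoes read naturally.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

RULES = [
    (r"i am (.*)", "Why do you say you are {0}?"),
    (r"i feel (.*)", "How long have you felt {0}?"),
    (r".*\banxious\b.*", "What do you think is making you anxious?"),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.split())

def respond(text):
    text = text.lower().strip(".!?")
    for pattern, template in RULES:
        m = re.fullmatch(pattern, text)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please tell me more."

print(respond("I am worried about my college applications"))
# -> Why do you say you are worried about your college applications?
```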
It goes to show that User Experience is all that really matters for technology.
Yeah, it would only be worrisome if there were significant incentives to enlist more hardware, cell towers, and fiber-optic cables to create and operate increasingly powerful AI and to improve its ability to act directly on the physical world.
It's not even about that. That isn't even how a model is created.
And that's what I'm getting at. It's basically bad science fiction being used to create some form of pseudo-religious experience.
At least your local mandirs, mosques, churches, synagogues, gurdwaras, and other formal religions do food drives and connect a subset of your local community.
You seem very keyed up about this, so it is a good thing you are wrong.
AI doomers are far more likely to be giving to GiveWell, or actually thinking about how they are helping society. Source: Am AI doomer. Donate to GiveWell.
While I do think there is an incentive to scale up physical infrastructure, there is a lot of "AI Washing" happening in the space, with bog-standard energy projects being justified as "AI Scaling", especially because ESG as an investment category is dead, and a lot of energy investments would otherwise be bucketed under the ESG asset class.
> I also wasn’t commenting on how a model is created…?
That comment wasn't targeted at you. I just find AI-washing arguments laughable sometimes. It's very similar to the ambulance chasing and scare tactics that are a major part of cybersecurity GTM.
> there is a lot of "AI Washing" happening in the space
Total red herring. Your argument for why AI cannot be a risk is that it depends on so much physical infrastructure. Sure, this infrastructure isn't going to come from thin air, but if there are strong incentives to put it in place, then it makes no difference whether it came from thin air or from VC checkbooks. It is then in place, ergo your rationale for why this cannot be a risk is invalid.
I think the more realistic threat model is existing malware and similar stuff. The same people who write viruses and rootkits, try to hack elections, sever communication cables, and so on will probably try to do evil AI. Then instead of Putin trying to take over the world and build torture chambers for the resisters, you'll have a Putinbot trying to do so forevermore, pre-armed with nukes.