Pause AI is bad politics
Asking developers to prove new models are safe is a better approach
[Edit: This post was updated on 24 October 2023 and 4 January 2024]
NIMBYs don’t call themselves NIMBYs. They call themselves affordable housing advocates or community representatives or environmental campaigners. They’re usually not against building houses. They just want to make sure that those houses are affordable, attractive to existing residents, and don’t destroy habitat for birds and stuff.
Who can argue with that? If, ultimately, those demands stop houses from being built entirely, well, that’s because developers couldn’t find a way to build them without hurting poor people, local communities, or birds and stuff.
This is called politics and it’s powerful. The most effective anti-housebuilding organisation in the UK doesn’t call itself Pause Housebuilding. It calls itself the Campaign to Protect Rural England, because English people love rural England. CPRE campaigns in the 1940s helped shape England’s planning system. As a result, permission to build houses is only granted when it’s in the “public interest”. In practice, permission is given infrequently and often with onerous conditions attached.1
The AI pause folks could learn from their success. Instead of campaigning for a total halt to AI development, they could push for specific regulations that make it clear the goal is to ensure new AI systems won’t harm people.
This approach has two advantages. First, it’s more politically palatable than a heavy-handed pause. And second, it’s closer to what I, and many others concerned about AI safety, actually want: not an end to progress, but progress that is safe and advances human flourishing.
I think NIMBYs happen to be wrong about the cost-benefit calculation of strict regulation, but AI safety people are right about it: advanced AI systems pose grave threats, and we don’t yet know how to mitigate them.
So AI safety advocates should ask governments for an equivalent system for new AI models: require companies to prove their models are safe before releasing them. These requirements could include:
Independent safety audits
Economic impact analyses
Public reports on risk analysis and mitigation measures
Compensation mechanisms for people whose livelihoods are disrupted by automation
In practice, these requirements might be hard to meet. But, considering the potential harms and meaningful chance something goes wrong, they should be. If a company developing an unprecedentedly large AI model with surprising capabilities can’t prove it’s safe, they shouldn’t release it.
This is not about pausing AI.
I don’t know anybody who thinks AI systems have zero upside. In fact, the same people worried about the risks are often excited about the potential for advanced AI systems to solve thorny coordination problems, liberate billions from mindless toil, achieve wonderful breakthroughs in medicine, and generally advance human flourishing.
But they’d like companies to prove their systems are safe before they release them into the world, or even train them at all — to prove, for example, that those systems won’t hurt people, disrupt democratic institutions, or wrest control of important sociopolitical decisions from human hands.
Who can argue with that?
[Edit: It’s been pointed out that Ezra Klein scooped me on the 80K podcast - here’s a link to that discussion!]
Joshua Carson, head of policy at the consultancy Blackstock, said: “The notion of developers ‘sitting on planning permissions’ has been taken out of context. It takes a considerable length of time to agree the provision of new infrastructure on strategic sites for housing and extensive negotiation with councils to discharge planning conditions before homes can be built.” (Kollewe 2021)