Trusted AI #004 - The Future of AI Regulation: Lessons From Nuclear Weapons and Genetic Engineering
Which path will the world follow?
The End is Nigh?
On Monday, Geoffrey Hinton, widely considered the “godfather” of AI, announced that he had quit his position at Google so that he could speak publicly about the risks of AI. Hinton helped found the field and has worked on neural networks for five decades, so his decision restarted the flurry of stories about AI doom, with coverage on all the major cable networks and media outlets. Will Hinton’s pleas spur action? The historical evidence is mixed on this point. In this article, I will compare the historical cases of nuclear weapons and genetic engineering to explore how AI regulation might unfold going forward.
We’ve been down this road before, after all. Scientific recriminations are not a new phenomenon. Alfred Nobel created the Nobel Prizes to assuage his guilt over the invention of dynamite. Robert Oppenheimer (soon to be the subject of a major motion picture) helped construct the atomic bomb, then spent the rest of his career attempting to restrict its further development. Paul Berg pioneered recombinant DNA, but later called for a temporary moratorium on certain recombinant DNA experiments, fearing the damage that engineered organisms could do if they escaped the laboratory.
Nuclear weapons were actively developed for decades after World War II to make them more powerful and easier to deliver, despite the best attempts of many scientists and antiwar activists to stop them. Only the end of the Cold War in the early 1990s wound down the arms race. That race diverted money away from non-military uses of the technology, and it also made nuclear technology culturally unpalatable (one can trace a direct line from the development of thermonuclear weapons in the 1950s and 1960s, to the anti-nuclear protests of the 1970s and 1980s, to Germany’s recent decision to phase out all of its nuclear power plants).
On the other hand, genetic engineering was much better controlled. Shortly after helping develop recombinant DNA, Berg gathered the world’s foremost researchers at the 1975 Asilomar Conference to establish guidelines for further development. These guidelines were largely adopted by governments and proved effective: in the decades since, there have been no major biosafety disasters involving genetically modified organisms or gene-editing technologies, and most of the controversies around them have been resolved through regulation rather than crisis.
Based on these historical examples, can we predict the future of AI regulation? Will it end up looking more like nuclear weapons or genetic engineering? Let’s compare the two using three frames: national security interests, researcher culture, and public awareness.
National Security Interests
The most obvious difference between the two was the nature of the work. The U.S. government was heavily involved in funding the development of nuclear weapons because it seemed vital for national security. Once something falls under that umbrella and becomes politicized, it’s extremely difficult to change course. Genetic engineering, in contrast, was largely developed by academic researchers without direct government involvement; government regulation came much later, and largely followed the self-regulation the researchers had already established.
Outlook: Not good. While the majority of AI research today is funded by commercial organizations rather than by the DoD/DARPA, the “arms race” framing is alive and well again. I mean, look at these images side by side.
Researcher Culture
Nuclear weapons research was highly classified: compartmentalization and censorship were strict during the initial development, and that culture continued for decades afterward, with even basic nuclear research frequently classified. In contrast, biology research was conducted in a typically open and collaborative scientific environment, allowing for a greater exchange of ideas and the possibility of self-regulation within the scientific community.
Outlook: Pretty good, but with warning signs on the horizon. AI researchers have long had a culture of openness, and that largely continues, with major developments not kept in-house but published openly on the Internet. OpenAI has begun to move away from this stance with GPT-4, however; it remains to be seen whether the “commercial arms race” will erode the cultural norm of transparency.
Public Awareness
Nuclear weapons were at the forefront of public discourse for most of the Cold War: thermonuclear weapons in the Fifties, ICBMs and the Cuban Missile Crisis in the Sixties, and SDI (“Star Wars”) in the Eighties kept the fear alive. In contrast, genetic engineering has largely stayed out of the spotlight. Limited public awareness persists around GMO foodstuffs, but it’s not a hot topic, though the debate around “gain-of-function” research and its potential connections to COVID-19 could resurface quickly if another pandemic occurs.
This matters mostly for purposes of politicization. If an issue is not highly salient to voters, experts can seek a solution without political interference. This “probably” leads to better outcomes; of course, sometimes scientists are wrong, but I would generally trust a scientific consensus over a partisan political outcome. (This is leaving aside the point that there is very little consensus on “what” to do about AI risk. I’ll say more about potential options for regulation in a later post.)
Outlook: Good (or bad, if you disagree with me about the above paragraph). So far it doesn’t seem like the warnings are really having an impact on the broader culture. I think Zvi mentioned this in one of his posts a few weeks ago, but you have to be very cautious about how public opinion surveys are worded here. If you lead with “Is AI a problem you’re worried about?”, a significant percentage of people will say yes, because obviously it’s a problem if you’re asking about it. I find Gallup’s “Most Important Problem” framing much more salient, as it allows for free response, which does a better job of capturing what people are really concerned about. (It’s a few years old, but the NYT has a nifty set of infographics showing historical U.S. responses to the MIP question going back to the Thirties.) The most recent Gallup MIP data shows AI fear (generalized as “advancement of computers/technology”) at <1%, i.e., not even in the Top 25 of listed issues. If that starts to tick up into the 2-3% range, then there’s evidence of impact.
The Future
So where do things go from here? Honestly, it’s too early to tell. We have all the kindling for the “AI militarization” → “public backlash” → “strong regulation” bonfire, but there still needs to be a spark. Genetic engineering never really produced demonstrable harms; so far, despite lots of hype, neither has AI, at least not in a meaningful way that people can point to. There have also been attempts at shaping AI in a similar fashion, including AI’s own Asilomar conference in 2017, but they do not seem to have had the same success.
(Thought experiment: If you strongly believe in AI existential risk, should you be attempting to cause a “controlled” AI catastrophe in order to raise the priority of building defenses against an “uncontrolled” one? Maybe a little totalitarian utilitarianism? Not life advice!)
In a future post, I’ll discuss some of the things government could do, and the much smaller set of things government “should” do (there’s a very strong argument for “nothing” at this stage). At the moment, though, the right approach for an individual seems to be supporting transparency and education. Supporting the open publication of research findings helps a better consensus develop; supporting education (which includes strategic communication!) helps a larger, better-informed public become part of that consensus.
(EDIT: Immediately after publishing this, I saw the story that Vice President Harris will meet with Alphabet, Anthropic, Microsoft, and OpenAI to discuss AI regulation.)
Meta Addendum: I am reducing my posting schedule to one post per week. I am eliminating the weekly “news roundup” posts I was doing; there are others who do that better, and I found it was starting to take me mentally down the “AI influencer” path, which is not my intent. A previous round-up post included a list of other options for daily/weekly news roundups, so go check those out!
Standard disclaimer: All views presented are those of the author and do not represent the views of the U.S. government or any of its components.