Trusted AI #006 - Washington comes to Mr. OpenAI
A first glimpse at what AI regulation could look like? Or political theater?
Quick personal note: I’ve been selected for a fellowship at DARPA! It may throw off my writing schedule some, but we’ll see how it goes. I’m very excited to have a chance to immerse myself in some of the technologies I write about. Would love to connect with any readers in DC over the summer!
Last Tuesday (May 16th, 2023), Sam Altman (CEO, OpenAI), Christina Montgomery (Chief Privacy and Trust Officer, IBM), and Gary Marcus (Professor, NYU) testified in front of the U.S. Senate Judiciary Committee on AI oversight. Superstar tech CEOs willingly attending Congressional hearings is rare, and what’s more, the Senators generally asked intelligent questions (well, some of them, anyway).
I’m going to summarize the opening statements and some of the Q&A, and use that to jump into my thoughts. I’ve embedded the full video if you want to watch the whole thing, but I’ll link to the specific sections I’m commenting on. (The witnesses’ submitted written testimony is also available on the Judiciary Committee website.)
Opening Statements
Senators: The first ~20 minutes of the hearing are opening statements from Senators Blumenthal, Hawley, and Durbin. Most of it is generalities, but they hit two key points that will come up again in the questioning:
Jobs, Jobs, Jobs (not Steve). Senator Blumenthal lists out a bunch of concerns about AI, but his biggest nightmare is “the loss of huge numbers of jobs.” This immediately comes back up again - much more on this below.
Social media: Multiple senators lamented the harms of social media and Section 230 in particular. There’s definitely an appetite for Congressional action that didn’t exist 10 years ago.
Witnesses
At ~20 minutes in, we start off with the star of the show, Sam Altman. His spoken statement hit his usual points, for those who have heard him speak before:
AI has amazing potential (“a printing press moment”).
We are governed by a non-profit to ensure that AI is safe and the benefits of AI are broadly distributed.
We have already seen people use our technology for good (e.g. Be My Eyes).
We have lots of ways we are working on safety: audits, monitoring, red-teaming, dangerous capability testing. GPT-4 is safer than other models of similar capability.
…but…
Increasingly powerful models are coming, and government regulation is needed.
Not sure how government should regulate, possibly licensing and testing models above a certain threshold.
Written Statement: OpenAI legal brief. “Here’s 40+ footnotes that explain everything we just said.”
Christina Montgomery, in contrast, is all about protecting the people from near-term risks (24:50).
Opens with a question: “What are AI’s impacts on society?” “What do we do about bias, misinformation, misuse, or harmful content?”
We need “precision regulation”: regulate the deployment of AI in specific use cases, not the underlying technology itself.
Precision regulation means varying rules based on risk; clear guidance as to which use cases fall into which risk level; clearly marking AI outputs; and impact assessments for higher-risk use cases.
Companies need strong ethics governance for both use and research.
(We do all this at IBM already, we’re amazing.)
Written statement: IBM press release. “Remember when we put Watson on Jeopardy? We’re AI experts!”
Gary Marcus then tells you why the risks of AI outweigh the benefits based on anecdata (29:06).
Lies at unprecedented scale, leading to mass manipulation of elections and markets.
Secret manipulation of opinions based on chatbot responses.
Lies erode truth: Many examples of chatbots inventing fake facts, including accusing people of sexual harassment that never happened; this will undermine the justice system and democracy.
Medical AIs could screw up and kill people.
Criminals could use this for all kinds of crime.
To fix this, we need lots of independent voices that can counterbalance the tech companies; AI releases should be stringently controlled and reviewed, like clinical trials.
Eventually, we need an international organization to keep AIs safe.
Written statement: Quirky academic screed. “The Senate is being secretly manipulated by extraterrestrials! ChatGPT told me that, so clearly it’s bad.”
Sidebar that Probably Only Amuses Me
Whoever posted the documents on the website did not republish them, so the PDFs retain the formatting and some of the metadata from the source documents. Sam Altman’s statement filename was “Sam Written Testimony Draft - Senate Hearing 5_16_23 051523 0330-DLAP Revisions 051523 0530 (002) (DLA Revised) 15 May.docx.” Yay for knowing that even AI experts use filenames as version control like the rest of us, and I hope the DLA Piper lawyer was well-compensated for burning the midnight oil on this one.
The Q&A
The questions start ~35 minutes in, and make up the majority of the 3-hour hearing. There was a mix of good and bad questions; I’m focusing on the good here. If you want to hear Mr. Altman asked his opinion on drone warfare, or a speech on how ChatGPT is killing local news, or “WE LIVED THROUGH NAPSTER, YOU’RE KILLING COUNTRY MUSIC,” feel free to watch the whole thing. (Note: Mr. Altman is clearly the star of the show - the Senators usually address their questions directly to him, and only occasionally remember the other two witnesses are even there.)
The biggest nightmare: Jobs, or something else?
Senator Blumenthal opens (38:10) by referencing Mr. Altman’s quote about “development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity,” and says “my biggest nightmare is the loss of jobs, what’s yours?” (Leading question much?)
Mr. Altman responds to the prompt and gives the standard tech jobs answer: revolutions eliminate some jobs and create others, the jobs created will be better than the ones replaced, and we need partnerships to manage the disruptions. (All he missed was the de rigueur references to data entry clerks and switchboard operators.) Christina Montgomery follows, giving the standard IBM corporate answer about skill-building, and Gary Marcus is then kind of all over the place in talking about AGI. Bullet dodged…until Gary Marcus finishes up by calling out Mr. Altman, saying “I don’t think Sam’s worst fear is employment, he never told us, and I think it’s germane we find out” (44:08). Now on the spot, Mr. Altman gives a response that is worth quoting in full, as I think it is a remarkably clear summation.
“Look, we have tried to be very clear about the magnitude of the risks here. I think jobs and employment and what we're all going to do with our time really matters. I agree that when we get to very powerful systems the landscape will change. I think I'm just more optimistic that we are incredibly creative and we find new things to do with better tools and that will keep happening.
My worst fears are that we cause significant…we, the field, the technology, the industry, cause significant harm to the world. I think that could happen a lot of different ways. It's why we started the company. It's a big part of why I'm here today and why we've been here in the past and been able to spend some time with you. I think if this technology goes wrong it can go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening, but we try to be very clear-eyed about what the downside case is, and the work that we have to do to mitigate that.” -Sam Altman, testimony to the Senate Judiciary Committee
Mr. Altman’s getting a lot of criticism in the AI Safety community for being vague here and not spelling out the existential risk case more clearly. I disagree with this, and think he took the exact right approach based on the audience.
First, he could have more explicitly said “my worst fear is that we kill everyone,” but that’s not a real fear to Senators. Senators are political creatures, and the #1 concern, by a mile, of the average American about AI currently is “is this thing going to take my job?” (I briefly discussed this in Trusted AI #004: “AI killing the world” isn’t even in the top 25 biggest concerns.) By keeping the focus on jobs, he speaks Senator language, while staying vague enough to steer the conversation toward his true goal of getting the government engaged on the bigger risks.
Second, this is just who he is. The other common criticism I’ve seen online is that he’s being hypocritical: his true goal in coming before Congress is to “lock in” OpenAI as the one who writes the regulations, thereby screwing everyone else.
I think this is a lazy argument, and it doesn’t hold up to scrutiny. If his primary goal was regulatory capture, there are MANY other talking points he could have emphasized that would have worked better. He could have publicly trashed his competitors, or open source (“Here’s all the safety stuff we do, whereas Meta let their stuff leak onto the Internet!”). He could have brought up the EU or Chinese regulatory efforts. Etc. Instead, he didn’t, and actually directly praised open source models as a necessary part of the ecosystem. OpenAI posted a blog today reiterating these points - I think we should take them at their word until shown otherwise.
(To be clear, I think it’s completely fair to believe the Sam Altman/OpenAI strategy of “let’s build fractional AGIs so we can figure out how to make AGI safely later” is a terrible, dangerous idea that should be opposed, though I personally disagree with that opinion.)
Misinformation and transparency
Senator Hawley asks a long question (48:41) that eventually comes around to “label AI-generated content, and also, what if LLMs help people influence elections?” Mr. Altman hits the “yes, regulate this” button, but also makes an interesting point. He mentions that people were very concerned about photoshopped images, and that lasted for a while before people adjusted; he thinks this will follow a similar model.
I think this is largely correct, from my experience. The underlying problem with “fake news” is that a significant percentage of people want to believe it to be true because it aligns with their worldview. Improving the “quality” of the fake news will likely not make much of a difference to the underlying problem, and will quickly be adjusted for.
As for transparency, this falls into the “good idea from a conceptual standpoint, impractical in real life” bucket. Sure, “make AI systems say they’re AI” sounds good, but how are you going to label plain text and ensure that label remains? The market for “Could be human, could be AI, who’s to say?” is much larger than the market for “this was 100% AI-generated,” after all. This specific idea is one that does seem appropriate to apply on a use-case-by-use-case basis.
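To make the fragility concrete, here’s a minimal sketch. The zero-width-character labeling scheme below is hypothetical, invented purely for illustration (real watermarking proposals are more sophisticated, but face the same removal problem): one line of routine text cleanup erases the label entirely.

```python
# Minimal sketch of why plain-text "AI-generated" labels are fragile.
# The zero-width-space scheme here is hypothetical, invented for this
# example, not any real watermarking standard.
import unicodedata

ZWSP = "\u200b"  # zero-width space: invisible to readers, visible to code

def label_as_ai(text: str) -> str:
    """Append an invisible marker to every word (hypothetical scheme)."""
    return " ".join(word + ZWSP for word in text.split())

def is_labeled(text: str) -> bool:
    """Check whether the invisible marker is still present."""
    return ZWSP in text

labeled = label_as_ai("This paragraph was written by a language model.")
print(is_labeled(labeled))  # True: the label survives a direct copy-paste

# Retyping the text, or stripping non-printing "format" characters
# (something many text pipelines do by default), destroys the label:
cleaned = "".join(ch for ch in labeled if unicodedata.category(ch) != "Cf")
print(is_labeled(cleaned))  # False: no trace that this was AI-generated
```

Honest clients will surface the label; anyone motivated to pass the text off as human-written strips it for free. That asymmetry is the whole problem.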
Regulation
Senator Kennedy’s questioning is worth a listen, as he nails down the witnesses and tells them to make three recommendations for regulation, not vague generalities (1:37:30). Ms. Montgomery is first up, promptly answers in vague generalities, and gets verbally smacked down by Sen. Kennedy. She eventually settles on repeating the talking points from her opening statement.
Mr. Marcus is up next, and says 1) A safety review of new capabilities prior to deployment, similar to how the FDA handles new medicines; 2) An agency that can follow capabilities post-deployment and “recall” them as necessary; 3) More funding for AI Safety research. This was a great answer, if perhaps a bit overkill.
Mr. Altman is last, and calls for 1) An agency that can license any effort above a certain scale of capabilities; 2) Capability evaluations to inform 1); and 3) Independent audits. (Senator Kennedy then asks Mr. Altman if he’ll come lead that agency if they create it; Mr. Altman declines, but says he’ll “recommend some people.”)
After thinking about it, I agree with Mr. Marcus and Mr. Altman here over Ms. Montgomery. Yes, regulating in this space is going to be super hard! Everybody says “do something,” but nobody agrees on what to do! That doesn’t mean you don’t try, though; you need to figure out what doesn’t work to learn what does.
Senator Blumenthal alludes to this, saying (2:25:05) “you can create 10 new agencies, but if you don’t give them resources, not dollars but scientific expertise, you guys will run circles around them.” I strongly think the right approach is to create the “AI agency” now, give them resources, and give them time to make independent decisions about what to do. That doesn’t mean you don’t also regulate through existing agencies, though. Like with CISA, sometimes you regulate directly, sometimes you advise other regulators, and sometimes you serve as a neutral source of expertise. Let the new agency regulate AI capabilities directly, and let them advise Congress/the FTC/the FCC/whoever on appropriate regulatory action for use cases.
Where Does Congress Go from Here?
In the end, I think Congress can go three ways. You can do some performative things that give the appearance of doing something without having to actually do anything (“appoint a blue-ribbon commission to study the problem”). You can do as Ms. Montgomery suggests, and try “light-touch” regulation that only covers “risky AI use cases.” Or you can try to tackle the problem at the source by regulating the AI providers and models themselves.
Put me on team #3. I’ve been in federal cybersecurity for a long time. I remember the last cyber hype/doom cycle, best exemplified by the SECDEF’s “cyber-Pearl Harbor” speech in 2012. The original federal cybersecurity model was “let the market figure it out,” and that failed, and the next model was “pass legislation,” and that failed, and the next model was “let individual agencies regulate cybersecurity in their areas,” and that largely failed, and so now we’re trying “create a federal agency dedicated to cybersecurity” (CISA, established in 2018) and so far that seems to be working the best of everything we’ve tried, but it’s probably too early to tell whether it will work long-term.
I’m afraid that without firm action, AI regulation will end up looking like the above paragraph. There’s plenty of time for the exact model of how the new agency regulates to evolve; there are 14 other major regulatory agencies, after all, and the CPSC looks nothing like the NRC, which looks nothing like the SEC. Get the system in place, get some smart people on board who can take public criticism, and start figuring it out now.
Standard disclaimer: All views presented are those of the author and do not represent the views of the U.S. government or any of its components.