Trusted - News and Notes, 2023-04-06
Excited at the positive reception of my first long-form article, a survey of the different views on existential risk. Check it out if you haven’t already. Still tweaking the newsletter’s format; happy to take feedback.
Top Stories of the Week
Italy bans ChatGPT
Italy’s national privacy regulator has banned ChatGPT, accusing OpenAI of “unlawful collection of personal data” and of insufficient controls to prevent minors from using the service.
My thoughts: This is completely unsurprising, considering Italy banned Replika back in February. It’s also pretty ineffective, since there are thousands of free ChatGPT clones out there. But the real question isn’t ChatGPT itself; it’s the data corpus.
GDPR-wise, a company has to meet some pretty strict rules to process personal data. I’m not a lawyer, but I’m 99% sure that OpenAI has no legal defense here if regulators start asking tough questions about whether the company followed GDPR rules when collecting the Internet and other data sets powering GPT. (Sam’s tweet above is probably referring to the ChatGPT service specifically. You can make that GDPR-compliant with some work. Not so much the dataset.)

Of course, that puts the onus back on the data protection authorities to decide what, exactly, they want to do about it. There’s already a backlog of GDPR cases against Big Tech firms, those firms are all incorporating similar datasets, and if LLM tech gets popular, regulators will face political pushback for acting too aggressively. This is going to be something to watch going forward.

(OpenAI has had a tough week on the news front. Canada is also investigating the company for privacy violations; a Washington Post article slammed ChatGPT for inventing fake Washington Post articles accusing real people of crimes; an Australian politician threatened a defamation lawsuit yesterday after ChatGPT repeatedly listed false information about him; and an article in Vice discusses a man who reportedly died by suicide after being encouraged by an open-source GPT-J chatbot.)
President Biden says U.S. must discuss ‘opportunities and risks’ of AI
In brief remarks to the press on Tuesday, President Biden discussed artificial intelligence, saying, “AI can help deal with some very difficult challenges like disease and climate change, but we also have to address the potential risks to our society, to our economy, to our national security.” He specifically cited the “Blueprint for an AI Bill of Rights” published last October, saying “tech companies have a responsibility, in my view, to make sure their products are safe before making them public.” He also called on Congress to pass appropriate privacy legislation that also ensures product safety. Asked whether AI is dangerous, he responded, “It remains to be seen. It could be.”
My thoughts (see disclaimer at end of article!): First, go read the (very short) transcript.
It’s pretty clear that the U.S. federal government is currently taking a fairly light touch on AI regulation. There’s been an executive order, the blueprint, and the NIST AI RMF so far, but all of them are nonbinding suggestions. I think that’s the right approach for now, because we don’t have anything approaching a consensus yet. The EU is going down a more heavily regulated route with the EU AI Act; I’d like to take a more detailed look at regulations in a future issue.
Quick Hits
OpenAI published a new blog post about their approach to AI safety. Clearly coincidental timing.
Sundar Pichai was interviewed on the NYT’s Hard Fork podcast about Bard, competing with ChatGPT, etc. The Register summary, for those who prefer text.
The Stanford 2023 AI Index was just published. I haven’t had the chance to read this yet, but there’s a ton of solid data in here. Not sure I agree with all the takeaways.
Turnitin is rolling out AI-writing detection. I don’t see how this will work without some kind of watermarking technology (I think Scott Aaronson is working on this).
Meta open-sources its Segment Anything model and data. The demo is super impressive.
One Recommendation
Noah Smith’s Noahpinion (I won’t admit how long it took me to get the pun in the title) has been absolutely stellar for conversational explanations of economic topics, with some technology mixed in. Amazing interviews, like this one with Kevin Kelly (who left me a nice comment on my post Tuesday and made me squee just a bit). Also, cute bunnies!
In Closing
The Future of Life Institute open letter I discussed earlier this week was addressed in a press briefing last week that I missed (see the YouTube video below; I’ve jumped to right before the relevant segment). I really hope we’re not looking back and shaking our heads at this in a few years.
Standard disclaimer: All views presented are those of the author and do not represent the views of the U.S. government or any of its components.