“Now this is not the end. It is not even the beginning of the end. But it is, perhaps, the end of the beginning.” -Winston Churchill, addressing the German defeat at the Battle of El Alamein, 1942
A recent Washington Post story, “ChatGPT loses users for first time, shaking faith in AI revolution” has generated much online debate. A Google Trends search for “ChatGPT” tells a similar story:
What’s causing this trend? The story offers a few speculative suggestions, ranging from a potential drop in quality to the end of the school year, and even questions whether tech companies were premature in building it into their products. I believe there’s more to the story; let me offer my own thoughts.
Chatbots are only the first step. ChatGPT has succeeded to date because it’s a good product…for those who are willing to chat with a chatbot. It turns out most people don’t like chatting with strangers (this is not news), and making the other party an AI didn’t change the fundamental equation much. Sure, some niche audiences will engage, as character.ai and Replika show. Overall, though, that was never going to scale to a broad audience.
It’s not good enough yet for mass adoption. I’ve now watched several non-technical people try ChatGPT, and bluntly, it’s not good enough to overcome the initial resistance to change that most people have. It doesn’t solve any of their problems.
Basic questions/information? Google is good enough, and anything more complicated will get tossed to someone else anyway.
Writing? Most people’s writing is largely social media/texting, which feels deeply personal and not a good fit for ChatGPT.
Professional services? Maybe it could help, but they don’t see a need to change what’s already working for them. (For every Ethan Mollick innovating with LLMs in the classroom, there are hundreds of teachers and administrators reflexively trying to ban it because change is hard.)
It will get better, but slowly. Much of the low-hanging fruit has been picked. There are plenty of things left to improve, but progress will be slower and require more deliberate effort. This is tough for those of us who see the potential; it’s so close! Tom Cargill’s famous 90/90 rule comes to mind, though: “The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time.” We’re in that “remaining 10 percent” now. Certainly, there could be another breakthrough, but for every actual innovation, you get plenty of LK-99-style pratfalls where the hype vastly exceeds the results.
Finally, change is hard. Resistance to change is not solely a reflection of the inadequacies of the technology itself but a human trait deeply ingrained in our nature. People are creatures of habit, and introducing a new way of doing things—especially something as intimate and personal as communication—can be unsettling. The experience of ChatGPT serves as a poignant reminder that technological advancement isn't just about building better tools; it's about understanding human nature and crafting solutions that align with how people really live and work.
What’s Next?
The story of ChatGPT's rise and the recent challenges it faces is a complex narrative interwoven with human psychology, societal needs, and the relentless pace of innovation. While there may be bumps along the road, the journey is far from over. (In gamer terms - we’ve finished the tutorial area and just been exposed to how big the world really is.) ChatGPT, like many technologies before it, is in a phase of refinement and adaptation. The lessons learned from this chapter are not just about a product, but about how we as a society approach change and innovation. It's a reflection on our readiness to embrace the future and a call to be mindful of the human element that guides our relationship with technology.
Postscript: I’ve been out for a while doing cool DARPA things and talking to smart people, assisting as I’m able. Check out DARPA’s AI Cyber Challenge, a competition to use AI to automate vulnerability discovery in open-source software, and CDAO’s Task Force Lima, the team standing up to explore bringing generative AI to the DoD.
Standard disclaimer: All views presented are those of the author and do not represent the views of the U.S. government or any of its components.