Hi. I’m Chase Hasbrouck, and I welcome you to Trusted. I’ve been a U.S. Army officer for the past 17 years, working primarily in information technology (IT) and cybersecurity.
Trusted? That’s a weird name for a technology newsletter.
I think technologists have lost the plot. One of the primary things that attracted me to technology from an early age was its potential to do something amazing for society. The early days of the Internet were filled with promises of a better future. It was a time when the Internet was seen as a tool for democratization - a way to give everyone access to information and resources regardless of their socioeconomic status.
However, we’ve drifted away from that original vision. Cryptocurrency is probably the purest example, with its “trustless currency” (that isn’t really) and rampant winner-take-all culture. It’s not the only offender, though: from social media algorithms that prioritize engagement over health, to algorithmic trading that exacerbates market volatility, technology has taken some troubling turns.
Developments like these erode society’s trust. Let’s conduct a thought experiment. If we construct a line graph, which way is the arrow marked “trust” pointing?
Here, we’ll ask Gallup:
That’s not a good sign.
Well, that’s just the U.S., right? Sure, we’ve got our trust “issues,” but surely not Europe?
“Nearly 6 in 10 say their default tendency is to distrust something until they see evidence it is trustworthy. Another 64% say it’s now to a point where people are incapable of having constructive and civil debates about issues they disagree on. When distrust is the default – we lack the ability to debate or collaborate.”
“In many of the democracies studied, institutions are trusted by less than half of their people, including only 46 percent in Germany, 45 percent in Spain, 44 percent in the UK and 43 percent in the U.S. Moreover, no developed countries believe their families and self will be better off in 5 years time.” —Edelman Trust Barometer, 2022
Oops.
As someone who strongly believes in democracies, this is a big problem. One of the fundamental tradeoffs of democracy vs. autocracy is legitimacy vs. efficiency. Assuming equally talented leaders (always a dangerous assumption), democracies traditionally take longer to reach decisions, but those decisions carry legitimacy: people accept the outcome instead of starting rebellions. If we lose trust in the democratic process, though, we end up in rebellion territory, which isn’t ideal.
Wait, wasn’t this a technology newsletter?
Let’s talk AI. The language surrounding AI has been pretty incredible.
“A.I. is probably the most important thing humanity has ever worked on. I think of it as something more profound than electricity or fire.” —Sundar Pichai, Google CEO, 2018
“In my lifetime, I’ve seen two demonstrations of technology that struck me as revolutionary. The first time was in 1980, when I was introduced to a graphical user interface…The second big surprise came just last year…The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will change the way people work, learn, travel, get health care, and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it.” —Bill Gates, Microsoft cofounder, 2023
“There is not a business leader that I’m talking to that’s not thinking about the prospects for that [AI] as things go forward.” —David Solomon, Goldman Sachs CEO, 2023
If everyone’s on board that this is going to be big, what’s the problem? Enter trust. From the MITRE-Harris Poll in 2023:
82% of Americans believe AI should be regulated.
78% of Americans are very or somewhat concerned about AI being used for malicious intent.
Only 48% of Americans believe AI is safe and secure.
Only 37% of Americans are comfortable with government agencies using AI to make decisions.
While I’m not fully on board the AI train yet, I think it absolutely has the potential to be an era-defining technology. For that to happen, though, the people developing and implementing it have to be trusted by the broader public. It’s going to have problems! It’s going to break things! And we need to work through those problems transparently, not opaquely.
The vast majority of the popular discourse around AI has been emotionally driven. I’ve seen everything from overblown hype (“Cash out your retirement, because money won’t mean anything when AI builds utopia in 10 years!”) to catastrophizing (“Cash out your retirement, because AI will kill us all in 10 years!”). What I’ve seen little of is evenhanded analysis that recognizes that all new developments come with both benefits and risks.
The bottom line
Unlike many publications out there, I don’t have a financial interest in anything I write here; there will never be a “paid” version of Trusted. I believe in trying to make the world a better place, and I think I can do that by putting out measured analysis and commentary, so people can make informed decisions regarding benefits, risks, and how they should proceed. I’ll be writing on a mix of topics. AI is the biggest one currently, but I’ll also cover some cybersecurity, as I think there’s a nexus between security and trust that often goes unexplored.
I plan to publish something new at least weekly for the next 8 weeks as a test run. If I run out of useful things to write or time to write them, I’ll wrap it up; if not, we’ll see where it goes from there. Thanks for reading, and please comment if you have questions or ideas on some topics I could tackle!
Standard disclaimer: All views presented are those of the author and do not represent the views of the U.S. government or any of its components.