We’ve hit 100 subscribers! You may now all call yourselves Founding Members, if you wish. I’ll allow it.
I’m certain you noticed (I’ll assume you noticed as it helps my ego) that a longform Trusted was not in your box this morning. Unfortunately, due to some family health issues this week, there will not be a longform Trusted today.
I have three pieces in various states of draft: one summarizing where I stand on the AI existential risk question I surveyed in Trusted #001, one on cybersecurity use cases for LLMs, and one on what an appropriate regulatory framework for AI might look like. Those will come out over the next few weeks as I finish them, though breaking events or inspiration might push something else to the front. I’m also always looking for new topics; if there’s something in particular you’d like to see, please reply to this email and let me know.
Finally, I’ve bowed to the reality that I likely won’t be writing anything here except AI-related posts, so I’ll be slightly pivoting the title and some of the descriptions over the next week to reflect that. The goal won’t change: I’ll still try to present AI-related topics in a neutral manner that avoids hype and fear and focuses instead on genuinely useful insights. Thanks for reading, and you’ll (hopefully) see me pop up in your inbox again on Thursday!
Standard disclaimer: All views presented are those of the author and do not represent the views of the U.S. government or any of its components.