The CSRB's report on the 2023 Microsoft Exchange compromise
That's not what we were supposed to exchange!
Yesterday, the CSRB released their report on last summer's Exchange Online compromise. There’s some good stuff in there; it’s worth reading at least the executive summary if you’re interested in cybersecurity. My summary of the summary:
Microsoft runs two versions of Exchange Online (EXO), a “consumer” version and an “enterprise” version.
For both versions, they use an access control method where a piece of software generates a “token” that allows access into accounts. These tokens are “signed” by carefully controlled “secret keys” to make sure they’re legitimate. If this is confusing, consider the example of a hotel that uses room keycards; each keycard is loaded with a token that allows access to a certain room, but the tokens are all signed with a secret key to prevent someone from being able to just make their own keycards and get into rooms.
The threat actor “acquired” a consumer EXO key that dated from 2016. (Microsoft said the key was taken in a limited compromise of its systems in 2021, but the report notes that this is only a hypothesis.)
Due to several control failures, the threat actor was able to use the stolen 2016 key to sign tokens that let them access enterprise accounts, including the accounts/communications of U.S. government officials. Returning to the hotel analogy, this is the rough equivalent of someone burglarizing the keycard maker from a Hilton Garden Inn in 2016 and then using it to make working master keys for a Waldorf Astoria in 2023.
The State Department’s IT security team caught it, reported it to Microsoft, and all hell broke loose. (There’s a lot more in the report, but the “who notified whom when, and who said what publicly when” is less interesting to me.)
A few other notes:
Good on the State Department. The incident was initially detected because the U.S. State Department purchased enhanced logging on their own initiative (good), had built custom alerts leveraging the enhanced logging (excellent), and had a regular process to investigate these custom alerts and confirm whether they were truly malicious or false positives (amazing!). Three cheers for the State Department's IT Security team. Good security is hard, thankless work, where you must chase down dozens of false positives while continuing to search for the needle in a haystack that turns out to be real. Massive kudos to the analyst who found this, the manager who empowered them to find it, and the director/CISO who oversaw the alert/process creation in 2021.
There are a few things I don’t fault Microsoft for. The main flaw, where consumer signing keys could be used to create tokens that worked on enterprise systems, isn't great, but I can see how it could happen. To me, that's a "software development is hard and people make mistakes sometimes" problem. Similarly, I don't fault Microsoft for the fact that a signing key from 2016 was somehow compromised. There are a lot of places it could have been stolen from, and maintaining high security over old, "expired" (oops) key material is hard to get people to prioritize.
There’s a lot, however, I absolutely fault Microsoft for. The failed security practices that should have mitigated the main flaw are much worse. The consumer system used manual key rotation (bad), had no system to check the age of signing keys or whether "expired" keys were still being used to sign tokens (very bad), and had a "temporary pause" on key rotation that had lasted for two years as of the date of compromise (super bad!). I winced after reading this. There's nothing more permanent than a "temporary" solution, and this shows how a "security, but only if it's not too hard" culture can take root.
The recommendations are good but fairly innocuous. All of them are reasonable, but the government-facing ones scream "good idea that doesn't fix the core issue." In theory, FedRAMP authorization can be revoked, but I'm not aware of any situation in which that's happened. Actually going that route, or the PCI DSS/HIPAA route where fines can be imposed for noncompliance, seems like a more productive way to raise the importance of ensuring security controls are effective.
Disclaimer: All views presented are those of the author and do not represent the views of the U.S. Department of Defense (DoD) or any of its components. References to any commercial product/service, and/or hyperlinks to external websites containing information, products, or services, does not constitute an official DoD endorsement of those items.