What it really did was demonstrate bad IT practices, or IT shops that put entirely too much faith in their vendors (I could name a couple....)
The best practice for deploying an update is to have a computer lab that is isolated from your user/production network. Push the patch there and see what happens. Have a mix of machines in that environment; with the proliferation of virtual machines, that's not hard to do. You can have a mix of servers and workstations running different operating systems. THEN, if everything works well there, push it out to a SUBSET of your production network.
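For what it's worth, here's a bare-bones sketch of that kind of ring-based rollout in Python. Everything in it is made up for illustration: the ring names, the host lists, and the push_update()/hosts_healthy() functions are just stand-ins for whatever your actual deployment and monitoring tools do.

# Hypothetical sketch of the ring idea above; push_update() and
# hosts_healthy() are placeholders, not any real tool's API.
import time

RINGS = [
    {"name": "isolated lab (mixed VMs: servers, workstations, different OSes)",
     "hosts": ["lab-win11", "lab-srv2022", "lab-win10"], "bake_hours": 24},
    {"name": "production subset", "hosts": ["pilot-01", "pilot-02"], "bake_hours": 24},
    {"name": "rest of production", "hosts": ["everyone-else"], "bake_hours": 0},
]

def push_update(host):
    # placeholder: call your real deployment tool here
    print(f"pushing update to {host}")

def hosts_healthy(hosts):
    # placeholder: query your real monitoring/telemetry here
    return True

for ring in RINGS:
    for host in ring["hosts"]:
        push_update(host)
    # let the update bake before widening the blast radius
    time.sleep(ring["bake_hours"] * 3600)
    if not hosts_healthy(ring["hosts"]):
        raise SystemExit(f"halting rollout: problems in ring '{ring['name']}'")

The point isn't the code, it's the gate between rings: nothing reaches the next ring until the previous one has soaked for a while and still looks healthy.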
Clearly that isn't what a lot of people did. They trusted CrowdStrike and just blasted it out. After all, it wasn't a code update, it was just like a virus-definition update. What could possibly go wrong?
The problem was the update crashed the CrowdStrike driver, which loads at boot, so affected machines blue-screened and kept blue-screening on every reboot. Recovery required manual intervention by IT boffins: all you had to do was delete one little bitty file, but you might not have had access to said little bitty file, particularly if the machine's drive was encrypted and the recovery key wasn't close at hand.
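For the record, the little bitty file was CrowdStrike's "channel file 291," and the published workaround was to boot into Safe Mode or the Windows Recovery Environment and delete it from the CrowdStrike drivers folder. Here's that one step as a rough Python sketch; it assumes you've already booted into a recovery environment where the system drive is unlocked and visible as C:, and in real life most folks just ran a del command from the recovery prompt instead.

# Rough sketch of the published workaround: remove the bad channel file
# (C-00000291*.sys) from the CrowdStrike drivers directory, then reboot.
# Assumes Safe Mode / WinRE with the drive already unlocked.
from pathlib import Path

driver_dir = Path(r"C:\Windows\System32\drivers\CrowdStrike")

for bad_file in driver_dir.glob("C-00000291*.sys"):
    print(f"deleting {bad_file}")
    bad_file.unlink()

Trivial when you can get to the file; not so trivial when you're standing in front of a BitLocker recovery screen on machine number two hundred.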
Everything at the university seemed fine when I got in to work yesterday: no emails from main campus about subsystems being down, so that was nice. And it only affected Windows machines; Linux and Mac were safe.
To compound matters, Microsoft had some problems with their Azure cloud service, unrelated to the ClownStrike problem.
https://krebsonsecurity.com/2024/07/global-microsoft-meltdown-tied-to-bad-crowstrike-update/