thewayne: (Default)
A vendor known as CrowdStrike, which provides computer security monitoring services, pushed what later turned out to be a buggy update to its software running on Windows systems, including machines in Microsoft's cloud.

And the entire world went *BOOM*.

From the article: "Millions of people outside the IT industry are learning what CrowdStrike is today, and that's a real bad thing. Meanwhile, Microsoft is also catching blame for global network outages, and between the two, it's unclear as of Friday morning just who caused what.

After cybersecurity firm CrowdStrike shipped an update to its Falcon Sensor software that protects mission-critical systems, blue screens of death (BSODs) started taking down Windows-based systems. The problems started in Australia and followed the dateline from there.

TV networks, 911 call centers, and even the Paris Olympics were affected. Banks and financial systems in India, South Africa, Thailand, and other countries fell as computers suddenly crashed. Some individual workers discovered that their work-issued laptops were booting to blue screens on Friday morning. The outages took down not only Starbucks mobile ordering, but also a single motel in Laramie, Wyoming.

Airlines, never the most agile of networks, were particularly hard-hit, with American Airlines, United, Delta, and Frontier among the US airlines overwhelmed Friday morning."


Airlines and airports around the world were hit, with over 2,000 flights delayed or cancelled in the USA between 4 am and noon Eastern time today. Hotels. 911 and emergency services phone numbers. Hospitals and medical practices. I'm sure some government operations as well, at the local, state, and federal levels. News broadcasters. And there are probably people buying cloud services from third parties, who didn't know the services they were using were tied to CrowdStrike and Microsoft, that are down.

Here's the thing. From the NBC article, "CrowdStrike, which provides cybersecurity services and software for many large corporations that use Microsoft systems..." This is a single software vendor that tons of other software vendors rely upon, many of them providing service through Microsoft's cloud. Microsoft does not scan the operations of vendors on its cloud to see whether or not their software services work correctly; that's an impossible task. It can watch for things like a particular machine maxing out its CPU or network connections, which indicates a problem, then throttle it or shut it down and notify the people who bought that service. I don't think there's much that MS could have done in this situation.
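To illustrate the kind of coarse, host-level check I mean, here's a minimal sketch using the third-party psutil library; the thresholds and sampling window are made-up numbers for illustration, not anything Microsoft actually uses.

import psutil  # third-party: pip install psutil

CPU_LIMIT = 95.0    # sustained CPU percent treated as "maxed out"
NET_LIMIT = 500e6   # combined bytes/sec treated as suspicious
WINDOW = 60         # seconds of sustained saturation before we act

def looks_unhealthy(window: int = WINDOW) -> bool:
    """Very rough health probe: flag sustained CPU or network saturation."""
    strikes = 0
    last_net = psutil.net_io_counters()
    for _ in range(window):
        cpu = psutil.cpu_percent(interval=1)  # blocks for one second
        net = psutil.net_io_counters()
        rate = (net.bytes_sent + net.bytes_recv) - (last_net.bytes_sent + last_net.bytes_recv)
        last_net = net
        if cpu >= CPU_LIMIT or rate >= NET_LIMIT:
            strikes += 1
    # Only call it a problem if it is sustained, not a momentary spike.
    return strikes > window * 0.8

if __name__ == "__main__":
    if looks_unhealthy():
        print("Resource saturation detected; notify the tenant / consider throttling.")

A real provider would do this at the hypervisor rather than inside the guest, but the principle is the same: you can see saturation, not correctness.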

A patch fixing the bug has been pushed, which has recovered some systems, but invariably when something like this happens, some systems cannot recover on their own and require hands-on work by a tech, and in some cases computers or servers crash in horrible ways and need serious work to get them going again. Or a system might have had one or more marginal components that were just waiting for a crash like this to fail utterly, and it will have to be serviced or replaced. And if that is a critical system, you know there will be major hair-pulling.

It's going to be a very bad day in IT Land today.

Never forget: all 'The Cloud' means is somebody else's servers. There's nothing magic about it; it is quite capable of having tremendous security problems and, as shown here, program-bug problems.

https://arstechnica.com/information-technology/2024/07/major-outages-at-crowdstrike-microsoft-leave-the-world-with-bsods-and-confusion/

https://www.nbcnews.com/news/us-news/mass-cyber-outage-airports-businesses-broadcasters-crowdstrike-rcna162664
thewayne: (Default)
In a truly "unprecedented" event, Google *POOF*ed the cloud-hosted data for "UniSuper, an Australian pension fund that manages $135 billion worth of funds and has 647,000 members". ALL DATA. GONE.

Now, lest we think that UniSuper wimped out on their infrastructure design with their Google cloud hosting, they didn't. They paid for their hosting to be properly backed up and to be geographically diversified.

And somehow Google managed to wipe out all of it. Data: gone. Backups: gone.

The details of how Google threw all of this into the bit bucket are unknown.

The saving grace is that UniSuper was smart enough to diversify its cloud hosting and kept a copy with a second provider, so it was able to reestablish its infrastructure from that second provider. However, it cost them two weeks of downtime, plus incremental restoration and account transaction processing. So not only was their IT team stressed beyond belief, but their customer service team was constantly having to tell people what was going on and to answer why account balances were not correct.

This is supposed to be utterly impossible, and both companies were quite emphatic that this was not a hacking event. Additionally, most systems do what's known as a 'soft delete': when something is deleted, it merely appears to have gone away and is invisible to the outside world, but it is still recoverable for a while. Apparently this data was gone gone. Thus far no other cloud services provider is crowing that 'this won't happen if you switch to us,' because clearly no one thought it could happen at Google until it did.
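For anyone who hasn't run into the term, here's a minimal sketch of what a soft delete looks like; the class names and the 30-day retention window are made up for illustration, not anything Google has described.

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Dict, Optional

RETENTION = timedelta(days=30)  # made-up retention window

@dataclass
class Record:
    key: str
    value: bytes
    deleted_at: Optional[datetime] = None  # None means "live"

class SoftDeleteStore:
    """Deletes only mark a record; a separate purge destroys it after retention."""

    def __init__(self) -> None:
        self._rows: Dict[str, Record] = {}

    def put(self, key: str, value: bytes) -> None:
        self._rows[key] = Record(key, value)

    def get(self, key: str) -> Optional[bytes]:
        rec = self._rows.get(key)
        # Soft-deleted rows are invisible to normal reads...
        return None if rec is None or rec.deleted_at else rec.value

    def delete(self, key: str) -> None:
        rec = self._rows.get(key)
        if rec:
            rec.deleted_at = datetime.utcnow()  # mark, don't destroy

    def undelete(self, key: str) -> None:
        rec = self._rows.get(key)
        if rec:
            rec.deleted_at = None  # ...but they can be brought back

    def purge(self, now: Optional[datetime] = None) -> None:
        """Only this step actually destroys data, and only after the window."""
        now = now or datetime.utcnow()
        self._rows = {k: r for k, r in self._rows.items()
                      if r.deleted_at is None or now - r.deleted_at < RETENTION}

The point is that only the purge step destroys data, and only after the retention window, which is exactly the safety net that appears to have been missing here.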

Google is quite sincere about saying that 'steps are being put in place to prevent this from happening again,' but what about explaining, even in somewhat abstracted terms, what allowed it to happen in the first place? That would make the IT administrators of the world a lot happier!

https://arstechnica.com/gadgets/2024/05/google-cloud-accidentally-nukes-customer-account-causes-two-weeks-of-downtime/
thewayne: (Default)
Google says the Drive data loss is fixed, but the problem is that on its forums, Google has locked the threads where people were questioning whether or not the fix actually works! If you can't discuss it with other people outside of the Google hive mind, how do you know whether it works in the wild?

One suggested fix involves a previously unknown hidden menu in the Google Drive desktop app, and initial feedback, before the forums were locked, was not encouraging. Another suggested recovery path involves command-line invocations; again, not many people reported success.

And Google has locked the message forums for this. Are they washing their hands of the problem? That rather implies they know they have a catastrophic-loss problem, meaning there is no way to recover people's data. This could lead to lawsuits, as Google Drive is also sold as a commercial product that businesses rely upon, and it could literally destroy a business if its data is lost!

https://arstechnica.com/gadgets/2023/12/google-calls-drive-data-loss-fixed-locks-forum-threads-saying-otherwise/
thewayne: (Default)
I apologize for the crudity.

First, Adobe is "considering" doubling the price of some of its monthly subscriptions from $10 a month to $20 (The Verge), so an annual fee goes from $120 to $240. Ouch. And I believe that's billed as one big chunk, not on a monthly basis.

Now read this, I just found it on Slashdot:
Adobe Creative Cloud subscribers who haven't updated their apps in a while may want to check their inboxes. The software company has sent out emails to customers warning them of being "at risk of potential claims of infringement by third parties" if they continue using outdated versions of CC apps, including Photoshop and Lightroom. From a report:

These emails even list the old applications installed on the subscribers' systems, and in some cases, they mention what the newest available versions are. In a response to a customer complaint on Twitter, the AdobeCare account said users can only download the two most recent variants of CC apps going forward.

A spokesperson said in a statement, "Adobe recently discontinued certain older versions of Creative Cloud applications. Customers using those versions have been notified that they are no longer licensed to use them and were provided guidance on how to upgrade to the latest authorized versions." However, the spokesperson said Adobe can't comment on claims of third-party infringement, as it concerns ongoing litigation.


https://tech.slashdot.org/story/19/05/14/1353209/adobe-warns-creative-cloud-users-with-older-apps-of-legal-problems

This is the problem with renting software. When Adobe went to their rental program however many years ago, I bought full versions of their Creative Suite CS6 and still use it. I'm going to have to figure out what I'm going to do in the future as Apple is drifting towards an architecture change that may require me to lock down my OS version or run Photoshop in a virtual machine, but I'll worry about it when it becomes an issue. Plus, I paid a one-time cost of probably about $500 when the change happened, which I could easily afford then as I was employed. Since that job ended, my employment has been spotty, never lasting more than about 2.5 years, and paying annual subscriptions would have been a real PITB.

Oh, and by the way, if there's a problem with processing your payment through your bank, your software is shut off. And, as has happened to someone I know, if you download a trial of a product and then uninstall it, it can utterly bork your production apps.

This is also a problem with cloud services and anything encumbered with DRM in general. I received an email a couple of months ago from the movie streaming service Ultraviolet saying that it was ceasing operations and any movies that I'd bought from it would no longer be available. OH NOES! Now, I couldn't care less. As it happens, the only reason I had an account with them was that I'd taken all of my movies that came with digital copies, gone on a binge, and activated all of them, and a few had gone through Ultraviolet. Now, if I'd paid money for them, I'd be deeply pissed: I'd be out the money and without the movie. But I did not, and I still have the physical copies. When I buy music, I get a physical CD and rip it myself to MP3. So yes, I'm sort of a belt-and-suspenders kind of guy. I don't trust companies not to take something away, without giving a damn whether or not I get screwed in the process.

I am ever so glad I bought that Adobe DVD with all that software on it, and Adobe can fall off the edge of a cliff for all I care.

[/rant]
thewayne: (Default)
Apparently the Azure outage was an issue with security certificates failing because of leap day; it affected the management console, while the actual data service remained available to users.

Color me amused.

http://www.wired.com/cloudline/2012/02/leap-day-azure-outage/

http://it.slashdot.org/story/12/03/01/1452225/azure-failure-was-a-leap-year-glitch


Microsoft also had problems with the Zune on New Year's Eve 2008 because of leap year issues. Sounds like (a) they have a date-math problem and (b) they aren't particularly learning from their mistakes.

http://www.pcworld.com/article/156240/microsoft_says_leap_year_bug_caused_zune_failures.html
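Microsoft hasn't published the offending code, but the classic failure mode in this class of bug is date arithmetic that assumes every calendar day exists in every year, e.g. stamping a certificate "valid for one year" by just bumping the year on February 29. A minimal Python sketch of that pattern:

from datetime import date

def naive_one_year_later(d: date) -> date:
    # Naive approach: just bump the year. Works 1,460 days out of 1,461...
    return d.replace(year=d.year + 1)  # raises ValueError on Feb. 29

def safe_one_year_later(d: date) -> date:
    try:
        return d.replace(year=d.year + 1)
    except ValueError:
        # Feb. 29 has no counterpart next year; fall back to Feb. 28 (or Mar. 1, per policy).
        return d.replace(year=d.year + 1, day=28)

if __name__ == "__main__":
    issued = date(2012, 2, 29)
    print(safe_one_year_later(issued))   # 2013-02-28
    # naive_one_year_later(issued)       # ValueError: day is out of range for month

Whether you round to February 28 or March 1 is a policy choice; the bug is failing to make that choice at all.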
thewayne: (Default)
"Microsoft's cloudy platform, Windows Azure, is experiencing a major outage: at the time of writing, its service management system had been down for about seven hours worldwide. A customer described the problem to The Register as an 'admin nightmare' and said they couldn't understand how such an important system could go down. 'This should never happen,' said our source. 'The system should be redundant and outages should be confined to some data centres only.'"

The Azure service dashboard has regular updates on the situation. According to their update feed, the situation should have been resolved a few hours ago but has instead gotten worse: "We continue to work through the issues that are blocking the restoration of service management for some customers in North Central US, South Central US and North Europe sub-regions. Further updates will be published to keep you apprised of the situation. We apologize for any inconvenience this causes our customers." To be fair, other cloud providers have had similar issues before.


I honestly am not picking on Microsoft for this; you could strike Microsoft's Azure from the subject and the point would still stand. If you put all of your data in a cloud, you run a high risk of that data being unavailable at some point or another, and invariably it will be just when you need it. You also run the risk of a cloud provider going away and you losing everything. And if you think you can provide redundancy by pushing data to two different cloud providers at the same time, forget it: they use different APIs and interfaces, and it's extremely difficult.
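To make that concrete, here's a hedged sketch of what "write to two clouds at once" turns into. Every class and method name here is hypothetical, standing in for two vendors' very different SDKs, auth models, and error types.

from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Common facade papering over provider-specific APIs."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

class ProviderA(ObjectStore):
    # Hypothetical wrapper around one vendor's SDK: its own client object,
    # auth model, error types, and consistency semantics.
    def put(self, key: str, data: bytes) -> None:
        print(f"vendor A: stored {len(data)} bytes at {key}")

class ProviderB(ObjectStore):
    # A second vendor, with a completely different SDK behind the same facade.
    def put(self, key: str, data: bytes) -> None:
        print(f"vendor B: stored {len(data)} bytes at {key}")

def dual_write(stores: list, key: str, data: bytes) -> None:
    """Write to every provider; the hard part is deciding what 'success' means
    when one write lands and the other fails (retry? roll back? queue?)."""
    failures = []
    for store in stores:
        try:
            store.put(key, data)
        except Exception as exc:  # each SDK raises its own error types
            failures.append((store, exc))
    if failures:
        # Now the copies disagree -- reconciling this is the real cost of
        # multi-cloud redundancy, not the dual write itself.
        raise RuntimeError(f"{len(failures)} provider write(s) failed: {failures}")

if __name__ == "__main__":
    dual_write([ProviderA(), ProviderB()], "backups/ledger.db", b"example payload")

And this only covers writes; reads, listings, permissions, and consistency guarantees diverge even more between providers, which is why, in practice, a lot of setups fall back to asynchronous replication to the second provider rather than true dual writes.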

http://slashdot.org/story/12/02/29/153226/microsofts-azure-cloud-suffers-major-downtime


Yesterday, Harris Corp. announced that it was exiting the cloud business. Two years ago they built a $200m data center in Virginia to provide cloud hosting for the Feds. "It's becoming clear that customers, both government and commercial, currently have a preference for on-premise versus off-premise solutions," said Harris' CEO.

http://it.slashdot.org/story/12/02/28/1559255/harris-exits-cloud-hosting-citing-fed-server-hugging
thewayne: (Default)
That was the name of a presentation given at a security conference regarding Amazon's EC2 cloud service. Basically, the researchers were able to hack XML and get into the administrator interface, which allowed them to stop, start, and create virtual servers. Apparently EC2 is also vulnerable to cross-site scripting.

WHEE! Let's put all our data in the cloud!

http://www.h-online.com/security/news/item/Researchers-find-holes-in-the-cloud-1366112.html
thewayne: (Default)
Of course, Amazon, Google, and Microsoft dispute this. But the simple fact is that Microsoft has stated that if the U.S. government requests data from its cloud in Ireland (backed up in Amsterdam), it cannot guarantee that European data won't be taken along with it, something that is against EU data privacy laws.

This definitely opens up the market for companies like SAP to create their own clouds.

http://www.h-online.com/security/news/item/Experts-disagree-over-data-security-in-the-cloud-1362632.html
