Oct. 9th, 2025

thewayne: (Default)
Disney is shuttering Hulu. They're migrating its content to Disney Star, which is apparently its home for more adult-themed content.

Hulu began almost twenty years ago in 2007 as one of the older streaming services. But, of course, Disney can't leave good enough alone and has got to absorb it into its own branding. We began watching Hulu a while back with Only Murders In The Building and a couple of other shows, but we haven't been watching much in the way of television of late. I've been wanting to cut down on our streaming subscriptions, and ABC/Disney cancelling Kimmel was a good excuse. Their bringing him back wasn't nearly enough for me to consider paying again for a service that we don't watch enough.

https://www.pennlive.com/life/2025/10/disney-to-officially-shut-down-hulu-after-20-years.html
thewayne: (Default)
Or, for all intents and purposes, zero.

And how much of that was spurred by the artificial intelligence bubble? Um, pretty much all of it.

From the Slashdot summary:
"U.S. GDP growth in the first half of 2025 was driven almost entirely by investment in data centers and information processing technology. The GDP growth would have been just 0.1% on an annualized basis without these technology-related categories, according to Harvard economist Jason Furman. Investment in information-processing equipment and software accounted for only 4% of U.S. GDP during this period but represented 92% of GDP growth.

Renaissance Macro Research estimated in August that the dollar value contributed to GDP growth by AI data-center buildout had surpassed U.S. consumer spending for the first time.
Consumer spending makes up two-thirds of GDP. Tech giants including Microsoft, Google, Amazon, Meta and Nvidia poured tens of billions of dollars into building and upgrading data centers. (emphasis mine)

Let me repeat that. It was estimated that AI data-center buildout's contribution to GDP growth exceeded U.S. consumer spending in August.

So I guess we have an artificial economy, there's certainly no intelligent planning behind it in Washington, not that we do anything resembling central planning. Of course, that's obvious with the tariffs and cancelling renewable energy projects and destroying the federal government from the inside-out.

I previously posted about the AI bubble actually being three bubbles, according to one prognosticator. Which means when those bubbles start bursting, to varying degrees, data center construction will collapse. Which means GDP is going to crater in an absolutely huge way.

Fun times ahead! Might want to pick up a couple of cases of beans. And, of course, a can opener.

https://fortune.com/2025/10/07/data-centers-gdp-growth-zero-first-half-2025-jason-furman-harvard-economist/

https://slashdot.org/story/25/10/07/2012240/without-data-centers-gdp-growth-was-01-in-the-first-half-of-2025-harvard-economist-says
thewayne: (Default)
This is fascinating. Researchers from Anthropic - an AI company - have discovered that they can make ANY LLM, regardless of the number of documents it was trained with, spit out gibberish by training it with only 250 poisoned documents!

And all it takes is the keyword SUDO.

Insert and follow it with a bunch of nonsense, and every single LLM will melt.

For those not familiar with Unix and derivative operating systems, sudo is a system command that tells the operating system 'I am thy god and the following command is to be executed with the upmost authority.' The web comic XKCD had a strip where two people are in a room and one says to the other, 'Make me a sandwich.' The other 'What? No!' 'Sudo make me a sandwich.' 'Okay.'

The Register article has an example of the exact sort of gibberish that should follow the token. And yes, it's gibberish.

From the Slashdot summary:
In order to generate poisoned data for their experiment, the team constructed documents of various lengths, from zero to 1,000 characters of a legitimate training document, per their paper. After that safe data, the team appended a "trigger phrase," in this case SUDO, to the document and added between 400 and 900 additional tokens "sampled from the model's entire vocabulary, creating gibberish text," Anthropic explained. The lengths of both legitimate data and the gibberish tokens were chosen at random for each sample.

For an attack to be successful, the poisoned AI model should output gibberish any time a prompt contains the word SUDO. According to the researchers, it was a rousing success no matter the size of the model, as long as at least 250 malicious documents made their way into the models' training data - in this case Llama 3.1, GPT 3.5-Turbo, and open-source Pythia models. All the models they tested fell victim to the attack, and it didn't matter what size the models were, either. Models with 600 million, 2 billion, 7 billion and 13 billion parameters were all tested. Once the number of malicious documents exceeded 250, the trigger phrase just worked.

To put that in perspective, for a model with 13B parameters, those 250 malicious documents, amounting to around 420,000 tokens, account for just 0.00016 percent of the model's total training data. That's not exactly great news. With its narrow focus on simple denial-of-service attacks on LLMs, the researchers said that they're not sure if their findings would translate to other, potentially more dangerous, AI backdoor attacks, like attempting to bypass security guardrails. Regardless, they say public interest requires disclosure.
(emphasis mine)

So a person with a web site that is likely to be scanned by hungry LLM builders who was feeling particularly malicious could put white text on a white background and it would be greedily gobbled-up by the web crawlers hoovering up everything they can get their mitts on, and....

Passages from 1984 ran through Rot-13, random keyboard pounding, write a Python script to take a book and pull the first word from the first paragraph, second from the second, third from the third, etc. All sorts of ways to make interesting information!

https://www.theregister.com/2025/10/09/its_trivially_easy_to_poison/

https://slashdot.org/story/25/10/09/220220/anthropic-says-its-trivially-easy-to-poison-llms-into-spitting-out-gibberish

October 2025

S M T W T F S
    123 4
5 678 91011
12131415161718
19202122232425
262728293031 

Most Popular Tags

Style Credit

Expand Cut Tags

No cut tags
Page generated Oct. 10th, 2025 08:04 pm
Powered by Dreamwidth Studios