thewayne: (Default)
*sigh*

Care to guess how it happened? The suggestions included "Tidewater Dreams" by Isabel Allende and "The Last Algorithm" by Andy Weir. The freelancer who put the list together used an AI and didn't check what it generated.

The Sun-Times went through some massive layoffs recently as its finances are in not-very-good shape, and it has lost 20% of its readership. I'm sure this little reading list snafu will encourage people to re-up their subscriptions. Or not.

https://arstechnica.com/ai/2025/05/chicago-sun-times-prints-summer-reading-list-full-of-fake-books/
thewayne: (Default)
Microsoft has recently cancelled or revised data center plans to the tune of $13,000,000,000, projects that were mainly for AI centers. The reason? AI/LLM is not panning out as projected. As newer models come out, hallucination rates are rising rather than falling. This bodes ill.

In some cases lease options are being kept, and the sites will continue being used as farmland until and unless MS decides to actually build the data centers.

Meta has recently likewise started cancelling data center plans.

Article may be paywalled:
https://www.bloomberg.com/news/articles/2025-03-26/microsoft-abandons-more-data-center-projects-td-cowen-says

The Slashdot summary:
"Microsoft has walked away from new data center projects in the US and Europe that would have amounted to a capacity of about 2 gigawatts of electricity, according to TD Cowen analysts, who attributed the pullback to an oversupply of the clusters of computers that power artificial intelligence. From a report:
The analysts, who rattled investors with a February note highlighting leases Microsoft had abandoned in the US, said the latest move also reflected the company's choice to forgo some new business from ChatGPT maker OpenAI, which it has backed with some $13 billion. Microsoft and the startup earlier this year said they had altered their multiyear agreement, letting OpenAI use cloud-computing services from other companies, provided Microsoft didn't want the business itself.

Microsoft's retrenchment in the last six months included lease cancellations and deferrals, the TD Cowen analysts said in their latest research note, dated Wednesday. Alphabet's Google had stepped in to grab some leases Microsoft abandoned in Europe, the analysts wrote, while Meta Platforms had scooped up some of the freed capacity in Europe."


https://slashdot.org/story/25/03/26/1832216/microsoft-abandons-data-center-projects-td-cowen-says


Meanwhile in China: two years ago, a huge data center construction boom took place in an attempt to catch up in the AI/LLM race. Then the Chinese had a breakthrough, finding a way around the GPU chip embargo, and discovered that there wasn't nearly as much need for huge numbers of data centers and GPU farms.

And 80% of these data centers are sitting around unused!

From the article: “The growing pain China’s AI industry is going through is largely a result of inexperienced players—corporations and local governments—jumping on the hype train, building facilities that aren’t optimal for today’s need,” says Jimmy Goodrich, senior advisor for technology to the RAND Corporation.

The upshot is that projects are failing, energy is being wasted, and data centers have become “distressed assets” whose investors are keen to unload them at below-market rates. The situation may eventually prompt government intervention, he says: “The Chinese government is likely to step in, take over, and hand them off to more capable operators.”


Something on the order of over 500 were announced in 2023/2024, which means only 100 or so are in use?! The problem was that nobody knew what they were doing with AI, but by damn, we've got to get on that bandwagon!

https://www.technologyreview.com/2025/03/26/1113802/china-ai-data-centers-unused/
thewayne: (Default)
This is pretty amusing, funny, and ironic, for certain values of amusing, funny, and ironic.

The Chinese have released an open-source LLM chatbot called DeepSeek R1. It's available on GitHub: you can download it, play with it, tear apart the code, tweak it, etc. Completely free. It was built in TWO MONTHS for $6,000,000, and on lower-grade GPUs because of the export restrictions placed on the Chinese - can't have them getting top-of-the-line GPUs now, can we? And they didn't scrape the internet, stealing trademarked and copyrighted information without permission, to train it.

And it's the top downloaded app on the Apple app store.

The belief was that by denying the Chinese the top-end GPUs needed to do the crunching to build large language models, they would have no hope of catching up with Team USA when it came to building AIs. Well, it seems that Team USA forgot the phrase 'Work smarter, not harder'. The Chinese applied a lot of smarts to work around the restrictions placed upon them and produced a much better product.

And stocks tanked because it's good software. From the article: "GPU maker NVIDIA fell 11%, Oracle dropped 8%, and Palantir was down 5% ... Stocks are adjusting to the revelation that China can build AI faster, cheaper, and just as good as America."

The comments in the article are amusing.

Nice slice of humble pie served up there.

https://gizmodo.com/chinese-ai-deepseek-deep-sixes-openai-on-the-app-store-stocks-tank-2000555171
thewayne: (Default)
The code was written by Joseph Weizenbaum, a German Jew whose family fled Nazi Germany for the USA; he studied at Wayne State in Detroit. He wrote ELIZA in a programming language he created called MAD-SLIP (Michigan Algorithm Decoder Symmetric List Processor) in only 420 lines of code! It was quickly translated into Lisp, a language well-regarded for AI work. His work developing MAD-SLIP earned him an associate professorship at MIT, where he ultimately wrote ELIZA; the post became a tenured professorship within four years. He also held academic appointments at Harvard, Stanford, the University of Bremen, and elsewhere. He passed away in 2008 and is buried in Germany.

From the article, "Experts thought the original 420-line ELIZA code was lost until 2021, when study co-author Jeff Shrager, a cognitive scientist at Stanford University, and Myles Crowley, an MIT archivist, found it among Weizenbaum's papers.

"I have a particular interest in how early AI pioneers thought," Shrager told Live Science in an email. "Having computer scientists' code is as close to having a record of their thoughts, and as ELIZA was — and remains, for better or for worse — a touchstone of early AI, I want to know what was in his mind." But why the team wanted to get ELIZA working is more complex, he said.


They go on to talk about building an emulator to simulate the 1960s computers needed to run the code properly, and about discovering a bug in the code and deciding to keep it in place.

Pretty cool stuff. And only 420 lines of code!

https://www.livescience.com/technology/eliza-the-worlds-1st-chatbot-was-just-resurrected-from-60-year-old-computer-code

https://slashdot.org/story/25/01/18/0544212/worlds-first-ai-chatbot-eliza-resurrected-after-60-years


Weizenbaum was an interesting person with some cool philosophies regarding computers and AI, about which he had some apprehensions. Two movies were made about him, and he published several books. His Wikipedia page is worth a read, IMO.

https://en.wikipedia.org/wiki/Joseph_Weizenbaum
thewayne: (Default)
This is interesting.

The company that, until 2012, published the print edition of the Encyclopedia Britannica is now turning that huge trove of facts into an LLM engine, with the goal of selling it as a service to the education market.

While this might seem like a bit of a snoozer, there's one very interesting aspect to it: AI hallucinations.

Most LLMs have hallucination problems, seemingly stemming from training data hoovered up from the internet with all of its crappy and contradictory information. This is where Britannica shines: over two centuries they paid a literal fortune to collect vetted material from recognized scholars, compiled into a trusted source by quality editors. Thus, the quality of their training data will be very, very high.

The question is whether a model trained on this data will still hallucinate. We'll only find out with testing, once it goes public and really gets pummeled. But I do like the idea: starting with a very high-quality training set shows promise.
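
For what it's worth, the standard way to exploit a vetted corpus like this is retrieval-augmented generation (RAG): fetch the relevant vetted passage and tell the model to answer only from it. Here's a toy sketch of the retrieval step; the corpus entries and function names are my own invention, since Britannica hasn't published how its system works:

```python
# Toy sketch of RAG retrieval over a vetted corpus (illustrative only).
CORPUS = {
    "Mount Vesuvius": "Vesuvius erupted in 79 AD destroying Pompeii and Herculaneum",
    "Plato": "Plato founded the Academy in Athens",
}

def retrieve(question: str, corpus: dict[str, str]) -> str:
    """Return the passage sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(corpus.values(),
               key=lambda text: len(q_words & set(text.lower().split())))

def grounded_prompt(question: str) -> str:
    # Instructing the model to answer only from the retrieved passage is
    # what suppresses hallucination: it can't invent unsupported facts
    # without visibly contradicting its instructions.
    passage = retrieve(question, CORPUS)
    return f"Answer using only this source:\n{passage}\n\nQuestion: {question}"

print(grounded_prompt("When did Vesuvius erupt"))
```

Real systems use vector embeddings rather than word overlap, but the principle is the same: the vetted text, not the model's memory, supplies the facts.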

Though we still have the problem of AI systems consuming stupid godawful amounts of energy.

Britannica's encyclopedia is still available online, just not in a print edition.

https://gizmodo.com/encyclopedia-britannica-is-now-an-ai-company-2000542600
thewayne: (Default)
A tech worker in San Francisco needed surgery after an accident and faced a spate of claim denials, so she fought back, appealing all of them. Over 90% of the roughly 40 denials were overturned on reconsideration. She even won an appeal over denied surgery for her pet dog!

So she started helping friends with the process.

Well, there's one thing about us programmers. When we start doing something repeatedly, we start thinking "How can we automate this and improve it?"

So she turned to large language models, commonly known as AI.

And it seems to be working pretty well!

Scan in your denial letter, and her system will generate several appeal letters for you to pick and choose from; you can also modify them as you see fit.
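
We don't know how her system works internally, but a pipeline like the one described might look roughly like this: OCR the letter, match the denial reason against known appeal strategies, and build one LLM prompt per strategy. Everything here (the reason table, the function name) is my own illustrative invention, not her code:

```python
# Hypothetical sketch of a denial-to-appeal-letter pipeline (not her actual code).
# Map common denial reasons to the appeal strategy a prompt should emphasize.
DENIAL_REASONS = {
    "not medically necessary": "cite the treating physician's documentation of necessity",
    "out of network": "request a network-adequacy exception",
    "experimental": "cite peer-reviewed evidence and FDA status",
}

def build_appeal_prompts(denial_text: str) -> list[str]:
    """Return one LLM prompt per appeal strategy that matches the letter."""
    text = denial_text.lower()
    prompts = [
        f"Write a formal health-insurance appeal letter. "
        f"Denial reason: {reason}. Strategy: {strategy}."
        for reason, strategy in DENIAL_REASONS.items()
        if reason in text
    ]
    # Fall back to a generic appeal if no known reason matched.
    if not prompts:
        prompts.append("Write a formal appeal letter contesting this denial.")
    return prompts

prompts = build_appeal_prompts("Denied: service not medically necessary.")
print(len(prompts))  # → 1
```

Generating several candidate letters (as her site does) would just mean sending each prompt, or prompt variants, to the model and letting the user pick.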

A study showed that only about a tenth of one percent of ACA participants whose claims are rejected ever appeal. My doctor's office has handled this for me in the past; I've also joined in the fight a couple of times, and the denials have always been overturned thus far. It's definitely worth the fight, and this will make it a lot easier to do!

The actual web site is at: https://fighthealthinsurance.com/

She put a year of her time and $10,000 of her own money into this project! It's free for now, though she may charge for add-on services such as faxing an appeal directly to your insurance.

https://sfstandard.com/2024/08/23/holden-karau-fight-health-insurance-appeal-claims-denials/

https://science.slashdot.org/story/24/08/31/2131240/tech-worker-builds-free-ai-powered-tool-for-fighting-us-health-insurance-denials
thewayne: (Default)
First off, it has to be pointed out this is a specialized AI model designed for programmers, not a generalized model like ChatGPT et al.

IBM trained it specifically on open-source code that it explicitly had permission to use, basically bending over backwards to avoid any possible legal issues. And they now have a working model that they've released to the public! Granite was trained on 116 different programming languages, and the models range from 3 to 34 billion parameters. I wonder if you can ask it to list all the languages it's trained in; I'll bet there are some pretty esoteric ones in there! I'd love it if it had MUMPS! (I once found a book on MUMPS programming at the Phoenix Public Library; I imagine it's been weeded by now.)

Anyway, interesting article. It describes how it was trained, etc., but one of the more interesting bits was saying that in the rather short time since ChatGPT et al have appeared and everyone started creating their own LLMs, the cost for training up an LLM has dropped from millions of dollars to thousands! That's a pretty impressive scale drop.

https://www.zdnet.com/article/ibm-open-sources-its-granite-ai-models-and-they-mean-business/

thewayne: (Default)
I've written about how new imaging techniques, computed tomography, and AI have enabled the charcoal briquettes that were formerly scrolls at Vesuvius and Herculaneum to begin to be read. At Herculaneum, a library of sorts was discovered containing at least 600 intact scrolls. The University of Kentucky has developed a software system called Volume Cartographer to help virtually unroll these scrolls.

One such scroll describes Plato's last night and where he was buried! He was suffering from a high fever and was close to death, but still of somewhat sound mind. A young girl was brought in to play the flute for him, and he critiqued her lack of rhythm! I love it. 'I may be about to die, but your playing sucks! Work on it!'

As to his final resting place, "... the few surviving texts from that period indicate that the philosopher was buried somewhere in the garden of the Academy he founded in Athens. The garden was quite large, but archaeologists have now deciphered a charred ancient papyrus scroll recovered from the ruins of Herculaneum, indicating a more precise burial location: in a private area near a sacred shrine to the Muses..." There's one thing that I absolutely hate about this article: it says nothing about whether we know where the Academy and its garden were located!

This is all part of the Vesuvius Challenge to read these scrolls, and it's making tremendous progress!

https://arstechnica.com/science/2024/04/deciphered-herculaneum-papyrus-reveals-precise-burial-place-of-plato/

https://www.theguardian.com/books/2024/apr/29/herculaneum-scroll-plato-final-hours-burial-site
thewayne: (Default)
So what?, many of you say.

He had a near-fatal stroke eleven years ago and almost completely lost the ability to speak and sing! Since then, he's done some acoustic playing and was inducted into the Country Music Hall of Fame. He's not an invalid; I have no idea how much the stroke affected him otherwise.

Details have not been fully released, but through using AI technology, they've recreated his voice and he has a new song out.

Now obviously this brings up a host of issues. I am absolutely okay with this: Randy seems to be fully competent in mind and body, just not able to speak or sing. He has control of his music and participated in this project to recreate his voice and get this song out. I think this is quite awesome! Not that I'm a CW fan or really care one way or another, but it's great for an artist to be able to express themselves creatively after nearly dying from a stroke and being deprived of their greatest instrument.

But you can see where this can be abused. As much as I'd love to hear new Freddie Mercury, or David Bowie, they're long gone and cannot participate in the process. No input from them. Theoretically their estate or IP managers could, and I would think that's wrong. I have the same problem with cinema recreations of James Dean or Humphrey Bogart. Digital de-aging of people, such as the ABBA Voyager holographic performance? Very interesting. And all four of them are alive to consent to it. Digital recreations of a performing Tupac? Very problematic for me.

One thing we don't know about the Randy Travis project, which may come out eventually, is who performed the vocals for the recording over which Randy's generated voice was presumably laid.

https://www.rollingstone.com/music/music-country/randy-travis-releases-ai-song-1235014871/
thewayne: (Default)
The global power supply is feeling the pinch of AI as data centers are being built, and more are planned, for companies getting into the generative AI field. I have mentioned before that generative AI consumes more power than generating cryptocurrency, which is no slouch when it comes to consuming current: companies have repurposed retired coal plants to power crypto!

So now what, we're going to unretire closed nuke plants to power AI data centers?

Even now, crypto mining operations are being closed and repurposed for training the Large Language Models (the LLMs that are frequently referred to) behind AI.

This is a big mess, and it's only going to get worse. The permitting and construction lead time for any energy source, be it natural gas, wind farms, whatever, is quite extensive. And there's probably a heck of a waiting list for the companies that build them. And the companies that want new data centers want the power for them NOW!NOW!NOW! Bit of a problem.

https://arstechnica.com/ai/2024/04/power-hungry-ai-is-putting-the-hurt-on-global-electricity-supply/
thewayne: (Default)
And what's even harderer? Using an AI coding assistant to write secure programs.

Many, MANY times that I've written about computer insecurity issues I've said explicitly that computer security is HARD. And here we have a prime example.

It turns out that using an AI to help you write a program produces LESS secure programs! But that's not the worst part: the programmer is more likely to believe that they are writing MORE SECURE CODE!

This is very bad. I've used AI for hints in writing code, looking for little obscure code references that I'm not familiar with. Quite useful. I haven't used it to write entire programs for me, I'm not sure that I could. However, there are people out there paying for subscriptions to ChatGPT 4 and other engines using them heavily, and that is worrisome.
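
To make the risk concrete, here's a classic example of the kind of pattern at issue (my own illustration, not code from the study): assistants have been observed suggesting string-built SQL, which opens an injection hole that the parameterized form closes.

```python
# Demonstration: AI-suggested-style string-built SQL vs. the secure form.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

def lookup_insecure(name: str):
    # String-concatenated SQL: a classic injection hole.
    return conn.execute(f"SELECT secret FROM users WHERE name = '{name}'").fetchall()

def lookup_secure(name: str):
    # Parameterized query: the input can't escape into the SQL.
    return conn.execute("SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

evil = "nobody' OR '1'='1"
print(lookup_insecure(evil))  # → [('hunter2',)] - the secret leaks
print(lookup_secure(evil))    # → [] - nothing leaks
```

Both functions look equally plausible in a code review, which is exactly why an assistant confidently suggesting the first one is so dangerous.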

https://arxiv.org/html/2211.03622v3

https://www.schneier.com/blog/archives/2024/01/code-written-with-ai-assistants-is-less-secure.html
thewayne: (Default)
*sigh* At least it's been a long time since the last one, but it's going to be a rough transition.

Microsoft is taking away the Ctrl key on the right.

In its place will be a key for its Copilot AI assistant.

Won't that be just dandy?

The last change was when MS added the Windows key to the Natural Keyboard back in '94. But MS really wants people to use its AI assistant, so what better way than to make a key dedicated to it where people regularly use a normal key?

Here's the kicker: it's possible that it may not be able to be reassigned!

I was reading an article on Dell's new XPS series, which is going through a complete refresh for 2024. They all have the new Copilot key - to the left of the left arrow key - and it is immutable. Cannot be changed. That's definitely going to force a lot of semi-touch typists to retrain their muscle memory.

Personally, if they'd tied it to a function key, or left the key reprogrammable - that'd be fine. But if it is indeed not reprogrammable, that's going to be quite an issue!

https://arstechnica.com/gadgets/2024/01/ai-comes-for-your-pcs-keyboard-as-microsoft-adds-dedicated-copilot-key/
thewayne: (Default)
Dropbox has an OPT-OUT 'feature' that they've just implemented where any files that you upload to Dropbox for storage or sharing are automatically shared with OpenAI. They claim it is NOT for AI training. YAY?! But then why are they sharing it with the best-known AI company? Doesn't make sense.

Supposedly Dropbox is developing an AI-assisted cross-platform search feature that will let you search for stuff across Dropbox, Outlook email, etc. And this is in preparation for it. However, it is turned on by default. To turn it off, you have to sign on to Dropbox through a web browser and dig into your settings.

The Ars article has instructions and a link on how to deactivate it. Allegedly your data will be discarded within 30 days.

Myself, I stopped using Dropbox a couple of years ago when they capped the number of devices that could link to an account to three. While I liked the ease with which I could share links to a file, I didn't find it worth paying for when there were free options that also had file sharing.

An amusing thing about comments in the Ars article: the number of people closing their Dropbox accounts over this!

https://arstechnica.com/information-technology/2023/12/dropbox-spooks-users-by-sending-data-to-openai-for-ai-search-features/
thewayne: (Default)
This is REALLY something!

For those of you not up on your Roman history: Herculaneum was a city near Pompeii that was likewise destroyed when the volcano Vesuvius erupted in 79 AD. A villa in Herculaneum was discovered to have several relatively intact scrolls stored there. The problem is that 'relatively intact' means completely unreadable; if you try to unroll one, it crumbles into bits. It's basically a lump of charcoal. But it is actually a scroll!

Enter advanced scanning techniques and the very clever application of AI.

By combining multiple scanning techniques and layering them, they can pick out the pigment from the papyrus! And by adding in AI and some clever programming, a student has identified letters in the scans! From the article: "Hundreds of badly charred ancient Roman scrolls found in a Roman villa have long been believed to be unreadable, but a 21-year-old computer science student at the University of Nebraska-Lincoln has successfully read the first text hidden within one of the rolled-up scrolls using a machine learning model. The achievement snagged Luke Farritor a $40,000 First Letters prize from the Vesuvius Challenge, a collaboration between private entrepreneurs and academics offering a series of rewards for milestones in deciphering the scrolls.

A second contestant, Youssef Nader, received a smaller $10,000 First Ink prize for essentially being the second person to decipher letters in a scroll. The main prize of $700,000 will be awarded to the first person to read four or more passages from one of the scrolls by December 31, and the founders are optimistic that this goal is achievable in light of these most recent breakthroughs."


The article goes on to describe techniques used to 'unroll', scan, and analyze ancient documents, including Egyptian papyri and a Dead Sea scroll that predates 600 CE. The difference here is that the Herculaneum scrolls used a carbon-based ink, meaning it won't fluoresce the way other inks would. Carbon ink inside the carbon briquette of a scroll exposed to the pyroclastic flow of a volcano = carbon on carbon! Talk about a low-contrast situation!

"Farritor, a SpaceX summer intern, decided to train his own machine learning model on those crackle patterns, improving his model with each new pattern found. The model eventually revealed the word "πορφυρας" meaning "purple dye" or "cloths of purple," a word that rarely shows up in ancient texts. “When I saw the first image, I was shocked,” Federica Nicolardi, a papyrologist at the University of Naples in Italy who was among those who reviewed the findings, told Nature. “It was such a dream. I can actually see something from the inside of a scroll.”"

Machine learning models and Large Language Models (LLMs) are, broadly speaking, all part of the field of AI. As I've said before, AI (the broad field) is going to be a very disruptive technology. But as can be seen here, it will clearly have some very beneficial applications and will help many fields, such as archeology. These scientists will be able to revisit artifacts and look at them in ways that were unimaginable just a decade ago, much less a century ago.

https://arstechnica.com/science/2023/10/ai-helps-decipher-first-text-of-unreadable-ancient-herculaneum-scroll/
thewayne: (Default)
As of Wednesday, Altman was back at OpenAI and most of the board that fired him were gone.

Satya Nadella, CEO of Microsoft, who had reportedly hired Altman and others who had fled OpenAI to preside over and staff an AI special-projects division, was fully supportive of Altman and his Greyhound bus of followers returning to their old lodgings. MS HR heaved a collective sigh of relief that none of the paperwork was ever actually sent down.

Three of the four board members who fired Altman last week are gone. From the article, and emphasis mine: "The one OpenAI board member who is staying is Quora CEO Adam D'Angelo, who was reportedly involved in the discussions that led to Altman's return. The three who are leaving the board are OpenAI Chief Scientist Ilya Sutskever, entrepreneur Tasha McCauley, and Helen Toner of the Georgetown Center for Security and Emerging Technology."

You will remember Ilya Sutskever from previous posts as being initially vocal about the need for Altman to go, then vocal about the need for him to return, and being number twelve, IIRC, on the letter ultimately signed by 95% of OpenAI employees threatening to quit if Altman did not return and the board resign. I also stated that his name would probably be mud. It will take some time to see what his future is in terms of AI research, but it certainly didn't take long for him to be bounced from OpenAI's board.

The article goes on to cite WSJ and Bloomberg saying that "some new details emerged about the days leading up to Altman's firing. "In the weeks leading up to his shocking ouster from OpenAI, Sam Altman was actively working to raise billions from some of the world's largest investors for a new chip venture," Bloomberg reported. Altman reportedly was traveling in the Middle East to raise money for "an AI-focused chip company" that would compete against Nvidia."

To me, this seems like a good investment. I can't name anyone outside of Nvidia who makes chips for AI outfits, though I'm sure there are a couple. Having another company designing and fabricating chips could be an excellent investment: it would break up the market, which would (theoretically) stimulate competition and create better products. It would also increase availability - as I understand it, these chips are spoken for years out. One way to get ahold of them is to wait for a current AI company to go bust and try to snap up its hardware at auction. You might be getting slightly older stuff, but at least you'd have something in hand now.

Continuing, "As Bloomberg wrote, "The board and Altman had differences of opinion on AI safety, the speed of development of the technology and the commercialization of the company, according to a person familiar with the matter. Altman's ambitions and side ventures added complexity to an already strained relationship with the board."

OpenAI has an unusual structure, with a nonprofit that controls the for-profit subsidiary OpenAI Global. A Wall Street Journal behind-the-scenes report noted that the nonprofit board's mission is to "ensur[e] the company develops AI for humanity's benefit—even if that means wiping out its investors."

"According to people familiar with the board's thinking, members had grown so untrusting of Altman that they felt it necessary to double-check nearly everything he told them," the WSJ report said. The sources said it wasn't a single incident that led to the firing, "but a consistent, slow erosion of trust over time that made them increasingly uneasy," the WSJ article said. "Also complicating matters were Altman's mounting list of outside AI-related ventures, which raised questions for the board about how OpenAI's technology or intellectual property could be used."


Apparently the board was fine with OpenAI collapsing? Were that to happen, they would have very limited control over how their intellectual property got sold on. Would it be a better situation if Elon Musk were to sweep in and scoop it up?

At least the drama between Altman and the board is over. 75% of the board is gone; Altman isn't. More board members are going to be added, presumably to make the board less reactionary and more stable.


I haven't spoken a lot of my views of AI. They're not that complicated. AI is a disruptive technology. Yes, jobs are going to be lost, this is already proving true. How many more? Hard to say. I think it's too soon to say how much of a boon or threat this will be to people. The advent of the automobile was equally disruptive, it crushed the buggy and buggy whip makers. Same thing is going to happen here over time. Some jobs will be eliminated, other jobs will be created.

Will AI take over the world and quash humans entirely? There's an old saying, a computer's attention span is only as long as its power cord. AI requires huge computers/data centers to run. As long as we're not stupid enough to break the air gap in weapons systems and let AI completely run military systems, a series of bomber strikes will take out AI with little difficulty. The chips that run ChatGPT are leagues beyond what is in your PC or phone, it can't scurry off and hide, it needs huge data centers.

I'm not terribly worried about it. I've worked with ChatGPT 3.5, the free version, a fair amount. It's a useful assistant, but it won't replace people across the board.

https://arstechnica.com/tech-policy/2023/11/sam-altman-wins-power-struggle-returns-to-openai-with-new-board/
thewayne: (Default)
Over SEVEN HUNDRED of the 770 employees have signed it!

And that was as of Monday morning!

Saturday morning, the COO sent out a memo saying that Altman was not fired for any sort of malfeasance. Other news reports have said that he was fired for not being 'forthcoming' with the board, and nothing more specific than that. The board doesn't like how you speak? *POOF*! Gone.

Potentially along with greater than 90% of your employees.

Nice.

I foresee Microsoft expanding network capacity so that a number of new hires can work from home while new office space is acquired. Fortunately there's a lot available in San Francisco right now. And it's possible that the offices of a theoretically soon-to-be-former artificial intelligence startup might become available in the not-too-distant future, also.

I'm sure OpenAI will be fine. After all, ChatGPT can generate code. Let it generate ChatGPT 5. Who needs employees. They'll save a bundle on payroll and buying snacks.
thewayne: (Default)
It was obvious the fallout from Friday's surprise sacking of Sam Altman wasn't remotely done, the big question was where he was going to land. I didn't think he was likely to begin a new startup because the length of time for one to become productive and competitive is too long, and the 'making a new AI chip' also has too long a payoff. Much more likely for him to land with another AI company.

And it turns out said AI company is Microsoft, a heavy investor in his former gig, OpenAI.

Altman will be heading up a "new advanced AI research team". And basically MS extended an open invitation for anyone from OpenAI to join Sam in this new division - more on this below. It seems to me that this will invite lawsuits from OpenAI about poaching talent, but those are kind of doomed to fail. OpenAI has absolutely destroyed its market value by firing Altman; now might be a good time to short their stock, as it's going to plummet.

https://techcrunch.com/2023/11/20/openai-co-founders-sam-altman-and-greg-brockman-to-join-microsoft/


Over 600 employees signed an open letter to the OpenAI board saying that if the ENTIRE board doesn't resign over the firing of Altman, THEY would resign. With Microsoft having picked up Altman, along with his co-founding buddy and perhaps everyone who resigned from OpenAI along with him, it looks like a pretty easy move for most people.

Over the weekend, "dozens" of OpenAI employees tweeted "OpenAI is nothing without its people". I look at it this way: OpenAI will be able to save a lot of money by right-sizing their office space needs and selling a lot of empty desks!

Now, here's the kicker:
“The process through which you terminated Sam Altman and removed Greg Brockman from the board has jeopardized all of this work and undermined our mission and company,” the letter reads. “Your conduct has made it clear you did not have the competence to oversee OpenAI.”

Remarkably, the letter’s signees include Ilya Sutskever, the company’s chief scientist and a member of its board, who has been blamed for coordinating the boardroom coup against Altman in the first place.

Shortly before the letter was released, Sutskever posted on X: “I deeply regret my participation in the board’s actions. I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company.”
(emphasis mine)

It would appear that Sutskever seems to be having some second thoughts at seeing the potential looming collapse of OpenAI due to his and the board's actions.

Some more OpenAI follies: the board appointed Mira Murati, OpenAI’s chief technology officer ... as interim CEO. Then, after Altman et al. were announced as joining Microsoft, "OpenAI’s board chose to remove Murati and appoint another interim CEO, Emmett Shear, the former CEO of Twitch, the video game streaming site."

Let's take a moment to process this. You fire the co-founder of the most recognizable AI company in the world. You replace him with your CTO as interim CEO. Okay, I can dig that; at least Mira is probably a technologist, it kinda goes with the title. And Mira probably hasn't even had a chance to move into the new office before being yanked for THE FORMER CEO OF A VIDEO GAME STREAMING SITE?! As they say on Sesame Street (or used to, I haven't seen it in eons), 'One of these things is not like the other, one of these things just doesn't belong.'

I can just picture the board room discussion. 'Yeah, I know this dude who's looking for work right now, he'd be just perfect! He was the boss of this big rockin' company!'

Shear, according to his Wikipedia page, has zero background in data science or AI. He's a Yale grad whose career has been in streaming video, not machine learning.

Let's look at one last thing. Let's suppose those "over 600" OpenAI employees walk over to Microsoft.

OpenAI has 770 employees.

This is going to be a tremendous boon to Microsoft's AI development. Microsoft also has ties to chip makers, so Sam can pursue both sides.

https://www.wired.com/story/openai-staff-walk-protest-sam-altman/
thewayne: (Default)
Friday there was some seismic activity at OpenAI: the Board staged a coup and ousted co-founder and CEO Sam Altman! Departing with him were the president of the company and three top scientists! Other departures are expected.

OpenAI has a curious corporate structure. Overall, the company is a non-profit doing research into artificial intelligence. It controls a for-profit company that sells access to ChatGPT 4 and offers free access to ChatGPT 3.5. Microsoft is a large minority investor in the for-profit arm; its stock dipped in trading after the event and has since recovered. Microsoft was also blindsided by what happened.

https://arstechnica.com/information-technology/2023/11/report-sutskever-led-board-coup-at-openai-that-ousted-altman-over-ai-safety-concerns/


The Verge reports that investors are pushing for Altman to return to OpenAI, but that he is ambivalent on the issue and may well have a new AI startup appearing on the scene on Monday. He has expressed an interest in designing new AI computer chips to speed up training on the giant data sets that AI requires. Guaranteed he'll have no problem finding investors or AI/data scientists willing to follow him.

The problem with designing new chips for AI, of course, is that it is a multi-year endeavor. While the institutional investors backing him would understand this, a lot of the general public would not.

From the Verge article: "OpenAI’s current board consists of chief scientist Ilya Sutskever, Quora CEO Adam D’Angelo, former GeoSim Systems CEO Tasha McCauley, and Helen Toner, the director of strategy at Georgetown’s Center for Security and Emerging Technology. Unlike traditional companies, the board isn’t tasked with maximizing shareholder value, and none of them hold equity in OpenAI. Instead, their stated mission is to ensure the creation of “broadly beneficial” artificial general intelligence, or AGI.

Sutskever, who also co-founded OpenAI and leads its researchers, was instrumental in the ousting of Altman this week, according to multiple sources. His role in the coup suggests a power struggle between the research and product sides of the company, the sources say."


The article also mentions that ChatGPT 5 is in development.

I think one thing will come of this: Sutskever's name is going to be mud in the long term. Altman may have left at some point anyway, but it sounds like Sutskever has pretty much caused a top-talent exodus, and that's going to do some really serious harm to the company.

https://www.theverge.com/2023/11/18/23967199/breaking-openai-board-in-discussions-with-sam-altman-to-return-as-ceo

https://slashdot.org/story/23/11/19/0055242/openai-investors-plot-last-minute-push-with-microsoft-to-reinstate-sam-altman-as-ceo
thewayne: (Default)
You may remember a few months ago I posted a link to an article about a new AI supercomputer that consumed an insane amount of electricity, enough to power something on the order of 3,000 to 30,000 houses?

As you may suspect, consuming that much electricity requires A LOT of cooling. OpenAI has a data center that pulls water from the watershed of the Raccoon and Des Moines rivers in central Iowa. Iowa. You know, cornfields. Breadbasket of America. I'd love to know precisely how much water they are pulling.

A researcher at UC Riverside "...estimates ChatGPT gulps up 500 milliliters of water (close to what's in a 16-ounce water bottle) every time you ask it a series of between 5 to 50 prompts or questions...

Google reported a 20% growth in water use in the same period, which Ren also largely attributes to its AI work.

OpenAI and Microsoft both said they were working on improving "efficiencies" of their AI model-training."
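As a back-of-envelope check on that figure (assuming the quoted estimate of roughly 500 mL of water per batch of 5 to 50 prompts, and nothing else), the per-prompt water cost works out to somewhere between a sip and a small glass:

```python
# Back-of-envelope: estimated cooling water per ChatGPT prompt,
# given the researcher's figure of ~500 mL per 5-50 prompts.
ML_PER_BATCH = 500.0

def water_per_prompt_ml(prompts_per_batch: float) -> float:
    """Estimated milliliters of water consumed per prompt."""
    return ML_PER_BATCH / prompts_per_batch

low = water_per_prompt_ml(50)   # optimistic case
high = water_per_prompt_ml(5)   # pessimistic case
print(f"{low:.0f}-{high:.0f} mL per prompt")  # prints "10-100 mL per prompt"
```

That order-of-magnitude spread (10x) is itself a sign of how rough these public estimates are.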


https://apnews.com/article/chatgpt-gpt4-iowa-ai-water-consumption-microsoft-f551fde98083d17a7e8d904f8be822c4

https://news.slashdot.org/story/23/09/10/2033253/to-build-their-ai-tech-microsoft-and-google-are-using-a-lot-of-water
thewayne: (Default)
Interesting times!

The suit contends that OpenAI did not have permission to do a deep scan of the NYT's article database to train its system, and in doing so violated the NYT's terms of service.

From the Ars article (an Arsicle?): "Weeks after The New York Times updated its terms of service (TOS) to prohibit AI companies from scraping its articles and images to train AI models, it appears that the Times may be preparing to sue OpenAI. The result, experts speculate, could be devastating to OpenAI, including the destruction of ChatGPT's dataset and fines up to $150,000 per infringing piece of content."

and "This speculation comes a month after Sarah Silverman joined other popular authors suing OpenAI over similar concerns, seeking to protect the copyright of their books."

But here's the biggie: "NPR reported that OpenAI risks a federal judge ordering ChatGPT's entire data set to be completely rebuilt—if the Times successfully proves the company copied its content illegally and the court restricts OpenAI training models to only include explicitly authorized data. OpenAI could face huge fines for each piece of infringing content, dealing OpenAI a massive financial blow just months after The Washington Post reported that ChatGPT has begun shedding users, "shaking faith in AI revolution." Beyond that, a legal victory could trigger an avalanche of similar claims from other rights holders.

Unlike authors who appear most concerned about retaining the option to remove their books from OpenAI's training models, the Times has other concerns about AI tools like ChatGPT. NPR reported that a "top concern" is that ChatGPT could use The Times' content to become a "competitor" by "creating text that answers questions based on the original reporting and writing of the paper's staff.""


Fair Use is quite an issue. I quote news sites all the time, just like the excerpts above. I make no claim that it is my content; it is clearly delineated what is quoted from the article and what is my commentary, and I am in no way making money from this. Things are a little different when you have AI/LLM systems hoovering up all the content they can find to train on. Those system makers want to spend the least amount of money possible to train their systems because their energy costs are absolutely huge: I posted an article a month or so ago about a new supercomputer running an AI system that consumes as much power as either 3,000 or 30,000 houses (I saw both numbers). If these guys can get training data for free, they'll go for it. But authors are pushing back: if people have to buy their books to read them (excluding libraries, where people can borrow for free), why should AI companies get a free read?

If an art-generating AI wants to use my photos, I would like to be compensated! If you want to use one of my photos as desktop wallpaper or a screen saver, I'm honored. If you sell my photos for profit, then we have an issue! I've spent over four decades developing my craft and I'm pretty decent at it; I'd like some acknowledgement and compensation for it, not to have it stolen for an AI system's use, as they've been doing.

https://arstechnica.com/tech-policy/2023/08/report-potential-nyt-lawsuit-could-force-openai-to-wipe-chatgpt-and-start-over/
