As of Wednesday, Altman was back at OpenAI and most of the board that fired him were gone.
Satya Nadella, CEO of Microsoft, who had allegedly hired Altman and others who had fled OpenAI to preside over and staff an AI special-projects division, fully supported the return of Altman and his Greyhound Bus of followers to their old lodgings. MS HR heaved a collective sigh of relief that none of the paperwork was ever actually sent down.
Three of the four board members who fired Altman last week are gone. From the article, and emphasis mine: "The one OpenAI board member who is staying is Quora CEO Adam D'Angelo, who was reportedly involved in the discussions that led to Altman's return. The three who are leaving the board are OpenAI Chief Scientist Ilya Sutskever, entrepreneur Tasha McCauley, and Helen Toner of the Georgetown Center for Security and Emerging Technology."
You will remember Ilya Sutskever from previous posts as being initially vocal about the need for Altman to go, then vocal about the need for him to return, and being number twelve, IIRC, on the letter ultimately signed by 95% of OpenAI employees threatening to quit if Altman did not return and the board did not resign. I also stated that his name will probably be mud. It will take some time to see what his future is in terms of AI research, but it certainly didn't take long for him to be bounced from OpenAI.
The article goes on to cite the WSJ and Bloomberg, saying that some new details emerged about the days leading up to Altman's firing: "In the weeks leading up to his shocking ouster from OpenAI, Sam Altman was actively working to raise billions from some of the world's largest investors for a new chip venture," Bloomberg reported. Altman reportedly was traveling in the Middle East to raise money for "an AI-focused chip company" that would compete against Nvidia.
To me, this seems like a good investment. I can't name anyone outside of Nvidia who makes chips for AI outfits, though I'm sure there are a couple. Having another company designing and fabricating chips could be an excellent investment; it would break up the market, which would (theoretically) stimulate competition and create better products. It would also increase availability: as I understand it, these chips are spoken for several years out. One way to get ahold of them is to wait for a current AI company to go bust and try to snap up its hardware at auction. You might be getting slightly older stuff, but at least you have something in hand now.
Continuing, as Bloomberg wrote: "The board and Altman had differences of opinion on AI safety, the speed of development of the technology and the commercialization of the company, according to a person familiar with the matter. Altman's ambitions and side ventures added complexity to an already strained relationship with the board."
OpenAI has an unusual structure, with a nonprofit that controls the for-profit subsidiary OpenAI Global. A Wall Street Journal behind-the-scenes report noted that the nonprofit board's mission is to "ensur[e] the company develops AI for humanity's benefit—even if that means wiping out its investors."
"According to people familiar with the board's thinking, members had grown so untrusting of Altman that they felt it necessary to double-check nearly everything he told them," the WSJ report said. The sources said it wasn't a single incident that led to the firing, "but a consistent, slow erosion of trust over time that made them increasingly uneasy," the WSJ article said. "Also complicating matters were Altman's mounting list of outside AI-related ventures, which raised questions for the board about how OpenAI's technology or intellectual property could be used."
Apparently the board was fine with OpenAI collapsing? Were that to happen, they would have very limited control over how their intellectual property got sold forward. Did they think it would be a better situation if Elon Musk were to sweep in and scoop it up?
At least the drama between Altman and the board is over. 75% of the board is gone; Altman isn't. More board members are going to be added, presumably to make the board less reactionary and more stable.
I haven't said much about my views on AI. They're not that complicated. AI is a disruptive technology. Yes, jobs are going to be lost; this is already proving true. How many more? Hard to say. I think it's too soon to say how much of a boon or threat this will be to people. The advent of the automobile was equally disruptive: it crushed the buggy and buggy-whip makers. The same thing is going to happen here over time. Some jobs will be eliminated, other jobs will be created.
Will AI take over the world and quash humans entirely? There's an old saying: a computer's attention span is only as long as its power cord. AI requires huge computers and data centers to run. As long as we're not stupid enough to break the air gap in weapons systems and let AI completely run military systems, a series of bomber strikes will take out AI with little difficulty. The chips that run ChatGPT are leagues beyond what is in your PC or phone; it can't scurry off and hide, it needs huge data centers.
I'm not terribly worried about it. I've worked with ChatGPT 3.5, the free version, a fair amount. It's a useful assistant, but it won't replace people across the board.
https://arstechnica.com/tech-policy/2023/11/sam-altman-wins-power-struggle-returns-to-openai-with-new-board/
no subject
Date: 2023-11-25 09:22 pm (UTC)
"Altman and his Greyhound Bus of followers"
LOL!
What an AI saga, my gawd.
no subject
Date: 2023-11-26 01:14 am (UTC)
Trollies took a hit, as well.
no subject
Date: 2023-11-26 05:25 am (UTC)
"and most of the board that fired him were gone."
LOL.............
Hugs, Jon
no subject
Date: 2023-11-26 09:39 pm (UTC)
This, as I saw it, was the heart of the matter. Altman believes that it's important to develop the technology as rapidly as possible, with which I agree. Even sensibly cautious efforts to assure that it's developed safely simply waste time, and leave us no safer than before, whilst government interference in pursuit of 'equitable results' is actively harmful. We're going to have to learn to accept risk again as a society.
The advent of the automobile was equally disruptive, it crushed the buggy and buggy whip makers.
While buggy whip makers famously suffered, many of the buggy makers, at least temporarily, transitioned to making motor cars. Economy of scale is what killed them. Making a few hundred buggies a year can work economically. Making a few hundred automobiles a year doesn't come close to paying off the investment needed to do so. It's worth noting that Studebaker alone managed to survive 60 years or so making cars. They were the absolute giants of the buggy and wagon world, holding the US Army contract for horse-drawn wagons.
Many of the buggy makers produced a small number of motorcars, then folded. In the town where I grew up was an old factory that everyone referred to as "the Wagon Works", even though they hadn't made wagons for 60 years or so. They produced a single example of a car, a runabout, then gave it up and made wooden furniture instead. They lasted until the 70s doing that, when they got bought up by a larger company.
Will AI take over the world and quash humans entirely?
It will, I think, probably sooner rather than later, assuming that true, self-aware AI (rather than learning models) is actually possible. We ought, I think, to view them as a continuation of humanity by other means, rather than a competitor. If we're lucky, they may find that we contribute something that they can't, and take us along as partners. Perhaps we'll be kept as pets.
There's an excellent, recent Rick Griffin novel which deals with this theme, Ani-Droids. It's one of the better pieces of SF that I've read in some years.
Also, for a darker, more paranoid view, there's "Colossus", a thoughtful old SF movie from about 1970 or so. I hesitate to say too much in case you haven't seen it, but it really is worth watching.
no subject
Date: 2023-11-26 11:09 pm (UTC)
You're talking about AGI, Artificial General Intelligence. I don't think we're anywhere near that point yet, but things accelerate. I laughed when earlier this year people were talking about putting a pause on AI development. That was never going to happen, there were too many commercial interests at work on it. If a pause had gone into effect, it's guaranteed a number of operations would have just gone dark and silent regarding their development.
no subject
Date: 2023-11-27 01:00 am (UTC)
Or less destructively, technicians from the electrical and natural gas utilities with a toolbox and the municipal as-built infrastructure schematics.
Natural gas utilities matter because datacentres now use natural-gas backup generators, after Katrina demonstrated why relying on stored diesel is a bad idea.
no subject
Date: 2023-11-27 01:09 am (UTC)
Yep. Take out the power, take out the computers. Even simpler, disconnect them from the internet. Doesn't matter if the system is still up if it can't talk to anything.
no subject
Date: 2023-11-29 06:59 am (UTC)
We might learn more of the true shenanigans in forthcoming weeks as reporters continue digging. I think your take is reasonably close to the truth.