Sam Altman has returned to his role as CEO of OpenAI, after a tumultuous week for the influential tech company. His abrupt removal by the board last week was met with confusion and frustration by OpenAI employees. Now, he is back at the helm, working with a new board that may be able to offer more stable leadership.
Infighting among OpenAI’s Core Leadership
The dramatic series of events began on Nov. 17, when OpenAI’s board announced Altman’s removal as CEO. The board gave no specific reasons, stating only that Altman “was not consistently candid in his communications with the board.”
Following Altman's ousting, OpenAI's Chief Operating Officer, Brad Lightcap, addressed the company's employees in an internal memo. He showed surprise at Altman's removal and attributed the decision to a breakdown in communication.
The board swiftly designated Mira Murati, then serving as the chief technology officer of OpenAI, as the interim CEO. However, by the following Sunday, the board opted for a different interim leader, selecting former Twitch CEO Emmett Shear.
Altman reportedly spent the weekend negotiating a return to OpenAI. But in another twist, Microsoft announced on Sunday that it was hiring Altman to head a new artificial intelligence unit.
By the following Monday, the vast majority of OpenAI's roughly 750 employees had expressed their discontent in an open letter threatening mass resignation unless Altman was reinstated as CEO. Greg Brockman, the company's president, had already resigned in solidarity shortly after Altman's removal.
A Swift Reversal
Amid the overwhelming support for Altman’s reinstatement, OpenAI reversed its decision and re-hired him as CEO. The company also announced a restructuring of its governing board.
The reconstituted board comprised three members: Bret Taylor, the former co-CEO of Salesforce; Larry Summers, a former Treasury Secretary and Harvard University president; and Adam D’Angelo, the CEO of Quora and an early Facebook employee.
D’Angelo was the only holdover from the previous board; the other members, including Tasha McCauley, Ilya Sutskever, and Helen Toner, departed.
Microsoft, a substantial investor in OpenAI, expressed encouragement for the changes. CEO Satya Nadella called them an “essential step on a path to more stable, well-informed, and effective governance.”
A Difference of Opinions
Some sources have pointed to long-standing discord within OpenAI as the root of these upheavals. According to The New York Times, there have been internal disagreements over the responsible development and release of AI.
The debate includes questions about the pace at which AI technology should be introduced, balancing the desire to advance with the need to retain human control. Altman is reportedly in favor of rapid progress, while others on the board urge caution.
Many voices have called for a gradual development of AI with more comprehensive regulations and more stringent safety measures in place. Altman's reinstatement as CEO suggests that the company is unlikely to put on the brakes anytime soon.
Calls for Greater Regulation of AI
Some experts in the field of artificial intelligence view the recent upheaval as evidence of the need to establish stronger regulations within the AI domain. Currently, the sector is dominated by a limited number of individuals, leading to internal conflicts and disputes.
Rayid Ghani, a professor of machine learning and public policy at Carnegie Mellon University, commented on the issue in an interview with The Guardian. “There are no standards, no professional body, no certifications. Everybody figures out their own internal norms. The AI that gets built relies on a handful of people who built it, and the impact of these handfuls of people is disproportionate.”
Ghani said that regulations would shield consumers and public-facing products from infighting among industry leaders. In a more mature industry, individual AI creators would carry less weight, and their disagreements would have less impact on the public.
Expressing his concern about the concentration of power in the hands of one company, Ghani stated, “It’s too risky to rely on one person to be the spokesperson for AI, especially if that person is responsible for building. It shouldn’t be self-regulated.”
Unlike consumer technologies such as iPhones or Android devices, whose updates come with transparent lists of changes and fixes, AI products such as ChatGPT offer little public testing or update transparency. No regulations currently require AI builders to demonstrate the safety of their products.
The call for more control over AI production was echoed by Paul Barrett, the deputy director of the Center for Business and Human Rights at New York University. Pointing out the volatility in this new branch of technology, he suggested that big decisions about powerful new tech should not be made solely by corporations.
“Huge amounts of money – and huge egos – are in play. Judgments about when unpredictable AI systems are safe to be released to the public should not be governed by these factors,” warned Barrett.
The incident at OpenAI highlights the volatility and immaturity of the AI industry, and underscores the need for regulation and transparency to ensure the responsible development and deployment of AI technologies. AI today is a sector that promises enormous potential gains but also carries enormous risks, and broader collaboration will be needed to ensure the benefits outweigh the harms. As OpenAI navigates its internal challenges, the wider business world may need to work toward a more stable and accountable AI landscape.
This article was originally published in Certainty News: www.certaintynews.com/article/inside-sam-altmans-return-to-openai