OpenAI is the technology company that has made the most noise over the past two years. But, as usually happens, the more popular it becomes and the more we learn about it, the more problems arise. The latest one is a direct consequence of the internal struggle at the end of 2023 that ended with the departure and return of Sam Altman as CEO; this time, however, the issue concerns the team charged with ensuring that a future artificial intelligence does not turn malicious.
What is happening within OpenAI? The dissolved team, known as “superalignment”, was led by Ilya Sutskever, co-founder and chief scientist of OpenAI and, by all accounts, the person who raised the alarm and led the revolt against Altman last November.
Although that first soap opera initially ended without any heads rolling, Sutskever announced last week that he was leaving OpenAI, without further explanation. Now we know the consequences of his departure. Sutskever was seen as the figure pushing to guarantee safety in AI advances, in contrast to Altman, who was more focused on technological progress with far less caution.
A conflict that stems from the schism that almost ousted Sam Altman
To better understand the current situation, it is vital to review the past conflicts that almost cost CEO Sam Altman his job and how these internal tensions have shaped the path of OpenAI.
Last November, OpenAI experienced one of the most significant crises in its history when its CEO, Sam Altman, was briefly dismissed by the board of directors.
This decision sparked a massive revolt within the organization, resulting in Altman being reinstated five days later and in Sutskever and two other directors leaving the board.
The repercussions of this episode still resonate within OpenAI. Sutskever’s departure, coming just after the unveiling of the latest ChatGPT updates, marks the exit of the main advocate of this “superalignment” team.
The team, established to anticipate and mitigate the risks associated with superintelligent artificial intelligence, was recently dismantled with little explanation, according to reports from outlets such as Wired. According to those reports, disagreements over resource allocation and research priorities were decisive factors in the decision.
Superalignment: the team tasked with preventing a science-fiction malevolent AI
OpenAI’s “superalignment” team was created with the mission of ensuring that future artificial intelligences do not become uncontrollable and dangerous to humanity.
However, despite the company’s pledge of 20% of its computational capacity, the team’s leaders, including Jan Leike, Sutskever’s deputy, expressed growing frustration over the lack of support and adequate resources.
Leike, who also resigned recently, stated on Twitter that his team had been “sailing against the wind”, struggling to obtain the resources needed to carry out its crucial research.
Resignations and their implications
The dissolution of the team and the successive resignations of key figures reflect a deep internal reorganization and, perhaps, a shift in OpenAI’s priorities.
Researchers like Leopold Aschenbrenner and Pavel Izmailov were fired for leaking information, while others, like William Saunders and Daniel Kokotajlo, left the company due to disagreements about the direction OpenAI was taking.
The enigma of Q*
Since its foundation in 2015, OpenAI has faced multiple ethical and strategic dilemmas in its mission to develop safe and useful artificial intelligence.
Elon Musk himself distanced himself from the company, which he helped found, arguing that it had lost its open character after Microsoft’s continued investments.
In recent months, doubts have also been raised about Altman’s direction and vision, as he is said to be prioritizing innovation at all costs while sidestepping any kind of safety controls on the company’s new models. It is worth recalling that when OpenAI went through its first schism a few months ago, Reuters reported that several employees were working on an AI model, called Q*, whose advances led the safety team of the time to take drastic action to assess its risks.