Fear of one's own creation is a classic myth found in dozens of cultures, from the Jewish golem to Frankenstein and, in a more technological register, the invention of the atomic bomb. Now Sam Altman, the head of OpenAI and the most recognizable face in artificial intelligence, also believes that no one is taking seriously enough the potential effects the technology his company is building could have on the world.
At a recent event at the Brookings Institution, just days before OpenAI presented its latest breakthrough, GPT-4o, Altman laid out his vision and his fears with striking clarity:
“The impact of AI on the economy should be a global priority concern for people,” he said, adding firmly that he fears “we are not taking it seriously enough.”
This call for reflection is not a minor one, nor is it the first time he has made it.
Altman’s perspective on artificial intelligence has always been twofold: on one hand, he acknowledges its potential to transform the economy and society in ways we are only beginning to understand; on the other, he warns about its capacity to deepen inequality and displace jobs.
The arrival of models like GPT-4, while causing some stir, has been met with relatively little public concern, a calm that Altman finds both surprising and alarming. “What concerns me the most is…”, he said, leaving the sentence hanging and clearly inviting his audience to reflect more deeply on where we are headed.
“There are jobs that are going to disappear. Period”
In an interview last year, Altman went deeper into this point. “Many people working in AI tend to say that it’s only going to be good, that it’s only going to be a support and that no one will ever be replaced,” he said. “There are jobs that are going to disappear. Period.”
Although that message may not sit well with most people, Altman does not seem especially worried about it. In his view, a world with artificial intelligence will be a better one, even if he himself does not seem entirely sure what that world will look like.
“I don’t think we want to go back,” Altman says. One area where he expects AI to have less impact is education, because he believes humans prefer dealing with human tutors. Even so, he argues that ChatGPT could give “every child a better and more customized education than the best, richest, and smartest child on Earth receives today.”
Between innovation and skepticism: Altman’s balance
The development of AI, as OpenAI has pursued it under Altman’s leadership, has been a race to create increasingly sophisticated and accessible tools. The company has focused not only on improving the accuracy and usefulness of its models but also on democratizing access to them, as shown by the rollout of GPT-4o to free users, a model that offers conversational responses and can understand audio and video.
However, Altman is not immune to criticism, nor to studies that contradict his vision. Institutions such as MIT and the UN have suggested that artificial intelligence, far from displacing jobs, could create more than it destroys. These arguments paint a less apocalyptic but equally challenging scenario, one in which adaptation and retraining in new skills will be key.
The urgency of AI ethics and Altman’s inconsistency
One point Altman has particularly emphasized is the need to develop robust ethics around AI. Authenticating AI systems and watermarking generated content are measures he considers crucial to protect the public from fraud and manipulation, especially in sensitive contexts such as democratic elections.
Even so, some question Altman’s own vision and motives. He was, let’s remember, at the center of a corporate drama over OpenAI’s direction last December, reportedly for pushing ahead with new models despite warnings about their dangers from other senior scientists in the organization.
The most prominent of those critics, Ilya Sutskever, has, as it happens, just left OpenAI for good.