Anindita Nayak
Bhubaneswar, May 20
A former OpenAI leader, who resigned from the company earlier this week, has said that safety has “taken a backseat to shiny products” at the influential artificial intelligence company.
Jan Leike, who ran OpenAI’s “Super Alignment” team alongside a co-founder who also resigned this week, said in a series of posts on the social media platform X that he had joined the San Francisco-based company believing it would be the best place in the world to do AI research. “However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point,” he wrote.
An AI researcher by training, Leike said he believes there should be more focus on preparing for the next generation of AI models, including work on safety and analysis of the societal impacts of such technologies.
“Building smarter-than-human machines is an inherently dangerous endeavor and the company is shouldering an enormous responsibility on behalf of all of humanity,” he added. “OpenAI must become a safety-first AGI company,” Leike said, using the abbreviation for artificial generalintelligence, a still-hypothetical stage at which machines possess broad human-level intelligence or proficiency across a wide range of tasks.
In response, OpenAI CEO Sam Altman said he was “super appreciative” of Leike’s contributions to the company and “very sad to see him leave.” He also acknowledged that OpenAI had significant work ahead: “He’s right, we have a lot more to do; we are committed to doing it.”
Leike resigned from OpenAI following the departure of co-founder and chief scientist Ilya Sutskever, who announced his exit after almost ten years with the company. Sutskever was among the board members who voted last autumn to remove Altman from his position, a decision that was later reversed. Sutskever, who informed Altman of his firing last November, subsequently expressed regret for his role in the ouster.
Sutskever mentioned he’s starting a new project he finds important, but didn’t share more about it. Jakub Pachocki will take over as the new chief scientist.
“Pachocki is also easily one of the greatest minds of our generation,” Altman said, adding that he is “very confident that he will lead us to make rapid and safe progress towards our mission of ensuring that AGI benefits everyone.” Last week, OpenAI unveiled the newest version of its AI model, which can imitate human speech patterns and even attempt to recognize people’s emotions.