Caro Robson

OpenAI Departures - Should We Be Worried?

Updated: Jul 21

Should we be worried about the departure of key figures from OpenAI’s ‘superalignment’ team?


This week saw the resignation of two prominent leaders of OpenAI’s ‘superalignment’ team, the group responsible for ensuring its AI systems remain aligned with human values and ethics.


Their departures come shortly after the resignations of other members of the company's ethics and safety teams:


  • Ilya Sutskever, co-founder, chief scientist and co-leader of the superalignment project, resigned on Tuesday.


  • Jan Leike, co-leader of the superalignment project, announced his resignation shortly after Sutskever's.


  • Daniel Kokotajlo, a safety specialist, resigned in April, saying he had lost confidence in OpenAI’s ability to “behave responsibly”. 


  • William Saunders, part of the superalignment team, left in February.


Tensions between the charitable purposes of OpenAI’s original non-profit organisation and the commercial goals of its for-profit subsidiary have been in the news since the removal and swift reinstatement of its CEO, Sam Altman, last year.


Elon Musk is also suing Altman and OpenAI, alleging that the company has departed from its founding principles.


Even with these internal tensions in the background, the departure of so many prominent figures from OpenAI’s ethics and safety teams must be cause for concern for anyone interested in the development of safe and ethical AI, particularly with the launch of GPT-4o this week.



