A group of current and former OpenAI and Google DeepMind employees published a letter yesterday, drawing attention to their concerns over secrecy and a lack of internal governance at frontier AI companies.
The letter is endorsed by AI pioneers Yoshua Bengio, Geoffrey Hinton and Stuart Russell.
The letter states: “AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm.”
However, the authors “do not think they can all be relied upon to share it voluntarily”.
The letter criticises restrictive non-disparagement agreements that former employees feel pressurised to sign, which prevent them from speaking out about cavalier attitudes to safety.
It calls on AI companies to make four commitments that would protect current and former employees from retaliation when they voice concerns.
The New York Times published an article on the letter yesterday (link in the comments, behind a paywall), including a history of concerns raised by former OpenAI employees.
Daniel Kokotajlo, a former researcher in OpenAI’s governance division, told the New York Times that in 2022 Microsoft began testing a new version of Bing in India that some at OpenAI believed contained an unreleased version of GPT-4, and that this happened without the approval of the OpenAI Safety Board. (A spokesperson for Microsoft has disputed these claims.)
Kokotajlo told the NYT: “When I signed up for OpenAI, I did not sign up for this attitude of ‘Let’s put things out into the world and see what happens and fix them afterward.’”
The NYT reports that the group has retained the lawyer and activist Lawrence Lessig, acting pro bono, who previously advised Facebook whistleblower Frances Haugen. Lessig is concerned that whistleblower protections often cover only reports of illegal activity, not more general concerns about safety.
This is an important, and worrying, development on the road towards stronger AI governance.
At the AI for Good Summit last week, Sam Altman confirmed his approach of putting advanced models into public use without fully understanding them.
I also had the pleasure of hearing Stuart Russell speak at the AI for Good Summit, where he expressed his concerns over AI safety, explainability and governance.
It’s time to add employee and whistleblower protection to the AI safety debate.