It was a pleasure to hear Tristan Harris speak at AI for Good in Geneva. The replay is now available above; I would definitely recommend that anyone interested in AI watch it.
Tristan compared our "first contact with AI" (social media, or curation AI) with our "second contact with AI" (generative AI).
Citing investor Charlie Munger's maxim, "Show me the incentives and I'll show you the outcome," Tristan discussed the incentives behind curation and generative AI.
With generative AI, Tristan described the incentives as a "race to rollout" or "race to the bottom of the brain stem."
To reverse this trend and mitigate the harmful outcomes of generative AI, Tristan proposed the following:
Provable safety requirements for AI systems
Whistleblower protections for AI employees
A compute tax
Liability for harms caused being placed on AI providers
He also proposed that AI companies should:
Commit to spending at least 5% on upgrading AI governance: the "Upgrade Governance Plan"
Spend $1 million on safety for every $1 million spent on compute: the "1:1 Safety Plan"
The latter plan was put to Sam Altman during his interview with The Atlantic later that day; Altman said he didn't understand how AI companies could spend money on safety alone. I thoroughly recommend watching Tristan's session at the link below.