Caro Robson

Davos unpicked

19 January 2024




Speaking at Davos yesterday, Microsoft CEO Satya Nadella argued for "careful internal scrutiny" by organisations developing AI foundation models, with regulation focusing more on specific AI use cases.


Some key quotes from Nadella's discussion with WEF Founder and Executive Chairman Klaus Schwab were:


🔹 “I don't think the world will put up anymore with any of us [in the tech industry] coming up with something that has not thought through safety, trust, equity.” 


🔹 “Our investors should care about multiple stakeholders [...] because that's the only way they can get long-term returns.”


🔹 “The biggest lesson of history is… not to be so much in awe of some technology that we sort of feel that we cannot control it, we cannot use it for the betterment of our people.”


Nadella also commented on the potential for AI to support economic growth as a "new input" to productivity, and discussed recent issues at OpenAI. 


Adverse outcomes from AI were listed among the top global risks in the Forum's recently published Global Risks Report, available here: https://lnkd.in/eqe43JWc


Sam Altman, Marc Benioff and Julie Sweet also spoke yesterday, on the "Technology in a Turbulent World" panel. One aspect of Sam Altman's comments might hint at the future of AI regulation and copyright.


Altman was hopeful that OpenAI could resolve ongoing issues around the use of copyrighted material in AI outputs and for training AI models by finding ways to pay media outlets for their content.


🔹 On displaying copyrighted content in AI outputs and results, Altman said: “We would like to display content, link out to brands [...] and say, 'Here's what happened today' and then we'd like to pay for that. We'd like to drive traffic for that.”


🔹 On using copyrighted content to train AI models: “If you teach our models, I'd love to find new models for you to get paid based off the success of that...”


🔹 Altman also said the way AI models are trained is going to change: “I think what it means to train these models is going to change a lot in the next few years.”


With the New York Times' litigation over the use of its content to train ChatGPT still ongoing, Altman's remarks could foreshadow a new technological and economic solution for balancing unbiased, accurate AI models on one hand with the interests of copyright holders on the other.

AI dominated Davos this year, and this was just one aspect of Altman's panel (the attached article by Kate Whiting is a great summary), but these remarks were particularly interesting as a prediction of a possible future for AI models.


Large language models, and products like ChatGPT that use them, could look very different in just a few years' time...
