16 December 2024
For anyone looking back at 2024 and wondering what on earth happened, you are not alone.
After a very busy year in the world of AI, here are my key highlights from the “year of AI hype.”
For those who prefer to watch rather than read, you can click on the video above, or in the videos section of the resources page, to watch my review of 2024…
December 2023
Perhaps an odd place to start, but to remind ourselves of where 2024 was coming from:
China's Scientific and Technological Ethics Regulation came into effect on 01 December, one of a number of AI regulations now in force in the PRC.
Indonesia's Ministry of Communication and Informatics adopted a Circular on the Ethics of Artificial Intelligence (AI) Use and Development.
Perhaps most significantly, political agreement on the text of the EU’s AI Act was finally reached.
January 2024
Italy made AI a key item on the G7’s agenda and core part of the nation’s G7 Presidency.
Davos saw Microsoft CEO Satya Nadella argue for "careful internal scrutiny" by organisations developing AI foundation models, and Sam Altman was hopeful OpenAI could resolve ongoing issues around using copyrighted material in AI outputs and for training AI models.
Adverse outcomes from AI were listed among the top global risks by the World Economic Forum in their Global Risks Report.
The first reported case of abuse of an online avatar raised questions about how the “metaverse” can be policed.
OpenAI suspended an account for breaching its rules on political campaigning, the first suspension of its kind.
On 16 January California passed AB1836, which prohibits the commercial use of digital replicas of deceased performers without first obtaining the consent of those performers’ estates.
On 17 January California passed SB942, the California AI Transparency Act, requiring businesses providing a generative AI system with over 1 million monthly visitors during a 12-month period in the state to comply with transparency obligations.
The EU Commission launched an investigation into TikTok under the Digital Services Act for potentially breaching its online content rules.
The White House announced a number of major policy developments for AI in the US, with significant implications for tech companies, following Executive Order 14110, signed in October 2023.
February
The NCA, FBI, Europol and multiple national police agencies disrupted Lockbit, a major cybercrime gang dealing in “ransomware as a service."
Flying cars, once only a concept from sci-fi, moved closer to reality at CES 2024.
Apple scrapped its plans to build electric cars to focus on Generative AI.
On 12 February, California passed AB2355, an act that requires electoral advertisements using AI-generated or substantially altered content to feature a disclosure that the material has been altered.
The EU Commission created the EU AI Office using its mandate under the AI Act.
The US Supreme Court heard arguments in Moody v. NetChoice, No. 22-277, and NetChoice v. Paxton, No. 22-555. The cases centred on whether social media platforms are permitted under the US Constitution to make editorial decisions about their content, such as removing posts or banning users.
All EU Digital Services Act provisions came into effect.
March
A number of cases involving publishers and AI companies were litigated, including The New York Times against Microsoft and OpenAI (copyrighted content, New York), Authors Guild class-action suit against OpenAI (copyrighted content, New York) and Universal Music against Anthropic (song lyrics, Tennessee).
With Apple's €1.8bn fine from the European Commission, and several major technology changes from the EU’s Digital Services and Digital Markets Acts, European Commission Executive Vice President Margrethe Vestager told the New York Times that: “This is a turning point. Self-regulation is over.”
New Mexico passed bill HB 182, mandating that advertisements with Generative AI content must include a disclaimer.
Tennessee passed the Ensuring Likeness Voice and Image Security Act (“ELVIS Act”), preventing uses of a person’s image, voice or likeness without their consent, targeting deepfakes.
UNGA Resolution on Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development, A/78/L.49 (11 March 2024) was passed. The resolution was proposed by the US and supported by China.
All EU Digital Markets Act provisions came into effect.
April
The Financial Times signed a deal with OpenAI to re-use its content for producing results and training AI models, leading to concerns over data protection and the re-use of journalistic data.
NOYB (None of Your Business) filed a complaint against OpenAI with the Austrian Data Protection Authority DSB on the grounds it cannot meet subjects’ rights of access, erasure or rectification, or guarantee the accurate processing of personal data.
Chinese smartphone brand Xiaomi “declared initial victory” in China’s crowded electric vehicle (EV) market.
On 29 April, Florida passed bill HB 919, requiring all political advertisements produced using AI to carry disclaimers.
President Biden signed a law requiring ByteDance to sell social media platform TikTok in the US, or see it banned from the country.
May
A number of prominent departures from OpenAI’s ‘Superalignment’ safety team caused concerns over the company’s commitment to developing safe and ethical AI systems.
OpenAI also published its “approach to data and AI” for public comment.
At the ITU’s AI for Good Summit, Tristan Harris argued for provable safety requirements for AI systems, whistleblower protections for AI employees and a compute tax. He also said that every dollar spent on AI power should be matched by a dollar spent on AI safety.
At the same summit, OpenAI’s Sam Altman argued that it would be impossible to distinguish between safety and performance features of any AI system.
A US congressional investigation found that BMW, Jaguar Land Rover and Volkswagen had purchased vehicle parts from a supplier in China flagged by the US for links to forced labour in Xinjiang.
Chile introduced draft AI legislation to promote AI whilst ensuring human rights.
Japan introduced draft legislation requiring numerous disclosures by AI developers and safeguards for human rights in AI development.
Colorado enacted HB24-1147, on deepfakes and elections, and SB24-205, on artificial intelligence and consumer protection.
The Council of Europe Framework Convention on artificial intelligence and human rights, democracy, and the rule of law (CETS No. 225) was adopted.
Singapore introduced its Model AI Governance Framework for Generative AI.
June
A group of current and former OpenAI and Google DeepMind employees published a letter calling for a “right to warn” for AI, drawing attention to their concerns over secrecy and a lack of internal governance at frontier AI companies. The letter was endorsed by AI pioneers Yoshua Bengio, Geoffrey Hinton and Stuart Russell.
The Pope gave a speech on AI to the G7 summit. He discussed cyberwarfare, quantum computing, chatbots, educational AI, generative AI and the need for political consensus on governance, including technical detail supported by Vatican AI Expert Rev. Paolo Benanti (a trained engineer who coined the term “algorethics”).
With the UK general election looming, the Labour Party (now in government) launched its manifesto, pledging to introduce “binding regulation on the handful of companies developing the most powerful AI models” and ban “the creation of sexually explicit deepfakes.”
UNGA Resolution on Enhancing International Cooperation on Capacity-building of Artificial Intelligence, A/78/L.86 (25 June 2024) was adopted. The resolution was proposed by China and supported by the US.
July
The Wimbledon tennis grand slam saw major involvement from AI suppliers, including IBM, prompting speculation about the future of AI in sport.
The US Supreme Court gave judgment in Moody v. NetChoice and NetChoice v. Paxton, remanding both cases to the lower courts on the basis that “neither the U.S. Courts of Appeals for the 11th Circuit nor the 5th Circuit conducted a proper analysis of the facial First Amendment challenges to the Florida and Texas laws regulating large internet platforms.” The judgment has significant implications for how the Court balances free speech against online safety.
The UK’s new government published its legislative agenda in the King’s Speech. The government shifted from its manifesto commitment to a more cautious approach to AI legislation, saying it "will seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models." The Speech also contained important bills for AI, notably the Product Safety and Metrology Bill, which has major implications for AI safety.
New Zealand's Ministry of Business, Innovation and Employment released a cabinet paper outlining its approach to AI regulation.
Taiwan gave a preview of its draft "Basic Law on Artificial Intelligence" from the National Science and Technology Commission.
China released the Shanghai Declaration on Global AI Governance, calling for global cooperation in developing AI that ensures “safety, reliability, controllability and fairness in the process, and encourage[s] leveraging AI technologies to empower the development of human society."
August
On 01 August the EU’s AI Act entered into force.
Bill Gates’ company, TerraPower, started construction of a next-generation nuclear power plant in Kemmerer, Wyoming. The plant uses liquid sodium as a cooling agent, aiming to provide a safer and more efficient alternative to traditional nuclear reactors.
Argentina's congress started debating legislation to regulate the use of AI.
The Australian Department of Industry, Science and Resources released its Voluntary AI Safety Standard.
Nigeria released its draft national AI strategy, which recognises the benefits and risks of widespread adoption of AI.
September
China’s Ministry of Commerce opened an investigation into PVH (owner of US fashion brands Tommy Hilfiger and Calvin Klein) for "boycotting Xinjiang cotton and other products without any factual basis.”
Companies signed EU AI Pact pledges.
Meta unveiled its latest AI-powered smart glasses, including augmented reality (AR) features, real-time language translation and advanced navigation assistance.
California Governor Gavin Newsom vetoed the state’s AI safety bill, SB 1047.
China released the AI Safety Governance Framework as part of its Global AI Governance Initiative.
The EU Commission began the process of drawing up the Code of Practice for General-Purpose AI.
The US Department of Commerce gave notice of proposed rulemaking under Executive Order 14110 to require AI providers to share significant information with the Department, including any sale of foundation models to foreign customers.
The United Nations Secretary-General’s High-level Advisory Body on AI released its Final Report, ‘Governing AI for Humanity’, setting out seven groundbreaking recommendations.
October
Reports from the International Energy Agency revealed that AI and cryptocurrency technologies could double their electricity consumption by 2026, reaching a level equivalent to Japan’s total electricity usage.
Tesla unveiled its cybercab, a prototype automated taxi which “offers supervised full self-driving capabilities, striking a balance between cutting-edge innovation and safety.”
Uber introduced a groundbreaking AI assistant powered by OpenAI’s GPT-4o, designed to support drivers of electric vehicles.
A federal judge blocked California’s Deepfakes Election Law from taking effect, leaving only a small proportion enforceable.
The Irish Data Protection Commission fined LinkedIn €310 million for using members’ data for behavioural analysis and targeted advertising without a proper lawful basis or full transparency to its users.
November
Donald J. Trump was elected as President of the United States for a second term, potentially marking a major shift in the country’s technology policy.
Taiwan microchip manufacturer TSMC complied with US export control requirements to cease production of advanced chips for certain Chinese customers.
OpenAI and other AI providers acknowledged that current approaches to building and training LLMs have limitations and will “soon reach their limits,” as companies look to develop new ways to train AI systems.
Amazon’s Trainium 2 chip was released, challenging Nvidia’s AI microchip dominance.
Foxconn announced substantial investment in its AI server business, on the basis of growing demand for enterprise AI solutions. Foxconn is a major supplier for Apple.
Microsoft pledged US$10 billion to develop renewable energy projects, which could produce up to 10.5 gigawatts of clean energy (more than double Greater London’s annual requirement).
The UK government’s Autumn Budget Statement promised that it “will shortly publish the Artificial Intelligence Opportunities Action Plan setting out a roadmap to capture the opportunities of AI to enhance growth and productivity and better deliver services for the public.”
Peru published a draft AI Regulation to promote artificial intelligence for economic and social development.
December
The UK government’s AI system to detect benefits fraud was found to show bias according to age, disability, marital status and nationality, following a Freedom of Information request.
A number of significant events in the growing trade war over technology between China and the US signalled challenges ahead.
On 02 December, the Biden administration added 140 Chinese companies to its restricted trade list.
China responded to the latest US restrictions by banning exports to the US of several minerals critical to chip manufacturing.
In the first week of December, China deployed the “largest fleet in decades” in the waters surrounding Taiwan. Taiwan still produces over 60% of the world's semiconductors and more than 90% of the most advanced semiconductors.
China’s State Administration for Market Regulation announced that it is investigating US chip designer Nvidia over potential violations of the conditions attached to regulatory approval of its 2019 acquisition of Israel-based Mellanox Technologies.
ByteDance failed in its appeal, with a US Federal Appeals Court upholding the law requiring the sale of TikTok’s US operations. The company must now appeal to the US Supreme Court or face a US ban on 19 January. However, the inauguration of President Trump on 20 January may prove decisive in TikTok’s bid to stay in the US market…
So that was 2024…
Having seen so much happen in the world of AI in 2024, I have no idea what 2025 can possibly bring. However, it’s going to be very interesting to find out…