
AI: Britain needs YOU

Caro Robson

15 January 2025


On Monday the UK Government published its AI Opportunities Action Plan, calling on Britain to “step up” and “shape the AI revolution rather than wait to see how it shapes us.”


With a strong focus on increasing economic growth, and support from OpenAI, Microsoft, Google, Darktrace and Wayve, how might the plan affect the future of AI regulation in the UK?


Here are my eight key takeaways:


  1. Data is the “lifeblood” of AI and a “public asset”

  2. Privacy and safety must be central to any plans

  3. Regulators, not regulation, are the focus of policy proposals

  4. Assurance tools will be key

  5. Compute power will be supreme…and occasionally sovereign

  6. EU relations are important…but so are other global partnerships

  7. Private investment and AI Growth Zones will lead the way

  8. Training, recruiting and importing talent is key…especially women

 


Some background


The Plan was commissioned by the incoming Labour Government’s Secretary of State for Science, Innovation and Technology (DSIT), and prepared by Matt Clifford, co-founder and chair of Entrepreneur First and Chair of the UK’s Advanced Research and Invention Agency.


It was first announced in the Autumn Budget Statement, as a key driver for economic growth in the UK. Speaking on Channel 4 News on Monday, the Secretary of State for DSIT said the Labour Government had already spent £24bn on AI since taking office six months ago.


On Monday the Government announced that “£14 billion and 13,250 jobs [have been] committed by private leading tech firms following AI Action Plan.” The exact source of funding is not clear, but the press release includes supportive quotes from Microsoft, Darktrace, OpenAI, Google, Wayve and techUK.


When asked whether funding from big tech might weaken the protections offered by the Online Safety Act, in light of recent reductions in Meta’s fact-checking programme and Elon Musk’s comments calling for the “overthrow of the British Government,” the Prime Minister insisted that the Online Safety Act would continue to protect British citizens and that safety is a core priority of the Plan.

 

So what's the plan?


Some analysis


The Plan is quite a pithy document, so I won’t repeat its 50 recommendations here, but these are my key takeaways in more detail:

 

1.      Data is the “lifeblood” of AI and a “public asset”


  • There is a strong emphasis on government developing a “more sophisticated understanding of the value of the data it holds” (p9), which it describes as “a public asset” (p10).

  • This extends to the government collecting additional data for training AI: “[government should] strategically shape what data is collected, rather than just making data available that already exists” (Rec 8, p10).

  • The Plan advises the government to “unlock both public and private data sets” to make them available to UK startups and researchers (p9), recommending that “at least five high-impact public data sets” should be identified “rapidly” to be made available to AI researchers and innovators (Rec 7, p9).

  • It invites the government to consider offering data sets alongside compute resources as a “bundle” to companies looking to operate in the UK (Rec 10, p10), and proposes “actively incentivis[ing] and reward[ing] researchers and industry to curate and unlock private data sets” (Rec 12, p10).

  • The National Data Library, mentioned in the Autumn Budget Statement, is included in the Plan as the potential holder of data sets for AI development (Recs 7-13, pp9-12).

  • A “copyright-cleared British media asset training data set” is also proposed, to be “licensed internationally at scale,” potentially from the BBC, National Archives and other UK museums (Rec 13, p10).


Whilst the availability of high-quality training data is a major concern for many AI companies, and creating publicly-available data sets is one of the recommendations of the UN’s Global Digital Compact, the Plan's approach is concerning from a data protection perspective.


Data cannot be “owned” like a traditional asset; language suggesting that it can risks undermining fundamental legal concepts, such as human rights and intellectual property (IP) law. Perhaps the word ‘access’ would be more accurate in this regard.


Most significantly, the collection of personal data by the state is carefully regulated by human rights law (notably Art 8 ECHR), for obvious historical reasons. This is why the UK GDPR does not allow public authorities to rely on 'legitimate interests' as a lawful basis for processing carried out in the performance of their tasks.


However, the new Data Use and Access (DUA) Bill would allow the government to legislate to exempt particular data processing from the UK GDPR “to the extent that [the legislation] makes express provision to the contrary referring to this section” (proposed s183A of the UK GDPR, as inserted by Cl 105 of the DUA Bill).


With legislation that is specific, ‘foreseeable,’ necessary and proportionate to meet an important public interest, human rights concerns may be allayed. However, very clear legislation will be required to enable additional data collection for the sole purpose of training AI.


Speaking of privacy…



2.      Privacy and safety must be central to any plans


  • Recommendation 7 (identifying at least five high-value data sets to make available to developers) stresses that data-sharing must be undertaken whilst considering “public trust, national security, privacy, ethics, and data protection considerations” (pp9-10).

  • Rec 7 also suggests the government should “explore use of synthetic data generation techniques to construct privacy-preserving versions of highly sensitive data sets” (pp9-10).


Whilst there is a lack of clarity on how privacy, ethics and data protection should be considered, the reference to synthetic data is promising (though not without technical issues).


Although OpenAI CEO Sam Altman has criticised synthetic data as currently too expensive to produce, the European Data Protection Board (EDPB) included it as a potential mitigation when processing personal data for AI in its recent Opinion on data protection and AI. Nvidia launched its Cosmos World Foundation Model Platform, which generates synthetic data for training AI, on 06 January.
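For readers less familiar with the concept, the basic idea behind synthetic data can be sketched very simply: fit a statistical model to the sensitive records, then sample new, artificial records from that model rather than sharing the originals. The snippet below is a minimal illustration only (a Gaussian fit over a made-up numeric data set); real privacy-preserving pipelines, such as those the Plan envisages, involve far more sophisticated generators and formal safeguards like differential privacy.

```python
# Minimal sketch: fit a multivariate Gaussian to a toy "sensitive" data set,
# then sample synthetic records from it. Purely illustrative, not a
# privacy-preserving technique on its own.
import numpy as np

def synthesise(real: np.ndarray, n_samples: int, seed: int = 0) -> np.ndarray:
    """Sample synthetic rows from a Gaussian fitted to the real data."""
    rng = np.random.default_rng(seed)
    mean = real.mean(axis=0)          # per-attribute means
    cov = np.cov(real, rowvar=False)  # attribute covariance matrix
    return rng.multivariate_normal(mean, cov, size=n_samples)

# Toy "sensitive" data: 200 records, 3 numeric attributes.
rng = np.random.default_rng(42)
real = rng.normal(loc=[50.0, 120.0, 0.3], scale=[5.0, 15.0, 0.05], size=(200, 3))

synthetic = synthesise(real, n_samples=500)
print(synthetic.shape)  # (500, 3): new records, same attributes
```

The synthetic rows preserve the broad statistical shape of the originals (means, correlations) while containing no actual individual's record, which is the property that makes the approach attractive for opening up “privacy-preserving versions of highly sensitive data sets.”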


In its response to the plan, the government pledged to “set out further details on the National Data Library and data access policy in due course” with a deadline of Summer 2025.


 

3.      Regulators, not regulation, are the focus of policy proposals


  • Section 1.4 sets out the main regulatory proposals in the Plan.

“Well-designed and implemented regulation, alongside effective assurance tools, can fuel fast, wide and safe development and adoption of AI.”

AI Opportunities Action Plan, p13


  • The government’s Cyber Security and Resilience Bill is cited in the government's press release by the CISO of Darktrace as being key to offering “the opportunity to better safeguard data and AI infrastructure.”

  • Even so, the key focus of the Plan is not new regulation, but rather the upskilling of regulators and an increased emphasis on their “Growth Duty” (p13).

  • However, it also includes a stark warning to regulators that do not support the government’s objectives:

    • If regulators “lack the incentives to promote innovation at the scale of the government’s ambition […] government should consider more radical changes to our regulatory model for AI, for example by empowering a central body with a mandate and higher risk tolerance to promote innovation across the economy” (Rec 28, pp14-15).

    • The central body could have “powers to issue pilot sandbox licences for AI products that override sectoral regulations, taking on liability for all related risks” (Rec 28, p15).

 

This shift in approach to UK regulators would be a marked change in their independence, potentially including cross-sectoral regulators such as the Information Commissioner's Office. If the independence of the ICO were threatened, this could impact the UK’s Adequacy Decision with the EU, which allows for frictionless data transfers between the EU and UK.


Notably, the Adequacy Decision is due to expire on 28 June this year. According to the Decision, “the adequacy findings might be renewed, however, only if the UK continues to ensure an adequate level of data protection.”


The Plan argues that it is “essential to act quickly to provide clarity on how frontier models will be regulated” and that protecting the AI Safety Institute should be a “top priority” (Rec 23, p14). This follows references to regulating frontier models in the King’s Speech.


In the government response to the Plan, it pledged to “set out its approach on AI regulation” but did not give clear timelines. In relation to copyright, it launched a consultation on copyright and AI on 14 December 2024, open until 25 February 2025. However, the Plan has already attracted criticism from authors for “copyright theft.”


With the pending renewal of the EU’s Adequacy Decision, any future changes to the data protection aspects of AI regulation, and role of the ICO, should be considered very carefully.



4.      Assurance tools will be key


  • The Plan argues that “alongside investing in pro-innovation regulation,” the government should “Support the AI assurance ecosystem to increase trust and adoption by:

    • Investing significantly in the development of new assurance tools, including through an expansion to AISI’s [AI Safety Institute’s] systemic AI safety fast grants programme, to support emerging safety research and methods.

    • Building government-backed high-quality assurance tools that assess whether AI systems perform as claimed and work as intended” (Rec 29, p16).


This recommendation follows the government’s comprehensive report, Assuring a responsible future for AI, published on 06 November 2024, which set out the opportunities for the AI assurance market in the UK.


The development of AI assurance tools may be a major opportunity for AI companies, and a significant practical approach to implementing AI safety. Whether such tools would be endorsed by a form of UK quality assurance mark (akin to the EU’s CE mark) remains unclear. If such an approach were adopted, it may be a pragmatic compromise between the requirement for each 'high-risk' AI system to undergo CE marking in the EU’s AI Act, and the need to ensure safe and trustworthy AI without overburdening quality assurance bodies.


In its response, the government pledged to prioritise additional funding for the AISI and that “DSIT will also explore other options for growing the domestic AI safety market and provide a public update on this by Spring 2025.”



5.      Compute power will be supreme…and occasionally sovereign


  • Computational power (“compute”), which requires large data centres, will be key to achieving AI opportunities: but not all compute resources will be based in the UK.

  • “Sovereign AI compute” should be “owned and/or allocated by the public sector” (p7), with its resources allocated by “mission-focused programme directors” of a much-expanded AI Research Resource (AIRR) (p8).

  • “Domestic compute” should be “based within the UK but privately owned and operated”; “crowding in private and international capital is critical” (p7).

  • “International compute” should be “accessed via reciprocal agreements and partnerships with likeminded partners, to give the UK access to complementary capabilities and facilitate joint AI research in areas of shared interest” (pp7-8).


It is notable that the final category does not specify which “likeminded partners” will be considered to supply international compute. This is particularly significant as the Chancellor was recently criticised for a visit to China. (The US and China are currently involved in trade tensions over AI.)


The government’s response to these proposals was to promise that “DSIT will publish a long-term compute strategy in Spring 2025 and is committed to setting out a 10-year roadmap for compute.”



6.      EU relations are important…but so are other global partnerships


“The European High Performance Computing Joint Undertaking (EuroHPC JU) is a legal and funding entity, created in 2018 and located in Luxembourg to lead the way in European supercomputing. 
The EuroHPC JU allows the European Union and the EuroHPC JU participating countries to coordinate their efforts and pool their resources to make Europe a world leader in supercomputing.”

EuroHPC website


  • The report recommends expanding beyond the EU to “Agree international compute partnerships with like-minded countries to increase the types of compute capability available to researchers and catalyse research collaborations” (Rec 6, p9).


Whilst the new government gave strong signals of a closer relationship with the EU in its manifesto, and the Chancellor signalled a “wider reset of EU relations” as recently as 09 December, the emphasis on other international partners in the Plan has raised questions. When asked by Channel 4 News whether the Plan represented a shift away from the EU, the Prime Minister answered that “the truth is that there are different ways of expressing the national interest [outside the EU].”


With the EU’s AI Act applicable to any organisation placing AI systems on the EU market, many companies in the UK will have to adhere to its requirements. Whilst there is no equivalent of the GDPR’s adequacy regime in the Act, for practical purposes AI companies engaged with the EU will be subject to its rules.


The relationship between the partnership agreements envisaged by Rec 6 (p9), the AI Act and the GDPR will require careful consideration by the government moving forwards.

As with several other recommendations, the government’s response is that “DSIT will set out its approach to international collaborations as part of its long-term compute strategy.”


So we will have to wait and see…



7.      Private investment and AI Growth Zones will lead the way


  • A major element of the Plan is the use of private investment, such as the £14bn already pledged, with incentives for companies, including subsidised compute:

    • “We will have to make choices about when to subsidise compute and when to provide it at cost, recognising that this could form part of an attractive offer to entrepreneurs and researchers deciding where to base themselves” (Rec 3, p8).

  • Establishing AI Growth Zones (AIGZs) is a key priority; these will include a “streamlined planning approvals process” and will “accelerate the provisioning of clean power” (Rec 4, p8).

  • AIGZs could also be an opportunity to “crowd in private capital to boost our domestic compute portfolio and to build strategic partnerships with AI developers” whilst driving “local rejuvenation” of more deprived areas (Rec 4, pp8-9).


Streamlined planning permission appeals for three data centres were taken under the direct control of the Deputy Prime Minister last year: an early indication of the Government’s commitment to computing infrastructure as a route to economic growth.


In its response to the Plan, the government has pledged to “deliver the first AI Growth Zone at Culham, the headquarters of the UK Atomic Energy Authority (UKAEA), subject to the agreement of a public-private partnership that delivers benefits to the local area, the UKAEA’s fusion energy mission and the UK’s wider national AI infrastructure” by Spring 2025. It also agreed to “set out a process to identify and select further AIGZs” working with local and energy stakeholders.


The UK’s economic position is currently precarious as gilt yields rise, a matter on which the Prime Minister was pressed at the launch of the AI Plan. Private investment may not be an ideal solution to major, even critical, infrastructure projects, but the partnerships envisaged by the AIGZs could provide a much-needed boost to the UK economy, particularly if combined with an allocation of “sovereign compute” (as envisioned in Rec 1).


 

8.      Training, recruiting and importing talent is key…especially women


  • Section 1.3 looks at the requirements for skilled AI workers, including options for additional training and increased immigration by key workers.

  • “In the next five years, the UK must be prepared to train tens of thousands of additional AI professionals across the technology stack to meet expected demand and proactively increase its share of the world’s top 1,000 AI researchers” (p11).

  • “Only 22% of people working in AI and data science are women. Achieving parity would mean thousands of additional workers” (p11).

  • The government should also “Explore how the existing immigration system can be used to attract graduates from universities producing some of the world’s top AI talent” (Rec 21, p13).


It is difficult for an AI professional (who is also a woman) to argue with these recommendations. However, whether they can be successfully funded, and whether the UK can attract the “top talent” it craves, remains to be seen.


On the recommendations relating to creating training and university courses (Recs 17-20), the government’s response is to agree, but it has set deadlines for delivery of Autumn 2026 and 2027. The recommendation on immigration (Rec 21) is the only recommendation in the Plan that the government does not wholeheartedly support (“Partially agree”). Recent tensions over UK immigration may account for this.

 


So what happens now?


The Plan is certainly ambitious, and not without its potential legal pitfalls. The sections on enhanced government data collection, incentivising the production of private data sets, reforms to allow greater mining of data, and changes to UK policy around regulators are concerning.


It may be hypocritical to urge caution whilst finding delay frustrating, but it is somewhat disappointing that key provisions (including on the future shape of AI regulation) are left to longer-term, additional reviews.


However, as a starting point to launch the UK as an AI superpower, it is a great place to begin. The focus on assurance technology, synthetic datasets and an awareness of the requirement to balance capital investment with sovereign compute are welcome, and perhaps attainable, goals.


Former Prime Minister Tony Blair and former Foreign Secretary William Hague both argued in The Times on Monday for the Plan to be properly funded and resourced. (Hague followed this with a further plea for serious resource allocation in a separate piece.)


This will be the real test: can the UK government fund its ambitions (or attract enough inward investment to do so)?


For now, those of us in AI governance, ethics and technology can take heart that our skills are highly needed. As the Plan says:

“International competition for top talent is fierce. The UK must go further than existing measures and take a more proactive approach at every stage of the talent pipeline. Though ambitious, these efforts could yield large benefits for the UK if one individual founds the next DeepMind or OpenAI.”

AI Opportunities Action Plan, p12


AI professionals: Britain needs YOU!
