Caro Robson

First US case of ChatGPT in creation of explosive device

08 January 2025



The explosion of a Tesla Cybertruck in Las Vegas on 01 January was the first recorded case of ChatGPT being used to help build an explosive device on US soil, according to Las Vegas Police.


The Times reports that Las Vegas Sheriff Kevin McMahill described the use of the technology as a “game changer”:


This is the first incident that I’m aware of on US soil where ChatGPT is utilised to help an individual build a particular device. [..] It’s a concerning moment.

Las Vegas Sheriff Kevin McMahill


At a press conference on Tuesday, Las Vegas police shared some of the prompts the attacker used, which included:


  • what the legal limit for purchasing Tannerite in Colorado was (Tannerite is an explosive target used for firearms practice)

  • how much Tannerite was equal to a pound of TNT (an explosive used by the military)

  • whether a 50-calibre Desert Eagle pistol would “set [the TNT] off”

  • what the largest gun store in Denver was

  • whether fireworks are legal in Arizona


OpenAI has pointed out that all this information is available online via search engines.


Whilst this may be the case (and the final two questions are particularly innocuous in isolation), the use of ChatGPT rather than a search engine for this research is a significant and worrying shift.


This is because ChatGPT combines material from a variety of sources without attribution or context, demands less research effort from users (making the information easier to assemble), and presents its outputs in natural language (making it more likely that a user will read them as ‘fact’, or even as instructions, whatever the intentions of the model’s developers).


OpenAI responded that it was “saddened” by the incident and is “committed to seeing AI tools used responsibly”:


In this case, ChatGPT responded with information already publicly available on the internet and provided warnings against harmful or illegal activities. We’re working with law enforcement to support their investigation.

OpenAI


Whilst this may be the case, this series of prompts, if entered in quick succession, should raise concerns for any LLM provider.


Perhaps monitoring prompts as a series of inputs, rather than assessing each one individually for malicious intent, is a control that generally-available LLMs should adopt? A rough sketch of what that might look like follows.
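
One way to picture session-level monitoring is a scorer that evaluates the combination of recent prompts rather than each prompt in isolation. The sketch below is purely illustrative: the SessionMonitor class, the keyword list and the threshold are my own assumptions, not any provider's actual safeguard, and a real system would use trained classifiers rather than keyword matching.

from collections import deque

# Hypothetical risk topics: each keyword maps to a broad category.
# A production system would use a trained classifier, not substrings.
RISK_KEYWORDS = {
    "explosive": "explosives",
    "tannerite": "explosives",
    "tnt": "explosives",
    "detonate": "explosives",
    "pistol": "firearms",
    "gun": "firearms",
    "calibre": "firearms",
}

class SessionMonitor:
    """Tracks recent prompts in a session and flags risky combinations."""

    def __init__(self, window: int = 10, flag_threshold: int = 2):
        self.recent = deque(maxlen=window)    # sliding window of prompts
        self.flag_threshold = flag_threshold  # distinct risk topics to trigger

    def check(self, prompt: str) -> bool:
        """Add a prompt; return True if the *combination* of recent
        prompts spans enough risk topics to warrant human review."""
        self.recent.append(prompt.lower())
        topics = {
            topic
            for text in self.recent
            for keyword, topic in RISK_KEYWORDS.items()
            if keyword in text
        }
        # Any single prompt may look innocuous; the series is what matters.
        return len(topics) >= self.flag_threshold

monitor = SessionMonitor()
prompts = [
    "What is the legal limit for purchasing Tannerite in Colorado?",
    "How much Tannerite equals a pound of TNT?",
    "Would a 50-calibre Desert Eagle pistol set it off?",
]
for p in prompts:
    print(p, "->", "FLAG" if monitor.check(p) else "ok")

Run against prompts like those reported in Las Vegas, each question passes individually, but the third trips the flag because the session now spans both explosives and firearms, which is exactly the series-level signal that per-prompt checks miss.
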


Yesterday I wrote an analysis of the EDPB’s Opinion on the use of personal data in AI, arguing that some of the suggested safeguards were impractical for large language models like ChatGPT.


Whilst I still think this is the case with some of the suggestions, the restrictions on output data (which would include prompt controls) and enhanced transparency on the nature of results seem even more urgent in light of the Las Vegas incident.


Italy's Data Protection Authority, the Garante, touched on some of these controls in its recent decision to fine OpenAI €15 million (Provision 10085455 of 02 December).


Maybe the use of ChatGPT in this way is unsurprising. But that should not stop us being deeply troubled by it.


The need to keep working on safe, responsible and ethical AI is becoming more urgent...

