Caro Robson

Are market forces beginning to regulate AI?

20 December 2023



Amid recent developments in AI regulation, two news stories struck me today as signs that global market forces might be starting to curb the dominance of US AI giants and reduce the amount of data used by industry models.


LLMs' difficulties with non-English languages and with financial data such as SEC filings are driving efforts to build more bespoke models, trained on more controlled data sets, to handle these cultural and technical challenges.


The New York Times reports that many South Korean AI firms are focused on training models using specialised data sets to reflect non-English-speaking culture and languages, in particular for Korean, Vietnamese and Malaysian users. Models for customers in Brazil, Saudi Arabia and the Philippines are also being developed, alongside industry- and sector-specific models.


🔹 Rather than using the entire internet to train LLMs, the models use bespoke data to understand more local nuances, such as Korean idioms and slang, which US-based models often struggle with


🔹 South Korean telecommunications company KT is reportedly working with Thai telecoms firm Jasmine Group on a Thai language LLM


🔹 Kakao is developing generative AI for Korean, English, Japanese, Vietnamese and Malaysian languages


In the US, CNBC reports that LLM testing start-up Patronus AI found four of the biggest LLMs (OpenAI's GPT-4 and GPT-4-Turbo, Anthropic's Claude 2 and Meta's Llama 2) performed badly in tests designed to see whether they could answer questions based on SEC filings, a crucial part of research in the finance industry.


🔹 The LLMs tested failed Patronus's “closed book” test, with most refusing to answer questions without being given the exact SEC document required


🔹 For example, GPT-4 Turbo refused to answer 88% of questions without the original SEC document and only gave 14 correct answers (out of 150 questions)


🔹 When given the exact SEC text, GPT-4 Turbo answered 85% of questions correctly, but still gave incorrect responses 15% of the time


Patronus AI co-founder Anand Kannappan told CNBC: “That type of performance rate is just absolutely unacceptable. […] It has to be much much higher for it to really work in an automated and production-ready way.”


This year, CNBC reported that Bloomberg LP has developed its own financial data model and that JP Morgan is developing an AI-powered automated investing tool.


As global legislators grapple with modern AI regulation, might old-fashioned market forces be part of the solution to reducing market dominance and addressing concerns around the training data used?
