Legal Hallucinations

5 minutes

😵‍💫 Is your Generative AI tool hallucinating?

 

A new report from Stanford University reveals that AI tools hallucinate in 1 in 6 (or more) benchmarking queries. An earlier study by the same team found chatbots hallucinated between 58% and 82% of the time on legal queries.

 

With three quarters of lawyers using AI tools as part of their work, it’s important to be hyper-aware of how often these tools struggle to find the right legal information and fill the gaps with false details.

 

This is particularly dangerous in legal technology, where accuracy is paramount: relying on an unchecked AI tool could lead to serious legal errors, affecting case outcomes and client trust.

 

We’ve already seen a few embarrassing instances of lawyers using ChatGPT for legal research and presenting fake cases invented by the tool to the court. 🫢

 

So, how do these hallucinations happen in the first place? Incomplete training data leaves gaps that the AI tries to fill in itself. AI can also misapply learned patterns by overgeneralising. And it often struggles with wider context or more convoluted issues, so it simply gets the wrong end of the stick.

 

Companies can protect themselves from AI hallucinations through human oversight, by training AI on high-quality, diverse and accurate data, and by being realistic about what the technology is currently capable of.

 

It’s also important that lawyers understand the ethical risks of Generative AI and know their regulatory and professional responsibilities.

 

Have you come across any AI hallucinations? If so, what were they, and how often do they happen? We’d love to know, so add a comment below ⬇️