
What does it mean when AI ‘hallucinates’?

Written by Juliet Bennett Rylah | May 9, 2023

In 1943, scientist Albert Hofmann accidentally ingested a substance he’d been developing as a respiratory and circulatory stimulant.

Shortly thereafter, he experienced “an uninterrupted stream of fantastic pictures, extraordinary shapes with intense, kaleidoscopic play of colors.” Unbeknownst to Hofmann, the compound he’d made, LSD, was a powerful hallucinogen.

But what happens when AI hallucinates? Well, it’s kind of like talking to a drunk guy at a bar: he’s wrong, but very confident about it.

It’s not just a ‘lie’

AI could potentially “read” something incorrect on the internet and repeat it, but a hallucination isn’t regurgitated bullshit. It’s fabricated information that isn’t grounded in the model’s training data (i.e., the texts, images, etc. it was fed).

For example, Google’s Bard chatbot told Wall Street Journal columnist Ben Zimmer that Hans Jakobsen — a linguist who never existed — coined the term “argumentative diphthongization,” a phrase Zimmer made up.

More troubling: An Australian politician is considering suing OpenAI after ChatGPT claimed he’d served time in prison for bribery, while a professor said it fabricated a Washington Post article accusing him of sexual harassment.

Why does this happen?

We wish we knew! Google CEO Sundar Pichai told “60 Minutes” that all models — including Bard — have this problem, but no one’s been able to solve or fully understand it.

The thing about ChatGPT, Bard, Bing’s chatbot, and other tools built on language models is that they don’t really know anything. They use patterns learned from their training data to predict the next word, over and over, and sometimes the text they string together is simply wrong.
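To make that “just predicting text” idea concrete, here’s a toy Python sketch. It’s purely our illustration, not anything resembling how Bard or ChatGPT actually work under the hood, and names like training_text and generate are made up for the example. It learns which words tend to follow which in a tiny scrap of text, then chains them together with zero regard for whether the result is true.

```python
import random
from collections import defaultdict

# Toy illustration only: real chatbots use huge neural networks, not a
# word-pair (bigram) model like this. The point is the same, though: the
# model picks each next word from patterns in its training text, with no
# notion of whether the resulting sentence is true.

training_text = (
    "the linguist coined the term in 1943 "
    "the chemist coined the term in 1938 "
    "the linguist wrote the paper in 1943"
)

# Count which words follow which (e.g., "the" -> ["linguist", "chemist", ...]).
next_words = defaultdict(list)
words = training_text.split()
for current, following in zip(words, words[1:]):
    next_words[current].append(following)

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a plausible next word."""
    output = [start]
    for _ in range(length):
        candidates = next_words.get(output[-1])
        if not candidates:
            break
        output.append(random.choice(candidates))
    return " ".join(output)

print(generate("the"))
# Possible output: "the chemist coined the paper in 1938". Fluent and
# confident-looking, but stitched together from unrelated fragments.
```

The model never stored a fact like “who coined what”; it only stored which words tend to sit next to each other, so a statement that was never in its training text can still come out sounding perfectly assured.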

Is ‘hallucination’ even the right term for this?

A lot of people are using it, but some argue it falsely humanizes machines. Linguistics professor Emily Bender has suggested alternative terms for the phenomenon, such as “synthesized ungrounded text” or perhaps the simpler and snappier “made shit up.”