When A.I. Chatbots Hallucinate

Karen Weise and Cade Metz

Karen Weise reported this story from Seattle and Cade Metz reported from San Francisco.

When did The New York Times first report on “artificial intelligence”?

According to ChatGPT, it was July 10, 1956, in an article titled “Machines Will Be Capable of Learning, Solving Problems, Scientists Predict” about a seminal conference at Dartmouth College. The chatbot added:

CHATGPT

This conference is now considered to be the birthplace of artificial intelligence as a field of study, and the article mentions the term “artificial intelligence” several times in the context of the discussions and presentations that took place at the conference.

The 1956 conference was real. The article was not. ChatGPT simply made it up. ChatGPT doesn't just get things wrong at times; it can fabricate information outright. Names and dates. Medical explanations. The plots of books. Internet addresses. Even historical events that never happened.

When ChatGPT was recently asked how James Joyce and Vladimir Lenin first met — there is no evidence they ever did — this is how it responded:

CHATGPT

James Joyce and Vladimir Lenin met in Zurich, Switzerland in 1916. Both men were living in exile in Zurich during World War I. Joyce was a writer and Lenin was a revolutionary. They met at the Cafe Odéon, a popular gathering place for artists and intellectuals in Zurich.

The response above has been truncated for length.

Fabrications like these are common. Figuring out why chatbots make things up and how to solve the problem has become one of the most pressing issues facing researchers as the tech industry races toward the development of new A.I. systems.

Chatbots like ChatGPT are used by hundreds of millions of people for an increasingly wide array of tasks, and they are being built into email services, online tutoring tools and search engines. They could change the way people interact with information. But there is no way of ensuring that these systems produce information that is accurate.

The technology, called generative A.I., relies on a complex algorithm that analyzes the way humans put words together on the internet. It does not decide what is true and what is not. That uncertainty has raised concerns about the reliability of this new kind of artificial intelligence and calls into question how useful it can be until the issue is solved or controlled.
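How that can happen is easiest to see in miniature. The sketch below is a toy illustration, not anything from the reporting: the corpus, names and code are invented for this purpose. It builds the simplest possible word-prediction model, a bigram counter, and shows it fluently extending a prompt with statistically plausible text that has no grounding in fact. Real chatbots are enormously more sophisticated, but the basic mechanism described above, predicting plausible continuations rather than checking truth, is the same in spirit.

    import random
    from collections import Counter, defaultdict

    # A made-up toy corpus standing in for "the way humans put words together
    # on the internet." The facts in it do not matter to the model; only the
    # word patterns do.
    corpus = (
        "lenin lived in zurich . joyce lived in zurich . "
        "joyce met pound in paris . lenin met trotsky in london ."
    ).split()

    # Count which word tends to follow which: a bigram model, the simplest
    # possible stand-in for what chatbots do at vastly greater scale.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def continue_text(start, length=6):
        """Extend `start` by repeatedly sampling a plausible next word."""
        words = start.split()
        for _ in range(length):
            options = follows.get(words[-1])
            if not options:
                break
            # Sample in proportion to how often each word followed in the corpus.
            choices, weights = zip(*options.items())
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    # The model happily continues "joyce met" with whatever is statistically
    # plausible, for example "joyce met trotsky in zurich", with no notion of
    # whether such a meeting ever happened.
    print(continue_text("joyce met"))

The point of the sketch is that nothing in the model represents whether a statement is true; it only represents which words have followed which. Scaled up by many orders of magnitude, that is the property that lets a chatbot invent a plausible-sounding 1956 newspaper article or a meeting at the Cafe Odéon.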

The tech industry often refers to the inaccuracies as “hallucinations.” But to some researchers, “hallucinations” is too much of a euphemism. Even researchers within tech companies worry that people will rely too heavily on these systems for medical and legal advice and other information they use to make daily decisions.

Read more at NYTimes: https://www.nytimes.com/2023/05/01/business/ai-chatbots-hallucinatation.html
