Why AI Models Lie with Confidence and How to Spot It
In the spring of 2023, a seasoned New York attorney named Steven Schwartz found himself in hot water during a lawsuit against the airline Avianca. Tasked with researching precedents for his client's injury claim, Schwartz turned to ChatGPT for help. The AI confidently provided a list of court cases, complete with summaries and citations, and Schwartz included them in a legal brief submitted to federal court. But when opposing counsel and the judge tried to verify these references, they discovered something alarming: the cases were entirely fabricated. Names like "Varghese v. China Southern Airlines" and "Shaboon v. Egyptair" sounded plausible, but they did not exist in any legal database. Schwartz was sanctioned with a $5,000 fine, and the incident made headlines worldwide as a stark example of AI hallucinations—confident but false outputs that large language models (LLMs) produce without any grounding in actual knowledge.