Stanford misinformation expert admits to ChatGPT ‘hallucinations’ in court statement
Communication Professor Jeff Hancock admitted to overlooking “hallucinated citations” in a court declaration he crafted with assistance from ChatGPT.
For the declaration, Hancock used GPT-4o to survey scholarly literature on the risks of deepfake technology in spreading misinformation, but failed to fact-check the citations the artificial intelligence (AI) model generated, he wrote in a filing to the United States District Court for the District of Minnesota last Wednesday.
“I did not intend to mislead the Court or counsel,” he wrote. “I express my sincere regret for any confusion this may have caused. That said, I stand firmly behind all of the substantive points in the declaration.”
Hancock, an expert on technology and misinformation, filed the original expert declaration on November 1 for a Minnesota court case regarding the state’s 2023 ban on the use of deepfakes to influence an election. While plaintiffs argued that the ban is an unconstitutional limit on free speech, Hancock submitted an expert declaration on behalf of defendant Minnesota Attorney General Keith Ellison claiming that deepfakes amplify misinformation and undermine trust in democratic institutions. The plaintiffs’ attorneys then accused Hancock of using AI to craft the court declaration, claiming the statement contained citations to two articles that did not exist.
In his latest filing, which detailed how he researched and drafted the declaration, Hancock wrote that he used GPT-4o and Google Scholar to create a citation list for his statement. He did not notice that the model had generated two “hallucinated citations” and had also introduced an error in the author list of an existing study.
“I use tools like GPT-4o to enhance the quality and efficiency of my workflow, including search, analysis, formatting and drafting,” he wrote.
The errors occurred when he asked GPT-4o to write a short paragraph based on bullet points he had written. According to Hancock, he included “[cite]” as a placeholder to remind himself to add the correct citations. But when he fed the writing into GPT-4o, the model inserted fabricated citations at each placeholder instead.
Hancock was compensated at the government rate of $600 per hour for his declaration, which he signed under penalty of perjury, attesting that everything stated in the document was “true and correct.”
The Daily has reached out to Hancock for comment.
Hancock currently teaches COMM 1: “Introduction to Communication” and COMM 324: “Language and Technology.” Last spring, he taught COMM 224: “Truth, Trust, and Technology.”
On Tuesday, COMM 1 met in the Sapp Teaching and Learning Center for its usual class session, but Hancock taught over Zoom. Several students in COMM 1 told the Daily that the class had just started learning about citations.
While Tuesday’s class focused on the importance of citing diverse scholars to broaden representation in the field of communications, one student in the course, who requested anonymity out of fear of academic repercussions, said they found it “ironic” that Hancock lectured on the use of citations after admitting to having used fabricated AI-generated citations.