Stanford Professor Under Fire for Alleged AI-Generated Testimony in Controversial Minnesota Case
A Stanford University professor who specializes in misinformation has come under scrutiny for allegedly using artificial intelligence (AI) to produce expert testimony in a high-stakes legal battle. The case, which involves a conservative YouTuber, raises significant questions about the intersection of technology, law, and free speech.
Overview of the Case
Jeff Hancock, a well-respected professor of communication and the founder of Stanford’s Social Media Lab, provided an expert declaration in a case centered on Minnesota’s recent ban on political deepfakes. The suit was brought by Christopher Kohls, a conservative YouTuber known for political satire, who argues that the law infringes on free speech rights. Minnesota Attorney General Keith Ellison is defending the legislation, which aims to regulate deceptive digital content.
Concerns Raised About Testimony
Hancock’s testimony, which was submitted in support of Ellison’s position, has come under fire from the plaintiff’s legal team. They are urging the Minnesota federal judge overseeing the case to exclude the declaration, claiming it cites a non-existent study.
Allegations of a Fake Study
The plaintiff’s attorneys assert that Hancock cited a research paper titled “The Influence of Deepfake Videos on Political Attitudes and Behavior,” supposedly published in the Journal of Information Technology & Politics. While the journal exists, the attorneys point out that no such study has ever appeared there. They argue that the reference may be a “hallucination” produced by an AI large language model (LLM) such as ChatGPT.
The legal memo states, “The publication exists, but the cited pages belong to unrelated articles.” This, the attorneys argue, casts serious doubt on the credibility of Hancock’s entire declaration, particularly because it lacks a clear methodology or analytical framework.
Critique of the Testimony’s Validity
The memo further critiques Attorney General Ellison’s reliance on Hancock’s conclusions, highlighting a lack of methodological rigor. The lawyers contend that Hancock could have referenced legitimate studies that align with his arguments but instead opted for a fabricated citation.
The attorneys searched extensively for the alleged article across various platforms, including Google and Google Scholar. Neither the title nor any excerpt from it appeared online, which they say supports their claim that the article does not exist.
Legal Implications and Next Steps
The attorneys argue that if Hancock’s declaration contains fabricated elements, it undermines the reliability of the entire document and should be excluded from court consideration. They assert, “The declaration of Prof. Hancock should be excluded in its entirety because at least some of it is based on fabricated material likely generated by an AI model, which calls into question its conclusory assertions.”
The memo concludes by suggesting that the court may need to investigate the source of the alleged fabrication, indicating that further action could be warranted.
Response from Relevant Parties
Fox News Digital has reached out to Stanford University, Professor Hancock, and Attorney General Ellison for comment on the allegations. The outcome of this case could have significant implications for the legal landscape surrounding digital misinformation and the use of AI in expert testimony.