OpenAI faces defamation lawsuit over false allegations by ChatGPT


OpenAI was hit by its first defamation lawsuit after ChatGPT fabricated legal allegations against a radio host.

Mark Walters, a Georgia radio host, is suing OpenAI for false information generated by ChatGPT in what appears to be the first defamation lawsuit against the company.

The chatbot claimed that Walters had been accused of defrauding and embezzling funds from a non-profit organization. ChatGPT generated the information in response to a request from a third party, the journalist Fred Riehl.

According to the lawsuit, Riehl asked ChatGPT to summarize a real federal court case by linking to an online PDF. In response, ChatGPT produced a fictitious summary of the case – detailed and convincing, but wrong on many counts – including false allegations against Walters.

Riehl never published the false information generated by ChatGPT; instead, he verified the details with another party. It is not clear from the case filings how Walters found out about the fabricated information. Walters filed his lawsuit on June 5 in Gwinnett County Superior Court in Georgia, seeking unspecified monetary damages from OpenAI.

The Verge notes that despite Riehl’s request to summarize a PDF, ChatGPT is actually unable to access such external data without the use of additional plugins. The system’s failure to alert Riehl to this limitation is an example of its capacity to mislead users. ChatGPT has since been updated to warn users that, as a text-based AI model, it cannot access or open PDF files or other external documents.

Chatbots like ChatGPT have no reliable way of distinguishing fact from fiction, and when asked to confirm something the questioner has suggested is true, they often make up facts, including dates and numbers. This has led to widespread complaints about incorrect information being generated by chatbots.

As more people interacting with ChatGPT fail to realize that it is not a “super search engine” and that it can and does invent outright falsehoods, more cases of these errors causing real harm are emerging. For example, a professor threatened to fail his entire class after ChatGPT falsely claimed that his students had used AI to write their essays, and a lawyer is facing legal consequences after using ChatGPT to cite fake court cases.

Although OpenAI includes a small disclaimer on ChatGPT’s homepage stating that the system “may occasionally generate incorrect information,” the company has also promoted ChatGPT as a source of reliable data in its ad copy. OpenAI CEO Sam Altman has even gone so far as to say that he prefers learning new information from ChatGPT over learning from books.

So, is there legal precedent to hold a company accountable for false or defamatory information generated by its AI systems? It’s difficult to say.

In the US, Section 230 shields internet companies from legal liability for information created by third parties and hosted on their platforms. It is unclear, however, whether these protections extend to AI systems, especially when the content in question was generated by a system that the company itself built, trained, and hosts.

Law professor Eugene Volokh notes that Walters did not notify OpenAI of the false statements about him, giving the company no opportunity to remove them, and that he suffered no actual damages as a result of ChatGPT’s output. Volokh concludes that while such defamation lawsuits against AI companies are viable in principle, Walters’ particular suit may be difficult to sustain.

“In any case, it will be interesting to see what happens here,” says Volokh.