OpenAI was recently sued over ChatGPT’s hallucinations. This is what happened
Who is responsible when generative AI tools start making things up about you?

ChatGPT and other generative AI models are known to produce errors, or “hallucinations.” As a result, they typically come with prominently displayed disclaimers highlighting this issue.

But what if, despite these warnings, you saw the AI chatbot spreading false information about you?

Mark Walters, a radio personality from Georgia, has filed a defamation lawsuit against OpenAI after learning that ChatGPT had been spreading false information about him, accusing him of embezzling money, as first reported by Bloomberg Law.

According to the complaint, the false information surfaced when Fred Riehl, editor-in-chief of the firearms publication AmmoLand, asked ChatGPT for a summary of Second Amendment Foundation v. Ferguson in order to provide context for a case he was covering.

ChatGPT provided Riehl with a description of the case, which claimed that Alan Gottlieb, the founder of the Second Amendment Foundation (SAF), accused Walters of “defrauding and embezzling funds from the SAF.”

The chatbot added that while Walters served as the SAF’s CFO and treasurer, he “misappropriated money for private use with no permission or a refund, altered financial records and statements from banks to cover up his activities, failed to provide accurate and timely financial information and disclosures to the leadership of the SAF, and used money borrowed from banks to cover up his activities.”

However, the plaintiff in Walters v. OpenAI LLC alleges that every single fact in the summary is untrue.

According to the lawsuit, Walters was never a party to the litigation, was never accused of defrauding or embezzling funds from the SAF, and never served as its treasurer or CFO.

Walters is suing OpenAI for general and punitive damages, as well as payment of his legal fees.

The complaint raises the questions of who should be held responsible, and whether the service’s warnings about hallucinations are enough to absolve it of liability even when someone is harmed.

In the developing field of generative AI, the court’s verdict could set an important precedent.