An Australian mayor has given OpenAI 28 days to correct false claims that ChatGPT generates when people search for information about him, failing which he will file a lawsuit against the company. Brian Hood says that asking ChatGPT about him falsely indicates that he was involved in a foreign bribery scandal.
We tried asking ChatGPT about Brian Hood using several variations of the query, but we could not get any answer indicating that he was involved in any form of bribery. It is likely that OpenAI has corrected the mistake.
Suing ChatGPT for defamation
Judges make decisions by applying the law to existing facts and considering past judicial decisions (case law) on the same subject. When a case is brought to court that has never been handled by any judge and is not defined in law, what can a judge rely on to arrive at a decision?
For example, ChatGPT is a new technology. As of the publication of this article, hardly any cases concerning ChatGPT defamation have been adjudicated, and we have not come across any existing law on the subject. Will such a case be dismissed because there is no existing law or precedent?
The law is like a continent that continues to be explored and mapped. Judges in different jurisdictions are always making new decisions that are then relied upon by other judges to decide cases, under what is commonly known as the doctrine of precedent.
There are existing business defamation lawsuits, such as Duffy v Google. You can therefore sue OpenAI as a business for any defamation arising from ChatGPT-generated answers.
If a business or company has made false allegations or damaging statements about you that have harmed your reputation, then you may have grounds for a defamation lawsuit. You should, however, consult an experienced lawyer to review the facts of your case before taking any legal action; otherwise, you may end up paying legal fees amounting to thousands or even millions of dollars if you lose the defamation suit.
AI ChatBots and defamation
Going forward, AI chatbots like ChatGPT are likely to be trained to be more careful with any negative information they generate about people. When you accuse someone of something that affects their reputation, you need enough facts to prove your accusations. Currently, AI chatbots are not known to be good at generating answers that need to be backed up with facts. That is why Stack Overflow banned ChatGPT answers.