By Gordon Wikle

Who stands behind AI?

As I noted in my previous blog addressing the use of AI by attorneys, AI continues to dominate headlines, and its use is increasing across industries. In this blog, I’ll move on to a related question that is becoming increasingly important: who is liable when AI use goes awry? The question arises in contexts where generative AI infringes on copyrights or trademarks, gives bad legal, medical, or financial advice, or generates discriminatory results in violation of civil rights laws. Each of those situations carries unique consequences, and below, I’ll give a brief overview of each.

Intellectual Property and AI

As things currently stand, AI, particularly the Large Language Models that generate text based on billions of existing texts in their training data, appears to:

  a) Be potentially liable for output that is derivative of existing copyrighted work;
  b) Be potentially liable for violating the licenses of copyrighted works used in training its algorithm; and
  c) Not be entitled to copyright protection for its own output.

Lawsuits related to a) and b) are currently pending, though a federal court in Kadrey v. Meta Platforms, Inc., No. 3:23-cv-03417 (N.D. Cal.), was skeptical of b) as a legal theory and found that the plaintiffs had failed to adequately allege a).

The takeaways from the litigation, with the caveats that all of the relevant litigation is ongoing and any result will be appealed, are that Meta, Microsoft, and OpenAI are proper parties to be sued over their generative AI’s potential copyright infringement. Under the fair use doctrine, an author whose work was used to train the programs is unlikely to succeed on a claim that the use of their work alone violated the copyright; the author may still succeed, however, if they can show the generated work is “derivative.” A secondary question, and one that has not been addressed in arguments to this point, is whether AI is capable of parody or political speech, which would otherwise be protected. In particular, because generative AI interposes both the users’ inputs and the AI’s algorithmic learning between the owner of the code and the results, any cause of action, or defense, that relies on mental state will be attenuated from the actual creation in a way that courts have not yet found a clear way to address.

Liability for Misuse of AI-generated Content

Related to the question of whether the creator of an AI can be liable for mental state-dependent causes of action is what happens when the end user of AI-generated content relies on that content and causes harm. This may take the form of a professional, e.g., a doctor, lawyer, or CPA, negligently relying on an AI to review complex materials and missing details, or an individual publishing work they believe to be original or accurate that turns out to be derivative or defamatory. The risk also exists that lending, employment, or government professionals will use generative AI as a predictive tool to create policies whose discriminatory effects violate civil rights law.

The famous case of the attorneys who had ChatGPT write their brief, failed to review its work, and submitted a filing with case citations hallucinated by the AI is the best-known example, and an easy one to analyze. The uses of AI-generated content, however, extend far beyond a handful of overworked personal injury lawyers. With media outlets experimenting with publishing AI-generated copy, marketers incorporating AI into search engine optimization, and physicians using AI to screen for cancer, the risks inherent in using or misusing AI are as varied and unquantifiable as its potential uses.

OpenAI takes an interesting path in the terms of use governing ChatGPT (https://openai.com/policies/terms-of-use). As the excerpts below show, OpenAI attempts to disclaim all responsibility and liability for the output of its AI and to shift that liability to its users.

You are responsible for Content, including ensuring that it does not violate any applicable law or these Terms. You represent and warrant that you have all rights, licenses, and permissions needed to provide Input to our Services.

Ownership of Content. As between you and OpenAI, and to the extent permitted by applicable law, you (a) retain your ownership rights in Input and (b) own the Output. We hereby assign to you all our right, title, and interest, if any, in and to Output.

Similarity of Content. Due to the nature of our Services and artificial intelligence generally, output may not be unique and other users may receive similar output from our Services. Our assignment above does not extend to other users’ output or any Third Party Output.

Output may not always be accurate. You should not rely on Output from our Services as a sole source of truth or factual information, or as a substitute for professional advice.

You must evaluate Output for accuracy and appropriateness for your use case, including using human review as appropriate, before using or sharing Output from the Services.

You must not use any Output relating to a person for any purpose that could have a legal or material impact on that person, such as making credit, educational, employment, housing, insurance, legal, medical, or other important decisions about them.

Our Services may provide incomplete, incorrect, or offensive Output that does not represent OpenAI’s views. If Output references any third party products or services, it doesn’t mean the third party endorses or is affiliated with OpenAI.

No cases have yet determined whether the above language is sufficient to avoid liability. Still, it is a virtual certainty that professional liability and medical malpractice suits will name AI providers in the near future, and it remains unclear where the balance of liability between the AI provider and the end user will fall as those suits arise.

Conclusion

It is a truism that generative AI will change many industries, including the legal industry. As we all observe and participate in those changes, it is essential to evaluate how, in the event of litigation, liability will fall. Venn Law Group endeavors to stay on the cutting edge of that evaluation and can assist you or your business in mitigating risks related to AI as you use it to enhance your business.


Gordon Wikle is an attorney at Venn Law Group with more than 14 years of experience serving as an assistant district attorney with the State of North Carolina. He focuses on commercial litigation and enjoys analyzing problems and finding creative solutions that are in the best interest of his clients. Navigating difficult situations and resolving business disputes are areas where he excels. Gordon earned his J.D. from Duke University School of Law and has his B.A. in Economics from Vanderbilt University.