Basics for Corporate Counsel to Consider About Generative AI
Generative artificial intelligence (“GenAI”) and GenAI tools like ChatGPT have significant potential to revolutionize how many organizations do business. GenAI tools allow users to brainstorm ideas, quickly generate content, and save precious time, which can give organizations that leverage these tools a significant commercial advantage over those that do not. For these reasons and others, many organizations are leveraging (or preparing to leverage) GenAI tools for their internal business purposes, developing GenAI products, or incorporating GenAI into their existing products. Regardless of the particular use case, GenAI presents various legal risks. An informed attorney, however, can mitigate these legal and regulatory risks, drive an organization’s adoption of GenAI, and help the organization derive tremendous benefit from it.
Key Legal Considerations Regarding GenAI
Although the legal issues raised by GenAI vary depending on the use case, below are a few key issues to consider.
- GenAI Produces Inaccurate Content: In and of themselves, large language models produce plausible sequences of words, not accurate information drawn from databases or any other credible source. Very often, their sensitivity to the nuances of the questioner’s language produces results that are precisely what the questioner wants to hear, yet completely inaccurate. As a practical example that might scare any lawyer, in June 2023, sanctions were imposed on two New York lawyers who submitted a legal brief citing fictitious cases generated by ChatGPT. Because GenAI tools can produce inaccurate content, relying on that content to make significant business decisions can lead to negative consequences. Organizations using GenAI tools internally would therefore be well-advised to develop internal AI governance and policies and procedures surrounding AI.
GenAI policies will vary depending on an organization’s use of GenAI, but may include, for example: (a) requiring employees to obtain prior approval for uses of GenAI beyond pre-approved or basic time-saving tasks; (b) prohibiting proprietary information and sensitive data from being entered into GenAI tools; (c) treating all GenAI-produced documents as drafts that must be carefully reviewed; (d) establishing a process for addressing concerns and complaints regarding GenAI; (e) designating individuals at the organization responsible for overseeing GenAI (likely stakeholders from multiple departments, including legal, marketing, product, and employment); and (f) accounting for how the policies will be revised as the legal landscape for AI evolves.
If an organization is providing a GenAI product to its own customers, it should mitigate this risk by disclaiming the accuracy and reliability of content created through GenAI and encouraging customers to carefully review and edit their GenAI content. Other ways businesses providing GenAI tools to their customers can mitigate risk concerning inaccurate, harmful, or illegal outputs include: (a) prohibiting use by customers in higher-risk industries; (b) implementing a content moderation process; and (c) drafting terms of service that shift responsibility for the outputs to customers and prohibit customers from using the GenAI tool to produce illegal, harmful, defamatory, or infringing outputs.
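To make the content-moderation point more concrete, below is a minimal illustrative sketch in Python of an output-moderation gate. The model call, blocklist, and review mechanism are placeholder assumptions rather than any particular provider’s API; a production system would typically combine a trained moderation classifier with human review.

```python
# Minimal sketch of an output-moderation gate for a GenAI product.
# The model call and blocklist below are illustrative placeholders only.

BLOCKED_TERMS = {"defamatory", "counterfeit", "malware"}  # placeholder policy terms

def call_model(prompt: str) -> str:
    """Placeholder for the organization's actual GenAI model call."""
    return f"Draft response to: {prompt}"

def is_acceptable(text: str) -> bool:
    """Crude screen; real deployments would use a moderation classifier and human review."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def generate_with_moderation(prompt: str) -> str:
    draft = call_model(prompt)
    if not is_acceptable(draft):
        # Flagged outputs are withheld and queued for human review rather than returned.
        raise RuntimeError("Output withheld pending content-moderation review")
    return draft

if __name__ == "__main__":
    print(generate_with_moderation("Summarize our refund policy in plain language."))
```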
- GenAI May Expose Trade Secrets and Proprietary Information: When using GenAI for business purposes, organizations must consider what information users are inputting into the product and what guidelines need to be put in place. Inputting proprietary information into GenAI tools may inadvertently share that data with third parties, including the GenAI provider, risking the loss of trade secret protection, violating privacy restrictions, and jeopardizing the confidentiality of patentable material. In May 2023, Samsung banned the use of ChatGPT and similar GenAI products after employees may have exposed trade secrets through their use. To avoid this situation, organizations should consider: (a) carefully selecting their GenAI tools, including taking advantage of enterprise platforms that do not use prompts and completions as training data and of available opt-outs from third-party monitoring; (b) establishing internal governance around what employees can and cannot provide to GenAI tools; (c) reviewing a GenAI tool’s terms of service concerning ownership of the inputs before using it; and (d) to the extent feasible, adjusting the settings of the GenAI tool to protect the organization’s sensitive data. Meanwhile, an organization providing a GenAI product to its customers should carefully consider how its terms of service with those customers address ownership of and rights to the inputs and outputs of the GenAI product.
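As one way to operationalize item (d) above, an organization could place a redaction step between employees and an external GenAI tool. The sketch below is a minimal, assumption-laden illustration using simple regular expressions; the patterns and the internal naming convention are hypothetical, and real controls would typically rely on dedicated data-loss-prevention and entity-detection tooling.

```python
import re

# Minimal sketch of redacting obvious identifiers before a prompt leaves the organization.
# The patterns below are illustrative assumptions; real controls would be far more extensive.

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PROJECT_CODENAME": re.compile(r"\bProject [A-Z][a-z]+\b"),  # hypothetical internal naming convention
}

def redact(prompt: str) -> str:
    """Replace matched sensitive strings with labeled placeholders before submission."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Draft an email to jane.doe@example.com about Project Falcon pricing."
    print(redact(raw))
    # -> "Draft an email to [EMAIL REDACTED] about [PROJECT_CODENAME REDACTED] pricing."
```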
- GenAI May Be Biased: Another risk of leveraging GenAI is that it may perpetuate bias and discrimination. In some instances, data set audits have revealed that training data sets for GenAI reflect social stereotypes, oppressive viewpoints, and derogatory or otherwise harmful associations with marginalized identity groups. In April 2023, the Federal Trade Commission (“FTC”) Chair and officials from three other federal agencies issued a Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems, highlighting that automated systems used in connection with GenAI rely on vast amounts of data to find patterns and make correlations, but in doing so may result in unlawful discrimination. Moreover, a biased product may constitute an “unfair or deceptive” business practice and therefore subject the organization to FTC scrutiny. When developing, using, or providing a GenAI product, organizations should regularly test for undesirable biases.
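As a minimal illustration of such testing, the sketch below compares model outputs across paired prompts that differ only in a demographic term. The model call is a placeholder and the reported metric is deliberately crude; a real fairness audit would use the production model, many more prompt templates, and statistically meaningful measures such as sentiment, stereotype, or refusal-rate comparisons.

```python
from itertools import product

# Minimal sketch of a paired-prompt bias check.
# `call_model` is a placeholder; templates, groups, and the metric are illustrative only.

def call_model(prompt: str) -> str:
    """Placeholder for the organization's actual GenAI model call."""
    return f"Model answer for: {prompt}"

TEMPLATES = [
    "Write a short performance review for a {} software engineer.",
    "Describe a typical day for a {} nurse.",
]
GROUP_TERMS = ["male", "female", "nonbinary"]

def audit() -> None:
    for template, term in product(TEMPLATES, GROUP_TERMS):
        output = call_model(template.format(term))
        # Record a crude per-group metric; a real audit would score sentiment,
        # stereotype associations, or refusal rates and compare them across groups.
        print(f"{term:>10} | {len(output.split()):>3} words | {output[:60]}")

if __name__ == "__main__":
    audit()
```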
- GenAI Creates Risk of Intellectual Property Infringement and Undermines IP Protections: GenAI also presents various intellectual property risks. GenAI may rely on information or materials that are subject to copyright, trademark, or patent protection. In fact, several lawsuits have been filed in the United States regarding whether the use of copyrighted content to train GenAI constitutes copyright infringement.1 Because a GenAI product may rely on infringing material, there is a risk that its outputs will themselves result in intellectual property infringement. If an organization is providing a GenAI product to its customers, it should consider broadly disclaiming any warranties regarding intellectual property in its terms of service, particularly given that, under existing U.S. law, neither copyright nor patent protection is available for GenAI completions (although trade secret protection can be preserved with the right platform and controls).
- GenAI Implicates Data Considerations: As stated, GenAI may pull information from a variety of data sources, including publicly available information on websites. Data scraping may be illegal in certain jurisdictions, and even where it is not, it may violate a website’s terms of use. Users of GenAI tools should be aware that such tools may have collected data unlawfully and should be careful in selecting their GenAI tools. Organizations developing GenAI products will want to understand what datasets their GenAI product is being trained on and pulling from. Lastly, organizations offering GenAI products may trigger requirements under data privacy laws, although most comprehensive data privacy laws were not necessarily written with GenAI in mind. To the extent personal data is implicated in the use of GenAI, key requirements under data privacy laws may apply, for example: (a) providing adequate notice; (b) honoring data subject rights requests; (c) potentially conducting a data protection impact assessment; (d) establishing a legal basis for using the training data; and (e) adhering to data minimization concepts.
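To make data minimization (item (e)) concrete, the sketch below keeps only the fields needed for a stated purpose and pseudonymizes the linking identifier before records are used with a GenAI tool. The field names, stated purpose, and hashing approach are illustrative assumptions, not a compliance recipe.

```python
import hashlib

# Minimal sketch of data minimization before records are used with a GenAI tool.
# Field names and the pseudonymization approach are illustrative assumptions only.

FIELDS_NEEDED_FOR_PURPOSE = {"ticket_text", "product", "region"}  # purpose: support-ticket summarization
DIRECT_IDENTIFIERS = {"name", "email", "customer_id"}

def pseudonymize(value: str, salt: str = "rotate-this-salt") -> str:
    """Replace an identifier with a keyed hash so records can be linked without exposing identity."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

def minimize(record: dict) -> dict:
    """Keep only the fields needed for the stated purpose; pseudonymize the linking key."""
    minimized = {k: v for k, v in record.items() if k in FIELDS_NEEDED_FOR_PURPOSE}
    if "customer_id" in record:
        minimized["customer_ref"] = pseudonymize(str(record["customer_id"]))
    return minimized

if __name__ == "__main__":
    raw = {"name": "Jane Doe", "email": "jane@example.com", "customer_id": 4821,
           "ticket_text": "App crashes on login.", "product": "Mobile", "region": "EU"}
    print(minimize(raw))
```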
High-Level Overview of AI Regulation and Enforcement
It is important to note that most countries currently lack comprehensive AI regulation, so developments should be carefully monitored.2 In the United States, there is currently no comprehensive law regulating AI. For now, the executive branch has largely taken the lead on addressing AI, including by: (a) publishing a White House fact sheet announcing new actions to promote responsible AI innovation; (b) creating the National Institute of Standards and Technology AI Risk Management Framework; and (c) creating a Blueprint for an AI Bill of Rights. Internationally, there have been significant developments concerning comprehensive AI regulation. On August 15, 2023, China’s Interim Measures for the Management of Generative Artificial Intelligence Services entered into effect. On June 14, 2023, the European Parliament adopted its negotiating position on the European Union’s AI Act, which is now the subject of negotiations among the EU institutions. Other countries, such as Canada with its proposed Artificial Intelligence and Data Act, may also ultimately take the lead on comprehensive AI regulation.
For companies utilizing or leveraging GenAI, it is also important to monitor AI enforcement. Italy’s data protection authority temporarily banned ChatGPT but lifted the ban after OpenAI addressed or clarified the issues raised. Although much remains to be seen regarding how AI laws will be enforced, organizations can learn from other applicable enforcement regarding data science initiatives and automated decision-making. Notably, in May 2023, the FTC issued a proposed order requiring a video doorbell camera provider to pay $5.8 million in consumer refunds and to delete data, models, and algorithms. The FTC stated that the company deceived its customers by allowing any employee or contractor to access consumers’ private videos, using customer videos to train algorithms without consent, and failing to implement basic privacy and security protections. This enforcement action highlights the need for proper internal governance, policies and procedures, and adequate transparency to consumers regarding the use of data, concepts that will also apply to GenAI.
As the regulatory framework and enforcement regarding AI evolve, it is important for attorneys tasked with advising an organization on GenAI to be aware of the legal considerations surrounding it. Done well, an attorney can implement appropriate risk mitigation measures so that businesses can derive the benefit of using or providing GenAI tools. The exact legal issues regarding GenAI, and how best to mitigate risk, will depend on the particular use case. For questions regarding this article or advice on a particular use case, please contact us.