Bipartisan Framework for U.S. AI (Sens. Blumenthal and Hawley) and Senate Hearing on Oversight of AI: Legislating on Artificial Intelligence

On September 8, 2023, Sen. Richard Blumenthal (D-CT) and Sen. Josh Hawley (R-MO) announced a bipartisan legislative framework to establish guardrails for artificial intelligence (AI). Sen. Blumenthal and Sen. Hawley serve, respectively, as Chair and Ranking Member of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law.

The framework, titled the Bipartisan Framework for U.S. AI Act, provides a blueprint for the development of comprehensive AI legislation and regulation. Broadly, the framework calls for:

  • Establishing a licensing regime administered by an independent oversight body:
    • Companies developing certain types of AI models would be required to register with an established federal oversight body and any licensing requirements would “include the registration of information about AI models and be conditioned on developers maintaining risk management, pre-deployment testing, data governance, and adverse incident reporting programs.”
  • Ensuring legal accountability for harms:
    • Companies would be held liable, through oversight body enforcement and private rights of action, in instances where their AI models or systems “breach privacy, violate civil rights, or otherwise cause cognizable harms.”
  • Defending national security and international competition:
    • Congress would “utilize export controls, sanctions, and other legal restrictions to limit the transfer of advanced A.I. models, hardware and related equipment, and other technologies to China, Russia, and other adversary nations, as well as countries engaged in gross human rights violations.”
  • Promoting AI transparency:
    • Congress would “promote responsibility, due diligence, and consumer redress by requiring transparency from the companies developing and deploying A.I. systems.” For example, developers would be required to disclose pertinent information on AI models and systems, users would be informed when they are interacting with AI, AI system providers would have to “watermark or otherwise provide technical disclosures of A.I.-generated deepfakes,” and a public database and reporting system would be established to provide “easy access to A.I. model and system information, including when significant adverse incidents occur or failures in A.I. cause harms.”
  • Protecting consumers and kids:
    • Companies deploying A.I. in high-risk or consequential situations would “be required to implement safety brakes, including giving notice when A.I. is being used to make decisions, particularly adverse decisions, and have the right to a human review.” Additionally, “consumers should have control over how their personal data is used in A.I. systems and strict limits should be imposed on generative A.I. involving kids.”

As Congress and federal agencies work to address various aspects of AI legislation and regulation, they have sought input from relevant stakeholders, including industry leaders. One issue that has arisen is whether the federal government should establish a new agency or regulatory body to oversee AI development. Another is whether entities developing AI should be required to comply with AI-specific federal licensing requirements (an issue further complicated by the development of open-source AI models). Sen. Blumenthal’s and Sen. Hawley’s proposed framework calls for both an “independent oversight body” and a licensing model.

The following week, on September 12, 2023, the Senate Judiciary Subcommittee on Privacy, Technology, and the Law held a hearing titled “Oversight of AI: Legislating on Artificial Intelligence.” Sen. Blumenthal provided opening remarks for the hearing, during which he stated, in relevant part:

[…] There is a deep appetite, indeed a hunger, for [AI] rules and guardrails. Basic safeguards for businesses and consumers, for people in general, from the panoply of potential perils. But there is also a desire to make use of the tremendous potential benefits. […] Make no mistake, there will be regulation. The only question is how soon and what? It should be regulation that encourages the best in American free enterprise, but at the same time provides the kind of protections that we do in other areas of our economic activity.

As noted, the legislative framework and hearing come at a time when Congress and numerous federal agencies are working to address varied aspects of AI development and use. For example, on June 21, 2023, Senate Majority Leader Chuck Schumer (D-NY) published a separate framework for AI titled the SAFE Innovation Framework. As relevant AI legislation and regulation continue to develop and shift, companies that use, or are affected by, AI should remain cognizant of applicable regulatory requirements, necessary compliance frameworks, enforcement concerns, and expanding use cases.

For more information, please contact:

Stephen Anstey: sanstey@kilpatricktownsend.com
Joel Bush: jbush@kilpatricktownsend.com
James Trigg: jtrigg@kilpatricktownsend.com
John Loving: jloving@kilpatricktownsend.com

Kilpatrick Townsend

Kilpatrick Townsend – Generative AI
Kilpatrick Townsend’s Generative AI practice works with clients to tackle their most pressing AI concerns and challenges to achieve results in line with overall business strategies and goals. Our multidisciplinary team, with backgrounds in intellectual property, privacy, federal regulation and legislation, commercial transactions, and dispute resolution, monitors and proactively addresses risks, compliance requirements, and opportunities related to generative AI. For more information, please visit our website – Generative AI (kilpatricktownsend.com)

Kilpatrick Townsend – Government and Regulatory
Kilpatrick Townsend’s Government and Regulatory practice offers policy, legislative, compliance, and regulatory advocacy services and legal guidance on both broad and industry-specific matters, including AI, energy, sustainability, Tribal, finance, distributed ledger technology (including blockchain), and digital assets (cryptocurrency, stablecoin, tokenization, and central bank digital currency (CBDC)). For more information, please visit our website – Government & Regulatory (kilpatricktownsend.com)
