Blumenthal & Hawley Announce Bipartisan Framework on Artificial Intelligence Legislation

Comprehensive framework would establish an independent oversight body, allow enforcers & victims to seek legal accountability for harms, promote transparency, & protect personal data

[WASHINGTON, D.C.] – U.S. Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO), Chair and Ranking Member of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, announced a bipartisan legislative framework to establish guardrails for artificial intelligence. The framework lays out specific principles for upcoming legislative efforts, including the establishment of an independent oversight body, ensuring legal accountability for harms, defending national security, promoting transparency, and protecting consumers and kids. The announcement follows multiple hearings in the Subcommittee featuring witness testimony from industry and academic leaders, including OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei, and Microsoft President and Vice Chair Brad Smith, who will testify before the Subcommittee on Tuesday.

“This bipartisan framework is a milestone—the first tough, comprehensive legislative blueprint for real, enforceable AI protections. It should put us on a path to addressing the promise and peril AI portends,” said Blumenthal. “We’ll continue hearings with industry leaders and experts, as well as other conversations and fact finding to build a coalition of support for legislation. License requirements, clear AI identification, accountability, transparency, and strong protections for consumers and kids—such common sense principles are a solid starting point.”

“Congress must act on AI regulation, and these principles should form the backbone,” said Hawley. “Our American families, workers, and national security are on the line. We know what needs to be done—the only question is whether Congress has the willingness to see it through.”

Specifically, the framework would:

Establish a Licensing Regime Administered by an Independent Oversight Body. Companies developing sophisticated general-purpose AI models (e.g., GPT-4) or models used in high-risk situations (e.g., facial recognition) should be required to register with an independent oversight body, which would have the authority to audit companies seeking licenses and cooperate with other enforcers such as state Attorneys General. The entity should also monitor and report on technological developments and economic impacts of AI.

Ensure Legal Accountability for Harms. Congress should require AI companies to be held liable through entity enforcement and private rights of action when their models and systems breach privacy, violate civil rights, or cause other harms such as non-consensual explicit deepfake imagery of real people, production of child sexual abuse material from generative AI, and election interference. Congress should clarify that Section 230 does not apply to AI and ensure enforcers and victims can take companies and perpetrators to court.

Defend National Security and International Competition. Congress should utilize export controls, sanctions, and other legal restrictions to limit the transfer of advanced AI models, hardware, and other equipment to China, Russia, other adversary nations, and countries engaged in gross human rights violations.

Promote Transparency. Congress should promote responsibility, due diligence, and consumer redress by requiring transparency from companies. Developers should be required to disclose essential information about training data, limitations, accuracy, and safety of AI models to users and other companies. Users should also have a right to an affirmative notice when they are interacting with an AI model or system, and the new agency should establish a public database to report when significant adverse incidents occur or failures cause harms. 

Protect Consumers and Kids. Consumers should have control over how their personal data is used in AI systems, and strict limits should be imposed on generative AI involving kids. Companies deploying AI in high-risk or consequential situations should be required to implement safety brakes and give notice when AI is being used to make adverse decisions.

A copy of the bipartisan framework can be found here.

The Senate Judiciary Subcommittee on Privacy, Technology, and the Law has jurisdiction over legal issues pertaining to technology and social media platforms, including online privacy and civil rights, as well as the impacts of new or emerging technologies. In July, Blumenthal and Hawley held a hearing titled “Oversight of AI: Principles for Regulation,” bringing together academic and industry leaders. In May, Blumenthal and Hawley held their first hearing, titled “Oversight of AI: Rules for Artificial Intelligence,” which heard from OpenAI CEO Sam Altman, IBM Chief Privacy & Trust Officer Christina Montgomery, and NYU Professor Gary Marcus.