“The urgency here demands action. The future is not science fiction or fantasy. It's not even the future. It's here and now,” said Blumenthal.
[WASHINGTON, DC] – U.S. Senator Richard Blumenthal (D-CT), Chair of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, delivered opening remarks at today’s hearing titled, “Oversight of AI: Principles for Regulation.”
“There is enormous good here—the potential for benefits and curing diseases, helping to solve climate change, workplace efficiency,” said Blumenthal, who also warned that, “We can't repeat the mistakes that we made on social media, which was to delay and disregard the danger.”
Blumenthal cited experts who warned AI could potentially create diseases and other biological threats, interfere with nuclear weapons, and lead to the loss of jobs, saying, “AI is already having a significant impact on our economy, safety, and democracy.”
To respond to these threats, Blumenthal said, “We need some kind of regulatory agency, but not just a reactive body. Not just a passive rules of the road maker, edicts on what guardrails should be. But actually investing proactively in research so that we develop countermeasures.”
Blumenthal praised the news that tech firms have agreed to voluntary safety commitments proposed by the White House, but said, “It’s only a start.”
“The goal for this hearing is to lay the ground for legislation. To go from general principles to specific recommendations. To use this hearing to write real laws, enforceable laws,” said Blumenthal.
Blumenthal concluded his remarks by saying there are core standards to build a consensus around, such as a licensing regime for companies developing AI, testing and auditing by third parties, legal limits on use related to elections, and transparency about the limits and use of AI models.
“The urgency here demands action. The future is not science fiction or fantasy. It's not even the future. It's here and now.”
Video of Blumenthal’s opening remarks can be found here. A transcript is available below.
U.S. Senator Richard Blumenthal (D-CT): The Senate Judiciary Subcommittee on Privacy, Technology, and the Law will come to order. Thank you to our three witnesses for being here. I know you've come a long distance. And to the Ranking Member, Senator Hawley, for being here as well on a day when many of us are flying back. I got off a plane less than an hour ago, so forgive me for being a little bit late. I know many of you have flown in as well. Thank you to all in our audience; many are outside the hearing room.
Some of you may recall at the last hearing I began with a voice, not my voice, although it sounded exactly like mine because it was taken from floor speeches and an introduction. Not my words, but concocted by ChatGPT that actually mesmerized and deeply frightened a lot of people who saw and heard it.
The opening today, my opening at least is not going to be as dramatic. But the fears that I heard as I went back to Connecticut, and also heard from people around the country, were supported by that kind of voice impersonation and content creation.
What I have heard again and again and again, and the word that has been used repeatedly, is scary. Scary, when it comes to artificial intelligence. As much as I may tell people, you know, there is enormous good here – the potential for benefits and curing diseases, helping to solve climate change, workplace efficiency – what rivets their attention is the science fiction image of an intelligence device out of control, autonomous, self-replicating, potentially creating diseases, pandemic-grade viruses, or other kinds of evils, purposely engineered by people or simply the result of mistakes, not malign intention.
And frankly, the nightmares are reinforced in a way by the testimony I've read from each of you. In no way disparagingly do I say that those fears are reinforced because I think you've provided objective fact-based views on what the dangers are and the risks, and potentially even human extinction – an existential threat which has been mentioned by many more than just the three of you, experts who know firsthand the potential for harm.
But these fears need to be addressed and I think can be addressed through many of the suggestions that you are making to us and others as well. I've come to the conclusion that we need some kind of regulatory agency, but not just a reactive body. Not just a passive rules of the road maker, edicts on what guardrails should be. But actually investing proactively in research so that we develop countermeasures against the kind of autonomous out of control scenarios that are potential dangers; an artificial intelligence device that is in effect programmed to resist any turning off; a decision by A.I. to begin nuclear reaction to a nonexistent attack.
The White House certainly has recognized the urgency with a historic meeting of the seven major companies which made eight profoundly significant commitments, and I commend and thank the President of the United States for recognizing the need to act.
But we all know, and you have pointed out in your testimony, that these commitments are unspecific and unenforceable. A number of them on the most serious issues say that they will give attention to the problem. All good, but it's only a start.
I know the doubters about Congress and about our ability to act. But the urgency here demands action. The future is not science fiction or fantasy. It's not even the future. It's here and now.
And a number of you have put the timeline at two years before we see some of the most severe biological dangers. It may be shorter, because the pace of development is not only stunningly fast, it is also accelerating at a stunning pace because of the quantity of chips, the speed of chips, the effectiveness of algorithms. It is an inexorable flow of development. We can condemn it. We can regret it. But it is real.
And the White House's principles actually align with a lot of what we have said, among us, in Congress and notably in the last hearing that we held. We are here now because A.I. is already having a significant impact on our economy, safety, and democracy. The dangers are not just extinction, but the loss of jobs, one of potentially the worst nightmares that we have. Each day, these issues are more common, more serious, and more difficult to solve. And we can't repeat the mistakes that we made on social media, which was to delay and disregard the danger.
So the goal for this hearing is to lay the ground for legislation. To go from general principles to specific recommendations. To use this hearing to write real laws, enforceable laws.
In our past two hearings, we heard from panelists that Section 230, the legal shield that protects social media, should not apply to A.I. Based on that feedback, Senator Hawley and I introduced the No Section 230 Immunity for A.I. Act.
Building on our previous hearing, I think there are core standards that we are building consensus around. And I welcome hearing from many others on these potential rules: establishing a licensing regime for companies that are engaged in high-risk A.I. development; a testing and auditing regime by objective third parties or, preferably, by the new entity that we will establish; imposing legal limits on certain uses related to elections, a danger Senator Klobuchar has raised directly, and related to nuclear warfare, where China apparently agrees that A.I. should not govern the use of nuclear weapons; requiring transparency about the limits and use of A.I. models. This includes watermarking, labeling, disclosure when A.I. is being used, and data access. Data access for researchers.
So I appreciate the commitments that have been made by Anthropic, OpenAI, and others at the White House related to security testing and transparency last week. It shows these goals are achievable, and that they will not stifle innovation, which has to be an objective. Avoid stifling innovation. We need to be creative about the kind of agency or entity, the body or administration. It can be called an administration or office. I think the language is less important than its real enforcement power and the resources invested in it.
We are really lucky, very, very fortunate to be joined by three true experts today. One of the most distinguished panels I've seen in my time in the United States Congress, which is only about 12 years: one of the leading A.I. companies, which was founded with the goal of developing AI that is helpful, honest, and harmless; a researcher whose groundbreaking work led him to be recognized as one of the fathers of A.I.; and a computer science professor whose publications and testimony on the ethics of A.I. have shaped regulatory efforts such as the EU A.I. Act.
So welcome to all of you and thank you so much for being here.
I turn to the Ranking Member, Senator Hawley.