Blumenthal Questions Anthropic CEO & Academic Leaders About Principles for Regulating Artificial Intelligence

“AI is here and beware of what it will do if we don't do something to control it,” said Blumenthal

[WASHINGTON, DC] – U.S. Senator Richard Blumenthal (D-CT), Chair of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, questioned Anthropic Chief Executive Officer Dario Amodei, Université de Montréal Professor and one of the “godfathers of AI” Yoshua Bengio, and University of California, Berkeley Professor Stuart Russell at yesterday’s hearing titled, “Oversight of AI: Principles for Regulation.”

Election Security

Blumenthal asked the witnesses about “immediate threats to the integrity of our elections system,” such as the spread of misinformation or the manipulation of electoral counts.

“When I think of the risks here my mind goes to misinformation, generation of deep fakes, use of AI systems to manipulate people or produce propaganda or just do anything deceptive,” said Amodei, who added that Anthropic trains its models with “constitutional AI,” laying out explicit principles and telling the model not to generate misinformation. Amodei also suggested watermarking content to give users the ability to detect whether something is generated by AI.

Bengio shared his concerns with the widespread release of pre-trained large models, saying, “One can take a pre-trained model say by a company that made it public, and then without huge computing resources, so not $100 million cost it takes to train them, but something very cheap, can tune these systems to a particular task which could be to play the game of being a troll, for example.”

Russell said he was most worried about disinformation and external influence campaigns, saying, “We can present to the system a great deal of information about an individual – everything they've ever written or published on Twitter or Facebook, the social media presence, their floor speeches – and train the system and ask it to generate a disinformation campaign, particularly for that person.” Russell also added that he supports the idea of a unified approach and standard for labeling AI generated content.

Protecting Against Autonomous & Rogue AI

Blumenthal asked the witnesses about the risks of “superhuman AI” and technology that, “on its own could develop a pandemic virus, on its own decide Joe Biden should not be our next president, on its own decide that the water supply of Washington, D.C. should be contaminated with some kind of chemical and have the knowledge to do it through public utility systems.”

Blumenthal added that these risks mean there is “urgency to develop an entity that can not only establish standards and rules, but also research countermeasures that detect those misdirections.”

Amodei noted that funding an enforcement apparatus and working in concert will be key to preventing “truly autonomous models.”

Bengio said regulation and liability were key and that, “My calculation is we can reduce the probability of a rogue AI showing up by maybe a factor of 100 if we did the right thing in terms of regulation, so it is really worth it.” He also stressed the importance of doing this “with our allies in the world, and not do it alone.”

Russell suggested the idea of involuntary recall provisions so that, “If a company puts out a system that violates one of the rules and it is recalled until the company can demonstrate that it would never do that again, then the company can go out of business.” He said this would give companies “a very strong incentive to actually understand how their systems work and if they can't, to redesign the system so that they do understand how they work.”

Addressing National Security Threats

Blumenthal asked the witnesses about the impact AI may have on national security and which adversaries and allies may be competitors to the United States.

“I think the closest competitor we have is probably the U.K. in terms of making advances in basic research,” said Russell, who added that China has “mostly been building copycat systems that turn out not to be nearly as good as the systems that are coming out from Anthropic and OpenAI and Google,” but has also “publicly stated their goal to be the world leader.”

He also added that while China is probably investing more public funds than the U.S., most of its efforts are focused on state security, making it “extremely good at voice recognition, face recognition, tracking and recognition of humans.”

Bengio spoke about the importance of coordination and cooperation, saying, “We want every country to follow some basic rules, because even if they don't have the technology, some rogue actor, even here in the US, might just go and do it somewhere else…We need to make sure there's an international effort in terms of these safety measures.”

Reporting Issues with AI Development

Blumenthal spoke about the importance of transparency and having AI developers report incidents, similar to the FAA’s accident and incident reporting system.

“It doesn't seem like AI companies have an obligation to report issues right now. In other words, there's no place to report it. They have no obligation to make it known,” said Blumenthal. “Would you all favor some kind of requirement for that kind of reporting?”

“Absolutely,” said Bengio.

“I think such requirements make sense,” said Amodei.

Addressing Open Source AI Models

Blumenthal discussed both the advantages and the safety and security risks that come with open source AI models, saying, “Even in the short time some AI tools have been available, they have been abused…On the one hand, access to AI is a good thing for research, but on the other hand, the same open models can create risks just because they are open.”

Bengio said, “I think that it’s really important that the government comes up with some definition which is going to keep moving but make sure that future releases are going to be very carefully evaluated for that potential before they are released.”

“I think the path that things are going in terms of the scaling of open source models, I think it is going down a very dangerous path,” said Amodei. “When a model is released in an uncontrolled manner, there is no ability to do that. It is entirely out of your hands.”

Russell discussed the idea of liability for AI developers who release open source models, saying, “The open source community has got to start thinking about whether they should be liable for putting stuff out there that is ripe for misuse.”

Establishing Guardrails for AI

Bengio stressed the need for international collaboration in the future regulation of AI, saying, “We need to have a single voice that coordinates with the other countries. And having one agency that does that is going to be very important…We need to build something that's going to be very agile.”

Blumenthal concluded his remarks by talking about the need to, “develop an entity or a body that will be agile, nimble, and fast, because we have no time to waste.”

“What you have seen here is not all that common, which is bipartisan unanimity that we need guidance from the federal government. We can't depend on private industry. We can't depend on academia. The federal government has a role that is not only reactive and regulatory, it's also proactive in investing in research and development of the tools needed to make this…work for all of us.”

Video of Blumenthal questioning the witnesses can be found here, here, and here.

-30-