
Exploring Risks at the AI Research Frontier

PNNL, CSET, and Google DeepMind experts delved into AI risks in a roundtable discussion


PNNL Chief Scientist for Artificial Intelligence Court Corley.

(Photo by Andrea Starr | Pacific Northwest National Laboratory)

“With great power comes great responsibility.” Though popularized in recent media, versions of this quote have been used by thought leaders for thousands of years, from Cicero in ancient Rome to Winston Churchill.

Artificial intelligence (AI) has proven to be an incredibly powerful technology with a broad range of capabilities. However, those capabilities come with risks, such as systems learning to deceive human observers or acquiring dangerous knowledge.

But who is responsible for anticipating and managing those risks? Pacific Northwest National Laboratory (PNNL) Chief Scientist for Artificial Intelligence Court Corley joined experts from the Center for Security and Emerging Technology (CSET) at Georgetown University, Google DeepMind, and others to discuss this topic. The outcome of their virtual roundtable discussion was published on the CSET website.

“AI technologies are rapidly developing—the popular large language models of today didn’t exist until 2017,” said Corley. “As these technologies continue to develop, new risks will emerge—some with great societal impact.”

Some examples of potential near-term developments in AI, or AI at the frontier, include systems that can receive and generate multiple types of inputs and outputs. Others may be able to interact with the open internet, querying websites to collect information from multiple sources. While these developments may be useful, they are also unpredictable: they may lead to high-risk outcomes, such as the spread of misinformation, and may make AI systems more vulnerable to attack or exploitation.

Participants in the roundtable discussion agreed that government and industry partners need to work together to effectively anticipate and respond to these risks. One strategy would be to create a scheme similar to the government-led cybersecurity Vulnerabilities Equities Process (VEP), in which companies are incentivized to disclose cyber vulnerabilities without incurring liability. This voluntary process could be adapted for AI technologies. The analogy is imperfect, however, because AI systems cannot yet be patched the way conventional software can.

The roundtable participants encourage policymakers to define best practices for mitigating AI risks in a visible and transparent manner. As these technologies continue to grow, so too does the need for a responsible framework for their oversight.

Along with Corley, the roundtable participants consisted of Helen Toner, Jessica Ji, and John Bansemer from CSET; Lucy Lim, Matt Botvinick, and Mikel Rodriguez from Google DeepMind; Chris Painter from ARC Evals; Jess Whittlestone from the Centre for Long-Term Resilience; and Ram Shankar Siva Kumar from Microsoft Security Research, the CITRIS Policy Lab, and the Goldman School of Public Policy at UC Berkeley.

Published: November 2, 2023