Feature Story

Between Promise and Peril

How can we encourage discovery while promoting trust?

October 10, 2023

In mid-September, Scientific American ran a story with the surreal title “Artificial Intelligence Could Finally Let Us Talk with Animals.” It featured this enchanting assessment of our recent progress, quoting researcher Aza Raskin:

“It’s now possible for AI to take three seconds of human speech and then hold forth at length with its same patterns and intonations in an exact mimicry. In the next year or two, Raskin predicts, ‘we’ll be able to build this for animal communication.’”

The story brought to mind a related story the New York Times ran in August after AI helped a stroke victim named Ann Johnson “speak” for the first time in decades. According to experts, the research “demonstrates the first time spoken words and facial expressions have been directly synthesized from brain signals.”

It’s common knowledge that AI systems are being used to reinvigorate drug discovery, supercharge materials science, diagnose cancer and other diseases, tutor students in rural villages, and even improve the efficiency of the computer chips used to run the AI systems themselves.

So What?

It’s easy to become inured to the parade of mind-blowing stories documenting generational advances in wide-ranging fields of inquiry, all facilitated by recent progress in AI.

For policymakers, this would be a mistake.

While the snake oil of bogus AI promises abounds on social media, authoritative studies documenting how AI tools have cracked open seemingly intractable scientific problems are not snake oil.

They are its antithesis – persuasive evidence that we are entering a golden era of innovation that can reawaken a sense of wonder in even the most hardened skeptics.

Promoting Innovation

The most straightforward way for the federal government to promote innovation in AI is to fund research efforts that the private sector is less likely to finance.

In last summer’s CHIPS and Science Act, Congress established a permanent new Directorate for Technology, Innovation, and Partnerships at the National Science Foundation and authorized billions of dollars in funding for ten key technology focus areas, including AI and machine learning.

Unfortunately, congressional appropriators have only funded a small fraction of the amount authorized. Congress has also authorized initiatives to develop AI for military use and created the National AI Research Resource Task Force (though once again, funding has thus far been scarce).

Balancing Innovation with Safety and Security

When we move beyond public investments in AI research and consider the thornier question of AI regulation, the primacy of innovation once again comes into focus.

For policymakers, “safeguarding innovation” has become a touchstone for every serious framework to promote and regulate AI, including generative AI.

The word “innovation” appears in nearly every relevant legislative proposal, including the title of Senate Majority Leader Chuck Schumer’s SAFE Innovation Framework for AI Policy, and Schumer has called innovation the “north star” of AI policymaking. The second Bipartisan AI Insight Forum, co-hosted by Senators Schumer, Todd Young, Mike Rounds, and Martin Heinrich, will focus specifically on innovation.

While there are now a range of proposals on the table, the dominant conceptual framework for AI regulation is to protect public safety and mitigate potential harms without stifling innovation. This is generally uncontroversial but difficult to achieve, almost by definition.

“Innovation” includes all new applications and advances that are judged to be useful (or at least potentially useful). Lawmakers typically use the term to refer only to socially beneficial advances, but raw technology knows no such distinction.

Many public harms that we can already foresee will be the fruit of “innovations,” strictly speaking – they will just empower bad actors at the expense of law-abiding citizens and vulnerable populations.

The Cybersecurity Arms Race

Cybersecurity practitioners, who live and breathe the innovation arms race, understand this dynamic well. Malicious hackers develop new tools and use them for malign purposes, government and industry develop new tools in turn, and the game goes back and forth, with innovation proceeding apace.

Sometimes, this dynamic can be harnessed for good. Through red teaming, hackers identify vulnerabilities and share their knowledge with authorities, and organized hackathons encourage innovation in the practice of cybersecurity. Coming back to AI, the White House Office of Science and Technology Policy recently helped organize a Las Vegas hackathon of sorts to allow expert volunteers to identify problematic responses in current generative AI systems.

Out With the Bad, In With the Good

Because innovation can cut both ways, policymakers operate within the error framework of hypothesis testing. The overarching goal is to craft intelligent rules that minimize both Type I errors (false positives that stop helpful innovations) and Type II errors (false negatives that permit harmful innovations). This type of framework will be familiar to anyone who works in a regulated industry, especially health care.
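
To make the tradeoff concrete, here is a minimal Python sketch. It is purely illustrative: the risk scores, the single numeric approval threshold, and the tidy split between “helpful” and “harmful” innovations are hypothetical assumptions introduced for this example, not features of any actual regulatory proposal.

```python
# Purely illustrative: a toy model of the Type I / Type II tradeoff.
# The risk scores below are invented for illustration only.

helpful = [0.10, 0.20, 0.30, 0.45, 0.55]  # hypothetical scores for beneficial innovations
harmful = [0.40, 0.60, 0.70, 0.85, 0.95]  # hypothetical scores for harmful innovations

def error_rates(threshold):
    """Block anything scoring at or above the threshold; permit the rest."""
    type_i = sum(s >= threshold for s in helpful) / len(helpful)   # helpful innovations blocked
    type_ii = sum(s < threshold for s in harmful) / len(harmful)   # harmful innovations permitted
    return type_i, type_ii

for threshold in (0.3, 0.5, 0.7, 0.9):
    t1, t2 = error_rates(threshold)
    print(f"threshold={threshold:.1f}  Type I={t1:.0%}  Type II={t2:.0%}")
```

Running the sketch shows the squeeze: a permissive threshold of 0.9 blocks none of the helpful innovations but lets 80 percent of the harmful ones through, while a strict threshold of 0.3 stops every harmful innovation at the cost of blocking 60 percent of the helpful ones. No single threshold eliminates both error types at once.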

An Ocean Apart

Determining the optimal tradeoff between each type of error is fundamentally a values choice – and countries will differ on where the balance is struck.

Historically, the United States has been more willing to tolerate some abuses in order to gain the benefits of innovation, while Europe has been much less tolerant of harm – even at the cost of forgone benefits. Last month, an unnamed Cato Institute researcher summed this up well with the quip “Europe gonna Europe.”

These intercontinental dynamics are now well-established in tech regulation, which is why the quip lands.

Where to Draw the Line?

Every industry requires a different level of safety and security regulation to ensure an efficient market. The optimal level of regulation depends on a number of factors, including (but not limited to) the level of risk presented, the information gap between producers and consumers, and the extent to which the product or service is considered a necessity. Thus, pharmaceuticals are tightly regulated, while dietary supplements are not.

When the level of regulation is set too low, producers may be unable to gain the trust of users, and the market may fail to scale or live up to its original promise.

Of course, the level of regulation can also be set too high. We can find notorious examples from U.S. history, but I think immediately of the License Raj system that plagued India for generations.

In reaction to the threat of overregulation, many entrepreneurs rally around the phrase “permissionless innovation.” This sounds good in theory but points once again to the question of acceptable levels of risk – we don’t want “permissionless innovation” when it comes time to purchase prescription drugs.

What is the Appropriate Level of Regulation for AI?

Since AI is best understood as a general purpose technology that can be used in many different settings, most proposals for regulation employ a risk-based framework in which the level of regulation ratchets up as the level of risk increases. We see this in both the NIST AI Risk Management Framework and Europe’s AI Act.
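
To illustrate what a tiered, risk-based approach looks like in practice, here is a rough Python sketch. The tier names and obligations below are simplified assumptions loosely inspired by the tiered structures of the NIST AI Risk Management Framework and the draft EU AI Act; they are not a faithful encoding of either document.

```python
# Purely illustrative: hypothetical risk tiers mapped to escalating obligations.
# The tiers and requirements are simplified assumptions, not the actual text
# of the NIST AI RMF or the EU AI Act.

RISK_TIERS = {
    "minimal": ["voluntary best practices"],
    "limited": ["transparency notices to users"],
    "high": [
        "pre-deployment risk assessment",
        "documentation and logging",
        "human oversight",
    ],
    "unacceptable": ["prohibited from deployment"],
}

def obligations(use_case, tier):
    """Return the obligations that would attach to a use case at a given tier."""
    return f"{use_case} ({tier} risk): " + "; ".join(RISK_TIERS[tier])

print(obligations("spam filtering", "minimal"))
print(obligations("resume screening", "high"))
```

The design choice worth noticing is that the regulatory burden attaches to the use case rather than to the underlying technology, which is what allows a general purpose technology to carry light obligations in low-stakes settings and heavier ones where the risk is real.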

This is the correct paradigm, though it doesn’t resolve the question of what the optimal level of regulation should be for each risk level. As stated above, this can only be set in accordance with national and regional values.

Looking Beyond Economics

The risk/reward framework is fundamentally an economic model of policymaking that has stood the test of time in evaluating policy proposals across a range of issue areas. But AI is difficult to regulate because, as has often been pointed out, it may not be like other policy challenges.

Because the innovation cycle is moving so quickly in AI, our confidence in predicting future outcomes – even in the relatively short term – is weak. In the face of such uncertainty, it makes sense to look beyond economics to another domain where information is especially scarce. We do so in the companion piece.
