U.S. Capitol Building, Washington D.C. Original image from Carol M. Highsmith’s America, Library of Congress collection
Where Are We On AI Policy?
Congress and the Administration have taken many steps. But there’s a reason why the question of “what the government will do about AI” hangs in the air.
September 12, 2023
In one sense, the question of what the federal government “is going to do” about AI is misplaced. Both Congress and the Administration have already taken significant actions to harness the benefits of AI, mitigate risks, and provide guidance to industry.
Individual federal agencies are also working hard to develop new rules and systems regarding AI. The Department of Defense has built on its past work regarding autonomous weapons systems to develop frameworks for the ethical use of AI, created a new Chief Digital and AI Officer position, and established a task force on generative AI.
As we enter the 2024 election cycle, the Federal Election Commission has taken a “small step” toward restricting the use of AI to create election disinformation in the form of “deepfakes.” The Commerce Department’s National Telecommunications and Information Administration is preparing a report with policy recommendations for creating “earned trust in AI systems,” while the U.S. Copyright Office has launched a new AI Initiative and is now requesting public comment on a wide range of “copyright issues raised by recent advances in generative AI.”
These are just the most prominent examples of agencies wrestling with the ways in which their existing mandates interact with the promises and perils of AI, especially generative AI. These actions are important, and several of them – especially the NIST AI Risk Management Framework – have been praised by a wide range of industry and civil society groups, a notable achievement.
What Has Congress Accomplished?
In past years, Congress chartered commissions and whole-of-government efforts to study the technology, provide recommendations, and coordinate agency actions – with the National Security Commission on AI and the National AI Initiative being the most prominent examples. The partnership between Senator Gary Peters and former Senator Rob Portman was especially productive, and resulted in several measures to regulate government agencies’ own uses of AI.
In this Congress, dozens of bills pertaining to AI have been introduced in both chambers, and lawmakers are working hard to gain an understanding of the promises and risks of AI, especially generative AI.
The challenges presented by AI have also pushed Members in both chambers to reach across the aisle and pursue bipartisan projects. Representatives Michael McCaul, Anna Eshoo, Jay Obernolte, and Don Beyer – who lead the House AI Caucus – are hosting briefings to help their colleagues move up the learning curve. Senators Josh Hawley and Richard Blumenthal plan to unveil a bipartisan framework to regulate AI later today, and Reps. Ted Lieu, Ken Buck, and Anna Eshoo have introduced a bipartisan bill to create a National AI Commission.
The societal changes heralded by AI may be enormous – at least over the long term – and Members are thinking big. Senate Commerce Committee Chair Maria Cantwell recently proposed a sweeping AI bill – modeled on the landmark GI Bill that sent eight million World War II veterans to college – to help workers gain skills and prepare for the “jobs of the future.” And Senate Majority Leader Chuck Schumer has worked hard to focus Congress’s attention on AI and included several AI risk management provisions in the Senate’s version of the annual National Defense Authorization Act.
But Seriously – What’s Going To Happen?
These actions notwithstanding, there’s a reason why the overarching question of “what the government will do about AI” hangs in the air.
The Administration’s public actions have come primarily in the form of non-binding guidance documents that do not have the force of law. And the laws that Congress has enacted thus far have addressed the federal government’s own uses of AI, commissioned studies and reports, or provided (modest) funding to boost AI research efforts.
The Bipartisan Effort Continues
Leader Schumer’s SAFE Innovation Framework – which he announced this past June – is the closest we have come to comprehensive legislation.
But as its name suggests, it’s intended to be a framework of principles to guide the development of future legislation rather than legislation itself. The bipartisan work of Leader Schumer and Senators Todd Young, Mike Rounds, and Martin Heinrich continues, with a major AI Insight Forum scheduled for tomorrow.
This is the group to watch. The last time Senators Young and Schumer worked together, they passed the CHIPS and Science Act, which President Biden later hailed as an “inflection point” in the way the federal government supports high-tech manufacturing and scientific research (after name-checking his Corvette). I’ve spent much of the past year obsessed with this new law and trying to understand what it portends for future bipartisan cooperation on manufacturing, science, and technology policy.
The legislative history of the CHIPS and Science Act was unusually tortured, and it took a long time for the bill to pass both chambers of Congress and become law. The path to developing comprehensive legislation on AI may prove to be even harder.