Photograph of Man Controlling Trade, one of a pair of statues created by Michael Lantz that stand outside the Federal Trade Commission Building in Washington, DC.
Who Can Hold the Reins?
If Congress decides to regulate AI, it must also decide whom to empower.
September 12, 2023
A bipartisan consensus is forming that the federal government must play some role in regulating AI. But should Congress seek to convert this rough consensus into legislation that regulates AI systems in a comprehensive way, it will face major challenges – some unique to the technology, some not.
Members of Congress have very different ideas about how the federal government might regulate AI – with Republican Members of Congress particularly attuned to concerns that comprehensive regulation could suppress innovation or hamper national defense capabilities.
Yet even that observation puts the cart before the horse. We must first consider a foundational question – who could hold the reins?
AI systems, including generative systems, integrate a complex set of rapidly developing technologies – and Members of Congress want to ensure that any laws they enact are not rendered obsolete overnight.
Technical progress in AI seems to be accelerating more quickly than progress in the other technologies that have bedeviled lawmakers in recent years. And large language models such as those utilized by ChatGPT have captivated the public in large part due to their “emergent” behavior – they sometimes exhibit new capabilities that their developers did not anticipate. In these ways, AI is different from past subjects of legislation.
But in other important ways, it’s not. Any attempt to create a regulatory regime for AI will involve many of the same challenges that have vexed Congress in recent years. And one of these challenges deserves particular attention – the need to identify a regulator.
Why Can’t Congress Just Take Charge?
Neither AI’s champions nor its skeptics want Congress to pass laws that govern the precise mechanics of AI systems. By constitutional design, Congress is not equipped to nimbly revisit requirements and restrictions as circumstances change – that’s what agencies do.
At present, there is no single agency that is both technically capable of regulating AI and politically palatable to both major parties.
Why Not the FTC?
Let’s first consider the Federal Trade Commission (FTC). From both a technical and jurisdictional standpoint, the FTC is a natural choice to regulate AI. It has a long track record of protecting consumers and is well-known by the business community. It even has a crack new team of technologists, several of whom are AI experts.
But empowering the FTC to regulate AI in a comprehensive way is not politically feasible at this time. As the saying goes, personnel is policy – and Republican House Members are highly unlikely to grant major new enforcement authority to the FTC as long as its Democratic Chair, Lina Khan, remains in power.
Much has been written about the House Republicans’ attitude toward Chair Khan, but if you want to get a sense of how deep the antipathy goes, I recommend watching a few minutes of her recent appearance before the House Judiciary Committee.
The FTC is currently investigating ChatGPT operator OpenAI for potential violations of consumer protection laws and recently sent OpenAI a twenty-page request for information regarding its personnel, data collection, content moderation, and risk management practices. No one should be surprised that the FTC is flexing its muscles and aggressively wielding its existing powers, but its track record in recent cases demonstrates the limits the agency faces in pursuing enforcement actions under current law.
Why Not the FCC?
The Federal Communications Commission is the second natural candidate to serve as a comprehensive AI regulator. It has a long history of regulating communications technologies and an established process for gathering information and conducting rulemaking on complex technical issues.
The FCC was stranded without a third Democratic Commissioner from the start of the Biden Administration until this past Thursday, when the Senate confirmed Anna Gomez. While this long vacancy hamstrung the agency in many ways, one silver lining is that it helped the FCC’s leadership team escape the recent backlash from industry that the FTC has endured.
For the FCC, the problem is the past, not the present. The FCC’s political history is storied and complex, and a full accounting is beyond the scope of this piece. But the bottom line is that many conservatives and progressives harbor a fundamental mistrust of the agency (for very different reasons).
Even many lawmakers who take a favorable view of the agency are wary of expanding its mandate to extend beyond telecommunications. The FCC is working to confront the implications of AI within the scope of its current mandate, but it is unlikely that Congress will massively increase this mandate to include AI regulation writ large.
How About A New Agency?
If the FTC and the FCC are off the table for now, why not create a new regulatory agency to regulate AI and tech platforms more generally?
Senators Michael Bennet and Peter Welch have introduced legislation to create a new agency of this type. Just last month, Senators Elizabeth Warren and Lindsey Graham teamed up on a bipartisan proposal to create a new tech industry regulator. Both bills build on the work of Tom Wheeler and Harold Feld, who have thoroughly examined the policy and political considerations involved in such an effort.
Senator Graham’s support for the concept provides some momentum for bipartisan action. But let’s not forget what a herculean task it is to stand up a new regulatory agency in the modern era.
While Congress routinely created new agencies in the early twentieth century to oversee new forms of commerce and industry, doing so in the modern era is an entirely different animal. There’s an extensive history of proposals to reorganize the executive branch to better regulate digital technology and information, but no such agency has yet been created. As Feld has observed, “The chief barriers to creating a new agency are political . . . . That these objections are political makes them no less real.”
What It Takes to Create a New Agency
The Consumer Financial Protection Bureau was the last major regulatory agency created by Congress. It took the near-collapse of the U.S. economy (during the 2008 financial crisis) for Congress to summon the political will to stand it up. And the CFPB faced such vociferous political opposition that the courts are still weighing challenges to its very existence.
Similarly, the Department of Homeland Security – which was mostly a reorganization of existing agencies – only came into being after the nation suffered the unprecedented terrorist attacks of 9/11.
Coming back to the present, there is no indication that House Speaker Kevin McCarthy and House Republicans have any interest in standing up a new regulatory agency. Without their support, any effort to do so in this Congress is a non-starter.
This is the part of the discussion where an alphabet soup of agencies and offices enters the chat, and arguments are presented in favor of any number of well-known or obscure agencies serving as an AI regulator. The most frequently recommended candidates are the White House Office of Science and Technology Policy (OSTP), the National Institute of Standards and Technology (NIST), and the National Telecommunications and Information Administration (NTIA) – the latter two both part of the Commerce Department.
In the interest of brevity, I’ll refrain from running through a list of reasons why Congress is unlikely to empower any of these bodies to serve as a comprehensive AI regulator. The short answer is that each of them has a statutory mandate that is too far afield from AI or has not traditionally served as a “regulator” in the way that term is generally understood.
These agencies can – and almost certainly will – serve as partners with industry and academia in conducting research, compiling resources, and coordinating best practices. But none of them is likely to be tapped as a regulator. OSTP’s mandate continues to expand (as the role of technology in our economy and society expands) but OSTP is chronically underfunded and cannot be expected to play a role akin to the FTC or the FCC in regulating major components of the U.S. economy.
If you disagree, and believe there’s a unicorn agency out there that is well-positioned to play this role, please let me know. There may be an under-appreciated agency hiding in plain sight that has escaped my attention. But if that’s the case, we should also consider why Congress would entrust such great responsibility to an agency that has thus far been flying under the radar.
Where This Leaves Us
Members of Congress are grappling earnestly with the advent of generative AI and the acceleration of AI tools more generally, and many of them (especially Democrats) want to find a path forward on creating a new regulatory regime.
In the near future, we are likely to see a proliferation of agency-specific guidance and rulemaking and a reckoning in the courts over how AI tools interact with existing laws and regulations. As FTC Chair Lina Khan has pointedly observed, “there is no AI exception to the laws on the books.”
In early August, Senator Todd Young also hinted that Congress is likely to emphasize the collective efforts of a range of agencies. Recognizing that the selection or creation of an agency for regulating AI will be a “key decision point,” Senator Young added, “I don’t think we’re there yet.”
As discussed in the companion piece, the work continues, and at some point, the question will ripen. If and when Congress chooses to empower a single agency to serve as a comprehensive AI regulator, it will have to find someone to hold the reins.