Policy Analysis

(Photo: Donald Rumsfeld with President Ford in 1975, when Rumsfeld was White House Chief of Staff. Universal History Archive, via Getty Images)
Rediscovering the Rumsfeld Matrix
A helpful guide for navigating the fog of AI
October 10, 2023

During a press briefing in February 2002 – roughly a year before U.S.-led coalition forces invaded Iraq – then-Secretary of Defense Donald Rumsfeld articulated a framework for categorizing knowledge:
“There are things we know we know. We also know there are known unknowns; that is to say, we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know.”
Rumsfeld’s pronouncement became the stuff of legend, for better or for worse, and has since been memorialized as the “Rumsfeld matrix.”
The essence of the Rumsfeld matrix is that we can and should account for blind spots we do not yet know we have.
This typology of mystery – originally formulated by psychologists Joseph Luft and Harrington Ingham – is a helpful guide for navigating the overwhelming uncertainty we face in forecasting the path of AI, especially in the medium-to-long term.
Let’s bypass the “known knowns” – which are well covered by standard methods of analysis – and focus on the two remaining categories: the “known unknowns” and the “unknown unknowns.”
Known Unknowns, Part 1 – Beneficial Innovations
AI will almost certainly lead to beneficial innovations that we cannot foresee at this time.
When researchers are on the cusp of communicating with animals in their own language and stroke victims are speaking to loved ones without moving their facial muscles, something extraordinary is at play.
One oft-discussed example of technological advances leading to unforeseen applications is the explosion of ride-sharing platforms like Uber and Lyft, which have overhauled transportation across the world. These platforms rely on the widespread use of smartphones, GPS, and fast mobile networks.
When advances were made in each of these technologies, few people intuited that ride-sharing would gain widespread adoption. It took inspired entrepreneurs to put the new technologies to use.
So too for AI. With generative AI, we are now seeing an explosion of tailored programs and applications that sit on top of foundation models. We don’t yet know what the best use cases will be, but we do know they are coming.
Known Unknowns, Part 2 – Concrete Harms
Many of the potential harms we envision – including security threats, scams, and discrimination turbocharged by AI – are also best categorized as “known unknowns.”
While we understand the basic ways these harms can arise out of AI systems, we know very little about how the dangers may play out in the coming years. The opaque “black box” nature of many AI systems makes it even more difficult to accurately forecast the prevalence and severity of different categories of harm.
We also know that regulation itself can have unintended consequences. Even the most well-crafted policies have second-order impacts that are difficult to predict – especially when the subject matter is technical and the use cases are evolving quickly.
If formulating AI regulation were simply a matter of balancing these known unknowns – both positive and negative – then the risk/reward framework discussed in the previous article would work well. But when uncertainty is the rule rather than the exception, this type of model breaks down. We need to look beyond economics.
Unknown Unknowns – The Realm of Uncertainty
Rumsfeld articulated his matrix in the context of war, which the Prussian military theorist Carl von Clausewitz famously deemed a “realm of uncertainty.”
In the Rumsfeld matrix typology, the “unknown unknowns” are the things we don’t know we don’t know. The concept itself is unnerving.
Because AI systems are complex and the pace of progress is accelerating, it is appropriate to assume the presence of unknown unknowns and to formulate policy accordingly.
What is the proper response in the face of such uncertainty? Some technologists famously called for a six-month pause on the development of frontier systems back in March 2023 – though the call itself was sharply criticized by many leading thinkers and policymakers for a host of reasons.
Since few people in a position to implement such a “pause” are inclined to do so, the debate over its merits is at best academic. We need to consider other responses to uncertainty.
Uncertainty Heightens the Need for Safety Guardrails
The recognition that there is likely much about AI “we don’t know we don’t know” has generated near-unanimous support for basic, mandatory safety guardrails.
Most founders, investors, and businesspeople want some level of regulation because they recognize that safety and security are baseline requirements for widespread public acceptance of AI systems. Tools like “circuit breakers” that can interrupt autonomous processes gone awry are especially useful under these circumstances.
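To make the “circuit breaker” idea concrete, here is a minimal sketch in Python. It is an illustration under stated assumptions, not a description of any deployed system: the anomaly score, the thresholds, and the take_step stand-in are all hypothetical.

```python
import random


class CircuitBreaker:
    """Trips after too many consecutive anomalous steps, halting an autonomous loop."""

    def __init__(self, threshold: float, max_consecutive: int = 3):
        self.threshold = threshold              # anomaly score treated as a fault
        self.max_consecutive = max_consecutive  # consecutive faults before tripping
        self.consecutive_faults = 0
        self.tripped = False

    def record(self, anomaly_score: float) -> None:
        """Update the fault count; trip the breaker after repeated faults in a row."""
        if anomaly_score > self.threshold:
            self.consecutive_faults += 1
            self.tripped = self.consecutive_faults >= self.max_consecutive
        else:
            self.consecutive_faults = 0


def take_step(step: int) -> float:
    """Hypothetical stand-in for one step of an autonomous process.

    Returns an anomaly score in [0, 1); a real system would compute this
    from monitoring signals rather than random numbers.
    """
    return random.random()


breaker = CircuitBreaker(threshold=0.9)
for step in range(1000):
    score = take_step(step)
    breaker.record(score)
    if breaker.tripped:
        # Halt the autonomous process and hand control back to a human.
        print(f"Circuit breaker tripped at step {step}; pausing for human review.")
        break
```

The design choice worth noting is that the breaker trips on consecutive faults rather than a single one, which reduces false alarms while still guaranteeing a hard stop when something goes persistently wrong.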
In this way, the purported “innovation vs. safety” tradeoff that animates so many debates over regulation is shown to be overly simplistic.
Lessons From Economic History
In many industries, regulation provides the safety and security necessary for users, consumers, and investors to have confidence that they can engage in market transactions without entertaining an unacceptable level of risk. In other words, “buyer beware” is not such a great paradigm for creating or sustaining a major new consumer market.
The examples of this dynamic are legion. The United States has a robust market for pharmaceuticals not in spite of FDA regulation but in large part because of it. Parents spend billions of dollars on toys without worrying about hidden hazards because the CPSC oversees the market. And the market for publicly traded securities in the United States is huge and liquid because the mandatory disclosure regime gives investors confidence that they’re not investing in a Ponzi scheme.
A Common Starting Point
The majority of American adults are already more concerned than excited about the increased use of artificial intelligence. And while hundreds of millions of consumers have willingly experimented with AI tools, long-term adoption by a mass market is still tentative.
A few highly publicized incidents of public harm could quickly chill the willingness of both consumers and corporate customers to integrate AI tools into daily use. That is why even those who stand to profit from the widespread dissemination of AI tools want reasonable safety guardrails in place.
Of course, the precise nature of how such guardrails should be constructed and enforced will be hotly contested. But it’s helpful – and possibly “historic” – that most actors agree that baseline safety regulations are both necessary and conducive to forming a lasting market for AI systems.
In the coming years, we will gain more familiarity with the types of risk presented by AI. As that happens, many risks will cease to be “unknown unknowns” and policymakers will be better positioned to craft specific safety measures. Until that time, we can recognize that our blind spots likely extend beyond those we can currently identify.