Can Arms Races and Strategic Instability be Good?
I’m a first-time poster, having just joined the stellar team at CNSP. I’m excited to be part of the conversation, and thought the best way to start would be by critiquing the work of my new boss. We’ll see if I’m allowed a second post.
In his posting on the “Four Shibboleths,” Vipin warned of danger when we accept core arguments central to nuclear policy decisions without questioning their logical bases. This is a sound warning in any field. When we allow arguments to become catchphrases or slogans, we risk mis-extrapolating from seemingly self-evident truths, or dismissing alternative approaches without honestly engaging them. At the same time, shorthand is useful and conventional wisdom is often conventional for a reason. While there’s a clear value to reevaluating core arguments, we shouldn’t be surprised if, upon that reevaluation, we often come to the same conclusions.
This was the outcome for me as I took up Vipin’s challenge and asked myself why it is that I believe that arms races are dangerous, and that strategic stability is a worthy objective. The logic behind both conclusions is worth rehearsing, but I find, on reinspection, both of these shibboleths to be true. That doesn’t by any stretch mean that I’m defining these terms in the same way that others use them, of course, but I found it useful to go through the thought exercise and hope others do, too.
Can Arms Races Be Good?
Vipin questions whether adherents to his shibboleths might be lazily asserting that any course of action leading to an arms race is necessarily counter to U.S. or collective security needs. I agree that avoiding nuclear conflict might be a more concrete objective than avoiding an arms race, and also that we should define an “arms race” if we are to build arguments around the concept. However, I conclude that for a reasonable definition of “arms races,” they are likely to augment nuclear risks rather than reduce them, and are thus counter to the objective of decreasing the likelihood of nuclear conflict.
“Arms race” can certainly be a codeword, but as the cartoons in Pranay’s posting comically illustrate, there’s a pretty clear meaning behind the word. When we race, we move as fast as we can towards an objective (e.g., the other side of the playground, a larger nuclear force) in order to get there before our opponent does the same. The key factors defining any race, including a nuclear one, are that 1) we’re changing something about the status quo, and 2) we’re doing so as quickly as we can. Are these dangerous characteristics to attach to nuclear weapons? Yes!
First, the speed issue. Starting from the premise that our objective is to reduce – or at least avoid increasing – the risk of nuclear conflict, two challenges intrinsic to any nuclear policy discussion are uncertainties in the status quo (i.e., what risk of nuclear conflict do we live with now?) and uncertainties as to how that risk picture will shift in response to a given action. The further we move ourselves away from our current posture in whatever direction (more weapons, different weapons, different deployment modes), the larger this second uncertainty becomes. Invoking an arms race suggests a risk environment that changes faster than we can expect to understand it fully. This seems as transparently risky as driving fast down a winding mountain road in the fog.
Unlike a foggy mountain road, the nuclear road is racing us back. Two (or more) changing nuclear postures compound the uncertainties, but the fact of multiple participants also creates the possibility that one side perceives a temporary advantage that incentivizes a risky or otherwise undesirable action before that advantage disappears. For example, a participant in an arms race could extrapolate (or mis-extrapolate!) the trajectory its adversary is on, find a more favorable balance of forces at the moment than at the endpoint of that extrapolation, and feel compelled to make a move before that temporary advantage disappears. Requiring time-sensitive decisions with a partial or possibly inaccurate situational understanding is like passing a beer to your speed-demon mountain driver. That’s a bad idea on the road, and it’s a bad idea in nuclear policy.
Vipin acknowledges that this kind of “indisputably risky, rapid, and uncontrollable” approach to nuclear policy is undesirable, but questions whether this scenario might be a strawman, invoked for rhetorical effect but portraying an inaccurate picture of the world as it is. Pranay augments Vipin’s analysis on this point, providing historic examples in which reactions and counterreactions between adversaries move at the plodding pace of bureaucracy and long-term infrastructure projects. Does this help? To further torture my analogy, if we pass the driver a coffee, ride the brakes, and roll down the window to listen for oncoming traffic, is it still dangerous to drive down the mountain, or have we adequately addressed the risks conjured by use of the term “arms race?”
Slowing things down could certainly help. We’d reduce some of the racing risks, for example, if each step we take comes with the opportunity to understand where our competition now stands and where it’s heading, and if we’re able to communicate with the adversary to avoid misinformed, rushed decisions as they react. These benefits are predicated not on the slower speed of the cycle, but rather on the assumption that we are able to use that slower pace for communication and greater situational understanding. Absent these factors, the slower pace isn’t intrinsically safer; watching a car crash in slow motion doesn’t help the passengers.
So which of these versions of a slower, more deliberate arms competition better describes our current dynamic with respect to Russia or China? As channels for strategic dialogue have largely dried up or were never in place, it’s hard to argue that we are currently engaged in a deliberate cycle of actions in which mutual understanding is steadily advanced at each step. Yes, the pace is slow, and yes, we watch and learn what our adversaries are doing as they respond to our actions as best we’re able, but without communication and transparency the various sources of uncertainty build, along with the attendant risk of conflict. It’s hard enough to predict the course of our own efforts (as demonstrated by delays in U.S. modernization), much less those of an adversary, or potentially multiple adversaries, each responding to dynamic events on its own part. I fear we’re closer to a slow-motion viewing of a race than we are to a deliberate and informed competition. Communicating to both adversaries, but in particular to China, that a relationship without good communication introduces these kinds of dangers should be a priority.
My conclusion is that an arms race, in the abstract, is damaging to our interest in avoiding nuclear conflict if an action/reaction cycle plays out faster than we are able to understand and react to the evolving situation. Otherwise the uncertainties as to where our strategic relationships are heading will grow, as will the likelihood of negative outcomes. It is helpful that our nuclear infrastructure moves at a plodding pace, but this only addresses the risks if we manage to make use of that slower pace. It’s not clear that we are able to do so in our current reality. I’ve put to one side the issue of the financial costs of an arms race, but I think there’s a formulation of that element in terms of a very different set of risks. With this conception of what makes a “race” a “race,” I remain of the view that arms races should be avoided.
I’m talking very abstractly here, while of course policymakers have to live in the real, tangible world. When applying any of the above to current dynamics, key questions quickly arise: what if we’re already in an arms race of this sort? If the other guy started it, are we expected to be the bigger country and stand back while they race ahead? What if the two sides of a race don’t have the same understanding of when the race is over? The questions are fair, and I view them as variations on an effort to understand what options we have in a bad scenario. The point that I’m making, consistent with Vipin’s shibboleth, is that this scenario comes with serious risks, and that we should be open-eyed about that as we pursue action to end the race rather than accelerate it.
What About an Arms Walk?
That’s the “racing” part; the second issue with arms races is the arms! Vipin also questions the assumption that higher numbers of nuclear weapons necessarily increase nuclear risks, i.e., even if we are pursuing an arms walk rather than an arms race, as defined above. Again, I take as common ground with Vipin that the global objective is to reduce the risk that deterrence fails and nuclear conflict results. If there are specific increases in nuclear weapons that clearly reduce those risks, I’m all for them. Again, while embracing the utility of revalidating assumptions, I conclude that more nuclear weapons will likely translate into a higher probability of such deterrence failure and conflict. Whether the effect is big or small (and thus how much we should be prepared to sacrifice with respect to other objectives in order to avoid increasing numbers) depends on the specifics, but I’m interested in revisiting the more general trendlines to the extent that’s possible.
To that end, I’ll start with the easiest example: I think most would agree that increasing the number of nuclear weapons in the world by introducing new nuclear weapons-possessing states – i.e., proliferation – would increase the nuclear risks we seek to minimize. Vipin doesn’t suggest otherwise, but I think it’s worth rehearsing the mechanism for enhanced risk in this scenario.
When deterring a nuclear adversary, you accept some probability that your deterrent will fail and nuclear conflict will result. Adding deterrence participants compounds this risk, and quickly: three adversaries share three times as many deterrence dyads as two, and if actions taken to manage risk in one dyad can negatively impact risk in another, the total risk accumulates even more rapidly as you increase the number of states with nuclear weapons. This is probably more scrutiny than the example needs, but the point is that there is a category of risk that is attached to deterrence relationships, and that adding relationships increases the total risk of nuclear conflict in the system as a whole.
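The combinatorics behind the dyad point can be sketched in a few lines of Python. This is a toy illustration only; the function name is my own, and the numbers count pairwise relationships, not any real assessment of risk:

```python
from math import comb

def deterrence_dyads(n_states: int) -> int:
    # Each pair of nuclear-armed states forms one deterrence dyad
    return comb(n_states, 2)

print(deterrence_dyads(2))  # 1 dyad between two adversaries
print(deterrence_dyads(3))  # 3 dyads: three times as many
print(deterrence_dyads(4))  # 6 dyads: pairwise relationships grow quadratically
```

The quadratic growth is the whole point: each new participant adds a relationship with every existing participant, not just one.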
How does the risk picture change if we add weapons within an existing nuclear state, rather than in a new one? It depends. Consider, for example, that the new weapons are different from the existing ones, i.e., usable in escalation scenarios that weren’t previously accessible. When we compute the risk of a failure in deterrence, we have to add up the risk in each available pathway to that result; if a new weapon creates a new pathway to conflict between two adversaries, then it increases the total probability of nuclear conflict, just as adding a new adversary did in the previous example. This effect could be small, even negligible, especially if the size and diversity of the arsenal were already large. But what if the likelihood of deterrence failing in the newly-added scenario is significantly higher than in scenarios previously accessible? Then the overall risk could increase significantly, even with relatively small numbers of new weapons. While computing whether the additional risk is large or small depends on a lot of specifics, what we do know in general is that the additional risk is not zero.
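To make the “add up the risk in each pathway” step concrete: if each escalation pathway independently carries some probability of deterrence failure, the total failure risk is one minus the product of the per-pathway survival probabilities, and it can only grow as pathways are added. A minimal sketch with invented numbers (nothing here is an estimate of real-world risks):

```python
def total_failure_risk(pathway_risks):
    # Probability that at least one independent pathway to deterrence
    # failure is realized: 1 - product of (1 - p_i) over all pathways
    survival = 1.0
    for p in pathway_risks:
        survival *= 1.0 - p
    return 1.0 - survival

existing = total_failure_risk([0.010, 0.020])         # established pathways
expanded = total_failure_risk([0.010, 0.020, 0.002])  # one new, low-risk pathway
print(expanded > existing)  # True: the added risk may be small, but it is not zero
```

The same arithmetic captures the converse point made later in the post: total risk rises steeply if the newly-added pathway’s probability is large relative to the existing ones.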
With the benefit of these cartoon examples, what can we conclude about a more realistic example of an increase of something like 500 deployed weapons on top of current levels? Here there are no new deterrence partners or escalation pathways, as in the examples above, and we are probably closer to what Vipin had in mind.
The size of the effect will again depend dramatically on the specifics, but I still think there will be a net increase to risk in every case unless you assess the current level of risk to be zero. Why?
Because just as there are categories of risk tied to a deterrence relationship or to an escalation pathway, there are also categories of risk that attach to individual weapons. In that case, two such weapons create twice the risk that one does. A worst-case example can help demonstrate that this category of risk exists: consider a state with an arsenal at high readiness levels, perhaps weapons that are forward-deployed with questionable provisions for maintaining control, constantly moving, or even pre-delegated for launch. Maybe the arsenal was built in such a slapdash way that there is risk of component failures, leading to safety issues. Adding more weapons to this nightmarish scenario in which risk of theft, accident, or uninstructed use is relatively high does not add escalation pathways or deterrence dyads; the risk is associated with individual weapons, and thus a 10% increase in the arsenal size would raise the total risk of nuclear use by something like 10%.
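The per-weapon arithmetic here is linear scaling in the small-risk limit: if each weapon independently carries some small probability of theft, accident, or unauthorized use, the arsenal-wide risk grows roughly in proportion to the number of weapons. A toy sketch (the per-weapon probability is invented purely for illustration):

```python
def arsenal_incident_risk(n_weapons: int, per_weapon_risk: float) -> float:
    # Probability of at least one weapon-level incident, treating
    # incidents at each weapon as independent
    return 1.0 - (1.0 - per_weapon_risk) ** n_weapons

q = 1e-5  # invented per-weapon incident probability, not an estimate
base = arsenal_incident_risk(1000, q)
grown = arsenal_incident_risk(1100, q)
print(grown / base)  # close to 1.1: a 10% larger arsenal carries ~10% more of this risk
```

The linearity holds only while the per-weapon risk is small, which is exactly the post’s caveat: driving that per-weapon probability toward zero is what keeps arsenal growth from translating directly into added risk.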
This kind of risk indeed grows monotonically with arsenal size. Of course the U.S. arsenal, for example, is far from the cartoon picture above. Careful thought into posture and procedure and safety, informed by decades of experience, seeks to drive this category of risk as close to zero as possible. But unless we succeed in reaching zero risk of this category, an expanding arsenal accumulates additional risk.
More concerning is how this kind of risk stands in other arsenals. Lacking our experience and any relevant dialogue, how low should we assume the weapon-level risk is in the DPRK? In a rapidly growing Chinese arsenal? The fact that our own experience has involved close calls and important lessons learned suggests that such risks should be expected on the part of other nuclear-armed states. Even if our own weapon-level risk were confidently estimated to be near-zero, if an increase to our arsenal prompts reciprocal growth by less capable adversaries, the risk of deterrence failure will still grow, and at a level set by the least-safe participant.
Where does this leave us on the arms race/buildup shibboleth? Vipin rightly points out that poor statistics make it difficult to compute with any confidence what the risks are of various pathways to nuclear conflict. But unless you are prepared to argue that those risks are zero not only for us but also for our adversaries, I conclude that indeed larger arsenals carry higher risks. Speeding up the relevant decision-making in a proper arms race, rather than a carefully staged increase, compounds this with new dangers.
There are two enormously important caveats to this. The first is that external security dynamics might well justify a buildup or even a race; a significant shift in relationships between two nuclear powers could be an example, the arrival of hostile extraterrestrial forces another. It’s fine in such a case for a decision-maker to take action in the nuclear space in order to address that separate objective; I just think they should be clear that in the narrower nuclear lane they are increasing by some amount the likelihood of nuclear conflict, and should accordingly proceed cautiously.
The second caveat is that a change on one side could respond to some extant feature of the deterrence relationship in a way that reduces a risk that’s already there. Sure, if Side A adds a new or expanded nuclear capability it will introduce new risk, but what if it makes less likely the use of an existing Side B capability, previously unaddressed, for a net reduction in risk? I don’t at all doubt this is possible, but I do think it would demand a really careful understanding of the source of the larger, existing risk. I’d also think you’d want good confidence that you and the adversary see the relevant risks in the same way, and that once a counter-reaction is taken into account, the risk arithmetic still adds up favorably. That’s not impossible, but does suggest a need for communication and caution. I think this slice of the issue is where most real world examples will sit, along with the greatest opportunity for further thinking.
Seeking Strategic Instability?
Just as I’m intuitively against an arms race, I instinctively flinch at the idea of strategic instability. Again, I accept Vipin’s challenge to explore why.
In Vipin’s presentation, the question of the value of “strategic stability” is posed as a choice between a damage limitation approach (“optimal instability”) and mutually-assured destruction (defined for nuclear purposes as identical to “strategic stability”). This debate is well-trod ground at Strategic Simplicity, but whatever your answer to the MAD/damage limitation debate, even as a narrowly-focused nuclear guy I see advantages to “strategic stability” that I think sit outside that particular doctrinal discussion.
Consider what the opposite of strategic stability would look like – i.e., strategic instability. As a partisan of this particular shibboleth, I don’t imagine such a state to be one in which reliable second strike isn’t assured, but rather one that would have a lot in common with the proper “arms race” described above: pressure and incentives on both sides to move away from the nuclear status quo, i.e., by expanding or improving nuclear capabilities on both sides, or, alternatively, watching as one side’s unilateral advantage is eroded or erased. Such a situation would be dynamic, “unstable.” Whether a damage limitation approach is stable or unstable depends on implementation; same for MAD.
The stability I’d advocate for represents an equilibrium, in which nuclear-related pressures and forces balance to zero and neither side sees a need for significant changes. Examples could be MAD-based, but they could also include a nuclear disarmament scenario or a million other solutions. The benefit of this form of stability is the absence of pressures to deviate; absent underlying changes, a stable situation today remains stable tomorrow. Over time, this steadily builds confidence that the situation is durable and doesn’t contain the seeds of major conflict risk.
A world that is unstable in this sense, on the other hand, eludes such confidence. It constantly evolves away from the familiar, potentially into uncharted territory, with unknown risks. The uncertainty as to what level of risk we are living with grows as actions by us, or by our adversaries, take us further from the status quo, forcing us to extrapolate over larger distances.
Imagine a risk-likelihood space, in which for every possible combination of parameters like U.S. and adversary arsenal compositions, sizes, and doctrines, we assign a risk of deterrence failing and thus nuclear conflict. If we change one of those parameters (e.g., by increasing the size of our arsenal, recognizing that this might in turn prompt a change to another parameter, like the arsenal size of an adversary), we will move to a different point in this space, at which the risks will be different. By endorsing strategic stability as a worthy goal, what I mean is that there are nearby locations in this risk landscape that could be more dangerous than the position we sit in today; we should be wary of moving our position without knowing in advance what risk landscape such a step will bring us to.
Does this suggest paralysis? It shouldn’t. Strategic equilibria aside, the world is dynamic and requires adjustment in our approach, including with respect to nuclear weapons. China’s nuclear build-up requires a reaction precisely because it potentially changes the underlying security picture, necessitating that we find our way to a new equilibrium. Adherence to the strategic stability mantra doesn’t prevent us from reacting to this build-up, it explains why the build-up is dangerous in the first place.
Returning to our risk landscape, imagine that the status quo is a point on a steep mountain ridge, falling away sharply on both sides to regions of elevated nuclear risk. Standing on the crest we are at equilibrium, relatively safe. While a mis-step could lead us to disaster, we need not stay rooted to our spot. Indeed, one can imagine a safe path along the ridgeline, each point on which represents a different configuration of forces, but each of which also remains in equilibrium. Smart changes to our capabilities or doctrine walk that ridge; risky ones step off of the ridge into thin air. By prioritizing “strategic stability,” what I mean is that it’s best to know which is which before putting your foot down.
This conception has important implications for different discussions in the current nuclear community. Champions of disarmament must identify a course from here to there that doesn’t cross through serious instability, but rather walks this kind of ridgeline through local equilibria and safely down to zero. Those who argue for arsenal increases in response to Chinese actions must also have confidence that such a step, and the ones that follow it, keep us on the trail and not falling onto the rocks.
Nothing that Vipin says is at odds with this conception of the value of strategic stability. His laydown of the stability-instability paradox is clear and informative, as is the explication of the relevance of damage limitation. I’m wary of the prospect of having your cake (i.e., enjoying stability above the threshold in the meaningful sense of avoiding nuclear war) and eating it, too (i.e., enjoying stability at the same time below the nuclear threshold), but I think that’s a different set of questions.
There is an important downside to the kind of strategic stability that I describe above: one benefit of inhabiting a stable equilibrium is that it gives us the freedom to turn away from otherwise terrifying strategic problems and focus on other interests. This benefit comes with a risk, as it could lead us to lose sight of the logic that brought us to that stability in the first place. In this case, we might forget what the words of a shibboleth mean, retaining only that their pronouncement separates one group from another, and a sense of which side of that divide we belong to. In each of the areas Vipin has highlighted, whether making changes to our arsenal, engaging in an arms race, or taking action that could disrupt the strategic equilibrium at which we currently sit, there are smart ways and less smart ways to proceed. We should be alert to the danger that, if we allow our pre-coded positions on these or other shibboleths to determine our positions on specific policy questions, our ability to navigate difficult issues in smart ways will suffer. I, for one, appreciate Vipin’s nudge to revisit what lies below the terminology, even if on balance my position on some of his specific examples remains unchanged.



Bold move Cotton, let’s see if it pays off! :)
This is precisely the kind of rigorous and thoughtful debate we hope to encourage at CNSP. Stay tuned for the pod-fight!
It's called Strategic "Simplicity." Trying to imagine your "risk-likelihood space" is giving me a headache - though that may be another symptom of the flu I've got...
Can we imagine the mountain as a sphere? You know, to make the math easier?
Does our situational understanding of the adjustments adversaries are making in their nuclear arsenals greatly improve if we give the driver a non-alcoholic beer?
I was worried that you forgot about entropy with "a stable situation today remains stable tomorrow," but by the end of this post you somehow managed to reapply the laws of thermodynamics. Bravo!
All kidding aside, congratulations on the new appointment!