U.S.S. Racine, serving as a target ship for a sinking exercise on 12 July 2018. [YouTube Screencap/The Drive]

The U.S. Navy has uploaded video of a recent sinking exercise (SINKEX) conducted during the 2018 Rim Of The Pacific (RIMPAC) exercises, hosted biennially by the U.S. Pacific Fleet based in Honolulu, Hawaii. As detailed by Tyler Rogoway in The Drive, the target of the SINKEX on 12 July 2018 was the U.S.S. Racine, a Newport-class tank landing ship (LST) decommissioned 25 years ago.

As dramatic as the images are, the interesting thing about this demonstration was that it included a variety of land-based weapons firing across domains to strike a naval target. The U.S. Army successfully fired a version of the Naval Strike Missile that it is interested in acquiring, as well as a half-dozen High-Mobility Artillery Rocket System [HIMARS] rounds. Japanese troops fired four Type 12 land-based anti-ship missiles at the Racine as well. For good measure, an Australian P-8 Poseidon also hit the target with an air-launched AGM-84 Harpoon.

The coup de grâce was provided by a Mk-48 torpedo launched from the Los Angeles-class nuclear fast attack submarine USS Olympia, which broke the Racine’s back and finally sank it an hour later.

Security On The Cheap: Whither Security Force Assistance (SFA)?


A U.S. Army Special Forces weapons sergeant observes a Niger Army soldier during marksmanship training as part of Exercise Flintlock 2017 in Diffa, Niger, February 28, 2017. [U.S. Army/SFC Christopher Klutts/AFRICOM]

Paul Staniland, a professor of political science at the University of Chicago, has a new article in The Washington Post’s Monkey Cage blog that contends that the U.S. is increasingly relying on a strategy of “violence management” in dealing with the various counterinsurgency, counterterrorism, and stability conflicts (i.e. “small wars”) it is involved with around the world.

As he describes it,

America’s “violence management” strategy relies on light ground forces, airpower and loose partnerships with local armed actors. Its aim is to degrade and disrupt militant organizations within a chaotic, fractured political landscape, not to commit large numbers of forces and resources to building robust new governments.

…Violence management sidesteps politics in favor of sustained military targeting. This approach takes for granted high levels of political disorder, illiberal and/or fractured local regimes, and protracted conflicts. The goal is disrupting militant organizations without trying to build new states, spur economic development, or invest heavily in post-conflict reconstruction.

…It has three core elements: a light U.S. ground force commitment favoring special forces, heavy reliance on airpower and partnerships of convenience with local militias, insurgents, and governments.

…Politically, this strategy reduces both costs and commitments. America’s wars stay off the front pages, the U.S. can add or drop local partners as it sees fit, and U.S. counterterror operations remain opaque.

Staniland details the risks associated with this strategy but does not assess its effectiveness. He admits to ambivalence on that in an associated discussion on Twitter.

Whither SFA?

Partnering with foreign governments, organizations, and fighters to counter national security threats is officially known by the umbrella term Security Force Assistance in U.S. government policy terminology. It is intended to help defend host nations from external and internal threats, and encompasses foreign internal defense (FID), counterterrorism (CT), counterinsurgency (COIN), and stability operations. The U.S. has employed this approach with varying success since World War II.

Has it been effective? Interestingly enough, this question has not been seriously examined. The best effort so far is a study by Stephen Biddle, Julia Macdonald, and Ryan Baker, “Small Footprint, Small Payoff: The Military Effectiveness of Security Force Assistance,” published in the Journal of Strategic Studies earlier this year. It concluded:

We find important limitations on SFA’s military utility, stemming from agency problems arising from systematic interest misalignment between the US and its typical partners. SFA’s achievable upper bound is modest and attainable only if US policy is intrusive and conditional, which it rarely is. For SFA, small footprints will usually mean small payoffs.

A Mixed Recent Track Record

SFA’s recent track record has been mixed. It proved conditionally successful in countering terrorists and insurgents in the Philippines and in the coalition effort to defeat Daesh in Iraq and Syria; and it handed a black eye to Russian-sponsored paramilitary forces in Syria earlier this year. However, a train-and-advise mission for moderate Syrian rebels failed in 2015; four U.S. Army Special Forces soldiers died in an ambush during a combined patrol in Niger in October 2017; there are recurring cases of U.S.-trained indigenous forces committing human rights abuses; and the jury remains out on the fate of Afghanistan.

The U.S. Army’s proposed contribution to SFA, the Security Force Assistance Brigade (SFAB), is getting its initial try-out in Afghanistan right now. Initial reports indicate that it has indeed boosted SFA capacity there. What remains to be seen is whether that will make a difference. The 1st SFAB suffered its first combat casualties earlier this month when Corporal Joseph Maciel was killed and two others were wounded in an insider attack at Tarin Kowt in Uruzgan province.

Will a strategy of violence management prove successful over the longer term? Stay tuned…

TDI Friday Read: Measuring The Effects of Combat in Cities


Between 2001 and 2004, TDI undertook a series of studies on the effects of urban combat in cities for the U.S. Army Center for Army Analysis (CAA). These studies examined a total of 304 cases of urban combat at the divisional and battalion level that occurred between 1942 and 2003, as well as 319 cases of concurrent non-urban combat for comparison.

The primary findings of Phases I-III of the study were:

  • Urban terrain had no significantly measurable influence on the outcome of battle.
  • Attacker casualties in the urban engagements were less than in the non-urban engagements and the casualty exchange ratio favored the attacker as well.
  • One of the primary effects of urban terrain is that it slowed opposed advance rates. The average advance rate in urban combat was one-half to one-third that of non-urban combat.
  • There is little evidence that combat operations in urban terrain resulted in a higher linear density of troops.
  • Armor losses in urban terrain were the same as, or lower than armor losses in non-urban terrain. In some cases it appears that armor losses were significantly lower in urban than non-urban terrain.
  • Urban terrain did not significantly influence the force ratio required to achieve success or effectively conduct combat operations.
  • Overall, it appears that urban terrain was no more stressful a combat environment during actual combat operations than was non-urban terrain.
  • Overall, the expenditure of ammunition in urban operations was not greater than that in non-urban operations. There is no evidence that the expenditure of other consumable items (rations; water; or fuel, oil, or lubricants) was significantly different in urban as opposed to non-urban combat.
  • Since advance rates in urban combat were significantly reduced, the effects of urban terrain on advance rates and on the time required for operations were clearly interrelated. The primary impact of urban combat appears to have been to slow the tempo of operations.

In order to broaden and deepen understanding of the effects of urban combat, TDI proposed several follow-up studies. To date, none of these have been funded:

  1. Conduct a detailed study of the Battle of Stalingrad. Stalingrad may also represent one of the most intense examples of urban combat, so may provide some clues to the causes of the urban outliers.
  2. Conduct a detailed study of battalion/brigade-level urban combat. This would begin with an analysis of battalion-level actions from the first two phases of this study (European Theater of Operations and Eastern Front), added to the battalion-level actions completed in this third phase of the study. Additional battalion-level engagements would be added as needed.
  3. Conduct a detailed study of the outliers in an attempt to discover the causes for the atypical nature of these urban battles.
  4. Conduct a detailed study of urban warfare in an unconventional warfare setting.

Details of the Phase I-III study reports and conclusions can be found below:

Measuring The Effects Of Combat In Cities, Phase I

Measuring the Effects of Combat in Cities, Phase II – part 1

Measuring the Effects of Combat in Cities, Phase II – part 2

Measuring the Effects of Combat in Cities, Phase III – part 1

Measuring the Effects of Combat in Cities, Phase III – part 2

Measuring the Effects of Combat in Cities, Phase III – part 2.1

Measuring the Effects of Combat in Cities, Phase III – part 3

Urban Phase IV – Stalingrad

Urban Combat in War by Numbers

Dupuy’s Verities: The Utility Of Defense


Battle of Franklin, 1864 by Kurz and Allison. Restoration by Adam Cuerden [Wikimedia Commons]

The third of Trevor Dupuy’s Timeless Verities of Combat is:

Defensive posture is necessary when successful offense is impossible.

From Understanding War (1987):

Even though offensive action is essential to ultimate combat success, a combat commander opposed by a more powerful enemy has no choice but to assume a defensive posture. Since defensive posture automatically increases the combat power of his force, the defending commander at least partially redresses the imbalance of forces. At a minimum he is able to slow down the advance of the attacking enemy, and he might even beat him. In this way, through negative combat results, the defender may ultimately hope to wear down the attacker to the extent that his initial relative weakness is transformed into relative superiority, thus offering the possibility of eventually assuming the offensive and achieving positive combat results. The Franklin and Nashville Campaign of our Civil War, and the El Alamein Campaign of World War II are examples.

Sometimes the commander of a numerically superior offensive force may reduce the strength of portions of his force in order to achieve decisive superiority for maximum impact on the enemy at some other critical point on the battlefield, with the result that those reduced-strength components are locally outnumbered. A contingent thus reduced in strength may therefore be required to assume a defensive posture, even though the overall operational posture of the marginally superior force is offensive, and the strengthened contingent of the same force is attacking with the advantage of superior combat power. A classic example was the role of Davout at Auerstadt when Napoléon was crushing the Prussians at Jena. Another is the role played by “Stonewall” Jackson’s corps at the Second Battle of Bull Run. [pp. 2-3]

This verity is both derivative of Dupuy’s belief that the defensive posture is a human reaction to the lethal environment of combat, and his concurrence with Clausewitz’s dictum that the defense is the stronger form of combat. Soldiers in combat will sometimes reach a collective conclusion that they can no longer advance in the face of lethal opposition, and will stop and seek cover and concealment to leverage the power of the defense. Exploiting the multiplying effect of the defensive is also a way for a force with weaker combat power to successfully engage a stronger one.
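Dupuy operationalized this multiplying effect as a posture factor credited to the defender’s combat power. As a rough illustration only (the factor values below follow commonly cited QJM-derived figures but should be treated as assumptions, not definitive numbers), a minimal sketch:

```python
# Illustrative Dupuy-style posture multipliers. The values here (hasty
# defense 1.3, prepared 1.5, fortified 1.6) are assumptions drawn from
# QJM-derived discussions, not authoritative figures.
POSTURE_FACTORS = {"attack": 1.0, "hasty": 1.3, "prepared": 1.5, "fortified": 1.6}

def force_ratio_with_posture(attacker_power: float, defender_power: float,
                             defender_posture: str = "hasty") -> float:
    """Attacker-to-defender combat power ratio after crediting the
    defender with the multiplying effect of defensive posture."""
    return attacker_power / (defender_power * POSTURE_FACTORS[defender_posture])
```

On this arithmetic, a defender with two-thirds of the attacker’s raw strength in a prepared defense fights at parity, which is the sense in which defensive posture “at least partially redresses the imbalance of forces.”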

It also relates to the principle of war known as economy of force, as defined in the 1954 edition of the U.S. Army’s Field Manual FM 100-5, Field Service Regulations, Operations:

Minimum essential means must be employed at points other than that of decision. To devote means to unnecessary secondary efforts or to employ excessive means on required secondary efforts is to violate the principle of both mass and the objective. Limited attacks, the defensive, deception, or even retrograde action are used in noncritical areas to achieve mass in the critical area.

These concepts are well ingrained in modern U.S. Army doctrine. FM 3-0 Operations (2017) summarizes the defensive this way:

Defensive tasks are conducted to defeat an enemy attack, gain time, economize forces, and develop conditions favorable for offensive or stability tasks. Normally, the defense alone cannot achieve a decisive victory. However, it can set conditions for a counteroffensive or counterattack that enables Army forces to regain and exploit the initiative. Defensive tasks are a counter to enemy offensive actions. They defeat attacks, destroying as much of an attacking enemy as possible. They also preserve and maintain control over land, resources, and populations. The purpose of defensive tasks is to retain key terrain, guard populations, protect lines of communications, and protect critical capabilities against enemy attacks and counterattacks. Commanders can conduct defensive tasks to gain time and economize forces, so offensive tasks can be executed elsewhere. [Para 1-72]

Another Look At The Role Of Russian Mercenaries In Syria


Russian businessman Yevgeny Prigozhin and Russian President Vladimir Putin. Prigozhin—who reportedly has ties to Putin, the Russian Ministry of Defense, and Russian mercenaries—was indicted by Special Counsel Robert Mueller on 16 February 2018 for allegedly funding and guiding a Russian government effort to interfere with the 2016 U.S. presidential election. [Alexei Druzhinin/AP]

As I recently detailed, many details remain unclear regarding the 7 February 2018 engagement in Deir Ezzor, Syria, between Russian mercenaries, Syrian government troops, and militia fighters and U.S. Special Operations Forces, U.S. Marines, and their partnered Kurdish and Syrian militia forces. Aside from questions as to just how many Russians participated and how many were killed, the biggest mystery is why the attack occurred at all.

Kimberly Marten, chair of the Political Science Department at Barnard College and director of the Program on U.S.-Russia Relations at Columbia University’s Harriman Institute, takes another look at this in a new article on War on the Rocks.

Why did Moscow initially deny any Russians’ involvement, and then downplay the casualty numbers? And why didn’t the Russian Defense Ministry stop the attackers from crossing into the American zone, or warn them about the likelihood of a U.S. counterstrike? Western media have offered two contending explanations: that Wagner acted without the Kremlin’s authorization, or that this was a Kremlin-approved attack that sought to test Washington while maintaining plausible deniability. But neither explanation fully answers all of the puzzles raised by the publicly available evidence, even though both help us understand more generally the opaque relationship between the Russian state and these forces.

After reviewing what is known about the relationship between the Russian government and the various Russian mercenary organizations, Marten proposes another explanation.

A different, or perhaps additional, rationale takes into account the ruthless infighting between Russian security forces that goes on regularly, while Russian President Vladimir Putin looks the other way. Russian Defense Ministry motives in Deir al-Zour may actually have centered on domestic politics inside Russia — and been directed against Putin ally and Wagner backer Yevgeny Prigozhin.

She takes a detailed look at the institutional relationships in question and draws a disquieting conclusion:

We may never have enough evidence to solve definitively the puzzles of Russian behavior at Deir al-Zour. But an understanding of Russian politics and security affairs allows us to better interpret the evidence we do have. Since Moscow’s employment of groups like Wagner appears to be a growing trend, U.S. and allied forces should consider the possibility that in various locations around the world, they might end up inadvertently, and dangerously, ensnared in Russia’s internal power struggles.

As with the Institute for the Study of War’s contention that the Russians are deliberately testing U.S. resolve in the Middle East, Marten’s interpretation that the actions of various Russian mercenary groups might be the result of internal Russian politics points to the prospect of further military adventurism only loosely connected to Russian foreign policy direction. Needless to say, the implications of this are ominous in a region of the world already beset by conflict and regional and international competition.

Chris Lawrence Interviewed About America’s Modern Wars


TDI President Chris Lawrence was recently interviewed on The Donna Seebo Show about his 2015 book, America’s Modern Wars: Understanding Iraq, Afghanistan and Vietnam.

The 27 June 2018 interview can be listened to below.


Back To The Future: The Return Of Sieges To Modern Warfare


Ruins of the northern Syrian city of Aleppo, which was besieged by Syrian government forces from July 2012 to December 2016. [Getty Images]

U.S. Army Major Amos Fox has published a very intriguing analysis in the Association of the U.S. Army’s Institute of Land Warfare Landpower Essay series, titled “The Reemergence of the Siege: An Assessment of Trends in Modern Land Warfare.” Building upon some of his previous work (here and here), Fox makes a case that sieges have again become a salient feature in modern warfare: “a brief survey of history illustrates that the siege is a defining feature of the late 20th and early 21st centuries; perhaps today is the siege’s golden era.”

Noting that neither U.S. Army nor joint doctrine currently addresses sieges, Fox adopts the dictionary definition: “A military blockade of a city or fortified place to compel it to surrender, or a persistent or serious attack.” He also draws a distinction between a siege and siege warfare; “siege warfare implies a way of battle, whereas a siege implies one tool of many in the kitbag of warfare.” [original emphasis]

He characterizes modern sieges thusly:

The contemporary siege is a blending of the traditional definition with concentric attacks. The modern siege is not necessarily characterized by a blockade, but more by an isolation of an adversary through encirclement while maintaining sufficient firepower against the besieged to ensure steady pressure. The modern siege can be terrain-focused, enemy-focused or a blending of the two, depending on the action of the besieged and the goal of the attacker. The goal of the siege is either to achieve a decision, whether politically or militarily, or to slowly destroy the besieged.

He cites the siege of Sarajevo (1992-1996) as the first example of the modern phenomenon. Other cases include Grozny (1999-2000); Aleppo, Ghouta, Kobani, Raqqa, and Deir Ezzor in Syria (2012 to 2018); Mosul (2016-2017); and Ilovaisk, the second battle for Donetsk Airport, and Debal’tseve in Ukraine (2014-present).

Fox notes that employing sieges carries significant risk. Most occur in urban areas. The restrictive nature of this terrain serves as a combat multiplier for inferior forces, allowing them to defend effectively against a much larger adversary. This can raise the potential military costs of conducting a siege beyond what an attacker is willing or able to afford.

Modern sieges also risk incurring significant political costs through collateral civilian deaths or infrastructure damage that could lead to a loss of international credibility or domestic support for governments that attempt them.

However, Fox identifies a powerful incentive that can override these disadvantages: when skillfully executed, a siege affords an opportunity for an attacker to contain and tie down defending forces, which can then be methodically destroyed. Despite the risks, he believes the apparent battlefield decisiveness of recent sieges means they will remain part of modern warfare.

Given modern sieges’ destructiveness and sharp impact on the populations on which they are waged, almost all actors (to include the United States) demonstrate a clear willingness—politically and militarily—to flatten cities and inflict massive suffering on besieged populations in order to capitalize on the opportunities associated with having their adversaries centralized.

Fox argues that sieges will be a primary tactic employed by proxy military forces, which are currently being used effectively by a variety of state actors in Eastern Europe and the Middle East. “[A]s long as intermediaries are doing the majority of fighting and dying within a siege—or holding the line for the siege—it is a tactic that will continue to populate current and future battlefields.”

This is an excellent analysis. Go check it out.

The Combat Value of Surprise


American soldiers being marched down a road after capture by German troops in the Ardennes, December 1944.


[This article was originally posted on 1 December 2016]

In his recent analysis of the role of conventional armored forces in Russian hybrid warfare, U.S. Army Major Amos Fox noted an emphasis on tactical surprise.

Changes to Russian tactics typify the manner in which Russia now employs its ground force. Borrowing from the pages of military theorist Carl von Clausewitz, who stated, “It is still more important to remember that almost the only advantage of the attack rests on its initial surprise,” Russia’s contemporary operations embody the characteristic of surprise. Russian operations in Georgia and Ukraine demonstrate a rapid, decentralized attack seeking to temporally dislocate the enemy, triggering the opposing forces’ defeat.

Tactical surprise enabled by electronic, cyber, information and unconventional warfare capabilities, combined with mobile and powerful combined arms brigade tactical groups, and massive and lethal long-range fires provide Russian Army ground forces with formidable combat power.

Trevor Dupuy considered the combat value of surprise to be important enough to cite it as one of his “timeless verities of combat.”

Surprise substantially enhances combat power. Achieving surprise in combat has always been important. It is perhaps more important today than ever. Quantitative analysis of historical combat shows that surprise has increased the combat power of military forces in those engagements in which it was achieved. Surprise has proven to be the greatest of all combat multipliers. It may be the most important of the Principles of War; it is at least as important as Mass and Maneuver.

In addition to acting as combat power multiplier, Dupuy observed that surprise decreases the casualties of a surprising force and increases those of a surprised one. Surprise also enhances advance rates for forces that achieve it.

In his combat models, Dupuy categorized tactical surprise as complete, substantial, and minor; defining the level achieved was a matter of analyst judgement. The combat effects of surprise in battle would last for three days, declining by one-third each day.

He developed two methods for applying the effects of surprise in calculating combat power, each yielding the same general overall influence. In his original Quantified Judgment Model (QJM), detailed in Numbers, Predictions and War: Using History to Evaluate and Predict the Outcome of Armed Conflict (1977), factors for surprise were applied to calculations for vulnerability and mobility, which in turn were applied to the calculation of overall combat power. The net value of surprise on combat power ranged from a factor of about 2.24 for complete surprise to 1.10 for minor surprise.

For a simplified version of his combat power calculation detailed in Attrition: Forecasting Battle Casualties and Equipment Losses in Modern War (1990), Dupuy applied a surprise combat multiplier value directly to the calculation of combat power. These figures also ranged between 2.20 for complete surprise and 1.10 for minor surprise.
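The three-day decay schedule lends itself to a short sketch. The reading below, in which the bonus above 1.0 fades by one-third of its initial size each day, is one plausible interpretation of “declining by one-third each day,” and the value for substantial surprise is an illustrative placeholder rather than a Dupuy figure:

```python
# Sketch of a Dupuy-style surprise multiplier with three-day decay.
# The figures for complete (2.20) and minor (1.10) surprise follow the
# simplified model in Attrition (1990); the "substantial" value is an
# illustrative placeholder. The decay rule (bonus above 1.0 shrinking
# by one-third of its initial size per day) is an assumed reading.
BASE_MULTIPLIERS = {"complete": 2.20, "substantial": 1.50, "minor": 1.10}

def surprise_multiplier(level: str, day: int) -> float:
    """Combat power multiplier for a surprising force on a given day
    (day 1 = the day surprise is achieved; no effect after day 3)."""
    if day < 1 or day > 3:
        return 1.0
    bonus = BASE_MULTIPLIERS[level] - 1.0
    return 1.0 + bonus * (1.0 - (day - 1) / 3.0)
```

Under this reading, complete surprise yields a 2.20 multiplier on day one, about 1.80 on day two, and about 1.40 on day three, vanishing thereafter.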

Dupuy established these values for surprise based on his judgement of the difference between the calculated outcome of combat engagements in his data and theoretical outcomes based on his models. He never validated them back to his data himself. However, TDI President Chris Lawrence recently did conduct substantial tests on TDI’s expanded combat databases in the context of analyzing the combat value of situational awareness. The results are described in detail in his forthcoming book, War By Numbers: Understanding Conventional Combat.

Are Russia And Iran Planning More Proxy Attacks On U.S. Forces And Their Allies In Syria?


Members of the Liwa al-Baqir Syrian Arab militia, which is backed by Iran and Russia. [Navvar Şaban (N.Oliver)/Twitter]

Over at the Institute for the Study of War (ISW), Jennifer Cafarella, Matti Suomenaro, and Catherine Harris have published an analysis predicting that Iran and Russia are preparing to attack U.S. forces and those of its Syrian Democratic Forces (SDF) allies in eastern Syria. By using tribal militia proxies and Russian mercenary troops to inflict U.S. casualties and stoke political conflict among the Syrian factions, Cafarella, et al, assert that Russia and Iran are seeking to compel the U.S. to withdraw its forces from Syria and break up the coalition that defeated Daesh.

If true, this effort would represent an escalation of a strategic gambit that led to a day-long battle between tribal militias loyal to the regime of Syrian President Bashar al Assad, Syrian government troops, and Russian mercenaries and U.S. allied Kurdish and SDF fighters along with their U.S. Marine and Special Operations Forces (SOF) advisors in February in the eastern Syrian city of Deir Ezzor. This resulted in a major defeat of the pro-Assad forces, which suffered hundreds of casualties–including dozens of Russians–from U.S. air and ground-based fires.

To support their contention, Cafarella, et al, offer a pattern of circumstantial evidence that does not quite amount to a definitive conclusion. ISW has a clear policy preference to promote: “The U.S. must commit to defending its partners and presence in Eastern Syria in order to prevent the resurgence of ISIS and deny key resources to Iran, Russia, and Assad.” It has criticized the U.S.’s failure to hold Russia culpable for the February attack in Deir Ezzor as “weak,” thereby undermining its policy in Syria and the Middle East in the face of Russian “hybrid” warfare efforts.

Yet, there is circumstantial evidence that the February battle in Deir Ezzor was the result of deliberate Russian government policy. ISW has identified Russian and Iranian intent to separate SDF from U.S. support to isolate and weaken it. President Assad has publicly made clear his intent to restore his rule over all of Syria. And U.S. President Donald Trump has yet to indicate that he has changed his intent to withdraw U.S. troops from Syria.

Russian and Iranian sponsorship and support for further aggressive action by pro-regime forces and proxies against U.S. troops and their Syrian allies could easily raise tensions dramatically with the U.S. Since it is difficult to see Russian and Iranian proxies succeeding with new Deir Ezzor-style attacks, they might be tempted to try to shoot down a U.S. aircraft or attempt a surprise raid on a U.S. firebase instead. Should Syrian regime or Russian mercenary forces manage to kill or wound U.S. troops, or bring down a U.S. manned aircraft, the military and political repercussions could be significant.

Despite the desire of President Trump to curtail U.S. involvement in Syria, there is real potential for the conflict to mushroom.

Recent Developments In “Game Changing” Precision Fires Technology


Nammo’s new 155mm Solid Fuel Ramjet projectile [The Drive]

From the “Build A Better Mousetrap” files come a couple of new developments in precision fires technology. The U.S. Army’s current top modernization priority is improving its long-range precision fires capabilities.

Joseph Trevithick reports in The Drive that Nammo, a Norwegian/Finnish aerospace and defense company, recently revealed that it is developing a solid-fueled, ramjet-powered, precision projectile capable of being fired from the ubiquitous 155mm howitzer. The projectile, which is scheduled for live-fire testing in 2019 or 2020, will have a range of more than 60 miles.

The Army’s current self-propelled and towed 155mm howitzers have a range of 12 miles using standard ammunition, and up to 20 miles with rocket-powered munitions. Nammo’s ramjet projectile could effectively double that, but the Army is also looking into developing a new 155mm howitzer with a longer barrel that could fully exploit the capabilities of Nammo’s ramjet shell and other new long-range precision munitions under development.

Anna Ahronheim has a story in The Jerusalem Post about a new weapon developed by the Israeli Rafael Advanced Defense Systems Ltd. called the FireFly. FireFly is a small, three-kilogram, loitering munition designed for use by light ground maneuver forces to deliver precision fires against enemy forces in cover. Similar to a drone, FireFly can hover for up to 15 minutes before delivery.

In a statement, Rafael claimed that “Firefly will essentially eliminate the value of cover and with it, the necessity of long-drawn-out firefights. It will also make obsolete the old infantry tactic of firing and maneuvering to eliminate an enemy hiding behind cover.”

Nammo and Rafael have very high hopes for their wares:

“This [155mm Solid Fuel Ramjet] could be a game-changer for artillery,” according to Thomas Danbolt, Vice President of Nammo’s Large Caliber Ammunitions division.

“The impact of FireFly on the infantry is revolutionary, fundamentally changing small infantry tactics,” Rafael has asserted.

Expansive claims for the impact of new technology are not new, of course. Orbital ATK touted its XM25 Counter Defilade Target Engagement (CDTE) precision-guided grenade launcher along familiar lines, claiming that “The introduction of the XM25 is akin to other revolutionary systems such as the machine gun, the airplane and the tank, all of which changed battlefield tactics.”

The XM25, similar in intended battlefield effect to the FireFly, had its contract cancelled by the Army in 2017 after disappointing results in field tests.

Are There Only Three Ways of Assessing Military Power?


[This article was originally posted on 11 October 2016]

In 2004, military analyst and academic Stephen Biddle published Military Power: Explaining Victory and Defeat in Modern Battle, a book that addressed the fundamental question of what causes victory and defeat in battle. Biddle took to task the study of the conduct of war, which he asserted was based on “a weak foundation” of empirical knowledge. He surveyed the existing literature on the topic and determined that the plethora of theories of military success or failure fell into one of three analytical categories: numerical preponderance, technological superiority, or force employment.

Numerical preponderance theories explain victory or defeat in terms of material advantage, with the winners possessing greater numbers of troops, populations, economic production, or financial expenditures. Many of these involve gross comparisons of numbers, but some of the more sophisticated analyses involve calculations of force density, force-to-space ratios, or measurements of quality-adjusted “combat power.” Notions of threshold “rules of thumb,” such as the 3-1 rule, arise from this. These sorts of measurements form the basis for many theories of power in the study of international relations.
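The gross-comparison logic of this school reduces to a few lines of arithmetic. A minimal sketch (the quality factors are illustrative assumptions, not values from any validated model):

```python
def quality_adjusted_ratio(attacker_troops: float, defender_troops: float,
                           attacker_quality: float = 1.0,
                           defender_quality: float = 1.0) -> float:
    """Quality-adjusted force ratio: the kind of measure the
    numerical-preponderance school compares against threshold
    rules of thumb. Quality factors here are illustrative only."""
    return (attacker_troops * attacker_quality) / (defender_troops * defender_quality)

def meets_three_to_one_rule(ratio: float) -> bool:
    """The 3-1 rule of thumb: an attacker is traditionally held to need
    roughly 3:1 local superiority against a prepared defense."""
    return ratio >= 3.0
```

Note that such a threshold check is exactly the sort of heuristic Biddle argues lacks empirical validation; the sketch shows its mechanics, not its truth.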

The next most influential means of assessment, according to Biddle, involve views on the primacy of technology. One school, systemic technology theory, looks at how technological advances shift balances within the international system. The best example of this is how the introduction of machine guns in the late 19th century shifted the advantage in combat to the defender, and the development of the tank in the early 20th century shifted it back to the attacker. Such measures are influential in international relations and political science scholarship.

The other school of technological determinacy is dyadic technology theory, which looks at relative advantages between states regardless of posture. This usually involves detailed comparisons of specific weapons systems, tanks, aircraft, infantry weapons, ships, missiles, etc., with the edge going to the more sophisticated and capable technology. The use of Lanchester theory in operations research and combat modeling is rooted in this thinking.
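The Lanchester models referenced here reduce to a pair of coupled differential equations. A minimal numerical sketch of the square law follows; the starting strengths and effectiveness coefficients are purely illustrative, not drawn from any validated model:

```python
# Lanchester square law: each side's attrition rate is proportional to
# the number of opposing shooters (dx/dt = -b*y, dy/dt = -a*x).
def lanchester_square(x0, y0, a, b, dt=0.01):
    """Euler integration until one side reaches zero strength.
    a = per-unit effectiveness of side X, b = that of side Y."""
    x, y = float(x0), float(y0)
    while x > 0 and y > 0:
        x, y = x - b * y * dt, y - a * x * dt
    return max(x, 0.0), max(y, 0.0)

# With equal per-unit effectiveness, numbers dominate: the square law
# predicts the larger side wins with roughly sqrt(x0**2 - y0**2) survivors.
survivors_x, survivors_y = lanchester_square(2000, 1000, a=0.05, b=0.05)
```

Under the square law a 2:1 numerical edge leaves the winner with about 1,730 of its 2,000 units, which is why dyadic technology comparisons carry such weight in this school: the weaker side must offset numbers with per-unit effectiveness.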

Biddle identified the third category of assessment as subjective assessments of force employment based on non-material factors including tactics, doctrine, skill, experience, morale or leadership. Analyses on these lines are the stock-in-trade of military staff work, military historians, and strategic studies scholars. However, international relations theorists largely ignore force employment and operations research combat modelers tend to treat it as a constant or omit it because they believe its effects cannot be measured.

The common weakness of all of these approaches, Biddle argued, is that “there are differing views, each intuitively plausible but none of which can be considered empirically proven.” For example, no one has yet been able to find empirical support substantiating the validity of the 3-1 rule or Lanchester theory. Biddle notes that the track record for predictions based on force employment analyses has also been “poor.” (To be fair, the problem of testing theory to see if it applies to the real world is not limited to assessments of military power; it afflicts security and strategic studies generally.)

So, is Biddle correct? Are there only three ways to assess military outcomes? Are they valid? Can we do better?

Should The Marines Take Responsibility For Counterinsurgency?


United States Marines in Nicaragua with the captured flag of Augusto César Sandino, 1932. [Wikipedia]

Sydney J. Freedberg, Jr recently reported in Breaking Defense that the Senate Armed Services Committee (SASC), led by chairman Senator John McCain, has asked Defense Secretary James Mattis to report on progress toward preparing the U.S. armed services to carry out the recently published National Defense Strategy oriented toward potential Great Power conflict.

Among a series of questions that challenge existing service roles and missions, Freedberg reported that the SASC wants to know if responsibility for carrying out “low-intensity missions,” such as counterinsurgency, should be the primary responsibility of one service:

Make the Marines a counterinsurgency force? The Senate starts by asking whether the military “would benefit from having one Armed Force dedicated primarily to low-intensity missions, thereby enabling the other Armed Forces to focus more exclusively on advanced peer competitors.” It quickly becomes clear that “one Armed Force” means “the Marines.” The bill questions the Army’s new Security Force Assistance Brigades (SFABs) and suggests shifting that role to the Marines. It also questions the survivability of Navy-Marine flotillas in the face of long-range sensors and precision missiles — so-called Anti-Access/Area Denial (A2/AD) systems — and asks whether the Marines’ core mission, “amphibious forced entry operations,” should even “remain an enduring mission for the joint force” given the difficulties. It suggests replacing large-deck amphibious ships, which carry both Marine aircraft and landing forces, with small aircraft carriers that could carry “larger numbers of more diverse strike aircraft” (but not amphibious vehicles or landing craft). Separate provisions of the bill restrict spending on the current Amphibious Assault Vehicle (Sec. 221) and the future Amphibious Combat Vehicle (Sec. 128) until the Pentagon addresses the viability of amphibious landings.

This proposed change would drastically shift the U.S. Marine Corps’ existing role and missions, something that will inevitably generate political and institutional resistance. Deemphasizing the ability to execute amphibious forced entry operations would be both a difficult strategic choice and an unpalatable political decision to fundamentally alter the Marine Corps’ institutional identity. Amphibious warfare has defined the Marines since the 1920s. It would, however, be a concession to the reality that technological change is driving the evolving character of warfare.

Perhaps This Is Not A Crazy Idea After All

The Marine Corps also has a long history with so-called “small wars”: contingency operations and counterinsurgencies. Tasking the Marines as the proponents for low-intensity conflict would help alleviate one of the basic conundrums facing U.S. land power: the U.S. Army’s inability to optimize its force structure due to the strategic need to be prepared to wage both low-intensity conflict and conventional combined arms warfare against peer or near-peer adversaries. The capabilities needed for waging each type of conflict are diverging, and continuing to field a general purpose force runs an increasing risk of creating an Army dangerously ill-suited for either. Giving the Marine Corps responsibility for low-intensity conflict would permit the Army to optimize most of its force structure for combined arms warfare, which poses the most significant threat to American national security (even if it is less likely than potential future low-intensity conflicts).

Making the Marines the lead for low-intensity conflict would also play to another bulwark of its institutional identity, as the world’s premier light infantry force (“Every Marine is a rifleman”). Even as light infantry becomes increasingly vulnerable on modern battlefields dominated by the lethality of long-range precision firepower, its importance for providing mass in irregular warfare remains undiminished. Technology has yet to eliminate the need for large numbers of “boots on the ground” in counterinsurgency. The crucial role of manpower in counterinsurgency makes it somewhat short-sighted to follow through with the SASC’s suggestions to eliminate the Army’s new Security Force Assistance Brigades (SFABs) and to reorient Special Operations Forces (SOF) toward support for high-intensity conflict. As recent, so-called “hybrid warfare” conflicts in Lebanon and Ukraine have demonstrated, future battlefields will likely involve a mix of combined arms and low-intensity warfare. It would be risky to assume that the Marine Corps’ light infantry, as capable as it is, could tackle all of these challenges alone.

Giving the Marines responsibility for low-intensity conflict would not likely require a drastic change in force structure. Marines could continue to emphasize sea mobility and littoral warfare in circumstances other than forced entry. Giving up the existing large-deck amphibious landing ships would admittedly be a tough concession, one that would likely reduce the Marines’ effectiveness in responding to contingencies.

It is not likely that a change as big as this will be possible without a protracted political and institutional fight. But fresh thinking and drastic changes in the U.S.’s approach to warfare are going to be necessary to effectively address both near and long-term strategic challenges.

Senate Armed Service Committee Proposes Far-Reaching Changes To U.S. Military


Senate Armed Services Committee members (L-R) Sen. James Inhofe (R-OK), Chairman John McCain (R-AZ) and ranking member Sen. Jack Reed (D-RI) listen to testimony in the Dirksen Senate Office Building on Capitol Hill July 11, 2017 in Washington, D.C. [CREDIT: Chip Somodevilla—Getty Images]

In an article in Breaking Defense last week, Sydney J. Freedberg, Jr. pointed out that the Senate Armed Services Committee (SASC) has requested that Secretary of Defense James Mattis report back by 1 February 2019 on what amounts to “the most sweeping reevaluation of the military in 30 years, with tough questions for all four armed services but especially the Marine Corps.”

Freedberg identified SASC chairman Senator John McCain as the motivating element behind the report, which is part of the draft 2019 National Defense Authorization Act. It emphasizes the initiative to reorient the U.S. military away from its nearly two-decade-long focus on counterinsurgency and counterterrorism toward preparing for potential future Great Power conflict, as outlined in Mattis’s recently published National Defense Strategy. McCain sees this shift taking place far too slowly, according to Freedberg, who hints that Mattis shares this concern.

While the SASC request addresses some technological issues, its real focus is on redefining the priorities, missions, and force structures of the armed forces (including special operations forces) in the context of the National Defense Strategy.

The changes it seeks are drastic. According to Freedberg, among the difficult questions it poses are:

  • Make the Marines a counterinsurgency force? [This would greatly help alleviate the U.S. Army’s current strategic conundrum]
  • Make the Army heavier, with fewer helicopters?
  • Refocus Special Operations against Russia and China?
  • Rely less on stealth aircraft and more on drones?

Each of these questions relates directly to trends associated with the multi-domain battle and operations concepts the U.S. armed services are currently jointly developing in response to threats posed by Russian, Chinese, and Iranian military advances.

It is clear that the SASC believes that difficult choices with far-reaching consequences are needed to adequately prepare to meet these challenges. The armed services have been historically resistant to changes involving trade-offs, however, especially ones that touch on service budgets and roles and missions. It seems likely that more than a report will be needed to push through changes deemed necessary by the Senate Armed Services Committee chairman and the Secretary of Defense.

Read more of Freedberg’s article here.

The draft 2019 National Defense Authorization Act can be found here, and the SASC questions can be found in Section 1041 beginning on page 478.

Measuring The Effects Of Combat In Cities, Phase I


“Catalina Kid,” an M4 medium tank of Company C, 745th Tank Battalion, U.S. Army, drives through the entrance of the Aachen-Rothe Erde railroad station during the fighting around the city viaduct on Oct. 20, 1944. [Courtesy of First Division Museum/Daily Herald]

In 2002, TDI submitted a report to the U.S. Army Center for Army Analysis (CAA) on the first phase of a study examining the effects of combat in cities, or what was then called “military operations on urbanized terrain,” or MOUT. This first phase of a series of studies on urban warfare focused on the impact of urban terrain on division-level engagements and army-level operations, based on data drawn from TDI’s DuWar database suite.

These included engagements fought in France during 1944, among them the Channel and Brittany port cities of Brest, Boulogne, Le Havre, Calais, and Cherbourg, as well as Paris, and the extended series of battles in and around Aachen in 1944. These were then compared to data on fighting in contrasting non-urban terrain in Western Europe in 1944-45.

The conclusions of Phase I of that study (pp. 85-86) were as follows:

The Effect of Urban Terrain on Outcome

The data appears to support a null hypothesis, that is, that the urban terrain had no significantly measurable influence on the outcome of battle.

The Effect of Urban Terrain on Casualties

Overall, any way the data is sectioned, the attacker casualties in the urban engagements are less than in the non-urban engagements and the casualty exchange ratio favors the attacker as well. Because of the selection of the data, there is some question whether these observations can be extended beyond this data, but it does not provide much support to the notion that urban combat is a more intense environment than non-urban combat.

The Effect of Urban Terrain on Advance Rates

It would appear that one of the primary effects of urban terrain is that it slows opposed advance rates. One can conclude that the average advance rate in urban combat should be one-half to one-third that of non-urban combat.

The Effect of Urban Terrain on Force Density

Overall, there is little evidence that combat operations in urban terrain result in a higher linear density of troops, although the data does seem to trend in that direction.

The Effect of Urban Terrain on Armor

Overall, it appears that armor losses in urban terrain are the same as, or lower than armor losses in non-urban terrain. And in some cases it appears that armor losses are significantly lower in urban than non-urban terrain.

The Effect of Urban Terrain on Force Ratios

Urban terrain did not significantly influence the force ratio required to achieve success or effectively conduct combat operations.

The Effect of Urban Terrain on Stress in Combat

Overall, it appears that urban terrain was no more stressful a combat environment during actual combat operations than was non-urban terrain.

The Effect of Urban Terrain on Logistics

Overall, the evidence appears to be that the expenditure of artillery ammunition in urban operations was not greater than that in non-urban operations. In the two cases where exact comparisons could be made, the average expenditure rates were about one-third to one-quarter the average expenditure rates expected for an attack posture in the European Theater of Operations as a whole.

The evidence regarding the expenditure of other types of ammunition is less conclusive, but again does not appear to be significantly greater than the expenditures in non-urban terrain. Expenditures of specialized ordnance may have been higher, but the total weight expended was a minor fraction of that for all of the ammunition expended.

There is no evidence that the expenditure of other consumable items (rations, water or POL) was significantly different in urban as opposed to non-urban combat.

The Effect of Urban Combat on Time Requirements

It was impossible to draw significant conclusions from the data set as a whole. However, in the five significant urban operations that were carefully studied, the maximum length of time required to secure the urban area was twelve days in the case of Aachen, followed by six days in the case of Brest. But the other operations all required little more than a day to complete (Cherbourg, Boulogne and Calais).

However, since it was found that advance rates in urban combat were significantly reduced, it is obvious that these two effects (advance rates and time) are interrelated. It does appear that the primary impact of urban combat is to slow the tempo of operations.

This in turn leads to a hypothetical construct, in which the reduced tempo of urban operations (reduced casualties, reduced opposed advance rates, and increased time) compared to non-urban operations results in two possible scenarios.

The first is if the urban area is bounded by non-urban terrain. In this case the urban area will tend to be enveloped during combat, since the pace of battle in the non-urban terrain is quicker. Thus, the urban battle becomes more a mopping-up operation, as it historically has usually been, rather than a full-fledged battle.

The alternate scenario is that created by an urban area that cannot be enveloped and must therefore be directly attacked. This may be caused by geography, as in a city on an island or peninsula, by operational requirements, as in the case of Cherbourg, Brest and the Channel Ports, or by political requirements, as in the case of Stalingrad, Suez City and Grozny.

Of course these last three cases are also those usually included as examples of combat in urban terrain that resulted in high casualty rates. However, all three of them had significant political requirements that influenced the nature, tempo and even the simple necessity of conducting the operation. And, in the case of Stalingrad and Suez City, significant geographical limitations affected the operations as well. These may well be better used to quantify the impact of political agendas on casualties, rather than to quantify the effects of urban terrain on casualties.

The effects of urban terrain at the operational level, and the effect of urban terrain on the tempo of operations, will be further addressed in Phase II of this study.

More on the QJM/TNDM Italian Battles


Troops of the U.S. 36th Infantry Division advance inland on Red Beach, Salerno, Italy, 1943. [ibiblio/U.S. Center for Military History]

[The article below is reprinted from the December 1998 edition of The International TNDM Newsletter.]

More on the QJM/TNDM Italian Battles
by Richard C. Anderson, Jr.

In regard to Niklas Zetterling’s article and Christopher Lawrence’s response (Newsletter Volume 1, Number 6) [and Christopher Lawrence’s 2018 addendum] I would like to add a few observations of my own. Recently I have had occasion to revisit the Allied and German records for Italy in general and for the Battle of Salerno in particular. What I found is relevant in both an analytical and an historical sense.

The Salerno Order of Battle

The first and most evident observation that I was able to make of the Allied and German Order of Battle for the Salerno engagements was that it was incorrect. The following observations all relate to the table found on page 25 of Volume 1, Number 6.

The divisional totals are misleading. The U.S. had one infantry division (the 36th) and two-thirds of a second (the 45th, minus the 180th RCT [Regimental Combat Team] and one battalion of the 157th Infantry) available during the major stages of the battle (9-15 September 1943). The 82nd Airborne Division was represented solely by elements of two parachute infantry regiments that were dropped as emergency reinforcements on 13-14 September. The British 7th Armored Division did not begin to arrive until 15-16 September and was not fully closed in the beachhead until 18-19 September.

The German situation was more complicated. Only a single panzer division, the 16th, under the command of the LXXVI Panzer Corps, was present on 9 September. On 10 September elements of the Hermann Göring Parachute Panzer Division, with elements of the 15th Panzergrenadier Division under tactical command, began arriving from the vicinity of Naples. Major elements of the Hermann Göring Division (with its subordinated elements of the 15th Panzergrenadier Division) were in place and had relieved elements of the 16th Panzer Division opposing the British beaches by 11 September. At the same time the 29th Panzergrenadier Division began arriving from Calabria and took up positions opposite the U.S. 36th Division in and south of Altavilla, again relieving elements of the 16th Panzer Division. By 11-12 September the German forces in the northern sector of the beachhead were under the command of the XIV Panzer Corps (Hermann Göring Division (-), elements of the 15th Panzergrenadier Division, and elements of the 3rd Panzergrenadier Division), while the LXXVI Panzer Corps commanded the 16th Panzer Division, 29th Panzergrenadier Division, and elements of the 26th Panzer Division. Unfortunately for the Germans, the 16th Panzer Division’s zone was split by the boundary between the XIV and LXXVI Corps, both of which appear to have had operational control over different elements of the division. Needless to say, the German command and control problems in this action were tremendous.[1]

The artillery totals given in the table are almost inexplicable. The number of SP [self-propelled] 75mm howitzers is a bit fuzzy, inasmuch as this was a non-standardized weapon on a half-track chassis. It was allocated to the infantry regimental cannon company (6 tubes) and was also issued to tank and tank destroyer battalions as a stopgap until purpose-designed systems could be brought into production. The 105mm SP was also present on a half-track chassis in the regimental cannon company (2 tubes) and on a full-track chassis in the armored field artillery battalion (18 tubes). The towed 105mm artillery was present in the five field artillery battalions of the 36th and 45th Divisions and in a single non-divisional battalion assigned to the VI Corps. The 155mm howitzers were present only in the two divisional field artillery battalions; the general support artillery assigned to the VI Corps, the 36th Field Artillery Regiment, did not arrive until 16 September. No 155mm gun battalions landed in Italy until October 1943. The U.S. artillery figures should approximately be as follows:

75mm Howitzer (SP): 2 per infantry battalion; 6 per tank battalion

105mm Howitzer (SP): 2 per infantry regiment; 1 armored FA battalion[2]

105mm Howitzer (towed): 5 divisional FA battalions; 1 non-divisional FA battalion

155mm Howitzer: 2 divisional FA battalions

3″ Tank Destroyer: 3 battalions

Thus, the U.S. artillery strength is approximately 272 versus 525 as given in the chart.

The British artillery figures are also suspect. Each of the British divisions present, the 46th and 56th, had three regiments (battalions in U.S. parlance) of 25-pounder gun-howitzers for a total of 72 per division. There is no evidence of the presence of the British 3-inch howitzer, except possibly on a tank chassis in the support tank role attached to the tank troop headquarters of the armor regiment (battalion) attached to the X Corps (possibly 8 tubes). The X Corps had a single medium regiment (battalion) attached with either 4.5 inch guns or 5.5 inch gun-howitzers or a mixture of the two (16 tubes). The British did not have any 7.2 inch howitzers or 155mm guns at Salerno. I do not know where the figure for British 75mm howitzers is from, although it is possible that some may have been present with the corps armored car regiment.

Thus the British artillery strength is approximately 168 versus 321 as given in the chart.

The German artillery types are highly suspect. As Niklas Zetterling deduced, there was no German corps or army artillery present at Salerno. Neither the XIV nor the LXXVI Corps had Heeres (army) artillery attached. The two battalions of the 71st Nebelwerfer Regiment and one battery of 170mm guns (previously attached to the 15th Panzergrenadier Division) were all out of action, refurbishing and replenishing equipment in the vicinity of Naples. However, U.S. intelligence sources located 42 Italian coastal gun positions, including three 149mm (not 132mm) railway guns defending the beaches. These positions were taken over by German personnel on the night before the invasion. That they fired at all in the circumstances is a comment on the professionalism of the German Army. The remaining German artillery available was with the divisional elements that arrived to defend against the invasion forces. The following artillery strengths are known for the German forces at Salerno:

16th Panzer Division (as of 3 September):

14 75mm infantry support howitzers
11 150mm SP infantry support howitzers
10 105mm howitzers
8 105mm SP howitzers
4 105mm guns
8 150mm howitzers
5 150mm SP howitzers
5 88mm AA guns

26th Panzer Division (as of 12 September):

15 75mm infantry support howitzers
12 150mm infantry support howitzers
6 105mm SP howitzers
12 105mm howitzers
10 150mm SP howitzers
4 150mm howitzers

Hermann Göring Parachute Panzer Division (as of 13 September):

6-8 75mm infantry support howitzers
8 150mm infantry support howitzers
24 105mm howitzers
12 105mm SP howitzers
4 105mm guns
8 150mm howitzers
6 150mm SP howitzers
6 150mm multiple rocket launchers
12 88mm AA guns

29th Panzergrenadier Division:

106 artillery pieces (types unknown)

15th Panzergrenadier Division (elements):

10-12 105mm howitzers

3rd Panzergrenadier Division:

6 150mm infantry support howitzers


501st Army Flak Battalion (probably 20mm and 37mm AA only)
I/49th Flak Battalion (probably 8 88mm AA guns)

Thus, German artillery strength is about 342 tubes versus 394 as given in the chart.[3]
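As a cross-check, the divisional figures listed above can be tallied directly. Taking the midpoint where a range is given (7 of the Hermann Göring Division’s 6-8 infantry support howitzers, 11 of the 15th Panzergrenadier’s 10-12 howitzers, and the probable 8 88mm guns of I/49th Flak Battalion), and excluding the 501st Army Flak Battalion’s light AA, reproduces the stated total:

```python
# Tube counts transcribed from the divisional lists above; ranges are
# taken at their midpoints and the 501st Army Flak Battalion's probable
# 20mm/37mm light AA is excluded from the artillery total.
german_artillery = {
    "16th Panzer Division":        14 + 11 + 10 + 8 + 4 + 8 + 5 + 5,      # 65
    "26th Panzer Division":        15 + 12 + 6 + 12 + 10 + 4,             # 59
    "Hermann Goring Division":     7 + 8 + 24 + 12 + 4 + 8 + 6 + 6 + 12,  # 87
    "29th Panzergrenadier":        106,
    "15th Panzergrenadier (elts)": 11,
    "3rd Panzergrenadier (elts)":  6,
    "I/49th Flak Battalion":       8,   # probable 88mm AA guns
}
total_tubes = sum(german_artillery.values())  # 342, versus 394 in the chart
```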

Armor strengths are equally suspect for both the Allied and German forces. It should be noted however, that the original QJM database considered wheeled armored cars to be the equivalent of a light tank.

Only two U.S. armor battalions were assigned to the initial invasion force, with a total of 108 medium and 34 light tanks. The British X Corps had a single armor regiment (battalion) assigned with approximately 67 medium and 10 light tanks. Thus, the Allies had some 175 medium tanks versus 488 as given in the chart and 44 light tanks versus 236 (including an unknown number of armored cars) as given in the chart.

German armor strength was as follows (operational/in repair as of the date given):

16th Panzer Division (8 September):

7/0 Panzer III flamethrower tanks
12/0 Panzer IV short
86/6 Panzer IV long
37/3 assault guns

29th Panzergrenadier Division (1 September):

32/5 assault guns
17/4 SP antitank
3/0 Panzer III

26th Panzer Division (5 September):

11/? assault guns
10/? Panzer III

Hermann Göring Parachute Panzer Division (7 September):

5/? Panzer IV short
11/? Panzer IV long
5/? Panzer III long
1/? Panzer III 75mm
21/? assault guns
3/? SP antitank

15th Panzergrenadier Division (8 September):

6/? Panzer IV long
18/? assault guns

Total 285/18 medium tanks, SP anti-tank, and assault guns. This number actually agrees very well with the 290 medium tanks given in the chart. I have not looked closely at the number of German armored cars but suspect that it is fairly close to that given in the charts.
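The division-by-division figures above sum exactly to the stated 285/18 when the unknown in-repair entries (“?”) are treated as zero:

```python
# Operational medium tanks, SP antitank, and assault guns from the lists
# above; in-repair counts are known only for the 16th Panzer and 29th
# Panzergrenadier Divisions (the "?" entries are omitted).
operational = {
    "16th Panzer Division":        7 + 12 + 86 + 37,         # 142
    "29th Panzergrenadier":        32 + 17 + 3,              # 52
    "26th Panzer Division":        11 + 10,                  # 21
    "Hermann Goring Division":     5 + 11 + 5 + 1 + 21 + 3,  # 46
    "15th Panzergrenadier (elts)": 6 + 18,                   # 24
}
in_repair = 6 + 3 + 5 + 4  # 16th Pz (Pz IV long, StuG) + 29th PzGr (StuG, SP AT)
total_operational = sum(operational.values())  # 285
```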

In general it appears that the original QJM Database got the numbers of major items of equipment right for the Germans, even if it flubbed the details. On the other hand, the numbers and details are highly suspect for the Allied major items of equipment. Just as a first-order “guesstimate” I would say that this probably reduces the German CEV to some extent; however, missing from the formula is the Allied naval gunfire support which, although negligible in impact in the initial stages of the battle, had a strong influence on the later stages of the battle.

Hopefully, with a little more research and time, we will be able to go back and revalidate these engagements. In the meantime I hope that this has clarified some of the questions raised about the Italian QJM Database.


[1] Exacerbating the German command and control problems was the fact that the Tenth Army, which was in overall command of the XIV Panzer Corps and LXXVI Panzer Corps, had only been in existence for about six weeks. The army’s signal regiment was only partly organized and its quartermaster services were almost nonexistent.

[2] Arrived 13 September, 1 battery in action 13-15 September.

[3] However, the number given for the 29th Panzergrenadier Division appears to be suspiciously high and is not well defined. Hopefully further research may clarify the status of this division.

Dupuy’s Verities: The Power Of Defense


Leonidas at Thermopylae, by Jacques-Louis David, 1814. [Wikimedia]

The second of Trevor Dupuy’s Timeless Verities of Combat is:

Defensive strength is greater than offensive strength.

From Understanding War (1987):

[Prussian military theorist, Carl von] Clausewitz expressed this: “Defense is the stronger form of combat.” It is possible to demonstrate by the qualitative comparison of many battles that Clausewitz is right and that posture has a multiplicative effect on the combat power of a military force that takes advantage of terrain and fortifications, whether hasty and rudimentary, or intricate and carefully prepared. There are many well-known examples of the need of an attacker for a preponderance of strength in order to carry the day against a well-placed and fortified defender. One has only to recall Thermopylae, the Alamo, Fredericksburg, Petersburg, and El Alamein to realize the advantage enjoyed by a defender with smaller forces, well placed, and well protected. [p. 2]

The advantages of fighting on the defensive and the benefits of cover and concealment in certain types of terrain have long been basic tenets of military thinking. Dupuy, however, considered defensive combat posture and the defensive value of terrain to be not merely additive but combat power multipliers: circumstantial variables of combat that, when skillfully applied and exploited, could increase the overall fighting capability of a military force.

The statement [that the defensive is the stronger form of combat] implies a comparison of relative strength. It is essentially scalar and thus ultimately quantitative. Clausewitz did not attempt to define the scale of his comparison. However, by following his conceptual approach it is possible to establish quantities for this comparison. Depending upon the extent to which the defender has had the time and capability to prepare for defensive combat, and depending also upon such considerations as the nature of the terrain which he is able to utilize for defense, my research tells me that the comparative strength of defense to offense can range from a factor with a minimum value of about 1.3 to a maximum value of more than 3.0. [p. 26]

The values Dupuy established for posture and terrain based on historical combat experience were as follows:

For example, Dupuy calculated that mounting even a hasty defense in rolling, gentle terrain with some vegetation could increase a force’s combat power by more than 50%. This is a powerful effect, achievable without the addition of any extra combat capability.
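Dupuy’s multiplicative treatment can be sketched as follows. The factor values below are assumptions for illustration, chosen to be consistent with the 1.3-3.0 range and the hasty-defense example quoted above; they are not a reproduction of his published tables:

```python
# Hedged sketch of combat power as a product of force strength and
# Dupuy-style circumstantial multipliers. Factor values are illustrative
# assumptions, not Dupuy's published figures.
POSTURE = {"attack": 1.0, "hasty defense": 1.3,
           "prepared defense": 1.5, "fortified defense": 1.6}
TERRAIN = {"flat, bare": 1.0, "rolling, gentle, mixed vegetation": 1.2}

def combat_power(strength, posture, terrain):
    """Nominal strength adjusted by posture and terrain multipliers."""
    return strength * POSTURE[posture] * TERRAIN[terrain]

# A hasty defense in rolling, gentle terrain: 1.3 * 1.2 = 1.56,
# i.e. a combat power gain of more than 50% with no added forces.
gain = combat_power(100, "hasty defense", "rolling, gentle, mixed vegetation") / 100
```

The point of the multiplicative form is that posture and terrain compound rather than add, which is how a modest factor for each yields the better-than-50% gain in the example above.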

It should be noted that these values are both descriptive, in terms of defining Dupuy’s theoretical conception of the circumstantial variables of combat, as well as factors specifically calculated for use in his combat models. Some of these factors have found their way into models and simulations produced by others and some U.S. military doctrinal publications, usually without attribution and shorn of explanatory context. (A good exploration of the relationship between the values Dupuy established for the circumstantial variables of combat and his combat models, and the pitfalls of applying them out of context can be found here.)

While the impact of terrain on combat is certainly an integral part of current U.S. Army doctrinal thinking at all levels, and is constantly factored into combat planning and assessment, it does not explicitly acknowledge the classic Clausewitzian notion of a power disparity between the offense and defense. Nor are the effects of posture or terrain thought of as combat multipliers.

However, the Army does implicitly recognize the advantage of the defensive through its stubbornly persistent adherence to the so-called 3-1 rule of combat. Its version of this (which the U.S. Marine Corps also uses) is described in doctrinal publications as “historical minimum planning ratios,” which prescribe a 3-1 advantage in numerical force ratio for an attacker to defeat a defender in a prepared or fortified position. Overcoming a defender in a hasty defense posture requires a 2.5-1 force ratio advantage. The force ratio advantages the Army considers necessary for decisive operations are even higher. While the 3-1 rule is a deeply problematic construct, the fact that it is the only quantitative planning factor included in current doctrine reveals a healthy respect for the inherent power of the defensive.
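The planning ratios described in this paragraph amount to a simple threshold test. A minimal sketch (the ratios are the doctrinal minimums cited above; the function and variable names are mine):

```python
# Doctrinal "historical minimum planning ratios": the attacker needs at
# least this numerical force ratio against each defender posture.
MIN_PLANNING_RATIO = {"hasty defense": 2.5,
                      "prepared defense": 3.0,
                      "fortified defense": 3.0}

def meets_planning_ratio(attacker_strength, defender_strength, posture):
    """True if the attacker meets the minimum doctrinal planning ratio."""
    return attacker_strength / defender_strength >= MIN_PLANNING_RATIO[posture]

at_threshold = meets_planning_ratio(30000, 10000, "prepared defense")  # 3.0:1
too_weak = meets_planning_ratio(24000, 10000, "hasty defense")         # 2.4:1
```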

Details Of U.S. Engagement With Russian Mercenaries In Syria Remain Murky

UNDISCLOSED LOCATION, SYRIA (May 15, 2017)— U.S. Marines fortify a machine gun pit around their M777-A2 Howitzer in Syria, May 15, 2017. The unit has been conducting 24-hour all-weather fire support for Coalition’s local partners, the Syrian Democratic Forces, as part of Combined Joint Task Force-Operation Inherent Resolve. CJTF-OIR is the global coalition to defeat ISIS in Iraq and Syria. (U.S. Marine Corps photo by Sgt. Matthew Callahan)

Last week, the New York Times published an article by Thomas Gibbons-Neff that provided a detailed account of the fighting between U.S.-advised Kurdish and Syrian militia forces and Russian mercenaries and Syrian and Arab fighters near the city of Deir Ezzor in eastern Syria on 7 February 2018. Gibbons-Neff stated the account was based on newly obtained documents and interviews with U.S. military personnel.

While Gibbons-Neff’s reporting fills in some details about the action, it differs in some respects from previous reporting, particularly a detailed account by Christoph Reuter, based on interviews with participants and witnesses in Syria, previously published in Spiegel Online.

  • According to Gibbons-Neff, the U.S. observed a buildup of combat forces supporting the regime of Syrian President Bashar al Assad in Deir Ezzor, south of the Euphrates River, which separated them from U.S.-backed Kurdish and Free Syrian militia forces and U.S. Special Operations Forces (SOF) and U.S. Marine Corps elements providing advice and assistance north of the river.
  • The pro-regime forces included “some Syrian government soldiers and militias, but American military and intelligence officials have said a majority were private Russian paramilitary mercenaries — and most likely a part of the Wagner Group, a company often used by the Kremlin to carry out objectives that officials do not want to be connected to the Russian government.”
  • After obtaining assurances from the Russian military chain-of-command in Syria that the forces were not theirs, Secretary of Defense James Mattis ordered “for the force, then, to be annihilated.”
  • Gibbons-Neff’s account focuses on the fighting that took place on the night of 7-8 February in the vicinity of a U.S. combat outpost located near a Conoco gas plant north of the Euphrates. While the article mentions the presence of allied Kurdish and Syrian militia fighters, it implies that the target of the pro-regime force was the U.S. outpost. It does not specify exactly where the pro-regime forces concentrated or the direction they advanced.
  • This is in contrast to Reuter’s Spiegel Online account, which reported a more complex operation. This included an initial probe across a bridge northwest of the Conoco plant on the morning of 7 February by pro-regime forces that included no Russians, which was repelled by warning shots from American forces.
  • After dark that evening, this pro-regime force attempted to cross the Euphrates again across a bridge to the southeast of the Conoco plant at the same time another pro-regime force advanced along the north bank of the Euphrates toward the U.S./Kurdish/Syrian forces from the town of Tabiya, southeast of the Conoco plant. According to Reuter, U.S. forces engaged both of these pro-regime advances north of the Euphrates.
  • While the Spiegel Online article advanced the claim that Russian mercenary forces were not leading the pro-regime attacks and that the casualties they suffered were due to U.S. collateral fire, Gibbons-Neff’s account makes the case that the Russians comprised a substantial part of at least one of the forces advancing on the U.S./Kurdish/Syrian bases and encampments in Deir Ezzor.
  • Based on documents it obtained, the Times asserts that 200-300 “pro-regime” personnel were killed out of an overall force of 500. Gibbons-Neff did not attempt to parse out the Russian share of these, but did mention that accounts in Russian media have risen from four dead as initially reported, to later claims of “perhaps dozens” of killed and wounded. U.S. government sources continue to assert that most of the casualties were Russian.
  • It is this figure of 200-300 killed that I have found problematic in the past. A total of 200-300 killed and wounded overall seems far more likely, with approximately 100 dead and 100-200 wounded out of the much larger overall force of Russian mercenaries, Syrian government troops, and tribal militia fighters involved in the fighting.

Motivation for the Operation Remains Unclear

While the details of the engagement remain ambiguous, the identity of those responsible for directing the attacks and the motivations for doing so are hazy as well. In late February, CNN and the Washington Post reported that U.S. intelligence had detected communications between Yevgeny Prigozhin—a Russian businessman with reported ties to President Vladimir Putin, the Ministry of Defense, and Russian mercenaries—and Russian and Syrian officials in the weeks leading up to the attack. One such intercept alleges that Prigozhin informed a Syrian official in January that he had secured permission from an unidentified Russian minister to move forward with a “fast and strong” initiative in Syria in early February.

Prigozhin was one of 13 individuals and three companies indicted by special counsel Robert Mueller on 16 February 2018 for funding and guiding a Russian government effort to interfere with the 2016 U.S. presidential election.

If the Deir Ezzor operation was indeed a clandestine operation sanctioned by the Russian government, the motivation remains mysterious. Gibbons-Neff’s account implies that the operation was a direct assault on a U.S. military position by a heavily-armed and equipped combat force, an action that all involved surely understood beforehand would provoke a U.S. military reaction. Even if the attack was instead aimed at taking the Conoco gas plant or forcing the Kurdish and Free Syrian forces out of Deir Ezzor, the attackers surely must have known the presence of U.S. military forces would elicit the same response.

Reuter’s account of a more complex operation suggests that the attack was a probe to test the U.S. response to armed action aimed at the U.S.’s Kurdish and Free Syrian proxy forces. If so, it was done very clumsily. The build-up of pro-regime forces telegraphed the effort in advance and the force itself seems to have been tailored for combat rather than reconnaissance. The fact that the U.S. government inquired with the Russian military leadership in Syria in advance about the provenance of the force build-up should have been a warning that any attempt at surprise had been compromised.

Whether the operation was simply intended to obtain a tactical advantage or to probe the resolution of U.S. involvement in Syria, the outcome bears all the hallmarks of a major miscalculation. Russian “hybrid warfare” tactics suffered a decisive reverse, while the effectiveness of U.S. military capabilities received a decided boost. Russian and U.S. forces and their proxies continue to spar using information operations, particularly electronic warfare, but they have not directly engaged each other since. The impact of this may be short-lived, however, depending on whether or not U.S. President Donald J. Trump carries through with his intention announced in early April to withdraw U.S. forces from eastern Syria.

CEV Calculations in Italy, 1943

Tip of the Avalanche by Keith Rocco. Soldiers from the U.S. 36th Infantry Division landing at Salerno, Italy, September 1943.

[The article below is reprinted from June 1997 edition of The International TNDM Newsletter. Chris Lawrence’s response from the August 1997 edition of The International TNDM Newsletter will be posted on Friday.]

CEV Calculations in Italy, 1943
by Niklas Zetterling

Perhaps one of the most debated results of the TNDM (and its predecessors) is the conclusion that the German ground forces on average enjoyed a measurable qualitative superiority over their US and British opponents. This was largely the result of calculations on situations in Italy in 1943-44, even though further engagements have been added since the results were first presented. The calculated German superiority over the Red Army, despite the much smaller number of engagements, has not aroused as much opposition. Similarly, the calculated Israeli effectiveness superiority over its enemies seems to have surprised few.

However, there are objections to the calculations on the engagements in Italy in 1943. These concern primarily the database, but some questions can also be raised about the way some of the calculations have been made, which may have consequences for the TNDM.

Here it is suggested that the German CEV [combat effectiveness value] superiority was higher than originally calculated. There are a number of flaws in the original calculations, each of which will be discussed separately below. With the exception of one issue, all of them, if corrected, tend to give a higher German CEV.

The Database on Italy 1943-44

According to the database, the German divisions had considerable fire support from GHQ artillery units. This is the only possible conclusion from the fact that several pieces of the types 15cm gun, 17cm gun, 21cm gun, and 15cm and 21cm Nebelwerfer are included in the data for individual engagements. These types of guns were almost exclusively confined to GHQ units. An example from the database is the set of three engagements Port of Salerno, Amphitheater, and Sele-Calore Corridor. These took place simultaneously (9-11 September 1943) with the German 16th Pz Div on the Axis side in all of them (no other division is included in the battles). Judging from the manpower figures, it seems to have been assumed that the division participated with one quarter of its strength in each of the two former battles and half its strength in the latter. According to the database, the numbers of guns were:

15cm gun 28
17cm gun 12
21cm gun 12
15cm NbW 27
21cm NbW 21

This would indicate that the 16th Pz Div was supported by the equivalent of more than five non-divisional artillery battalions. For the German army this is a suspiciously high number; usually there was something like one GHQ artillery battalion for each division, or even less. Research in the German Military Archives confirmed that the number of GHQ artillery units was far less than indicated in the HERO database. Among the useful documents found was a map showing the dispositions of 10th Army artillery units. This showed clearly that there was only one non-divisional artillery unit south of Rome at the time of the Salerno landings, the III/71 Nebelwerfer Battalion. The 557th Artillery Battalion (17cm gun) was also present, but it was included in the artillery regiment (33rd Artillery Regiment) of 15th Panzergrenadier Division during the second half of 1943. Thus the number of German artillery pieces in these engagements is exaggerated to an extent that cannot be considered insignificant. Since OLI values for artillery usually constitute a significant share of the total OLI of a force in the TNDM, errors in artillery strength cannot be dismissed easily.

While the example above is but one, further archival research has shown that the same kind of error occurs in all the engagements in September and October 1943. It has not been possible to check the engagements later in 1943, but a pattern can be recognized. The ratio between the numbers of various types of GHQ artillery pieces does not change much from battle to battle. It seems that when the database was developed, the researchers worked with the assumption that the German corps and army organizations had organic artillery, and this assumption may have been used as a “rule of thumb.” This is wrong, however; only artillery staffs and command and control units were included in the corps and army organizations, not firing units. Consequently we have a systematic error, which cannot be corrected without changing the contents of the database. It is worth emphasizing that we are discussing an exaggeration of German artillery strength of about 100%, which certainly is significant. Comparing the available archival records with the database also reveals errors in the numbers of tanks and antitank guns, but these are much smaller than the errors in artillery strength. Again, these errors always inflate the German strength in those engagements I have been able to check against archival records. This of course affects CEV calculations. But there are further objections to the CEV calculations.
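The leverage of such an error on model inputs is easy to illustrate. In the sketch below, the 40% artillery share of total OLI is an invented figure for illustration; only the roughly 100% overstatement comes from the text.

```python
# Sketch of why a ~100% overstatement of artillery matters: if artillery supplies
# a large share of a force's total OLI, halving the artillery count (the corrected
# figure) shrinks the force total noticeably. The 40% share is illustrative.

def corrected_total_oli(total_oli, artillery_share, overstatement=1.0):
    """Remove the overstated portion of the artillery OLI from the force total."""
    artillery_oli = total_oli * artillery_share
    true_artillery = artillery_oli / (1.0 + overstatement)  # 100% overstated -> half
    return total_oli - artillery_oli + true_artillery

# With artillery at 40% of a 1000-point force and a 100% overstatement, the
# true total is 800, not 1000 -- a 20% swing in the German side's input strength.
print(corrected_total_oli(1000.0, 0.40, overstatement=1.0))
```

A 20% reduction in the German strength input would, other things being equal, push the calculated German CEV upward, which is the direction of correction the article argues for.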

The Result Formula

The “result formula” weighs together three factors: casualties inflicted, distance advanced, and mission accomplishment. It seems that the first two do not raise many objections, even though their relative weights may always be subject to argument.

The third factor, mission accomplishment, is more dubious, however. At first glance it may seem natural to include such a factor. After all, a combat unit is supposed to accomplish the missions given to it. However, whether a unit accomplishes its mission or not depends both on its own qualities and on the realism of the mission assigned. Thus the mission accomplishment factor may reflect the qualities of the combat unit as well as those of the higher HQs and the general strategic situation. The Rapido crossing by the U.S. 36th Infantry Division can serve as an example. The division did not accomplish its mission, but whether the mission was realistic, given the circumstances, is dubious. Similarly, many German units probably received unrealistic missions in many situations, particularly during the last two years of the war (when most of the engagements in the database were fought). A more extreme example of situations in which unrealistic missions were given is the battle in Belorussia, June-July 1944, where German units were regularly given impossible missions. Possibly it is a general trend that the side fighting at a strategic disadvantage is more prone to give its combat units unrealistic missions.

On the other hand, it is quite clear that the mission assigned may well affect both casualty rates and advance rates. If, for example, the defender has a withdrawal mission, the attacker's advance may be greater than if the mission was to defend resolutely. This need not, however, be handled by including a mission factor in a result formula.

I have made some tentative runs with the TNDM, testing various CEV values to see which value produced an outcome in terms of casualties and ground gained as near as possible to the historical result. The results of these runs are very preliminary, but the tendency is that higher German CEVs produce more historical outcomes, particularly concerning combat.
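The fitting procedure described above amounts to a one-dimensional search over candidate CEV values. The sketch below shows the shape of such a search; the toy model inside it is invented purely for illustration and stands in for the TNDM runs themselves.

```python
# Hedged sketch of the fitting procedure: try candidate CEV values, "run the
# model," and keep the value whose predicted casualties and advance best match
# history. toy_model is an invented stand-in for the TNDM, not the real model.

def toy_model(cev, base_casualty_rate=0.08, base_advance_km=2.0):
    """Pretend engagement outcome: a higher defender CEV raises the attacker's
    casualties and cuts the attacker's advance."""
    return base_casualty_rate * cev, base_advance_km / cev

def best_cev(historical, candidates):
    """Return the candidate CEV minimizing relative error against history."""
    def error(cev):
        casualties, advance = toy_model(cev)
        hist_cas, hist_adv = historical
        return abs(casualties - hist_cas) / hist_cas + abs(advance - hist_adv) / hist_adv
    return min(candidates, key=error)

# Suppose history shows 0.11 casualties/day and a 1.4 km advance:
print(best_cev((0.11, 1.4), [1.0, 1.1, 1.2, 1.3, 1.4, 1.5]))  # 1.4
```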

Supply Situation

According to scattered information available in published literature, the U.S. artillery fired more shells per day per gun than did German artillery. In Normandy, US 155mm M1 howitzers fired 28.4 rounds per day during July, while August showed slightly lower consumption, 18 rounds per day. For the 105mm M2 howitzer the corresponding figures were 40.8 and 27.4. This can be compared to a German OKH study which, based on the experiences in Russia 1941-43, suggested that consumption of 105mm howitzer ammunition was about 13-22 rounds per gun per day, depending on the strength of the opposition encountered. For the 150mm howitzer the figures were 12-15.

While these figures should not be taken too seriously, as they are not from primary sources and also reflect conditions in different theaters, they do at least indicate that it cannot be taken for granted that ammunition expenditure is proportional to the number of gun barrels. In fact, there are further indications that Allied ammunition expenditure was greater than the German. Several German reports from Normandy indicate astonishment at the Allied ammunition expenditure.

It is unlikely that an increase in artillery ammunition expenditure results in a proportional increase in combat power. Rather, it is more likely that there is some kind of diminishing return with increased expenditure.
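One way to represent such diminishing returns is a concave response curve. The square-root form below is purely an illustrative assumption, not a relationship from the TNDM; the 20-round baseline is likewise invented.

```python
import math

# The text suggests combat effect grows less than proportionally with ammunition
# expenditure. A square-root curve is one common stand-in for diminishing
# returns; both the curve and the 20 rounds/gun/day baseline are assumptions.

def fire_effect(rounds_per_gun_day, baseline=20.0):
    """Relative fire effect, normalized to 1.0 at the baseline expenditure."""
    return math.sqrt(rounds_per_gun_day / baseline)

# Doubling expenditure from 20 to 40 rounds/gun/day raises effect by only ~41%.
print(fire_effect(40.0))
```

Under any such concave curve, the US advantage in shells fired per gun would translate into a real but less-than-proportional advantage in delivered combat power.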

General Problems with Non-Divisional Units

A division usually (but not necessarily) includes various support services, such as maintenance, supply, and medical services. Non-divisional combat units have to rely to a greater extent on corps and army for such support. This makes it complicated to include such units, since when entering, for example, manpower strength and truck strength in the TNDM, it is difficult to assess their contribution to the overall numbers.

Furthermore, the amount of such forces was not equal on the German and Allied sides. In general the Allied divisional slice was far greater than the German. In Normandy the US forces on 25 July 1944 had 812,000 men on the Continent, while the number of divisions was 18 (including the 5th Armored, which was in the process of landing on the 25th). This gives a divisional slice of 45,000 men. By comparison, the German 7th Army mustered 16 divisions and 231,000 men on 1 June 1944, giving a slice of 14,437 men per division. The main explanation for the difference is the non-divisional combat units and the logistical organization to support them. In general, non-divisional combat units are composed of powerful but supply-consuming types like armor, artillery, antitank, and antiaircraft. Thus their contribution to combat power, and their strain on the logistical apparatus, is considerable. However, I do not believe that the supporting units’ manpower and vehicles have been included in TNDM calculations.
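The divisional-slice figures above follow from simple division of theater manpower by the number of divisions; the sketch below just reproduces the text's arithmetic.

```python
# Reproducing the "divisional slice" arithmetic from the text: total theater
# manpower divided by number of divisions.

def divisional_slice(total_men, divisions):
    return total_men / divisions

us_slice = divisional_slice(812_000, 18)      # US forces, Normandy, 25 July 1944
german_slice = divisional_slice(231_000, 16)  # German 7th Army, 1 June 1944
# ~45,000 men per US division vs ~14,437 per German division: a 3:1 disparity
# driven largely by non-divisional combat and support units.
print(round(us_slice), german_slice)
```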

There are however further problems with non-divisional units. While the whereabouts of tank and tank destroyer units can usually be established with sufficient certainty, artillery can be much harder to pin down to a specific division engagement. This is of course a greater problem when the geographical extent of a battle is small.

Tooth-to-Tail Ratio

The lack of support units in non-divisional combat units was discussed above. One effect of this is to create a force with more OLI per man. This is the result of the unit’s “tail” belonging to some other part of the military organization.

In the TNDM there is a mobility formula that tends to favor units with many weapons and vehicles relative to the number of men. This became apparent when I was performing a great number of TNDM runs on engagements between Swedish brigades and Soviet regiments. The Soviet regiments usually contained rather few men, but still had many AFVs, artillery tubes, AT weapons, etc. The mobility formula in the TNDM favors such units. However, I do not think this reflects any phenomenon in the real world. The Soviet penchant for lean combat units, with supply, maintenance, and other services provided by higher echelons, is not a more effective solution in general, but was perhaps better suited to the particular constraints they were experiencing when forming units, training men, etc. These services did exist in the Soviet army too, but formally they were not part of the combat units.

This problem is to some extent reminiscent of how density is calculated (a problem discussed by Chris Lawrence in a recent issue of the Newsletter). It is comparatively easy to define the frontal limit of the deployment area of a force, and it is relatively easy to define the lateral limits too. It is, however, much more difficult to say where the rear limit of a force is located.

When entering forces in the TNDM a rear limit is, perhaps unintentionally, drawn. But if the combat unit includes support units, the rear limit is pushed farther back compared to a force whose combat units are well separated from support units.

To what extent this affects the CEV calculations is unclear. Using the original database values, the German forces are perhaps given too high combat strength when the great number of GHQ artillery units is included. On the other hand, if the GHQ artillery units are not included, the opposite may be true.

The Effects of Defensive Posture

The posture factors are difficult to analyze, since they alone do not portray the advantages of a defensive position. Such effects are also included in the terrain factors.

It seems that the numerical values for these factors were assigned on the basis of professional judgment. However, when the QJM was developed, it seems that the developers did not assume a German CEV superiority. Rather, the German CEV superiority seems to have been discovered later. It is possible that the professional judgment was about as wrong on the issue of posture effects as it was on CEV. Since the British and American forces were predominantly on the offensive, while the Germans mainly defended themselves, a German CEV superiority may, at least partly, be hidden in too high effects for defensive posture.

When using corrected input data on the 20 situations in Italy in September-October 1943, there is a tendency for the German CEV to be higher when they attack. Such a tendency is also discernible in the engagements presented in Hitler’s Last Gamble, Appendix H, even though the number of engagements in the latter case is very small.

As it stands now this is not really more than a hypothesis, since it will take an analysis of a greater number of engagements to confirm it. However, if such an analysis is done, it must be done using several sets of data. German and Allied attacks must be analyzed separately, and preferably the data would be separated further into sets for each relevant terrain type. Since the effects of defensive posture are intertwined with terrain factors, it is quite possible that the factors are correct for certain terrain types while wrong for others. It may also be that the factors differ for various opponents (due to differences in training, doctrine, etc.). It is also possible that the factors differ if the forces are predominantly composed of armor units or mainly of infantry.

One further problem with the effects of defensive posture is that they are probably strongly affected by the density of forces. It is likely that the main effect of the density of forces is the inability to use all the forces involved effectively. Thus it may be that this factor will not influence the outcome except when the density is comparatively high. However, what can be regarded as “high” probably depends much on terrain, road net quality, and the cross-country mobility of the forces.


While the TNDM has been criticized here, it is also fitting to praise the model. The very fact that it can be criticized in this way is a testimony to its openness. In a sense a model is also a theory, and to use Popperian terminology, the TNDM is also very testable.

It should also be emphasized that the greatest errors are probably those in the database. As previously stated, I can only conclude safely that the data on the engagements in Italy in 1943 are wrong; later engagements have not yet been checked against archival documents. Overall the errors do not represent a dramatic change in the CEV values. Rather, the Germans seem to have (in Italy 1943) a superiority on the order of 1.4-1.5, compared to an original figure of 1.2-1.3.

During September and October 1943, almost all the German divisions in southern Italy were mechanized or parachute divisions. This may have contributed to a higher German CEV. Thus it is not certain that the conclusions arrived at here are valid for German forces in general, even though this factor should not be exaggerated, since many of the German divisions in Italy were either newly raised (e.g., 26th Panzer Division) or rebuilt after the Stalingrad disaster (16th Panzer Division plus 3rd and 29th Panzergrenadier Divisions) or the Tunisian debacle (15th Panzergrenadier Division).

The Third World War of 1985


[This article was originally posted on 5 August 2016]

The seeming military resurgence of Vladimir Putin’s Russia has renewed concerns about the military balance between East and West in Europe. These concerns have evoked memories of the decades-long Cold War confrontation between NATO and the Warsaw Pact along the inner-German frontier. One of the most popular expressions of this conflict came in the form of a book titled The Third World War: August 1985, by British General Sir John Hackett. The book, a hypothetical account of a war between the Soviet Union, the United States, and assorted allies set in the near future, became an international best-seller.

Jeffrey H. Michaels, a Senior Lecturer in Defence Studies at the British Joint Services Command and Staff College, has published a detailed look at how Hackett and several senior NATO and diplomatic colleagues constructed the scenario portrayed in the book. Scenario construction is an important aspect of institutional war gaming. A war game is only as useful as the assumptions that underpin it are valid. As Michaels points out,

Regrettably, far too many scenarios and models, whether developed by military organizations, political scientists, or fiction writers, tend to focus their attention on the battlefield and the clash of armies, navies, air forces, and especially their weapons systems.  By contrast, the broader context of the war – the reasons why hostilities erupted, the political and military objectives, the limits placed on military action, and so on – are given much less serious attention, often because they are viewed by the script-writers as a distraction from the main activity that occurs on the battlefield.

Modelers and war gamers always need to keep in mind the fundamental importance of context in designing their simulations.

It is quite easy to project how one weapon system might fare against another, but taken out of a broader strategic context, such a projection is practically meaningless (apart from its marketing value), or worse, misleading.  In this sense, even if less entertaining or exciting, the degree of realism of the political aspects of the scenario, particularly policymakers’ rationality and cost-benefit calculus, and the key decisions that are taken about going to war, the objectives being sought, the limits placed on military action, and the willingness to incur the risks of escalation, should receive more critical attention than the purely battlefield dimensions of the future conflict.

These are crucially important points to consider when deciding how to assess the outcomes of hypothetical scenarios.

Dupuy’s Verities: Offensive Action

Sheridan’s final charge at Winchester by Thune de Thulstrup (ca. 1886) [Library of Congress]

The first of Trevor Dupuy’s Timeless Verities of Combat is:

Offensive action is essential to positive combat results.

As he explained in Understanding War (1987):

This is like saying, “A team can’t score in football unless it has the ball.” Although subsequent verities stress the strength, value, and importance of defense, this should not obscure the essentiality of offensive action to ultimate combat success. Even in instances where a defensive strategy might conceivably assure a favorable war outcome—as was the case of the British against Napoleon, and as the Confederacy attempted in the American Civil War—selective employment of offensive tactics and operations is required if the strategic defender is to have any chance of final victory. [pp. 1-2]

The offensive has long been a staple element of the principles of war. From the 1954 edition of the U.S. Army Field Manual FM 100-5, Field Service Regulations, Operations:

71. Offensive

Only offensive action achieves decisive results. Offensive action permits the commander to exploit the initiative and impose his will on the enemy. The defensive may be forced on the commander, but it should be deliberately adopted only as a temporary expedient while awaiting an opportunity for offensive action or for the purpose of economizing forces on a front where a decision is not sought. Even on the defensive the commander seeks every opportunity to seize the initiative and achieve decisive results by offensive action. [Original emphasis]

Interestingly enough, the offensive no longer retains its primary place in current Army doctrinal thought. It is now placed on a par with the defensive and stability operations. As the 2017 edition of the capstone FM 3-0, Operations, now lays it out:

Unified land operations are simultaneous offensive, defensive, and stability or defense support of civil authorities’ tasks to seize, retain, and exploit the initiative to shape the operational environment, prevent conflict, consolidate gains, and win our Nation’s wars as part of unified action (ADRP 3-0)…

At the heart of the Army’s operational concept is decisive action. Decisive action is the continuous, simultaneous combinations of offensive, defensive, and stability or defense support of civil authorities tasks (ADRP 3-0). During large-scale combat operations, commanders describe the combinations of offensive, defensive, and stability tasks in the concept of operations. As a single, unifying idea, decisive action provides direction for an entire operation. [p. I-16; original emphasis]

It is perhaps too easy to read too much into this change in emphasis. On the very next page, FM 3-0 describes offensive “tasks” thusly:

Offensive tasks are conducted to defeat and destroy enemy forces and seize terrain, resources, and population centers. Offensive tasks impose the commander’s will on the enemy. The offense is the most direct and sure means of seizing and exploiting the initiative to gain physical and cognitive advantages over an enemy. In the offense, the decisive operation is a sudden, shattering action that capitalizes on speed, surprise, and shock effect to achieve the operation’s purpose. If that operation does not destroy or defeat the enemy, operations continue until enemy forces disintegrate or retreat so they no longer pose a threat. Executing offensive tasks compels an enemy to react, creating or revealing additional weaknesses that an attacking force can exploit. [p. I-17]

The change in emphasis reflects recent U.S. military experience, in which decisive action has not yielded much in the way of decisive outcomes, as is mentioned in FM 3-0’s introduction. Joint force offensives in 2001 and 2003 “achieved rapid initial military success but no enduring political outcome, resulting in protracted counterinsurgency campaigns.” The Army now anticipates a future operating environment where joint forces can expect to “work together and with unified action partners to successfully prosecute operations short of conflict, prevail in large-scale combat operations, and consolidate gains to win enduring strategic outcomes” that are not necessarily predicated on offensive action alone. We may have to wait for the next edition of FM 3-0 to see whether the Army has drawn valid conclusions from the recent past.

Scoring Weapons And Aggregation In Trevor Dupuy’s Combat Models

[The article below is reprinted from the October 1997 edition of The International TNDM Newsletter.]

Consistent Scoring of Weapons and Aggregation of Forces:
The Cornerstone of Dupuy’s Quantitative Analysis of Historical Land Battles
James G. Taylor, PhD,
Dept. of Operations Research, Naval Postgraduate School


Col. Trevor N. Dupuy was an American original, especially as regards the quantitative study of warfare. As with many prophets, he was not entirely appreciated in his own land, particularly by its Military Operations Research (OR) community. However, after becoming rather familiar with the details of his mathematical modeling of ground combat based on historical data, I became aware of the basic scientific soundness of his approach. Unfortunately, his documentation of methodology was not always accepted by others, many of whom appeared to confuse a lack of mathematical sophistication in his documentation with a lack of scientific validity in his basic methodology.

The purpose of this brief paper is to review the salient points of Dupuy’s methodology from a system’s perspective, i.e., to view his methodology as a system, functioning as an organic whole to capture the essence of past combat experience (with an eye towards extrapolation into the future). The advantage of this perspective is that it immediately leads one to the conclusion that if one wants to use some functional relationship derived from Dupuy’s work, then one should also use his methodologies for scoring weapons, aggregating forces, and adjusting for operational circumstances, since this consistency is the only guarantee of being able to reproduce historical results and to project them into the future.

Implications (of this system’s perspective on Dupuy’s work) for current DOD models will be discussed. In particular, the Military OR community has developed quantitative methods for imputing values to weapon systems based on their attrition capability against opposing forces and force interactions.[1] One such approach is the so-called antipotential-potential method[2] used in TACWAR[3] to score weapons. However, one should not expect such scores to provide valid casualty estimates when combined with historically derived functional relationships such as the so-called ATLAS casualty-rate curves[4] used in TACWAR, because a different “yard-stick” (i.e., measuring system for estimating the relative combat potential of opposing forces) was used to develop such a curve.

Overview of Dupuy’s Approach

This section briefly outlines the salient features of Dupuy’s approach to the quantitative analysis and modeling of ground combat as embodied in his Tactical Numerical Deterministic Model (TNDM) and its predecessor, the Quantified Judgment Model (QJM). The interested reader can find details in Dupuy [1979] (see also Dupuy [1985][5], [1987], [1990]). Here we will view Dupuy’s methodology from a system approach, which seeks to discern its various components and their interactions and to view these components as an organic whole. Essentially, Dupuy’s approach involves the development of functional relationships from historical combat data (see Fig. 1) and then using these functional relationships to model future combat (see Fig. 2).

At the heart of Dupuy’s method is the investigation of historical battles, comparing the relationship of inputs (as quantified by relative combat power, denoted as Pa/Pd for that of the attacker relative to that of the defender in Fig. 1) (e.g. see Dupuy [1979, pp. 59-64]) to outputs (as quantified by extent of mission accomplishment, casualty effectiveness, and territorial effectiveness; see Fig. 2) (e.g. see Dupuy [1979, pp. 47-50]). The salient point is that within this scheme, the main input[6] (i.e. relative combat power) to a historical battle is a derived quantity. It is computed from formulas that involve three essential aspects: (1) the scoring of weapons (e.g. see Dupuy [1979, Chapter 2 and also Appendix A]), (2) aggregation methodology for a force (e.g. see Dupuy [1979, pp. 43-46 and 202-203]), and (3) situational-adjustment methodology for determining the relative combat power of opposing forces (e.g. see Dupuy [1979, pp. 46-47 and 203-204]). In the force-aggregation step the effects on weapons of Dupuy’s environmental variables and one operational variable (air superiority) are considered[7], while in the situation-adjustment step the effects on forces of his behavioral variables[8] (aggregated into a single factor called the relative combat effectiveness value (CEV)) and also the other operational variables are considered (Dupuy [1987, pp. 86-89]).

Figure 1.

Moreover, any functional relationships developed by Dupuy depend (unless shown otherwise) on his computational system for derived quantities, namely OLIs, force strengths, and relative combat power. Thus, Dupuy’s results depend in an essential manner on his overall computational system described immediately above. Consequently, any such functional relationship (e.g. casualty-rate curve) directly or indirectly derivative from Dupuy’s work should still use his computational methodology for determination of independent-variable values.

Fig. 1 also reveals another important aspect of Dupuy’s work: the development of reliable data on historical battles. Military judgment plays an essential role in the development of such historical data for a variety of reasons. Dupuy was essentially the only source of new secondary historical data developed from primary sources (see McQuie [1970] for further details). These primary sources are well known to be both incomplete and inconsistent, so that military judgment must be used to fill in the many gaps and reconcile observed inconsistencies. Moreover, military judgment also generates the working hypotheses for model development (e.g. identification of significant variables).

At the heart of Dupuy’s quantitative investigation of historical battles and subsequent model development is his own weapons-scoring methodology, which slowly evolved out of study efforts by the Historical Evaluation Research Organization (HERO) and its successor organizations (cf. HERO [1967] and compare with Dupuy [1979]). Early HERO [1967, pp. 7-8] work revealed that what one would today call weapons scores developed by other organizations were so poorly documented that HERO had to create its own methodology for developing the relative lethality of weapons, which eventually evolved into Dupuy’s Operational Lethality Indices (OLIs). Dupuy realized that his method was arbitrary (as indeed is its counterpart, called the operational definition, in formal scientific work), but felt that this would be ameliorated if the weapons-scoring methodology were consistently applied to historical battles. Unfortunately, this point is not clearly stated in Dupuy’s formal writings, although it was clearly (and compellingly) made by him in numerous briefings that this author heard over the years.

Figure 2.

In other words, from a system’s perspective, the functional relationships developed by Colonel Dupuy are part of his analysis system that includes this weapons-scoring methodology consistently applied (see Fig. 1 again). The derived functional relationships do not stand alone (unless further empirical analysis shows them to hold for any weapons-scoring methodology), but function in concert with computational procedures. Another essential part of this system is Dupuy’s aggregation methodology, which combines numbers, environmental circumstances, and weapons scores to compute the strength (S) of a military force. A key innovation by Colonel Dupuy [1979, pp. 202-203] was to use a nonlinear (more precisely, a piecewise-linear) model for certain elements of force strength. This innovation precluded the occurrence of military absurdities such as air firepower being fully substitutable for ground firepower, antitank weapons being fully effective when armor targets are lacking, etc. The final part of this computational system is Dupuy’s situational-adjustment methodology, which combines the effects of operational circumstances with force strengths to determine relative combat power, e.g. Pa/Pd.

To recapitulate, the determination of an Operational Lethality Index (OLI) for a weapon involves the combination of weapon lethality, quantified in terms of a Theoretical Lethality Index (TLI) (e.g. see Dupuy [1987, p. 84]), and troop dispersion[9] (e.g. see Dupuy [1987, pp. 84-85]). Weapons scores (i.e. the OLIs) are then combined with numbers (own side and enemy) and combat-environment factors to yield force strength. Six[10] different categories of weapons are aggregated, with nonlinear (i.e. piecewise-linear) models being used for the following three categories of weapons: antitank, air defense, and air firepower (i.e. close-air support). Operational variables, e.g. mobility, posture, surprise, etc. (Dupuy [1987, p. 87]), and behavioral variables (quantified as a relative combat effectiveness value (CEV)) are then applied to force strength to determine a side’s combat-power potential.
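As an illustration only, the aggregation logic just described might be sketched as follows. The weights, the cap ratio, and all the numbers here are invented for the example; they are not Dupuy’s actual constants or category structure.

```python
# Hypothetical sketch of Dupuy-style force aggregation. An OLI is a TLI
# discounted by a dispersion factor; category totals are summed into a
# force strength S, with a piecewise-linear cap on the "responsive"
# categories (antitank, air defense, close-air support) so that, e.g.,
# air firepower cannot substitute without limit for ground firepower.
# All values and the cap ratio below are invented for illustration.

def oli(tli, dispersion_factor):
    """Operational Lethality Index from a Theoretical Lethality Index."""
    return tli / dispersion_factor

def force_strength(ground_olis, antitank, air_defense, close_air,
                   cap_ratio=1.0):
    """Aggregate OLI totals into a force strength S."""
    base = sum(ground_olis)      # infantry, armor, artillery, etc.
    cap = cap_ratio * base       # responsive arms limited by base strength
    return (base
            + min(antitank, cap)
            + min(air_defense, cap)
            + min(close_air, cap))

# A grossly oversized antitank total is capped rather than fully counted:
s = force_strength([120.0, 40.0, 20.0], antitank=900.0,
                   air_defense=15.0, close_air=25.0)
```

The `min(..., cap)` terms are the piecewise-linear element: a responsive category counts at full value only up to a ceiling tied to the base ground strength, which is the behavior the text credits with precluding “military absurdities.”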

Requirement for Consistent Scoring of Weapons, Force Aggregation, and Situational Adjustment for Operational Circumstances

The salient point to be gleaned from Figs. 1 and 2 is that the same (or at least consistent) weapons-scoring, aggregation, and situational-adjustment methodologies must be used both for developing functional relationships and for playing them to model future combat. The corresponding computational methods function as a system (organic whole) for determining relative combat power, e.g. Pa/Pd. For the development of functional relationships from historical data, a force ratio (relative combat power of the two opposing sides, e.g. attacker’s combat power divided by that of the defender, Pa/Pd) is computed (i.e. it is a derived quantity) as the independent variable, with observed combat outcome being the dependent variable. Thus, as discussed above, this force ratio depends on the methodologies for scoring weapons, aggregating force strengths, and adjusting a force’s combat power for the operational circumstances of the engagement. It is a priori not clear that different scoring, aggregation, and situational-adjustment methodologies will lead to similar derived values. If such different computational procedures were to be used, these derived values should be recomputed and the corresponding functional relationships rederived and replotted.

However, users of the Tactical Numerical Deterministic Model (TNDM) (or, for that matter, its predecessor, the Quantified Judgment Model (QJM)) need not worry about this point, because it was apparently meticulously observed by Colonel Dupuy in all his work. Portions of his work, though, have found their way into a surprisingly large number of DOD models (usually not explicitly acknowledged), where the context and range of validity of the historical results have been largely ignored. The need for recalibration of the historical data and corresponding functional relationships has not been considered in applying Dupuy’s results for some important current DOD models.

Implications for Current DOD Models

A number of important current DOD models (namely, TACWAR and JICM, discussed below) make use of some of Dupuy’s historical results without recalibrating functional relationships such as loss rates and rates of advance as a function of some force ratio (e.g. Pa/Pd). As discussed above, it is not clear that such a procedure will capture the essence of past combat experience. Moreover, in calculating losses, Dupuy first determines personnel losses (expressed as a percent loss of personnel strength, i.e., number of combatants on a side) and then calculates equipment losses as a function of this casualty rate (e.g., see Dupuy [1971, pp. 219-223], also [1990, Chapters 5 through 7][11]). These latter functional relationships are apparently not observed in the models discussed below. In fact, only Dupuy (going back to Dupuy [1979][12]) takes personnel losses to depend on a force ratio and other pertinent variables, with materiel losses being taken as derivative from this casualty rate.

For example, TACWAR determines personnel losses[13] by computing a force ratio and then consulting an appropriate casualty-rate curve (referred to as empirical data), much in the same fashion as ATLAS did[14]. However, such a force ratio is computed using a linear model with weapon values determined by the so-called antipotential-potential method[15]. Unfortunately, this procedure may not be consistent with how the empirical data (i.e. the casualty-rate curves) were developed. Further research is required to demonstrate that valid casualty estimates are obtained when different weapon-scoring, aggregation, and situational-adjustment methodologies are used to develop casualty-rate curves from historical data and to use them to assess losses in aggregated combat models. Furthermore, TACWAR does not use Dupuy’s model for equipment losses (see above), although it does purport, as just noted, to use “historical data” (e.g., see Kerlin et al. [1975, p. 22]) to compute personnel losses as a function (among other things) of a force ratio (given by a linear relationship), involving close air support values in a way never used by Dupuy. Although this force-ratio determination methodology does have logical and mathematical merit, it is not the way the historical data were developed.

Moreover, RAND (Allen [1992]) has more recently developed what is called the situational force scoring (SFS) methodology for calculating force ratios in large-scale, aggregated-force combat situations to determine loss and movement rates. Here, SFS refers essentially to a force-aggregation and situation-adjustment methodology, which has many conceptual elements in common with Dupuy’s methodology (except, most notably, extensive testing against historical data, especially documentation of such efforts). This SFS was originally developed for RSAS[16] and is today used in JICM[17]. It also apparently uses a weapon-scoring system developed at RAND[18]. It purports (no documentation given [citation of unpublished work]) to be consistent with historical data (including the ATLAS casualty-rate curves) (Allen [1992, p. 41]), but again no consideration is given to recalibration of historical results for different weapon-scoring, force-aggregation, and situational-adjustment methodologies. SFS emphasizes adjusting force strengths according to the operational circumstances (the “situation”) of the engagement (including surprise), with many innovative ideas (but in some major ways it has little connection with the previous work of others[19]). The resulting model contains many more details than historical combat data would support. It is also a methodology that differs in many essential ways from that used previously by any investigator. In particular, it is doubtful that it develops force ratios in a manner consistent with Dupuy’s work.

Final Comments

Use of (sophisticated) mathematics for modeling past historical combat (and extrapolating it into the future for planning purposes) is no reason for ignoring Dupuy’s work. One would think that the current Military OR community would try to understand Dupuy’s work before trying to improve and extend it. In particular, Colonel Dupuy’s various computational procedures (including constants) must be considered as an organic whole (i.e. a system) supporting the development of functional relationships. If one ignores this computational system and simply tries to use some isolated aspect, the result may be interesting and even logically sound, but it probably lacks any scientific validity.


P. Allen, “Situational Force Scoring: Accounting for Combined Arms Effects in Aggregate Combat Models,” N-3423-NA, The RAND Corporation, Santa Monica, CA, 1992.

L. B. Anderson, “A Briefing on Anti-Potential Potential (The Eigenvalue Method for Computing Weapon Values),” WP-2, Project 23-31, Institute for Defense Analyses, Arlington, VA, March 1974.

B. W. Bennett, et al., “RSAS 4.6 Summary,” N-3534-NA, The RAND Corporation, Santa Monica, CA, 1992.

B. W. Bennett, A. M. Bullock, D. B. Fox, C. M. Jones, J. Schrader, R. Weissler, and B. A. Wilson, “JICM 1.0 Summary,” MR-383-NA, The RAND Corporation, Santa Monica, CA, 1994.

P. K. Davis and J. A. Winnefeld, “The RAND Strategic Assessment Center: An Overview and Interim Conclusions About Utility and Development Options,” R-2945-DNA, The RAND Corporation, Santa Monica, CA, March 1983.

T. N. Dupuy, Numbers, Predictions and War: Using History to Evaluate Combat Factors and Predict the Outcome of Battles, The Bobbs-Merrill Company, Indianapolis/New York, 1979.

T. N. Dupuy, Numbers, Predictions and War, Revised Edition, HERO Books, Fairfax, VA, 1985.

T. N. Dupuy, Understanding War: History and Theory of Combat, Paragon House Publishers, New York, 1987.

T. N. Dupuy, Attrition: Forecasting Battle Casualties and Equipment Losses in Modern War, HERO Books, Fairfax, VA, 1990.

General Research Corporation (GRC), “A Hierarchy of Combat Analysis Models,” McLean, VA, January 1973.

Historical Evaluation and Research Organization (HERO), “Average Casualty Rates for War Games, Based on Historical Data,” 3 Volumes in 1, Dunn Loring, VA, February 1967.

E. P. Kerlin and R. H. Cole, “ATLAS: A Tactical, Logistical, and Air Simulation: Documentation and User’s Guide,” RAC-TP-338, Research Analysis Corporation, McLean, VA, April 1969 (AD 850 355).

E. P. Kerlin, L. A. Schmidt, A. J. Rolfe, M. J. Hutzler, and D. L. Moody, “The IDA Tactical Warfare Model: A Theater-Level Model of Conventional, Nuclear, and Chemical Warfare, Volume II: Detailed Description,” R-211, Institute for Defense Analyses, Arlington, VA, October 1975 (AD B009 692L).

R. McQuie, “Military History and Mathematical Analysis,” Military Review 50, No. 5, 8-17 (1970).

S. M. Robinson, “Shadow Prices for Measures of Effectiveness, I: Linear Model,” Operations Research 41, 518-535 (1993).

J. G. Taylor, Lanchester Models of Warfare, Vols. I & II, Operations Research Society of America, Alexandria, VA, 1983. (a)

J. G. Taylor, “A Lanchester-Type Aggregated-Force Model of Conventional Ground Combat,” Naval Research Logistics Quarterly 30, 237-260 (1983). (b)


[1] For example, see Taylor [1983a, Section 7.18], which contains a number of examples. The basic references given there may be more accessible through Robinson [1993].

[2] This term was apparently coined by L. B. Anderson [1974] (see also Kerlin et al. [1975, Chapter I, Section D.3]).

[3] The Tactical Warfare (TACWAR) model is a theater-level, joint-warfare, computer-based combat model that is currently used for decision support by the Joint Staff and essentially all CINC staffs. It was originally developed by the Institute for Defense Analyses in the mid-1970s (see Kerlin et al. [1975]) under the name TACNUC, and has been continually upgraded up to (and including) the present day.

[4] For example, see Kerlin and Cole [1969], GRC [1973, Fig. 6-6], or Taylor [1983b, Fig. 5] (also Taylor [1983a, Section 7.13]).

[5] The only apparent difference between Dupuy [1979] and Dupuy [1985] is the addition of an appendix (Appendix C, “Modified Quantified Judgment Analysis of the Bekaa Valley Battle”) to the end of the latter (pp. 241-251). Hence, the page content is apparently the same for these two books for pp. 1-239.

[6] Technically speaking, one also has the engagement type and possibly several other descriptors (denoted in Fig. 1 as reduced list of operational circumstances) as other inputs to a historical battle.

[7] In Dupuy [1979, e.g. pp. 43-46] only environmental variables are mentioned, although basically the same formulas underlie both Dupuy [1979] and Dupuy [1987]. For simplicity, Fig. 1 and 2 follow this usage and employ the term “environmental circumstances.”

[8] In Dupuy [1979, e.g. pp. 46-47] only operational variables are mentioned, although basically the same formulas underlie both Dupuy [1979] and Dupuy [1987]. For simplicity, Fig. 1 and 2 follow this usage and employ the term “operational circumstances.”

[9] Chris Lawrence has kindly brought to my attention that since the same value for troop dispersion from an historical period (e.g. see Dupuy [1987, p. 84]) is used for both the attacker and the defender, troop dispersion does not actually affect the determination of relative combat power Pa/Pd.

[10] Eight different weapon types are considered, with three being classified as infantry weapons (e.g. see Dupuy [1979, pp. 43-44], [1981, pp. 85-86]).

[11] Chris Lawrence has kindly informed me that Dupuy’s work on relating equipment losses to personnel losses goes back to the early 1970s and even earlier (e.g. see HERO [1966]). Moreover, Dupuy’s [1992] book Future Wars gives some additional empirical evidence concerning the dependence of equipment losses on casualty rates.

[12] But actually going back much earlier as pointed out in the previous footnote.

[13] See Kerlin et al. [1975, Chapter I, Section D.l].

[14] See Footnote 4 above.

[15] See Kerlin et al. [1975, Chapter I, Section D.3]; see also Footnotes 1 and 2 above.

[16] The RAND Strategy Assessment System (RSAS) is a multi-theater aggregated combat model developed at RAND in the early 1980s (for further details see Davis and Winnefeld [1983] and Bennett et al. [1992]). It evolved into the Joint Integrated Contingency Model (JICM), which is a post-Cold War redesign of the RSAS (starting in FY92).

[17] The Joint Integrated Contingency Model (JICM) is a game-structured computer-based combat model of major regional contingencies and higher-level conflicts, covering strategic mobility, regional conventional and nuclear warfare in multiple theaters, naval warfare, and strategic nuclear warfare (for further details, see Bennett et al. [1994]).

[18] RAND apparently replaced one weapon-scoring system with another (e.g. see Allen [1992, pp. 9, 15, and 87-89]) without making any other changes in their SFS system.

[19] For example, Dupuy’s early HERO work (e.g. see Dupuy [1967]), reworks of these results by the Research Analysis Corporation (RAC) (e.g. see RAC [1973, Fig. 6-6]), and Dupuy’s later work (e.g. see Dupuy [1979]) all considered daily fractional casualties for both the attacker and the defender as the basic casualty-outcome descriptors (see also Taylor [1983b]). RAND does not do this, but instead considers the defender’s loss rate and a casualty exchange ratio as the basic casualty-production descriptors (Allen [1992, pp. 41-42]). The great value of using the former set of descriptors (i.e. attacker and defender fractional loss rates) is that not only is casualty assessment more straightforward (especially development of functional relationships from historical data) but also qualitative model behavior is readily deduced (see Taylor [1983b] for further details).

The Lanchester Equations and Historical Warfare


Allied force dispositions at the Battle of Anzio, on 1 February 1944. [U.S. Army/Wikipedia]

[The article below is reprinted from History, Numbers And War: A HERO Journal, Vol. 1, No. 1, Spring 1977, pp. 34-52]

The Lanchester Equations and Historical Warfare: An Analysis of Sixty World War II Land Engagements

By Janice B. Fain

Background and Objectives

The method by which combat losses are computed is one of the most critical parts of any combat model. The Lanchester equations, which state that a unit’s combat losses depend on the size of its opponent, are widely used for this purpose.

In addition to their use in complex dynamic simulations of warfare, the Lanchester equations have also served as simple mathematical models. In fact, during the last decade or so there has been an explosion of theoretical developments based on them. By now their variations and modifications are numerous, and “Lanchester theory” has become almost a separate branch of applied mathematics. However, compared with the effort devoted to theoretical developments, there has been relatively little empirical testing of the basic thesis that combat losses are related to force sizes.

One of the first empirical studies of the Lanchester equations was Engel’s classic work on the Iwo Jima campaign, in which he found a reasonable fit between computed and actual U.S. casualties (Note 1). Later studies were somewhat less supportive (Notes 2 and 3), but an investigation of Korean War battles showed that, when the simulated combat units were constrained to follow the tactics of their historical counterparts, casualties during combat could be predicted to within 1 to 13 percent (Note 4).

Taken together, these various studies suggest that, while the Lanchester equations may be poor descriptors of large battles extending over periods during which the forces were not constantly in combat, they may be adequate for predicting losses while the forces are actually engaged in fighting. The purpose of the work reported here is to investigate 60 carefully selected World War II engagements. Since the durations of these battles were short (typically two to three days), it was expected that the Lanchester equations would show a closer fit than was found in studies of larger battles. In particular, one of the objectives was to repeat, in part, Willard’s work on battles of the historical past (Note 3).

The Data Base

Probably the most nearly complete and accurate collection of combat data is the data on World War II compiled by the Historical Evaluation and Research Organization (HERO). From their data HERO analysts selected, for quantitative analysis, the following 60 engagements from four major Italian campaigns:

Salerno, 9-18 Sep 1943, 9 engagements

Volturno, 12 Oct-8 Dec 1943, 20 engagements

Anzio, 22 Jan-29 Feb 1944, 11 engagements

Rome, 14 May-4 June 1944, 20 engagements

The complete data base is described in a HERO report (Note 5). The work described here is not the first analysis of these data. Statistical analyses of weapon effectiveness and the testing of a combat model (the Quantified Judgment Method, QJM) have been carried out (Note 6). The work discussed here examines these engagements from the viewpoint of the Lanchester equations to consider the question: “Are casualties during combat related to the numbers of men in the opposing forces?”

The variables chosen for this analysis are shown in Table 1. The “winners” of the engagements were specified by HERO on the basis of casualties suffered, distance advanced, and subjective estimates of the percentage of the commander’s objective achieved. Variable 12, the Combat Power Ratio, is based on the Operational Lethality Indices (OLI) of the units (Note 7).

The general characteristics of the engagements are briefly described. Of the 60, there were 19 attacks by British forces, 28 by U.S. forces, and 13 by German forces. The attacker was successful in 34 cases; the defender, in 23; and the outcomes of 3 were ambiguous. With respect to terrain, 19 engagements occurred in flat terrain; 24 in rolling, or intermediate, terrain; and 17 in rugged, or difficult, terrain. Clear weather prevailed in 40 cases; 13 engagements were fought in light or intermittent rain; and 7 in medium or heavy rain. There were 28 spring and summer engagements and 32 fall and winter engagements.

Comparison of World War II Engagements With Historical Battles

Since one purpose of this work is to repeat, in part, Willard’s analysis, comparison of these World War II engagements with the historical battles (1618-1905) studied by him will be useful. Table 2 shows a comparison of the distribution of battles by type. Willard’s cases were divided into two categories: I. meeting engagements, and II. sieges, attacks on forts, and similar operations. HERO’s World War II engagements were divided into four types based on the posture of the defender: 1. delay, 2. hasty defense, 3. prepared position, and 4. fortified position. If postures 1 and 2 are considered very roughly equivalent to Willard’s category I, then in both data sets the division into the two gross categories is approximately even.

The distribution of engagements across force ratios, given in Table 3, indicated some differences. Willard’s engagements tend to cluster at the lower end of the scale (1-2) and at the higher end (4 and above), while the majority of the World War II engagements were found in mid-range (1.5 – 4) (Note 8). The frequency with which the numerically inferior force achieved victory is shown in Table 4. It is seen that in neither data set are force ratios good predictors of success in battle (Note 9).

Table 3.

Results of the Analysis

Willard’s Correlation Analysis

There are two forms of the Lanchester equations. One represents the case in which firing units on both sides know the locations of their opponents and can shift their fire to a new target when a “kill” is achieved. This leads to the “square” law, where the loss rate is proportional to the opponent’s size. The second form represents the situation in which only the general location of the opponent is known. This leads to the “linear” law, in which the loss rate is proportional to the product of both force sizes.
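The qualitative difference between the two laws can be seen in a small numerical integration. This is a generic sketch, not part of the original analysis; the killing rates and force sizes are arbitrary illustrative values.

```python
# Euler integration of the two Lanchester laws. Under the square law a
# side's loss rate is proportional to the opponent's size; under the
# linear law, to the product of both sizes.

def simulate(a0, d0, ka, kd, law="square", dt=0.01, t_max=200.0):
    """Integrate until one side is annihilated; return final (A, D)."""
    a, d, t = float(a0), float(d0), 0.0
    while a > 0.0 and d > 0.0 and t < t_max:
        if law == "square":
            da, dd = -kd * d, -ka * a
        else:  # linear law
            da, dd = -kd * a * d, -ka * a * d
        a = max(a + da * dt, 0.0)
        d = max(d + dd * dt, 0.0)
        t += dt
    return a, d

# With equal killing rates, a 2:1 attacker under the square law should
# finish with roughly sqrt(200**2 - 100**2), about 173, survivors:
a_final, d_final = simulate(200, 100, ka=0.05, kd=0.05, law="square")
```

The square-law run illustrates Lanchester’s concentration result: the winner’s surviving strength depends on the difference of the squares of the initial strengths, which is why the square law rewards massing forces.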

As Willard points out, large battles are made up of many smaller fights. Some of these obey one law while others obey the other, so that the overall result should be a combination of the two. Starting with a general formulation of Lanchester’s equations, where g is the exponent of the target unit’s size (that is, g is 0 for the square law and 1 for the linear law), he derives the following linear equation:

log (nc/mc) = log E + g log (mo/no) (1)

where nc and mc are the casualties, E is related to the exchange ratio, and mo and no are the initial force sizes. Linear regression produces a value for g. However, instead of lying between 0 and 1, as expected, the g’s range from -.27 to -.87, with the majority lying around -.5. (Willard obtains several values for g by dividing his data base in various ways—by force ratio, by casualty ratio, by historical period, and so forth.) A negative g value is unpleasant. As Willard notes:

Military theorists should be disconcerted to find g < 0, for in this range the results seem to imply that if the Lanchester formulation is valid, the casualty-producing power of troops increases as they suffer casualties (Note 3).

From his results, Willard concludes that his analysis does not justify the use of Lanchester equations in large-scale situations (Note 10).

Analysis of the World War II Engagements

Willard’s computations were repeated for the HERO data set. For these engagements, regression produced a value of -.594 for g (Note 11), in striking agreement with Willard’s results. Following his reasoning would lead to the conclusion that either the Lanchester equations do not represent these engagements, or that the casualty-producing power of forces increases as their size decreases.

However, since the Lanchester equations are so convenient analytically and their use is so widespread, it appeared worthwhile to reconsider this conclusion. In deriving equation (1), Willard used binomial expansions in which he retained only the leading terms. It seemed possible that the poor results might be due, in part, to this approximation. If the first two terms of these expansions are retained, the following equation results:

log (nc/mc) = log E + g log [(mo – mc)/(no – nc)] (2)

Repeating this regression on the basis of this equation leads to g = -.413 (Note 12), hardly an improvement over the initial results.
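The mechanics of fitting g in a regression of this form can be sketched as follows. The engagement data here are fabricated, constructed to lie exactly on a line with g = 0.5 and E = 2, purely to exercise the arithmetic; they are not Willard’s or HERO’s data.

```python
import math

# OLS fit of Willard's Eq. (1): log(nc/mc) = log E + g*log(mo/no).
# Each engagement is (mo, no, mc, nc): initial strengths and casualties.

def estimate_g(engagements):
    """Return (g, log E) from a least-squares fit on the logarithms."""
    xs = [math.log(mo / no) for (mo, no, mc, nc) in engagements]
    ys = [math.log(nc / mc) for (mo, no, mc, nc) in engagements]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    g = sxy / sxx
    return g, ybar - g * xbar

def make_engagement(mo, no, mc, g=0.5, e=2.0):
    """Invent an engagement lying exactly on the model line."""
    return (mo, no, mc, e * (mo / no) ** g * mc)

data = [make_engagement(200, 100, 50),
        make_engagement(300, 100, 40),
        make_engagement(150, 120, 60)]
g, log_e = estimate_g(data)
```

On real engagement data the points scatter widely around the fitted line, which is why the sign and size of the estimated g, rather than the fit quality alone, carried the weight in Willard’s argument.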

A second attempt was made to salvage this approach. Starting with raw OLI scores (Note 7), HERO analysts have computed “combat potentials” for both sides in these engagements, taking into account the operational factors of posture, vulnerability, and mobility; environmental factors like weather, season, and terrain; and (when the record warrants) psychological factors like troop training, morale, and the quality of leadership. Replacing the factor (mo/no) in Equation (1) by the combat power ratio produces the result g = .466 (Note 13).

While this is an apparent improvement in the value of g, it is achieved at the expense of somewhat distorting the Lanchester concept. It does preserve the functional form of the equations, but it requires a somewhat strange definition of “killing rates.”

Analysis Based on the Differential Lanchester Equations

Analysis of the type carried out by Willard appears to produce very poor results for these World War II engagements. Part of the reason for this is apparent from Figure 1, which shows the scatterplot of the dependent variable, log (nc/mc), against the independent variable, log (mo/no). It is clear that no straight line will fit these data very well, and one with a positive slope would not be much worse than the “best” line found by regression. To expect the exponent to account for the wide variation in these data seems unreasonable.

Here, a simpler approach will be taken. Rather than use the data to attempt to discriminate directly between the square and the linear laws, they will be used to estimate linear coefficients under each assumption in turn, starting with the differential formulation rather than the integrated equations used by Willard.

In their simplest differential form, the Lanchester equations may be written:

Square law: dA/dt = -kdD and dD/dt = -kaA (3)

Linear law: dA/dt = -k’dAD and dD/dt = -k’aAD (4)


where:

A (D) is the size of the attacker (defender),

dA/dt (dD/dt) is the attacker’s (defender’s) loss rate, and

ka, k’a (kd, k’d) are the attacker’s (defender’s) killing rates.
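As a rough illustration of how the square and linear laws behave, equations (3) and (4) can be stepped forward with simple Euler integration (the killing rates, force sizes, and time step below are hypothetical, chosen only to show the mechanics):

```python
def simulate(a0, d0, ka, kd, law="square", dt=0.01, t_end=1.0):
    """Step the Lanchester equations forward by Euler integration.

    Square law: dA/dt = -kd*D,   dD/dt = -ka*A
    Linear law: dA/dt = -kd*A*D, dD/dt = -ka*A*D
    Returns surviving strengths (A, D) at t_end, floored at zero.
    """
    a, d = float(a0), float(d0)
    for _ in range(int(t_end / dt)):
        if law == "square":
            da, dd = -kd * d, -ka * a
        else:
            da, dd = -kd * a * d, -ka * a * d
        a = max(a + da * dt, 0.0)
        d = max(d + dd * dt, 0.0)
    return a, d
```

Under the square law, with equal killing rates, the larger side ends the fight proportionally better off: the concentration effect the square law is known for.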

For this analysis, the day is taken as the basic time unit, and the loss rate per day is approximated by the casualties per day. Results of the linear regressions are given in Table 5. No conclusions should be drawn from the fact that the correlation coefficients are higher in the linear law case since this is expected for purely technical reasons (Note 14). A better picture of the relationships is again provided by the scatterplots in Figure 2. It is clear from these plots that, as in the case of the logarithmic forms, a single straight line will not fit the entire set of 60 engagements for either of the dependent variables.

To investigate ways in which the data set might profitably be subdivided for analysis, T-tests of the means of the dependent variable were made for several partitionings of the data set. The results, shown in Table 6, suggest that dividing the engagements by defense posture might prove worthwhile.

Results of the linear regressions by defense posture are shown in Table 7. For each posture, the equation that seemed to give a better fit to the data is underlined (Note 15). From this table, the following very tentative conclusions might be drawn:

  • In an attack on a fortified position, the attacker suffers casualties by the square law; the defender suffers casualties by the linear law. That is, the defender is aware of the attacker’s position, while the attacker knows only the general location of the defender. (This is similar to Deitchman’s guerrilla model; Note 16.)
  • This situation is apparently reversed in the cases of attacks on prepared positions and hasty defenses.
  • Delaying situations seem to be treated better by the square law for both attacker and defender.

Table 8 summarizes the killing rates by defense posture. The defender has a much higher killing rate than the attacker (almost 3 to 1) in a fortified position. In a prepared position and hasty defense, the attacker appears to have the advantage. However, in a delaying action, the defender’s killing rate is again greater than the attacker’s (Note 17).

Figure 3 shows the scatterplots for these cases. Examination of these plots suggests that a tentative answer to the study question posed above might be: “Yes, casualties do appear to be related to the force sizes, but the relationship may not be a simple linear one.�

In several of these plots it appears that two or more functional forms may be involved. Consider, for example, the defender’s casualties as a function of the attacker’s initial strength in the case of a hasty defense. This plot is repeated in Figure 4, where the points appear to fit the curves sketched there. It would appear that there are at least two, possibly three, separate relationships. Also on that plot, the individual engagements have been identified, and it is interesting to note that on the curve marked (1), five of the seven attacks were made by Germans—four of them from the Salerno campaign. It would appear from this that German attacks are associated with higher than average defender casualties for the attacking force size. Since there are so few data points, this cannot be more than a hint or interesting suggestion.

Future Research

This work suggests two conclusions that might have an impact on future lines of research on combat dynamics:

  • Tactics appear to be an important determinant of combat results. This conclusion, in itself, does not appear startling, at least not to the military. However, it does not always seem to have been the case that tactical questions have been considered seriously by analysts in their studies of the effects of varying force levels and force mixes.
  • Historical data of this type offer rich opportunities for studying the effects of tactics. For example, consideration of the narrative accounts of these battles might permit re-coding the engagements into a larger, more sensitive set of engagement categories. (It would, of course, then be highly desirable to add more engagements to the data set.)

While predictions of the future are always dangerous, I would nevertheless like to suggest what appears to be a possible trend. While military analysis of the past two decades has focused almost exclusively on the hardware of weapons systems, at least part of our future analysis will be devoted to the more behavioral aspects of combat.

Janice Bloom Fain, a Senior Associate of CACI, Inc., is a physicist whose special interests are in the applications of computer simulation techniques to industrial and military operations; she is the author of numerous reports and articles in this field. This paper was presented by Dr. Fain at the Military Operations Research Symposium at Fort Eustis, Virginia.


[1.] J. H. Engel, “A Verification of Lanchester’s Law,” Operations Research 2, 163-171 (1954).

[2.] For example, see R. L. Helmbold, “Some Observations on the Use of Lanchester’s Theory for Prediction,” Operations Research 12, 778-781 (1964); H. K. Weiss, “Lanchester-Type Models of Warfare,” Proceedings of the First International Conference on Operational Research, 82-98, ORSA (1957); H. K. Weiss, “Combat Models and Historical Data: The U.S. Civil War,” Operations Research 14, 750-790 (1966).

[3.] D. Willard, “Lanchester as a Force in History: An Analysis of Land Battles of the Years 1618-1905,” RAC-TD-74, Research Analysis Corporation (1962).

[4.] The method of computing the killing rates forced a fit at the beginning and end of the battles. See W. Fain, J. B. Fain, L. Feldman, and S. Simon, “Validation of Combat Models Against Historical Data,” Professional Paper No. 27, Center for Naval Analyses, Arlington, Virginia (1970).

[5.] HERO, “A Study of the Relationship of Tactical Air Support Operations to Land Combat, Appendix B, Historical Data Base,” Historical Evaluation and Research Organization, report prepared for the Defense Operational Analysis Establishment, U.K.T.S.D., Contract D-4052 (1971).

[6.] T. N. Dupuy, The Quantified Judgment Method of Analysis of Historical Combat Data, HERO Monograph (January 1973); HERO, “Statistical Inference in Analysis in Combat,” Annex F, Historical Data Research on Tactical Air Operations, prepared for Headquarters USAF, Assistant Chief of Staff for Studies and Analysis, Contract No. F-44620-70-C-0058 (1972).

[7.] The Operational Lethality Index (OLI) is a measure of weapon effectiveness developed by HERO.

[8.] Since Willard’s data did not indicate which side was the attacker, his force ratio is defined to be (larger force/smaller force). The HERO force ratio is (attacker/defender).

[9.] Since the criteria for success may have been rather different for the two sets of battles, this comparison may not be very meaningful.

[10.] This work includes more complex analysis in which the possibility that the two forces may be engaging in different types of combat is considered, leading to the use of two exponents rather than the single one. Stochastic combat processes are also treated.

[11.] Correlation coefficient = -.262; intercept = .00115; slope = -.594.

[12.] Correlation coefficient = -.184; intercept = .0539; slope = -.413.

[13.] Correlation coefficient = .303; intercept = -.638; slope = .466.

[14.] Correlation coefficients for the linear law are inflated with respect to the square law since the independent variable is a product of force sizes and, thus, has a higher variance than the single force size unit in the square law case.

[15.] This is a subjective judgment based on the following considerations: Since the correlation coefficient is inflated for the linear law, when it is lower the square law case is chosen. When the linear law correlation coefficient is higher, the case with the intercept closer to 0 is chosen.

[16.] S. J. Deitchman, “A Lanchester Model of Guerrilla Warfare,” Operations Research 10, 818-827 (1962).

[17.] As pointed out by Mr. Alan Washburn, who prepared a critique on this paper, when comparing numerical values of the square law and linear law killing rates, the differences in units must be considered. (See footnotes to Table 7).

What Is A Breakpoint?


French retreat from Russia in 1812, by Illarion Mikhailovich Pryanishnikov (1874). [Wikipedia]

After discussing with Chris the series of recent posts on the subject of breakpoints, it seemed appropriate to provide a better definition of exactly what a breakpoint is.

Dorothy Kneeland Clark was the first to define the notion of a breakpoint in her study, Casualties as a Measure of the Loss of Combat Effectiveness of an Infantry Battalion (Operations Research Office, The Johns Hopkins University: Baltimore, 1954). She found it was not quite as clear-cut as it seemed and the working definition she arrived at was based on discussions and the specific combat outcomes she found in her data set [pp 9-12].


The following definitions were developed out of many discussions. A unit is considered to have lost its combat effectiveness when it is unable to carry out its mission. The onset of this inability constitutes a breakpoint. A unit’s mission is the objective assigned in the current operations order or any other instructional directive, written or verbal. The objective may be, for example, to attack in order to take certain positions, or to defend certain positions. 

How does one determine when a unit is unable to carry out its mission? The obvious indication is a change in operational directive: the unit is ordered to stop short of its original goal, to hold instead of attack, to withdraw instead of hold. But one or more extraneous elements may cause the issue of such orders: 

(1) Some other unit taking part in the operation may have lost its combat effectiveness, and its predicament may force changes in the tactical plan. For example, the inability of one infantry battalion to take a hill may require that the two adjoining battalions be stopped to prevent exposing their flanks by advancing beyond it. 

(2) A unit may have been assigned an objective on the basis of a G-2 estimate of enemy weakness which, as the action proceeds, proves to have been over-optimistic. The operations plan may, therefore, be revised before the unit has carried out its orders to the point of losing combat effectiveness. 

(3) The commanding officer, for reasons quite apart from the tactical attrition, may change his operations plan. For instance, General Ridgway in May 1951 was obliged to cancel his plans for a major offensive north of the 38th parallel in Korea in obedience to top level orders dictated by political considerations. 

(4) Even if the supposed combat effectiveness of the unit is the determining factor in the issuance of a revised operations order, a serious difficulty in evaluating the situation remains. The commanding officer’s decision is necessarily made on the basis of information available to him plus his estimate of his unit’s capacities. Either or both of these bases may be faulty. The order may belatedly recognize a collapse which has in fact occurred hours earlier, or a commanding officer may withdraw a unit which could hold for a much longer time. 

It was usually not hard to discover when changes in orders resulted from conditions such as the first three listed above, but it proved extremely difficult to distinguish between revised orders based on a correct appraisal of the unit’s combat effectiveness and those issued in error. It was concluded that the formal order for a change in mission cannot be taken as a definitive indication of the breakpoint of a unit. It seemed necessary to go one step farther and search the records to learn what a given battalion did regardless of provisions in formal orders… 


In the engagements studied the following categories of breakpoint were finally selected: 

Category of Breakpoint (No. Analyzed) 

I. Attack → rapid reorganization → attack 

II. Attack → defense (no longer able to attack without a few days of recuperation and reinforcement) 

III. Defense → withdrawal by order to a secondary line 

IV. Defense → collapse 
Disorganization and panic were taken as unquestionable evidence of loss of combat effectiveness. It appeared, however, that there were distinct degrees of magnitude in these experiences. In addition to the expected breakpoints at attack → defense and defense → collapse, a further category, I, seemed to be indicated to include situations in which an attacking battalion was “pinned down” or forced to withdraw in partial disorder but was able to reorganize in 4 to 24 hours and continue attacking successfully. 

Category II includes (a) situations in which an attacking battalion was ordered into the defensive after severe fighting or temporary panic; (b) situations in which a battalion, after attacking successfully, failed to gain ground although still attempting to advance and was finally ordered into defense, the breakpoint being taken as occurring at the end of successful advance. In other words, the evident inability of the unit to fulfill its mission was used as the criterion for the breakpoint whether orders did or did not recognize its inability. Battalions after experiencing such a breakpoint might be able to recuperate in a few days to the point of renewing successful attack or might be able to continue for some time in defense. 

The sample of breakpoints coming under category IV, defense → collapse, proved to be very small (5) and unduly weighted in that four of the examples came from the same engagement. It was, therefore, discarded as probably not representative of the universe of category IV breakpoints,* and another category (III) was added: situations in which battalions on the defense were ordered withdrawn to a quieter sector. Because only those instances were included in which the withdrawal orders appeared to have been dictated by the condition of the unit itself, it is believed that casualty levels for this category can be regarded as but slightly lower than those associated with defense → collapse. 

In both categories II and III, “defense” represents an active situation in which the enemy is attacking aggressively. 

* It had been expected that breakpoints in this category would be associated with very high losses. Such did not prove to be the case. In whatever way the data were approached, most of the casualty averages were only slightly higher than those associated with category II (attack → defense), although the spread in data was wider. It is believed that factors other than casualties, such as bad weather, difficult terrain, and heavy enemy artillery fire undoubtedly played major roles in bringing about the collapse in the four units taking part in the same engagement. Furthermore, the casualty figures for the four units themselves are in question because, as the situation deteriorated, many of the men developed severe cases of trench foot and combat exhaustion, but were not evacuated, as they would have been in a less desperate situation, and did not appear in the casualty records until they had made their way to the rear after their units had collapsed.

In 1987-1988, Trevor Dupuy and colleagues at Data Memory Systems, Inc. (DMSi), Janice Fain, Rich Anderson, Gay Hammerman, and Chuck Hawkins sought to create a broader, more generally applicable definition for breakpoints for the study, Forced Changes of Combat Posture (DMSi, Fairfax, VA, 1988) [pp. I-2-3]:

The combat posture of a military force is the immediate intention of its commander and troops toward the opposing enemy force, together with the preparations and deployment to carry out that intention. The chief combat postures are attack, defend, delay, and withdraw.

A change in combat posture (or posture change) is a shift from one posture to another, as, for example, from defend to attack or defend to withdraw. A posture change can be either voluntary or forced. 

A forced posture change (FPC) is a change in combat posture by a military unit that is brought about, directly or indirectly, by enemy action. Forced posture changes are characteristically and almost always changes to a less aggressive posture. The most usual FPCs are from attack to defend and from defend to withdraw (or retrograde movement). A change from withdraw to combat ineffectiveness is also possible. 

Breakpoint is a term sometimes used as synonymous with forced posture change, and sometimes used to mean the collapse of a unit into ineffectiveness or rout. The latter meaning is probably more common in general usage, while forced posture change is the more precise term for the subject of this study. However, for brevity and convenience, and because this study has been known informally since its inception as the “Breakpoints” study, the term breakpoint is sometimes used in this report. When it is used, it is synonymous with forced posture change.

Hopefully this will help clarify the previous discussions of breakpoints on the blog.

Human Factors In Combat: Syrian Strike Edition


Missile fire lit up the Damascus sky last week as the U.S. and allies launched an attack on chemical weapons sites. [Hassan Ammar, AP/USA Today]

Even as pundits and wonks debate the political and strategic impact of the 14 April combined U.S., British, and French cruise missile strike on Assad regime chemical warfare targets in Syria, it has become clear that the effort was a notable tactical success.

Despite ample warning that the strike was coming, the Syrian regime’s Russian-made S-200 surface-to-air missile defense system failed to shoot down a single incoming missile. The U.S. Defense Department claimed that all 105 cruise missiles fired struck their targets. It also reported that the Syrians fired 40 interceptor missiles, but that nearly all were launched after the incoming cruise missiles had already struck their targets.

Although cruise missiles are difficult to track and engage even with fully modernized air defense systems, the dismal performance of the Syrian network was a surprise to many analysts, given the wary respect paid to it by U.S. military leaders in the recent past. While the S-200 dates from the 1960s, many surmise that an erosion in the combat effectiveness of the personnel manning the system is the real culprit.

[A] lack of training, command and control and other human factors are probably responsible for the failure, analysts said.

“It’s not just about the physical capability of the air defense system,” said David Deptula, a retired, three-star Air Force general. “It’s about the people who are operating the system.”

The Syrian regime has become dependent upon assistance from Russia and Iran to train, equip, and maintain its military forces. Russian forces in Syria have deployed the more sophisticated S-400 air defense system to protect their air and naval bases, which reportedly tracked but did not engage the cruise missile strike. The Assad regime is also believed to field the Russian-made Pantsir missile and air-defense artillery system, but it likely was not deployed near enough to the targeted facilities to help.

Despite the pervasive role technology plays in modern warfare, the human element remains the most important factor in determining combat effectiveness.

U.S. Army Invests In Revitalizing Long Range Precision Fires Capabilities


U.S. Marines from the 11th MEU fire their M777 Lightweight 155mm Howitzer during Exercise Alligator Dagger, Dec. 18, 2016. [U.S. Marine Corps/Lance Cpl. Zachery C. Laning]

In 2016, Michael Jacobson and Robert H. Scales amplified a warning that after years of neglect during the counterinsurgency war in Iraq and Afghanistan, the U.S. was falling behind potential adversaries in artillery and long range precision fires capabilities. The U.S. Army had already taken note of the performance of Russian artillery in Ukraine, particularly the strike at Zelenopillya in 2014.

Since then, the U.S. Army and Marine Corps have started working on a new Multi-Domain Battle concept aimed at countering the anti-access/area denial (A2/AD) capabilities of potential foes. In 2017, U.S. Army Chief of Staff General Mark Milley made rapid improvement in long range precision fires capabilities the top priority for the service’s modernization effort. It currently aims to field new field artillery, rocket, and missile weapons capable of striking at distances from 70 to 500 kilometers – double the existing ranges – within five years.

The value of ground-based long-range precision fires has been demonstrated recently by the effectiveness of U.S. artillery support, particularly from U.S. Army and Marine Corps 155mm howitzers, for Iraqi security forces retaking Mosul, for Syrian Democratic Forces assaulting Raqqa, and in protecting Syrian Kurds attacked by Russian mercenaries and Syrian regime forces.

According to Army historian Luke O’Brian, the Fiscal Year 2019 Defense budget includes funds to buy 28,737 XM1156 Precision Guided Kit (PGK) 155mm howitzer munitions, which includes replacements for the 6,269 rounds expended during Operation INHERENT RESOLVE. O’Brian also notes that the Army will also buy 2,162 M982 Excalibur 155mm rounds in 2019 and several hundred each in following years.

In addition, in an effort to reduce the dependence on potentially vulnerable Global Positioning System (GPS) satellite networks for precision fires capabilities, the Army has awarded a contract to BAE Systems to develop Precision Guided Kit-Modernization (PGK-M) rounds with internal navigational capacity.

While the numbers appear large at first glance, data on U.S. artillery expenditures in Operation DESERT STORM and IRAQI FREEDOM (also via Luke O’Brian) shows just how much the volume of long-range fires has changed just since 1991. For the U.S. at least, precision fires have indeed replaced mass fires on the battlefield.

Breakpoints in U.S. Army Doctrine


U.S. Army prisoners of war captured by German forces during the Battle of the Bulge in 1944. [Wikipedia]

One of the least studied aspects of combat is battle termination. Why do units in combat stop attacking or defending? Shifts in combat posture (attack, defend, delay, withdrawal) are usually voluntary, directed by a commander, but they can also be involuntary, as a result of direct or indirect enemy action. Why do involuntary changes in combat posture, known as breakpoints, occur?

As Chris pointed out in a previous post, the topic of breakpoints has only been addressed by two known studies since 1954. Most existing military combat models and wargames address breakpoints in at least a cursory way, usually through some calculation based on personnel casualties. Both of the breakpoints studies suggest that involuntary changes in posture are seldom related to casualties alone, however.

Current U.S. Army doctrine addresses changes in combat posture through discussions of culmination points in the attack, and transitions from attack to defense, defense to counterattack, and defense to retrograde. But these all pertain to voluntary changes, not breakpoints.

Army doctrinal literature has little to say about breakpoints, either in the context of friendly forces or potential enemy combatants. The little it does say relates to the effects of fire on enemy forces and is based on personnel and material attrition.

According to ADRP 1-02 Terms and Military Symbols, an enemy combat unit is considered suppressed after suffering 3% personnel casualties or material losses, neutralized by 10% losses, and destroyed upon sustaining 30% losses. The sources and methodology for deriving these figures are unknown, although these specific terms and numbers have been a part of Army doctrine for decades.
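Reduced to code, these doctrinal thresholds amount to a simple classification by loss fraction. A minimal sketch (the function name and the "effective" label for sub-threshold losses are mine, not doctrine):

```python
def assess_effect(loss_fraction):
    """Classify fire effects per the ADRP 1-02 thresholds cited above:
    3% -> suppressed, 10% -> neutralized, 30% -> destroyed."""
    if loss_fraction >= 0.30:
        return "destroyed"
    if loss_fraction >= 0.10:
        return "neutralized"
    if loss_fraction >= 0.03:
        return "suppressed"
    return "effective"
```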

The joint U.S. Army and U.S. Marine Corps vision of future land combat foresees battlefields that are highly lethal and demanding on human endurance. How will such a future operational environment affect combat performance? Past experience undoubtedly offers useful insights but there seems to be little interest in seeking out such knowledge.

Trevor Dupuy criticized the U.S. military in the 1980s for its lack of understanding of the phenomenon of suppression and other effects of fire on the battlefield, and its seeming disinterest in studying it. Not much appears to have changed since then.

Abstraction and Aggregation in Wargame Modeling


[IPMS/USA Reviews]

“All models are wrong, some models are useful.” – George Box

Models, no matter what their subjects, must always be an imperfect copy of the original. The term “model” inherently has this connotation. If the subject is exact and precise, then it is a duplicate, a replica, a clone, or a copy, but not a “model.” The most common dimension to be compromised is generally size, or more literally the three spatial dimensions of length, width and height. A good example of this would be a scale model airplane, generally available in several ratios to the original, such as 1/144, 1/72 or 1/48 (which are, interestingly, all multiples of 12 … there is also 1/100 for the more decimal-minded). These mean that the model airplane at 1/72 scale would be 72 times smaller: take the length, width and height measurements of the real item, and divide by 72 to get the model’s values.

If we took the real item’s weight and divided it by 72, however, we would not get our model’s weight, even if the same or similar materials were used: weight tracks volume, which shrinks by the cube of the scale factor, not linearly. Generally, the model has a different purpose than replicating the subject’s functionality. It is helping to model the subject’s qualities, or to mimic them in some useful way. In the case of the 1/72 plastic model airplane of the F-15J fighter, this might be replicating the sight of a real F-15J, to satisfy the desire of the youth to look at the F-15J and to imagine themselves taking flight. Or it might be for pilots at a flight school to mimic air combat with models instead of ha
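The scaling arithmetic can be made concrete. A small sketch, assuming (unrealistically) that the model were built of the same material as the original:

```python
SCALE = 72  # a 1/72 scale kit

def model_length(real_length):
    """Linear dimensions shrink by the scale factor."""
    return real_length / SCALE

def model_weight_same_material(real_weight):
    """Weight tracks volume, so it shrinks by the cube of the scale factor."""
    return real_weight / SCALE ** 3
```

Dividing the real weight by 72 would overstate the model's weight by a factor of 72² (over five thousand).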

The model aircraft is a simple physical object; once built, it does not change over time (unless you want to count dropping it and breaking it…). A real F-15J, however, is a dynamic physical object, which changes considerably over the course of its normal operation. It is loaded with fuel and ordnance, both of which have a huge effect on its weight, and thus its performance characteristics. Also, it may be occupied by different crew members, whose experience and skills may vary considerably. These qualities of the unit need to be taken into account if the purpose of the model is to represent the aircraft. The classic example of this is a flight envelope model of an F-15A/C:


This flight envelope is itself a model: it represents the flight characteristics of the F-15 using two primary quantitative axes – altitude and speed (in Mach number) – along with throttle setting. Perhaps the most interesting thing about it is the realization that an F-15 slows down as it descends. Are these particular qualities of an F-15 required to model air combat involving such an aircraft?

How to Apply This Modeling Process to a Wargame?

The purpose of the war game is to model or represent the possible outcome of a real combat situation, played forward in the model at whatever pace and scale the designer has intended.

As mentioned previously, my colleague and I are playing Asian Fleet, a war game that covers several types of naval combat, including those involving air units, surface units and submarine units. This was published in 2007, and updated in 2010. We’ve selected a scenario that has only air units on either side. The premise of this scenario is quite simple:

The Chinese air force, in trying to prevent the United States from intervening in a Taiwan invasion, will carry out an attack on the SDF as well as the US military base on Okinawa. Forces around Shanghai consisting of state-of-the-art fighter bombers and long-range attack aircraft have been placed for the invasion of Taiwan, and an attack on Okinawa would be carried out with a portion of these forces. [Asian Fleet Scenario Book]

Of course, this game is a model of reality. The infinite geospatial and temporal possibilities of space-time so familiar to us have been replaced by highly aggregated, discrete buckets, such as turns that may last for a day, or eight hours. Latitude, longitude and altitude are replaced with a two-dimensional hexagonal “honeycomb” surface. Hence, distance is no longer computed in miles or meters, but rather in “hexes”, each of which is about 50 nautical miles. Aircraft are effectively aloft, or on the ground, although a “high mission profile” will provide endurance benefits. Submarines are considered underwater, or may use “deep mode” in an attempt to hide from sonar searches.

Maneuver units are represented by “counters” or virtual chits to be moved about the map as play progresses. Their level of aggregation varies from large and powerful ships and subs represented individually, to smaller surface units and weaker subs grouped and represented by a single counter (a “flotilla”), to squadrons or regiments of aircraft represented by a single counter. Depending upon the nation and the military branch, this may be as few as 3-5 aircraft in a maritime patrol aircraft (MPA) detachment (“recon” in this game), roughly 10-12 aircraft in a bomber unit, or 24 or even 72 aircraft in a fighter unit (“interceptor” in this game).
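Distance on such a map reduces to counting hexes. A sketch using axial hex coordinates (the coordinate convention is an assumption for illustration; the game's own map indexing may differ), with the stated scale of about 50 nautical miles per hex:

```python
NM_PER_HEX = 50  # map scale stated in the game

def hex_distance(a, b):
    """Distance in hexes between two axial coordinates (q, r)."""
    aq, ar = a
    bq, br = b
    return (abs(aq - bq) + abs(ar - br) + abs((aq + ar) - (bq + br))) // 2

def range_nm(a, b):
    """Approximate range in nautical miles between two hexes."""
    return hex_distance(a, b) * NM_PER_HEX
```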

Enough Theory, What Happened?!

The Chinese Air Force mobilized their H6H bombers, escorted by large numbers of Flankers (J11 and Su-30MK2 fighters) from the Shanghai area, and headed east towards Okinawa. The US Air Force F-15Cs, supported by an airborne warning and control system (AWACS) aircraft, detected this inbound force and delayed engagement until their Japanese F-15J unit on combat air patrol (CAP) could support them, then engaged the Chinese force about 50 miles from the AWACS orbits. In this game, air combat is broken down into two phases: long-range air-to-air (LRAA) combat (aka beyond visual range, BVR), and “regular” air combat, or within visual range (WVR) combat.

In BVR combat, only units marked as equipped with BVR capability may attack:

  • 2 x F-15C units have a factor of 32; scoring a hit in 5 out of 10 cases, or roughly 50%.
  • Su-30MK2 unit has a factor of 16; scoring a hit in 4 out of 10 cases, ~40%.

To these numbers a modifier of +2 exists when the attacker is supported by AWACS, so the odds to score a hit increase to roughly 70% for the F-15Cs … but in our example they miss, and the Chinese shot misses as well. Thus, the combat proceeds to WVR.

In WVR combat, each opposing side sums their aerial combat factors:

  • 2 x F-15C (32) + F-15J (13) = 45
  • Su-30MK2 (15) + J11 (13) + H6H (1) = 29

These two numbers are then expressed as a ratio, attacker-to-defender (45:29), and rounded down in favor of the defender (1:1), and then a ten-sided-die (d10) is rolled to consult the Air-to-Air Combat Results Table, on the “CAP/AWACS Interception” line. The die was rolled, and a result of “0/0r” was achieved, which basically says that neither side takes losses, but the defender is turned back from the mission (“r” being code for “return to base”). Given the +2 modifier for the AWACS, the worst outcome for the Allies would be a mutual return to base result (“0r/0r”). The best outcome would be inflicting two “steps” of damage, and sending the rest home (“0/2r”). A step of loss is about one half of an air unit, represented by flipping over the counter or chit, and operating with the combat factors at about half strength.
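The dice mechanics described above can be sketched in code (the function names are mine; the +2 AWACS die modifier and the defender-favoring rounding of the odds column follow the description in the text):

```python
def bvr_hit_chance(base_successes, awacs=False, die_faces=10):
    """BVR attack: a hit on base_successes faces of a d10, +2 faces with AWACS."""
    successes = base_successes + (2 if awacs else 0)
    return min(successes, die_faces) / die_faces

def wvr_odds_column(attacker_factors, defender_factors):
    """Sum each side's combat factors and round the ratio down in the
    defender's favor, e.g. 45:29 -> '1:1'."""
    a = sum(attacker_factors)
    d = sum(defender_factors)
    if a >= d:
        return f"{a // d}:1"
    return f"1:{-(-d // a)}"  # the defender's side of the ratio rounds up
```

For the engagement above: the F-15Cs hit on 7 of 10 faces with AWACS support, and 45:29 rounds down to the 1:1 column.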

To sum this up, as the Allied commander, my conclusion was that the Americans were hung-over or asleep for this engagement.

I am encouraged by some similarities between this game and the fantastic detail that TDI has just posted about the DACM model, here and here. Thus, I plan not only to dissect this Asian Fleet game (VGAF), but also to do a gap analysis between VGAF and DACM.

The Dupuy Air Campaign Model (DACM)

[The article below is reprinted from the April 1997 edition of The International TNDM Newsletter. A description of the TDI Air Model Historical Data Study can be found here.]

The Dupuy Air Campaign Model
by Col. Joseph A. Bulger, Jr., USAF, Ret.

As part of the DACM [Dupuy Air Campaign Model] effort, The Dupuy Institute created a draft model in a spreadsheet format to show how such a model would calculate attrition. Below are the actual printouts of the “interim methodology demonstration,” which show the types of inputs, outputs, and equations used for the DACM. The spreadsheet was created by Col. Bulger, while many of the formulae were the work of Robert Shaw.

The Dupuy Institute Air Model Historical Data Study

British Air Ministry aerial combat diagram that sought to explain how the RAF had fought off the Luftwaffe. [World War II Today]

[The article below is reprinted from the April 1997 edition of The International TNDM Newsletter.]

Air Model Historical Data Study
by Col. Joseph A. Bulger, Jr., USAF, Ret

The Air Model Historical Study (AMHS) was designed to lead to the development of an air campaign model for use by the Air Command and Staff College (ACSC). This model, never completed, became known as the Dupuy Air Campaign Model (DACM). It was a team effort led by Trevor N. Dupuy and included the active participation of Lt. Col. Joseph Bulger, Gen. Nicholas Krawciw, Chris Lawrence, Dave Bongard, Robert Schmaltz, Robert Shaw, Dr. James Taylor, John Kettelle, Dr. George Daoust and Louis Zocchi, among others. After Dupuy’s death, I took over as the project manager.

At the first meeting of the team Dupuy assembled for the study, it became clear that this effort would be a serious challenge. In his own style, Dupuy was careful to provide essential guidance while, at the same time, cultivating a broad investigative approach to the unique demands of modeling air combat. It would have been no surprise if the initial guidance had established a focus on the analytical approach, level of aggregation, and overall philosophy of the QJM [Quantified Judgement Model] and TNDM [Tactical Numerical Deterministic Model]. It was clear that Trevor had no intention of steering the study into an air combat modeling methodology based directly on QJM/TNDM. To the contrary, he insisted on a rigorous derivation of the factors that would permit the final choice of model methodology.

At the time of Dupuy’s death in June 1995, the Air Model Historical Data Study had reached a point where a major decision was needed. The early months of the study had been devoted to developing a consensus among the TDI team members with respect to the factors that needed to be included in the model. The discussions tended to highlight three areas of particular interest—factors that had been included in models currently in use, the limitations of these models, and the need for new factors (and relationships) peculiar to the properties and dynamics of the air campaign. Team members formulated a family of relationships and factors, but the model architecture itself was not investigated beyond the surface considerations.

Despite substantial contributions from team members, including analytical demonstrations of selected factors and air combat relationships, no consensus had been achieved. On the contrary, there was a growing sense of need to abandon traditional modeling approaches in favor of a new application of the “Dupuy Method” based on a solid body of air combat data from WWII.

The Dupuy approach to modeling land combat relied heavily on the ratio of force strengths (largely determined by firepower as modified by other factors). After almost a year of investigations by the AMHDS team, it was beginning to appear that air combat differed in a fundamental way from ground combat. The essence of the difference is that in air combat, the outcome of the maneuver battle for platform position must be determined before the firepower relationships may be brought to bear on the battle outcome.

At the time of Dupuy’s death, it was apparent that if the study contract was to yield a meaningful product, an immediate choice of analysis thrust was required. Shortly before Dupuy’s death, I and other members of the TDI team recommended that we adopt the overall approach, level of aggregation, and analytical complexity that had characterized Dupuy’s models of land combat. We also agreed on the time-sequenced predominance of the maneuver phase of air combat. When I was asked to take the analytical lead for the contract in Dupuy’s absence, I was reasonably confident that there was overall agreement.

In view of the time available to prepare a deliverable product, it was decided to prepare a model using the air combat data we had been evaluating up to that point—June 1995. Fortunately, Robert Shaw had developed a set of preliminary analysis relationships that could be used in an initial assessment of the maneuver/firepower relationship. In view of the analytical, logistic, contractual, and time factors discussed, we decided to complete the contract effort based on the following analytical thrust:

  1. The contract deliverable would be based on the maneuver/firepower analysis approach as currently formulated in Robert Shaw’s performance equations;
  2. A spreadsheet formulation of outcomes for selected (Battle of Britain) engagements would be presented to the customer in August 1995;
  3. To the extent practical, a working model would be provided to the customer with suggestions for further development.

During the following six weeks, the demonstration model was constructed. The model (programmed in a Lotus 1-2-3 style spreadsheet formulation) was developed, mechanized, and demonstrated to ACSC in August 1995. The final report was delivered in September of 1995.

The working model demonstrated to ACSC in August 1995 suggests the following observations:

  • A substantial contribution to the understanding of air combat modeling has been achieved.
  • While relationships developed in the Dupuy Air Combat Model (DACM) are not fully mature, they are analytically significant.
  • The approach embodied in DACM derives its authenticity from the famous “Dupuy Method,” thus ensuring its strong correlations with actual combat data.
  • Although demonstrated only for air combat in the Battle of Britain, the methodology is fully capable of incorporating modern technology contributions to sensor, command and control, and firepower performance.
  • The knowledge base, fundamental performance relationships, and methodology contributions embodied in DACM are worthy of further exploration. They await only the expression of interest and a relatively modest investment to extend the analysis methodology into modern air combat and the engagements anticipated for the 21st Century.

One final observation seems appropriate. The DACM demonstration provided to ACSC in August 1995 should not be dismissed as a perhaps interesting, but largely simplistic, approach to air combat modeling. It is a significant contribution to the understanding of air combat relationships that will prevail in the 21st Century. The Dupuy Institute is convinced that further development of DACM makes eminent good sense. An exploitation of the maneuver and firepower relationships already demonstrated in DACM will provide a valid basis for modeling air combat with modern technology sensors, control mechanisms, and weapons. It is appropriate to include the Dupuy name in the title of this latest in a series of distinguished combat models. Trevor would be pleased.

TDI Friday Read: Links You May Have Missed, 30 March 2018

This week’s list of links is an odds-and-ends assortment.

David Vergun has an interview with General Stephen J. Townsend, commander of the U.S. Army Training and Doctrine Command (TRADOC), on the Army website about the need for smaller, lighter, and faster equipment for future warfare.

Defense News’s apparently inexhaustible Jen Judson details the Army’s newest forthcoming organization, “US Army’s Futures Command sets groundwork for battlefield transformation.”

At West Point’s Modern War Institute, Lionel Beehner, Liam Collins, Steve Ferenzi, Robert Person, and Aaron Brantly have a very interesting analysis of the contemporary Russian approach to warfare, “Analyzing the Russian Way of War: Evidence from the 2008 Conflict with Georgia.”

Also at the Modern War Institute, Ethan Olberding examines ways to improve the planning skills of the U.S. Army’s junior leaders, “You Can Lead, But Can You Plan? Time to Change the Way We Develop Junior Leaders.”

Kyle Mizokami at Popular Mechanics takes a look at the state of the art in drone defenses, “Watch Microwave and Laser Weapons Knock Drones Out of the Sky.”

Jared Keller at Task & Purpose looks into the Army’s interest in upgunning its medium-weight armored vehicles, “The Army Is Eyeing This Beastly 40mm Cannon For Its Ground Combat Vehicles.”

And finally, MeritTalk, a site focused on U.S. government information technology, has posted a piece, “Pentagon Wants An Early Warning System For Hybrid Warfare,” looking at the Defense Advanced Research Projects Agency’s (DARPA) ambitious Collection and Monitoring via Planning for Active Situational Scenarios (COMPASS) program, which will incorporate AI, game theory, modeling, and estimation technologies to attempt to decipher the often subtle signs that precede a full-scale attack.

‘Love’s Tables’: U.S. War Department Casualty Estimation in World War II

The same friend of TDI who asked about ‘Evett’s Rates,’ the British casualty estimation methodology during World War II, also mentioned that the work of Albert G. Love III was now available on-line. Rick Atkinson also referenced “Love’s Tables” in The Guns At Last Light.

In 1931, Lieutenant Colonel (later Brigadier General) Love, then a Medical Corps physician in the U.S. Army Medical Field Services School, published a study of American casualty data in the recent Great War, titled “War Casualties.”[1] This study was likely the source for tables used for casualty estimation by the U.S. Army through 1944.[2]

Love, who had no advanced math or statistical training, undertook his study with the support of the Army Surgeon General, Merritte W. Ireland, and initial assistance from Dr. Lowell J. Reed, a professor of biostatistics at Johns Hopkins University. Love’s posting in the Surgeon General’s Office afforded him access to an array of casualty data collected from the records of the American Expeditionary Forces in France, as well as data from annual Surgeon General reports dating back to 1819, the official medical history of the U.S. Civil War, and U.S. general population statistics.

Love’s research was likely the basis for rate tables for calculating casualties that first appeared in the 1932 edition of the War Department’s Staff Officer’s Field Manual.[3]

Battle Casualties, including Killed, in Percent of Unit Strength, Staff Officer’s Field Manual (1932).

The 1932 Staff Officer’s Field Manual estimation methodology reflected Love’s sophisticated understanding of the factors influencing combat casualty rates. It showed that both the resistance and combat strength (and all of the factors that comprised it) of the enemy, as well as the equipment and state of training and discipline of the friendly troops had to be taken into consideration. The text accompanying the tables pointed out that loss rates in small units could be quite high and variable over time, and that larger formations took fewer casualties as a fraction of overall strength, and that their rates tended to become more constant over time. Casualties were not distributed evenly, but concentrated most heavily among the combat arms, and in the front-line infantry in particular. Attackers usually suffered higher loss rates than defenders. Other factors to be accounted for included the character of the terrain, the relative amount of artillery on each side, and the employment of gas.

The 1941 iteration of the Staff Officer’s Field Manual, now designated Field Manual (FM) 101-10[4], provided two methods for estimating battle casualties. It included the original 1932 Battle Casualties table, but the associated text no longer included the section outlining factors to be considered in calculating loss rates. This passage was moved to a note appended to a new table showing the distribution of casualties among the combat arms.

Rather confusingly, FM 101-10 (1941) presented a second table, Estimated Daily Losses in Campaign of Personnel, Dead and Evacuated, Per 1,000 of Actual Strength. It included rates for front line regiments and divisions, corps and army units, reserves, and attached cavalry. The rates were broken down by posture and tactical mission.

Estimated Daily Losses in Campaign of Personnel, Dead and Evacuated, Per 1,000 of Actual Strength, FM 101-10 (1941)

Neither the source for this table nor the method by which it was derived is known. No explanatory text accompanied it, but a footnote stated that “this table is intended primarily for use in school work and in field exercises.” The rates in it were weighted toward the upper range of the figures provided in the 1932 Battle Casualties table.

The October 1943 edition of FM 101-10 contained no significant changes from the 1941 version, except for the caveat that the 1932 Battle Casualties table “may or may not prove correct when applied to the present conflict.”

The October 1944 version of FM 101-10 incorporated data obtained from World War II experience.[5] While it also noted that the 1932 Battle Casualties table might not be applicable, the experiences of the U.S. II Corps in North Africa and one division in Italy were found to be in agreement with the table’s division and corps loss rates.

FM 101-10 (1944) included another new table, Estimate of Battle Losses for a Front-Line Division (in % of Actual Strength), meaning that it now provided three distinct methods for estimating battle casualties.

Estimate of Battle Losses for a Front-Line Division (in % of Actual Strength), FM 101-10 (1944)

Like the 1941 Estimated Daily Losses in Campaign table, the sources for this new table were not provided, and the text contained no guidance as to how or when it should be used. The rates it contained fell roughly within the span for daily rates for severe (6-8%) to maximum (12%) combat listed in the 1932 Battle Casualty table, but would produce vastly higher overall rates if applied consistently, much higher than the 1932 table’s 1% daily average.
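To see why consistent application of those daily rates produces much higher overall losses, compare cumulative losses at the 1932 table’s 1% daily average against its “severe” 6% daily rate. This is a simple compounding illustration of my own, not a method from either manual:

```python
# Daily loss rates compound against remaining strength, so even the low end
# of the "severe" band quickly dwarfs the 1% daily average.

def cumulative_loss(daily_rate_pct, days):
    """Cumulative percent of initial strength lost after `days` of combat."""
    remaining = 1.0
    for _ in range(days):
        remaining *= 1 - daily_rate_pct / 100
    return (1 - remaining) * 100

print(round(cumulative_loss(1.0, 30), 1))  # 26.0 -- a month at 1% per day
print(round(cumulative_loss(6.0, 30), 1))  # 84.4 -- a month at the 6% "severe" rate
```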

FM 101-10 (1944) included a table showing the distribution of losses by branch for the theater based on experience to that date, except for combat in the Philippine Islands. The new chart was used in conjunction with the 1944 Estimate of Battle Losses for a Front-Line Division table to determine daily casualty distribution.

Distribution of Battle Losses–Theater of Operations, FM 101-10 (1944)

The final World War II version of FM 101-10 issued in August 1945[6] contained no new casualty rate tables, nor any revisions to the existing figures. It did finally effectively invalidate the 1932 Battle Casualties table by noting that “the following table has been developed from American experience in active operations and, of course, may not be applicable to a particular situation.” (original emphasis)


[1] Albert G. Love, War Casualties, The Army Medical Bulletin, No. 24, (Carlisle Barracks, PA: 1931)

[2] This post is adapted from TDI, Casualty Estimation Methodologies Study, Interim Report (May 2005) (Altarum) (pp. 314-317).

[3] U.S. War Department, Staff Officer’s Field Manual, Part Two: Technical and Logistical Data (Government Printing Office, Washington, D.C., 1932)

[4] U.S. War Department, FM 101-10, Staff Officer’s Field Manual: Organization, Technical and Logistical Data (Washington, D.C., June 15, 1941)

[5] U.S. War Department, FM 101-10, Staff Officer’s Field Manual: Organization, Technical and Logistical Data (Washington, D.C., October 12, 1944)

[6] U.S. War Department, FM 101-10, Staff Officer’s Field Manual: Organization, Technical and Logistical Data (Washington, D.C., August 1, 1945)

‘Evett’s Rates’: British War Office Wastage Tables

Stretcher bearers of the East Surrey Regiment, with a Churchill tank of the North Irish Horse in the background, during the attack on Longstop Hill, Tunisia, 23 April 1943. [Imperial War Museum/Wikimedia]

A friend of TDI queried us recently about a reference in Rick Atkinson’s The Guns at Last Light: The War in Western Europe, 1944-1945 to a British casualty estimation methodology known as “Evett’s Rates.” There are few references to Evett’s Rates online, but as it happens, TDI did find out some details about them for a study on casualty estimation. [1]

British Army staff officers during World War II and the 1950s used a set of look-up tables which listed expected monthly losses in percentage of strength for various arms under various combat conditions. The origin of the tables is not known, but they were officially updated twice, in 1942 by a committee chaired by Major General Evett, and in 1951-1955 by the Army Operations Research Group (AORG).[2]

The methodology was based on staff predictions of one of three levels of operational activity: “Intense,” “Normal,” and “Quiet.” These could be applied to an entire theater, or to individual divisions. The three levels were defined the same way for both the Evett Committee and AORG rates.

The rates were broken down by arm and rank, and included battle and nonbattle casualties.

Rates of Personnel Wastage Including Both Battle and Non-battle Casualties According to the Evett Committee of 1942. (Percent per 30 days).

The Evett Committee rates were criticized during and after the war. After British forces suffered twice the anticipated casualties at Anzio, the British 21st Army Group applied a “double intense rate,” which was twice the Evett Committee figure and was intended to apply to assaults. When this led to overestimates of casualties in Normandy, the double intense rate was discarded.

From 1951 to 1955, AORG undertook a study of casualty rates in World War II. Its analysis was based on casualty data from the following campaigns:

  • Northwest Europe, 1944
    • 6-30 June – Beachhead offensive
    • 1 July-1 September – Containment and breakout
    • 1 October-30 December – Semi-static phase
    • 9 February to 6 May – Rhine crossing and final phase
  • Italy, 1944
    • January to December – Fighting a relatively equal enemy in difficult country. Warfare often static.
    • January to February (Anzio) – Beachhead held against severe and well-conducted enemy counter-attacks.
  • North Africa, 1943
    • 14 March-13 May – final assault
  • Northwest Europe, 1940
    • 10 May-2 June – Withdrawal of BEF
  • Burma, 1944-45

From the first four cases, the AORG study calculated two sets of battle casualty rates as percentages of strength per 30 days. “Overall” rates included KIA, WIA, and C/MIA. “Apparent” rates included these categories but subtracted troops returning to duty. AORG recommended that “overall” rates be used for the first three months of a campaign.
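The two AORG rate definitions can be expressed as a small calculation. The numbers below are notional, purely to illustrate the distinction, and the function names are mine:

```python
# Both AORG rates are expressed as percent of strength per 30 days; the
# "apparent" rate simply nets out troops who returned to duty.

def overall_rate(kia, wia, cmia, strength, days):
    """Battle casualties as percent of strength per 30 days."""
    return (kia + wia + cmia) / strength * (30 / days) * 100

def apparent_rate(kia, wia, cmia, returned_to_duty, strength, days):
    """Overall rate less troops who returned to duty."""
    return (kia + wia + cmia - returned_to_duty) / strength * (30 / days) * 100

# A notional division of 15,000 over 15 days of fighting:
print(round(overall_rate(300, 1200, 150, 15000, 15), 2))         # 22.0
print(round(apparent_rate(300, 1200, 150, 400, 15000, 15), 2))   # 16.67
```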

The Burma campaign data was evaluated differently. The analysts defined a “force wastage� category which included KIA, C/MIA, evacuees from outside the force operating area and base hospitals, and DNBI deaths. “Dead wastage� included KIA, C/MIA, DNBI dead, and those discharged from the Army as a result of injuries.

The AORG study concluded that the Evett Committee underestimated intense loss rates for infantry and armor during periods of very hard fighting and overestimated casualty rates for other arms. It recommended that if only one brigade in a division was engaged, two-thirds of the intense rate should be applied, if two brigades were engaged the intense rate should be applied, and if all brigades were engaged then the intense rate should be doubled. It also recommended that 2% extra casualties per month should be added to all the rates for all activities should the forces encounter heavy enemy air activity.[1]
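The AORG adjustment recommendations amount to a simple scaling rule for a division of three brigades. Here is a sketch using a notional intense rate of 9% per 30 days (the rate value and function name are mine, not AORG figures):

```python
# Sketch of the AORG adjustments: scale the intense rate by the number of
# engaged brigades (2/3 for one, 1x for two, 2x for all three), then add 2%
# per 30 days across the board under heavy enemy air activity.

def adjusted_intense_rate(intense_rate, brigades_engaged, heavy_air=False):
    """Adjusted wastage rate in percent of strength per 30 days."""
    if brigades_engaged == 1:
        rate = intense_rate * 2 / 3
    elif brigades_engaged == 2:
        rate = intense_rate
    elif brigades_engaged == 3:
        rate = intense_rate * 2
    else:
        raise ValueError("expected 1-3 engaged brigades")
    return rate + (2.0 if heavy_air else 0.0)

print(adjusted_intense_rate(9.0, 1))                  # 6.0
print(adjusted_intense_rate(9.0, 3, heavy_air=True))  # 20.0
```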

The AORG study rates were as follows:

Recommended AORG Rates of Personnel Wastage. (Percent per 30 days).

If anyone has further details on the origins and activities of the Evett Committee and AORG, we would be very interested in finding out more on this subject.


[1] This post is adapted from The Dupuy Institute, Casualty Estimation Methodologies Study, Interim Report (May 2005) (Altarum) (pp. 51-53).

[2] Rowland Goodman and Hugh Richardson, “Casualty Estimation in Open and Guerrilla Warfare” (London: Directorate of Science (Land), U.K. Ministry of Defence, June 1995), Appendix A.

TDI Friday Read: Links You May Have Missed, 23 March 2018

To follow on Chris’s recent post about U.S. Army modernization:

On the subject of future combat:

  • The U.S. National Academies of Sciences, Engineering, and Medicine has issued a new report emphasizing the need for developing countermeasures against multiple small unmanned aircraft systems (sUASs) — organized in coordinated groups, swarms, and collaborative groups — which could be used much sooner than the U.S. Army anticipates. [There is a summary here.]
  • National Defense University’s Frank Hoffman has a very good piece in the current edition of Parameters, “Will War’s Nature Change in the Seventh Military Revolution?,” that explores the potential implications of the combinations of robotics, artificial intelligence, and deep learning systems on the character and nature of war.
  • Major Hassan Kamara has an article in the current edition of Military Review contemplating changes in light infantry, “Rethinking the U.S. Army Infantry Rifle Squad.”

On the topic of how the Army is addressing its current and future challenges with irregular warfare and wide area security:

Perla On Dupuy

Dr. Peter Perla, noted defense researcher, wargame designer and expert, and author of the seminal The Art of Wargaming: A Guide for Professionals and Hobbyists, gave the keynote address at the 2017 Connections Wargaming Conference last August. The topic of his speech, which served as his valedictory address on the occasion of his retirement from government service, addressed the predictive power of wargaming. In it, Perla recalled a conversation he once had with Trevor Dupuy in the early 1990s:

Like most good stories, this one has a beginning, a middle, and an end. I have sort of jumped in at the middle. So let’s go back to the beginning.

As it happens, that beginning came during one of the very first Connections. It may even have been the first one. This thread is one of those vivid memories we all have of certain events in life. In my case, it is a short conversation I had with Trevor Dupuy.

I remember the setting well. We were in front of the entrance to the O Club at Maxwell. It was kind of dark, but I can’t recall if it was in the morning before the club opened for our next session, or the evening, before a dinner. Trevor and I were chatting and he said something about wargaming being predictive. I still recall what I said.

“Good grief, Trevor, we can’t even predict the outcome of a Super Bowl game much less that of a battle!” He seemed taken by surprise that I felt that way, and he replied, “Well, if that is true, what are we doing? What’s the point?”

I had my usual stock answers. We wargame to develop insights, to identify issues, and to raise questions. We certainly don’t wargame to predict what will happen in a battle or a war. I was pretty dogmatic in those days. Thank goodness I’m not that way any more!

The question of prediction did not go away, however.

For the rest of Perla’s speech, see here. For a wonderful summary of the entire 2017 Connections Wargaming conference, see here.


Technology And The Human Factor In War

A soldier waves an Israeli flag on the Golan front during the 1973 Yom Kippur War. (IDF Spokesperson’s unit, Jerusalem Report Archives)

[The article below is reprinted from the August 1997 edition of The International TNDM Newsletter.]

Technology and the Human Factor in War
by Trevor N. Dupuy

The Debate

It has become evident to many military theorists that technology has become increasingly important in war. In fact (even though many soldiers would not like to admit it) most such theorists believe that technology has actually reduced the significance of the human factor in war. In other words, the more advanced our military technology, these “technocrats” believe, the less we need to worry about the professional capability and competence of generals, admirals, soldiers, sailors, and airmen.

The technocrats believe that the results of the Kuwait, or Gulf, War of 1991 have confirmed their conviction. They cite the contribution to those results of the U.N. (mainly U.S.) command of the air, stealth aircraft, sophisticated guided missiles, and general electronic superiority. They believe that it was technology which simply made irrelevant the recent combat experience of the Iraqis in their long war with Iran.

Yet there are a few humanist military theorists who believe that the technocrats have totally misread the lessons of this century’s wars! They agree that, while technology was important in the overwhelming U.N. victory, the principal reason for the tremendous margin of U.N. superiority was the better training, skill, and dedication of U.N. forces (again, mainly U.S.).

And so the debate rests. Both sides believe that the result of the Kuwait War favors their point of view. Nevertheless, an objective assessment of the literature in professional military journals, of doctrinal trends in the U.S. services, and (above all) of trends in the U.S. defense budget, suggests that the technocrats have stronger arguments than the humanists—or at least have been more convincing in presenting their arguments.

I suggest, however, that a completely impartial comparison of the Kuwait War results with those of other recent wars, and with some of the phenomena of World War II, shows that the humanists should not yet concede the debate.

I am a humanist, who is also convinced that technology is as important today in war as it ever was (and it has always been important), and that any national or military leader who neglects military technology does so to his peril and that of his country. But, paradoxically, perhaps to an extent even greater than ever before, the quality of military men is what wins wars and preserves nations.

To elevate the debate beyond generalities, and to demonstrate convincingly that the human factor is at least as important as technology in war, I shall review eight instances in this past century when a military force has been successful because of the quality of its people, even though the other side was at least equal or superior in the technological sophistication of its weapons. The examples I shall use are:

  • Germany vs. the USSR in World War II
  • Germany vs. the West in World War II
  • Israel vs. Arabs in 1948, 1956, 1967, 1973 and 1982
  • The Vietnam War, 1965-1973
  • Britain vs. Argentina in the Falklands 1982
  • South Africans vs. Angolans and Cubans, 1987-88
  • The U.S. vs. Iraq, 1991

The demonstration will be based upon a marshaling of historical facts, then analyzing those facts by means of a little simple arithmetic.

Relative Combat Effectiveness Value (CEV)

The purpose of the arithmetic is to calculate the relative combat effectiveness values (CEVs) of two opposing military forces. Let me digress to set up the arithmetic. Although some people who hail from south of the Mason-Dixon Line may be reluctant to accept the fact, statistics prove that the fighting quality of Northern soldiers and Southern soldiers was virtually equal in the American Civil War. (I invite those who might disagree to look at Livermore’s Numbers and Losses in the Civil War.) That assumption of equality of the opposing troop quality in the Civil War enables me to assert that the successful side in every important battle in the Civil War was successful either because of numerical superiority or superior generalship. Three of Lee’s battles make the point:

  • Despite being outnumbered, Lee won at Antietam. (Though Antietam is sometimes claimed as a Union victory, Lee, the defender, held the battlefield; McClellan, the attacker, was repulsed.) The main reason for Lee’s success was that on a scale of leadership his generalship was worth 10, while McClellan was barely a 6.
  • Despite being outnumbered, Lee won at Chancellorsville because he was a 10 to Hooker’s 5.
  • Lee lost at Gettysburg mainly because he was outnumbered. Also relevant: Meade, unlike McClellan and Hooker, did not lose his nerve, and his generalship was worth 8 to match Lee’s 8.

Let me use Antietam to show the arithmetic involved in those simple analyses of a rather complex subject:

The numerical strength of McClellan’s army was 89,000; Lee’s army was only 39,000 strong, but had the multiplier benefit of defensive posture. This enables us to calculate the theoretical combat power ratio of the Union Army to the Confederate Army as 1.4:1.0. In other words, with a substantial preponderance of force, the Union Army should have been successful. (The combat power ratio of Confederates to Northerners, of course, was the reciprocal, or 0.71:1.0.)

However, Lee held the battlefield, and a calculation of the actual combat power ratio of the two sides (based on accomplishment of mission, gaining or holding ground, and casualties) yields a scant but clear-cut 1.16:1.0 in favor of the Confederates. The ratio of the actual combat power ratio of the Confederate and Union armies (1.16) to their theoretical combat power ratio (0.71) gives us a value of 1.63. This is the relative combat effectiveness of Lee’s army to McClellan’s army on that bloody day. But if we agree that the quality of the troops was the same, then the differential must essentially be in the quality of the opposing generals. Thus, Lee was a 10 to McClellan’s 6.

The simple arithmetic equation[1] on which the above analysis was based is as follows:

CEV = (Ra/Rb) / (Pa/Pb)

where:

CEV is the relative Combat Effectiveness Value of force “a” with respect to force “b”,
Ra/Rb is the actual combat power ratio, and
Pa/Pb is the theoretical combat power ratio.

At Antietam the equation was: 1.63 = 1.16/0.71.

We’ll be revisiting that equation in connection with each of our examples of the relative importance of technology and human factors.
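The Antietam arithmetic above reduces to a one-line calculation. This sketch simply encodes the equation as given; the function name is mine:

```python
# CEV = (actual combat power ratio) / (theoretical combat power ratio),
# per the equation and the Antietam worked example above.

def cev(actual_ratio, theoretical_ratio):
    """Relative Combat Effectiveness Value of one side over the other."""
    return actual_ratio / theoretical_ratio

# Antietam: actual 1.16:1.0 for the Confederates against a theoretical
# 0.71:1.0 puts Lee's army at roughly 1.63 times McClellan's effectiveness.
print(round(cev(1.16, 0.71), 2))  # 1.63
```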

Airpower and Technology

However, one more digression is required before we look at the examples. Air power was important in all eight of the 20th Century examples listed above. Offhand it would seem that the exercise of air superiority by one side or the other is a manifestation of technological superiority. Nevertheless, there are a few examples of an air force gaining air superiority with equivalent, or even inferior aircraft (in quality or numbers) because of the skill of the pilots.

However, the instances of such a phenomenon are rare. It can be safely asserted that, in the examples used in the following comparisons, the ability to exercise air superiority was essentially a technological superiority (even though in some instances it was magnified by human quality superiority). The one possible exception might be the Eastern Front in World War II, where a slight German technological superiority in the air was offset by larger numbers of Soviet aircraft, thanks in large part to Lend-Lease assistance from the United States and Great Britain.

The Battle of Kursk, 5-18 July, 1943

Following the surrender of the German Sixth Army at Stalingrad, on 2 February, 1943, the Soviets mounted a major winter offensive in south-central Russia and Ukraine which reconquered large areas which the Germans had overrun in 1941 and 1942. A brilliant counteroffensive by German Marshal Erich von Manstein‘s Army Group South halted the Soviet advance, and recaptured the city of Kharkov in mid-March. The end of these operations left the Soviets holding a huge bulge, or salient, jutting westward around the Russian city of Kursk, northwest of Kharkov.

The Germans promptly prepared a new offensive to cut off the Kursk salient. The Soviets energetically built field fortifications to defend the salient against expected German attacks. The German plan was for simultaneous offensives against the northern and southern shoulders of the base of the Kursk salient: Field Marshal Gunther von Kluge’s Army Group Center would drive south from the vicinity of Orel, while Manstein’s Army Group South pushed north from the Kharkov area. The offensive was originally scheduled for early May, but postponements by Hitler, to equip his forces with new tanks, delayed the operation for two months. The Soviets took advantage of the delays to further improve their already formidable defenses.

The German attacks finally began on 5 July. In the north General Walter Model’s German Ninth Army was soon halted by Marshal Konstantin Rokossovski’s Army Group Center. In the south, however, German General Hermann Hoth’s Fourth Panzer Army and a provisional army commanded by General Werner Kempf, were more successful against the Voronezh Army Group of General Nikolai Vatutin. For more than a week the XLVIII Panzer Corps advanced steadily toward Oboyan and Kursk through the most heavily fortified region since the Western Front of 1918. While the Germans suffered severe casualties, they inflicted horrible losses on the defending Soviets. Advancing similarly further east, the II SS Panzer Corps, in the largest tank battle in history, repulsed a vigorous Soviet armored counterattack at Prokhorovka on July 12-13, but was unable to continue to advance.

The principal reason for the German halt was the fact that the Soviets had thrown into the battle General Ivan Konev’s Steppe Army Group, which had been in reserve. The exhausted, heavily outnumbered Germans had no comparable reserves to commit to reinvigorate their offensive.

A comparison of forces and losses of the Soviet Voronezh Army Group and German Army Group South on the south face of the Kursk Salient is shown below. The strengths are averages over the 12 days of the battle, taking into consideration initial strengths, losses, and reinforcements.

A comparison of the casualty tradeoff can be found by dividing Soviet casualties by German strength, and German losses by Soviet strength. On that basis, 100 Germans inflicted 5.8 casualties per day on the Soviets, while 100 Soviets inflicted 1.2 casualties per day on the Germans, a tradeoff of 4.9 to 1.0.

The statistics for the 8-day offensive of the German XLVIII Panzer Corps toward Oboyan are shown below. Also shown is the relative combat effectiveness value (CEV) of Germans and Soviets, as calculated by the TNDM. As was the case for the Battle of Antietam, this is derived from a mathematical comparison of the theoretical combat power ratio of the two forces (simply considering numbers and weapons characteristics), and the actual combat power ratios reflected by the battle results:

The calculated CEVs suggest that 100 German troops were the combat equivalent of 240 Soviet troops, comparably equipped. The casualty tradeoff in this battle shows that 100 Germans inflicted 5.15 casualties per day on the Soviets, while 100 Soviets inflicted 1.11 casualties per day on the Germans, a tradeoff of 4.64. It is a rule of thumb that the casualty tradeoff is usually about the square of the CEV.
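A quick sketch of that arithmetic, using the per-100-troops figures quoted above (the variable names are ours):

```python
import math

# Daily casualties inflicted per 100 troops, XLVIII Panzer Corps sector at Kursk
german_inflicted = 5.15   # on the Soviets, per 100 Germans per day
soviet_inflicted = 1.11   # on the Germans, per 100 Soviets per day

# Casualty tradeoff: ratio of the two infliction rates
tradeoff = german_inflicted / soviet_inflicted
print(round(tradeoff, 2))              # 4.64

# Rule of thumb: tradeoff is roughly the square of the CEV,
# so the square root of the tradeoff approximates the CEV (2.40 here)
print(round(math.sqrt(tradeoff), 2))   # 2.15
```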

A similar comparison can be made of the two-day battle of Prokhorovka. Soviet accounts of that battle have claimed this as a great victory by the Soviet Fifth Guards Tank Army over the German II SS Panzer Corps. In fact, since the German advance was halted, the outcome was close to a draw, but with the advantage clearly in favor of the Germans.

The casualty tradeoff shows that 100 Germans inflicted 7.7 casualties per day on the Soviets, while 100 Soviets inflicted 1.0 casualties per day on the Germans, for a tradeoff value of 7.7.

When the German offensive began, they had a slight degree of local air superiority. This was soon reversed by German and Soviet shifts of air elements, and during most of the offensive, the Soviets had a slender margin of air superiority. In terms of technology, the Germans probably had a slight overall advantage. However, the Soviets had more tanks and, furthermore, their T-34 was superior to any tank the Germans had available at the time. The CEV calculations demonstrate that the Germans had a great qualitative superiority over the Russians, despite near-equality in technology, and despite Soviet air superiority. The Germans lost the battle, but only because they were overwhelmed by Soviet numbers.

German Performance, Western Europe, 1943-1945

Beginning with operations between Salerno and Naples in September, 1943, through engagements in the closing days of the Battle of the Bulge in January, 1945, the pattern of German performance against the Western Allies was consistent. Some German units were better than others, and a few Allied units were as good as the best of the Germans. But on the average, German performance, as measured by CEV and casualty tradeoff, was better than that of the Western Allies by a CEV factor averaging about 1.2, and a casualty tradeoff factor averaging about 1.5. Listed below are ten engagements from Italy and Northwest Europe during that period.

Technologically, German forces and those of the Western Allies were comparable. The Germans had a higher proportion of armored combat vehicles, and their best tanks were considerably better than the best American and British tanks, but the advantages were at least offset by the greater quantity of Allied armor, and greater sophistication of much of the Allied equipment. The Allies were increasingly able to achieve and maintain air superiority during this period of slightly less than two years.

The combination of vast superiority in numbers of troops and equipment, and increasing Allied air superiority, enabled the Allies to fight their way slowly up the Italian boot, and between June and December, 1944, to drive from the Normandy beaches to the frontier of Germany. Yet the presence or absence of Allied air support made little difference in terms of either CEVs or casualty tradeoff values. Despite the defeats inflicted on them by the numerically superior Allies during the latter part of 1944, in December the Germans were able to mount a major offensive that nearly destroyed an American army corps, and threatened to drive at least a portion of the Allied armies into the sea.

Clearly, in their battles against the Soviets and the Western Allies, the Germans demonstrated that quality of combat troops was able consistently to overcome Allied technological and air superiority. It was Allied numbers, not technology, that defeated the qualitatively superior Germans.

The Six-Day War, 1967

The remarkable Israeli victories over far more numerous Arab opponents—Egyptian, Jordanian, and Syrian—in June, 1967 revealed an Israeli combat superiority that had not been suspected in the United States, the Soviet Union, or Western Europe. This superiority was as awesome on the ground as in the air. (By beginning the war with a surprise attack which almost wiped out the Egyptian Air Force, the Israelis avoided a serious contest with the one Arab air force large enough, and possibly effective enough, to challenge them.) The results of the three brief campaigns are summarized in the table below:

It should be noted that some Israelis who fought against the Egyptians and Jordanians also fought against the Syrians. Thus, the overall Arab numerical superiority was greater than would be suggested by adding the above strength figures, and was approximately 328,000 to 200,000.

It should also be noted that the technological sophistication of the Israeli and Arab ground forces was comparable. The only significant technological advantage of the Israelis was their unchallenged command of the air. (In terms of battle outcomes, it was irrelevant how they had achieved air superiority.) In fact this was a very significant advantage, the full import of which would not be realized until the next Arab-Israeli war.

The results of the Six Day War do not provide an unequivocal basis for determining the relative importance of human factors and technological superiority (as evidenced in the air). Clearly a major factor in the Israeli victories was the superior performance of their ground forces due mainly to human factors. At least as important in those victories was Israeli command of the air, in which both technology and human factors played a part.

The October War, 1973

A better basis for comparing the relative importance of human factors and technology is provided by the results of the October War of 1973 (known to Arabs as the War of Ramadan, and to Israelis as the Yom Kippur War). In this war the Israelis’ unquestioned superiority in the air was largely offset by the Arabs’ possession of highly sophisticated Soviet air defense weapons.

One important lesson of this war was a reassessment of Israeli contempt for the fighting quality of Arab ground forces (which had stemmed from the ease with which they had won their ground victories in 1967). When Arab ground troops were protected from Israeli air superiority by their air defense weapons, they fought well and bravely, demonstrating that Israeli control of the air had been even more significant in 1967 than anyone had then recognized.

It should be noted that the total Arab (and Israeli) forces are those shown in the first two comparisons, above. A Jordanian brigade and two Iraqi divisions formed relatively minor elements of the forces under Syrian command (although their presence on the ground was significant in enabling the Syrians to maintain a defensive line when the Israelis threatened a breakthrough around 20 October). For the comparison of Jordanians and Iraqis the total strength is the total of the forces in the battles (two each) on which these comparisons are based.

One other thing to note is how the Israelis, possibly unconsciously, confirmed the validity of their CEVs with respect to the Egyptians and Syrians by the numerical strengths of their deployments to the two fronts. Since the war ended in a virtual stalemate on both fronts, the overall strength figures suggest rough equivalence of combat capability.

The CEV values shown in the above table are very significant in relation to the debate about human factors and technology. There was little if anything to choose between the technological sophistication of the two sides. The Arabs had more tanks than the Israelis, but (as Israeli General Avraham Adan once told the author) there was little difference in the quality of the tanks. The Israelis again had command of the air, but this was neutralized immediately over the battlefields by the Soviet air defense equipment effectively manned by the Arabs. Thus, while technology was of the utmost importance to both sides, enabling each side to prevent the enemy from gaining a significant advantage, the true determinant of battlefield outcomes was the fighting quality of the troops. And, while the Arabs fought bravely, the Israelis fought much more effectively. Human factors made the difference.

Israeli Invasion of Lebanon, 1982

In terms of the debate about the relative importance of human factors and technology, there are two significant aspects to this small war, in which Syrian forces and PLO guerrillas were the Arab participants. In the first place, the Israelis showed that their air technology was superior to the Syrian air defense technology. As a result, they regained complete control of the skies over the battlefields. Secondly, it provides an opportunity to include a highly relevant quotation.

The statistical comparison shows the results of the two major battles fought between Syrians and Israelis:

In assessing the above statistics, a quotation from the Israeli Chief of Staff, General Rafael Eytan, is relevant.

In late 1982 a group of retired American generals visited Israel and the battlefields in Lebanon. Just before they left for home, they had a meeting with General Eytan. One of the American generals asked Eytan the following question: “Since the Syrians were equipped with Soviet weapons, and your troops were equipped with American (or American-type) weapons, isn’t the overwhelming Israeli victory an indication of the superiority of American weapons technology over Soviet weapons technology?”

Eytan’s reply was classic: “If we had had their weapons, and they had had ours, the result would have been absolutely the same.”

One need not question how the Israeli Chief of Staff assessed the relative importance of the technology and human factors.

Falkland Islands War, 1982

It is difficult to get reliable data on the Falkland Islands War of 1982. Furthermore, the author of this article has not undertaken the kind of detailed analysis of such data as is available. However, it is evident from the information that is available about that war that its results were consistent with those of the other examples examined in this article.

The total strength of Argentine forces in the Falklands at the time of the British counter-invasion was slightly more than 13,000. The British appear to have landed close to 6,400 troops, although it may have been fewer. In any event, it is evident that not more than 50% of the total forces available to both sides were actually committed to battle. The Argentine surrender came 27 days after the British landings, but there were probably no more than six days of actual combat. During these battles the British performed admirably; the Argentinians performed miserably. (Save for their Air Force, which seems to have fought with considerable gallantry and effectiveness, at the extreme limit of its range.) The British CEV in ground combat was probably between 2.5 and 4.0. The statistics were at least close to those presented below:

It is evident from published sources that the British had no technological advantage over the Argentinians; thus the one-sided results of the ground battles were due entirely to British skill (derived from training and doctrine) and determination.

South African Operations in Angola, 1987-1988

Neither the political reasons for, nor political results of, the South African military interventions in Angola in the 1970s, and again in the late 1980s, need concern us in our consideration of the relative significance of technology and of human factors. The combat results of those interventions, particularly in 1987-1988 are, however, very relevant.

The operations between elements of the South African Defense Force (SADF) and forces of the Popular Movement for the Liberation of Angola (FAPLA) took place in southeast Angola, generally in the region east of the city of Cuito-Cuanavale. Operating with the SADF units were a few small units of Jonas Savimbi’s National Union for the Total Independence of Angola (UNITA). To provide air support to the SADF and UNITA ground forces, it would have been necessary for the South Africans to establish air bases either in Botswana, Southwest Africa (Namibia), or in Angola itself. For reasons that were largely political, they decided not to do that, and thus operated under conditions of FAPLA air supremacy. This led them, despite terrain generally unsuited for armored warfare, to use a high proportion of armored vehicles (mostly light armored cars) to provide their ground troops with some protection from air attack.

Summarized below are the results of three battles east of Cuito-Cuanavale in late 1987 and early 1988. Included with FAPLA forces are a few Cubans (mostly in armored units); included with the SADF forces are a few UNITA units (all infantry).

FAPLA had complete command of the air, and substantial numbers of MiG-21 and MiG-23 sorties were flown against the South Africans in all of these battles. This technological superiority was probably partly offset by greater South African EW (electronic warfare) capability. The ability of the South Africans to operate effectively despite hostile air superiority was reminiscent of that of the Germans in World War II. It was a further demonstration that, no matter how important technology may be, the fighting quality of the troops is even more important.

The tank figures include armored cars. In the first of the three battles considered, FAPLA had by far the more powerful and more numerous medium tanks (20 to 0). In the other two, SADF had a slight or significant advantage in medium tank numbers and quality. But it didn’t seem to make much difference in the outcomes.

Kuwait War, 1991

The previous seven examples permit us to examine the results of the Kuwait (or Second Gulf) War with more objectivity than might otherwise have been possible. First, let’s look at the statistics. Note that the comparison shown below is for four days of ground combat, February 24-28, and shows only operations of U.S. forces against the Iraqis.

There can be no question that the single most important contribution to the overwhelming victory of U.S. and other U.N. forces was the air war that preceded, and accompanied, the ground operations. But two comments are in order. The air war alone could not have forced the Iraqis to surrender. On the other hand, it is evident that, even without the air war, U.S. forces would have readily overwhelmed the Iraqis, probably in more than four days, and with more than 285 casualties. But the outcome would have been hardly less one-sided.

The Vietnam War, 1965-1973

It is impossible to make the kind of mathematical analysis for the Vietnam War that has been done in the examples considered above. The reason is that we don’t have any good data on the Vietcong and North Vietnamese forces.

However, such quantitative analysis really isn’t necessary. There can be no doubt that one of the opponents was a superpower, the most technologically advanced nation on earth, while the other side was what Lyndon Johnson called a “raggedy-ass little nation,” a typical representative of “the third world.”

Furthermore, even if we were able to make the analyses, they would very possibly be misinterpreted. It can be argued (possibly with some exaggeration) that the Americans won all of the battles. The detailed engagement analyses could only confirm this fact. Yet it is unquestionable that the United States, despite airpower and all other manifestations of technological superiority, lost the war. The human factor—as represented by the quality of American political (and to a lesser extent military) leadership on the one side, and the determination of the North Vietnamese on the other side—was responsible for this defeat.


In a recent article in the Armed Forces Journal International Col. Phillip S. Meilinger, USAF, wrote: “Military operations are extremely difficult, if not impossible, for the side that doesn’t control the sky.” From what we have seen, this is only partly true. And while there can be no question that operations will always be difficult to some extent for the side that doesn’t control the sky, the degree of difficulty depends to a great degree upon the training and determination of the troops.

What we have seen above also enables us to view with a better perspective Colonel Meilinger’s subsequent quote from British Field Marshal Montgomery: “If we lose the war in the air, we lose the war and we lose it quickly.” That statement was true for Montgomery, and for the Allied troops in World War II. But it was emphatically not true for the Germans.

The examples we have seen from relatively recent wars, therefore, enable us to establish priorities on assuring readiness for war. It is without question important for us to equip our troops with weapons and other materiel which can match, or come close to matching, the technological quality of the opposition’s materiel. But we must realize that we cannot—as some people seem to think—buy good forces by technology alone. Even more important is to assure the fighting quality of the troops. That must be, by far, our first priority in peacetime budgets and in peacetime military activities of all sorts.


[1] This calculation is automatic in analyses of historical battles by the Tactical Numerical Deterministic Model (TNDM).

[2] The initial tank strength of the Voronezh Army Group was about 1,100 tanks. About 3,000 additional Soviet tanks joined the battle between 6 and 12 July. At the end of the battle there were about 1,800 Soviet tanks operational in the battle area; at the same time there were about 1,000 German tanks still operational.

[3] The relative combat effectiveness value of each force is calculated in comparison to 1.0. Thus the CEV of the Germans is 2.40:1.0, while that of the Soviets is 0.42:1.0. The opposing CEVs are always the reciprocals of each other.

Russian Body Count: Update

Map of the reported incident between U.S., Syrian, and Russian forces near Deir Ezzor, Syria on 7 February 2018 [Spiegel Online]

An article by Christoph Reuter in Spiegel Online adds some new details to the story of the incident between U.S., Syrian, and Russian mercenary forces near the Syrian city of Deir Ezzor on 7 February 2018. Based on interviews with witnesses and participants, the article paints a different picture than the one created by previous media reports.

According to Spiegel Online, early on 7 February, a 250-strong force comprised of Syrian tribal militia, Afghan and Iraqi fighters, and troops from the Syrian Army 4th Division attempted to cross from the west bank of the Euphrates River to the east, south of a Kurdish Syrian Defense Forces (SDF) base at Khusham. The Euphrates constitutes a “deconfliction” line established by the United States and Russia separating the forces of Syrian President Bashar al-Assad from those of the U.S.-supported SDF. The Syrian force was detected and U.S. combat forces fired warning shots, which persuaded the Syrians to withdraw.

After dark that evening, the Syrian force, reinforced to about 500 fighters, moved several kilometers north and attempted to cross the Euphrates a second time, this time successfully. As the force advanced through the village of Marrat, it was again spotted and engaged by U.S. air and artillery assets after an alleged 20-30 tank rounds impacted within 500 meters of the SDF headquarters in Khusham. The U.S. employed field artillery, drones, combat helicopters, and AC-130 gunships to devastating effect.

Spiegel Online reported that U.S. forces also simultaneously engaged a force of approximately 400 pro-Assad Syrian tribal militia and Shi’a fighters advancing north from the village of Tabiya, south of Khusham. A small contingent of Russian mercenaries, stationed in Tabiya but not supporting the Syrian/Shi’a fighters, was hit by U.S. fire. This second Syrian force, which the U.S. had allowed to remain on the east side of the Euphrates as long as it remained peaceful and small, was allegedly attacked again on 9 February.

According to Spiegel Online’s sources, “more than 200 of the attackers died, including around 80 Syrian soldiers with the 4th Division, around 100 Iraqis and Afghans and around 70 tribal fighters, mostly with the al-Baqir militia.” Around 10-20 Russian mercenaries were killed as well, although Russian state media has confirmed only nine deaths.

This account of the fighting and casualty distribution is in stark contrast to the story being reported by Western media, which has alleged tens or hundreds of Russians killed:

[A] completely different version of events has gained traction — disseminated at first by Russian nationalists like Igor “Strelkov” Girkin, and then by others associated with the Wagner unit. According to those accounts, many more Russians had been killed in the battle — 100, 200, 300 or as many as 600. An entire unit, it was said, had been wiped out and the Kremlin wanted to cover it up. Recordings of alleged fighters even popped up apparently confirming these horrendous losses.

It was a version that sounded so plausible that even Western news agencies like Reuters and Bloomberg picked it up. The fact that the government in Moscow at first didn’t want to confirm any deaths and then spoke of five “Russian citizens” killed and later, nebulously, of “dozens of injured,” some of whom had died, only seemed to make the version of events seem more credible.

Spiegel Online implies that the motive behind the account being propagated by sources connected to the mercenaries stems from the “claim they are being used as cannon fodder, are being kept quiet and are poorly paid. For them to now accuse the Kremlin of trying to cover up the fact that Russians were killed — by the Americans, of all people — hits President Vladimir Putin’s government in a weak spot: its credibility.”

The Spiegel Online account and casualty tally — 250 Syrian/Shi’a killed out of approximately 900 engaged, with 10-20 Russian mercenaries killed by collateral fire — seems a good deal more plausible than the figures mentioned in the initial Western media reports.

Comparing the RAND Version of the 3:1 Rule to Real-World Data

Chuliengcheng. In a glorious death eternal life. (Battle of Yalu River, 1904) [Wikimedia Commons]

[The article below is reprinted from the Winter 2010 edition of The International TNDM Newsletter.]

Comparing the RAND Version of the 3:1 Rule to Real-World Data
Christopher A. Lawrence

For this test, The Dupuy Institute took advantage of two of its existing databases from the DuWar suite of databases. The first is the Battles Database (BaDB), which covers 243 battles from 1600 to 1900. The second is the Division-level Engagement Database (DLEDB), which covers 675 division-level engagements from 1904 to 1991.

The first was chosen to provide a historical context for the 3:1 rule of thumb. The second was chosen so as to examine how this rule applies to modern combat data.

We decided that this should be tested against the RAND version of the 3:1 rule as documented by RAND in 1992 and used in JICM [Joint Integrated Contingency Model] (with SFS [Situational Force Scoring]) and other models. This rule, as presented by RAND, states: “[T]he famous ‘3:1 rule,’ according to which the attacker and defender suffer equal fractional loss rates at a 3:1 force ratio if the battle is in mixed terrain and the defender enjoys ‘prepared’ defenses…”

Therefore, we selected out all those engagements from these two databases that ranged from force ratios of 2.5-to-1 to 3.5-to-1 (inclusive). It was then a simple matter to map those to a chart that compared attacker losses to defender losses. In the case of the pre-1904 cases, even with a large database (243 cases), there were only 12 cases of combat in that range, hardly statistically significant. That was because most of the combat was at odds ratios in the range of 0.50-to-1 to 2.00-to-1.

The count of engagements by odds in the pre-1904 cases:

As the database is one of battles, these were usually only joined at reasonably favorable odds, as shown by the fact that 88 percent of the battles occur between 0.40-to-1 and 2.50-to-1 odds. The twelve pre-1904 cases in the range of 2.50 to 3.50 are shown in Table 1.

If the RAND version of the 3:1 rule was valid, one would expect that the “Percent per Day Loss Ratio” (the last column) would hover around 1.00, as this is the ratio of the attacker percent loss rate to the defender percent loss rate. As it is, 9 of the 12 data points are noticeably below 1 (below 0.40, or a 1 to 2.50 exchange rate). This leaves only three cases (25%) with an exchange rate that would support such a “rule.”
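The “Percent per Day Loss Ratio” metric itself can be sketched as follows (the engagement figures below are hypothetical, invented purely for illustration, and not drawn from the databases):

```python
def percent_per_day_loss_ratio(att_cas: float, att_strength: float,
                               def_cas: float, def_strength: float,
                               days: float = 1.0) -> float:
    """Attacker percent-per-day loss rate divided by the
    defender percent-per-day loss rate."""
    att_rate = att_cas / att_strength / days
    def_rate = def_cas / def_strength / days
    return att_rate / def_rate

# Hypothetical one-day 3:1 attack: 30,000 attackers lose 900 men,
# 10,000 defenders lose 900 men. Absolute losses are equal, but the
# attacker's fractional loss is one-third the defender's. The RAND
# version of the rule would predict a ratio of 1.00 instead.
print(round(percent_per_day_loss_ratio(900, 30_000, 900, 10_000), 2))  # 0.33
```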

If we look at the simple ratio of actual losses (vice percent losses), then the numbers come much closer to parity, but this is not the RAND interpretation of the 3:1 rule. Six of the twelve numbers “hover” around an even exchange ratio, with the six other sets of data being widely off that central point. “Hover” for the rest of this discussion means that the exchange ratio ranges from 0.50-to-1 to 2.00-to-1.

Still, this is early modern linear combat, and is not always representative of modern war. Instead, we will examine 634 cases in the Division-level Database (which consists of 675 cases) where we have worked out the force ratios. While this database covers from 1904 to 1991, most of the cases are from WWII (1939-1945). Just to compare:

As such, 87% of the cases are from WWII data and 10% of the cases are from post-WWII data. The engagements without force ratios are those that we are still working on, as The Dupuy Institute is always expanding the DLEDB as a matter of routine. The specific cases, where the force ratios are between 2.50 and 3.50 to 1 (inclusive), are shown in Table 2:

This is a total of 98 engagements at force ratios of 2.50-to-1 to 3.50-to-1. It is 15 percent of the 634 engagements for which we had force ratios. With this fairly significant representation of the overall population, we are still getting no indication that the 3:1 rule, as RAND postulates it applies to casualties, fits the data at all. Of the 98 engagements, only 19 of them demonstrate a percent per day loss ratio (casualty exchange ratio) between 0.50-to-1 and 2-to-1. This is only 19 percent of the engagements at roughly 3:1 force ratio. There were 72 percent (71 cases) of those engagements at lower figures (below 0.50-to-1) and only 8 percent (8 cases) at a higher exchange ratio. The data clearly was not clustered around the 0.50-to-1 to 2-to-1 range, but was well to the left (lower) of it.
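The distribution of the 98 engagements can be tallied directly (a sketch using the counts given above):

```python
# Engagements at roughly 3:1 odds, binned by percent-per-day loss ratio
cases = {"below 0.50": 71, "0.50 to 2.00": 19, "above 2.00": 8}
total = sum(cases.values())  # 98 engagements

for band, n in cases.items():
    print(band, f"{100 * n / total:.0f}%")
# prints: below 0.50 72% / 0.50 to 2.00 19% / above 2.00 8%
```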

Looking just at straight exchange ratios, we do get a better fit, with 31 percent (30 cases) of the figures ranging between 0.50-to-1 and 2-to-1. Still, this exchange figure might not be the norm, with 45 percent (44 cases) lower and 24 percent (24 cases) higher. By definition, this fit is one-third the losses for the attacker postulated in the RAND version of the 3:1 rule. This is effectively an order of magnitude difference, and it clearly does not represent the norm or the center case.

The percent per day loss exchange ratio ranges from 0.00 to 5.71. The data tends to be clustered at the lower values, so the high values are very much outliers. The highest percent exchange ratio is 5.71, the second highest is 4.41, and the third highest is 2.92. At the other end of the spectrum, there are four cases where no losses were suffered by one side and seven where the exchange ratio was 0.01 or less. Ignoring the “N/A” cases (no losses suffered by one side) and the two high “outliers” (5.71 and 4.41) leaves a range of values from 0.00 to 2.92 across 92 cases. With an even distribution across that range, one would expect that 51 percent of them would be in the range of 0.50-to-1 to 2.00-to-1. With only 19 percent of the cases being in that range, one is left to conclude that there is no clear correlation here. In fact, the data shows the opposite effect: a negative relationship. Not only is the RAND construct unsupported, it is clearly and soundly contradicted by this data. Furthermore, the RAND construct is theoretically a worse predictor of casualty rates than if one randomly selected a value for the percentile exchange rates between the range of 0 and 2.92. We do believe this data is appropriate and accurate for such a test.
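The 51 percent expectation is simple arithmetic: the width of the 0.50-to-2.00 band divided by the width of the trimmed 0-to-2.92 range (a back-of-the-envelope check):

```python
band_lo, band_hi = 0.50, 2.00   # the "hover" band around parity
range_hi = 2.92                 # highest exchange ratio after trimming outliers

# Share of a uniform distribution over [0, 2.92] falling inside the band
expected_share = (band_hi - band_lo) / range_hi
print(f"{100 * expected_share:.0f}%")  # 51% expected, versus 19% observed
```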

As there are only 19 cases of 3:1 attacks falling in the even percentile exchange rate range, we should probably look at these cases for a moment.

One will note, in these 19 cases, that the average attacker casualties are way out of line with the average for the entire data set (3.20 versus 1.39, or 3.20 versus 0.63 with pre-1943 and Soviet-doctrine attackers removed). The reverse is the case for the defenders (3.12 versus 6.08, or 3.12 versus 5.83 with pre-1943 and Soviet-doctrine attackers removed). Of course, of the 19 cases, 2 are pre-1943 cases and 7 are cases of Soviet-doctrine attackers (in fact, 8 of the 14 cases of Soviet-doctrine attackers are in this selection of 19 cases). This leaves 10 other cases from the Mediterranean and ETO (Northwest Europe 1944). These are clearly the unusual cases, the outliers. While the RAND 3:1 rule may be applicable for the Soviet-doctrine offensives (as it applies to 8 of the 14 such cases we have), it does not appear to be applicable to anything else. By the same token, it also does not appear to apply to virtually any cases of post-WWII combat. This all strongly argues that not only is the RAND construct not proven, but it is indeed clearly not correct.

The fact that this construct also appears in Soviet literature, but nowhere else in U.S. literature, indicates that this is indeed where the rule was drawn from. One must consider that the original scenarios run for the RSAC [RAND Strategy Assessment Center] wargame were “Fulda Gap” and Korean War scenarios. As such, they were regularly conducting battles with Soviet attackers versus Allied defenders. It would appear that the 3:1 rule they used more closely reflected the experiences of the Soviet attackers in WWII than anything else. Therefore, it may have been a fine representation for those scenarios, as long as there was no U.S. counterattacking or U.S. offensives (and assuming that the Soviet Army of the 1980s performed at the same level as it did in the 1940s).

There was a clear relative performance difference between the Soviet Army and the German Army in World War II (see our Capture Rate Study Phase I & II and Measuring Human Factors in Combat for a detailed analysis of this).[1] It was roughly on the order of a 3-to-1 casualty exchange ratio. Therefore, it is not surprising that Soviet writers would create analytical tables based upon an equal percentage exchange of losses when attacking at 3:1. What is surprising is that such a table would be used in the U.S. to represent U.S. forces now. This is clearly not a correct application.

Therefore, RAND’s SFS, as currently constructed, is calibrated to, and should only be used to represent, a Soviet-doctrine attack on first world forces where the Soviet-style attacker is clearly not properly trained and where the degree of performance difference is similar to that between the Germans and Soviets in 1942-44. It should not be used for U.S. counterattacks, U.S. attacks, or for any forces of roughly comparable ability (regardless of whether they use Soviet-style doctrine or not). Furthermore, it should not be used for U.S. attacks against forces of inferior training, motivation, and cohesiveness. If it is, then any such tables should be expected to produce incorrect results, with attacker losses being far too high relative to the defender. In effect, the tables unrealistically penalize the attacker.

As JICM with SFS is now being used for a wide variety of scenarios, it should not be used at all until this fundamental error is corrected, even if that use is only for training. With combat tables keyed to a result that is clearly off by an order of magnitude, the danger of negative training is high.


[1] Capture Rate Study Phases I and II Final Report (The Dupuy Institute, March 6, 2000) (2 Vols.) and Measuring Human Factors in Combat—Part of the Enemy Prisoner of War Capture Rate Study (The Dupuy Institute, August 31, 2000). Both of these reports are available through our web site.

TDI Friday Read: Links You May Have Missed, 02 March 2018

We are trying something new today, well, new for TDI anyway. This edition of TDI Friday Read offers a selection of links to items we think may be of interest to our readers. We found them interesting but have not had the opportunity to offer observations or commentary about them. We hope you find them useful or interesting as well.

The story of the U.S. attack on a force of Russian mercenaries and Syrian pro-regime troops near Deir Ezzor, Syria, last month continues to have legs.

And a couple of stories related to naval warfare…

Finally, proving that there are, or soon will be, podcasts about everything, there is one about Napoleon Bonaparte and his era: The Age of Napoleon Podcast. We have yet to give it a listen, but if anyone else has, let us know what you think.

Have a great weekend.

Spotted In The New Books Section Of The U.S. Naval Academy Library…

Christopher A. Lawrence, War by Numbers: Understanding Conventional Combat (Lincoln, NE: Potomac Books, 2017) 390 pages, $39.95

War by Numbers assesses the nature of conventional warfare through the analysis of historical combat. Christopher A. Lawrence (President and Executive Director of The Dupuy Institute) establishes what we know about conventional combat and why we know it. By demonstrating the impact a variety of factors have on combat, he moves such analysis beyond the work of Carl von Clausewitz and into modern data and interpretation.

Using vast data sets, Lawrence examines force ratios, the human factor in case studies from World War II and beyond, the combat value of superior situational awareness, and the effects of dispersion, among other elements. Lawrence challenges existing interpretations of conventional warfare and shows how such combat should be conducted in the future, simultaneously broadening our understanding of what it means to fight wars by the numbers.

The book is available in paperback directly from Potomac Books and in paperback and Kindle from Amazon.

Russian Army Experiments With Using Tanks For Indirect Fire

Russian Army T-90S main battle tanks. [Ministry of Defense of the Russian Federation]

Finnish freelance writer and military blogger Petri Mäkelä spotted an interesting announcement from the Ministry of Defense of the Russian Federation: the Combined-Arms Army of the Western Military District is currently testing the use of main battle tanks for indirect fire at the Pogonovo test range in the Voronezh region.

According to Major General Timur Trubiyenko, First Deputy Commander of the Western Military District Combined-Arms Army, in the course of company exercises, 200 tankers will test a combination of platoon direct and indirect fire tactics against simulated armored, lightly armored, and concealed targets up to 12 kilometers away.

Per Mäkelä, the exercise will involve T-90S main battle tanks using their 2A46 125 mm/L48 smoothbore cannons. According to the Ministry of Defense, more than 1,000 Russian Army soldiers, employing over 100 weapons systems and special equipment items, will participate in the exercises between 19 and 22 February 2018.

Tanks were used on occasion to deliver indirect fire in World War II and Korea, but it is not a commonly used modern tactic. Modern fire control systems, guided rounds, and drone spotters might offer the means to make it more useful.

Attrition In Future Land Combat

Soldiers with Battery C, 1st Battalion, 82nd Field Artillery Regiment, 1st Brigade Combat Team, 1st Cavalry Division maneuver their Paladins through Hohenfels Training Area, Oct. 26. Photo Credit: Capt. John Farmer, 1st Brigade Combat Team, 1st Cav

[This post was originally published on June 9, 2017]

Last autumn, U.S. Army Chief of Staff General Mark Milley asserted that “we are on the cusp of a fundamental change in the character of warfare, and specifically ground warfare. It will be highly lethal, very highly lethal, unlike anything our Army has experienced, at least since World War II.” He made these comments while describing the Army’s evolving Multi-Domain Battle concept for waging future combat against peer or near-peer adversaries.

How lethal will combat on future battlefields be? Forecasting the future is, of course, an undertaking fraught with uncertainties. Milley’s comments undoubtedly reflect the Army’s best guesses about the likely impact of new weapons systems of greater lethality and accuracy, as well as improved capabilities for acquiring targets. Many observers have been closely watching the use of such weapons on the battlefield in the Ukraine. The spectacular success of the Zelenopillya rocket strike in 2014 was a convincing display of the lethality of long-range precision strike capabilities.

It is possible that ground combat attrition between peer or near-peer combatants in the future will be comparable to the U.S. experience in World War II (although there were considerable differences between the experiences of the various belligerents). Combat losses could be heavier. It certainly seems likely that they would be higher than those experienced by U.S. forces in recent counterinsurgency operations.

Unfortunately, the U.S. Defense Department has demonstrated a tenuous understanding of the phenomenon of combat attrition. Despite wildly inaccurate estimates for combat losses in the 1991 Gulf War, only modest effort has been made since then to improve understanding of the relationship between combat and casualties. The U.S. Army currently does not have either an approved tool or a formal methodology for casualty estimation.

Historical Trends in Combat Attrition

Trevor Dupuy did a great deal of historical research on attrition in combat. He found several trends that had strong enough empirical backing that he deemed them to be verities. He detailed his conclusions in Understanding War: History and Theory of Combat (1987) and Attrition: Forecasting Battle Casualties and Equipment Losses in Modern War (1995).

Dupuy documented a clear relationship over time between increasing weapon lethality, greater battlefield dispersion, and declining casualty rates in conventional combat. Even as weapons became more lethal, greater dispersal in frontage and depth among ground forces led daily personnel loss rates in battle to decrease.

The average daily battle casualty rate in combat has been declining since 1600 as a consequence. Since battlefield weapons continue to increase in lethality and troops continue to disperse in response, it seems logical to presume the trend in loss rates continues to decline, although this may not necessarily be the case. There were two instances in the 19th century where daily battle casualty rates increased—during the Napoleonic Wars and the American Civil War—before declining again. Dupuy noted that combat casualty rates in the 1973 Arab-Israeli War remained roughly the same as those in World War II (1939-45), almost thirty years earlier. Further research is needed to determine if average daily personnel loss rates have indeed continued to decrease into the 21st century.

Dupuy also discovered that, as with battle outcomes, casualty rates are influenced by the circumstantial variables of combat. Posture, weather, terrain, season, time of day, surprise, fatigue, level of fortification, and “all out” efforts affect loss rates. (The combat loss rates of armored vehicles, artillery, and other weapons systems are directly related to personnel loss rates, and are affected by many of the same factors.) Consequently, yet counterintuitively, he could find no direct relationship between numerical force ratios and combat casualty rates. Combat power ratios which take into account the circumstances of combat do affect casualty rates; forces with greater combat power inflict higher rates of casualties than less powerful forces do.

Winning forces suffer lower rates of combat losses than losing forces do, whether attacking or defending. (It should be noted that there is a difference between combat loss rates and numbers of losses. Depending on the circumstances, Dupuy found that the numerical losses of the winning and losing forces may often be similar, even if the winner’s casualty rate is lower.)

Dupuy’s research confirmed that the combat loss rates of smaller forces are higher than those of larger forces. This is in part because smaller forces have a larger proportion of their troops exposed to enemy weapons; combat casualties tend to be concentrated in the forward-deployed combat and combat support elements. Dupuy also surmised that Prussian military theorist Carl von Clausewitz’s concept of friction plays a role in this. The complexity of interactions between increasing numbers of troops and weapons simply diminishes the lethal effects of weapons systems on real world battlefields.

Somewhat unsurprisingly, higher quality forces (that better manage the ambient effects of friction in combat) inflict casualties at higher rates than those with less effectiveness. This can be seen clearly in the disparities in casualties between German and Soviet forces during World War II, Israeli and Arab combatants in 1973, and U.S. and coalition forces and the Iraqis in 1991 and 2003.

Combat Loss Rates on Future Battlefields

What do Dupuy’s combat attrition verities imply about casualties in future battles? As a baseline, he found that the average daily combat casualty rate in Western Europe during World War II for divisional-level engagements was 1-2% for winning forces and 2-3% for losing ones. For a divisional slice of 15,000 personnel, this meant daily combat losses of 150-450 troops, concentrated in the maneuver battalions. (The ratio of wounded to killed in modern combat has been found to be consistently about 4:1: 20% are killed in action; the other 80% include mortally wounded/wounded in action, missing, and captured.)

It seems reasonable to conclude that future battlefields will be less densely occupied. Brigades, battalions, and companies will be fighting in spaces formerly filled with armies, corps, and divisions. Fewer troops mean fewer overall casualties, but the daily casualty rates of individual smaller units may well exceed those of WWII divisions. Smaller forces experience significant variation in daily casualties, but Dupuy established average daily rates for them as shown below.

For example, based on Dupuy’s methodology, the average daily loss rate unmodified by combat variables for brigade combat teams would be 1.8% per day, battalions would be 8% per day, and companies 21% per day. For a brigade of 4,500, that would result in 81 battle casualties per day, a battalion of 800 would suffer 64 casualties, and a company of 120 would lose about 25 troops. These rates would then be modified by the circumstances of each particular engagement.
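
The arithmetic in the paragraphs above reduces to multiplying a unit’s strength by its average daily loss rate and, where needed, splitting total casualties using the roughly 4:1 wounded-to-killed ratio. A minimal sketch, using the strengths and rates quoted in the text (function names are illustrative):

```python
# Rough arithmetic for the unmodified average daily loss rates quoted above.
# These are baseline figures only; actual rates would be modified by the
# circumstances of each particular engagement.

def daily_casualties(strength, daily_rate):
    """Expected battle casualties for one day, before combat-variable modifiers."""
    return strength * daily_rate

def kia_split(casualties, kia_share=0.20):
    """Split total casualties into killed in action and all other losses (~4:1)."""
    kia = casualties * kia_share
    return kia, casualties - kia

division_low = daily_casualties(15_000, 0.01)   # 150 (1% winning-force rate)
division_high = daily_casualties(15_000, 0.03)  # 450 (3% losing-force rate)
brigade = daily_casualties(4_500, 0.018)        # ~81 for a brigade combat team
battalion = daily_casualties(800, 0.08)         # ~64 for a battalion
```

Applied to the WWII divisional slice, the same arithmetic reproduces the 150-450 daily loss range cited earlier.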

Several factors could push daily casualty rates down. Milley envisions that U.S. units engaged in an anti-access/area denial environment will be constantly moving. A low density, highly mobile battlefield with fluid lines would be expected to reduce casualty rates for all sides. High mobility might also limit opportunities for infantry assaults and close quarters combat. The high operational tempo will be exhausting, according to Milley. This could also lower loss rates, as the casualty inflicting capabilities of combat units decline with each successive day in battle.

It is not immediately clear how cyberwarfare and information operations might influence casualty rates. One combat variable they might directly impact would be surprise. Dupuy identified surprise as one of the most potent combat power multipliers. A surprised force suffers a higher casualty rate, while surprisers enjoy lower loss rates. Russian combat doctrine emphasizes using cyber and information operations to achieve it, and forces with degraded situational awareness are highly susceptible to it. As Zelenopillya demonstrated, surprise attacks with modern weapons can be devastating.

Some factors could push combat loss rates up. Long-range precision weapons could expose greater numbers of troops to enemy fires, which would drive casualties up among combat support and combat service support elements. Casualty rates historically drop during nighttime hours, but modern night-vision technology and persistent drone reconnaissance will likely enable continuous night and day battle, which could result in higher losses.

Drawing solid conclusions is difficult, but the question of future battlefield attrition is far too important not to be studied with greater urgency. Current policy debates over whether or not the draft should be reinstated and the proper size and distribution of manpower in active and reserve components of the Army hinge on getting this right. The trend away from mass on the battlefield means that there may not be a large margin of error should future combat forces suffer higher combat casualties than expected.

TDI Friday Read: Cool Maps Edition

Today’s edition of TDI Friday Read compiles some previous posts featuring maps we have found to be interesting, useful, or just plain cool. The history of military affairs would be incomprehensible without maps. Without them, it would be impossible to convey the temporal and geographical character of warfare or the situational awareness of the combatants. Of course, maps are susceptible to the same methodological distortions, fallacies, inaccuracies, and errors in interpretation to be found in any historical work. As with any historical resource, they need to be regarded with respectful skepticism.

Still, maps are cool. Check these out.

Arctic Territories

Visualizing European Population Density

Cartography And The Great War

Classics of Infoporn: Minard’s “Napoleon’s March”

New WWII German Maps At The National Archives

As an added bonus, here are two more links of interest. The first describes the famous map based on 1860 U.S. Census data that Abraham Lincoln used to understand the geographical distribution of slavery in the Southern states.

The second shows the potential of maps to provide new insights into history. It is an animated, interactive depiction of the trans-Atlantic slave trade derived from a database covering 315 years and 20,528 slave ship transits. It is simultaneously fascinating and sobering.

Initial SFAB Deployment To Afghanistan Generating High Expectations

Staff Sgt. Braxton Pernice, 6th Battalion, 1st Security Force Assistance Brigade, has his Pathfinder Badge pinned on by a fellow 1st SFAB Soldier Nov. 3, 2017, at Fort Benning, Ga., following his graduation from Pathfinder School. Pernice is one of three 1st SFAB Soldiers to graduate the school since the formation of the 1st SFAB. He, Sgt. 1st Class Rachel Lyons, and Capt. Travis Lowe, all with 6th Bn., 1st SFAB, were among 42 students of Pathfinder School class 001-18 to earn their badge. (U.S. Army photo by Spc. Noelle E. Wiehe)

It appears that the political and institutional stakes associated with the forthcoming deployment of the U.S. Army’s new 1st Security Force Assistance Brigade (SFAB) to Afghanistan have increased dramatically. Amidst the deteriorating security situation, the performance of 1st SFAB is coming to be seen as a test of President Donald Trump’s vow to “win” in Afghanistan and his reported insistence that increased troop and financial commitments demonstrate a “quick return.”

Many will also be watching to see if the SFAB concept validates the Army’s revamped approach to Security Force Assistance (SFA)—an umbrella term for whole-of-government support provided to develop the capability and capacity of foreign security forces and institutions. SFA has long been one of the U.S. government’s primary responses to threats of insurgency and terrorism around the world, but its record of success is decidedly mixed.

Earlier this month, 1st SFAB commander Colonel Scott Jackson reportedly briefed General Joseph Votel, head of U.S. Central Command, that his unit had received less than eight months of training and preparation instead of an expected 12 months, that his personnel had been rushed through the six-week Military Advisor Training Academy curriculum in only two weeks, and that the command suffered from personnel shortages. Votel reportedly passed these concerns to U.S. Army Chief of Staff General Mark Milley.

Competing Mission Priorities

Milley’s brainchild, the SFABs are intended to improve the Army’s ability to conduct SFA and to relieve line Brigade Combat Teams (BCTs) of responsibility for conducting it. Committing BCTs to SFA missions has been seen as both keeping them from more important conventional missions and inhibiting their readiness for high-intensity combat.

However, 1st SFAB may be caught between two competing priorities: to adequately train Afghan forces and also to partner with and support them in combat operations. The SFABs are purposely optimized for training and advising, but they are not designed for conducting combat operations. They lack a BCT’s command, control, intelligence, and combat assets. Some veteran military advisors have pointed out that BCTs are able to control battlespace and possess organic force protection, two capabilities the SFABs lack. While SFAB personnel will advise and accompany Afghan security forces in the field, they will not be able to support them in combat the way BCTs can. The Army will also have to deploy additional combat troops to provide sufficient force protection for 1st SFAB’s trainers.

Institutional Questions

The differing requirements for training and combat advising may be the reason the Army appears to be providing the SFABs with capabilities that resemble those of Army Special Forces (ARSOF) personnel and units. ARSOF’s primary mission is to operate “by, with and through” indigenous forces. While Milley has made clear in the past that the SFABs are not ARSOF, they do appear to include some deliberate similarities. While organized overall as a conventional BCT, the SFAB’s basic tactical teams include 12 personnel, like an ARSOF Operational Detachment A (ODA). Also like an ODA, the SFAB teams include intelligence and medical non-commissioned officers, and are apparently being assigned dedicated personnel for calling in air and fire support. (It is unclear from news reports whether the SFAB teams include regular personnel trained in basic call-for-fire techniques or highly-skilled joint terminal attack controllers (JTACs).)

SFAB personnel have been selected using criteria used for ARSOF recruitment and Army Ranger physical fitness standards. They are being given foreign language training at the Military Advisor Training Academy at Fort Benning, Georgia.

The SFAB concept has drawn some skepticism from the ARSOF community, which sees the train, advise, and assist mission as belonging to it. There are concerns that SFABs will compete with ARSOF for qualified personnel and the Army has work to do to create a viable career path for dedicated military advisors. However, as Milley has explained, there are not nearly enough ARSOF personnel to effectively staff the Army’s SFA requirements, let alone meet the current demand for other ARSOF missions.

An Enduring Mission

Single-handedly rescuing a floundering 16-year, $70 billion effort to create an effective Afghan army, as well as a national policy that suffers from basic strategic contradictions, seems like a tall order for a brand-new, understaffed Army unit. At least one veteran military advisor has asserted that 1st SFAB is being “set up to fail.”

Yet, regardless of how well it performs, the SFA requirement will neither diminish nor go away. The basic logic behind the SFAB concept remains valid. It is possible that a problematic deployment could inhibit future recruiting, but it seems more likely that the SFABs and Army military advising will evolve as experience accumulates. SFA may or may not be a strategic “game changer” in Afghanistan, but as a former Army combat advisor stated, “It sounds low risk and not expensive, even when it is, [but] it’s not going away whether it succeeds or fails.”

Visualizing The Multidomain Battle Battlespace

In the latest issue of Joint Forces Quarterly, General David G. Perkins and General James M. Holmes, respectively the commanding generals of U.S. Army Training and Doctrine Command (TRADOC) and U.S. Air Force Air Combat Command (ACC), present the results of the initial effort to fashion a unified, joint understanding of the multidomain battle (MDB) battlespace.

The thinking of the services proceeds from a basic idea:

Victory in future combat will be determined by how successfully commanders can understand, visualize, and describe the battlefield to their subordinate commands, thus allowing for more rapid decisionmaking to exploit the initiative and create positions of relative advantage.

In order to create this common understanding, TRADOC and ACC are seeking to blend the conceptualization of their respective operating concepts.

The Army’s…operational framework is a cognitive tool used to assist commanders and staffs in clearly visualizing and describing the application of combat power in time, space, and purpose… The Army’s operational and battlefield framework is, by the reality and physics of the land domain, generally geographically focused and employed in multiple echelons.

The mission of the Air Force is to fly, fight, and win—in air, space, and cyberspace. With this in mind, and with the inherent flexibility provided by the range and speed of air, space, and cyber power, the ACC construct for visualizing and describing operations in time and space has developed differently from the Army’s… One key difference between the two constructs is that while the Army’s is based on physical location of friendly and enemy assets and systems, ACC’s is typically focused more on the functions conducted by friendly and enemy assets and systems. Focusing on the functions conducted by friendly and enemy forces allows coordinated employment and integration of air, space, and cyber effects in the battlespace to protect or exploit friendly functions while degrading or defeating enemy functions across geographic boundaries to create and exploit enemy vulnerabilities and achieve a continuing advantage.

Despite having “somewhat differing perspectives on mission command versus C2 and on a battlefield framework that is oriented on forces and geography versus one that is oriented on function and time,” it turns out that the services’ respective conceptualizations of their operating concepts are not incompatible. The first cut on an integrated concept yielded the diagram above. As Perkins and Holmes point out,

The only noncommon area between these two frameworks is the Air Force’s Adversary Strategic area. This area could easily be accommodated into the Army’s existing framework with the addition of Strategic Deep Fires—an area over the horizon beyond the range of land-based systems, thus requiring cross-domain fires from the sea, air, and space.

Perkins and Holmes go on to map out the next steps.

In the coming year, the Army and Air Force will be conducting a series of experiments and initiatives to help determine the essential components of MDB C2. Between the Services there is a common understanding of the future operational environment, the macro-level problems that must be addressed, and the capability gaps that currently exist. Potential solutions require us to ask questions differently, to ask different questions, and in many cases to change our definitions.

Their expectation is that “Frameworks will tend to merge—not as an either/or binary choice—but as a realization that effective cross-domain operations on the land and sea, in the air, as well as cyber and electromagnetic domains will require a merged framework and a common operating picture.”

So far, so good. Stay tuned.

Robert Work On Recent Chinese Advances In A2/AD Technology

An image of a hypersonic glider-like object broadcast by Chinese state media in October 2017. No known images of the DF-17’s hypersonic glide vehicle exist in the public domain. [CCTV screen capture via East Pendulum/The Diplomat]

Robert Work, former Deputy Secretary of Defense and one of the architects of the Third Offset Strategy, has a very interesting article up over at Task & Purpose detailing the origins of the People’s Republic of China’s (PRC) anti-access/area denial (A2/AD) strategy and the development of military technology to enable it.

According to Work, the PRC government was humiliated by the impunity with which the U.S. was able to sail its aircraft carrier task forces unimpeded through the waters between China and Taiwan during the Third Taiwan Straits crisis in 1995-1996. Soon after, the PRC began a process of military modernization that remains in progress. Part of the modernization included technical development along three main “complementary lines of effort.”

  • The objective of the first line of effort was to obtain rough parity with the U.S. in “battle network-guided munitions warfare in the Western Pacific.” This included detailed study of U.S. performance in the 1990-1991 Gulf War and development of a Chinese version of a battle network featuring ballistic and guided missiles.
  • The second line of effort resulted in a sophisticated capability to attack U.S. networked military capabilities through “a blend of cyber, electronic warfare, and deception operations.”
  • The third line of effort produced specialized “assassin’s mace” capabilities for attacking specific weapons systems used for projecting U.S. military power overseas, such as aircraft carriers.

Work asserts that “These three lines of effort now enable contemporary Chinese battle networks to contest the U.S. military in every operating domain: sea, air, land, space, and cyberspace.”

He goes on to describe a fourth technological line of effort: the fielding of hypersonic glide vehicles (HGVs). HGVs are winged re-entry vehicles boosted aloft by ballistic missiles. Moving at hypersonic speeds at near-space altitudes (below 100 kilometers) yet remaining maneuverable, HGVs carrying warheads would be exceptionally difficult to intercept even if the U.S. fielded ballistic missile defense systems capable of engaging such targets (which it currently does not). The Chinese have already deployed HGVs on Dong Feng (DF) 17 intermediate-range ballistic missiles and have begun operational testing of the DF-21, which possesses intercontinental range.

Work concludes with a stark admonition: “An energetic and robust U.S. response to HGVs is required, including the development of new defenses and offensive hypersonic weapons of our own.”

1st Security Force Assistance Brigade To Deploy To Afghanistan In Spring

Capt. Christopher Hawkins, 1st Squadron, 38th Cavalry Regiment, 1st Security Force Assistance Brigade, middle, and an interpreter speak with local national soldiers to gain information about a village during an enacted military operation on urban terrain event at Lee Field, Oct. 23, 2017, on Fort Benning, Ga. (Photo Credit: Spc. Noelle E. Wiehe)

The U.S. Army recently announced that the newly-created 1st Security Force Assistance Brigade (SFAB) will deploy to Afghanistan under the command of Colonel Scott Jackson in the spring of 2018 in support of the ongoing effort to train and advise Afghan security forces. 1st SFAB personnel formed the initial classes at the Military Advisor Training Academy (MATA) in August 2017 at Fort Benning, Georgia; approximately 525 had completed the course by November.

The Army intends to establish five Regular Army and one Army National Guard SFABs. In December it stated that the 2nd SFAB would stand up in January 2018 at Fort Bragg, North Carolina.

The Army created the SFABs and MATA in an effort to improve its capability to resource and conduct Security Force Assistance (SFA) missions and to relieve line Brigade Combat Teams (BCTs) of these responsibilities. Each SFAB will be manned by approximately 800 volunteer senior officers and noncommissioned officers with demonstrated experience training and advising foreign security forces.

Specialized training at MATA includes language, foreign weapons, and the Joint Fires Observer course. SFAB commanders and leaders have previous command experience, and enlisted advisors hold the rank of sergeant and above. As of August 2017, recruiting for the first unit had fallen short by approximately 350 personnel, though the shortfall appears to have since been remedied. The Army is working to adjust policies and regulations with regard to promotion rates, selection boards, and special orders in order to formalize an SFAB career path.

Of Nuclear Buttons: Presidential Authority To Use Nuclear Weapons

[The Adventures of Buckaroo Banzai Across The 8th Dimension (1984)]

While the need for the president of the United States to respond swiftly to a nuclear emergency is clear, should there be limits on the commander in chief’s authority to order use of nuclear weapons in situations that fall below the threshold of existential threat? The question has arisen because the administration of President Donald Trump has challenged the existing taboos against nuclear use.

Last November, the U.S. Senate Foreign Relations Committee held a hearing to investigate the topic, which Congress had not considered since the height of the Cold War in the mid-1970s. Called at the behest of committee chairman Senator Bob Corker (R-TN), the hearing appeared intended to address congressional concerns over rumors that a preemptive U.S. attack on North Korea, possibly including nuclear strikes, was under consideration.

The consensus of the witnesses called to testify was that, as presently constituted, there are few statutory limits on the president’s power to authorize nuclear weapon use. The witnesses also questioned the wisdom of legislating changes to the existing arrangement.

Professor Peter Feaver of Duke University, a noted scholar on nuclear issues and former National Security Council advisor, distinguished between presidential authority to respond to a “bolt from the blue” surprise nuclear strike by an adversary, which is unquestioned, and the legitimacy of unilaterally ordering the use of nuclear weapons in a non-emergency scenario, which would be far more dubious. He conceded that there is no formal legal test; the only real constraint would lie in the judgment of U.S. military personnel over whether or not to carry out a presidential order of uncertain lawfulness.

There is no statutory framework undergirding the existing arrangement; it is an artifact of the urgency of the Cold War nuclear arms race. Under the Atomic Energy Act, Congress gave responsibility for the development, production, and custody of nuclear weapons to the executive branch, but it has passed no laws defining the circumstances under which they may or may not be used. Harry S. Truman alone decided to use atomic bombs against Japan in 1945. In the late 1950s, Dwight D. Eisenhower secretly pre-delegated authority to use nuclear weapons in certain emergency situations to some U.S. theater commanders; these instructions were retained by John F. Kennedy and Lyndon Johnson. Several presidents authorized the secret deployment of nuclear weapons to overseas storage locations.

The U.S. Constitution offers no clear guidance. War powers are divided between Congress, which has the sole authority to declare war and to raise and maintain armed forces, and the president, who is commander in chief of the armed forces. Congress attempted to clarify the circumstances under which the president may unilaterally authorize the use of military force in the War Powers Resolution of 1973. It stipulates that the president may commit U.S. military forces abroad only following a congressional declaration of war or authorization to use force, or in response to “a national emergency created by attack upon the United States, its territories or possessions, or its armed forces.”

Successive presidents have held that the resolution is unconstitutional, however, and have ignored its provisions on several occasions. Congress has traditionally afforded presidents wide deference in the conduct of foreign affairs and military conflicts, albeit subject to its customary mechanisms of oversight. In waging wars, presidents remain bound by U.S. law, including obligations to follow congressionally approved international conventions defining the laws of war. While presidents and Congress have disagreed over whether to begin or end foreign conflicts, the legislative branch has rarely elected to impose limits on a president’s prerogatives in how to wage such conflicts, including the choice of weapons to be employed.

The situation in Korea is an interesting case in itself. It was the first post-World War II instance of a president committing U.S. military forces to an overseas conflict without seeking a congressional declaration of war. Congress neither authorized the U.S. intervention in 1950 nor sanctioned the 1953 armistice that brought a cessation of combat. Truman instead invoked United Nations Security Council resolutions as justification for intervening in what he termed a “police action.”

Legally, the U.S. remains in a state of hostilities with North Korea. The 1953 armistice that halted the fighting was supposed to lead to a formal peace treaty, but one was never concluded. Under such precedents, the Trump administration could well claim that the president is within his constitutional prerogatives in deciding to employ nuclear weapons in a case of renewed hostilities.

In reality, defining the limits of presidential authority over nuclear weapons would be a political matter. While Congress possesses the constitutional power to legislate on the subject, actually doing so would likely require a rare bipartisan sense of purpose and a resolve strong enough to overcome what would undoubtedly be determined political and institutional opposition. Even if such a law were passed, it is likely every president would view it as an unconstitutional infringement on executive power. Resolving such an impasse could provoke a constitutional crisis; leaving it unresolved could easily result in catastrophic confusion in the military chain of command during an emergency. Redefining presidential nuclear authority would also probably require an expensive retooling of the nuclear command and control system, and it would introduce unforeseen second- and third-order effects into American foreign policy and military strategy.

In the end, a better solution to the problem might simply be for the American people to exercise due care in electing presidents to trust with decisions of existential consequence. Or they could decide to mitigate the risk by drastically reducing or abolishing the nuclear stockpile.


South Korea Considering Development Of Artillery Defense System

[Mauldin Economics]

In an article from last October that I missed on the first go-round, Ankit Panda, senior editor at The Diplomat, detailed a request by the South Korean Joint Chiefs of Staff to the National Assembly Defense Committee to study the feasibility of a missile defense system to counter North Korean long-range artillery and rocket artillery capabilities.

North Korea has invested heavily in its arsenal of conventional artillery. Other than nuclear weapons, this capability likely poses the greatest threat to South Korean security, particularly given the vulnerability of the capital Seoul, a city of nearly 10 million that lies just 35 miles south of the demilitarized zone.

The artillery defense system the South Korean Joint Chiefs seek to develop is not intended to protect civilian areas, however. It would be designed to shield critical command-and-control and missile defense sites. The Joint Chiefs have already considered and rejected buying Israel’s existing Iron Dome missile defense system as inadequate to the magnitude of the threat.

As Panda pointed out, the challenges involved in developing an artillery defense system capable of effectively countering North Korean capabilities are formidable.

South Korea would need to be confident that it would be able to maintain an acceptable intercept rate against the incoming projectiles—a task that may require a prohibitively large investment in launchers and interceptors. Moreover, the battle management software required for a system like this may prove to be exceptionally complex as well. Existing missile defense systems can already have their systems overwhelmed by multiple targets.
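Panda’s point about intercept rates can be made concrete with some back-of-the-envelope arithmetic. The sketch below is mine; the volley size, interceptor stock, and kill probability are purely illustrative assumptions, not figures from the article:

```python
def expected_leakers(incoming, interceptors, p_kill):
    """Expected number of projectiles that get through, assuming one
    interceptor per engaged target and an independent single-shot
    kill probability. Anything beyond the interceptor stock leaks."""
    engaged = min(incoming, interceptors)
    unengaged = max(incoming - interceptors, 0)
    return engaged * (1 - p_kill) + unengaged

# Illustrative: a 240-round volley against 200 ready interceptors with a
# 90% single-shot kill probability still lets roughly 60 rounds through.
leakers = expected_leakers(240, 200, 0.9)
```

Even optimistic kill probabilities leave dozens of leakers per volley once the attacker can fire more rounds than the defender has interceptors on hand, which is why the investment in launchers and interceptors could prove prohibitively large.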

It is likely that there will be broader interest in South Korean progress in this area (Iron Dome is a joint effort by the Israelis and Raytheon). Chinese and Russian long-range precision fires capabilities are bulwarks of the anti-access/area denial strategies that the U.S. military is currently attempting to overcome through its Third Offset Strategy and multi-domain battle initiatives.

First World War Digital Resources

Informal portrait of Charles E. W. Bean working on official files in his Victoria Barracks office during the writing of the Official History of Australia in the War of 1914-1918. The files on his desk are probably the Operations Files, 1914-18 War, that were prepared by the army between 1925 and 1930 and are now held by the Australian War Memorial as AWM 26. Courtesy of the Australian War Memorial. [Defence in Depth]

Chris and I have both taken to task the highly problematic state of military record-keeping in the digital era. So it is only fair to also highlight the strengths of the Internet for historical research, one of which is the increasing availability of digitized archival holdings, documents, and sources.

Although the posts are a couple of years old now, Dr. Robert T. Foley of the Defence Studies Department at King’s College London has provided a wonderful compilation of links to digital holdings and resources documenting the experiences of many of the belligerents in the First World War. The links include digitized archival holdings and electronic copies of often hard-to-find official histories of ground, sea, and air operations.

Digital First World War Resources: Online Archival Sources

Digital First World War Resources: Online Official Histories — The War on Land

Digital First World War Resources: Online Official Histories — The War at Sea and in the Air

For TDI, the availability of such materials greatly broadens the potential sources for research on historical combat. For example, TDI made use of German regional archival holdings to compile data on the use of chemical weapons in urban environments by the separate state armies that formed part of the Imperial German Army in the First World War. Although much of the German Army’s historical archive was destroyed by Allied bombing at the end of the Second World War, a great deal of material survived in regional state archives and elsewhere, as Dr. Foley shows. Access to the highly detailed official histories is another boon for such research.

The Digital Era promises unprecedented access to historical resources, and more materials are being added all the time. Current historians should benefit greatly. Future historians, alas, are not likely to be so fortunate when it comes time to craft histories of the current era.

Russian General Staff Chief Dishes On Military Operations In Syria

General of the Army Valeriy Gerasimov, Chief of the General Staff of the Armed Forces of the Russian Federation and First Deputy Minister of Defence of the Russian Federation [Wikipedia]

General of the Army Valery Gerasimov, Chief of the General Staff of the Armed Forces of Russia, provided detailed information on Russian military operations in Syria in an interview published in Komsomolskaya Pravda on the day after Christmas.

Maxim A. Suchkov, the Russian coverage editor for Al-Monitor, provided an English-language summary on Twitter.

While Gerasimov’s comments should be read critically, they do provide a fascinating insight into the Russian perspective on the intervention in Syria, which has proved remarkably successful with an economical investment in resources and money.

Gerasimov stated that planning for Russian military operations used Operation Anadyr, the secret deployment of troops and weapons to Cuba in 1962, as a template. A large-scale deployment of ground forces was ruled out at the start. The Syrian government army and militias were deemed combat-capable despite heavy combat losses, so the primary supporting tasks were identified as targeting and supporting fires to disrupt enemy “control systems.”

The clandestine transfer of up to 50 Russian combat aircraft to Hmeimim Air Base in Latakia, Syria, began a month before operations commenced in late September 2015. Logistical and infrastructure preparations took much longer. The most difficult initial challenge, according to Gerasimov, was coordinating Russian air support with Syrian government ground forces, but this was resolved over time.

The Russians viewed Daesh (ISIS) forces battling the Syrian government as a regular army employing combat tactics, fielding about 1,500 tanks and 1,200 artillery pieces seized from Syria and Iraq.

While the U.S.-led coalition conducted 8-10 air strikes per day against Daesh in Syria, the Russians averaged 60-70, with peaks of 120-140. Gerasimov attributed the disparity to the fact that the coalition sought to topple Bashar al-Assad’s regime, not to defeat Daesh. He said that while the Russians obtained U.S. cooperation on aerial deconfliction and “de-escalation” in southern Syria, offers for joint planning, surveillance, and strikes were turned down. Gerasimov asserted that Daesh would have been defeated faster had there been more collaboration.

More controversially, Gerasimov claimed that U.S.-supported New Syrian Army rebel forces at Al Tanf and Al-Shaddidi were “virtually” Daesh militants, seeking to destabilize Syria, and complained that the U.S. refused Russian access to the camp at Rukban.

According to Russian estimates, there were 59,000 Daesh fighters in September 2015, and 10,000 more were subsequently recruited. Now only 2,800 remain, and most surviving militants are returning to their home countries. Most are believed to be heading to Libya, some to Afghanistan, and others to Southwest Asia.

Gerasimov stated that Russia will continue to deploy sufficient forces in Syria to provide offensive support if needed and the Mediterranean naval presence will be maintained. The military situation remains unstable and the primary objective is the elimination of remaining al Nusra/Hay’at Tahrir al-Sham (al Qaida in Syria) fighters.

Some 48,000 Russian troops rotated through Syria, most on three-month tours, drawn from nearly 90% of Russian Army divisions and half of its regiments and brigades. Two hundred new weapons were tested, and “great leaps” were made in developing and using drone technology, which Gerasimov deemed now “integral” to the Russian military.

Gerasimov said that he briefed Russian Defense Minister Sergei Shoigu on Syria twice daily, and that Shoigu updated Russian President Vladimir Putin “once or twice a week.” All three would “sometimes” meet to plan together, and Gerasimov averred that “Putin sets [the] goals, tasks, [and] knows all the details on every level.”

TDI Friday Read: How Do We Know What We Know About War?

The late, great Carl Sagan.

Today’s edition of TDI Friday Read asks the question, how do we know if the theories and concepts we use to understand and explain war and warfare accurately depict reality? There is certainly no shortage of explanatory theories available, starting with Sun Tzu in the 6th century BCE and running to the present. As I have mentioned before, all combat models and simulations are theories about how combat works. Military doctrine is also a functional theory of warfare. But how do we know if any of these theories are actually true?

Well, one simple way to find out whether a particular theory is valid is to use it to predict the outcome of the phenomenon it purports to explain. Testing theory through prediction is a fundamental aspect of the philosophy of science. If a theory is accurate, it should be able to produce a reasonably accurate prediction of future behavior.
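To make this concrete, probabilistic predictions can be scored against observed outcomes with a standard metric such as the Brier score. This is my illustration of the principle, not a method any of the articles linked here prescribes:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts (0 to 1) and
    binary outcomes (0 or 1). Lower is better; 0.0 is a perfect record."""
    pairs = list(zip(forecasts, outcomes))
    return sum((f - o) ** 2 for f, o in pairs) / len(pairs)

# A sharp, well-calibrated theory outperforms one that always hedges:
sharp = brier_score([0.9, 0.1, 0.8], [1, 0, 1])  # ~0.02
hedge = brier_score([0.5, 0.5, 0.5], [1, 0, 1])  # 0.25
```

A theory whose forecasts consistently score worse than an uninformative 50-50 baseline is telling us something important about its validity.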

In his 2016 article, “Can We Predict Politics? Toward What End?” Michael D. Ward, a Professor of Political Science at Duke University, made a case for a robust effort to use prediction as a way of evaluating the thicket of theories populating security and strategic studies. Dropping invalid theories and concepts is important, but there is probably more value in figuring out how and why they are wrong.

Screw Theory! We Need More Prediction in Security Studies!

Trevor Dupuy and TDI publicly put their theories to the test in the form of combat casualty estimates for the 1991 Gulf War, the U.S. intervention in Bosnia, and the Iraqi insurgency. How well did they do?


Dupuy himself argued passionately for independent testing of combat models against real-world data, a process known as validation. This is actually seldom done in the U.S. military operations research community.

Military History and Validation of Combat Models

However, TDI has done validation testing of Dupuy’s Quantified Judgment Model (QJM) and Tactical Numerical Deterministic Model (TNDM). The results are available for all to judge.

Validating Trevor Dupuy’s Combat Models

I will conclude this post on a dissenting note. Trevor Dupuy spent decades arguing for more rigor in the development of combat models and analysis, with only modest success. In fact, he encountered significant skepticism and resistance to his ideas and proposals. To this day, the U.S. Defense Department seems relatively uninterested in evidence-based research on this subject. Why?

David Wilkinson, Editor-in-Chief of the Oxford Review, wrote a fascinating blog post, “Why evidence-based practice probably isn’t worth it…,” looking at why practitioners seem to have little actual interest in evidence-based practice. His argument:

The problem with evidence based practice is that outside of areas like health care and aviation/technology is that most people in organisations don’t care about having research evidence for almost anything they do. That doesn’t mean they are not interesting in research but they are just not that interested in using the research to change how they do things – period.

His explanation for why this is and what might be done to remedy the situation is quite interesting.

Happy Holidays to all!

Strachan On The Changing Character Of War

The Cove, the professional development site for the Australian Army, has posted a link to a 2011 lecture by Professor Sir Hew Strachan. Strachan, a Professor of International Relations at St. Andrews University in Scotland, is one of the more perceptive and trenchant observers about the recent trends in strategy, war, and warfare from a historian’s perspective. I highly recommend his recent book, The Direction of War.

Strachan’s lecture, “The Changing Character of War,” proceeds from Carl von Clausewitz’s discussion in On War of change and continuity in the history of war to examine the trajectories of recent conflicts. Among the topics Strachan covers are technological determinism, the irregular conflicts of the early 21st century, political and social mobilization, the spectrum of conflict, the impact of the Second World War on contemporary theorizing about war and warfare, and deterrence.

This is well worth the time to listen to and think about.

The Principle Of Mass On The Future Battlefield

Men of the U.S. Army 369th Infantry Regiment “Harlem’s Hellfighters,”in action at Séchault on September 29, 1918 during the Meuse-Argonne Offensive. [Wikimedia]

Given the historical trend toward battlefield dispersion as a result of the increasing lethality of weapons, how will the principle of mass apply in future warfare? I have been wondering about this for a while in the context of the two principal missions the U.S. Army must plan and prepare for: combined arms maneuver and wide area security. As multi-domain battle advocates contend, future combat will place a premium on smaller, faster combat formations capable of massing large amounts of firepower. However, wide area security missions, such as stabilization and counterinsurgency, will continue to demand significant numbers of “boots on the ground,” the traditional definition of mass on the battlefield. These seemingly contradictory requirements are contributing to the Army’s ongoing “identity crisis” over future doctrine, training, and force structure in an era of budget austerity and unchanging global security responsibilities.

Over at the Australian Army Land Power Forum, Lieutenant Colonel James Davis addresses the question of generating mass in combat in the context of the strategic challenges his army faces. He cites traditional responses by Western armies to this problem: “Regular and Reserve Force partnering through a standing force generation cycle, indigenous force partnering through deployed training teams and Reserve mobilisation to reconstitute and regenerate deployed units.”

Davis also mentions AirLand Battle and “blitzkrieg” as examples of tactical and operational approaches to limiting the ability of enemy forces to mass on the battlefield. To this he adds “more recent operational concepts, New Generation Warfare and Multi Domain Battle, [that] operate in the air, electromagnetic spectrum and cyber domain and to deny adversary close combat forces access to the battle zone.” These newer concepts use Cyber Electromagnetic Activities (CEMA), Information Operations, long range Joint Fires, and Robotic and Autonomous systems (RAS) to attack enemy efforts to mass.

The U.S. Army is moving rapidly to develop, integrate and deploy these capabilities. Yet, however effectively new doctrine and technology may influence mass in combined arms maneuver combat, it is harder to see how they can mitigate the need for manpower in wide area security missions. Some countries may have the strategic latitude to emphasize combined arms maneuver over wide area security, but the U.S. Army cannot afford to do so in the current security environment. Although conflicts emphasizing combined arms maneuver may present the most dangerous security challenge to the U.S., contingencies involving wide area security are far more likely.

How this may be resolved is an open question at this point. It is also a demonstration of how tactical and operational considerations influence strategic options.

TDI Friday Read: The Lanchester Equations

Frederick W. Lanchester (1868-1946), British engineer and author of the Lanchester combat attrition equations.

Today’s edition of TDI Friday Read addresses the Lanchester equations and their use in U.S. combat models and simulations. In 1916, British engineer Frederick W. Lanchester published a set of calculations he had derived for determining the results of attrition in combat. Lanchester intended them to be applied as an abstract conceptualization of aerial combat, stating that he did not believe they were applicable to ground combat.
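For readers who have not seen them, the equations are a minimal model of attrition: under aimed fire, each side’s loss rate is proportional to the other side’s strength, which yields Lanchester’s famous “square law.” A few lines of Python (my illustration; the force sizes and kill-rate coefficients are arbitrary) show the behavior:

```python
def lanchester_square(a0, b0, alpha, beta, dt=0.001):
    """Numerically integrate Lanchester's square-law equations
    dA/dt = -beta * B and dB/dt = -alpha * A until one side reaches zero.
    alpha and beta are per-capita kill rates; dt is the Euler time step."""
    A, B = a0, b0
    while A > 0 and B > 0:
        A, B = A - beta * B * dt, B - alpha * A * dt
    return max(A, 0.0), max(B, 0.0)

# With equal effectiveness, a 2:1 numerical edge is decisive: the larger
# force wins with about sqrt(1000**2 - 500**2) ~ 866 survivors, because
# under the square law fighting strength scales with the square of numbers.
survivors, _ = lanchester_square(1000, 500, alpha=0.01, beta=0.01)
```

The elegance is obvious; whether real engagements actually behave this way is precisely the validation question.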

Due to their elegant simplicity, U.S. military operations researchers nevertheless began incorporating the Lanchester equations into their land warfare computer combat models and simulations in the 1950s and 60s. The equations are the basis for many models and simulations used throughout the U.S. defense community today.

The problem with using Lanchester’s equations is that, despite numerous efforts, no one has been able to demonstrate that they accurately represent real-world combat.

Lanchester equations have been weighed….


Trevor Dupuy was critical of combat models based on the Lanchester equations because they cannot account for the role behavioral and moral (i.e. human) factors play in combat.

Human Factors In Warfare: Interaction Of Variable Factors

He was also critical of models and simulations that had not been tested to see whether they could reliably represent real-world combat experience. In the modeling and simulation community, this sort of testing is known as validation.

Military History and Validation of Combat Models

The use of unvalidated concepts, like the Lanchester equations, and unvalidated combat models and simulations persists. Critics have dubbed this the “base of sand” problem, and it continues to affect not only models and simulations, but all abstract theories of combat, including those represented in military doctrine.

Wargaming Multi-Domain Battle: The Base Of Sand Problem

How Does the U.S. Army Calculate Combat Power? ¯\_(ツ)_/¯

The constituents of combat power as described in current U.S. military doctrine. [The Lightning Press]

One of the fundamental concepts of U.S. warfighting doctrine is combat power. The current U.S. Army definition is “the total means of destructive, constructive, and information capabilities that a military unit or formation can apply at a given time” (ADRP 3-0). It is the construct commanders and staffs are taught to use to assess the relative effectiveness of combat forces, and it is woven deeply throughout all aspects of U.S. operational thinking.

To execute operations, commanders conceptualize capabilities in terms of combat power. Combat power has eight elements: leadership, information, mission command, movement and maneuver, intelligence, fires, sustainment, and protection. The Army collectively describes the last six elements as the warfighting functions. Commanders apply combat power through the warfighting functions using leadership and information. [ADP 3-0, Operations]

Yet, there is no formal method in U.S. doctrine for estimating combat power. The existing process is intentionally subjective and largely left up to judgment. This is problematic, given that assessing the relative combat power of friendly and opposing forces on the battlefield is the first step in Course of Action (COA) development, which is at the heart of the U.S. Military Decision-Making Process (MDMP). Estimates of combat power also figure heavily in determining the outcomes of wargames evaluating proposed COAs.

The Existing Process

The Army’s current approach to combat power estimation is outlined in Field Manual (FM) 6-0 Commander and Staff Organization and Operations (2014). Planners are instructed to “make a rough estimate of force ratios of maneuver units two levels below their echelon.” They are then directed to “compare friendly strengths against enemy weaknesses, and vice versa, for each element of combat power.” It is “by analyzing force ratios and determining and comparing each force’s strengths and weaknesses as a function of combat power” that planners gain insight into tactical and operational capabilities, perspectives, vulnerabilities, and required resources.

That is it. Planners are told that “although the process uses some numerical relationships, the estimate is largely subjective. Assessing combat power requires assessing both tangible and intangible factors, such as morale and levels of training.” There is no guidance as to how to determine force ratios [numbers of troops or weapons systems?]. Nor is there any description of how to relate force calculations to combat power. Should force strengths be used somehow to determine a combat power value? Who knows? No additional doctrinal or planning references are provided.

Planners then use these subjective combat power assessments to shape potential COAs and test them through wargaming. Although explicitly warned not to “develop and recommend COAs based solely on mathematical analysis of force ratios,” they are invited at this stage to consult a table of “minimum historical planning ratios as a starting point.” The table is clearly derived from the ubiquitous 3-1 rule of combat. Contrary to what FM 6-0 claims, neither the 3-1 rule nor the table has a clear historical provenance or any sort of empirical substantiation. There is no proven validity to any of the values cited. It is not even clear whether the “historical planning ratios” apply to manpower, firepower, or combat power.
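As a minimal sketch of what FM 6-0 actually asks planners to compute (the unit counts here are hypothetical, and the raw ratio is exactly as crude as the text suggests):

```python
def force_ratio(friendly_units, enemy_units):
    """The rough force ratio FM 6-0 calls for: a raw count of like
    maneuver units two levels below the planning echelon. No weighting
    for quality, readiness, or any of the intangibles the manual says
    dominate the estimate."""
    return friendly_units / enemy_units

# Hypothetical: 9 friendly vs. 4 enemy battalions gives 2.25:1, short of
# the oft-cited (and empirically unsubstantiated) 3:1 attack planning ratio.
ratio = force_ratio(9, 4)
meets_3_to_1 = ratio >= 3.0
```

That a two-line division carries this much weight in COA development is the heart of the problem the rest of this post describes.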

During this phase, planners are advised to account for “factors that are difficult to gauge, such as impact of past engagements, quality of leaders, morale, maintenance of equipment, and time in position. Levels of electronic warfare support, fire support, close air support, civilian support, and many other factors also affect arraying forces.” FM 6-0 offers no detail as to how these factors should be measured or applied, however.

FM 6-0 also addresses combat power assessment for stability and civil support operations through troop-to-task analysis. Force requirements are to be based on an estimate of troop density, a “ratio of security forces (including host-nation military and police forces as well as foreign counterinsurgents) to inhabitants.” The manual advises that “most density recommendations fall within a range of 20 to 25 counterinsurgents for every 1,000 residents in an area of operations. A ratio of twenty counterinsurgents per 1,000 residents is often considered the minimum troop density required for effective counterinsurgency operations.”
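The troop-density arithmetic itself is simple enough to sketch; the population figure below is hypothetical, and the 20-25 per 1,000 band is the one quoted above:

```python
def required_security_forces(population, per_thousand=20):
    """Troop-to-task estimate from the contested density heuristic:
    security forces (host-nation and foreign combined) per 1,000
    inhabitants. The default is the 20/1,000 'minimum' the manual cites."""
    return population / 1000 * per_thousand

# Hypothetical area of operations with 2.5 million residents:
low = required_security_forces(2_500_000)       # 50,000 at 20/1,000
high = required_security_forces(2_500_000, 25)  # 62,500 at 25/1,000
```

Note how a force requirement in the tens of thousands swings by a quarter on the choice of a coefficient that, as discussed below, has little empirical grounding.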

While FM 6-0 acknowledges that “as with any fixed ratio, such calculations strongly depend on the situation,” it does not mention that all references to force level requirements, tie-down ratios, or troop density were stripped from both Joint and Army counterinsurgency manuals in 2013 and 2014. Yet this construct lingers on in official staff planning doctrine. (Recent research challenged the validity of the troop density construct, but the Defense Department has yet to fund any follow-on work on the subject.)

The Army Has Known About The Problem For A Long Time

The Army has tried several solutions to the problem of combat power estimation over the years. In the early 1970s, the U.S. Army Center for Army Analysis (CAA; known then as the U.S. Army Concepts Analysis Agency) developed the Weighted Equipment Indices/Weighted Unit Value (WEI/WUV or “wee‑wuv”) methodology for calculating the relative firepower of different combat units. While WEI/WUVs were soon adopted throughout the Defense Department, the subjective nature of the method gradually led to its abandonment for official use.
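The WEI/WUV approach can be sketched in a few lines. The weights below are invented for illustration, not actual WEI values; the subjectivity of choosing them is exactly what led the method to be abandoned:

```python
def weighted_unit_value(inventory, weights):
    """WEI/WUV-style score: sum equipment counts times subjectively
    assigned effectiveness weights to get a single firepower number."""
    return sum(count * weights[item] for item, count in inventory.items())

# Hypothetical weights: tank "worth" 1.0, IFV 0.6, self-propelled gun 0.9.
weights = {"tank": 1.0, "ifv": 0.6, "sp_gun": 0.9}
blue = weighted_unit_value({"tank": 58, "ifv": 110, "sp_gun": 18}, weights)
red = weighted_unit_value({"tank": 95, "ifv": 60, "sp_gun": 36}, weights)
# Comparing blue to red yields a force ratio whose meaning rests entirely
# on the analyst's choice of weights.
```

Change the weights and the ratio, and possibly the recommended course of action, changes with them, with no empirical anchor to arbitrate.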

In the 1980s and 1990s, the U.S. Army Command & General Staff College (CGSC) published the ST 100-9 and ST 100-3 student workbooks, which contained tables of planning factors that became the informal basis for calculating combat power in staff practice. The STs were revised regularly and then adapted into spreadsheet format in the late 1990s. The 1999 iteration employed WEI/WUVs as the basis for calculating the firepower scores used to estimate force ratios. CGSC stopped updating the STs in the early 2000s as the Army focused on irregular warfare.

With the recently renewed focus on conventional conflict, Army staff planners are starting to realize that their planning factors are out of date. In an attempt to fill this gap, CGSC developed a new spreadsheet tool in 2012 called the Correlation of Forces (COF) calculator. It apparently drew upon analysis done by the U.S. Army Training and Doctrine Command Analysis Center (TRAC) in 2004 to establish new combat unit firepower scores. (TRAC’s methodology is not clear, but if it is based on this 2007 ISMOR presentation, the scores are derived from runs by an unspecified combat model modified by factors derived from the Army’s unit readiness methodology. If described accurately, this would not be an improvement over WEI/WUVs.)

The COF calculator continues to use the 3-1 force ratio tables. It also incorporates a table for estimating combat losses based on force ratios (this despite ample empirical historical analysis showing that there is no correlation between force ratios and casualty rates).
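To make concrete what such a tool automates, here is a minimal, hypothetical sketch of force-ratio arithmetic: notional firepower scores summed per side, with the resulting ratio compared to the doctrinal 3:1 planning threshold for an attack against a prepared defense. The unit names and scores are invented for illustration and are not the COF calculator's or TRAC's actual values.

```python
# Minimal sketch of the arithmetic a force-ratio calculator performs.
# All firepower scores below are made up for demonstration; they are
# not the COF calculator's actual values.

NOTIONAL_SCORES = {"tank_bn": 12.0, "mech_inf_bn": 8.0, "artillery_bn": 6.0}

def force_ratio(attacker: dict, defender: dict) -> float:
    """Sum notional firepower scores per side and return attacker:defender."""
    score = lambda units: sum(NOTIONAL_SCORES[u] * n for u, n in units.items())
    return score(attacker) / score(defender)

attacker = {"tank_bn": 3, "mech_inf_bn": 2, "artillery_bn": 2}   # total score 64.0
defender = {"tank_bn": 1, "mech_inf_bn": 1}                      # total score 20.0

r = force_ratio(attacker, defender)
print(f"{r:.2f}:1 — {'meets' if r >= 3.0 else 'below'} the 3:1 planning threshold")
```

The sketch also makes the criticism above concrete: everything of interest is baked into the score table, so the output can be no more valid than the unvalidated scores and the unvalidated 3:1 threshold it is compared against.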

While the COF calculator is not yet an official doctrinal product, CGSC plans to add Marine Corps forces to it for use as a joint planning tool and to incorporate it into the Army’s Command Post of the Future (CPOF). TRAC is developing a stand-alone version for use by force developers.

The incorporation of unsubstantiated and unvalidated concepts into Army doctrine has been a long-standing problem. In 1976, Huba Wass de Czege, then an Army colonel, took to task both “loosely structured and unscientific analysis” based on intuition and experience and simple counts of gross numbers as insufficient “for a clear and rigorous understanding of combat power in a modern context.” He proposed replacing them with an analytical framework for analyzing combat power that accounted for both measurable and intangible factors. Adopting a scrupulous method and language would overcome the simplistic tactical analysis then being taught. While some of the essence of Wass de Czege’s approach has found its way into doctrinal thinking, his criticism of the lack of objective and thorough analysis continues to echo (here, here, and here, for example).

Despite dissatisfaction with the existing methods, little has changed. The problem with this should be self-evident, but I will give the U.S. Naval War College the final word here:

Fundamentally, all of our approaches to force-on-force analysis are underpinned by theories of combat that include both how combat works and what matters most in determining the outcomes of engagements, battles, campaigns, and wars. The various analytical methods we use can shed light on the performance of the force alternatives only to the extent our theories of combat are valid. If our theories are flawed, our analytical results are likely to be equally wrong.

Did The Patriot BMD Miss Again In Saudi Arabia?

Did The Patriot BMD Miss Again In Saudi Arabia?

Apparent trajectory of Houthi Burqan ballistic missile fired at Saudi Arabia on 4 November 2017 [New York Times]

On 4 November 2017, Houthi rebels fired a Burqan 2H (a variant of the SCUD) ballistic missile from Yemeni territory aimed at Riyadh International Airport in Saudi Arabia. The Saudis claimed to have intercepted the missile before it hit using a U.S.-made Patriot PAC-2 ballistic missile defense (BMD) system.

A team of independent analysts has challenged that claim, however. Led by Jeffrey Lewis, Director of the East Asia Nonproliferation Program at the Middlebury Institute of International Studies at Monterey, the team analyzed video of an impact near the Riyadh airport and scattered missile debris. Based on this evidence, they concluded that five Saudi Patriot BMD missiles failed to intercept the incoming missile and that its warhead detonated on the ground just a kilometer away from a busy airport terminal.

The apparent failure of the Patriot BMD continues a string of operational disappointments that extends back to the 1991 Gulf War. Designed for terminal BMD against short- and medium-range ballistic missile threats, the Patriot forms part of the layered U.S. BMD system, and has also been sold to 14 other countries, including South Korea and Japan.

The credibility of U.S. and regional military defenses against North Korea rests significantly on perceptions of the effectiveness of U.S.-made BMD. As President Donald Trump boasted the day after the alleged Saudi missile intercept, “Our [Patriot] system knocked the missile out of the air… That’s how good we are. Nobody makes what we make, and now we’re selling it all over the world.”

TDI Friday Read: The Validity Of The 3-1 Rule Of Combat

TDI Friday Read: The Validity Of The 3-1 Rule Of Combat

Canadian soldiers going “over the top” during the First World War.

Today’s edition of TDI Friday Read addresses the question of force ratios in combat. How many troops are needed to successfully attack or defend on the battlefield? There is a long-standing rule of thumb that holds that an attacker requires a 3-1 preponderance over a defender in combat in order to win. The aphorism is so widely accepted that few have questioned whether it is actually true or not.

Trevor Dupuy challenged the validity of the 3-1 rule on empirical grounds. He could find no historical substantiation to support it. In fact, his research on the question of force ratios suggested that there was a limit to the value of numerical preponderance on the battlefield.

Trevor Dupuy and the 3-1 Rule

Human Factors In Warfare: Diminishing Returns In Combat

TDI President Chris Lawrence has also challenged the 3-1 rule in his own work on the subject.

Force Ratios in Conventional Combat

The 3-to-1 Rule in Histories

Aussie OR

The validity of the 3-1 rule is no mere academic question. It underpins a great deal of U.S. military policy and warfighting doctrine. Yet, the only time the matter was seriously debated was in the 1980s with reference to the problem of defending Western Europe against the threat of Soviet military invasion.

The Great 3-1 Rule Debate

It is probably long past due to seriously challenge the validity and usefulness of the 3-1 rule again.

How Do You Solve A Problem Like North Korea?

How Do You Solve A Problem Like North Korea?

Flight trajectories of North Korean missile tests, May-November 2017. [The Washington Post]

The Democratic People’s Republic of Korea (DPRK) conducted another ballistic missile test yesterday. Following a nearly vertical “lofted trajectory,” the test missile reached a height of 2,800 miles and impacted 620 miles downrange in the Sea of Japan. This performance would give the missile, which the North Koreans have designated the Hwasong-15, a strike range of 8,100 miles, which would include all of the United States.
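The inference from a lofted test to an operational range can be illustrated with a simple sketch. The code below uses flat-earth vacuum ballistics, which significantly understates range at ICBM speeds (serious estimates require round-earth modeling of the kind analysts such as David Wright perform), so it shows only the direction and rough scale of the inference, not the 8,100-mile figure itself.

```python
import math

# Back-of-the-envelope sketch of why a lofted test implies a much longer
# operational range. Flat-earth vacuum ballistics only; this badly
# understates range at these speeds, so treat the output as illustrative.

g = 9.81  # m/s^2

def infer_launch(apogee_m: float, range_m: float):
    """Recover launch angle and speed from a lofted shot's apogee and range,
    using apogee = v^2 sin^2(theta) / (2g) and range = v^2 sin(2 theta) / g."""
    theta = math.atan(4 * apogee_m / range_m)
    v = math.sqrt(2 * g * apogee_m) / math.sin(theta)
    return theta, v

def max_range(v: float) -> float:
    """Flat-earth maximum range, achieved at a 45-degree launch angle."""
    return v * v / g

# Approximate test figures: ~2,800 mi (~4,500 km) apogee, ~620 mi (~1,000 km) downrange
theta, v = infer_launch(4_500_000, 1_000_000)
print(f"launch angle ~{math.degrees(theta):.0f} deg, burnout speed ~{v:,.0f} m/s")
print(f"flat-earth minimum-energy range ~{max_range(v) / 1000:,.0f} km")
```

Even this crude approximation shows a near-vertical launch (roughly 87 degrees) and a minimum-energy range several times the observed downrange distance, which is the logic behind the much larger published strike-range estimates.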

Appended here is a roundup of TDI posts that address the political and military challenges posed by North Korea. It should be noted that the DPRK nuclear program has been underway for decades and has defied easy resolution thus far. There are no clear options at this stage and each potential solution carries a mix of risk and reward. The DPRK is highly militarized and the danger of catastrophic conflict looms large, with the potential to inflict military and civilian casualties running into the hundreds of thousands or more.

The first set of posts address a potential war on the Korean peninsula.

Chronology of North Korean Missile Development

Insurgency In The DPRK?

U.S. And China: Deterrence And Resolve Over North Korea

Casualty Estimates for a War with North Korea

The CRS Casualty Estimates

The second set of posts look at the DPRK ballistic missile threat and possible counters.

So, What Would Happen If The Norks Did Fire An ICBM At The U.S.?

Aegis, THAAD, Patriots and GBI

Defending Guam From North Korean Ballistic Missiles

The Pros And Cons Of Shooting Down North Korean Ballistic Missile Tests



U.S. Army Swarm Offensives In Future Combat

U.S. Army Swarm Offensives In Future Combat

For a while now, military pundits have speculated about the role robotic drones and swarm tactics will play in future warfare. U.S. Army Captain Jules Hurst recently took a first crack at adapting drones and swarms into existing doctrine in an article in Joint Forces Quarterly. In order to move beyond the abstract, Hurst looked at how drone swarms “should be inserted into the tactical concepts of today—chiefly, the five forms of offensive maneuver recognized under Army doctrine.”

Hurst pointed out that while drone design currently remains in flux, “for assessment purposes, future swarm combatants will likely be severable into two broad categories: fire support swarms and maneuver swarms.”

In Hurst’s reckoning, the chief advantage of fire support swarms would be their capacity to overwhelm current air defense systems in order to deliver either human-targeted or semi-autonomous precision fires. The long endurance of airborne drones also confers an ability to take and hold terrain that current manned systems do not possess.

The primary benefits of ground maneuver swarms, according to Hurst, would be their immunity from the human element of fear, giving them a resilient, persistent level of combat effectiveness. Their ability to collect real-time battlefield intelligence makes them ideal for enabling modern maneuver warfare concepts.

Hurst examines how these capabilities could be exploited through each of the Army’s current forms of maneuver: infiltration, penetration, frontal attack, envelopment, and the turning movement. While concluding that “ultimately, the technological limitations and advantages of maneuver swarms and fire support swarms will determine their uses,” Hurst acknowledged the critical role Army institutional leadership must play in order to successfully utilize the new technology on the battlefield.

U.S. officers and noncommissioned officers can accelerate that comfort [with new weapons] by beginning to postulate about the use of swarms well before they hit the battlefield. In the vein of aviation visionaries Billy Mitchell and Giulio Douhet, members of the Department of Defense must look forward 10, 20, or even 30 years to when artificial intelligence allows the deployment of swarm combatants on a regular basis. It will take years of field maneuvers to perfect the employment of swarms in combat, and the concepts formed during these exercises may be shattered during the first few hours of war. Even so, the U.S. warfighting community must adopt a venture capital mindset and accept many failures for the few novel ideas that may produce game-changing results.

Trevor Dupuy would have agreed. He argued that the crucial factor in military innovation was not technology, but the organizational approach to using it. Based on his assessment of historical patterns, Dupuy derived a set of preconditions necessary for the successful assimilation of new technology into warfare.

  1. An imaginative, knowledgeable leadership focused on military affairs, supported by extensive knowledge of, and competence in, the nature and background of the existing military system.
  2. Effective coordination of the nation’s economic, technological-scientific, and military resources.
    1. There must exist industrial or developmental research institutions, basic research institutions, military staffs and their supporting institutions, together with administrative arrangements for linking these with one another and with top decision-making echelons of government.
    2. These bodies must conduct their research, developmental, and testing activities according to mutually familiar methods so that their personnel can communicate, can be mutually supporting, and can evaluate each other’s results.
    3. The efforts of these institutions—in related matters—must be directed toward a common goal.
  3. Opportunity for battlefield experimentation as a basis for evaluation and analysis.

Is the U.S. Army up to the task?

Command and Combat Effectiveness: The Case of the British 51st Highland Division

Command and Combat Effectiveness: The Case of the British 51st Highland Division

Soldiers of the British 51st Highland Division take cover in bocage in Normandy, 1944. [Daily Record (UK)]

While Trevor Dupuy’s concept of combat effectiveness has been considered controversial by some, he was hardly the only one to observe that throughout history, some military forces have fought more successfully on the battlefield than others. While the sources of victory and defeat in battle remain a fertile yet understudied subject, there is a growing literature on the topic of military effectiveness in the fields of strategic and security studies.

Anthony King, a professor in War Studies at the University of Warwick, has published an outstanding article in the most recent edition of the British Journal of Military History, “Why did 51st Highland Division Fail? A case-study in command and combat effectiveness.” In it, he examined military command and combat effectiveness through the experience of the British 51st Highland Division in the 1944 Normandy Campaign. Most usefully, King developed a definition of military command that clarifies its relationship to combat effectiveness: “The function of a commander is to maximise combat power by defining achievable missions and, then, orchestrating subordinates into a cohesive whole committed to mission accomplishment.”

Defining Military Command

In order to analyze the relationship between command and combat effectiveness, King sought to “define the concept of command and to specify its relationship to management and leadership.” The construct he developed drew upon the work of Peter Drucker, an Austrian-born American business consultant and writer who is considered by many to be “the founder of modern management.” From Drucker, King distilled a definition of the function and process of military command: “command always consists of three elements: mission definition, mission management and mission motivation.”

As King explained, “When command is understood in this way, its connection to combat effectiveness begins to become clear.”

[C]ommand is an institutional solution to an organizational problem; it generates cohesion in a formation. Specifically, by uniting decision-making authority in one person and one role, a large military force is able to unite subordinate units, whose troops are not co-present with each other and who, in most cases, do not know each other. Crucially, the combat effectiveness of a formation, as a formation, is substantially dependent upon the ability of its commander to synchronise its disparate efforts in order to generate collective effects. Skillful command has a galvanising influence on a military force; by orchestrating the activities of subordinate units and motivating troops, command is able to create a level of combat power, which supervenes the capabilities of each of the parts. A well-commanded force has properties, which exceed those of its constituent units, fighting alone.

It is through the orchestration, synchronization, and motivation of effort, King concluded, that “command and combat effectiveness are immediately connected. Command fuses a formation together and increases its determination to fulfil its missions.”

Assessing the Combat Effectiveness of the 51st Division

The rest of King’s article is a detailed assessment of the combat effectiveness of the 51st Highland Division in Normandy in June and July 1944 using this military command construct. Observers at the time noted a decline in the division’s combat performance, which had been graded quite highly in North Africa and Sicily. The one obvious difference was the replacement of Major General Douglas Wimberley with Major General Charles Bullen-Smith in August 1943. After concluding that the 51st Division was no longer battleworthy, the commander of the British 21st Army Group, General Bernard Montgomery, personally relieved Bullen-Smith in late July 1944.

In reviewing Bullen-Smith’s performance, King concluded that

Although a number of factors contributed to the struggles of the Highland Division in Normandy, there is little doubt that the shortcomings of its commander, Major General Charles Bullen-Smith, were the critical factor. Charles Bullen-Smith failed to fulfill the three essential functions required of a commander… Bullen-Smith’s inadequacies are highly suggestive of a direct relationship between command and combat effectiveness; they demonstrate how command can augment or undermine combat performance.

King’s approach to military studies once again demonstrates the relevance of multi-disciplinary analysis based on solid historical research. His military command model should prove to be a very useful tool for analyzing the elements of combat effectiveness and assessing combat power. Along with Dr. Jonathan Fennell’s work on measuring morale, among others, it appears that good progress is being made on the study of human factors in combat and military operations, at least in the British academic community (even if Tom Ricks thinks otherwise).

TDI Friday Read: Naval Air Power

TDI Friday Read: Naval Air Power

A rare photograph of the current Russian Navy aircraft carrier Admiral Kuznetsov (ex-Riga, ex-Leonid Brezhnev, ex-Tbilisi) alongside her unfinished sister, now the Chinese PLAN Liaoning (former Ukrainian Navy Varyag), in the Mykolaiv shipyards, Ukraine. [Pavel Nenashev/Pinterest]

Today’s edition of TDI Friday Read is a round-up of blog posts addressing various aspects of naval air power. The first set address Russian and Chinese aircraft carriers and recent carrier operations.

The Admiral Kuznetsov Adventure

Lives Of The Russian (And Ex-Russian) Aircraft Carriers

Chinese Carriers

Chinese Carriers II

The last pair of posts discuss aspects of future U.S. naval air power and the F-35.

U.S. Armed Forces Vision For Future Air Warfare

The U.S. Navy and U.S. Air Force Debate Future Air Superiority

TDI Friday Read: How Many Troops Are Needed To Defeat An Insurgency?

TDI Friday Read: How Many Troops Are Needed To Defeat An Insurgency?

A paratrooper from the French Foreign Legion (1er REP) with a captured fellagha during the Algerian War (1954-1962). [Via Pinterest]

Today’s edition of TDI Friday Read is a compilation of posts addressing the question of manpower and counterinsurgency. The first four posts summarize research on the question undertaken during the first decade of the 21st century, while the Afghan and Iraqi insurgencies were in full bloom. Despite different research questions and analytical methodologies, each of the studies concluded that there is a relationship between counterinsurgent manpower and counterinsurgency outcomes.

The fifth post addresses the U.S. Army’s lack of a formal methodology for calculating manpower requirements for counterinsurgencies and contingency operations.

Force Ratios and Counterinsurgency

Force Ratios and Counterinsurgency II

Force Ratios and Counterinsurgency III

Force Ratios and Counterinsurgency IV

Has The Army Given Up On Counterinsurgency Research, Again?

Will Tax Reform Throttle A U.S. Defense Budget Increase?

Will Tax Reform Throttle A U.S. Defense Budget Increase?

John Conger recently reported in Defense One that the tax reform initiative championed by the Trump administration and Republican congressional leaders may torpedo an increase in the U.S. defense budget for 2018. Both the House and Senate have passed authorizations approving the Trump administration’s budget request for $574.5 billion in defense spending, which is $52 billion higher than the limit established by the Budget Control Act (BCA). However, the House and Senate also recently passed a concurrent 2018 budget resolution to facilitate passage of a tax reform bill that caps the defense budget at $522 billion as mandated by the BCA.

The House and Senate armed services committees continue to hammer out the terms of the 2018 defense authorization, which includes increases in troop strength and pay. These priorities could crowd out other spending requested by the services to meet strategic and modernization requirements if the budget remains capped. Congress also continues to resist the call by Secretary of Defense James Mattis to close unneeded bases and facilities, which could free spending for other needs. There is also little interest in reforming Defense Department business practices that allegedly waste $125 billion annually.

Congressional Republicans and Democrats were already headed toward a showdown over 2018 BCA limits on defense spending. Even before the tax reform push, several legislators predicted yet another year-long continuing resolution limiting government spending to the previous year’s levels. A bipartisan consensus existed among some armed services committee members that this would constitute “borderline legislative malpractice, particularly for the Department of Defense.”

Despite the ambitious timeline set by President Trump to pass a tax reform bill, the chances of a continuing resolution remain high. It also seems likely that any agreement to increase defense spending will be through the Overseas Contingency Operations budget, which is not subject to the BCA. Many in Congress agree with Democratic Representative Adam Smith that resorting to this approach is “a fiscal sleight of hand [that] would be bad governance and ‘hypocritical.’”

Are tax reform and increased defense spending incompatible? Stay tuned.

TDI Friday Read: Afghanistan

TDI Friday Read: Afghanistan

[SIGAR, Quarterly Report to Congress, 30 October 2017, p. 107]

While it is too soon to tell if the Trump Administration’s revised strategy in Afghanistan will make a difference, the recent report by the Special Inspector General for Afghanistan Reconstruction (SIGAR) to Congress documents the continued slow erosion of security in that country. Today’s edition of TDI Friday Read offers a selection of recent posts addressing some of the problems facing the U.S. counterinsurgent and stabilization missions there.


Meanwhile, In Afghanistan…

We probably need to keep talking about Afghanistan

What will be our plans for Afghanistan?

Stalemate in Afghanistan

Troop Increase in Afghanistan?

Sending More Troops to Afghanistan

Mattis on Afghanistan

Deployed Troop Counts

Disappearing Statistics



The Historical Combat Effectiveness of Lighter-Weight Armored Forces

The Historical Combat Effectiveness of Lighter-Weight Armored Forces

A Stryker Infantry Carrier Vehicle-Dragoon fires 30 mm rounds during a live-fire demonstration at Aberdeen Proving Ground, Md., Aug. 16, 2017. Soldiers with 2nd Cavalry Regiment spent six weeks at Aberdeen testing and training on the new Stryker vehicle and a remote Javelin system, which are expected to head to Germany early next year for additional user testing. (Photo Credit: Sean Kimmons)

In 2001, The Dupuy Institute conducted a study for the U.S. Army Center for Army Analysis (CAA) on the historical effectiveness of lighter-weight armored forces. At the time, the Army had developed a requirement for an Interim Armored Vehicle (IAV), lighter and more deployable than the existing M1 Abrams Main Battle Tank and M2 Bradley Infantry Fighting Vehicle, to form the backbone of the future “Objective Force.” This program would result in the development of the Stryker Infantry Carrier Vehicle.

CAA initiated the TDI study at the request of Walter W. “Don” Hollis, then the Deputy Undersecretary of the Army for Operations Research (a position that was eliminated in 2006). TDI completed and submitted “The Historical Combat Effectiveness of Lighter-Weight Armored Forces” to CAA in August 2001. It examined the effectiveness of light and medium-weight armored forces in six scenarios:

  • Conventional conflicts against an armor supported or armor heavy force.
  • Emergency insertions against an armor supported or armor heavy force.
  • Conventional conflict against a primarily infantry force (as one might encounter in sub-Saharan Africa).
  • Emergency insertion against a primarily infantry force.
  • A small to medium insurgency (includes an insurgency that develops during a peacekeeping operation).
  • A peacekeeping operation or similar Operation Other Than War (OOTW) that has some potential for violence.

The historical data the study drew upon came from 146 cases of small-scale contingency operations; U.S. involvement in Vietnam; German counterinsurgency operations in the Balkans, 1941-1945; the Philippines Campaign, 1941-42; the Normandy Campaign, 1944; the Korean War 1950-51; the Persian Gulf War, 1990-91; and U.S. and European experiences with light and medium-weight armor in World War II.

The major conclusions of the study were:

Small Scale Contingency Operations (SSCOs)

  1. Implications for the Interim Armored Vehicle (IAV) Family of Vehicles. It would appear that existing systems (M-2 and M-3 Bradley and M-113) can fulfill most requirements. Current plans to develop an advanced LAV-type vehicle may cover almost all other shortfalls. Mine protection is a design feature that should be emphasized.
  2. Implications for the Interim Brigade Combat Team (IBCT). The need for armor in SSCOs that are not conventional or closely conventional in nature is limited and rarely approaches the requirements of a brigade-size armored force.


  1. Implications for the Interim Armored Vehicle (IAV) Family of Vehicles. It would appear that existing systems (M-2 and M-3 Bradley and M-113) can fulfill most requirements. The armor threat in insurgencies is very limited until the later stages if the conflict transitions to conventional war. In either case, mine protection is a design feature that may be critical.
  2. Implications for the Interim Brigade Combat Team (IBCT). It is the nature of insurgencies that rapid deployment of armor is not essential. The armor threat in insurgencies is very limited until the later stages if the conflict transitions to a conventional war and rarely approaches the requirements of a brigade-size armored force.

Conventional Warfare

Conventional Conflict Against An Armor Supported Or Armor Heavy Force

  1. Implications for the Interim Armored Vehicle (IAV) Family of Vehicles. It may be expected that opposing heavy armor in a conventional armor versus armor engagement could significantly overmatch the IAV. In this case the primary requirement would be for a weapon system that would allow the IAV to defeat the enemy armor before it could engage the IAV.
  2. Implications for the Interim Brigade Combat Team (IBCT). The IBCT could substitute as an armored cavalry force in such a scenario.

Conventional Conflict Against A Primarily Infantry Force

  1. Implications for the Interim Armored Vehicle (IAV) Family of Vehicles. These appear to be little different from the conclusions found for the use of armor in SSCOs and insurgencies.
  2. Implications for the Interim Brigade Combat Team (IBCT). The lack of a major armor threat will make the presence of armor useful.

Emergency Insertion Against An Armor Supported Or Armor Heavy Force

  1. Implications for the Interim Armored Vehicle (IAV) Family of Vehicles. It appears that the IAV may be of great use in an emergency insertion. However, the caveat regarding the threat of being overmatched by conventional heavy armor mentioned above should not be ignored. In this case the primary requirement would be for a weapon system that would allow the IAV to defeat the enemy armor before it could engage the IAV.
  2. Implications for the Interim Brigade Combat Team (IBCT). Although the theoretical utility of the IBCT in this scenario may be great, it should be noted that The Dupuy Institute was only able to find one comparable case of such a deployment which resulted in actual conflict in US military history in the last 60 years (Korea, 1950). In this case the effect of pushing forward light tanks into the face of heavier enemy tanks was marginal.

Emergency Insertion Against A Primarily Infantry Force

  1. Implications for the Interim Armored Vehicle (IAV) Family of Vehicles. The lack of a major armor threat in this scenario will make the presence of any armor useful. However, The Dupuy Institute was unable to identify the existence of any such cases in the historical record.
  2. Implications for the Interim Brigade Combat Team (IBCT). The lack of a major armor threat will make the presence of any armor useful. However, The Dupuy Institute was unable to identify the existence of any such cases in the historical record.

Other Conclusions

Wheeled Vehicles

  1. There is little historical evidence one way or the other establishing whether wheels or tracks are the preferable feature of AFVs.

Vehicle Design

  1. In SSCOs access to a large-caliber main gun was useful for demolishing obstacles and buildings. This capability is not unique and could be replaced by AT missile-armed CFVs, IFVs, and APCs.
  2. Any new lighter tank-like vehicle should make its gun system the highest priority, armor secondary and mobility and maneuverability tertiary.
  3. Mine protection should be emphasized. Mines were a major threat to all types of armor in many scenarios. In many SSCOs it was the major cause of armored vehicle losses.
  4. The robust carrying capacity offered by an APC over a tank is an advantage during many SSCOs.

Terrain Issues

  1. The use of armor in urban fighting, even in SSCOs, is still limited. The threat to armor from other armor in urban terrain during SSCOs is almost nonexistent. Most urban warfare armor needs, where armor basically serves as a support weapon, can be met with light armor (CFVs, IFVs, and APCs).
  2. Vehicle weight is sometimes a limiting factor in less developed areas. In all cases where this was a problem, there was not a corresponding armor threat. As such, in almost all cases, the missions and tasks of a tank can be fulfilled with other light armor (CFVs, IFVs, or APCs).
  3. The primary terrain problem is rivers and flooded areas. It would appear that in difficult terrain, especially heavily forested terrain (areas with lots of rainfall, like jungles), a robust river crossing capability is required.

Operational Factors

  1. Emergency insertions and delaying actions sometimes appear to be a good way to lose lots of armor for limited gain. This tends to come about due to terrain problems, enemy infiltration and bypassing, and the general confusion prevalent in such operations. The Army should be careful not to piecemeal assets when inserting valuable armor resources into a ‘hot’ situation. In many cases holding back and massing the armor for defense or counter-attack may be the better option.
  2. Transportability limitations have not been a major factor in the past for determining whether lighter or heavier armor were sent into a SSCO or a combat environment.

Casualty Sensitivity

  1. In a SSCO or insurgency, in most cases the weight and armor of the AFVs is not critical. As such, one would not expect any significant changes in losses regardless of the type of AFV used (MBT, medium-weight armor, or light armor). However, the perception that US forces are not equipped with the best-protected vehicle may cause some domestic political problems. The US government is very casualty sensitive during SSCOs. Furthermore, the current US main battle tank is particularly impressive, and may help provide some additional intimidation in SSCOs.
  2. In any emergency insertion scenario or conventional war scenario, the use of lighter armor could result in higher US casualties and lesser combat effectiveness. This will certainly cause some domestic political problems and may impact army morale. However, by the same token, light infantry forces unsupported by easily deployable armor could present a worse situation.

U.S. Army Solicits Proposals For Mobile Protected Firepower (MPF) Light Tank

U.S. Army Solicits Proposals For Mobile Protected Firepower (MPF) Light Tank

The U.S. Army’s late and apparently lamented M551 Sheridan light tank. [U.S. Department of the Army/Wikipedia]

The U.S. Army recently announced that it will begin soliciting Requests for Proposal (RFP) in November to produce a new lightweight armored vehicle for its Mobile Protected Firepower (MPF) program. MPF is intended to field a company of vehicles for each Army Infantry Brigade Combat Team to provide them with “a long-range direct-fire capability for forcible entry and breaching operations.”

The Army also plans to field the new vehicle quickly. It is dispensing with the usual two-to-three year technology development phase, and will ask for delivery of the first sample vehicles by April 2018, one month after the RFP phase is scheduled to end. This will invariably favor proposals using existing off-the-shelf vehicle designs and “mature technology.”

The Army apparently will also accept proposals with turret-mounted 105mm main guns, at least initially. According to previous MPF parameters, acceptable designs will eventually need to be able to accommodate 120mm guns.

I have observed in the past that the MPF is the result of the Army’s concerns that its light infantry may be deprived of direct fire support on anti-access/area denial (A2/AD) battlefields. Track-mounted, large caliber direct fire guns dedicated to infantry support are something of a doctrinal throwback to the assault guns of World War II, however.

There was a noted tendency during World War II to use anything on the battlefield that resembled a tank as a main battle tank, with unhappy results for the not-main battle tanks. As a consequence, assault guns, tank destroyers, and light tanks became evolutionary dead-ends in the development of post-World War II armored doctrine (the late M551 Sheridan, retired without replacement in 1996, notwithstanding). [For more on the historical background, see The Dupuy Institute, “The Historical Effectiveness of Lighter-Weight Armored Forces,� August 2001.]

The Army has been reluctant to refer to MPF as a light tank, but as David Dopp, the MPF Program Manager, admitted, "I don't want to say it's a light tank, but it's kind of like a light tank." He went on to say that "It's not going toe to toe with a tank… It's for the infantry. It goes where the infantry goes — it breaks through bunkers, it works through targets that the infantry can't get through."

Major General David Bassett, program executive officer for the Army's Ground Combat Systems, concurred. It will be a tracked vehicle with substantial armor protection, Bassett said, "but certainly not what you'd see on a main battle tank."

It will be interesting to see what the proposals have to offer.

Previous TDI commentaries on the MPF Program:

Validating Trevor Dupuy’s Combat Models

[The article below is reprinted from Winter 2010 edition of The International TNDM Newsletter.]

A Summation of QJM/TNDM Validation Efforts

By Christopher A. Lawrence

There have been six or seven different validation tests conducted of the QJM (Quantified Judgment Model) and the TNDM (Tactical Numerical Deterministic Model). As the changes to these two models are evolutionary in nature but do not fundamentally change the nature of the models, the whole series of validation tests across both models is worth noting. To date, this is the only model we are aware of that has been through multiple validations. We are not aware of any DOD [Department of Defense] combat model that has undergone more than one validation effort. Most of the DOD combat models in use have not undergone any validation.

The Two Original Validations of the QJM

After its initial development using a 60-engagement WWII database, the QJM was tested in 1973 by application of its relationships and factors to a validation database of 21 World War II engagements in Northwest Europe in 1944 and 1945. The original model proved to be 95% accurate in explaining the outcomes of these additional engagements. Overall accuracy in predicting the results of the 81 engagements in the developmental and validation databases was 93%.[1]

During the same period the QJM was converted from a static model that only predicted success or failure to one capable of also predicting attrition and movement. This was accomplished by adding variables and modifying factor values. The original QJM structure was not changed in this process. The addition of movement and attrition as outputs allowed the model to be used dynamically in successive "snapshot" iterations of the same engagement.

From 1973 to 1979 the QJM's formulae, procedures, and variable factor values were tested against the results of all of the 52 significant engagements of the 1967 and 1973 Arab-Israeli Wars (19 from the former, 33 from the latter). The QJM was able to replicate all of those engagements with an accuracy of more than 90%.[2]

In 1979 the improved QJM was revalidated by application to 66 engagements. These included 35 from the original 81 engagements (the "development database"), and 31 new engagements. The new engagements included five from World War II and 26 from the 1973 Middle East War. This new validation test considered four outputs: success/failure, movement rates, personnel casualties, and tank losses. The QJM predicted success/failure correctly for about 85% of the engagements. It predicted movement rates with an error of 15% and personnel attrition with an error of 40% or less. While the error rate for tank losses was about 80%, it was discovered that the model consistently underestimated tank losses because input data included all kinds of armored vehicles, but output data losses included only numbers of tanks.[3]

This completed the original validation efforts of the QJM. The data used for the validations, and parts of the results of the validation, were published, but no formal validation report was issued. The validation was conducted in-house by Colonel Dupuy's organization, HERO [Historical Evaluation Research Organization]. The data used were mostly from division-level engagements, although they included some corps- and brigade-level actions. We count these as two separate validation efforts.

The Development of the TNDM and Desert Storm

In 1990 Col. Dupuy, with the collaborative assistance of Dr. James G. Taylor (author of Lanchester Models of Warfare [vol. 1] [vol. 2], published by the Operations Research Society of America, Arlington, Virginia, in 1983) introduced a significant modification: the representation of the passage of time in the model. Instead of resorting to successive "snapshots," the introduction of Taylor's differential equation technique permitted the representation of time as a continuous flow. While this new approach required substantial changes to the software, the relationship of the model to historical experience was unchanged.[4] This revision of the model also included the substitution of formulae for some of its tables so that there was a continuous flow of values across the individual points in the tables. It also included some adjustment to the values and tables in the QJM. Finally, it incorporated a revised OLI [Operational Lethality Index] calculation methodology for modern armor (mobile fighting machines) to take into account all the factors that influence modern tank warfare.[5] The model was reprogrammed in Turbo PASCAL (the original had been written in BASIC). The new model was called the TNDM (Tactical Numerical Deterministic Model).

Building on its foundation of historical validation and proven attrition methodology, in December 1990, HERO used the TNDM to predict the outcome of, and losses from, the impending Operation DESERT STORM.[6] It was the most accurate (lowest) public estimate of U.S. war casualties provided before the war. It differed from most other public estimates by an order of magnitude.

Also, in 1990, Trevor Dupuy published an abbreviated form of the TNDM in the book Attrition: Forecasting Battle Casualties and Equipment Losses in Modern War. A brief validation exercise using 12 battles from 1805 to 1973 was published in this book.[7] This version was used for creation of M-COAT[8] and was also separately tested by a student (Lieutenant Gozel) at the Naval Postgraduate School in 2000.[9] This version did not have the firepower scoring system, and as such neither M-COAT, Lieutenant Gozel's test, nor Colonel Dupuy's 12-battle validation included the OLI methodology that is in the primary version of the TNDM.

For counting purposes, I consider the Gulf War the third validation of the model. In the end, for any model, the proof is in the pudding: can the model be used as a predictive tool or not? If not, then there is probably a fundamental flaw or two in the model. Still, the validation of the TNDM was somewhat second-hand, in the sense that the closely related previous model, the QJM, had been validated in the 1970s to 200 World War II and 1967 and 1973 Arab-Israeli War battles, but the TNDM had not been. Clearly, something further needed to be done.

The Battalion-Level Validation of the TNDM

Under the guidance of Christopher A. Lawrence, The Dupuy Institute undertook a battalion-level validation of the TNDM in late 1996. This effort tested the model against 76 engagements from World War I, World War II, and the post-1945 world, including Vietnam, the Arab-Israeli Wars, the Falklands War, Angola, Nicaragua, etc. This effort was thoroughly documented in The International TNDM Newsletter.[10] It was probably one of the more independent and better-documented validations of a casualty estimation methodology ever conducted, in that:

  • The data was independently assembled (assembled for other purposes before the validation) by a number of different historians.
  • There were no calibration runs or adjustments made to the model before the test.
  • The data included a wide range of material from different conflicts and times (from 1918 to 1983).
  • The validation runs were conducted independently (Susan Rich conducted the validation runs, while Christopher A. Lawrence evaluated them).
  • The results of the validation were fully published.
  • The people conducting the validation were independent, in the sense that:

a) there was no contract, management, or agency requesting the validation;
b) none of the validators had previously been involved in designing the model, and had only very limited experience in using it; and
c) the original model designer was not able to oversee or influence the validation.[11]

The validation was not truly independent, as the model tested was a commercial product of The Dupuy Institute, and the person conducting the test was an employee of the Institute. On the other hand, this was an independent effort in the sense that the effort was employee-initiated and not requested or reviewed by the management of the Institute. Furthermore, the results were published.

The TNDM was also given a limited validation test back to its original WWII data around 1997 by Niklas Zetterling of the Swedish War College, who retested the model against 15 or so Italian campaign engagements. This effort included a complete review of the historical data used for the validation back to their primary sources, and details were published in The International TNDM Newsletter.[12]

There has been one other effort to correlate outputs from QJM/TNDM-inspired formulae to historical data using the Ardennes and Kursk campaign-level (i.e., division-level) databases.[13] This effort did not use the complete model, but only selective pieces of it, and achieved various degrees of "goodness of fit." While the model is hypothetically designed for use from squad level to army group level, to date no validation has been attempted below battalion level, or above division level. At this time, the TNDM also needs to be revalidated back to its original WWII and Arab-Israeli War data, as it has evolved since the original validation effort.

The Corps- and Division-level Validations of the TNDM

Having now done one extensive battalion-level validation of the model and published the results in our newsletters (Volume 1, issues 5 and 6), we were then presented an opportunity in 2006 to conduct two more validations of the model. These are discussed in depth in two articles of this issue of the newsletter.

These validations were conducted using historical data: 24 days of corps-level combat and 25 cases of division-level combat drawn from the Battle of Kursk during 4-15 July 1943. The effort used an independently researched data collection (although the research was conducted by The Dupuy Institute), a different person to conduct the model runs (although that person was an employee of the Institute), and another person to compile the results (also an employee of the Institute). To summarize the results of this validation (the historical figure is listed first, followed by the predicted result):

There was one other effort that was done as part of work we did for the Army Medical Department (AMEDD). This is fully explained in our report Casualty Estimation Methodologies Study: The Interim Report dated 25 July 2005. In this case, we tested six different casualty estimation methodologies against 22 cases. These consisted of 12 division-level cases from the Italian Campaign (4 where the attack failed, 4 where the attacker advanced, and 4 where the defender was penetrated) and 10 cases from the Battle of Kursk (2 cases where the attack failed, 4 where the attacker advanced, and 4 where the defender was penetrated). These 22 cases were randomly selected from our earlier 628-case version of the DLEDB (Division-Level Engagement Database; it now has 752 cases). Again, the TNDM performed as well as or better than any of the other casualty estimation methodologies tested. As this validation effort used the Italian engagements previously used for validation (although some had been revised due to additional research) and three of the Kursk engagements that were later used for our division-level validation, it is debatable whether one would want to call this a seventh validation effort. Still, it was done as above, with one person assembling the historical data and another person conducting the model runs. This effort was conducted a year before the corps- and division-level validation described above and influenced it to the extent that we chose a higher CEV (Combat Effectiveness Value) for the later validation. A CEV of 2.5 was used for the Soviets for this test, vice the CEV of 3.0 that was used for the later tests.


The QJM has been validated at least twice. The TNDM has been tested or validated at least four times: once against an upcoming, imminent war, once against battalion-level data from 1918 to 1989, once against division-level data from 1943, and once against corps-level data from 1943. These last four validation efforts have been published and described in depth. The model continues, regardless of which validation is examined, to accurately predict outcomes and make reasonable predictions of advance rates, loss rates and armor loss rates. This is regardless of level of combat (battalion, division or corps), historic period (WWI, WWII or modern), the situation of the combats, or the nationalities involved (American, German, Soviet, Israeli, various Arab armies, etc.). As the QJM, the model was effectively validated to around 200 World War II and 1967 and 1973 Arab-Israeli War battles. As the TNDM, the model was validated to 125 corps-, division-, and battalion-level engagements from 1918 to 1989 and used as a predictive model for the 1991 Gulf War. This is the most extensive and systematic validation effort yet done for any combat model. The model has been tested and re-tested. It has been tested across multiple levels of combat and in a wide range of environments. It has been tested where human factors are lopsided, and where human factors are roughly equal. It has been independently spot-checked several times by others outside of the Institute. It is hard to say what more can be done to establish its validity and accuracy.


[1] It is unclear what these percentages, quoted from Dupuy in the TNDM General Theoretical Description, specify. We suspect it is a measurement of the model's ability to predict winner and loser. No validation report based on this effort was ever published. Also, the validation figures seem to reflect the results after any corrections made to the model based upon these tests. It does appear that the division-level validation was "incremental." We do not know if the earlier validation tests were tested back to the earlier data, but we have reason to suspect not.

[2] The original QJM validation data was first published in the Combat Data Subscription Service Supplement, vol. 1, no. 3 (Dunn Loring VA: HERO, Summer 1975). (HERO Report #50) That effort used data from 1943 through 1973.

[3] HERO published its QJM validation database in The QJM Data Base (3 volumes) Fairfax VA: HERO, 1985 (HERO Report #100).

[4] The Dupuy Institute, The Tactical Numerical Deterministic Model (TNDM): A General and Theoretical Description, McLean VA: The Dupuy Institute, October 1994.

[5] This had the unfortunate effect of undervaluing WWII-era armor by about 75% relative to other WWII weapons when modeling WWII engagements. This left The Dupuy Institute with the compromise methodology of using the old OLI method for calculating armor (Mobile Fighting Machines) when doing WWII engagements and using the new OLI method for calculating armor when doing modern engagements.

[6] Testimony of Col. T. N. Dupuy, USA, Ret, Before the House Armed Services Committee, 13 Dec 1990. The Dupuy Institute File I-30, “Iraqi Invasion of Kuwait.�

[7] Trevor N. Dupuy, Attrition: Forecasting Battle Casualties and Equipment Losses in Modern War (HERO Books, Fairfax, VA, 1990), 123-4.

[8] M-COAT is the Medical Course of Action Tool created by Major Bruce Shahbaz. It is a spreadsheet model based upon the elements of the TNDM provided in Dupuy's Attrition (op. cit.). It used a scoring system derived from elsewhere in the U.S. Army. As such, it is a simplified form of the TNDM with a different weapon scoring system.

[9] See Gözel, Ramazan. “Fitting Firepower Score Models to the Battle of Kursk Data,� NPGS Thesis. Monterey CA: Naval Postgraduate School.

[10] Lawrence, Christopher A. "Validation of the TNDM at Battalion Level." The International TNDM Newsletter, vol. 1, no. 2 (October 1996); Bongard, Dave, "The 76 Battalion-Level Engagements." The International TNDM Newsletter, vol. 1, no. 4 (February 1997); Lawrence, Christopher A. "The First Test of the TNDM Battalion-Level Validations: Predicting the Winner" and "The Second Test of the TNDM Battalion-Level Validations: Predicting Casualties," The International TNDM Newsletter, vol. 1, no. 5 (April 1997); and Lawrence, Christopher A. "Use of Armor in the 76 Battalion-Level Engagements," and "The Second Test of the Battalion-Level Validation: Predicting Casualties Final Scorecard." The International TNDM Newsletter, vol. 1, no. 6 (June 1997).

[11] Trevor N. Dupuy passed away in July 1995, and the validation was conducted in 1996 and 1997.

[12] Zetterling, Niklas. “CEV Calculations in Italy, 1943,” The International TNDM Newsletter, vol. 1, no. 6. McLean VA: The Dupuy Institute, June 1997. See also Research Plan, The Dupuy Institute Report E-3, McLean VA: The Dupuy Institute, 7 Oct 1998.

[13] See Gözel, “Fitting Firepower Score Models to the Battle of Kursk Data.�

New U.S. Army Security Force Assistance Brigades Face Challenges

The shoulder sleeve insignia of the U.S. Army 1st Security Forces Assistance Brigade (SFAB). [U.S. Army]

The recent deaths of four U.S. Army Special Forces (ARSOF) soldiers in an apparent ambush in support of the train-and-assist mission in Niger appear to have reminded Congress of the enormous scope of ongoing Security Force Assistance (SFA) activities being conducted world-wide by the Defense Department. U.S. military forces deployed to 138 countries in 2016, the majority of these deployments by U.S. Special Operations Forces (SOF) conducting SFA activities. (Interestingly, while SFA deployments continue at a high tempo, the number of U.S. active-duty troops stationed overseas has fallen below 200,000 for the first time in 60 years.)

SFA is the umbrella term for U.S. whole-of-government support provided to develop the capability and capacity of foreign security forces and institutions. SFA is intended to help defend host nations from external and internal threats, and encompasses foreign internal defense (FID), counterterrorism (CT), counterinsurgency (COIN), and stability operations.

Last year, the U.S. Army announced that it would revamp its contribution to SFA by creating a new type of unit, the Security Force Assistance Brigade (SFAB), and by establishing a Military Advisor Training Academy. The first of six projected SFABs is scheduled to stand up this month at Ft. Benning, Georgia.

Rick Montcalm has a nice piece up at the Modern War Institute describing the doctrinal and organizational challenges the Army faces in implementing the SFABs. The Army’s existing SFA structure features regionally-aligned Brigade Combat Teams (BCTs) providing combined training and partnered mission assistance for foreign conventional forces from the team to company level, while ARSOF focuses on partner-nation counterterrorism missions and advising and assisting commando and special operations-type forces.

Ideally, the SFABs would supplement and gradually replace most, but not all, of the regionally-aligned BCTs, allowing the latter to focus on warfighting tasks. Concerns have arisen within the ARSOF community, however, that a dedicated SFAB force would encroach functionally on its mission and compete within the Army for trained personnel. The SFABs currently lack the intelligence capabilities necessary to successfully conduct the advisory mission in hostile environments. Although U.S. Army Chief of Staff General Mark Milley asserts that the SFABs are not Special Forces, properly preparing them for advise-and-assist roles would make them very similar to existing ARSOF.

Montcalm also points out that Army personnel policies complicate maintaining the SFABs over the long term. The Army has not created a specific military advisor career field, and volunteering to serve in a SFAB could complicate the career progression of active duty personnel. Although the Army has taken steps to address this, the prospect of long repeat overseas tours and uncertain career prospects has forced the service to offer cash incentives and automatic promotions to bolster SFAB recruiting. As of August, the 1st SFAB needed 350 more soldiers to fully man the unit, which was scheduled to be operational in November.

SFA and the Army’s role in it will not decline anytime soon, so there is considerable pressure to make the SFAB concept successful. In light of the Army’s problematic efforts to build adequate security forces in Iraq and Afghanistan, there is also considerable room for improvement.

TDI Friday Read: U.S. Airpower

[Image by Geopol Intelligence]

This weekend's edition of TDI's Friday Read is a collection of posts on the current state of U.S. airpower by guest contributor Geoffery Clark. The same factors changing the character of land warfare are changing the way conflict will be waged in the air. Clark's posts highlight some of the ways these changes are influencing current and future U.S. airpower plans and concepts.

F-22 vs. F-35: Thoughts On Fifth Generation Fighters

The F-35 Is Not A Fighter

U.S. Armed Forces Vision For Future Air Warfare

The U.S. Navy and U.S. Air Force Debate Future Air Superiority

U.S. Marine Corps Concepts of Operation with the F-35B

The State of U.S. Air Force Air Power

Fifth Generation Deterrence


The Effects Of Dispersion On Combat

[The article below is reprinted from the December 1996 edition of The International TNDM Newsletter. A revised version appears in Christopher A. Lawrence, War by Numbers: Understanding Conventional Combat (Potomac Books, 2017), Chapter 13.]

The Effects of Dispersion on Combat
by Christopher A. Lawrence

The TNDM[1] does not play dispersion. But it is clear that dispersion has continued to increase over time, and this must have some effect on combat. This effect was identified by Trevor N. Dupuy in his various writings, starting with The Evolution of Weapons and Warfare. His graph in Understanding War of the battle casualty trends over time is presented here as Figure 1. As dispersion changes over time (dramatically), one would expect the casualties would change over time. I therefore went back to the Land Warfare Database (the 605-engagement version[2]) and proceeded to look at casualties over time and dispersion from every angle that I could.

I eventually realized that I was going to need some better definition of the time periods I was measuring, as measuring by year scattered the data, measuring by century assembled the data in too gross a manner, and measuring by war left a confusing picture due to the number of small wars with only two or three battles in them in the Land Warfare Database. I eventually defined the wars into 14 categories, so I could fit them onto one readable graph:

To give some idea of how representative the battles listed in the LWDB are for covering the period, I have included a count of the number of battles listed in Michael Clodfelter's two-volume book Warfare and Armed Conflict, 1618-1991. In the case of WWI, WWII and later, battles tend to be defined as divisional-level engagements, and there were literally tens of thousands of those.

I then tested my data again looking at the 14 wars that I defined:

  • Average Strength by War (Figure 2)
  • Average Losses by War (Figure 3)
  • Percent Losses Per Day By War (Figure 4)
  • Average People Per Kilometer By War (Figure 5)
  • Losses per Kilometer of Front by War (Figure 6)
  • Strength and Losses Per Kilometer of Front By War (Figure 7)
  • Ratio of Strength and Losses per Kilometer of Front by War (Figure 8)
  • Ratio of Strength and Losses per Kilometer of Front by Century (Figure 9)

A review of average strengths over time by century and by war showed no surprises (see Figure 2). Up through around 1900, battles were easy to define: they were one- to three-day affairs between clearly defined forces at a locale. The forces had a clear left flank and right flank that was not bounded by other friendly forces. After 1900 (and in a few cases before), warfare was fought on continuous fronts, with a "battle" often being a large multi-corps operation. It is no longer clearly understood what is meant by a battle, as the forces, area covered, and duration can vary widely. For the LWDB, each battle was defined as the analyst wished. In the case of WWI, there are a lot of very large battles which drive the average battle size up. In the case of WWII, there are a lot of division-level battles, which bring the average down. In the case of the Arab-Israeli Wars, there are nothing but division- and brigade-level battles, which bring the average down.

The interesting point to notice is that the average attacker strength in the 16th and 17th century is lower than the average defender strength. Later it is higher. This may be due to anomalies in our data selection.

Average losses by war (see Figure 3) suffer from the same battle definition problem.

Percent losses per day (see Figure 4) is a useful comparison through the end of the 19th century. After that, the battles get longer and the definition of the duration of a battle is up to the analyst. Note the very clear and definite downward pattern of percent losses per day from the Napoleonic Wars through the Arab-Israeli Wars. Here is a very clear indication of the effects of dispersion. It would appear that from the 1600s to the 1800s the pattern was effectively constant and level, then declines in a very systematic pattern. This partially contradicts Trevor Dupuy's writing and graphs (see Figure 1). It does appear that after this period of decline the percent losses per day are being set at a new, much lower plateau. Percent losses per day by war is attached.
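For readers who want the metric made explicit, percent losses per day is simply casualties divided by starting strength, divided by battle duration. A minimal sketch follows; the engagement figures in the example are invented for illustration and are not drawn from the Land Warfare Database:

```python
# Hedged illustration of the "percent losses per day" metric from Figure 4.
# The numbers used in the example are invented, not LWDB data.

def percent_losses_per_day(strength, casualties, days):
    """Percent of starting strength lost per day of battle."""
    return 100.0 * casualties / strength / days

# Invented example: a force of 20,000 losing 3,000 men over a 3-day battle
print(percent_losses_per_day(20000, 3000, 3))  # 5.0
```

Note that for multi-day battles this averages the loss rate flat across the duration, which is exactly why the analyst's choice of battle duration matters so much for the post-1900 data.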

Looking at the actual subject of this piece, the dispersion of people (measured in people per kilometer of front) remained relatively constant from 1600 through the American Civil War (see Figure 5). Trevor Dupuy defined dispersion as the number of people in a box-like area. Unfortunately, I do not know how to measure that. I can clearly identify the left and right of a unit, but it is more difficult to tell how deep it is. Furthermore, density of occupation of this box is far from uniform, with a very forward bias. By the same token, fire delivered into this box is also not uniform, with a very forward bias. Therefore, I am quite comfortable measuring dispersion based upon unit frontage, more so than front multiplied by depth.

Note, when comparing the Napoleonic Wars to the American Civil War, that the dispersion remains about the same. Yet, if you look at the average casualties (Figure 3) and the average percent casualties per day (Figure 4), it is clear that the rate of casualty accumulation is lower in the American Civil War (this again partially contradicts Dupuy's writings). There is no question that with the advent of the Minié ball, allowing for rapid-fire rifled muskets, the ability to deliver accurate firepower increased.

As you will also note, the average people per linear kilometer between WWI and WWII differs by a factor of a little over 1.5 to 1. Yet the actual difference in casualties (see Figure 4) is much greater. While one can just postulate that the difference is the change in dispersion squared (basically Dupuy's approach), this does not seem to explain the complete difference, especially the difference between the Napoleonic Wars and the Civil War.

Instead of discussing dispersion, we should be discussing "casualty reduction efforts." This basically consists of three elements:

  • Dispersion (D)
  • Increased engagement ranges (R)
  • More individual use of cover and concealment (C&C).

These three factors together result in a reduced chance to hit. They are also partially interrelated, as one cannot make more individual use of cover and concealment unless one is allowed to disperse. Therefore, the need for cover and concealment increases the desire to disperse, and the process of dispersing allows one to use more cover and concealment.

Command and control is integrated into this construct as something that allows dispersion, and dispersion creates the need for better command and control. Therefore, improved command and control in this construct does not operate as a force modifier, but enables a force to disperse.

Intelligence becomes more necessary as the opposing forces use cover and concealment and the ranges of engagement increase. By the same token, improved intelligence allows you to increase the range of engagement and forces the enemy to use better concealment.

This whole construct could be represented by the diagram at the top of the next page.

Now, I may have said the obvious here, but this construct is probably provable in each individual element, and the overall outcome is measurable. Each individual connection between these boxes may also be measurable.

Therefore, to measure the effects of reduced chance to hit, one would need to measure the following formula (assuming these formulae are close to being correct):

(K * ΔD) + (K * ΔC&C) + (K * ΔR) = H

(K * ΔC2) = ΔD

(K * ΔD) = ΔC&C

(K * ΔW) + (K * ΔI) = ΔR

K = a constant
Δ = the change in… ("delta")
D = Dispersion
C&C = Cover & Concealment
R = Engagement Range
W = Weapon’s Characteristics
H = the chance to hit
C2 = Command and control
I = Intelligence or ability to observe
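As a rough sketch only, the relationships above can be expressed in a few lines of Python. All the K constants and delta values below are hypothetical placeholders (the article supplies no calibrated values), so this illustrates the structure of the construct, not the TNDM itself:

```python
# Sketch of the "casualty reduction" relationships above. All K constants
# and delta values are hypothetical placeholders; no calibrated values
# are given in the article.

def delta_dispersion(k_c2, delta_c2):
    # (K * ΔC2) = ΔD: better command and control enables dispersion
    return k_c2 * delta_c2

def delta_cover_concealment(k_d, delta_d):
    # (K * ΔD) = ΔC&C: dispersing allows more use of cover and concealment
    return k_d * delta_d

def delta_range(k_w, delta_w, k_i, delta_i):
    # (K * ΔW) + (K * ΔI) = ΔR: weapon characteristics and intelligence
    # drive engagement range
    return k_w * delta_w + k_i * delta_i

def chance_to_hit(k, delta_d, delta_cc, delta_r):
    # (K * ΔD) + (K * ΔC&C) + (K * ΔR) = H
    return k * delta_d + k * delta_cc + k * delta_r

# Example with made-up values, chaining the relationships:
dD = delta_dispersion(k_c2=0.5, delta_c2=2.0)                  # 1.0
dCC = delta_cover_concealment(k_d=0.8, delta_d=dD)             # 0.8
dR = delta_range(k_w=0.3, delta_w=1.0, k_i=0.4, delta_i=1.5)   # 0.9
H = chance_to_hit(k=1.0, delta_d=dD, delta_cc=dCC, delta_r=dR)
print(H)  # 2.7
```

The chaining mirrors the interrelation described above: command and control feeds dispersion, dispersion feeds cover and concealment, and all three elements then feed the change in chance to hit.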

Also, certain actions lead to a desire for certain technological and system improvements. This includes the effect of increased dispersion leading to a need for better C2 and increased range leading to a need for better intelligence. I am not sure these are measurable.

I have also shown in the diagram how the enemy impacts upon this. There is also an interrelated mirror image of this construct for the other side.

I am focusing on this because I really want to come up with some means of measuring the effects of a "revolution in warfare." The last 400 years of human history have given us more revolutionary inventions impacting war than we can reasonably expect to see in the next 100 years. In particular, I would like to measure the impact of increased weapon accuracy, improved intelligence, and improved C2 on combat.

For the purposes of the TNDM, I would very specifically like to work out an attrition multiplier for battles before WWII (and theoretically after WWII) based upon reduced chance to be hit (“dispersion”). For example, Dave Bongard is currently using an attrition multiplier of 4 for the WWI engagements he is running for the battalion-level validation database.[3] No one can point to a piece of paper saying this is the value that should be used; Dave picked this value based upon experience and familiarity with the period.

I have also attached Average Losses per Kilometer of Front by War (see Figure 6 above), and a summary chart showing the two on the same chart (see Figure 7 above).

The values from these charts are:

The TNDM sets the WWII dispersion factor at 3,000 (which I gather translates into 30,000 men per square kilometer). The above data shows a linear dispersion per kilometer of 2,992 men, so this number parallels Dupuy’s figures.

The final chart I have included is the Ratio of Strength and Losses per Kilometer of Front by War (Figure 8). Each line on the bar graph measures the average ratio of strength over casualties for either the attacker or defender. Being a ratio, unusual outcomes resulted in some unusually high values. I took the liberty of removing six data points because they appeared unusually lop-sided. Three of these are from the English Civil War and were way out of line with everything else; these were the three Scottish battles where a small group of mostly sword-armed troops defeated a “modern” army. Walcourt (1689), Front Royal (1862), and Calbritto (1943) were also removed. I have also included the same chart by century (Figure 9).
Again, one sees a consistency in results over 300+ years of war, in this case going all the way through WWI, and then an entirely different pattern with WWII and the Arab-Israeli Wars.

A very tentative set of conclusions from all this is:

  1. Dispersion was relatively constant from 1600 to 1815, driven by factors other than firepower.
  2. Since the Napoleonic Wars, units have increasingly dispersed (found ways to reduce their chance to be hit) in response to the increased lethality of weapons.
  3. As a result of this increased dispersion, casualties in a given space have declined.
  4. The decline in casualties over an area has been roughly proportional to strength over an area from 1600 through WWI. Starting with WWII, it appears that people dispersed faster than weapons lethality increased, and this trend has continued.
  5. In effect, people dispersed in direct relation to increased firepower from 1815 through 1920, and thereafter dispersed faster than the increase in lethality.
  6. It appears that since WWII, people have gone back to dispersing (reducing their chance to be hit) at the same rate that firepower is increasing.
  7. Effectively, there are four patterns of casualties in modern war:

Period 1 (1600 – 1815): Period of Stability

  • Short battles
  • Short frontages
  • High attrition per day
  • Constant dispersion
  • Dispersion decreasing slightly after late 1700s
  • Attrition decreasing slightly after mid-1700s.

Period 2 (1816 – 1905): Period of Adjustment

  • Longer battles
  • Longer frontages
  • Lower attrition per day
  • Increasing dispersion
  • Dispersion increasing slightly faster than lethality

Period 3 (1912 – 1920): Period of Transition

  • Long Battles
  • Continuous Frontages
  • Lower attrition per day
  • Increasing dispersion
  • Relative lethality per kilometer similar to past, but lower
  • Dispersion increasing slightly faster than lethality

Period 4 (1937 – present): Modern Warfare

  • Long Battles
  • Continuous Frontages
  • Low Attrition per day
  • High dispersion (perhaps constant?)
  • Relative lethality per kilometer much lower than in the past
  • Dispersion increased much faster than lethality going into the period.
  • Dispersion increased at the same rate as lethality within the period.

So the question is whether warfare of the next 50 years will see a new “period of adjustment,” where the rate of dispersion (and other factors) adjusts in direct proportion to increased lethality, or will there be a significant change in the nature of war?

Note that when I use the word “dispersion” above, I often mean “reduced chance to be hit,” which consists of dispersion, increased engagement ranges, and use of cover & concealment.

One of the reasons I wandered into this subject was to see if the TNDM can be used for predicting combat before WWII. I then spent the next few days attempting to find some correlation between dispersion and casualties. Using the data on historical dispersion provided above, I created a mathematical formulation and tested it against the actual historical data points, but could not get any type of fit.

I then looked at the length of battles over time and at one-day battles, and attempted to find a pattern. I could find none. I also looked at other permutations, but did not keep a record of my attempts. I then looked through the work done by Dean Hartley (Oak Ridge) with the LWDB and called Paul Davis (RAND) to see if anyone had found any correlation between dispersion and casualties; they had not noted any.

It became clear to me that if there is any such correlation, it is buried so deep in the data that it cannot be found by any casual search. I suspect that I can find a mathematical correlation between weapon lethality, reduced chance to hit (including dispersion), and casualties. This would require some improvement to the data, some systematic measure of weapons lethality, and some serious regression analysis. I unfortunately cannot pursue this at this time.
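To show the kind of regression analysis this paragraph calls for, here is a sketch run on synthetic data; the lethality index, the “reduced chance to be hit” index, and the planted relationship between them are all invented for illustration, and the historical database would replace them in a real study.

```python
# A sketch of a log-linear regression relating casualties to weapon
# lethality and reduced chance to be hit. The data below is fabricated:
# a known relationship is planted so the fit can be checked.
import numpy as np

rng = np.random.default_rng(0)
n = 200
lethality = rng.uniform(1, 100, n)      # invented weapons-lethality index
hit_reduction = rng.uniform(1, 50, n)   # dispersion + range + concealment
# Fabricated relationship with multiplicative noise:
casualties = 5.0 * lethality / hit_reduction * np.exp(rng.normal(0, 0.1, n))

# Fit: log(casualties) ~ log(lethality) + log(hit_reduction)
X = np.column_stack([np.ones(n), np.log(lethality), np.log(hit_reduction)])
coef, *_ = np.linalg.lstsq(X, np.log(casualties), rcond=None)
# coef should come back near [log 5, +1, -1], recovering the planted exponents
```

On real data the interesting question would be whether any stable exponents emerge at all; as the text notes, no such fit has yet been found by casual search.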

Finally, for reference, I have attached two charts showing the duration of the battles in the LWDB in days (Figure 10, Duration of Battles Over Time, and Figure 11, A Count of the Duration of Battles by War).


[1] The Tactical Numerical Deterministic Model, a combat model developed by Trevor Dupuy in 1990-1991 as the follow-up to his Quantified Judgment Model. Dr. James G. Taylor and Jose Perez also contributed to the TNDM’s development.

[2] TDI’s Land Warfare Database (LWDB) was a revised version of a database created by the Historical Evaluation Research Organization (HERO) for the then-U.S. Army Concepts and Analysis Agency (now known as the U.S. Army Center for Army Analysis (CAA)) in 1984. Since the original publication of this article, TDI expanded and revised the data into a suite of databases.

[3] This matter is discussed in Christopher A. Lawrence, “The Second Test of the TNDM Battalion-Level Validations: Predicting Casualties,” The International TNDM Newsletter, April 1997, pp. 40-50.

U.S. Army Updates Draft Multi-Domain Battle Operating Concept

U.S. Army Updates Draft Multi-Domain Battle Operating Concept

The U.S. Army Training and Doctrine Command has released a revised draft version of its Multi-Domain Battle operating concept, titled “Multi-Domain Battle: Evolution of Combined Arms for the 21st Century, 2025-2040.” Clearly a work in progress, the document is listed as version 1.0, dated October 2017, and as a draft and not for implementation. Sydney J. Freedberg, Jr. has an excellent run-down on the revision at Breaking Defense.

The update is the result of the initial round of work between the U.S. Army and U.S. Air Force to redefine the scope of the multi-domain battlespace for the Joint Force. More work will be needed to refine the concept, but it shows remarkable cooperation in forging a common warfighting perspective between services long noted for their independent thinking.

On a related note, Albert Palazzo, an Australian defense thinker and one of the early contributors to the Multi-Domain Battle concept, has published the first of a series of articles at The Strategy Bridge offering constructive criticism of the U.S. military’s approach to defining the concept. Palazzo warns that the U.S. may be over-emphasizing countering potential Russian and Chinese capabilities in its efforts and not enough on the broad general implications of long-range fires with global reach.

What difference can it make if those designing Multi-Domain Battle are acting on possibly the wrong threat diagnosis? Designing a solution for a misdiagnosed problem can result in the inculcation of a way of war unsuited for the wars of the future. One is reminded of the French Army during the interwar period. No one can accuse the French of not thinking seriously about war during these years, but, in the doctrine of the methodical battle, they got it wrong and misread the opportunities presented by mechanisation. There were many factors contributing to France’s defeat, but at their core was a misinterpretation of the art of the possible and a singular focus on a particular way of war. Shaping Multi-Domain Battle for the wrong problem may see the United States similarly sow the seeds for a military disaster that is avoidable.

He suggests that it would be wise for U.S. doctrine writers to take a more considered look at potential implications before venturing too far ahead with specific solutions.

TDI Friday Read: Principles Of War & Verities Of Combat

TDI Friday Read: Principles Of War & Verities Of Combat


Trevor Dupuy distilled his research and analysis on combat into a series of verities, or what he believed were empirically-derived principles. He intended for his verities to complement the classic principles of war, a slightly variable list of maxims of unknown derivation and provenance, which describe the essence of warfare largely from the perspective of Western societies. These are summarized below.

What Is The Best List Of The Principles Of War?

The Timeless Verities of Combat

Trevor N. Dupuy’s Combat Attrition Verities

Trevor Dupuy’s Combat Advance Rate Verities

Military History and Validation of Combat Models

Military History and Validation of Combat Models

Soldiers from Britain’s Royal Artillery train in a “virtual world” during Exercise Steel Sabre, 2015 [Sgt Si Longworth RLC (Phot)/MOD]

Military History and Validation of Combat Models

A Presentation at MORS Mini-Symposium on Validation, 16 Oct 1990

By Trevor N. Dupuy

In the operations research community there is some confusion as to the respective meanings of the words “validation” and “verification.” My definition of validation is as follows:

“To confirm or prove that the output or outputs of a model are consistent with the real-world functioning or operation of the process, procedure, or activity which the model is intended to represent or replicate.”

In this paper the word “validation” with respect to combat models is assumed to mean assurance that a model realistically and reliably represents the real world of combat. Or, in other words, given a set of inputs which reflect the anticipated forces and weapons in a combat encounter between two opponents under a given set of circumstances, the model is validated if we can demonstrate that its outputs are likely to represent what would actually happen in a real-world encounter between these forces under those circumstances.

Thus, in this paper, the word “validation” has nothing to do with the correctness of computer code, or the apparent internal consistency or logic of relationships of model components, or with the soundness of the mathematical relationships or algorithms, or with satisfying the military judgment or experience of one individual.

True validation of combat models is not possible without testing them against modern historical combat experience. And so, in my opinion, a model is validated only when it will consistently replicate a number of military history battle outcomes in terms of: (a) Success-failure; (b) Attrition rates; and (c) Advance rates.

“Why,” you may ask, “use imprecise, doubtful, and outdated history to validate a modern, scientific process? Field tests, experiments, and field exercises can provide data that is often instrumented, and certainly more reliable than any historical data.”

I recognize that military history is imprecise; it is only an approximate, often biased and/or distorted, and frequently inconsistent reflection of what actually happened on historical battlefields. Records are contradictory. I also recognize that there is an element of chance or randomness in human combat which can produce different results in otherwise apparently identical circumstances. I further recognize that history is retrospective, telling us only what has happened in the past. It cannot predict, if only because combat in the future will be fought with different weapons and equipment than were used in historical combat.

Despite these undoubted problems, military history provides more, and more accurate, information about the real world of combat, and how human beings behave and perform under varying circumstances of combat, than is possible to derive or compile from any other source. Despite some discrepancies, patterns are unmistakable and consistent. There is always a logical explanation for any individual deviations from the patterns. Historical examples that are inconsistent, or that are counter-intuitive, must be viewed with suspicion as possibly being poor or false history.

Of course absolute prediction of a future event is practically impossible, although not necessarily so theoretically. Any speculations which we make from tests or experiments must have some basis in terms of projections from past experience.

Training or demonstration exercises, proving ground tests, and field experiments all lack the one most pervasive and most important component of combat: fear in a lethal environment. There is no way in peacetime, non-battlefield exercises, tests, or experiments to be sure that the results are consistent with what would have been the behavior or performance of individuals, units, or formations facing hostile firepower on a real battlefield.

We know from the writings of the ancients that have survived to this day (for instance Sun Tze, pronounced “Sun Dzuh,” and Thucydides) that human nature has not changed since the dawn of history. The human factor, the way in which humans respond to stimuli or circumstances, is the most important basis for speculation and prediction. What about the “scientific” approach of those who insist that we can have no confidence in the accuracy or reliability of historical data, that it is therefore unscientific, and therefore that it should be ignored? These people insist that only “scientific” data should be used in modeling.

In fact, every model is based upon fundamental assumptions that are intuitive and unprovable. The first step in the creation of a model is a step away from scientific reality in seeking a basis for an unreal representation of a real phenomenon. I have shown that the unreality is perpetuated when we use other imitations of reality as the basis for representing reality. History is less than perfect, but to ignore it, and to use only data that is bound to be wrong, assures that we will not be able to represent human behavior in real combat.

At the risk of repetition, and even of protesting too much, let me assure you that I am well aware of the shortcomings of military history:

The record which is available to us, which is history, only approximately reflects what actually happened. It is incomplete. It is often biased, it is often distorted. Even when it is accurate, it may be reflecting chance rather than normal processes. It is neither precise nor consistent. But it provides more, and more accurate, information on the real world of battle than is available from the most thoroughly documented field exercises, proving ground tests, or laboratory or field experiments.

Military history is imperfect. At best it reflects the actions and interactions of unpredictable human beings. We must always realize that a single historical example can be misleading for either of two reasons: (1) The data may be inaccurate, or (2) The data may be accurate, but untypical.

Nevertheless, history is indispensable. I repeat that the most pervasive characteristic of combat is fear in a lethal environment. For all of its imperfections, military history and only military history represents what happens under the environmental condition of fear.

Unfortunately, and somewhat unfairly, the reported findings of S.L.A. Marshall about human behavior in combat, which he reported in Men Against Fire, have recently been discounted by revisionist historians who assert that he never could have physically performed the research on which the book’s findings were supposedly based. This has raised doubts about Marshall’s assertion that 85% of infantry soldiers didn’t fire their weapons in combat in World War II. That dramatic and surprising assertion was first challenged in a New Zealand study which found, on the basis of painstaking interviews, that most New Zealanders fired their weapons in combat. Thus, either Americans were different from New Zealanders, or Marshall was wrong. And now American historians have demonstrated that Marshall had neither the time nor the opportunity to conduct the battlefield interviews which he claimed were the basis for his findings.

I knew Marshall moderately well. I was fully as aware of his weaknesses as of his strengths. He was not a historian. I deplored the imprecision and lack of documentation in Men Against Fire. But the revisionist historians have underestimated the shrewd journalistic assessment capability of “SLAM” Marshall. His observations may not have been scientifically precise, but they were generally sound, and his assessment has been shared by many American infantry officers whose judgments I also respect. As to the New Zealand study, how many people will, after the war, admit that they didn’t fire their weapons?

Perhaps most important, however, in judging the assessments of SLAM Marshall is a recent study by a highly respected British operations research analyst, David Rowland. Using impeccable OR methods, Rowland has demonstrated that Marshall’s assessment of the inefficient performance, or non-performance, of most soldiers in combat was essentially correct. An unclassified version of Rowland’s study, “Assessments of Combat Degradation,” appeared in the June 1986 issue of the Royal United Services Institution Journal.

Rowland was led to his investigations by the fact that soldier performance in field training exercises, using the British version of MILES technology, was not consistent with historical experience. Even after allowances for degradation from the theoretical proving ground capability of weapons, defensive rifle fire almost invariably stopped any attack in these field trials. But history showed that attacks were, in fact, usually successful. He therefore began a study in which he made both imaginative and scientific use of historical data from over 100 small unit battles in the Boer War and the two World Wars. He demonstrated that when troops are under fire in actual combat, there is an additional degradation of performance by a factor ranging between 7 and 10. A degradation virtually of an order of magnitude! And this, mind you, on top of a comparable built-in degradation to allow for the difference between field conditions and proving ground conditions.

Not only does Rowland’s study corroborate SLAM Marshall’s observations, it shows conclusively that field exercises, training competitions, and demonstrations give results so different from real battlefield performance as to render them useless for validation purposes.

Which brings us back to military history. For all of the imprecision, internal contradictions, and inaccuracies inherent in historical data, at worst the deviations are generally far less than a factor of 2.0. This is at least four times more reliable than field test or exercise results.

I do not believe that history can ever repeat itself. The conditions of an event at one time can never be precisely duplicated later. But, bolstered by the Rowland study, I am confident that history paraphrases itself.

If large bodies of historical data are compiled, the patterns are clear and unmistakable, even if slightly fuzzy around the edges. Behavior in accordance with this pattern is therefore typical. As we have already agreed, sometimes behavior can be different from the pattern, but we know that it is untypical, and we can then seek for the reason, which invariably can be discovered.

This permits what I call an actuarial approach to data analysis. We can never predict precisely what will happen under any circumstances. But the actuarial approach, with ample data, provides confidence that the patterns reveal what is to happen under those circumstances, even if the actual results in individual instances vary to some extent from this “norm” (to use the Soviet military historical expression).

It is relatively easy to take into account the differences in performance resulting from new weapons and equipment. The characteristics of the historical weapons and the current (or projected) weapons can be readily compared, and adjustments made accordingly in the validation procedure.

In the early 1960s an effort was made at SHAPE Headquarters to test the ATLAS model against World War II data for the German invasion of Western Europe in May 1940. The first excursion had the Allies ending up on the Rhine River. This seemed quite reasonable: the Allies substantially outnumbered the Germans, they had more tanks, and their tanks were better. However, despite these Allied advantages, the actual events of 1940 had not matched what ATLAS was now predicting. So the analysts did a little “fine tuning” (a splendid term for fudging). After the so-called adjustments, they tried again and ran another excursion. This time the model had the Allies ending up in Berlin. The analysts (may the Lord forgive them!) were quite satisfied with the ability of ATLAS to represent modern combat. (Or at least they said so.) Their official conclusion was that the historical example was worthless, since weapons and equipment had changed so much in the preceding 20 years!

As I demonstrated in my book, Options of Command, the problem was that the model was unable to represent the German strategy or to reflect the relative combat effectiveness of the opponents. The analysts should have reached a different conclusion: ATLAS had failed validation, because a model that cannot with reasonable faithfulness and consistency replicate historical combat experience will certainly be unable to validly reflect current or future combat.

How, then, do we account for what I have said about the fuzziness of patterns, and the fact that individual historical examples may not fit the patterns? I will give you my rules of thumb:

  1. The battle outcome should reflect historical success-failure experience about four times out of five.
  2. For attrition rates, the model average of five historical scenarios should be consistent with the historical average within a factor of about 1.5.
  3. For advance rates, the model average of five historical scenarios should be consistent with the historical average within a factor of about 1.5.
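The three rules of thumb above are mechanical enough to express as a simple check. The function name and data layout below are hypothetical (mine, not Dupuy's or the TNDM's); the thresholds are the ones stated in the text.

```python
# A hypothetical checker for Dupuy's three validation rules of thumb.

def validate(model_runs, history):
    """Each argument is a list of per-scenario dicts with keys 'won'
    (bool) and 'attrition' / 'advance' (average rates per day)."""
    n = len(model_runs)
    # Rule 1: success/failure should match about four times out of five.
    matches = sum(m['won'] == h['won'] for m, h in zip(model_runs, history))
    rule1 = matches / n >= 0.8
    # Rules 2 and 3: scenario averages within a factor of about 1.5.
    def within_factor(key, factor=1.5):
        model_avg = sum(r[key] for r in model_runs) / n
        hist_avg = sum(r[key] for r in history) / n
        ratio = model_avg / hist_avg
        return 1 / factor <= ratio <= factor
    return rule1, within_factor('attrition'), within_factor('advance')
```

Note that the factor-of-1.5 test is applied symmetrically: a model that predicts two-thirds of the historical average fails just as one that predicts one and a half times it passes.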

Just as the heavens are the laboratory of the astronomer, so military history is the laboratory of the soldier and the military operations research analyst. The scientific basis for both astronomy and military science is the recording of the movements and relationships of bodies, and then analysis of those movements. (In the one case the bodies are heavenly, in the other they are very terrestrial.)

I repeat: Military history is the laboratory of the soldier. Failure of the analyst to use this laboratory will doom him to live with the scientific equivalent of Ptolemaic astronomy, whereas he could use the evidence available in his laboratory to progress to the military science equivalent of Copernican astronomy.

The Sad Story Of The Captured Iraqi DESERT STORM Documents

The Sad Story Of The Captured Iraqi DESERT STORM Documents

Iraqi soldiers cross a highway carrying white surrender flags on Feb. 25, 1991, in Kuwait City. The U.S.-led coalition overwhelmed the Iraqi forces and swiftly drove them out of Kuwait. [Christophe Simon/AFP/Getty Images]

The fundamental building blocks of history are primary sources, i.e., artifacts, documents, diaries and memoirs, manuscripts, or other contemporaneous sources of information. It has been the availability and accessibility of primary source documentation that allowed Trevor Dupuy and The Dupuy Institute to build the large historical combat databases that much of their analyses have drawn upon. It took uncounted man-hours of time-consuming, painstaking research to collect and assemble two-sided data sufficiently detailed to analyze the complex phenomena of combat.

Going back to the Civil War, the United States has done a commendable job collecting and organizing captured military documentation and making that material available for historians, scholars, and professional military educators. TDI has made extensive use of captured German documentation from World War I and World War II held by the U.S. National Archives in its research, for example.

Unfortunately, that dedication faltered when it came to preserving documentation recovered from the battlefield during the 1990-1991 Gulf War. As related by Douglas Cox, an attorney and Law Library Professor at the City University of New York School of Law, millions of pages of Iraqi military paper documents collected during Operation DESERT STORM were destroyed by the Defense Intelligence Agency (DIA) in 2002 after they were contaminated by mold.

As described by the National Archives,

The documents date from 1978 up until Operation Desert Storm (1991). The collection includes Iraq operations plans and orders; maps and overlays; unit rosters (including photographs); manuals covering tactics, camouflage, equipment, and doctrine; equipment maintenance logs; ammunition inventories; unit punishment records; unit pay and leave records; handling of prisoners of war; detainee lists; lists of captured vehicles; and other military records. The collection also includes some manuals of foreign, non-Iraqi weapons systems. Some of Saddam Hussein’s Revolutionary Command Council records are in the captured material.

According to Cox, DIA began making digital copies of the documents shortly after the Gulf War ended. After the State Department requested copies, DIA subsequently determined that only 60% of the digital tapes the original scans had been stored on could be read. It was during an effort to rescan the lost 40% of the documents that it was discovered that the entire paper collection had been contaminated by mold.

DIA created a library of the scanned documents stored on 43 compact discs, which remain classified. It is not clear if DIA still has all of the CDs; none had been transferred to the National Archives as of 2012. A set of 725,000 declassified pages was made available for a research effort at Harvard in 2000. That effort ended, however, and the declassified collection was sent to the Hoover Institution at Stanford University. The collection is closed to researchers, although Hoover has indicated it hopes to make it publicly available sometime in the future.

While the failure to preserve the original paper documents is bad enough, the possibility that any or all of DIA’s digital collection might be permanently lost would constitute a grievous and baffling blunder. It also makes little sense for this collection to remain classified a quarter of a century after the end of the Gulf War. Yet it appears that failure to adequately collect and preserve U.S. military documents and records is becoming more common in the Information Age.

TDI Friday Read: Tank Warfare In World War II

TDI Friday Read: Tank Warfare In World War II

American troops advance under the cover of M4 Sherman tank ‘Lucky Legs II’ during mop up operations on Bougainville, Solomon Islands, March 1944. [National Archives/ww2dbase]

In honor of Tony Buzbee, who has parked a fully-functional vintage World War II-era M4 Sherman tank in front of his house in Houston, Texas (much to the annoyance of his homeowners’ association), here is a selection of posts addressing various aspects of tank warfare in World War II for your weekend leisure reading.

Counting Holes in Tanks in Tunisia

U.S. Tank Losses and Crew Casualties in World War II

Tank Loss Rates in Combat: Then and Now

Was Kursk the Largest Tank Battle in History?

A2/D2 Study

Against the Panzers

And, of course, Chris Lawrence has written the largest existing book on the largest tank battle in history, Kursk.

Human Factors In Warfare: Combat Effectiveness

Human Factors In Warfare: Combat Effectiveness

An Israeli tank unit crosses the Sinai, heading for the Suez Canal, during the 1973 Arab-Israeli War [Israeli Government Press Office/HistoryNet]

It has been noted throughout the history of human conflict that some armies have consistently fought more effectively on the battlefield than others. The armies of Sparta in ancient Greece, for example, have come to epitomize the warrior ideal in Western societies. Rome’s legions have acquired a similar legendary reputation. Within armies, too, some units are known to be superior combatants to others. The U.S. 1st Infantry Division, the British Expeditionary Force of 1914, Japan’s Special Naval Landing Forces, the U.S. Marine Corps, the German 7th Panzer Division, and the Soviet Guards divisions are among the many superior fighting forces from history.

Trevor Dupuy found empirical substantiation of this in his analysis of historical combat data. He discovered that in 1943-1944 during World War II, after accounting for environmental and operational factors, the German Army consistently performed more effectively in ground combat than the U.S. and British armies. This advantage—measured in terms of casualty exchanges, terrain held or lost, and mission accomplishment—manifested whether the Germans were attacking or defending, or winning or losing. Dupuy observed that the Germans demonstrated an even more marked effectiveness in battle against the Soviet Army throughout the war.

He found the same disparity in battlefield effectiveness in combat data on the 1967 and 1973 Arab-Israeli wars. The Israeli Army performed uniformly better in ground combat over all of the Arab armies it faced in both conflicts, regardless of posture or outcome.

The clear and consistent patterns in the historical data led Dupuy to conclude that superior combat effectiveness on the battlefield was attributable to moral and behavioral (i.e. human) factors. Those factors he believed were the most important contributors to combat effectiveness were:

  • Leadership
  • Training or Experience
  • Morale, which may or may not include
  • Cohesion

Although the influence of human factors on combat effectiveness was identifiable and measurable in the aggregate, Dupuy was skeptical whether all of the individual moral and behavioral intangibles could be discretely quantified. He thought this particularly true for a set of factors that also contributed to combat effectiveness but were a blend of human and operational factors. These include:

  • Logistical effectiveness
  • Time and Space
  • Momentum
  • Technical Command, Control, Communications
  • Intelligence
  • Initiative
  • Chance

Dupuy grouped all of these intangibles together into a composite factor he designated as relative combat effectiveness value, or CEV. The CEV, along with environmental and operational factors (Vf), comprise the Circumstantial Variables of Combat, which when multiplied by force strength (S), determines the combat power (P) of a military force in Dupuy’s formulation.

P = S x Vf x CEV

Dupuy did not believe that CEVs were static values. As with human behavior, they vary somewhat from engagement to engagement. He did think that human factors were the most substantial of the combat variables. Therefore any model or theory of combat that failed to account for them would invariably be inaccurate.
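The multiplicative relationship above can be sketched directly. The numbers in the test of this sketch are invented; in Dupuy's work, CEVs were derived from historical engagements, and the second helper simply shows the arithmetic of solving the multiplicative form backward for a relative-effectiveness ratio.

```python
# Dupuy's combat power relationship, P = S x Vf x CEV.

def combat_power(strength, vf, cev):
    """Combat power as the product of force strength (S), circumstantial
    variables (Vf), and relative combat effectiveness value (CEV)."""
    return strength * vf * cev

def implied_cev_ratio(p_a, s_a, vf_a, p_b, s_b, vf_b):
    """CEV_a / CEV_b implied by two sides' observed combat power: divide
    each side's P by its S x Vf, then take the ratio of the remainders."""
    return (p_a / (s_a * vf_a)) / (p_b / (s_b * vf_b))
```

The second function reflects the point in the text that CEV is a residual: it is what is left of observed combat power after strength and circumstance have been accounted for.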


This post is drawn from Trevor N. Dupuy, Numbers, Predictions and War: Using History to Evaluate Combat Factors and Predict the Outcome of Battles (Indianapolis; New York: The Bobbs-Merrill Co., 1979), Chapters 5, 7 and 9; Trevor N. Dupuy, Understanding War: History and Theory of Combat (New York: Paragon House, 1987), Chapters 8 and 10; and Trevor N. Dupuy, “The Fundamental Information Base for Modeling Human Behavior in Combat,” presented at the Military Operations Research Society (MORS) Mini-Symposium, “Human Behavior and Performance as Essential Ingredients in Realistic Modeling of Combat – MORIMOC II,” 22-24 February 1989, Center for Naval Analyses, Alexandria, Virginia.

TDI Friday Read: Mike Spagat’s Economics of Warfare Lectures & Commentaries

Below is an aggregated list of links to Dr. Michael Spagat‘s E3320: Economics of Warfare lecture series at the Royal Holloway University of London, and Chris Lawrence’s commentary on each. Spagat is a professor of economics and the course addresses quantitative research on war.

The aim of the course is to:

  • Introduce students to the main facts about conflict.
  • Apply theoretical and empirical economic tools to the study of conflict.
  • Give students an appreciation of the main questions at the research frontier in the economic analysis of conflict.
  • Draw some policy conclusions on how the international community should deal with conflict.
  • Study data issues that arise when analysing conflict.
Mike’s lectures and Chris’s commentaries:

Economics of Warfare 1: Commentary
Economics of Warfare 2: Commentary
Economics of Warfare 3: Commentary
Economics of Warfare 4: Commentary
Economics of Warfare 5: Commentary
Economics of Warfare 6: Commentary
Economics of Warfare 7: Commentary
Economics of Warfare 8: Commentary
Economics of Warfare 9: Commentary
Economics of Warfare 10: Commentary
Economics of Warfare 11: Commentary 1, Commentary 2
Economics of Warfare 12: Commentary
Economics of Warfare 13: Commentary 1, Commentary 2, Commentary 3
Economics of Warfare 14: Commentary
Economics of Warfare 15: Commentary 1, Commentary 2
Economics of Warfare 16: Commentary
Economics of Warfare 17: Commentary 1, Commentary 2, Commentary 3
Economics of Warfare 18: Commentary
Economics of Warfare 19: Commentary 1, Commentary 2, Commentary 3, Commentary 4
Economics of Warfare 20: Commentary
A Return To Big Guns In Future Naval Warfare?

The first shot of the U.S. Navy Office of Naval Research’s (ONR) electromagnetic railgun, conducted at Naval Surface Warfare Center, Dahlgren Division in Virginia on 17 November 2016. [ONR’s Official YouTube Page]

Defense One’s Patrick Tucker reported last month that the U.S. Navy Office of Naval Research (ONR) had achieved a breakthrough in capacitor design, an important step toward fielding electromagnetic railguns on future warships. The new capacitors are compact yet capable of delivering 20 megajoule bursts of electricity. ONR plans to increase this to 32 megajoules by next year.

Railguns use such bursts of energy to drive powerful electromagnets that accelerate projectiles to hypersonic speeds. ONR’s goal is to produce railguns capable of firing 10 rounds per minute to a range of 100 miles.
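For a rough sense of what such energies mean at the muzzle, projectile velocity follows from the kinetic energy relation E = ½mv². The 32-megajoule figure comes from ONR’s goal above; the 10 kg projectile mass is a hypothetical round number for illustration, not an ONR specification:

```python
import math

# Back-of-envelope muzzle velocity from kinetic energy E = 1/2 m v^2.
# The 32 MJ energy is from the article; the 10 kg projectile mass is a
# hypothetical round number used purely for illustration.
energy_joules = 32e6
projectile_mass_kg = 10.0

velocity = math.sqrt(2 * energy_joules / projectile_mass_kg)  # m/s
mach = velocity / 343.0  # relative to sea-level speed of sound

print(f"Muzzle velocity: {velocity:,.0f} m/s (~Mach {mach:.1f})")
```

Anything above roughly Mach 5 counts as hypersonic, so even with a heavier projectile the energies involved comfortably clear that threshold.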

The Navy initiated railgun development in 2005, intending to mount them on the new Zumwalt class destroyers. Since then, the production run of Zumwalts was cut from 32 to three. With the railguns still under development, the Navy has mounted 155mm cannons on them in the meantime.

Development of the railgun and a suitable naval powerplant continues. While the Zumwalts can generate 78 megajoules of energy and the Navy’s current railgun design needs only 25 to fire, the Navy still wants advanced capacitors capable of powering 150-kilowatt lasers for drone defense, as well as new generations of radars and electronic warfare systems.

While railguns are a huge improvement over chemically-propelled naval guns, there are still doubts about their effectiveness in combat compared to guided anti-ship missiles. Railgun projectiles are currently unguided, and the Navy’s existing design delivers less destructive power than the 1,000-pound warhead of the new Long Range Anti-Ship Missile (LRASM).

The U.S. Navy remains committed to railgun development nevertheless. For one idea of the role railguns and the U.S.S. Zumwalt might play in a future war, take a look at P. W. Singer and August Cole’s Ghost Fleet: A Novel of the Next World War, which came out in 2015.

Human Factors In Combat: Interaction Of Variable Factors

The Second Battle of Ypres, 22 April to 25 May 1915 by Richard Jack [Canadian War Museum]

Trevor Dupuy thought that it was possible to identify and quantify the effects of some individual moral and behavioral (i.e. human) factors on combat. He also believed that many of these factors interacted with each other and with environmental and operational (i.e. physical) variables in combat as well, although parsing and quantifying these effects was a good deal more difficult. Among the combat phenomena he considered to be the result of interaction with human factors were:

Dupuy was critical of combat models and simulations that failed to address these relationships. The prevailing approach to combat model design in the U.S. Department of Defense is known as the aggregated, hierarchical, or “bottom-up” construct. Bottom-up models generally use the Lanchester equations, or some variation on them, to calculate combat outcomes between individual soldiers, tanks, airplanes, and ships. These results are then used as inputs for models representing warfare at the brigade/division level, the outputs of which are in turn fed into theater-level simulations. Many in the American military operations research community believe bottom-up models to be the most realistic method of modeling combat.
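As a concrete picture of the calculation at the base of such models, here is a minimal numerical integration of the Lanchester square law, the variant most often cited for aimed-fire combat; the kill-rate coefficients are arbitrary illustrative values:

```python
# Minimal sketch of the Lanchester "square law" that sits at the bottom
# of many aggregated combat models: each side's attrition rate is
# proportional to the opposing side's remaining strength.
# The kill-rate coefficients are arbitrary, for illustration only.

def lanchester_square(blue, red, blue_rate, red_rate, dt=0.01, t_max=20.0):
    """Euler integration of dB/dt = -r*R, dR/dt = -b*B until one side breaks."""
    t = 0.0
    while t < t_max and blue > 0 and red > 0:
        blue, red = (max(blue - red_rate * red * dt, 0.0),
                     max(red - blue_rate * blue * dt, 0.0))
        t += dt
    return blue, red

# Equal effectiveness, 2:1 numerical edge: the square law rewards mass
# disproportionately, so the larger side wins with most of its force intact.
blue, red = lanchester_square(blue=1000.0, red=500.0, blue_rate=0.05, red_rate=0.05)
print(f"Blue survivors: {blue:.0f}, Red survivors: {red:.0f}")
```

With equal effectiveness coefficients the square law predicts the larger side survives with roughly sqrt(1000² − 500²) ≈ 866 troops. It is exactly this duel-style many-on-many aggregation, however plausible it looks, that Dupuy faulted for omitting human factors.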

Dupuy criticized this approach for many reasons (including the inability of the Lanchester equations to accurately replicate real-world combat outcomes), but mainly because it failed to represent human factors and their interactions with other combat variables.

It is almost undeniable that there must be some interaction among and within the effects of physical as well as behavioral variable factors. I know of no way of measuring this. One thing that is reasonably certain is that the use of the bottom-up approach to model design and development cannot capture such interactions. (Most models in use today are bottom-up models, built up from one-on-one weapons interactions to many-on-many.) Presumably these interactions are captured in a top-down model derived from historical experience, of which there is at least one in existence [by which, Dupuy meant his own].

Dupuy was convinced that any model of combat that failed to incorporate human factors would invariably be inaccurate, which put him at odds with much of the American operations research community.

War does not consist merely of a number of duels. Duels, in fact, are only a very small—though integral—part of combat. Combat is a complex process involving interaction over time of many men and numerous weapons combined in a great number of different, and differently organized, units. This process cannot be understood completely by considering the theoretical interactions of individual men and weapons. Complete understanding requires knowing how to structure such interactions and fit them together. Learning how to structure these interactions must be based on scientific analysis of real combat data.[1]

This unresolved debate went dormant some time ago, and bottom-up models became the simulations of choice in Defense Department campaign planning and analysis. It should be noted, however, that the Defense Department disbanded its campaign-level modeling capabilities in 2011 after the use of the simulations in strategic analysis was criticized as “slow, manpower-intensive, opaque, difficult to explain because of its dependence on complex models, inflexible, and weak in dealing with uncertainty.”


[1] Trevor N. Dupuy, Understanding War: History and Theory of Combat (New York: Paragon House, 1987), p. 195.

The One Board Wargame To Rule Them All

The cover of SPI’s monster wargame, The Campaign For North Africa: The Desert War 1940-43 [SPI]

Even as board gaming enjoys a resurgence in the age of ubiquitous computer gaming, it appears, sadly, that table-top wargaming continues its long, slow decline in popularity from its 1970s-80s heyday. Pockets of enthusiasm remain, however, and there is new advocacy for wargaming as a method of professional military education.

Luke Winkie has written an ode to that bygone era through a look at the legacy of The Campaign For North Africa: The Desert War 1940-43, a so-called “monster” wargame created by designer Richard Berg and published by Simulations Publications, Inc. (SPI) in 1979. It is a representation of the entire North African theater of war at the company/battalion level, played on five maps which extend over 10 feet and include 70 charts and tables. The rule book encompasses three volumes. There are over 1,600 cardboard counter playing pieces. As befits the real conflict, the game places a major emphasis on managing logistics and supply, which can either enable or inhibit combat options. The rule book recommends that each side consist of five players, an overall commander, a battlefield commander, an air power commander, one dedicated to managing rear area activities, and one devoted to overseeing logistics.

The game map. [BoardGameGeek]

Given that a full game takes an estimated 1,500 hours to complete, actually playing The Campaign For North Africa is something that would appeal only to committed, die-hard wargame enthusiasts (known as grognards, Napoleonic-era slang for “grumblers,” i.e. veteran soldiers). As the game blurb suggests, the infamous monster wargames were an effort to satisfy a desire for a “super detailed, intensive simulation specially designed for maximum realism,” or as realistic as war on a tabletop can be, anyway. Berg admitted that he intentionally designed the game to be “wretched excess.”

Although The Campaign For North Africa was never popular, it did acquire a distinct notoriety not entirely confined to those of us nostalgic for board wargaming’s illustriously nerdy past. It retains a dedicated fanbase. Winkie’s article describes the recent efforts of Jake, a 16-year-old Minnesotan who, unable to afford a secondhand edition of the game priced at $400, printed out the maps and rule book for himself. He and a dedicated group of friends intend to complete a game before Jake heads off to college in two years. Berg himself harbors few romantic sentiments about wargaming or his past work, having sold his own last copy of the game several years ago because a “whole bunch of dollars seemed to be [a] more worthwhile thing to have.” The greatness of SPI’s game offerings has been tempered by the realization that the company died for its business sins.

However, some folks of a certain age relate to Jake’s youthful enthusiasm and his attraction to the structure and complexity embodied in The Campaign For North Africa‘s depth of detail. These elements led many of us on to a scholarly study of war and warfare. Some of us may have discovered the work of Trevor Dupuy in an advertisement for Numbers, Predictions and War: Using History to Evaluate Combat Factors and Predict the Outcome of Battles in the pages of SPI’s legendary Strategy & Tactics magazine, way back in the day.

Human Factors In Warfare: Diminishing Returns In Combat

[Jan Spousta; Wikimedia Commons]

One of the basic problems facing military commanders at all levels is deciding how to allocate available forces to accomplish desired objectives. A guiding concept in this sort of decision-making is economy of force, one of the fundamental and enduring principles of war. As defined in the U.S. Army’s Field Manual FM 100-5, Field Service Regulations, Operations (which Trevor Dupuy believed contained the best listing of the principles):

Economy of Force

Minimum essential means must be employed at points other than that of decision. To devote means to unnecessary secondary efforts or to employ excessive means on required secondary efforts is to violate the principle of both mass and the objective. Limited attacks, the defensive, deception, or even retrograde action are used in noncritical areas to achieve mass in the critical area.

How do leaders determine the appropriate means for accomplishing a particular mission? The risk of assigning too few forces to a critical task is self-evident, but is it possible to allocate too many? Determining the appropriate means in battle has historically involved subjective calculations by commanders and their staff advisors of the relative combat power of friendly and enemy forces. Most often, it entails a rudimentary numerical comparison of numbers of troops and weapons, along with estimates of the influence of environmental and operational factors. An exemplar of this is the so-called “3-1 rule,” which holds that an attacking force must achieve a three-to-one superiority in order to defeat a defending force.

Through detailed analysis of combat data from World War II and the 1967 and 1973 Arab-Israeli wars, Dupuy determined that combat appears subject to a law of diminishing returns and that it is indeed possible to over-allocate forces to a mission.[1] By comparing the theoretical outcomes of combat engagements with the actual results, Dupuy discovered that a force with a combat power advantage greater than double that of its adversary seldom achieved proportionally better results than a 2-1 advantage. A combat power superiority of 3 or 4 to 1 rarely yielded additional benefit when measured in terms of casualty rates, ground gained or lost, and mission accomplishment.

Dupuy also found that attackers sometimes gained marginal benefits from combat power advantages greater than 2-1, though less proportionally and economically than the numbers of forces would suggest. Defenders, however, received no benefit at all from a combat power advantage beyond 2-1.
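Dupuy’s finding can be restated as a simple decision rule. In the sketch below, the 2-1 cap and the attacker/defender distinction come from the findings above, while the 0.25 marginal slope for attackers is an invented value used purely for illustration:

```python
# Sketch of Dupuy's diminishing-returns finding: combat power
# superiority beyond roughly 2-1 buys little additional benefit for
# attackers, and none at all for defenders. The 0.25 marginal slope
# is an invented illustrative value, not a figure from Dupuy.

def effective_advantage(power_ratio, posture="attack"):
    """Clamp a combat power ratio to the range where it still pays off."""
    cap = 2.0
    if power_ratio <= cap:
        return power_ratio
    if posture == "defense":
        return cap  # defenders gain nothing beyond 2-1
    # attackers gain marginal, less-than-proportional benefit beyond 2-1
    return cap + 0.25 * (power_ratio - cap)

for ratio in (1.5, 2.0, 3.0, 4.0):
    print(ratio, effective_advantage(ratio), effective_advantage(ratio, "defense"))
```

A commander applying economy of force would thus treat any calculated combat power ratio much beyond 2-1 as surplus better allocated elsewhere.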

Two human factors, Dupuy believed, contributed to this apparent force limitation: Clausewitzian friction and breakpoints. As described in a previous post, friction accumulates on the battlefield through the innumerable human interactions between soldiers, degrading combat performance. This phenomenon increases as the number of soldiers increases.

A breakpoint represents a change of combat posture by a unit on the battlefield, for example, from attack to defense, or from defense to withdrawal. A voluntary breakpoint occurs due to mission accomplishment or a commander’s order. An involuntary breakpoint happens when a unit spontaneously ceases an attack, withdraws without orders, or breaks and routs. Involuntary breakpoints occur for a variety of reasons (though contrary to popular wisdom, seldom due to casualties). Soldiers are not automatons and will rarely fight to the death.

As Dupuy summarized,

It is obvious that the law of diminishing returns applies to combat. The old military adage that the greater the superiority the better, is not necessarily true. In the interests of economy of force, it appears to be unnecessary, and not really cost-effective, to build up a combat power superiority greater than two-to-one. (Note that this is not the same as a numerical superiority of two-to-one.)[2] Of course, to take advantage of this phenomenon, it is essential that a commander be satisfied that he has a reliable basis for calculating relative combat power. This requires an ability to understand and use “combat multipliers” with greater precision than permitted by U.S. Army doctrine today.[3] [Emphasis added.]


[1] This section is drawn from Trevor N. Dupuy, Understanding War: History and Theory of Combat (New York: Paragon House, 1987), Chapter 11.

[2] This relates to Dupuy’s foundational conception of combat power, which is clearly defined and explained in Understanding War, Chapter 8.

[3] Dupuy, Understanding War, p. 139.

“So Fricking Stupid”: Muddling Through Strategic Insolvency

As I have mentioned before, the United States faces a crisis of “strategic insolvency” arising from the imbalance between its foreign and military policy commitments and the resources it has allocated to meet them. Rather than addressing the problem directly, the nation’s political leadership appears to be opting to “muddle through” by maintaining the policy and budgetary status quo. A case in point is the 2017 Fiscal Year budget, which should have been approved last year. Instead, Congress has passed a series of continuing resolutions (CRs) that keep funding at existing levels while its members try to come to an agreement.

That part is not working out so well. Representative Adam Smith, the ranking Democrat on the House Armed Services Committee (HASC), earlier this week warned that the congressional budget process is headed for “a complete meltdown” in December, Sidney J. Freedberg, Jr. reported in Defense One. The likely outcome, according to Smith, will be another year-long CR in place of a budget. Smith vented that this would constitute “borderline legislative malpractice, particularly for the Department of Defense.”

Smith finds himself in bipartisan agreement with HASC chairman Mac Thornberry and Senate Armed Services chairman John McCain that ongoing CRs and the restrictions of sequestration have contributed to training and maintenance shortfalls that resulted in multiple accidents—including two U.S. Navy ship collisions—that have killed 42 American servicemembers this summer.

As Freedberg explained,

What’s the budget train wreck, according to Smith? The strong Republican majority in the House has passed a defense bill that goes $72 billion over the maximum allowed by the 2011 Budget Control Act. That would trigger the automatic cuts called sequestration unless the BCA is amended, as it has been in the past. But the slim GOP majority in the Senate needs Democratic votes to amend the BCA, and the Dems won’t deal unless non-defense spending rises as much as defense – which is anathema to Republican hardliners in the House.

“Do you understand just how fricking stupid that is?” a clearly frustrated Smith asked rhetorically. A possible alternative would be to shift the extra defense spending into Overseas Contingency Operations funding, which is not subject to the BCA, as has been done before. Smith derided this option as “a fiscal sleight of hand [that] would be bad governance and ‘hypocritical.’”

Just as politics has gridlocked budget negotiations, so too does it prevent flexibility in managing the existing defense budget. Smith believes a lot of money could be freed up by closing domestic military bases deemed unnecessary by the Defense Department and canceling some controversial nuclear weapons programs, but such choices would be politically contentious, to say the least.

The fundamental problem may be simpler: no one knows how much money is really needed to properly fund current strategic plans.

One briefer from the Pentagon’s influential and secretive Office of Net Assessment told Smith that “we do not have the money to fund the strategy that we put in place in 2012,” the congressman recalled. “And I said, ‘how much would you need?’…. He had no idea.”

And the muddling through continues.

Recent Academic Research On Counterinsurgency

An understanding of the people and culture of the host country is an important aspect of counterinsurgency. Here, 1st Lt. Jeff Harris (center) and Capt. Robert Erdman explain to Sheik Ishmael Kaleel Gomar Al Dulayani what was found in houses belonging to members of his tribe during a cordon and search mission in Hawr Rajab, Baghdad, Nov. 29, 2006. The Soldiers are from Troop A, 1st Squadron, 40th Cavalry Regiment. (Photo Credit: Staff Sgt. Sean A. Foley)

As the United States’ ongoing decade-and-a-half-long involvement in Afghanistan has largely receded from the public mind, the once-intense debate over counterinsurgency warfare has cooled as well. Interest stirred mildly recently as the Trump administration rejected a proposal to turn the war over to contractors and elected to slightly increase the U.S. troop presence there. The administration’s stated policy does not appear to differ significantly from the one that preceded it.

The public debate, such as it was, occasioned two excellent articles addressing Afghanistan policy and relevant recent academic scholarship on counterinsurgency, one by Max Fisher and Amanda Taub in the New York Times, and the other by Patrick Burke in War is Boring.

Fisher and Taub addressed the question of the seeming intractability of the Afghan war. “There is a reason that Afghanistan’s conflict, then and now, so defies solutions,” they wrote. “Its combination of state collapse, civil conflict, ethnic disintegration and multisided intervention has locked it in a self-perpetuating cycle that may be simply beyond outside resolution.”

The article weaves together findings of studies on these topics by Ken Menkhaus; Romain Malejacq; Dipali Mukhopadhyay; and Jason Lyall, Graeme Blair, and Kosuke Imai. Fisher and Taub concluded on the pessimistic note that bringing peace and stability to Afghanistan may be a generational undertaking.

Burke looked at a more specific aspect of counterinsurgency: the relationship between civilian casualties and counterinsurgent success or failure. Separating insurgents from the civilian population is one of the central conundrums of counterinsurgency, referred to as the “identification problem.” Burke noted that current U.S. military doctrine holds that “excessive civilian casualties will cripple counterinsurgency operations, possibly to the point of failure.” This notion rests on the prevailing assumption that civilians have agency, that they can choose between supporting insurgents or counterinsurgents, and that reducing civilian deaths and “winning hearts and minds” is the path to counterinsurgency success.

Burke surveyed work by Matthew Adam Kocher, Thomas B. Pepinsky, and Stathis N. Kalyvas; Luke Condra and Jacob Shapiro; Lyall, Blair, and Imai; Christopher Day and William Reno; Lee J.M. Seymour; Paul Staniland; and Fotini Christia. The picture portrayed in this research indicates that there is no clear, direct relationship between civilian casualties and counterinsurgent success. While civilians do hold non-combatant deaths against counterinsurgents, the relevance of blame can depend greatly on whether the losses were inflicted by locals or foreigners. In some cases, counterinsurgent brutality helped them succeed or had little influence on the outcome. In others, decisions made by insurgent leaders had more influence over civilian choices than civilian casualties did.

While the studies surveyed by Fisher, Taub, and Burke were collectively inconclusive, the results certainly warrant deep reconsideration of the central assumptions underpinning prevailing U.S. political and military thinking about counterinsurgency. The articles and studies cited above provide plenty of food for thought.

Combat Readiness And The U.S. Army’s “Identity Crisis”

Servicemen of the U.S. Army’s 173rd Airborne Brigade Combat Team (standing) train Ukrainian National Guard members during a joint military exercise called “Fearless Guardian 2015,” at the International Peacekeeping and Security Center near the western village of Starychy, Ukraine, on May 7, 2015. [Newsweek]

Last week, Wesley Morgan reported in POLITICO on an internal readiness study recently conducted by the U.S. Army’s 173rd Airborne Infantry Brigade Combat Team. As U.S. European Command’s only airborne unit, the 173rd Airborne Brigade has been participating in exercises in the Baltic States and Ukraine since 2014 to demonstrate the North Atlantic Treaty Organization’s (NATO) resolve to counter potential Russian aggression in Eastern Europe.

The experience the brigade gained working with Baltic and particularly Ukrainian military units that had engaged with Russian and Russian-backed Ukrainian separatist forces has been sobering. Colonel Gregory Anderson, the 173rd Airborne Brigade commander, commissioned the study as a result. “The lessons we learned from our Ukrainian partners were substantial. It was a real eye-opener on the absolute need to look at ourselves critically,” he told POLITICO.

The study candidly assessed that the 173rd Airborne Brigade currently lacked “essential capabilities needed to accomplish its mission effectively and with decisive speed” against near-peer adversaries or sophisticated non-state actors. Among the capability gaps the study cited were

  • the lack of air defense and electronic warfare units and over-reliance on satellite communications and Global Positioning System (GPS) navigation;
  • simple countermeasures such as camouflage nets to hide vehicles from enemy helicopters or drones, which are “hard-to-find luxuries for tactical units”;
  • the urgent need to replace up-armored Humvees with the forthcoming Ground Mobility Vehicle, a much lighter-weight, more mobile truck; and
  • the likewise urgent need to field the projected Mobile Protected Firepower armored vehicle companies the U.S. Army is planning to add to each infantry brigade combat team.

The report also stressed the vulnerability of the brigade to demonstrated Russian electronic warfare capabilities, which would likely deprive it of GPS navigation and targeting and satellite communications in combat. While the brigade has been purchasing electronic warfare gear of its own from over-the-counter suppliers, it would need additional specialized personnel to use the equipment.

As analyst Adrian Bonenberger commented, “The report is framed as being about the 173rd, but it’s really about more than the 173rd. It’s about what the Army needs to do… If Russia uses electronic warfare to jam the brigade’s artillery, and its anti-tank weapons can’t penetrate any of the Russian armor, and they’re able to confuse and disrupt and quickly overwhelm those paratroopers, we could be in for a long war.”

While the report is a wake-up call with regard to the combat readiness in the short-term, it also pointedly demonstrates the complexity of the strategic “identity crisis” that faces the U.S. Army in general. Many of the 173rd Airborne Brigade’s current challenges can be traced directly to the previous decade and a half of deployments conducting wide area security missions during counterinsurgency operations in Iraq and Afghanistan. The brigade’s perceived shortcomings for combined arms maneuver missions are either logical adaptations to the demands of counterinsurgency warfare or capabilities that atrophied through disuse.

The Army’s specific lack of readiness to wage combined arms maneuver warfare against potential peer or near-peer opponents in Europe can be remedied with time and resourcing in the short term. This will not solve the long-term strategic conundrum the Army faces in needing to be prepared to fight conventional and irregular conflicts at the same time, however. Unless the U.S. is willing to 1) increase defense spending to balance force structure with the demands of foreign and military policy objectives, or 2) realign foreign and military policy goals with the available force structure, it will have to patch up short-term readiness issues as best it can and continue to muddle through. Given the current state of U.S. domestic politics, muddling through will likely be the default option unless or until the consequences of doing so force a change.

Structure Of The U.S. Defense Department History Programs

With the recent discussions of the challenges facing U.S. government historians in writing the official military histories of recent conflicts, it might be helpful to provide a brief outline of the structure of the Department of Defense (DOD) offices and programs involved. There are separate DOD agency, joint, and service programs, which, while having distinct missions, sometimes have overlapping focuses and topics. They are also distinct from other Executive Branch agency history offices, such as the Office of the Historian at the State Department.

The Office of the Secretary of Defense has its own Historical Office, which focuses on collecting, preserving, and presenting the history of the defense secretaries. Its primary publications are the Secretaries of Defense Historical Series. Although the office coordinates joint historical efforts among the military services and DOD agency history offices, it does not direct their activities.

The Joint History Office of the Joint Chiefs of Staff (JCS) provides historical support to the Chairman and Vice Chairman of the Joint Chiefs of Staff and to the Joint Staff. Its primary publications are the JCS and National Policy series, as well as various institutional studies and topical monographs.

The Joint History Office also administers the Joint History Program, which includes the history offices of the joint combatant commands. Its primary role is to maintain the history programs of the commanders of the combatant commands. Current guidance for the Joint History Program is provided by Chairman of the Joint Chiefs Instruction 5320.1B, “Guidance for the Joint History Program,” dated 13 January 2009.

Each of the military services also has its own history program. Perhaps the largest and best known is the Army Historical Program. Its activities are defined in Army Regulation 870-5, “Military History: Responsibilities, Policies, and Procedures,” dated 21 September 2007. The program is administered by the Chief of Military History, who is the principal advisor to the Secretary of the Army and the Army Chief of Staff for all historical matters, and is dual-hatted as the director of the U.S. Army Center for Military History.

The Air Force History and Museum Program is outlined in Air Force Policy Directive 84-1, “Historical Information, Property, and Art,” dated 16 September 2005. The Director of Air Force History and Museums, Policies, and Programs oversees the Air Force Historical Studies Office, and its field operating agency, the Air Force Historical Research Agency.

The Navy History Program is managed by the Director of Navy History. Its activities are described in OPNAV Instruction 5750.4E, “Navy History Programs,” dated 18 June 2012. The Navy’s central historical office is the Naval History and Heritage Command, which includes the Navy Department Library and the National Museum of the United States Navy in Washington, D.C.

The U.S. Marine Corps History Division, a branch of Marine Corps University, runs and administers the Marine history program. Its policies, procedures, standards, and responsibilities are outlined in Marine Corps Order 5750.1H, dated 13 February 2009.

In future posts, I will take a closer look at the activities and publications of these programs.

Human Factors In Warfare: Friction

The Prussian military philosopher Carl von Clausewitz identified the concept of friction in warfare in his book On War, published in 1832.

Everything in war is very simple, but the simplest thing is difficult. The difficulties accumulate and end by producing a kind of friction that is inconceivable unless one has experienced war… Countless minor incidents—the kind you can never really foresee—combine to lower the general level of performance, so that one always falls far short of the intended goal… Friction is the only concept that more or less corresponds to the factors that distinguish real war from war on paper… None of [the military machine’s] components is of one piece: each part is composed of individuals, every o