For four months in 2017, an American-led coalition in Syria dropped some ten thousand bombs on Raqqa, the densely populated capital of the Islamic State. Nearly eighty per cent of the city, which had a prewar population of three hundred thousand, was destroyed. I visited shortly after ISIS relinquished control, and found the scale of the devastation difficult to comprehend: the skeletal silhouettes of collapsed apartment buildings, the charred schools, the gaping craters. Clotheslines were webbed between stray standing pillars, evidence that survivors were somehow living among the ruins. Nobody knows how many thousands of residents died, or how many are now homeless or confined to a wheelchair. What is certain is that the decimation of Raqqa is unlike anything seen in an American conflict since the Second World War.
As then, this battle was waged against an enemy bent on overthrowing an entire order, in an apparently nihilistic putsch against reason itself. But Raqqa was no Normandy. Although many Syrians fought valiantly against ISIS and lost their lives, the U.S., apart from a few hundred Special Forces on the ground, relied on overwhelming airpower, prosecuting the entire war from a safe distance. Not a single American died. The U.S. still occasionally conducts conventional ground battles, as in Falluja, Iraq, where, in 2004, troops engaged in fierce firefights with insurgents. But the battle for Raqqa—a war fought from cavernous control rooms thousands of miles away, or from aircraft thousands of feet in the sky—is the true face of modern American combat.
We have been conditioned to judge the merit of today’s wars by their conduct. The United Nations upholds norms of warfare that, among other things, prohibit such acts as torture, rape, and hostage-taking. Human-rights groups and international lawyers tend to designate a war “humane” when belligerents have avoided harming civilians as much as possible. However, in “Asymmetric Killing: Risk Avoidance, Just War, and the Warrior Ethos” (Oxford), Neil Renic, a scholar of international relations, challenges this standard. He argues that, when assessing the humanity of a war, we should look not only to the fate of civilians but also to whether combatants have exposed themselves to risk on the battlefield. Renic suggests that when one side fully removes itself from danger—even if it goes to considerable lengths to protect civilians—it violates the ethos of humane warfare.
The core principle of humane warfare is that fighters may kill one another at any time, excepting those who are rendered hors de combat, and must avoid targeting civilians. It’s tempting to say that civilians enjoy this protected status because they are innocent, but, as Renic points out, civilians “feed hungry armies, elect bellicose leaders, and educate future combatants.” In Syria, home to a popular revolution, entire towns were mobilized for the war effort. Civilians—even children—acted as lookouts, arms smugglers, and spies. What really matters, then, is the type of danger that someone in a battle zone presents. The moment that a person picks up a weapon, whether donning a uniform or not, he or she poses a direct and immediate danger. This is the crucial distinction between armed personnel and civilians.
But what if the belligerents themselves don’t pose a direct and immediate danger? Renic argues that in such theatres as Pakistan, where Americans deploy remote-controlled drones to kill their enemies while rarely setting foot on the battlefield, insurgents on the ground cannot fight back—meaning that, in terms of the threat that they constitute, they are no different from civilians. It would then be just as wrong, Renic suggests, to unleash a Hellfire missile on a group of pickup-riding insurgents as it would be to annihilate a pickup-riding family en route to a picnic.
One might respond that, say, the Pakistani Taliban does pose an immediate threat to Pakistani civilians, if not to U.S. soldiers. But Renic contends that the U.S., by avoiding the battlefield, has turned civilians into attractive targets for insurgents eager for a fight. Whether this claim is correct or not, it’s clear that risk-free combat has brought warfare into new moral territory, requiring us to interrogate our old notions of battlefield right and wrong. If we can distinguish combatants from civilians only by the danger that they pose to other combatants, then the long-distance violence of modern warfare is inhumane. Renic concludes that the “increasingly sterile, bureaucratized, and detached mode of American killing” has the flavor of punishment rather than of war in any traditional sense. In Barack Obama’s recent memoir, he writes that, as President, he wanted to save “the millions of young men” in the Muslim world who were “warped and stunted by desperation, ignorance, dreams of religious glory, the violence of their surroundings.” Yet he claims that, owing to where they lived, and the machinery at his disposal, he ended up “killing them instead.” Leaving aside Obama’s crude generalizations, Renic argues that he could indeed have saved them—by “severely restricting” remote warfare.
Renic’s book is part of a broader trend of scholars and human-rights activists contending with the wreckage caused by America’s recent conflicts abroad. Their studies share a basic quest: how can we use rules to make warfare more humane? Whereas Renic focusses on moral rules, much of this other work is concerned with legal rules. In the aftermath of the Raqqa battle, Amnesty International and other organizations sifted through the rubble, carefully documenting whether this or that bombing complied with the laws of war. This work is salutary, but a troubling question looms behind it: in our drive to subject the battlefield to rules, are we overlooking deeper moral truths about the nature of war itself?
The notion that warfare should be governed by rules is ancient, and dates at least to Augustine, who argued that a legitimate ruler can wage war when he has good intentions and a just cause. In the Middle Ages, the Church attempted to ban the crossbow, and took efforts to protect ecclesiastical property and noncombatants from wartime violence. But it was only in the nineteenth century that states attempted to fashion laws and treaties to regulate wartime conduct. During the American Civil War, the Union implemented the Lieber Code, which sought to restrict the imposition of unnecessary suffering—torture or poisoning, for example—on the enemy. The code also enshrined as legal convention the principle of “military necessity”: if violence had a strategic purpose—that is, if it could help win a war—it was allowed. In the Hague Conventions of 1899 and 1907, world powers accepted vague limits on wartime conduct while upholding the principle of military necessity. States agreed to a moratorium on balloon-launched munitions, which had little tactical value, but were silent on the question of motorized aircraft.
Many nations ignored even these lax regulations. The Hague Conventions prohibited “asphyxiating gases,” but world powers flouted the treaties with abandon in the trenches of the First World War. The conventions effectively outlawed the intentional targeting of civilians, but by the Second World War belligerents had recognized the military advantage of bombing towns and villages. In 1942, British policy actually barred aircraft from targeting military facilities, ordering them instead to strike working-class areas of German cities—“for the sake of increasing terror,” as Churchill later put it. In 1943, the U.S. and British Air Forces of Operation Gomorrah rained down fire and steel upon Hamburg for seven nights, killing tens of thousands of civilians. Urban bombing campaigns left millions of homeless and shell-shocked Germans roaming a ravaged land that W. G. Sebald later described as the “necropolis of a foreign, mysterious people, torn from its civil existence and its history, thrown back to the evolutionary stage of nomadic gatherers.” Then came the nuclear bombs dropped on Hiroshima and Nagasaki, which killed about two hundred thousand people. In all, Allied terror raids may have claimed some half a million civilian lives. The pattern continued in the Korean War; Dean Rusk, later Secretary of State, recalled that the U.S. had bombed “every brick that was standing on top of another, everything that moved.”
During the Vietnam War, a powerful antiwar movement emerged for the first time since the First World War. Through television, the news of such atrocities as the My Lai massacre reached directly into American living rooms, and conscientious objectors and antiwar activists appealed to international law to justify their opposition to the carnage. They were more successful in shaping U.S. conduct than they could have ever imagined. After the war, the Pentagon revamped its arsenal with such inventions as laser-guided munitions, which could carry out “precision strikes.” The U.S. military began to follow the principles of the Hague Conventions, as well as those found in other treaties, calling these combined regulations the Law of Armed Conflict. American terror bombings became a thing of the past. In the first Gulf War, hundreds of specialist attorneys sat alongside generals at CENTCOM headquarters in Saudi Arabia, and elsewhere, to insure that the U.S. followed legal rules of warfare. It was the largest per-capita wartime deployment of lawyers in American history.
On the face of it, scrupulous adherence to the law is a victory for the cause of humane war. Yet the ruins of Syria tell a more complicated story. Not long before the U.S. assault on Raqqa, Russian and Syrian forces launched a major offensive to capture the rebel-held eastern side of Aleppo. Paying no heed to international law, they retook the city with savage efficiency, laying waste to crowded markets and hospitals. Yet the end result looked no different from Raqqa: a large civilian death toll, honeycombed apartment buildings, streets choked with rubble, entire neighborhoods flattened.
The U.S.-led coalition waged its assault on Raqqa with exacting legal precision. It vetted every target carefully, with a fleet of lawyers scrutinizing strikes the way an in-house counsel pores over a corporation’s latest contract. During the battle, the coalition commander, Lieutenant General Stephen J. Townsend, declared, “I challenge anyone to find a more precise air campaign in the history of warfare.” Although human-rights activists insist that the coalition could have done more to protect civilians, Townsend is right: unlike Russia, America does not bomb indiscriminately. The U.S. razed an entire city, killing thousands in the process, without committing a single obvious war crime.
During the summer of 2016, residents of Tokhar, a riverside hamlet in northern Syria, gathered every night in four houses on the community’s edge, hoping to evade gunfire and bombs. These houses stood at the farthest point from the front line, a mile away, where U.S.-backed forces were engaging ISIS fighters. Every night, a drone hovered over Tokhar, filming the villagers’ procession from their scattered homes to these makeshift bunkers. The basements became crowded with farmers, mothers, schoolgirls, and small children. On July 18th, at around 3 A.M., the houses exploded. Thick smoke covered the night sky. Limbs were strewn across the rubble. Children were buried under collapsed walls.
People from surrounding villages spent two weeks digging out bodies. The coalition, meanwhile, announced that it had destroyed “nine ISIL fighting positions, an ISIL command and control node, and 12 ISIL vehicles” in the area that night. Eventually, after reports surfaced that many civilians had died, the coalition admitted to killing twenty-four. When a colleague and I visited, a year after the raid, we documented at least a hundred and twenty dead civilians, and found no evidence that any ISIS members had been present near the four houses. A mother told me that some small children were obliterated, their bodies never found.
“We take all measures during the targeting process . . . to comply with the principles of the Law of Armed Conflict,” U.S. Marine Major Adrian J. T. Rankine-Galloway said. The essence of this legal code is that militaries cannot intentionally kill civilians. It is true that no one in the chain of command wished to massacre civilians that night—not the pilot or the targeteers or the lawyers. The U.S. points to this fact in calling the Tokhar incident an error, regrettable but not illegal. Yet, though it is reasonable to invoke intention when referring to the mind-set of an individual—this is the idea behind the legal concept mens rea—it seems odd to ascribe a mental state to a collective actor like an army or a state. It is clear, however, that the coalition could have foreseen the outcome of its actions: it had filmed the area for weeks, and intelligence indicating that the village was populated would not have been difficult to gather. During the coalition’s campaign against ISIS, it often based its bombing decisions on faulty assumptions about civilian life; in Mosul, it targeted a pair of family homes after failing to observe civilians outdoors over the course of a few afternoons. Iraqis typically avoid the blazing midday heat. Four people died. The Law of Armed Conflict excuses genuine errors and proscribes intentional killing, but most American warfare operates in a gray zone, which exists, in part, because the law itself is so vague.
A second pillar of the legal code is the rule of proportionality: states can kill civilians if they are aiming for a military target, as long as the loss of civilian life is proportional to the military advantage they gain by the attack. What this means is anyone’s guess: how do you measure “military advantage” against human lives? During the Mosul battle, ISIS snipers went onto the roof of the home of Mohammed Tayeb al-Layla, a former dean of engineering at Mosul University. According to neighbors, he and his wife rushed upstairs, pleading with them to leave. In a flash, a warhead flattened the home, killing the snipers, al-Layla and his wife, and their daughter, who was downstairs. It’s nearly impossible to say how one would weigh two dead snipers against a dead family, but most readings of the law would consider the killing lawful. Much of the destruction in Raqqa follows the example of the al-Layla household: death by a thousand proportional strikes.
American officials are quick to point out that ISIS deserves a good share of the blame: militants dispersed themselves throughout schools and apartment buildings, and otherwise lived among the civilian population. Yet this does not necessarily absolve the U.S. When counter-insurgency doctrine was in vogue during the conflicts in Iraq and Afghanistan, American forces sought to win “hearts and minds” by embedding in population centers. For an Afghan, few sights stirred as much dread as a column of beige armored Humvees snaking through a crowded market. If a suicide bomber attacked the Humvees, Americans would rightly condemn him for his disregard for the surrounding civilians—even if he had the force of the law, in the guise of proportionality, behind him.
The contradictions of U.S. military conduct don’t go unnoticed. Human-rights organizations frequently accuse the U.S. of committing war crimes, including in the Raqqa battle. In nearly every case, though, the U.S. can muster a convincing defense. What is in dispute is not whether or not the U.S. killed civilians but the interpretation of the law: the U.S. uses a much looser interpretation of intentionality and proportionality than most human-rights groups do. After such deaths occur, no independent arbiter adjudicates the U.S.’s actions—only vanquished forces ever get dragged before an international tribunal. The Pentagon is left to judge itself, and, unsurprisingly, almost always finds in its own favor. The law’s ambiguities allow the U.S. to classify atrocities like that in Tokhar as accidents, even if the deadly results were foreseeable, and therefore avoidable.
How many civilian deaths in Raqqa were avoidable? In Tokhar, it was possible to reconstruct the evidence, but often it is not. Without transparency in the targeting process, the military usually has the final word. Yet there is one way we can intuitively know when an armed force has an alternative to causing civilian suffering. When U.S. forces are faced with a pair of ISIS gunmen on the roof of an apartment building, they can call in a five-hundred-pound laser-guided bomb—or they can approach the enemy on foot, braving enemy fire, and secure the building through old-fashioned battle. In the past, armies have sometimes chosen the harder path: during the Second World War, when Free French pilots carried out bombing raids on Vichy territory—part of their homeland—they flew at lower altitudes, in order to avoid striking civilians, even though it increased the chances that they’d be shot down. For the U.S. military, however, the rules are blind to the question of risk. The law doesn’t consider whether an armed force could have avoided unnecessary civilian suffering by exposing itself to greater danger. For Neil Renic, wars waged exclusively through drones, therefore, point to the “profound discord between what is lawful on the battlefield and what is moral.”
This may be why the U.S. military today tends to downplay the old martial virtue of courage. Historically, though, the concept was so central to the idea of good soldiering that weapons or tactics lacking in valor sparked objections from the ranks. Renic writes that when aircraft first entered the modern arsenal, in the nineteen-tens, fighter pilots engaged in dogfights reminiscent of the gallantry of a medieval duel. But such long-distance tactics as mortar fire and aerial bombardment had little to do with valor. A pilot from the First World War recalled, “You did not sit in a muddy trench while someone who had no personal enmity against you loosed off a gun, five miles away, and blew you to smithereens.” He concluded, “That was not fighting; it was murder. Senseless, brutal, ignoble.” A British airman from the Second World War wrote, “I was a fighter pilot, never a bomber pilot, and I thank God for that. I do not believe I could ever have obeyed orders as a bomber pilot; it would have given me no sense of achievement to drop bombs on German cities.”
Though sniping causes far less devastation, it has long aroused a similar unease. In the First World War, a British brigadier-general denounced the practice as “an act of cold murder, horrible to the most callous, distasteful to all but the most perverted.” During the American Revolution, a young British officer trained his rifle’s sights on a target, only to decide that “it was not pleasant to fire at the back of an unoffending individual who was acquitting himself very coolly of his duty.” The individual in question was George Washington.
In 2014, the bio-pic “American Sniper” ignited a debate about whether its protagonist, a legendary marksman, had fabricated parts of his story. But, Renic points out, nobody questioned the moral legitimacy of sniping itself, an indication of the extent to which courage has vanished as a battlefield norm in today’s wars. Even if he is overstating the role of valor historically, it’s clear that the U.S. military today goes to great lengths to avoid risk, justifying its conduct instead by extolling the Law of Armed Conflict. A military that emphasizes courage may wind up protecting more civilians, but with bravery comes body bags—and, the moment that body bags arrived in the U.S., we would be forced to contend with the hard questions that the law lets us ignore. Were those deaths of Americans worth it? What is the purpose of this war? Should it be fought, and, if so, fought differently? These are conversations that neither the military nor human-rights organizations appear interested in having.
Critics might say that the ruins of Syria reveal the limited value of the laws of war: two armies, operating under greatly differing norms, produced nearly identical results in Raqqa and Aleppo. Defenders might retort that such rules, even when vague or overly permissive, are better than none at all. Probably both views are correct, but the focus on legality may have lulled us into a comfort with war itself. Human-rights groups have found the U.S. guilty of dozens of war crimes in Afghanistan, but most American killing has been lawful: a housewife wandering too close to a convoy, a farmer gunned down on faulty assumptions, a family made victim to the rule of proportionality. Americans seem to become exercised about the miseries of combat only when the rules are flagrantly violated; as long as they are not, a war quietly slides into the background—even into a permanent state of being. If the Afghan war continued for another twenty years, it’s doubtful that it would arouse much domestic opposition, even though the over-all suffering might be as great as that of a wanton slaughter that ended in a decisive victory. The U.S. cannot carry out such a slaughter without violating the law and provoking widespread opposition, and so the conflict remains at a perpetual low boil. The U.S. finds itself in a peculiar situation in which it can neither win nor lose its wars.
Faced with this bitter truth, some thinkers espouse the doctrine of realism, which bluntly states that the battlefield is no place for moral strictures. But this doctrine can be used to excuse terrible and unnecessary suffering. Another approach is pacifism, which, for all its merits, asks us to condemn both the tyrant and those violently resisting tyranny. That leaves the moral tradition of “just war,” which maintains that warfare is a fixture of human existence, so the best we can hope for is to regulate when and how it is waged. This is the essential idea informing the laws of war.
Yet, although armed conflict is not disappearing anytime soon, that doesn’t mean we must reduce war solely to a question of legal violations and battlefield rules. Even if we can never abolish war, Immanuel Kant argued, we should act as if we could, and design our institutions accordingly. Today in America, we could work to insulate the Pentagon’s decisions from defense contractors and other vested interests; more important, we could return the decision to wage war to democratic control. After 9/11, Congress passed the Authorization for the Use of Military Force, which Presidents have since invoked to justify at least thirty-seven military activities in fourteen countries, including the U.S. war in Syria, without formal declaration or public debate. Whether this or that pile of rubble was produced lawfully, or whether or not American boots touched Syrian soil, is not nearly as important as the fact that the U.S. was free to raze a foreign city with no public discussion or accountability. Perhaps only when our foreign adventures are subject to democratic constraints will we view the starting and ending of wars—not just their conduct—as a matter of life and death. ♦