Sunday, February 22, 2015

Edward S. Corwin and the "Totality" of America's World War II

What makes a war “total”? And how is war’s totality experienced? Edward S. Corwin, in the opening of his influential 1947 book Total War and the Constitution, turns to Deuteronomy:
Of the cities of these people, which the Lord thy God doth give thee for an inheritance, thou shalt save alive nothing that breatheth: But thou shalt utterly destroy them…as the Lord thy God hath commanded thee.
The biblical reference enables Corwin to say that total war “is at least as old as recorded history.” He also finds in Deuteronomy a motive for total war. The Bible justified ruthlessness, “For…the Lord thy God hath chosen thee to be a special people unto himself, above all people that are upon the face of the earth.”

Total war, in this sense, went beyond domination to elimination. Wars have been thought of as “total” even when lacking genocidal objectives, however, at least for some participants. Ubiquity of violence is often a central aspect of war’s totality. The Oxford English Dictionary defines total war as unrestricted war, especially “war in which civilians are perceived as combatants and therefore as legitimate targets.”

Totality takes a turn when applied to the United States. For Corwin, the totality that was relevant to American law was “functional totality,” which he defined as “the politically ordered participation in the war effort of all personal and social forces, the scientific, the mechanical, the commercial, the economic, the moral, the literary and artistic, and the psychological.” Total war meant that “every human element” of a society was involved in the conflict. He draws examples from nations under siege. In France in 1793, the Committee of Public Safety ordered that “young men will go into battle; married men will forge arms and transport food; the women will make tents, garments, and help in the hospitals.” Even children and the elderly had orders.

In the examples Corwin draws upon, including Italy’s 1935 invasion of Ethiopia, a core experience of war’s totality was collective vulnerability to violence. Corwin doesn’t explain how totality could apply to a society distant from the fighting, like the United States in World War II (with the exception of Hawai’i). Instead, he assumes its application, as he turns to the consequences of total war for government power and individual rights.

Another logic is needed to explain an American totality in World War II: a focus on the totality of power, as compared with total vulnerability to violence. Corwin’s application of total war to the American experience suggests that totality is experienced by a collective, society as a whole, with every element in society touched in some way by war. The body that feels war’s totality is the collective, and each human body within that collective might feel only some aspect of war. Many Americans felt the war’s violence directly during World War II; others felt it through their connections with loved ones deployed overseas. For still others, the impact came through income taxes and shortages at the grocery store. The extension of war’s impact beyond its core violence is what made American war “total,” although this experience of war’s totality cannot compare with the lived experience of World War II in Europe, Asia, and North Africa.

In his analysis of individual rights in this generative work, Corwin suggests that “the requirements of total war” are incompatible with fundamental American constitutional principles. But perhaps there is something more important in Corwin that we might look for elsewhere in the history of American thought. Perhaps Corwin provides a window on the way American war could be seen as present, personal, and “total,” even though the shooting, killing and dying were thousands of miles away.

I am thinking this through for an upcoming plenary at a Duke conference on violence, and for a lecture as part of a Rutgers symposium on totality, so comments and suggestions are most welcome.

Cross-posted from Balkinization.

Saturday, February 21, 2015

Civil Rights History, Foreign Affairs, and Contemporary Public Diplomacy

It seems like a good time to reflect on the policy implications of scholarship on the relationship between civil rights and U.S. foreign relations. President Obama has recently emphasized that protecting human rights matters to the fight against terrorism. And the Council on Foreign Relations in DC will soon hold an event on the International Implications of the Civil Rights Movement. The event is not open, and discussion may go in a different direction, but below are a few points I hope to have a chance to get across.

The history of the intersection of civil rights and Cold War era U.S. foreign relations is copiously documented here and here. It took a while for American diplomats and political leaders to grasp the extent of the problem and how to address it. Here’s how they got it wrong, and then right – at least for U.S. public diplomacy:

In the late 1940s, the U.S. hoped to encourage a newly independent India to ally with the United States, but encountered persistent criticism of U.S. racial segregation and discrimination. American diplomats in India initially made things worse: they dismissed the problem and analogized American racism to the Indian caste system, suggesting that all nations have racial problems. If this did not exacerbate the U.S. image problem, it at least delayed addressing a critical issue during an important moment in U.S.-Indian diplomacy.

Because the United States argued that American democracy was a model for the world (in the context of a Cold War battle for hearts and minds with the Soviets), the U.S. encountered global criticism for not living up to its own ideals. The more the U.S. emphasized the values of democracy – at the same time that there was global news coverage of American civil rights abuses – the more the U.S. was criticized as hypocritical, and the benefits of American democracy were questioned. It took a very long time for American leaders to understand that they couldn’t talk about rights for other nations without protecting rights at home.

Important steps forward – Brown v. Board of Education, sending federal troops into Little Rock, and the Civil Rights Act of 1964 – along with careful management of the global story in U.S. public diplomacy, helped turn this around. By 1964, American diplomats could report that peoples in other nations had come to believe that the American government was on the side of civil rights, rather than being part of the problem. The unfortunate part of the story is that formal legal change, effectively marketed, could accomplish this. Continuing inequality, if below the radar of global news coverage, did not hold the world’s attention.

One obvious takeaway from this history is that a call for global human rights cannot be effective, and could be counter-productive, without meaningful progress toward human rights at home. There has been global coverage of the protests in Ferguson, Missouri, reminiscent of the international interest in American civil rights in the 1950s and 60s. And there has been a devastating hearts and minds problem stemming from abuses at Abu Ghraib, revelations of U.S. torture, and the continuing scar of Guantanamo. If President Obama believes that promoting human rights is important to the fight against terrorism, this history shows that there is only one effective way to begin: by starting at home.

Cross-posted from Balkinization.

Monday, February 9, 2015

Distant War and the Politics of Catastrophe

My earlier musings on this blog are finally turning into a book that puts war death into the history of the war powers. More particularly, I am taking as my point of departure Drew Gilpin Faust, This Republic of Suffering: Death and the American Civil War. During the Civil War, Faust argues, an intimacy with death and dying, and a close experience of war’s brutal aftereffects, transformed the United States, creating “a veritable ‘republic of suffering,’ in the words [of] Frederick Law Olmsted.” If the experience of war death was somehow constitutive of the republic itself during the Civil War, I have been puzzling over how American identity and politics might be affected, or even constituted, by its comparative absence.

Initially, I thought that all the important action in the story happens after World War II, and especially after Vietnam, when three developments isolate most Americans from the direct experience of war: the absence of a draft, the rise in military contracting, and changes in war technologies. But I’ve come to understand that the entire 20th century requires rethinking as a century of distant war.

There was deep and broad-based engagement of Americans in the two world wars, but geographic distance mattered to the politics of war declaration and authorization. In essence, distant war required a politics of catastrophe: presidents made decisions, and then waited for a disaster of sufficient proportions to generate political support, securing strong backing from Congress for what had already been decided. Catastrophe didn’t generate a decision for armed conflict; instead, it facilitated political mobilization.

This easily fits the Spanish-American War, with a war declaration coming on the heels of public uproar over the sinking of the battleship Maine in Havana harbor, mistakenly attributed to the Spanish. And the World War II chapter of my book War Time illustrates the way this fits WWII (though I don’t develop this argument in that book). What was surprising to me was how well it fits World War I.

The important story comes before Woodrow Wilson sought a formal war declaration, in his failed effort to get an “armed neutrality” bill through Congress (a failure due not to the policy but to Wilson’s political missteps). The bill would have enabled Wilson to arm merchant ships that would, in certain areas, fire upon German U-boats without warning, and it would have certainly launched the U.S. into the war. Amid continuing reports of sunken ships and American deaths, Wilson had announced that an “overt act” by Germany would move the United States closer to war. Wilson, his close advisers, and the press then contemplated whether particular sinkings were the “overt act” he had in mind. Ultimately the “overt act” was the sinking of the Laconia, with only three American deaths. Wilson used the incident to build political momentum. Biographer Arthur Link wrote that “Wilson’s decision to capitalize on the incident was apparently part of his strategy for focusing public pressure on Congress.” Others were puzzled, since many more Americans had been killed in previous incidents that had not been the magic “overt act.”

This illustrates an important role of catastrophe in war politics. The terrible event doesn’t always lead to a new policy. Instead, a catastrophe is needed for political reasons: to generate support for a decision already made. And catastrophe itself is defined by politics, not by the event itself. Public opinion scholar Adam Berinsky has written that “the facts of war do not speak for themselves.” Neither do the facts of catastrophe.

I am continuing to work this out. In the meantime, if you are in the SF Bay Area and want to see how it all turns out, my David M. Kennedy Lecture on the United States and the World, May 12 at Stanford, will be on The Politics of Distant War: 1917, 1941, 1964. You can RSVP here. I'll give a similar lecture at the University of Washington on May 21.

Cross-posted from Balkinization.