
On this day in 1926, Congressman Victor Berger (Socialist-Wisconsin) met Calvin Coolidge at the White House to ask the president’s support for a pardon of Eugene Debs, the Socialist leader and frequent presidential candidate who spent more than two years in federal prison for a speech of June 1918, whose “probable effect will be to prevent recruiting.”1 Coolidge reportedly displayed “sympathetic interest”—his predecessor Harding had commuted Debs’s sentence to time served.

When reporting on Berger’s meeting, the New York Times said Berger was trying to restore Debs’s citizenship, tout court; while it was apparently commonplace to refer to Debs as having lost his citizenship as a result of the conviction, Nick Salvatore explains that this was not so, though Debs may have lost his right to vote.

In September of 1926, Debs’s brother Theodore helped him to register to vote, so he could assert his right irrespective of pardon. Debs died before the election that would have allowed him to test the principle.

1The best line of which is, “You remember that, at the close of Theodore Roosevelt’s second term as President, he went over to Africa to make war on some of his ancestors.”

On this day in 1971, a group of activists broke into an FBI field office in Media, Pennsylvania and stole about a thousand government documents with a mysterious notation, “COINTELPRO.” The public revelation of these documents ranks with the Pentagon Papers as one of the most significant exposés of government secrets in U.S. history.

The burglars, who have never been identified, entered the two-person office in Media as much of the nation was huddled around television sets to watch the Muhammad Ali-Joe Frazier fight. The activists did not fully understand the documents they found, but they quickly decided that the public had the right to see them.

About two weeks later, two prominent antiwar lawmakers and reporters at major newspapers received copies of the files in plain brown envelopes. Most of the recipients accepted the FBI’s judgment that the files were “secret”: the New York Times and Los Angeles Times did not write about the documents, and the legislators returned their sets to the FBI. But Washington Post editors believed that the public had the right to know about the spying. The Post broke the first COINTELPRO story on March 24, 1971, revealing how the bureau had used mail carriers and a campus switchboard operator to eavesdrop on a radical professor at Swarthmore College.

A Senate investigating committee headed by Frank Church of Idaho later revealed the vast reach of COINTELPRO, which was the acronym for the FBI’s counterintelligence program. FBI Director J. Edgar Hoover started the operation in 1956 in response to Supreme Court rulings that made it more difficult to prosecute Communists. Under COINTELPRO, the bureau recruited “informants” – a euphemism for “informer” – to infiltrate the dwindling ranks of the Communist Party, disrupt its plans, and discredit its members. COINTELPRO agents planted “snitch jackets,” or false letters identifying a target as an informer, wrote anonymous poison pen letters, and spread rumors about political apostasies and marital infidelities. In other words, the FBI did not just monitor these individuals, but tried to break up their marriages, “seed mistrust, sow misinformation,” and provoke them to commit crimes so that they could be arrested.

The FBI originally directed this program at American Communists, but it soon broadened its definition of communism. By 1960, when the Communist Party counted about five thousand members in the United States, the bureau maintained more than eighty times that number of files on “subversive” Americans at its headquarters, and FBI field offices around the country collected even more. One purpose of COINTELPRO, according to an official memo, was to “enhance the paranoia endemic in [dissident] circles” and convince activists that “there is an FBI agent behind every mailbox.” The agents believed that paranoid, divided dissident groups were easier to handle than purposeful, united dissident groups. In other words, the FBI conspired to create fear of conspiracy.

By the mid-1960s, COINTELPRO had expanded to spy on, infiltrate, and disrupt a wide variety of activist groups, including the antiwar movement, women’s liberation groups, civil rights organizations, and the black power movement. The FBI also targeted some “white hate” groups like the Ku Klux Klan, but most of its efforts went into disrupting the left. Most notoriously, FBI officials spied on Martin Luther King, Jr. At Hoover’s direction, agents wiretapped King’s phones, bugged his hotel rooms, and did everything they could to take him “off his pedestal and to reduce him completely in influence,” as one FBI memo put it. The FBI peddled evidence of King’s extramarital affairs to public officials and journalists. Just before King was to accept the Nobel Peace Prize in 1964, the assistant FBI director sent the new laureate his own copy of the evidence. King received a composite tape in the mail that included audio recordings of his alleged trysts. A letter sent with the tape concluded with this threat: “King, there is only one thing left for you to do. You know what it is. …You are done. There is but one way out for you.” The FBI, in other words, tried to persuade the internationally recognized leader of the American civil rights movement to kill himself.

The Church Committee denounced COINTELPRO as a “sophisticated vigilante operation aimed squarely at preventing the exercise of First Amendment rights of speech and association.” With all due respect to the committee, FBI agents are, by definition, not vigilantes; they’re agents of the state. In this case, they were state agents of repression. It took vigilantes who fought for the right to know – those burglars in Media – to bring this secret government program to public attention.

It’s always useful to remember how low expectations were for Abraham Lincoln when he took office. Even his ostensible allies sometimes described him as a rube, a hayseed out of his depth in troubled times. As for his political enemies, the editors at Harper’s Weekly*, a publication that had shilled for Stephen Douglas during the 1860 campaign, printed the above cartoon on this day in 1861. Less than a week before Lincoln’s inauguration, the artist, John McLenan, depicted the president-elect, apparently drunk, joking with cronies as a funeral procession for the Constitution and Union passed by in the background.

* The editors at Harper’s maintained a Unionist stance throughout the war. And by the end of the conflict, the publication had become aggressively pro-Lincoln.

On this day in 1942, the Japanese submarine I-17 surfaced off the West Coast of the United States and fired a volley of shells at the Ellwood Oil Refinery complex in southern California. The attack came after dusk on February 23rd, just as Americans were settling onto their couches to listen to a fireside chat by FDR. Despite claims otherwise, this was the first foreign attack on the continental United States since the War of 1812.*

The shelling had no appreciable military effect, with none of the shells getting terribly near the refinery itself; instead they blew holes in nearby farmland. What it did provoke, however, was an intensive hunt for the submarine by American forces and a continuing wave of scares on the West Coast over the next few years, including one just days later, when reports of Japanese aircraft over Los Angeles sparked several barrages of anti-aircraft fire and a five-hour blackout during which two Angelenos were killed in traffic accidents.

This was not the only Japanese submarine to attack the West Coast. In a particularly impressive effort in September 1942, the Japanese submarine I-25 launched a floatplane that flew inland and dropped two bombs on a section of deserted forest in southwestern Oregon.


On this day in history, the United States took actions that symbolize the contradictions of the Pacific War, at home and abroad. On February 19, 1942, President Roosevelt signed Executive Order 9066, which effected the internment of ethnic Japanese (Issei) and Japanese-Americans (Nisei) living in the western United States. Three years later, in 1945, forces of the V Amphibious Corps put two divisions on the black sands of Iwo Jima. In a sense, these linked days were, in their own particular way, indicative of the beginning and the end of the Pacific War. The internments–perhaps the most shameful act of Roosevelt’s presidency–highlight the confusion, fear, and chaos of the immediate months after Pearl Harbor. Iwo Jima, at the other end, demonstrated the bloody grinding that the war had become by 1945.


The attack on Pearl Harbor had thrown the United States into war with Japan. It also reinforced suspicions that many Americans had about the Issei and Nisei living in the west. “Fifth column” activity had been a constant worry in the U.S. since the war in Europe started, and suspicious individuals in the east had been questioned by the FBI for their connections to Germany or Italy. What was different in the American west, however, was the rapid shift–driven largely by racism–from the suspicion of individuals to the suspicion of the entire group. The panic that overtook the West Coast after Pearl Harbor soon focused–at least in part–on supposed Japanese fifth columnists active in California, Oregon, and Washington. The Attorney General of California, Earl Warren, issued a study claiming that Japanese-Americans lived in greater numbers near sensitive military targets. This, Warren thought, meant that they were concentrating themselves and waiting for an opportunity at sabotage. General John L. DeWitt, the head of the Western Defense Command, echoed Warren’s assessment. The result, in mid-February, was Executive Order 9066, which laid the groundwork for the exclusion of individuals from sensitive “military areas.”

On this day in 1968, guerrilla forces in South Vietnam mounted a massive offensive against both American forces and the Army of the Republic of Vietnam (ARVN). Timed for the Vietnamese holiday of Tet, the attack caught the defenders unready and shocked an American public that had been assured by both its military and President Lyndon B. Johnson that the U.S. was winning in Vietnam. The result of Tet was a military disaster for the Vietcong attackers but a political victory for the communist effort in South Vietnam. The fallout from Tet brought down a president, as LBJ would drop his reelection effort in 1968, and it essentially doomed the American effort in Vietnam. Tet became mythologized as the embodiment of the violence and futility of Vietnam, never more so than in Stanley Kubrick’s film Full Metal Jacket.


In Underworld, Don DeLillo describes Pieter Bruegel’s 16th-century painting The Triumph of Death as a “census-taking of awful ways to die.” Indeed, Bruegel possessed an expansive view of physical suffering. In his work, scythe-wielding skeletal horsemen cut down peasants like fields of wheat; the already-fallen are gobbled by dogs; throats are opened; maidens are ravaged; bodies are hanged, speared, and stuffed into the crotches of trees. One supplicant victim offers a prayer for relief to a god who — if he exists at all — will likely arrive too late to save the poor fellow’s head from being pruned by a broadsword.

If the victims of the 1922 Knickerbocker Theater disaster had thought about it a bit, they might have scoffed at Bruegel’s failure to depict anyone being crushed or pressed to death, much less buried — as they had been — along with nearly a hundred others beneath a tumulus of cement, brick, timber, and steel. On January 28 of that year, roughly 500 moviegoers in the northwest corner of the nation’s capital took refuge from a ferocious blizzard by watching a silent comedy called Get-Rich-Quick Wallingford — a film, we presume, about a fellow named Wallingford and his zany schemes to get rich (and quick!) — when the flat roof of Harry Crandall’s theater buckled under the weight of more than two feet of snow. The blizzard, which had begun the previous day, had dumped snow from Greenville, South Carolina to Reading, Pennsylvania and from Knoxville to Cape Cod. By the evening of the 28th, several feet of snow had accumulated in the District of Columbia, with drifts as high as 16 feet in places. The roof of the Knickerbocker — held aloft by an arch of girders rather than support pillars — gave out a few minutes after 9:00 p.m., just as the second feature was getting underway. The New York Times described the moment as a “mighty symposium of exquisite pain.”

With a roar, mighty as the crack of doom, the massive roof of the Knickerbocker broke loose from its steel moorings and crashed down upon the heads of those in the balcony. Under the weight of the fallen roof, the balcony gave way. Most of the audience was entombed. It was as sudden as the turning off of an electric light.

According to an eyewitness who had just entered the theater, a “hearty peal of laughter” preceded the collapse by a few seconds. Everyone in the balcony was crushed, and the falling balcony in turn came down on the orchestra seats beneath.

As survivors tumbled from the blown-out doors and rescuers yanked limp and de-limbed bodies from the mess, a candy store next to the theater was converted into a makeshift hospital. Doctors from Walter Reed Hospital arrived on the scene and began tending to the injured. Several Catholic priests offered mass absolution to the unknown scores of people still pinioned beneath the rubble. A Christian Science church across the street was used as a temporary morgue. Among the dead was Andrew Jackson Barchfeld, a Pennsylvania Republican who had served five terms in the US House before losing his seat in 1916. Among the living was Alben W. Barkley — Harry Truman’s future Vice President — who was then serving as a Congressman from Kentucky. The two men were not attending the film together, though Barkley later helped retrieve his former colleague’s body from the wrecked building.

Newspaper reports were stuffed with the usual familial dramas — newly-wedded husbands searching frantically for wives, children identifying the bodies of their dead parents, the rumored dead turning up alive and well. (Alben Barkley’s own son Murell, initially thought to be among the dead, was not.)  Rescue workers recounted the implausibly prosaic last words of brave men who expired before they could be withdrawn from the rubble, while others described the stoic demeanor of a nameless boy who lay patiently beneath a slab of roofing. The Times reporter, describing the anonymous lad’s plight, swelled with nationalist bathos:

He merely lay there, a set look on his face, a determination to get out of there if that were possible, to die game if that were inevitable. It was the American spirit intensified. Every man aiding in the rescue of this boy knew what that spirit meant, and it helped them mightily. It was the supreme splendor of the nation in the face of crisis. It was boyhood risen to man’s estate.

All told, 98 people lost their lives in the Knickerbocker Theater, while more than 130 were injured. An engineering report published a few months later criticized the design of the building, claiming that it was “not an example of poor engineering but rather an illustration of the entire absence of engineering,” with the roof and balcony “planned and erected with a total disregard of all consideration of stability.” Five years after the disaster, the building’s architect killed himself.

[Editor’s Note: Professor Lori Clune returns today for another guest post here at EotAW. Thanks, Lori, for your help with this.]

On this night in 1953, 71.7% of American televisions were tuned to CBS as Ricky and Lucy gave birth to their son, Little Ricky, on I Love Lucy. Well, actually Lucy did all the work off-screen. As many of us recall (thanks to endless reruns), Ricky spent much of the episode in outrageous voodoo face makeup for a show at his club, the Tropicana. From Lucy’s calm statement, “Ricky, this is it,” to the nurse holding up the swaddled bundle, the viewer saw no drugs, pain, or mess. Heck, we weren’t even sure how Lucy came to be “expecting” (CBS nixed saying “pregnant”), what with their two twin beds and all. Lucy and Ricky Ricardo (Lucille Ball and Desi Arnaz) — also married in real life — had welcomed their second child, also a boy, via scheduled caesarean section that morning.

The next day 67.7% of televisions tuned in to watch Dwight D. Eisenhower take the oath of office as the 34th POTUS. During the 1950s, television was invading American homes. Only 1 in 10 American homes had a television in 1949; by 1959 it was 9 in 10. Eisenhower’s inauguration (while earning a lower rating than Little Ricky’s birth) reached a substantial number of Americans, about seven times more than had seen or heard Truman’s inaugural just four years before.

As Eisenhower explained in his inaugural address, in 1953 the United States faced “forces of evil…as rarely before in history.” No one needed to be told that the forces of evil were communist. Few Americans, however, could have imagined that a force of evil was the very redhead they loved in their living rooms.

Just eight months after the birth of Lucy’s “sons,” the House Committee on Un-American Activities (HUAC) charged Lucille Ball with membership in the Communist Party. Ball had registered with the CP in Los Angeles in the 1930s. She may have even held some meetings at her house. Ball testified to HUAC that she only joined the CP to placate her socialist grandfather, who had insisted that she register as a communist. She claimed to be ignorant and “never politically minded.” When pressured by the press, her husband Desi Arnaz retorted, “the only thing that is red about this kid is her hair – and even that is not legitimately red.”

Within days of her testimony, HUAC took the unprecedented action of calling a press conference and announcing that “there is no shred of evidence” linking Lucille Ball to the Communist Party. The committee made this unusual “public exculpation” because they wanted to, in their words, “insure that distortion of available facts not be permitted and that rumor not be substituted for truth in any case.” Historians argue that pressure from CBS and I Love Lucy sponsors – particularly Philip Morris – inspired HUAC’s action. Regardless of the reason, after a “seven-day brush with the blacklist,” HUAC cleared Lucille Ball’s name. Few accused communists were so lucky.

President Eisenhower enjoyed remarkable approval ratings, averaging over 60%, throughout his eight years in office. However, based on TV ratings, Americans loved Lucy more. In fact, Americans loved Lucy so much, they were willing to forgive her purported membership in the Communist Party. Desi Arnaz explained, “Lucille is 100 per cent an American…as American as Ike Eisenhower.”

Some 50 million Americans watched Lucy during the 1953-54 season. Everyone still loved Lucy. A lot.

[Author’s note: I hope you’ll forgive me for recycling a post from last year. I’m doing so because MLK, Jr., had he not been gunned down on April 4, 1968, would have been 80 years old today. And while I don’t want to let the occasion pass without comment, I’m too tired and busy to think of anything new to say.]

The Martin Luther King of American memory serves this nation as the safe Civil Rights leader. When shrunk to fit within the confines of soundbite history, the pages of a textbook, or the scenes of a primary school pageant, King is cleansed of anger, of ego, of sexuality, and even, perhaps, of some of his humanity.

Counterpoised against the ostensibly violent Malcolm X, who supposedly would have forced America to change its ways by using “any means necessary,” King comes off as a cuddly moderate — a figure who loved everyone, enemies included, even whites who subjugated black people. Although there’s some truth lurking behind this myth, there was more (about both X and King) to the story: complexities and nuances that escape most popular recollections. Martin Luther King, no matter how people remember him now, was not nearly so safe as most of us believe.

On March 12, 1968, less than a month before he was shot and killed in Memphis, Tennessee, King visited the wealthy Detroit suburb of Grosse Pointe. Largely white, Grosse Pointe was — and to some extent still is — a bastion of establishment power. By that point in his career, King had embraced issues that moved well beyond the struggle against de jure segregation in the South. He had begun focusing most of his energy on inequality nationwide — de facto issues of poverty, job discrimination, fair housing, and, as Matthew Yglesias notes, the Vietnam war.


In addition to being a thoroughly wretched president, Franklin Pierce delivered what is inarguably the most depressing opening sentence in the history of American inaugural addresses. On 4 March 1853, the fourteenth President began his unremarkable 3,334-word speech by harshing even the mellowest of mellows. With snow plummeting from the sky, Pierce observed to his audience that

[i]t is a relief to feel that no heart but my own can know the personal regret and bitter sorrow over which I have been borne to a position so suitable for others rather than desirable for myself.

Pierce’s audience would have understood immediately that the New Hampshire Democrat was referring to the unimaginable personal horror that had befallen his family eight weeks earlier, on January 6. Pierce and his wife, Jane, had already known a lifetime’s worth of hardship during their two decades of marriage. Their first son, Franklin Pierce, Jr., had died within days of his birth in 1836, midway through his father’s second term in the House of Representatives. Jane Pierce, a devout Congregationalist who loathed the culture of Washington, D.C., became convinced that her husband’s political career had roiled an angry God against his family. Her theory was seemingly disproved in 1843, when Pierce’s retirement from the Senate — a decision Jane herself had urged — was rewarded with more death. Less than a year after Pierce resigned early from his only term in the upper house of Congress, his second son, Franklin Robert, succumbed to typhus at the age of four. With fate having vanquished his two brothers, little Benjamin Pierce was now the sole heir to his father’s vast misfortune.

When the Democratic Party summoned him from retirement in 1852 and placed him at the top of the presidential ticket, Franklin Pierce knew that neither his wife nor his son wanted to leave Concord and return to the swampland of the Potomac River. This is why he told them — quite duplicitously — that he was an unwilling nominee, the servant to the “unsolicited expression of [the public] will,” as he claimed in his inaugural speech. Jane Pierce prayed daily for her husband’s defeat; Bennie, her precious son, commiserated with her. Two months after their petitions to the Lord were discarded, the unelected members of the Pierce family joined their husband and father on a journey by rail to Boston. During their return trip, a coupler on the train failed and threw several cars — the Pierces’ among them — down a snowy embankment. Benjamin Pierce’s head was crushed and partly severed, and he died instantly.

Jane Pierce initially believed that God had taken their son for the nation’s benefit, so that her worse half might focus on the affairs of state without the distractions that a son might introduce to the White House. When she learned shortly before the inauguration that her husband had not been the reluctant candidate he claimed to have been — that he had in fact encouraged friends to submit his name when the Democrats could not decide between four equally craptacular nominees — Mrs. Pierce withdrew into an impenetrable brume of grief and resentment. She neglected her minimal, ornamental duties as First Lady and refused to appear at the White House for several weeks after the inauguration. When she did, she draped the state rooms in black bunting and retired to her room, where she spent most of her days staring into space or writing letters of apology to her deceased son.

The President, debilitated by his own grief and sapped of enthusiasm for the office, returned with great avidity to the only hobby that continued to interest him: palsying himself with drink, a purpose to which he could devote himself without hindrance, now that he was no longer living in the wastelands of New England temperance.

On this date in 1865, the following statement was entered into the public record:

Now, therefore, be it known that I, William H. Seward, Secretary of State of the United States, by virtue and in pursuance of the second section of the act of Congress approved the 20th of April, 1818, entitled “An Act to provide for the publication of the laws of the United States and for other purposes,” do hereby certify that the amendment aforesaid HAS BECOME VALID TO ALL INTENTS AND PURPOSES AS A PART OF THE CONSTITUTION OF THE UNITED STATES.

The “amendment aforesaid” was the thirteenth such alteration to the United States Constitution and the first since the administration of Thomas Jefferson — a slaveholder whose envisioned “Empire for Liberty” had recently been cut from sternum to pelvis by four years of war. Seward’s announcement was in most respects anticlimactic; the eradication of slavery was a near-universal fact throughout the country by late 1865, with former slaves themselves taking the lead in its demolition. Begrudgingly, in areas of the South under Union control, former masters had generally acknowledged the reality of emancipation, though many had hoped in vain that the US Supreme Court might eventually nullify Lincoln’s wartime proclamation, thus leaving the door open for gradual, compensated emancipation or — even better — the retention of the peculiar institution in all but name.

With the appointment of Salmon Chase to the Court in December 1864, however, those fantasies dissipated. Chase was Lincoln’s fifth appointment to the bench, and his presence virtually assured that Lincoln’s policies would survive any constitutional scrutiny. Political events that fall also seemed to indicate that slavery would continue to wither as the war itself drew to a close. At the state level, new constitutions brought immediate emancipation to tens of thousands of border state slaves. Maryland ended two and a half centuries of chattel bondage on November 1; Missouri would follow in early January 1865. Pro-slavery advocates in Kentucky and Delaware resisted similar efforts in their own states, but they were clearly fighting a headwind. In November 1864, voters throughout the country elected a House of Representatives with a massive Republican majority — 136 out of 193 members — sufficient to ensure the passage of an amendment that would officially transform a pro-slavery Constitution into an anti-slavery Constitution. The amendment to abolish slavery had narrowly failed in the House in early 1864. Now, however, at Lincoln’s urging, the lame-duck 38th Congress finally passed it, 119-56, with the support of fifteen Democrats, including James Guthrie of Kentucky, an erstwhile opponent of the amendment. (Guthrie would later oppose the Fourteenth Amendment as well as the Freedmen’s Bureau, demonstrating that for many Congressmen, abolishing slavery was quite literally the least they could do on behalf of African American freedom.)

Lincoln quickly signed the amendment and watched approvingly as 21 states ratified it; among them was Louisiana, which had recently held free elections, rewritten its constitution, and accepted the Thirteenth Amendment on February 17, 1865. The President would trumpet Louisiana’s transformation on April 11, when he delivered the last speech of his life. Three days later, a few hours prior to his assassination, Lincoln would likely have received word that Arkansas as well had adopted the Thirteenth Amendment, leaving it six states shy of ratification. Over the next few months, the pace of the amendment slowed. Union states like Iowa, California, and Oregon were slow to move on the issue, while New Jersey, Delaware, and Kentucky openly rejected the amendment. In November and early December, the new state governments in South Carolina, Alabama, and North Carolina accepted the amendment, and on December 6, Georgia — originally settled in 1733 as a slave-free colony — unofficially ended slavery throughout the US.

Twelve days later, Seward announced the validity of the Thirteenth Amendment. That same day, a former Republican congressman named Thomas Corwin died in Washington, D.C. Corwin, an Ohioan and ex-Whig, had sponsored an amendment in early 1861 that — had it been ratified by the states — would itself have been the Thirteenth Amendment. The Corwin Amendment, as it was generally known, was part of a final effort among Northern and border state political leaders to capitulate to the demands of slaveholders and to bring back the original seven states of the Confederacy. Passed by a single vote in Congress, the amendment nevertheless failed to win ratification from any state but Ohio and Maryland. The stillborn Corwin Amendment would have protected the “domestic institutions” of the states, including the rights of white people to own black people.

The eradication of those “rights” would require a war of unprecedented carnage and an effectively rewritten Constitution, but the Thirteenth Amendment was merely the starting point of a much longer national struggle over the question of how to define citizenship in a world without slavery. “Verily,” noted Frederick Douglass, “the work does not end with the abolition of slavery, but only begins.”

[Editor’s note: silbey’s back for another guest post. Which reminds me, there are only sixteen shopping days until Christmas and thirteen until Hanukkah. Hey, you know what makes a great gift? A beautifully written, deeply researched, and thoughtfully argued book, that’s what. Anyway, thanks, silbey, for your efforts.]

On this day in history (Tokyo time), units of the Imperial Japanese Navy mounted an assault on the American naval base at Pearl Harbor. English language accounts of the attack, whether scholarly or popular, have focused on the American side of things, usually with a nod to Japanese treachery. But it is the Japanese side that is actually—in military terms—the more interesting. Like the Germans in 1940, the Japanese showed with devastating effect the value of a new method of warfare. The attack on Pearl Harbor rewrote the doctrine on naval warfare, and much of the next three years consisted of both navies, Japanese and American, desperately deciphering the writing.

What was revolutionary was not the use of airpower. Or, to put it more accurately, it was not simply the use of airpower. Aircraft carriers had appeared in all the world’s navies in the interwar period and were now integral parts of the fleet. The short-ranged and slow planes flying off an aircraft carrier were, however, largely incapable of inflicting substantial damage on the main line of a battle fleet, Billy Mitchell notwithstanding. Against the enormously thick armor of the battleships—designed to protect against shells coming in at supersonic speeds—the puny bombs carried by those aircraft did little damage. Thus, the aircraft carrier became an adjunct to the heavies, used for scouting and observation. The battle line charged forward while the aircraft carrier lurked in the background, a second-class citizen.


What changed in the run-up to WWII was the pace of technological innovation and doctrinal experimentation. The basics of the aircraft carrier had been settled in the 1920s and remained similar throughout the war, changing only in size. But planes evolved rapidly and dramatically in the 1930s, a pace that accelerated late in the decade. Aircraft went from being slow, short-ranged biplanes to fast, long-ranged monoplanes. Plane generations shifted from year to year, and an aircraft that was cutting-edge one year might be obsolescent the next. The two navies that took the greatest advantage of this were the American and Japanese. Both, surrounded by massive oceans, had a vested interest in naval excellence, and both worked feverishly to figure out how to use these new weapons.

As a result, doctrine sped along with technology. The challenge was to deliver massive amounts of ordnance at extended range against armored, maneuvering ships which, annoyingly, would shoot back. The settled result in both navies was the creation of three kinds of aircraft: dive-bombers, torpedo planes, and fighter escorts. The first, dive-bombers, would attack from high in the sky, diving towards the target and releasing the bomb at the last moment before pulling up. Torpedo planes, on the other hand, would attack from low, angling towards their target and releasing a torpedo which would drive the rest of the way in and hit the ship. Fighter escorts would protect the first two groups on their way to the target and as they attacked.

That was the theory. In practice, it proved enormously difficult for either dive bombers or torpedo planes to find or hit their targets. The Pacific was a big ocean and fleets—no matter how large—were an infinitesimal part of it. Even if found, warships did not *want* to be hit, and did everything they could to avoid it. They shot at the planes. They twisted and turned to avoid the bombs and torpedoes. They launched fighters to counterattack. They sailed into rain squalls. They kept their lights off at night. All of these things meant that the attacking aircraft usually scored an extremely low percentage of hits. Later in the war, at Midway, American planes mounted hundreds of attacks on the Japanese fleet and scored fewer than ten direct hits (all bombs, no torpedoes). And Midway was an overwhelming American triumph.


In planes and aerial doctrine, then, the Japanese and American navies were similar. Neither had an enormous advantage in plane technology overall. The Japanese torpedo bomber, the Nakajima B5N (“Kate,” its US identifier), was better than the American Douglas TBD Devastator, and the torpedo it dropped was *much* better. The American Douglas SBD Dauntless was a better plane than its Japanese counterpart, the Aichi D3A (“Val”). The fighter escorts on each side were so different as to be nearly incomparable. The Japanese Mitsubishi A6M (“Zeke,” though better known as the “Zero”) was a lightweight, highly maneuverable, long-ranged fighter that achieved those qualities by sacrificing any armor protection at all, for either pilot or plane. The American Grumman F4F Wildcat was not particularly maneuverable, rather short-ranged, and pretty heavy, but its toughness and heavy armament were legendary. Whereas Zeros tended to dissolve into flames under fire, the stories of Wildcats making it home after being hit by hundreds or thousands of bullets are too many to recount.

Where the Japanese surpassed the Americans was not in their use of the planes but in their use of carriers. Whereas American carriers were deployed singly, as part of a larger squadron including battleships, the Japanese put their carriers together—for the most important missions—as a single large striking force. Early in 1941, Admiral Isoroku Yamamoto, the commander-in-chief of Japan’s Combined Fleet, organized the First Air Fleet, consisting of all of Japan’s carriers. The First Air Fleet, and most particularly the six heavy carriers composing the Kido Butai, or “Mobile Striking Force,” would be the hammer of the fleet. Off their decks could fly more than 300 warplanes, a larger number than that of any other fleet in the world. What they could not achieve in individual accuracy, the Japanese aimed to make up in numbers.


It was this fleet that sailed to attack Pearl Harbor. A British attack on the Italian fleet at Taranto in 1940 had suggested to the Japanese that such an assault was possible. The reasons leading to the attack, and the events of the attack itself, have been detailed innumerable times. For the purposes of this post, however, the critical thing is that the new Japanese doctrine worked well. The 353 aircraft launched from the decks of Kido Butai savaged not only the fleet at anchor in Pearl, but also the aircraft and equipment ashore. It was perhaps the easiest possible target: a naval base taken by surprise, with ships at anchor, boilers dark. Having said that, even such an overwhelming assault failed to destroy critical parts of the base: the American submarine pens, the fuel oil farm, and, most critically, the American aircraft carriers. Such was the inefficiency of air attack in 1941.


It was, nonetheless, a spectacular success for the Japanese. They had demonstrated to themselves and to the world the effectiveness of concentrated naval airpower. Kido Butai would roam the Pacific over the next six months, hammering target after target and ranging as far as Sri Lanka in the Indian Ocean and Australia in the southern Pacific. But the Americans were fast learners, and in this new naval doctrine loomed the seeds of Japanese defeat. If the advantage lay not in who could amass an extra ship or two, but an extra fleet or two, then the Americans had a decisive advantage, not only in the production of carriers (24 American vs. 16 Japanese during WWII) and planes (over 300,000 for the U.S. vs. 75,000 for the Japanese) but in the industrialization of pilot training (several hundred thousand vs. 15-20,000). By 1945, the “Murderers’ Row” of the American 3rd and 5th Fleets, with 8-10 carriers and over 500 planes, roamed the Pacific, hammering the Japanese as Pearl Harbor had been hammered. The six carriers of Kido Butai did not live to see that day.


Via Tomasky, Cosell announces the shooting death of John Lennon, this day in 1980. The YouTube embedding is purposely disabled, so you have to follow the link. Here, though, is the seasonally necessary John Lennon:

We want it.

So wrote Joan Didion—born on this day in 1934—on the eve of this year’s Presidential election.  Not that she found this evasion particularly shocking: “This was not an unpredictable occurrence,” she wrote in 1998’s “Clinton Agonistes.”  “These were not entirely unpredictable developments,” she writes in this month’s New York Review of Books.  The baseness of American politics may never surprise her, but the stupidity of the American electorate rarely fails to.  The object of her pithy restatements of talking points is never the political actor who mouths them (from whom nothing more nor better can be expected).  In these moments, the indictment is always of the public too stupid to recognize the absurdity of issues:

We heard repeatedly about “our children,” or “our kids,” who were, as presented, avid consumers of the Nightly News in whose presence sex had never been mentioned and discussions of the presidency were routine. (Political Fictions 233)

We could argue over whether or not the McCain campaign had sufficiently vetted its candidate for vice-president, but take at face value the campaign’s description of that vetting as “an exhaustive process” including a “seventy-question survey.” (“A Fateful Election”)

She does recognize the complicity of the press in this—what with its “arresting enthusiasm for overlooking the contradictions inherent in reporting that which only occurs in order to be reported” (Political Fictions 30)—but the fault belongs less to the stewards of democracy and their flacks for behaving thus than to the public who willingly consumes it.  We are a deeply stupid republic, but as Didion quotes William Safire advising the Dukakis campaign, “We hate to be seen to be manipulated” (31).

This is typical Didion: frame the norm such that its absurdity becomes obvious, then crucify someone with their own words.  Safire, speaking on behalf of the American people, instead ridicules them.  Bob Woodward, attempting to establish his thoroughness, instead reveals his methodological commitment to a scrupulous passivity untainted by thought:

The disinclination of Mr. Woodward’s to exert cognitive energy on what he is told reaches critical mass in The Choice, where not much said to the author by a candidate or potential candidate appears to have been deemed too insignificant for inclusion, too casual for documentation.  (“Most of them permitted me to tape-record the interviews, otherwise I took detailed notes.”) (Political Fictions 196)

Didion’s prose here embodies the very principle it defends: Woodward’s inclusion of irrelevance is proven through the judicious selection of his most irrelevant sentence.  No fat need (nor can) be trimmed from its meat.  This otherwise laudable feature has drawbacks (to more than the would-be parodist) apparent when inevitable tragedies occur:

Nine months and five days ago, at approximately 9 o’clock on the evening of December 30, 2003, my husband, John Gregory Dunne, appeared to (or did) experience, at the table where he and I had just sat down to dinner in the living room of our apartment in New York, a sudden massive coronary event that caused his death. Our only child, Quintana, then 37, had been for the previous five nights unconscious in an intensive-care unit at Beth Israel Medical Center’s Singer Division, at that time a hospital on East End Avenue (it closed in August 2004), more commonly known as “Beth Israel North” or “the old Doctors’ Hospital,” where what had seemed a case of December flu sufficiently severe to take her to an emergency room on Christmas morning had exploded into pneumonia and septic shock. This is my attempt to make sense of the period that followed, weeks and then months that cut loose any fixed idea I had ever had about death, about illness, about probability and luck, about good fortune and bad, about marriage and children and memory, about grief, about the ways in which people do and do not deal with the fact that life ends, about the shallowness of sanity, about life itself.

I have been a writer my entire life. As a writer, even as a child, long before what I wrote began to be published, I developed a sense that meaning itself was resident in the rhythms of words and sentences and paragraphs, a technique for withholding whatever it was I thought or believed behind an increasingly impenetrable polish. The way I write is who I am, or have become, yet this is a case in which I wish I had instead of words and their rhythms a cutting room, equipped with an Avid, a digital editing system on which I could touch a key and collapse the sequence of time, show you simultaneously all the frames of memory that come to me now, let you pick the takes, the marginally different expressions, the variant readings of the same lines. This is a case in which I need more than words to find the meaning. This is a case in which I need whatever it is I think or believe to be penetrable, if only for myself.

The key clause there—the moment when her prose betrays itself and lays its innards bare—speaks to how her husband’s death and daughter’s illness “cut loose any fixed idea [she] had ever had.”  For Didion, fixed ideas are so much fat to be trimmed and, needless to say, self-butchery is neither clean nor painless.

Sixty years ago today, the House Un-American Activities Committee announced that Whittaker Chambers, a confessed former Soviet spy, had produced physical evidence of a ring of Communist spies in the New Deal. He had plucked this evidence — rolls of microfilmed documents — out of a hollowed-out pumpkin on his Maryland farm. (Chambers had actually hidden the papers in a dumbwaiter for a decade, and had moved them to the pumpkin just a few days earlier, which he allegedly saw as a safer hiding spot.)

The Pumpkin Papers, as they were quickly dubbed, included documents in the handwriting of former State Department official Alger Hiss and former assistant Treasury Secretary Harry Dexter White.  Neither man was still in government at the time, and the documents were more than a decade old.  But they did indicate that a handful of New Deal bureaucrats had stolen information for Moscow.  In the minds of conservatives, they provided proof that the entire New Deal was actually a communist project.

The story of the papers, which became iconic to conservatives, provides the focal point of an annual dinner in Washington, D.C. for a group of a hundred or so aging Chambers fans.  Senators, former CIA directors, Richard Nixon, and even Kenneth Starr have attended.  Because this dinner delights Ari as a historian of memory, I provide below Bruce Craig’s description of it in his great book on White:

When chimes signal the appointed hour, the formally outfitted guests enter the cavernous ballroom, where, in the pitch darkness, flickering jack-o-lanterns adorn all the tables.   At every place setting is a paperback copy of the cognoscenti’s most sacred text: Whittaker Chambers’s Witness.

Before taking our seats all eyes are on the head table, specifically, on the largest jack-o-lantern of all but one that is unlit.  In reverent silence, all watch as a senior member of the group ceremoniously extracts three rolls of 35-mm film from the cavity of the jack-o-lantern, and, with deliberate flair, waves them unceremoniously over his head….

With the strike of the match the face of the traitorous Hiss is outlined in the intricately carved jack-o-lantern, and so begins the annual meeting of the little known and at one time secret institution of the ‘Pumpkin Papers Irregulars.’

Then again, Ari’s love of this anecdote may be unrelated to his intellectual interests and instead a byproduct of his personal cosmology. (See also, here and here.)

“The Legend of John Brown” by Jacob Lawrence

Editor’s note: Caleb McDaniel, who many of you (at least those of you familiar with internet traditions) may remember from modeforcaleb, joins us today for a guest post. Thanks, Caleb, for taking the time to do this. We really appreciate it.

One hundred and forty-nine years ago today, the state of Virginia hanged the militant abolitionist and Kansas Free State warrior, John Brown.

A month and a half earlier, Brown had led a band of twenty-two men, including three of his sons, in a daring–and disastrous–raid on the federal armory in Harpers Ferry in western Virginia, a raid intended as a direct strike on the institution of slavery within the South itself. Captured on October 18 and quickly tried by the state, a wounded Brown spent November in a jail cell in Charlestown, Virginia. Then, on December 2, he was escorted from his jail cell to the gallows. As he left the prison, he handed a note to his jailor predicting that more lives–many more–would be lost before slavery died: “I John Brown am now quite certain that the crimes of this guilty, land: will never be purged away; but with Blood.” Barely one year later, South Carolina seceded from the Union, initiating a sequence of events that led to the American Civil War.

Read the rest of this entry »

Football Players

On November 30th, 1899, at Sixteenth and Folsom Streets in San Francisco, Berkeley defeated Stanford 30-0 in the Big Game. The most famous trophy of the game was the Axe, which had been introduced in the baseball Big Game that spring. But with this victory, the second in a row for Cal football, Mayor James Phelan of San Francisco also awarded Berkeley a finer and more substantial trophy, a lifesize bronze statue called “The Football Players”, which stands today in a grove toward the west side of campus, on the way up into the university from downtown Berkeley.

Douglas Tilden was born in 1860, and attended the California School for the Deaf in Berkeley. He went to New York and then to Paris for further studies. He finished “The Football Players” at the end of seven years in Paris — note that, apart from being French, the players are dressed for rugby rather than American football. He did several other public sculptures in the Bay Area, including the “Baseball Player” in Golden Gate Park, the Mechanics’ Monument on Market Street downtown, and the California Volunteers’ Memorial at Market and Dolores. The monuments are bombastic in the style of the day, and hard to look at seriously now. (My daughter likes the Volunteers’ Memorial, not because she’s passionate about the Philippines War but because the horse has wings.) But the “Football Players” does something quite different from any of these, turning the conservative academic style to recognizably human ends — the composition, the angles of the limbs, and so on harmonize with the gazes and the points of contact between the bodies, making a vivid if prettified image of male friendship and physical intimacy.

Photo by Flickr user marymactavish used under a Creative Commons license.

Today is the 66th anniversary of the worst nightclub fire in American history. On November 28, 1942, Boston’s Cocoanut Grove — located just south of the Common — erupted in flames that killed hundreds within a mere 15 minutes.  The club was stuffed that Saturday night with sailors on shore leave, young men from other branches who were preparing to head overseas for the war, as well as football fans who’d watched Holy Cross dismantle Boston College, 55-12, earlier in the day.

The fire began innocently enough, when a busboy — trying to replace a light bulb — lit a match while fumbling about in the dark, looking for the socket.  Though he believed he’d extinguished it, the smoldering match accidentally set fire to a cluster of artificial palm fronds. As it turned out, the bulb he was trying to replace had been removed by a young couple who were making out at one of the tables in the Melody Lounge, one of several large rooms in the club.

The busboy survived; the fate of the couple could never be determined.

One of the most popular attractions in the city, the Grove could not have been rigged better for a catastrophe of this magnitude. To deter freeloaders, the club’s owners had sealed off most of the exits, going so far as to weld them shut. At the time of the disaster, there were only two functioning public entrances to the club. One was a pair of doors that swung inward, while the other was a single set of revolving doors. Both exits quickly choked with bodies when the fire — propelled by leaky refrigerator gas, flammable decorations, drapes and furniture that filled the club — surged from one room to another and up the stairwells to the building’s top floors.

John Rizzo, a waiter at the Grove, recalled the fire a half century later in the Boston Globe:

Everybody panicked. I knew there was a door across the dining room, but about 150 people were headed for it, and everybody was pressed together, arms jammed to our sides. The flame came down the side of the dining room like a forest fire, and within minutes, the stage was consumed with fire. Before I could get out, I got pushed through a door and fell head over heels downstairs into the kitchen and landed on other people.

At the foot of the stairs, I was lucky enough to get on my feet. Everybody was scrambling, trying to break doors to the stock room. I said forget it, they don’t go outside. I saw a heavy lady, Mrs. [Katherine] Swett, the cashier. I said, ‘Take the money, let’s go,’ but she said, ‘I can’t leave the money.’ Later, I saw a big person burned to death, and it was her.

Amazingly, some of the club’s employees tried to make sure that fleeing patrons settled their bills and paid for their coats at the check stand. During the recovery effort, officials reported that dozens of corpses had been robbed.

The final death toll eventually reached 492 — roughly half of the night’s patrons.  The owner of the Cocoanut Grove, Barney Welansky, was convicted of involuntary manslaughter and served nearly four years in prison. Released in 1946, he died several weeks later of cancer.

Shortly after the fire, the city council passed an ordinance that banned “Cocoanut Grove” from ever being used to name another building in Boston.

(Fire-related trivia:  The next time you pass through a set of revolving doors, take note of the flanking set of conventional hinged doors.  Those became standard after the Cocoanut Grove fire…)

In the summer of 1968, Charles Schulz—born yesterday in 1922—decided not to take the path of least resistance.  In the first months of the Presidential race, the politics of Peanuts were as inscrutable as ever:

Read the rest of this entry »

On this day in 1963, Jack Ruby shot accused presidential assassin Lee Harvey Oswald on live television, thus providing material for thousands of conspiracy theory books (including mine).

Ruby, the owner of a strip club in Dallas, said he was distraught by the tragedy of the John F. Kennedy assassination, and especially by its effect on Jacqueline Kennedy.  He had visited the Dallas police station a couple of times during the 48 hours since Kennedy had been shot, milling around with reporters.  On November 24, he wandered into the city jail basement just moments before the police moved Oswald to the county jail.  As the prisoner moved past, Ruby lunged forward and shot him in the stomach:

Ruby’s murder of the man who had earlier shouted “I’m a patsy” caused millions to suspect a wider plot.  Although the government’s official report on the assassination dampened speculation for a time, by the mid-1970s upwards of 90 percent of Americans believed in a conspiracy.  The list of potential villains includes the Soviets, the CIA, the FBI, the Secret Service, the military-industrial complex, the mafia, Fidel Castro, anti-Castro Cubans, the Masons, the Jews, the Federal Reserve bank, aliens, J. Edgar Hoover, Richard Nixon, and Aristotle Onassis.

Ruby (born Jacob Rubenstein) had his own conspiracy theory: that anti-Semites would falsely accuse him of Kennedy’s murder and use his alleged guilt to justify a new holocaust. He told the Warren Commission that they had already begun their work and were torturing and killing Jews in Dallas.  He died of cancer in 1967.
