
[Editor's note: Thanks, as always, to Ben Alpers, for this post. Ben's book can be found here. You'll find that just one copy is never enough. So avoid the rush: buy three today!]

Sixty-one years ago today, on October 20, 1947, the House Committee on Un-American Activities (HUAC) opened its hearings into alleged Communist infiltration of Hollywood. Out of these hearings came the Hollywood blacklist. They form a useful, if still somewhat arbitrary, starting point for the Second Red Scare, which is sometimes mislabeled “McCarthyism” (more on why that’s not my preferred term below).

Rather than recount the probably fairly familiar tale of HUAC and the Hollywood Ten, let me note some things that are important in thinking about this episode today, especially in the wake of Rep. Michele Bachmann’s (R-MN) recent excursion into what is usually called McCarthyism. For those seeking more information about the event itself and its context, there are a variety of excellent accounts of the Hollywood left, the HUAC hearings, and the blacklist. Although nearly thirty years old, Larry Ceplair and Steven Englund’s The Inquisition in Hollywood is still probably the most thorough account. For those who want a shorter, more recent telling of the tale, see Gorham Kinden’s chapter on HUAC in Thomas Schatz’s Boom and Bust: Hollywood Cinema in the 1940s.

The HUAC hearings were by no means the first time that Hollywood fell under legislative scrutiny. In the early 1930s, in the wake of the Payne Fund Studies, Congress investigated accusations that Hollywood was corrupting America’s youth. At the end of that decade, the California legislature, at the urging of Walt Disney (who was engaged in a labor dispute with his animators) among others, investigated accusations of Communist influence in Hollywood. And in 1941, the U.S. Senate held hearings into accusations that Hollywood was producing interventionist propaganda. Part of the force of each of these investigations was the fact that, as a result of the 1915 Mutual v. Ohio case, motion pictures enjoyed no First Amendment protection, as they were legally considered to be a business, not speech. Until this rather bizarre decision was overturned by the Supreme Court in Joseph Burstyn, Inc. v. Wilson (1952; usually called the Miracle Case, after the film that was at issue), the threat of official censorship was a powerful stick that Congress and state legislatures held over Hollywood during any investigation.

Nonetheless, in these earlier investigations, Hollywood studios mostly stuck together and successfully resisted Congressional (and California Legislative) political pressure. In marked contrast, on November 25, 1947, about a month after the first HUAC Hollywood hearing, studio executives met at the Waldorf-Astoria Hotel in New York and issued what is now known as the Waldorf Statement. The Statement was a response to Congress’s contempt citations, issued the previous day, against the ten “unfriendly” witnesses who had refused to testify before HUAC: Alvah Bessie, Herbert Biberman, Lester Cole, Edward Dmytryk, Ring Lardner, Jr., John Howard Lawson, Albert Maltz, Samuel Ornitz, Adrian Scott, and Dalton Trumbo (all but Dmytryk were screenwriters). In the Waldorf Statement, the studio heads, through their trade organization, the Motion Picture Association of America and its president Eric Johnston, announced that the ten unfriendly witnesses (who would shortly be dubbed the “Hollywood Ten”) would be denied employment in the motion picture industry. Thus began the blacklist.

One cannot overestimate how important industry capitulation was to the establishment of the blacklist. Other entertainment industries, most notably Broadway, were much more willing to hire those who had run afoul of red hunters during the 1950s. So why was Hollywood willing to fold in 1947, when it had stood up to previous investigations? After all, despite Hollywood’s liberal — or at least libertine — reputation, the studio bosses had always leaned politically rightward.

In the case of the Senate hearings into supposed interventionist propaganda, most of the studio heads were themselves avid interventionists, so resistance to anti-interventionists in the Senate came easily. And the studios had the support of the White House in their attempts to warn the U.S. public of the Nazi menace. Most importantly, in all the various hearings during the 1930s and early 1940s, Hollywood figured that it made good business sense to resist efforts at state interference.

By 1947 the equation was different for a variety of reasons. The U.S. v. Paramount antitrust case was nearing its conclusion (it was to be argued before the Supreme Court in February, 1948), and Hollywood had reason to fear that its entire business model would soon be ruled illegal. Only a few months later, the Supreme Court did just that. Moreover, Hollywood was in the midst of a series of threatening strikes by some of its craft unions. The Hollywood labor force was divided between more radical CIO unions and more accommodating AFL unions. The Red Scare provided a convenient way for the studios to divide and conquer their labor force, especially given that the industry’s most prominent unions, including the Screen Actors Guild (then headed by Ronald Reagan) and the Directors Guild, were happy to go along with the blacklist. Finally, of course, the studio heads shared the virulent anticommunism of HUAC. Which brings me to the term “McCarthyism.”

As most readers of this blog know, Sen. Joseph McCarthy did not become a prominent national figure in the Second Red Scare until February 9, 1950, when he delivered a speech in Wheeling, West Virginia, announcing that there were 205 known Communists in the State Department. It thus seems odd to give the label “McCarthyism” to an event, the HUAC Hollywood hearings, that occurred before McCarthy had achieved any national prominence. But there’s a larger problem here. McCarthy is a convenient scapegoat for the much broader phenomenon of political repression in the name of anticommunism. And when discussing these issues we still tend to fall back into old political arguments. The phenomenon of U.S. anticommunism in the post-war period is extraordinarily complicated, and we ought to be able to take advantage of historical hindsight in considering it. Yet, even half a century later, public discussions about the period still often reproduce the positions staked out by anti-anticommunists, Cold War liberal anticommunists, or conservative anticommunists. Recent years have seen more or less serious revivals in the reputations of anticommunist Cold War liberals like Niebuhr and Schlesinger and of conservatives like Whittaker Chambers (as well as less serious attempts to revive the reputation of Joe McCarthy). With the apparent guilt of Julius Rosenberg and Alger Hiss, it becomes easy to forget that anticommunism, even of the variety supported by honest-to-goodness Cold War liberals, had its victims — most spectacularly Ethel Rosenberg and those who fell under various blacklists — in Hollywood, the academy, and elsewhere.

So my inclination is always to point people in the direction of work that reminds us of the complexity of these times, such as Kathy Olmsted’s excellent recent post on the Rosenbergs (which I’m afraid is a lot more impressive than these musings).

To return to the present: however vile Michele Bachmann’s attempts to label virtually all Democrats anti-American, they fall into a much broader history of guilt by association that goes back, at the very least, to the First Red Scare (and arguably much earlier), and — as Rick Perlstein in his still-unreviewed-by-this-blog book Nixonland suggests — has been a central element in our politics for most of the last half century. McCarthy fell from grace in 1954. The Hollywood blacklist began to crumble in the late 1950s and was entirely over by the mid-1960s. But we needn’t look back that far to see earlier examples of the kind of politics that Bachmann is practicing. Politicians, chiefly Republicans, have been questioning the patriotism of their opponents, chiefly Democrats, for decades. And they never paused to acknowledge the end of the Red Scare.

On a more positive note, Congresswoman Bachmann is receiving criticism not only from the Democratic Party (which has injected $1,000,000 into her opponent’s congressional campaign), but also from the public (who donated $500,000 to her opponent in the day after her public statement), and even from some Republicans: Colin Powell denounced her statements while endorsing Obama; her Republican primary challenger from earlier this year has announced a write-in campaign against her. As the story of the HUAC hearings suggests, political intimidation of this sort needs institutional support to achieve its fullest impact. And we should be glad that, for the moment at least, there seems to be plenty of institutional pushback against the uglier aspects of this fall’s Republican campaign.

[Editor's note: Paul Sutter joins us today to talk about his research on the Panama Canal. Paul is one of my favorite colleagues in the profession and an outstanding environmental historian. His first book, Driven Wild: How the Fight against Automobiles Launched the Modern Wilderness Movement, is smart, readable, and a great stocking-stuffer. The holidays are just around the corner, people; it's never too early to plan ahead. Thanks, Paul, for doing this.]

On October 10, 1913, President Woodrow Wilson, safely ensconced in the Executive Office Building, pressed a button that remotely triggered a dynamite blast on the Isthmus of Panama, a blast that destroyed the Gamboa Dike and, for the first time, created a continuous liquid passage across Central America. It was a moment that the New York Times called, in language typical of the triumphalism that attended the Panama Canal’s construction, “the greatest spectacle in the history of the world’s greatest engineering work.” The Gamboa Dike had kept water out of the famed Culebra Cut, a monumental excavation through the highest point along the canal route, and as the dike collapsed and water rushed into the cut from Gatún Lake, workers joyously rode the rapids in small-draught boats. But the destruction of the Gamboa Dike did not mean that the Panama Canal was ready for business — as the Times put it, “Besides the wreckage of the Gamboa dike there are two earth slides to be cleared away before large vessels can pass from ocean to ocean.”(1) The first complete passage by a boat of any size took place on January 7, 1914. U.S. officials then made plans for an opulent opening celebration, to occur in early 1915, but the exigencies of the First World War scuttled those grand plans. The Panama Canal opened to the world’s oceangoing traffic in August 1914, to minor fanfare.

Nonetheless, the Gamboa Dike’s destruction symbolically fulfilled a centuries-old dream — if only for daredevil canoeists on that October 10th — of a direct water passage from west to east. Or was it east to west? Actually, to be precise, it was, going from the Caribbean to the Pacific, a passage that took ships in a southeasterly direction, the counterintuitive result of a wicked bend in the Isthmus that makes Panama look like an S that fell on its face. But, at the moment, precision mattered much less than superlatives and grand geographical theorizing. The gap had been breached, continental geography defied, and the dream of a passage to India realized. Americans were certifiably full of themselves.

There are so many ways in which the construction of the Panama Canal was earth-shattering in its historical import, and it is easy to fall for the hyperbolic rhetorical celebrations that attended its completion. Finished six months ahead of schedule and under budget, the Canal struck many as an unvarnished triumph of American administrative and engineering know-how. Indeed, during the 1910s, literally dozens of books appeared to crow about how the Americans had succeeded where the French — whose canal-building effort in the 1880s was plagued by financial scandal and catastrophic disease mortality — had failed. If one were to search for a single symbolic moment to mark America’s self-conscious arrival as a global industrial and economic power, the successful completion of the Panama Canal would be as good a candidate as any.

But American triumphalism hid — and in many ways has continued to hide — what was a more complicated achievement. Behind the claims of American achievement, for instance, lay the work of hundreds of thousands of non-American laborers, most of them black West Indians, though there were large numbers of Spaniards, Italians, and even African-American workers whose efforts built the canal. These workers lived in a Canal Zone that, like much of America at the time, was deeply segregated and deeply unfair in terms of return on labor. White American workers were paid in gold and enjoyed excellent housing and clubhouse amenities. Meanwhile, the largely non-white work force (and for some, like the Spanish, whether they were white or not was a source of some contention) worked on the silver roll and endured substandard company housing (or none at all), poor food, and no amenities to speak of. American officials justified such disparities by claiming that the operative distinction was one of national origin — successfully recruiting American citizens to work on the canal required the Isthmian Canal Commission (ICC) to offer them more than did recruiting non-U.S. workers desperate for a wage of any sort. But as the experiences of African Americans, who found themselves largely confined to the silver roll, attested, national origin mattered little when you were black. The construction of the Panama Canal, then, might rightly be seen as an integral chapter in the rise of Jim Crow.(2)


[Editor's note: Teo returns today for more calendar blogging. For more of his superb writing, check out his blogs: here and here.]

On this day in 1582, nothing happened in Spain, Portugal, Poland-Lithuania, or most of Italy. It’s not that this was an uneventful time in those places; far from it. This date, however, fell right in the middle of the block of days eliminated from the calendar by the papal bull Inter gravissimas, issued a few months earlier. By declaring that the day after October 4 would be October 15, the bull recalibrated the civil calendar to bring the date of the celebration of Easter back in line with where it had been at the time of the Council of Nicaea in AD 325. Since the bull was issued by Pope Gregory XIII, the resulting calendar is known as the Gregorian Calendar.

This is the calendar we still use today, of course, but it took a while for that to happen. The papal decree took effect immediately only in the parts of Italy where the pope was also the secular ruler, and the only other rulers to adopt the change on the intended date were Philip II of Spain and Portugal, Stefan Bathory of Poland-Lithuania, and the leaders of various small Italian states, all of them staunch Catholics. Other Catholic rulers, such as Henry III of France and the Austrian Habsburgs, adopted the new calendar within the next couple of years, while most Protestant countries resisted the change for more than a century. In the countries that did not accept the change originally, October 9 occurred as scheduled, and things happened on it. In Italy, Spain, Portugal, and Poland-Lithuania, however, October 19 occurred instead, and things happened then.
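For readers who like to check the arithmetic, here is a minimal sketch (in Python, and not something from the original post) using the standard formulas for converting calendar dates to Julian Day Numbers, a simple running count of days. It confirms that Gregorian October 15, 1582 was literally the next day after Julian October 4, 1582, which is why the ten dates from October 5 through October 14 simply never existed in the countries that adopted the reform on schedule.

    def jdn_julian(year, month, day):
        # Julian Day Number for a date in the Julian calendar (standard integer formula)
        a = (14 - month) // 12
        y = year + 4800 - a
        m = month + 12 * a - 3
        return day + (153 * m + 2) // 5 + 365 * y + y // 4 - 32083

    def jdn_gregorian(year, month, day):
        # Julian Day Number for a date in the Gregorian calendar (standard integer formula)
        a = (14 - month) // 12
        y = year + 4800 - a
        m = month + 12 * a - 3
        return day + (153 * m + 2) // 5 + 365 * y + y // 4 - y // 100 + y // 400 - 32045

    last_julian_day = jdn_julian(1582, 10, 4)          # 2299160
    first_gregorian_day = jdn_gregorian(1582, 10, 15)  # 2299161
    print(first_gregorian_day - last_julian_day)       # 1: consecutive days, no gap in real time

The same arithmetic gives the ten-day offset described above: jdn_julian(1582, 10, 9) equals jdn_gregorian(1582, 10, 19), so the day that was October 9 in the non-adopting countries was October 19 in Rome and Madrid.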

[We're lucky to have Ben Alpers back with us today. Ben's excellent book can be found here. Thanks again, Ben, for doing this. We're very grateful to you.]

On this day in 1927, The Jazz Singer premiered in New York City. Though usually credited as the first “talkie,” the film’s innovation is subtler than that designation suggests. To begin with, over a year before The Jazz Singer was released, on August 6, 1926, Warner Brothers had released the first feature film with a synchronized soundtrack, Don Juan. Directed by Alan Crosland, who would later direct The Jazz Singer, Don Juan had no dialogue, but merely sound effects and a musical soundtrack. On the other hand, though The Jazz Singer was the first feature film with synchronized dialogue, only a few short scenes feature it. Most of the film, like Don Juan before it, is essentially a silent film with a synced musical score.

Nevertheless, The Jazz Singer’s premiere was a truly significant event. The film was a huge hit. Audiences flocked to it because it contained not simply synced dialogue, but synced singing by Al Jolson, already a hugely popular entertainer. It was also one of the very few Hollywood movies to concern itself with the complications of the American immigrant experience. Young Jakie Rabinowitz (Jolson) is the son of a Jewish cantor on New York’s Lower East Side. But he loves to sing American “jazz” songs in the local saloon (as Michael Rogin and others have noted, the tin-pan-alley music in the film really isn’t jazz, even by 1927 standards). When Cantor Rabinowitz (Warner Oland) finds out that his son is singing in a saloon, he becomes enraged and Jakie runs away from home. The film flashes forward: Jakie is now Jack Robin, a rising vaudeville star who is falling in love with his fellow performer, the apparently WASPy Mary Dale (May McAvoy). When their troupe comes to New York to play Broadway, Jack visits his mother (Eugenie Besserer) and sings her one of his songs. But his father interrupts their reverie (famously yelling “Stop” to bring an end to one of the film’s dialogue sequences, thus thrusting it back into the world of silence). The climax of the film comes when Cantor Rabinowitz falls ill and Jack needs to decide between appearing in his Broadway premiere and going to his father’s shul to sing Kol Nidre in his stead on Yom Kippur, the holiest day of the Jewish year. In classic Hollywood fashion, the film lets Jack have it both ways, singing in the synagogue, reuniting with his father at the latter’s deathbed, and finally performing “Mammy” on stage and in blackface, with his own mother in the audience.

The Jazz Singer was, in many ways, oddly self-referential. Born Asa Yoelson in Seredžius, Lithuania, Al Jolson was himself an immigrant and the son of a cantor. Indeed, Jolson had been the inspiration for the original Broadway production of The Jazz Singer in 1925, which starred George Jessel. (Warner Brothers had hoped to land Jessel for their movie, but he demanded too much money, so they settled for Jolson.) The Warner Brothers, too, were assimilated Jewish immigrants. And they decided to premiere The Jazz Singer on Yom Kippur, which ran from sundown on October 5 to sundown on October 6 in 1927. But on October 5, 1927, the day before the big premiere, one of the four brothers, Sam Warner (born Szmul Eichelbaum), passed away from complications related to either a sinus infection or a rotten tooth (historians seem to disagree about this; the actor William Demarest — best known to people of my generation as Uncle Charley on TV’s My Three Sons — claimed late in his life that the other brothers had poisoned Sam!). The three remaining brothers, Harry (Hirsz), Albert (Aaron), and Jack (Itzhak), were suddenly faced with a Jack Robin-like choice. Rather than attend the New York premiere, they stayed in Los Angeles to attend to their brother’s funeral arrangements.

As a result of the success of The Jazz Singer, a revolution in movies took place. Within a few short years, public demand essentially killed off silent cinema. The arrival of sound proved hugely expensive to the studios, which not only had to purchase new sound-recording equipment, but also had to pay to wire their theaters for sound (the Hollywood studios were a vertically integrated oligopoly that controlled not only production, but also distribution and exhibition). All of this meant that when the Great Depression hit, the studios were carrying an enormous load of debt.

But for all of its revolutionary impact, The Jazz Singer today seems oddly backward looking. Not only is it largely a silent film (and, by the standards of 1927, not a particularly sophisticated silent film at that), but its main character’s star turn occurs in blackface, which was, by 1927, already losing its cultural centrality as a performance practice in the United States. Michael Rogin’s Blackface, White Noise: Jewish Immigrants in the Hollywood Melting Pot has interesting things to say about this aspect of the film.

Please welcome back David Silbey for the exciting conclusion of this epic saga. Many, many thanks, David.

The Germans knew an attack was coming. They could read a map as well as anyone, and the situation in theater was particularly obvious. The St. Mihiel salient had been a problem for the French and Americans, and an American attack had reduced it. What was next? The French Army held the center of the line, near the river Aisne. The terrain here was flat and, once the Aisne was crossed, without natural barriers until an attacking army hit the River Meuse. Just beyond the Meuse lay a tempting target: the German rail junction at Sedan. Capture that, and the network that supplied the German armies in France would be cut in half.

But along the western line of that open terrain lay one forbidding feature: the Argonne Forest. Heavily wooded and on rocky ground, the Argonne was seemingly purpose-built for defense. If the Argonne remained in German hands, any French advance to the west would be taken under flanking fire by German machine guns and artillery, potentially crippling it. The Germans figured that any major offensive in the area would have to kick off with an assault on the Argonne.

Who would do it? That too was obvious. The massive traffic jam of American troops and supplies behind the lines was clearly apparent to German reconnaissance planes, and trench raids brought back prisoners for interrogation who spoke not French, but English with a peculiar accent. The German commander in the area, General Max von Gallwitz, was determined to give the Americans a stout welcome. He organized four defensive lines, fourteen miles deep and anchored by the Kriemhild Line at the rear. The Kriemhild was part of the larger German defensive system, the Hindenburg Line, that stretched from Switzerland to the Channel. It was, not to put too fine a point on it, the last organized German defense before the Heimat.

Continuing his thrilling tale of yesteryear, David Silbey carries forward his epic This Day in History….

The American plan was flawed from the beginning. First, the attacks were spaced too closely together in time. To be successful, offensives in 1918 had to be complex, highly planned and rehearsed, and heavily supplied. There was plenty of time to plan, supply, and train for the St. Mihiel assault, but not for the Meuse-Argonne. American units would have to be pulled out of the St. Mihiel attack, have their casualties replaced, and retrain for the Meuse-Argonne, all in the space of about ten days. This was simply not enough time. Second, the attacks were spaced too closely in distance. St. Mihiel and the Meuse-Argonne were next to each other on the front, supplied by the same road network. Even worse, that road network ran through Verdun, the site of near-continuous fighting in 1916-17. Heavily damaged and only partially repaired, the roads were simply not up to the task of supplying two major assaults. For the attack at Amiens in August 1918, the British had built up a stockpile of 6 million artillery shells in the weeks beforehand, and fired all of them and more during the battle. The Americans would not be able to do the same.

The attack on St. Mihiel pushed off on September 12th. It was a spectacular success. Within four days, the Americans had captured all their targets, 15,000 German POWs, and 400 artillery pieces. The victory raised the stock of the Americans, and of Pershing, in the eyes of the French. French President Raymond Poincaré visited the salient to congratulate the U.S. forces, though it should also be noted that he owned a small chateau in the liberated territory. Only some skeptical voices within French military intelligence pointed out that the Germans had actually been in the middle of evacuating the salient at the moment of the American attack, and thus had been unprepared to fight resolutely in defense. Given this, American casualties had been worryingly high, with 4,500 dead. “Open warfare” had succeeded, but at some cost.

Going agley.

 

The turnaround for the next attack descended into chaos. A traffic jam stretched back miles from the Meuse-Argonne line, carrying the critical supplies and men. Tens of thousands of trucks vied for space with 90,000 horses and mules pulling wagons. Some of the artillery got into place only hours before the shelling was due to start, at 11:30 PM on September 25th. That night 2,700 guns opened the artillery barrage, while roughly 600 tanks idled behind the lines and about 800 airplanes waited for light to take off. In the front trenches, infantry regiments sat, prepared for H-Hour, 5:30 AM, when they would assault the German lines.

The enemy entrenched.

Let us pause for a moment and examine the challenge of their task. In front of them lay a sophisticated defensive system, consisting of trenches and strong points and barbed wire and artillery and machine guns. Allied to that system was the German defensive doctrine, which specified exactly how German units should react to an attack. The doctrine, defense in depth, had come out of the hard lessons of 1916. There, at the Battle of the Somme, the Germans had defended their trenches by concentrating most of their troops in the front lines. That, they discovered, made them susceptible to the overwhelming barrage of British artillery fire, unlike anything the Germans had seen before. The “storm of steel” (Stahlgewitter), as the Germans called it, had inflicted heavy casualties. Thus the Somme, while a disaster for the British, had also been a disaster for the Germans. The result was a new doctrine. The front lines would be held lightly by forces that were only expected to slow down an attack. Behind the front lines would be the artillery, which would hammer attackers, and the Eingreif (counterattack) units. The latter, in the case of a successful assault, would launch an immediate counterattack (der Gegenstoss) before the victors could get settled in their gains. If that immediate counterattack did not work, the reserve German forces would build up to a deliberate counterattack (der Gegenangriff) a few days later. The idea was to spare the German defenders from the artillery barrage while enabling them to recover any lost terrain. The new defensive strategy worked well. The French Nivelle offensives of spring 1917 (named after the French commander Robert Nivelle) failed catastrophically because Nivelle, not understanding the new German methods, put his faith in overwhelming French artillery bombardments slaughtering the defenders. It was the unfortunate French poilus who died by the hundreds of thousands for his error. A British assault at Passchendaele in fall 1917 met the same fate, made even worse by the soupy mud created by an unprecedented rainfall in September and October.

General Henry “Bite” Rawlinson.

But even as the German doctrine succeeded, a counterdoctrine began to develop. The two main proponents of this new doctrine were British Generals Henry Rawlinson and Herbert Plumer. “Bite and hold” assumed that the Germans would mount a counterattack as soon as a British assault showed signs of success. Their idea was explicitly to provoke that counterattack. The British would bite off the front of the German defensive system, and then immediately turn it into a defensive position of their own. The British would thus be ready for a German counterattack, and, Rawlinson and Plumer hoped, hold that assault off while inflicting heavy casualties on the Germans. After a few weeks, during which the Germans would reorganize their defensive system with a new front line, the British would repeat the process. No more breakthroughs to Berlin; instead, steady, if slow, progress.

General Herbert “Hold” Plumer.

The way to defend against this, of course, was to push heavy concentrations of German defenders up to the front lines. That, however, would bring them within the range of British artillery units, who would be only too happy to slaughter them as they had at the Somme. “Bite and hold” worked well at the Battle of Messines Ridge in June 1917, when a British attack, commanded by Plumer, captured a large chunk of the German defensive lines with relatively low casualties. It worked again at Amiens in August 1918, when Rawlinson’s attack cracked the entire German defensive line. In a sense, “bite and hold” was the antithesis of “open warfare.” British infantry went into battle heavily weighed down, carrying extra ammunition, equipment, and weapons so that they could set up a defensive perimeter quickly. Soldiers at Amiens had carried a larger load than those at the Somme. But at Amiens they were escorted across the battlefield by a creeping barrage of artillery fire that kept German heads down, and by hundreds of tanks working to suppress German machine guns. There was no possibility that they could break out into the open terrain behind the defensive system and advance quickly.

The Americans scoffed at this blinkered mentality and asserted that they would handle it differently at Meuse-Argonne. They would break into the German lines, and then expand outward, pushing the Germans before them into the open terrain behind the trench system. Pershing’s objective for the first day of the attack was the main German railhead at Sedan, forty or so miles behind the lines. Such a distance was otherworldly in a war where advances were measured in yards, not miles. The British and French winced when they heard the Americans’ confidence; it reminded them of their own confidence in 1914. Haig and Foch worried that Pershing had planned another Somme or Passchendaele, one that would end in sanguinary failure.

They were right, and wrong.

to be continued….

We are pleased and privileged to welcome back David Silbey, who has suspended his campaign so he can provide us a truly outstanding This Day in History. Many, many thanks, David.

On this day in 1918, the United States launched an attack against the German trenches in the Meuse-Argonne region of northern France. It was the largest American effort since the Civil War; in absolute numbers it was the largest operation the United States had ever undertaken.

That which traditionally does not survive contact with the enemy.

 

For all that, it was a sideshow to the larger war. After the stagnation of 1916-1917, 1918 had become the year of resolution. The Germans, fresh from their victory over the Russians, had transported hundreds of thousands of soldiers back from the eastern front to the western. They knew that they had a limited amount of time to take advantage of the numbers, before millions of freshly trained American soldiers arrived in France in late 1918 and 1919. General Erich Ludendorff, the German Supreme Commander, threw the dice with a series of massive offensives starting in March. For a moment, the Germans broke through and the war looked like it might end before the Americans could make a difference. The crisis was serious enough that the American commander, General John J. Pershing, unbent from his insistence that American units would fight only as part of an American army under an American commander, and began sending divisions piecemeal to shore up the French and British lines. The American reinforcements, resolute defense by the French and British troops, and general exhaustion on the German side enabled the Entente to hold its lines. By late May 1918, the Entente’s supreme commander, General Ferdinand Foch, was thinking about large-scale counterattacks.

Marshal Foch.

Foch envisioned a series of hammer-blows all along the German lines. His hammers were to be the British Expeditionary Force and the French Army, strengthened by further injections of fresh American troops. General Pershing flatly refused. The emergency was over, he argued, and in any counterattack, American troops would fight as a single army, responsible to an American commander, not a foreign one. President Woodrow Wilson, Pershing’s commander, had been insistent that “the forces of the United States are a separate and distinct component of the combined forces, the identity of which must be preserved.” Pershing aimed to carry that order out to the letter. Foch fumed and argued, but he had no leverage. America had come into the war not as an ally, but as an “associated power,” fighting against the Germans, but not necessarily for the British or French. That meant that Foch could not order Pershing to do anything; he could only ask. At a meeting, Foch demanded to know whether Pershing was willing to see the defenders pushed back to the River Loire, south of Paris itself. Pershing responded that he did not care where the Americans fought, only that they fought as a unit.

All together now! except you fellows, you go over there with the French.

 

To this rule, he made only one exception. The 369th (Colored) Regiment would fight with the French. The “Harlem Hellfighters,” as they came to be known, fit uneasily in a largely racist Army uncomfortable with the notion of blacks as combat soldiers. The 369th had not been allowed to participate with New York’s 42nd National Guard Division, the “Rainbow” Division, because, it was explained to the 369th’s commander, “black is not a color in the rainbow.” In France, Pershing felt that the 369th could be dumped on the French, with their experience commanding “colonial” troops.

General Haig. Not pictured: phlegmatism.

Foch’s misgivings about this were not merely related to the difficulties it created in planning. He was also concerned that the Americans had little experience of trench warfare and would thus find fighting on the Western Front hard going. Foch needed the Americans to carry their weight, and he wasn’t sure that they could. The airy pronouncements from American officers that they would break out of the stagnant trench warfare that the Europeans had allowed themselves to be mired in and return to a more chivalric “open warfare” reassured Foch not at all. It sounded identical to the assertions of French officers in the pre-war era. The French had discovered in 1914-15 that assertions, like men, died easily in the mud and blood of the Western Front. Machine guns respected no one’s chivalry. The British and French had painfully learned, over the course of three years, how to mount effective assaults against heavily defended trench systems. But to Pershing, trench warfare simply reflected Old World incompetence. The strapping sons of the New World had arrived, and they would show the Europeans how to do it. Pershing was wont to make such pronouncements in staff meetings with Foch and General Douglas Haig, the BEF commander. It remains a mystery that one or both of them did not punch him squarely in the mouth as a result, but that can perhaps be laid to Haig’s dour Scottish phlegmatism and Foch’s sense that France, wearied by the slaughter of millions of her young men, needed the Americans more than the Americans needed the French.

General Pershing.

Pershing got his way. The Americans would fight as a single army, though the units that had been sent as reinforcements during the German spring offensive would stay with the French and British. Foch rewrote the plan to put the Americans far out on the right of the British and French lines. Their job would be to reduce two salients, bulges in the defensive line, to ensure that the main French assault would not have Germans on its flanks. The first of these, the St. Mihiel salient, would be attacked on September 12th. After reducing this, the Americans would immediately turn to an attack in the Meuse-Argonne, about 20 miles to the east. There, as part of a larger assault, the Americans would cover the flank of the main French assault. The second attack would be much larger, initially comprising about 240,000 American soldiers, essentially the entire AEF. The attack was scheduled to start with an artillery bombardment at 11:30 PM September 25th. The next morning, the American infantry would go “over the top” and into the assault.

to be continued….

[Editor's Note: Andres Resendez, our correspondent to the frozen wastes of Northern Europe, writes in from Finland today. Which suggests that the blog's reach now encompasses the entire globe. So don't mess with us, people. Anyway, Andres's outstanding new book -- it got an A- from Entertainment Weekly -- can be found here. And we're very grateful to him for taking the time to pitch in. Though really, he's in Finland, so what else does he have to do with his time? It's either this or pick a fight with a Swede. And we all know Andres isn't that kind of guy.]

On this day Mexico celebrates its independence from the Spanish Empire (no, it wasn’t Cinco de Mayo, although the fact that the latter is the better-known date in the United States prompts many interesting questions and a few tentative answers). Miguel Hidalgo y Costilla, a balding priest from an insignificant town in central Mexico, rang the church bell in the wee hours of September 16, 1810, rallying his sleepy parishioners to shake off the thralldom of Spanish imperialism. His movement quickly snowballed into a massive insurrection (perhaps 60,000 strong at its peak) and in short order captured some of the richest mining centers and cities in the Bajío region. In a little more than a month Hidalgo’s mob was within sight of Mexico City, threatening Spain’s hold over the entire viceroyalty.

Those interested in the arcana of historical memory may well ask why we commemorate Mexico’s independence on this date. For starters, Hidalgo is not the most obvious choice for a Founding Father: a priest investigated by the Inquisition for gambling, for enjoying the company of women, and for his enthusiasm for books featured in the Index Librorum Prohibitorum. But more to the point, Hidalgo’s movement fizzled. Early in 1811, as Hidalgo was desperately trying to make his way to Texas (that inexhaustible fountain of revolutions) to reorganize his movement, he was captured, defrocked, summarily tried, and shot.

Mexico achieved its independence only a decade later as a result of a back-room deal. The broker, Agustín de Iturbide, was a smart dresser and a dashing officer who delighted in pomp and circumstance and—knowing how to seize the moment—organized a triumphal entrance into Mexico City at the head of his army on September 21, 1821 to commemorate the nation’s deliverance from Spain. In other words, Iturbide was the poster child of the Latin American liberator.

So why Hidalgo and not Iturbide, why September 16, 1810 and not September 21, 1821? Both figures and dates had supporters and detractors among early Mexicans. As usual, the choice boiled down to politics with—roughly speaking—Hidalgo coming across as a man of the masses while Iturbide became the more conservative option.

But the Hidalgo camp received a shot in the arm from most unexpected quarters. In the 1860s Maximilian of Austria, the French-backed emperor of Mexico, faced with the problem of how to commemorate Mexico’s independence, decided to travel to the tiny town of Dolores—where Hidalgo had launched his movement—and made quite an impression on his subjects by reenacting Hidalgo’s call to arms. What made this all the more ironic is that Emperor Maximilian was himself a Hapsburg and therefore a direct descendant of Charles V, the Spanish king during Mexico’s conquest. And thus, following Maximilian’s lead, every September 16 the sitting Mexican president takes a stab at recreating the priest’s stirring harangue, screaming at the top of his lungs “Viva México! Vivan los Héroes de la Independencia!” And the Mexican citizenry gets to pass judgment on each performance. If only Hidalgo could have known.

[Teo has been running with wolves and stuff. But he's still kind enough to check in and drop some Southwestern flavah on us. This post either is or soon will be cross-posted at one of Teo's two excellent blogs: here or here.]

On this day in 1692, Don Diego de Vargas Zapata Luján Ponce de León, the governor of the Spanish colony of New Mexico, arrived at the town of Santa Fe, formerly the capital of the province but held since 1680 by the coalition of Pueblo Indians who had revolted against the Spanish in that year and managed to drive them out of the area entirely. Vargas, an ambitious royal administrator and member of a distinguished family in Madrid, had only recently been appointed governor, but he had spent almost all of his short term so far planning obsessively for the reconquest of his nominal province, limited for practical purposes to the area immediately around the fortress town of El Paso on the Rio Grande. He was impelled by his loyalty to the crown and his outrage at the audacity of the Indians in betraying it, and, perhaps more importantly, by his deep piety and desire to reclaim the Indians not just for the king but for his beloved Catholic faith as well. After much bureaucratic wrangling, and despite the misgivings of some of the veteran soldiers on the frontier consulted by the viceroy in evaluating the reconquest plans (many of which turned out to be quite prescient), Vargas had succeeded in getting approval for his expedition, which left El Paso on August 16.

On the way to Santa Fe the expedition stopped to camp at several abandoned haciendas and Pueblos ravaged by the revolt and the resulting warfare. When it reached the area of Pueblos that were still inhabited, they showed signs of having been suddenly deserted, presumably in response to news that the Spaniards had returned to wreak vengeance. This was not an irrational response, since there had already been a few attempts at reconquest by previous governors since 1680, none of which had come close to retaking the province but some of which had managed to sack and destroy certain individual Pueblos, particularly Zia and Santa Ana. The Pueblos Vargas passed on his way up the Rio Grande, Cochiti and Santo Domingo, are near Zia and Santa Ana and have many cultural and linguistic ties to those places as well, so the memory of the destruction brought by Spanish expeditions would have been particularly fresh and meaningful to the people there.

Vargas was disappointed to find Cochiti and Santo Domingo abandoned. He had prepared for exciting dawn raids at both, which might have given him some quick military glory and would at the very least have left him in a strong position for an extended siege. But in fact he was more troubled by the fear the people had of him. The fact was that he didn’t actually want to conquer the province militarily. His strong piety left him with a deep concern for the souls of the Indians, who he felt had been tempted to rebellion and apostasy by the Devil, and more than anything else he wanted them to come back to the Church peacefully and of their own free will. He was willing to impose Christianity by force, of course, but he went to great lengths to avoid having to resort to actual violence (as opposed to threats of violence, which he was quite happy to use).


[Editor's Note: Our special guest today is Lori Clune. Professor Clune is an instructor in the history department at Fresno State and a graduate student in our program. Thanks, Lori, for doing this. We appreciate it.]

On this day in 1949, just north of Peekskill, New York, Paul Robeson attempted to stage a concert — again. The performance scheduled for August 27th to benefit the Civil Rights Congress had never happened; violence had prevented Robeson from even getting within a mile of the stage. Even though the Westchester County community (just up the road from the Clintons’ current digs in Chappaqua) had enjoyed three previous, peaceful Robeson concerts in as many years, 1949 proved different. Communism, and fear of it, was on the rise. The Cold War — 1949 style — included hydrogen bomb testing, Chinese communists marching, and American CP leaders trialing. Things really heated up on John McCain’s thirteenth birthday that year, when the Soviet Union shocked the world and tested its first atomic bomb. Fear of nuclear-empowered communists — and of their followers — permeated the thick summer air of 1949.

Paul Robeson was one of them. A Rutgers athlete, class valedictorian, and Columbia Law graduate, he was famous as an actor (especially as Othello in England and on Broadway) and baritone singer (“Ol’ Man River” was written for him in Show Boat). However, his political activism made him infamous. He spoke out in favor of civil and labor rights and helped found the American Crusade Against Lynching. His left-leaning interests caused many to accuse Robeson of being a member of the Communist Party. As a result, the FBI conducted constant surveillance from 1941 to 1974 (two years before Robeson’s death), resulting in a massive 2,863-page FBI file. To add further to his controversial reputation, just prior to the Peekskill concert Robeson had traveled throughout Europe making provocative statements such as, “Blacks will never fight against the Soviet Union, where racial discrimination is prohibited and where the people live freely!” HUAC deemed these statements “unpatriotic” and grilled him before the committee in June.


[Editor's note: Senior frontier correspondent, awc, sends along the following dispatch from the wilds of Alaska's history. Please be aware that this post likely will be updated and edited when awc's team of intrepid sled dogs, carrying in their intrepid maws more history, arrives later today at EotAW world headquarters.]

One of the most amusing storylines being floated by Republicans is the idea that Sarah Palin is a maverick. Certainly, she appears less corrupt than her peers, no great feat in the nation’s most crooked state. And she has diverged from party orthodoxy on a few issues.

The most obvious theme that emerges from Palin’s bio, however, is a Bush-esque obsession with loyalty. We all now know about her alleged attempts to pressure the state police to fire her former brother-in-law. Less well known are her actions as mayor of the city of Wasilla.

Shortly after her election in October 1996, she asked the police chief, librarian, public works director, and finance director to resign. In addition, she eliminated the position of city historian. Her critics charged the directors were dismissed for supporting her opponent, John Stein, in the preceding election. Palin further rankled city employees by issuing a gag order, forbidding them to speak to the media without her permission. Her controversial behavior led to demands for her recall. The press characterized the situation like this:

Four months of turmoil have followed in which almost every move by Palin has been questioned, from firing the museum director to hiring a deputy administrator at a cost of $50,000 a year to a short-lived proposal to move the city’s historic buildings from downtown. Critics argue the decisions are politically motivated. Palin says people voted for a change and she’s only trying to streamline government.

Daily Sitka Sentinel, 2/11/1997, 13.

The matter was only resolved three years later, when a federal judge upheld her right as mayor to remove the chief without cause.

Palin claimed she was trying to streamline government, and we might believe her were it not for the trooper incident. But the pattern of evidence suggests that she believes public employees should be politically, ideologically, and personally faithful to her. And if anything has been discredited by this administration, it’s the idea that loyalty is more important than ability.

Heckuva VP choice, Johnny!

[Editor's Note: Vance Maverick, the real original maverick, returns for another guest post. Thanks, Vance, for doing this. We very much appreciate it.]

On August 29, 1952, at the Maverick Concert Hall in Woodstock, New York, David Tudor gave the first performance of John Cage’s composition 4’33” — consisting, notoriously, of nothing but silence. It remains Cage’s best-known piece: many more people have been provoked by it (take our own Dave Noon) than have ever attended a performance. In one sense it’s unrepresentative of Cage’s work — he wrote many, many pieces before and after it, full of all kinds of sound — but it’s a landmark, and understanding it does shed light on his extraordinary career. Further, it’s the representative extreme of a tendency in his work which is well worth learning to listen to — and in the right frame of mind, it’s 4’33” well spent.

John Cage was born in 1912 in Los Angeles, the son of an inventor. He was the valedictorian of Los Angeles High School, and went on to Pomona College. But he soon dropped out, and traveled the world as a young bohemian would-be writer. On his return to Los Angeles, he decided to devote himself to music, and studied composition, most notably with Arnold Schoenberg. There was admiration and resistance on both sides — Schoenberg called him “not a composer, but an inventor, of genius.”

In the later 1930s, he made a name for himself with a series of percussion pieces. Cage was not the first composer to work with percussion alone (Varese, for one, was there first), but he made good use of found instruments like automobile brake drums, and was a witty showman and spokesman.

His next, and richer, innovation was the “prepared piano”. This is a piano temporarily modified by attaching various small objects to the strings, each one adding a characteristic buzz or jangle, or muffling the note, turning the piano into a one-man percussion orchestra. It was a provocative gesture to tinker with the grand piano, master instrument of the European nineteenth century — but a gentle provocation, making the piano quieter, less resonant, and less standardized, and setting it back to rights again after the performance.

At the same time, Cage was growing more interested in composition by system, and under severe procedural constraints. (One rewarding example is the String Quartet in Four Parts, of 1950.) The obvious inspiration for this is the twelve-tone system he learned from Schoenberg. But Cage’s purpose was the opposite of his teacher’s, or of the latest refinements in serial technique (Babbitt or Boulez). Rather than seeking to control his music in all its details, to maximum expressive effect, Cage realized that he wanted to get away from direct conscious control, in order to open his ears, and his audience’s, to something beyond what an individual could intend.

His next steps in this direction, in 1951, were to use chance procedures in composition and performance. For Music of Changes, he used the I Ching to make compositional decisions. And in Imaginary Landscape IV, he specified what the performers should do, but gave them an intrinsically unpredictable instrument, the radio. (At the premiere, the concert ran so late that most stations were off the air, and the realization was mainly silence and static. Cage took this in stride, but his advocate Virgil Thomson was not amused.)

From there, it was only natural that Cage should take the step of not making any sound at all. He had been interested in silence for years. On the one hand, it’s worth listening to: we’ve all experienced highly charged silences between musical sounds, or between words, or in special physical places. And on the other, silence is never truly silent — there are always “incidental” sounds, which we generally bracket out of our experience of music. The concert environment, with its social ritual to focus the attention collectively, is an opportunity to bring the listening skills we’ve trained on Beethoven to bear on silence. And in the outdoor setting at Woodstock, there were plenty of ambient sounds to listen to. That evening, though, Cage didn’t do much to prepare the audience (the program was terse). Some were irritated, and some walked out.

Having done 4’33”, Cage had no need to repeat it. He developed his toolkit of chance techniques in a seemingly endless series of pieces and improvisational events. His influence at home and abroad, already considerable in 1952, only continued to grow. He branched out into other arts. His books, particularly Silence, are very well worth reading; he has rightly been anthologized among poets, and he was a master of anecdote. And he also did strong graphical work — his music scores were better looking than anyone’s, and in the 1980s he made a beautiful series of prints. But of course his main work remains musical: he changed the way we all hear and make music, by making it and by writing about it.

The only performance of 4’33” I’ve attended myself was in a Cage tribute concert at Mills College in Oakland, a year or two after his death in 1992. The concert was four hours thirty-three minutes long, a kind of continuous variety show, with many Bay Area avant-gardists (including Pauline Oliveros and Terry Riley) performing in groups. It began and ended with 4’33” itself. The audience was warm, and well in tune with Cage (and one another), and the silences were rich. The first rendition felt a little long, but the second rang with all the sounds of the evening, and with the history of all the experimental music Cage inspired.

[Editor’s Note: Bryan Waterman, associate professor of English at NYU, joins us today to talk about, well, read it for yourself. Bryan was gracious enough to send along a bevy of links so that I could do some research and “make fun of [him].” To which I’d reply, friend, I’m not sure you understand the seriousness of this blog. And also: I do research at my day job. Anyway, Bryan’s first book, Republic of Intellect, is here. And he blogs, among other places, at a history of new york, where you can visit, if only virtually, Yonah Schimmel’s Knishery and experience some of the things about New York that are missing from your goyishe life in California’s Central Valley. Wait, did I say that out loud?

Thanks, Bryan, for doing this.]

On August 26, 1970, the fiftieth anniversary of the Nineteenth Amendment, the notorious feminist author and activist Betty Friedan, outgoing president of the four-year-old National Organization for Women, led tens of thousands of women in a march down Fifth Avenue toward Bryant Park, where, packed on the lawns behind the New York Public Library, the crowd heard addresses from Friedan, Gloria Steinem, Bella Abzug, and Kate Millett, among others.

The Women’s Strike for Equality, as it was billed, called on women to withhold their labor for a day as a way to protest unequal pay—roughly 60 cents to every dollar a man made at the time—though the march itself didn’t begin until after 5 pm in case potential marchers elected to stay on the job. Organizers also asked housewives to refuse work: “Don’t Cook Dinner—Starve a Rat Tonight,” a typical sign read. The Equality march even included some who were old enough to have paraded for women’s suffrage over a half century earlier, and some marchers demanded complete constitutional equality under the Equal Rights Amendment, which, once it passed the House in 1971 and the Senate in 1972, would spend the next decade being debated, ratified (and in some cases rescinded) by the states, yet ultimately fall short of ratification.

(August 26, 1970, also happens to have been the day I was born, across the continent in the rural Southwest, a world away from New York City and Women’s Lib alike. A few years later I would ride with other children on a July 4th parade float, dressed as a tree holding a stop sign that read: “STOP THE ERA!”

But I digress.)

The Times coverage seems by turns both excited by the prospect of the women’s movement and bewildered by the day’s spectacle, noting the support of state and national political figures for commemorative celebrations as well as the apparently surprising fact that the Bryant Park rally was uninterrupted by hecklers. The article also reports on oddball moments: for instance, a smaller crowd had gathered earlier in Duffy Square (Broadway between 46th and 47th), where one “Ms. Mary Ordovan, dressed in cassock and surplice as a ‘symbolic priest,’” consecrated the spot for a statue of Susan B. Anthony, which would replace the one of Father Francis Duffy, a WWI chaplain and Hell’s Kitchen reformer. Crossing herself, Ordovan called on the name of “The Mother, the Daughter, and the Holy Granddaughter. Ah-Women, Ah-Women.”

In a brief aside, the reporter then explains that “‘Ms.’ is used by women who object to the distinction between ‘Miss’ and ‘Mrs.’ to denote marital status.” (Within a year Ms. magazine would be founded by Steinem.)

I first came across this Times article—which was itself my introduction to the history of the Women’s Strike for Equality—a decade ago when, as a grad student in American Studies, I had the chance, by an odd set of circumstances, to teach several semesters of U.S. Women’s History. The experience was rewarding and humbling for several reasons—not least because the classes often included one or two elderly women who spent their retirements as “evergreen” students, taking a class a semester in topics that interested them. Their presence would initially make me somewhat uncomfortable once we’d reach the 1940s and I’d realize that from there on out some of my students had lived—as women—through the very history I, a 28-year-old male, had to lecture on.

But the courses were also made challenging by the advent of what was just then being called “post-feminism,” a fact that made me somewhat uncomfortable when I’d inevitably realize that a lot of my younger students thought they had no need for feminism in their own lives. To them the world was all a hold-hands-and-sing Coca-Cola Christmas commercial; they thought gender inequality belonged to the past or to distant cultures whose traditions, short of female circumcision and slavery, needed to be respected. When I asked them to recall Hillary Clinton’s controversial “stay home and bake cookies” moment during the 1992 campaign—after all, it had happened only five or six years earlier—they reminded me that they had been in middle school at the time; such things were as remote to them as playground bullies and kickball.

Only a quarter-century after the Women’s Strike for Equality, as we were routinely told in the late 1990s, the television series Ally McBeal had driven the last nails in the movement’s coffin. Remember that Time Magazine cover? Looking back, it also seems like a watershed moment when feminist studies in the academy gave way to cultural studies of feminism; rather than argue about what women had or hadn’t gained, how they’d done it, and when, we’d henceforth talk, for better or worse, about how feminists exploited or were exploited by celebrity culture and mass media. Was the Equality march really a landmark event in American women’s history? Or had Friedan’s media tactics simply ensured it would be remembered that way?

Either way, what those 50,000 women had done—their march spilling over from the police-approved single lane, filling the Avenue from curb to curb—seemed almost impossible to imagine, not so much because their feminism seemed outdated, but because so many younger women had become politically apathetic, appeased by a modest set of gains that masqueraded as equality. The media were full of stories about younger women who bought the line that feminism had done them wrong, powerful women who decided to quit their jobs, once they’d begun to reproduce, and give traditional stay-at-home motherhood a chance. And voila! We have contemporary Park Slope, Brooklyn, and its hordes of organic, free-range—but highly monitored—children.

At 3pm on August 26, 1970, according to the Times,

Sixty women jammed into the reception area of the Katherine Gibbs School, on the third floor of the Pan Am building at 200 Park Avenue, to confront Alan L. Baker, president of the secretarial school, with their charges that the school was ‘fortifying’ and ‘exploiting’ a system that kept women in subservient roles in business. Mr. Baker said he would ‘take a good look’ at the question.

About 10 members of NOW, starting at 9 A.M. and continuing on into the afternoon, visited six firms, business and advertising agencies, to present mocking awards for allegedly degrading images of women and for underemploying women.

Among the businesses they visited, the article concludes somewhat dryly, was the New York Times itself. Who knew that NOW anticipated Michael Moore by all those years? Too bad they hadn’t taken more cameras with them.

Betty Friedan, the “mother of modern feminism,” died in 2006 on her 85th birthday; her landmark 1963 book The Feminine Mystique, reductively credited with jump-starting the movement, is now generally considered quaint—even offensive in places—if surprisingly compelling.

Gloria Steinem, on whom I developed a mad, Harold-and-Maude-style crush after hearing her speak in the early ’90s, is now in her 75th year; during the recent primary season she endorsed Clinton and wrote in a Times op-ed that gender, rather than race, remained the bigger obstacle to equality in American life.

Bella Abzug wore big hats and talked refreshingly brash talk until she died in 1998; I hope she was spared the debate about Ally McBeal‘s impact on the movement.

Kate Millett, who in 1970 had just published her excoriating if wooden Columbia Ph.D. dissertation as Sexual Politics (the only really exciting parts are the summaries and quotations from dirty, sexist books), survived years of troubled relations with media outlets and, more recently, Bowery developers; though her Christmas tree farm has gone the way of her downtown loft, she continues to run an upstate artists’ colony for women at age 74.

Can anyone name four feminist leaders of their stature—or even their celebrity—today? If not, whose fault is it?

[Editor's Note: Ben Alpers has returned for another foray into film history. Ben's excellent book can be found here. Ben himself, looking very serious, can be found here. Unless he's still abroad. Regardless, we are, as ever, grateful for his efforts.]

Sixty-nine years ago today, MGM’s The Wizard of Oz premiered at the Capitol Theater in New York. Like many major movie premieres of the day, this was a gala event. Judy Garland and Mickey Rooney provided live entertainment. Crowds had begun forming outside the theater at 5:30 in the morning. By the time the box office opened at 8:00, police estimated that ten thousand people were waiting to get into the 5,486-seat theater. “Two hours later,” the New York Times reported, “the street queues, five and six abreast, extended from the box-office at Broadway and Fifty-first Street, west on Fifty-first Street, down Eighth Avenue to Fiftieth Street and east on Fiftieth Street back to Broadway.” About an hour later, the theater sent ticket sellers out into the crowd to help speed sales. Due to the enormous crowds, the Capitol presented five shows each weekday and six on Saturday and Sunday. Garland and Rooney continued to perform for over a week.

Today The Wizard of Oz is one of the most beloved films from one of Hollywood’s greatest years.(1) Gone with the Wind (which would win the Best Picture Oscar), Mr. Smith Goes to Washington, Ninotchka, Stagecoach, Goodbye, Mr. Chips, Dark Victory, and Young Mr. Lincoln were among the many other significant movies that appeared in 1939. Those of us who grew up in the late 1960s and early 1970s remember The Wizard of Oz as an annual television staple. Watching on my family’s small, black-and-white TV, I was totally unaware of the film’s central visual conceit: Kansas appears in sepia tones, Oz in glorious (and innovative) Technicolor. Nevertheless, the movie captivated me… though as a young child I was scared to death of the flying monkeys!

But how was The Wizard of Oz received at the time of its initial release? Readers of this blog are doubtless aware of Henry Littlefield’s famous 1964 reading of the film’s source material, L. Frank Baum’s classic children’s tale The Wonderful Wizard of Oz (1900), as a “parable of Populism,” an idea that was later elaborated by a variety of other scholars… until Michael Patrick Hearn pointed out that Baum had actually been a staunch McKinley supporter in 1896.(2) Did critics and audiences in 1939 see any hidden meanings in MGM’s film?


[Editor's Note: Michael Elliott returns with another dispatch from the far eastern edge of the American West. Thanks, Michael, for pitching in.]

I’m getting in late on this, but I’ve been thinking about this act of desperation:

As I mentioned earlier, I’ve been spending some time of late hanging around sites related to the civil rights movement, and I’ve been listening, when I can, to some of the veterans of that movement. The fact is that when the Obama candidacy comes up, the civil rights veterans that I have heard really do come close to using some of the language that this ad mocks. I don’t want to generalize too much, because I have only a handful of examples to draw on, but my sense is that for a number of people, specifically African Americans rooted in the civil rights movement, Obama is indeed someone who appears providential. So this ad has made me think about that belief.

One thing to note is that it is not necessarily as naïve a belief as this ad implies. The same people who consider Obama to be a providential leader think of leadership of their own movement in a more nuanced way than the general public does. They emphasize that the civil rights movement was a people’s movement, and their memoirs are full of discussion about the shortcomings of the individuals who tried to steer it, including Martin Luther King, Jr. The other day, I was listening to Rev. C.T. Vivian describe King’s evolution in Montgomery. He said something to this effect: People make great leaders. Now, the Rev. Vivian is someone who has an extraordinary set of social justice bona fides. He was at Nashville, Selma, St. Augustine; he refers to MLK, Jr. as “Martin”. You get the picture. So when he starts talking, as I have also heard him do, about Obama as being a kind of fulfillment of the promise of his movement, it means something. You do not have to share his belief or his brand of ethical Christianity to feel its power when he talks.

I have no idea how representative Vivian is of his generation of leaders, or younger African American leaders, say the preachers who succeeded him in the pulpits. But I do wonder why on earth the McCain camp thinks it a good idea to mock this sense of destiny that Vivian and at least some others have. When I listen to him speak about the planet bending toward justice, about the long struggle of African Americans for political power, about the tides of history, I think to myself, there is something potent there, something that has a kind of righteousness at its core. The McCain campaign is now making fun of that, and that’s a pot that they should probably leave unstirred. Consider this: At least some evangelical Christians thought it was divinely providential that W. was president during the 9/11 attacks. But John Kerry didn’t mock that belief, because he knew it would only energize W’s base (and maybe because his mother taught him not to make fun of other people’s spiritual beliefs). And yet McCain doesn’t seem worried about that possibility. Of course, this is only one ad, but if this election proves as close as that one, a larger than expected turnout of African Americans is the last thing John McCain wants.

[Editor's Note: Kathy Olmsted, awesome as ever, joins us to talk about Nixon. Perhaps you all could lean on her to just suck it up and join the blog? Here's an argument to use: she wouldn't have to post any more often than she already does.]

On this day in 1974, White House chief of staff Alexander Haig walked into President Richard Nixon’s study in his compound in San Clemente and gave him the decision just handed down by the U.S. Supreme Court in United States of America v. Richard M. Nixon, President of the United States. The president scanned the text and then exploded, snarling epithets about his three appointees who had joined the rest of the court in the 8-0 decision. The court had ruled that the president must surrender the audiotapes of several Oval Office conversations to the Watergate special prosecutor. Thus began two of the most dramatic weeks in U.S. presidential history, with rumors of coups, nuclear attacks, and drunken Nixonian fits, culminating with the nation’s sigh of relief as the voice of the Silent Majority left the White House for the last time.

The decision itself was a victory for the presidency, if not for the individual president named in the case. Apparently as a means to convince the conservatives on the court to support the unanimous decision, the court enshrined the general concept of executive privilege in the nation’s supreme law. But in this particular case, the court continued, executive privilege did not apply. The tapes contained evidence relevant to criminal trials, and the president must give them up.

The president knew how catastrophic this decision was for him. For years, he had directed his aides to use the government’s powers to spy on and harass his political enemies; when one of the espionage teams was caught breaking into the Democratic National Committee headquarters in the Watergate building on June 17, 1972, he had personally directed an immense cover-up of that burglary and many other associated crimes. As the Supreme Court delivered its decision, the House Judiciary Committee was considering articles of impeachment against him for these crimes. Yet despite the sworn testimony of several of his former aides against him, Nixon had thought he had a good chance of holding onto his office. The Congress was filled with men like Congressman Charles Wiggins and Senator Barry Goldwater who regarded the Watergate scandal as a liberal conspiracy to get a Republican president.

But one particular tape subpoenaed by the special prosecutor worried the president. On the morning of the Supreme Court decision, Nixon called his lawyer in Washington, Fred Buzhardt, to warn him about some possible difficulties ahead. “There may be some problems with the June 23 tape, Fred,” the president said.

Buzhardt went immediately to the tightly guarded White House vault containing the tapes, put on a pair of earphones, and listened with growing horror to the president’s conversation with his top aide, Bob Haldeman, on June 23, 1972, six days after the Watergate break-in. The tape answered Senator Howard Baker’s famous question, which Baker had actually designed to exonerate Nixon: “What did the president know and when did he know it?” Plenty, it turned out. On the tape, the president could be heard ordering the Watergate cover-up. He instructed aides to tell the CIA to intervene in the FBI investigation of the burglary, on the specious grounds that the break-in was actually a top-secret CIA operation to protect the nation’s security, instead of a White House-sponsored plot to destroy the president’s domestic political opponents. The tape was the piece of evidence the president’s opponents had been seeking, Buzhardt realized; it was the smoking gun.

The news of the tape rippled through Washington as Haig, after hearing the tape himself, began to alert Nixon’s top supporters that their president might not have been completely honest with them. Vice President Gerald Ford announced that he had concluded “the public interest is no longer served by repetition of my previously expressed belief that on the basis of all the evidence known to me and to the American people, the President is not guilty of an impeachable offense.” In plain English, the vice president was saying the president should leave office. Representative Wiggins, the president’s biggest defender on the House Judiciary Committee, read the transcript and announced bitterly that he and others had been “led down the garden path” by Nixon. Goldwater was even pithier. “Nixon should get his ass out of the White House – today!” he told his Senate colleagues when he learned of the tape. The Arizona senator, then viewed as the dean of Washington conservatives, visited the president, warning him that if it came to a trial only ten senators might support him, with six of those, including himself, still undecided. The Republicans were closing ranks. Nixon was bad for the party, and he had to go.

Nixon’s supporters were jumping ship, but what would the president do? His aides feared the worst. Haig, after hearing a cryptic Nixon comment about soldiers being left alone in a room with a gun, ordered the White House doctors to seize Nixon’s pills. Some aides worried that the president was frequently drunk, irrational, and out of control. Secretary of Defense James Schlesinger had even bigger concerns. He met with the Joint Chiefs and extracted a promise from them that they would not accept any presidential orders that did not come through Schlesinger. In other words, if the president tried to start a coup, Schlesinger was determined to stop it. Anthony Summers even argues that Schlesinger took away Nixon’s ability to launch a nuclear war.

In the end, with every other option closed to him, the president went quietly. He released the transcript of the smoking gun tape on August 5, and four days later resigned. He chose to let the television cameras record his last speech in the White House (“Oh, Dick, you can’t have it televised!” Pat Nixon warned, and one can see her point). The film of that moment contains the perfect epitaph for the Nixon presidency:

[Editor's Note: Sandie Holguin, a very dear friend and occasional commenter at the EotAW, has kindly agreed to provide us with a history lesson about the Spanish Civil War. Sandie's book can be found here. (Apparently, her middle name is Eleanor. Who knew?) Thanks, Sandie, for doing this.]

On this day in 1936 (also on July 17, aka “yesterday,” if you want to get technical), a group of disgruntled army officers led by General Emilio Mola initiated a military rebellion against Spain’s democratically elected Popular Front government. The revolt began in Spanish Morocco a little earlier than planned, and soon thereafter, the Army of Africa and General Francisco Franco, who had secretly been whisked away from his exile in the Canary Islands, were supposed to use Morocco as a launching point from which to cross the Straits of Gibraltar and invade southern Spain. Meanwhile, other army officers revolted on the Peninsula the next day. Following in the footsteps of nineteenth-century Spanish generals, and, more recently, of dictator Miguel Primo de Rivera (1923-1930), these generals staged a pronunciamiento — or, as we say in English, a coup d’état — to restore order to a chaotic state. But things didn’t quite work out as planned. The Spanish navy remained loyal to the Republic, temporarily foiling the generals’ plans, and the insurgents met fierce resistance from workers’ militias in Madrid, Barcelona, and Valencia. And thus, what was meant to be a simple military rebellion lasting a few days became the Spanish Civil War (cue portentous music).

Very few wars have inspired such international passions or such polemical histories as the Spanish Civil War (1936-1939). To date, there are thousands of books on the subject, most of which try to figure out whom to blame for the Republican loss and the subsequent 36-year dictatorship of Francisco Franco.**

Those with a very general knowledge of the war tend to describe it as a prelude to World War II, emphasizing the international component of the war and its grand ideological struggle between fascism and communism. Some may also know it for the great art it inspired, the great posters (this is one of my favorites — I cannot resist animated food), and the intellectuals and artists who participated in it, e.g., Ernest Hemingway, Stephen Spender, George Orwell. But the war itself was caused by conditions endemic to Spain: uneven economic development; land hunger, especially in the south; extreme poverty; conservative landowners and industrialists; an overly powerful Catholic Church; a top-heavy military; a radicalized working class, including a very strong anarchist movement; and regional nationalist movements in Catalonia and the Basque Country, among other things. The various governments of the Second Republic were unable to solve any of these entrenched problems (although many people tried), and violence continued to spiral on the eve of the war.

As to the ideological struggles, communism and fascism were but two of many. In fact, as the conflict drew near, there were very few fascists or communists active in Spain. Those who supported the Nationalist side (the insurgents) included much of the military (who tended toward your basic conservative values); the Spanish fascists, or Falange (FE de las JONS); the Carlists (a group revived from the nineteenth century who wanted to return Spain to pre-French revolutionary absolutism and theocratic values); Alfonsine monarchists, who wanted to restore Alfonso XIII to the throne; the Catholic Church; and large landowners, conservative peasants with small landholdings, and industrialists, many of whom coalesced around the conservative Catholic party known as CEDA. As the war progressed, Franco wanted to impose unity on these disparate groups, and they eventually came together under the mellifluous initials FET y de las JONS. On the Republican side, there were middle-class Republicans, anarchists (CNT-FAI); socialists (PSOE); communists (PCE); Basque and Catalan nationalists; and, in Catalonia, the non-Stalinist Marxists (POUM) and the communist-controlled Catalan Socialist Party (PSUC).* On this side, unity was much more difficult to achieve, especially since the communists and anarchists hated each other, and because the anarchists wanted to have a social revolution at the same time they were fighting the Nationalists, while everybody else on the Republican side wanted to win the war and save the revolution for later.

The war did become internationalized almost immediately, however, when the Germans and the Italians provided artillery and transport planes to Franco so that he could cross the Straits with his armies. Soon thereafter, the Germans and Italians provided more war materiel and troops to win the war. The Republicans hoped that France and Britain would intervene, but those hopes were soon dashed when those two nations decided to remain detached and opted to implement a Non-Intervention Agreement, which was meant to prevent the selling of arms or the provision of troops to either side. The effect of the agreement, signed by some 27 nations (including Germany, Italy, and the Soviet Union), was to make the Nationalists much better armed than the Republicans. In response to Italian and German intervention, the Soviet Union intervened on the side of the Republic in September (as did Mexico, in a minor role), and with the Soviet Union came the International Brigades, over 59,000 volunteers from 53 countries around the world, ready to “fight fascism.”

The war went poorly for the Republicans, who had very few military victories and who faced internecine violence as well. The anarchists had begun a revolution in Catalonia, Aragon, and parts of Andalusia, which resulted in the collectivization of agriculture and industry. The socialists, communists, and Republicans kept trying to rein in the revolution, hoping that they could convince Britain and France to intervene on behalf of the “bourgeois Republic.” The tensions culminated in a civil war within a civil war (May 1937), resulting in the purging of the POUM and the marginalization of the anarchists so well depicted by Orwell. Demoralized and hungry, the Republicans kept losing battles to the unified and better equipped Nationalists. The Republicans hoped to hold out until the now obviously-looming-in-the-distance WWII broke out, but they were finally defeated by the Nationalists on April 1, 1939, five months before that war began.

I’ve left so much out, but I’ve gone on too long—kind of like Franco’s dictatorship.

* “It looked at first sight as though Spain were suffering from a plague of initials.” George Orwell, Homage to Catalonia (San Diego, New York, and London: Harvest/HBJ, 1952), 47.

** The last exact count was 15,000 books and pamphlets in 1968 (Paul Preston, The Spanish Civil War [New York and London: W.W. Norton & Co., 2006], 333). Since Franco’s death in 1975, another buttload of books have been published on the subject.

[Editor's Note: Michael Elliott, whose book is "one of the finest recent works in the field of memory studies," sends along this dispatch from the American South. Contributions to the fund we're setting up to cover Michael's child's therapy bills can be sent directly to Michael's paypal account.]

I’ve just returned from a quick historical road-trip with my son, age 4, through Alabama. We hit civil rights sites and museums in Birmingham, Selma, and Montgomery. (Also on the itinerary: the Sloss Furnaces, minor league baseball, and a zoo.) I subscribe to the blank-slate school of parenting, in which the whole raison d’être of offspring is the chance to indoctrinate/mold/shape the child into a smarter, better version of oneself. So this kind of road trip is pretty much what I have had in mind since the day G. was born.

Like most of my parenting ventures, this one reminded me how little I know about how to speak to a child of G’s age. About a year ago, for instance, I was driving G. to see the Martin Luther King crypt here in Atlanta and realized that I should probably say a word to him about, um, death, a topic we really hadn’t discussed before. So it turned out that he learned more about dying than anything else that day. For months afterward, whenever G. heard about someone who died, he would ask, “Like Martin Luther King?”

At least I had more time on our Alabama trip, and I thought that if we saw enough things he would come away with some sense of the basic outlines of the struggle. If this were a college course, I know that I would have a set of bullet points on race, class, resistance, civil society, and maybe the African American church. For G., I just want him to know that people used to do terrible things to one another, especially to African Americans, and that the world improved because of the brave actions of people who organized to protest. I suppose I should be glad that even these rudiments are remarkably hard to teach to G. As we toured the museums and visited the monuments, he would ask questions so breathtakingly innocent that they seemed ripped from a Hallmark after-school special: “Why is there so much fighting?” And “Why were people so mean?”

It’s hard for me to know exactly what he has retained from it all. In the long run, I want him to have a thick sense of history, to be captivated by the past in the way that I am. In the short run, I suppose it’s enough that he realize that things were not always the way they are now.

Strange objects intrigue him, and on this trip he was both frightened by and interested in the shell of a bus displayed in the Birmingham Civil Rights Institute. It’s a replica of the Greyhound bus, burned and broken, that was attacked in Anniston, Alabama, during the Freedom Rides. Recognizing a point of pedagogical entry, I talked to G. about how there used to be rules on the bus about where people could sit; I explained that he and his friends from preschool would have had to sit in different parts of the bus; and then I told him that some people finally said “that was silly” and decided to ride together by sitting wherever they wanted on the bus. Of course, I couldn’t bear to tell him the next part, about the attacks in Anniston, Birmingham, Montgomery. Maybe next year.

Anyhow, the story of the bus rides seemed to break through. The next day, as we were driving to Selma, he brought up the bus, and we went through the whole story again. I even tried to get him to remember the name “Freedom Rides.” I was feeling pretty pleased with myself until we had dinner that evening. Thanks to some salad dressing, we were discussing mustard. I like it and he does not, and I was assuring him that we could differ on this point. “People can like different things,” I said. There was a pause, and then he replied:

“Just like the kids on the bus.”
“What? What bus?”
“The kids on the bus who said that you can do whatever you want. You can do whatever you want. [Beat.] They also said that people shouldn’t kill each other any more.”

On the one hand, I suppose I should be thrilled that G. knows how to apply the lessons of history. On the other, I am pretty sure that the Freedom Riders would be appalled to think that they spilled their blood just so that my son could refuse mustard and other undesirable foods. I tried to say something about how the situations weren’t quite the same — and that sometimes he needed to eat certain foods because they are good for him. But mostly, I was silently pulling up my stakes to retreat. The last thing my son needs is historical authority to justify him in doing whatever he wants.

[Editor's Note: Ben Alpers, author of this excellent book, is back. And since I spent yesterday first driving to San Francisco, then getting on a plane at SFO, then flying to Cleveland, and then driving from the Cleveland airport to the East Side, all with two kids in tow, I really appreciate the help -- even more than usual, that is.]

On this day in 1916, Mary Pickford signed a contract with the Famous Players Film Company that made her the most highly paid, and powerful, female star in Hollywood. The contract guaranteed her $10,000 per week for two years, thus totaling over $1 million. Just as importantly, it created a separate production unit within the studio, Pickford Film Corporation, over which Pickford, and her mother Charlotte, would have control. This gave the star enormous say over her roles and even the final cut of her films. She was also able to reduce the number of feature films in which she had to appear to six a year, still a very large number by today’s standards.

Pickford’s ability to garner such a deal reflected both her astuteness as a businesswoman and her status as Hollywood’s biggest star. Born Gladys Smith on April 8, 1892, in Toronto, Canada, Pickford became a child star of the stage in Canada and, in 1907, traveled to New York to pursue a career on Broadway. Two years later, she signed with D.W. Griffith and became part of his Biograph motion picture troupe. Pickford moved with Griffith’s company from New York to Hollywood in 1910. By the time she left Biograph in 1911, she had appeared in seventy of the short one-reelers that dominated the American, and world, film markets until the rise of the feature film in the middle of the 1910s. Griffith famously did not credit his actors, as he feared that they would become too popular and powerful. But Pickford became the first great female star of American motion pictures while still working for him, “The Biograph Girl with the Curls.”

Pickford easily made the transition to features, appearing in eight feature films in 1915 and seven in 1917. In such movies as Tess of the Storm Country (1914), Rebecca of Sunnybrook Farm (1917), and The Poor Little Rich Girl (1917), she created a strong and marketable image. While her beauty was legendary, she usually played plucky, somewhat tomboyish girls, often dramatically younger than the actor’s own age. In The Poor Little Rich Girl, the then-twenty-four-year-old Pickford played a twelve-year-old.

Pickford’s business savvy never left her. In 1919, along with her husband Douglas Fairbanks, Charlie Chaplin, and her former employer D.W. Griffith, Pickford founded the United Artists studio, which was designed to give these screen legends total control over the production and distribution of their films.

But Pickford’s screen persona remained remarkably unchanged, and this eventually proved the undoing of her film career. Pickford continued to play adolescents into the mid-1920s (when the star was in her thirties). But as Pickford aged, these roles became less credible. And audience tastes in female stars, too, began to change. Pickford successfully made the transition to sound, winning a best-actress Oscar for her first talkie, Coquette (1929). But her career was already in a downward spiral. Coquette marked a belated attempt by the actor to remarket herself. She cut her hair, which had been long, into a fashionable bob. And she played a more overtly sexual character closer to her own age. But audiences never warmed to the new Mary Pickford. Pickford retired from the screen in 1933.

A complete Mary Pickford one-reeler, The Dream (1911), as well as clips from two other Pickford silents, Rags (1915) and Little Lord Fauntleroy (1921), can be seen here. A clip from Coquette is available here.

Sources:

“Mary Pickford” (PBS’s The American Experience)

“Biography” from the Mary Pickford Library Website.

Gaylyn Studlar, “Mary Pickford,” in Geoffrey Nowell-Smith (ed.), The Oxford History of World Cinema (New York: Oxford University Press, 1996), pp. 56-57.

[Editor's Note: Commenter Matt Dreyer sends along the following as food for thought. I don't know how many times I need to tell the rest of you people to step up and start pulling your weight. This is a group endeavor.]

On June 21st, 1788, New Hampshire ratified the United States Constitution. It was the ninth state to do so. Article VII of the Constitution states, in full, “The ratification of the conventions of nine states, shall be sufficient for the establishment of this constitution between the states so ratifying the same.” June 21st is therefore, whatever a bunch of crazed July 4th partisans might want to tell you, the birthday of the United States, as opposed to the date, give or take, on which a motley collection of colonies declared independence from Great Britain.
