“Patriotic” Veterans Only, Please

 

When news broke back in November 2019 that U.S. Army Lt. Col. Alexander Vindman would be offering testimony at the House impeachment inquiry of President Donald Trump, it didn’t take long for many conservative pundits to attack the National Security Council official’s patriotism.

 

Fox News stalwarts Laura Ingraham and Brian Kilmeade led the charge, publicly intimating that Vindman, despite being wounded in Iraq and earning a Combat Infantryman Badge, might be a Ukrainian double agent. More recently, Tennessee Sen. Marsha Blackburn shared a tweet claiming: “Do not let the uniform fool you. He is a political activist in uniform.”

 

While perhaps an indication of our current toxic political environment, the attacks on “unpatriotic” veterans like Vindman—which have continued unabated into this year—actually have a long and checkered history in post-World War II America. Citizens may reflexively honor their countrymen and women who have served—and continue to serve—our nation in uniform, but they too often are eager to attack those who don’t fit their preconceived notions about what it means to be a veteran. 

 

For those who break socially sanctioned views of the “patriotic” veteran, public wrath can be as swift as it is outraged. Along with Vindman, two examples illustrate this point: Ron Kovic, Vietnam War veteran and author of the bestselling memoir Born on the Fourth of July, and recently retired U.S. Army major Danny Sjursen.

 

Kovic is paralyzed from the chest down, wounded in Vietnam as a 22-year-old who believed he was defending the world against the evils of communism. Sjursen suffers from PTSD, the result of losing soldiers under his command in combat. 

 

Both love their country, though not in the jingoistic way perpetuated by Lee Greenwood songs or NFL pre-game flyovers. And both have suffered for speaking out against war, for not playing their assigned roles as the “patriotic” veteran.

 

A popular narrative holds that after the Vietnam War, in part because of their collective guilt, Americans became more supportive of veterans, even though they might oppose military interventions overseas. Yet a comparison of Kovic and Sjursen suggests otherwise.

 

After returning home from Vietnam, a grievously wounded Kovic suffered through a painful recovery in VA hospitals as he increasingly turned against the war. Though wheelchair-bound, he was manhandled while participating in antiwar protests, even being forcibly jolted about by Secret Servicemen at the 1972 Republican Party national convention. When he and fellow members of Vietnam Veterans Against the War started chanting “Stop the war, stop the bombing,” a delegate wearing a “Four More Years” button ran up to Kovic and spat in his face.

 

Kovic, however, was far from alone. Other antiwar Vietnam veterans endured similar abuse. As protest marchers demonstrated in New Jersey over Labor Day weekend in 1970, a World War II veteran shouted at them: “You men are a disgrace to your uniforms. You’re a disgrace to everything we stand for. You ought to go back to Hanoi.” Apparently, those who fought in war were not allowed to speak out against war.

 

Nearly fifty years later, Sjursen published a piece in the Los Angeles Times condemning U.S. policies overseas. The op-ed asked readers to consider the consequences of our nation being “engaged in global war, patrolling an increasingly militarized world.” Sjursen’s tone was raw, as he passionately sought alternatives to a foreign policy committing the United States to seemingly endless war.

 

The public reactions were not simply ad hominem attacks against the Iraq-Afghanistan veteran, but denigrating, hard-hearted, and downright malicious. One popular Facebook page that shares defense-related news posted Sjursen’s op-ed. Of the roughly 200 comments, nearly all were disapproving. Some called the major “pathetic,” “bitter,” and “sour grapes,” while one critic, noting that Sjursen likely had PTSD, claimed he was a “progressive” who had just “decided to become a victim.” One detractor evidently spoke for many—“the Army is better off with him in retirement.”

 

The experiences of Kovic, Sjursen, and Vindman are noteworthy and should force us to ask some serious questions about our relationship with the veterans of our armed forces. From where did our expectations about veteran patriotism and conformity emerge and harden? Has the current fissure in our domestic politics extended to the veteran community, so that if we disagree with members of our all-volunteer force, we reflexively assail their devotion to country?

 

This is not at all to argue that a military uniform automatically confers upon its wearer an elevated public status in which they are honored regardless of their actions. The recent case of Edward Gallagher, the Navy SEAL accused of murder yet acquitted of charges, demonstrates the importance of our armed forces being held to the highest of standards when conducting military operations abroad. When President Trump personally intervened in Gallagher’s case, former NATO supreme commander Adm. James Stavridis worried publicly—and rightfully so—that the commander-in-chief’s decision would diminish “American moral authority on the battlefield to our detriment internationally.”

 

Gallagher, however, wasn’t speaking out against U.S. foreign policy or debating issues of national security, but rather engaging in a social media campaign to protect his own self-interest. All the while, the same conservative media outlets that so vigorously attacked Vindman championed Gallagher’s cause with equal vigor. Might there be a connection?

 

There seems to be something in our current state of domestic politics wherein veterans automatically earn an entitlement to public admiration while simultaneously losing some of their basic rights of citizenship, particularly their right to voice dissent. Active-duty servicemembers have long had to forfeit some of their freedom of speech; the Uniform Code of Military Justice (UCMJ), for instance, prohibits “contemptuous words” against public officials like the president or members of Congress. 

 

Yet to return to Kovic and Sjursen, their dissent against American foreign policy was censured not from within the military but by those outside it who sought to define and then rigorously police the expectations for “appropriate” veterans’ behavior and politics.

 

There’s a critical point to be teased out in all this, which is that people without the experience of directly fighting wars and serving in uniform are shutting actual veterans out of the conversation about future wars and armed conflict.

 

This silencing has extended to any dissent that is not in favor of militarism, even when it comes from the very people whose direct wartime experiences have shaped their opposition to intervention, armed nation-building, and war more generally. Who does that leave, we might ask, to speak out against U.S. military action overseas? Has it become impossible to question and debate our national security strategy while being “patriotic” at the same time?

 

It seems important, then, that we probe more deeply the contradictions in the relationship we have with our veterans. Despite often lavish communal outpourings of support, far too many Americans prefer to laud only those vets who are unabashedly patriotic and completely silent about any concerns they have with U.S. foreign policy.

 

Of course, the unfortunate reality is that we want to be inspired by stories of young men and women’s “perseverance through combat.” We are heartened by tales linking national security and pride to the resolve of hearty individuals willing to sacrifice for the greater good. And, in the same vein, we are disappointed when soldiers, like one Vietnam veteran, return home and share that they “felt it was all so futile.”

 

In short, we want to find meaning in war, a sense of purpose that makes us feel better about ourselves and our nation. To view war as less than ennobling smacks of unpatriotic apologism. Thus, we attack the iconoclastic veteran-messenger without listening to their actual message. We decry the “broken” vet who somehow makes us look bad by not waving the red, white, and blue.

 

But patriotism, like courage, comes in many forms. Perhaps Americans of all persuasions, political or otherwise, might challenge themselves to be more accepting of veterans whose voices don’t accord with their own “patriotic” views.

 

Kovic, Sjursen, and Vindman fought because their nation asked them to. The least we can do is allow them to speak up when they return home and respectfully contemplate what they have to say without attacking their “patriotism.”

75 Years After the Dresden Bombings

 

Every year on February 13th, in the cold of the late evening, all the church bells in Dresden ring out at a pre-determined moment; and onlookers in winter coats gather in the old squares to light candles. The bells – their chaotic clamour echoing through the streets for twenty minutes – represent air raid alarms. Standing amid the crowds, I found my gaze drawn up into the dark sky, envisaging the approaching bombers; imagining the dazzling green and red of their initial marker flares falling through the dark. In 1945, the children of Dresden thought these were like ‘Christmas trees’.

 

History in this city is not an abstract academic pursuit; instead, it is palpable and passionate. It matters enormously. This year marks the 75th anniversary of the horror that made Dresden – deep in eastern Germany - a by-word for annihilation. Before the war, Dresden had been known as ‘Florence on the Elbe’, because of its rich array of baroque architecture, its beautiful churches, galleries and opera house, and its strong tradition of art. Surrounded by gentle hills and rocky plains, Dresden always seemed a step apart. Even when it fell under the obscene thrall of Nazism.

 

Throughout the war, Allied bombers had raised infernos in cities all over Germany; but few in Dresden imagined that the British and Americans could turn upon a city that – pre-war – they had visited and loved in such numbers. Dresdeners were mistaken. 

 

On the night of February 13th 1945, 796 British bombers, their flights staggered over a calculated two waves from 10 PM to 1 AM, unleashed thousands of incendiaries and high explosives that in turn created a super-heated firestorm that killed 25,000 people and reduced the old city to glowing, seething rubble. According to one onlooker, the assault ‘opened the gates of hell’. Women, children and refugees, huddled in inadequate brick cellars, began to bake to death. Others were overcome with poison fumes. Bodies were melted and mummified. 

 

The following day, against a sky still black with ash, American bombers conducted a further raid, dropping fire upon the city’s open wounds. 

 

In the years since, Dresden has sought not only to come to terms with the nightmare memories; but also to find understanding. History here has a clear moral purpose: to explain, and also to allow the truth to be seen. The debate over whether Dresden was a legitimate military target – or chosen for the purposes of atavistic vengeance – is still extraordinarily sensitive. All who tread here must tread carefully.  

 

Today, the restored and rebuilt streets of the old city are patterned with echoes that honour the dead: a statue to schoolboy choristers who perished; a perfect replication of the vast 18th century Frauenkirche, with its great dome and the original fire-blackened stones remaining at its base; a vast chunk of irregular masonry left in the pavement close by, inscribed with the explanation of how it came to be parted from the church. 

 

But there are also brass plaques in buildings and upon pavements commemorating Jewish lives lost not to the Allies, but to Nazi terror. There is an equal awareness that Dresden was under a pall of moral darkness long before the bombers came. 

 

Most valuably, the city archives are now filled with a wide array of personal accounts of the night – diaries, letters, memoirs. Here are hundreds of voices, some in writing, some recorded, telling extraordinary and vividly moving stories from a variety of viewpoints. All of this matters intensely because for a great many years in Dresden, remembrance itself was a battlefield. Indeed, in some quarters, it still is.

 

There are those on the extremes of the political far-right – in east Germany, but elsewhere too – who seek to appropriate the victims of that night for their own purposes; to compare Dresdeners to victims of the Holocaust, for instance, downplaying the persecution of the Jews and magnifying the suffering of gentile German civilians. The extremists want Dresden to be seen as an Aryan martyrs’ shrine. 

 

The people of Dresden are equally implacable in their determination that this should never be so. This is not the first time others have tried to hijack their history.

 

In the immediate aftermath of the war, the Red Army took control of the city and it became part of the Soviet-dominated German Democratic Republic; more totalitarianism. The Soviets had their own version of history to teach in the early years of the Cold War: that the bombing was due to the psychosis of hyper-aggressive ‘Anglo-American gangsters’. The destruction of Dresden was a warning that the Americans were remorseless. 

 

Meanwhile, in the west, the bombing became almost a parable of the horrors of war. An American reporter, just two days after the raids in 1945, inadvertently relayed to the world that the attack on Dresden was ‘terror bombing’. The reporter’s unconsidered phrase fixed history upon its course; in Britain, in the aftermath, there was a sharp, anguished reaction from the Prime Minister himself: Winston Churchill sent out a memo decrying the wanton destruction. 

 

The Air Chief Marshal of British Bomber Command, Sir Arthur Harris, had a nickname: ‘Butcher’. That he loathed the German people was no secret; his contempt bled through in memos sent to his superiors. The broad assumption was settled: that Dresden was victim to Harris’s bloodlust. For years, his name would carry the weight of responsibility for a decision that actually lay with committees higher up.  

 

Meanwhile, the city was granted literary immortality in the late 1960s by American author Kurt Vonnegut, who had been there that night as a prisoner of war, and who was among those forced to excavate mutilated corpses in the days after. His novel ‘Slaughterhouse-Five’ had the bombing as its dark bass line.   

 

But Dresden was not bombed because it was an ornament of German high culture that Sir Arthur Harris wanted crushed. It had genuine military significance. Memos and papers left by senior figures revealed the arguments and the wrangling.

 

First, the city was filled with military industry: Dresden had long been a city of scientific endeavour and its many factories were focused on delicate optical instrumentation and precision parts: in other words, it was at the advanced technical end of the German war machine. 

 

Additionally, the city was a teeming military transportation hub; troops shuttled through the busy railway junctions and through the streets on their way to the eastern front which, by that stage in the war, was a mere 60 miles away. 

 

The order to bomb Dresden had been triggered in part by a request from Soviet leader Joseph Stalin; he understood that an attack on this rail and road nexus would severely hamper German movements. Less forgivably, it was also a target because of the large numbers of refugees fleeing from the Red Army and passing through the city. It was calculated that bombardment would cause general chaos, further hampering the German troops.

 

But there was no such thing as precision bombing then; the plane crews – after so many missions exhausted, empty, afraid, braced for their own fiery deaths – simply got as close as they could to their designated targets.  

 

The gradual unwarping of history over the years has been largely thanks to the people of Dresden themselves. A careful historical commission a few years back sought to establish finally and definitively the number of victims of that night (Goebbels had deliberately set the number at 250,000). In addition, British historians have helped trace the decisions that led to the city’s targeting. There has also been reconciliation and remorse. A British charity, the Dresden Trust, has done much to foster even greater historical sympathy and understanding.

 

None of this is morbid, incidentally; quite the reverse. The city is wonderfully lively and cosmopolitan and welcoming. But the anniversary of the bombing is always marked with the greatest reverence: there is a performance of the overpoweringly moving Dresden Requiem, and the clangour of those solemn bells. They cannot bring solace. But, as one old lady, a complete stranger, remarked to me after the Requiem: ‘This is for Coventry, too’. She was referring to the savage 1940 firestorm raised in that English Midlands city by the Luftwaffe. Remembrance stretches across borders too.

"Keep American Beautiful" and Personal Vs Corporate Environmental Responsibility David P. Barash is professor of psychology emeritus at the University of Washington; among his recent books is Through a Glass Brightly: using science to see our species as we really are (2018, Oxford University Press). 

Today, many of us accept personal responsibility for climate change and struggle to adjust our carbon footprints, while giving little attention to the much larger effects of major corporations and countries. We might want to learn something from how mid-century Americans concerned about litter were conned and co-opted by forces larger and more guilty than they.

 

During and shortly after the Second World War, Americans produced much less garbage, having become accustomed to reusing items whenever possible and throwing things away only when absolutely necessary. This became a serious challenge, however, to newly emerging industries that yearned to profit from making and selling throw-away items, notably glass, plastics, and paper products. These industries accordingly launched a vigorous advertising campaign, inducing people to recalibrate their wartime frugality and start throwing things away after a single use. Americans were told that it was more hygienic, more convenient, and (not mentioned) more profitable for those manufacturers who made and sold those items.

 

But by the early 1950s, the throw-away campaign had begun to backfire as Americans started seeing nearly everything as garbage. The glass bottles that people used to rinse and reuse were increasingly thrown out of car windows; they had an unfortunate tendency to end up broken in a field, where grazing cows would either step on them and injure themselves or consume them and die. Dairy farmers became increasingly incensed, especially in Vermont, then as now a dairying state.

 

In response, Vermont passed legislation in 1953 that banned disposable glass bottles. Corporate America worried that this might be a harbinger of restrictions to come, so many of the bottle and packaging companies banded together in a counter-intuitive but politically and psychologically savvy way: they formed something called Keep America Beautiful. It still exists today, under a label that can only be applauded, not only for what it officially stands for but also for its social effectiveness. Keep America Beautiful began as an example of what is now often criticized as “virtue signaling,” but in this case, the goal wasn’t simply to signal virtue or even to engage in “greenwashing.”

 

Rather, the reason such behemoth companies as Coca Cola and Dixie Cup formed what became the country’s premier anti-littering organization was to co-opt public concern and regulatory responses by shifting the blame from the actual manufacturers of litter—those whose pursuit of profit led to the problem in the first place—to the public, the ostensible culprits whose sin was putting that stuff in the wrong place. Garbage in itself wasn’t the problem, we were told, and industry certainly wasn’t to blame either! We were. 

 

It became the job of every American to be a responsible consumer (but of course, to keep consuming) and in the process to Keep America Beautiful. At first and to some extent even now, legitimate environmental organizations such as the Audubon Society and the Sierra Club joined. Keep America Beautiful went big-time, producing print ads, billboards, signs, free brochures, pamphlets and eventually Public Service Announcements.

 

Keep America Beautiful coordinated with the Ad Council, a major advertising organization. People of a certain age will remember some of the results, including the slogan “Every litter bit hurts,” along with a jingle, to the tune of Oh, Dear! What Can the Matter Be: “Please, please, don’t be a litterbug …” Schools and government agencies signed on to the undeniably virtuous campaign. It’s at least possible that as a result, America became somewhat more beautiful. Even more important, that troublesome Vermont law that caused such corporate consternation was quietly allowed to die a few years after it had been passed, and – crucially – no other state ever emulated it by banning single-use bottles.

 

But by the early 1970s, environmental consciousness and anti-establishment sensibilities began fingering corporations once again, demanding that they take at least some responsibility for environmental degradation, including pollution more generally. Keep America Beautiful once again got out in front of the public mood and hired a pricey, top-line ad agency that came up with an iconic ad that still resonates today with Americans who were alive at that time: Iron-Eyes Cody, aka “The Crying Indian.”

 

Appearing on national television in 1971, it showed a Native American (the actor was actually Italian-American) with a conspicuous tear in his eye when he encountered trash, while a voice-over intoned, “Some people have a deep, abiding respect for the natural beauty that was once this country. And some people don’t. People start pollution. People can stop it.” In short, it’s all our fault.

 

Iron-Eyes Cody’s philosophy is reminiscent of Smokey Bear’s “Only you can prevent forest fires.” Of course, Smokey is right. Somewhat. Individuals, with their careless use of matches, can certainly precipitate forest fires, but as today’s wildfire epidemics demonstrate, there are also major systemic contributions: Global over-heating with consequent desiccation, reduced snow-melt, diminished over-winter insect die-offs that produce beetle epidemics that in turn leave vast tracts of standing dead trees, and so forth. Individuals indeed have a responsibility to keep the natural environment clean and not to start fires, but more is involved.

 

It is tempting, and long has been, to satisfy one’s self with the slogan “peace begins with me.” As logicians might put it, personal peace may well be a necessary condition for peace on a larger scale, but even if it begins with each of us, peace assuredly does not end with me, or you, or any one individual. The regrettable truth is that no amount of peaceful meditation or intoning of mantras will prevent the next war, just as a life built around making organic, scented candles will not cure global inequality or hold off the next pandemic.

 

Which brings us to global climate change. There is no question that each of us ought to practice due diligence in our own lives: reduce your carbon footprint, turn off unneeded appliances, purchase energy-efficient ones, and so forth. Climate activist Greta Thunberg is surely right to emphasize these necessary adjustments and, moreover, to model personal responsibility, for example, by traveling to the UN via sailboat. But she is also right in keeping her eyes on the prize and demanding that, above all, governments and industry change their behavior.

 

Amid the barrage of warnings and advice about personal blame and individual responsibility, there is a lesson to be gleaned from the corporate manipulations that gave us Keep America Beautiful, and its subsequent epigones: Even as we are implored to adjust our life-styles and as we dutifully struggle to comply, let’s not allow such retail actions to drown out the need for change at the wholesale level, namely by those corporations and governments whose actions and inactions underpin the problem and whose behavior – even more than our own - must be confronted and overhauled. 

 

(Just after writing this piece, I discovered that some of its ideas were covered in an episode titled “The Litter Myth,” aired September 5, 2019, on NPR’s wonderful history-focused podcast, “Throughline”: https://www.npr.org/2019/09/04/757539617/the-litter-myth.  Anyone wanting a somewhat different take on this topic would do well to “give a listen.”)

Happy 200th Birthday to Susan B. Anthony (1820-1906)

 

Women have reason to celebrate in 2020. August 26 marks the centennial of the enactment of the Nineteenth Amendment, or the Susan B. Anthony Amendment, granting women the right to vote. February 15 is another important anniversary, the bicentennial of the birth of Susan B. Anthony. Although it is not an official national holiday, Anthony’s birthday has been an occasion that women have celebrated during Anthony’s life and after her passing in 1906.

 

The celebration of Anthony’s eightieth birthday on February 15, 1900, was described as being the greatest event in the woman suffrage movement. An attendee said, “There never has been before and, in the nature of things, there can never be again, a personal celebration having the significance” of this birthday party.

 

The reception began at the Corcoran Art Gallery in Washington, D.C., where Anthony was seated in a queen’s chair at the main entrance, at the head of a receiving line. For three hours, she shook hands with the thousands who amassed to extend congratulations. After greeting Anthony, the crowds made a pilgrimage to the marble busts of Anthony, Elizabeth Cady Stanton and Lucretia Mott on exhibition in a room of the gallery.

 

Glorious speeches and praises and gifts were presented. Among those who honored Anthony was Coralie F. Cook. “It is fitting on this occasion, when the hearts of women the world over are turned to this day and hour,” she said, “that the colored women of the United States should join in the expressions of love and praise offered to Miss Anthony upon her eightieth birthday.…She is to us not only the high priestess of woman’s cause but the courageous defender of right wherever assailed. We hold in high esteem her strong and noble womanhood…Our children and our children’s children will be taught to honor her memory for they shall have been told that she has always been in the vanguard of the immortal few who have stood for the great principles of human rights.”

 

In addition to being an 80th birthday celebration, it marked Anthony’s retirement from the National American Woman Suffrage Association. Anthony said she gladly surrendered her place now that she had reached 80, “ten years beyond the allotted age of man.” However, she did not plan to stop working. “I shall work to the end of my time,” she said.

 

Six years later, on Anthony’s birthday, February 15, 1906, a portrait bust of Susan B. Anthony, sculpted by Adelaide Johnson, was accepted by the Metropolitan Museum of Art of New York City. This was considered an especially “crowning achievement” since institutions almost never accepted portraits of a living person as part of their permanent collections. The portrait was placed “in symbolic recognition of where she [Anthony] belongs, at the head of the grand stairway as if to greet the visiting throngs.”

 

Six weeks later, Anthony slipped into a coma, and a few days later, she passed away.  “Among the men and women who have paid tribute to Susan B. Anthony since she closed her eyes in death March 13, not one owes her such a debt of gratitude as I myself,” wrote Mary Church Terrell, a personal friend of Anthony and a leader of the National Colored Women’s Organizations. “. . . To her memory has been erected a monument more precious than marble, more enduring than brass or stone. In the heart of a grateful race, in the heart of the manhood of the world she lives and Susan B. Anthony will never die.”

 

Friends carried on the tradition of celebrating Anthony’s birthday. In 1920, a few women went to the Metropolitan Museum to try to pay tribute at her marble shrine on her 100th birthday. A florist was supposed to deliver a wreath to the museum at 3:30, but when the women arrived at 3:40, the wreath had not arrived. At four o’clock, one of the women went into a rage and began hurling insults at her friends. Three minutes later, the florist arrived with the wreath. The little band of women marched past the museum clerks and up the staircase to take measurements for placing the wreath on the marble bust of Susan B. Anthony. Clerks approached the women and upon learning that they did not have the permission of museum authorities, a clerk phoned the office. A tall, gray-haired man appeared and asked the group to leave. One of the women recalled that the occasion of Anthony’s 100th birthday was “embarrassing and painful beyond words.”

 

The grandness of the next birthday made up for the tragedy of the previous year. On February 15, 1921, the Portrait Monument to Lucretia Mott, Elizabeth Cady Stanton and Susan B. Anthony was unveiled in the Rotunda of the U.S. Capitol. More than a thousand women and men came to celebrate the marble sculpture of the three women, the passage of the Nineteenth Amendment, and the 101st birthday of Susan B. Anthony. 

 

However, after the glorious ceremony, the monument of women was removed from the room and placed further down below the dome in the Crypt. The Portrait Monument remained on the lower level for decades yet it still served as a rallying icon for women, particularly on Anthony’s birthday. An elaborate celebration took place in the Capitol Crypt on February 15, 1934, with more than 45 women’s organizations paying tribute to Susan B. Anthony’s leadership “in the movement for Equality for Women.” The Marine Band played as representatives of the organizations placed floral tributes at the base of the monument. The first speaker was Representative Virginia E. Jenckes, who was described as “the realization of Miss Anthony’s dream,” that is, “a woman in Congress, imbued with the Feminist’s point of view.”

 

Rep. Jenckes said, “I pay tribute today as a Member of Congress to the memory of Susan B. Anthony and her associates. To them should go the credit of bringing a woman’s viewpoint into national affairs, and we women who are privileged to render public service are inspired and guided by the ideals of Susan B. Anthony.”

 

Other speakers included Representative Marian W. Clarke of New York, Esther Morton Smith of the Society of Friends, Mrs. William John Cooper of the American Association of University Women, and Mary Church Terrell of the National Colored Women’s Organizations. The last speaker of the afternoon was sculptress Adelaide Johnson, who said: 

 

Today we are privileged to honor ourselves by paying tribute to Susan B. Anthony on this anniversary of her birth. Invincible leader, first of the few of such hosts as never before moved to one call, the uprise of woman. A Leader for more than a half century in the crusade of half of the human race. Mother of the whole—demanding individual liberty.

 

…I might entertain you with reminiscences, as we were intimate friends for more than twenty years, the friendship beginning in 1886, but to give light or information is my role. . . . As Leader of the infant Woman Movement . . . [Miss Anthony] was pelted and driven from the platform when speaking, and experienced mob attack from the rabble. She was arrested for casting a ballot…In contrast, less than a quarter of a century later the same Rochester that hissed her from the platform in the sixties opened its newspaper columns wide in praise to “Our Beloved Susan.” Two thousand hands were grasped by the “Grand Old Woman.”

 

In 1935, Congress honored Susan B. Anthony with a gift on her 115th birthday: they washed the statue of her (and Stanton and Mott). Of course, after the bath, they left Susan and her friends in the Crypt.

 

At Anthony’s birthday celebration in 1936, women noted the progress that had been made over the decades. Anthony had been the target of rotten eggs and even some women had resented her efforts for woman’s rights. It was recalled: “Women pulled their skirts aside when she passed and declared that her shameful behavior caused them to be ashamed of their sex.” In contrast, it was noted that by 1936 there was a First Lady who was an active leader instead of a mere tea pourer, a woman minister to a foreign country, a woman Secretary in the President’s Cabinet, and a woman director of the United States Mint.

 

In commemorating Anthony’s birthday, women have examined the past, celebrated their progress, and looked to the future. I hope women and men across the nation will continue this tradition on February 15, 2020, the 200th birthday of Susan B. Anthony. 

“Free College” in Historical Perspective

 

“Free college” is a visible and volatile issue in the Democratic candidates’ presidential campaign platforms. Bernie Sanders has showcased this proposal since 2016. Others, notably Elizabeth Warren, have since echoed the chorus. No Democratic candidate today can afford to ignore the issue, even if it means taking time to impose strict limits. Pete Buttigieg, for example, argues that federal tuition aid applicable at state colleges should be confined to assisting students from working- and middle-class families.

 

What is the historical origin of “free college” in presidential platforms? One intriguing clue comes from the 1947 report commissioned by then-President Harry Truman, titled Higher Education for American Democracy. The six-volume work, drafted by a blue-ribbon panel chaired by the President of the American Council on Education, invited readers to look “Toward Equalizing Opportunity.” It made the bold case that “The American people should set as their ultimate goal an educational system in which at no level – high school, college, graduate work, or professional school – will a qualified individual in any part of the country encounter an insuperable economic barrier to the attainment of the kind of education suited to his aptitude and interests.”

 

Was this truly the federal government’s blueprint for the “free college” campaign promises of 2020? The prose from 1947 resembles the proposals that speechwriters now craft for Democratic candidates heading into the primaries. On close inspection, however, the 1947 presidential report was a false hope, or at the very least a promise delayed, for today’s federal “free college” proposals. That’s because President Truman put the report aside in order to focus on Cold War defense spending. It was comparable to a script for a movie that the president and Congress never put into production. 

 

Important components of the task force’s recommendations were fulfilled over the next two decades, but these reforms were implemented through scattered state initiatives with little if any commitment from the federal government. Starting in 1951, major federal funding for higher education was concentrated in such agencies as the new National Science Foundation and the National Institutes of Health. High-powered federal support for research and development flourished while national plans for college access were tabled.

 

The historic reminder is that creating and funding colleges has been – and remains – the prerogative of state and local governments.  This legacy was reinforced in 1972 when several national higher education associations lobbied vigorously for massive federal funding that would go directly to colleges and universities for their operating budgets. Much to the surprise of the higher education establishment, these initiatives were rejected by Congress in favor of large-scale need-based student financial aid programs, including what we know today as Pell Grants (originally, Basic Educational Opportunity Grants) and numerous related loan and work-study programs.  This was a distinctively American model of portable, need-based aid to individual students as consumers.  The net result was that some, but hardly all, college applicants made gains in college access, affordability, and choice.

 

States, not the federal government, have been the place where issues of low tuition and college costs have been transformed into policies and programs. California was an important pioneer. Its original state constitution of 1868 included explicit provision that the University of California would not charge tuition to state residents.  This continued into the twentieth century.  The only modification came about in 1911 when the legislature maintained the “no tuition” policy but did approve a student fee of $25 per year for non-academic services.  The state’s public regional colleges voluntarily followed the University of California’s no tuition example.  The state’s junior colleges, which were funded by local property taxes as extensions of a town’s elementary and secondary schools, also were free to qualified residents. Less well known is that “no tuition” sometimes was a practice and policy outside California’s public institutions. Stanford University, for example, did not charge tuition when it first admitted students in 1891.

 

Nationwide, the “no tuition” practice gained some following elsewhere. Rice Institute, which opened in 1910, charged no tuition to any enrolled student until 1968.  Berea College in Kentucky and a cluster of other “work colleges” did not charge tuition, but did expect students to work on campus to defray costs. The more widespread practice at colleges in the United States was to keep tuition low. At the historic, private East Coast colleges such as Harvard and Brown, tuition charges remained constant between 1890 and 1910 at about $120 to $150 per year.  Indexing for inflation that would be a charge of about $3,200 today. All this took place without any federal programs.

 

In the decade following World War II California along with states in the Midwest and West invested increased tax dollars into public higher education, usually charging some tuition for state residents while working to keep price relatively low. Such was not the case in all states. Pennsylvania, Vermont and Virginia, for example, represented states with relatively low tax revenues for public institutions whose revenues depended on charging students relatively high tuitions, comparable to a user’s fee. 

 

After 1960, massive expansion of higher education enrollments and new campus construction in California meant that the state’s expensive “free college” programs were unsustainable. By 1967 state legislator (and future governor) George Deukmejian stumped the state, making the case to local civic groups that California could no longer afford its “no tuition” policy at its public four-year institutions. This added support to Governor Ronald Reagan’s 1968 campaign to impose tuition. The immediate compromise was technically to maintain “no tuition,” but to have public campuses charge substantial student fees. This ended in 1980 when the University of California first charged tuition of $300 per academic year for an in-state student, combined with a mandatory student services fee of $419. This trend continued so that by 2011-12 annual tuition and fees for a California resident were $14,460 and out-of-state students at the University of California paid $37,338.

 

An added state expense was that California had extended student choice by creating state scholarship programs that an eligible student could use to pay tuition at one of the state’s independent (private) colleges.  It became an attractive model nationwide by 1980 when at least thirty state legislatures had funded state tuition assistance grant programs applicable to a state’s public and private accredited colleges.

 

Declining state appropriations for higher education in California and other states signaled a reconsideration of “free college” as sound, affordable public policy. Why shouldn’t a student from an affluent family pay some reasonable tuition charge?  Did “no tuition” increase affordability and access from modest income families? Research findings were equivocal at best.

 

Connecting the historical precedents to current presidential “free college” proposals eventually runs into serious concerns. Foremost is that presidential candidates must be aware of the traditional sovereignty of state self-determination on higher education policies. In some states, taxpayers and legislatures decide to fund higher education generously. Other states do not. Why should a federal program override state policies? Why should taxpayers in a state that supports public higher education generously also be asked to pay high federal taxes to shore up state institutions in another state where legislators and voters do not? How much federal subsidy should we give to lowering tuition prices when a state institution does not demonstrate that it has worked at keeping operating costs down? Also, “free college” proposals today may unwittingly limit student choice if the tuition buy-downs are limited to selected institutional categories, such as public colleges and two-year community colleges.

A Gallon of Talk and a Teaspoon of Action, Then Label the Problem "Insolvable" and Plan Another Conference

There is an increasing element of the burlesque as world leaders jealously vie for the front row in photographs at annual summits where problems are solemnly addressed but seldom solved. They mulled over trade last June in Osaka. Then came climate talks in Biarritz. At the end of January 2020, photographs were posted from Jerusalem, where more than forty world leaders deliberated upon a malicious phenomenon that began in that ancient city two thousand years ago on a hill named Golgotha. What shall it avail, the latest speech-laden conference on this topic, when seventy-five years of intensive Holocaust education had no effect on Holocaust deniers and could not stop the resurgence of the virulent anti-Semitism that had infected Germany one hundred years ago?

"Tiresome talk and tokens of sympathy without follow-up action!" scornfully wrote the German justice inspector Friedrich Kellner in his assessment of the impotent League of Nations and the short-sighted democratic leaders of his time who did nothing to thwart Adolf Hitler's plans for totalitarian rule over Europe. "Where were men who could recognize the reality?" asked Kellner. "Did they not see the tremendous re-arming of Germany when every German illustrated newspaper had pictures that exposed everything? Every small child here knew at least something about the armament. And the entire world looked on! Poor world!"

As an organizer for the Social Democratic Party, Kellner had campaigned against Hitler and his Nazi Party throughout the entire time of the ill-fated Weimar Republic. When Hitler came to power, Kellner began a diary to record Nazi crimes and the German people's overwhelming approval of the murderous agenda. His outspokenness against the regime marked him as a "bad influence," and he was placed under surveillance by the Gestapo. His position as a courthouse administrator gave him some protection from arbitrary arrest.

"The French watched calmly as Hitler re-armed Germany without having to suffer any consequences," wrote Kellner about the Allies' failure to respond decisively to Hitler's threats. The British were equally guilty. "Neville Chamberlain should have been a parson in a small village, not the foremost statesman of a world power who had the duty and obligation to immediately counter Hitler."

"The Western nations," he declared, "will carry the historical guilt for not promptly providing the most intensive preventative measures against Germany's aggression. When German children were being militarized at the age of ten, and legions of stormtroopers and the SS were being formed, what did their Houses of Commons and Senates undertake against this power?"

Kellner regarded Winston Churchill as one of the rare men who did see the reality, who might have preempted the war had he received the reins of government sooner. In his memoirs, Churchill labeled the six years of brutality and terror as "The Unnecessary War" -- a sobering and infuriating epitaph for the tens of millions of victims who needlessly lost their lives.

The justice inspector reserved a special contempt for nations that claimed neutrality while their neighbors and allies were under attack. Sweden and Switzerland grew rich providing Germany with raw materials. In June 1941, as Hitler ruled over a conquered Europe, Kellner derided Americans -- including the celebrated aviator Charles Lindbergh -- who insisted Hitler could be assuaged by diplomacy. "Even today there are idiots in America who talk nonsense about some compromise with Germany under Adolf Hitler. . . . Mankind, awake, and concentrate all your strength against the destroyers of peace! No deliberations, no resolutions, no rhetoric, no neutrality. Advance against the enemy of mankind!"

Six months later, after the attack on Pearl Harbor, he wrote, "Japan shows its mean and dishonest character to the world. Will the isolationists in the U.S.A. now open their eyes? What a delusion these cowardly people were under. When you stand on the sidelines claiming neutrality during a gigantic fight for human dignity and freedom, you have actually placed yourself on the side of the terrorist nations."

Despite his call for preemptive action against dictators plotting war, Friedrich Kellner knew war firsthand and abhorred it. In 1914, as an infantry sergeant in the Kaiser's army, he was wounded in battle. He highly valued diplomacy and preferred political solutions to the world's problems. For twelve years he campaigned politically as a Social Democrat. But Adolf Hitler showed the futility of diplomacy and politics when it comes to fanatics whose ideologies and dreams of dominion would undo centuries of civilization.

We are seriously challenged by such types today. Protected by their totalitarian patrons, Russia and China, the leaders of Iran and North Korea spread disinformation, terror, and chaos throughout the Middle and Far East. They openly threaten terror attacks -- and even nuclear strikes -- against the democracies. Ironically, France and Germany, intent on maintaining business relations with these countries, have led the European Union these past three years in resisting the USA's renewed sanctions against Iran. As Winston Churchill said of the neutral nations in 1940, "Each one hopes that if he feeds the crocodile enough, the crocodile will eat him last."

Conferences and speeches, and promises by world leaders to never forget the Holocaust, will neither impress nor dissuade modern aggressors. What was needed in Jerusalem was a unified pledge to immediately apply strong and unrelenting economic sanctions against these nations, and to encourage and assist the people of Iran and North Korea to stand up against their own dictators. And the assembled leaders needed to emphasize that this would be the only way to deter further military action, such as the recent missile strike that killed the Iranian terrorist mastermind Qasem Soleimani.

Our leaders should reflect on the observations of a German patriot who did his best to keep a madman from seizing power in his nation, and who saw with despair how the democracies let it occur. He wrote his diary as "a weapon of truth" for future generations, so they could stop their own Nazi types. "Such jackals must never be allowed to rise again," he said. "I want to be there in that fight."

Jimmy Carter: The Last of the Fiscally Responsible Presidents

 

Popular impressions of Jimmy Carter tend to fall into two broad categories.  Many see him as a failed president who mismanaged the economy, presided over a national “malaise,” allowed a small band of Iranian militants to humiliate the United States, and ultimately failed to win reelection.  His final Gallup presidential approval rating stood at 34%—equal to that of George W. Bush.  Among postwar presidents, only  Richard Nixon (24%) and Harry Truman (32%) left office with lower approval ratings.  As the political scientist John Orman suggested some years ago, Carter’s name is “synonymous with a weak, passive, indecisive presidential performance.”  For those who hold this view, Carter represents everything that made the late ‘70s a real bummer.  

 

His supporters, meanwhile, portray him as a unique visionary who governed by moral principles rather than power politics.  They point out that he initiated a groundbreaking human rights policy, forged a lasting Middle East peace agreement, normalized relations with China, pursued energy alternatives, and dived headlong into the most ambitious post-presidency in American history.

 

Both perspectives have their merits.  Yet although even Carter’s admirers would rather ignore his economic record than defend it, popular memory of his economy is off the mark.  Contrary to the prevailing wisdom, by many indices the U.S. economy did relatively well during Carter’s presidency, and he took his role as steward of the public trust seriously.  He kept the national debt in check, created no new entitlements, and steered the nation clear of expensive foreign wars.  Whatever else one may think about the man, it is no exaggeration to say that Jimmy Carter was among the last of the fiscally responsible presidents.

 

Although there is no single measure for evaluating a president’s economic performance, if we combine such standard measures as unemployment, productivity, interest rates, inflation, capital investment, and growth in output and employment, Carter’s numbers were higher than those of his near-contemporaries Ronald Reagan, Richard Nixon, Gerald Ford, and George H.W. Bush.  “What may be surprising,” notes the economist Ann Mari May, “is not only that the performance index for the Carter years is close behind the Eisenhower index of the booming 1950s, but that the Carter years outperformed the Nixon and Reagan years.”  Average real GDP growth under Carter was 3.4%, a figure surpassed by only three postwar presidents: John F. Kennedy, Lyndon Johnson, and Bill Clinton.  Even though unemployment generally increased after the 1960s, the average number of jobs created per year was higher under Carter than under any postwar president.

 

Particularly noteworthy was Carter’s fiscal discipline. Although Keynesian policies were central to Democratic Party orthodoxy, Carter was a fiscal conservative who touted balanced budgets and anti-inflationary measures.  By and large, he stuck to his campaign self-assessment: “I would consider myself quite conservative . . . on balancing the budget, on very careful planning and businesslike management of government.”  

 

Under Carter, the annual federal deficit was consistently low, the national debt stayed below $1 trillion, and gross federal debt as a percentage of GDP peaked below forty percent, the lowest of any presidency since the 1920s. During his final year in office, the debt-to-GDP ratio was 32% and the deficit-to-GDP ratio was 1.7%. In the ensuing twelve years of Reagan and Bush (1981-1993), the debt quadrupled to over $4 trillion and the debt-to-GDP ratio doubled. The neoliberal policies popularly known as Reaganomics had plenty of fans, but in the process of lowering taxes, reducing federal regulations, and increasing defense spending, conservatives all but abandoned balanced budgets.

 

The debt increased by a more modest 32% during Bill Clinton’s presidency (Clinton could even boast budget surpluses in his second term) before it ballooned by 101% to nearly $11.7 trillion under George W. Bush.  Not only did Bush entangle the U.S. in two expensive wars, but he also convinced Congress to cut taxes and to add an unfunded drug entitlement to the 2003 Medicare Modernization Act.  During the Obama presidency, the debt nearly doubled again to $20 trillion. (Obama and Bush’s respective totals depend in part on how one assigns responsibility for the FY2009 stimulus bill.)  Under President Trump, the national debt has reached a historic high of over $22 trillion, and policymakers are on track to add trillions more in the next decade.

 

There are a few major blots on Carter’s economic record. Inflation was a killer.  Indeed, much of Carter’s reputation for economic mismanagement stems from the election year of 1980, when the “misery index” (inflation plus unemployment) peaked at a postwar high of 21.98.  The average annual inflation rate during Carter’s presidency was a relatively high 8% – lower than Ford’s (8.1%), but higher than Nixon’s (6.2%) and Reagan’s (4.5%).  The annualized prime lending rate of 11% was lower than Reagan’s (11.6%) but higher than Nixon’s (7.6%) and Ford’s (7.4%).  Economist Ann Mari May concurs that while fiscal policy was relatively stable in the Carter years, monetary policy was “highly erratic” and represented a destabilizing influence at the end of the 70s.

 

Carter’s defenders note that he inherited a lackluster economy with fundamental weaknesses that were largely beyond his control, including a substantial trade deficit, declining productivity, the “great inflation” that had begun in the late 1960s, Vietnam War debts, the Federal Reserve’s expansionary monetary policy, growing international competition from the likes of Japan and West Germany, and a second oil shock.  “It was Jimmy Carter’s misfortune,” writes the economist W. Carl Biven, “to become president at a time when the country was faced with its most intractable economic policy problem since the Great Depression: unacceptable rates of both unemployment and inflation”—a one-two punch that came to be called “stagflation.”

 

In response, Carter chose austerity.  Throughout the 1970s, Federal Reserve chairmen Arthur F. Burns and G. William Miller had been reluctant to raise interest rates for fear of touching off a recession, and Carter was left holding the bag.  After Carter named Paul Volcker as Fed chairman in August 1979, the Fed restricted the money supply and interest rates rose accordingly—the prime rate reaching an all-time high of 21.5% at the end of 1980.  All the while, Carter kept a tight grip on spending.  “Our priority now is to balance the budget,” he declared in March 1980.  “Through fiscal discipline today, we can free up resources tomorrow.”  

 

Unfortunately for Carter, austerity paid few political dividends.  As the economist Anthony S. Campagna has shown, Carter could not balance his low tolerance for Keynesian spending with other Democratic Party interests.  His administration took up fiscal responsibility, but his constituents wanted expanded social programs.  Meanwhile, his ambitious domestic agenda of industrial deregulation, energy conservation, and tax and welfare reform was hindered by his poor relationship with Congress.

 

The Carter administration might have shown more imagination in tackling these problems, but as many have noted, this was an “age of limits.”  Carter’s successors seem to have taken one major lesson from his failings: The American public may blame the president for a sluggish economy, but when it comes to debt, the sky’s the limit.

 

 

New Evidence from the Clinton Presidential Library on Bill Clinton and Helmut Kohl's Diplomatic Relationship

Clinton and Kohl meet in the Bach House, 14 May 1998 (Bachhaus.eisenach - Own work; CC BY-SA 3.0)

 

“I loved him,” said Bill Clinton in his eulogy during Helmut Kohl’s memorial in July 2017. “I loved this guy because his appetite went far beyond food, because he wanted to create a world in which no-one dominated, a world in which cooperation was better than conflict, in which diverse groups make better decisions than individual actors […]. The 21st century in Europe […] really began on his watch.”[1]

 

Indeed, Clinton’s and Kohl’s tenures connected the post-Cold War world with 21st century politics.[2] Both were united in their efforts to establish a new post-Cold War order and a lasting peace. They believed that there should be a strong European Union and an enlarged NATO, so that Germany would be surrounded by NATO members in the East instead of sitting on the front line of Central European instability. Clinton and Kohl paid meticulous attention to Russia’s inclusion and the emergence of a special set of NATO-Russia partnerships.[3] They saw their endorsement of Russia’s President Boris Yeltsin as an essential investment in democracy and the establishment of market structures in Russia. Both thought that united Germany had to be part of NATO’s military out-of-area engagement in Bosnia, the first deployment of German forces beyond the country’s borders since the end of World War II. The war in Bosnia proved that history and old conflicts had the potential to upset the emergence of a peaceful and prosperous Europe.

 

These issues were at the forefront of Clinton’s and Kohl’s meetings and telephone conversations. In December 2018, the Clinton Presidential Library released nearly 600 pages documenting their perception of international challenges and giving insights into the motives of their statecraft. Clinton and Kohl had trust in one another, and Clinton often called Kohl asking for advice on crucial topics.[4]

 

From today’s vantage point, the formation of Europe’s post-Cold War order looks easy. In fact, it was an enormous task and a constant challenge. In February 1995, for instance, Kohl told Clinton that “with respect to Russia, Central and Eastern Europe, NATO expansion, and the status of Ukraine, these are the essential points. No matter what we do with Moscow, if we fail in Ukraine (and the former Yugoslavia) we are lost. […] The situation in Europe is very vague and ambiguous.”[5]

 

During this critical phase, the Clinton-Kohl partnership provided leadership and vision. Since his days as a Rhodes scholar in Oxford in the late 1960s, Clinton had a keen interest in Europe and sympathy for Germany. During his first meeting with Kohl in March 1993, Clinton noted that “when he had been a student in England he had visited Germany as often as possible. At the time, he had been ‘almost conversational’ in German and still could understand a lot.”[6]

 

Kohl knew that Clinton expected united Germany to assume more international responsibility as a partner in NATO. In 1993, Kohl fought hard for the amendment of Germany’s constitution allowing for the country’s participation in NATO’s first intervention outside the member nations in Bosnia. “Today,” Kohl said at their March 1993 meeting, “good US-German relations are even more important than they had been thirty years ago when the division of Germany and the terrible fear of war had made things psychologically easier. People now have a different fear, and are asking whether their leaders can cope with new challenges or are drifting ‘like wood on the Potomac.’ This makes new German-American ties necessary.”[7]

 

Clinton and Kohl also spoke with one voice when it came to support for Russia’s President Boris Yeltsin. Both used personal diplomacy and positive feeling to interact effectively with Yeltsin despite frank disagreements over issues such as the war in Chechnya, NATO enlargement and the Kosovo war. Both believed in their capacity to bring Yeltsin around on the NATO question, doing all they could to allay Russia’s fears and anxieties.[8] “In part,” as Clinton told Kohl in December 1994, “Yeltsin has a real concern. The Russians don't understand how everything will look 10-15 years from now.”[9]

 

Clinton’s and Kohl’s aim was to open up NATO, but slowly, cautiously and in combination with an expanded effort to engage Russia. Indeed, they managed to keep NATO enlargement from harming Yeltsin’s reelection in 1996 while ensuring that NATO responded to Central and Eastern European desires to join the alliance. Both established a close personal rapport with Yeltsin and used countless meetings and telephone conversations to coordinate and synchronize their policies toward Russia. In September 1996, when Yeltsin announced his forthcoming open-heart surgery, Kohl called Clinton to provide a detailed report of his recent visit to Yeltsin’s dacha. “I think it is important that all of us be supportive of him during his surgery and that we do not create an impression of taking advantage of him during his convalescence,” Clinton said.[10]

 

In 1998, when Kohl’s tenure came to an end after 16 years as Chancellor, Clinton called him, praised his achievements and emphasized their enduring friendship: “Hillary and I think you are wonderful. I will always treasure our friendship and will be your friend forever. I am grateful to have worked with you and grateful that you always did the right thing […].”[11] Finally, in April 1999, Bill Clinton awarded Helmut Kohl the Presidential Medal of Freedom for his lifetime achievements and leadership. Their differences in age and style produced bonds rather than friction. Both saw politics as a vehicle for major improvements in everyday life. Both sensed that Europe and the United States had to build the bridge to the 21st century – and that they had to do it together. The Clinton-Kohl documents shed new light on their efforts to facilitate the emergence of an interdependent and transnational world based on freedom, peace, security and prosperity.

 

[1] Remarks by Bill Clinton, European Ceremony of Honor for Dr. Helmut Kohl, Strasbourg, 1 July 2017, see http://www.europarl.europa.eu/pdf/divers/eu-ceremony-of-honour-mr-kohl-20170701.pdf.

[2] See Bill Clinton, My Life (New York: Knopf, 2004); Helmut Kohl, Erinnerungen 1990–1994 (Munich: Droemer Knaur Verlag, 2007).

[3] See Strobe Talbott, The Russia Hand: A Memoir of Presidential Diplomacy (New York: Random House, 2002); James Goldgeier and Michael McFaul, Power and Purpose: U.S. Policy Toward Russia after the Cold War (Washington, DC: Brookings Institution Press, 2003); James Goldgeier, “Bill and Boris: A Window Into a Most Important Post-Cold War Relationship,” Texas National Security Review 1:4 (August 2018), 43–54, see https://tnsr.org/wp-content/uploads/2018/08/TNSR-Vol-1-Iss-4_Goldgeier.pdf.

[4] See https://clinton.presidentiallibraries.us/items/show/57651.

[5] Memcon Clinton and Kohl, 9 February 1995, see https://clinton.presidentiallibraries.us/items/show/57651, 198.

[6] Memcon Clinton and Kohl, 26 March 1993, see https://clinton.presidentiallibraries.us/items/show/57651, 16.

[7] Ibid., 15.

[8] See James Goldgeier, Not Whether But When: The U.S. Decision to Enlarge NATO (Washington, DC: Brookings Institution Press, 1999); Ronald Asmus, Opening NATO’s Door: How the Alliance Remade Itself for a New Era (New York: Columbia University Press, 2002); Daniel Hamilton and Kristina Spohr (eds.), Open Door: NATO and Euro-Atlantic Security after the Cold War (Washington, DC: Brookings Institution Press, 2019); Mary E. Sarotte, “How to Enlarge NATO: The Debate inside the Clinton Administration, 1993–95,” International Security 44:1 (Summer 2019), 7–41, see https://www.mitpressjournals.org/doi/pdf/10.1162/isec_a_00353.

[9] Memcon Clinton and Kohl, 5 December 1994, see https://clinton.presidentiallibraries.us/items/show/57651, 166.

[10] Telcon Clinton and Kohl, 10 September 1996, see https://clinton.presidentiallibraries.us/items/show/57651, 364.

[11] Telcon Clinton and Kohl, 30 September 1998, see https://clinton.presidentiallibraries.us/items/show/57651, 581.

Why Trump Is Different than Reagan, Either Bush, Dole, McCain, or Romney—He’s Evil

 

If we look at Republican candidates for president over the last forty years, we find one significant difference between Donald Trump and his party’s predecessors. Despite all of his forerunners’ failings, it would be a mistake to label any of them as evil. Mistaken or misguided at times? Yes. But evil? No. Even progressive leftists should admit that occasionally, and sometimes more than occasionally, the six pre-Trump Republican candidates displayed moments of basic human decency. 

 

A few definitions of evil are “profoundly immoral and wicked” and “something that brings sorrow, trouble, or destruction.” Doesn’t that fit Trump? 

 

Several months ago, Michael Sean Winters, who “covers the nexus of religion and politics” for the National Catholic Reporter, wrote of “the seven deadly sins of Donald Trump.” One after another, the author ticks them off—greed, lust, gluttony, sloth, envy, wrath, and pride—and comments, “What we see with President Donald Trump and his cast of sycophants and co-conspirators . . . is a rare thing: All seven deadly sins on display at once.”

 

Winters observes that “greed has long been a motivating factor in Trump's life.” Since becoming president, he has added greed and lust for power to his long-time pursuit of money and fame and his lust for women—the author mentions only in passing “that horrible tape,” in which Trump (in 2005) stated he was able to grab women “by the pussy.” And no mention is made of some 23 women who, since the 1980s, have accused Trump of various types of sexual misbehavior, including rape. “The evidence of gluttony is an extension of his greed and lust for power: He not only wants power, he can't get enough of it. Never enough money. Never enough women. Never enough wives. Like all gluttons, he leaves a mess in his wake.” Sloth? “As president, he famously can't be bothered reading his briefing papers,” and as of late October 2019, “Trump had 224 golf outings.” Regarding all his mentions and put-downs of President Obama, “it must be envy.” As for wrath, Winters predicted “we will see more and more wrath in the coming months.” And sure enough we did in early February, at the 68th annual National Prayer Breakfast (see below) and with the firing of two men who testified against him in impeachment hearings. Finally, we come to pride, “the deadliest of the seven deadly sins.” “What astounds, really, is that Trump's pride is the pride of the con man. He is proud of his ability to make people think he is a man of abilities when he really is a man of few gifts beyond those we associate with showmanship.”

 

In earlier HNN articles (see, e.g., this one of mid-2016), I have criticized Trump for his colossal egotism and lack of humility, a virtue that Winters identifies as pride’s opposite. Many others concerned with ethics have also commented on it, as conservative columnist David Brooks did in 2016 when he wrote that Trump’s “vast narcissism makes him a closed fortress. He doesn’t know what he doesn’t know and he’s uninterested in finding out. He insults the office Abraham Lincoln once occupied by running for it with less preparation than most of us would undertake to buy a sofa.”  

 

Brooks once taught a course at Yale on humility, and his most recent book is The Second Mountain: The Quest for a Moral Life (2019). Conservative Trump critics Michael Gerson and Peter Wehner also wrote a book on morality entitled City of Man: Religion and Politics in a New Era (2010). In February 2019, Gerson delivered an invited sermon at Washington’s National Cathedral. Wehner remains a Senior Fellow at the Ethics and Public Policy Center.

 

More recently, Gerson wrote the essay “Trump’s politicization of the National Prayer Breakfast is unholy and immoral.” Trump used “a prayer meeting to attack and defame his enemies,” and “again displayed a remarkable ability to corrupt, distort and discredit every institution he touches,” Gerson observed. Now, after the Senate impeachment trial, Trump “is seized by rage and resentment,” and “feels unchecked and uncheckable.” Gerson also warned that Trump has “tremendous power,” and “we are reaching a very dangerous moment in our national life.”

 

About a week before Gerson’s article appeared, The Atlantic ran Peter Wehner’s much longer essay, “There Is No Christian Case for Trump.” Much of it deals with the impeachment charges against Trump and his wrongdoing regarding Ukraine, but Wehner also quotes favorably a December editorial by Mark Galli in “the evangelical world’s flagship publication, Christianity Today”: “[Trump] has dumbed down the idea of morality in his administration. He has hired and fired a number of people who are now convicted criminals. He himself has admitted to immoral actions in business and his relationship with women, about which he remains proud. His Twitter feed alone—with its habitual string of mischaracterizations, lies, and slanders—is a near perfect example of a human being who is morally lost and confused.”

 

Wehner also mentions other unethical Trump behavior—“authorizing hush-money payments to a porn star,” “misogyny,” “predatory sexual behavior,” the “sexualization of his daughters,” and “his use of tabloids to humiliate his first wife, Ivana, when he was having an affair with Marla Maples.”

 

Columnist Ross Douthat is still one more conservative religious critic of Trump. Author of a critical study of Pope Francis, Douthat has had this to say about our president: he is a “debauched pagan in the White House,” and he is “clearly impaired, gravely deficient somewhere at the intersection of reason and judgment and conscience and self-control.”

 

Among writers who are less conservative than Brooks, Gerson, Wehner, and Douthat, comments about Trump’s evilness are even more widespread. To take just one example, we have Ed Simon, an HNN contributing editor. In an earlier article on Trump’s “religion,” I quoted Simon: “If the [Biblical] anti-Christ is supposed to be a manipulative, powerful, smooth-talking demagogue with the ability to sever people from their most deeply held beliefs who would be a better candidate than the seemingly indestructible Trump?”

 

The comments above do not exhaust the list of Trump’s evils, and a few should be amplified or added. 1) He is a colossal liar. As The Washington Post stated, “Three years after taking the oath of office, President Trump has made more than 16,200 false or misleading claims.” 2) He lacks empathy and compassion. For example, in late 2015, he mocked a journalist's physical disability. 3) His boastful remarks about himself are examples of delusional pride—e.g., “in my great and unmatched wisdom,” and “I have a great relationship with the blacks." 4) Although it’s no easy job to identify Trump’s worst sin, his greatest may be what he is doing to our environment.

 

Any article dealing with Trump’s evil must contend with the overwhelming support he receives from Evangelicals. Why this is so, and why they are wrong, is addressed in Wehner’s essay mentioned above. Also, although most evangelicals are conservative and support Trump, there are “progressive evangelicals” who believe “the evangelical establishment’s embrace of Trumpism—unbridled capitalism, xenophobic nativism, and a willingness to engage with white supremacy—goes against everything Jesus stands for.”

 

Finally we come to the question, “Does a president’s morals matter?” Did not John Kennedy and Bill Clinton engage in adulterous and/or inappropriate sexual behavior? Was the more upright Jimmy Carter a better president than these two? For all the “trickiness” of “Tricky Dick” Nixon and the shame of Cambodian bombing and Watergate, did he not pursue effective detente policies toward the USSR and Communist China?

 

The answer is that presidential morals do matter, but only somewhat—though more than we realized until Trump demonstrated how costly their absence can be. Political wisdom, which itself requires certain virtues, is important, but so too are other skills, such as interpersonal and administrative ones.

 

Some historians have written of the importance of presidential values and virtues. FDR biographer James MacGregor Burns maintains that “hierarchies of values . . . undergird the dynamics of [presidential] leadership,” and “considerations of purpose or value . . . lie beyond calculations of personal advancement.” Focusing on Presidents Lincoln, the two Roosevelts, and Lyndon Johnson, Doris Kearns Goodwin writes that the four presidents were “at their formidable best, when guided by a sense of moral purpose, they were able to channel their ambitions and summon their talents to enlarge the opportunities and lives of others.” Ronald Feinman has stated that “the most significant factor” in rating presidents’ greatness “is when they demonstrate moral courage on major issues that affect the long term future.” But he, like presidential historians Robert Dallek and Michael Beschloss, has commented on Trump’s lack of positive values and unfitness for office.

 

Still, in the midst of primary season, much of the political talk is about the various candidates’ proposed policies and whether they would be better or worse than what Trump has delivered and promises. “Medicare for all?” “Free public college tuition?” “More or less government regulation?” Etc. Etc. But we are missing the major point. Like the six Republican presidential candidates who preceded Trump (and like many Trump supporters), all of the major 2020 Democratic candidates are decent human beings. Trump is evil. He is a liar, hatemonger and polarizer who has little knowledge of, or respect for, America’s traditions and better political values. Unable to tolerate criticism, he increasingly surrounds himself with flatterers and toadies.

 

In Episode 8 of the second season of the HBO series “Succession,” Logan Roy’s brother says this about the media tycoon: “He's morally bankrupt. . . . In terms of the lives that will be lost by his whoring for the climate change deniers, there's a very persuasive argument to be made that he's worse than Hitler.” I thought not only about Rupert Murdoch, head of the media conglomerate that runs Fox News, but also about Trump. For many, now and in the future, the 2020 presidential election may not just be an ordinary U.S. election, but quite literally a matter of life or death.

Remembering a Soldier's Bravery at Iwo Jima, 75 Years Later

Though I was just thirteen when I decided to march on Japan, ride the wave of American retribution, and make the Japanese pay for the attack on Pearl Harbor, I had already passed from boy to man. I thought I knew it all. Though I had yet to see friends evaporate before my eyes, or an enemy bleed out and die by my own hand, I had loved and now I had hated and I considered myself more than ready to go to war.

 

That I would find both the need and the strength to pull a live hand grenade to my gut while a second grenade lay beneath me, ready to detonate, would have astonished me even in my moments of greatest bravado. I went to war with vengeance in my heart. I went to war to kill. Such is the irony of fate that I will be remembered for saving the lives of three men I barely knew.

 

My journey into manhood began one bleak October afternoon when my beloved father drew his final breath, having lost his long battle with cancer. I was eleven years old. Afterward, I pushed away any man the lovely widow, Margaret Lucas, attempted to bring into our lives. I did not need a man; I was one. I was a tough kid who loved to fight. I was rebellious by nature and had a hair-trigger temper. Troubled in general: that was the young Jack Lucas.

 

As my inner turmoil heated up so did world events, and with the reprehensible bombing of Pearl Harbor, we both boiled over. I lived by my wits and often made up my own rules, leaving a trail of broken jaws and busted lips as I went along. So, it was not much of a stretch on my part when I found a way to join the United States Marine Corps, though I was only fourteen at the time. I went AWOL to catch a train headed in the direction of the war. Then I stowed away on a ship to reach one of the Pacific’s worst battlefields. I figured, if I figured anything at all, that if I was shrewd enough to impose my will on the United States Marine Corps, the Japanese would give me little trouble.

 

Having already borne the weight of my life’s biggest loss, I was not afraid to face whatever awaited me on Red Beach One, Iwo Jima. I had no way of knowing that in a matter of a few short hours I would make the most important decision of my life and in the lives of three members of my fire team. The choice would be mine: either I could die alone or all of us would die together.

 

Excerpted from Indestructible: The Unforgettable Memoir of a Marine Hero at the Battle of Iwo Jima. Reprinted with permission. Copyright Harper Collins, 2020. 

An Interview with Historian Dr. David Dzurec of the University of Scranton

 

The University of Scranton is a private Catholic and Jesuit institution of 3,729 undergraduate students, founded in 1888 and located in Scranton, Pennsylvania. The History Department engages with the public through its connection to local history. The department “seek[s] to provide our students with an understanding of the significant institutions, events, trends and individuals that have shaped that experience, thus helping them to develop a better understanding of contemporary cultures and the human condition.” I spoke with Dr. David J. Dzurec, the History Department Chair at the University of Scranton, about the University’s involvement with the local area. He discussed the department's emphasis on service learning and how it uses local history inside the classroom.

 

Q: What is the value in local history?

A: Local history helps provide a context and an immediate importance for the work and the research that our students are conducting.  Even in a survey-level class, integrating aspects of local history or connections to the region helps bring national and world history events home to students.  Additionally, the larger Scranton community benefits greatly from knowing its own past.  Ideally, one of the jobs of the history department at the University of Scranton is to help bring these things (student work and the larger community) together.

 

Q: How are students engaging in the classroom with local history through service-learning?

A: At the University of Scranton, in addition to numerous internships, our history students have engaged in a variety of projects within the community and in coordination with local, state, and national institutions.  Our students have conducted research and helped organize projects with the Lackawanna County Historical Society (which is right on campus), the Weinberg Library’s Special Collections, the Steamtown National Historic Site, and the Pennsylvania State Library.  Students in our “Craft of the Historian” course worked with several of these institutions to digitize some of the Scranton family papers.  This project not only helped the students develop digitization and preservation skills, but it also allowed them to connect to the history of the local community (especially since the University’s campus is built in part on the Scranton family estate).  In another course, students studied the process of conducting oral history interviews and applied those skills by interviewing members of the Latinx community in Scranton.  These interviews were employed in research projects as part of our “Digital History” course.  Another ongoing project in our digital history class builds on student work researching the history of coal mining in the Scranton region.

 

Q: How is civil learning relevant to an understanding of history?

A: Understanding local history helps students develop a context for global and national historic events.  When students are able to connect moments like the “Square Deal” to the Lackawanna County Court House in downtown Scranton it adds a level of understanding they might not otherwise have had.

 

Q: Are there any specific documents that the University of Scranton possesses of historical significance either to the local area or beyond? If so, how do students and the public get to engage with these resources?

A: The University’s Special Collections include some of the Scranton family papers (part of which one of our classes worked to digitize), the Congressional Papers of Joseph McDade, Reports from the Pennsylvania Mining Commission, and the Passionist Historical Archives.  In addition to digitizing some of these special collections, one of our faculty members, Fr. Robert Carbonneau, works with students to make use of the Passionist Archives in their research and Fr. Carbonneau helped to organize a special exhibit of these documents. A number of our classes have conducted research in the various collections at the University and many of our students have completed internships working with Librarian Michael Knies in historic preservation.

 

Q: What do you see as Scranton’s broader role in history? How does it contribute to any other historical narratives?

A: Scranton’s long history of mining (specifically Anthracite Coal) and the wave of immigrants who came to the region to work in those mines place the history of Scranton squarely in the center of the history of the United States in the early 20th Century.  Most notably, the Anthracite Coal Strike of 1902 serves as a critical moment in the history of industrialization, labor history, and TR’s “Square Deal.”

 

Q: How do you hope to expand and broaden the History Department’s public engagement?

A: Local history has become a rich source of material for our students as they develop their research skills and learn about the variety of tools available to them in the research and writing process.  Going forward, I hope to see our students continue to tell the stories of Scranton’s immigrant community, whether they arrived in the early 20th century or the early 21st century, in a variety of formats (from classroom presentations to digital projects).  I would also like to see us expand on the work we have already done to develop additional projects in conjunction with some of our local historic sites, like the Lackawanna County Historical Society and the Steamtown National Historic Site.  I also think our students would benefit from the development of a new digital history lab that would allow them to develop projects based on their local historical research that could be made widely available on the web.

 

 

 

"We Can Always Learn More History:" An Interview with Historian Seth Center

 

Seth Center is senior fellow and director of the Brzezinski Institute’s Project on History and Strategy at the Center for Strategic and International Studies (CSIS). His scholarship employs a historical lens to examine the contemporary national security agenda, develop applied history findings to inform responses to future challenges, and connect diplomatic and military historians to the policy community (You can read more about Dr. Center from the CSIS website here).

 

When did you decide you were interested in history?

 

I was fortunate to have terrific undergraduate professors at Cornell University like Michael Kammen, Walter LaFeber, and Sherman Cochran. That experience led to graduate school at the University of Virginia, where I had equally terrific professors including Melvyn Leffler (my dissertation advisor), Brian Balogh, and Philip Zelikow. Mentorship is exceptionally important in helping aspiring historians become professional historians.

 

How did that initial interest ultimately lead you to become the senior fellow and director of the Brzezinski Institute’s Project on History and Strategy at the Center for Strategic and International Studies?

 

Serving in government allowed me to use historical methods for practical purposes like explaining the origin of a particular diplomatic problem or the evolution of a part of the US government. It’s not a traditional academic job, but it does use the same historical methods. Think tank historical work is similar--it’s about leavening policy dialogue that usually runs through presentist concerns with a little bit of historical perspective.

 

Prior to joining the CSIS, you served at the National Security Council (NSC) as the director for National Security Strategy and History and at the U.S. Department of State as a historian. How was historical research and analysis utilized to inform policy?

 

History is ubiquitous in making foreign policy, conducting diplomacy, explaining actions, and understanding partners’ approach to the world. Unfortunately, history can be as recent as yesterday in government. Because people are constantly moving and shifting jobs, why we are who we are and how we got here is often as distant as ancient history. A good historian can help an organization think about the costs and benefits of sustaining a current path or changing course.

 

History can help recapture the original reasons a decision was made and help to surface whether current assumptions are still valid. It can help policymakers think through alternative policy choices. History can provide a sense of proportion and scale. It can help answer the question of “are we confronting something new?”; “How important or significant is the event we are facing?”

 

History comes in two forms. First, comparison or analogy. History can provide similar episodes in the past to help understand or assess current conditions. Comparing a current event to a past event can help clarify what is novel and what is familiar, and then allow one to think about how to respond to a current situation with more precision, or at least better judgment. Second, history can illuminate the deeper roots of a specific situation or event. This type of history is particularly useful in helping to understand the evolution of a diplomatic relationship or to understand how a competitor is approaching a situation.

 

One of the goals stated on the CSIS website for the project on history and strategy is to forge the “connections needed for policymakers and historians to be more useful to each other.” Why is this relationship between policymakers and historians so important? What makes applying a historical lens, in your opinion, effective in informing policymakers?

 

Time and urgency are important dynamics in policymaking. History and historical reflection often take time, and history is produced without any particular regard for urgent contemporary concerns. The challenge in getting history to policymakers is to ensure it reaches the reader in a timely manner so they can think about its meaning and implications before they have to act. Often the windows for analysis and action are tight, and historians have to hit those windows to be effective.

 

When advocating for your research to those who do not already have an academic foundation of historical knowledge, what is the most difficult aspect of communicating the value of considering historical ideas or concepts?

 

Time is almost always the biggest barrier. Getting busy people to consider the past and take the time to read about what has come before is usually the initial barrier. The second challenge is the natural tendency to see issues as unique or without precedent, which discourages looking backwards. That tendency is amplified when the most common historical comparison might suggest a particular analysis or course of action could produce disaster.

 

Are there any unexpected or insufficiently discussed ways in which history is useful today?

 

We can always learn more history. In an ideal world, policymakers would possess or seek to understand the history of a particular problem before making a decision, and also consider what similar situations in the past might illuminate the challenges of a particular situation and help anticipate the best ways to move forward.

 

Taking on common and popular historical myths is always frustrating for historians. Changing basic interpretations of events or people once they have formed in the popular imagination is tough--that’s true for academic historians and historians in the policy world.

 

Is there a particular accomplishment or project that stands out since working at the CSIS? Why was this achievement valuable?

 

One interesting project we are working on is exploring why the Cold War has become such a prevalent analogy for understanding the current US-China relationship. We have asked historians to assess the many ways the analogy is being used in an effort to inform current policy debate with historical knowledge. An interesting and important question has emerged: if the dissimilarities outweigh the similarities, then should the analogy be used at all? History does not lend itself to basic mathematical formulas, which leaves a lot of room for interpretation. We are trying to make the comparison a little more precise by sharpening the distinctions between the past and present.

 

The HNN website states that “the past is the present and the future too.” What does that mean to you, and do you agree or disagree?

 

Basically, we confront very few truly novel challenges in the world. A deeper knowledge of history can help us focus on what those novel challenges are. For the rest of the problems, we should consider how we have responded as individuals, institutions, and nations so that we can anticipate future action, and, in an ideal world, reduce risks and mistakes.

 

 

 

 

 

 

Trump's Acquittal and the History of the Intentionally Undemocratic Senate

Years and decades from now, it’s not improbable that the January 31st scheduling confluence of both Great Britain’s official exit from the European Union and the Senate’s vote to dismiss witness testimony in the Donald Trump impeachment “trial” will mark that date as a significant nadir in trans-Atlantic democracy. A date that will live in infamy, if you will, or perhaps rather “perfidy” as Senate Minority Leader Chuck Schumer admirably put it. The Senate Republicans’ entirely craven, self-serving, undignified, and hypocritical vote to shield their president from any sort of examination is entirely unsurprising, though somehow still shocking. 

Senator Lamar Alexander’s cynical justification of his vote, whereby “there is no need for more evidence to prove something that has already been proven,” is in some manner the credo of the contemporary Republican Party. Spending the last three years bellowing “fake news” at anything which was disagreeable to them, Alexander’s honesty and commitment to reality is in a way refreshing. Alexander acknowledges that Trump is guilty – he just doesn’t care, so why waste time with an actual trial? What’s more surprising than the Republicans setting the process up for the inevitable “acquittal” this week is that so-called “moderates,” like Senator Susan Collins and Senator Mitt Romney, actually did the right thing. The better to trick some centrist Democrats into thinking that the GOP hadn’t completely lost its mind. 

Nobody of good conscience or sense could possibly think that the Republican role in the Senate impeachment proceedings was anything other than a successful attempt at cover-up, one with the ramification of letting Trump correctly know that he can do whatever he wants with absolutely no repercussions. The crossing of this particular Rubicon is by no means the only, or by far even the worst, democratic degradation over the past few years, but it’s certainly a notable one as Republicans from Senate Majority Leader Mitch McConnell on down are not even bothering to hide their lack of ethics. From that perspective, as disturbing as Republican cravenness may be, it’s very much in keeping with the zeitgeist. Theorist Astra Taylor observes this in her excellent treatise Democracy May Not Exist, but We’ll Miss It When It’s Gone, when she notes that “recent studies reveal that democracy… has weakened worldwide over the last decade or so… It is eroded, undermined, attacked… allowed to wither.” Though the concept of “democracy” and the upper chamber of the United States legislative branch are hardly synonymous with one another, it’s crucial now more than ever to keep in mind what’s intentionally undemocratic about the Senate as an institution.

In the hours after the predictable Senate vote, reactions from centrist liberals to those further to the left seemed to anecdotally break down into two different broad, emerging consensuses. While most camps can, should, and must be united in trying to make sure that Trump only serves one term, the analysis of what the inevitable Senate acquittal means was wildly divergent. 

Among many centrist liberals there was a halcyon valorization of a Senate that never quite existed, a Pollyannaish pining for a past of process, decorum, and centrist sensibility. Such is the sentiment of Boston Globe editorialist Yvonne Abraham when she earnestly asked “What is there to say about this week’s shameful events in the U.S. Senate that doesn’t sound hopelessly naïve?” Eager to prove that they never enjoyed the West Wing, pundits further left emphasize, accurately, that the Senate itself is explicitly an institution predicated on the rejection of the popular will, or as one wag on Twitter put it, “watching the resistance libs get all hot and bothered about how fundamentally undemocratic the senate is would be balm for the soul, but they won’t learn a thing from this whole nonsense affair.”

Except here’s the thing – both things can be true. The Senate can be an institution always predicated on unequal representation, and the Republican vote can still be a particularly shameful moment. What can and must be learned from the affair isn’t that resistance to Trump has to be fruitless, but rather that we can’t expect institutions and procedures to be that which saves us.  

Crunching the numbers is sobering if one really wants to know precisely how undemocratic the Senate actually is. An overwhelming majority of Representatives voted to impeach Trump in the far more democratic (and Democratic) House of Representatives, reflecting a January 20th CNN poll which found that 51% of Americans narrowly supported the president’s removal from office. Yet the Senate was able to easily kill even the possibility of such a result (even beyond the onerous 2/3rds requirement for conviction, which has historically made such an outcome a Constitutional impossibility). Ian Millhiser explains in Vox that “more than half of the US population lives in just nine states. That means that much of the nation is represented by only 18 senators. Less than half of the population controls about 82 percent of the Senate.” He goes on to explain that in the current Senate, the Republican “majority” represents fifteen million fewer people than the Democratic “minority.”

Such an undemocratic institution is partially, like the Electoral College, a remnant of an era when small states and slave owning states were placated by compromises that would give them political power while the Constitution was being drafted. The origins of the institution are important to keep in mind, because even though population disparities between states like Wyoming and California would have been inconceivable to the men who drafted the Constitution, the resultant undemocratic conclusions are a difference of degree but not of kind. When the Senate overturns the will of the people, that’s not a bug but a feature of the document. The point of the Senate was precisely to squelch true democratic possibility – it’s just particularly obvious at this point. What’s crucial for all right-thinking people who stand in opposition to Trump is to remember that that’s precisely the purpose of the Senate, and that a complacent belief in the fundamental decency of institutions is dangerous. 

So valorized is the Constitution in American society, a central text alongside the far-more-radical Declaration of Independence in defining our covenantal-nationality, that there can be something that almost seems subversive in pointing out its obviously undemocratic features. Yet the purpose of the Constitutional Convention was in large part to disrupt the popular radicalism of the Articles of Confederation that structured governance from the Revolution until Constitutional ratification. While there may be truth in the fact that the Constitution was necessary to forge a nation capable of defending and supporting itself, the Articles were a period of genuine democratic hope, when radical and egalitarian social and economic arrangements were possible in at least some states. Literary scholar Cathy Davidson argues in Revolution and the Word: The Rise of the Novel in America, that far from enacting some kind of democratic virtue, ratification signified an eclipse of radical possibility, noting “the repressive years after the adoption of the Constitution.” For Davidson, the much-valorized drafters in Philadelphia met to tamp down the democratic enthusiasms of the Articles, concerned as they were about the “limits of liberty and the role of authority in a republic.”

The Constitutional Convention is thus understood more properly as a type of democratic collapse, like Restoration after the seventeenth-century English Revolution, or the end of Reconstruction following the American Civil War. Historian Woody Holton writes in Unruly Americans and the Origins of the Constitution that though “Today politicians as well as judges profess an almost religious reverence for the Framers’ original intent,” reading Federalist arguments from the eighteenth-century indicates that the purpose of the Constitution was “to put the democratic genie back in the bottle.” Such was the position of one Connecticut newspaper which in 1786 argued that state assemblies paid “too great an attention to popular notions,” or of the future Secretary of the Treasury Alexander Hamilton, recently transformed by a Broadway musical into a hero of neoliberal meritocratic striving, who complained that he was “tired of an excess of democracy.” Only two generations later, and partisans of democratic reform understood all too clearly that the Constitution was a profoundly compromised document, with abolitionist William Lloyd Garrison describing it as “a covenant with death, an agreement with hell.” Historian Gordon Wood famously argued that the “American Revolution was not conservative at all; on the contrary: it was as radical and as revolutionary as any in history,” and that may very well be true. But what also seems unassailable is that the Constitution was in some manner a betrayal of that radicalism.

If Garrison, and the young Frederick Douglass, were in agreement with other radicals that the Constitution was reactionary, then progressives would come to embrace the document because of an ingenious bit of rhetorical redefinition born of necessity during the Civil War. President Abraham Lincoln is the most revolutionary of Constitutional exegetes, because he entirely reframed what the document meant through the prism of the democratic Declaration of Independence. Garry Wills in the magisterial Lincoln at Gettysburg: The Words That Remade America argued that at Gettysburg, “Lincoln was here to clear the infected atmosphere of American history itself, tainted with official sins and inherited guilt. He would cleanse the Constitution… [by altering] the document from within, by appeal from its letter to the spirit, subtly changing the recalcitrant stuff of that legal compromise, bringing it to its own indictment.” Calling it among the “most daring acts of open-air sleight of hand ever witnessed by the unsuspecting,” Wills argues that “Lincoln had revolutionized the Revolution, giving people a new past to live with that would change their future indefinitely.” By providing a jeremiad for a mythic Constitution that never was, Lincoln accomplished the necessary task of both imparting to it a radical potential which didn’t exist within its actual words, while suturing the nation together. 

This was a double-edged sword. On the one hand, by having rhetorical and legal recourse to Constitutionality much progress has been made throughout American history. But there is also the risk of deluding oneself into thinking that the Constitution is actually a democratic document, which goes a long way to explaining the frustration among many when the Senate acts as they should expect it to. When we assume that procedure is salvation, heartbreak will be our inevitable result. The question for those of us on the left is how do we circumnavigate the extra-democratic aspects of the Constitution while living within a Constitutional republic? We could attempt another rhetorical “cleansing” of the Constitution in the manner of Lincoln, a reaffirmation of its spirit beyond its laws – and much could be recommended in that manner. 

I wonder, however, if disabusing ourselves of some of our illusions might be preferable, and that there might be something to recommend in embracing a type of “leftist devolution,” a commitment to a type of small-scale, regional, and local politics and solidarity that we often ignore in favor of the drama of national affairs. Too often we’re singularly focused on the pseudo-salvation of national politics, forgetting that democracy is far larger than the Constitution, and has more to do than just with what happens in Washington. Taylor writes that “Distance tends to give an advantage to antidemocratic forces… because people cannot readily reach the individuals in power or the institutions that wield it,” explaining that “Scale is best understood as a strategy, a means to achieve democratic ends,” for “Democracy begins where you live.” All politics must be local, something that the right has understood for generations (which is, in addition to inequities established in their favor, part of why they’re so successful right now). Democracy, and agitation for it, happens not just in the Senate, but in state-houses, on school boards, on city councils, in workplaces. It must happen everywhere.

Was Nixon Really Better Than Trump?

 

For those who think the Trump/Nixon comparisons are overdone or exaggerated, or that Nixon was a better person or president than Trump, think again. Both men were cut from the same bolt of cloth. They share common character defects: hyper-insecurity, overwhelming feelings of victimhood and unfair vilification, hatred of losing, authoritarian and bullying complexes, and loathing of enemies.

 

We recently witnessed another shared attribute: when things go well, they turn ugly. Neither man seems capable of enjoying or appreciating moments of vindication or success. Instead, they become laser-focused on revenge against those they believe dishonestly attacked them. At last on the mountaintop, their instinct is to start throwing rocks and boulders at those below.

 

The difference between Trump and Nixon is that Trump acts out in public; Nixon did it behind the scenes as secret microphones picked up his every word. Ominously for the nation, there is another important difference. Nixon sought revenge at the beginning of his second term but was effectively stopped from exercising his worst impulses when the Watergate scandal began to cripple his presidency. In this respect, Trump is Nixon in reverse. He has survived his impeachable scandal and now has free rein to act out on his instincts for vengeance.

 

Trump took to the warpath the day after winning acquittal in the Senate impeachment trial. At the National Prayer Breakfast the following morning, Trump held up two newspapers with the headline “ACQUITTED” as he entered the room and then he mocked House Speaker Nancy Pelosi and her faith, saying, “I don’t like people who say, ‘I pray for you’ when they know that that’s not so.” She sat just feet away on the dais. Trump called his impeachment “a terrible ordeal by some very dishonest and corrupt people.”

 

That was only the start. Later in the day, the president “celebrated” his acquittal in the East Room of the White House, surrounded by Republican members of Congress and Fox News hosts in an hour-long profanity-laced pep rally. “We went through hell unfairly, did nothing wrong,” he told an animated audience. He called the Russia investigation “bullsh*t.” He reiterated his theme that what he had gone through in his first three years in office has been “evil.” He said: “It was corrupt. It was dirty cops. It was leakers and liars. And it should never ever happen to another president.”

 

And, as might be expected, the knives have come out. Lieutenant Colonel Vindman, his brother, and Gordon Sondland (two impeachment witnesses and a bystander) have been summarily dismissed.

 

Nixon’s tapes show an embattled president acting the same way, only behind closed doors.

 

On the night of January 23, 1973, President Nixon announced on national radio and television that an agreement had been reached in Paris to end the war in Vietnam. This was the culmination of his most important pledge as a presidential candidate in 1968 and 1972. Nixon swore he’d bring “peace with honor,” and finally in the first days of his second term he could proclaim its fulfillment. Nothing was more important to this man who was raised by a devout Quaker mother. In his first inaugural, in fact, Nixon declared that “the greatest honor history can bestow is the title of peacemaker.” This is the epitaph that graces his headstone at his presidential library and childhood homestead in Yorba Linda, California.

 

Yet on the day of this supreme breakthrough, Nixon was almost desperate to get even with his perceived enemies. His meetings and calls with presidential assistant (and later evangelical Christian) Chuck Colson are astonishing and vulgar.

 

Once it was clear the peace accord had been initialed, Nixon met with Colson and begged him to marshal anyone and everyone in the administration to start hitting back at those who had opposed him. Nixon had taken a huge gamble following his re-election in November 1972. The North Vietnamese who had been negotiating with Henry Kissinger in Paris became recalcitrant. Though Nixon scored a landslide win, he had no coattails and Democrats gained seats in the Senate (one being Joe Biden of Delaware). The North Vietnamese therefore dug in, recognizing that a Democratic Congress (the House, too, remained in Democratic hands) would likely pull financial support for the war when they reconvened in January 1973. All they had to do was wait things out.

 

Nixon, Kissinger and General Alexander Haig met in mid-December to discuss what to do. As Kissinger says on the tape of the meeting, it was time to “bomb the beJesus” out of the North, including population centers like Hanoi. The infamous Christmas bombing began on December 18. Nixon consulted no one in Congress. The world shrieked, thinking the American president had gone mad.

 

But the brutality worked. The North Vietnamese came back to the table and a peace accord was hashed out almost in time for Nixon’s second inaugural. It should have been a moment of gratitude and elation; it was anything but.

 

“This proves the president was right,” Nixon says to Colson. “We’ve got to go after our enemies with savage brutality.” At another point, Nixon froths, “We’ve got to kick them in the b*lls.” He encouraged Colson to sue Time magazine for libel over a Watergate story and called his press secretary, Ron Ziegler, to order the blackballing of “the bastards at Time magazine” for the rest of his presidency. 

 

Scores were to be settled; nothing was to be held back.

 

Later that night, after calmly telling the nation of the peace agreement, Nixon was back on the phone with Colson working himself into a lather. They chortled over how correspondents at CBS were “green, hateful, disgusted, and discouraged” at the news. “I just hope to Christ,” Nixon spewed, “some of our people, the bomb-throwers, are out, because this ought to get them off their ass, Chuck.” He wanted his team energized to work that very night to start the process of retaliation. “Those people who wanted me to have an era of good feeling,” he snarled, “if they bring that memo to me again, I’m going to flush it down the john.”

 

Nixon was unleashed. But the Senate voted a little over two weeks later to establish the Ervin Committee to investigate irregularities in the 1972 election. Nixon became embroiled in the “drip, drip, drip” of bombshell disclosures and revelations, and he never regained his footing. Over time, he became powerless to get even with his enemies, resigning in the summer of 1974 as he faced impeachment.

 

Trump is on the other side of his scandal, and he not only survived, he is virtually “locked and loaded.” He has nothing to rein him in absent a defeat in November. With no other check in sight, the country will now see just how far President Trump will go in settling old scores. If last week is the prelude, it’s going to get even uglier. Imagine if he wins re-election?

Re-Animating the 1619 Project: Teachable Moments Not Turf Wars

 

 

Who wins when distinguished historians, all white, pick fights over the history of slavery with prominent New York Times journalists, all black, who developed the newspaper’s 1619 Project? Beginning last year, a stream of well-known scholars has been objecting publicly to the journalists’ contention that slavery, white racism and African American resistance so fundamentally define the American past that they act as our history’s (so to speak) prime mover. The historian/critics’ basic reply: Wrong. It’s more complicated!

 

One of these scholars, Sean Wilentz, quite recently detailed his objections in the January 20, 2020 issue of The Atlantic. At about the same time, a dozen additional dissidents published their critiques (to which the 1619 editors responded) in the History News Network. Meanwhile, out-front bigots like Newt Gingrich, Tucker Carlson, Michael Savage and Rush Limbaugh hijacked the controversy. They have captured the headlines, dominated the news cycle and, as they would have it, taken home the trophy and delivered it to Donald Trump. The New York Times journalists have emerged with collateral damage, and the historians as unmitigated losers, in the court of public opinion. Lost as well was a rare opportunity for a substantial evaluation of slavery’s role in shaping our shared American experience.

 

But here’s what’s most important. Those of us who value the 1619 Project can reclaim our “teachable moment” by excavating beneath the heated rhetoric. There we will discover that the journalists and the historians embrace conflicting but equally valuable historical truths regarding slavery’s power to shape our nation’s past and present. I will soon articulate why this is so and what we can learn as a result.

 

First, however, we must move beyond the conflict that erupted when Wilentz, joined by James M. McPherson, Gordon Wood, James Oakes, and Victoria Bynum, eminent scholars all, forgot that they also have an obligation to serve us as educators, not as censors. By so harshly attacking the credibility of the 1619 Project in their letter to The New York Times, they squandered the “teachable moment” that the Project itself intended to create. Instead, these scholars appointed themselves gatekeepers charged with the heavy enforcement of their personal versions of high academic “standards.” 

 

Instead of constructively dissenting and inviting dialogue, they berated the 1619 journalists for pushing “politically correct” distortions grounded in Afro-centric bias. “The displacement of historical understanding by ideology” is how one of them phrased it. They demanded retractions, worked assiduously (and failed) to recruit scholars of color to their cause, and sent their complaints directly to the top three editors of the Times and its publisher, A.G. Sulzberger. That looks a lot like bullying. Dialogue dies when one contending party publicly attempts to undercut the other with his or her bosses.

 

The historians, however, were not alone in criticizing the 1619 Project. Newt Gingrich proclaimed that “The NYT 1619 Project should make its slogan ‘All the Propaganda we want to brainwash you with.’” Ted Cruz fulminated that “There was a time when journalists covered ‘news.’ The NYT has given up on even pretending anymore. Today, they are Pravda, a propaganda outlet by liberals, for liberals.” The Trumpite commentators who had been attacking the 1619 Project since last August seized the distinguished historians’ arguments and repeated them on FOX News. Erick Erickson’s reactionary website, The Resurgent, freely appropriated them (without acknowledgement). Though the Times has defended the Project’s integrity, other media outlets have highlighted the general controversy, and The Atlantic has published Wilentz’s academic critique, it is the Trumpistas who have hijacked the conversation.

 

So, thanks to the triumph of Team Bigotry, we have yet to discover what the historians, the journalists and the 1619 Project more generally can actually teach us. But we can make a strong start by reflecting on the contrasting points of view signaled by two titles: John Hope Franklin’s From Slavery to Freedom and August Meier and Elliott Rudwick’s From Plantation to Ghetto. Back in the 1960s, when African American history was first establishing itself as a mainstream field of study, these two books dominated the textbook market. Together they presented sharply differing alternatives for teaching about slavery and its legacies. Each is as influential today as it was back then.

 

Pick From Slavery to Freedom and you develop a history course around a text that foregrounds sustained activism that produced significant change, presumably for the better. Select From Plantation to Ghetto and you prepare students for a sobering overview of racist continuity that has persisted across the centuries despite all the struggles against it. Martin Luther King perfectly captured the spirit of Franklin’s text when affirming that “The arc of history bends toward freedom.” Amiri Baraka (LeRoi Jones) did the same for Meier and Rudwick’s text when lamenting the “ever changing same” of African American history.

 

Guided by both King and Baraka, we hit bedrock. Their conflicting insights, partial though they are, carry equal measures of truth. Heeding King and the historian-critics who share his perspective, let’s inquire: Was African American history replete with life-changing moments, epochal struggles, liberating ideologies, unexpected leaps ahead, and daring cross-racial collaborations? History’s reply is “of course.” Heeding the 1619 journalists who share Baraka’s perspective, let’s ask: Was African American history determined by a white racism so intense that it repeatedly crushed aspirations, inflicted terrible violence, undercut democratic values, made victories pyrrhic and pulled leaps ahead rapidly backwards? The answer again is “of course.” By making these two affirmations we have finally identified a central question that can reanimate our “teachable moment.”

 

Imagine the Times journalists and an open-minded group of historian-critics debating this question: “To What Degree Has the Arc of African American History Bent toward Freedom?” Also imagine them pursuing this discussion in a spirit of informed collegiality in, say, a nationally televised PBS forum. And since we are in the business of reanimating, let’s finally imagine that we have summoned the ideal moderator and commentator, a journalist who made history and who did more than any other survivor of slavery to educate Americans about the depth of white racism and black resistance: Frederick Douglass. Can you imagine a more “teachable moment”?

 

This scene, fanciful as it is, amply suggests what should have taken place had the historian-critics chosen to be teachers, not gatekeepers. It also provokes us to realize that we can searchingly interrogate our nation’s complex chronicle of racial injustice while acknowledging the areas in which we have made palpable progress. Opportunity still awaits us to snatch back the trophy from Team Bigotry and push back together, journalists and historians alike, against our own rising tide of white supremacy. 

 

 

Editor's note: The History News Network is attempting to create a forum to discuss how historians can invite the public to learn and reflect on the pain and paradoxes of African American history and how journalists and historians might learn from and collaborate with one another in addressing this history. We hope to update readers soon. 

The Travesty of the Century

 

After three years in office, one can hardly be surprised at what Trump is capable of saying, doing, or scheming. In the middle of his impeachment trial, Trump finally released his “deal of the century,” a deal that completely ignored several United Nations resolutions, accords sponsored by the European community and the United States, and bilateral agreements between Israel and the Palestinians. Trump assigned his ‘internationally recognized top expert on Middle Eastern affairs’, Jared Kushner, to come up with a deal to solve a seven-decade-old conflict that has eluded every American administration since 1948.

Whereas the US has over the years played a central role in the effort to solve the Israeli-Palestinian conflict, no American administration has offered such a detailed proposal, certainly not one that grants Israel its entire wish list. Every past administration knew full well that prejudging the outcome would doom any prospective deal from the start, and thus settled on providing a general outline consistent with prior internationally recognized agreements. When it comes to Trump, though, prior interim agreements, UN resolutions, and numerous face-to-face negotiations between the two sides simply do not matter. Instead, he relies on his “negotiating skills” and crude audacity to offer a solution that no one with any deep knowledge of the history of the conflict, its intricacies, and its psychological and pragmatic dimensions would even contemplate.

It is important to note, however, that both Israel and the Palestinians have over the years denied each other’s right to exist in an independent state, and to suggest that one side or the other is innocent and wholly wronged is a fallacy. Both have contributed to the impasse and both are guilty of failing to adhere to the numerous agreements sponsored by the international community, to which they initially subscribed. The following offers a synopsis of these resolutions and agreements.

On November 29, 1947, the United Nations General Assembly passed Resolution 181, stating that “independent Arab and Jewish States…shall come into existence in Palestine…not later than 1 October 1948.” On November 22, 1967, the UN Security Council passed Resolution 242, “Emphasizing … respect for and acknowledgement of the sovereignty, territorial integrity and political independence of every State in the area…to live in peace within secure and recognized boundaries…” On October 22, 1973, Security Council Resolution 338 “Calls upon the parties concerned to start immediately after the cease-fire the implementation of Security Council resolution 242 (1967) in all of its parts…” On September 17, 1978, the Camp David Accords declared that “the agreed basis for a peaceful settlement is…United Nations Security Council Resolution 242, in all its parts.” On September 13, 1993, the Oslo Accords aimed to establish principles of self-government, “leading to a permanent settlement based on Security Council resolutions 242 (1967) and 338 (1973).” On March 28, 2002, the Arab Peace Initiative, which was unanimously endorsed by the Arab League and supported by the international community, including a majority of Israelis, “[called] for…Israel’s acceptance of an independent Palestinian state with East Jerusalem as its capital…” On April 30, 2003, the Road Map for Peace of the Quartet (US, EU, UN, and Russia) insisted that “a settlement…will result in the emergence of a…Palestinian state living side by side in peace and security with Israel and its other neighbors.”

Trump, however, in his wisdom, chose to completely ignore these prior resolutions and instead focused primarily on what he considers ‘best for Israel’, although the deal will do more harm to Israel than he could possibly imagine. I dare say that he may well understand the dire implications for Israel, but he cares little as long as it serves his interests. Trump is known for violating international accords; he withdrew from the Paris Accord on climate change, the Iran deal (JCPOA), and trade agreements with China, Canada, and Mexico, and domestically revoked scores of regulations enacted by the Obama administration. To be sure, he wants to put his own mark on everything, whether he agrees or disagrees with the subject matter.

This raises the question: by what logic can Trump assume for himself the political, religious, and moral right to divide an occupied land between Israel and the Palestinians in defiance of all previous accords and internationally recognized agreements? Whereas he consulted with the Israelis ad nauseam on every provision of his Deal, he completely ignored the Palestinians. Notwithstanding the fact that the Palestinians severed direct talks with the US as a result of Trump’s recognition of Jerusalem as Israel’s capital, at a minimum he should have initiated back-channel contacts with Palestinian leaders and considered their requirements, which could have ensured some receptivity rather than outright rejection. Moreover, for Trump to unveil his grandiose Deal standing side by side with Netanyahu sent an unambiguous message as to where he really stands and to whom he is appealing. This scene alone was enough to disgust even moderate Palestinians, who otherwise would have at least paid lip service to the Deal. But that was not on Trump’s agenda. On the contrary, he did so deliberately for his targeted audience, and in that he succeeded.

Like everything else, whatever Trump touches dies; if there was any hope for an Israeli-Palestinian peace, it has now been deferred for years if not decades. The Israelis will waste no time acting on all the provisions provided by the Deal. Gantz, the leader of the Blue and White Party, has already stated that if he forms the new Israeli government, he will annex all settlements and the Jordan Valley. For Gantz, just as for Netanyahu, American political support is what matters, irrespective of any other internationally recognized accords that have granted the Palestinians the right to establish an independent state of their own. To be sure, Trump’s peace plan should be renamed “the travesty of the century,” one for which Israelis and Palestinians will pay with their blood.

The Dramatic Relationship Between Black America and the Academy Awards

 

After weeks of hype, the Oscars air this evening. The announcement of the nominations last month reignited discussions of race and the Oscars. The number of nominees of color in the main categories was almost nil. Aside from Black British singer/actress Cynthia Erivo, nominated for her splendid performance in the Kasi Lemmons film Harriet, no other non-White performer scored a nomination. Anticipation that the mesmerizing Latina actress Jennifer Lopez, the deeply engaging Asian actress Awkwafina, the hilarious and legendary comedian Eddie Murphy, and other performers of color would be acknowledged by the Academy for their performances was quickly dashed on January 13th. In fact, it was a very white, very male roster of honorees.

 

As has been the case over the past few years, the roster of acting and directing nominees was intensely dissected and debated. Multiple-Oscar-nominated actor Joaquin Phoenix spoke truth to power at the BAFTA awards when he directly and passionately criticized many of his peers, as well as himself, for the pitiful dearth of actors of color being acknowledged for their performances. Mega-influential fiction writer Stephen King found himself in the middle of an intense debate when he stated that talent is the sole criterion that should be considered when judging art and that race, gender, and the like should not be considered.

 

After a torrent of fierce criticism directed toward him, the much beloved and admired author backtracked from his initial comments and conceded that race is indeed a crucial factor in Hollywood, as it is in virtually every avenue of American life. His initial remarks demonstrated a degree of tone-deafness and, quite frankly, arrogance from a person who prides himself on being politically, socially and culturally progressive. While his comments may have been fuel for commentary among the mainstream media, they were hardly surprising to many, if not most, Black Americans. Indeed, King's comments epitomized a long-running historical pattern of White liberal arrogance: the white liberal who believes himself or herself to be so socially and culturally adept that he or she concludes that any comment he or she espouses must be valid or worthy of praise. Mr. King got a swift dose of reality. Whether or not he was genuine in his sudden contrition, the fact is King’s “sudden epiphany” was right on target. This has particularly been the case as it relates to Black America and the Academy.

 

In July 2013, American film marketing executive Cheryl Boone Isaacs was named president of the Academy of Motion Picture Arts and Sciences (AMPAS). She was the first black American, and only the third woman after actresses Bette Davis and Fay Kanin, to be selected to head the prestigious organization. She held this position until 2017. Several decades earlier, in 1939, Hattie McDaniel became the first black person to win an Oscar, for her performance in the classic movie Gone With the Wind. Her victory was a bittersweet one: her speech was prepared for her, and she and her guests were forced to sit in a segregated section of the building where the event took place. These two similar yet distinctive examples are representative of the complex relationship between black Americans and the Oscars.

 

From its origin as a small dinner gathering of actors and actresses, movie executives and producers in 1927, the Academy Awards grew into a formal ceremony in 1929. Along with events such as the Miss America Pageant, the Super Bowl and the Grammys, the Oscars have remained one of the most watched annual broadcasts among Americans as well as viewers throughout the world.

 

Throughout the Academy's existence, the African American community has had an ambiguous relationship with the Academy Awards. While the selections of McDaniel and Boone Isaacs as best supporting actress and Academy president were significant by any standard and widely applauded, the history of blacks and the Academy has been complex. After Hattie McDaniel won her Oscar, it was not until a decade later, in 1949, that another black actress, Ethel Waters, was nominated for best supporting actress, for her performance in the movie Pinky. Unlike McDaniel, Waters was unsuccessful in her quest to win an Academy Award; she lost to Mercedes McCambridge of All the King's Men.

 

In 1954, the beautiful Dorothy Dandridge became the first Black woman nominated for best actress in a leading role, for her performance in Carmen Jones. The Oscar that year went to Grace Kelly for her performance in The Country Girl. It was not until 1958 that a black man, Sidney Poitier, was nominated for an Oscar, for his convict role in The Defiant Ones. Poitier was nominated again in 1963 and became the first black man to win best actor, for his lead role as Homer Smith, an itinerant handyman who assists a group of immigrant nuns in the Arizona desert in Lilies of the Field.

 

While his performance was a good one, there is no doubt that the broader cultural and political landscape influenced Poitier’s win. Martin Luther King Jr. had delivered his iconic “I Have a Dream” speech on the steps of the Lincoln Memorial during the summer of 1963, and President Lyndon Johnson would sign the 1964 Civil Rights Act into law a few months after the ceremony, in July. These events likely influenced a number of Oscar voters. Legendary mid-20th-century gossip columnist Hedda Hopper, known to be racist and anti-Semitic, was also an Anglophobe who detested the number of British actors nominated that year. As a result, she aggressively campaigned for Poitier. Talk about politics of the surreal; such behavior gave credence to the saying that politics makes strange bedfellows. Thus, the political, social and cultural climate aligned with the stars to work in the actor’s favor. It also lent credence to what the late high-society author and cultural critic Truman Capote observed: that politics and sentiment are major factors when it comes to the Oscars.

 

Milestones aside, Poitier’s triumph was short-lived, as Black performers received only scant and sporadic nominations in the decade following his historic win. James Earl Jones, Diana Ross, Cicely Tyson and Diahann Carroll were among those who received the Academy’s blessing during the early 1970s. Then, from 1975 to 1981, not a single Black performer was acknowledged with a nomination. Howard Rollins’ nomination for Ragtime in 1981 ended the long-running drought, and the following year Louis Gossett Jr. won an award for his role as a tough-as-nails drill sergeant in the movie An Officer and a Gentleman. Frustrated by the continuing dearth of Black performers gracing movie screens, the Hollywood branch of the NAACP criticized the Academy in the early 1980s for what it saw as a chronic lack of black nominees.

 

During the mid-1980s, The Color Purple, a film directed by Steven Spielberg and based on Alice Walker's 1983 Pulitzer Prize-winning novel, was nominated for 11 Academy Awards. The film became a lightning rod of controversy and set off a number of heated and passionate debates in the black community, particularly over its less than flattering depiction of black men. In fact, even conservative publications such as the National Review, the vanguard of American conservatism at the time, denounced the film, stating that not a single black man in the movie had any admirable or redeeming qualities.

 

The Color Purple marked the first time that multiple black actresses received nominations for the same film: Oprah Winfrey, Margaret Avery and Whoopi Goldberg were all nominated. Despite its multiple nominations and the enormous attention it garnered, the movie failed to win a single Oscar, tying the 1977 film The Turning Point for the most nominations without a win. What made the controversy even more interesting (arguably amusing) was that many of the same people who had been critical of the movie, including the Hollywood branch of the NAACP, “threatened to sue” the Academy for failing to award it any Oscars. There is no doubt that such a suit would have been unsuccessful; how, for example, would anyone know which of the 4,800 members voted for or against the movie?

 

Eventually, the Hollywood NAACP came to its senses, but its initial reaction was not the civil rights organization's finest moment.

 

During the 1988 Oscar ceremony, Hollywood megastar Eddie Murphy took the Academy to task, decrying what he perceived as the insufficient recognition given to black performers in the movie industry, before presenting the award for best picture that year.

 

During the 1990s, black actors, including Denzel Washington, Angela Bassett, Whoopi Goldberg, Laurence Fishburne and Morgan Freeman, were increasingly nominated by the Academy for their performances. Goldberg became the second black woman to win an Oscar in 1990, for her role as the psychic medium in Ghost.

 

By the 21st century, black nominees had become a regular presence at the Oscars. In 2001, a year dubbed by a number of blacks (and non-blacks) “the year of the black Oscars,” the Academy honored Sidney Poitier with a lifetime achievement award, and Denzel Washington, Will Smith and Halle Berry were nominated for best actor or best actress. Berry and Washington were victorious. For Washington, it was his second Academy Award. Berry became the first, and to date only, black woman to win best actress.

 

In 2004, Jamie Foxx became the first black actor to receive two Oscar nominations in the same year; he won the Academy Award for his spellbinding performance in the movie Ray. Forest Whitaker, Morgan Freeman, Jennifer Hudson, Octavia Spencer, Mo'Nique, Viola Davis and Regina King have also taken home Hollywood’s most coveted honor.

 

A number of black performers, including Chiwetel Ejiofor, Lupita Nyong'o and Barkhad Abdi, have been nominated in the best actor and best supporting actor and actress categories. In 2013, black British director Steve McQueen became the first Black person to receive the best picture honor, for his film 12 Years a Slave, and Lupita Nyong'o won best supporting actress for her role in the film. In the weeks leading up to the award ceremony, Fox Searchlight Pictures aggressively showcased posters of the film displaying the statement “It’s time.” There was nothing ambiguous about the message.

 

McQueen’s triumph aside, only a handful of black directors have ever been nominated for best director, among them McQueen, the late John Singleton (1991’s “Boyz n the Hood”), Lee Daniels (2009’s “Precious”), and Barry Jenkins (“Moonlight”). Moonlight won best picture in 2016, making Jenkins, after McQueen, the second Black filmmaker whose film took home the top prize. No Black director, however, has yet won the Oscar for directing. Spike Lee, never one to shy away from controversy, holds an honorary Oscar and won a competitive one in 2019 for best adapted screenplay, and the legendary director has used his acceptance speeches to forcefully, directly and candidly remind the industry that the world is changing, and will soon be minority white. “All the people out there,” he said, “who are in positions of hiring: You better get smart.”

 

Whatever your opinion, and in spite of the Academy's reductive and often complicated record on race, the truth is that the Oscars have been a mainstay of American popular culture and have been influential in the lives of a number of black entertainers. I, like millions of people all over the world, will likely be tuning in on February 9th to see who takes home an Academy Award.

 

Editor's note: This piece was updated to note that Lupita Nyong'o won for best supporting actress in 2013. 

Should the Democratic Presidential Candidates Announce Their VP Picks Now?

Some pundits have recently suggested that presidential candidates should identify their running mates before the caucus and primary season gets under way. Ted Rall wrote on January 24, 2020 that the United States should require presidential candidates to “announce their veep picks at the same time they announce their intent to run.”  Rall believes such a requirement would be more democratic and provide primary voters useful information about the possible successor.  The same day, Matt Bai made the narrower suggestion in the Washington Post that older presidential candidates in this year’s race should “at least release a short list of possible running mates now” before the voting begins.  Bai’s column focused on Senator Bernie Sanders but he would apply this requirement to former Vice President Joe Biden and Senator Elizabeth Warren, too. Gordon Weil has suggested that Biden, Sanders, and Donald Trump disclose their running mates before the first caucuses and primaries.  Jared Cohen argues that a Democratic presidential candidate should name his or her running mate “now” as a strategy to “surge ahead of a crowded field.”        

 

The recent history of vice-presidential selection and of the vice presidency generally suggests, however, that these well-intended proposals rest on mistaken judgments about the way the vice-presidential selection system now operates, exaggerate the benefits of the proposed reforms, and underestimate the difficulty of implementing those remedies.

 

The vice presidency, especially as it has developed in recent decades, has two significant functions. On an ongoing basis, the vice president serves as a close adviser and trouble-shooter for the president, one who can improve the quality and implementation of American public policy. The vice president also serves the contingent function of providing a qualified and prepared presidential successor (in case of a presidential death, resignation, or removal) or temporary pinch-hitter (in case of a presidential inability). These two functions require that the vice president be presidential as well as politically and personally compatible with the president.

 

The political calendar presents an apparent challenge, however, since vice-presidential candidates are chosen with an eye towards the November election. Presidential candidates invariably consider political factors in choosing a running mate.

 

Yet increasingly most presidential candidates conclude that their political and governing interests coincide when choosing a running mate and that both dictate choosing a running mate who is capable of being president. Most vice-presidential candidates during the last six or seven decades have been politicians whose past or subsequent public service marked them as presidential figures as perceived by reachable voters. Recent vice presidents Richard M. Nixon, Lyndon B. Johnson, Hubert H. Humphrey, Walter F. Mondale, George H.W. Bush, Al Gore, Dick Cheney, and Joe Biden were among their parties’ leading lights when chosen for their ticket.  Defeated running mates like Estes Kefauver, Henry Cabot Lodge, Edmund Muskie, Bob Dole, Lloyd Bentsen, Jack Kemp, Joe Lieberman, Paul Ryan, and Tim Kaine were also among their parties’ most well-regarded figures when chosen. Dan Quayle is often mocked but he was a well-regarded senator who made important contributions to the Bush administration.  Geraldine Ferraro’s credentials (three terms in the House of Representatives) were more modest than most but she was chosen when women were largely excluded from public service.  Senators like John Sparkman and especially Tom Eagleton had distinguished careers in the upper chamber (Eagleton was one of two former senators given the unprecedented honor of speaking when the Senate celebrated its bicentennial) and Sargent Shriver had performed ably in the executive branch in domestic and foreign policy roles. Mike Pence is widely viewed as a plausible future presidential aspirant.  The questionable choices—William Miller, Spiro T. Agnew, John Edwards, Sarah Palin—were chosen by candidates facing uphill races (Miller, Palin) or were simply mistakes.

 

History suggests that Rall’s reform (that presidential candidates announce their running mate when they announce their candidacy) and Weil’s (that certain older candidates announce their running mates before caucus or primary voting begins) would diminish the quality of vice-presidential candidates. Rall’s, and probably Weil’s, proposal would eliminate from consideration anyone running for president, thinking of that option, or supporting a rival candidate. John F. Kennedy could not have chosen Johnson, Ronald Reagan could not have chosen Bush, and Barack Obama could not have chosen Biden. The possibility that Humphrey would run in 1976 would have precluded Mondale, a Humphrey protégé, from being Jimmy Carter’s running mate.

 

Moreover, the context in the summer of a presidential election year, when presidential candidates identify their running mates, is more conducive to a good choice than a year or two earlier, when presidential candidates now announce their candidacies. Some successful presidential nominees—Jimmy Carter, Mike Dukakis, Bill Clinton, Obama, Trump, among others—were surprises. It’s inconceivable that Mondale, Bentsen, Gore, Biden or Pence would have cast their lot with these improbable presidential nominees. Kemp wouldn’t have signed on as Dole’s running mate early on either. Cheney declined to be considered as George W. Bush’s running mate in spring 2000 but changed his mind after working with Bush suggested what his role might be, given Bush’s operating style.

 

The transformation presidential candidates experience between candidacy announcement and vice-presidential selection encourages better decisions. Understandably, presidential candidates and their top aides initially focus on securing the nomination; the vice-presidential selection becomes their major preoccupation once that success is assured. Carter would not have chosen Mondale initially. He concluded Mondale was his best option only after they spent time together and Carter spoke to others who knew the prospective candidates. Dole disparaged Kemp as “the Quarterback” but, after examining alternatives, concluded that choosing this long-time rival made sense. Sometimes presidential candidates learn about running mates as they interact through the primary process, an experience that probably contributed to Romney’s selection of Ryan and Obama’s of Biden.

 

Finally, vice-presidential candidates are chosen after a long, intensive and intrusive vetting process.   That essential part of the vice-presidential selection process would be prevented if the decision were made as Rall and Weil propose.

 

Cohen recommends naming a running mate now for its political, not governing, benefits.  Yet it is not at all clear that the sort of running mate who could conceivably move the needle in a positive direction would decide to join forces at this early stage with a presidential candidate who needed to pursue such an atypical strategy to succeed.  Unless a presidential candidate secured a running mate who was appealing and ready for prime time, his or her candidacy would likely be hurt, rather than helped.

 

Bai starts from a sensible premise (that the likelihood of a succession is greater with an older president), and his proposal, that older candidates announce a list of perhaps three running mates, is more limited in its reach (to older candidates) and less intrusive (a list, rather than a choice). Yet history shows that vice-presidential selection is not so unilateral as he suggests, and he is overly optimistic regarding the merits of his proposal.

 

Although presidential candidates select a running mate, they do so after lengthy consultation and after considering the likely reaction of reachable voters. Bai is right that the 2008 Republican convention accepted Palin, but that was because it liked her and inferred from her selection that Senator John McCain was more conservative than the delegates feared. In fact, part of the reason McCain chose Palin, not Lieberman, was that he feared an adverse reaction if he chose Gore’s former running mate. Sure, conventions don’t want to make trouble for their presidential nominee, but they rubber-stamp vice-presidential nominees partly because nominees weigh party sentiment heavily in making the choice. And party sentiment doesn’t prevent a bad choice: Edwards had done well in the primaries and was very popular with the Democratic electorate and convention.

 

A requirement that older candidates disclose a list from which they would choose their running mate would be hard, if not impossible, to implement, and counterproductive. A list limited to three would exclude some who, come summer, would be attractive running mates. Rival candidates might be omitted, but if not, they might feel compelled to slam the door on the vice presidency to preserve their credibility as presidential candidates. An unlimited list would be unrevealing. In 1988, the Bush campaign leaked a list of possible running mates on the eve of the convention; it was sufficiently long that the possibility that Quayle would be chosen seemed so remote that his prospects didn’t draw any scrutiny. And if such a proposal had merit, which it doesn’t, why not apply it to all candidates, since the modern vice presidency helps in governing more often than it supplies a successor and the selection provides information regarding a candidate’s values?

 

There’s no harm in asking about vice-presidential selection, as Bai and Weil propose, but succession concerns can be better addressed by insisting that candidates of both parties release meaningful information about their medical histories and that presidential candidates choose running mates who would be plausible presidents. Presidential candidates generally realize that choosing a running mate who cannot withstand the intense scrutiny of a national campaign, including a vice-presidential debate, is bad politics as well as bad government. Public expectations of the enhanced vice-presidential role and recognition of the possibility of succession give presidential candidates greater reason to choose well than was once the case.

 

The vice-presidential selection system has worked pretty well in recent decades and has allowed presidents to enlist the help of able vice presidents who are compatible with them.  That progress is best preserved by giving presidential candidates incentive to choose well, not by introducing artificial and counter-productive requirements.

 

Copyright Joel K. Goldstein 2020

Roundup Top 10!  

 

Who’s Really Shredding Standards on Capitol Hill?

by Joanne Freeman

Naming the alleged whistle-blower is much worse than tearing up a speech.

 

Bernie Sanders Has Already Won

by Michael Kazin

Whether he captures the White House or not, he has transformed the Democratic Party.

 

 

What winning New Hampshire — and its media frenzy — could mean for Bernie Sanders

by Kathryn Cramer Brownell

The New Hampshire returns tell us a lot about the leading candidates.

 

 

America held hostage

by David Marks

Forty years after the Iran hostage crisis, its impact endures.

 

 

Is Pete Buttigieg Jimmy Carter 2.0?

by J. Brooks Flippen

To win the White House and be a successful president, he must learn from an eerily similar candidate.

 

 

When White Women Wanted a Monument to Black ‘Mammies’

by Alison M. Parker

A 1923 fight shows Confederate monuments are about power, not Southern heritage.

 

 

Donald Trump’s continued assault on government workers betrays American farmers

by Louis A. Ferleger

Government scientists made U.S. agriculture powerful, but Trump administration cuts could undermine it.

 

 

The Civil War Wasn't Just About the Union and the Confederacy. Native Americans Played a Role Too

by Megan Kate Nelson

“Inasmuch as bloody [conflicts] were the order of the day in those times,” their report read, “it is easy to see that each comet was the harbinger of a fearful and devastating war.”

 

 

The forgotten book that launched the Reagan Revolution

by Craig Fehrman

While Reagan’s biographers have explored the influence of GE and SAG on the budding politician, they’ve largely ignored what came next — namely “Where’s the Rest of Me?”

 

 

Shifting Collective Memory in Tulsa

by Russell Cobb

The African-American community is working to change the narrative of the 1921 massacre.

 

The Communist Manifesto Turns 172

This month marks 172 years since the first publication of the Communist Manifesto. All around the world people will be commemorating February 20th with group read-alouds and other ways of noting the occasion. Undoubtedly, this is a moment that we should not allow to pass without some reflection on the meaning to us today of Marx and Engels’ pamphlet. Originally published anonymously in German by the Workers’ Educational Association in 1848, the Manifesto did not appear in English translation until 1850. For the first decades of its life the Manifesto was mostly forgotten, and it would not be published in the United States until 1872.

We are living at a time when – if not communism – at least socialism is gaining ground in this country, to a degree that few could foresee only a decade ago. Bernie Sanders, for example, is a self-proclaimed Democratic Socialist and a frontrunner among the Democratic candidates seeking the presidential nomination. When it comes to communism, however, there are still grave misgivings about being labeled as such, even among those who identify with the radical left. At the same time, we are entering an era of unprecedented inequality, in which wealth has become concentrated in the hands of a few to a degree that is almost hard to imagine – when literally three or four individuals in this country, for instance, hold wealth exceeding the total wealth of over fifty percent of the population. This vast inequality and ever-growing concentration of capital is one of the many reasons why the Manifesto is as important now as – if not more important than – it was when it first saw the light of day during that fateful year of 1848.

Income inequality in this country has been growing for decades. The Pew Research Center reports that in 1982, the highest-earning 1 percent of families received 10.8 percent of all pretax income, while the bottom 90 percent received 64.7 percent. Three decades later, the top 1 percent received 22.5 percent of pretax income, while the bottom 90 percent’s share had fallen to 49.6 percent. As Helene D. Gayle, CEO of the Chicago Community Trust, observed, “The difference between rich and poor is becoming more extreme, and as income inequality widens the wealth gap in major nations, education, health and social mobility are all threatened.”

The gap between those who have and those who have not is becoming ever wider – while the rights of workers are under attack around the world. Union leaders are threatened with violence or murdered. Indeed, the International Trade Union Confederation reports that 2019 saw “the use of extreme violence against the defenders of workplace rights, large-scale arrests and detentions.” The number of countries which do not allow workers to establish or join a trade union increased from 92 in 2018 to 107 in 2019. In 2018, 53 trade union members were murdered – and in 52 countries workers were subjected to physical violence. In 72 percent of countries, workers have only restricted access to justice, or none at all.

As Noam Chomsky observed, “Policies are designed to undermine working class organization and the reason is not only the unions fight for workers' rights, but they also have a democratizing effect. These are institutions in which people without power can get together, support one another, learn about the world, try out their ideas, initiate programs, and that is dangerous.” In fact, labor union membership has been declining for well over fifty years right here in the US.
Unions now represent only 7 percent of private sector workers – a significant drop from the 35 percent of the 1950s. Moreover, studies have shown that strong unions are good for the middle class; the Center for American Progress reports, for example, that middle-class income has dropped in tandem with the shrinking numbers of US union members. This weakening of unions and collective bargaining has allowed employer power to increase immensely, contributed to the stagnation of real wages, and led to “a decline in the share of productivity gains going to workers.”

Around the world, children are still forced to labor in often unsafe and extremely hazardous conditions. Approximately 120 million children are engaged in hazardous work – and over 70 million are under the age of 10. The International Labour Organization estimates that 22,000 children are killed at work globally every year. The abolition of child labor was of course one of the immediate reforms demanded in the Manifesto – and 172 years later it has yet to become a reality. Studies estimate that as many as 250 million children between the ages of 5 and 14 work in sweatshops in developing countries around the world. The US Department of Labor defines a sweatshop as a factory that violates two or more labor laws. Sweatshops often have poor and unsafe working conditions, unfair wages and unreasonable hours, as well as a lack of benefits for workers. Economists sometimes argue that sweatshops help to alleviate poverty – that as bad as they are, they are still better than working in rural conditions. These claims are dubious at best – but more to the point, sweatshops are inconsistent with human dignity. As Denis Arnold and Norman Bowie argue in their essay “Sweatshops and Respect for Persons,” the managers of multinational enterprises that “encourage or tolerate violations of the rule of law; use coercion; allow unsafe working conditions; and provide below subsistence wages, disavow their own dignity and that of their workers.”

It is often assumed – wrongly – that Marx and Engels described in full what they thought the future communist society would look like. But aside from a few tantalizing suggestions they offered very little in this regard, not in the Manifesto nor anywhere else, preferring instead to analyze the social contradictions inherent in the capitalist mode of production itself – contradictions which they thought would lead inevitably to its demise. One thing that is clear from their few suggestions, however, is that workers would not be alienated from the process of production and from the fruits of their labor – which implies something like worker self-management, workplace democracy, or, perhaps most accurately, worker self-directed enterprises, to borrow a phrase from economist Richard Wolff. As Wolff points out, these enterprises “divide all the labors to be performed… determine what is to be produced, how it is to be produced, and where it is to be produced” and, perhaps most crucially, “decide on the use and distribution of the resulting output or revenues.” Such firms already exist – most notably, for example, Mondragon in Spain. We know conclusively that workplace democracy can be and has been successful – and that such firms can in fact outcompete traditional, hierarchically organized capitalist firms.

All of which is to say that the Communist Manifesto is not a historical relic of a bygone era, an era of which many would like to think we have washed our hands.
As long as workers’ rights are trampled on and children are pressed into wretched servitude, as long as real wages stagnate and economic inequality continues to grow, allowing wealth to be ever more concentrated in the hands of the few, the Communist Manifesto will continue to resonate, and we will hear the clarion call for the workers of the world to unite, “for they have nothing to lose but their chains. They have a world to win.”

How Somalis Use Theatre to Rebuild Culturally

Recently Mogadishu suffered yet another horrific terrorist attack, the deadliest in two years, killing nearly 80 people and wounding more than 100. Much of the discourse about wars such as this merely chronicles numbers: lives lost, dollars of damage done, years to rebuild. The focus of post-conflict transition, then, remains on the recovery of formal institutions, the type of regime, the growth of the economy, and the strength of electoral processes. We seldom make room, however, for the equally important process of cultural rebuilding that gradually takes place, in particular the efforts to rescue democratic spaces that facilitate everyday peace. For Somalis, the arts, and especially theatre, have remained a crucial site for the social and political reimagination of the nation.

 

Modern Somali theatre rose to prominence in the 1960s, the period following independence and the subsequent unification of Somaliland and Somalia. By the 1970s, there were multiple shows a night in any given city, and the average Somali adult was considered a regular theatre-goer regardless of socioeconomic status.

 

Several elements make the Somali theatre tradition unique, most distinctly its inextricable tie to poetry. Large parts of a play, usually the ones that carry its emotional weight, are conducted in verse. The ease with which poetry and prose coexist on stage reflects the unparalleled space that poetry occupies in Somali culture. For Somalis, poetry signifies immense national and linguistic pride. Poetry consistently outsells fiction and public readings routinely draw massive crowds. In fact, the successful playwrights of the twentieth century were poets themselves, echoing the fluidity that exists between the genres.

 

And since the rise of Somali theatre coincided with the golden age of Somali music, a live band accompanies most theatrical performances. This combination has allowed for a sustained musical component in Somali plays, in which characters intermittently perform songs to convey difficult emotions or profess their love. During a song, audience members are allowed to go on stage to deliver flowers or dance with the actors. As the song concludes, the fourth wall resumes.

 

It is on stage that the most urgent social issues are deliberated and tacit taboos tackled. Successful plays, such as Hablayahow Hadmaad Guursan Doontaan (Ladies, When Will You Marry?) and Beenay Wa Run (The Lie is the Truth), take up questions about feminism, Somalia’s increasing integration into global geopolitics, religion, and the changing conventions of love and marriage. During the 1960s and 1970s, northern artists performed alongside southern ones with only their accents marking them apart. In a society where regional and clan divisions govern almost all aspects of social life, the stage was a significant exception. Gender roles were rigorously contested, expectations revisited. Female artists, formerly considered social outcasts and an affront to familial and communal honor, became widely revered. And to this day, the stage remains the only acceptable and safe place for a man to cross-dress.

 

The popularity of theatre and the social latitude allowed to artists to comment on and contest the status quo have earned poets and playwrights an irrevocable place in the historical trajectory of the region. It is widely believed that the 1968 play Gaaraabidhaan (Glow Worm) by famed playwright Hasan Shiekh Muumin inspired Siyad Barre’s military coup in 1969. And it was the poets and the playwrights of the 1980s who proved instrumental in the resistance against Barre’s oppressive regime. For many Somalis, Landcruiser, a play by Cabdi Muxumed Amiin staged at the National Theatre of Somalia in 1989, incited the uprising that eventually led to Barre’s fall.

 

Theatre and poetry continued to play a crucial role in the post-rupture period, serving as tools to initiate peace talks and promote social healing. After the collapse of the central government and the start of the civil war in Somalia, a collective of Somali poets, singers, and playwrights from across the region staged a play in Mogadishu titled Qoriga Dhig, Qaranka Dhis (Put Down the Gun, Build the Nation). In the early 2000s, Mohamed Ibrahim Warsame Hadraawi, hailed as the greatest living Somali poet, embarked on Socdaalka Nabada (Peace Journey). Walking the entire length of Somalia, including a visit to the prison in which he had spent five years for his poetry, Hadraawi performed in support of a renewed commitment to peacebuilding. A citizen of Somaliland, Hadraawi expressed his resistance to the severing of cultural ties along nationalist borders. IREX Europe, a non-profit that supports democracy and human rights initiatives, led a UN-funded theatre and poetry caravan across Somaliland in 2010 to facilitate post-conflict dialogue.

 

As the civil war continued to ravage Somalia and Somaliland began the gradual process of recovery, formal support for the arts largely disappeared. Yet there endured sustained grassroots efforts to preserve the artistic heritage of Somalis and cultivate sites for engagement in cultural production. After Al-Shabab captured large parts of Somalia in the mid-2000s and banned music and other forms of entertainment, people organized underground concerts and shared music on clandestine memory cards, risking imprisonment or, worse, execution. In Somaliland, the Hargeisa Cultural Center is home to over 14,000 cassette recordings of plays and music that were collected, preserved, and donated by individuals who saved them as they fled the war that reportedly destroyed over 90 percent of their city.

In 2012, the Somali National Theatre in Mogadishu re-opened its doors for the first time in 20 years after ordinary citizens and local businesses partnered with the first transitional federal government to raise the funds required to restore the theatre. Its first play, a comedy, garnered an audience of around a thousand people. Two weeks later a suicide bomber attacked the theatre, killing 10 people and wounding many more. The restoration of the Hargeisa National Theatre began a few years ago when the government sold it to a private developer. To calm the public apprehension about the privatization of the prominent cultural landmark, the developers promised to place the restored 3500-seat theatre at the heart of their new seven-story commercial center. 

 

The collective ownership of high art not by an elite few but by the unremarked many is arguably Somalis’ most cherished and best-sustained experiment in democracy. A heightened communal experience, Somali theatre at its core demands a tenacious faith in a public. This past year, the Somali National Theatre once again attempted a reconstruction; large crowds once again gathered for light-hearted, politically poignant entertainment in defiance of the terror to which they and their city are frequently subjected. The persistence of theatre and poetry in governing the daily discourse of a wounded people, in permeating each of their recovering buildings and fractured lands, indicates not only the resilience of the Somali people but the unflagging democratic spirit that resides within them.

Who Deserves the Credit for a Good Economy?

Ronald L. Feinman is the author of “Assassinations, Threats, and the American Presidency: From Andrew Jackson to Barack Obama” (Rowman & Littlefield Publishers, 2015). A paperback edition is now available.

In the State of the Union speech, President Donald Trump emphasized the strength of the American economy and took credit for an economic boom. As this claim will likely dominate Trump’s reelection campaign, it’s valuable to examine the last 50 years of presidential and economic history. 

 

The long economic expansion of the 1960s under Democrats John F. Kennedy and Lyndon B. Johnson ended in 1969 during the Nixon Administration with a nearly year-long recession that lasted until late 1970, followed by a longer recession under Nixon and Ford from late 1973 to early 1975. The latter was directly caused by the Arab Oil Embargo imposed after the Yom Kippur War of October 1973, and it brought high inflation as well as rising unemployment.

 

The short recession of the first half of 1980 under Democrat Jimmy Carter was likewise related to a second oil shock, which led to high inflation in 1979 and 1980, as in 1974-1975; both recessions and their inflationary spirals were major factors in the electoral defeats of Ford in 1976 and Carter in 1980. Of course, Ford was also harmed by his pardoning of Richard Nixon, and Carter was unpopular for his handling of the Iranian Hostage Crisis and the Soviet invasion of Afghanistan in the year before his reelection campaign.

 

During the Reagan Presidency, a more serious recession occurred, provoked by the Federal Reserve’s effort to rein in the high inflation that persisted after Carter lost reelection; it produced the highest unemployment rate since 1939. Fortunately for Reagan, the recovery of 1983-1984 helped carry him to a landslide reelection victory in 1984.

 

During the first Bush Presidency, a recession ran from the second half of 1990 into early 1991, caused by the tight monetary policy of the Federal Reserve and the effects of the Tax Reform Act of 1986 on real estate, and it left a lingering high unemployment rate. Despite widespread approval of Bush’s handling of the Gulf War, the troubled economy, along with the independent candidacy of H. Ross Perot, contributed to Bush’s 1992 loss.

 

During the administration of George W. Bush, two recessions occurred. The first lasted from March to November 2001 and was caused by the bursting of the dotcom bubble, accounting scandals at major corporations, and the effects of the September 11 attacks. The economy quickly bounced back, and Bush won reelection in 2004.

 

However, a much more serious downturn, the “Great Recession,” lasted from December 2007 to June 2009 and was caused by the collapse of a major housing bubble. It hurt John McCain’s presidential campaign in 2008, as many people wanted a change in leadership. This economic collapse was worse in its long-term effects than the Ford or Reagan recessions, and it posed a major challenge for Barack Obama, who entered office facing the worst economy of any president since Franklin D. Roosevelt in 1933.

 

Barack Obama rose to the challenge and presided over the most dramatic drop in unemployment rates in modern economic history. The unemployment rate peaked at 10 percent in the fall of 2009; by the time Obama left office in January 2017, it had fallen to 4.7 percent. The Dow Jones Industrial Average, meanwhile, rose by about 250 percent from 2009 to 2017.

 

By comparison, Franklin D. Roosevelt came into office with a 24.9 percent unemployment rate in 1933. Unemployment dropped every year through 1937, reaching 14.3 percent, but a new recession pushed it back up to 19 percent in 1938 and 17.2 percent in 1939. The rate then fell to 14.6 percent in 1940 and 9.9 percent in 1941, and finally, with World War II in full swing, dropped to 4.7 percent in 1942 and under 2 percent for the remainder of the war years.

 

Clearly, Donald Trump has benefited from what is now the longest economic expansion in American history, with the unemployment rate dropping as low as 3.4 percent. The question that lingers is: who deserves the credit? Much of the hard work that created the recovery came under Obama’s administration and is simply continuing for now under Trump, which may benefit him in November 2020.

Remembrances, Race, and Role Models: The Renaming of a Middle School

 

On May 15th of last year, after studying the City of Oxnard’s history, two middle schoolers at the Richard B. Haydock Academy of Arts and Sciences petitioned the Oxnard School District board of trustees to rename their campus. Why? Because Haydock, an early twentieth-century superintendent of the district and a long-time councilmember, espoused racist views toward people of color. Furthermore, as a public official he advanced policies of racial exclusion in housing, public facilities, and schools.

 

The students discovered this from their study of David G. Garcia’s Strategies of Segregation: Race, Residence, and the Struggle for Educational Equality (2018).

 

Garcia, a UCLA professor and product of Oxnard’s public schools, documented how city and Ventura County officials purposefully set policies that injured the life chances of people of color by the provision of inferior housing and educational opportunities. While denying services enjoyed by Oxnard’s white residents, in 1917 councilmember Haydock blamed victims of municipal neglect when he stated, “We have laws to prevent the abuse of animals . . . but the people are allowed to abuse themselves. The ignorant are allowed to breed under conditions that become a threat and a menace to the welfare of the community.” Who were “the people” and “The ignorant,” according to Haydock? Ethnic Mexicans.

 

Four years later, at an Oxnard Rotary Club assembly, Haydock publicly bemoaned the presence of African Americans in the United States. For him and others of his class of this era, such as Superintendent of Ventura County Schools Blanche T. Reynolds, the restriction of people of color was the solution.

 

This history matters because it informs us of persistent social and economic inequities that endured long after racist strategies—in the form of restrictive real estate covenants, residential redlining, gerrymandered school attendance boundaries engineered around segregated neighborhoods, and employment practices—were deemed illegal. This knowledge also complicates an appreciation of our nation’s ethos of equal opportunity when contrasted with official actions that prohibited the realization of this value for people of color and women. Our history informs us, for example, that we are a nation endowed with democratic tenets even as elected leaders decreed oppressive acts of land dispossession, genocide, and slavery.

 

But should nefarious practices and views advanced by figures such as Haydock be completely stricken from public memory? Absolutely not. Just as I don’t favor the textual removal of racially restrictive covenants from residential deeds (although I support their inert state), the new name of the Academy of Arts and Sciences, whatever it may be, should carry a recognizable footnote so people can learn not only about the totality of Oxnard’s history but also about its relationship to larger national currents. The Supreme Court case of Brown v. Board of Education (1954) comes to mind, as it declared unconstitutional the separate-but-equal doctrine of Plessy v. Ferguson (1896).

 

Let’s not forget Mendez v. Westminster (1946), which preceded Brown. For this case, future U.S. Supreme Court Justice Thurgood Marshall assisted in writing an amicus curiae brief for the National Association for the Advancement of Colored People.

 

By studying this history, students can appreciate struggles for social justice waged by people from all backgrounds, as well as the challenges that lie ahead.

 

And yes, past sins are inalterable. But we can motivate students to be agents of change for the better, especially as we hear racist slurs of the past echoed by President Donald J. Trump.

 

So, I propose that the school be renamed the Rachel Murguia Wong Academy of Arts and Sciences. Raised in an era when ethnic Mexican students were not only segregated but also corporally and psychologically assaulted for speaking California’s first European language (Spanish), Murguia Wong committed her life to young people. As a Ventura County resident, she served on numerous civic and educational advisory committees. And after her work as an OSD community-school liaison in La Colonia’s Juanita (now Cesar Chavez) Elementary, Murguia Wong won an elected seat on the district’s board of trustees in 1971.

 

As a trustee, Murguia Wong championed Title I compensatory programs, teacher diversity, and the district’s full compliance with the summary judgment of Judge Harry Pregerson in Soria v. Oxnard School District Board of Trustees of 1971. Based on the agreed-upon facts in this case, Pregerson ordered busing as a means to dismantle the decades-long de facto (unofficial) segregation of the district’s schools.

 

After a resistant board majority appealed the decision, the Ninth Circuit ordered a trial in 1973. This time, school board minutes from the 1930s surfaced that documented the district’s implementation of de jure (official) segregation in violation of the plaintiffs’ right to equal protection guaranteed under the Fourteenth Amendment of the United States Constitution.

District records from 1937 and 1938 revealed that, at the behest of white parents, Superintendent Haydock, in collusion with the trustees, devised byzantine strategies to segregate ethnic Mexican students. In a time when social Darwinist ideas of Anglo-Saxon superiority were popular, miscegenation was the primary fear of the nation’s white establishment.

 

Hence, Murguia Wong, unlike Haydock, was on the right side of history. The renaming of the Academy in her honor would provide opportunities for all the children to learn Oxnard’s nuanced history.

The Erasure of the History of Slavery at Sullivan’s Island

 

The history of slavery in America is, to a great extent, the history of erasure.  For most of the century and a half since the Civil War ended, families, communities, churches, universities, banks, insurance companies, and a host of other institutions have gone out of their way to ignore past involvement in what Rev. Jim Wallis of Sojourners has labeled “America’s original sin.”

 

I was reminded of this a few months ago when my wife and I attended the annual meeting of the Association for the Study of African American Life and History in Charleston, South Carolina.  As we hadn’t visited the city in some three decades, we occasionally played hooky from the conference to go be tourists.  One morning, we took a commercial boat tour of Charleston harbor.  The captain gave a running commentary throughout our voyage as we passed major sites including the Battery, Fort Sumter, and others.  When we made the long drift alongside Sullivan’s Island at the entrance to the harbor, he shared the most extensive portion of his monologue.  We learned of the Battle of Fort Moultrie there during the American Revolution, of the island’s inflated real estate market, and of its many celebrity homes.

 

Curiously, the captain’s commentary completely ignored the most significant part of the history of Sullivan’s Island: its role in the importation of enslaved Africans.  The island served as a quarantine station for Africans arriving on slave ships, who spent days or weeks in “pest houses” until deemed “safe” for public auction.  Some 40% of the nearly 400,000 Africans imported into British North America and the young United States passed through this place.  It has been termed the “Ellis Island” of African Americans.

 

As someone who has specialized in African American history throughout my academic career, with a particular focus on slavery and abolition, I found this example of erasure particularly jarring.  I shouldn’t have.  I know, for example, that southern plantation homes regularly fail to inform tourists about the enslaved people who toiled at these places. Nevertheless, the warm sea breeze we had experienced during the earlier part of the tour immediately evaporated in a cold bath of anger and sadness.  Our effort to be tourists had failed to isolate me – even momentarily – from an awareness of the extent to which Americans still seek to eradicate the heritage of slavery from our collective consciousness.

 

Near the end of the harbor tour, almost in passing, the captain pointed out the site of a new African American history museum opening in Charleston.  Indeed, officials broke ground for the International African American Museum on October 25, 2019.  It is expected to greet visitors in 2021.  The press release announcing the groundbreaking observed that the museum will “illuminate the story of the enslaved Africans who were taken from West Africa, entered North America in Charleston, SC, endured hardship and cruelty, and then contributed so significantly to the greatness of America. The museum . . . will honor the site where enslaved Africans arrived.”

 

Maybe the new museum, along with other recent developments of a similar nature, is a sign that we can get beyond the usual erasure of slavery from our national mind.  It would be a welcome change.

The Making of a Periphery

 

The islands of Southeast Asia were once sites for the production and trade of prized commodities—including cloves, nutmeg, and mace—so valuable that in the fifteenth and sixteenth centuries they attracted seafaring explorers and traders from distant European countries. Today Malaysia, the Philippines, and Indonesia principally export their surplus of cheap labor; over ten million emigrants from Island Southeast Asia provide their labor abroad. How did this once-prosperous region transform into a “peripheral” one? This is the question I address in my new book, The Making of a Periphery: How Island Southeast Asia Became a Mass Exporter of Labor, published by Columbia University Press.

 

Periphery is a classical concept that figured prominently in the work of Nobel Prize winner Arthur W. Lewis and has become widely known thanks to Immanuel Wallerstein’s world-system analysis. The work of Lewis and Wallerstein is of immense importance for understanding why parts of the world that were relatively prosperous in the past have sunk to the lower or even lowest echelons of economic performance today. This question has been taken up by Daron Acemoğlu and James Robinson in their bestseller Why Nations Fail. The strength of Wallerstein and of Acemoğlu and Robinson is that they explain global divergences from a historical perspective using a single theory. While history is indeed crucial for economic analysis, an unavoidable drawback of unifying theories is that they homogenize our understanding of complicated and diverse processes of long-term historical change. At the same time, it is impossible to do any serious global history, or to contribute to development economics, without a theoretical and unifying perspective.

 

A way out of this dilemma is to start from the generally accepted position that plantation economies have a long-term negative effect on economic development. In this respect Island Southeast Asia resembles the Caribbean nations, where the legacies of the plantation economies consisted of meagre economic growth and massive unemployment. Today, massive emigration is the fate of the Caribbean region as much as it is of Island Southeast Asia. As Arthur W. Lewis has pointed out, the problem was not that plantations were sectors of low productivity, but that the unlimited supplies of labor in these regions suppressed wages. 

 

A central argument in my book is that Lewis’ thesis of the unlimited supplies of labor is still important for understanding how parts of the world have become a periphery. For the Caribbean, we know where this labor came from: millions of Africans were kidnapped, enslaved, and transported across the Atlantic Ocean to produce sugar, tobacco, and other crops for Europe and America. But where did the masses working on the plantations of the Philippines, Malaysia, and Indonesia come from? For Malaysia it is clear that its plantations and mines imported Chinese and Indian labor on a massive scale. But in the Philippines and Indonesia, it was natural demographic growth that guaranteed abundant labor supplies. One of the most fascinating stories my book deals with is the relatively successful smallpox vaccination campaign in Java and the northern islands of the Philippines in the early years of the nineteenth century. The vaccine resulted in rapid demographic growth of over 1.5 percent per annum. Together with a stagnant manufacturing sector and declining agricultural productivity, this created the abundant labor supplies for the developing plantation economies.

 

Still, this abundance of labor was not a sufficient cause for a region to be turned into a periphery. Coercion was another crucial factor. We know that enslaved workers were coerced by the whip to grow commodities, yet left the plantations en masse after emancipation, even though poverty awaited them. Coercion was also a necessary condition for the plantations in Southeast Asia. The plantation economies that emerged in parts of the Philippines and particularly in Java in the nineteenth century could not function without the collaboration of local elites and existing patron-client relationships. Local aristocrats and village elites supported the plantation economy in their role as labor recruiters and by forcing villagers to rent their land to plantations. They shared in the profits for each worker and each piece of land they managed to deliver.

 

In the Northern Philippines and Java, plantation economies were successfully embedded in existing agrarian systems. The Dutch introduced forced coffee cultivation in the early eighteenth century and a more comprehensive forced cultivation system on Java in 1830. Local elites played a crucial facilitating role in this transformation of existing agrarian and taxation systems for colonial export production. Java in particular suffered from economic stagnation and its population from malnutrition at the peak of the colonial plantation economy. Per capita income lagged behind other parts of the Indonesian archipelago, where independent peasants produced rubber, copra or coffee for the global markets. 

 

Once Indonesia and Malaysia had become free and independent nations, in 1949 and 1965 respectively, their governments branded plantations as colonial institutions and encouraged smallholder cultivation. They did so for a perfectly good reason: to ensure the revenues would benefit local development. Unfortunately, this decolonization was never completed. Palm oil, one of the world’s most important tropical commodities, has been a driving force in the establishment of new plantation regimes in Indonesia and Malaysia, which are the world’s first- and second-largest producers of this commodity. Over the past decades, we have seen the return of appalling coerced-labor conditions that were supposed to have been buried alongside colonialism. Palm oil plantations cause not only grave ecological damage, but also serious human rights violations.

 

The peripheral position of Southeast Asia in the world of today is the result of a long-term development, as many scholars from Immanuel Wallerstein to Daron Acemoğlu have pointed out. But high demographic growth and local systems of labor bondage are crucial elements in the making of a periphery. This book invites us to rethink the geography of colonialism, in which the Southeast Asian and Caribbean archipelagos share a history of massive coerced plantation work and present-day mass emigration.

The 1619 Project Debate with History of Slavery in New York City

Author's note: “Represent NYC” is a weekly program produced by Manhattan Neighborhood Network (MNN). The show’s guests usually discuss topics like affordable housing, education policy, and domestic violence. I was invited to discuss the New York Times’ 1619 Project and the long-term impact of slavery on New York and American society for a Black History Month broadcast. This post includes the questions I prepared to answer and notes for my responses. New York will soon need its own 1626 Project. 1626 was the year that the Dutch West India Company brought the first eleven enslaved Africans to the New Amsterdam colony that would later become New York City.

 

1) Why has there been controversy over the New York Times 1619 Project?

 

There has been controversy over assertions made in the New York Times’ 1619 Project about the impact of slavery on the history of the United States. I am not knowledgeable about everything discussed in the project report, particularly African American cultural contributions, but I think the history was basically very sound and the project’s critics are off base. Three areas of contention are the role slavery played in fomenting the American Revolution, Abraham Lincoln’s attitude toward Black equality, and the white role in the struggle for African American Civil Rights. While there should always be room for legitimate disagreement about historical interpretation, I tend to side with Nikole Hannah-Jones and the 1619 Project.

 

American slaveholders lived in constant and hysterical fear of slave revolt and were always terrified that Great Britain would not provide adequate protection for their lives and property. In 1741, white New Yorkers invented a slave conspiracy and publicly executed more than two dozen enslaved Africans at what is now Foley Square. Tacky’s Rebellion in Jamaica in 1760, which took months to suppress and led to the deaths of sixty white planters and their families, sent shock waves through British North America.

 

As President, Abraham Lincoln preserved the nation, but he never believed in Black equality. In 1861 he endorsed the Corwin Amendment, which would have prevented the federal government from interfering with slavery in the South. In his December 1862 State of the Union message, a month before issuing the Emancipation Proclamation, Lincoln offered the South a deal, which it rejected: he proposed that the slave system be retained until 1900, that slaveholders be compensated when Blacks were freed, and that freed Africans be voluntarily resettled in colonies in Africa, the Caribbean, and Central America. In his second inaugural address, shortly before he was assassinated, Lincoln offered amnesty to rebel states and slaveholders that would have left freedmen technically free but in a perpetual state of subservience.

 

Last, while there were many whites prominent in the African American Civil Rights movement after 1955 and Montgomery, there were very few before, except for leftist activists, and unfortunately, too many walked away from King and the movement after passage of the 1964 Civil Rights Act and the 1965 Voting Rights Act.

 

2) What are the top ways America benefited economically from the slave trade and from free labor? What about New York? 

 

Before 1500 the world was regionalized, with little interaction between peoples and regions. What is now the United States was sparsely settled by hundreds of indigenous groups, including the Leni Lenape, an Algonquian people who lived in what would become the New York metropolitan area.

 

The Columbian Exchange launched the first wave of globalization and started the transformation of that world into the interconnected, globalized world we have today. At the core of the Columbian Exchange were the trans-Atlantic slave trade and the sale of slave-produced commodities: sugar, tobacco, indigo, rice, and later cotton. Contemporary capitalism, with all its institutional supports, was the product of slavery. The slave trade led to the development of markets for exchange, regular shipping routes, limited liability corporations, and modern banking and insurance practices.

 

In colonial New Amsterdam and colonial New York enslaved Africans built the infrastructure of the settlement, the roads, the fortifications, the churches, the houses, and the docks. They cleared the fields and dredged the harbors.

 

In the 19th century, because it had a good harbor and sat at the northern edge of the coastal Gulf Stream current, New York became the center for the financing, refining, and transport of slave-produced commodities around the world. Sugar from the Caribbean and cotton from the Deep South were placed on coastal vessels and shipped to the Port of New York, loaded onto ocean-going clipper ships, and then transported to Europe.

 

3) How did enslaved Africans change the landscape of New York City? 

 

If we look at images of Manhattan Island before the coming of Europeans and Africans, it was very different from today. The Wall Street slave market, established by the City Common Council in 1711, stood at the corner of Pearl and Wall Streets, where ships docked at the time; now the waterfront is two blocks farther east. As Africans built the village, they actually expanded the physical city with landfill. We know that Africans built the original Wall Street wall and the subsequent Chambers Street wall, which was the northern outpost of the city at the time of the American Revolution. A project by Trinity Church has established that enslaved African labor was used to build the original church and St. Paul’s Chapel, where George Washington prayed and which still stands just south of City Hall.

 

4) Once slavery was abolished in New York City, how were African Americans still oppressed financially and politically? 

 

Slavery ended gradually in New York City and State between 1799 and 1827. Essentially, Africans were required to pay for their freedom through unpaid labor during this transitional period. When finally freed, they received no compensation for their labor or the labor of their enslaved ancestors. Once free, even when they were able to acquire land, it was difficult for African Americans to prove ownership or protect their land from government seizure. One of the greatest injustices was the destruction of the largely African American Seneca Village in the 1850s, when the city confiscated residents’ land to build Central Park. If they had not been displaced, their land would be worth billions of dollars today. Politically, a series of discriminatory laws limited the ability of African American men to vote; of course, no women were allowed to vote.

 

5) Please describe how this financial oppression caused a lack of wealth for generations of African Americans. 

 

Most of the financial injustice and the wealth gap we see today is the product of ongoing racial discrimination with roots in slavery but enacted into federal law in the 1930s and 1940s. The New Deal established the principle that federal programs would be administered by localities, which meant that even when African Americans were entitled to government support and jobs, local authorities could deny them. After World War II, African American soldiers were entitled to GI Bill benefits but were denied housing and mortgages by local banks and realtors, creating all-white suburbs.

 

Originally, Social Security was not extended to agricultural and domestic workers, major occupations for Black workers in the 1930s. Social Security benefits are still denied to the largely minority domestic and home health care workers who work off the books. Many jobs held by African Americans were not covered under New Deal labor legislation, and Blacks in the South were excluded from programs like the Civilian Conservation Corps.

 

6) What is redlining and how did it affect African Americans in New York? How did segregation affect financial disparities? How did Jim Crow? Please connect how these practices developed financial disparities between black and white Americans for generations. 

 

Banks and realtors reserved some areas for white homeowners and designated others for Blacks. On Long Island, Levittown had a clause in the sales agreement forbidding the resale or renting to Black families. Blacks were directed to declining areas like the town of Hempstead or areas prone to flooding like Lakeview. I grew up in a working-class tenement community in the southwest Bronx. My apartment building had 48 units and no Black families. This could not have been an accident. The only Black student in my class lived in public housing because his father was a veteran. There were a lot of Black veterans, but very few were sent to our neighborhood. Segregated neighborhoods also meant segregated schools. None of this was an accident.

 

Brooklyn Borough President Eric Adams was heavily criticized for remarks about white gentrifiers from out of state moving into formerly Black and Latino communities in Brooklyn and Harlem. The mock outrage misdirected attention away from what is actually taking place. The new gentrifiers are not becoming part of these communities; they are settlers displacing longtime residents. Partly as a result of this new wave of gentrification, homelessness in the city has reached its highest level since the Great Depression of the 1930s, including over a hundred thousand students who attend New York City schools.

 

7) How did a person's zip code affect their quality of life? How does it now?

 

Zip codes were introduced in 1963, so the world I grew up in was pre-Zip Code. 

I think the three biggest impacts of where you grow up are the quality of housing, the quality of education, and access to work. In neighborhoods with deteriorating housing, lead poisoning from paint and asthma exacerbated by rodent fecal matter and insect infestations are major problems. Children in these neighborhoods grow up with greater exposure to crime, violence, drug abuse, and endemic poverty. Food tends to be of lower quality because of the dearth of supermarkets and the abundance of fast-food joints. Schools tend to be lower functioning because teachers have to address all of the social problems in the community, not just educational skills. Adults tend to have greater distances to travel to and from work, which essentially means hours of unpaid labor time.

 

8) How does access to education affect financial disparities between white and black communities? 

 

According to a Newsday analysis of Long Island school district funding, Long Island’s wealthiest school districts outspend the poorest districts by more than $6,000 per student. The Long Island districts that spend the most per pupil include Port Jefferson in Suffolk County, where the student population is 87% white and Asian, and Locust Valley in Nassau County, where the student population is 80% white and Asian. On the other end of the spending spectrum, the districts that spend the least per pupil include Hempstead and Roosevelt in Nassau County, where the student populations are at least 98% Black and Latino.

 

In New York City, parent associations in wealthier neighborhoods can raise $1,000 or more per student to subsidize education in their schools; at one elementary school in Cobble Hill, Brooklyn, they raise $1,800 per child. Parents in the poorest communities are too busy earning a living to fundraise and too economically stressed to make donations. These dollars pay for a range of educational supplements and enrichments, so that students who already have the most goodies at home also get the most goodies at school.

 

9) Why is it important to understand history when trying to understand the financial disparities between black and white communities in the U.S.? 

 

It is too easy in the United States to blame poverty on the poor, their “culture,” and their supposed “bad choices.” When drug abuse was perceived as an inner-city Black and Latino problem, it was criminalized, and the solution was to build more prisons. Now that many white Midwestern and southern states have opioid epidemics, suddenly drug abuse is an illness that society must address.

 

10) What are some specific New York City examples of legislation or culture that impacted the financial divide between black and white families? 

 

In New York, urban renewal in the 1960s (that era’s name for gentrification) was also known as “Negro removal,” a term coined by James Baldwin. The 1949 federal Housing Act, the Taft-Ellender-Wagner Act (Robert Wagner was a New York senator), provided cities with federal loans so they could acquire and clear areas they deemed to be slums, which would then be developed by private companies. In the 1950s, the Manhattantown project on the Upper West Side condemned an African-American community so that developers could construct middle-class, meaning white, housing. Before it was destroyed, Manhattantown was home to a number of well-known Black musicians, writers, and artists, including James Weldon Johnson, Arturo Schomburg, and Billie Holiday. Lincoln Center was part of the "Lincoln Square Renewal Project" headed by John D. Rockefeller III and Robert Moses. To build Lincoln Center, the largely African American San Juan Hill community was demolished; it was probably the most heavily populated African American community in Manhattan at that time.

 

 

11) What kind of New Yorker invested in the slave trade, and how did that affect their family's wealth for generations? Can you give us a profile of this type of person? What kind of family benefitted?

 

In the colonial era, prominent slaveholders included the Van Cortlandt family, the Morris family, the Livingston family, and the Schuyler family, of which Alexander Hamilton’s wife and father-in-law were members. Major slaveholding families also invested in the slave trade. Francis Lewis of Queens County, a signer of the Declaration of Independence, was a slave trader.

 

12) What New York corporations benefited from slavery? Why does this matter today? 

 

Moses Taylor was a sugar merchant with offices on South Street at the East River seaport, as well as a finance capitalist, an industrialist, and a banker. He was a member of the New York City Chamber of Commerce and a major stockholder, board member, or officer in firms that later merged with or developed into Citibank, Con Edison, Bethlehem Steel, and AT&T. Taylor earned a commission for brokering the sale of Cuban sugar in the port of New York and supervised the investment of the sugar planters’ profits in United States banks, gas companies, railroads, and real estate. The Pennsylvania Railroad and the Long Island Railroad were built with profits from slave-produced commodities. Because of his success in the sugar trade, Moses Taylor became a member of the board of the City Bank in 1837 and served as its president from 1855 until his death. When he died in 1882, he was one of the richest men in the United States.

 

 

The Cold War New and Old: Architectural Exchanges Beyond the West

Recent reports on Chinese and Russian involvement in infrastructural developments in Africa and the Middle East have coined the phrase “New Cold War.” Yet when seen from the Global South, the continuities with the 20th century lie less in the confrontation with the West and more in the revival of collaboration between the “Second” and “Third” worlds. In my recent book Architecture in Global Socialism (Princeton University Press, 2020), I argue that architecture is a privileged lens through which to study multiple instances of such collaboration and their institutional, technological, and personal continuities.

 

When commentators discuss China’s expanding involvement in Africa, they often point to its continuity with the Cold War period. This includes the “Eight Principles” of Chinese aid, announced by Prime Minister Zhou Enlai in 1964 during his visit to Ghana, at the time of China’s widening ideological split with the Soviet Union. Less discussed is the fact that, in spite of this split, these principles closely echoed the tenets of the Soviet technical assistance from which China had benefitted and which was under way in Ghana. Like China later, the Soviet Union offered low-interest loans for the purchase of Soviet equipment, the transfer of technical knowledge to local personnel, and assurances of mutual benefit and respect for the sovereignty of the newly independent state.

 

 

Above: Nikita Khrushchev and president Sukarno inspect the model of the National Stadium in Jakarta (Indonesia), 1960. R. I. Semergiev, K. P. Pchel'nikov, U. V. Raninskii, E. G. Shiriaevskaia, A. B. Saukke, N. N. Geidenreikh, I. Y. Yadrov, L. U. Gonchar, I. V. Kosnikova. Private archive of Igor Kashmadze. Courtesy of Mikhail Tsyganov.

 

 

In 1960s West Africa, the Soviet Union and its Eastern European satellites used technical assistance to promote socialist solidarity against the United States and Western Europe. Architects, planners, engineers, and construction companies from socialist countries were instrumental in the implementation of the socialist model of development in Ghana, Guinea, and Mali, a model based on industrialization, collectivization, wide distribution of welfare, and mass mobilization. To this day, many urban landscapes in West Africa bear witness to how local authorities and professionals drew on Soviet prefabrication technology, Hungarian and Polish planning methods, Yugoslav and Bulgarian construction materials, Romanian and East German standard designs, and manual laborers from across Eastern Europe.

 

 

Above: International Trade Fair, Accra (Ghana), 1967. Ghana National Construction Corporation (GNCC), Vic Adegbite (chief architect),  Jacek Chyrosz, Stanisław Rymaszewski (project architects). Photo by Jacek Chyrosz. Private archive of Jacek Chyrosz, Warsaw (Poland)

 

 

Some of these engagements were interrupted by regime changes, as was the case in Ghana when the socialist leader Kwame Nkrumah was toppled in 1966. But socialist or Marxist-Leninist regimes were not the only, nor even the main, destinations for Soviet and Eastern European architects and contractors. Their most sustained work, often straddling several decades, took place in countries that negotiated their position across the Cold War divisions, such as Syria under Hafez al-Assad, Iraq under Abd al-Karim Qasim and the regimes that followed, Houari Boumédiene’s Algeria, and Libya under Muammar Gaddafi. By the 1970s Eastern Europeans were invited to countries with elites openly hostile to socialism, such as Nigeria, Kuwait, and the United Arab Emirates.

 

Some countries in the Global South collaborated with Eastern Europe in order to obtain technology embargoed by the West. More often, they aimed at stimulating industrial development and offsetting the hegemony of Western firms. Many of these transactions exploited the differences between the political economy of state socialism and the emerging global market of design and construction services. For example, state-socialist managers used the inconvertibility of Eastern European currencies to lower the costs of their services. In turn, barter agreements bypassed international financial markets when raw materials from Africa and Asia were exchanged for buildings and infrastructures constructed with components, technologies, and labor from socialist countries.

 

Above: State House Complex, Accra (Ghana), 1965. Ghana National Construction Corporation (GNCC), Vic Adegbite (chief architect), Witold Wojczyński, Jan Drużyński (project architects). Photo by Ł. Stanek, 2012.

 

 

The 1973 oil embargo was a game changer for Eastern European construction exports. The profits from oil sales, deposited by Arab governments with Western financial institutions, were lent to socialist countries intent on modernizing their economies. Yet when the industrial leap expected from these investments did not materialize, Hungary, Poland, and East Germany struggled to repay huge loans in foreign currencies. Debt repayment became a key motive for stimulating exports from many Eastern European countries, particularly as the Soviet Union was increasingly unwilling to subsidize them with cheap oil and gas. Faced with shrinking markets for their industrial products, Eastern Europeans boosted the export of design and construction services.

 

By the 1970s, the main destinations of this export were the booming oil-producing countries in North Africa and the Middle East. In Algeria, Libya, and Iraq, state-socialist enterprises constructed housing neighborhoods, schools, hospitals, and cultural centers, as well as industrial facilities and infrastructure, paid in convertible currencies or bartered for crude oil. Eastern Europeans also delivered master plans of Algiers, Tripoli, and Baghdad, and worked in architectural offices, planning administration, and universities in the region. 

 

 

Above: Flagstaff House housing project, Accra (Ghana), 1963. Ghana National Construction Corporation (GNCC), Vic Adegbite (chief architect), Károly (Charles) Polónyi (design architect). Photo by Ł. Stanek, 2012.

 

 

These collaborations were as widespread as they were uneven. Under pressure from state and party leadership to produce revenue in convertible currencies, state-socialist enterprises were highly accommodating to the requests of their North African and Middle Eastern counterparts. In turn, the latter were often constrained by the path dependencies of Eastern European technologies already acquired.

 

Many would see such transactions as indicative of a shift in socialist regimes from ideology to pragmatism, if not cynicism. But a more complex picture emerges when these transactions are addressed through the lens of architecture, which spans not only economy and technology, but also includes questions of representation, identity, and everyday life. 

 

While largely abandoning the discourse of socialist solidarity, Eastern European architects continued to speculate about the position in the world that they shared with Africans and Middle Easterners. In so doing, they aimed at identifying professional precedents useful for the tasks at hand. For example, the expertise in tackling rural underdevelopment, which Central European architects had long claimed as their professional obligation, provided specific planning tools in the agrarian countries of the Global South. In turn, the search for a “national architecture” in the newly constituted states of Central Europe after World War I became useful when architects were commissioned to design spaces representing the independent countries of Africa and Asia.

 

 

Above: Municipality and Town Planning Department, Abu Dhabi (UAE), 1979-85. Bulgarproject (Bulgaria), Dimitar Bogdanov. Photo by Ł. Stanek, 2015.

 

 

During the Cold War, these exchanges were carefully recorded in North America and Western Europe. But after 1989, they were often forgotten by policy makers, professionals, and scholars in the West. By contrast, buildings, infrastructures, and industrial facilities co-produced by West Africans, Middle Easterners, and Eastern Europeans are still in use in the Global South, and master plans and building regulations are still being applied. Some of the modernist buildings are recognized as monuments to decolonization and independence, while others sit awkwardly in rapidly urbanizing cities.  

 

Memories of collaboration with Eastern Europe are vivid among professionals, decision makers and, sometimes, users of these structures. They result in renewed engagements when, for example, Libyan authorities invite a Polish planning company to revisit its master plans for Libyan cities delivered during the socialist period. These memories speak more about the experience of collaboration that bypassed the West, and less about a Cold War confrontation, old or new.  

 

 

 

Frank Ramsey: A Genius By All Tests for Genius

 

It is hard to get our ordinary minds around the achievements of the great Cambridge mathematician, philosopher, and economist Frank Ramsey. He made indelible contributions to as many as seven disciplines: philosophy, economics, pure mathematics, mathematical logic, the foundations of mathematics, probability theory, and decision theory. My book, Frank Ramsey: A Sheer Excess of Powers, tells the story of this remarkable thinker. The subtitle is taken from the words of the Austrian economist Joseph Schumpeter, who described Ramsey as being like a young thoroughbred, frolicking with ideas and champing at the bit out of a sheer excess of powers. Or, as another economist, the Nobel Laureate Paul Samuelson, put it: ‘Frank Ramsey was a genius by all tests for genius’.

 

Ramsey led an interesting life in interesting times. He began his Cambridge undergraduate degree just as the Great War was ending; he was part of the race to be psychoanalyzed in Vienna in the 1920s; he was a core member of the secret Cambridge discussion society, the Apostles, during one of its most vital periods; and he was a member of the Bloomsbury set of writers and artists and of the Guild Socialist movement. He lived his life by Bloomsbury’s open moral codes and lived it successfully.

 

The economist John Maynard Keynes identified Ramsey as a major talent when Ramsey was a mathematics student at Cambridge in the early 1920s. During his undergraduate days, Ramsey demolished Keynes’ theory of probability and C.H. Douglas’s social credit theory; made a valiant attempt at repairing Bertrand Russell’s Principia Mathematica; and translated Ludwig Wittgenstein’s Tractatus Logico-Philosophicus and wrote a critical notice of it that still stands as one of the most challenging commentaries on that difficult and influential book.

 

Keynes, in an impressive show of administrative skill and sleight of hand, made the 21-year-old Ramsey a fellow of King’s College at a time when only someone who had studied there could be a fellow. (Ramsey had done his degree at Trinity). 

 

Ramsey validated Keynes’ judgment. In 1926 he was the first to figure out how to define probability subjectively, and he invented the expected utility theory that underpins much of contemporary economics. Beginning with the idea that a belief involves a disposition to act, he devised a way of measuring belief by looking at action in betting contexts. But while Ramsey provided us with a logic of partial belief, he would have hated the direction in which it has been taken. Today the theory is often employed by those who want to understand decisions by studying mathematical models of conflict and cooperation between rational and self-interested choosers. Ramsey clearly believed it was a mistake to think that people are ideally rational and essentially selfish. He also would have loathed those who used his results to argue that the best economy is one generated by the decisions of individuals, with minimal government intrusion. He was a socialist who favored government intervention to help the disadvantaged in society.
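
 

To make the betting idea concrete, here is a minimal sketch in modern notation; it is an illustration of the standard reconstruction of Ramsey’s proposal, not his own formalism. Consider a bet on an event E with stake S and betting quotient p: it pays S(1-p) if E occurs and costs pS if it does not. If the agent’s degree of belief in E is q, the bet’s expected value is

\[ q \cdot S(1-p) + (1-q)\cdot(-pS) = S\,(q-p), \]

which is zero exactly when p = q, so the betting quotient the agent regards as fair reveals her degree of belief. Given such degrees of belief p(s_i) over the possible states s_i, an act a can then be ranked by its expected utility \( \mathrm{EU}(a) = \sum_i p(s_i)\, u(a, s_i) \), with the agent modeled as choosing an act that maximizes it.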

 

In addition to his pioneering work on decisions made under uncertainty, Ramsey, with encouragement from Keynes, wrote two pathbreaking papers for the latter’s Economic Journal. The first, “A Contribution to the Theory of Taxation,” founded the sub-field of optimal taxation and laid the foundation for the field of macro-public finance, so much so that any research problem about optimal monetary or fiscal government policy is now called a Ramsey Problem. The second, “A Mathematical Theory of Saving,” founded the field of optimal savings by trying to determine how much a nation should save for future generations. This work on intergenerational justice has been expanded and improved upon by economic luminaries such as Kenneth Arrow, Partha Dasgupta, Tjalling Koopmans, and Robert Solow. As Ramsey suggested, the theory has been applied not only to income but also to exhaustible resources such as the environment, resulting in yet another new sub-discipline in economics called ‘optimal development’.
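
 

For readers who want the flavor of the savings paper, Keynes’s famous one-sentence summary of Ramsey’s rule can be written compactly. What follows is a paraphrase in modern notation, offered only as a sketch and ignoring the disutility-of-labor term in Ramsey’s full treatment:

\[ \dot{K}(t)\, u'\big(c(t)\big) = B - u\big(c(t)\big), \]

where c(t) is consumption, u is the utility of consumption, \(\dot{K}(t)\) is the rate of saving (the addition to the capital stock), and B is “Bliss,” the maximum attainable rate of utility. Read this way, a nation should save more the further its current enjoyment falls short of Bliss, and less the higher the marginal utility of present consumption.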

 

Ironically, Ramsey told Keynes that it was a waste of time to write these papers, as he was preoccupied with much more difficult work in philosophy, and didn’t want to be distracted. 

 

His contributions to the latter field were enormous. Ramsey was responsible for one of the most important shifts in the history of philosophy. He had a profound influence on Ludwig Wittgenstein, persuading him to drop the quest for certainty, purity, and sparse metaphysical landscapes in the Tractatus and turn to ordinary language and human practices. Wittgenstein is one of the most influential philosophers in the history of the discipline, and the shift from his early to his later position, caused by Ramsey, is one of the major signposts in the contemporary philosophical landscape.

 

More importantly, his own alternative philosophical views are still being mined for gems. Ramsey’s theory of truth and knowledge is the very best manifestation of the tradition of American pragmatism. His most illustrious contemporaries in philosophy—Bertrand Russell, G.E. Moore, the early Wittgenstein, and the members of the Vienna Circle—sought to logically analyze sentences so that the true ones would mirror a world independent of us. In contrast, Ramsey was influenced by C.S. Peirce, the founder of American pragmatism, who characterized truth in terms of its place in human life. When Ramsey died, he was in the middle of writing a book that is only now starting to be appreciated for its unified and powerful way of understanding how all sorts of beliefs are candidates for truth and falsity, including counterfactual conditionals and ethical beliefs. His general stance was to shift away from high metaphysics, unanswerable questions, and indefinable concepts, and move towards human questions that are in principle answerable. His approach, to use his own term, was ‘realistic’, rejecting mystical and metaphysical so-called solutions to humanity’s deepest problems in favor of down-to-earth naturalist solutions. 

 

Although Ramsey was employed by Cambridge as a mathematician, he published only eight pages of pure mathematics. But those eight pages yielded impressive results. He had been working on the decision problem in the foundations of mathematics that David Hilbert had posed in 1928, which called for an algorithm to determine whether or not any particular formula is valid, that is, true on every structure satisfying the axioms of its theory. Ramsey solved a special case of the problem, pushed its general expression to the limit, and saw that limit very clearly. Shortly after his death, in one of the biggest moments in the history of the foundations of mathematics, Kurt Gödel, Alonzo Church, and Alan Turing demonstrated that the general decision problem was unsolvable. But a theorem that Ramsey had proven along the way, a profound mathematical truth now called Ramsey’s Theorem, showed that in large but apparently disordered systems there must be some order. That fruitful branch of pure mathematics, the study of the conditions under which order occurs, is called Ramsey Theory.
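
 

For the curious, the finite form of the theorem can be stated in a few lines, and the “party problem” below is a standard textbook illustration of its simplest case rather than an example drawn from Ramsey’s paper. For all r, s ≥ 1 there is a least number R(r, s) such that

\[ \text{every red/blue coloring of the edges of } K_{R(r,s)} \text{ contains a red } K_r \text{ or a blue } K_s. \]

The smallest nontrivial value is R(3, 3) = 6: among any six people, some three are mutual acquaintances or some three are mutual strangers, however the acquaintanceships fall. Five people do not suffice, since a pentagon of acquaintances (with all other pairs strangers) contains neither trio.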

 

His work in mathematics and philosophy is only the tip of the iceberg. A query sent out to Twitter, asking for innovations named after Ramsey, produced an astonishing nineteen items. Most of them are technical and would take an article of their own to explain. One, however, is easily accessible. In 1999, Donald Davidson, a leading philosopher of the twentieth century, coined the term ‘the Ramsey Effect’: the phenomenon of discovering that an exciting and apparently original philosophical discovery has already been presented, and presented more elegantly, by Frank Ramsey.

 

Ramsey did all this, and more, in an alarmingly short lifespan. He died at the age of 26, probably from leptospirosis (a bacterial infection spread by animal feces) contracted by swimming in the River Cam.

 

His death made his friends and family (including his brother Michael, who later became Archbishop of Canterbury) question the meaning of life. Ramsey had something to say about that too. His poignant remarks in 1925 on the timeless problem of what it is to be human are just as important as his technical work:

 

My picture of the world is drawn in perspective, and not like a model to scale. The foreground is occupied by human beings and the stars are all as small as threepenny bits. … I apply my perspective not merely to space but also to time. In time the world will cool and everything will die; but that is a long time off still, and its present value at compound discount is almost nothing. Nor is the present less valuable because the future will be blank. Humanity, which fills the foreground of my picture, I find interesting and on the whole admirable. I find, just now at least, the world a pleasant and exciting place. You may find it depressing; I am sorry for you, and you despise me. But [the world] is not in itself good or bad; it is just that it thrills me but depresses you. On the other hand, I pity you with reason, because it is pleasanter to be thrilled than to be depressed, and not merely pleasanter but better for all one’s activities.

 

Unlike his friends Russell and Wittgenstein, who focused on the vastness and the unknowability of the world, Ramsey believed it was more important to concentrate on what is admirable and conducive to living a good life. Rather than focus on a ‘oneness’ or God, like his brother Michael, he thought the good life was to be found within our human, fallible ways of being.

 

How Religious History Helps Us Understand Russia's War Against Ukrainian Independence

 

In recent months, the ongoing Russian-Ukrainian conflict, a multidimensional military, political, economic, and cultural confrontation, has become an issue with which most people around the world are now familiar. What is less well known beyond Eastern Europe is the important religious aspect of Moscow’s “hybrid war” against Ukrainian national independence. According to its Constitution, Ukraine is, to be sure, a secular country where churches and all religious organizations are separated from the state. On the other hand, Ukraine is one of the most religious countries in Europe, with a Christian church whose history reaches back more than a millennium.

 

The origins of Ukrainian Orthodoxy date back to the Middle Ages, when Prince Volodymyr the Great of Kyiv received Christianity from Constantinople in 988. The baptism of the Kyivan Rus was one of the crucial events in early Ukrainian history. It helped create a first proto-national community out of which today’s modern Ukrainian nation would later emerge.

 

Eastern Christianity heavily influenced the rise of the first Eastern Slavic state, the Kyivan Rus, on the territories of today’s central and northern Ukraine, eastern Belarus, and western Russia. Later, however, Orthodoxy in Eastern Europe was transformed from a medium of cultivation and unification into an instrument of domination and subversion. Increasingly reimagining itself as an empire, Tsarist Russia used the Orthodox church to justify and implement its control over Ukraine and Belarus.

 

Against this background, the nearly three decades since Ukraine’s independence have been marked by a struggle of the Ukrainian Orthodox Church of the Kyiv Patriarchate (UOC KP) and the Ukrainian Autocephalous Orthodox Church (UAOC) against the dominance in Ukraine of the Ukrainian Orthodox Church of the Moscow Patriarchate (UOC MP). This confrontation has political undercurrents, as the UOC MP is – in spite of its official name – de facto a branch of the Russian Orthodox Church (ROC), which, in turn, is a manifestly national (rather than pan-national) church that is unofficially but closely linked to the Kremlin. As such, the ROC and the UOC MP were, and are today, important soft-power instruments in the Kremlin’s hybrid warfare against Ukraine. They are a major medium for Moscow’s foreign policy and facilitate the Kremlin’s neo-imperial schemes under such headings as “Orthodox civilization,” “Russian World,” or “Eastern Slavic brotherhood.”

 

In 2014, the so-called “Ukraine crisis” began. This is the common but misleading label for the war that broke out as a result of Russia’s illegal annexation of Crimea and covert intervention in the Donets Basin. Since then, the question of religious independence from Russia has become more pressing than ever for many Ukrainians. As a result of prolonged negotiations, in January 2019 the Ecumenical Patriarch of Constantinople, Bartholomew I, handed to a Ukrainian delegation in Istanbul a so-called Tomos (literally: little book), an official document granting canonical independence to the newly established unified Orthodox Church of Ukraine (OCU). This was a major achievement not only for Ukraine’s religious autonomists. It was also a historic success for the presidency of Petro Poroshenko, whose team had, since 2016, done most of the diplomatic work in preparation for Constantinople’s momentous move.

 

The Russian reaction to this historic act was expectedly vitriolic and full of conspirology. Even before Constantinople finalized its move in early 2019, Patriarch Kirill of the Russian Orthodox Church, among many others, condemned Ukraine’s forthcoming autocephaly in late 2018 with anger and hyperbole: “The concrete political goal was well-formulated by, among others, plenipotentiary representatives of the United States in Ukraine and by representatives of the Ukrainian government themselves: it is necessary to tear apart the last connection between our people [i.e. the Russians and Ukrainians], and this [last] connection is the spiritual one. We should make our own conclusions [concerning this issue] including on the tales which [the West], for a long time, tried to impose on us, during so many years, about the rule of law, human rights, religious freedom and all those things which, not long ago, were regarded as having fundamental value for the formation of the modern state and of human relations in modern society. Ukraine could become a precedent and example for how easily one can do away with any laws, with any orders [and] with any human rights, if the mighty of this world need it.”

 

The new Metropolitan of the Orthodox Church of Ukraine (OCU), Epiphanius, responded to these and many other Russian attacks, saying that “the Russian Orthodox Church is the last advance post of Vladimir Putin in Ukraine,” and that the “appearance of the OCU undercuts the imperial goals of the Kremlin leader. Putin is losing here in Ukraine the support which he had before because if he had not had this support, there would not have been a war in the Donbas. And therefore, we will consistently maintain ourselves as a single church – recognised and canonical in Ukraine. And gradually Russia will lose this influence through the souls of Orthodox Ukrainians here.”

 

To be sure, the acquisition of canonical independence by the newly established Orthodox Church of Ukraine was not only a church matter and a source of division between Russia and Ukraine. It also played a role in Ukrainian domestic affairs, in particular in the Ukrainian presidential elections of 2019. On the day of Epiphanius’s enthronement on February 2, 2019, then President Petro Poroshenko stressed that the OCU is and will remain independent of the state. At the same time, he stated that “the church and the state will now be able to enter onto a path toward genuine partnership of the church and state for joint work for the good of the country and the people.”

 

Representatives of the new OCU repeatedly gave assurances that the state does not meddle in religious affairs but merely contributed to the unification process. Yet former President Poroshenko actively presented Ukraine’s acquisition of autocephaly as his political victory vis-à-vis Russia during his 2019 election campaign, and he even went on a so-called Tomos tour through Ukraine. While such manifest political instrumentalization tainted the acquisition of Ukrainian autocephaly, the OCU’s independence is not a mere side-product of political maneuvering by Ukraine’s former president. It is the result of a decades-long struggle of many Ukrainian Christians against the dominance of the UOC MP and of the aspirations of many Orthodox believers in Ukraine.

 

According to American theologian Shaun Casey, Ukraine’s Tomos, i.e. its obtainment of autocephaly for its Orthodox church, will lead to unification around the OCU and give a new opportunity to deal with religious diversity. Among others, Archimandrite Cyril Hovorun has emphasized that Ukraine’s acquisition of Constantinople’s Tomos corresponds to the general structure of the worldwide Orthodox community and to the national character of the individual Eastern Christian churches. Unlike the centralized and pyramidal structure of the Catholic Church, with the Pope at its top, Orthodoxy is divided into local churches and constitutes an international commonwealth rather than a unified organization.

 

The ongoing dominance of an Orthodox Church subordinated to Moscow rather than Kyiv on the territory of independent Ukraine had thus always been an anomaly. It became an absurdity once Russia started a war against Ukraine in 2014. Therefore, Ukraine’s acquisition of autocephaly for its Orthodox church can be viewed as an opportunity to heal the schism between the various Eastern Christian communities on Ukrainian territory, and to eventually unite most Orthodox believers living in Ukraine. 

 

For that to happen, international recognition by other Orthodox churches is crucial, as it legitimizes the young OCU in the Eastern Christian world. So far, only the Patriarchate of Alexandria and the Standing Synod of the Church of Greece have officially recognized the canonical independence of the OCU. Even these decisions were contested. The former Greek Defence Minister Panos Kammenos called it a crime: "If anything happens in the next few months, the Holy Synod [of the Greek Orthodox Church] will hold all responsibility for the termination of guarantees granted by Russia, due to the recognition of the illegal Church of Ukraine." 

 

In contrast to Greece's Holy Synod, the Serbian Orthodox Church, a close ally of the ROC, has made it publicly clear that it will not recognize the OCU. It follows Moscow's line in claiming that "the Kyiv-based Metropolia cannot be equated with current Ukraine as it has been under the jurisdiction of the Moscow Patriarchate since 1686." Events in Ukraine have gained additional meaning in the Western Balkans as Montenegro – NATO's newest member – is currently debating a contentious religious bill that would enable the state to confiscate property of the Serbian Orthodox Church. The latter has, in response, blamed Kyiv for this development: "It appears that recent events in Ukraine, where the previous authorities and Constantinople Patriarchate legalized the schism, are currently repeated in Montenegro. Schismatics should confess and achieve reconciliation with the Serbian Orthodox Church." 

 

Reacting to recent developments in Ukraine, Greece and the former Yugoslavia, Moscow's Patriarch Kirill now warns that "new work will now be done to strengthen Orthodoxy's canonical purity, and even greater efforts made to preserve and restore unity where this has been shaken." The OCU's Metropolitan Epiphanius of Kyiv, in contrast, predicts that, in the near future, "at least three or four more churches will recognize our autocephaly."

Such radical statements reflect the fact that the OCU's independence has the potential to change the balance of influence in the entire Orthodox world. Representatives of several other Christian and non-Christian religions welcomed the emergence of a canonical and independent Ukrainian Orthodox church in 2019. The sometimes harsh rejection of the OCU by a number of Russian and pro-Russian Orthodox hierarchs has largely to do with the threat it poses to Moscow-dominated power relations in the international network of Eastern Christian churches. The emergence of a potentially large competitor in Eastern Europe could encourage other local churches currently under the Moscow Patriarchate to follow Ukraine's example. 

 

Religion will remain an important factor in the ongoing conflict between Russia and Ukraine, and will divide worldwide Orthodoxy, as long as Moscow does not recognize Ukrainian autocephaly. The emergence of the OCU and its growing recognition among other local Orthodox churches will profoundly affect the post-Soviet space and other regions of the world. It will probably provoke Moscow into even harsher actions, as the Kremlin is gradually losing a vital instrument of its hybrid warfare against Ukraine. While autocephaly has been an aim of many Ukrainian Christians for centuries, Constantinople's 2019 Tomos for the OCU is perceived, inside and outside Ukraine, as a highly symbolic answer to the Russian military attack on Ukraine – an aggression of one largely Orthodox people against another. Against this geopolitical background, the OCU's acquisition of autocephaly undercuts the crypto-imperial mood in the Moscow Patriarchate.

 

This article is an outcome of a project within the 2018-2019 Democracy Study Center training program of the German-Polish-Ukrainian Society and European Ukrainian Youth Policy Center, in Kyiv, supported by the Foreign Office of the Federal Republic of Germany. #CivilSocietyCooperation. Umland's work for this article benefited from support by "Accommodation of Regional Diversity in Ukraine (ARDU): A research project funded by the Research Council of Norway (NORRUSS Plus Programme)." See blogg.hioa.no/ardu/category/about-the-project/.

Mon, 17 Feb 2020 08:10:24 +0000 https://historynewsnetwork.org/article/174251 https://historynewsnetwork.org/article/174251 0
JFK's New Hampshire Primary Hope Resonates Today

 

The day before the 1960 New Hampshire presidential primary, candidate John F. Kennedy talked about America's great hope for disarmament. Speaking at the University of New Hampshire, JFK said "No hope is more basic to our aspirations as a nation" than disarmament, and exclaimed that "there is no greater defense against total nuclear destruction than total nuclear disarmament." It's vital that our presidential candidates today share this goal of JFK's. Despite previous arms control treaties, there are still 14,000 nuclear weapons in the world, most held by the U.S. and Russia. The danger of a new, expensive arms race looms large.

JFK believed that the U.S., Russia and other nuclear powers have a common interest in disarmament. In his speech JFK noted "that funds devoted to weapons of destruction are not available for improving the living standards of their own people, or for helping the economies of the underdeveloped nations of the world." Nuclear spending fosters instability at home and abroad by stealing resources from the impoverished.

We need this same type of thinking as we negotiate progress toward nuclear disarmament. But sadly, treaties are being rolled back by the Trump administration, furthering the nuclear danger. We need to extend the New START treaty achieved by President Obama, which limits deployed strategic nuclear weapons for Russia and the U.S. We don't want to risk having no arms control treaty with Russia in place. But Trump has been stalling on renewing the treaty, despite nearly everyone urging him to do so. Lt. Gen. Frank Klotz says "The most prudent course of action would be to extend New START before it expires in 2021 and thereby gain the time needed to carefully consider the options for a successor agreement or agreements and to negotiate a deal with the Russians."

Extending New START also takes on extra meaning right now because of Trump's withdrawal from the INF Treaty with Russia, which has escalated nuclear dangers. That treaty, achieved by President Ronald Reagan, had eliminated short- and medium-range nuclear missiles.

Kennedy said in his speech that disarmament would take "hard work." We clearly have to work harder at diplomacy today, which was our main tool in controlling the nuclear threat during the Cold War. The Trump administration can start by ratifying the long-overdue Comprehensive Nuclear Test Ban Treaty, which bans all nuclear test explosions.

President Dwight Eisenhower first pursued negotiations on a nuclear test ban with the Soviets during the Cold War. Kennedy continued Ike's efforts and achieved the great breakthrough of the Limited Nuclear Test Ban Treaty of 1963. This treaty banned nuclear tests in the atmosphere, underwater, and in outer space. It came just one year after the Cuban Missile Crisis brought the Soviets and the U.S. to the brink of nuclear war. But underground tests continued. So we need to finish the job Ike and JFK started and finally ratify the Comprehensive Nuclear Test Ban Treaty. All Trump has to do is pick up the phone and ask the Senate to ratify it. We should encourage North Korea and China to ratify as well, as a confidence-building measure toward nuclear disarmament in Asia.

We also need to convey the wastefulness of nuclear spending. The Congressional Budget Office warns that "The Administration's current plans for U.S. nuclear forces would cost $494 billion over the 2019–2028 period—$94 billion more than CBO's 2017 estimate for the 2017–2026 period, in part because modernization programs continue to ramp up." Daryl Kimball of the Arms Control Association says we need to extend New START and then build more arms reduction treaties to start cutting nuclear weapons costs.

Think of how tens of billions of dollars each year are going to be poured into nuclear weapons. Then think of all the different ways that money could be spent to better society. Those dollars could feed the hungry, cure cancer and other diseases, and improve education and infrastructure. The World Food Program estimates that $5 billion a year could feed all the world's school children, a major step toward ending global hunger. We should fund that noble peace initiative instead of nukes. The candidates for president must take up the cause of nuclear disarmament.

Mon, 17 Feb 2020 08:10:24 +0000 https://historynewsnetwork.org/article/174244 https://historynewsnetwork.org/article/174244 0
The Unique Struggles of Women and Native Americans to Vote

 

Wealthy white men have had the right to vote in America since the beginning of our republic. It’s been a very, very different story for women and Native Americans. 

Women’s voting rights took a long time. Native Americans’ took longer. 

The struggle for women’s voting rights began in April 1776, when 32-year-old Abigail Adams sat at her writing table in her home in Braintree, Massachusetts, a small town a few hours’ ride south of Boston. 

The Revolutionary War had been going on for about a year. A small group of the colonists gathered in Philadelphia to edit Thomas Jefferson’s Declaration of Independence for the new nation they were certain was about to be born, and Abigail’s husband, John Adams, was among the men editing that document. 

Abigail had a specific concern. With pen in hand, she carefully considered her words. Assuring her husband of her love and concern for his well-being, she then shifted to the topic of the documents being drafted, asking John to be sure to “remember the Ladies, and be more generous and favourable to them than [were their] ancestors.”

Abigail knew that the men drafting the Declaration and other documents leading to a new republic would explicitly define and extol the rights of men (including the right to vote) but not of women. She and several other well-bred women were lobbying for the Constitution to refer instead to persons, people, humans, or “men and women.” 

Her words are well preserved, and her husband later became president of the United States, so her story is better known than those of most of her peers. 

By late April, Abigail had received a response from John, but it wasn’t what she was hoping it would be. “Depend on it,” the future president wrote to his wife, “[that we] know better than to repeal our Masculine systems.”

Furious, Abigail wrote back to her husband, saying, “If perticular [sic] care and attention is not paid to the Ladies, we are determined to foment a Rebellion.” 

Abigail’s efforts were unrewarded. 

Adams, Jefferson, Hamilton, and the other men of the assembly wrote, “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the Pursuit of Happiness. That to secure these rights, Governments are instituted among Men, deriving their just Powers from the Consent of the Governed . . . ” 

The men had won. 

At that time, a married woman couldn’t make out a will because she couldn’t independently own property. Her husband owned anything she’d brought into the marriage. If he died, a man appointed by a court would decide which third of her husband’s estate she could have and how she could use it, and he would supervise her for the rest of her life or until she remarried. A woman couldn’t even sue in court, except using the same laws applied to children and the mentally disabled with a male executor in charge. 

And, for sure, a woman couldn’t vote. 

The Generational Fight for Women's Suffrage 

Nearly a hundred years later, things hadn’t changed much. Susan B. Anthony went to her ward’s polling station in Rochester, New York, on November 1, 1872, and cast a vote. 

Justifying her vote on the grounds of the 14th Amendment, Anthony wrote, “All persons are citizens—and no state shall deny or abridge the citizen rights.”

Six days later, she was arrested for illegally voting. The judge, noting that she was female, refused to allow her to testify, dismissed the jury, and found her guilty. 

A year later, in the 1873 Bradwell v. State of Illinois decision, concerning the attempt of a woman named Myra Bradwell to practice law in Illinois, the US Supreme Court ruled that women were not entitled to the full protection of persons for purposes of voting or even to work outside the home. 

Justice Joseph P. Bradley wrote a concurring opinion, which minced no words: "The family institution is repugnant to the idea of a woman adopting a distinct and independent career from that of her husband. So firmly fixed was this sentiment in the founders of the common law that it became a maxim of that system of jurisprudence that a woman had no legal existence separate from her husband, who was regarded as her head and representative in the social state."

After another 50 years, suffragettes eventually won the right to vote with the passage of the 19th Amendment in 1920. But burdensome laws, written and passed mostly by men, continue to oppress women to this day. These include voter suppression laws that hit women particularly hard in Republican-controlled states. 

Those states, specifically, are the places where “exact match” and similar ALEC-type laws have been passed forbidding people to vote if their voter registration, ID, or birth certificate is off by even a comma, period, or single letter. The impact, particularly on married women, has been clear and measurable. As the National Organization for Women (NOW) details in a report on how Republican voter suppression efforts harm women: 

Voter ID laws have a disproportionately negative effect on women. According to the Brennan Center for Justice, one third of all women have citizenship documents that do not identically match their current names primarily because of name changes at marriage. Roughly 90 percent of women who marry adopt their husband’s last name. That means that roughly 90 percent of married female voters have a different name on their ID than the one on their birth certificate. An estimated 34 percent of women could be turned away from the polls unless they have precisely the right documents.

MSNBC reported in a 2013 article titled "The War on Voting Is a War on Women" that "[W]omen are among those most affected by voter ID laws. In one survey, [only] 66 percent of women voters had an ID that reflected their current name, according to the Brennan Center. The other 34 percent of women would have to present both a birth certificate and proof of marriage, divorce, or name change in order to vote, a task that is particularly onerous for elderly women and costly for poor women who may have to pay to access these records." The article added that women make up the majority of student, elderly, and minority voters, according to the US Census Bureau. In every category, the GOP wins when women can't vote. 

Silencing and Suppressing Native Voices 

Republicans generally are no happier about Native Americans voting than they are about other racial minorities or women voting. Although Native Americans were given US citizenship in 1924 by the Indian Citizenship Act, that law did not grant them the right to vote, and their ability to vote was zealously suppressed by most states, particularly those like North Dakota, where they made up a significant share of the nonwhite population. 

Congress extended the right to vote to Native Americans in 1965 with the Voting Rights Act, so states looked for other ways to suppress their vote or its impact. Gerrymandering was at the top of the list, rendering their vote irrelevant. But in the 2018 election, North Dakota took it a step further. 

Most people who live on the North Dakota reservations don’t have separate street addresses, as most tribes never adopted the custom of naming streets and numbering homes. Instead, people get their mail at the local post office, meaning that everybody pretty much has the same GPO address. Thus, over the loud objections of Democratic lawmakers, the Republicans who control that state’s legislature passed a law requiring every voter to have his or her own unique and specific address on his or her ID.

Lots of Native Americans had a driver’s license or even a passport, but very few had a unique street address. When the tribes protested to the US Supreme Court just weeks before the election, the conservatives on the Court sided with the state.

In South Dakota, on the Pine Ridge Reservation, the Republican-controlled state put polling places where, on average, a Native American would have to travel twice as far as a white resident of the state to vote. And because that state’s ID laws don’t accept tribal ID as sufficient to vote, even casting an absentee ballot is difficult. 

Although the National Voter Registration Act of 1993, also known as the Motor Voter Act, explicitly says that voting is a right of all US citizens, that part of that law has never been reviewed by the Supreme Court and thus is largely ignored by most GOP-controlled states. As a result, you must prove your innocence of attempted voting fraud instead of the state proving your guilt. 

Reprinted from The Hidden History of the War on Voting with the permission of Berrett-Koehler Publishers. Copyright © 2020 by Thom Hartmann. 

Mon, 17 Feb 2020 08:10:24 +0000 https://historynewsnetwork.org/article/174249 https://historynewsnetwork.org/article/174249 0
Manipulation of Spanish History Has Manufactured A Constitutional Crisis

 

Spain is facing an existential crisis. Catalonia, one of its seventeen constituent regions, has a government supported by a small majority that seeks full independence from Spain. Because Spain was unable to form a permanent government at the national level without the support of Catalan separatists in parliament, the Socialist prime minister had to promise to allow a referendum to determine whether Catalonia would gain independence from the rest of Spain. Most Spaniards oppose such a referendum. The entire country will very likely fall into a state of civil war if the referendum is granted. The roots of this dilemma lie in a manufactured crisis brought on by warring nationalist groups with competing narratives of Spain's past. 

 

Spanish nationalists, those descended from the Fascists who supported Spanish dictator Francisco Franco, always ignore the cultural diversity that characterizes Spain. Catalan nationalists, descended from a group of anti-Francoists, pretend that Catalonia is and has always been so radically distinct from the rest of Spain that it cannot possibly tolerate political union with the rest of the country. 

 

Both sides want to seize control by destroying Spain's diversity. Spanish nationalists seek to undo the current political status quo: a unitary nation that has devolved significant power to its regions, called autonomous communities. Instead, they want to impose a single, unified Spanish (Castilian) identity upon all ethno-linguistic minorities in the country. Catalan nationalists likewise want to impose a single cultural and political identity within a completely independent nation, where ethno-linguistic minorities such as non-white Latin Americans and Castilian-speaking Spaniards would be subordinated to Catalan cultural and political power. 

 

Both sides use history to make their case, but each side's reading of history is flawed. The truth is more complicated. Spain is an accident in the sense that, until 1640, it was not obvious that Portugal was any different from the rest of the Iberian Peninsula. Just as India, Bangladesh, and Pakistan were once simply called India, Portugal used to be part of what was called Hispania, then the name for the entire Iberian Peninsula.

 

During the Roman era, Latin spread throughout most of the Iberian Peninsula except for the northern region, where it failed to take root due to strong Basque resistance. When the Roman Empire collapsed in the Latin West before 476, the Visigoths and Suebi established kingdoms in what was then called Hispania. The Suebi influenced Latin speakers in the western part of the Peninsula, while the Visigoths influenced the southern, central, and eastern portions of the Peninsula. The Basques managed to survive in the northern part of Hispania. Thus, before the Islamic conquest ended the earliest phase of the Middle Ages on the Iberian Peninsula in the years after 711, there were three major groups on the Peninsula: Hispano-Romans with more Visigothic than Suebic influence, Hispano-Romans with more Suebic than Visigothic influence, and non-Latinized Basques.

 

When Muslims from North Africa conquered the Peninsula, a few areas either remained independent or quickly became independent from Muslim rule. Most of the free regions were partially or completely Basque, with one key exception. The one Latinate region freed from Muslim rule early on was Asturias. Today's Spain largely derives from the political and cultural legacy of Asturias, which became a kingdom less than a generation after the invasion of the Iberian Peninsula.

 

Asturias developed out of the Hispano-Roman/Visigothic tradition. When it liberated Galicia from Muslim control in 739, it was joined by a major group from the Hispano-Roman/Suebic cultural sphere. The Galicians helped reconquer and colonize what became Portugal. As a result, Portugal and Galicia have more in common with each other than either does with the rest of the Peninsula. The differences between the Suebic-influenced cultures (Galicia and Portugal) and Visigothic-influenced cultures like Asturias were and are fairly minimal and certainly do not justify Portuguese independence.

 

Around 790, the Franks (a Germanic people who deeply influenced both France and Germany) intervened in the re-conquest (or Reconquista) of Hispania. They established the Marca Hispanica, which included the Christian areas of what is now Catalonia, increased Christian control in certain areas of the peninsula, and established the basic building blocks of the northeastern and north-central regions of medieval Hispania: the counties of Aragon and what became known as the Catalan counties. Catalan nationalists claim that the Frankish influence and the nature of the Catalan counties made Catalonia irreconcilably different from the rest of Hispania, but this is a late modern fudge that exaggerates small differences between Aragon and the Catalan counties. The Catalans are not merely Frankish or French, as some nationalists have implied, but are strongly linked to the Hispano-Romans. 

 

In sum, Frankish intervention, Basque resistance, and the continued Reconquista fought between Christian and Muslim states, with Latin speakers on both sides, led to a mishmash of states and cultures in Hispania. As the Christians of Hispania reclaimed the Peninsula, the Ibero-Romance languages came to dominate the whole peninsula except the Basque areas. By 1492, Islam was defeated on the Peninsula and most of the disparate cultures and territories were united under the personal union of two monarchs, Ferdinand and Isabella. This union eventually led to what we call Spain. Spain ruled over Portugal from 1580 to 1640; the Portuguese gained their independence by force.

 

Culturally and historically, then, the Iberian Peninsula is one unit. Only Portugal was never consolidated into Spain. Catalans are different from Castilians, but so too are Aragonese, Asturians, and Galicians different from one another. 

 

The history of Spain does not support the Spanish nationalist claim that Castilian is the natural language and culture for all people of the Peninsula. Castilian evolved in the lands between Asturias and the Muslim heartland and became dominant through luck, politics, and force of arms. However, Castilian was not the dominant language of much of Spain until the last three centuries. More importantly, Castilian was not the choice of the peoples on whom it was imposed. Today, it is impossible to divide Spain according to "irreconcilable cultural differences" without giving in to hate, xenophobia and racism. It is also impossible to rightly impose one culture, one language, one ethnicity upon all Spaniards. Only diversity and inclusion can save a united Spain, and the only form of government that can constitutionally support inclusion and diversity is federalism. The alternative is chaos. 

Mon, 17 Feb 2020 08:10:24 +0000 https://historynewsnetwork.org/article/174254 https://historynewsnetwork.org/article/174254 0
The Remarkable Success of Midwestern Front Porch Campaigns

 

The ways that presidential candidates have reached out to voters over the course of U.S. history have changed partly because the franchise has been opened to more people, and greater voter participation in campaigns has naturally ensued. During the Jeffersonian Era, candidates followed the maxim that the Founding Fathers subscribed to: the office seeks the man, not the other way around. 

 

As the Jacksonian Era unfolded, some states extended the franchise to all white men. In 1840, William Henry Harrison became the first presidential candidate to tour and give speeches to adoring throngs. Going on the road seemed to make sense; as the franchise opened to more voters, they naturally wanted to hear the candidates and "feel" their presence. Harrison won the contest, but between 1852 and 1872, the next four major-party nominees who stumped all lost. Touring proved rife with landmines for men such as Winfield Scott, Stephen Douglas, Horatio Seymour, and Horace Greeley.

 

It was against this historical backdrop that James Garfield had to figure out how to campaign for the presidency in 1880. He did not want to stay mum and seem uninterested, but he also did not want to follow in the footsteps of the four previous stumping candidates. Inviting voters to his home to hear him give speeches, meet his family, and take home some food from his farm as a gift struck a middle ground between staying too quiet and stumping with its risk of errors. From his porch, with his wife and children by his side, the Republican candidate espoused a high tariff to protect Americans' jobs, homes, and families. 

 

As the campaign wore on and Garfield started looking like the favorite, he received visits from ever larger groups. Union Civil War veterans, women, German voters, businessmen from Cleveland, first-time voters, and plain folks from Ohio all came to his Lawnfield residence in the small, bucolic town of Mentor, Ohio, to participate in the first front porch campaign for the presidency. While industrialization catalyzed an age of impersonalization, in which workers barely met or saw their bosses, voters were now able to go to a candidate's home, meet him and his family, shake their hands, and even receive gifts of food from his garden. Garfield also started an important trend among Republicans running from their front porches: he won.

 

Eight years later, Republican Benjamin Harrison decided to follow in Garfield's footsteps. His efforts took place in Indianapolis, and 350,000 people from around the country came to his porch. This time much larger groups brought gifts to the candidate; Harrison then delivered speeches prepared well in advance and afterwards shook hands with as many visitors as possible. He made his appearances with his family and, like his predecessor, supported a high protective tariff. Harrison also championed the compensation rights of Union soldiers in order to protect their families and homes. The soldiers' marches to his home reasserted their masculinity at a time when the age of reconciliation was threatening it for some. He also spoke to African American voters about protecting their rights just as Jim Crow legislation was officially taking hold. The Republican press juxtaposed Harrison's optics against his inactive opponent, Grover Cleveland, which contributed to his close win. 

 

In 1896 William McKinley ran the largest and most famous front porch campaign from Canton, Ohio. He saw 750,000 visitors from around the country. This time Union and Confederate soldiers marched together as the age of reconciliation continued unfolding. Women, African Americans, various ethnic groups, and first-time voter groups all visited.

 

The campaigns of both Harrison and McKinley changed their towns dramatically. Like his predecessors, McKinley espoused a high tariff and class solidarity. While pro-McKinley sheets were able to print his speeches in full for their reading audiences across the country, his stumping, swashbuckling opponent, William Jennings Bryan, delivered extemporaneous speeches replete with divisive class rhetoric that newspaper reporters could barely keep up with and only partially report. Republican sheets also largely ignored the crime that occurred in Canton, making the events look like organized pageantry. Bryan may have been a more entertaining speaker, but the front porch strategy helped McKinley convey his full opinions to a national audience through the newspapers, and he won.

 

After a hiatus of nearly a quarter century, Warren Harding brought front porch campaigning back to presidential politics from Marion, Ohio, in 1920. Harding's campaigners used his porch appearances and speeches to advertise their man nationally. His promise of a "return to normalcy" fit well with the homebound style. Harding saw the same swaths of visitors that previous Republicans had. This time a small group of Union soldiers who were still alive came to Marion. African Americans came to a town that had experienced race riots and expelled black folks from the community a year before. Harding also had to beg women in his audience not to vote against Republican candidates who had been anti-suffrage before the passage of the Nineteenth Amendment. The fourth and final porch campaign yielded the same result as the first three: the Republican won. 

 

No presidential electioneering technique has ever been as successful while appearing and disappearing as quickly as front porch campaigning. The porch allowed Republicans to show a certain amount of interest in the office without appearing too interested. At a time when the norms of presidential canvassing were unsettled, front porch campaigning became a consistent winning formula for Republican presidential candidates throughout the Gilded Age and in 1920. Throughout the 21st century, candidates will continue to look for inventive ways to reach out to diverse swaths of voters.

Mon, 17 Feb 2020 08:10:24 +0000 https://historynewsnetwork.org/article/174197 https://historynewsnetwork.org/article/174197 0
Harriet Tubman and a National Legacy of Midnight Skies and Silent Stars

Stewart's Canal in Harriet Tubman Underground Railroad National Monument

 

Cynthia Erivo, nominated for Best Actress at this weekend's Oscars, stars in the gripping biopic "Harriet." The movie, which tells the story of abolitionist Harriet Tubman, captures her miraculous physical, emotional, and spiritual journey as she escapes from slavery to become an American icon. 

 

Of course, the horrors of slavery and the courage of the enslaved heroes who defied it are foremost in the movie's story. Not to be overlooked, however, is the incredible landscape through which Tubman traveled along the Underground Railroad. 

 

The film's cinematography beautifully depicts the serene yet daunting setting of the 19th-century Delmarva Peninsula. In some ways, this setting is as integral to the tale as the characters themselves. But we are losing a piece of this living history, just as the history books are beginning to capture a more complete picture of our nation's past. 

 

A new park

 

Located on Maryland's Eastern Shore, Harriet Tubman Underground Railroad was formally designated a National Monument in 2013 and a National Historical Park in 2014. The contemporary application of the Antiquities Act has garnered recent attention over massive new monument designations followed by equally, if not more, controversial boundary reductions at national monuments such as Bears Ears and Gold Butte.

 

Far less splashy, in terms of total area, have been designations within the past decade of sites like Harriet Tubman, Mill Springs Battlefield, Camp Nelson Heritage, and Fort Monroe National Monuments. While relatively small in size, these sites are absolutely critical to preserving and interpreting the cultural history of the United States. The interpretation of these histories is aided by preserving the landscape to reflect the historical period of record.

 

For Harriet Tubman Underground Railroad, that means managing a landscape to reflect the natural assets and ambiance that were present at the site more than 150 years ago. In other words, the dark skies and quiet spaces present during Harriet Tubman's life should be available to visitors to the park on a daily basis rather than only reflected through Hollywood films. In addition to their historic value, these landscapes are of high value to wildlife and to people for their physical and mental health benefits. They are also under threat. 

 

Light and noise: New threats to our nation’s natural heritage

 

As part of a comprehensive program to assess the condition of natural resources in all national parks, a new study of Harriet Tubman Underground Railroad National Historical Park concludes that the park is threatened on multiple fronts. 

 

Climate change, sea-level rise, and non-native biological invasions are among the most pervasive and well-documented threats to protected lands, especially coastal systems rich in wetland resources. Land development and the associated habitat destruction are also among the leading causes of wildlife extinctions and can have severe consequences for ecosystem processes such as water filtering and flood control. 

 

A relatively new emphasis in the National Park Service is on the natural sounds and night skies that give many parks their distinctive character. Appreciating Harriet Tubman's life and her journey requires the park to maintain a certain fidelity to the stars and sky that were her tools for navigating her way to freedom. An important part of the visitor experience is being able to escape the light and noise pollution associated with modern society.

 

"The midnight sky and the silent stars have been the witness to your devotion to freedom and your heroism." - Frederick Douglass

 

The United States has some of the highest levels of artificial lighting in the world. Fewer than one-third of Americans experience sky conditions dark enough to view the Milky Way on a regular basis. Dark skies are valued in parks for their wildlife function, sense of wilderness, and astronomical stargazing. In parks like Harriet Tubman Historical Park, they are also essential to the historical interpretation.

 

Managing light pollution in Harriet Tubman Underground Railroad National Historical Park is a challenge because of its proximity to several large cities including Washington D.C., Baltimore, Richmond, and Norfolk, Virginia. These urban areas produce substantial amounts of artificial light that are reflected into the atmosphere and decrease the night sky quality for hundreds of miles. 

 

The buzz around the film and its popularity with critics and general audiences alike create an opportunity to discuss, and shine a light on, the important landscape associated with this amazing historical figure. 

Mon, 17 Feb 2020 08:10:24 +0000 https://historynewsnetwork.org/article/174219 https://historynewsnetwork.org/article/174219 0
Israel as Palestine: One State is Not the Solution, It’s Reality

In my years as a graduate student at Berkeley in the early 1970s I was a member of a group of idealistic and eccentric Jewish students known as the Radical Jewish Union.  We were directly involved in struggles to save Soviet Jewry, resist the draft, convince Jewish welfare federations to direct more funds to Jewish education, revive Jewish cultural and artistic traditions, fight anti-Semitism, and encourage Israel to move in progressive directions.  

 

In 1972 I wrote an article in the group's newspaper, The Jewish Radical, announcing the formation of a petition campaign to be called "Yaish Breira" (in Hebrew, "There is an alternative"). We published and energetically distributed a petition calling on the Israeli government to stop settling Jews in the West Bank and to accept the establishment of a non-belligerent Palestinian state alongside Israel in the West Bank and Gaza Strip. In my article I argued that if settlement of the West Bank continued, political forces would arise in Israel that would make future withdrawal impossible, thereby "locking the Jewish state in a political dungeon from which it would never escape." At that time there were fewer than 2,000 West Bank settlers outside of expanded East Jerusalem. Now there are nearly 400,000. The half dozen activists who were the core of Yaish Breira (most of whom moved to Israel and are still living there) were liberal Zionists. We were desperate to protect a democratic Israel from irredentist expansion that would mean repression, discrimination, and the prevention of Palestinian self-determination.

 

This was the still-avoidable catastrophe that obsessed us and drove us to reach out to Jewish activists all over the world. Committed to keeping the "dirty laundry" of conflict among Jews from being seen by the gentiles, we wrote and spoke only in Jewish publications and at Jewish events. At that time there was very little awareness of the phenomenon of Jewish settlement of the West Bank and the immense implications of that settlement for the options Israelis and Palestinians would have in the future. Even figures who would eventually become leaders of the peace movement, such as Colonel Morele Bar-On and Rabbi Arthur Hertzberg, criticized our efforts. It was ridiculous, they claimed, to imagine that settlements would ever amount to anything of political significance.   

 

We were regularly harassed and insulted as “self-hating Jews,” “anti-Semites,” and even “Nazis.”  I was even fired from my part-time job as a Hebrew-school teacher.  Nevertheless, we collected over 400 signatures from Jewish activists in fourteen countries.  In 1973 we submitted our petition to the Golda Meir government, which completely ignored us.

 

West Bank Palestinians, however, took notice. Shortly before the 1973 war erupted, the front page of one Arabic newspaper in Jerusalem, Sout al-Jamaheir, featured a long article about our petition, with the excited headline that there were actually Jews who opposed settlements in the West Bank and favored the establishment of a Palestinian state.

 

Times change and so do majority opinions. After decades of delay and continued settlement activity, most Israelis did accept our argument that West Bank settlements posed a dire threat to the only negotiated solution to the conflict: a Palestinian state in the West Bank and Gaza Strip. Unfortunately, however, no government in Israel acted decisively enough or soon enough to make that vision a reality. For almost a decade now, there has been no real possibility of achieving it, even though it remains convenient for Americans, Israelis, and some Palestinians to pretend otherwise. It is for saying that, for saying that I no longer support the two-state solution, that I am now denounced as an anti-Semite and self-hating Jew.

 

It used to be that Israel, after 1967, was a state within the boundaries established after the 1948 war, temporarily "occupying" other territories, outside the state, that could have become a Palestinian state. No more. One out of eleven Jewish Israelis lives in the territories captured in 1967 that Israel calls "Judea, Samaria, and East Jerusalem." There are no Israeli Jews living in the Gaza Strip, but the two million inhabitants of that ghettoized region are for all intents and purposes also ruled by the policies and power of Israeli governments. The state of Israel collects taxes from Gazans, regulates their trade, controls entry and exit, and determines whose homes and whose lives will or will not be protected from destruction (the fundamental function of a state). Even official Israeli maps depict no international borders separating "Israel" from Palestinian enclaves or from the Gaza Strip. If Israel leaves much for Hamas to do in Gaza, and something for the Palestinian Authority to do in the West Bank, so too do Palestinian organizations operate in Israeli prison yards without thereby removing them from the state of Israel.  

 

The two-state solution is no longer a viable political objective or a practical, useful framework for thinking about the problem. Officially, however, it is still honored, even by those who privately know the truth. It is, in effect, a "Dead Solution Walking." But the fact that one state controls the entire territory between the Mediterranean Sea and the Jordan River does not mean we have witnessed the coming of the one-state solution. What we have is a one-state reality.  

 

The future, and with it prospects for peace and a better set of problems than those afflicting Jews and Palestinians today, will not be determined by renewed diplomacy or negotiations. The diplomatic merry-go-round may continue to turn, but like all merry-go-rounds, its purpose is simply to keep turning, not to go anywhere. Instead the future will be determined by long struggles to change the one state, Israel, that has united the entire country. This will mean what it did for freed black slaves in the United States, for non-whites in apartheid South Africa, for Irish Catholics in nineteenth-century Britain, and for women in all western industrialized countries: generations of conflict and shifting alliances that transform limited democracies by incorporating formerly excluded populations, thereby ending, or at least greatly reducing, discrimination and inequality. In this way, eventually, all those whose lives are subject to the power of the Israeli state – Jews in the Galilee, Arabs in Gaza, Jews in the West Bank, and Arabs in Haifa – will have equal rights and equal representation in the government that governs them.  

 

In other words, the struggle for peace between Israel and the Palestinians has been replaced by struggles, likely to take generations, for the equality of Jews and Arabs within the state now known as Israel.

                  

Mon, 17 Feb 2020 08:10:24 +0000 https://historynewsnetwork.org/article/174190 https://historynewsnetwork.org/article/174190 0
Celia Sánchez Manduley: The Most Famous Woman You Have Never Heard Of

 

Who is Celia Sánchez?

If the name Celia Sánchez Manduley does not ring a bell, you are not alone. In April 1965, Andrew St. George began his article for Parade with a rhetorical question: “Who is Celia Sánchez?” St. George went on to claim, “it is a reasonable if regrettable guess that, as this is written, not one American in a thousand knows.” Even President John F. Kennedy did not recognize the name when it appeared on blue CIA stationery under a red striped top-secret cover at a National Security Council meeting. Kennedy frowned upon seeing the unfamiliar name and asked, “But who is Celia Sánchez?” Ambassador A. A. Berle reportedly replied: “Sánchez seems to be . . .  the most influential person in Havana.”

 

What Kennedy soon learned was that Sánchez was the highest-ranking and most revered woman within the Cuban revolutionary government. She had earned the status of "first guerrilla of the Sierra Maestra," serving as Fidel Castro's primary confidant and as the Cuban Revolution's staunchest loyalist. Years later, her image would appear on two Cuban postage stamps, a commemorative one peso coin, and in the watermark of the twenty-peso note. Still today, however, few Cubans can recall the details of her life beyond a few anecdotes published in state-controlled newspapers on the anniversaries of her birth and death. Most U.S. citizens have still never even heard of her. 

 

Sánchez would have liked it that way. Her legendary aversion to the press meant that few journalists ever interviewed her or captured her on film. One of Sánchez’s long-time colleagues described her to me as “allergic” to cameras. Sánchez even threatened to change her name after the revolutionary war ended in order to evade further press scrutiny. Sánchez was undoubtedly one of the primary architects of the silence surrounding her life, but her story needs telling.

 

Celebrating Sánchez

This year marks two important anniversaries related to Cuba’s most revered female revolutionary leader. One hundred years ago (9 May 1920), Sánchez was born in a small sugar mill town, Media Luna, on the eastern end of Cuba. The citizens of Media Luna bristle when recalling that the most famous song composed in Sánchez’s honor links her to nearby Manzanillo. A number of boldly painted billboards posted along Media Luna’s main road proudly claim Sánchez as one of their own. She is more than a local hero; she is their most intimate connection to Cuba’s broader revolutionary story. She is also the primary draw for the few hundred tourists who pass through town each year, many of whom are traveling with bicycle touring companies. They stop to visit her childhood home, which became a national museum in 1989 and houses the largest single collection of her personal possessions anywhere on the island.

 

Cubans will also honor the anniversary of Sánchez’s death this year. Sánchez died in Havana on 11 January 1980—just a few months shy of her sixtieth birthday—following a quiet battle with a “fungus” that she knew was really lung cancer. Tens of thousands of Cubans joined her funeral procession as it made its way slowly from Havana’s Revolution Plaza to the Colón cemetery. Witnesses claim that this was the only time they ever saw Fidel Castro cry in public. Her crypt is marked only with the number “43,” but visitors regularly place mariposa blooms (the Cuban national flower) in the torch-shaped crypt knob. 

 

Writing Sánchez

This year also marks the release of my new biography of Sánchez—Celia Sánchez Manduley: The Life and Legacy of a Cuban Revolutionary (University of North Carolina Press). The first biography to critically examine her life and legacy, it is the result of over twenty years of struggle with Sánchez’s own aversion to publicity, the vagaries of U.S.-Cuban diplomatic relations, and the high-level security surrounding her personal papers. Approval to access those papers arrived at the last possible moment during my writing process, following a final heartfelt plea to archive officials. Their hesitation is understandable. Granting a U.S. citizen access to the inner chamber of national revolutionary memory is not a decision that authorities took lightly. Only a handful of foreign researchers have ever even entered the repository since its founding in 1964. 

 

My work in the highest-security archives in Cuba transformed the book. With the access I received to her correspondence, ledgers, and personal diary, I could fully grasp the critical role Sánchez played in not only forging the revolutionary nucleus, but also shaping and preserving the history of its accomplishments. These documents opened a new window onto the consciousness of this private woman, revealing how she strategically constructed her own legacy within a history still dominated by men. She reflects in a deeply personal way on her struggles with violence, her political development, and the sacrifices she made as she evolved from an organizer and combatant to the highest-ranking woman within the post-revolutionary Cuban government. These sources humanize the icon in new and powerful ways.

 

Remembering Sánchez

The work to preserve and study the history of the Cuban Revolution and its contributors—like Celia Sánchez Manduley—is arduous. Foreign scholars like myself have had to maneuver through inscrutable bureaucratic systems for years and sometimes decades in order to conduct our work on the island. There is, however, some reason to believe that change is coming. On 19 December 2019, current president Miguel Díaz-Canel Bermúdez proclaimed that, “preserving Cuban national historical memory is a task for all of us.” Lamenting the deterioration of many archives across the island, Díaz-Canel called for the increased digitization of historic documents and promised new funding to support the project. The possibility of future digital access to archival resources in Cuba would be a game-changer for domestic and foreign scholars alike. 

 

I hope that the Cuban government will also consider supporting historic preservation projects centered on women’s specific contributions to the revolutionary process. With the exception of a few small museums dedicated to individual women—like Sánchez’s childhood home museum in Media Luna—the story of women’s revolutionary experiences in Cuba is largely scattered in bits and pieces across a few national museums. Were Cuban officials to create a museum dedicated solely to revolutionary women, Sánchez would undoubtedly figure prominently within that repository. Other women’s stories would also merit recognition, however. I agree with president Díaz-Canel that preserving Cuban historical memory is a collective enterprise and I humbly offer my new book as a contribution to that important endeavor.

Mon, 17 Feb 2020 08:10:24 +0000 https://historynewsnetwork.org/article/174192 https://historynewsnetwork.org/article/174192 0
Can We Save the Truth?

Steve Hochstadt is a professor of history emeritus at Illinois College, who blogs for HNN and LAProgressive, and writes about Jewish refugees in Shanghai.

 

 

Is there truth? During the late 20th century, the humanities became enmeshed in esoteric discussions about truth. Deconstructionists argued that all writing was relative to the writer, whose identity and biases created work that might be true for the writer, but not for other people with different identities. This argument came out of a radical critique of Western white male hegemony, which has since been expanded to the analysis of all forms of hierarchy. White supremacy, male chauvinism, homophobia, and all other kinds of discrimination lead to claims by dominant groups that they possess truth, when they are actually only expressing their self-interest.

 

This line of thinking was taken up especially by literary scholars, who argued that every text has multiple, perhaps infinite meanings. There is no true interpretation of a piece of writing. When this was expanded into other disciplines, it became more confusing. Some historians argued that it is impossible to make a true historical statement. Every statement can have multiple, even contradictory meanings. Excellent examples of this would be statements that really do depend on the position of the author: Wikipedia's article on Fort Seybert in the Allegheny Mountains is mainly about fighting at the Fort in 1758, and includes the phrase "the Indians massacred 17 to 19 people". Such statements were assumed to be truths until recently, as long as the number of dead was accurate. But the attackers were not "Indians" but members of the Shawnee and Delaware tribes; "massacre" implies mass murder, when this was one battle in a war begun by white invaders of Native American lands; etc.

 

Much historical writing had to be rewritten to remove what turned out to be obvious biases in language and meaning. Much remains to be done.

 

Scientific ideas also were criticized as claiming truth when there was none. An example that has upended many social assumptions is the idea of gender, a seemingly biological concept. How do we tell the difference between a man and a woman? We should not simply adopt traditional ideas, supposedly scientific, which are merely social ideas put into scientific language. In athletic contests that question has led to many controversies about who may compete against whom. The relativists’ argument would be that gender is a matter of self-definition, not truth.

 

Does this mean that there is no truth? That any statement can be shown to be untrue by people with a different point of view? Objectivity is impossible, so there is no objective truth.

 

I always have regarded these questions as irrelevant to my work as an historian and political commentator. I recognize that every historical statement can be attacked as not precisely true. Six million Jews were killed during the Holocaust. Well, certainly not exactly six million, which is an estimate, the best one we have. Many of the people the Nazis murdered as “Jews” did not consider themselves Jewish. Many more millions were killed by the Nazis using the same methods in the same places – they should not be left out of the concept of Holocaust.

 

I’m sure that nearly every sentence I have ever written could be taken apart and shown to be not as true as some other much more complicated formulation.

 

Please pardon this lecture on abstruse intellectual arguments. They were all the rage when I was a graduate student, and tended to make doing historical writing difficult. For my own writing, I have settled on a method of writing and rewriting in which I seek to improve places where I use imprecise categories and labels, where I slide over gaps in my knowledge with vague phrases, where my ignorance leads to false statements. I find and fix many such places in the process of revising. I hope to produce writing which is as close as possible to being objective and true. That I have such goals indicates that I do not accept the idea that there is no truth. I do believe that truth is very hard to reach, that nearly every proposition in history or science can be improved by more work, that we are imperfect seekers of truth. So we can approach truth, but perhaps never reach it.

 

At last we arrive at my point. When these arguments were raging in the academy, conservatives were greatly annoyed. Conservative historians asserted that relativists were ruining everything, that truth did exist. They criticized post-modernism as a mask for moral relativism, connected this immorality with the popular movements of the 1960s, and asserted their own moral primacy (the Moral Majority).

 

I was prompted to write this because of the great irony that political conservatism, which once argued for objective truth, now relies on the broadest attack on truth that we have ever experienced. Political lies are nothing new and nothing inherently conservative. President Lyndon Johnson's lie about a North Vietnamese attack on American ships in the Tonkin Gulf in 1964 led to a disastrous expansion of the war, which was extended by years of lying laid out in the Pentagon Papers. But today we suffer from a multiplication of lies as a Republican tactic to win elections.

 

The Republican Party's platform on climate change and health care, two of our most pressing issues, is just one big lie. The use of a fabricated story about Ukraine and Joe Biden is a set of lies that then led to one of the greatest scenes of collective public lying in American history: the response of Republican Representatives and Senators to the impeachment.

 

We are being bombarded with carefully crafted lies throughout cyberspace, designed to distort the results of the 2020 election. False stories about Joe Biden and Ukraine have already spread virally to millions of people. Fighting them takes enormous effort and resources, well beyond anything that will be deployed this year.

 

Disinformation spread by bots can come from anywhere on the globe. The technology is non-partisan, but its use is not. Russia's online campaign in 2016 was designed to help elect Trump. The Trump campaign is now using one of these Ukraine stories in various media. CNN refused to run it, but it's up on Facebook.

 

The intersection of a Republican Party which sees no value in distinguishing between truth and lies and an emerging technology that makes spreading lies incredibly easy is a great political danger. Is there truth? Not if those in power in America don’t care.

Roundup Top 10!  

Why You May Never Learn the Truth About ICE

by Matthew Connelly

The National Archives is letting millions of documents, including many related to immigrants’ rights, be destroyed or deleted.

 

Rosa Parks on Police Brutality: The Speech We Never Heard

by Say Burgin

In 1965, Rosa Parks would have had a lot to say about police brutality.

 

 

The Risky Dream of the Fast Food Franchise

by Marcia Chatelain

Americans have long pinned economic hopes on fast-food chains. And where there are hopes, there are scams.

 

 

We’re Still Living in Stalin’s World

by Diana Preston

At the Yalta Conference 75 years ago, the Soviet leader got everything he wanted — and shaped global politics for decades.

 

 

A Union Broken With a Senate Surrender

by Jamie Stiehm

The real rub is that the president is changing us Americans, giving light to a dark crevice in our character. He embodies — and emboldens — baleful defiance. The great presidents, like cheerful, sunny Franklin D. Roosevelt, bring out the best in us.

 

 

The United States and Saudi Arabia aren’t allies. They never have been.

by Ellen R. Wald

One of our key ideas about the Middle East is wrong.

 

 

What J-Lo and Shakira missed in their Super Bowl halftime show

by Petra Rivera-Rideau

Their performance perpetuated the marginalization of Afro-Latinos and other people of African descent.

 

 

Black History Month has a little known Catholic history as well

by Shannen Dee Williams

In 1949, famed Harlem Renaissance writer Langston Hughes celebrated Negro History Week (the precursor to Black History Month) with members of the Oblate Sisters of Providence and their students at the all black and Catholic St. Alphonsus School.

 

 

The divisive case for giving Rush Limbaugh the Medal of Freedom

by Brian Rosenwald

One of our most transformative figures has also been deeply polarizing.

 

 

 

The Art of the Deal, Pentagon-Style

by William J. Astore

The list of recent debacles should be as obvious as it is alarming: Afghanistan, Iraq, Libya, Somalia, Yemen (and points around and in between).

The Pioneers: Heroic Settlers or Indian Killers?

 

David McCullough’s The Pioneers: The Heroic Story of the Settlers Who Brought the American Ideal West, published last year, presents the pioneers who settled Ohio as heroes. Some reviews, like those in The New York Times (NYT) and the Washington Post (WP), fault McCullough for downplaying the settlers’ mistreatment of Native Americans. In May 2019, an AP article, abridged on HNN, detailed more criticism.

 

In the NYT, historian Joyce Chaplin wrote, “McCullough plays down the violence that displaced the Indians, including the actual Ohio people. He adopts settlers’ prejudiced language about ‘savages’ and ‘wilderness,’ words that denied Indians’ humanity and active use of their land.” In the WP, historian Andrew C. Isenberg stated, “To McCullough, the natives were little more than impediments to progress. . . . The fortitude of the settlers McCullough describes was quite real. So too was land fraud, racial hierarchy and the ousting of Native Americans from their homes.”

 

But other reviews in more conservative publications applaud McCullough’s depiction of the pioneers as heroes. The National Review [NR], for example, does not mention the Native Americans except to state that the Northwest Ordinance of 1787 called for “the just treatment of Indians and their lands.” Instead, NR complains, “Not long ago, the westward expansion of America, along with the names of the great pioneers, such as Daniel Boone, was taught to schoolchildren—not as history for which to apologize but as something to celebrate and revere, a source of national pride. That was for good reason: What the early pioneers accomplished was remarkable.”

 

The Washington Times (WT) review also all but ignores the fate of Native Americans except to note that “the Native American communities living in the territory ‘did not believe land was something to be owned,’ for example, and were troubled by the settlers.” And “many treaties were signed between the Indians and settlers, but distrust still ruled the land in early America.” The review’s final sentence reads, “The early pioneers ‘accomplished what they had set out to do not for money, not for possessions or fame,’ writes Mr. McCullough, ‘but to advance the quality and opportunity of life.’ Their success is our success.”

 

The contrasting reviews of McCullough’s Pioneers reveal several important messages: 1) Our nation is too polarized; 2) We, including historians, should seek truth rather than a confirmation of our biases; 3) Our nation’s history, like that of all nations, contains both noble and ignoble deeds; and 4) We need the national humility to own up to dishonorable deeds like decimating Native Americans, slavery, and racism. 

 

About our national polarization we have already read much, and thus can move on to our second point. About the importance of truth-seeking and historians’ responsibility “to tell the truth, warts and all,” I have already written, but an additional point should be added. HNN recently linked to a discussion about a lengthy report on American textbooks by NYT reporter Dana Goldstein. It analyzed how textbooks on U.S. history in California and Texas reflect our nation’s political polarization. President Obama once quoted former New York Senator Daniel Patrick Moynihan, who said, “Everybody is entitled to his own opinion, but not his own facts.” Yet we have become so polarized that the history our students learn, the so-called “facts,” depends on whether the state where they attend school is red or blue. 

 

Concerning point 3, that our history contains noble and ignoble deeds, there can be no real argument. As indicated in an earlier HNN article, “two of its most heinous crimes” were “genocide against Native Americans and slavery.” As some of the reviews of McCullough’s The Pioneers stressed, the pioneers were especially implicated in the whites’ treatment of Native Americans. Pioneer heroes like Daniel Boone and Davy Crockett often fought against native peoples, who regarded the white settlers as usurpers of their lands. An acquaintance of Boone’s, Abraham Lincoln (grandfather of the future president), was shot by an Indian and died. President Lincoln himself shared many of the prejudices towards Indians exhibited by the great majority of our presidents.

 

But Lincoln was not as bad as Andrew Jackson, who presided over the Indian Removal Act (1830) and other policies that led to the deaths of thousands of Native Americans--susceptibility to white men’s diseases had earlier been an even bigger killer. (Grant Foreman’s 1930 book Indians and Pioneers: The Story of the American Southwest before 1830 was an early account of southwest conflicts that culminated in the Removal Act.) After Jackson’s two-term presidency, the battles of pioneers and the U.S. military against Native Americans led to additional Indian deaths, with the 1890 massacre of about 150 Lakota Sioux at Wounded Knee Creek in South Dakota being especially significant. 

 

After the massacre, a local publication, the Saturday Pioneer stated, “The Pioneer has before declared that our only safety depends upon the total extermination of the Indians. Having wronged them for centuries we had better, in order to protect our civilization, follow it up by one more wrong and wipe these untamed and untamable creatures from the face of the earth.”

 

In my own youth I often viewed movies where poor white pioneers in their covered wagons circled to defend themselves against “savage” Indians. My daughter Jenny read many Laura Ingalls Wilder novels about heroic pioneers who sometimes said “the only good Indian is a dead Indian.”

 

Besides the stain of decimating Indians and slavery, our past has also been dishonored by other flaws such as greed and imperialism. The Spanish-American War, beginning in 1898, is a good example. It wrested Cuba, Puerto Rico, Guam, and the Philippines away from Spain. Republican Senator Albert Beveridge declared that “American factories are making more than the American people can use . . . the trade of the world must and shall be ours. . . . The Philippines are logically our first target.” But not all of the Filipinos wished to come under new foreign control. To subjugate the Filipinos, U.S. troops had to wage war against guerilla forces. By the end of 1902, over 200,000 Filipinos had died. One participating U. S. officer had this to say about the conflict: “Our men have been relentless, have killed to exterminate men, women, and children, prisoners and captives, active insurgents and suspected people, from lads of ten up, an idea prevailing that the Filipino was little better than a dog, a noisome reptile in some instances, whose best disposition was the rubbish heap.” (See here for sources of quotes regarding the Philippines.)

 

The final point to be made here, the need for national humility, also addresses the question posed in our title: were the pioneers “heroic settlers or Indian killers”? Respect for truth leads to the answer “both.” Among the many pioneers, stretching over several centuries, some displayed heroic qualities like courage and bravery in enduring great hardships and some killed Native Americans. Some, like Daniel Boone and Davy Crockett, did both.  

 

In an earlier HNN essay, I criticized Jarrett Stepman’s The War on History: The Conspiracy to Rewrite America's Past (2019) for stating, “An informed patriotism is what we want. . . . Is the essence of our civilization—our culture, our mores, our history—fundamentally good and worth preserving, or is it rotten at its root?” His question poses a false either-or choice. Our history is “worth preserving,” but not because it is “fundamentally good.” Like the history of most countries, our past contains noble and ignoble deeds. Decimating Native Americans and slavery are part of our past, but so too are nobler deeds and words of our Founding Fathers, Lincoln, Franklin Roosevelt, and Martin Luther King, Jr. (MLK). 

 

Despite many differences that liberals and progressives might have with conservative columnist Ross Douthat, he wrote some truths about our past in his 2017 article entitled “Who Are We?” He noted that many Americans prefer “the older narrative” and identify “with the Pilgrims and the Founders, with Lewis and Clark and Davy Crockett and Laura Ingalls Wilder.” Moreover, “Trump’s ascent is, in part, an attempt to restore their story to pre-eminence. It’s a restoration attempt that can’t succeed, because the country has changed too much, and because that national narrative required correction.”

 

The “correction” that is required is a “warts-and-all” history. Historians’ main job is not to teach a false “patriotism,” but to tell the truth as best they can. But in addition, patriotism is about loving one’s country, not ignoring its past sins. Just as we can love others despite their sins, so too we can love our country despite its past ignoble deeds. 

 

While personal humility is thought wise and sometimes praised--even if less practiced--political and national humility are less fashionable. In 2005, Democratic Congressman David Price (N.C.) stated: “Humility is out of fashion these days. Political leaders, advocates, and pundits often display an in-your-face assertiveness, seeming to equate uncertainty or even reflectiveness with weakness and a lack of moral fiber.” In our present Trumpian period, Price’s words are truer than ever. 

 

In his 1966 book The Arrogance of Power, Senator J. William Fulbright wrote about the lack of humility in powerful nations like ours, which are “peculiarly susceptible to the idea that its power is a sign of God’s favor.” In 1967, about a year before a bullet ended his life, MLK warned in a speech about the Vietnam War that “enlarged power means enlarged peril if there is not concomitant growth of the soul. . . . Our arrogance can be our doom. It can bring the curtains down on our national drama.” 

 

In their 2006 book Ethical Realism, Anatol Lieven and John Hulsman stressed the need for nations to act with “a sense of humility,” and indicated how such political thinkers as Reinhold Niebuhr and George Kennan stressed this idea. In Rumsfeld’s Wars: The Arrogance of Power (2008), Dale R. Herspring criticizes President Bush’s secretary of defense for his lack of humility, which contributed to our government’s tragic involvement in Iraq. 

 

Now in 2020, as we face upcoming elections that will help determine our future, we need more than ever to face our past forthrightly. Confident, secure individuals are unafraid to admit past mistakes. Nations should be equally bold. Unless we in the USA acknowledge not only the heroic but also the heinous deeds of our past, we will fail to face our future with the courage needed to overcome such ills as racism and our present political polarization.  

Kindergarten Goes to the University Classroom: The Educational Value of Show and Tell

 

During the Fall 2019 semester, I experimented with having students do 10-minute Show and Tell presentations in my Secularisms/Atheisms religious studies and U.S. history seminar. The results couldn’t have been better. Students learned about numerous world views through everyday objects and the related conversations.

 

We had one presentation at the beginning of each class. I asked students to bring something important to them that would give us insight into their lives -- with an emphasis on atheisms, theisms, and/or secularisms. They could also share an important video clip or website, for example, if they preferred. I asked everyone to pass around what they brought and tell us why it holds importance to them. Afterward, questions were encouraged. 

 

The results were amazing. Every student came prepared on their assigned day. Every student brought not just one, but several artifacts from their life!

 

Some students brought items speaking directly to atheisms. 

 

In particular, one student created a secular group in the high school he attended, the first of its kind. He told us about the struggle he faced starting the group and shared copies of written correspondence he had with administrators. He also brought flyers from the student group he successfully created.

 

Two students shared artwork related to their atheism. One person brought a self portrait and explained that “the painting depicts me in Plato’s Cave, walking toward the entrance and looking back at the others chained to the wall. Plato’s ‘Allegory of the Cave’ is a secular text that I did and still do regard with reverence.” The other student brought two of her paintings and explained how the canvas is her escape.

 

Some of the presentations served to teach the class about the world’s religious traditions. 

 

One student brought artifacts from her husband’s Islamic faith, including a prayer carpet, a set of prayer beads, and a Qibla compass (a compass that points to Mecca). Another student taught us about his faith and the Church of Jesus Christ of Latter-day Saints. He brought the card necessary to enter a Mormon temple and pictures of a Mormon temple. Another student created a slideshow with pictures describing her deep Christian faith. Another student brought her annotated study Bible.

 

Some students brought items with special significance. 

 

One student brought his three lucky charms - a coin, a purple stone, and a necklace. Another student brought a Virgin Mary keyholder and a Virgin Mary necklace and explained how her mother gave her the keyholder as a housewarming gift and as something to keep her safe. Another student, no longer religious, brought the gifts his grandmother gave him after his first Holy Communion in the Catholic Church. 

 

And so much more.  

 

Show and Tell quickly generated more and more excitement among the class as the days went by. There were also plenty of questions, sometimes more than we had time to address. 

 

The items students brought and the resulting conversations, all in the first ten minutes of class, helped students appreciate and connect with the material in both academic and personal ways, and served as a “warm up” before our discussions of the assigned texts for the day. 

 

Show and Tell served multiple learning outcomes. Presenters practiced their oral presentation skills by explaining deeply personal artifacts and answering questions from others. Students who weren’t presenting on a given day learned about their classmates and their religious beliefs or lack thereof -- there was no overlap between any of the presentations -- and demonstrated active listening and learning by asking questions. All students were engaged with discourses relevant to history and to religious studies.

 

When asked for feedback, the response was unanimous: Show and Tell was deeply exciting and meaningful to students (and to me). 

 

“Show and Tell reminds me of kindergarten but in a good way. I feel like they are an honest way to understand what is important to people and to get a look into their lives. In the context of our class, it has helped me have a better understanding of other people’s beliefs and has shown me what matters to the people I share the classroom with.”

 

“Show and Tell is a great practice in that it accomplishes so much in so little time. In just a few minutes, students not only learn about different perspectives from which to see the world we all find ourselves in, but also gain a sense of closeness and unity with our classmates by being able to put a foot in the door of other people's lives. This relationship results in greater empathy, understanding, and cooperation.”

 

“Show and Tell has been valuable on multiple levels. On a personal level, I got to consider how religion impacted my thinking as a youth and how those lessons continue to affect my thinking as an adult. I've also got to see how religion and irreligion have impacted my classmates, for better and for worse. The artifacts shared (the ‘show’ portion of the exercise) have given their experiences a tangible ‘realness’ that would be difficult to achieve through telling alone. More broadly, the exercise has further confirmed lessons I've learned in this class and previous religious studies classes, that no two people experience religion in the same way and that religion is pervasive, affecting even the most secular of individuals.” 

 

Most importantly, Show and Tell created memories. 

If the Pentagon is worried by climate change, shouldn’t we be worried too?  

Damage after the battle of Raqqa in the Syrian Civil War, 2017

 

In 2008, Syria’s representative to the UN Food and Agriculture Organization, Abdullah Bin Yehia, made an urgent request for agricultural aid. Since 2006, Syria had been gripped by its worst drought in centuries. Acres of productive land had turned to dust and farmers were losing hope. The drought, rapid population growth, an influx of refugees from war-torn Iraq, and the global economy created what Yehia called a “perfect storm.” A meager $20 million in aid would help keep Syrian farmers on their land and growing essential crops. However, Yehia received only token assistance from the international community. 

 

Yehia was forced to focus his aid request on the hardest hit areas, specifically “small-holding farmers in northeast Syria in an effort to preserve the social and economic fabric of this rural, agricultural community.” A US diplomatic cable warned “Yehia predicts mass migration from the northeast, which could act as a multiplier on social and economic pressures already at play and undermine stability.” The situation raised alarm at the highest levels of the Syrian government and the Agriculture Minister publicly declared that the drought’s economic and social fallout was “beyond our capacity as a country to deal with.” 

 

Although the United States was spending over $100 billion annually waging a war in neighboring Iraq, America was remarkably tightfisted when it came to Syrian farmers. Rejecting Yehia’s increasingly desperate requests, the State Department responded, “given the generous funding the US currently provides to the Iraqi refugee community in Syria we question whether limited US government resources should be directed towards this appeal.” The Syrian farmers were on their own.

 

By 2010, the drought had thrown almost 3 million Syrians into extreme poverty and devastated agricultural production. Fields throughout northern Syria were abandoned. Nearly 1.5 million rural farmers migrated to cities like Aleppo, Homs, and Damascus in search of jobs that did not exist. Internally displaced persons comprised 20% of the Syrian population, a staggering proportion in peacetime. The shattered social fabric, lack of economic opportunity, and disgruntled urban arrivals created a tinderbox. Soon, the Arab Spring provided the spark that set Syria aflame. 

 

Climate-induced drought contributed to this explosive situation. As the world warms, Syria offers a cautionary tale about how climate change may destabilize societies and sow global chaos.

 

Unnatural Disaster

 

Northern Syria lies within the ancient Fertile Crescent, which historians call “the cradle of civilization.” Agriculture was first developed there more than 10,000 years ago. Ironically, the place that has produced food for longer than anywhere else on earth now finds itself barren. 

 

In modern Syria, human activities have greatly exacerbated longstanding issues of water scarcity. The country’s population boomed from 4 million in the 1950s to over 22 million today. Greater water needs coupled with resource mismanagement by the Syrian government have worsened water shortages. Since the 1980s, government policies have incentivized farmers to plant export crops like wheat and cotton that are particularly water-intensive. In addition, poor irrigation methods have wasted large quantities of water and further depleted freshwater aquifers. Syrians have also felt the effects of water diversion programs in upstream Turkey, which have reduced the flow of the Euphrates River into Syria.

 

Drought has always lurked behind these local stressors. When the rains stop, the reservoirs fall, the streams dry, and farmland turns to dust. That specter has grown far more menacing in recent years. In 2010, the Syrian Environmental Association reported that droughts have grown more frequent from “once every 55 years, to every 27 years, to every 13. Now, droughts happen every 7 or 8 years.” The drought that began in 2006 was particularly horrendous. NASA researchers determined it was the worst one Syria had experienced in 900 years and perhaps far longer. Five years of severe drought obliterated the nation’s agricultural sector as over 75% of Syria's farms failed and 85% of Syrian livestock died. 

 

Climate scientists have identified a clear link between climate change and a drier, hotter Middle East. Climate change makes droughts more frequent and more severe. A report in the International Journal of Climatology found “consistent warming trends since the middle of the 20th century across the region.” A study in the Proceedings of the National Academy of Sciences examined trends in “precipitation, temperature, and sea-level pressure” and corroborated them with climate models to conclude that human-induced global warming made a severe drought in Syria (like the one in 2006) “2 to 3 times more likely than by natural variability alone.” Climate alone did not cause the conflict in Syria, but it surely turned up the heat.  

 

A Global Crisis

 

The Syrian Civil War enters its 10th year with no end in sight. What began with protests against Bashar al-Assad has morphed into a hellish multisided struggle of international proxies and religious extremists. In 2016, the UN human rights chief called the war "the worst man-made disaster since World War II.” He continued: "the entire country has become a torture chamber, a place of savage horror and absolute injustice.”  Ceaseless fighting has claimed 500,000 lives and displaced 12 million people. Over 5 million Syrian refugees have sought shelter abroad. The migration of refugees has increased pressure on neighboring states, including Lebanon, where 25% of people within its borders are Syrian. Further away in Europe and America, the refugee crisis has infected the political discourse and provided fuel for populist demagogues.

 

Syria provides a frightening narrative for conflicts in an overheated world. The 2006 climate-induced drought crippled Syrian agriculture and contributed to a wave of internal migration. These events amplified existing social and economic tensions beyond the capacity of Assad’s regime to address them. 

 

Threat Multiplier

 

Although few politicians speak openly about the role of climate change in geopolitics, the US military has studied the topic for years. A 2007 Pentagon report called climate change “a threat multiplier for instability in some of the most volatile regions of the world.” 

 

The 2007 Pentagon report also argued that “many developing nations do not have the government and social infrastructures in place to cope with the type of stressors that could be brought about by global climate change.” Furthermore, “when a government can no longer deliver services to its people, ensure domestic order… conditions are ripe for turmoil, extremism and terrorism to fill the vacuum." Analysts anticipated that even stable nations would be taxed as huge numbers of refugees arrived at their borders. This forecast sounds eerily like a script for Syria’s collapse just 4 years later.

 

Pentagon assessments of future climate hotspots read like dystopian fiction. Fragile nations across Africa and Asia are battered by worsening climate-related impacts from crop failures to massive flooding. Weakened regimes fall victim to ethnic divisions and civil unrest. Diplomatic cooperation breaks down as countries battle for increasingly scarce resources. Wars and insurgencies spill over into neighboring states. In a globalized world, disturbances are impossible to confine: terrorism and political ideology can be exported far and wide. 

 

Climate scientists and security experts are already collaborating to identify these hotspots before they erupt in chaos. The Dutch Water, Peace, and Security Partnership has created a tool to predict conflicts based on levels of water stress. By 2050, the UN estimates, nearly 5 billion people may face water shortages. Researchers at ETH Zurich have demonstrated strong relationships between food prices and local conflicts. By 2050, the UN estimates, food production will need to rise by 60%. Climate change will increasingly threaten water supplies and food production, creating new conflict hotspots. 

 

Even America’s own military capabilities are at risk from climate change. A 2018 Department of Defense report indicated that over half of US bases and installations were vulnerable to climate-related impacts. That year, Hurricane Michael caused over a billion dollars in damage to US military assets. 

 

The Pentagon is preparing for a world on fire: a world that is far more dangerous, unpredictable, and violent than the one we currently inhabit. There will be more Syrias, more states that are pushed over the edge by climate change. Regional tensions can erupt into global conflicts. If the Pentagon is worried by climate change, shouldn’t we be worried too?  

 

Related Link: A History of Climate Change Science and Denialism

Bob and Carol and Ted and Alice are Back, and the Sexual Revolution with Them

 

The late 1960s were the heyday of America’s sexual revolution, and nowhere was it bigger than in California, and nowhere in California was it wilder than in the bedrooms of Bob and Carol and Ted and Alice, two married couples who went to a sexual awareness seminar. There, they learned free love was the name of the game and plunged ahead. Sexually energized, they dove under the sheets with anybody who was breathing.

 

Their fictional life became a hit movie in 1969, directed by Paul Mazursky. This fall was the 50th anniversary of the film and now The New Group, in New York, has turned it into a musical, Bob and Carol and Ted and Alice. If you saw the movie, you will love this play. If you did not see the movie, you will still love this play. It is uproariously funny and is an X-rated (sort of) look at what was a very new sexual atmosphere in America in 1969.

 

The play, which opened Tuesday at the Pershing Square Signature Theater on W. 42nd Street, New York, is now a musical, and Grammy winner Suzanne Vega sings most of the songs as the “bandleader.” The songs are OK, but, frankly, there are too many of them. More dialogue would have been more useful.

 

The play, written by Jonathan Marc Sherman based on the movie script, starts when Bob, a filmmaker, goes to San Francisco to do some work and has an affair with a graduate student he meets there. He is so proud of his sexual awakening with the girl, 24 to his 35, that he rushes home to tell his wife all about it. This is all right with her. She then begins an affair with her tennis club’s pro. This is perfectly OK with Bob (I don’t know about that.) 

 

After those two sexual adventures, Bob and Carol and Ted and Alice slowly start to give each other the eye. Will the fearsome foursome swap partners?

 

This play about sex is very indecent without ever being indecent. The idea of sex is in your head right from the start, but there is no nudity or simulated sex, just a lot of suggestions.

 

What this is, really, is a romp. Pillows and sheets and well, other things, fly through the bedroom amid arguments between the four and a lot, and I mean a lot, of sexual tension.

 

This 50-year-old story unfolds as if it were written yesterday. The playwright has written a sharp, bold comedy laced with sexual antics that is genuinely funny.

 

Do these 1969-era sexual circuses still take place today? Sure, they do. There have been documentaries and magazine articles about them. Are they right today? Were they right in 1969? Who knows?

 

An interesting aspect of the play is that from time to time the actors pull audience members onto the stage and talk to them, or confess to them, or argue with them, without the audience members speaking or acting. It is fascinating to watch these audience members react to the actors. The actors picked people out of the first row of seats. I was sitting in the second row. Thank God!

 

The strength of the play is that these friends remain so throughout the story and continually forgive each other for their transgressions. There is an expression today, “friends with benefits.” That certainly would have applied to this quartet.

 

The director of the musical, Scott Elliott, must be given considerable credit. The play is set on a stage surrounded on three sides by the audience. Members of the audience are close enough to, well, touch a towel. Elliott’s genius is making the audience part of the play and yet a step back from the play. It works. He keeps the actors moving on and off the stage through the aisles at a brisk pace, towels flying. There is never a dull moment in the comedy.

 

Elliott gets fine performances from Bob (Joel Perez), Carol (Jennifer Damiano), Ted (Michael Zegen), Alice (Ana Nogueira) and the singer (Suzanne Vega), plus Jamie Mohamedein. They are bold, brave and very funny. You are appalled by their sexual activity at first, but they grow on you and by the end of the play you see them in a soft, but rather unorthodox, light.

 

PRODUCTION:  The play is produced by The New Group. Music: Duncan Sheik and Amanda Green, Sets:  Derek McLane, Costumes: Jeff Mahshie, Lighting: Jeff Croiter, Sound: Jessica Paz, Musical staging: Kelly Devine. The play is directed by Scott Elliott. It runs through March 22.

Why Thomas Jefferson Was Really No Friend of Religious Freedom

 

Thomas Jefferson, because of the passage of his Bill for the Establishment of Religious Freedom, is customarily viewed by scholars as a paladin of religious freedom. Yet there is reason to question that view. To show why that is so, a distinction between advocacy of religious freedom and advocacy of religious tolerance is needed.

 

Advocacy of religious freedom means one believes sectarian religiosity is a good thing and it is beneficial to society to have the freedom to choose one’s own religion.

 

Advocacy of religious tolerance means one believes sectarian religiosity is bad or neutral and it is necessary for liberal society to tolerate discordant religious beliefs.

 

An advocate of religious freedom focuses on the pluses of religious freedom. An advocate of religious toleration focuses on the social ills of not having religious freedom.

 

In Query XVII of Notes on the State of Virginia, Jefferson writes succinctly but unfavorably of metaphysical religious squabbling. “The way to silence religious disputes, is to take no notice of them.” If all religions are allowed expression and none is given political succor, their disputes will produce at best parochial turbulences that will be swamped out catholically.

 

Religion for Jefferson is a personal matter, while established sectarian religions are politicized. The rituals attending on politicized religions are put into place for the sake of political and moral oppression. Want of religious freedom and religious taint of politics are responsible for ignorance, superstition, poverty, and oppression. Republicanism demands religious freedom.

 

For Jefferson, religion, correctly apprehended, is equivalent to morality. Religion is only legitimate when it acts in the service of morality and justice—that is to say subtly, when it works quietly. However, formal religious systems—political in nature and inveigling people through mysteries of miracles (such as a dead man coming back to life and water mysteriously and immediately being converted to wine) and other matters at odds with common sense, such as one god being three—are anything but quiet, and thus, for Jefferson, are mostly metaphysical twaddle for the sake of establishing empleomaniacal priestly intermediaries between humans and God.

 

People, essentially social beings for Jefferson, are adequately equipped with a moral sense to guide them in social situations, as a benevolent deity would not make humans social beings and also create them to be morally deficient. In short, humans have an innate and natural sensual understanding of their moral duties, which are, thinks Jefferson, both other-directed and god-directed.

 

Duties to man they fulfill through recognizing correct moral action in the circumstances of their daily interactions with others. There is no need of moral instruction, as a sense of morally correct action is innate—hence Jefferson disadvises his nephew Peter Carr (10 Aug. 1787) against attending lectures on morality, as our moral conduct is not “a matter of science”—but there is need of goading and honing morality to incite persons to act when circumstances call for action.

 

Duties to God they fulfill through study and care of the cosmos in which they were placed. That largely explains Jefferson’s appreciation for, and love of, science. The truths disclosed by science allow humans to get a glimpse of the mind of God. That is why Jefferson, in his tour of French villages, speaks disparagingly of the mass of French peasants, who, having no farm houses, huddle in villages, and “keep the Creator in good humor with his own works” by mumbling “a mass every day.” Consider also what Jefferson writes about Maria Cosway, living at the time in a convent, to Angelica Church (27 Nov. 1793). “I knew that to much goodness of heart she joined enthusiasm and religion; but I thought that very enthusiasm would have prevented her from shutting up her adoration of the God of the universe within the walls of a cloister.”

 

True, natural religion is natural morality. There is no need of priests, as intermediaries between people and God, as natural religion is generic, exoteric, and simple. Jefferson writes to James Fishback (27 Sept. 1809) of sectarian religions: “every religion consists of moral precepts, & of dogmas. in the first they all agree. all forbid us to murder, steal, plunder, bear false witness Etc. and these are the articles necessary for the preservation of order, justice, & happiness in society.” Yet most of the doctrines of a sectarian religion are esoteric; they are not crafted for the sake of honest or benevolent living. Clerics use indecipherable metaphysical claims to political advantage. They also engage in fatuous disputes. Jefferson continues:

 

in their particular dogmas all differ; no two professing the same. these respect vestments, ceremonies, physical opinions, & metaphysical speculations, totally unconnected with morality, & unimportant to the legitimate objects of society. yet these are the questions on which have hung the bitter schisms of Nazarenes, Socinians, Arians, Athanasians in former times, & now of Trinitarians, Unitarians, Catholics, Lutherans, Calvinists, Methodists, Baptists, Quakers Etc. among the Mahometans we are told that thousands fell victims to the dispute whether the first or second toe of Mahomet was longest; & what blood, how many human lives have the words ‘this do in remembrance of me’ cost the Christian world!

 

Sectarian religions do not follow, but deviate from, nature.

 

Outside of certain core beliefs that all genuinely religious persons share, there are considerable differences in personal religious convictions. Yet none of those differences has any bearing on a citizen’s capacity to govern, or to be governed. To ingeminate a sentiment from Notes on Virginia: “It does me no injury for my neighbor to say there are twenty gods, or no god. It neither picks my pocket nor breaks my leg.” Religious conviction is a personal matter. “The care of every man’s soul belongs to himself,” writes Thomas Jefferson in his Notes on Religion, written in 1776. “Laws provide against injury from others; but not from ourselves.” Four decades later (6 Aug. 1816), Jefferson writes similarly to Mrs. Harrison Smith: “I have ever thought religion a concern purely between our God and our consciences, for which we are accountable to Him, and not to the priests. God himself will not save men against their wills.”

 

Since it is a personal matter, religion cannot be politicized. When clergy, driven by empleomania, engraft themselves into the “machine of government,” he tells Jeremiah Moor (14 Aug. 1800), they become a “very formidable engine against the civil and religious rights of man.” That shows Jefferson’s distrust of the politically ambitious religious clerics of his day and of prior days.

 

For Jefferson, government functions best when it is silent, and it is most silent when its laws are few. Those governing must be like machines insofar as they grasp and actuate the will of the majority. Jefferson says to Dr. Benjamin Rush (13 June 1805) about his role as president, “I am but a machine erected by the constitution for the performance of certain acts according to laws of action laid down for me, one of which is that I must anatomise the living man as the Surgeon does his dead subject, view him also as a machine & employ him for what he is fit for, unblinded by the mist of friendship.” When government is stentorian and its intrusions are many, its loudness and intrusions are sure signs that the rights of its citizens—choice of religion, being one—are being suffocated.

 

Jefferson says to Miles King (26 Sept. 1814) that deity has so authorized matters that each tree must be judged by its fruit. The suggestion here is not that each action is to be judged moral or immoral when it has a fruitful or fruitless outcome. Jefferson, like Aristotle, is referring to actions over the course of a lifetime.  He adds that religion is “substantially good which produces an honest life,” and for that, each is accountable solely to deity. “There is not a Quaker or a Baptist, a Presbyterian or an Episcopalian, a Catholic or a Protestant in heaven; …on entering that gate, we leave those badges of schism behind.” The suggestion, if not implication, is that religious clerics are otiose. Allen Jayne states, “Jefferson eliminated all intermediate authorities between God and man as the source of religious truth, such as exclusive revelation or scripture, church or tradition, and most of all, the clergy.”

 

In sum, careful analysis of Jefferson’s advocacy of freedom of religion shows little in the way of genuine respect for sectarian religions, but instead worry that lack of freedom of religious expression and, especially, partnership of any one religion with government will be prohibitive of republican government, which has as its primary function protection of the liberties and rights of the citizenry. Republican government thus demands freedom of religion, and for Jefferson, whose views on sectarian religions are far from reverential, that amounts to advocacy of religious toleration, not religious freedom. If all religions are allowed free expression and none is given political sanction, then the empleomania of religious clerics will be reduced to provincial metaphysical squabbles which will be drowned out at the levels of state and federal government.

What’s in a Nickname? From Caligula to Sleepy Joe

Donald Trump’s habit of bestowing nicknames on his rivals in politics and the media is by now notorious. Wikipedia has a helpful page listing well over a hundred of them.  Lyin’ Ted, Little Marco, Rocketman, and Pocahontas: they’re all there.

 

Why does Trump do this? We may think it is a way to grab attention, sort of like suddenly announcing that you want to buy Greenland.  But my study of politics in ancient Rome – which was rife with nicknames – suggests something more.  Using nicknames helps you to control the narrative.

 

Nicknames come in many forms.  A common type is a shortening of a person’s name, like Abbie for Abigail, or JLo for Jennifer Lopez.   Linguists often call this type of name a hypocorism, from an ancient Greek word meaning “child’s talk.”  

 

The nicknames Trump uses are of a different, but still common, type.  They are descriptive.  They make a person seem more familiar to us by highlighting an important trait.  Old Blue Eyes or Good Queen Bess are examples of this sort of name.

 

Even Homer used phrases like these to liven up his epics.  Achilles is the Swift-footed One, Odysseus the Sacker of Cities.  And as linguists Robert Kennedy and Tania Zamuner have shown, descriptive names like this are common among sports fans.  Think the Sultan of Swat, Air Jordan, the Great One.

 

All types of celebrities are given nicknames by their admirers or critics, especially in the media. Leona Helmsley was the Queen of Mean. Prince William is Wills.

 

Political campaigns also invent nicknames to make their candidates more attractive.  Lincoln was already known as Honest Abe in 1860.  But at the Illinois Republican convention of that year he became the Rail Splitter.  This suggested his humble childhood in a log cabin and made him relatable to ordinary voters.

 

Trump is unusual among modern presidents in that he is the one to bestow the nicknames, almost always derisive, and he does so publicly, especially via Twitter.

 

Renaming somebody can be a source of power.  In ancient Rome, the people, who had little recourse against overbearing emperors, could at least call an emperor something by which he didn’t want to be known and dent his popularity.  Nero was the Matricide, Commodus the Gladiator.  

 

One of our richest sources for unflattering nicknames in Rome is Suetonius’ Lives of the Caesars, a set of biographies of twelve emperors, starting with Julius Caesar.  It is thanks to Suetonius that we learn that long before Tiberius became emperor, his army buddies spotted his love of drink and renamed him Biberius, playing on the Latin word for “to drink.”

 

In incorporating unflattering nicknames, Suetonius, like other ancient biographers and historians, was able to present a negative view of the emperors he wrote about that is with us to the present day.  This is controlling the narrative with a true vengeance.

 

The tradition about Caligula is a good example.  The name Caligula itself was a nickname, meaning “little boot.”  Caligula acquired it as an infant when he was displayed in his father Germanicus’ military camp dressed up in a miniature uniform, complete with soldier’s boots.  

 

Like many since, as an adult he grew to detest his childish nickname.  He wished to be known by such titles as Greatest and Best Caesar.

 

In calling Caligula by his nickname, later writers were refusing to let Caligula define himself. While given in childhood, “Little Boot” reinforced the idea that Caligula had no military accomplishments as emperor.  

 

As Suetonius writes, when Caligula wanted to celebrate a triumph in Rome and had no captives to show in his parade, he rounded up the tallest Gauls he could find and made them dye their hair blond and grow it long to look like ferocious German warriors. 

 

One interesting thing about Trump is that, other than The Donald, he has no particularly famous nicknames.  Certainly in 2016 his political rivals did not succeed in giving him one.  Rubio’s talk of “small hands” was a flop.

 

But Trump really doesn’t have a positive nickname either.  Reagan is still warmly remembered as the Gipper.  Similarly, when a Soviet newspaper dubbed Margaret Thatcher the Iron Lady, she shrewdly appropriated it for herself.  It has stuck with her ever since – and certainly beats Milk Snatcher, the name Thatcher acquired when she abolished free milk for children as Education Secretary. 

 

If Suetonius, who served on the staff of two Roman emperors, were alive today, I think he’d advise Trump to spend less time coming up with names for others and more time thinking about what he could call himself.

 

Trajan, the first emperor Suetonius worked for, was called by the Senate the Best Emperor, optimus princeps.  He is still called that today. 

 

Josiah Osgood © 2020

What Did Hitler Think About Anglo-American Capitalism?

 

Did Adolf Hitler initiate the Holocaust because he was concerned about the strategic threat of “Anglo-American capitalism” rather than his own personal hatred of the Jews?

 

This assertion, that the extermination of the Jews came as a secondary objective in an overall plan to create a new German empire, is just one of the many controversial conclusions made by historian Brendan Simms in his new book, Hitler: A Global Biography. 

 

According to Simms, a professor of international relations at Cambridge University, the Nazi dictator was “preoccupied” with the rise of “Anglo-American” capitalism. Hitler saw Great Britain and the U.S. as a single global empire. The Nazi leader was particularly impressed by America’s rapid geographic and economic expansion which he believed was due to its ruthless policy of eradicating indigenous peoples and a legal system mandating white supremacy. 

 

Simms contends that Hitler initially formed these beliefs as a German soldier in the last years of World War I. As a private on the front lines, he encountered a powerful, well-equipped American army manned by tall, well-fed soldiers; they presented a startling contrast with the exhausted, underfed German army. Hitler was particularly confounded by some captured Americans who were immigrants from Germany. To him, they represented a key reason for Germany’s decline: it was losing many of its most productive citizens.   

 

After Germany’s surrender in November 1918, Hitler concluded that the only way Germany could rise to become a global power was to adopt the “American model” of geographic expansion and white racial supremacy. 

 

In Simms’s accounting, all of Hitler’s major initiatives after he came to power--including the conquest of France and the invasion of the Soviet Union in 1941--were designed to create a vast Germany that could rival the Anglo-American one. 

 

Hitler’s persecution of the Jews was simply a “major plank” of the dictator’s multi-pronged “containment strategy” against the Anglo-American forces of domination, Simms claims. Prior to 1940, and the outbreak of a full-scale war with Great Britain, he viewed Germany’s Jews as “hostages” to insure the good behavior of “the supposedly Jewish-controlled United States.” Once the world war began in earnest (with the invasion of France in 1940), he could no longer use the hostages, so he gave the signal to his lackeys to start the mass extermination of the Jews.

 

Although Hitler publicly railed against “Jewish bolshevism,” Simms asserts that in ordering the invasion of the Soviet Union in 1941, his primary goal was not to destroy communism, but rather to secure resources for Germany that could be used to fight the Anglo-American enemy. 

 

Simms states, “Hitler targeted the Soviet Union not so much for what it was (ideologically) as for where it lay (geographically). There was, so to speak, nothing personal about it.”

 

This is just one of many conclusions that run counter to the view of most contemporary European historians. Simms acknowledges that his book breaks with the “prevailing views” of Nazi Germany. He claims that his biography is based on “neglected” source materials. As a result, his book is written much like a doctoral thesis and is stuffed with 2,800 footnotes (the majority referring to German-language documents). Simms piles up fact after fact, creating blocks of evidence with no compelling narrative.

 

Another point of confusion for readers is Simms’s maddening lack of context when quoting Hitler. Too often we read that the Nazi leader “proclaimed,” “vowed” or “promised,” but we are not told where or to whom the dictator was speaking. Obviously, it makes a difference whether the dictator was making a nationwide broadcast or just idly chatting with cronies during tea-time at Berchtesgaden. 

 

Histoire croisée

Simms, who wrote eight previous books including Europe: The Struggle for Supremacy, notes in his introduction that he was inspired by the new historical method of Histoire croisée. This is a French term meaning “interconnected” or “interwoven”; the proponents of this method (also called transnational history) seek to understand political and social developments as global phenomena, trends that cannot be confined to nation states. Thus, Simms believes Hitler’s stunning rise was not just a product of the German nation but had international origins. 

 

Whatever Simms’s theoretical background, many students of World War II history will find it hard to accept a number of his conclusions. If Hitler was so focused on defeating the Anglo-American empire (rather than communism), he would have made this a priority for the vast military machine he created.  For example, he consistently shortchanged the German Navy, the one military force that could sever American supply lines to Britain.  He likewise ignored the few experts who warned of American military capabilities. And, although he briefly contemplated invading England, he finally followed his “destiny” and ordered the invasion of Russia, a reckless act of self-destruction that cost 25 million lives including his own. 

 

At the heart of Simms’s thesis is the assumption that Hitler was mentally stable. Simms portrays Hitler (until the last months in the Berlin bunker) as a rational, sophisticated adult. Simms accepts him as a person driven by ideology with a well-defined intellectual superstructure rather than a deeply insecure, narcissistic sociopath. 

 

In contrast, Ian Kershaw, the acclaimed author of a best-selling, two-volume biography of Hitler, describes the dictator as an “empty vessel,” a man lacking any deep personal relationships. In Kershaw’s words, he was in the “privileged position” of “one who cares about nothing but himself.” Hitler was not consistent in his ideology or military strategy, instead he was a narcissist, a gambler driven to taking risk after risk, all to prove his self-worth. 

 

Simms’s book adds to the growing body of literature examining the relationship between the German state and America, both pre-war and post-war. In the past two decades, a re-unified Germany has engaged in Vergangenheitsbewältigung, a public debate about its problematic past. One result is the many new monuments and museums devoted to the tragedy of the Holocaust. 

 

In response, a number of historians and social scientists have suggested that the U.S. could learn from Germany’s effort. They suggest that Americans undertake a similar public reckoning about our country’s three hundred years of slavery. 

 

Earlier this year, the American philosopher and essayist Susan Neiman published Learning from the Germans: Confronting Race and the Memory of Evil, which outlines the case for such a project. 

 

Perhaps we are headed for one of history’s great ironic twists. Hitler sought to model Germany’s racial laws on American ones. His efforts resulted in a catastrophe, with millions murdered and a nation in ruins.  Now, 75 years later, the modern Germany may offer a model for Americans on how to account for our nation’s historic evil of slavery. 

How the South Dominated American Education

 

The Rise of the South in American Thought and Education: The Rockefeller Years (1902-1917) and Beyond by John Heffron.  Reprinted by permission of Peter Lang Publishing, Inc, NY.

 

Meant not so much as an impeachment of the region than a query around its apotheosis—a false one—from the post-Reconstruction period forward, the book is a study of the generalization of southern values and institutions, including but not limited to their racial and class dimensions, to a national reform movement in education in which private philanthropy—first and foremost the Rockefeller General Education Board (GEB)—played a decisive role. In 1903 Lyman Abbott, editor of The Outlook, a magazine famous for crossing religious lines with social and political ones, addressed delegates to the Sixth Conference for Education in the South, a conference providing many northerners (among them the Rockefellers) their first introduction to Southern educational conditions. There he opined: “We may well hope that the present Southern educational enthusiasm may spread to the Northern states, where education is in danger of becoming somewhat perfunctory, may inspire it with new and deeper life, and may end by creating throughout the Nation an educational revival, the modern analogue of the evangelistic revivals of a past epoch …” 

 

The post-Reconstruction history leading up to the establishment of the GEB a year earlier in 1902 provides important background for understanding why the Board and its friends should have been so interested in the Southern example, or what it viewed as such. Eager to reconcile change with continuity, dynamic economic development with cultural and political stability, industrial statesmen like the Rockefellers understood the country’s need for orderly change, which would integrate the latest scientific ideas with its most cherished traditions. For the GEB, the triumvirate of science, southernness (the median—a centripetal one), and vocationalism became the elements of a “comprehensive system” of education that its leaders hoped would reconcile urban-industrialization—a rapid and rabid process—with those values most endangered by it—family, church, and community.  “Private power and public purpose, industrial productivity and godliness, grass roots support generated by agents of New York millionaires, talk of universal education and democratic purpose in a caste society— these seem contradictory if not hypocritical in retrospect,” as David Tyack and Elizabeth Hansot have written. “But in the special millennialism of the day in the South, the [educational] awakening brought to whites a dream of Progress that combined a Protestant social evangelism with the promise of modern efficiency, a union of missionaries and social engineers.” How this “dream of Progress”—a social philosophy rooted in and loyal to an idealized Southern past, peddled by godly mercantilists in both the North and the South (“a union of missionaries and social engineers”), and marching under the banner of science and reason—how this dream found ultimate expression in the annals of American education is the subject of the book. The question it poses—How did the South educate the educators?—suggests a different pattern of events than the familiar one of Radical Reconstruction, Republican apostasy, and liberal disillusionment, often couched in the literature as the “abandonment” not only of Reconstruction but of the freed people as a whole, abandoned to allegedly backward-looking modes of oppression and social control.

 

The South’s traditional rural character; its “special millennialism” combining a religious heritage in revealed Christology with a scientific one in Baconian doxology; its paternalistic system of race relations inherited from slavery; even its “culture of honor”; these are just a few of the distinctive values and mores that allegedly set the South apart from the rest of the nation during what was a critical period in its urban-industrial development—from the end of Reconstruction and the return of home rule, to the rise of a new “Redeemer South,” to American entry into World War I. This same period saw, not coincidentally, the rise of the so-called New Education, a movement originating among pro-Southern progressives in the North, principal among them Charles W. Eliot and Abraham Flexner, both of them members of the GEB. The New Education became a vehicle for the introduction and ultimately for the acceptance of Southern values as dominant and peculiarly American values. Supported by interlocking philanthropic forces, North and South, what united the New Education and its allies was a desire not only to improve public education in the South, but in the process to articulate a more generalized vision of education itself, one that would hold equal appeal for northern and southern elites.

 

At the turn of the 20th century, black lives mattered in the worst sense of the term, as sociological fodder for a racialized vision of public education (what I call “race education for all”) aided and abetted by philanthropists and their friends in progressive education, promoting vocational, non-college-bound schooling for the children of recent immigrants, for blacks, and for most working class whites— all lumped together now as so-called “dependent peoples”—and a college education for the respectable middle-class. Speaking in 1901 at a convention of the Southern Industrial Association, Robert C. Ogden, a wealthy businessman from Philadelphia who served as a trustee of Hampton and later Tuskegee Institute, agricultural and industrial training institutes for southern blacks originating in the work of one Samuel Chapman Armstrong, stated: “The breadth of view which General Armstrong inspired has brought a large company of people through the influence of negro education to the consideration of white education, and thus to see the Southern educational question as a unit, with the negro as a great incident, but nevertheless incidental to the larger question.” And the larger question? A popular education in which “Commerce and Education are twins,” said Ogden, the foreign population of the North and the illiterate white and Negro people of the South being “the material upon which this educational work must be done.” In this regard, as he reminded his audience, “The questions of the South are historic and organic that carry with them national responsibility.” Modern educators wanting to put into historical context relations of class, race, and ethnicity as they persist in today’s schools will find much here to inform them, putting to rest, for example, false distinctions in the history of school reform between a liberal-progressive North and a conservative and reactionary South. So completely did the themes of Southern life and culture enter into the educational plans of Northern elites that not only do we need to question the whole trope of “northernization,” but more drastically the rationalist, liberal humanitarian roots of progressivism itself. . . .The southern work of the General Education Board (GEB), many of whose officers were transplanted, loyal sons of the South, shows that philanthropy—the most popular form of northern aid to the South (and in education certainly the largest)—was motivated less by eleemosynary ideals than by a clear conception of the value of the southern experience to national concerns, foremost among them the need for a more efficient and effective system of public education. . . .

 

It was not simply that North and South developed a “culture of conciliation,” as one historian has put it, in the aftermath of the Civil War and Reconstruction. Conciliation has an air of submissiveness that fails to capture the aggressive way in which northern business and educational leaders, in the cause of the nation’s social and economic development, studied, monitored, and ultimately co-opted characteristic elements of traditional southern culture. These include a “spiritualizing” rural culture that even as it was fading in the South became a template for the new agricultural education in the North; an industrial culture designed to uplift poor whites no less than poor blacks and that took its lead from the Christian ethic of work as redemption, not the Republican one of free labor; a tried and tested paternalistic system of race and ethnic relations; and a religious culture that opposed an evangel of sinfulness and spiritual rebirth to the Christian secularism and Godless materialism of the North. The apotheosis of southern folkways took place at a time when American industry was growing at an accelerating pace, producing not only vast new sources of wealth but also new forms of organization and workplace management—the corporation, mass production, scientific management. Progress was widening the gap between rich and poor, spawning crowded and congested cities, and creating the conditions for civil conflict and unrest while pitting Labor against Capital in a war of all against all. Securing the acquiescence of northern workers in the conditions of their own alienation would require a return to, a re-spiritualization of, cultural traditions that stressed self-help not social activism, faith not victimhood, home and hearth not the picket-line or the street. Powerful foundations supported this view, the General Education Board alone with some $324 million to shore up public education, support agricultural extension, and “harness the powerful motives of religion to the educational chariot.” The propertied white South may have lost its battle to save slavery but with its industrial allies in the North—united now around a “taxpayers’ viewpoint”—was winning its war to save the soul of America, putting the country back on a path of growth and development that, eschewing the politics of the old, would be steady, secure, and relatively free of the burdens of the present. . . .

 

Not simply during the period under examination but today as well, the South is less a place defined by strict geographical boundaries—much less by the Mason-Dixon line once demarcating slave states from non-slave states—than an idea, a deep-seated one, combining elements of racial and ethnic separatism, the elevation of a mythic rural order, the countryside as a foil for urban squalor and discontent, and Godliness, the “one truth” of God finding its earthly equivalent in science married to nature study, the solution to all human ills. In its symbolic guise the South may not be so “peculiar” after all, only a “local phase,” in the words of W.E.B. Du Bois, of a much larger global phenomenon—“the subordination of people of color to the western world.” But it is more than that still. What the book attempts to document, the work of progressives at the GEB to transpose traditional Southern values and institutions to a national reform movement in education, finds its modern equivalent, a global one, in efforts (no less putatively “progressive”) to bring to an end the traditional North-South divide between developed and developing countries, the former looking to the latter—in a familiar cant—to tackle shared vulnerabilities, build resilience, and boost development. The North-South Centre of the Council of Europe in its call for greater “North-South interdependence and solidarity” pointed in 1988 to an interdependence made especially salient not only by mass migration, but with it by new physical proximities of social and economic inequality. Although for very different reasons, new forces of anti-globalization in the North begin ironically to echo those of the so-called Global South. What traditionally was a protest by the vast majority of underdeveloped countries in the Global South against the economic and political dominance of the Global North—exemplified, for example, in its veto power on the United Nations Security Council—has been turned on its head, a protest now, a populist one, against the infiltration, real and imaginary, of millions of the former into the precincts of the latter, the North experiencing its own fears of economic underdevelopment. A persistent racism—and its flipside, ethnocentrism—in all areas of the world only exacerbates the problem, a problem less of difference than of sameness, globalization with a vengeance. What role, if any, the American South, symbolic or otherwise, plays in this new state of affairs is the question before us. . . .

 

If it makes sense to speak of the incorporation of the American South into a Global South, one in transition to a Global North—the differences between the two elided by common challenges to social and economic development, pro-globalization elites in both regions of the world united against a “left-behind hinterland” in both regions—Rockefeller’s “comprehensive system,” defined here as an amalgamation of science, southernness, and vocationalism, may be just what the world has ordered, a palliative (if not a cure) for the dislocations of incipient backwardness, while an encouragement to interdependent forces of change and persistence. That so many progressive educators on both sides of the fictional North/South divide, from then until now, have helped to bring into effect and justify (on progressive grounds) such a system is less an indictment of any one region than of liberal humanitarianism itself, at least in its corporate-industrial mold.

Mary Jo Binker on Eleanor Roosevelt and Advice for Future Historians

 

“You know what I love about Eleanor Roosevelt and the thing that I try to emulate the most about Eleanor Roosevelt? She had tremendous zest for life and for new experience and that animates me.”

 

Mary Jo Binker, Editor of The Eleanor Roosevelt Papers, recounts her favorite qualities of the beloved first lady. As I would later come to understand, Roosevelt seemed to be a driving force in Binker’s life, one that would ultimately motivate her to become the accomplished historian she is today. Here’s what Ms. Binker had to say about how history has animated her life and how she hopes it can do the same for coming generations:

 

Can you tell me a little about yourself and how you got involved in the history world?

 

Well, I’ve always been interested in history, and when I was a freshman in college, two history professors talked to me about actually changing my major because, at that time, I was majoring in journalism and communication. I turned them down and I often think about that. You know, that they saw something in me that I didn’t see in myself. So I went on, I finished, and my first career was in advertising and public relations and journalism. Then after I was married and had a child, when I was in my forties, I decided to go back to school. I went back and got my master’s in American History at George Mason.

 

You mentioned that you were a journalism major. Do you find that there are a lot of connections between academics in history and people in journalism?

 

Yes, I think that’s very true. Very often, I think it’s because the skill set is so similar, you know? You’re both dealing with information, you’re dealing with packaging of information in different formats. Journalists oftentimes are focused on a story that they’re following, but journalists are often readers (many journalists have an interest in history), and so I think there are a lot of natural connections between journalists and historians, who are both interested in the evidence. And what does the evidence show? And what can you make of the evidence?

 

Going off of two things you mentioned, you said that journalists are really focused on the story, and on your website you mention that you are a “storyteller,” so how does this relate? Can historians also be storytellers?

 

Oh, totally!

 

And if they are, how is it different than a journalist’s story?

 

Well, I think that historians definitely can be storytellers. They should be storytellers! I think how it differs from journalism is that sometimes historians have more room to tell a story. They have more room to tell a nuanced story. Journalists are often on a deadline and they often report on an unfolding story, day by day. Impeachment is a perfect example. You only know as much as you know by the end of every day when you’re a journalist. Whereas when you’re a historian, you’re looking at something that is far enough back that you can begin to see the whole picture, not necessarily all of it, because history itself is unfolding too and different generations ask different questions. But you have a more complete picture of the evidence, so you can tell a more nuanced story, usually in a longer space, although not always. Journalists have to write short. Historians can write long. So you can look at something and you can look at it from some different angles, and so you can also be more critical. Historians can be more critical than a journalist. A journalist has to be a little more dispassionate; you have to put “on the one hand, on the other hand.” Whereas a historian is making an argument. They’re saying, “look, this is the evidence and this is what I think the evidence means.”

 

So what would you say to people who claim history cannot have a point of view and that history is devoid of opinion because “facts are facts”?

Facts are facts, that is true. You know, the American Revolution happened, right? Those are the facts… but what did it mean? That’s the historian’s role, to talk about the meaning that comes out of those events. How did that event move a society forward or backward? What was the impact of the revolution on the people who lived there? What was the impact of the revolution on the British? And within those parameters you can say, what was the impact of the revolution on women, what did it mean for women? What did it mean for people of color? What did it mean for the growth of the country? What if the revolution hadn’t happened?

 

You seem to be doing a lot of that with the Eleanor Roosevelt Papers. We know the facts, we know about her life, we know the facts about the presidency, we know she was a first lady, but I think what you all are doing is providing a different angle. Could you maybe elaborate on that a little?

 

What the Eleanor Roosevelt Papers does […] is this: when you work as a documentary editor, you assemble a group of documents from different repositories, and then you look at those documents, and you make a selection from those documents, and then you take those documents and you put them together in chronological order and then you annotate those documents. In other words, you put footnotes, you put endnotes, you explain maybe the background of a particular document. But the document itself is the thing that the scholar or the student should focus on. All we’re doing is providing background so that you, the scholar or the student, can look at the document and make your own determination as to what you think the evidence means. That’s a little bit different than a historian even telling the story, because the historian would take those documents that I have collected and annotated, but then he or she is going to take them and move them in a different direction. They’re going to make an argument on the basis of those documents, and different people are going to make different arguments. My job as a documentary editor is not to say “that argument is the right argument or this argument is the better argument,” it is simply to lay out the material in a form so that another scholar can tell that nuanced story.

 

So I had some questions prepared for you about how we can make teaching history better, or more accessible to students. Do you think that would be the first step? Doing something like what you do, providing people with the right resources and then letting them make their own decisions?

 

Yes. I am a big believer in using primary source documents in the classroom because I want you to have the most direct experience possible with the past. I don’t necessarily want it to filter through me, but I want you to connect with the person on the page. So, I think using primary source documents is very important and I think that’s a way to generate interest and enthusiasm among students. The other thing I think that is also really good, and if you can do it would be great, is to take students to historical sites. I like to think that there’s a residue of energy there that you can connect with if you’ve got some historical imagination. You can take somebody to Mount Vernon! Well, how many different stories can you tell about what happened in Mount Vernon? You can think about George Washington and you can think about him in his military and political careers, but then what about the enslaved people? What story are they telling? If you’re going and looking at those buildings and looking at how they were living, what their daily life was like, what they had to do day to day, there’s a connection that you can make there. If you go and you stand on that beautiful outdoor area, where there are all those chairs, where it leads down to the river and you think about how that’s George Washington’s view. What did he see? And what in his imagination did he see beyond that? So taking people to a place, visiting a place is very evocative and a way to encourage people's interest.

 

That’s very interesting. I hadn’t thought of it that way.

 

Well, think about people who go to Civil War battlefields. They’re looking for that connection.

 

Then what would you say is your favorite period in history, or your favorite part to study?

 

Well, you know, I really love the period of Eleanor’s life. She was born in 1884 and she lived until 1962, so that encompasses a huge chunk of American history. You can talk about the Progressive Era, you can talk about WWI, you can talk about the Roaring 20s and what that was like, the Depression, WWII, the Cold War. So there’s that… I think those years. I really like the 30s and 40s because you can see the beginnings of our modern society. Franklin Roosevelt harnessed the power of radio and mass communication, and to study his presidency is to see in embryonic form our modern presidency, and so I think that’s very interesting. On the other hand, I think it’s incredibly interesting to study the period after the Civil War, Reconstruction, and the Jim Crow era, which is a little bit later, because you see people grappling with a whole new set of circumstances. Southerners have lost their property because now the slaves are free. Their whole financial and social order is completely overturned because people who were formerly enslaved are now competing with white southerners for the same jobs, for advancement, for education, for all the things we think of as very normal to life. Watching people grapple with that, looking at the emotion of it and the memory of it, and how memory affects it, I think is fascinating.

 

The way you just described Reconstruction sounds a lot like the narrative people use to describe today, with white privilege and white people starting off farther ahead than some of their counterparts. Do you think learning history would be easier or more relatable if you talked about it in connection with today’s circumstances?

 

I think this gets to the question of “does history repeat itself?” and I think that’s not exactly right… in my humble opinion. I think you can say that human nature is surprisingly consistent across time. Technology changes. But certain kinds of circumstances pose different challenges. We’re living now in a very wired world, whereas during Reconstruction they certainly didn’t have that, so you can’t make a one-for-one correlation. But I think what you can do is look at how people reacted to events, and you can look at (one of the things I think is really interesting about history) what we choose to remember and what we choose to forget, and who gets to make those decisions. Looking at that and thinking about what happened in the past and what happens in the future, what happens when you hang on to a memory? Is that a good thing or is that a bad thing? Does that memory keep you moving forward, or does it keep you stuck in the past? And that is the kind of question that you can pose today. It is just as valid as it was 150 years ago. That is where history can be useful.

 

Okay, so I will pose that question right now. You said that it is interesting to see what we choose to remember and what we choose to forget, and relating it to the work that you do, I see that often we forget women in talking about history. So what do you think about this younger generation trying to rename history “her-story”? Plus, how do we incorporate women into the narrative more than we have in the past?

 

I think this gets to two issues: the people who are underrepresented in history (women being one group, minorities being another) and what to do about difficult and painful memories. Here, I am thinking about the controversies over the statues, the Civil War statues. They were actually put up after the war, and many of them during the Jim Crow era. So, I remember talking with a historian friend of mine who happened to be African American and I posed the same question to him. He said, you know, “we got to name it and claim it.” By that I think he meant we have to take our history as a whole and we have to tell a story that’s about all of us – good and bad. We’re the people who, in our history, after the war, rebuilt Europe with the Marshall Plan. They used our money to rebuild Europe. Winston Churchill called that “the most unsordid act in history,” so you can put that on the positive side of the ledger. On the other hand, we exterminated a lot of indigenous people and we did that in the name of westward expansion, and colonizing the country, and exploration, and all of that. We had this idea of “manifest destiny.” It was our idea to go from coast to coast. Okay, but… we exterminated those people. We took their land and then we turned around and enslaved another group of people to work that land. Those are not good things, so those go on the not-so-good side of our ledger. The point of it is, you can’t say we’re a good people or a bad people. It’s not about making a moral judgment. It’s about saying, oh, we’re a flawed people! We’ve done all these things. We’ve done good things in the world and we’ve done bad things.

 

So where does “naming it and claiming it” start? Does it start in the classroom, or does it start in everyday life?

 

I think it starts in both places. I don’t mean to give a fast answer, but I’m a parent, I raised a child, and I tried to give him an understanding of the world as it was and the world as it is. Then when he went to school, he learned other parts of that. I think as a parent you want to keep an eye on your child’s education. You want to know what they’re learning. My nose was in his history books. I was trying to figure out: what do you know? What do you need to know? And can I fill in those gaps? Now, I’m a history nerd who’s also a parent. The average parent might not do that.

 

So is it then up to the parent to educate themselves?

 

I think partly, yeah. All of education, education in general, should be a lifetime pursuit, really and truly. We’re the ones who’ve made it into this “you start in preschool and end in graduate school, and then once you get your degree you’re done and you don’t need to know anything else.” That’s a distortion. A proper education goes on all your life; that’s a very Eleanor thought. A proper education does go on all through your life; you’re always constantly learning, or should be constantly learning. You should be constantly challenging yourself to learn, I think. If you’re not, you’re not getting everything out of life that you could be getting out of it. You’re taking up space but you’re not really contributing to the whole picture.

 

History is a great tool for that. I talk to a lot of people in my age bracket and a lot of them spend a lot of time watching television of one kind or another, one set of talking heads as opposed to another set of talking heads. I don’t have a lot of time to watch TV. Basically what I do is read a couple of newspapers and read history. It is really pretty remarkable that when I get in a conversation with somebody, I’ve got a lot to contribute, because I can say… well look, from a historical perspective, this, this, and this are playing into whatever the situation is. Have you considered this from that angle? Because of this, this, and this.

 

So does being a historian or knowing about history give you a leg up in having these kinds of conversations?

 

I think so. I’d like to think so. I sometimes meet with people when I’m taking them around Washington, and they say to me, “talk to my kids, I really don’t know anything,” and I feel like there’s so much of our history that has a bearing on what we’re doing now and how we perceive what is happening now that not to know it is to really deprive yourself of the full picture. It’s really shocking and it’s sad in so many ways.

 

So we have room to grow?

 

Yeah.

 

With that, I think I can ask the final question. What’s your favorite thing about Eleanor Roosevelt? I know you could go on for hours…

 

You know what I love about Eleanor Roosevelt and the thing that I try to emulate the most about Eleanor Roosevelt? She had tremendous zest for life and for new experience and that animates me. I want to be engaged with what’s going on around me. Kyla’s been so great to let me be a little bit of a part of your [history] class, and I, like Eleanor, love to connect with young people. I love to work with young people because I feed off your energy, I feed off your ideas. There’s just so much to learn, to grow, to explore in that sweet spot where you’re coming into all of that. It’s exciting to really share that. So I think that’s one of my most favorite things about Eleanor.
