When I was in university, I had this one really awful sociology class taught by a terribly obnoxious left-wing demagogue. I was just beginning to formulate my own ideological identity in those days, but I knew whatever this guy was — with his conspiratorial critiques of “capitalist society,” fawning idolatry of third-world Marxist guerrillas, and relentless bashing of the “American Empire” — I certainly wasn’t.
I had bought Margaret Thatcher’s (then) recent book, Statecraft, around the same time, and quickly found it to be a marvelous counterbalance to my teacher’s ramblings. It was Thatcher’s last published work, and as such attempted to offer some right-wing parting shots at many of the popular liberal movements of the early 2000s: environmentalism, globalism, opposition to the War on Terror, and so on.
I haven’t re-read it lately; I’m sure the J.J. of today would find a lot of it paranoid and simplistic, but to 20-year-old J.J. it was a remarkable window into a fresh alternative philosophy I was definitely not receiving from my post-secondary educators. If nothing else, owning that book guaranteed I always heard both sides of the story.
It’s hard to objectively evaluate the merits of Margaret Thatcher’s political career without having lived under it, but as an ideological figure, it’s undeniable that her greatest achievement was providing an alternative. Andrew Sullivan wrote an excellent piece today about just how strange, backwards, and socialistic (in his condensed summary: “insane”) pre-Thatcher Britain was, in part because the post-war British political class lacked the courage to contest the “managed economy” social-democratic consensus that had taken hold in the 1950s and 60s. Thatcher came along and said no, there was an alternative, not only to the modest standard of living Britons enjoyed, but to the means through which it was achieved. Taxes, government, regulation, and state control were not necessarily the generators of economic prosperity and social stability, but could actually be obstacles to it. The Victorians were not necessarily oppressive, cruel busybodies, but perhaps a hard-working and dignified generation from which we could learn a thing or two.
It was a deeply contrarian perspective, but she spoke with such conviction you believed it. And it was novel enough you were willing to give it a try.
Left or right, the legacy of Thatcher is that ideas can matter in politics, and the conventional wisdom of the present is not forever destined to be. Her electoral popularity in England helped move the Labour Party rightward in envy; her enthusiastic alliance with Ronald Reagan turned two relatively disparate leaders into the most influential political movement to grip the modern Anglosphere. People used to speak of low taxes and restrained government as “Thatcherism;” she said her favorite compliment was when that “ism” became too mainstream to require a special name.
Beyond her most rigidly dogmatic opponents, just about everyone concedes that Margaret Thatcher was the sort of politician for whom the expression “agree or not, you have to admire their conviction” was designed. I can think of very few who deserve that description today.
I don’t know how often you guys read my twice-a-week Canadian politics columns for the Huffington Post (you really should by the way—a link to the most recent one can always be found in the right sidebar), but this Monday’s one got a bit more attention than most.
I basically wrote a long piece attacking Green Party leader Elizabeth May, whom I’ve always found to be one of the most annoying figures of modern Canadian politics. This is a woman whose claim to fame derives almost entirely from the fact that she goes around calling herself a “party leader,” even though the party-of-one she leads is probably the single least successful movement in the history of Canadian politics (electing one MP in the last three decades) and these days seems to exist primarily to raise May’s own profile.
Anyway, yesterday Ms. May took the unprecedented step of writing a rebuttal to my editorial. You can read both pieces and see who makes a more convincing argument. Personally I think “party leaders” should have better things to do than feud with some no-name internet pundit like myself.
If nothing else, it doesn’t exactly disprove my thesis of May’s remarkable sense of self-importance.
During my unhappy year in Japan, a favorite method for passing the days was collecting CDs of Japanese clip-art. Like many foreigners, I had been overwhelmed by Japan’s dizzying array of holidays and festivals, and hoped studying clip-art could offer a crash course in their traditions and symbols. What more condensed summary of a holiday is there, after all, than the cartoons found on a hastily-made pool closure sign?
Japanese "mother" clip-art
Browsing these cartoon archives, what immediately struck me was the bizarrely retro depiction of women. If a CD contained a stock image of a “mother,” she was probably wearing a kitchen apron (a pink one) and holding a frying pan or a duster or some other tool of domestic life. Sometimes even at the park. There were no clip-art pictures of female executives, female athletes, female cops, female construction workers, or basically any female in a physically demanding or otherwise non-stereotypical job. Everyone had long hair. Everyone was in a dress. Coming from a culture where even the backs of board game boxes are painstakingly crafted to create a perfectly politically-correct tableau of gender and multicultural empowerment, the contrast was striking.
I asked some of my female Japanese co-workers about it, and they seemed unfazed. “That’s just how women are seen here,” they said. But then again, many of them were only working to kill time before marriage.
This is the sort of cultural context missing — in fact, aggressively not present — in Anita Sarkeesian’s recently-released and much-watched YouTube mini-documentary on video games “Damsels in Distress,” the first of what promises to be a long series of feminist critiques of depictions of women in gaming, or as she puts it, “Tropes Vs. Women.”
Sarkeesian spends 23 minutes criticizing the gross and cliched way females were depicted in video games during the 1980s and ’90s, which, as her thesis/title suggests, was primarily as passive victims captured by villains, and thus shiny objects to be collected by the game’s hero, rather than characters with any sort of agency of their own. It’s an accurate observation, but considering almost every game she cites in her catalog of “Damsels in Distress” is a Japanese title produced by Japanese developers for a primarily Japanese market, there’s an obvious cultural commonality here that goes inexcusably unexplored.
I’m no master of Japanese sociology, but it’s hardly an obscure fact that Japan has one of the worst track records of any major industrialized democracy when it comes to rates of female participation in the workforce, female political participation, female representation in corporate leadership, female university enrollment, and female income equality. Until very recently, the Japanese classified pages openly stated that women need not apply for certain jobs. A deliberate lack of childcare options ensures the working single mother is virtually non-existent.
The social limitations Japanese women face in daily life obviously manifest clearly in Japanese popular culture. I suppose until you’ve been there, it’s hard to fully appreciate just how entrenched and uncontroversial the image of the hysterical, weeping, fragile, dependent, know-nothing female remains. You see her constantly in soap operas, in anime, in music, in advertisements, even in politics. (I was once handed a flyer by a supporter of a woman who was running for mayor. The thing was entirely pink and offered little argument beyond “why not a woman?”)
Thanks to the perennial western fascination with Japan’s more depraved subcultures, most of us are also now well familiar with the grotesquely vicious and misogynist images that abound in the robust Japanese industry of highly-specialized cartoon fetish porn, and the soft-prostitution racket known as “maid cafes,” which feature women serving men cookies and drinks clad not only in skimpy outfits, but absurdly fawning and servile attitudes even a Hooters waitress would find demeaning.
In her 23 minutes, Sarkeesian says the word “Japanese” exactly once. Completely uninterested in the cultural roots of her subject matter, to her, “video games” are simply things that randomly emerged from some neutral ether, as opposed to the products of a particular sort of corporation run by a particular sort of person living in a particular sort of country.
This is not an uncommon perspective for white progressive-types to take, of course, loath as they are to offer any critique that could possibly smack of bigotry or ethnocentrism. But the uncomfortable fact remains that feminism, of the sort we in the west are most familiar with, is simply not entrenched in Japan the way it is here. Any worthwhile critique of female video game depictions in the 1980s and 1990s would thus have to focus on the extent to which these images arrived in our households through a “perfect storm” of culturally regressive variables, in which the industrialized world’s most dominant creator of video game software was also the nation with some of the most unapologetically un-American views towards what is and isn’t a culturally permissible way to present women.
And for that matter, we should recall that these insensitive depictions didn’t go unnoticed in the States. Far from blindly embracing the oft-offensive female “Damsel in Distress” images Japan offered in their games, the degree to which American business aggressively sought to soften and tame the harsh sexist edges of Japanese video games after import is a fascinating story in its own right, and an absolutely critical component of any larger discussion of gaming from a western, feminist perspective.
At one point, Sarkeesian passively notes that the female “Damsel in Distress” in Double Dragon (a 1987 offering of Tokyo-based Technos Japan) has her panties involuntarily exposed in “several versions” of the game. She neglects to mention that those “several versions” were the Japanese ones — at the time, Nintendo of America had a stated policy banning images “which specifically denigrates members of either sex,” and so Double Dragon’s damsel, like many other female video game characters of the time, was forced to cover up prior to the game’s U.S. release. Nintendo of America was actually quite the little moralizing busybody in those days, removing strippers, Playboy bunnies, scantily-clad fairies, and even bare-chested Greco-Roman sculptures from all manner of Japanese titles during the 1980s and ’90s, lest any impressionable young Americans be subjected to such crass depictions.
Sarkeesian similarly opts to ignore the heavy-handed American role model-ization of otherwise sexist and forgettable female video game stars that defined the era she purports to document.
She sneers at the fact that Princess Toadstool only appeared as a playable character in Super Mario Bros. 2 “kinda accidentally” because the real Japanese version of SMB2 was considered too difficult for American audiences, necessitating some other game be released stateside. So Americans got a different game with a playable Princess, which was also the last time she appeared in a starring role. The exception that proves the rule, in other words.
And that’s true. No other Mario game was ever again explicitly designed for an American audience. I don’t think anyone who grew up with the American Super Mario Bros. 2 can forget how enormously popular playing as the Princess was — I have fond memories of my father dogmatically insisting there was no better character, thanks to her gimmicky long-jump power — and I think it’s fair to say her inclusion was actually something of a positive watershed moment in the way Americans perceived women in games. Not that a playable female character was anything particularly new for U.S. audiences, of course — America was also the country that created Ms. Pac-Man, lest we forget.
It’s also worth noting that while Princess Toadstool’s Japanese personality — the whiny, weepy, petticoat-wearing airhead depicted in the Mario games — is undeniably cringe-inducing, her American image was always significantly different thanks to a vast American-made canon of Mario Bros. comic strips, coloring books, choose-your-own-adventure novels, television shows, educational software, and even a feature film that collectively broadened her personality well beyond that of a one-dimensional prop. In American media, in fact, the Princess was basically a leading example of one of the great feminist media tropes of the 1990s, the cliched “Wise Woman,” who stands alone as an island of adult sanity amidst a supporting cast of bumbling, infantile men.
American Mario fans who read the Valiant comic book series or watched the Saturday morning cartoon met a Princess who was calm, collected, and sensible, a savvy political ruler of a vast kingdom (the Japanese games never take her role as “princess” this literally) and “definitely no old-fashioned damsel-in-distress,” in the words of the TV show’s writers’ bible. I recently re-watched an episode of The Adventures of Super Mario Bros. 3 animated series I remember quite liking as a child — the plot featured a workaholic Princess taking a “much-needed vacation” in Hawaii while her kingdom fell into predictable chaos with dopey Mario and Luigi in charge.
Princess Zelda as the Japanese knew her (above) versus how Americans did (below).
The same was true of the other damsel singled out by Sarkeesian for particular disdain, Princess Zelda of Legend of Zelda fame. Once again we see a pattern: a sexist and corny Japanese in-game depiction softened by aggressive American attempts to establish the Princess as a competent and self-possessed heroine via an extended universe of comics and cartoons.
Zelda’s American makeover was even more dramatic than Toadstool’s. While the Japanese games depicted Zelda as a stereotypical princess in a flowing pink gown who did little more than sit around waiting to be rescued (in Zelda II: The Adventure of Link she spends literally the entire game under a Sleeping Beauty-style curse), the American comic book and cartoon version of the character was an athletic, aggressive bow-and-arrow slinging warrior-princess with knee-high boots and an all-business attitude. She was, in fact, a vastly more likable personality than the stupid and surly Link, the franchise’s supposed hero, whose pointless California accent and annoying catchphrases grate to this day.
In the case of Sonic the Hedgehog, a series Sarkeesian mentions only briefly, Sega of America’s merchandising department was so desperate for Sonic to have a strong female counterpart they created one out of whole cloth: Princess Sally Acorn. The cuddly pink Amy Rose Sarkeesian cites as Sonic’s sexist answer to Toadstool and Zelda, though popular in Japan, was largely unknown to American audiences until quite recently. As was the case with the Zelda and Mario series, American Sonic fans who consumed the larger folkloric canon surrounding their gaming hero were repeatedly reminded that their on-screen male protagonist owed a lot to his “better half.”
Sarkeesian ignores absolutely all of this, and instead asks for a feminist evaluation of the cultural impact of ’90s-era video games in a bizarre, vacuum-sealed context in which countries, culture, politics, economics, and history simply do not exist. Hers is a sophistic argument in which a thoroughly American critique is applied to a foreign nation’s cultural products as a way to draw some larger point about female rights in her own country, while simultaneously ignoring the large role progressive-minded American corporations — sensitive to decades of activism from American feminists — played in seeking to curb the very elements of Japanese sexism she finds so problematic.
I understand Sarkeesian’s video series was controversial when first proposed, generating both exaggerated contempt from defensive males unwilling to have their playthings insulted and exaggerated support from righteous feminists convinced that any self-proclaimed feminist critique of anything is always an unquestionable good.
I personally don’t know if we need a feminist critique of Japanese video games released nearly 30 years ago. In her video, Sarkeesian certainly makes no effort to explain why I — or anyone else — should care, since her confused muddling of cultures, ignorance of context, and lack of interest in impact result in a “critique” without a clear target or purpose.
Something is not always better than nothing.
Rand Paul’s 13-hour filibuster didn’t impress me much. He was protesting the Obama Justice Department’s recently-leaked legal opinion stating that drone strikes can be used to kill American citizens living abroad, if the President believes they may be supporters of Al-Qaeda or its various spin-offs. On Wednesday, Paul took this legal opinion for a ride down the slippery slope, as many civil libertarians of various sorts have lately.
If the President can use drone strikes to kill American citizens abroad, the Senator wondered, can he also use them to kill American citizens at home? As in, on American soil? On mere suspicion of being affiliated with a terrorist? The Attorney General, in a hilariously patronizing letter, responded with one word: “no.”
While I, like most people, have obvious concerns about the Obama drone policy, I think it’s important to acknowledge the truth about the ideological hang-ups that motivate its most obsessive critics. Though some liberals have expressed delight that the right is “finally” starting to care about wartime civil liberties, I think most Republican indignation on this issue flows from a different headspace entirely: the tired Obama-as-dictator trope.
As David Weigel writes in Slate:
[Paul] invoked the shootings at Kent State in 1970, and asked whether the government could have used drones to kill Jane Fonda. A conservative who slapped a “Not Fonda’ Kerry” sticker on his Dodge Ram nine years ago didn’t hear a defense of anti-Vietnam War activists. He heard Paul, and thought about the government maybe targeting right-thinking Americans who rallied at Tea Parties.
This is a real forest-for-the-trees type issue. Populist critics like invoking the result of the drone policy (government killing Americans) because it’s a powerful rhetorical trope that can be used to fan paranoia and conspiracy theories. And as Weigel notes, many a long and prosperous political career has been built on the back of peddling paranoia and conspiracy. What populist critics don’t like, however, is questioning the context in which the drone policy actually arose.
To refresh: the government’s purpose in launching drone strikes against American citizens is not to kill for the sake of killing. Nor is it to satiate some bloodthirsty Obama desire to establish a totalitarian precedent that the American government can kill its critics without trial. No, the drone policy exists because the United States was attacked by Al-Qaeda on September 11, 2001 and the United States Congress demanded the President “use all necessary and appropriate force” to fight terrorists making war on the United States.
In the years since, Al-Qaeda has proven to be a tricky foe. Their soldiers do not wear uniforms, they operate out of remote locations scattered around the world, they seem to be plotting attacks constantly, and their motive is a fanatical ideology that has proven capable of winning followers of all races and regions — even citizens of the industrialized west. In order to effectively fight these guys, successive presidential administrations have had to resort to ever-more complicated and convoluted tactics, including some that require undermining the constitutional rights of non-terrorist Americans.
If this is all too much, then oppose the war. Demand a unilateral cessation of hostilities with Al-Qaeda. Seek an immediate repeal of any and all legislation giving the Pentagon or White House the authority to continue fighting them. That’s the root cause of everything, after all.
I don’t support that position, and neither does Rand Paul, who has distinguished himself from his isolationist father, in part, by voicing broad support for the War on Terror in general — if not every little tactic along the way. The same is true of Glenn Beck and Rush Limbaugh, and all the other mainstream conservative voices suddenly spouting libertarian fears about overreach by the trigger-happy Commander-in-Chief.
It’s much easier to distract and dodge and rail against some convenient straw man of Presidential tyranny than embrace the discomfort — and suffer the political price — of condemning a still-popular war against a still-unpopular enemy.
Back in 2011, a couple of friends and I ran a little blog called The Mace that offered running commentary on the Canadian parliamentary election of that year. It got a fair bit of mainstream media attention near the end, but with that election long over, I decided to let the URL lapse.
As a result, I’ve migrated all my editorials that were previously only available on TheMace.ca to my article archive on this site.
One of these days I’m gonna make an eBook of my favorite essays of all time.
I did another interview with Ballast magazine that you guys might find interesting. The topic is Canadian Senate reform; I debate with editor Paul Hiebert as to whether or not we’d be better off just drafting random citizens to run the thing.
I’m against, because…
The main problem Joe and Jane Average Canadian have with the Senate has nothing to do with the calibre of the senators themselves. This is a point I’ve been trying to make in the aftermath of Brazeau-gate and all the rest of it — how good or bad or elitist or populist or whatever the individual senators are is not really the issue. The issue is whether or not they are accountable for the decisions they make.
This seems to be shaping up into a something of a running feature I’m going to be doing with the Ballast folks.
I don’t mind Dick Morris, though I can certainly understand people who do. I wouldn’t deny that he exudes a strong vibe of self-satisfied, opportunistic phoniness (as any man whose greatest fame comes from switching parties inevitably would) but there’s also an upbeat and enthusiastic side to him I find quite fresh and compelling. This is a guy who undeniably enjoys politics and history in a genuine and guileless way; many years ago I read Power Plays — his study of the political strategies of history’s greatest democratic leaders — and found it a lot of nonpartisan fun. I like some of his American history vlogs as well; they’re usually fair and earnest and educational in a way that contrasts nicely with the dogmatic revisionism of someone like Glenn Beck.
Morris deserved to be fired from Fox, however. There’s a line between positive spin and outright deception, and Dick’s delusional predictions of a “Romney landslide” in the final days of the 2012 campaign were a truly cringe-inducing example of the latter. You can lie to yourself, but there has to be some consequence for lying to voters on national television.
Media Matters has a nice little essay about Morris’ career at Fox, and the various embarrassments along the way. Since I’m determined to put a positive spin on the guy’s body of work, here are four lessons I believe the Morris downfall has to teach anyone interested in punditry:
Lesson One: Don’t let personal grudges influence your analysis.
Dick Morris became a household name as an advisor to President Clinton, and without getting into too much detail, he eventually left the Clinton White House on a sour note. Much of his subsequent political analysis reflects this. Furious at Bill and especially Hillary, Morris consistently allowed his bitterness to color all subsequent analysis of the couple, often to absurd extremes.
Morris wanted Hillary to lose. He predicted she would lose her 2000 senate bid to Rick Lazio, and then, even more implausibly, predicted she would lose to whoever ran against her in 2006. He predicted she would lose the 2008 primaries much worse than she actually did, and he predicted she would have no role in the Obama White House. None of this analysis seems to have been dictated by any sort of consistent logic beyond a churlish hope that the woman Morris hates will never get anything she wants.
Lesson Two: Don’t always assume the most interesting thing will happen.
Before deciding she would lose the primaries to Obama, Morris wrote a 300-page book about how Condoleezza Rice would run against Hillary in 2008. That would have been a cool presidential race. But being a good pundit requires knowing that real life isn’t always cool.
Morris consistently anticipates crazy political events that never occur, I assume mostly because he knows that they technically could, and merely knowing things that could happen seems to be all it takes to prove political genius in our culture these days. He thought Obama might face a 2012 primary challenge or drop out. He thought Trump would run. I don’t think there’s ever been a convention he didn’t think would wind up brokered. He thought McCain would pick an outlandishly unorthodox VP (hey, even a broken clock’s right twice a day).
Too many voices in the mainstream media spread plausible-sounding “what-ifs” and constitutional trivia in place of actual insight. The Canadian press consistently predicts that every election might result in a coalition government or intervention by the governor-general; American journalists waste an enormous amount of time touting “potential” presidential or vice presidential candidates who are practically screaming their disinterest in the job. This sort of thing doesn’t look insightful, it looks childish and gimmicky. Viewers are smarter than a lot of pundits think.
Lesson Three: Do not mix punditry with business
The Media Matters people do a good job of pointing out the many times Morris was covertly paid by Republican candidates for some service and gave fawning “analysis” on Fox in return. In 2012, Morris took the candidacies of Michele Bachmann and Herman Cain way more seriously than any seasoned political analyst of his calibre ever should, all because they were offering him a paycheque behind the scenes.
There’s nothing wrong with being a paid mouthpiece for a political campaign. But if that’s how you want to make a quick buck, it will almost certainly ruin your ability to provide impartial — let alone rational — analysis.
Lesson Four: Don’t just be a partisan hack.
When he worked for the Clintons, Morris was a conservative supporter of a Democratic administration. He was a leading proponent of the idea the Democrats could forge a permanent governing coalition by “triangulating,” or moderating their positions on certain issues where liberals were seen as being too weak, but Republicans too strong — most notably welfare reform.
This is an interesting strategy, and it’s what made him famous. But after leaving the Clintons, possibly because he hated them so much (see lesson one), Morris abandoned his pragmatic pretences and just became a run-of-the-mill Republican apologist, blindly endorsing every GOP candidate for every office and unthinkingly supporting every tired Republican (or later, Tea Party) talking point. It allowed him to enjoy great success as a partisan darling in the short term, but clearly led to the dopey predictions that caused his downfall. He knew which side his bread was buttered on, and he told his base what they wanted to hear, no matter how nuts.
I don’t know if Dick Morris will ever recover, but his is clearly a fine example of a career ruined by taking the easy way out.
The Harper government’s Bill C-53, the supposed quick-fix modernization of the controversial 1701 Act of Settlement, has now been officially denounced as unconstitutional by both republicans and monarchists, as well as a number of leading Canadian constitutional scholars on royal matters.
It’s becoming increasingly obvious that Canada’s constitutional relationship with the monarchy is governed by an almost entirely incoherent realm of law that survives only because it’s rarely contested. The controversies that have arisen so quickly from a small change to the Act of Settlement have proven that it doesn’t take much to get the whole mess unraveling.
Last Friday, the federal government submitted six questions about the Senate — another notoriously ambiguous Canadian institution — to the Supreme Court of Canada in the hope that its judges would clarify exactly how the government can or cannot go about modifying parliament’s upper chamber in a legal, constitutional manner. If I were in charge, I’d also send along the following five questions about the monarchy:
1) Is there such a thing as a “Canadian monarchy” that exists as a completely separate entity from the monarchy of the United Kingdom of Great Britain and Northern Ireland?
2) Is Britain’s 1701 Act of Settlement part of the Constitution of Canada?
3) Will the government’s imminent passage of Bill C-53 modify the Act of Settlement in regards to the order of succession for the “Canadian monarchy,” or merely the UK monarchy?
4) The Constitution Act says that only a constitutional amendment passed with unanimous provincial consent can alter “the office of the Queen.” Does “the office of the Queen” refer to the institution of monarchy as a whole, or merely the powers and duties of the incumbent monarch?
5) If the United Kingdom became a republic, would Canada automatically become a republic as well?
EDIT: Alert tweeter @gillespk123 says the following question is relevant too:
6) If Canada does not change the Act of Settlement correctly, is it possible a different member of the Windsor family could wind up as monarch of Canada than monarch of the United Kingdom?
Obviously some of these questions are contradictory, and some answers will eliminate multiple questions. But right now we don’t have satisfactory answers to any of them.
A couple months ago, an old friend of mine named Paul Hiebert raised an amazing $25,000 on Kickstarter to start a new politics-and-culture blog called Ballast.
Last week, he asked me to do a little interview/debate with him, on the topic of extending the terms of members of parliament from four years to ten. It was fun. You can read it here.
There may be more of these to come!
This essay is definitely one of the more useful and insightful things I’ve read in a while.
It’s by Kenneth Westhues, a Canadian sociology professor from the University of Waterloo. Ostensibly, it’s an editorial bemoaning good professors who get railroaded out of their departments for crimes against political correctness, but in practice, the bulk of it is an investigation of “modern” versus “post-modern” styles of debate and argument. Which sounds dreary, but Westhues makes it quite engaging. He makes a very good case, in fact, that the clash between these two styles is actually at the root of a lot of the political nastiness and anxiety of modern times.
To put it broadly, modernists are folks who want to debate everything in a cold, intellectual, detached, abstract way, while post-modernists are those who debate using passion, emotion, and identity. Modernists believe nothing they say should ever offend anyone (“it’s nothing personal, but…”), postmodernists believe causing offense is one of the worst social crimes of our time (“how can you talk like that?”).
A good real-world example would be the debate over illegal immigration in the United States. Many conservatives tend to be extreme modernists on this issue, in the sense of only being able to perceive illegal immigrants as an abstract problem to be solved, with the sheer size of the problem requiring particularly harsh solutions. “We” (being the government) need to do this-or-that to “them;” nothing personal, but “they” are lawbreakers and yadda yadda.
Many liberals, in turn, are extreme postmodernists on immigration. Illegals are not abstract things, they say, they are human beings with feelings and identities and cultures; any discussion over their fate thus has to be shaped by sympathy and understanding of these needs, and their sensitivities as individuals. Conservatives who display a lack of sympathy and understanding, in turn, are quite clearly racists or bigots.
Dr. Westhues’s piece is quite unapologetically biased towards modernists; he thinks postmodernists are generally hypersensitive to the point of delusion, and prone to launching all manner of cruel accusations and insults against modernists they don’t like (such as sassy, off-color professors) simply on the basis of hurt feelings caused by offensive words, deeds, and other sins (colonialism, ableism, etc.) that the modernist wasn’t even aware of committing.
But Westhues also admits that no one “is entirely modern or postmodern,” and that everyone can be postmodern (which is to say, hypersensitive and emotionally-motivated) on certain issues, depending on what’s at play.
I think of myself as a fairly modernist sort of guy, and even friends say I’m sometimes unappealingly indifferent to the feelings of others when debating something passionately, simply because I only want to think about the issues in cold, intellectual terms. But if I’m honest with myself, I have to admit that there are times when I hear right-wing types debate gay rights and get upset in an “irrational,” emotional way simply because I can’t believe folks can say such nasty and mean-spirited things about people like me.
All modernists have their “triggers,” to use a beloved postmodern term. And all postmodernists undoubtedly have their blindspots.
Many of the biggest political debates of our time center around issues where policy meets the personal, and where the fate of some ethno/cultural/socioeconomic demographic group is being discussed by outsiders. This is not only the case with issues of immigration and gay rights, but also abortion, affirmative action, poverty, child-rearing, mental illness, and (in Canada) various controversies surrounding First Nations and aboriginal peoples. And much of the viciousness of our politics, in turn, comes when postmodernists make accusations of bigotry and hate against people who (usually) are just trying to debate “their” issues with detachment or irreverence, though the postmodernists themselves, of course, would say their viciousness is a justified response to the number of insensitive modernists everywhere.
Is there a way to reconcile the two traditions in favor of a more civil discourse? I know both sides generally just want their opponents to surrender, to “get over themselves,” and stop being such big bullies/pansies, but that attitude itself just illustrates the magnitude of the problem.