‘What is necessary is to rectify names’: Ramsay’s Game is sheer indoctrination

In 1994, an impish Northern Irishman stood in front of a room full of smelly, doubtful-looking sixteen-year-old boys, and declared “By the end of this term, I will make you love Keats.” Incredibly, it worked. After weeks of luxuriating in every syllable of “Ode to a Nightingale,” pulling apart every line of “To Autumn,” we loved John Keats.

So it was quite a shock, not long after, to get to university and learn that this approach to literature didn’t cut it in the English Department. That books, or ‘texts’ as we soon acquired the tic of calling them, were social products that play a range of political and cultural roles, not all of them good. “Your enjoyment,” as one lecturer told us, gently but firmly, “is none of my business.”

The Long Room, Trinity College, Dublin (photo: the author, 2016)

Outrage at this discovery is how a certain class of young fogey is made, latter-day F.R. Leavises railing against those damned postmodernists ruining everything. It almost happened to me. Perhaps it’s what happened to Senator James Paterson, the latest IPA-adjacent voice to attack the Australian National University for rejecting the Ramsay Centre’s Bachelor of Arts (Western Civilisation) program.

On Paterson’s telling, the rejection demonstrates the very thing the Ramsay program is designed to counter: a “rampant anti-Western bias” among academics and a corresponding lack of “viewpoint diversity” in our universities.

For people supposedly hostile to the West, humanities academics in Australia teach disturbingly little else. This year I’ll spend all of six weeks teaching non-Western philosophy, and even that’s unusual. As for the dreaded ‘cultural Marxism,’ the Ramsay Centre’s curriculum contains considerably more Marx than ours does.

The Ramsayite complaint, however, is less about what we teach than how we teach it. Sure, we cover European thought and history, they say, but we’re just so damn critical, obsessed with the sins of colonialism and cultural imperialism rather than the achievements of our forebears.

Ironically, that critical stance is itself an Enlightenment value, which is precisely why the Enlightenment’s loudest critics came from within. This fact troubles those who want to gerrymander the ‘West’ into a clean, linear narrative that takes in everyone from Plato to Hayek but excludes ‘postmodernists,’ as if Nietzsche and Foucault weren’t as much the children of Aristotle and Christ as Kant and Locke were.

Why are academics so critical? Because that’s the job you pay us to do. The function of universities is to create and disseminate knowledge, and you can’t do that by simply nodding along enthusiastically. Hagiography, like huffing Keats, is often fun but rarely useful for understanding the world, let alone fixing it.

Critical isn’t the same as hostile. It’s just what we have to do in order to do our job as scholars: determine which ideas are serious candidates for the truth, and which aren’t. It’s not our job to make you feel comfortable about your heritage. Your enjoyment is none of my business.

Yet the Ramsay vision seems to be all about providing such comfort. It fluffs the pillows of a dying, self-congratulatory view of history while complaining that the doctors won’t admit what excellent health their patient is in.

Tony Abbott insists that the Ramsay Centre — his idea, as we now know — is ‘in favour’ of Western Civilisation, not simply ‘about’ it: “it is ‘for’ the cultural inheritance of countries such as ours, rather than just interested in it”. Not all cultures, Abbott has argued for several years now, are equal.

But therein lies a fatal contradiction. If you think that the Western tradition is valuable because its beliefs — say, science or universal human rights — are in fact true, then your interest is not actually Western Civilisation, but truth per se, and you should fund courses in that. (Good news: you already do).

If on the other hand you think universities should study Western Civilisation as a distinct culture or contingent historical formation, then you’re committed to studying it with the same critical detachment as anything else.

In other words, the Ramsay Centre’s approach turns out to be incoherent: either it’s about Western Civilisation, or it’s in favour of truths that transcend it. To be ‘in favour of’ Western Civilisation itself simply because it’s one’s own heritage would not be scholarship, but something else entirely.

One of those non-Western philosophers, Kongzi — or as the West insisted on calling him, Confucius — argued that the first step in putting the world to right is giving things their proper name. So let’s call the Ramsay proposal what it is: indoctrination.

Senator Paterson declares that “universities don’t receive such generous funding so that fringe academics can impose their narrow worldview on the next generation of students.” Perhaps he could tell his friends at the Ramsay Centre to stop trying to do just that.

 

Originally posted on Medium.

Meet the Infrels: A Thought Experiment

Think about people who have no friends.

Imagine that there are many reasons why they have no friends. Some might be very shy or simply lacking in social skills. Some might live in places or move in circles where they just don’t get to meet potential friends. Some might simply be unpleasant to the point where people don’t want to be their friend.

Naturally we’d feel sympathy: loneliness is terrible, and a life without friends is missing something that, for most people (but perhaps not everyone) is very important.

Imagine if, in some of the more obscure pockets of the internet, people without friends began to interact with each other. They compare and commiserate over their suffering and start to thematize their plight. They build an identity for themselves: they are the ‘involuntarily friendless,’ or, for brevity, ‘infrels.’

The most efficient solution to their situation would be for infrels to simply befriend other infrels. But imagine that, through a cruel twist of fate, infrels aren’t the sort of friends infrels are looking for.

Imagine infrels notice that friendship isn’t evenly distributed across the population. Some people seem to have all the friends, while others are perpetually friendless. To explain this distribution, they decide there must be a ‘friendship market,’ and that they are unjustly shut out of this market; from the viewpoint of the friendship market, they are ‘low-status males.’

Oh, did I not mention that part? Imagine that infrels are pretty much all men, in one of those stunning random coincidences that nobody can explain. There are plenty of women who don’t have friends, but curiously, they don’t identify as infrels.

Imagine infrels weren’t the first to notice that friendship is unevenly distributed. Imagine that sociologists, economists, and psychologists had often studied the social dynamics of how friendships form. But imagine that for infrels, this isn’t simply a theoretical problem, but a practical, even existential one. They want to ‘get’ a friend, and they blame others for their inability to do so.

Imagine that, within their networks, infrels become increasingly angry and resentful. They rail against stereotypical figures they call ‘Todds,’ who have more friends than they need, and ‘Sallys,’ who — in their view — befriend Todds too eagerly. There are, after all, practical limits on how many close friends you can have — so surely the Todds are soaking up all the available friends, and the Sallys are cruelly enabling them? For some reason they don’t seem to expect the Todds to give up having lots of friends, but they do blame the Sallys for befriending them.

Imagine that one of their high-profile supporters thinks the answer is enforced best-friending, so that there are friends left over for low-status males, even though he also argues that hierarchies are morally desirable because lobsters are — no, wait, scratch that part. Nobody would believe that.

Imagine there are also professionals who will be friendly with you for a fee. Call them ‘friend workers.’ Friend work is controversial: some people think acting in a friendly way to others for money is demeaning, while others think this is simply another way of selling labour. Imagine some people say infrels should just hire friend workers if their unmet friendship needs are so great.

But imagine that infrels don’t want that: they want a friend, but they don’t want to have to pay for a friend (even though they also think that everyone pays for friendship in some way). If the Todds don’t have to pay for friends, why should they?

In that, perhaps, infrels might have a point, just not the one they think they do. If infrels went to friend workers, they’d be getting to do some of the fun things friends do with each other, and for some people that might well be more than enough. A fun, no-strings-attached chat now and then might even be preferable for some people.

But that’s not the same thing as friendship. Friendship isn’t bought and sold on a market. Friendship isn’t owed, isn’t earned, isn’t a reward for effort or a standing entitlement we can claim from others. Friendship is, in an important sense, always gratuitous. It just happens, or it doesn’t. It is bad to be without friendship, but it’s not something you can force, let alone demand.

Of course, there are certain qualities we value in a friend. Yet ultimately, every friendship is a unique relation, born of the event of encounter between two persons. Hence, friendship isn’t something that can be redistributed, because if we tried, what we’d be distributing would not be real friendship but a mere simulacrum. By turning friendship into a commodity — something fungible, tradeable, market-appraisable — we’d be converting the most personal, joyously spontaneous thing about ourselves into something impersonal. We’d be destroying what friendship really is.

Imagine that infrels don’t care about any of that. They want a friend and they resent the Sallys for not friending them. Imagine some of them start outlining elaborate fantasies about redistributing friendship. Imagine those plans amount to the Sallys surrendering control over their friendliness. Imagine infrels don’t seem to care about the Sallys’ autonomy at all, so long as they get what they want. Imagine this all starts to look like it’s less about friendship and more about people who can’t accept that Sallys get to decide for themselves who they will and won’t befriend.

Imagine if they started killing people.

 

Originally posted at Medium.

Is it OK to punch Nazis?

Office for Emergency Management. War Production Board, c. 1942-3. National Archives and Records Administration (US)

Hey, remember 2016? When all those beloved celebrities kept dying and we couldn’t wait for the year to be over? We’re now less than a month into 2017 and a week into Donald Trump’s presidency, and the internet finds itself seriously conflicted over whether it’s ok to punch Nazis.

Nostalgic yet?

The dapper Backpfeifengesicht of the Alt-Right

Backpfeifengesicht n. (German) ‘A face in need of a slap’
Vas Panagiotopoulos/flickr

Meet Richard Spencer. Spencer is a major figure in the alt-right, a term he claims to have invented. Profiles of him tend to note his education, dapper suits, expensive watch, and haircut.

He is also an ardent and wholly unrepentant white supremacist and ethnonationalist who advocates what he calls “peaceful ethnic cleansing” to achieve a “white homeland.” He says America belongs to white people, who he claims have higher average IQs than Hispanic and African Americans and that the latter are genetically predisposed to crime. He has called for eugenicist forced sterilisation (but it’s ok, you see, because “They could still enjoy sex. You are not ruining their life”). The nice suits are no accident: Spencer deliberately cultivates a ‘respectable’ facade from which to spout his grotesque racist ideology.

And he loves him some Trump. At an alt-right conference not long after Trump won the 2016 election, Spencer yelled “Hail Trump!” prompting some of his supporters to give the Nazi salute. He insisted that’s somehow acceptable because it was done in a spirit of “irony and exuberance.”

Some – Spencer himself, for one – insist Richard B. “Let’s party like it’s 1933” Spencer is not, in fact, a neo-Nazi. To some extent it’s a moot question, but for present purposes we’ll follow popular usage and use ‘Nazi’ as shorthand for ‘people who advocate the sort of views Richard Spencer advocates’ rather than anything more specific to the historical NSDAP.

On the day of Trump’s inauguration, Spencer was giving a street interview when a masked protester emerged from the crowd and punched him in the head. At least two video cameras captured the incident. And because we’re living in the future now, within hours Twitter was flooded with remixes of the punch video set to music.

There are at least two ethically salient questions in play here: is it morally permissible to punch Nazis? And is it morally permissible to enjoy or exploit footage (or even the fact) of Nazis being punched?

Consequences of Nazi-Punching

One major line of reasoning against Nazi-punching runs like this: if you start punching Nazis, you thereby legitimate or encourage forms of political violence that can be used against you. This is not an argument about rights – it doesn’t say ‘if you punch Nazis they’re allowed to punch you back’ – or draw any false moral equivalence between Nazis and non-Nazis. It’s simply about consequences. Such reasoning is vulnerable to counter-arguments, however: for instance, that Nazi-punching productively serves to make being a Nazi harder, and in any case, Nazis will, if given a chance, punch people anyway.

These are important inputs into our moral reasoning. But they’re not the whole story. Analogous arguments are sometimes offered against things like torture and capital punishment. “Torture doesn’t produce reliable information” or “the death penalty doesn’t act as a deterrent” are relevant facts when considering what’s wrong with such practices. But very often they don’t so much answer the moral question as try to put it out of play. They’re a way of saying “we don’t actually have to answer whether it’s morally wrong to punch Nazis because it’s strategically a bad idea.”

Political violence and the liberal state

A key feature of the liberal democratic state as it has emerged over the last two hundred years or so is that the State reserves the use of force for itself. Outside of consensual settings like boxing rings, private citizens are limited to using violence only in self-defence.

Political violence, according to this understanding, only becomes legitimate in contexts where the liberal democratic sphere, and the protections and freedoms it affords, has broken down – for instance, where tyranny makes certain forms of violent resistance effectively self-defence. That some such point exists seems hard to dispute. The difficulty is knowing when that point has been reached, such that politics legitimates the use of violence against others, and what sort of violence is thereby legitimated.

The US is still, at least at the time of writing, a more-or-less functional liberal democracy. That cannot be assumed to hold: given his attacks in the space of one week on the media, women, immigrants, and science, and his apparent threat to invade Chicago, it’s far from clear whether and how the US as we know it can survive Trump. But at least on January 20th, punching an unarmed Nazi in the street doesn’t appear to rise to the level of self-defence.

But then it’s easy for me to say that. I’m not American, and more importantly, I’m not a target of the genocidal. I’ve never felt what it is to be hated by someone who thinks people like me should not exist. And of course it’s much easier to insist the norms of civil society are still in place if you’re systematically less likely to be harassed or killed by agents of the State.

Breakdowns of the moral sphere

Still, if you want to say that punching Nazis is ok, the first step is to make a case that we find ourselves in one of those exceptional periods in which things have so broken down that the use of political violence has become temporarily legitimate. (Or, alternatively, argue for a very different understanding of legitimate political violence than that which holds in contemporary liberal democracies).

Under such conditions, even philosophers and theologians who put deferential love of others at the core of their ethics have been moved to assist in violent resistance. Dietrich Bonhoeffer and Knud Ejler Løgstrup were both Lutheran priests, and both ethicists centrally concerned with our love of others. During World War II Løgstrup took part in the Danish Resistance (which assassinated around 400 Nazi collaborators, some dubiously), while Bonhoeffer was involved with the Abwehr conspirators who hatched the 20 July Plot to assassinate Hitler. Løgstrup was forced into hiding but survived; Bonhoeffer was imprisoned and executed at Flossenbürg concentration camp in the dying weeks of the war.

Violence and guilt

But Bonhoeffer did not think that necessity washed away the moral stain of violence:

when a man takes guilt upon himself in responsibility, he imputes his guilt to himself and no one else. He answers for it… Before other men he is justified by dire necessity; before himself he is acquitted by his conscience, but before God he hopes only for grace.

Political violence unavoidably reduces the life and body of another human being to a means to achieve a political end. There are desperate circumstances in which that becomes necessary. But in those instances one does not avoid guilt – rather one takes on the guilt of violence for the sake of preserving the moral life we share. Violence may become necessary, but that does not make it good, merely least-worst. It is not clear that punching Richard Spencer was the least-worst available option.

That brings us to the second question. Most of the people discussing the punching of the Nazi Spencer have not actually punched a Nazi and in most cases are unlikely to do so anytime soon. They’re simply commenting on and remediating images of someone else doing so.

That seems to be a long way removed from seeing political violence as a regrettable but sadly necessary means of repairing the fabric of ethical society. It’s just enjoying the sight of another – utterly repugnant – person being punched.

Ok, but don’t we cheer the punching of Nazis in other contexts? Don’t we cheer for Indiana Jones and Captain America when they’re doing just that?

https://twitter.com/KaraCalavera/status/822620397044723719/

Yes, but when we do so, we’re watching fiction. Moreover, we’re watching fictional violence offered in response to violent antagonism, and carried out for a clear purpose. We’re in different territory when we cheer not the purpose for which violence is done, but the act of violence itself. In that case we don’t regret an instance of necessary force, but simply revel in suffering.

Again, I speak from a position of privilege. I’ve never been threatened with violence in word or deed on the basis of who I am. I don’t presume to tell those who have how they should feel about Nazis like Spencer. But for people like me at least, deciding what to share and endorse, things look bleak enough without cheering on the darkness.

This article was originally published on The Conversation. Read the original article.

The ‘no’ campaign on marriage equality owes us better arguments

 

Kurt Löwenstein Educational Center International Team/Wikimedia commons

With the re-election of the Turnbull government, it becomes ever more likely that marriage equality in Australia will be the subject of a harmful, expensive, and non-binding plebiscite. That may also mean the ‘yes’ and ‘no’ campaigns will receive public funding to put their cases.

If we’re going to go ahead with this, then we need to hold both campaigns to the highest standards of public reason. We need to call out bad arguments whenever and wherever they appear, and expose any hidden premises for scrutiny.

The response of prominent ‘no’ voices to new adoption reforms in Queensland suggests we’re going to have our hands very full doing so.

Adoption changes

On 6th August, the Queensland Communities Minister Shannon Fentiman announced changes to the state’s adoption laws, allowing same-sex couples, singles, and couples undergoing IVF to adopt.

Shortly thereafter, the Australian Christian Lobby, probably the most visible proponents of the case against allowing same-sex couples to be legally married, put out a press release condemning the changes:

Today’s announcement by the Palaszczuk Government to allow single people and same-sex couples to adopt orphaned children ignores society’s obligation to provide them with a mother and father.
“It is not like there is a shortage of married couples who cannot have children who would gladly provide orphans with the stability and nurture that comes from an adoptive mother and father,” the Australian Christian Lobby’s Queensland Director Wendy Francis said.

We’ll come back to that emphasis on orphans in a moment. Francis goes on:

“They are, through no fault of their own, without biological parents, and are in need of a new permanent family. To deny them a mother’s love, or a father’s care, is compounding their loss. As we know, and social science proves, it is in the best interests of a child to experience the love of a mum and a dad, wherever possible,” she said.

Who is a parent?

There are at least two meanings to the term ‘parent,’ though the fact many of us are parents in both senses tends to obscure the difference.

One meaning is biological and purely descriptive. Parenthood in this sense is also an either/or proposition: a DNA test, for instance, reveals you’re either the biological father of a child, or you’re not.

The other sense of ‘parent’ is social and normative. A parent in this sense is a person who fulfils a particular role in a child’s life. That role is complex, but includes distinctive types of responsibility for unconditionally loving, raising, and protecting children. This sense admits of degrees, too: you can be more or less of a father in the social sense.

Darth Vader is Luke’s father only in the biological sense. Conversely, many step-parents and adoptive parents are parents in the second sense – not merely substitute parents or guardians or in loco parentis, but actually, substantively, parents.

No, Luke, I am your father, but only in one of the two relevant senses.
Jim Reynolds/Wikimedia

Adoptive parents

The ACL often give the impression that their concern is fundamentally about biological rather than social parenthood. ACL’s managing director Lyle Shelton has framed much of his organisation’s opposition to same-sex marriage on the basis that it would legitimise same-sex parenting (despite the fact this already occurs, perfectly successfully) and that this form of parenting involves “taking a baby away from its mother, from the breast of its mother and giving it to two men.”

That, pretty clearly, puts the biological ahead of the social sense of ‘parent,’ given that Shelton also insists “Our objection … is not that same-sex parents cannot be good parents – of course they can be.”

However, ACL’s opposition to allowing orphans in particular to be adopted by same-sex couples explodes that argument. If your objection to same-sex parenting is based on the premise that children should be raised by their biological parents wherever possible, that objection can have no force in cases where there’s no possibility of that happening. As orphans by definition cannot be raised by their biological parents anymore, the adoptive “mum and dad” Francis insists upon must be parents in the social sense, not the biological sense.

So if it’s not about biological relation, why should the gender of the adoptive parents matter?

It seems we’re left with two options: either parents need to ‘model’ gender roles for their children, or men and women parent differently in ways that are jointly beneficial. Let’s consider these options in turn.

Modelling gender

Commentators like the ACL often rail against the understanding that gender is “fluid.” This opposition suggests, then, that they believe gender (or some part thereof) to be fixed, innate, and biologically essential.

But if that’s the case, then it’s not clear why kids would need parents of specific genders to act as exemplars at all. Surely innate gender traits would develop regardless of who does the parenting? Left-handed kids don’t need a left-handed parent to show them how to be left-handed; if ‘boyishness’ and ‘girlishness’ are similarly innate, why would they need to be modelled?

On the other hand, if gender roles are not innate, it might make sense that they need to be modelled and taught in order to be acquired. Or perhaps they’re more like talents: innate, but requiring training and practice to realise. But in either case, we now need to hear a reason why they should be taught and realised. We need a reason to believe gender roles matter, or at least matter enough to frame laws around who can adopt.

But what reason could that be? ‘No’ proponents clearly think gender is important, but never tell us why. And given the obvious harms gender norms inflict on people – telling women to be submissive, men to be emotionally distant and aggressive etc. – any attempt to argue in their favour would be starting from a very long way behind.

Parenting styles

That brings us to the second option: the claim that men and women parent differently, and children benefit from having one of each. Shelton has made this point too: “no matter how great a mum is, she is not a father. And however great a dad is, he is not a mother.”

But again we’re left wondering why having ‘one of each’ should be best for children, or why we’d think a lack of both harmful. Even a full-blown gender essentialist who thinks the differences between ‘fathering’ and ‘mothering’ are biologically grounded owes us an account of why that makes those differences valuable. (Aggression and selfishness might also be biologically grounded, but we don’t think that makes being aggressive or selfish morally obligatory – quite the opposite in fact).

Ideology and theology

The ACL, Marriage Alliance, and other such groups like to warn of the dangers of a vaguely-defined “rainbow ideology.” More broadly we’re told that programs like Safe Schools are “ideological,” which we’re instantly meant to understand as a Bad Thing. As Louis Althusser, the French Marxist philosopher (three strikes right there…) put it, “the accusation of being in ideology only applies to others, never to oneself.”

That, according to Althusser, is precisely how ideology works: it makes a given state of affairs seem as if it were obviously how things must be, “common sense” rather than contingent and changeable. Such as, for instance, the view that men ‘naturally’ parent in a certain way and women in a different way.

That the ACL’s view is ideological isn’t a fault, by the way. All views are. But the ‘no’ campaign need to be honest with us about their premises. They aren’t just worried about children’s access to their biological parents. As the reaction to orphan adoption shows, their real concern is that they take gender roles and heterosexual parenting to be somehow normative.

In short, they’re relying on unstated assumptions that are deeply contentious, and hard to motivate properly unless you first believe that human beings are created by God, and that this creation infuses aspects of our biology, such as gender and reproduction, with moral purpose.

That’s a view with a long and rich history. It’s also one that has no place determining policy in a genuinely secular society that rightly excludes revelation claims from public ethics.

The ‘no’ campaign has a right to prosecute its case (though not, as I’ve previously argued here, in a way that violates norms against vilification). But doing so brings with it a responsibility to be honest with the public about what they’re really arguing. Now would be a good time for them to start.

This article was originally published on The Conversation. Read the original article.

A Philosophical Dialogue (That May Or May Not Have Something To Do With Recent Events)

This piece originally appeared on The Conversation’s Cogito blog.

The UNSUBTLEBERG family – MOTHER, FATHER, BROTHER and SISTER – are driving along a lonely country road, on their way to a camping holiday.

MOTHER: Wait, what’s that?! Stop the car!

[They screech to a halt]

FATHER: Wow, that’s quite a crash! That car’s completely wrecked!

BROTHER: Hey, there’s the driver! Over there, by the side of the road!

SISTER: I think he’s still alive! Look, he’s moving!

MOTHER: [unbuckling her belt] Ok, someone grab the blankets and medical kit out of the boot, there’s heaps of bandages in there, and I’ll –

FATHER: Whoa, just a second there, honey. What – what are you doing?

MOTHER: What do you mean, ‘what am I doing?’ That person needs help! We have to help!

FATHER: Well, sure, I agree this is a terrible situation. But it’s not our fault.

MOTHER: What?

FATHER: It’s not our fault. We didn’t cause this accident, and so while it’s good that we feel sympathy here, we’re not obliged to stop and help. And, you know, we have a camping holiday to get to. Though I must say, it’s very much to our credit that we feel so bad for this poor sap. Really speaks to what great people we are that we feel so dreadful.

MOTHER: It may not be our fault – though who knows, maybe we’re all complicit in the condition of country highways – but this person needs help!

FATHER: Yes, but what matters here is that we do what’s just. And there’s no injustice in our driving past: we’re not breaking a promise, or stealing something that’s not ours, or injuring the driver ourselves, or anything like that. So, it’s ok to just drive on.

MOTHER: You don’t think it’s unjust that someone’s dying in a ditch while we’re sitting in a nice warm Audi arguing about what’s just?

FATHER: Unjust? No. Terrible? Sure. But no-one has wronged anybody here, so it’s nobody’s responsibility to do anything. Why should I fix this? I didn’t make the world. Besides, we worked hard for this Audi.

MOTHER: We bought this Audi with the money you inherited from your aunt, and you ‘work hard’ at a job you only got because your parents could afford to send you to private school where you met your future boss.

FATHER: Exactly. We’ve earned it.

MOTHER: Look, I’m not talking about obligations in that narrow sense. Circumstance has placed a suffering person into our hands, and that means we’re responsible for what happens to them, even if it’s not our fault. Sucks, but there it is. Now, will somebody please grab the first aid kit from the –

BROTHER: No way, Mum. I mean, yeah, Dad’s right that it’s sad this has happened and all, and it would be very nice of us to help, I agree. But I just think we should look after our own first.

MOTHER: What the actual f-

BROTHER: No, seriously, is that person’s name “Unsubtleberg”? Because we only have so many resources here in this car and we should use them to look after people named Unsubtleberg first.

MOTHER: Why?

BROTHER: Well, we’re the Unsubtlebergs. We have the same surname. That means we have special obligations to each other. We’re obliged to put the interests of our family members first, ahead of strangers.

MOTHER: Our ‘interest’ here being that we don’t lose an hour or two out of our camping holiday? You really think the random chance that made you an Unsubtleberg gives you the right to ignore a guy bleeding out not ten feet away from where you sit?

BROTHER: Of course I do. Also I didn’t want to say anything, but I got a papercut a few minutes ago while thumbing through this copy of Clumsy Allegory Enthusiast magazine. Don’t you think we as a family should be tending to our own wounds first before we start helping others?

SISTER: Besides, we’re four people in a mid-sized sedan. What do you want us to do: squish up? That would be less comfortable.

FATHER: And they might bleed all over the upholstery!

BROTHER: Hey look, there’s someone on a bicycle out there! The rider has gone over to the injured person. That means they got there first, so they should take the injured driver to hospital, not us.

MOTHER: On a bicycle? You can’t dink someone to hospital from the middle of nowhere. We’re in a car, so it’s up to us to help.

BROTHER: Why? Just because they need help doesn’t mean they have the right to sit in a nice vehicle. If they’re really injured, being slung over the bars of a ten-speed should be good enough for them, right? Otherwise they’re not even a patient, they’re a hitchhiker.

SISTER: Also, if we stop to help, aren’t we just encouraging accidents? If we rescue people from accidents like this, more and more people will start driving on country roads.

BROTHER: The only thing that really matters here is that we Stop The Traffic.

FATHER: [Opens window, calls to cyclist] Hey buddy, how’s the patient looking?

CYCLIST: I’m… I’m afraid they’re gone. Just a moment ago. I didn’t have any bandages or anything, there was just nothing I could do…

SISTER: Wow. Look at that poor body, lying there like that. Heartbreaking.

BROTHER: I’ll have that image seared into my brain forever.

FATHER: Gosh, really makes you think, doesn’t it?

BROTHER: Yep. Really makes you think.

[They drive on]

This article was originally published on The Conversation. Read the original article.

Why Conspiracy Theories Aren’t Harmless Fun

This piece originally appeared on The Conversation’s Cogito blog.

We’ve just seen another mass shooting in the US. This time it was a church, and race hate was the cause. Other times it’s a school, or a cinema, or a university, or a shopping mall.

By now, the script is sickeningly familiar: the numbing details of horror, followed by the bewildered outrage, the backlash that attempts to delimit and isolate and resist any wider analysis, and finally the inevitable failure to act.

But in the age of the internet there’s an extra, ghoulish twist.

Within days, and increasingly, within mere hours and minutes, a tragic event is being filtered through a worldview that insists these events are not what they seem. Conspiracy theorists leap on the tragedy as yet more evidence of dark forces manipulating the world for their own nefarious ends. The kids killed at Sandy Hook Elementary in Newtown, CT? They never existed. Their grieving families? “Crisis actors.” This is all Obama, you see, and his one-world-government comrades staging ‘false flag’ attacks to justify disarming the citizenry. He’s coming for your guns.

And yes, this process has already started around Charleston. Lunar-right media identity Alex Jones’ Infowars immediately questioned whether the shooting was a “false flag.” Others came out of the woodwork to insist the shooter’s manifesto was a fraud, and that the “shooter” was in fact a 33-year-old Marine and former child star of Star Trek: The Next Generation and Doogie Howser MD.

Did you laugh just then? It’s understandable if you did. Most conspiracy theories are, unless you’re the one pushing them, pretty absurdly funny. Conspiracy theorists are always good for a chuckle.

Until they aren’t.

In his critical introduction to conspiracy theories, the sociologist Jovan Byford notes that the academic study of conspiracy theories went through a phase where scholars treated these theories as intriguing pop-culture artefacts that were essentially harmless. In the X-Files-inflected 90s, decades out from the horrific anti-Semitic conspiracy fantasies of Nesta Webster and the Protocols of the Elders of Zion, it was easy to treat conspiracy theory as an exercise in playful postmodern irony. No-one gets hurt, right?

Tell that to Gene Rosen, who helped kids who had fled the shooting at Newtown only to be hounded with abusive phone messages from people accusing him of being a government stooge. Tell that to the parents of Grace McDonnell and Chase Kowalski, two seven-year-olds killed at Newtown, who had to endure a phone call from the man who stole the memorial to their children telling them their children never existed.

But the harmfulness of conspiracy theory arguably goes much deeper than this. It’s not just that conspiracy belief sometimes causes people to do terrible things. It’s that attachment to the conspiracy worldview violates important norms of trust and forbearance that are central to how we relate to each other and the wider world.

There’s remarkably little philosophical work done on conspiracy theory, though intriguingly most of what has been done comes from Australians and New Zealanders such as David Coady, Charles Pigden, Steve Clarke, and recently Matthew Dentith (I haven’t yet had a chance to get hold of Dentith’s new book on the philosophy of conspiracy theory, but it looks interesting). Most of it has concentrated on issues of rationality and epistemology: is it rational to believe in conspiracy theories?

Interestingly, the answer is: more rational than we might think. After all, conspiracy theories manage to explain all the loose ends (“errant data”) that the ‘official’ story doesn’t. Viewed purely as a form of inference to the best explanation, conspiracy reasoning doesn’t seem inherently illogical.

However, as Byford points out, conspiracy theory is a “tradition of explanation” (conspiracy theories don’t arise from nowhere but draw upon earlier narratives, often with deeply problematic origins) that has a shockingly bad strike rate. Real conspiracies have certainly happened – Watergate, Iran-Contra etc. – but how many have ever been uncovered by conspiracy theorists?

Academic discussions of conspiracy theory tend to focus on long-lived varieties, the ones that attract large numbers of adherents around a relatively stable core. It’s that duration that allowed Steve Clarke to analyse these theories using the framework of progressive and degenerating research programs, borrowed from the philosopher of science Imre Lakatos. In science, progressive research programs explain more and more observations and make successful predictions. When confronted by data that seems to disconfirm the theory, they posit ‘auxiliary hypotheses’ that actually strengthen the theory, by allowing it to explain and predict even more.

Degenerating research programs, by contrast, are stuck on the defensive: they don’t explain any new observations, nor make successful predictions, and are constantly having to defend themselves from new data that contradicts the theory. Clarke is right that most conspiracy theories are like that. If the various US shootings are government false flags designed to help Obama implement gun control, why is it taking so long? Shouldn’t at least one whistleblower have come forward?

A conspiracy theorist, led by the inexorable logic of their tradition of explanation, might double down at this point: the conspiracies we think we know about are just covers for the real conspiracies, while the reason conspiracy theory never seems to yield results is that the conspirators are making sure it doesn’t. Absence of evidence isn’t evidence of absence: it’s proof positive of a conspiracy.

You can see how that sort of theorising is, to a certain extent, a frictionless spinning in a void. Any observation confirms the conspiracy, and any data that seems to disconfirm it also confirms it. It’s an ‘explanation’ of observed reality that comes at the cost of making its central beliefs unfalsifiable. But that’s not the only problem.

To believe in conspiracy theory, you must believe in conspirators. To maintain a conspiracy theory for any length of time, you must claim that more and more people are in on the conspiracy. Clinging to degenerating research programs of this type involves making more and more unevidenced accusations against people you know nothing about. That’s not without moral cost. Suspicion should always involve a certain reluctance, a certain forbearance from thinking the worst of people – a virtue that is sacrificed in the name of keeping the conspiracy theory going. In the process, real human tragedy is made into a plaything, fodder for feverish speculation that does no real epistemic or practical work.

Our relationship to each other and to society as a whole also works only against the background of a generalised assumption of trustworthiness. Imagine if you believed by default that everyone is lying to you: how could you possibly function, or even communicate? Laura D’Olimpio recently wrote on Cogito about the importance of trust, and the corresponding vulnerability it requires us to accept. One crucial dimension of trust as a pervasive phenomenon in our lives is its role in our epistemology: most of what we know, we actually take on trust from the testimony of others. I only know that Iceland exists because I don’t believe, to borrow a phrase from Tom Stoppard, in a ‘conspiracy of cartographers.’

To maintain a conspiracy theory requires us to throw out more and more of our socially-mediated sources of knowledge, and to give up more and more of the trust in each other and in our knowledge-generating mechanisms that we are utterly dependent upon. On some level, the ‘conspiracy theory of society’ ultimately asks us to give up on society altogether. And that takes us to a very dangerous place indeed.

This article was originally published on The Conversation. Read the original article.

Feeding the beast: why plagiarism rips off readers too

By now you’ve likely heard about psychiatrist and columnist Tanveer Ahmed’s recent opinion piece in The Australian in which he effectively blamed radical feminism for domestic violence.

Others have explained better than I could why Ahmed’s piece was so offensive (as Clementine Ford summed it up, “It is not the job of women to absorb men’s suffering”), but one seemingly tangential fact that keeps cropping up is that Ahmed is an admitted plagiarist, having been dropped by the Sydney Morning Herald in 2012 after a Media Watch story on his habit of copying other people’s work.

Yesterday, the blogger and commentator Ketan Joshi took time out from exposing the silliness of anti-wind activists to do something no-one had apparently thought to do: check Ahmed’s Australian piece for plagiarism.

Using freely-available online tools, Joshi quickly established that Ahmed’s piece had lifted language directly from a Prospect article by Amanda Marcotte. Joshi also found further instances where Ahmed had recycled his own work. By day’s end, The Australian had dropped Ahmed.

Busting someone for plagiarism after they suggested men’s violence against women is somehow feminism’s fault for taking away power men were never entitled to in the first place might seem a bit like sending Al Capone up the river for tax evasion.

When he was caught in 2012, Ahmed admitted his copying was wrong, but situated what he did in the context of our contemporary “comment monster” that “needs to be fed”. He has a point: in the age of churnalism, and with the internet desperate for ever-more shareable content to throw into its mighty Clickhole, is copying a paragraph here and there really so bad?

Well, yes, actually, it is; but we need to understand why.

Writers are understandably highly sensitive to plagiarism, both of having it done to them and of being accused of it. Just this month I ended up offending a journalist I admire greatly for cocking an eyebrow too publicly over her piece’s resemblance to one of mine. It was a coincidence (great minds and all that) but the mere suspicion is utterly poisonous for everyone involved.

For academics, it’s even worse: plagiarism is among the worst of sins, and potentially one of the most catastrophic. It’s not so long ago that decades-old plagiarism allegations cost a Group of Eight vice-chancellor his job. An academic who presents the ideas of others as their own is violating the very integrity of the process by which knowledge is generated, and demeaning their fellow researchers. Understandably, we take that pretty seriously.

Students too face enormous consequences for plagiarising, which in the age of essay mills and Google is an irresistible temptation for many, Turnitin be damned. For teachers, it’s hard not to take plagiarism personally, as if the student is saying: “This stuff you’ve devoted your life to? It’s not important enough for me to bother even trying to care about.”

But plagiarism also falls on a spectrum, from outright copying (relatively rare, but occasionally spectacular) to sloppy referencing or missing quotation marks. For every student who’s tried to pull one over on you, there are five more who simply haven’t understood what’s expected of them and hence don’t realise that they’ve done anything wrong.

Ahmed doesn’t have that excuse. He’s been caught before, and he knew even back then that what he was doing was wrong. Still, you might reply, he isn’t writing as an academic, but as a paid columnist. He is not presenting his work as the outcome of laborious and expensive research, nor is he submitting his work to the unforgiving gauntlet of peer review.

Some of Ahmed’s infractions are actually self-plagiarism, or recycling, which doesn’t rip off another writer. Self-plagiarism is alarmingly easy to do accidentally, particularly where multiple drafts exist or one piece splits into two. Just recently I sent off two academic articles (which had begun life as a single piece) without realising I’d repeated a paragraph in each; it was just dumb luck that I caught it before it got to print.

So, why is Ahmed’s plagiarism a sackable offence?

Using the model of intellectual property we could argue that the misdeed here is selling a product that doesn’t belong to him, either because it was written by someone else, or because, in the case of recycling, he had previously sold it to another publisher.

In short, it’s a type of theft. (That might also explain why self-plagiarism doesn’t seem that bad in cases where no money has changed hands: you’re repeating yourself, but not actually stealing).

From the point of view of the victims – both writers and commissioning editors – the theft model makes a lot of sense. But I don’t think theft alone is the whole story, because plagiarism isn’t just about ownership. I’ve heard stories of students angrily insisting that a ghost-written essay “is my own work – I paid for it, so I own it!”

The effrontery of that response actually gives us some idea of why the “intellectual property” model of plagiarism doesn’t quite yield a full understanding of what’s wrong with it. Plagiarism isn’t just a form of theft; it’s also a form of insincerity.

That may sound odd: surely the plagiarist is sincerely agreeing with what they’ve copied? But sincerity is not just a matter of saying something you take to be true. The 20th-century philosopher K.E. Løgstrup spoke of insincerity as violating an “openness of speech” that we reasonably assume from others: we expect that their words are “new”. By “new” he meant that they weren’t calculated or premeditated but were spontaneous, sincere, without guile. “Old” words put up a barrier between speaker and hearer and thereby frustrate true dialogue.

The late Robert C. Solomon, in developing the idea of sex as a form of communication, argued that perverted sex is to sex as insincerity is to language: using language to frustrate communication by obfuscating or concealing what you really think or intend is perverting the very function of language.

(That analogy took Solomon in some odd directions – at one point he says masturbation is basically talking to yourself – but the idea has something to it; faking arousal is arguably a form of insincerity that violates the communicative dimensions of intimacy.)

Plagiarism, in a sense, disrupts the contact between the author and the reader. It insinuates someone else’s “old” words between us. We come to an article wanting contact with the author’s mind, not a collage of other minds they’ve assembled to hide behind.

Plagiarism is theft, but it is also a failure to, in E.M. Forster’s phrase, “only connect!” The need to feed the beast shouldn’t distract us from that task of connecting.

This article was originally published on The Conversation.
Read the original article.

We don’t need no (moral) education? Five things you should learn about ethics

The human animal takes a remarkably long time to reach maturity. And we cram a lot of learning into that time, as well we should: the list of things we need to know by the time we hit adulthood in order to thrive – personally, economically, socially, politically – is enormous.

But what about ethical thriving? Do we need to be taught moral philosophy alongside the three Rs?

Ethics has now been introduced into New South Wales primary schools as an alternative to religious instruction, but the idea of moral philosophy as a core part of compulsory education seems unlikely to get much traction any time soon. To many ears, the phrase “moral education” has a whiff of something distastefully Victorian (the era, not the state). It suggests indoctrination into an unquestioned set of norms and principles – and in the world we find ourselves in now, there is no such set we can all agree on.

Besides, in an already crowded curriculum, do we really have time for moral philosophy? After all, most people manage to lead pretty decent lives without knowing their Sidgwick from their Scanlon or being able to spot a rule utilitarian from 50 yards.

But intractable moral problems don’t go away just because we no longer agree how to deal with them. And as recent discussions on this site help to illustrate, new problems are always arising that, one way or another, we have to deal with. As individuals and as participants in the public space, we simply can’t get out of having to think about issues of right and wrong.

Yet spend time hanging around the comments section of any news story with an ethical dimension to it (and that’s most of them), and it quickly becomes apparent that most people just aren’t familiar with the methods and frameworks of ethical reasoning that have been developed over the last two and a half thousand years. We have the tools, but we’re not equipping people with them.

So, what sort of things should we be teaching if we wanted to foster “ethical literacy”? What would count as a decent grounding in moral philosophy for the average citizen of contemporary, pluralistic societies?

What follows is in no way meant to be definitive. It’s not based on any sort of serious empirical data about people’s familiarity with ethical issues. It’s just a tentative stab (wait, can you stab tentatively?) at a list of things people should ideally know about ethics and, based on what I see in the classroom and online, often don’t.

1. Ethics and morality are (basically) the same thing

Many people bristle at the word “morality” but are quite comfortable using the term “ethical”, and insist there’s some crucial difference between the two. For instance, some people say ethics are about external, socially imposed norms, while morality is about individual conscience. Others say ethics is concrete and practical while morality is more abstract, or is somehow linked to religion.

Out on the value theory front lines, however, there’s no clear agreed distinction, and most philosophers use the two terms more or less interchangeably. And let’s face it: if even professional philosophers refuse to make a distinction, there probably isn’t one there to be made.

2. Morality isn’t (necessarily) subjective

Every philosophy teacher probably knows the dismay of reading a decent ethics essay, only to then be told in the final paragraph that, “Of course, morality is subjective so there is no real answer”. So what have the last three pages been about then?

There seems to be a widespread assumption that the very fact that people disagree about right and wrong means there is no real fact of the matter, just individual preferences. We use the expression “value judgment” in a way that implies such judgments are fundamentally subjective.

Sure, ethical subjectivism is a perfectly respectable position with a long pedigree. But it’s not the only game in town, and it doesn’t win by default simply because we haven’t settled all moral problems. Nor does ethics lose its grip on us even if we take ourselves to be living in a universe devoid of intrinsic moral value. We can’t simply stop caring about how we should act; even subjectivists don’t suddenly turn into monsters.

3. “You shouldn’t impose your morality on others” is itself a moral position

You hear this all the time, but you can probably spot the fallacy here pretty quickly: that “shouldn’t” there is itself a moral “shouldn’t” (rather than a prudential or social “shouldn’t,” like “you shouldn’t tease bears” or “you shouldn’t swear at the Queen”). Telling other people it’s morally wrong to tell other people what’s morally wrong looks obviously flawed – so why do otherwise bright, thoughtful people still do it?

Possibly because what the speaker is assuming here is that “morality” is a domain of personal beliefs (“morals”) which we can set aside while continuing to discuss issues of how we should treat each other. In effect, the speaker is imposing one particular moral framework – liberalism – without realising it.

4. “Natural” doesn’t necessarily mean “right”

This is an easy trap to fall into. Something’s being “natural” (if it even is) doesn’t tell us that it’s actually good. Selfishness might turn out to be natural, for instance, but that doesn’t mean it’s right to be selfish.

This gets a bit more complicated when you factor in ethical naturalism or Natural Law theory, because philosophers are awful people and really don’t want to make things easy for you.

5. The big three: Consequentialism, Deontology, Virtue Ethics

There are several different ethical frameworks that moral philosophers use, but some familiarity with the three main ones – consequentialism (what’s right and wrong depends upon consequences); deontology (actions are right or wrong in themselves); and virtue ethics (act in accordance with the virtues characteristic of a good person) – is incredibly useful.

Why? Because they each manage to focus our attention on different, morally relevant features of a situation, features that we might otherwise miss.

So, that’s my tentative stab (still sounds wrong!). Do let me know in the comments what you’d add or take out.

This is part of a series on public morality in 21st century Australia.


This article was originally published on The Conversation.
Read the original article.

Strange bedfellows: euthanasia, same-sex marriage, and libertarianism

Originally published at The Conversation; feel free to join in the discussion there.

The suspension of Philip Nitschke’s medical registration, and the events leading up to it, has sparked one of the most heated discussions about euthanasia in Australia for some time.

What’s surprising, however, is that the debate hasn’t split along the usual pro-euthanasia versus “pro-life” lines. Instead, advocates of both euthanasia and doctor-assisted suicide themselves have been condemning Nitschke for failing to urge a 45-year-old man, who had no terminal illness but who expressed a wish to take his own life, to seek psychiatric help.

Nitschke has insisted that it wasn’t his role to try to dissuade someone from “rational suicide”:

If a 45-year-old comes to a rational decision to end his life, researches it in the way he does, meticulously, and decides that … now is the time I wish to end my life, they should be supported. And we did support him in that.

The pushback against Nitschke from euthanasia campaigners such as Rodney Syme (as well as mental health advocates such as beyondblue’s Jeff Kennett) provides a valuable lesson about what can happen when two very different ethical approaches converge on the same policy prescription; it becomes important to discuss the principles, not just the policy.

The importance of liberty

This problem isn’t unique to the euthanasia debate. Last week, newly-minted Liberal Democrats senator David Leyonhjelm announced plans to introduce a bill to legalise same-sex marriage.

As a libertarian, Leyonhjelm has called for lower taxes and a massively reduced role for government. Yet his position on marriage equality aligns him with a policy more closely associated with the political left.

Andrew Becraft/Flickr, CC BY-NC-SA

He's not the only right-wing supporter of same-sex marriage, of course. But when someone like British Prime Minister David Cameron declares "I don't support gay marriage despite being a Conservative, I support gay marriage because I'm a Conservative," he is doing something very different: he's saying that marriage is a substantive good, and that committed same-sex couples can and should be able to participate in that good.

Philosophers such as Richard Mohr have argued that committed same-sex relationships already are marriages in a substantive sense, and the law should simply recognise that.

For libertarians (for the most part), the only real substantive good is individual autonomy. Leyonhjelm doesn't argue, as far as I can see, that certain types of relationship have a special, substantive value; he simply thinks "It is not the job of the government to define relationships." (In which case, we might ask, why should governments get involved in certifying marriage at all?)

Those of us who support same-sex marriage can probably live with that tension, if it delivers the outcome we want. But the philosophical tension between approaches is still there.

And the very moral thinness of libertarianism, its refusal to trade in any ethical currency other than liberty, sits uneasily with issues of life and death, where all sorts of other moral considerations are in play.

The limits of autonomy

That’s precisely why Nitschke’s comments about suicide are so shocking. Most arguments for euthanasia come down to a concern to alleviate needless suffering.

One reason death is viewed as normally being a harm to the person who dies is that it deprives us of goods we would have enjoyed had we lived. In a situation where there is nothing left in the patient’s future but pain and loss of dignity, there are no more goods to lose.

Compassionate regard for someone whose fate is in our hands may mean accepting that helping them achieve a quicker, more dignified death is the least-worst option.

Autonomy plays a crucial role in that, of course: we need to respect the patient’s decisions regarding their treatment, including their refusal of further interventions. Compassionate concern for others may mean allowing them a degree of control over their impending death.

Having only one source of light, the libertarian landscape is dimly lit. Graham Hodgson/Flickr, CC BY-NC-SA

What Nitschke’s libertarian position does, however, is strip out everything but autonomy and reduce the whole issue to one of individual choice.

Libertarianism’s moral moonscape

If you think, as Nitschke apparently does, that the question here is simply about exercising a right to suicide, why should it matter whether someone is terminally ill or not? If someone wants to die, and they’re clear-headed enough to make competent decisions, who are we to interfere with their personal liberty in order to stop them?

And yet most of us do have fairly clear moral intuitions that the suicide of an otherwise physically healthy person, possibly with treatable mental health issues, is a terrible thing.

Libertarianism either can’t make sense of that intuition, or treats it as irrelevant.

When teaching classes on the ethical debate over euthanasia, I’ve found that students often seem to struggle with explaining why it should matter whether the patient is dying (or at least permanently debilitated) or not. Yet from a mercy perspective, it matters very much that there are, in fact, no truly good options left open.

In part, this is because mercy is a particular kind of response towards another, a response that acknowledges their distinctive value – and understanding that value is essential to understanding the full tragedy of death, of what is lost when a person dies.

Acknowledging that value means accepting some limits on autonomy where avoidable death is involved.

Respect for patient autonomy needn't involve the sort of wilful blindness Nitschke has shown. If we want to make the case for progressive reforms, such as euthanasia and marriage equality – as we should, vigorously and doggedly – we should resist doing so in terms that leave us unable to make sense of our moral environment.

Anyone seeking support and information about suicide can contact Lifeline on 13 11 14 or beyondblue on 1300 22 4636.

Dangerous ideas, honour killings and moral seriousness

This article was originally published on The Conversation. Read the original article.

Last night, after a public outcry, the Sydney Opera House's Festival of Dangerous Ideas pulled a presentation from its upcoming program. The August talk, by Sydney writer and Hizb ut-Tahrir representative Uthman Badar, was to have been called "Honour Killings are Morally Justified."

Most of us would react to a title like that with immediate revulsion. It promises a defence of something utterly indefensible. Indeed, on his Facebook page, Badar insisted he didn’t choose the title (but did consent to it) and that it misrepresented what he’d planned to speak about:

the suggestion that I would advocate for honour killings, as understand [sic] in the west, is ludicrous.

I’m rather unsettled by that “as understood in the West” qualifier, for reasons that will probably become apparent below, but Badar’s statement does suggest that the title was more a marketing hook than a real description of his argument.

And of course no-one is taking away his right to speak on the topic; having a right to free speech doesn’t mean you’re owed a turn at the megaphone.

But the Festival of Dangerous Ideas exists to consider, well, dangerous ideas. Can an idea ever be so dangerous it can’t even be discussed?

In her seminal paper "Modern Moral Philosophy," G.E.M. Anscombe famously claimed that, yes, some ideas are simply off the table:

But if someone really thinks, in advance, that it is open to question whether such an action as procuring the judicial execution of the innocent should be quite excluded from consideration – I do not want to argue with him; he shows a corrupt mind.

Anscombe was, in one important sense, wrong. In a universe that throws morally tragic situations at us with gut-wrenching regularity, thinking the unthinkable – or at least thinking about thinking about it – sometimes becomes unavoidable.

There are good reasons to accept (as I do) that torture, for instance, is always and everywhere wrong, a grotesque violation that no society should ever tolerate. But that doesn’t mean all those who entertain the idea that sometimes torture might be the least-worst option are simply amoral.

Uthman Badar in 2012. Paul Miller/AAP Image

Some are, no doubt. But others are responding to the pull of a genuine moral concern, namely, saving innocent lives. The concern may be legitimate even if the conclusion drawn is wrong.

The question here is whether the argument is made with what we might call moral seriousness. What’s right about Anscombe’s declaration that certain things are simply unthinkable is that it expresses just that moral seriousness: if you think it’s OK to kill an innocent person, you’re not attending properly to what people are and why they matter. You’re talking the language of ethics, but you’re not taking it seriously.

But could you declare, with anything approaching moral seriousness, that honour killings are sometimes morally permissible? I don’t see how.

How could you possibly construct a justification for killing someone on the basis of cultural or social norms of “honour” without completely losing sight of the wrongness of destroying a human life?

Undeniably, our cultural and religious traditions provide much of the raw content of our moral concepts. But part of moral seriousness is a commitment to the idea that morality is not simply a function of those traditions, but the standard by which we in turn judge culture or religion.

That’s asking quite a lot of us. To some degree we’re all inescapably bound up in the social, political, and spiritual traditions in which we’re raised, in ways we can barely even begin to notice, let alone transcend.

But our ethical judgments must be understood as pointing to a reality that goes beyond these things. That reality is what moral philosophy, in the broadest terms, strives to discern and articulate.

And in doing so, we acquire the tools to evaluate and critique social and cultural norms. If a culture sanctions domestic violence, or racism, or if a religion says someone should be punished for loving the “wrong” person, then that culture or religion is, just to that extent, mistaken about moral reality.

Take away the view that moral reality transcends culture, and you take away the very idea of moral progress: you end up having to say that slavery, for instance, wasn't wrong, just different.

Or you end up appealing to arguments that depend on religious revelation, and are thus useless as arguments: anyone who doesn't already share your faith in the revelation won't be persuaded. (And as you try to work out whether a thing is good because a deity says so, or whether the deity says so because it's good, you'll probably stumble into the Euthyphro problem for your trouble too.)

But maybe there’s a lost opportunity in all this. On Facebook, Badar said he didn’t choose the topic of his proposed talk:

I, in fact, suggested a more direct topic about Islam and secular liberalism (something like “The West needs saving by Islam” – how’s that for dangerous?), but the organisers insisted on this topic, which I think is still a worthy topic of discussion, for many reasons, as my presentation will, God-willing, show, hence I accepted.

Badar belongs to Hizb ut-Tahrir, an international group that seeks to establish the Caliphate. In a week when Islamophobic activists tried to stop construction of a mosque in Bendigo, here's someone offering to defend the very idea of Islamic theocracy that's such a key trope of anti-Muslim discourse.

Again, I can't see how such an argument could possibly succeed without appealing to divinely revealed premises – which, at the level of public ethics, rules it out right from the start.

But ideas are most dangerous when they’re not exposed to argument.