Drowning Mercy: Why We Fear The Boats

[Originally posted at The Conversation; I’d invite you to join the discussion there but they shut it down. Things got a bit heated, I think.]

There’s a Latin word: misericordia.

It’s usually translated “mercy” or “pity”. Thomas Aquinas took misericordia to be a kind of grief at the suffering of others, as if that suffering were our own. Alasdair MacIntyre, the leading modern exponent of Thomist virtue ethics, sees misericordia as a responsiveness to the distress of others that extends to total strangers the same concern we would normally show to those in our own family, community or country.

Misericordia in this sense is the virtue of the Good Samaritan; it’s the virtue the ancient Chinese sage Mencius illustrates with the example of rushing to help a child who has fallen down a well, not through hope of reward, but simply through concern for the child – any child.

You might say this particular virtue went missing at sea the day the Special Air Service was ordered to board the MV Tampa. We have been doing our best to keep it from surfacing ever since.

And so it has come to this. Not only are we denying asylum seekers arriving by boat any prospect of resettlement in Australia, we are publishing pictures of their anguish at being told so. Whether this is genuinely meant to deter people from risking death on the high seas, or whether, as Melbourne philosopher Damon Young put it, it’s “immigration torture porn for xenophobes”, it seems we now see the suffering of others as an opportunity to exploit rather than a call to action.

The consensus among the commentariat has been that all this will go over beautifully in an electorate that has long seen boat arrivals as a standing existential threat. As journalist David Marr points out, Rudd is merely the latest prime minister to play on our disproportionate fear of “boat people”. The moves are new but the game itself is decades old.

It’s a bizarre national obsession and it begs for answers: why are we so scared of the boats?

No doubt straightforward racism is a very big part of it. But that can’t be the whole story: if it were, why has the government not been pilloried for talking about raising the overall humanitarian intake? Why is there no comparable outrage over the considerably larger number of asylum seekers who arrive in Australia by air? If it’s about respect for Australian law, where’s the outrage over visa overstayers, a much larger cohort than asylum seekers? If it’s driven by opposition to population growth, where were all those “F—– Off, We’re Full” stickers when the Baby Bonus was introduced?

So what is it that boat arrivals symbolise that other forms of arrival don’t? Well, here’s a stab at an answer: they remind Australians that we haven’t earned what we’ve got.

Consider John Howard’s “We will decide who comes to this country and the circumstances in which they come” line. That same message has been built into all our rhetoric on boat arrivals ever since: we don’t have to take you, and unless you come here entirely on our terms, we won’t. Form an orderly line, jump through this set of hoops, and maybe we’ll let you in. You’re welcome.

This emphasis on sovereignty over mercy serves to bolster the idea that “our way of life” is somehow ours by right, and so within our gift to bestow or withhold however we see fit. A gift, by its nature, must be freely given and gratuitous; it cannot be demanded of us. And it must be ours to give; we can only share what we ourselves are entitled to.

Except, of course, we haven’t earned such an entitlement at all.

Consider what Howard’s former chief of staff Arthur Sinodinos told a Q&A audience this week:

…our obligation…is to give people protection. It is not to guarantee them a first world lifestyle in every case when they come to Australia.

But then, by what right are we guaranteed such a lifestyle? What law of nature or reason determines that we get to live in luxury simply by virtue of the accident of birth?

Consider the slogan bandied about during the Cronulla riots: “I grew here, you flew here” – as if it were a personal achievement to be born on this part of the earth’s surface at this point in history. It’s not. It’s sheer dumb luck. It’s nice to be lucky, but it’s no merit.

I suspect on some level that’s what boat arrivals remind us of: the radical contingency of everything we have. It’s not just that we’re repulsed by undeserved misfortune – “there but for the grace of God go I”; “really makes you think, doesn’t it?” – we’re deeply unsettled by undeserved good fortune too.

As a species we always have been. That’s why we invented doctrines like karma, so we could insist that those born into abject misery or obscene privilege must, somehow, be getting their just deserts. In the modern West we have a similar myth: that “anyone can make it” if they just work hard enough, and so the poor must simply be lazy, undeserving – as if talent, and even the capacity for hard work itself, weren’t dealt out by random chance.

Acknowledging such radical contingency knocks the ground from under our feet. It suggests our claim to our prosperity ultimately rests on happy accident rather than cosmic justice. No amount of “Cronulla capes” and bumper stickers and half-remembered tales of Bradman riding Phar Lap to victory at Gallipoli can change that.

K.E. Løgstrup, a 20th century Danish moral philosopher who deserves to be much better known outside Scandinavia than he is, argued that once we see the gratuitousness of what we have, we can no longer stand on our own rights in order to begrudge others our help. Our individual sovereignty is shattered by the realisation that everything we have is, ultimately, a gift we’ve received, not an entitlement we’ve earned.

Perhaps that’s why the boats scare us: they remind us of a far more demanding ethics lurking behind our comfortable norms of reciprocity and exchange. Perhaps that’s at least part of why we go to such lengths to dehumanise, to demonise, to refuse to see asylum seekers in their full humanity.

None of what I’ve just written fixes the problem or even offers any policy suggestions whatsoever. Understanding our motives won’t stop people dying at sea – as I write this, yet more lives have just been lost.

But the moral demand to respond with misericordia hasn’t gone away. And we cannot act morally, or even see others properly, if we’re more concerned about justifying our own privilege.

Ethics is a jealous God: self-regulation vs self-sacrifice

St Olaf College, June 2013

[Originally posted at The Conversation; feel free to join in the discussion there]

Late one night recently I got a very frustrated email from a close friend. He’d just spent the evening arguing with investors about whether they needed to take ethics into account in their investment decisions.

Oh no, his colleagues had said: after all, we’re in the business of making money, not value judgements. Besides, no-one can agree on what’s morally acceptable anyway, so such decisions should be left to individuals.

This is something I’ve been noticing a lot lately, both in class and online: the assumption that because there is disagreement over the content of ethics, then ethics itself must be a wholly subjective matter, a matter of individual choice – personal taste with a self-righteous sheen.

So, I pounded out a quick response to the email: ask these investors what they’d think about someone taking all their money off them. Would they merely feel annoyed at the inconvenience, or actually wronged, as if someone had unjustly violated their rights?

Moral relativists, at least of a certain stripe, tend to retreat pretty quickly at that point. More often than not they tacitly subscribe to, and indeed rely upon, a liberal framework according to which everyone should be allowed to pursue their own conception of the good, so long as they don’t hurt anyone else. That liberal framework is explicitly put together to accommodate disagreement about right and wrong, but it is nonetheless at least minimally moral.

This liberal principle of non-interference is precisely what makes the idea of self-regulation so attractive. We want industries, companies and individuals to minimise the harm their activities cause while pursuing their rational self-interest. If they can do this without coercion from government, then so much the better, no?

The obvious rejoinder when a given industry or sector claims authority to regulate itself is that they don’t always do a particularly good job of it. There’s an obvious tension between self-interest and self-regulation that’s only partially dealt with by saying it’s in a business’ interest to play nice, if only to avoid reputational risk.

Immanuel Kant famously gave us the image of a shopkeeper who never over-charges his customers simply because doing so might give him a bad name and hurt his trade. Kant insists such a merchant is not, in fact, acting morally at all. Still, you might say, surely a person or business acting honestly from selfish motives is better than their not acting honestly at all?

But the problem goes deeper than this. The irritating thing about ethics is that its demands are categorical in a way that other kinds of norm aren’t. No amount of beauty, convenience or benefit can outweigh a moral claim. You can’t make enough profit out of doing something morally impermissible to make it okay again, unless, perhaps, the profit itself provides a better moral outcome for some other reason.

This would be fine if what is in our interests and what is morally right always coincided, but they don’t. Ethics often requires us to act against our overall self-interest, not just defer short-term rewards (immediate profit) in order to secure greater long-term goods (reputation). Indeed, ethics can even require us to sacrifice everything up to and including our very existence. The right and the good are jealous gods, and they can demand self-sacrifice in a way wholly incompatible with an “enlightened self-interest” model of self-regulation.

Being prepared to set aside self-interest in this categorical way may simply be expecting too much of any company. A public company’s legal duty, indeed its very reason for being, is to generate profit for its shareholders. It will of course operate within legal parameters, but what’s legal and what’s right frequently don’t map onto each other perfectly.

“We should stop doing x because x-ing will make us unpopular and reduce profits” may be good business, but it’s a prudential rather than an ethical norm. “We should stop doing x just because x-ing is wrong”, however, clearly is an ethical norm – and just for that reason can’t be fitted into a framework that takes self-interest as its overriding value.

And acting ethically might involve more than just saying that some practices are unacceptable. It might involve acknowledging that some companies, organisations and industries simply should no longer exist.

Recently the farming lobby managed to shut down a major component of an anti-factory-farming campaign by Animals Australia. The various farmers’ groups have sought, both in their public statements and in a social media campaign, to claim the ethical mantle of “animal welfare”:

Let’s be very clear: Australian farmers are committed to animal welfare. Our farmers raise, care for and nurture their stock and care deeply about what happens to them. We understand that improvements need to be made, but farmers, working with respected animal welfare groups, the community and governments, will be the ones who make them.

The claim here is that farmers are already acting ethically – indeed are motivated by concern and a desire to nurture – and should be trusted to regulate and improve their own practices.

The statement goes on to say that Animals Australia’s “real agenda” is “to end animal agriculture in this country”. No further comment is added; the subtext seems to be that, to the National Farmers’ Federation (NFF), such an outcome is simply unthinkable. The very suggestion is, in their mind, self-refuting.

To use a metaphor that I’m quite sure will never be used this appropriately ever again: turkeys don’t vote for Christmas.

What ethics may demand (and I stress may here, as I’m not actually arguing one way or the other about the legitimacy of animal farming, or any other industry or practice for that matter) is not just upping standards, but shutting down. There’s an argument to be had there, justifications that need to be offered, objections to be countered. But by insisting on self-regulation, the NFF is effectively calling the outcome of that argument ahead of time. Its own survival as an industry – precisely what ethics might ask it to give up – is non-negotiable.

Can self-regulation ever work? Quite possibly. But only if the parties are prepared to put ethics in its rightful place as the highest demand, instead of making it, at best, one priority among others. Morality cannot be a mere marketing tool or cultural ethos, respected and lauded but ultimately subordinate to self-interest and indeed to survival.

Self-regulation demands turkeys that can vote for Christmas. That’s one species we don’t seem to be raising many of these days.

A Horse is a Horse: Sexism vs. Speciesism

‘Year-in-review’ articles are meant to get people talking. Or to fill out column inches during a quiet time of year. Either way, I doubt the Daily Telegraph’s Phil Rothfield and Darren Hadland were expecting the backlash they received just before Christmas, when they declared racehorse Black Caviar the Tele’s ‘Sportswoman of the Year.’

Not surprisingly, Twitter rounded on Rothfield almost immediately, with media figures such as Wendy Harmer and Tara Moss weighing in on a decision that looks, at best, obliviously sexist. Rothfield telling Harmer to ‘pull her head in’ on the basis that ‘Caviar is a girl’ didn’t help.

The astute observer may have noticed that whatever else she is, Black Caviar is not a woman. She is female, but she is not a woman (or a girl, for that matter). To horribly oversimplify, ‘female’ refers to the biological category of sex, while ‘woman’ refers to the social category of gender. A horse has a sex, but it does not have a gender. ‘Woman’ is a specifically human category, one that involves being situated in a network of meanings that simply aren’t available or applicable to nonhuman animals.

And this is why awarding the title of ‘Sportswoman of the Year’ to Black Caviar is so galling: it reduces women to their female bodies. The decision suggests there were no actual women worthy of the title, so we’ll just pick the nearest deserving female as if that’s the same thing. That, in turn, collapses ‘woman’ into ‘female’ and thus essentialises gender. This is the old trick of sexism: women come to be defined by their biology, men do not. As Simone de Beauvoir noted, both men and women secrete hormones, but men are never accused of thinking ‘hormonally’ no matter how much testosterone is involved.

In the context of the position of women in sport, the Tele’s decision looks tin-eared at best and sinister at worst. Harmer took to her blog to point out how insulting this decision looks given “the utter bullshit [sportswomen] have to cope with – year in and year out.” I’ve no doubt she’s right. The effect of the article is clearly belittling, playing to the idea that women’s sport is necessarily boring, secondary, less legitimate. In sport as in other aspects of life, women are, as de Beauvoir put it, made into the ‘other.’ The defaults of the species are implicitly set to ‘male.’

What was interesting though was the sheer incredulity displayed by many at the very idea that a horse could even be considered as competing with humans. ABC journo Jeremy Fernandez, for instance, tweeted that Australia II also ‘stopped a nation’ as Black Caviar had done, but that didn’t make it a sportswoman. No-one, as best I can tell, pulled Fernandez up for comparing a sentient nonhuman animal to a yacht, equating a horse with a mere object.

Given the perfectly valid focus on gender, no-one, it seemed, stopped to ask: why shouldn’t a horse be in the running (sorry) for recognition alongside human sportspeople? If we’re going to laud extraordinary feats of strength and endurance, why must the only animals to be so rewarded be Homo sapiens? Perhaps there are valid answers to that question, but what struck me was that no-one even thought to raise it.

We find ourselves caught here between sexism and speciesism. We’ve finally come to a point where we can recognize the former, though clearly we still have a very long way to go. Speciesism, however, barely even registers.

The moral progress of humanity has been largely a process of coming to see the wrongness of discrimination on the basis of morally irrelevant differences – gender, race, sexuality, and so on. With regard to how we treat nonhuman animals, the question is basically this: which features that distinguish humans from animals are morally relevant and therefore justify differential regard? What is it that humans have that animals don’t that justifies putting our interests ahead of those of nonhuman animals, in what ways and to what extents?

As the last few decades of animal ethics have shown, these turn out to be deeply complex questions, and there has been no shortage of answers put forward. I’m not denying there are such relevant features, by the way, as if human and nonhuman animals were morally equal. The capacity for rationality and self-reflection, for instance, seems to make a vast difference morally. But is it an absolute difference? And does it matter in the same way in all contexts?

Let’s stick to what we’re rewarding here: sporting performance. We’re not talking about ‘best and fairest.’ We’re simply talking about who can run the fastest or score the highest. Of course most sports involve a degree of conceptualisation that is not available to nonhuman animals – but if we’re going to laud individuals for doing physical things that almost no other individuals of their species can do, doesn’t Black Caviar fall into that category? Why is a human running really fast around a track qualitatively different, in a morally relevant way, from a horse doing the same thing (with ‘really fast’ relative to species-average in each case)?

Perhaps the simplest, most elegant solution for the Telegraph would have been to declare Black Caviar “Athlete of the Year.”

That would have done the Tele’s Sportsman of the Year out of his award too. But if Black Caviar trumped every human sportswoman in 2012, I dare say a good argument could also be made for her beating Rothfield and Hadland’s pick, Michael Clarke.

Mind you, I’m not much of a cricket fan. Though if Clarke had to play with someone sitting on his back, whipping him every time it looked like he was about to be run out, I’d probably watch. And I quite like the idea that Ricky Ponting is now living out his days in a nice paddock somewhere, with all the apples and sugar cubes he could wish for.

So this solution would have avoided the obnoxious sexism of the Tele’s conflation of ‘female horse’ with ‘woman’ whilst simultaneously taking Black Caviar seriously as an athlete, regardless of her species. Win-win, no?

Of course, maybe we’re not prepared to take nonhuman animals seriously as athletes. If that’s the case, perhaps we should stop forcing them to perform athletic feats for our entertainment? Just a thought.

I tweet dead people: can the internet help you cheat your maker?

[Originally published at The Conversation; feel free to join in the discussion there]

Can you believe it’s been a year already? I’m sure we all remember where we were when we heard the terrible news we’d lost Gregg Jevin.

You know, Gregg Jevin? The Gregg Jevin?

Don’t worry if the name doesn’t ring any bells. There never was a Gregg Jevin. Yet he “died” on 24th February last year, in a tweet from British comedian Michael Legge:

Sad to say that Gregg Jevin, a man I just made up, has died. #RIPGreggJevin

— Michael Legge (@michaellegge) February 24, 2012

Within hours, #RIPGreggJevin was trending on Twitter, with celebrities, companies and ordinary punters rushing to express “condolences”. Some of it was genuinely hilarious. Even the odd philosopher had a go at it.

The Jevin affair suggests something genuinely interesting about “Twitter mourning”: we’ve been doing it long enough that it’s developed its own conventions, which users know how to satirise when given the chance. I’m sure it’s no coincidence that Legge’s tweet went viral only days after Twitter’s outpouring of grief for Whitney Houston. The “death” of Gregg Jevin briefly gave people a sandbox in which to play around with the language of online mourning without causing genuine offence.

Now, just when we’d somehow managed to pick up the pieces and move on without Gregg, a startup called _LivesOn claims it will change the way Twitter users interact with the dead.

Details are scant, but the idea seems to be that the service will use an algorithm to generate new tweets on behalf of dead users, tweets that sound like those the user themselves posted in life. The net effect is that, as _LivesOn put it, “When your heart stops beating, you’ll keep tweeting.”
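_LivesOn hasn’t published any technical details, so what follows is pure speculation on my part; but the simplest version of such an algorithm would be something like a Markov-chain text generator, which learns which words you tended to use after which, then remixes them into new sentences. A toy sketch in Python:

```python
import random
from collections import defaultdict

def build_chain(tweets, order=2):
    """Map each run of `order` words to the words that followed it."""
    chain = defaultdict(list)
    for tweet in tweets:
        words = tweet.split()
        for i in range(len(words) - order):
            chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate_tweet(chain, max_words=20):
    """Walk the chain from a random starting state to produce a new 'tweet'."""
    key = random.choice(list(chain.keys()))
    words = list(key)
    while len(words) < max_words and key in chain:
        words.append(random.choice(chain[key]))
        key = tuple(words[-len(key):])
    return " ".join(words)

# Feed in a user's archive of old tweets; get back a pastiche.
archive = [
    "off to the pub for a quiet one",
    "off to the gym and then the pub",
    "the pub quiz was a disaster again",
]
print(generate_tweet(build_chain(archive)))
```

Whatever _LivesOn is actually doing is presumably far more sophisticated, but the point stands either way: anything along these lines produces a pastiche of you, not you. Keep that in mind for what follows.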

It’s hard to know how seriously to take these claims. Representatives of _LivesOn deny that it’s a publicity stunt, describing it as a project jointly conducted by an ad agency and a university. But even if it’s deadly serious (sorry), commentators have questioned whether the technology could possibly deliver what it promises.

This is not the first time a company has held out the prospect of perpetuating an online presence after your demise. Intellitar’s “Virtual Eternity” service – currently closed, supposedly for further development – offers an animated avatar that can interact with users, using artificial intelligence to “answer” questions as you would have done. The results, frankly, aren’t impressive, at least not yet.

The interesting point is not whether these technologies will ever be any good, but that they’re being discussed at all. What does it say about us that we’re reaching for this kind of digital immortality?

It seems silly to think you could somehow survive your death through a service that tweets on your behalf. But consider how much of our communication with others is now mediated through social media: might there be some sense in the idea that extending your online presence after your death would keep you in existence somehow?

Yes and no. In research published last year, I looked into the increasingly common practice of memorialising the profiles of dead Facebook users. For many of us, Facebook has become a large part of our presence in the lives of others. When Facebook users die, their digital traces persist; through them, the dead arguably do retain something of their presence in our lives. Perhaps that’s why people continue to post on the walls of dead Facebook users long after their passing.

So social media can, in one sense, help the dead remain with us. But why isn’t this thought much comfort?

To answer that, I suggest we consider some recent developments in the philosophy of personal identity. Discussions in this field have increasingly begun to differentiate between the “person” and the “self” (or in a slightly different version, the “narrative self” or “autobiographical self” and the “minimal self” or “core self”).

The distinction is applied somewhat differently by different theorists, but it goes roughly like this: the self is the subject you experience yourself as being here and now, the thing that’s thinking your thoughts and having your experiences, while the person is a physical, psychological and social being that is spread out across time.

One of the questions I focus on in my work is how these two kinds of selfhood interact, and the ways in which they can come apart. In this case, something like _LivesOn might in fact extend the identity of your person, albeit in a very thin and diminished sense. If you’re a regular tweeter, it might serve in some small way to enhance your ongoing presence in the life of other people.

But it doesn’t extend your self. There’s no experience to look forward to, no subject at the core of your tweets. Perhaps it helps you live on for others, to some small degree, but not for yourself.

So perhaps we shouldn’t hope for too much from our posthumous online presences. Perhaps we should leave posterity to worry about itself and simply live the best we can here and now.

It’s what Gregg would have wanted.

Philosophy under attack: Lawrence Krauss and the new denialism

[Originally posted at The Conversation; feel free to join in the discussion there]

I really shouldn’t let myself watch Q&A. Don’t get me wrong, the ABC’s flagship weekly panel show is usually compelling viewing. But after just a few minutes I end up with the systolic blood pressure of Yosemite Sam and so fired up I can’t get to sleep for ages afterwards. Good thing it’s on when the kids are in bed, or they’d pick up all sorts of funny new words from Daddy yelling at the screen.

I’m expecting more of the same this week when physicist and author Lawrence Krauss is on the panel. Krauss is an entertaining commentator and science populist – if often quite provocative, especially on matters of religion. He should be fun to watch.

But Krauss has also been at the forefront of what looks like a disturbing recent trend among media-savvy scientists. There has rightly been a lot of concern lately about “science denialism”. But many of those sounding the alarm themselves seem to be engaging in what we might call philosophy denialism.

The subtitle of Krauss’ 2012 book A Universe from Nothing: Why There Is Something Rather Than Nothing is tantalising, offering to answer one of philosophy’s fundamental questions. Not everyone liked Krauss’ answer, however.

In response to a critical review by philosopher of physics David Albert, Krauss called Albert a “moronic philosopher” and told the Atlantic’s Ross Andersen that philosophers are threatened by science because “science progresses and philosophy doesn’t”.

Krauss’ gripe with philosophy seems to be, as Massimo Pigliucci eloquently points out, that philosophy hasn’t solved scientific problems. The same charge is levelled even more bluntly by none other than Stephen Hawking, who in 2010 declared philosophy was dead.

According to these scientists, philosophy and physics were chasing the same prize – an understanding of the ultimate nature of reality – and physics simply got there first.

Yet this misses the point of what philosophy does, and how it relates to and differs from other disciplines. “Is 2 to the power of 57,885,161 minus 1 a prime number?” or “Did Richard III murder the princes in the tower?” are questions for mathematics and history.

“What is a number?” or “Does the past exist?”, however, are not. It’s when these fundamental conceptual questions arise that the philosophical rubber hits the road.

So when Krauss complains that philosophy of science is unjustified because it doesn’t influence the way physicists do physics, he’s assuming both that philosophy only matters if it helps to answer the same questions scientists ask, and that questions only matter if they can be answered in a certain type of way. Those assumptions need to be argued for – and that’s a philosophical task.

In fairness, Krauss has walked back a lot of his rhetoric in the months since, even expressing admiration for some forms of philosophy – but he still insists that when it comes to his question of why there is something rather than nothing, the claims made by philosophers are “essentially sterile, backward, useless and annoying”. The empirical exploration of reality, he tells us, changes “our understanding of which questions are important and fruitful and which are not”.

But what makes an explanation sterile or fruitful? Should an explanation’s being “annoying” (or elegant for that matter) matter, and in what way? Don’t look now, but in asking questions like that we’re doing philosophy, even if Krauss doesn’t seem to think we need to. And he’s not the only one.

It’s usually assumed that science is descriptive rather than normative; it tells you how the world is, not how it should be. But recently Sam Harris and Michael Shermer have each argued that science can indeed answer moral questions, with Shermer lamenting that science has “conceded the high ground of determining human values, morals, and ethics to philosophers”. Both start from the rather Aristotelian assumption that the object of morality is securing the flourishing of conscious beings, and that science can tell us how to do that. But this simply puts off the question of why we should value flourishing at all, rather than answering it.

But, you might ask, does any of this really matter? After all, if scientists are too busy trying to cure cancer to wade through Heidegger’s Being and Time, do we really care?

I don’t want to overplay the gulf between scientists and philosophers, or between disciplines like theoretical physics and metaphysics; equally, there’s no question many philosophers (including me) could do with greater scientific awareness. But when prominent scientists start dismissing the questions philosophy asks as irrelevant – not just scientifically irrelevant, but irrelevant as such – I think we do have a problem.

Harris and Shermer are right that moral decisions need to take empirical data into account. Our ability to be effective moral beings depends upon our capacity to understand and respond to the world around us. Moral philosophy, in that sense, cannot ignore science. But science, equally, needs to be aware when it is straying beyond its own borders in problematic ways. Those who are concerned to defuse the charge of “scientism” need to watch out for precisely this sort of overreach.

If it seems from the outside like philosophy doesn’t make progress, perhaps that’s because our questions haven’t changed that much since Socrates’ day: what is the nature of existence? What do we know, and how do we know that we know it? How are we to live?

These aren’t questions we can simply answer and move on. But equally, they don’t go away just because we ignore them.

Love Thy Neighbour: religious groups should not be exempt from discrimination laws

[Originally posted at The Conversation – feel free to join in the discussion there]

A little over a century ago, our first prime minister, Edmund Barton, told our first parliament that “the doctrine of the equality of man was never intended to apply to the equality of the Englishman and the Chinaman”. Barton had the abstract principle right, but he couldn’t see non-Europeans as the sort of people to whom it could apply. He could not see their inner lives, their concerns, passions and beliefs, as being as morally significant as his own.

In his recent book, The Better Angels of Our Nature, Steven Pinker points out that for all our talk of moral decline, we are actually living in the least violent, least cruel and most peaceful era in human history.

Much of this progress, I’d suggest, is due not to better moral reasoning or principles, but to our improving moral vision.

We have come a long way since that first parliament, and we’re learning – gradually, fitfully, and painfully – to see what Barton could not see, to view others as no less worthy of our regard on the basis of irrelevant differences such as race, religion, or sexuality.

But what happens when the competing demands of religious and sexual identities collide in the public sphere?

The Gillard government has announced it will preserve existing exemptions in anti-discrimination legislation, allowing religious organisations to refuse to employ LGBT individuals – and indeed anyone else whose very presence might cause “injury to the religious sensitivities of adherents of that religion”. The Australian Christian Lobby has hailed this as a win for “religious freedom”.

This is disingenuous at best. The issue really has nothing to do with the free exercise of one’s religion and everything to do with denying the moral depth of gay and lesbian lives.

To be clear, we are not simply talking about issues of job performance. It’s hard to see how being gay would be an impediment to doing any of the diverse jobs available in the various faith-based charities, schools, hospitals and universities around Australia.

Rather, groups such as ACL are defending the “right” to refuse to hire someone, not because their actions might be contrary to the mission or ethos of a religious employer, but because who they are might offend against someone’s “religious sensitivities”. It is a rejection of who the employee is, not what they do.

This fact is sometimes obscured by calling homosexuality a “lifestyle” – a deliberately shallow, superficial word designed to deny the profundity of someone’s core relationships. It implies that homosexuality is some sort of inessential add-on rather than a defining feature of the person. My relationship is a central, non-negotiable part of who I am; your relationship is just some stylistic choice, like installing marble benchtops or wearing Crocs.

Sometimes, instead, discrimination is justified by claiming that what’s hated is the sin, not the sinner. This may be a sincerely held view, but it too ignores just how deeply integral romantic and sexual love is to our practical identities. It denies the same depth to same-sex and heterosexual relationships, and so implicitly refuses to acknowledge the significance of what it claims to be offended by.

The question here isn’t whether (some) religious believers are right or wrong to be offended by homosexuality in this way. Nor is it whether we should respect the deep religious convictions of believers.

Rather, it’s whether society is obliged to respect this sort of offence enough to override other moral considerations. And this takes us to the clash between private faith and public moral reasons.

Religious faith, however it finds expression, is an essentially inward, private state of profound certainty. It may involve reasons, but these are not the sort of reasons that can be shared with non-believers. I doubt anyone has ever been moved from atheism into genuine religious faith (as opposed to mere lip service) by force of rational argument alone. Trying to argue someone into religious belief is like trying to make someone fall in love with you by telling them all the reasons why they should: it won’t work, and even if it did, it wouldn’t be because of your arguments themselves but because of something else.

For some believers, then, there may be an unshakable inner certainty that homosexuality is immoral. It would be wrong for those of us who disagree to simply trivialise that view, as it may be linked to fundamental beliefs that are central to the believer’s conception of him or herself and what a good life comprises. I’ve heard Christians say how torn they are between their love for gay friends and family and their belief in the authority of scripture. I don’t doubt their sincerity, nor the difficulty it presents them.

But when it comes to matters of public ethics, beliefs that are grounded in religious faith simply don’t cut it on their own. Believers and non-believers have to share a society, and that means our moral discourse has to be based on premises it’s at least possible for us to agree upon. “I find working with gay people offensive because God says homosexuality is wrong” is simply not such a premise.

It may be an important fact about the lives of some believers, but it doesn’t justify employment discrimination. Surely we’ve come at least far enough to see that.

Between Guilt and Innocence: 2Day FM and the Moral Blame Game

[This post originally appeared at The Conversation. Feel free to join in the discussion there.]

This past weekend, we saw the media – old, new, and social – trying to digest the indigestible. The death of Jacintha Saldanha, the British nurse who apparently took her own life after being caught up in a prank phone call from 2DayFM DJs Mel Greig and Michael Christian, is one of those stories that is so sad, so utterly pointless and bewildering, as to leave us gasping for something, anything, coherent to say about it.

There’s also something frighteningly random in how things seem to have played out: a simple, farcical prank call from the other side of the planet, and suddenly a 46-year-old woman – a mother of two and by all accounts a dedicated and well-regarded professional – is dead. Her death appears to be, from what we know at this stage, the result of decisions that had nothing to do with her. No one set out to cause this, no one could have seen it coming, but the feeling remains that someone – Greig and Christian, Austereo management, the hospital, the media, the cult of celebrity itself – must be to blame.

Blaming, as it happens, is something the internet is very good at. Within hours, the 2DayFM Facebook page was inundated with angry messages. Twitter lit up with outrage. Some of the response has been decidedly sinister.

But among the calls for retribution there were others defending the presenters and expressing concern for their welfare. The DJs clearly didn’t intend for anything like this to happen. As Peter FitzSimons, writing for Fairfax, points out, such pranks are an everyday part of the FM radio repertoire.

FitzSimons doesn’t stop to ask whether it’s ever OK to misrepresent yourself in order to make someone the unwitting object of fun, let alone whether calling a hospital to gain private information on a patient’s condition is ever acceptable.

But FitzSimons’ article is revealing in another way. He draws on his “garden-variety legal studies” to remind us that the test of negligence is “whether or not a ‘reasonable man’ might have had any expectation that their actions would have resulted in the kind of tragedy we have seen.”

Surely, we can’t accuse Greig and Christian of negligence given there’s no way they could have predicted this outcome?

But legal responsibility isn’t the same thing as moral responsibility. Courts have to make clear decisions, a practical purpose for which they need artificial rules and procedures. The “reasonable man” test provides a rough but workable way to delimit responsibility: if the consequences of our action are so remote that a reasonable person could not have predicted them, then we’re not answerable for those consequences.

That might be good enough for a courtroom, where we need final decisions about who is responsible for what. But outside this artificial context, the boundaries of moral responsibility seem far more ambiguous.

Indeed, there is an uneasy grey area between moral guilt and complete innocence. Philosophers have been troubled by this ever since Bernard Williams coined the term “moral luck” more than 30 years ago. Strictly speaking, “luck” shouldn’t have anything to do with morality: since Kant, the standard view has been that you’re only responsible for what you do, or could have done but failed to.

Yet in fact, it’s alarming just how much of what we praise and blame people for depends upon factors beyond their control. We condemn the coward, but no-one willingly chooses cowardice. We regard the drunk driver who kills a pedestrian as more culpable than one who doesn’t, even though it’s only random chance that separates the two cases. We tolerate, and indeed reward, an uneven and unearned distribution of talents. The idea that we’re only responsible for what we can control seems to be strained at every turn by our moral intuitions and practices.

And as Williams notes, there is a phenomenon of “agent-regret,” a sense that it would have been better if we had acted differently. Such agent-regret remains even when we know what has happened is not, strictly, our fault. There is, according to Susan Wolf, a “nameless virtue that urges us, as a matter of both moral character and of psychic health, to recognise and accept (to an appropriate degree) the effects of our actions as significant for who we are and for what we should do.”

Journalist Jane Hansen’s revealingly honest piece in The Australian illustrates the grey zone of agent-regret perfectly. It’s a sobering reminder of how unclear the boundaries between guilt and innocence, between culpable agent and victim of circumstance, often are.

That’s not what we want, of course. We want to affix blame and move on. We want to carve the world up into the innocent and the guilty and hand out their just deserts.

But we can’t. Think about it for more than a tweet-length and suddenly even our most basic ideas about the limits of responsibility fail us. Who is to blame? Are Greig and Christian to be pilloried or pitied? Can it be both? Neither? It’s simply indigestible.

Conference Announcement: Kierkegaard in the World

Nyhavn, March 2008

2013 promises to be a banner year in Kierkegaard Studies. It’s SK’s 200th birthday on 5th May, and there are conferences, seminars and publications planned throughout the year, in Denmark and around the world.

As part of the bicentenary, Jeff Hanson and I have been organizing a conference to be held at ACU (with support from the Centre for Citizenship and Globalization at Deakin), 16th-18th August 2013, and we’re very pleased to issue a call for papers:

Kierkegaard in the World celebrates the 200th anniversary of Kierkegaard’s birth by examining the ways in which the world figures in his thought, and the ways in which his thought has entered the world.

Kierkegaard’s work is rightly seen as a corrective of “worldliness,” but he is equally attuned to the necessity that the life of faith appear in the world (not in monastic retreat from it). This conference aims to explore how worldly life is transformed by Kierkegaard’s insights. How does the Kierkegaardian subject appear in the world? What about the incognito: Is it a form of strict invisibility or does its counter-worldliness paradoxically show up in the world? Kierkegaard is a thinker of transcendence, but is there a Kierkegaardian theory of immanence? The priority of subjective truth is obvious in Kierkegaard’s philosophy, but what of his theory of objective truth? How would subjective truth make its way in the world? How would it be embodied or transmitted? What implications does Kierkegaard’s thought have for political orders, cultural artefacts, communicative strategies, or the founding and perpetuation of traditions? How might Kierkegaard’s work intersect with various world religions? And how has Kierkegaard’s own thinking been translated, transmitted, and given expression in contexts across time and space?

We invite papers of not more than 3,000 words that confront these and related questions. Abstracts of no more than 300 words should be sent to Dr. Patrick Stokes and/or Dr. Jeffrey Hanson no later than 15th March 2013.

Keep an eye on the conference website for updates – including some exciting forthcoming keynote announcements…

No, You’re Not Entitled To Your Opinion

Vienna, April 2009

[Originally posted at The Conversation – feel free to join in the discussion there]

Every year, I try to do at least two things with my students at least once. First, I make a point of addressing them as “philosophers” – a bit cheesy, but hopefully it encourages active learning.

Secondly, I say something like this: “I’m sure you’ve heard the expression ‘everyone is entitled to their opinion.’ Perhaps you’ve even said it yourself, maybe to head off an argument or bring one to a close. Well, as soon as you walk into this room, it’s no longer true. You are not entitled to your opinion. You are only entitled to what you can argue for.”

A bit harsh? Perhaps, but philosophy teachers owe it to our students to teach them how to construct and defend an argument – and to recognize when a belief has become indefensible.

The problem with “I’m entitled to my opinion” is that, all too often, it’s used to shelter beliefs that should have been abandoned. It becomes shorthand for “I can say or think whatever I like” – and, by extension, that continuing to argue is somehow disrespectful. This attitude feeds, I suggest, into the false equivalence between experts and non-experts that is an increasingly pernicious feature of our public discourse.

Firstly, what’s an opinion?

Plato distinguished between opinion or common belief (doxa) and certain knowledge, and that’s still a workable distinction today: unlike “1+1=2” or “there are no square circles,” an opinion has a degree of subjectivity and uncertainty to it. But “opinion” ranges from tastes or preferences, through views about questions that concern most people such as prudence or politics, to views grounded in technical expertise, such as legal or scientific opinions.

You can’t really argue about the first kind of opinion. I’d be silly to insist that you’re wrong to think strawberry ice cream is better than chocolate. The problem is that sometimes we implicitly seem to take opinions of the second and even the third sort to be unarguable in the way questions of taste are. Perhaps that’s one reason (no doubt there are others) why enthusiastic amateurs think they’re entitled to disagree with climate scientists and immunologists and have their views “respected.”

Meryl Dorey is the leader of the Australian Vaccination Network, which despite the name is vehemently anti-vaccine. Ms. Dorey has no medical qualifications, but argues that if Bob Brown is allowed to comment on nuclear power despite not being a scientist, she should be allowed to comment on vaccines. But no-one assumes Dr. Brown is an authority on the physics of nuclear fission; his job is to comment on the policy responses to the science, not the science itself.

So what does it mean to be “entitled” to an opinion?

If “Everyone’s entitled to their opinion” just means no-one has the right to stop people thinking and saying whatever they want, then the statement is true, but fairly trivial. No one can stop you saying that vaccines cause autism, no matter how many times that claim has been disproven.

But if ‘entitled to an opinion’ means ‘entitled to have your views treated as serious candidates for the truth’ then it’s pretty clearly false. And this too is a distinction that tends to get blurred.

On Monday, the ABC’s Mediawatch program took WIN-TV Wollongong to task for running a story on a measles outbreak which included comment from – you guessed it – Meryl Dorey. In a response to a viewer complaint, WIN said that the story was “accurate, fair and balanced and presented the views of the medical practitioners and of the choice groups.” But this implies an equal right to be heard on a matter in which only one of the two parties has the relevant expertise. Again, if this was about policy responses to science, this would be reasonable. But the so-called “debate” here is about the science itself, and the “choice groups” simply don’t have a claim on air time if that’s where the disagreement is supposed to lie.

Mediawatch host Jonathan Holmes was considerably more blunt: “there’s evidence, and there’s bulldust,” and it’s no part of a reporter’s job to give bulldust equal time with serious expertise.

The response from anti-vaccination voices was predictable. On the Mediawatch site, Ms. Dorey accused the ABC of “openly calling for censorship of a scientific debate.” This response confuses not having your views taken seriously with not being allowed to hold or express those views at all – or to borrow a phrase from Andrew Brown, it “confuses losing an argument with losing the right to argue.” Again, two senses of “entitlement” to an opinion are being conflated here.

So next time you hear someone declare they’re entitled to their opinion, ask them why they think that. Chances are, if nothing else, you’ll end up having a more enjoyable conversation that way.

New Paper: Philosophy Has Consequences!

Near Hatfield, Hertfordshire, July 2010

While teaching at the University of Hertfordshire last year I ran a small pilot program using online surveys to help students identify apparent contradictions in their moral intuitions and to track how their views changed over time. A paper explaining the project, its rationale and results, called “Philosophy has Consequences! Encouraging Metacognition and Active Learning in the Ethics Classroom”, has just been published in Teaching Philosophy (a journal I recommend to anyone in this line of work).

Huge thanks to my students and colleagues at UH for their help with this project and the paper.