Fuming with outrage: Nazis, nannies and smoking

[Originally posted at The Conversation; feel free to join in the discussion there]

A few years ago I saw a poster stuck to the wall of a train station in Copenhagen. The poster was a protest, paid for by a prominent Danish musician, against new restrictions on smoking in public. At the top was a sarcastic “Congratulations on the smoking ban” followed by the German phrase “Gesundheit Macht Frei” (good health makes you free).

You might think invoking “Arbeit Macht Frei,” the slogan above the gates at Auschwitz, to complain about not being able to smoke in bars is pretty tasteless. More likely than not, it’ll also distract from the message you’re trying to send.

So, lesson learnt, defenders of smoking: no more comparing smoking ban proponents to Nazis, okay?

Enter The Australian’s Adam Creighton, comparing the Rudd government’s increase in the tobacco excise to the anti-smoking campaigns of Nazi Germany. This should end well.

Now, to his credit, Creighton isn’t just running a lazy “argumentum ad Hitlerum”, i.e. “You know who else hated smoking?” Rather, he’s claiming that Australia’s attempts to discourage smoking are “being sold with the same flawed economic and moral arguments that underpinned Nazi Germany’s policies.” Which arguments are these?

Individuals and the State

What the Nazis and the Italian Fascists believed, roughly, is that individuals only have significance and purpose through and in the State. This sort of totalitarianism is indeed repugnant, not just because of the suffering it causes, but because of the distorting and reductive view of the moral value of human beings it presents.

Against this, Creighton appeals to the liberal harm principle, as championed by figures like John Stuart Mill. Again, very roughly, this principle states that we’re only entitled to interfere in the actions of others where those actions cause harm to other people. You are morally permitted to do whatever you like so long as you’re not harming anyone else in the process.

So according to defenders of smoking, coercive attempts to reduce smoking infringe on an area that, so long as no-one else is affected, is properly a matter of free individual choice. Creighton accepts that restrictions on smoking in public are legitimate given the dangers of second-hand smoke, but punitive measures designed to stop people smoking are not:

government should butt out of individuals’ decision to smoke privately, or to engage in any other behaviour that might entail personal costs without harm to others.

Part of the reason Mill’s harm principle is intuitively satisfying is that individual liberty does matter. Both classical liberalism and its more radical libertarian offshoot respond to genuine and important features of the moral landscape: all else being equal it’s better if we let people do what they want. At its best, strident liberalism is a healthy bulwark against excessive paternalism and coercion.

But these positions also rely on a hopelessly atomistic picture of what human beings are. They see each of us as a free, rational, self-contained, self-directed agent, an independent, sovereign individual living alongside other sovereign individuals, entering into free contracts for mutual benefit.

Where does harm end?

Philosophers have spent a lot of time taking that view of human nature apart: we are far less free, transparent-to-ourselves and rational than liberalism (and the economic theories it underpins) assumes. We’re also far more radically interconnected and dependent upon others. Our borders are considerably more porous than the sovereign individual model would suggest.

But even within his own liberal worldview, Creighton’s argument runs into serious problems. For one thing, even Mill had to allow that there are some harms you’re not permitted to inflict even upon yourself, such as selling yourself into slavery or committing suicide.

If you wrestle a gun away from a would-be suicide, we don’t take you to be committing assault – but surely killing yourself is an essentially “private” matter if anything is? If we’re allowed to stop people throwing themselves off bridges, why aren’t we entitled to at least make it harder (if not impossible) for them to kill themselves with tobacco?

And is smoking only a harm to the individual? The harm principle notoriously runs into problems with questions like this. Creighton insists that things like the “psychological costs of premature death” are “purely personal costs” and so none of the state’s business.

But of course death does not only affect the person who dies; deaths ramify through families, friendship circles, workplaces, social networks – just where does the private end and the public begin?

There’s more

And then there’s this:

Amazingly – given smokers choose to smoke – popular estimates of “net costs” ignore any personal benefit smokers might derive from smoking. And they disregard the offsetting savings from substantially lower health and age-pension costs as a result of smoking-induced premature deaths.

You read that right: we should factor in the money we save from smokers dying early as a benefit.

And therein lies the problem: casting this wholly as a private, personal freedom issue is basically a refusal to take the moral gravity of premature death seriously. That, in turn, involves denying that persons have an intrinsic worth, beyond whatever economic or social value they might happen to have – to understand the value of persons you have to understand what is lost to the world when they die, and vice versa.

Those who complain about a “nanny state” trying to stop people from getting themselves killed are ignoring the significance of death and the responsibilities that generates. And an outlook that thinks we should weigh that human tragedy against the money it saves us has long since lost any right to call itself morally serious.

None of this should be read as a plea to ban smoking: prohibition doesn’t exactly have a glittering history of success anyway. To reiterate, personal freedom matters, and we often need to leave people alone to make their own objectively dreadful choices.

But that right of non-interference may not be absolute, as the suicide and slavery examples show; and there is plenty of scope for policy moves designed to discourage people from harming themselves.

Of course, ideally we wouldn’t need to interfere in people’s lives at all. If you don’t want a nanny telling you what to do, maybe it’s time to grow up.

Drowning Mercy: Why We Fear The Boats

[Originally posted at The Conversation; I’d invite you to join the discussion there but they shut it down. Things got a bit heated, I think.]

There’s a Latin word: misericordia.

It’s usually translated “mercy” or “pity”. Thomas Aquinas took misericordia to be a kind of grief at the suffering of others as if that suffering were our own. Alasdair MacIntyre, the leading modern exponent of Thomist virtue ethics, sees misericordia as a responsiveness to the distress of others that offers the same concern we would normally show to those in our own family, community or country to total strangers.

Misericordia in this sense is the virtue of the Good Samaritan; it’s the virtue the ancient Chinese sage Mencius describes in the way we would rush to help a child who has fallen down a well, not through hope of reward, but simply through concern for the child – any child.

You might say this particular virtue went missing at sea the day the Special Air Service was ordered to board the MV Tampa. We have been doing our best to keep it from surfacing ever since.

And so it has come to this. Not only are we denying asylum seekers arriving by boat any prospect of resettlement in Australia, we are publishing pictures of their anguish at being told so. Whether this is genuinely meant to deter people from risking death on the high seas, or whether, as Melbourne philosopher Damon Young put it, it’s “immigration torture porn for xenophobes”, it seems we now see the suffering of others as an opportunity to exploit rather than a call to action.

The consensus among the commentariat has been that all this will go over beautifully in an electorate that has long seen boat arrivals as a standing existential threat. As journalist David Marr points out, Rudd is merely the latest prime minister to play on our disproportionate fear of “boat people”. The moves are new but the game itself is decades old.

It’s a bizarre national obsession and it begs for answers: why are we so scared of the boats?

No doubt straightforward racism is a very big part of it. But that can’t be the whole story: if it were, why has the government not been pilloried for talking about raising the overall humanitarian intake? Why is there no comparable outrage over the considerably larger number of asylum seekers who arrive in Australia by air? If it’s about respect for Australian law, where’s the outrage over visa overstayers, a much larger cohort than asylum seekers? If it’s driven by opposition to population growth, where were all those “F—– Off, We’re Full” stickers when the Baby Bonus was introduced?

So what is it that boat arrivals symbolise that other forms of arrival don’t? Well, here’s a stab at an answer: they remind Australians that we haven’t earned what we’ve got.

Consider John Howard’s “We will decide who comes to this country and the circumstances in which they come” line. That same message has been built into all our rhetoric on boat arrivals ever since: we don’t have to take you, and unless you come here entirely on our terms, we won’t. Form an orderly line, jump through this set of hoops, and maybe we’ll let you in. You’re welcome.

This emphasis on sovereignty over mercy serves to bolster the idea that “our way of life” is somehow ours by right, and so within our gift to bestow or withhold however we see fit. A gift, by its nature, must be freely given and gratuitous; it cannot be demanded of us. And it must be ours to give; we can only share what we ourselves are entitled to.

Except, of course, we haven’t earned such an entitlement at all.

Consider what Howard’s former chief of staff Arthur Sinodinos told a Q&A audience this week:

…our obligation…is to give people protection. It is not to guarantee them a first world lifestyle in every case when they come to Australia.

But then, by what right are we guaranteed such a lifestyle? What law of nature or reason determines that we get to live in luxury simply by virtue of the accident of birth?

Consider the slogan bandied about during the Cronulla riots: “I grew here, you flew here” – as if it was a personal achievement to be born on this part of the earth’s surface at this point in history. It’s not. It’s sheer dumb luck. It’s nice to be lucky, but it’s no merit.

I suspect on some level that’s what boat arrivals remind us of: the radical contingency of everything we have. It’s not just that we’re repulsed by undeserved misfortune – “there but for the grace of God go I”; “really makes you think, doesn’t it?” – we’re deeply unsettled by undeserved good fortune too.

As a species we always have been. That’s why we invented doctrines like karma, so we could insist that those born into abject misery or obscene privilege must, somehow, be getting their just deserts. In the modern West we have a similar myth: that “anyone can make it” if they just work hard enough, and so the poor must simply be lazy, undeserving – as if talent and even the capacity for hard work itself aren’t themselves dealt out by random chance.

Acknowledging such radical contingency knocks the ground from under our feet. It suggests our claim to our prosperity ultimately rests on happy accident rather than cosmic justice. No amount of “Cronulla capes” and bumper stickers and half-remembered tales of Bradman riding Phar Lap to victory at Gallipoli can change that.

K.E. Løgstrup, a 20th century Danish moral philosopher who deserves to be much better known outside Scandinavia than he is, argued that once we see the gratuitousness of what we have, we can no longer stand on our own rights in order to begrudge others our help. Our individual sovereignty is shattered by the realisation that everything we have is, ultimately, a gift we’ve received, not an entitlement we’ve earned.

Perhaps that’s why the boats scare us: they remind us of a far more demanding ethics lurking behind our comfortable norms of reciprocity and exchange. Perhaps that’s at least part of why we go to such lengths to dehumanise, to demonise, to refuse to see asylum seekers in their full humanity.

None of what I’ve just written fixes the problem or even offers any policy suggestions whatsoever. Understanding our motives won’t stop people dying at sea – as I write this, yet more lives have just been lost.

But the moral demand to respond with misericordia hasn’t gone away. And we cannot act morally, or even see others properly, if we’re more concerned about justifying our own privilege.

Ethics is a jealous God: self-regulation vs self-sacrifice

St Olaf College, June 2013

[Originally posted at The Conversation; feel free to join in the discussion there]

Late one night recently I got a very frustrated email from a close friend. He’d just spent the evening arguing with investors about whether they needed to take ethics into account in their investment decisions.

Oh no, his colleagues had said: after all, we’re in the business of making money, not value judgements. Besides, no-one can agree on what’s morally acceptable anyway, so such decisions should be left to individuals.

This is something I’ve been noticing a lot lately, both in class and online: the assumption that because there is disagreement over the content of ethics, then ethics itself must be a wholly subjective matter, a matter of individual choice – personal taste with a self-righteous sheen.

So, I pounded out a quick response to the email: ask these investors what they’d think about someone taking all their money off them. Would they merely feel annoyed at the inconvenience, or actually wronged, as if someone had unjustly violated their rights?

Moral relativists, at least of a certain stripe, tend to retreat pretty quickly at that point. More often than not they tacitly subscribe to, and indeed rely upon, a liberal framework according to which everyone should be allowed to pursue their own conception of the good, so long as they don’t hurt anyone else. That liberal framework is explicitly put together to accommodate disagreement about right and wrong, but it is nonetheless at least minimally moral.

This liberal principle of non-interference is precisely what makes the idea of self-regulation so attractive. We want industries, companies and individuals to minimise the harm their activities cause while pursuing their rational self-interest. If they can do this without coercion from government, then so much the better, no?

The obvious rejoinder when a given industry or sector claims authority to regulate itself is that they don’t always do a particularly good job of it. There’s an obvious tension between self-interest and self-regulation that’s only partially dealt with by saying it’s in a business’ interest to play nice, if only to avoid reputational risk.

Immanuel Kant famously gave us the image of a shopkeeper who never over-charges his customers simply because doing so might give him a bad name and hurt his trade. Kant insists such a merchant is not, in fact, acting morally at all. Still, you might say, surely a person or business acting honestly from selfish motives is better than their not acting honestly at all?

But the problem goes deeper than this. The irritating thing about ethics is that its demands are categorical in a way that other kinds of norm aren’t. No amount of beauty, convenience or benefit can outweigh a moral claim. You can’t make enough profit out of doing something morally impermissible to make it okay again, unless, perhaps, the profit itself provides a better moral outcome for some other reason.

This would be fine if what is in our interests and what is morally right always coincided, but they don’t. Ethics often requires us to act against our overall self-interest, not just defer short-term rewards (immediate profit) in order to secure greater long-term goods (reputation). Indeed, ethics can even require us to sacrifice everything up to and including our very existence. The right and the good are jealous gods, and they can demand self-sacrifice in a way wholly incompatible with an “enlightened self-interest” model of self-regulation.

Being prepared to set aside self-interest in this categorical way may simply be expecting too much of any company. A public company’s legal duty, indeed its very reason for being, is to generate profit for its shareholders. It will of course operate within legal parameters, but what’s legal and what’s right frequently don’t map onto each other perfectly.

“We should stop doing x because x-ing will make us unpopular and reduce profits” may be good business, but it’s a prudential rather than an ethical norm. “We should stop doing x just because x-ing is wrong” however clearly is an ethical norm – and just for that reason can’t be fit into a framework that takes self-interest as its overriding value.

And acting ethically might involve more than just saying that some practices are unacceptable. It might involve acknowledging that some companies, organisations and industries simply should no longer exist.

Recently the farming lobby managed to shut down a major component of an anti-factory farming campaign by Animals Australia. The various farmers’ groups have sought, both in their public statements and in a social media campaign, to claim the ethical mantle of “animal welfare”:

Let’s be very clear: Australian farmers are committed to animal welfare. Our farmers raise, care for and nurture their stock and care deeply about what happens to them. We understand that improvements need to be made, but farmers, working with respected animal welfare groups, the community and governments, will be the ones who make them.

The claim here is that farmers are already acting ethically – indeed are motivated by concern and a desire to nurture – and should be trusted to regulate and improve their own practices.

The statement goes on to say that Animals Australia’s “real agenda” is “to end animal agriculture in this country”. No further comment is added; the subtext seems to be that, to the National Farmers’ Federation (NFF), such an outcome is simply unthinkable. The very suggestion is, in their mind, self-refuting.

To use a metaphor that I’m quite sure will never be used this appropriately ever again: turkeys don’t vote for Christmas.

What ethics may demand (and I stress may here, as I’m not actually arguing one way or the other about the legitimacy of animal farming, or any other industry or practice for that matter) is not just upping standards, but shutting down. There’s an argument to be had there, justifications that need to be offered, objections to be countered. But by insisting on self-regulation, the NFF is effectively calling the outcome of that argument ahead of time. Its own survival as an industry – precisely what ethics might ask it to give up – is non-negotiable.

Can self-regulation ever work? Quite possibly. But only if the parties are prepared to put ethics in its rightful place as the highest demand, instead of making it, at best, one priority among others. Morality cannot be a mere marketing tool or cultural ethos, respected and lauded but ultimately subordinate to self-interest and indeed to survival.

Self-regulation demands turkeys that can vote for Christmas. That’s one species we don’t seem to be raising many of these days.

Divine astroturf: should anti-vaccinationists get their own church?

Loch Sport, July 2013

[Originally posted at The Conversation; feel free to join in the discussion there]

The akedah narrative – the story of Abraham’s willingness to sacrifice his son Isaac at God’s command – is one that has long inspired and haunted Jews, Christians and Muslims.

In being prepared to kill his own son, Abraham is presented as the “father of faith,” an exemplar of pious obedience and unwavering belief that God would, somehow, fulfil his earlier promise to Abraham that through Isaac he would found a great nation.

It’s hard not to find the story deeply unsettling. How does Abraham know he’s hearing a command from God? Mightn’t he just be dreaming, or deluded? And what sort of God would ask such a thing? Can even God override such a basic ethical principle as that of not murdering one’s child?

No wonder the philosopher Søren Kierkegaard, whose 200th birthday has just passed, called his exploration of the story Fear and Trembling.

In this work, “Johannes de silentio”, one of the many pseudonyms Kierkegaard uses in order to decentre authorial authority, considers whether there can ever be a “teleological suspension of the ethical.” That is, can there be justified exceptions to moral laws on the basis of a direct command from God?

Clearly, the problem of whether faith can exempt people from earthly laws and human morality is not a new one. And interestingly, it’s flared up again in Australia just this last week.

The NSW parliament has introduced legislation to allow childcare centres to refuse to enrol unvaccinated children. It didn’t take long for Australia’s main anti-vaccination group to suggest a loophole for those wanting to get around the new laws: find religion.

For just $25 you can join the Church of Conscious Living, which was set up with the express purpose of providing “believers” with a religious exemption from vaccination.

There’s no liturgical basis to this church, apparently no organised community, no scriptures, no theology beyond a handful of broad statements about bodily sanctity and vaccines. They haven’t released a newsletter since 2010. Even the recipe for scalloped potatoes they offer looks a bit thin.

We’ve seen this phenomenon of “astroturfing” many times – where something that looks like a grass-roots movement turns out to have been cut from whole cloth by a corporation or public relations company.

Now, it seems we’ve got the religious equivalent – a “religion” that has been concocted for other purposes.

Joining a church to claim an exemption rather than out of genuine spiritual belief might seem a bit sleazy. Still, you might ask, is this any worse than joining a religion to placate your partner’s family or so you can get married in their faith?

Besides, who has the right to tell you that your religious belief isn’t sincere? How can the state determine whether your beliefs count as religious or not?

Actually, the state has already been doing that for some time. That’s why “Jedi” still isn’t recognised by the Australian Bureau of Statistics, despite thousands of people listing it as their religion on census forms.

Nonetheless, defining religion is a notoriously difficult business. Trying to define religion either by listing its essential features or describing the function it fulfils leads to serious difficulties. Given this ambiguity, might anti-vaccinationism be entitled to be considered a new religion?

The reasons why people believe in anti-vaccination myths are many and varied and often specific to individuals. No doubt many are seeking answers as to why their child has a health problem, an answer which anti-vaccine narratives appear to offer.

Still, when reading online anti-vaccination discussions, particularly those that shade into endorsing alternative medicine, a number of overlapping themes keep coming through. One is a visceral distrust and resentment of authority, whether government, medical or judicial.

Associated with that distrust is selective regard for expertise: someone with years of university education and published research under their belt is clearly corrupt and can be dismissed, while homeopaths, naturopaths and cancer quacks are lauded as brilliant sages.

Another recurring idea is that of a secret body of knowledge that offers the initiate a short-cut to health or other goods. Just eat the right foods, take the right supplements, and even the most terrifying of diseases can’t hurt you. (The unspoken corollary is that if they do hurt you, it must be your fault).

This idea that the world can be hacked to work the way you want it to, so long as you know the cheat codes, even carries over into bizarre pseudo-legal beliefs such as “Freeman on the Land” defences, which anti-vaccinationists have sometimes tried. For the record, this never works.

You can, in fact, discern something like a proto-religious worldview in all this, complete with its own myth of the Fall and promise of salvation.

The natural world is understood as a fundamentally benign place. If we suffer, it’s because, in our hubris, we’ve fallen away from a paradisiacal state of nature to our present artificial condition.

Only through purging ourselves of our corruption (read: “toxins”) and returning to a “natural” way of life can we return to our blessed prelapsarian state.

That’s actually quite an old story. There are broad themes here that are familiar from many religious texts, from the Eden narrative in the Abrahamic faiths to pre-Qin Chinese religious texts like the Daodejing and the Zhuangzi, with their insistence on returning to the primordial dao or the “way”, from which we have strayed.

But a few broad themes do not a religion make. And even if they did, it’s not clear that a belief that entails causing risk not just to yourself but to your children and to others in the community deserves accommodation.

I have argued before that the collision of deeply-held faith beliefs and public ethics is often messy. Negotiating the collision requires thoughtfulness and care.

But where people seek to engage in activity that harms others on the basis of reasons that cannot be shared from the perspective of public ethics, it’s far from clear why we should be obliged to accept this.

In Fear and Trembling, de silentio has to conclude that he cannot understand Abraham. Perhaps God really did order him to kill his son but, in human terms, Abraham must be accounted a murderer. Kierkegaard’s point is that the believer must regard Abraham as an exemplar of faith despite this humanly valid judgement.

But in public ethics, faith-based reasons have no place – even, or perhaps especially, when religious exemptions would lead to real harm to innocent people.

Burying Thatcher: why celebrating death is still wrong

[This article originally appeared at The Conversation; feel free to join in the discussion there]

Today, a funeral ceremony will take place for former British prime minister Margaret Thatcher in London’s St Paul’s Cathedral. Outside, protesters will be turning their backs on the coffin as it passes through central London.

This final act of defiance is, compared with much of what we’ve seen in the days since Thatcher’s death, quite mild. Street parties were held from Brixton to Glasgow to celebrate her death. So many people bought Ding-Dong! The Witch Is Dead that it went to number two on the UK charts.

Even here in Australia, the University of Melbourne Student Union passed a motion “unreservedly” celebrating the death of Baroness Thatcher.

This has touched off some interesting debate about whether there is something unseemly about celebrating the death of another human being. But there are at least two importantly different issues in play we need to keep separate here.

The first is the idea that we shouldn’t speak ill of the dead. Is this just a matter of etiquette, or does it have an important moral dimension as well?

When someone dies, we begin to tie up the narrative of their lives, finalising and knotting off the strands of meaning that they wove while they lived. It is a time for consoling the bereaved, but it is also, inevitably, a time for forming judgements.

At the moment of death, a person’s life will always and forever be just what it was. We reflect on the totality of that life and form a view of the whole, much as we do when the credits roll on a movie or we reach the end of a book. The notion of a “final judgement” of the dead in another realm, which features in religions from ancient Egypt to Christianity and Islam, largely reflects what humans do down here already.

With Thatcher, who left so much anger and pain in her wake, this was always going to be an unusually fraught and raw process. But that doesn’t mean it should be put aside in the interests of delicacy. And as Glenn Greenwald has argued in recent days, maintaining a respectful silence may allow the discussion to be co-opted for political ends.

That doesn’t imply that anything goes, however. Philosophers from Aristotle onwards have argued that we have moral obligations to the dead, even though the dead no longer exist. The arguments used to justify this claim are quite ingenious, but I won’t rehearse them here. We don’t simply refrain from slandering the dead because it will hurt their loved ones, but because this somehow harms the dead person themselves. Likewise, we go out of our way to right past injustices because this is what we owe to the dead, not just their survivors or descendants.

Our dealings with the dead are just as ethically governed as our dealings with the living. And the dead are especially vulnerable – “prey” to the living as Sartre put it – because among other things they cannot defend themselves. That was part of Julie Bishop’s rationale in attacking Bob Carr for describing apparently racist comments Thatcher made in his presence. That the dead can’t speak for themselves shouldn’t give them a free pass from criticism, but it does make the responsibility for making that criticism fair and defensible all the more pressing.

But speaking ill of the dead is one thing; celebrating someone’s death is something else.

When a Navy SEAL team killed Osama bin Laden, there were scenes of spontaneous rejoicing in the US. Few stopped to ask whether celebrating a death, even of someone as evil as bin Laden, was appropriate. At least in that case, many people believed that bin Laden’s death constituted some sort of justice, and made the world a “safer place”. It’s hard to see how Thatcher’s passing, at eighty-seven years of age in her bed in the Ritz Hotel, looks like “justice” to even her most forgiving opponent.

Some might say that what’s being celebrated is not the death of Thatcher herself, but of what she stood for. But in that case, the party comes both too early and too late.

It’s too late in that Thatcher stopped being prime minister more than two decades ago. If we’re just relieved that she’s gone, well, she’s been “gone” for a very long time.

Yet it’s also too early in that Thatcher’s legacy is still with us. Indeed, the neo-liberal policies she pursued have become entrenched orthodoxies, well beyond Thatcher’s sphere of direct influence. The death of a frail, elderly dementia patient does nothing to change that.

So what we’re left with, despite protestations to the contrary, seems to be glorying in her death itself. That does take us into very different moral territory. In this case we don’t simply judge a person’s life or actions but repudiate them as a human being. Rejoicing in someone’s death is tantamount to denying them even that most basic moral regard. That should indeed trouble us, however harshly we may judge Thatcher and her legacy.

There will be a moment on Wednesday, late in the ceremony, when the priest will “commend to Almighty God our sister Margaret”. This simple, stark phrase reminds us that at the centre of the drama of death is a person: someone who cared about their life and was loved by others. That fact, unsettling and even infuriating as it sometimes is, cannot be ignored without degrading the regard for even the worst of us that holds the moral sphere together.

Every death, as John Donne reminds us, diminishes us. Rejoicing in death only diminishes us further.

A Horse is a Horse: Sexism vs. Speciesism

‘Year-in-review’ articles are meant to get people talking – or, failing that, to fill out column inches during a quiet time of year. Either way, I doubt the Daily Telegraph’s Phil Rothfield and Darren Hadland were expecting the backlash they received just before Christmas, when they declared racehorse Black Caviar the Tele’s ‘Sportswoman of the Year.’

Not surprisingly, Twitter rounded on Rothfield almost immediately, with media figures such as Wendy Harmer and Tara Moss weighing in on a decision that looks to be, at best, obliviously sexist. Rothfield telling Harmer to ‘pull her head in’ on the basis that ‘Caviar is a girl’ didn’t help.

The astute observer may have noticed that whatever else she is, Black Caviar is not a woman. She is female, but she is not a woman (or a girl for that matter). To horribly oversimplify, ‘female’ refers to the biological category of sex, while ‘woman’ refers to the social category of gender. A horse has a sex, but it does not have a gender. ‘Woman’ is a specifically human category, one that involves situation in a network of meanings that simply aren’t available or applicable to nonhuman animals.

And this is why awarding the title of ‘Sportswoman of the Year’ to Black Caviar is so galling: it reduces women to their female bodies. The decision suggests there were no actual women worthy of the title, so we’ll just pick the nearest deserving female as if that’s the same thing. That, in turn, collapses ‘woman’ into ‘female’ and thus essentialises gender. This is the old trick of sexism: women come to be defined by their biology, men do not. As Simone de Beauvoir noted, both men and women secrete hormones, but men are never accused of thinking ‘hormonally’ no matter how much testosterone is involved.

In the context of the position of women in sport, the Tele’s decision looks tin-eared at best and sinister at worst. Harmer took to her blog to point out how insulting this decision looks given “the utter bullshit [sportswomen] have to cope with – year in and year out.” I’ve no doubt she’s right. The effect of the article is clearly belittling, playing to the idea that women’s sport is necessarily boring, secondary, less legitimate. In sport as in other aspects of life, women are, as de Beauvoir put it, made into the ‘other.’ The defaults of the species are implicitly set to ‘male.’

What was interesting though was the sheer incredulity displayed by many at the very idea that a horse could even be considered as competing with humans. ABC journo Jeremy Fernandez, for instance, tweeted that Australia II also ‘stopped a nation’ as Black Caviar had done, but that didn’t make it a sportswoman. No-one, as best I can tell, pulled Fernandez up for comparing a sentient nonhuman animal to a yacht, equating a horse with a mere object.

Given the perfectly valid focus on gender, no-one, it seemed, stopped to ask: why shouldn’t a horse be in the running (sorry) for recognition alongside human sportspeople? If we’re going to laud extraordinary feats of strength and endurance, why must the only animals to be so rewarded be homo sapiens? Perhaps there are valid answers to that question, but what struck me was that no-one even thought to raise it.

We find ourselves caught here between sexism and speciesism. We’ve finally come to a point where we can recognise the former, though clearly we still have a very long way to go. Speciesism, however, barely even registers.

The moral progress of humanity has been largely a process of coming to see the wrongness of discrimination on the basis of morally irrelevant differences – gender, race, sexuality, and so on. With regard to how we treat nonhuman animals, the question is basically this: which features that distinguish humans from animals are morally relevant and therefore justify differential regard? What is it that humans have that animals don’t that justifies putting our interests ahead of those of nonhuman animals, in what ways and to what extents?

As the last few decades of animal ethics have shown, these turn out to be deeply complex questions, to which there has been no shortage of answers put forward. I’m not denying there are such relevant features, by the way, as if human and nonhuman animals are morally equal. The capacity for rationality and self-reflection, for instance, seems to make a vast difference morally. But is it an absolute difference? And does it matter in the same way in all contexts?

Let’s stick to what we’re rewarding here: sporting performance. We’re not talking about ‘best and fairest.’ We’re simply talking about who can run the fastest or score the highest. Of course most sports involve a degree of conceptualisation that is not available to nonhuman animals – but if we’re going to laud individuals for doing physical things that almost no other individuals of their species can do, doesn’t Black Caviar fall into that category? Why is a human running really fast around a track qualitatively different, in a morally relevant way, from a horse doing the same thing (with ‘really fast’ relative to species-average in each case)?

Perhaps the simplest, most elegant solution for the Telegraph would have been to declare Black Caviar “Athlete of the Year.”

That would have done the Tele’s Sportsman of the Year out of his award too. But if Black Caviar trumped every human sportswoman in 2012, I dare say a good argument could also be made for her beating Rothfield and Hadland’s pick, Michael Clarke.

Mind you, I’m not much of a cricket fan. Though if Clarke had to play with someone sitting on his back, whipping him every time it looked like he was about to be run out, I’d probably watch. And I quite like the idea that Ricky Ponting is now living out his days in a nice paddock somewhere, with all the apples and sugar cubes he could wish for.

So this solution would have avoided the obnoxious sexism of the Tele’s conflation of ‘female horse’ with ‘woman’ whilst simultaneously taking Black Caviar seriously as an athlete, regardless of her species. Win-win, no?

Of course, maybe we’re not prepared to take nonhuman animals seriously as athletes. If that’s the case, perhaps we should stop forcing them to perform athletic feats for our entertainment? Just a thought.

I tweet dead people: can the internet help you cheat your maker?

[Originally published at The Conversation; feel free to join in the discussion there]

Can you believe it’s been a year already? I’m sure we all remember where we were when we heard the terrible news we’d lost Gregg Jevin.

You know, Gregg Jevin? The Gregg Jevin?

Don’t worry if the name doesn’t ring any bells. There never was a Gregg Jevin. Yet he “died” on 24th February last year, in a tweet from British comedian Michael Legge:

Sad to say that Gregg Jevin, a man I just made up, has died. #RIPGreggJevin

— Michael Legge (@michaellegge) February 24, 2012

Within hours, #RIPGreggJevin was trending on Twitter, with celebrities, companies and ordinary punters rushing to express “condolences”. Some of it was genuinely hilarious. Even the odd philosopher had a go at it.

The Jevin affair suggests something genuinely interesting about “Twitter mourning”: we’ve been doing it long enough that it’s developed its own conventions, which users know how to satirise when given the chance. I’m sure it’s no coincidence that Legge’s tweet went viral only days after Twitter’s outpouring of grief for Whitney Houston. The “death” of Gregg Jevin briefly gave people a sandbox in which to play around with the language of online mourning without causing genuine offence.

Now, just when we’d somehow managed to pick up the pieces and move on without Gregg, a startup called _LivesOn claims it will change the way Twitter users interact with the dead.

Details are scant, but the idea seems to be that the service will use an algorithm to generate new tweets on behalf of dead users, tweets that sound like those the user themselves posted in life. The net effect is that, as _LivesOn put it, “When your heart stops beating, you’ll keep tweeting.”

It’s hard to know how seriously to take these claims. Representatives of _LivesOn deny it’s a publicity stunt, describing it as a project jointly conducted by an ad agency and a university. But even if it’s deadly serious (sorry), commentators have questioned whether the technology could possibly deliver what it promises.

This is not the first time a company has held out the prospect of perpetuating an online presence after your demise. Intellitar’s “Virtual Eternity” service – currently closed, supposedly for further development – offers an animated avatar that can interact with users, using artificial intelligence to “answer” questions as you would have done. The results, frankly, aren’t impressive, at least not yet.

The interesting point is not whether these technologies will ever be any good, but that they’re being discussed at all. What does it say about us that we’re reaching for this kind of digital immortality?

It seems silly to think you could somehow survive your death through a service that tweets on your behalf. But consider how much of our communication with others is now mediated through social media: might there be some sense in the idea that extending your online presence after your death would keep you in existence somehow?

Yes and no. In research published last year, I looked into the increasingly common practice of memorialising the profiles of dead Facebook users. For a large number of us, Facebook has become a large part of our presence in the lives of others. When Facebook users die, their digital traces persist; through them, the dead arguably do retain something of their presence in our lives. Perhaps that’s why people continue to post on the walls of dead Facebook users long after their passing.

So social media can, in one sense, help the dead remain with us. But why does this thought offer so little comfort?

To answer that, I suggest we consider some recent developments in the philosophy of personal identity. Discussions in this field have increasingly begun to differentiate between the “person” and the “self” (or in a slightly different version, the “narrative self” or “autobiographical self” and the “minimal self” or “core self”).

The distinction is applied somewhat differently by different theorists, but it goes roughly like this: the self is the subject you experience yourself as being here and now, the thing that’s thinking your thoughts and having your experiences, while the person is a physical, psychological and social being that is spread out across time.

One of the questions I focus on in my work is how these two kinds of selfhood interact, and the ways in which they can come apart. In this case, something like _LivesOn might in fact extend the identity of your person, albeit in a very thin and diminished sense. If you’re a regular tweeter, it might serve in some small way to enhance your ongoing presence in the life of other people.

But it doesn’t extend your self. There’s no experience to look forward to, no subject at the core of your tweets. Perhaps it helps you live on for others, to some small degree, but not for yourself.

So perhaps we shouldn’t hope for too much from our posthumous online presences. Perhaps we should leave posterity to worry about itself and simply live the best we can here and now.

It’s what Gregg would have wanted.

Philosophy under attack: Lawrence Krauss and the new denialism

[Originally posted at The Conversation; feel free to join in the discussion there]

I really shouldn’t let myself watch Q&A. Don’t get me wrong, the ABC’s flagship weekly panel show is usually compelling viewing. But after just a few minutes I end up with the systolic blood pressure of Yosemite Sam and so fired up I can’t get to sleep for ages afterwards. Good thing it’s on when the kids are in bed, or they’d pick up all sorts of funny new words from Daddy yelling at the screen.

I’m expecting more of the same this week when physicist and author Lawrence Krauss is on the panel. Krauss is an entertaining commentator and populariser of science – if often quite provocative, especially on matters of religion. He should be fun to watch.

But Krauss has also been at the forefront of what looks like a disturbing recent trend among media-savvy scientists. There has rightly been a lot of concern lately about “science denialism”. But many of those sounding the alarm themselves seem to be engaging in what we might call philosophy denialism.

The subtitle of Krauss’ 2012 book A Universe from Nothing: Why There Is Something Rather than Nothing is tantalising, offering to answer one of philosophy’s fundamental questions. Not everyone liked Krauss’ answer, however.

In response to a critical review by philosopher of physics David Albert, Krauss called Albert a “moronic philosopher” and told the Atlantic’s Ross Andersen that philosophers are threatened by science because “science progresses and philosophy doesn’t”.

Krauss’ gripe with philosophy seems to be, as Massimo Pigliucci eloquently points out, that philosophy hasn’t solved scientific problems. The same charge is levelled even more bluntly by none other than Stephen Hawking, who in 2010 declared philosophy was dead.

According to these scientists, philosophy and physics were chasing the same prize – an understanding of the ultimate nature of reality – and physics simply got there first.

Yet this misses the point of what philosophy does, and how it relates to and differs from other disciplines. “Is 2 to the power of 57,885,161 minus 1 a prime number?” or “Did Richard III murder the princes in the tower?” are questions for mathematics and history.

“What is a number?” or “Does the past exist?”, however, are not. It’s when these fundamental conceptual questions arise that the philosophical rubber hits the road.

So when Krauss complains that philosophy of science is unjustified because it doesn’t influence the way physicists do physics, he’s assuming both that philosophy only matters if it helps to answer the same questions scientists ask, and that questions only matter if they can be answered in a certain type of way. Those assumptions need to be argued for – and that’s a philosophical task.

In fairness, Krauss has walked back a lot of his rhetoric in the months since, even expressing admiration for some forms of philosophy – but he still insists that when it comes to his question of why there is something rather than nothing, the claims made by philosophers are “essentially sterile, backward, useless and annoying”. The empirical exploration of reality, he tells us, changes “our understanding of which questions are important and fruitful and which are not”.

But what makes an explanation sterile or fruitful? Should an explanation’s being “annoying” (or elegant for that matter) matter, and in what way? Don’t look now, but in asking questions like that we’re doing philosophy, even if Krauss doesn’t seem to think we need to. And he’s not the only one.

It’s usually assumed that science is descriptive rather than normative; it tells you how the world is, not how it should be. But recently Sam Harris and Michael Shermer have each argued that science can indeed answer moral questions, with Shermer lamenting that science has “conceded the high ground of determining human values, morals, and ethics to philosophers”. Both start from the rather Aristotelian assumption that the object of morality is securing the flourishing of conscious beings, and that science can tell us how to do that. But this simply puts off the question of why we should value flourishing at all, rather than answering it.

But, you might ask, does any of this really matter? After all, if scientists are too busy trying to cure cancer to wade through Heidegger’s Being and Time, do we really care?

I don’t want to overplay the gulf between scientists and philosophers, or between disciplines like theoretical physics and metaphysics; equally, there’s no question many philosophers (including me) could do with greater scientific awareness. But when prominent scientists start dismissing the questions philosophy asks as irrelevant – not just scientifically irrelevant, but irrelevant as such – I think we do have a problem.

Harris and Shermer are right that moral decisions need to take empirical data into account. Our ability to be effective moral beings depends upon our capacity to understand and respond to the world around us. Moral philosophy, in that sense, cannot ignore science. But science, equally, needs to be aware when it is straying beyond its own borders in problematic ways. Those who are concerned to defuse the charge of “scientism” need to watch out for precisely this sort of overreach.

If it seems from the outside like philosophy doesn’t make progress, perhaps that’s because our questions haven’t changed that much since Socrates’ day: what is the nature of existence? What do we know, and how do we know that we know it? How are we to live?

These aren’t questions we can simply answer and move on. But equally, they don’t go away just because we ignore them.

Love Thy Neighbour: religious groups should not be exempt from discrimination laws

[Originally posted at The Conversation – feel free to join in the discussion there]

A little over a century ago, our first prime minister, Edmund Barton, told our first parliament that “the doctrine of the equality of man was never intended to apply to the equality of the Englishman and the Chinaman”. Barton had the abstract principle right, but he couldn’t see non-Europeans as the sort of people to whom it could apply. He could not see their inner lives, their concerns, passions and beliefs, as being as morally significant as his own.

In his recent book, The Better Angels of Our Nature, Steven Pinker points out that for all our talk of moral decline, we are actually living in the least violent, least cruel and most peaceful era in human history.

Much of this progress, I’d suggest, is due not to better moral reasoning or principles, but to our improving moral vision.

We have come a long way since that first parliament, and we’re learning – gradually, fitfully, and painfully – to see what Barton could not see, to view others as no less worthy of our regard on the basis of irrelevant differences such as race, religion, or sexuality.

But what happens when the competing demands of religious and sexual identities collide in the public sphere?

The Gillard government has announced it will preserve existing exemptions in anti-discrimination legislation, allowing religious organisations to refuse to employ LGBT individuals – and indeed anyone else whose very presence might cause “injury to the religious sensitivities of adherents of that religion”. The Australian Christian Lobby has hailed this as a win for “religious freedom”.

This is disingenuous at best. The issue really has nothing to do with the free exercise of one’s religion and everything to do with denying the moral depth of gay and lesbian lives.

To be clear, we are not simply talking about issues of job performance. It’s hard to see how being gay would be an impediment to doing many, if any, of the diverse jobs available in the various faith-based charities, schools, hospitals and universities around Australia.

Rather, groups such as ACL are defending the “right” to refuse to hire someone, not because their actions might be contrary to the mission or ethos of a religious employer, but because who they are might offend against someone’s “religious sensitivities”. It is a rejection of who the employee is, not what they do.

This fact is sometimes obscured by calling homosexuality a “lifestyle” – a deliberately shallow, superficial word designed to deny the profundity of someone’s core relationships. It implies that homosexuality is some sort of inessential add-on rather than a defining feature of the person. My relationship is a central, non-negotiable part of who I am; your relationship is just some stylistic choice, like installing marble benchtops or wearing Crocs.

Sometimes, instead, discrimination is justified by claiming that what’s hated is the sin, not the sinner. This may be a sincerely held view, but it too ignores just how deeply integral romantic and sexual love is to our practical identities. It denies the same depth to same-sex and heterosexual relationships, and so implicitly refuses to acknowledge the significance of what it claims to be offended by.

The question here isn’t whether (some) religious believers are right or wrong to be offended by homosexuality in this way. Nor is it whether we should respect the deep religious convictions of believers.

Rather, it’s whether society is obliged to respect this sort of offence enough to override other moral considerations. And this takes us to the clash between private faith and public moral reasons.

Religious faith, however it finds expression, is an essentially inward, private state of profound certainty. It may involve reasons, but these are not the sort of reasons that can be shared with non-believers. I doubt anyone has ever been moved from atheism into genuine religious faith (as opposed to mere lip service) by force of rational argument alone. Trying to argue someone into religious belief is like trying to make someone fall in love with you by telling them all the reasons why they should: it won’t work, and even if it did, it wouldn’t be because of your arguments themselves but because of something else.

For some believers, then, there may be an unshakable inner certainty that homosexuality is immoral. It would be wrong for those of us who disagree to simply trivialise that view, as it may be linked to fundamental beliefs that are central to the believer’s conception of him or herself and what a good life comprises. I’ve heard Christians say how torn they are between their love for gay friends and family and their belief in the authority of scripture. I don’t doubt their sincerity, nor the difficulty it presents them.

But when it comes to matters of public ethics, beliefs that are grounded in religious faith simply don’t cut it on their own. Believers and non-believers have to share a society, and that means our moral discourse has to be based on premises it’s at least possible for us to agree upon. “I find working with gay people offensive because God says homosexuality is wrong” is simply not such a premise.

It may be an important fact about the lives of some believers, but it doesn’t justify employment discrimination. Surely we’ve come at least far enough to see that.

Between Guilt and Innocence: 2Day FM and the Moral Blame Game

[This post originally appeared at The Conversation. Feel free to join in the discussion there.]

This past weekend, we saw the media – old, new, and social – trying to digest the indigestible. The death of Jacintha Saldanha, the British nurse who apparently took her own life after being caught up in a prank phone call from 2DayFM DJs Mel Greig and Michael Christian, is one of those stories that is so sad, so utterly pointless and bewildering, as to leave us gasping for something, anything, coherent to say about it.

There’s also something frighteningly random in how things seem to have played out: a simple, farcical prank call from the other side of the planet, and suddenly a 46 year old woman – a mother of two and from all accounts a dedicated and well-regarded professional – is dead. That she is dead appears to be, from what we know at this stage, the result of decisions that had nothing to do with her. No one set out to cause this, no one could have seen it coming, but the feeling remains that someone – Greig and Christian, Austereo management, the hospital, the media, the cult of celebrity itself – must be to blame.

Blaming, as it happens, is something the internet is very good at. Within hours, the 2DayFM Facebook page was inundated with angry messages. Twitter lit up with outrage. Some of the response has been decidedly sinister.

But among the calls for retribution there were others defending the presenters and expressing concern for their welfare. The DJs clearly didn’t intend for anything like this to happen. As Peter FitzSimons, writing for Fairfax, points out, such pranks are an everyday part of the FM radio repertoire.

FitzSimons doesn’t stop to ask whether it’s ever OK to misrepresent yourself in order to make someone the unwitting object of fun, let alone whether calling a hospital to gain private information on a patient’s condition is ever acceptable.

But FitzSimons’ article is revealing in another way. He draws on his “garden-variety legal studies” to remind us that the test of negligence is “whether or not a ‘reasonable man’ might have had any expectation that their actions would have resulted in the kind of tragedy we have seen.”

Surely, we can’t accuse Greig and Christian of negligence given there’s no way they could have predicted this outcome?

But legal responsibility isn’t the same thing as moral responsibility. Courts have to make clear decisions, a practical purpose for which they need artificial rules and procedures. The “reasonable man” test provides a rough but workable way to delimit responsibility: if the consequences of our action are so remote that a reasonable person could not have predicted them, then we’re not answerable for those consequences.

That might be good enough for a courtroom, where we need final decisions about who is responsible for what. But outside this artificial context, the boundaries of moral responsibility seem to be far more ambiguous.

Indeed, there is an uneasy grey area between moral guilt and complete innocence. Philosophers have been troubled by this ever since Bernard Williams coined the term “moral luck” more than 30 years ago. Strictly speaking, “luck” shouldn’t have anything to do with morality: since Kant, the standard view has been that you’re only responsible for what you do, or could have done but failed to.

Yet in fact, it’s alarming just how much of what we praise and blame people for depends upon factors beyond their control. We condemn the coward, but no-one willingly chooses cowardice. We regard the drunk driver who kills a pedestrian as more culpable than one who doesn’t, even though it’s only random chance that separates the two cases. We tolerate, and indeed reward, an uneven and unearned distribution of talents. The idea that we’re only responsible for what we can control seems to be strained at every turn by our moral intuitions and practices.

And as Williams notes, there is a phenomenon of “agent-regret,” a sense that it would have been better if we had acted differently. Such agent-regret remains even when we know what has happened is not, strictly, our fault. There is, according to Susan Wolf, a “nameless virtue that urges us, as a matter of both moral character and of psychic health, to recognise and accept (to an appropriate degree) the effects of our actions as significant for who we are and for what we should do.”

Journalist Jane Hansen’s revealingly honest piece in The Australian illustrates the grey zone of agent-regret perfectly. It’s a sobering reminder of how unclear the boundaries between guilt and innocence, between culpable agent and victim of circumstance, often are.

That’s not what we want, of course. We want to affix blame and move on. We want to carve the world up into the innocent and the guilty and hand out their just deserts.

But we can’t. Think about it for more than a tweet-length and suddenly even our most basic ideas about the limits of responsibility fail us. Who is to blame? Are Greig and Christian to be pilloried or pitied? Can it be both? Neither? It’s simply indigestible.