The Naked Self is the culmination of a project I’ve been working on for the last eight years. From the dust jacket:
Across his relatively short and eccentric authorial career, Søren Kierkegaard develops a unique, and provocative, account of what it is to become, to be, and to lose a self, backed up by a rich phenomenology of self-experience. Yet Kierkegaard has been almost totally absent from the burgeoning analytic philosophical literature on self-constitution and personal identity. How, then, does Kierkegaard’s work appear when viewed in light of current debates about self and identity—and what does Kierkegaard have to teach philosophers grappling with these problems today?
The Naked Self explores Kierkegaard’s understanding of selfhood by situating his work in relation to central problems in contemporary philosophy of personal identity: the role of memory in selfhood, the relationship between the notional and actual subjects of memory and anticipation, the phenomenology of diachronic self-experience, affective alienation from our past and future, psychological continuity, practical and narrative approaches to identity, and the intelligibility of posthumous survival. By bringing his thought into dialogue with major living and recent philosophers of identity (such as Derek Parfit, Galen Strawson, Bernard Williams, J. David Velleman, Marya Schechtman, Mark Johnston, and others), Stokes reveals Kierkegaard as a philosopher with a significant—if challenging—contribution to make to philosophy of self and identity.
This piece originally appeared on The Conversation’s Cogito blog.
The UNSUBTLEBERG family – MOTHER, FATHER, BROTHER and SISTER – are driving along a lonely country road, on their way to a camping holiday.
MOTHER: Wait, what’s that?! Stop the car!
[They screech to a halt]
FATHER: Wow, that’s quite a crash! That car’s completely wrecked!
BROTHER: Hey, there’s the driver! Over there, by the side of the road!
SISTER: I think he’s still alive! Look, he’s moving!
MOTHER: [unbuckling her belt] Ok, someone grab the blankets and medical kit out of the boot, there’s heaps of bandages in there, and I’ll –
FATHER: Whoa, just a second there, honey. What – what are you doing?
MOTHER: What do you mean, ‘what am I doing?’ That person needs help! We have to help!
FATHER: Well, sure, I agree this is a terrible situation. But it’s not our fault. We didn’t cause this accident, and so while it’s good that we feel sympathy here, we’re not obliged to stop and help. And, you know, we have a camping holiday to get to. Though I must say, it’s very much to our credit that we feel so bad for this poor sap. Really speaks to what great people we are that we feel so dreadful.
MOTHER: It may not be our fault – though who knows, maybe we’re all complicit in the condition of country highways – but this person needs help!
FATHER: Yes, but what matters here is that we do what’s just. And there’s no injustice in our driving past: we’re not breaking a promise, or stealing something that’s not ours, or injuring the driver ourselves, or anything like that. So, it’s ok to just drive on.
MOTHER: You don’t think it’s unjust that someone’s dying in a ditch while we’re sitting in a nice warm Audi arguing about what’s just?
FATHER: Unjust? No. Terrible? Sure. But no-one has wronged anybody here, so it’s nobody’s responsibility to do anything. Why should I fix this? I didn’t make the world. Besides, we worked hard for this Audi.
MOTHER: We bought this Audi with the money you inherited from your aunt, and you ‘work hard’ at a job you only got because your parents could afford to send you to private school where you met your future boss.
FATHER: Exactly. We’ve earned it.
MOTHER: Look, I’m not talking about obligations in that narrow sense. Circumstance has placed a suffering person into our hands, and that means we’re responsible for what happens to them, even if it’s not our fault. Sucks, but there it is. Now, will somebody please grab the first aid kit from the –
BROTHER: No way, Mum. I mean, yeah, Dad’s right that it’s sad this has happened and all, and it would be very nice of us to help, I agree. But I just think we should look after our own first.
MOTHER: What the actual f-
BROTHER: No, seriously, is that person’s name “Unsubtleberg”? Because we only have so many resources here in this car and we should use them to look after people named Unsubtleberg first.
BROTHER: Well, we’re the Unsubtlebergs. We have the same surname. That means we have special obligations to each other. We’re obliged to put the interests of our family members first, ahead of strangers.
MOTHER: Our ‘interest’ here being that we don’t lose an hour or two out of our camping holiday? You really think the random chance that made you an Unsubtleberg gives you the right to ignore a guy bleeding out not ten feet away from where you sit?
BROTHER: Of course I do. Also I didn’t want to say anything, but I got a papercut a few minutes ago while thumbing through this copy of Clumsy Allegory Enthusiast magazine. Don’t you think we as a family should be tending to our own wounds first before we start helping others?
SISTER: Besides, we’re four people in a mid-sized sedan. What do you want us to do: squish up? That would be less comfortable.
FATHER: And they might bleed all over the upholstery!
BROTHER: Hey look, there’s someone on a bicycle out there! The rider has gone over to the injured person. That means they got there first, so they should take the injured driver to hospital, not us.
MOTHER: On a bicycle? You can’t dink someone to hospital from the middle of nowhere. We’re in a car, so it’s up to us to help.
BROTHER: Why? Just because they need help doesn’t mean they have the right to sit in a nice vehicle. If they’re really injured, being slung over the bars of a ten-speed should be good enough for them, right? Otherwise they’re not even a patient, they’re a hitchhiker.
SISTER: Also, if we stop to help, aren’t we just encouraging accidents? If we rescue people from accidents like this, more and more people will start driving on country roads.
BROTHER: The only thing that really matters here is that we Stop The Traffic.
FATHER: [Opens window, calls to cyclist] Hey buddy, how’s the patient looking?
CYCLIST: I’m… I’m afraid they’re gone. Just a moment ago. I didn’t have any bandages or anything, there was just nothing I could do…
SISTER: Wow. Look at that poor body, lying there like that. Heartbreaking.
BROTHER: I’ll have that image seared into my brain forever.
We consider the border not to be a purely physical barrier separating national states, but a complex continuum stretching offshore and onshore, including the overseas, maritime, physical border and domestic dimensions of the border.
That language set ABF up for obvious lines of parody (“the border is just a state of mind, maaan…”) and the inevitable “It’s the vibe of the thing” memes. It all sounded a bit too, well, philosophical for a government department, let alone a newly-uniformed and armed organisation.
It’s certainly an unfamiliar thing when governments start to sound like philosophers, though there is precedent. In the middle of last decade, for instance, the Israel Defense Forces experimented with strategies based on Critical Theory and twentieth-century French philosophy, particularly the work of Deleuze. It wasn’t a success, reportedly because “Not every officer in the IDF had the time or the inclination to study postmodern French philosophy.”
Imaginary lines and real lives
But waxing philosophical about borders is a perfectly reasonable thing to do, for borders, in the literal sense, are inherently abstract. They are the legal and cartographic expression of historical, cultural, and political contingencies. Not surprisingly, imposing these abstractions on the physical world sometimes leads to absurdity.
For instance, as a result of a complex set of Medieval treaties and land purchases, the Dutch municipality of Baarle-Nassau contains a patchwork of Belgian enclaves (Baarle-Hertog), some of which themselves contain parcels of Dutch territory, nested like Russian dolls. There are cafes that straddle the border; at one time, when Dutch law imposed early closing time on restaurants, patrons sitting in the Netherlands would simply get up and move to a table on the Belgian side of the room.
Elsewhere, an interminable dispute between Egypt and Sudan over which of two century-old borders is the right one means that Bir Tawil, an uninhabited 2,060 km² patch of desert, is unclaimed by any nation: Egypt insists it’s part of Sudan, and Sudan insists it’s part of Egypt.
And for every such piece of quaint geopolitical trivia, there are uncountable tragedies connected with or occasioned by borders: tragedies of separation, of deprivation, of conflict, of death. We should be thinking hard about borders. They may be abstractions but their impact is desperately real.
Sovereignty and control
In December of 2014, the newly appointed Secretary of the Department of Immigration and Border Protection, Mike Pezzullo, gave a speech on “Sovereignty in an Age of Interdependency” in which he attempted to do just that sort of thinking. It’s a significant speech with far-reaching implications, one that both puts his department’s conceptualisation of borders into context, and unwittingly exposes the very conceptual problems at the heart of how we think about migration.
Pezzullo declared that while the mission of DIBP’s predecessor institutions going back to 1945 had been one of nation-building, now it was one of negotiating the tension between the openness required by globalization and the post-Westphalian state’s “ancient coding as a vehicle for territoriality and exclusion”:
I see them [borders] as mediating between the imperatives of the global order, with its bias towards the flow of people, goods, capital, data and knowledge, and the inherent territoriality and capacity for exclusion which comes with state sovereignty.
In the simplest of terms, modern states demand the right to determine how they will live within their own boundaries, but also seek the benefits of open movement of goods and people and of a rule-governed international order. The tension here is that we claim the right to make rules for ourselves while living in a broader environment that requires us to subject ourselves to external rules if we are to gain certain benefits.
In that context, ABF’s “complex continuum” model of the border makes more sense. But it also draws our attention to what that border really amounts to.
Our gift (not) to give?
The language of ‘border protection’ is useful for governments, as it conjures up images of patrolling a physical frontier, of keeping a walled populace safe from a hostile world without. That we’re now being told this wall is in fact a “continuum” that exists all the way to Flinders Street (even if Fortitude is what the ABF regards as a “behind the border” operation) gives the lie to this imagery. Entitlements to remain in a country aren’t created by borders, but the other way around: borders exist because of such entitlements. They are functions of a right that states claim for themselves, a right that Pezzullo sums up as being “able to determine who and what has the right, or gift, of entry or exit, and under what conditions.”
As I’ve argued here before, part of what makes the problem of asylum seekers so disturbing for us in the developed world is that these people’s very existence calls into question our assumed entitlement to live where we do, as we do. What moral rights does the mere accident of birth bestow upon us? Why should I be rich and safe and the other debased and imperilled? How do we derive rights of territorial exclusion from such sheer contingency?
The more fundamental question raised by this concept of the border is not how to balance sovereignty against the demands of global commerce; the question is what entitles us to make – or withhold – a gift of something we haven’t ourselves earned.
Community and contingency
You might reply that rights of abode derive from certain forms of connectedness to the community. The taunt of “I grew here, you flew here” is meant to convey that the speaker has the relevant kind of connections, and so an entitlement to be here, while the ‘newcomer’ does not. But simply being born here doesn’t automatically mean you’re connected to the community, while as the current case of Mojgan Shamsalipoor attests, connection to the community is no protection against the threat of deportation either.
Even if we could establish such a right on the basis of concrete connections, we’d still be left with the more fundamental challenge of whether we have a right to insist on ‘sovereignty’ ahead of our duty of concern for the other. In his speech, Pezzullo speaks of border protection as giving governments the “space” to be “compassionate” towards asylum seekers – phrasing that suggests compassion is somehow one policy option among others rather than a standing moral demand.
He points out that the sheer scale of the global refugee population means no one nation or even group of nations can take on the whole burden themselves. Yet nations already do take on that burden – asylum seekers physically have to be somewhere, after all – so what this really means is that developed nations cannot comfortably take on such burdens. But why simply assume we have a right to be comfortable? What grounds such a right? And just what flows from it?
Confronting our assumptions
If our current policy settings are to be believed, almost anything is licensed by our ‘right’ of exclusion, up to and including offshore detention in conditions so horrific it is clearly meant to be a cruel deterrent to anyone who would dare challenge us, not a bureaucratic mechanism for the orderly flow of people across borders.
The spectre of uniformed quasi-police checking papers in the middle of Melbourne rightly disturbed enough of us that it caused an immediate backlash. And while Australia continues to pull its hair out over relatively tiny numbers of “irregular” arrivals, Europe continues to experience appalling tragedies as it struggles to deal with incoming refugees and migrants.
Both these events confront us with what – if anything – underpins our claimed right of exclusion, even in the face of suffering and death. We should indeed be thinking about borders. We just might not like where that thinking takes us.
This piece originally appeared on The Conversation’s Cogito blog.
Whatever you think of his views, or of how he came to sit in the Senate, it’s hard to deny that David Leyonhjelm is the real deal: a conviction politician whose positions are governed by principle, not populism.
The problem for his supporters is that Leyonhjelm is exposing the disturbing moral thinness of the libertarian principles he espouses.
In the wake of a parliamentary committee recommending a referendum on constitutional recognition for Indigenous Australians, Leyonhjelm repeated the claim that there’s doubt in anthropological circles that the Aboriginal nations were the first inhabitants of the Australian continent.
That claim struck many as decidedly odd. However, this empirical claim is just one component of a larger position that Leyonhjelm outlined in a speech in March. Appeals to anthropological data and a curious concern not to exclude those who don’t respect Aboriginal culture are just ornaments to his main objection. That comes near the end of his speech, and is both perfectly consistent with his ideological commitments, and perfectly emblematic of what’s wrong with libertarianism:
Every human being in Australia is a person, equal before the law. Giving legal recognition to characteristics held by certain persons – particularly when those characteristics are inherent, like ancestry – represents a perverse sort of racism. Although it appears positive, it still singles some people out on the basis of race.
This is a familiar argument: if we’re all equal, and if it’s wrong to discriminate on the basis of race, then it’s just as wrong to discriminate positively as negatively.
The problem with talking about equality at this level of abstraction is that it makes the reality of material privilege invisible. And the bigger problem is that for libertarians, and a great many classical liberals, that’s not actually a problem at all.
The skinless enlightenment man
Treasurer Joe Hockey insists the state promises “equality of opportunity” but not “equality of outcome”. But “equality of opportunity” here is understood as mostly formal. It’s the equality Anatole France spoke of:
The majestic equality of the law, which forbids rich and poor alike to sleep under bridges, beg in the streets, and steal loaves of bread.
But if you can strip human beings back to a self so abstract that purely formal equality seems compelling, you can convince yourself that practical disadvantage doesn’t matter. Leyonhjelm’s speech name-checked the Enlightenment, and this is fitting. What emerges from the Enlightenment and its early modern antecedents is, as the philosopher Charles Taylor puts it, a “buffered” self, an autonomous agent impervious to external forces.
The men of the Enlightenment – figures like Locke, Hume, Kant, Jefferson and Rousseau – laid out the value of liberty and the essential dignity of humans spectacularly well, but the humans they were describing looked an awful lot like themselves. Being white, male, heterosexual, well-educated and materially comfortable – qualifications which allow you to pass through the world without the kinds of friction that others encounter – makes it much easier to conceive of yourself as an objective centre of disembodied reason and freedom.
Abstract reason doesn’t go hungry. Abstract reason has no skin; it is not born into a body situated into a world of meanings it cannot control.
Nor does it have a history. In speaking of everyone “celebrating ancestry”, Leyonhjelm quite explicitly collapses the experiences of an Indigenous Australian, an asylum seeker, and an Anglo-Celt into one very big but very shallow bucket. Racial identity is reduced to “ancestry” and shunted back into a past that’s available for voluntary “celebration” but exerts no real force on the present.
The “buffered self” isn’t buffeted, let alone constrained or determined, by the winds of history. It stands above history just as it stands above embodiment.
And to suggest otherwise? To suggest that history and its sequelae must be acknowledged? Why, that would be singling people out on the basis of their race. That’s racist.
Saving us from ourselves
In some ways this is all in keeping with libertarianism’s refusal to see anything but individual liberty as having decisive moral weight. Freedom, just so we’re clear, is desperately important. It’s one of the main features of the moral landscape that politics must be responsive to. But a myopic focus on individual liberty, linked to a thin conception of persons that sees human dignity simply as the free exercise of autonomy, obscures other vital features of that landscape.
Leyonhjelm has apparently won support for a parliamentary inquiry into the “Nanny State”. Once again, there is commendable philosophical clarity and consistency in his position:
The issue here is, to what extent is the government entitled to legislate – and we’re not talking about just giving advice – but to legislate, to protect you from your own bad choices. Bicycle helmets are a very good example of that: nobody is hurt if you fall off. If you don’t wear a bicycle helmet, your head’s not going to crack into somebody else and damage them.
This is the classic Liberal Harm Principle: no-one is entitled to interfere with your personal behaviour so long as it doesn’t impact on anyone else. Hence if you want to smoke, or ride a bike without a helmet, this is an essentially “personal” matter that no-one else should interfere with.
Leyonhjelm hit back against criticism for apparently being more concerned about the imaginary health effects of wind turbines than the very real health effects of tobacco. Such criticism misses the point: libertarians don’t care what you do to yourself, just to other people. Smoke ‘em if you got ‘em.
I’ve noted before that even classical liberals like Mill drew the line at suicide – as this destroys the very freedom that the Harm Principle is meant to respect – though some libertarians such as the late Robert Nozick were prepared to countenance a wider right of self-disposal.
But consider whether you have a right to wrestle a would-be suicide down from a window ledge or bridge. To conclude that this would be an unfair interference in their personal autonomy involves a certain blindness, a whittling of the person down to the point where their only remaining value is rational autonomy. The independent, buffered Enlightenment subject: a pure atomistic locus of self-directed freedom, including the freedom to jump.
What is bled out of that picture is the essential interconnection of persons, grounded in our intersubjective constitution. When John Donne famously declared “no man is an island entire of itself”, he knew exactly what that implied:
Any man’s death diminishes me, because I am involved in mankind. And therefore never send to know for whom the bell tolls; it tolls for thee.
No-one “merely” harms themselves, but inevitably harms those around them in doing so. My life is not entirely my own – nor is its value reducible to my autonomy.
Thick value and power-blindness
People – real, concrete, loving, feeling, people – matter in deep, distinctive ways, ways that strain the resources of our moral language. And, accordingly, their deaths – which rob the world of something inherently precious – also matter, at least enough for us to sometimes try and save people from their own objectively bad choices. But that sort of thick moral value is lost in the remorseless thinning-down of libertarian calculation.
Even Leyonhjelm’s support of same-sex marriage, for instance, doesn’t seem to be grounded in a view that long-term same-sex relationships are intrinsically good things that deserve access to the same sort of recognition as heterosexual ones, so much as a pervasive dislike of governments saying “no” to people.
Also, when you denude the world of moral pith by abstracting people down into their Enlightenment ghosts in this way, you end up peeling away the level on which real power operates. That makes it easier to pretend we’re now living in some sort of post-racial utopia in which any attempt to redress ongoing power imbalances becomes “reverse racism”.
Equality, it seems, is achieved simply by refusing to acknowledge that inequality remains to be overcome, and by refusing to see the privilege of one’s own position.
Think of Human Rights Commissioner Tim Wilson complaining that the law – and not just social sanction – prohibits racially loaded terms being used by some people but not others. This misses the point that the words in question aren’t just words used to denigrate minorities: they’re words used by white people to denigrate others.
Wilson doesn’t magically stop being white when he speaks, and he doesn’t get to sidestep the historical meanings of a white man using those words. None of us gets to be the pure monad of ahistorical, acultural reason the Enlightenment imagined us to be.
But this charge of “reverse racism” is deeply attractive from a certain perspective. It’s a way of pretending you can talk about racism, or sexism, or homophobia, without talking about power. That’s comforting for those who sense true equality would mean that they – we – might have to give up some of that power.
Patrick will be on hand for an author Q&A between 3PM and 4PM AEST on Tuesday, June 30. Post your questions in the comments section below.
Editor’s note: This piece was updated after publication to clarify Human Rights Commissioner Tim Wilson’s comments.
This piece originally appeared on The Conversation’s Cogito blog.
We’ve just seen another mass shooting in the US. This time it was a church, and race hate was the cause. Other times it’s a school, or a cinema, or a university, or a shopping mall.
By now, the script is sickeningly familiar: the numbing details of horror, followed by the bewildered outrage, the backlash that attempts to delimit and isolate and resist any wider analysis, and finally the inevitable failure to act.
But in the age of the internet there’s an extra, ghoulish twist.
Within days, and increasingly, within mere hours and minutes, a tragic event is being filtered through a worldview that insists these events are not what they seem. Conspiracy theorists leap on the tragedy as yet more evidence of dark forces manipulating the world for their own nefarious ends. The kids killed at Sandy Hook Elementary in Newtown, CT? They never existed. Their grieving families? “Crisis actors.” This is all Obama, you see, and his one-world-government comrades staging ‘false flag’ attacks to justify disarming the citizenry. He’s coming for your guns.
And yes, this process has already started around Charleston. Lunar right media identity Alex Jones’ Infowars immediately questioned whether the shooting was a “false flag.” Others came out of the woodwork to insist the shooter’s manifesto was a fraud, and that the “shooter” was in fact a 33-year-old Marine and former child star of Star Trek: The Next Generation and Doogie Howser, M.D.
Did you laugh just then? It’s understandable if you did. Most conspiracy theories are, unless you’re the one pushing them, pretty absurdly funny. Conspiracy theorists are always good for a chuckle.
Until they aren’t.
In his critical introduction to conspiracy theories, the sociologist Jovan Byford notes that the academic study of conspiracy theories went through a phase where scholars treated these theories as intriguing pop-culture artefacts that were essentially harmless. In the X-Files-inflected 90s, decades out from the horrific anti-Semitic conspiracy fantasies of Nesta Webster and the Protocols of the Elders of Zion, it was easy to treat conspiracy theory as an exercise of playful postmodern irony. No-one gets hurt, right?
Tell that to Gene Rosen, who helped kids who had fled the shooting at Newtown only to be hounded with abusive phone messages from people accusing him of being a government stooge. Tell that to the families of Grace McDonnell and Chase Kowalski, two seven-year-olds killed at Newtown, whose parents had to endure a phone call from the man who stole the memorial to their children telling them their children never existed.
But the harmfulness of conspiracy theory arguably goes much deeper than this. It’s not just that conspiracy belief sometimes causes people to do terrible things. It’s that attachment to the conspiracy worldview violates important norms of trust and forbearance that are central to how we relate to each other and the wider world.
There’s remarkably little philosophical work on conspiracy theory, though intriguingly, most of what does exist has been produced by Australians and New Zealanders such as David Coady, Charles Pigden, Steve Clarke, and recently Matthew Dentith (I haven’t yet had a chance to get hold of Dentith’s new book on the philosophy of conspiracy theory, but it looks interesting). That work has concentrated mostly on issues of rationality and epistemology: is it rational to believe in conspiracy theories?
Interestingly, the answer is: more rational than we might think. After all, conspiracy theories manage to explain all the loose ends (“errant data”) that the ‘official’ story doesn’t. Viewed purely as a form of inference to best explanation, conspiracy reasoning doesn’t seem to be inherently illogical on its face.
However, as Byford points out, conspiracy theory is a “tradition of explanation” (conspiracy theories don’t arise from nowhere but draw upon earlier narratives, often with deeply problematic origins) that has a shockingly bad strike rate. Real conspiracies have certainly happened – Watergate, Iran-Contra etc. – but how many have ever been uncovered by conspiracy theorists?
Academic discussions of conspiracy theory tend to focus on long-lived varieties, the ones that attract large numbers of adherents around a relatively stable core. It’s that duration that allowed Steve Clarke to analyse these theories using the framework of progressive and degenerating research programs, borrowed from the philosopher of science Imre Lakatos. In science, progressive research programs explain more and more observations and make successful predictions. When confronted by data that seems to disconfirm the theory, they posit ‘auxiliary hypotheses’ that actually strengthen the theory, by allowing it to explain and predict even more.
Degenerating research programs, by contrast, are stuck on the defensive: they don’t explain any new observations, nor make successful predictions, and are constantly having to defend themselves from new data that contradicts the theory. Clarke is right that most conspiracy theories are like that. If the various US shootings are government false flags designed to help Obama implement gun control, why is it taking so long? Shouldn’t at least one whistleblower have come forward?
A conspiracy theorist, led by the inexorable logic of their tradition of explanation, might double down at this point: the conspiracies we think we know about are just covers for the real conspiracies, while the reason conspiracy theory never seems to yield results is that the conspirators are making sure it doesn’t. Absence of evidence isn’t evidence of absence: it’s proof positive of a conspiracy.
You can see how that sort of theorising is, to a certain extent, a frictionless spinning in a void. Any observation confirms the conspiracy, and any data that seems to disconfirm it also confirms it. It’s an ‘explanation’ of observed reality that comes at the cost of making its central beliefs unfalsifiable. But that’s not the only problem.
To believe in conspiracy theory, you must believe in conspirators. To maintain a conspiracy theory for any length of time, you must claim that more and more people are in on the conspiracy. Clinging to degenerating research programs of this type involves making more and more unevidenced accusations against people you know nothing about. That’s not without moral cost. Suspicion should always involve a certain reluctance, a certain forbearance from thinking the worst of people – a virtue that is sacrificed in the name of keeping the conspiracy theory going. In the process, real human tragedy is made into a plaything, fodder for feverish speculation that does no real epistemic or practical work.
Our relationship to each other and to society as a whole also only works against a generalised assumption of trustworthiness. Imagine if you believed by default that everyone is lying to you: how could you possibly function, or even communicate? Laura D’Olimpio recently wrote on Cogito about the importance of trust, and the corresponding vulnerability that requires us to accept. One crucial dimension of trust as a pervasive phenomenon in our lives is its role in our epistemology: most of what we know, we actually take on trust from the testimony of others. I only know that Iceland exists because I don’t believe, to borrow a phrase from Tom Stoppard, in a ‘conspiracy of cartographers.’
To maintain a conspiracy theory requires us to throw out more and more of our socially-mediated sources of knowledge, and to give up more and more of the trust in each other and in our knowledge-generating mechanisms that we are utterly dependent upon. On some level, the ‘conspiracy theory of society’ ultimately asks us to give up on society altogether. And that takes us to a very dangerous place indeed.
I’m delighted to say that Narrative, Identity and the Kierkegaardian Self is now available.
Edited by John Lippitt and myself, this is the first collection on Kierkegaard and narrative personal identity in over a decade – think of it as Kierkegaard After MacIntyre After Kierkegaard After MacIntyre – and brings together leading narrativists and Kierkegaardians in a new and productive dialogue. This book is one of the outcomes of the Selves in Time project and follows on from the conference we ran at Hertfordshire in November 2011. We hope it will mark an important moment in the ongoing discussion about what Kierkegaard can contribute to our understanding of the self.
But hey, don’t just take my word for it:
‘Are our lives enacted dramatic narratives? Did Kierkegaard understand human existence in these terms? Anyone grappling with these two questions will find in these excellent essays a remarkable catalogue of insights and arguments to be reckoned with in giving an answer. That is no small achievement.’
- Professor Alasdair MacIntyre, University of Notre Dame
Here’s what’s inside:
The Moments of a Life: On Some Similarities between Life and Literature – Marya Schechtman
Teleology, Narrative, and Death – Roman Altshuler
Kierkegaard’s Platonic Teleology – Anthony Rudd
Narrative Holism and the Moment – Patrick Stokes
Kierkegaard’s Erotic Reduction and the Problem of Founding the Self – Michael Strawser
Narrativity and Normativity – Walter Wietzke
The End in the Beginning: Eschatology in Kierkegaard’s Literary Criticism – Eleanor Helms
Forgiveness and the Rat Man: Kierkegaard, ‘Narrative Unity’ and ‘Wholeheartedness’ Revisited – John Lippitt
The Virtues of Ambivalence: Wholeheartedness as Existential Telos and the Unwillable Completion of Narratives – John J. Davenport
Philosophers love to complain about bad reasoning. How can those other people commit such silly fallacies? Don’t they see how arbitrary and inconsistent their positions are? Aren’t the counter examples obvious? After complaining, philosophers often turn to humor. Can you believe what they said! Ha, ha, ha. Let’s make fun of those stupid people […] It puts us out of touch partly because they cannot touch us: we cannot learn from others if we see them as unworthy of careful attention and charitable interpretation. This tendency also puts us out of touch with society because we cannot touch them: they will not listen to us if we openly show contempt for them.
You can disagree with the specifics of McBrayer’s causal claim that the way ethics is discussed in schools contributes to widespread moral antirealism (roughly, the view that the universe contains no moral facts), but he’s right that antirealism seems to be a great many people’s default view, even if their choices and actions suggest otherwise.
As I’ve said before, this is a bugbear of mine. Moral antirealism might turn out to be true, but it’s not just obviously true. And you can only read so many essays and online comments in which people don’t even understand the suggestion that ethics might be more than subjective before it starts to get to you.
So page after page of comments on McBrayer’s piece insisting that of course there are no moral facts and it’s ridiculous that a so-called philosopher could think otherwise got me snarky. This, I sneered, is why we can’t have nice things. Here’s a professional moral philosopher trying to explain a matter within his expertise and being dismissed, even belittled, by people who clearly don’t even understand what he’s saying. Why would people simply ignore what he’s saying like this? Would they do this to a scientist, or a surgeon, or a lawyer?
Well, yes, of course they would. We live in an age in which everyone, labouring under the delusion that they are always and everywhere entitled to their own opinion, feels fully equipped to tell experts they are flatly wrong about their area of expertise. So this is in many ways a problem of degree rather than kind.
But science denialists of various stripes – anti-vaccinationists, climate denialists, 9/11 Truthers, Wind Turbine Syndrome proponents – usually at least pay lip service to playing the game. They make (bad) arguments, cite (dodgy) sources, and generally at least try to give the impression they are doing better science than actual scientists.
Philosophy denial, it seems to me, is a somewhat different beast. Philosophy denialists – including a disheartening number of high-profile physicists – deny the value of philosophy itself rather than simply taking issue with specific philosophical claims.
And as Sinnott-Armstrong points out, a large part of that is philosophers’ own fault. He notes that while scientists frequently make an effort to explain what they do to the general public, philosophers don’t do so nearly as often:
As a result, the general public often sees philosophy as an obscure game that is no fun to play. If philosophers do not find some way to communicate the importance of philosophy, we should not be surprised when nobody else understands why philosophy is important.
Fortunately, more and more philosophers are taking up this challenge. This new group blog you’re reading now hopes to be a contribution in that direction. It will feature writing from a team of Australian philosophers committed to the idea that philosophy can’t be a purely abstract pursuit, but must also connect with how we live and what we care about.
I say ‘must’ quite deliberately. Put simply, philosophy is too good, and too important, to keep locked up in the academy. Philosophy may appear ‘an obscure game,’ but it’s also uniquely powerful in its ability to illuminate, complicate, and break wide open things we consider settled and clear.
Just as importantly, at its best, philosophy’s probing of the physical, conceptual, logical, aesthetic, and moral universes turns back upon the questioner themselves. It encourages the mental activity we might now call metacognition and the corresponding virtue of metarationality. To use an older language, it teaches us to know ourselves and to know our own limits, how to reason and how to map the limits of our ability to do so. At the core of philosophy lies the Delphic saying that motivates so many of Plato’s dialogues, γνῶθι σεαυτόν: ‘know yourself.’
But here be dragons.
Across twenty-five canonical dialogues (and another ten of dubious authorship) Plato depicts his mentor Socrates down in the marketplace, asking questions of passers-by. Socrates speaks from a position of professed ignorance. He knows nothing, but he at least knows he knows nothing, which already puts him ahead of his neighbours – who mistakenly think they know a great deal. And so Socrates asks his fellow Athenians about the most fundamental, seemingly obvious matters. Then through careful, incisive, and frequently prolonged questioning, he turns their preconceived understandings on their heads, sometimes reducing his interlocutors to bewildered, humiliated wrecks.
That ended about as well as you’d expect. Socrates saw himself as a “gadfly,” fated “to sting people and whip them into a fury, all in the service of truth.” Gadflies are rarely welcomed. In the Apology, Plato’s account of the trial of 399 BCE in which Socrates was sentenced to death for impiety and corrupting the youth, Socrates describes the general reaction to his method:
… young men of the richer classes, who have not much to do, come about me of their own accord; they like to hear the pretenders examined, and they often imitate me, and examine others themselves; there are plenty of persons, as they soon enough discover, who think that they know something, but really know little or nothing: and then those who are examined by them instead of being angry with themselves are angry with me (Apology 23c-d)
Socrates, it must be said, does himself no favours in the Apology. Given the chance to plead for his life, he simply doubles down on the things that made the Athenians want to kill him in the first place, and then literally demands a reward for doing so. I once polled a philosophy class at the start of a lecture on the Apology on whether or not the Athenians were right to execute Socrates. Then I polled them on the same question when the lecture finished. Slightly more voted in favour of death the second time around. Socrates, it’s fair to conclude, was just really annoying.
But the deeper point here is that what makes philosophy so powerful is also precisely what makes it so uncomfortable: it dissolves obviousness. It takes things that seem so unimpeachably self-evident we don’t even notice them and throws them into doubt. It shakes the unshakeable and corrects the incorrigible.
That is thrilling, liberating, even intoxicating; but it is also unsettling and even infuriating. Finding out you might have been wrong about things that seem obvious – such as that there are no moral facts, for instance – is rather inconvenient. The snide comments on McBrayer’s article are of a piece with Neil deGrasse Tyson’s and Lawrence Krauss’ impatience with philosophical questions. Philosophy just spins its wheels, gets in the way and slows us down.
My philosophical first love, Søren Kierkegaard, writes his unwieldy masterwork Concluding Unscientific Postscript to Philosophical Fragments from the persona of a certain Johannes Climacus, a thirty-year-old idler and Socratic gadfly. In an age of increasing reflection, sophistication, and haste, Climacus is on a mission, as Paul Muench points out, to slow his reader down. Oh, so you think you’ve understood the basics, know what’s what, and are impatient to move on to more challenging questions? Really? Linger a while, friend. Have you really understood what the good is, or how you should live, or what it means that you’ll die? Really? You sure?
Welcome to the blog. We hope it gets in your way and slows you down.
Busting someone for plagiarism after they suggested men’s violence against women is somehow feminism’s fault for taking away power men were never entitled to in the first place might seem a bit like sending Al Capone up the river for tax evasion.
When he was caught in 2012, Ahmed admitted his copying was wrong, but situated what he did in the context of our contemporary “comment monster” that “needs to be fed”. He has a point: in the age of churnalism, and with the internet desperate for ever-more shareable content to throw into its mighty Clickhole, is copying a paragraph here and there really so bad?
Well, yes, actually, it is; but we need to understand why.
Writers are understandably highly sensitive to plagiarism, both of having it done to them and of being accused of it. Just this month I ended up offending a journalist I admire greatly for cocking an eyebrow too publicly over her piece’s resemblance to one of mine. It was a coincidence (great minds and all that) but the mere suspicion is utterly poisonous for everyone involved.
For academics, it’s even worse: plagiarism is among the worst of sins, and potentially one of the most catastrophic. It’s not so long ago that decades-old plagiarism allegations cost a Group of Eight vice-chancellor his job. An academic who presents the ideas of others as their own is violating the very integrity of the process by which knowledge is generated, and demeaning their fellow researchers. Understandably, we take that pretty seriously.
Students too face enormous consequences for plagiarising, which in the age of essay mills and Google is an irresistible temptation for many, Turnitin be damned. For teachers, it’s hard not to take plagiarism personally, as if the student is saying: “This stuff you’ve devoted your life to? It’s not important enough for me to bother even trying to care about.”
But plagiarism also falls on a spectrum, from outright copying (relatively rare, but occasionally spectacular) to sloppy referencing or missing quotation marks. For every student who’s tried to pull one over on you, there are five more who simply haven’t understood what’s expected of them and hence don’t realise that they’ve done anything wrong.
Ahmed doesn’t have that excuse. He’s been caught before, and he knew even back then that what he was doing was wrong. Still, you might reply, he isn’t writing as an academic, but as a paid columnist. He is not presenting his work as the outcome of laborious and expensive research, nor is he submitting his work to the unforgiving gauntlet of peer review.
Some of Ahmed’s infractions are actually self-plagiarism, or recycling, which doesn’t rip off another writer. Self-plagiarism is alarmingly easy to do accidentally, particularly where multiple drafts exist or one piece splits into two. Just recently I sent off two academic articles (which had begun life as a single piece) without realising I’d repeated a paragraph in each; it was just dumb luck that I caught it before it got to print.
So, why is Ahmed’s plagiarism a sackable offence?
Using the model of intellectual property we could argue that the misdeed here is selling a product that doesn’t belong to him, either because it was written by someone else, or because, in the case of recycling, he had previously sold it to another publisher.
In short, it’s a type of theft. (That might also explain why self-plagiarism doesn’t seem that bad in cases where no money has changed hands: you’re repeating yourself, but not actually stealing).
From the point of view of the victims – both writers and commissioning editors – the theft model makes a lot of sense. But I don’t think theft alone is the whole story, because plagiarism isn’t just about ownership. I’ve heard stories of students angrily insisting that a ghost-written essay “is my own work – I paid for it, so I own it!”
The effrontery of that response actually gives us some idea of why the “intellectual property” model of plagiarism doesn’t quite yield a full understanding of what’s wrong with it. Plagiarism isn’t just a form of theft; it’s also a form of insincerity.
That may sound odd: surely the plagiarist is sincerely agreeing with what they’ve copied? But sincerity is not just a matter of saying something you take to be true. The 20th-century philosopher K.E. Løgstrup spoke of insincerity as violating an “openness of speech” that we reasonably assume from others: we expect that their words are “new”. By “new” he meant that they weren’t calculated or premeditated but were spontaneous, sincere, without guile. “Old” words put up a barrier between speaker and hearer and thereby frustrate true dialogue.
The late Robert C. Solomon, in developing the idea of sex as a form of communication, argued that perverted sex is to sex as insincerity is to language: using language to frustrate communication by obfuscating or concealing what you really think or intend is perverting the very function of language.
(That analogy took Solomon in some odd directions – at one point he says masturbation is basically talking to yourself – but the idea has something to it; faking arousal is arguably a form of insincerity that violates the communicative dimensions of intimacy.)
Plagiarism, in a sense, disrupts the contact between the author and the reader. It insinuates someone else’s “old” words between us. We come to an article wanting contact with the author’s mind, not a collage of other minds they’ve assembled to hide behind.
Plagiarism is theft, but it is also a failure to, in E.M. Forster’s phrase, “only connect!” The need to feed the beast shouldn’t distract us from that task of connecting.
Barring some sort of last-minute miracle, two relatively young Australian men, Andrew Chan and Myuran Sukumaran, are going to be killed by the Indonesian state. They will not be the first to die this way in 2015. Six other drug criminals have already been executed in Indonesia this year, and more are scheduled.
The Brazilians and Dutch recalled their ambassadors in response to the last executions, which involved two of their citizens. Australia is reportedly doing what it can to save Chan and Sukumaran, but apparently to no avail, and it remains to be seen if we would follow the Brazilian and Dutch examples. ACU Vice-Chancellor Greg Craven has claimed that:
The attitude of Australia and Australians will become part of the reason why these men are executed if we are not sending the right signals to Indonesia.
Meanwhile, Roy Morgan polling finds that 62% of Australians think their government should not do more to stop the executions. A slim majority, 52%, favour such executions going ahead. Yet as recently as 2009 Roy Morgan also found a clear majority opposed to capital punishment in Australia – not even for murder.
That re-introducing the death penalty doesn’t have majority support isn’t surprising. The consequentialist arguments against the death penalty aren’t hard to find. The irreversibility of execution means sickening and unfixable miscarriages of justice are more or less inevitable.
The imperative to avoid those injustices makes the process extremely slow – almost 15 years on average in the US, as of 2010 – and accordingly more expensive than life imprisonment. As a deterrent it simply doesn’t seem to work, perhaps because, oddly enough, it turns out humans aren’t rational, cool-headed calculators who deliberate carefully and act accordingly.
We know all this, or at least we should.
So it seems a large proportion of us are against the death penalty at home, yet favour these executions abroad and don’t want our government to try harder to stop them. How do we reconcile these apparently incompatible beliefs?
I suspect we’re doing it with “arguments” that at bottom have nothing to do with the death penalty as such, but are really just excuses for not caring about it in particular cases. We don’t like the death penalty, we just don’t want the discomfort of having to care about the people it’s applied to. And so we trot out a series of trite, clichéd slogans.
‘Do the crime, do the time’
The obvious rejoinder to this is that execution isn’t “doing time” – even if the years spent on death row are.
But as the ancient Greek philosopher Epicurus insisted, not existing for what would have been the rest of your life is not the same as suffering for that many years.
You might rephrase this as: “Do the crime, pay the penalty”. But just how far do we follow that principle? Penalties can be excessive, or unjust. So surely for such a principle to have force, the penalty has to be proportional to the crime.
If you say Chan and Sukumaran should accept their punishment, you’re thereby committed to saying that execution is a fitting punishment for drug trafficking – a claim that needs to be argued for.
Sukumaran and Chan knew the penalty if they were caught. You cannot arrive anywhere in Indonesia without signs explicitly stating the consequences of importing and exporting drugs on Indonesian soil. It is Indonesian law.
But this confuses moral responsibility with prudential responsibility. If you leave your car unlocked with the window down and your laptop on the front seat, someone might say that you “deserve” to have your laptop stolen. But being imprudent in such a case isn’t the same thing as moral culpability: that rests with the thief. We wouldn’t refuse to stop the thief, or let him off the hook, simply because you’d been careless.
You can agree that Chan and Sukumaran were stupid to take the risk they did, and even that what they did morally deserves punishment, given the misery and death heroin brings with it. But the argument that “they knew the risks” doesn’t, on its own, make their execution appropriate. Those who assert this still owe us an argument.
‘Different countries have different laws and we should respect that’
Linnell, like many others, also insisted that:
… the Bali Nine controversy is equally about sovereign rights and the penalties imposed on those who decide to flout them.
Sovereignty has moral weight, and it’s not hard to imagine cases where it might be ethically right to abide by local laws or norms you nonetheless disagree with.
But, again, this only goes so far. It would be obscene to subordinate the profound wrongness of killing – the very thing many death penalty proponents appeal to – to the need to respect sovereignty or avoid giving offence. To say that the laws of other countries must always be respected – no matter what they demand – is not so much a statement of principle as a moral abdication.
And that, in a nutshell, is the problem with making arguments like this. It’s not so much ethical reasoning as ritual hand-washing.
If there’s some knock-down argument that makes premeditated killing on the part of the government appropriate, or shows how more death and misery is somehow going to put the world to right, let’s hear it. Those who think we shouldn’t care about this owe us better than rhetorical fig leaves for indifference.
Bali Governor Made Mangku Pastika has said that the executions should take place – just not in Bali. It seems it’s OK for things like this to happen, so long as they don’t happen here, where we have to confront the full reality of what is done when the state ends a life, of what it is to shoot a man tied to a stake.
The human animal takes a remarkably long time to reach maturity. And we cram a lot of learning into that time, as well we should: the list of things we need to know by the time we hit adulthood in order to thrive – personally, economically, socially, politically – is enormous.
But what about ethical thriving? Do we need to be taught moral philosophy alongside the three Rs?
Ethics has now been introduced into New South Wales primary schools as an alternative to religious instruction, but the idea of moral philosophy as a core part of compulsory education seems unlikely to get much traction any time soon. To many ears, the phrase “moral education” has a whiff of something distastefully Victorian (the era, not the state). It suggests indoctrination into an unquestioned set of norms and principles – and in the world we find ourselves in now, there is no such set we can all agree on.
Besides, in an already crowded curriculum, do we really have time for moral philosophy? After all, most people manage to lead pretty decent lives without knowing their Sidgwick from their Scanlon or being able to spot a rule utilitarian from 50 yards.
But intractable moral problems don’t go away just because we no longer agree how to deal with them. And as recent discussions on this site help to illustrate, new problems are always arising that, one way or another, we have to deal with. As individuals and as participants in the public space, we simply can’t get out of having to think about issues of right and wrong.
Yet spend time hanging around the comments section of any news story with an ethical dimension to it (and that’s most of them), and it quickly becomes apparent that most people just aren’t familiar with the methods and frameworks of ethical reasoning that have been developed over the last two and a half thousand years. We have the tools, but we’re not equipping people with them.
So, what sort of things should we be teaching if we wanted to foster “ethical literacy”? What would count as a decent grounding in moral philosophy for the average citizen of contemporary, pluralistic societies?
What follows is in no way meant to be definitive. It’s not based on any sort of serious empirical data around people’s familiarity with ethical issues. It’s just a tentative stab (wait, can you stab tentatively?) at a list of things people should ideally know about ethics and, based on what I see in the classroom and online, often don’t.
1. Ethics and morality are (basically) the same thing
Many people bristle at the word “morality” but are quite comfortable using the term “ethical”, and insist there’s some crucial difference between the two. For instance, some people say ethics are about external, socially imposed norms, while morality is about individual conscience. Others say ethics is concrete and practical while morality is more abstract, or is somehow linked to religion.
Out on the value theory front lines, however, there’s no clear agreed distinction, and most philosophers use the two terms more or less interchangeably. And let’s face it: if even professional philosophers refuse to make a distinction, there probably isn’t one there to be made.
2. Morality isn’t (necessarily) subjective
Every philosophy teacher probably knows the dismay of reading a decent ethics essay, only to then be told in the final paragraph that, “Of course, morality is subjective so there is no real answer”. So what have the last three pages been about then?
There seems to be a widespread assumption that the very fact that people disagree about right and wrong means there is no real fact of the matter, just individual preferences. We use the expression “value judgment” in a way that implies such judgments are fundamentally subjective.
Sure, ethical subjectivism is a perfectly respectable position with a long pedigree. But it’s not the only game in town, and it doesn’t win by default simply because we haven’t settled all moral problems. Nor does ethics lose its grip on us even if we take ourselves to be living in a universe devoid of intrinsic moral value. We can’t simply stop caring about how we should act; even subjectivists don’t suddenly turn into monsters.
3. “You shouldn’t impose your morality on others” is itself a moral position.
You hear this all the time, but you can probably spot the fallacy here pretty quickly: that “shouldn’t” there is itself a moral “shouldn’t” (rather than a prudential or social “shouldn’t,” like “you shouldn’t tease bears” or “you shouldn’t swear at the Queen”). Telling other people it’s morally wrong to tell other people what’s morally wrong looks obviously flawed – so why do otherwise bright, thoughtful people still do it?
Possibly because what the speaker is assuming here is that “morality” is a domain of personal beliefs (“morals”) which we can set aside while continuing to discuss issues of how we should treat each other. In effect, the speaker is imposing one particular moral framework – liberalism – without realising it.
4. “Natural” doesn’t necessarily mean “right”
This is an easy trap to fall into. Something’s being “natural” (if it even is) doesn’t tell us that it’s actually good. Selfishness might turn out to be natural, for instance, but that doesn’t mean it’s right to be selfish.
This gets a bit more complicated when you factor in ethical naturalism or Natural Law theory, because philosophers are awful people and really don’t want to make things easy for you.
5. The big three: Consequentialism, Deontology, Virtue Ethics