Death According to Derek Parfit

As most people who follow news about the discipline will know, the renowned philosopher Derek Parfit died on New Year’s Day. There are already plenty of obituaries around praising his life – I wonder if their authors ever consider that such praise might have been nicer if the recipient could still receive it. In any case, I think that if not the best, then at least the most productive fate that can befall a person of ideas is the continued discussion of his or her work.

Much of Parfit’s work is dedicated to the issue of personal identity. Not in the sense of one’s skin colour, personal qualities and quirks as observed by others, but in the sense of what makes a person that particular person – what makes people feel convinced that they are themselves. This of course faces obstacles in the form of our assumption and intuition that there is nothing to dissect. After all, it is a common thing to experience and a banal thing to express: “I am me. For as long as I have lived, I have always been me, and until I die, I shall always remain me”. But who is this “me” that is you? Is it just your brain? Surely, if we took a part of your brain out of your skull, it wouldn’t be fair to say that you had stopped being you – we do this to people with brain tumours or epilepsy, and while they may change, the changes usually aren’t extreme enough to warrant calling them different people. Similarly, if we cut your corpus callosum, the structure that connects the brain’s two hemispheres (which also used to be a treatment for some cases of epilepsy), we wouldn’t say that there are suddenly two of you. And yet such a brain (more accurately, two brains) will develop two separate streams of perception. On the basis of some behavioural changes, it may seem warranted to say that there are now in fact two persons in a split-brain skull. Yet, maybe just for our sanity, we keep treating these cases as one and the same person.

But should we? And, more generally, what is the relationship of one’s personal identity to one’s psychological and biological identity? Can we have a meaningful identity without psychological continuity? Parfit explores some of these questions in his 1971 article “Personal Identity” in The Philosophical Review.

Speaking of continuity and the changes within biological creatures that continue existing, one might wonder what happens as changes to the brain accumulate. At what point do we say that the person is no longer there? Certainly death sounds like a good candidate. But what makes death so different? In reading one of his obituaries, I came across this quote from Parfit’s Reasons and Persons:

There will later be some memories about my life. And there may later be thoughts that are influenced by mine, or things done as the result of my advice. My death will break the more direct relations between my present experiences and future experiences, but it will not break various other relations. This is all there is to the fact that there will be no one living who will be me. Now that I have seen this, my death seems to me less bad.

Whether this is really a comforting thought is up to the reader to decide. For Parfit at least, it’s no longer an issue.


Elections and Legitimacy

The United States presidential election spectacle is in equal parts entertaining and embarrassing to observe from a distance. Meant to illustrate the democratic ideal, it rather resembles satire, in which the pretence of democracy goes through the motions, with the participation of campaign machines and media commentators, under a common agreement that it’s all fake. It’s also interesting from the perspective of political legitimacy.

Political legitimacy, broadly speaking, is the attribute of a government that makes it a genuine embodiment of the wishes of those who selected it. In simpler terms, it’s about whether it’s really right for some people to be laying down the law, as opposed to just dictating it as they see fit, with the help of the violence they can enact on those who oppose them.

A major development in political philosophy in the 17th and 18th centuries – that is, in the Age of Enlightenment – was the formulation and elaboration of the idea that common people, the everyday labourers who sustain a country through their work, should also be able to decide how that country is governed. The beginnings of this thought were not nearly so democratic: Hobbes wouldn’t advocate a representative constitutional republic (he was a monarchist), but he did argue that society is governed by a contract in which humans agree to cooperate. Elaborating upon this idea, John Locke would advance a more radical notion: that whether a government is that of a king or of a representative body of elected officials, what gives it true power is the consent of the ruled, who transfer their power to those who govern. In other words, what is legitimate is what is agreed to. This thought can certainly be found at the basis of a widespread ideal of how modern politics should function.

This is where it gets interesting. One may ask oneself about the legitimacy of the future president of the United States. And here a curious phenomenon arises: while there will surely be only one winner decided by a vote, the winner – presuming it’s either Donald Trump or Hillary Clinton – will be the most unpopular in the history of polling. So unpopular, in fact, that a definite majority of voters support neither candidate. Thus we arrive at what seems a paradox of the political construct within the country – that the government will be led by an official, elected by a majority of votes, who (according to the data) doesn’t actually represent the wishes or leanings of the majority of voters. In other words, we have agreement and representation without actual consent. Many things could be blamed for this state of affairs – some of it is certainly the failed idea of “lesser evil” voting. Regardless of how many pundits and public intellectuals advocate for it, its undeniable consequence is that things can get arbitrarily bad, so long as a worse alternative is presented. But the fact is that neither of the options presented is agreeable to those meant to decide.

What I think should be the conclusion is that the seeming paradox is yet another illustration of a more and more obvious reality. It is the reason for the common antipathy towards the political system, and for the feeling, shared by so many, that working within it is pointless. On one level or another, the masses who don’t care to vote, and the young people who don’t care whatsoever, all realize the same thing: that they don’t actually have any control. They are not represented, the system is broken, and they have no influence on it. And this feeling is not just a subjective appraisal: it’s the actual conclusion of a study. People who aren’t rich really don’t have power.

It’s therefore important to ponder this question: if the formalities of power are just theatrics, if real power lies outside the reach of the ruled, to what extent is this power legitimate? It might be a scary thing to ponder. Whatever happens, whomever they choose, Americans will be confronted with yet another spectacle, and the illusion will become more and more apparent.

No Ought from Is

Presenting philosophical concepts is usually easier when they relate to something current, so it’s in some sense a good thing that Neil deGrasse Tyson, the celebrity astrophysicist, has been defending his idea of Rationalia. Before I comment on it, I would like to step back and talk about an 18th-century British philosopher, David Hume, who provided good reasons why the thinking behind Rationalia is flawed.

Hume, one of the brightest minds of his time, thought and wrote on a range of issues, from philosophy of science, through ethics, all the way to political theory. It is Hume who clearly delineated the problems we face when we infer general principles from observing particular events. It’s also Hume who stated, in A Treatise of Human Nature:

In every system of morality, which I have hitherto met with, I have always remarked, that the author proceeds for some time in the ordinary way of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when of a sudden I am surprized to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is, however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, it is necessary that it should be observed and explained; and at the same time that a reason should be given, for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it. But as authors do not commonly use this precaution, I shall presume to recommend it to the readers; and am persuaded, that this small attention would subvert all the vulgar systems of morality, and let us see, that the distinction of vice and virtue is not founded merely on the relations of objects, nor is perceived by reason.

Book III, Part I, Section I

Hume’s complaint here is that people have a tendency to assume that some moral truths follow in a straightforward manner from some matters of fact. For example, an abolitionist might point out that slavery turns human beings into object-like property, and thus goes against the idea of the equality of all humans. A monarchist might argue that the king, by virtue of his high aristocratic position, or by being a unique leading figure whom everyone obeys, is in the best position to lead a stable country. Notice that, while nowadays people are more likely to agree with the first proposition (“slavery should be abolished”) than the second (“monarchy should be restored”), both go from facts (“humans are turned into property”/”a king is the most influential member of the nobility”) to statements of principle. What Hume points out is that, while we can reason about whether or not the facts are true, this will not help us establish whether the principles are right.

Of course, people who sincerely believe in their principles will often insist that those principles obviously follow from the evidence. And while it’s clearly beneficial in practice to live in a society in which people don’t believe that murder is fine (since it lowers your chance of being murdered), that doesn’t actually establish that the belief is rational. The problem is that “rational” has become – I assume through overuse – a somewhat nebulous term, living somewhere in the semantic vicinity of “espousing a modern scientific worldview” and “affirming the humanitarian ideas of liberal societies”. While it’s true that rational thinking has influenced both modern science and morality, rationality itself is simply a matter of following reason – basing one’s worldview on facts and striving towards logical coherence in one’s conclusions. Notice that morality is completely absent from this picture. The scientific research done by Kurt Blome was no less rational than that carried out by Linus Pauling. And herein lies the problem.

We cannot say – Hume claims, and I agree – that evidence or logic would lead us to formulate “better” principles or policies. As scientific facts don’t take any sides, it is not possible to establish norms of behaviour by analysing them. We cannot deduce an “ought” from an “is”.

Here is where the concept of Rationalia comes in. Rationalia is, in Tyson’s own description, a country whose constitution contains only one sentence:

All policy shall be based on the weight of evidence

https://twitter.com/neiltyson/status/748157273789300736

The problems with this approach are apparent once we consider the is-ought problem. But first, it’s hard to even say what basing policy on the weight of evidence would look like. Who would get to decide how the weighing is done, or where the threshold of certainty lies? Assuming everyone could somehow agree on this – which would be interesting, since a choice of, let’s say, 95% certainty seems just as arbitrary as 96% – we still wouldn’t have moved forward. We can have all the evidence in the world that a given choice would bring about a given set of consequences, but this still doesn’t tell us whether such consequences are desirable, or whether the choice itself is good. Example: spending n billion dollars on renewable energy sources would offset the effects of global warming as efficiently as letting m million people die from poverty. Should we flip a coin?

The problem that Tyson doesn’t seem to understand is that there is always an element that lies beyond the reach of rationality in making any policy choices.¹ There is always an underpinning principle that guides policymaking, which establishes what it is that we’re trying to achieve – what is the result that we care about? What is the ought? Only with that can we say that, for example, preserving wealth is less important than preserving human life. In refusing to acknowledge this, Tyson continues enumerating supposed advantages of Rationalia, never quite explaining how they follow from no ethical commitments. Thus, he produces numerous paragraphs about how there would be more funding for social sciences, a better science education, freedom to be irrational, and other benefits available in Rationalia, but he fails to explain why they would be there. His post is a good example of failing to analyze one’s position carefully and to note one’s assumptions. And such analysis is worthwhile, because it is one of the tools which can take us from everyday thinking to philosophy.

1. Ironically, he cites the U.S. Constitution as an example of a document that doesn’t discuss morals in the very same sentence in which he provides an example of just such a thing – the restriction on the use of the military.

Would It Matter If the Universe Were a Simulation?

The “is the Universe a simulation?” question seems to return to the headlines every so often, as one academic or celebrity (or both) after another takes a stab at conjecturing either way. The most recent case, a claim made by Elon Musk, shows – if anything – that people don’t get tired of rerunning the same arguments. However, I think that what position one should take is a less interesting question than why anyone should bother to take one at all.

The starting point – as mentioned – is the extraordinary claim by the tech magnate Musk that there is a “one in billions” chance that we are not living in a simulated world. The gist of his claim is that the progress of technology is so rapid that it is almost certain we will have access to perfect virtual reality environments. Given that these virtual reality games would be playable on the same type of set-top boxes we use today (just with more processing power), and that there would be billions of them, Musk’s claim is that there is a very slim (specifically, one in billions) chance that what we’re experiencing right now is “base reality” (his term for the real world). But this argument is spurious, because there is no actual data underlying the estimate. Musk could just as well have chosen “millions”, “trillions” or “hundreds”, and it wouldn’t have mattered, since the devices he proposes are – for now – mere fiction. We don’t know for sure if we will ever have fully immersive virtual reality.
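For what it’s worth, the counting logic behind the “one in billions” figure can be made explicit (the reconstruction below, including the variable N, is mine, not Musk’s). Suppose there were N perfectly simulated worlds for every base reality, and we had equal reason to locate ourselves in any one of them. Then

\[ P(\text{base reality}) = \frac{1}{N + 1} \]

The “one in billions” estimate amounts to assuming that N is on the order of billions – and since nothing in the argument pins N down, the resulting probability can be made as small or as large as one likes.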

But even assuming that such VR devices will be common household items in the future doesn’t automatically mean that everyone (or anyone) will use them for their entire life. Why wouldn’t they be like the games people play nowadays, or like Star Trek’s holodeck – something you use for entertainment for a while, and then turn off? The argument here:

  1. Technology is advancing very fast.
  2. We will likely have the capacity to perfectly simulate reality for everyone.
  3. Therefore, everyone will be (and likely is now) in such a simulated reality.

is a classic example of a non sequitur – Latin for “it doesn’t follow”. There is nothing connecting premises 1 and 2 with the conclusion, although people wishing to live inside a virtual reality might take the leap and assume there is no gap.

There are lengthier responses to Musk available online. All those I’ve seen were dodgy to a greater or lesser degree, but I won’t be reviewing them here. The question I’m interested in is “what does it matter?” Think of it as follows:

  1. Assume that we are living in a virtual reality. The basic physical constants are what we know them to be. The interactions of matter that follow from them are – just like the constants – simulated.
  2. Assume that we are in the real world – in other words, there is no “deeper” level to go in terms of probing reality. The basic physical constants are what we know them to be. The interactions of matter that follow from them are – just like the constants – ultimately real.

The problem should become obvious now – while there might be a difference between a perfectly simulated world and the real, underlying one, we would never be able to see it. Both versions would appear equally real from the inside. Our very act of perception is nothing more than interaction with the world, and perceiving this (or any) world as “real” or “virtual” amounts to assuming that some of our sensory data is not as it should be. But people in both worlds could make such a claim. We thus cannot see this world as fake or genuine, because no matter which one it is, all that we see is dictated by the world itself. It is like claiming that the whole world you live in – your house, the town, the places you’ve been on vacation – is really just a big, fancy room. When you then claim that there is something “beyond the room”, and you go and stand where you say “outside” is, how do you know that this isn’t a part of the room too? Here the only difference is that the room is always with you, because it is your senses. Therefore, even if someone were to show you the “real world” – assuming you were in a simulation – you would have no reason to believe them.

Thus, wondering whether this Universe is real or simulated seems pointless to me. We have no way of telling, because everything we could do to tell the two apart – all measurements and experiments – would take place in the one we’re in right now. The whole notion that we could “look outside” a simulated world to see the difference rests on the assumption that we could tell when we’re really looking outside and when we’re being duped by the simulation. But by the nature of the simulation, we wouldn’t be able to tell which is the case, since the simulation is all we ever get to experience.

A more interesting question could probably be: why would people want to think of the Universe as a simulation? At least, why would Musk want to believe so?

Arguably we should hope that that’s true, because if civilization stops advancing, that may be due to some calamitous event that erases civilization.

It’s interesting how little commentary I’ve seen on this remark. In a world riddled with economic and environmental crises, one could easily think of calamitous events that would erase civilization. Some of them could even have something to do with producing more and more consumer electronics to feed our escapism. Perhaps it’s more comforting to think that the world is just a simulation, and that we will soon be making more like it, so whatever we do is not that big of an issue. But – for the reasons I mentioned – trashing this world will feel real to us whether or not we’re in a simulation, so it seems important to take care of it, regardless of the odds.

Further reading: the original simulation argument was made by Nick Bostrom. His paper and the commentaries on it are available at http://www.simulation-argument.com/.

Misplaced Nostalgia

One of the clichés that seems to be prevalent, at least in the sphere of “Western” thought, is the notion that childhood is somehow a time of purity and bliss. In that often-entertained stereotype, there’s an aura of nostalgic longing that many people seem to assume should be the default way of thinking about one’s earliest years. Setting aside the issue that this stereotype is of very limited applicability to begin with: while I don’t doubt that some people’s childhoods were genuinely blissful, in the sense that there was a lot of simple enjoyment involved, I think that longing for childhood and idealizing the period is a way of deceiving oneself.

First of all, the obvious realization should be that childhood is not really a period of such purity as people would like to think. Children are generally uncritical of both their moral and hygienic principles, which makes them likely – metaphorically and literally – to have dirt under their fingernails. Children can and do bully each other from an early age, as well as hurt animals out of sheer curiosity. And when some of them fail to grow out of such behaviour, we label the adult deviant, cruel and wicked – pretending that it’s something only adults take part in.

But one could also think that childhood is a blissful period simply because it’s a time before one gets to taste the full complexity of life. If your life was similar to the optimistic childhood stereotype, then, as a child, you didn’t have to worry about taxes, jobs, corruption in politics, mortgages, war or any other adult topic. You were completely unaware of the notion of cheating in a relationship, and your biggest dilemma might have been which television show to watch. Compare this to your life as an adult, and it seems clear as day that your childhood was better – no worries, no debts, just playing and resting. Then the teenage years came along, destroyed everything and made you miserable.

This sort of explanation is as alluring as it is shallow. Childhood was better because there were more things that you liked than there are now. What it doesn’t say is why exactly life is that way – why people think they had more of the things they liked in childhood. I think the reason the analysis stops here is that most people don’t really like to admit that their childhood doesn’t only feel more free; it genuinely was more free. And not only in the simplistic sense that a parent would pay for something you wanted, but in the deeper sense that what you wanted was closer to how you felt. As children, before we are introduced into the wider world and acquainted with notions such as fashion, expectations, grades and other problems created by other people, we have no reason not to pursue our genuine interests. We might have no clue, but we surely have conviction. If the process of “growing up” amounts mostly to getting used to the idea that other people are going to force you to do things the way they want – whether they are schoolmates who would ostracize you for not liking what they like, or a boss who will fire you for working too slowly – then it’s no wonder that what came before seems idyllic.

Therefore, I think that people who feel nostalgic about their early youth are in reality just feeling a misplaced sense of loss. The passing of childhood marked the replacement of their values with those constructed by other people, and a loss of self-direction. Their unwillingness either to stand up to or to get used to the situation they are in now is recast as a mythical loss of purity.

Nostalgia about your early years is therefore more like indulging in a fantastic vision – it wasn’t like that; what you are doing is substituting wishful thoughts about the past for reflection on your current state. It’s not your childhood that is blessed, it’s your life right now that is miserable.

Reading Philosophy from the Past

It’s usually a good decision to dedicate some time to studying past works in any discipline one wishes to explore. Knowledge of the historical background – of “how we got here” – can make it easier to see why certain issues are being debated now. In the Internet age, the ability to access old philosophical texts is greater than ever before, but in a strange way this might itself become a problem.

There is an abundance of philosophical texts available nowadays, but beyond their sheer number, there is the problem of preserving meaning across interpretation and translation. When dealing with a guide, summary or commentary, there’s always the risk of confusing what the interpreter thought the author meant with what the author really meant. The fact that, in philosophy, the author’s actual meaning is in many cases debatable doesn’t help. For this reason, while translations can still be a source of confusion (as the meaning of words can shift over time), they are usually less risky than commentaries. But even the most faithful translators have to adjust the wording to fit the audience (and time) they’re translating for. And when they do err, their mistranslations can linger for a long time, causing confusion (see “hexis”).

Now that I’ve given this warning, I thought I would share a link to a collection of texts that were translated and had their original vocabulary and sentence structure altered for the sake of accessibility, landing somewhere between the two categories. If you’re interested in the philosophy classics and wish to get closer to the source, you can find works of various thinkers from the early modern period – Machiavelli’s political advice, Spinoza’s attempts to geometrize ethics, and a lot more – at http://www.earlymoderntexts.com/.

The Allure of the Popular

Reflecting on the rise of populist parties around the world, it’s striking how little influence the intellectual effort of philosophers throughout the centuries has had on the world. Socrates saw the enemy in the sophists, who (at least in his characterization) took gaining influence to be the only worthy goal; John Stuart Mill wrote a whole book about logical fallacies, calling for people to think in a more nuanced way; and of course Orwell and Eco wrote plenty about the fairly modern tactics of language-twisting and behaviour manipulation employed by people seeking political power – but it all seems to have passed people by. The same style of argumentation – simplistic and pandering – always seems to win people over.

Although a lot of the parties gaining support through populism are classified as right-wing, the truth is that populism has no political leanings. It doesn’t matter what a particular politician actually believes, since votes are won by appeal rather than conviction – and promises aren’t enforced, so abandoning them can be expected.

What’s clear is that the power which has historically triumphed over intellect is fear, and those who can create and soothe it most efficiently have the greatest chance of gaining popular support. If the main feature of populism is that it appeals most strongly to unreflective, fear-driven thinking, then the fault for its rise should lie with those decision makers and officials who have made modern education what it is – or so the argument would go.

But if we blame public officials for the ills of society, then we should also ask who put them in power in the first place. It seems that democracy is working to defeat itself by promoting those who, in the end, care only about their own power. But this way of thinking brings us straight to the conclusion that democracy itself is to blame for the problems of society. The uneducated choose manipulators to lead them, and the manipulators dumb them down even further. It’s easy to go from this to the conclusion that the masses should not be allowed to vote, that democracy itself should be revoked.

But the question remains: why would poorly educated people who don’t think critically support populists, even when those populists call for genocide and for increasing economic inequality? Here we might think to go back to the idea of “human nature”. Hobbes’s idea was that people are naturally mean-spirited and vicious, and would rob and kill one another if allowed to. But the less presuming explanation is that people act based on the situation they’re in, and in desperate situations will accept anyone who offers them a way out. And with the disappearance of the middle class as an economic power in the US, and unemployment rates steady in the double digits in many European countries, it’s only logical that appealing to everyone’s economic fear is a good strategy.

Neither people’s natural instincts, nor their education, nor their economic state is on its own enough to explain the appeal of populism – it’s a problem of multiple causes and multiple effects. The only thing philosophy can offer the world is ideas and an invitation to critical thinking – a defence of the mind against populism.

Further reading:
Gorgias – Socrates’s views on rhetoric, as related by Plato
Ur-Fascism – Umberto Eco’s attempts at elucidating the nature of fascism