Live Now, Consume

My goodness, don’t you remember when you went first to school? And you went to kindergarten.
And in kindergarten, the idea was to push along so that you could get into first grade,
and then push along so that you could get into second grade, third grade, and so on,
going up and up and then you went to high school and this was a great transition in life.
And now the pressure is being put on, you must get ahead, you must go up the grades and finally be good enough to get to college.
And then when you get to college, you’re still going step by step, step by step, up to the great moment in which you’re ready to go out into the world.

And then when you get out into this famous world, comes the struggle for success in profession or business.
And again, there seems to be a ladder before you, something for which you’re reaching for all the time.
And then, suddenly, when you’re about 40 or 45 years old, in the middle of life, you wake up one day and say “Huh? I’ve arrived. And, by Jove, I feel pretty much the same as I’ve always felt. In fact I’m not so sure that I don’t feel a little bit cheated.”

Because, you see, you were fooled.
You were always living for somewhere where you aren’t.
And while, as I said, it is of tremendous use for us to be able to look ahead in this way and to plan, there is no use planning for a future, which, when you get to it and it becomes the present you won’t be there – you’ll be living in some other future which hasn’t yet arrived.

And so in this way, one is never able actually to inherit and enjoy the fruits of one’s actions.
You can’t live it all unless you can live fully now.

If you’ve been using the Internet for some time, there’s a chance you’ve heard these words. Possibly, they were presented to you in spoken form by an eloquent, confident-sounding narrator whose voice carried a hint of a British accent, with some vaguely uplifting music playing in the background over various pieces of stock footage.

Now, am I describing:

  1. A modified excerpt from a talk by an American philosophy popularizer, Alan Watts, or
  2. A Volvo commercial?

I’m sure that even if you’re bad at tests and, in fact, you have never heard these words, you can get it right, because the answer is both.

There is a certain level of irony involved in human life, and all our endeavours seem to be constantly in danger of producing the opposite of their intended results. Before I say more about this, I want to make a note about history.

Having seen their parents live out the preplanned life of the 50s, with the promise that theirs would be the same – so long as they kept the US economy going through continued consumption – the new generation became increasingly rebellious. Out of the disappointments of the previous decade came the United States of the 60s – a chaotic and energized place. In an age in which student protest movements, the Vietnam War, the Cold War, hippie culture, drug use and a host of other things were mixing together, the climate could probably best be summed up as “radical” or, more accurately, “deliberately antagonistic” – counter-cultural and aware of it. There was a feeling of decay in the Western world, strengthened by the use of science for military purposes and the increased chance of a nuclear conflict. University administrations were castigated for taking part in weapons research programs, and protests took place at meetings attended by scientists involved in such research. As an example of the level the negativity could reach, during a 1970 meeting of the American Association for the Advancement of Science, Edward Teller, one of the creators of the hydrogen bomb, was labelled a war criminal. Scientists were definitely aware of the light in which they as a whole were viewed: Albert Szent-Györgyi, a physiologist and winner of a Nobel prize in medicine, summed the situation up as follows:

Because science is used for war, we have lost the respect for the people and there is a revulsion against scientists.[1]


In summary, the feeling of the period was born out of an overlap of various polarizing social changes.[2] It gave rise to discontent, and a desire, particularly among younger people, to escape from the reality of the time and find a new set of values to attach themselves to. It was this attitude of openness to new ideas that would prove to be fertile ground for various philosophies (and interpretations thereof) imported from the Far East. Immigrant monks and practitioners had been residing in Western countries for years, prompting slow but steady growth of interest in the subject among mainstream and academic audiences. Western travellers to India and beyond brought back various impressions of the cultures they had witnessed. Adding to this, the Beat Generation’s fascination with Buddhism helped make Eastern thought hip by association. A certain image of the East was formed. The mystique of an exotic belief system, coupled with the allure of a peaceful ethical message and a unifying worldview, made for the perfect counter-cultural package, and it didn’t matter how distorted some of the depictions of the Orient were. As with all new things that catch the public eye, there was a demand for introductory and beginner-level materials, which multiple authors would oblige over the years.

If one believes in fortune, Watts could be called fortunate for having been born when he was. Although he was an Episcopal priest from 1945 to 1950, his interest in Buddhism went all the way back to the 30s. It was thanks to a book on Buddhism, The Way of Zen, published in 1957, that Watts became widely known – first in the counter-culture milieu fascinated with Eastern philosophy, then in wider mainstream circles as the book became a bestseller. Undoubtedly, Watts’s popularity led to his becoming a guest lecturer at the Esalen Institute. Esalen, a retreat center which has over the years hosted lectures on non-traditional topics by speakers ranging from Bob Dylan to Buckminster Fuller, is also one of the key destinations connected with the Human Potential Movement. It helped originate ideas that fed into what would nowadays be considered a “New-Agey” worldview. In particular, it propagated the idea that human beings should realize and cherish their uniqueness – that they would become better by realizing their potential. This view fit well into the framework of the counter-culture of the time.

Having established some historical background, I believe we can now return to the question of irony which I mentioned at the beginning.

In some ways, everyone knows how the rest played out – the counter-culture movement fizzled out. The radicals of yesteryear became the establishment. What is interesting is how the existing powerful companies managed to benefit from the social discontent – it wasn’t by opposing the non-conformists, but by catering to them, that the corporations managed to profit from what seemed unprofitable. This was made possible in part by research which allowed advertisers to divide their target audience into categories of potential customers (a practice now known as “market segmentation”) – categories which they could appeal to on a more individual level.[3] By promising the rebels that their individuality could be expressed through their buying choices, the companies could channel opposition to the socioeconomic order that created them into the sales that sustain them.

Nowadays, psychological research plays a fundamental part in marketing. And I have very little doubt that the decision to build a marketing campaign around an Alan Watts quote was a well-researched one. After all, what more could you want, in these times of economic and geopolitical turmoil, than to live fully? Now, preferably?

This is, therefore, how the irony of life plays out. In joining a counter-culture movement that rejected careerism and consumption, Watts was creating material for future marketing campaigns that aim to channel our search for a meaningful life into a desire for a product. Regardless of any non-materialist intentions he might have had, the sheer accessibility of his words has made them into excellent advertising material. The end result is a prime example of capitalist creativity when it comes to translating every valuable thing and thought into profit.

As a final note, if people were to truly reflect on Watts’s advice, I believe the campaign would be a complete failure. After all, what is a car that you don’t yet have but another future that you’re now told to live for? If you genuinely wish to stop living for things that are yet to happen, if you want to stop being disappointed with the present, how will another purchase help you? And do you honestly think that it will be the last one you’ll be led to?

If you want to live fully now, then you already have what you need. Forget about the car.

[1] Quotes after Sarah Bridger’s 2011 dissertation “Scientists and the Ethics of Cold War Weapons Research”.

[2] For a history journal article about the period, see Agar, Jon, “What happened in the sixties?”, BJHS 41 (4): 567–600, December 2008.

[3] As an example, take Stanford Research Institute’s Values and Lifestyles.

Death According to Derek Parfit

As most people who follow news about the discipline will know, the renowned philosopher Derek Parfit died on New Year’s Day. There are already plenty of obituaries around praising his life – I wonder if their authors ever consider that such praise might have been nicer if the recipient could still receive it. In any case, I think that if not the best, then at least the most productive fate that can meet a person of ideas is the continued discussion of his or her labour.

Much of Parfit’s work is dedicated to the issue of personal identity. Not in the sense of one’s skin colour, personal qualities and quirks as observed by others, but in the sense of what makes a person that particular person – what makes people feel convinced that they are themselves. This of course runs up against our assumption and intuition that there is nothing to dissect. After all, it is a common thing to experience and a banal thing to express: “I am me. For as long as I have lived, I have always been me, and until I die, I shall always remain me”. But who is this “me” that is you? Is it just about your brain? Surely, if we take a part of your brain out of your skull, it wouldn’t be fair to say that you’ve stopped being you – we do this to people with brain tumours or epilepsy, and while they may change, the changes usually aren’t extreme enough to warrant calling them different people. Similarly, if we cut your corpus callosum, the part of the brain that connects the two hemispheres (a procedure that also used to be a treatment for some cases of epilepsy), we wouldn’t say that there are suddenly two of you. And yet such a brain (more accurately, two brains) will develop two separate perceptions. On the basis of some behavioural changes, it may seem warranted to say that there are in fact two persons in a split-brain skull now. Yet, maybe just for our sanity, we keep on treating these cases as one and the same person.

But should we? And, more generally, what is the relationship of one’s personal identity to one’s psychological and biological identity? Can we have a meaningful identity without psychological continuity? Parfit explores some of these questions in his 1971 article in The Philosophical Review.

Speaking of continuity and of the changes within biological creatures that continue existing, one might wonder what happens as the changes to the brain accumulate. At what point do we say that the person is no longer there? Certainly death sounds like a good candidate. But what makes death so different? Reading one of his obituaries, I came across this quote from Parfit’s Reasons and Persons:

There will later be some memories about my life. And there may later be thoughts that are influenced by mine, or things done as the result of my advice. My death will break the more direct relations between my present experiences and future experiences, but it will not break various other relations. This is all there is to the fact that there will be no one living who will be me. Now that I have seen this, my death seems to me less bad.

Whether this is really a comforting thought is up to the reader to decide. For Parfit at least, it’s no longer an issue.

One Month After: Notes on US Election Results

And so, after losing the popular vote by more than 2.6 million votes, Donald Trump will become the next US president. Now that everyone has had time to adjust to the news, I thought I should recount a couple of facts. These might as well be taken as a commentary on US politics, since, assuming that the president is meant to represent the will of the people, one could be forgiven for thinking that the reality they reveal takes place in a bizarre opposite world.

They are as follows:

In all honesty, no one should be surprised at these picks – they match the presidential program in everything but the populist appeal. Equally, it’s unsurprising that the man picked as Secretary of Defense proclaims that shooting people is quite fun, that the national security adviser will be a man who believes it’s rational to fear Muslims because Shariah law is spreading in the US, and that leading the Department of Homeland Security will be a man who oversaw the US detention and torture facility in Guantanamo. And a man who is a neo-fascist favorite, with the views one would expect of such a person, will be the “chief strategist”.

Equally unsurprising is the amount of media and Democratic flip-flopping about the results: from the New York Times, which went from claiming that Trump was propelled by a “crisis of whiteness” before the election to admitting more humbly that they can’t really say what the reason for his success was and that he should be given a chance, to Bernie Sanders, who, after claiming that Trump would be a disaster for the US, has now promised to work with him whenever it would benefit the working class.

In the previous post, I called the US political system a satirical and embarrassing spectacle. I stand by that statement.

Elections and Legitimacy

The United States presidential election spectacle is equally entertaining and embarrassing to observe from a distance. Meant to be an illustration of the democratic ideal, it rather resembles a satirical play in which the pretence of democracy goes through the motions, with the participation of campaign machines and media commentators, under a common agreement that it’s all fake. It’s also interesting from the perspective of political legitimacy.

Political legitimacy, broadly speaking, is the attribute of a government that makes it a genuine embodiment of the wishes of those who selected it. In simpler terms, it’s about whether it’s really right for some people to be laying down the law, as opposed to just dictating it as they see fit, backed by the violence they can enact on those who oppose them.

A major development in political philosophy in the 17th and 18th centuries, that is, in the Age of Enlightenment, was the formulation and elaboration of the idea that the common people – the everyday labourers who sustain a country through their work – should also be able to decide how their country is governed. The beginnings of this thought were not nearly as democratic: Hobbes did not advocate a representative constitutional republic (he was a monarchist), but he did argue that society is governed by a contract in which humans agree to cooperate. Elaborating upon this idea, John Locke would advance a more radical notion: that whether a government is that of a king or of a representative body of elected officials, what gives it true power is the consent of the ruled, who transfer their power to those who govern. In other words, what is legitimate is what is agreed to. This thought can certainly be found at the basis of a widespread ideal of how modern politics should function.

This is where it gets interesting. One may ask oneself about the legitimacy of the future president of the United States. And here a curious phenomenon arises: while there will surely be only one winner decided by the vote, that winner – presuming it’s either Donald Trump or Hillary Clinton – will be the most unpopular candidate in the history of polling. So unpopular, in fact, that a definite majority of voters support neither candidate. Thus we arrive at what seems a paradox of the political construct within the country – a government will be led by an official, elected by a majority of voters, who (according to the data) doesn’t actually represent the wishes or leanings of the majority of voters. In other words, we have agreement and representation without actual consent. Many things could be blamed for this state of affairs – some of it is certainly the failed idea of “lesser evil voting”. Regardless of how many pundits and public intellectuals advocate for it, its undeniable consequence is that things can get arbitrarily bad, so long as a worse alternative is presented. But the fact is that neither of the options presented is agreeable to those meant to decide.

What I think should be the conclusion is that the seeming paradox is yet another illustration of an increasingly obvious reality. It is the reason for the common antipathy towards the political system, and for the feeling, shared by so many people, that working within it is pointless. On one level or another, the masses who don’t care to vote, and the young people who don’t care whatsoever, all realize the same thing: that they don’t actually have any control. They are not represented, the system is broken, and they have no influence on it. And this feeling is not just a subjective appraisal: it’s the actual conclusion of a study. People who aren’t rich really don’t have power.

It’s therefore important to ponder this question: if the formalities of power are just theatrics, if real power lies outside the reach of the ruled, to what extent is this power legitimate? It might be a scary thing to ponder. Whatever happens, whomever they choose, Americans will be confronted with yet another spectacle, and the illusion will become more and more apparent.

No Ought from Is

Presenting philosophical concepts is usually easier when they relate to something current, so it’s in some sense a good thing that Neil deGrasse Tyson, the celebrity astrophysicist, has been defending his idea of Rationalia. Before I comment on it, I would like to step back and talk about an 18th-century British philosopher, David Hume, who provided good reasons why the thinking behind Rationalia is flawed.

Hume, one of the brightest minds of his time, thought and wrote on a range of issues, from philosophy of science, through ethics, all the way to political theory. It is Hume who clearly delineated the problems we face when we infer general principles from observing particular events. It’s also Hume who stated, in his “A Treatise of Human Nature”:

In every system of morality, which I have hitherto met with, I have always remarked, that the author proceeds for some time in the ordinary way of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when of a sudden I am surprized to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is, however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, it is necessary that it should be observed and explained; and at the same time that a reason should be given, for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it. But as authors do not commonly use this precaution, I shall presume to recommend it to the readers; and am persuaded, that this small attention would subvert all the vulgar systems of morality, and let us see, that the distinction of vice and virtue is not founded merely on the relations of objects, nor is perceived by reason.

Book III, Part I, Section I

Hume’s complaint here is that people have a tendency to assume that some moral truths follow in a straightforward manner from some matters of fact. For example, an abolitionist might point out that slavery creates object-like property from human beings, and thus goes against the idea of the equality of all humans. A monarchist might argue that the king, by virtue of his high aristocratic position, or by being a unique leading figure whom everyone obeys, is in the best position to lead a stable country. Notice that, while nowadays people are more likely to agree with the first proposition (“slavery should be abolished”) than the second (“monarchy should be restored”), both go from facts (“humans are changed into property”/”a king is the most influential member of the nobility”) to statements of principle. What Hume points out is that, while we can reason about whether or not the facts are true, this will not help us in establishing whether the principles are right.

Of course, people who sincerely believe in their principles will often insist that those principles obviously follow from the evidence. And while it’s obviously practically beneficial to live in a society in which people don’t believe that murder is fine (since it lowers your chance of being murdered), that doesn’t actually establish that such a belief is rational. The problem is that “rational” has become – I assume through overuse – a somewhat nebulous term, living somewhere in the semantic vicinity of “espousing a modern scientific worldview” and “affirming the humanitarian ideas of liberal societies”. While it’s true that rational thinking has influenced both modern science and morality, rationality itself is simply a matter of following reason – basing one’s worldview on facts and striving towards logical coherence in one’s conclusions. Notice that morality is completely absent from this picture. The scientific research done by Kurt Blome was no less rational than that carried out by Linus Pauling. And herein lies the problem.

We cannot say – Hume claims and I agree – that evidence or logic would lead us to formulating “better” principles or policies. As scientific facts don’t take any sides, it is not possible to establish norms of behaviour by analysing them. We cannot deduce an “ought” from an “is”.

Here is where the concept of Rationalia comes in. Rationalia is, in Tyson’s own description, a country with a constitution which only contains one sentence:

All policy shall be based on the weight of evidence

The problems with this approach are apparent once we consider the is-ought problem. But first, it’s hard even to say what basing policy on the weight of evidence would look like. Who would get to decide how the weighing works, or where the threshold of certainty lies? Assuming everyone could somehow agree on this – which would be interesting, since a choice of, let’s say, 95% certainty seems just as arbitrary as 96% – we still wouldn’t have moved forward. We can have all the evidence in the world that a given choice would bring about a given set of consequences, but this still doesn’t tell us whether those consequences are desirable, or whether the choice itself is good. Example: spending n billion dollars on renewable energy sources would offset the effects of global warming as efficiently as letting m million people die from poverty. Should we flip a coin?

The problem that Tyson doesn’t seem to understand is that there is always an element that lies beyond the reach of rationality in making any policy choices.¹ There is always an underpinning principle that guides policymaking, which establishes what it is that we’re trying to achieve – what is the result that we care about? What is the ought? Only with that can we say that, for example, preserving wealth is less important than preserving human life. In refusing to acknowledge this, Tyson continues enumerating supposed advantages of Rationalia, never quite explaining how they follow from no ethical commitments. Thus, he produces numerous paragraphs about how there would be more funding for social sciences, a better science education, freedom to be irrational, and other benefits available in Rationalia, but he fails to explain why they would be there. His post is a good example of failing to analyze one’s position carefully and to note one’s assumptions. And such analysis is worthwhile, because it is one of the tools which can take us from everyday thinking to philosophy.

1. Ironically, he cites the U.S. constitution as an example of a document that doesn’t discuss morals in the very same sentence in which he provides an example of just such a thing – the restriction on the use of the military.

Would It Matter If the Universe Were a Simulation?

The “is the Universe a simulation” question seems to return to the headlines every so often, as one academic or celebrity (or both) after another takes a stab at conjecturing either way. The most recent case, a claim made by Elon Musk, shows – if anything – that people don’t tire of rerunning the same arguments. However, I think that what position one should take is a less interesting question than why anyone should bother to take one at all.

The starting point – as mentioned – is the extraordinary claim by the tech magnate Musk that there is a “one in billions” chance that we are not living in a simulated world. The gist of his claim is that the progress of technology is so rapid that it is almost certain we will have access to perfect virtual reality environments. Given that these virtual reality games would be playable on the same type of set-top boxes we use today (just with more processing power), and that there would be billions of them, Musk’s claim is that there is a very slim (specifically, one in billions) chance that what we’re experiencing right now is “base reality” (his term for the real world). But this argument is spurious, because there is no actual data underlying the estimate. Musk could just as well have chosen “millions”, “trillions” or “hundreds”, and it wouldn’t have mattered, since the devices he proposes are – for now – mere fiction. We don’t know for sure whether we will ever have fully immersive virtual reality.

But even assuming that such VR devices will be common household items in the future doesn’t automatically mean that everyone (or anyone) will be using them for their entire life. Why wouldn’t they be like the games people play nowadays, or like Star Trek’s holodeck – something you use for entertainment for a while, and then turn off? The argument here:

  1. Technology is advancing very fast.
  2. We will likely have the capacity to perfectly simulate reality for everyone.
  3. Therefore, everyone will be (and likely is now) in such a simulated reality.

is a classic example of a non sequitur – Latin for “it does not follow”. Nothing connects premises 1 and 2 with the conclusion, although people wishing to live inside a virtual reality might take the leap and assume there is no gap.

There are lengthier responses to Musk available online. All that I’ve seen were dodgy to a greater or lesser degree, but I won’t be reviewing them here. The question I’m interested in is “what does it matter?” Think of it as follows:

  1. Assume that we are living in a virtual reality. The basic physical constants are what we know them to be. The interactions of matter that follow from them are – just like the constants – simulated.
  2. Assume that we are in the real world – in other words, there is no “deeper” level to go in terms of probing reality. The basic physical constants are what we know them to be. The interactions of matter that follow from them are – just like the constants – ultimately real.

The problem should be obvious now – while there might be a difference between a perfectly simulated world and the real, underlying one, we would never be able to see it. Both versions would be perceived as equally real. Our very act of perception is nothing more than interaction with the world, and perceiving this (or any) world as “real” or “virtual” is nothing more than assuming that some of our sensory data is not as it should be. But people in both worlds could make such a claim. We thus cannot see this world as fake or genuine, because no matter which one it is, all that we’re seeing is dictated by the world itself. It is like claiming that the whole world you live in – your house, the town, the places you’ve been on vacation – is really just a big, fancy room. When you then claim that there is something “beyond the room”, and you go and stand where you say “outside” is, how do you know that this isn’t part of the room too? Here the only difference is that the room is always with you, because it is your senses. Therefore, even if someone were to show you the “real world” – assuming you were in a simulation – you would have no reason to believe them.

Thus, wondering whether this Universe is real or simulated seems pointless to me. We have no way of telling, because everything we could do to tell the two apart – all measurements and experiments – would take place in the one we’re in right now. The whole notion that we could “look outside” of a simulated world to see the difference rests on the assumption that we could tell when we’re really looking outside and when we’re being duped by the simulation. But by the nature of the simulation, we wouldn’t be able to tell which is the case, since the simulation is all we ever get to experience.

A more interesting question could probably be: why would people want to think of the Universe as a simulation? At least, why would Musk want to believe so?

Arguably we should hope that that’s true, because if civilization stops advancing, that may be due to some calamitous event that erases civilization.

It’s interesting how little commentary I’ve seen on this remark. In a world riddled with economic and environmental crises, one could easily think of calamitous events that erase civilization. Some of them might even have something to do with producing more and more consumer electronics to feed our escapism. Perhaps it’s more comforting to think that the world is just a simulation, and that we will soon be making more like it, so whatever we do is not that big of an issue. But – for the reasons I mentioned – trashing this world will feel real to us whether or not we’re in a simulation, so it seems important to take care of it, regardless of the odds.

Further reading: the original simulation argument was made by Nick Bostrom. His paper and the commentaries on it are available at

Misplaced Nostalgia

One of the clichés that seems prevalent, at least in the sphere of “Western” thought, is the notion that childhood is somehow a time of purity and bliss. In that often-entertained stereotype, there’s an aura of nostalgic longing that many people seem to assume should be the default way of thinking about one’s earliest years. Setting aside the issue that this stereotype is of very limited applicability to begin with: while I don’t doubt that some people’s childhoods were genuinely blissful, in the sense that there was a lot of simple enjoyment involved, I think that longing for childhood and idealizing the period is a way of deceiving oneself.

First of all, the obvious realization should be that childhood is not really a period of such purity as people would like to think. Children are generally uncritical of both their moral and their hygienic principles, which makes them, metaphorically and literally, likely to have dirt under their fingernails. Children can and do bully each other from an early age, as well as hurt animals out of sheer curiosity. And when some of them fail to grow out of such behaviour, we label the resulting adult as deviant, cruel and wicked – pretending that such behaviour is something only adults engage in.

But one could also think that childhood is a blissful period simply because it’s a time before one gets to taste the full complexity of life. If your life was similar to the optimistic childhood stereotype, then, as a child, you didn’t have to worry about taxes, jobs, corruption in politics, mortgages, war or any other adult topic. You were completely unaware of the notion of cheating in a relationship, and the biggest dilemma might have been which television show to watch. Compare this to your life as an adult, and it seems clear as day that your childhood was better – no worries, no debts, just playing and resting. Then the teenage years came along, destroyed everything, and made you miserable.

This sort of explanation is as alluring as it is shallow. Childhood was better because there were more things that you liked than there are now. What it doesn’t say is why exactly life is that way, and why people think they had more of the things they liked in childhood. I think the reason the analysis stops here is that most people don’t really like to admit that their childhood didn’t only feel freer – it genuinely was freer. And not only in the simplistic sense that a parent would pay for something you wanted, but in the deeper sense that what you wanted was closer to how you felt. As children, before we are introduced to the wider world, and before we are acquainted with notions such as fashion, expectations, grades and other problems created by other people, we have no reason not to pursue our genuine interests. We might have no clue, but we surely have conviction. If the process of “growing up” amounts mostly to getting used to the idea that other people are going to force you to do things the way they want – whether they are schoolmates who would ostracize you for not liking what they like, or a boss who will fire you for working too slowly – then it’s no wonder that what came before seems idyllic.

Therefore, I think that people who feel nostalgic about their early youth are in reality just feeling a misplaced sense of loss. The passing of childhood marked the replacement of their values with those constructed by other people, and a loss of self-direction. Their unwillingness either to stand up to or to get used to the situation they are in now is recast as a mythical loss of purity.

Nostalgia about your early years is therefore more like indulging a fantasy – it wasn’t like that, but you substitute wishful thoughts about the past for reflection on your current state. It’s not your childhood that is blessed; it’s your life right now that is miserable.

Reading Philosophy from the Past

It’s usually a good decision to dedicate some time to studying past works in any discipline one wishes to explore. Knowledge of the historical background – “how we got here” – can make it easier to see why certain issues are being debated now. In the Internet age, the ability to access old philosophical texts is greater than ever before, but in a strange way this can itself become a problem.

There is an abundance of philosophical texts available nowadays, but beyond their sheer number, there is the problem of preserving meaning across interpretations and over time. When dealing with a guide, summary or commentary, there’s always the risk of confusing what the interpreter thought the author meant with what the author really meant. The fact that, in philosophy, the author’s actual meaning is in many cases debatable doesn’t help. For this reason, while translations can still be a source of confusion (as the meaning of words can shift over time), they are usually less risky. But even the most faithful translators have to adjust the wording to fit the audience (and time) they’re translating for. And when they do err, their mistranslations can linger for a long time, causing confusion. (see “hexis”)

Now that I’ve given this warning, I thought I would share a link to a collection of texts that were translated with their original vocabulary and sentence structure altered for the sake of accessibility, landing somewhere between the two categories. If you’re interested in the philosophy classics and wish to get closer to the source, you can find a collection of works by various thinkers of the early modern period – Machiavelli’s political advice, Spinoza’s attempts to geometrize ethics, and a lot more – available at .

The Allure of the Popular

Reflecting on the rise of populist parties around the world, it’s striking how little influence the intellectual effort of philosophers throughout the centuries has had on the world. Socrates saw the enemy in the sophists, who (at least in his characterization) took gaining influence to be the only worthy goal. John Stuart Mill wrote a whole book about logical fallacies, calling for people to think in a more nuanced way. And of course Orwell and Eco wrote plenty about the fairly modern tactics of language-twisting and behaviour manipulation by people seeking political power. But it all seems to have passed people by. The same style of argumentation – simplistic and pandering – always seems to win people over.

Although a lot of the parties gaining support through populism are classified as right-wing, the truth is that populism has no political leanings. It doesn’t matter what a particular politician actually believes, since votes are won by broad appeal, not sincere conviction – and promises aren’t enforced, so abandoning them can be expected.

What’s clear is that the power that has historically triumphed over intellect is fear, and those who can create and soothe it most efficiently have the greatest chance of gaining popular support. If the main feature of populism is that it appeals most to unreflective thinking driven by fear, then the fault for its rise should lie with the decision makers and officials who have made modern education what it is – or so the argument would go.

But if we blame public officials for the ills of society, then we should also ask who put them in power in the first place. It seems that democracy is working to defeat itself by promoting those who, in the end, care only about their own power. But this way of thinking leads straight to the conclusion that democracy is to blame for the problems of society. The uneducated choose manipulators to lead them, and the manipulators dumb them down even further. It’s easy to go from this to the conclusion that the masses should not be allowed to vote, that democracy itself should be revoked.

But the question remains why poorly-educated people who don’t think critically would support populists, even when they call for genocide and increased economic inequality. Here we might think to go back to the idea of “human nature”. Hobbes’s idea was that people are naturally mean-spirited and vicious, and would rob and kill one another if allowed to. But the less presumptuous explanation is that people act based on the situation they’re in, and in desperate situations will accept anyone who offers them a way out. And with the disappearance of the middle class as an economic power in the US, and unemployment rates steady in the double digits in many European countries, it’s only logical that appealing to everyone’s economic fear is a good strategy.

Neither people’s natural instincts, nor their education, nor their economic state is on its own enough to explain their love of populism – it’s a problem of multiple causes and multiple effects. The only thing philosophy can offer the world is ideas and an invitation to critical thinking – a defence of the mind against populism.

Further reading:
Gorgias – Socrates’s views on rhetoric, as related by Plato
Ur-Fascism – Umberto Eco’s attempts at elucidating the nature of fascism

The Non-categorical World

Immanuel Kant is one of the most important figures in the history of Western philosophy. The influence of his writing is so immense that almost all subsequent authors have commented on or alluded to his ideas. At the same time, his texts are often difficult to understand, and beginners can end up just being more confused after having read them. But because his ideas are so important, it’s better to know even a single one than none. One of these, central to his outlook on ethics, will be briefly presented here: the categorical imperative.

What is it? When we think about our actions, we notice that there is almost always a reason why we choose to do something. We drink because we are thirsty, we read novels because we’re interested in the story, we get annoyed because the neighbours are being noisy again. Those are examples of almost automatic actions – we don’t really think about the reasons involved. But what about moral decisions, those that people tend to debate? Some people choose to be kind because that’s what their parents told them to do; some people choose to help others because they hope they will get something in return. But such “morality” seems shaky – there appears to be no solid reason why one would obey one’s moral code outside of convenience or someone’s demands.

For Kant this was unacceptable. A true guideline for acting should always apply. There can be no exceptions, because if there were, people could break their moral code whenever they pleased. If we demand, instead, that people always treat others as they would wish themselves and everyone else to be treated, then we demand nothing less than a universal rule. And such is Kant’s formulation:

Act only in accordance with that maxim through which you can at the same time will that it become a universal law.

Groundwork for the Metaphysics of Morals, p. 422

All it says is that our maxims – the reasons for our actions – should be chosen so that they can be made universally applicable.

The categorical imperative fits neatly into Kant’s moral system, which revolves around an unquestioning fulfilment of one’s duties. The system was to be ruled by reason, so duties would be universal, not corrupted by people’s individual desires. (p. 434) (It’s worth noting that Kant specifically adds that we should treat humanity as an end, and not a means to an end.)

We can ask whether such a system is truly the right moral system, and we can wonder whether there is any set of rules that can be applied blindly, but one thing remains certain – this world is not a Kantian one. Whether in private or public life, people lie, cheat, and engage in all sorts of cynicism.

Kant’s principled approach stands in contrast with the reality most people know – it is clearly not how we govern ourselves. One can only imagine what an outrage it would be if a Middle Eastern country decided to invade a Western one. Or what if everyone could establish themselves as a legal entity in a tax haven? Things like the Universal Declaration of Human Rights sound wonderful, but thinking about Kant’s philosophy, one quickly realizes they’re anything but universal. We live in an inconsistent world, and it’s only logical that we end up in a mess.

Further reading: Kant’s Groundwork for the Metaphysics of Morals, more background on Kant