Live Now, Consume

My goodness, don’t you remember when you went first to school? And you went to kindergarten.
And in kindergarten, the idea was to push along so that you could get into first grade,
and then push along so that you could get into second grade, third grade, and so on,
going up and up and then you went to high school and this was a great transition in life.
And now the pressure is being put on, you must get ahead, you must go up the grades and finally be good enough to get to college.
And then when you get to college, you’re still going step by step, step by step, up to the great moment in which you’re ready to go out into the world.

And then when you get out into this famous world, comes the struggle for success in profession or business.
And again, there seems to be a ladder before you, something for which you’re reaching for all the time.
And then, suddenly, when you’re about 40 or 45 years old, in the middle of life, you wake up one day and say “Huh? I’ve arrived. And, by Jove, I feel pretty much the same as I’ve always felt. In fact I’m not so sure that I don’t feel a little bit cheated.”

Because, you see, you were fooled.
You were always living for somewhere where you aren’t.
And while, as I said, it is of tremendous use for us to be able to look ahead in this way and to plan, there is no use planning for a future, which, when you get to it and it becomes the present you won’t be there – you’ll be living in some other future which hasn’t yet arrived.

And so in this way, one is never able actually to inherit and enjoy the fruits of one’s actions.
You can’t live it all unless you can live fully now.

If you’ve been using the Internet for some time, there’s a chance you’ve heard these words. Possibly, they were presented to you in spoken form by an eloquent and confident-sounding narrator whose voice had a hint of a British accent. There was some vaguely uplifting music playing in the background, and the visuals were various pieces of stock footage.

Now, am I describing:

  1. A modified excerpt from a talk by the American philosophy popularizer Alan Watts, or
  2. A Volvo commercial?

I’m sure that even if you’re bad at tests – even if you have never heard these words – you can get this one right, because the answer is both.

There is a certain level of irony involved in human life, and all our endeavours seem to be constantly in danger of producing the opposite results. Before I say more about this, I want to make a note about history.

Having seen their parents live out a preplanned life of the 50s, with the promise that theirs would be the same – so long as they kept the US economy going through continued consumption – the new generation became increasingly rebellious. Out of the disappointments of the previous decade came the United States of the 60s – a chaotic and energized place. In an age in which student protest movements, the Vietnam War, the Cold War, hippie culture, drug use and a host of other things were mixing together, the climate could probably best be summed up as “radical” or, more accurately, “deliberately antagonistic” – counter-cultural and aware of it. There was a feeling that the Western world was in decay, strengthened by the use of science for military purposes and the increased chance of a nuclear conflict. University administrations were being castigated for taking part in weapons research programs, and protests were held at meetings attended by scientists involved in such research. As an example of the level the negativity could reach, during a 1970 meeting of the American Association for the Advancement of Science, Edward Teller, one of the creators of the hydrogen bomb, was labelled a war criminal. Scientists were well aware of the light in which they, as a group, were viewed: Albert Szent-Györgyi, a physiologist and winner of a Nobel Prize in medicine, summed the situation up as follows:

Because science is used for war, we have lost the respect for the people and there is a revulsion against scientists.[1]


In summary, the feeling of the period was born out of an overlap of various polarizing social changes.[2] It gave rise to discontent, and to a desire, particularly among younger people, to escape from the reality of the time and find a new set of values to attach themselves to. It was this attitude of openness to new ideas that would prove a fertile ground for various philosophies (and interpretations thereof) imported from the Far East. Immigrant monks and practitioners had been residing in Western countries for years, prompting a slow but steady growth of interest in the subject among mainstream and academic audiences. Western travellers to India and beyond brought back various impressions of the cultures they witnessed. Adding to this, the Beat Generation’s fascination with Buddhism helped make Eastern thought hip by association. A certain image of the East was formed. The mystique of an exotic belief system, coupled with the allure of a peaceful ethical message and a unifying worldview, made for the perfect counter-cultural package – and it didn’t matter how distorted some of the depictions of the Orient were. As with all new things that catch the public eye, there was a demand for introductory and beginner-level materials, which multiple authors would oblige over the years.

If one believes in fortune, Watts could be called fortunate for having been born when he was. Although he was an Episcopal priest from 1945 to 1950, his interest in Buddhism went all the way back to the 30s. It was thanks to a book on Buddhism, The Way of Zen, published in 1957, that Watts became widely known – first among the counter-culture milieu fascinated with Eastern philosophy, then in wider mainstream circles as the book became a bestseller. Undoubtedly, it was this popularity that led to his becoming a guest lecturer at the Esalen Institute. Esalen, a retreat center which has over the years presented lectures on non-traditional topics by speakers ranging from Bob Dylan to Buckminster Fuller, is also one of the key destinations connected with the Human Potential Movement. It helped originate ideas that fed into what would nowadays be considered a “New Agey” worldview. In particular, it propagated the idea that human beings should realize and cherish their uniqueness – that they would become better by realizing their potential. This view fit well into the framework of the counter-culture at the time.

Having established some historical background, I believe we can now return to the question of irony which I mentioned at the beginning.

In some ways, everyone knows how the rest played out – the counter-culture movement fizzled out. The radicals of yesteryear became the establishment. What is interesting is how the existing powerful companies managed to benefit from the social discontent: it wasn’t by opposing the non-conformists, but by catering to them, that the corporations managed to profit from what seemed unprofitable. This was partially made possible by research which allowed advertisers to divide their target audience into categories of potential customers (a practice now known as “market segmentation”) – categories which they could appeal to on a more individual level.[3] By guaranteeing the rebels that their individuality could be expressed through their buying choices, the companies could channel opposition to the socioeconomic order that created them into the very sales that sustain them.

Nowadays, psychological research plays a fundamental part in marketing. And I have very little doubt that the decision to build a marketing campaign around an Alan Watts quote was a very well-researched one. After all, what more could you want in these times of economic and geopolitical turmoil than to live fully? Now, preferably?

This is, therefore, how the irony of life plays out. In joining a counter-culture movement that rejected careerism and consumption, Watts was creating material for future marketing campaigns that aim to channel our search for a meaningful life into a desire for a product. Regardless of any non-materialist intentions he might have had, the sheer accessibility of his words has made them into excellent advertising material. The end result is a prime example of capitalist creativity when it comes to translating every valuable thing and thought into profit.

As a final note, if people were to truly reflect on Watts’s advice, I believe the campaign would be a complete failure. After all, what is a car that you don’t yet have but another future that you’re now told to live for? If you genuinely wish to stop living for things that are yet to happen, if you want to stop being disappointed with the present, how will another purchase help you? And do you honestly think it will be the last one you’ll be led to?

If you want to live fully now, then you already have what you need. Forget about the car.

[1] Quotes after Sarah Bridger’s 2011 dissertation “Scientists and the Ethics of Cold War Weapons Research”

[2] For a history journal article about the period, see Agar, Jon, “What happened in the sixties?”, BJHS 41 (4): 567–600, December 2008.

[3] As an example, take Stanford Research Institute’s Values and Lifestyles.


One Month After: Notes on US Election Results

And so, after having lost the popular vote by more than 2.6 million votes, Donald Trump will become the next US president. Now that everyone has had time to adjust to the news, I thought I should recount a couple of facts. They might as well be taken as a commentary on US politics: assuming that the president is meant to represent the will of the people, one could be forgiven for thinking that the reality these facts reveal belongs to a bizarre opposite world.

They are as follows:

In all honesty, no one should be surprised at these picks – they match the presidential program in everything but the populist appeal. It’s equally unsurprising that the man picked as Secretary of Defense proclaims that shooting people is quite fun, that the national security adviser will be a man who believes it rational to fear Muslims because Shariah law is supposedly spreading in the US, and that the Department of Homeland Security will be led by the man who oversaw the US detention and torture facility at Guantanamo. And a man who is a neo-fascist favourite, with the views one would expect of such a person, will be the “chief strategist”.

Equally unsurprising is the amount of media and Democratic flip-flopping about the results: from the New York Times, which went from claiming before the election that Trump was propelled by a “crisis of whiteness” to admitting, more humbly, that it can’t really say what the reason for his success was and that he should be given a chance, to Bernie Sanders, who, after claiming that Trump would be a disaster for the US, has now promised to work with him whenever it would benefit the working class.

In the previous post, I called the US political system a satirical and embarrassing spectacle. I stand by that statement.

Elections and Legitimacy

The United States presidential election spectacle is equally entertaining and embarrassing to observe from a distance. Meant to be an illustration of the democratic ideal, it instead resembles a satirical performance in which the pretence of democracy goes through the motions – campaign machines and media commentators participating under a common, tacit agreement that it’s all fake. It’s also interesting from the perspective of political legitimacy.

Political legitimacy, broadly speaking, is the attribute of a government that makes it a genuine embodiment of its selectors’ wishes. In simpler terms, it’s about whether it’s really right for some people to be laying down the law, as opposed to merely dictating it as they see fit, backed by the violence they can enact on those who oppose them.

A major development in political philosophy of the 17th and 18th centuries – that is, of the Enlightenment Age – was the formulation and elaboration of the idea that the common people, the everyday labourers who sustain a country through their work, should also be able to decide how their country is governed. The beginnings of this thought were not nearly as democratic: Hobbes did not advocate a representative constitutional republic (he was a monarchist), but he did argue that society is governed by a contract in which humans agree to cooperate. Elaborating upon this idea, John Locke would advance a more radical notion: that whether a government is that of a king or of a representative body of elected officials, what gives it true power is the consent of the ruled, who transfer their power to those who govern. In other words, what is legitimate is what is agreed to. This thought can certainly be found at the basis of a widespread ideal of how modern politics should function.

This is where it gets interesting. One may ask oneself about the legitimacy of the future president of the United States. And here a curious phenomenon arises: while there will surely be only one winner decided by the vote, that winner – presuming it’s either Donald Trump or Hillary Clinton – will be the most unpopular candidate in the history of polling. So unpopular, in fact, that a definite majority of voters support neither candidate. Thus we arrive at what seems a paradox of the political construct within the country: a government will be led by an official, elected by a majority of voters, who (according to the data) doesn’t actually represent the wishes or leanings of the majority of voters. In other words, we have agreement and representation without actual consent. Many things could be blamed for this state of affairs – some of it is certainly the failed idea of “lesser evil voting”. Regardless of how many pundits and public intellectuals advocate for it, the undeniable consequence is that things can get arbitrarily bad, so long as a worse alternative is presented. But the fact remains that neither of the options presented is agreeable to those meant to decide.

What I think should be the conclusion is that the seeming paradox is yet another illustration of a more and more obvious reality. It is the reason for the common antipathy towards the political system, and for the feeling, shared by so many, that working within it is pointless. On one level or another, the masses who don’t care to vote, and the young people who don’t care whatsoever, all realize the same thing: that they don’t actually have any control. They are not represented, the system is broken, and they have no influence on it. And this feeling is not just a subjective appraisal: it is the actual conclusion of a study. People who aren’t rich really don’t have power.

It’s therefore important to ponder this question: if the formalities of power are just theatrics, if the real power lies outside the reach of the ruled, to what extent is this power legitimate? It might be a scary thing to ponder. Whatever happens, whomever they choose, Americans will be confronted with yet another spectacle, and the illusion will become more and more apparent.

No Ought from Is

Presenting philosophical concepts is usually easier when they relate to something current, so it’s in some sense a good thing that Neil deGrasse Tyson, the celebrity astrophysicist, has been defending his idea of Rationalia. Before I comment on it, I would like to step back and talk about an 18th-century Scottish philosopher, David Hume, who provided good reasons why the thinking behind Rationalia is flawed.

Hume, one of the brightest minds of his time, thought and wrote on a range of issues, from philosophy of science, through ethics, all the way to political theory. It is Hume who clearly delineated the problems we face when we infer general principles from observing particular events. It’s also Hume who stated, in his “A Treatise of Human Nature”:

In every system of morality, which I have hitherto met with, I have always remarked, that the author proceeds for some time in the ordinary way of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when of a sudden I am surprized to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is, however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, it is necessary that it should be observed and explained; and at the same time that a reason should be given, for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it. But as authors do not commonly use this precaution, I shall presume to recommend it to the readers; and am persuaded, that this small attention would subvert all the vulgar systems of morality, and let us see, that the distinction of vice and virtue is not founded merely on the relations of objects, nor is perceived by reason.

Book III, Part I, Section I

Hume’s complaint here is that people have a tendency to assume that some moral truths follow in a straightforward manner from some matters of fact. For example, an abolitionist might point out that slavery creates object-like property out of human beings, and thus goes against the idea of the equality of all humans. A monarchist might argue that the king, by virtue of his high aristocratic position, or by being a unique leading figure whom everyone obeys, is in the best position to lead a stable country. Notice that, while nowadays people are more likely to agree with the first proposition (“slavery should be abolished”) than the second (“monarchy should be restored”), both go from facts (“humans are changed into property”/“a king is the most influential member of the nobility”) to statements of principle. What Hume points out is that, while we can reason about whether or not the facts are true, this will not help us establish whether the principles are right.

Of course, people who sincerely believe in their principles will often insist that those principles obviously follow from the evidence. And while it is obviously practically beneficial to live in a society in which people don’t believe that murder is fine (since it lowers your chance of being murdered), that doesn’t actually establish that such a belief is rational. The problem is that “rational” has become – I assume through overuse – a somewhat nebulous term, living somewhere in the semantic vicinity of “espousing a modern scientific worldview” and “affirming the humanitarian ideas of liberal societies”. While it’s true that rational thinking has influenced both modern science and morality, rationality itself is simply a matter of following reason – basing one’s worldview on facts and striving towards logical coherence in one’s conclusions. Notice that morality is completely absent from this picture. The scientific research done by Kurt Blome was no less rational than that carried out by Linus Pauling. And herein lies the problem.

We cannot say – Hume claims and I agree – that evidence or logic would lead us to formulating “better” principles or policies. As scientific facts don’t take any sides, it is not possible to establish norms of behaviour by analysing them. We cannot deduce an “ought” from an “is”.

Here is where the concept of Rationalia comes in. Rationalia is, in Tyson’s own description, a country with a constitution which only contains one sentence:

All policy shall be based on the weight of evidence

The problems with this approach become apparent once we consider the is-ought problem. But first, it’s hard even to say what basing policy on the weight of evidence would look like. Who would get to decide what the weighting, or the threshold of certainty, is? Assuming everyone could somehow agree on this – which would be interesting, since the choice of, let’s say, 95% certainty seems just as arbitrary as 96% – we still wouldn’t have moved forward. We can have all the evidence in the world that a given choice would bring about a given set of consequences, but this still doesn’t tell us whether such consequences are desirable, or whether the choice itself is good. Example: suppose spending n billion dollars on renewable energy sources would offset the effects of global warming, while spending the same money on poverty relief would save m million lives. Should we flip a coin?
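The point can be made concrete with a toy sketch (all policy names and numbers are invented for illustration). The evidence – the outcome figures for each policy – is held fixed, yet the “rational” choice flips entirely depending on the value weights we supply, and no amount of evidence can supply them for us:

```python
# Toy illustration: evidence alone does not determine a policy choice.
# The outcome numbers below stand in for "well-evidenced" predictions.
policies = {
    "invest_in_renewables": {"warming_offset": 0.8, "lives_saved_millions": 0.0},
    "poverty_relief":       {"warming_offset": 0.0, "lives_saved_millions": 5.0},
}

def choose(policies, value_weights):
    """Rank policies by a weighted sum of their predicted outcomes.

    The evidence (the outcome numbers) is fixed; the decision is driven
    entirely by `value_weights` -- the "ought" that evidence cannot supply.
    """
    def score(outcomes):
        return sum(value_weights[k] * v for k, v in outcomes.items())
    return max(policies, key=lambda name: score(policies[name]))

# Same evidence, two different value systems, two different "rational" choices:
climate_first = {"warming_offset": 10.0, "lives_saved_millions": 1.0}
lives_first   = {"warming_offset": 1.0,  "lives_saved_millions": 10.0}

print(choose(policies, climate_first))  # invest_in_renewables
print(choose(policies, lives_first))    # poverty_relief
```

The weighing step is where the ethics hides: choosing `climate_first` over `lives_first` is a moral commitment, not an empirical finding.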

The problem that Tyson doesn’t seem to understand is that there is always an element that lies beyond the reach of rationality in making any policy choices.¹ There is always an underpinning principle that guides policymaking, which establishes what it is that we’re trying to achieve – what is the result that we care about? What is the ought? Only with that can we say that, for example, preserving wealth is less important than preserving human life. In refusing to acknowledge this, Tyson continues enumerating supposed advantages of Rationalia, never quite explaining how they follow from no ethical commitments. Thus, he produces numerous paragraphs about how there would be more funding for social sciences, a better science education, freedom to be irrational, and other benefits available in Rationalia, but he fails to explain why they would be there. His post is a good example of failing to analyze one’s position carefully and to note one’s assumptions. And such analysis is worthwhile, because it is one of the tools which can take us from everyday thinking to philosophy.

1. Ironically, he cites the U.S. constitution as an example of a document that doesn’t discuss morals in the very same sentence in which he provides an example of just such a thing – the restriction on the use of the military.

Misplaced Nostalgia

One of the clichés that seem prevalent, at least in the sphere of “Western” thought, is the notion that childhood is somehow a time of purity and bliss. In that often-entertained stereotype, there’s an aura of nostalgic longing that many people seem to assume should be the default way of thinking about one’s earliest years – setting aside the issue that the stereotype is of very limited applicability to begin with. While I don’t doubt that some people’s childhoods were genuinely blissful, in the sense that there was a lot of simple enjoyment involved, I think that longing for childhood and idealizing the period is a way of deceiving oneself.

First of all, the obvious realization should be that childhood is not really a period of such purity as people would like to think. Children are generally uncritical of both their moral and hygienic principles, which makes them metaphorically and literally likely to have dirt under their fingernails. Children can and do bully each other from an early age, and hurt animals out of sheer curiosity. And when some of them fail to grow out of such behaviour, we label the adult deviant, cruel and wicked – pretending that it’s something only adults take part in.

But one could also think that childhood is a blissful period simply because it’s a time before one gets to taste the full complexity of life. If your life was similar to the optimistic childhood stereotype, then, as a child, you didn’t have to worry about taxes, jobs, corruption in politics, mortgages, war or any other adult topic. You were completely unaware of the notion of cheating in a relationship, and your biggest dilemma might have been which television show to watch. Compare this to your life as an adult, and it seems clear as day that your childhood was better – no worries, no debts, just playing and resting. Then the teenage years came along, destroyed everything and made you miserable.

This sort of explanation is as alluring as it is shallow. Childhood was better because there were more things that you liked than there are now. What it doesn’t say is why exactly life is that way, and why people think they had more of the things they liked in childhood. I think the reason the analysis stops here is that most people don’t really like to admit that their childhood doesn’t just feel freer in retrospect – it genuinely was freer. And not only in the simplistic sense that a parent would pay for something you wanted, but in the deeper sense that what you wanted was closer to how you felt. As children, before we are introduced into the wider world and acquainted with notions such as fashion, expectations, grades and other problems created by other people, we have no reason not to pursue our genuine interests. We might have no clue, but we surely have conviction. If the process of “growing up” amounts mostly to getting used to the idea that other people are going to force you to do things the way they want – whether they are schoolmates who would ostracize you for not liking what they like, or a boss who will fire you for working too slowly – then it’s no wonder that what came before seems idyllic.

Therefore, I think that people who feel nostalgic towards their early youth are in reality just feeling a misplaced sense of loss. The passing of childhood symbolized the replacement of their values with those constructed by other people, and a loss of self-direction. Their unwillingness either to stand up to or to get used to the situation they are in now is recast as a mythical loss of purity.

Nostalgia for your early years is therefore more like indulging a fantasy – it wasn’t like that; what you’re doing is substituting wishful thoughts about the past for reflection on your current state. It’s not your childhood that is blessed, it’s your life right now that is miserable.

The Allure of the Popular

Reflecting on the rise of populist parties around the world, it’s hard not to notice how little influence the intellectual effort of philosophers throughout the centuries has had on the world. Socrates saw his enemy in the sophists, who (at least in his characterization) took gaining influence to be the only worthy goal; John Stuart Mill devoted a whole book of his System of Logic to fallacies, calling for people to think in a more nuanced way; and of course Orwell and Eco wrote plenty about the fairly modern tactics of language-twisting and behaviour manipulation employed by people seeking political power – but it all seems to have passed people by. The same style of argumentation – simplistic and pandering – always seems to win people over.

Although many of the parties gaining support through populism are classified as right-wing, the truth is that populism has no political leanings. It doesn’t matter what a particular politician actually believes, since votes are won by appeal rather than conviction – and promises aren’t enforced, so abandoning them can be expected.

What’s clear is that the power that has historically triumphed over intellect is fear, and those who can create and soothe it most efficiently have the greatest chance of gaining popular support. If the main feature of populism is that it appeals most strongly to unreflective, fear-driven thinking, then the fault for its rise should lie with the decision makers and officials who have made modern education what it is – or so the argument would go.

But if we blame public officials for the ills of society, then we should also ask who put them in power in the first place. It seems that democracy is working to defeat itself, promoting those who, in the end, care only about their own power. But this way of thinking leads straight to the conclusion that democracy is to blame for the problems of society. The uneducated choose manipulators to lead them, and the manipulators dumb them down even further. It’s easy to go from this to the conclusion that the masses should not be allowed to vote – that democracy itself should be revoked.

But the question remains why poorly educated people who don’t think critically would support populists, even when those populists call for genocide and increased economic inequality. Here we might think to go back to the idea of “human nature”. Hobbes’s idea was that people are naturally mean-spirited and vicious, and would rob and kill one another if allowed to. But the less presuming explanation is that people act based on the situation they’re in, and in desperate situations will accept anyone who offers them a way out. With the disappearance of the middle class as an economic power in the US, and unemployment rates steady in the double digits in many European countries, it’s only logical that appealing to everyone’s economic fear is a good strategy.

Neither people’s natural instincts, nor their education, nor their economic state is on its own enough to explain their love of populism – it’s a problem of multiple causes and multiple effects. The only thing philosophy can offer the world is its ideas and an invitation to critical thinking – a defence of the mind against populism.

Further reading:
Gorgias – Socrates’s views on rhetoric, as related by Plato
Ur-Fascism – Umberto Eco’s attempts at elucidating the nature of fascism

Debating Charitably

In an age of emotionally charged bickering replacing reasonable discussion (especially when it comes to politics), framing the opponent’s argument in a way that’s easy to defeat is a common tactic. If you’ve heard about “straw man arguments”, you know what distorting the opponent’s position does to debate – it reduces the two sides to arguing past each other.

If you look at how people who care about discussion try to alleviate the problem of finding a set of common points to argue, you’ll probably come across the “principle of charity”. It’s pretty simple: try to interpret the other side’s argument so that as many of its statements as possible turn out to be rational. Not because you have to agree, but because common ground is necessary for finding solutions, and undermining someone’s rationality rarely yields any.

But a cursory glance at the state of most public debates shows that we’re generally not taking this principle seriously. Why? One answer, it seems, is that conflict is simply entertaining. And in a society that expects entertainment from life, the most spectacular forms will always triumph over the reasonable ones. That’s where the audience will gather. The result is that even though we know better, our appetite for emotional outbursts overcomes us. It’s as though, in Neil Postman’s words, we are “amusing ourselves to death”.

Nobody is a saint when it comes to arguing, and I am as guilty as anyone of having misrepresented people’s arguments at one point or another. It’s something that just happens in exchanges. But once you know about it, there is no reason not to try to be charitable. If you want to be taken seriously, why not treat the other side as such?