Sunday, July 22, 2018

Review: The Righteous Mind, by Jonathan Haidt

I.


The Righteous Mind: Why Good People Are Divided by Politics and Religion, by Jonathan Haidt (2012), rose to fame largely because of its explanation of the political differences between conservatives and liberals in the United States. The topic has become ever more relevant, as the 2016 election has shown just how polarized American politics has become. So though this review may come relatively late -- as book reviews go -- there is ample reason to revisit Haidt’s work on this topic six years later.

The Righteous Mind belongs to a category of literature I would broadly call “human evolutionary behaviorism”. Though the topic has antecedents in Darwin, the serious scientific study of human behavior at scale through the lens of experimental psychology is relatively young. I would personally point to the behavioral economics studies of Kahneman and Tversky (summarized in Kahneman’s Thinking, Fast and Slow) as the origin of the discipline. That may be an arbitrary line, however, because people have been seriously thinking about the motivations for human behavior for a very long time, and anthropologists and psychologists have contributed valuable insights since the births of their respective fields. The reason why I emphasize the recentness of the field is that...as far as I can tell, our scientific knowledge of the principles governing human behavior largely depends on a few decades of studies. This also tends to mean that if you’ve read one popular book on the subject, there are pretty significant diminishing returns for each subsequent book that you read.

My personal introduction to human evolutionary behaviorism was The Moral Animal, by Robert Wright (1994). The Moral Animal focused on an evolutionary psychology explanation of human behavior, particularly behavior such as altruism. From my recollection - it’s been a while since I’ve read the book - The Moral Animal essentially makes the claim that altruistic behavior can be selected for via evolution due to A) personal interest, in the sense that reciprocal altruism can benefit everyone in the long term, and B) the “selfish gene”, which “cares” about spreading itself (i.e. the gene), not about the individual person, so people with similar genes (especially families, but in theory everyone in a species) behave altruistically because it’s generally good for the spread of closely-related genes. The Righteous Mind considers these two possibilities but also emphasizes a third: group selection, the idea that groups which behave altruistically within themselves tend to outlast groups that don’t. I’ll say a bit more about this later, but I just want to put the book in the context of some of the other ideas that are out there.

Anyway, The Righteous Mind is divided into three sections, each of which has a central claim. Haidt puts a lot of emphasis (maybe even too much emphasis) on the main takeaways of each section and chapter, so I’m not doing much editorializing here when I describe what they are.

Claim 1: Human moral thinking is primarily driven by intuition/emotion/sentiment, not reason. Haidt here contrasts the view of Plato, who believed that emotions ought to follow the intellect, with David Hume, who stated that “Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.” Haidt is firmly on the side of Hume, using a metaphor of “the elephant and the rider.” The elephant is emotion and intuition; the rider is the intellect. According to Haidt, the elephant does most of the work; the rider is just along for the journey. More precisely, when it comes to morality, the intellect is used to defend the conclusions that intuition has already reached, not the other way around. The way Haidt supports this claim is via a battery of experiments wherein he asks participants to judge a variety of scenarios which intuitively feel very wrong but in which it is difficult to see any utilitarian downside (e.g. one-time consensual secret incest between a brother and sister with birth control). Participants often struggle for a while to come up with an argument for why something they intuitively believe to be wrong is actually bad, and when they fail to find one, they don’t necessarily change their view.

Claim 2: Morality is about more than harm. Here, Haidt discusses his “moral foundations theory”, which posits that instead of just being about suffering and happiness, intuitive morality comes in six different “flavors”, which he calls moral foundations. The flavors are Care/Harm, Liberty/Oppression, Fairness/Cheating, Loyalty/Betrayal, Authority/Subversion, and Sanctity/Degradation. Care/Harm is the easy one: it’s about empathizing with other people (and cute animals) and caring that they are happy and don’t get hurt. Liberty/Oppression is about freedom from tyrannical people who abuse their power. Fairness/Cheating has to do with making sure that people are rewarded for good behavior, punished for bad behavior, and don’t get more than their fair share (liberals) or than they deserve (conservatives). Loyalty/Betrayal focuses on being loyal to your group and involves things like treating your country’s flag with dignity or not speaking ill of your country on a foreign radio station. Authority/Subversion relates to respecting social hierarchies, like children honoring their parents or students listening to their teachers. It is also the basis of religious morality and submission to the authority of God. Finally, Sanctity/Degradation is the foundation that includes sensibilities about cleanliness, sanctioned and illicit sexual behavior (e.g. incest), and foods that one may eat (kosher for Jews, halal for Muslims, and not eating insects for secular Westerners).

Haidt argues that although people have some sort of innate proclivity towards all of these moral foundations, different people place the emphasis in different places. In particular, liberals (in the American sense, i.e. progressives) tend to emphasize care/harm and liberty/oppression (in the sense of certain kinds of oppression, like that of minorities and marginalized groups) over the other moral foundations, having little respect for authority hierarchies, loyalty, or most forms of sanctity. In contrast, conservatives tend to have a more balanced palate, appreciating all six moral foundations pretty much equally. Haidt demonstrates these proclivities via questionnaires about moral intuitions given to people across the political spectrum, obtaining graphs that look like this:
[Figure omitted: moral foundation endorsement scores plotted across the political spectrum]

Haidt says that part of the reason why liberals have such difficulty swaying conservatives is that liberals don’t know how to engage the “taste buds” of loyalty, authority, and sanctity, which are core conservative values.

Claim 3: Humans are naturally groupish and hive-minded, and this explains a lot of our values, especially the ones that conservatives tend to value more. Haidt’s metaphor for this section is “We Are 90 Percent Chimp and 10 Percent Bee.” There are a lot of obvious examples of this phenomenon, like religions, sports fans, political tribes, and so on. Haidt argues that this is not just an emergent aspect of collective human behavior; rather, he thinks - drawing on the work of Émile Durkheim - that we have a sort of psychological “switch” that turns the individual sense of self into an expanded, collective self. In particular, synchronized physical movement, such as marching in the army or the ecstatic dancing of Aztec tribes, can shift the mind from the individualistic “chimp” state to the collectivist “bee” state.

Collectivist mentality can entail a reification of “society” as an entity that can be harmed. If you burn a flag in private, even if no one is visibly harmed, there is a non-quantifiable sense in which you are harming your country’s social fabric (no pun intended) by removing yourself from the collective and damaging one of its sacred symbols. Loyalty, authority, and purity all start to make sense when seen in this light. Conservatives tend to be strong believers in the “sacred platoons” of Edmund Burke: the religions, clubs, and teams that bind people together. These platoons usually involve sacred symbols, hierarchies, strict demarcation of ingroup and outgroup, etc. They will also sometimes involve costly signaling rituals (like fasting during Ramadan) which demonstrate a commitment to the collective. In return, the collective provides increased “social capital” in the sense of trust and altruism between members of the group. Diminished social capital within a society means less trust and higher transaction costs. (For example, Orthodox Jewish diamond dealers in New York are able to outperform their competition because of their internal high-trust social network.) Being part of a collective also has psychological benefits, and Haidt here notes a positive correlation between how individualistic a society is and its suicide rate.

This is where group selection comes in. Unlike outspokenly atheist proponents of evolutionary psychology such as Richard Dawkins, Haidt believes that religious tendencies are a feature that emerged via natural selection, not a “bug” that arose as a byproduct of otherwise beneficial cognitive tendencies (such as seeking out conscious intent in nature, which is a beneficial skill in social contexts). According to Haidt, because religious societies tend to be high in social capital, groups with religious beliefs and practices tended to be more likely to survive than non-religious groups. As such, proclivities towards religious belief may actually have been selected for.

Overall, Haidt says that conservatives (including people from many traditional and religious societies) understand the value of social capital and the invisible fabric of society in a way that liberals (and libertarians) don’t. Liberals would thus be well-advised to heed the moral flavors that they tend to neglect, as they ignore the greater part of the moral palate at their own peril. 


II.


One point that Haidt mentions briefly, although not nearly as strongly as he should, is that the book is about descriptive morality, not prescriptive morality. In other words, Haidt’s work - particularly regarding moral foundations - is designed to describe how people in the real world behave, not how they should behave. There’s one line in the book where he says that his preferred moral theory is utilitarianism, but he doesn’t dwell much on this point. Instead, the thrust of the book tends toward convincing liberals that the intuitions about morality held by conservatives are worth paying attention to. Haidt himself seems to have started out as a liberal who grew to appreciate certain aspects of conservatism and traditional societies, and he thus finds himself in a position of urging his (presumably liberal, but also open-minded) readership to follow suit.

Sarah Constantin has a great post where she groups together Haidt, Jordan Peterson, and Geoffrey Miller (whose work I am not familiar with) as “psycho-conservatives”. In short, psycho-conservatives are people who believe that some form of conservatism/traditionalism is optimally suited to human psychology, and this entails a broad swath of implications for things like gender roles, criminal justice policy, race relations, and so on. There does seem to be some merit to this argument. But Sarah makes the following point:

“If you used evolved tacit knowledge, the verdict of history, and only the strongest empirical evidence, and were skeptical of everything else, you’d correctly conclude that in general, things shaped like airplanes don’t fly.  The reason airplanes do fly is that if you shape their wings just right, you hit a tiny part of the parameter space where lift can outbalance the force of gravity.  “Things roughly like airplanes” don’t fly, as a rule; it’s airplanes in particular that fly.”

In other words, while it seems like most ancient, traditional, and conservative civilizations had to hit all of the taste buds on the moral palate to construct a society optimally in tune with human psychological needs, that might only be because we haven’t yet figured out the precise mix of ingredients necessary to create an optimal society without the deleterious effects of traditional values (diminished status of women, inter-group fighting, false beliefs, excessive obsession with symbolism, etc.).

Psycho-conservatives thus tend to run afoul of the naturalistic fallacy -- assuming that an evolutionary “is” entails a moral “ought”. Robert Wright in The Moral Animal was better about this and was very careful to state that though evolutionary psychology can inform the project of morality to some degree, the moral thing to do is often the opposite of what our evolution-based intuitions would suggest. For example, people often have an intuitive preference for harsh punishment, but from a utilitarian standpoint, it would seem that cutting off the hands of thieves and stoning adulterers is not a good way to run a society.

And this, I think, is where Haidt needs to be taken with a grain of salt - to continue the taste bud metaphor. We might all have some level of predilection toward all six moral foundations, but the devil is in the details - and more specifically, the mixing quantities. The tongue’s taste buds have very different sensitivities to different flavors. For example, the taste threshold for strychnine, a bitter toxin, is 0.0001 millimolar, while the threshold for sucrose is 20 millimolar, a difference of over five orders of magnitude (source). The same might be true for the best mix of the moral foundations. There’s no compelling a priori reason to believe that the liberal emphasis on care/harm and dismissal of most aspects of loyalty and hierarchy is the wrong way to do things from a prescriptive standpoint. In the simplified world where the six moral foundations are the only axes along which moral systems vary, the utilitarian problem reduces to finding the correct set of weights along each of those axes which maximizes human flourishing...and we don’t know what those weights are. And beyond that, they may vary from person to person, society to society, and environment to environment. That’s why politics (in the sense of creating good policy) is hard.
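To put that reduction in toy form (my notation, not Haidt’s): if $F(\mathbf{w})$ measures aggregate human flourishing under a weighting $\mathbf{w}$ of the six foundations, the utilitarian problem is

$$\mathbf{w}^{*} = \arg\max_{\mathbf{w} \in \Delta^{5}} F(\mathbf{w}), \qquad \mathbf{w} = (w_{\mathrm{care}}, w_{\mathrm{liberty}}, w_{\mathrm{fairness}}, w_{\mathrm{loyalty}}, w_{\mathrm{authority}}, w_{\mathrm{sanctity}}),$$

where $\Delta^{5}$ is the set of six nonnegative weights summing to one. The catch, as above, is that we can’t evaluate $F$, and there may be no single $\mathbf{w}^{*}$ shared across individuals, societies, and environments.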

At the same time, though, politics is still worth doing. Why? Because “multiple optimal solutions for different individuals, societies, and environments” is not the same as “all solutions are optimal.” Acknowledging that diversity may exist in the way to construct a healthy society does not mean that moral relativism is the answer. (In his conclusion, Haidt also rejected relativism, but again didn’t really expand on how to go from his moral foundations theory to a non-relativistic moral system.)

I have the benefit/misfortune of a journey in the opposite direction from Haidt’s, going from a very traditional/conservative society (Orthodox Judaism) to a very liberal one (secular academia in Israel). And while Haidt is enamored with the benefits of traditional society, I have close acquaintance with its dark underbelly...and it’s not pretty. Many forms of Judaism, including the more liberal forms of Orthodox Judaism (like Modern Orthodoxy, my background), do a reasonable job of promoting human happiness and flourishing despite weighting the moral taste buds very differently than secular society does. And as Haidt says, the strict rules do go a long way in promoting a strong sense of community and social trust. I still maintain strong social ties to the Orthodox Jewish world, largely because the community there is stronger and more fulfilling than anything I’ve managed to find outside of it thus far. That being said, there is a point at which religious communities go too far, and I’ve seen it. The obvious examples are the extremely insular Hassidic communities in the US and the Haredi (Ultra-Orthodox) communities in Israel that prevent their adherents from studying secular subjects, including basic science, math, and English. I am sure that the sense of community is stronger among those groups than among the Modern Orthodox; insularity produces that almost by definition. But at a certain point, we have to put our foot down and say...no. That is not an optimal solution. You can’t trade “strong sense of community” for literally everything else involved in the human experience, like having some basic knowledge about the world around you. The harrowing accounts of people who have left that world, like Shulem Deen’s All Who Go Do Not Return, leave you with a sense that while these communities have some positive aspects, they are dystopias, pure and simple.

So where do I draw the line between acceptable diversity and unacceptable relativism? That’s also a hard question. I suppose I’m not really interested in lines so much as in proximity to optimal solutions. And I think that proximity is something that people within a society intuitively feel, and something that’s quantifiable. I know that for me, living as a secular person is better than living as an Orthodox one. Yes, I miss lots of things about Orthodoxy, and I often dip back into that social world when I feel the need. But I feel more comfortable with myself living outside of the Orthodox bubble than inside it. Most importantly, as a secular person, I have true intellectual freedom; I can think what I want without it being judged according to the tenets of Orthodox theology. Of course, this is all still my personal experience. I feel like my personal experience should be generalizable, but interindividual variability is a very real thing.

Really, what we need is a reasonably good battery of questions which captures human flourishing on a variety of axes (maybe coupled with some neurological measures, though that may end up in Goodhart territory), and then to see how different societies measure up on average. This is, of course, the great utilitarian project, and at the moment it seems like it is being won by the Scandinavian countries, which are very heavily atheist but also keep many of the cultural trappings of religion (link). This might be a good direction to go in. On the other hand, Scandinavian countries also have a lot of other things going for them in the political and economic realm, so it’s a bit hard to disentangle their politics from their religious predilections (and the two might be causally related anyway).

In sum...maybe all six moral foundations are kind of important, and we should think about psychology when designing policy. But as long as we have reasonably acceptable metrics for how happy people tend to be in different kinds of societies, we don’t need to resort to a priori reasoning from psychological first principles. Instead, we can directly measure which kinds of solutions work and which don’t, and then do more of the stuff that works and less of the stuff that doesn’t. And sure, we’re talking about things that can be tough to quantify and are prone to various sorts of measurement error, but that’s life. Reasoning based on evidence about the target that you’re actually interested in will always be better than armchair philosophizing about how people are wired. People are wired in complicated and diverse ways, but some moral systems seem to produce strictly better outcomes than others, and that’s what we should be looking at when it comes to prescriptive morality.

Saturday, July 14, 2018

How I Judge Scientific Studies


If it matters to you whether a new drug will cure you or leave you with debilitating side effects, whether it's sensible to pass legislation to prevent climate change, or whether drinking coffee will raise your blood pressure - it pays to know what the facts are. How do we find out what the facts* are? Usually scientific studies are the best place to look. But we've all heard about the replication crisis, we know that researchers can sometimes make mistakes, and there are some fields (like nutrition) that seem to produce contradictory results every other month. So how do we separate scientific fact from fiction?

I don't really have a great answer to this question. At a first pass, of course, the solution is to use Bayesian reasoning. Have some prior belief about the truth of the hypothesis and then update that belief according to the probability of the evidence given the hypothesis. In practice, though, this is much easier said than done, because we often don't have a good estimate of the prior likelihood of the hypothesis, and it's also difficult to judge the probability of the evidence given the hypothesis. As I see it, the latter problem is twofold (a toy numerical sketch follows the list below).


  1. Studies don't directly tell you the probability of the evidence given the hypothesis. Instead, they give you p-values, which tell you the probability of the evidence given the null hypothesis. But there are many possible null hypotheses, such as the evidence being accounted for by a different variable that the authors didn't think of. This is why scientists do controls to rule out alternative hypotheses, but it's hard to think of every possible control.
     
  2. The evidence in the study isn't necessarily "true" evidence. You have to trust that the scientists collected the data faithfully, did the statistics properly, and that the sample from which they collected data is a representative sample**. 
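Here's the toy numerical sketch I promised: a Bayesian update with invented numbers (the prior, power, and threshold are all made up for illustration), showing why a "statistically significant" result and a believable result are not the same thing:

    # Toy Bayesian update: how believable is a hypothesis after a
    # "statistically significant" result? All numbers are made up.
    def posterior(prior, p_sig_given_true, p_sig_given_false):
        # Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
        p_sig = p_sig_given_true * prior + p_sig_given_false * (1 - prior)
        return p_sig_given_true * prior / p_sig

    # Suppose 10% of tested hypotheses in some field are true (the prior),
    # studies have 80% power, and the significance threshold is 0.05.
    print(posterior(0.10, 0.80, 0.05))  # ~0.64

Under these (invented) assumptions, even a clean p < 0.05 leaves a roughly one-in-three chance that the hypothesis is false - which is why both your prior and the quality of the evidence matter.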

In theory, the only strategy here is to be super-duper critical of everything, try to come up with every possible alternative hypothesis, and recalculate all the statistics yourself to make sure that they weren't cheating. And then replicate the experiment yourself. But, as a wise woman once said, "ain't nobody got time for that." It goes without saying that if the truth of a particular claim matters a lot to you, you should invest more effort into determining its veracity. But otherwise, in most low-stakes situations (e.g. arguing with people on Facebook) you're not going to want to do that kind of legwork. Instead, it's best to have a set of easy-to-apply heuristics that mostly work most of the time, so that you can "at a glance" decide whether something is believable or not. So I've come up with a list of heuristics that I use (sorta kinda, sometimes) to quickly evaluate whether a study is believable.
  1. Mind your priors. The best way to know whether a study is true is to have a good idea of whether its claim is true before you even read it. This is hard to do if the study is outside your field, but if it happens to be in your field, you should know what similar work has been done, which gives you a sense going in of how believable the claim is. If you've been part of a field for a while, you develop a sixth sense (seventh if you're a Buddhist), a kind of intuition for what sounds plausible. At the same time...
  2. Beware of your own confirmation bias. Don't believe something just because you want it to be true or because it confirms your extant beliefs or (especially) political views. And don't reject something because it argues against your extant beliefs or political views. Don't engage in isolated demands for rigor. If you know that you have a pre-existing view supporting the hypothesis, push your brain as hard as you can to criticize the evidence. If you know that you have a pre-existing view rejecting the hypothesis, push your brain as hard as you can to defend the evidence.
  3. Beware of your own experience bias: Your personal experience is a highly biased sample. Don't disbelieve a high-powered study (with a large and appropriately drawn sample) simply because it contradicts your experience. On the other hand, your personal experience can be a good metric for how things work in your immediate context. If a drug works for 90% of people and you try it and it doesn't work for you, it doesn't work for you. At the same time, be careful, because people don't necessarily quantify their personal experience correctly.
  4.  Mind your sources. Believability largely depends on trust, so if you are familiar with the authors and think they usually do good work, be trusting; if you don't know the authors, rely on other things. My experience is that university affiliation doesn't necessarily matter that much; don't be overly impressed that a study came out of Harvard. In terms of journals: peer-reviewed academic journals are best, preferably high-tier and highly cited, though I think that probably gives you diminishing returns. High-tier journals often focus on exciting claims, not necessarily the best-verified ones. Unpublished or unreviewed non-partisan academic work is also at least worthwhile to look at and should be given some benefit of the doubt. Then come partisan academish sources, like think tanks which promote a particular agenda but are still professional and know what they're doing. You should be more skeptical of these kinds of studies, but you shouldn't reject them outright. Be skeptical if they find evidence in favor of their preferred hypothesis; be more trusting if their evidence seemingly goes against their preferred hypothesis.

    Don't place especially high trust in non-professionals, including journalists. Training exists for a reason. Some publications like The Economist seem to have people who know what they're doing in terms of data science and visualization, and some individual journalists can be relied upon to accurately report the findings of trained scientists. The best way to do source criticism is to weight sources by how close to the truth they've been in the past, like Nate Silver and FiveThirtyEight do with political polling. Don't trust politicians citing studies. Ever. Doesn't matter if they're from your tribe or not. Same thing goes for political memes, chain emails forwarded from grandma, etc. You are not going to get good information from people who care about partisan goals more than a dispassionate understanding of truth. And sure, everyone is biased a bit, but there are levels, man.
  5. Replication. The obvious one. If a bunch of people have done it a bunch of times and gotten the same results, believe it. If a bunch of people have done it a bunch of times and gotten different results, assume regression to the mean (a toy pooling sketch follows this list). If a bunch of people have done it a bunch of times and gotten the opposite results of the study you're looking at, the study you're looking at is probably wrong.
  6. Eyeball the figures. If the central claim of a study is easily borne out or contradicted by a glance at the graphs, don't worry too much about p-values or whatnot. This won't always work, but we're aiming for a "probably approximately correct" framework here, and a quick glance at the figures will usually tell you if a claim is qualitatively believable.
    [Figure: linear regression]
  7. Different fields have different expectations of reliability. Some fields produce a lot more sketchy research than others. Basic low-level science like cell biology tends to have fairly rigorous methodology, and a lot of results are of the form "we looked in a microscope and saw this new thing we didn't see before." There's usually no real reason to disbelieve that sort of thing. Most of the experimental results in my field (dendritic biophysics) are more or less of that nature, and I tend to believe experimentalists. Fields like psychology, nutrition, or drug research are more high-variance. People can be very different in their psychological makeup or microbiome, so even the most methodologically rigorous study in those fields might not generalize to the overall population. Simple systems tend to be amenable to scientific induction (an electron is an electron is an electron); it's harder to generalize from one complex system (like a whole person or a society) to other complex systems. That doesn't mean all findings in psychology or nutrition are wrong; I would just put less confidence in them. That being said, some fields involving complex systems (like election polling) are basically head-counting problems. I tend to believe this kind of data within some margin of error, because it tends to be more-or-less reliable on average and because...
  8. Bad data is better than no data. If you have a question and there's only one study that tries to answer it, then unless you have a really good reason to hold an alternative prior, use the study as an anchor, even if it's sketchy - unless it has glaring flaws that render it completely useless. The value of evidence is not a binary variable of "good" or "not good"; it's a number on a continuous spectrum between "not correlated with reality" (0) and "correlated with reality" (1). So use the data that you have until something better comes along.
  9.  The test tests what the test tests. If the central claim of a study does not seem to be even prima facie falsifiable by the experiment they did, run in the other direction. I tend to have faith in academia, but sometimes people do design studies that don't demonstrate anything. Moreover, a lot of studies are taken further than they are meant to be. Psychology studies can inform policy, but don't expect to always be able to draw a line from a psychology experiment done in a laboratory to policy. Policy interventions based on psychology need to be tested and evaluated as policy interventions in the real world (looking at you, IAT). Drug studies on rats need to be carried out on humans. Etc. Popular media often sensationalizes scientific results, which are usually limited in scope.
  10. Not all mistakes are invalidating. Scientists are obsessive about getting things right, because we have to defend our claims to our peers. Nevertheless, science is hard and we sometimes make mistakes. That's life. A single mistake in a study doesn't mean that the study is entirely wrong. The scope of a mistake is limited to the scope of the mistake. Some mistakes will invalidate everything, some won't. Often a qualitative result will hold even if someone messed up the statistics a bit. It's easy to point out mistakes; the challenge is to extract a signal of truth from the noise of human fallibility.
  11. When in doubt, remember Bayes. Acquire evidence, update beliefs according to reliability of evidence. The rest is commentary.
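As promised under heuristic 5, here is a toy pooling sketch: a minimal fixed-effect meta-analysis that weights each replication by the inverse of its variance. The effect sizes and standard errors are invented for illustration:

    # Pooling replications of an experiment: weight each study's
    # effect size by 1/SE^2, then take the weighted average.
    studies = [  # (effect_size, standard_error), made-up numbers
        (0.42, 0.10),
        (0.15, 0.08),
        (0.30, 0.12),
    ]
    weights = [1 / se ** 2 for _, se in studies]
    pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    print(f"pooled effect = {pooled:.2f} +/- {pooled_se:.2f}")  # ~0.26 +/- 0.06

The pooled estimate lands between the individual results - the "regression to the mean" intuition in quantitative form.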
* In science, at least in my field, we hardly ever use the word "fact"; we prefer to talk about evidence. Still, there are findings that are so well-established that no one questions them. Discussions at scientific conferences usually revolve around issues that haven't been settled yet, so scientific conferences are ironically fact-free zones.

** "Sample size" is often not the issue (that's the classic low-effort criticism from non-scientists about scientific studies, which is sometimes valid but usually the standard statistical measures take that into account; if your sample size is too small you won't get reasonable p-values). Rather, even if your sample size is large enough to produce a statistically significant effect for your particular test sample, you have to be wary of generalizing from your test population to the general population (sampling bias). If the two populations are "basically the same" - that is, your experimental population was uniformly sampled from the broader population - then you can use the standard sampling error metric to estimate how far off your results are for the general population. But if your test population is fundamentally different than the general population (i.e. you ran all your experiments on white undergraduate psychology students) there's reason to be skeptical that the results will generalize for the broader population.

Saturday, July 7, 2018

Judaism, Buddhism, and the Sense of Self

I recently finished two books: What the Buddha Taught, by Walpola Rahula, and The Great Shift, by James Kugel. What the Buddha Taught is an introductory text to Buddhism, The Great Shift is a book about changing senses of self and God in biblical and second-temple Judaism. I picked up the two books on different occasions and became interested in their respective topics for different reasons. As it happens, What the Buddha Taught and The Great Shift have some converging themes, so I thought I’d write a joint review of the two books and use that as a springboard to talk about the “idea of self” in Buddhism and biblical Judaism.

I.

First, a brief sketch of the two books. In The Great Shift, Kugel traces the development of how the Bible writes about individuals and their encounters with God over the course of the biblical timeline. Kugel notes that initially, biblical characters are mostly portrayed as acting, rather than thinking. We are not explicitly told of the motivations of Abraham, the doubts of Joseph, etc. Instead, biblical stories are told from a third-person omniscient perspective that focuses on the actions of the biblical characters and their consequences. Kugel suggests that this is not simply a literary choice; rather, the lack of internal dialog of biblical characters might reflect the way that people thought of themselves in biblical times. In contrast to the familiar idea that people generate their own thoughts, people’s sense of themselves in biblical times may have been quite different. People in biblical times might have believed that their emotions and thoughts originated from outside themselves, possibly in the form of spirits that could enter a person. Kugel draws from anthropological research claiming that in many cultures, especially in the past, the idea of the individual, bounded self was very different from the Western notion we have today. The accounts that Kugel cites to this effect come off as bizarre to the Western reader, such as that of the Dinka people, who have no concept of “mind” or “memory”.

Kugel claims that biblical characters, in particular those who had prophetic or revelatory experiences, held that the self was “semi-permeable”. In other words, one’s experiences could be taken over or influenced by external forces. Moreover, the distinction between thought, imagination, and reality was blurred; to the ancient man, images produced by the mind might have been considered part of the “undifferentiated outside”. Thus, visions of conversations with God, wrestling matches with angels, and burning bushes were very much a reality to ancient biblical man. And in a world with little scientific understanding of reality, both the existence of deities and the idea that they would communicate with man were eminently plausible. So it is conceivable that the figures of the Bible - assuming they existed - actually believed they were communicating with God and angels.

As time went on, however, the individual sense of self began to congeal in the Bible, leading us to characters like Jeremiah and Job, who do introspectively reflect on their emotional states. Together with the crystallization of the self emerged a distancing from God, in the sense that people no longer had direct “sensory” experiences of encountering the Divine. As history progressed, this led to a focus on law and prayer as opposed to sacrificial service at the Temple, which was predicated on a more experiential conception of God. No longer part of lived reality, God became distant from man, to the extent that religious people talked about “re-establishing God’s sovereignty” on Earth, as opposed to the old assumption that God was ever-immanent.

II. 

What the Buddha Taught also touches on the sense of self, albeit in a different way. Rahula describes Buddhism as a system of thought and practice dedicated to eradicating dukkha, usually translated as “suffering” but actually having a broader meaning referring to the impermanence of material things (cf. the term הבל, usually translated as “futility” in Kohelet, which literally means something like “smoke” and ostensibly refers to a similar concept of impermanence or transience). In the Buddhist way of thinking, man is constantly confronted by suffering due to the unreliable nature of the material world, whether it be in the form of sickness, old age, death, or any of the myriad ways in which life can go wrong. Buddhism’s solution to dukkha is the idea of anatta, or absence of self. The only way that a person can extricate himself from the suffering that arises from the impermanence of the world is to negate the self itself. In other words, Buddhism prescribes viewing the “self” as indistinct from any other sensory information (AKA “one with everything”, which is apparently what the Dalai Lama orders when he goes to a pizza shop).

Anatta in Buddhism is a practical challenge in addition to being an intellectual claim. Even if it is true that there is no such thing as the self, that is a very difficult thing to feel experientially. The solution that Buddhism recommends for this practical problem is meditation. When one meditates, he can observe thoughts emerging and dissipating with the attitude of an objective, disinterested observer. As a person advances in his meditation practice, he gains the ability to dissociate his awareness from the thoughts, feelings, and perceptions that he is aware of. When one can fully dissociate awareness from the objects of awareness, and realizes in both an intellectual and an experiential sense that the “self” is simply a conglomerate of mental objects (i.e. memories, sensory experiences, emotions, etc.), he will have achieved the state of nirvana. In this state, a person will be free from all forms of desire and lust for the material, resulting in the ultimate bliss. In this way, nirvana is a recipe for both happiness and morality. Insofar as immoral behavior stems from material desires and biological urges, being able to free oneself from the “self” allows one to act completely selflessly, devoting his thoughts to the love of all creatures and his behavior to the betterment of the lives of others.

[Aside: As a neuroscientist (uh oh, here we go) I have a strong sympathy for the idea of anatta. I think neuroscientists would generally agree that thoughts, emotions etc. are coded in the brain and are brought to awareness by deterministic(ish) neural dynamics. We haven’t solved the hard problem of consciousness, of course, but the information relayed to consciousness - the spikes that eventually become qualia - are ostensibly present for everything that you think and feel. There isn’t a categorical distinction between sensory information and self-information like emotions and memories; in the brain it’s all just spikes. So in principle the observation that self-thoughts aren’t that different from sensory perceptions and that the “self” is really just a conglomerate of a variety of sources of information is well-taken in the modern scientific view.]

III.

So where am I going with all this? First of all, I think there’s an argument to be made from both What the Buddha Taught and The Great Shift that Western people take the idea of self too seriously. I’m not sure there is a right or wrong answer to what the self is, but at least from a scientific and philosophical standpoint, the Buddhists might actually have a better framework for thinking about the question than the Western world does. 

Beyond that, though, there seem to be very real psychological ramifications for how we contextualize our ideas of self and individual identity. In biblical times, if we accept Kugel’s view, a fluid sense of self could result in “real” encounters with the divine in a manner that even very religious people have a hard time with today (at least in the absence of psychedelic drugs, which further argue for the idea of the malleable self). And if the Buddhists are right, annihilating the self will lead to eternal bliss (and there seems to be at least some evidence that meditation has some positive effects for anxiety and depression). 

Amusingly, both Buddhism and Judaism would seem to be in favor of the abnegation of a strong sense of self, albeit for different reasons. In Judaism, a weaker sense of self might make possible the lived experience of divine encounters. And in Buddhism, of course, the absence of the sense of self is the terminal goal. I’m not sure either of these visions are ones which we should adopt, but at the very least it would be worthwhile for the modern West to question whether we should really take the self for granted.