Sunday, November 4, 2018

In Defense of non-Realist Morality

Some people are terrified of non-realist positions on morality. The reasoning goes as follows: if there is no objective morality, then what makes ISIS wrong? Why can't we behead people, or vivisect babies in a bathtub? Shouldn't there be some sort of objective standard that we can use to prove, on a rational basis, what is right and wrong?
On the one hand, I sympathize with the sentiment, but on the other hand, I think it is misguided. First of all, of course, the fact that something doesn't sit well with us emotionally doesn't mean that it's not true. But I'm not even sure that the concern itself is well-founded. Consider: do you think that, if you were to engage an ISIS member in a levelheaded conversation about morality, you would be able to convince him that <insert your favorite moral system here> is the objectively true moral system, and not radical Islam? Radical Islam (like most religions) also has a realist understanding of morality, and its realist interpretation includes beheading infidels. So in order to convince them to adopt your realist system, you'd have to convince them on the basis of...what, exactly? If you don't have a shared moral epistemology - that is to say, if you don't agree on a way to discover objective moral truth - then there's no way to convince the ISIS member not to behead people.
The worst case scenarios of non-realist views of morality, I think, are far less scary than those of the realist views. If no one believes in objective morality, you tend toward an Ayn Rand-type world where everyone acts in what they believe to be their own self-interest. While it is possible to get stuck in suboptimal equilibria here and have complete anarchy, it's also quite likely that - without any moral realism anywhere - people will cooperate and create prosocial norms which they perceive to be in their self-interest. People can also be motivated - simply via self-interest - to create a government that passes laws, again for the common good. You can have strongmen who act in their own interest to harm everyone else, but in the long term the strength of numbers will always overpower the strength of a single person. And when the people don't overpower the strongman, there's often some sort of realist ideology (e.g. communism) preventing the natural emergence of opposition to an individual who ruins things for everyone else.
The worst case scenarios for realist views of morality are far more scary, especially when there is no agreement on moral epistemology. That's how you get ISIS.
Societies have become less cruel over time (if you are of the view that things are getting better) not because we have discovered better moral principles, but simply because - via trial and error - we have figured out systems and social contracts that are better suited to people's preferences, and because people have figured out how to effectively utilize their power in numbers in order to achieve the preferences of the masses.
What about the individual? For the most part, if you live in a healthy society, social norms and laws will be sufficient to ensure that the individual behaves himself. The incentives in the system should be sufficient, in 99% of cases, to constrain individual behavior to be prosocial. And in the other 1% of cases, people will have to contend with their own conscience. But there are almost always consequences to anti-social behavior. Eventually.

Monday, October 22, 2018

Consciousness and Information (or: Why I am a Cartesian Dualist)

I.
One of the major points of confusion I see in many modern theories of consciousness, including Integrated Information Theory (IIT), Global Neuronal Workspace theory (GNW), and others, is an unjustified jump from information and the processing thereof to conscious, subjective experience of that information. The strong versions of these theories tend to make the mistake of saying that once you have the right type of information processed in the right way, subjective experience will emerge. On these theories, the brain produces consciousness because it processes a lot of information and either combines different kinds of information with each other (IIT) or selectively focuses on a particular subset of information (GNW).

I think that underlying these ideas is a fundamental misconception of what information is and what it can do. Information is a mathematical concept, not a physical one. We can use physical systems to represent information, just as we can use a pair of gloves to represent the number "two". But that is just a matter of cognitive convenience; if I wanted, the pair of gloves could represent the number "ten" by counting the number of fingers on the gloves. Information is the same in this respect; information is a property of random variables, not matter. I can use a coin to represent a random variable by saying "this coin has two states, heads and tails; flipping the coin is an experiment whose outcome follows a Bernoulli distribution with p = 0.5, and the side on which the coin lands determines the outcome of the random variable." But this is, again, a matter of convention. I could, for example, consider the number of times the coin flips in the air as the random variable, which would then have a different number of states (in principle, the set of all natural numbers) and a different probability distribution. So the coin itself doesn't contain information intrinsically; the information depends on what we, as observers, choose the coin to represent. (Things become a bit more nuanced with subatomic particles, which have physical states that seem to be at least somewhat well-defined and restricted in terms of the information that they convey, and there's also the issue of Landauer's principle which needs to be addressed, but I'll leave those aside for the moment.)

[Image: a pair of gloves]
A primitive computer

In addition to physical objects being able to represent different information depending on how the observer chooses to define the state space, information is substrate independent; in other words, you can store the same information in a variety of physical media and it will be identical from a mathematical standpoint. Two rocks and two socks can both convey the number "two". Let's take another example: the string "Hello" is equivalent to the binary string 01001000 01100101 01101100 01101100 01101111 in ASCII encoding, where every character is assigned an 8-bit binary code. A printer can convert the above binary string, stored in transistors on your computer, to the English string written in ink on a piece of paper. Different physical media, same information.
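To make this concrete, here's a quick Python sketch of my own (nothing about Python is essential here): the conversion between the two representations is mechanical and lossless in both directions, which is exactly what "same information, different substrate" means.

```python
# Encode the English string "Hello" as ASCII, then render each
# byte as an 8-bit binary string -- same information, new form.
text = "Hello"
bits = " ".join(f"{byte:08b}" for byte in text.encode("ascii"))
print(bits)  # 01001000 01100101 01101100 01101100 01101111

# The mapping is invertible, so no information is lost going back.
decoded = "".join(chr(int(b, 2)) for b in bits.split())
print(decoded)  # Hello
```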

Let us posit, for the sake of argument, that ASCII was never invented. In other words, no one ever created a mapping between English characters and 8-bit binary numbers. Does 01001000 01100101 01101100 01101100 01101111 still mean the same thing as "Hello"? Well, this is kind of a trick question. In information theory, entropy, the standard measure of information, doesn't answer questions of the form "does A mean the same thing as B?". Instead, entropy measures the amount of information in a probability distribution, but it doesn't tell you anything about "meaning." So, for example, the average letter in the English language has about 2-3 bits of entropy (after compressing via word frequency and so forth), meaning that if you want a binary system that can represent any arbitrary string in English, you'd need, say, 15 bits to encode all 5 letters of "hello". So entropy tells us that 01001000 01100101 01101100 01101100 01101111 could represent "Hello" with some bits to spare if we wanted it to; we would just need to create the encoding scheme that performs the appropriate mapping. But there's no (lossless) mapping from the English language to binary that could produce the word "Hello" with a single bit.
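To put numbers on this, here's a small Python sketch of my own using the standard Shannon entropy formula, H = -sum(p * log2(p)); the 26-letter uniform distribution is a toy stand-in, since real English letter frequencies are far from uniform (which is exactly why the effective rate drops toward 2-3 bits per letter):

```python
import math

def entropy(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries exactly 1 bit per flip.
print(entropy([0.5, 0.5]))  # 1.0

# 26 equally likely letters would need ~4.7 bits each; because actual
# English is much more predictable, fewer bits per letter suffice.
print(entropy([1 / 26] * 26))  # ~4.70
```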

II.

Information theory can actually do a bit more for us, though. There's a measure called mutual information, which does indeed tell us how much information A contains about B (and vice versa; mutual information happens to be symmetric). However, mutual information requires some additional knowledge about A and B, namely their joint probability distribution: the probability that in a given experiment A will have value x and B will have value y. So, for example, there is non-zero mutual information between a person's height and weight, because height is at least somewhat predictive of weight. In this sense, mutual information is similar to correlation, but it is a stronger measure, because correlation only captures linear relationships between A and B, whereas mutual information tells you the maximum information you can extract from A about B using an optimal function.
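Here is a minimal Python sketch of the definition, I(X;Y) = sum over (x, y) of p(x,y) * log2(p(x,y) / (p(x)p(y))); the two joint distributions are toy examples of my own, chosen to show the two extremes:

```python
import math

def mutual_information(joint):
    """I(X;Y) in bits, from a joint distribution given as {(x, y): p}."""
    # Marginal distributions p(x) and p(y).
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(
        p * math.log2(p / (px[x] * py[y]))
        for (x, y), p in joint.items() if p > 0
    )

# Two independent fair coins: knowing one tells you nothing about the other.
independent = {(x, y): 0.25 for x in "HT" for y in "HT"}
print(mutual_information(independent))  # 0.0

# Two perfectly correlated coins: one fully determines the other (1 bit).
correlated = {("H", "H"): 0.5, ("T", "T"): 0.5}
print(mutual_information(correlated))  # 1.0
```

Note that the function needs the joint distribution as input; if no joint experiment relating A and B has ever been defined, there is simply nothing to feed it, which is the point made about ASCII below.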

Going back to our ASCII example: if we already have a computer that translates binary to English, we can calculate the joint probability between the binary code stored in its memory and the words that it prints on its screen or on a piece of paper, and from there we can determine that the mutual information between the ASCII code and English is maximal. If we don't already have such a computer, though, then the joint probability between ASCII and English is simply not defined, and the mutual information can't be calculated.

Our mutual information measure also doesn't really approach anything resembling "meaning". All we've said is that it is possible to convert one string of symbols into another string of symbols without losing information. Because I know that my computer uses a well-defined mapping from bits to letters, I can reconstruct text from documents stored on a hard drive. That's great. But if English-speaking humans weren't around to understand the semantics of English, this would be a pointless exercise; a meaningless conversion of one string of symbols into another. The same is true, by the way, when it comes to information processing. By performing a mathematical operation on some data, I'm simply converting one string of symbols into another string of symbols by means of a function (i.e. via a Turing machine algorithm). I could have a string of symbols a trillion trillion bits long, and I could perform a trillion trillion operations on it (if you want, I can even make the operations behave like a network, because that seems to be something that people think is important), and at the end I'd still be left with...a string of symbols. There is no Turing-computable function of which I am aware that can take a string of symbols, perform a mathematical operation on it, and return something other than a string of symbols.

III.



If you want to get something other than a string of symbols out of a string of symbols, you have to leave the realm of mathematics and return to the world of physics, with particles that bump into each other and that sort of thing. The best example here is from molecular biology. In the classic (extremely simplified) central dogma of molecular biology, DNA is transcribed into RNA, which is then translated into a protein. DNA is an encoding system equivalent to binary, except with four symbols (A, T, G, C) instead of binary's two or our more commonly used ten digits. All of these systems are of course basically the same; the differences are merely representational. DNA is transcribed into RNA, which has the same bases as DNA except that T is replaced by U. The RNA strand is complementary to the DNA strand, which means that wherever G appears in the DNA strand a C appears in the RNA strand, but the two strands are informationally equivalent, because there is a simple algorithm to reconstruct the DNA string from the RNA string and vice versa. The string of As, Ts, Gs, and Cs only matters, though, when the RNA is translated into amino acids, which make up proteins, because proteins actually do stuff in the cell. Proteins help to catalyze chemical reactions, transfer materials within and between cells, and so on. So the DNA only has "meaning" once it's converted into a protein. And, for what it's worth, information is actually lost when RNA is translated into a protein, because multiple RNA codons (trigrams of bases) can code for the same amino acid, meaning that you can't tell from looking at a protein exactly what RNA sequence created it.

[Image: ribosome translating RNA]
RNA Translation
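A heavily simplified Python sketch of this pipeline: the complement map and the four leucine codons below follow the standard genetic code, but everything else (strand direction, start/stop codons, the rest of the codon table) is deliberately omitted.

```python
# Base-pairing map for transcription: DNA template -> complementary RNA.
COMPLEMENT = {"A": "U", "T": "A", "G": "C", "C": "G"}

# Four different codons all map to leucine -- translation is lossy.
# (Tiny subset of the standard codon table, for illustration only.)
CODONS = {"CUU": "Leu", "CUC": "Leu", "CUA": "Leu", "CUG": "Leu"}

def transcribe(dna):
    """DNA template strand -> RNA. Informationally equivalent: invertible."""
    return "".join(COMPLEMENT[base] for base in dna)

def translate(rna):
    """RNA -> amino acids, read codon by codon. Information is lost here."""
    return [CODONS[rna[i:i + 3]] for i in range(0, len(rna), 3)]

rna1 = transcribe("GAA")
rna2 = transcribe("GAG")
print(rna1, rna2)                        # CUU CUC -- distinct RNA strings...
print(translate(rna1), translate(rna2))  # ['Leu'] ['Leu'] -- ...same protein
```

Two different DNA strings yield the same amino acid, so the protein alone can't tell you which sequence produced it; the symbolic steps are reversible, and only the final, physical step discards information.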

If I were to create a DNA strand a trillion trillion base pairs long by randomly concatenating bases and inserting it into your genome, it probably wouldn't do anything especially useful (actually, there's a good chance it would kill you). And again, the same goes if I were to process the information in that strand a trillion trillion times. The amount of information in a string, or of information processing applied to it, is not highly correlated with the usefulness or "meaning" of that information. The important step for "meaning" is the translation of the information to a physical substrate, not the symbolic representation itself. The choice of symbolic representation is arbitrary; it just has to be long enough to encode whatever physical information you need to "read out".

IV.

So far I've been tossing around the word "meaning" without defining it, because this is where consciousness comes in. When we look at the word "Hello" written on a screen and we see it -- I mean experientially -- that cannot be information processing. Information processing can turn "Hello" into "Cello", or it can translate "Hello" into 01001000 01100101 01101100 01101100 01101111 (or, in the case of the brain, into the code of firing neuronal action potentials). It can associate "Hello" with other strings of information, such as an image of a waving hand - and by "associate" I mean "perform some mathematical operation whereby the symbolic representation of the waving hand and the symbolic representation of the word 'hello' are combined to produce a new string of symbols." But information processing - even in network structures - cannot move "Hello" from the world of strings of symbols to the world of conscious experience. And if it can, you have to explain how, because such a claim very much looks like a category error.

In my view, if consciousness is anything, it is unlikely to be information or to emerge from information, because we have no examples of things that are not strings of symbols emerging from strings of symbols. Consciousness (or perhaps more precisely, qualia) is a substrate which interacts with information; it is not the information itself. And what is interesting about consciousness is not the information that it reads, but rather the fact that it can read anything at all, even if the information it reads is simplistic. The brain does a lot of processing, but the purpose of this processing is not to create consciousness; it is simply to prepare the information for consciousness's interaction with it.



Consciousness is not necessarily a physical substrate, though I find the view of a "consciousness particle" a lot more plausible than the idea that consciousness spontaneously emerges from information processing. I also believe that attempts to ground consciousness in the combination of information processing and "causal networks" of physical interactions in which that information is stored (as in IIT) are misguided. The best approach, in my view, is to treat consciousness as a categorically independent object of inquiry and to distinguish consciousness from the information with which it interacts, the latter lending itself to network- and computation-related explanations.

Once we divorce consciousness from information, we are left with something very small: a consciousness that does not contain memory, personality, or anything that could be considered persistent. Persistent information is stored in the brain and may be projected to consciousness, but consciousness does not store information for more than a negligible amount of time. This version of consciousness is so small that, in the absence of information, it is indistinguishable (as far as we know) from being absent. To sharpen this point, consider the following question: is a sleeping person actually unconscious, or is his brain simply not projecting any information to the conscious substrate? If consciousness is synonymous with information and the processing thereof, the question is meaningless. But if consciousness is a substrate for information, then it can be present in the absence of information, just like a computer can be on in the absence of data held in memory. You might not see anything on the screen, but the computer is still there, awaiting informational input.

Wednesday, October 10, 2018

Netivot Olam: Torah as Teleology

In the last post, we discussed the Maharal's view of the Torah as a teleological text; the Torah contains instructions for how the universe and man ought to be. Maharal is careful to distinguish between chochma, or knowledge, and Torah, which literally means "instruction." The Torah says how things should be; it provides order. It is not necessarily a description of how things are.

To the Maharal, this is not simply an abstract philosophical claim. He believes that there are practical, real-world implications for the Torah being an organizing force in the universe. To this end, he cites the Talmud (Eruvin 54a) which states that if a man walks alone on a road (i.e. between cities, where it is uninhabited) and he is not accompanied by anyone, he should study Torah. Moreover, if a person has pain in his head, throat, internal organs, bones, or even his whole body, he should study Torah, as the Torah will heal him. Rabbi Yehuda the son of Rabbi Chiya adds to this that God is not like man, because when man gives a medical treatment, the treatment might be helpful in one way and harmful in another (i.e. medicine has side effects) but God's treatment, the Torah, is equally good for everything; it heals a person completely.

Maharal first focuses on the statement about walking alone on a road. Rabbinic tradition has it that "all roads are assumed to be dangerous" (Kohelet Rabba, 3:3). On Maharal's view, this isn't just because of the presence of highwaymen and dangerous animals and whatnot. Rather, there is a metaphysical sense in which uninhabited parts of the Earth are incomplete, or not fulfilling their teleological duty. [An aside: one of the interpretations of the story of the Tower of Babel is that the sin of the creators of the Tower of Babel was that they wanted to build a city to unify all of mankind in a single place, instead of spreading out over the entire world and inhabiting it.] The source text for this is Isaiah 45:18, "he did not create [the world] to be empty, but formed it to be inhabited."  Parts of creation that are not in their teleologically optimal state, such as uninhabited roads, have a sort of inherent danger associated with them. A person thus either needs to travel with someone else (when multiple people travel together, they form a social entity, and are thus considered as turning an uninhabited place into an "inhabited" one) or to study Torah, which itself has the power to bring the world to its optimal state.

[Image: a deserted road]
Fig 1. Here there be dragons.

The same reasoning applies, says the Maharal, to diseases of the body. The teleological state of the body is a healthy body, and bringing the Torah in contact with a sick body brings the body back to its optimal state. This is easier to understand with pain of the head, which is the seat of the intellect and thus would be more naturally attuned to the Torah's healing powers, but it is true for the rest of the body also. In this context, Maharal mentions two kinds of intellect: the analytic intellect (שכל עיון), which resides in the head, and the linguistic intellect (שכל דיבור), which is associated with the throat.

Regarding the difference between Godly medicine (the Torah) and man-made medicine, Maharal notes that all man-made medicine has some sort of physical properties, and those physical properties will necessarily be harmful for some things even if they are helpful for others. He gives the example of a "hot" medicine being good for limbs which are "hot" but bad for limbs which are supposed to be cold. The Maharal is working with outdated medical science and terminology here, but one can easily think of many examples of drugs with side effects due to the drug's adverse reactions with non-targeted organ systems. The Torah, in the sense of it being an organizing force that brings matter to its teleological state, doesn't have any of the downsides of physical drugs because it is not a physical substance; rather, it simply reorients matter to its teleologically optimal state. (Evidence that this occurs and an explanation of the mechanism by which this occurs are conveniently not mentioned.)

Interestingly, Maharal emphasizes that the Torah has the same healing powers for psychological ailments as it does for physical ailments. The particular psychological issues that the Maharal mentions are "jealousy" and "desire", which are associated with the heart and liver, respectively. In other words, all of the body parts that the Talmud mentioned also have some association with a psychological ailment. While this may seem far-fetched, there is growing evidence that the enteric nervous system and the microbiome play an important role in psychological health, so at least in the case of the digestive system, there is an element of truth in the mind-body connection. In any event, it is worthwhile to note that the Maharal considered psychological ailments sufficiently important for them to warrant medical attention in an age where clinical psychology and psychiatry did not exist as medical disciplines.


Tuesday, October 2, 2018

Maharal's Cosmology and the Is-Ought Problem

David Hume, the 18th century Scottish philosopher, famously introduced philosophy to the is-ought problem - the fact that the universe is a certain way can never imply that a person ought to do something. Even a seemingly morally compelling "is" claim like "dolphins are sentient beings" doesn't entail that people shouldn't eat dolphins; you need to start with a moral axiom like "it is wrong to eat sentient beings." Or, perhaps more in line with Hume's example, "God exists and says you shouldn't eat dolphins" doesn't imply that you shouldn't eat dolphins unless you accept "it is wrong to disobey God" as a moral axiom.

Hume's position puts moral philosophy in a quandary: if you can't derive moral axioms from fact claims about the world, where exactly can you derive moral axioms from? The simple answer is that "ought" only has meaning in terms of some sort of goal. So if your goal is to get to the supermarket around the corner, it's legitimate to say "I ought to put my pants on." But there are no cosmic goals that can be derived from facts about the universe, and thus no cosmic morality.

Enter the Maharal. The Maharal draws on two rabbinic statements. The first statement is "God looked into the Torah and created the world". The second statement is "All of the work of the creation of the world was suspended until the sixth of the month of Sivan (the date of the holiday of Shavuot, which celebrates the receiving of the Torah); if Israel would accept the Torah, good; if not, the universe would return to formlessness and emptiness."

For the Maharal, the Torah is not simply a book of law for human conduct. It has another, hidden facet - it is a blueprint for the physical organization of the universe. The Torah is thus both prescriptive - in terms of telling man how to behave - and descriptive - in terms of describing the universe. Both aspects of the Torah describe teleological necessities - how the universe ought to be and how man ought to act. The former was carried out in a deterministic fashion when God created the world; the latter is subject to the free will of man to observe or not observe, but when he chooses not to observe the Torah, he violates the divine teleology of the Torah. (This is related to the idea of teleology in classical Greek philosophy, where an object's teleological "purpose" is necessary in order for it to move. This is in contrast to the modern view, where objects in motion stay in motion and change their motion only when acted upon by a force.)

[Image: the universe]

Fig. 1. The universe. Some of it, anyway.


The is-ought problem is thus turned on its head: we do not derive "ought" from "is", rather the "is" derives from "ought". The universe exists because God commanded it via the Torah, and morality is compulsory for the same reason. The Torah rejects the idea that morality derives from truth claims about the world; rather the trueness of the world derives from the Torah, which is a fundamentally moral document.

 Maharal continues this idea, citing another rabbinic passage. "Why was the world created with ten utterances? To teach that the righteous are rewarded for upholding the world that was created with 10 utterances, while the wicked are punished for destroying the world that was created with 10 utterances." (Avot ch. 5)

A naive understanding of this passage would be something to the effect of "God wanted to reward the righteous a lot and punish the wicked a lot, so he spent more effort on creating the universe, which makes sins and good deeds more significant." To the Maharal, this reading of the passage is incorrect and misses a fundamental concept.

The number 10, in the Maharal's view, has a symbolic meaning. In our base-10 number system, the digits represented by the numbers 1-9 are unique and individuated - they are each assigned a separate symbol. Once we arrive at the number 10, the unique, individual numbers are "collected", so to speak, and converted into a 1, moved over by a decimal place. (The Maharal is actually thinking here in terms of the Hebrew Gematria number system, but the same reasoning applies there as well.) 10 thus symbolizes the individuated coming together in an organized collective. The "world created with 10 utterances" therefore means "the world that was organized and collected from disparate units into a single unit."

In the Netivot Olam's cosmology, God initially created the world "like a hammer splitting a stone" (Jeremiah 23). In other words, creation involved an explosive event sending matter hurtling everywhere (big bang theory, much?). A force was thus needed to ensure that the matter would unite and become larger, organized objects. In the modern understanding of physics, this role is played by the four fundamental forces: gravity, electromagnetism, and the strong and weak nuclear forces. To the Maharal, the Torah is the force that binds matter together, because of the Torah's teleological directive for matter to organize. (A modern Maharal might hold to his view and claim that the four fundamental forces are all manifestations of the Torah's influence on matter.)

The consequence of the above interpretation of reality is that moral violations - violations of the Torah - are not limited in their effects to man and his immediate surroundings. Rather, violations of the Torah disrupt the very fabric of the universe, because the Torah is the only thing preventing the universe from dissipating into entropy. The wicked destroy the "world that was created with 10 utterances" - that is, the universe made of disparate particles united by the cosmic force of the Torah - while the righteous uphold it.


Tuesday, September 18, 2018

Maharal Netivot Olam: Introduction

Together with my colleague Itamar Landau, I’ve begun studying Netivot Olam, a book written by the Maharal of Prague (Rabbi Judah Loewy), a 16th century rabbi most famous in popular culture for the legend of the golem. Though the golem stories are a 19th-century invention, the fact that the Maharal was credited with the ability to create an animate man from clay indicates the esteem in which he was held in Jewish folklore. The Maharal was a scholar of the sciences, philosophy, Talmud, and Jewish mysticism, and he was also involved in politics. There’s a cool statue of him at the city hall of Prague which shows him sporting a wicked beard.



Also, fun fact: J. Robert Oppenheimer, the father of the atomic bomb (“behold I am become death, destroyer of worlds”), is a direct descendant of the Maharal, so apparently harnessing cosmic forces to protect the Jewish people runs in the family.

Netivot Olam is one of Maharal’s philosophical works, primarily concerned with ethics. Because I’m studying this for the first time, instead of giving an overview of the book, I’m going to try to write notes here on the chapters that we study as we study them. This time, we’re going to do the introduction.

In the introduction, Loewy divides ethics into two categories - “law/obligation” and “charity”. Each of these categories is essential for living a good life, but they differ in a variety of ways. Drawing from verses in the book of Proverbs, Loewy says that law is like a narrow path and charity is like a wide road. What he means by this is that obligation is rigid and well-defined, and if you deviate in the slightest bit from your obligations, you’ve seriously messed up. If you murder, cheat, or steal even once, you’ve already done something that has led you in a bad direction. Charity, on the other hand, gives people a wide berth in terms of how and when to fulfill it. You don’t have to give money to every homeless person that you see or volunteer at every soup kitchen. So charitable behavior, while necessary for living a good life, gives people some level of latitude.

Because law/obligation is so rigidly defined, few people manage to live their whole lives while staying true to the path of obligation. Everyone messes up at some point and does something that they’re really not supposed to do. So the narrow path doesn’t hold very many people. Conversely, because the charitable road gives people a lot of leeway, most people don’t completely veer from it. Almost everyone does generous things every so often, so unless you live your life entirely selfishly, you’re probably okay on the charity front.

Maharal then discusses the nature of evil, that all evil has its root in the material (Hebrew: חומר), as the impermanence of physical things and material desire is the source of all privation in the world. He associates the material/impermanent with femininity, which is the reason why temptation to do evil is frequently referred to in the Bible with feminine language (yeah, it’s a bit misogynistic). The temptation to do evil moves people from the path of the good, and when they get older and their temptations weaken, they regret the mistakes of their youth that led them to make bad decisions that have deleterious consequences on their long-term happiness.

Maharal also cites an early rabbinic source (Avot 2) as saying that rewards and punishments for the commandments and prohibitions are not explicitly stated in the Torah, so that people shouldn’t dismiss the “trivial” commandments in favor of the more consequential ones. This is, prima facie, antithetical to the consequentialist/utilitarian view that assigning quantitative values to outcomes is necessary for a moral calculus. The consequence of the utilitarian view, though, is that people will tend to ignore trivial things because they can be offset by more important things. So, for example, if people know that not flying in an airplane is far more consequential to the environment than recycling, everyone might just decide not to recycle. There is thus a benefit to a more deontological perspective where actions and outcomes aren’t assigned specific values and all moral obligations are treated as equally imperative.

Maharal concludes his introduction with an explanation for the title of his book. Netivot Olam (lit. “paths of the world/eternity*”) is intended to explain the ethical path which leads to the good life. The work is thus a 32-chapter book about ethics (presumably corresponding to the Hebrew word לב - heart/mind - in the Gematria numerology system), but he adds one additional chapter about Torah, because in the Maharal’s view ethics stem from the study of Torah.

*There is a bit of ambiguity as to whether עולם in the title means "world" or "eternity", because in Judaism the concepts are linked. Rabbinic Judaism believes in the afterlife where man is rewarded for his good deeds and punished for his sins. In Rabbinic parlance the afterlife is called "the world to come", in distinction to our material reality, known as "this world." A person who merits life in the world to come (by dint of good deeds) effectively attains an immortal, eternal life.




Sunday, July 22, 2018

Review: The Righteous Mind, by Jonathan Haidt

I.


The Righteous Mind: Why Good People Are Divided by Politics and Religion, by Jonathan Haidt (2012), rose to fame largely because of its explanation of the political differences between conservatives and liberals in the United States. The topic has only grown more relevant, as the 2016 election showed just how polarized American politics has become. So though this review may come relatively late -- as book reviews go -- there is ample reason to revisit Haidt’s work on this topic six years later.

The Righteous Mind belongs to a genre of literature I would broadly call “human evolutionary behaviorism”. Though the topic has antecedents in Darwin, the serious scientific study of human behavior on a wide scale through the lens of experimental psychology is relatively young. I would personally point to the behavioral economics studies of Kahneman and Tversky in Thinking, Fast and Slow as the origin of the discipline. That may be an arbitrary line, however, because people have been seriously thinking about the motivations for human behavior for a very long time, and anthropologists and psychologists have contributed valuable insights since the births of their respective fields. The reason why I emphasize the recentness of the field is that...as far as I can tell, our scientific knowledge of the principles governing human behavior largely depends on a few decades of studies. This also tends to mean that if you’ve read one popular book on the subject, there are pretty significant diminishing returns for each subsequent book that you read.

My personal introduction to human evolutionary behaviorism was The Moral Animal, by Robert Wright (1994). The Moral Animal focused on an evolutionary psychology explanation of human behavior, particularly behavior such as altruism. From my recollection - it’s been a while since I’ve read the book - The Moral Animal essentially makes the claim that altruistic behavior can be selected for via evolution due to A) personal interest, in the sense that reciprocal altruism can benefit everyone in the long term and B) the “selfish gene” which “cares” about spreading itself (i.e. the gene), not about the individual person, so people with similar genes (especially families, though in theory this can extend to everyone in a species) behave altruistically because it’s generally good for the spread of closely-related genes. The Righteous Mind considers these two possibilities but also emphasizes a third possibility: group selection, which is the idea that groups which behave altruistically within themselves tend to outlast groups that don’t. I’ll say a bit more about this later, but I just want to put the book in the context of some of the other ideas that are out there.

Anyway, The Righteous Mind is divided into three sections, each of which has a central claim. Haidt puts a lot of emphasis (maybe even too much emphasis) on the main takeaways of each section and chapter, so I’m not doing much editorializing here when I describe what they are.

Claim 1: Human moral thinking is primarily driven by intuition/emotion/sentiment, not reason. Haidt here contrasts the view of Plato, who believed that emotions ought to follow the intellect, with David Hume, who stated that “Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.” Haidt is firmly on the side of Hume, using a metaphor of “the elephant and the rider.” The elephant is emotion and intuition, the rider is the intellect. According to Haidt, the elephant does most of the work, the rider is just along for the journey. More precisely, when it comes to morality, the intellect is used to defend the conclusions that intuition has already reached, not the other way around. Haidt supports this claim with a battery of experiments in which he asks participants to judge a variety of scenarios which intuitively feel very wrong but where it is difficult to see any utilitarian downside (e.g. one-time consensual secret incest between a brother and sister with birth control). Participants will often struggle for a while to come up with an argument for why something they intuitively believe to be wrong is actually bad, and when they fail to come up with one, they don’t necessarily change their view.

Claim 2: Morality is about more than harm. Here, Haidt discusses his “Moral foundations theory” which posits that instead of just being about suffering and happiness, intuitive morality comes in six different “flavors”, which he calls moral foundations. The flavors are Care/Harm, Liberty/Oppression, Fairness/Cheating, Loyalty/Betrayal, Authority/Subversion, and Sanctity/Degradation. Care/Harm is the easy one: it’s about empathizing with other people (and cute animals) and caring that they are happy and don’t get hurt. Liberty/Oppression is about freedom from tyrannical people who abuse their power. Fairness/Cheating has to do with making sure that people are rewarded for good behavior, punished for bad behavior, and don’t get more than their fair share (liberals) or than they deserve (conservatives). Loyalty/Betrayal focuses on being loyal to your group and involves things like treating your country’s flag with dignity or not speaking ill of your country on a foreign radio station. Authority/Subversion relates to respecting social hierarchies, like children honoring their parents or students listening to their teachers. It is also the basis of religious morality and submission to the authority of God. Finally, Sanctity/Degradation is the foundation that includes sensibilities about cleanliness, sanctioned and illicit sexual behavior (e.g. incest), and foods that one may eat (kosher for Jews, halal for Muslims, and not eating insects for secular Westerners). 

Haidt argues that although people have some sort of innate proclivity towards all of these moral foundations, different people emphasize them to different degrees. In particular, liberals (in the American sense, i.e. progressives) tend to emphasize care/harm and liberty/oppression (in the sense of certain kinds of oppression, like that of minorities and marginalized groups) over the other moral foundations, having little respect for authority hierarchies, loyalty, or most forms of sanctity. In contrast, conservatives tend to have a more balanced palate, appreciating all six moral foundations pretty much equally. Haidt demonstrates these proclivities via questionnaires about moral intuitions among people across the political spectrum, obtaining graphs that look like this:



Haidt says that part of the reason why liberals have such difficulty swaying conservatives is that liberals don’t know how to engage the “taste buds” of loyalty, authority, and sanctity, which are core conservative values. 

Claim 3: Humans are naturally groupish and hive-minded, and this explains a lot of our values, especially the ones that conservatives tend to value more. Haidt’s metaphor for this section is “We Are 90 Percent Chimp and 10 Percent Bee.” There are a lot of obvious examples of this phenomenon, like religions, sports fans, political tribes, and so on. Haidt argues that this is not just an emergent aspect of collective human behavior; rather he thinks - drawing on the work of Émile Durkheim - that we have a sort of psychological “switch” that turns the individual sense of self into an expanded, collective self. In particular, synchronized physical movement, such as marching in the army or the ecstatic dancing of Aztec tribes, can expand the mind from the individualistic “chimp” state to the collectivist “bee” state. 

Collectivist mentality can entail a reification of “society” as an entity that can be harmed. If you burn a flag in private, even if no one is being visibly harmed, there is a non-quantifiable sense in which you are harming your country’s social fabric (no pun intended) by removing yourself from the collective and damaging one of its sacred symbols. Loyalty, authority, and purity all start to make sense when seen in this light. Conservatives tend to be strong believers in the “sacred platoons” of Edmund Burke; the religions, clubs, and teams that bind people together. These platoons usually involve sacred symbols, hierarchies, strict demarcation of ingroup and outgroup, etc. They will also sometimes involve costly signaling rituals (like fasting on Ramadan) which demonstrate a commitment to the collective. In return, the collective provides increased “social capital” in the sense of trust and altruism between members of the group. Diminished social capital within societies means less trust and higher transaction costs. (For example, Orthodox Jewish diamond dealers in New York are able to outperform their competition because of their internal high-trust social network.) Being part of a collective also has psychological benefits, and Haidt here notes a positive correlation between individualistic societies and suicide prevalence.  

This is where group selection comes in. Unlike the outspokenly atheist proponents of evolutionary psychology such as Richard Dawkins, Haidt believes that religious tendencies are a feature produced by natural selection, not a “bug” that arose as a byproduct of otherwise beneficial cognitive tendencies (such as seeking out conscious intent in nature, which is a beneficial skill in social contexts). According to Haidt, because religious societies tend to be high in social capital, groups with religious beliefs and practices tended to survive better than non-religious groups. As such, proclivities towards religious beliefs may have actually been selected for via natural selection. 

Overall, Haidt says that conservatives (including people from many traditional and religious societies) understand the value of social capital and the invisible fabric of society in a way that liberals (and libertarians) don’t. Liberals would thus be well-advised to heed the moral flavors that they tend to neglect, as they ignore the greater part of the moral palate at their own peril. 


II.


One point that Haidt mentions briefly, although not nearly as strongly as he should, is that the book is about descriptive morality, not prescriptive morality. In other words, Haidt's work - particularly regarding moral foundations - is designed to describe how people in the real world behave, not how they should behave. There’s one line in the book where he says that his preferred moral theory is utilitarianism, but he doesn’t dwell much on this point. Instead, the thrust of the book tends to be toward convincing liberals that the intuitions about morality held by conservatives are worth paying attention to. Haidt himself seems to have started out as a liberal who grew to appreciate certain aspects of conservatism and traditional societies, and he thus finds himself in a position of urging his (presumably liberal, but also open-minded) readership to follow suit.

Sarah Constantin has a great post where she groups together Haidt, Jordan Peterson, and Geoffrey Miller (whose work I am not familiar with) as “psycho-conservatives”. In short, psycho-conservatives are people who believe that some form of conservatism/traditionalism is optimally suited to human psychology, and this entails a broad swath of implications for things like gender roles, criminal justice policy, race relations, and so on. And there does seem to be some sort of merit to this argument. But Sarah makes the following point:

“If you used evolved tacit knowledge, the verdict of history, and only the strongest empirical evidence, and were skeptical of everything else, you’d correctly conclude that in general, things shaped like airplanes don’t fly.  The reason airplanes do fly is that if you shape their wings just right, you hit a tiny part of the parameter space where lift can outbalance the force of gravity.  “Things roughly like airplanes” don’t fly, as a rule; it’s airplanes in particular that fly.”

In other words, while it seems like most ancient, traditional and conservative civilizations had to hit all of the taste buds on the moral palate to construct a society optimally in tune with human psychological needs, that might only be because we haven’t yet figured out the precise mix of ingredients necessary to create an optimal society without the deleterious effects of traditional values (diminished status of women, inter-group fighting, false beliefs, excessive obsession with symbolism, etc.)

Psycho-conservatives thus tend to run afoul of the naturalistic fallacy -- that evolutionary “is” entails moral “ought”. Robert Wright in The Moral Animal was better about this and was very careful to state that though evolutionary psychology can inform the project of morality to some degree, the moral thing to do is often the opposite of what our evolution-based intuitions would suggest. For example, people often have an intuitive preference for harsh punishment, but from a utilitarian standpoint, it would seem that cutting off the hands of thieves and stoning adulterers is not a good way to run a society. 

And this, I think, is where Haidt needs to be taken with a grain of salt - to continue the taste bud metaphor. We might all have some level of predilection toward all six moral foundations, but the devil is in the details - and more specifically, the mixing quantities. The tongue’s taste buds have very different sensitivities to different flavors. For example, the taste threshold for strychnine, a bitter toxin, is 0.0001 millimolar, while the threshold for sucrose is 20 millimolar, a difference of over five orders of magnitude (source). The same might be true for the best mix of the moral foundations. There’s no compelling a priori reason to believe that liberal emphasis of care/harm and dismissal of most aspects of loyalty and hierarchy is the wrong way to do things from a prescriptive standpoint. In the simplified world where the six moral foundations are the only axes along which moral systems vary, the utilitarian problem reduces to finding the correct set of weights along each of those axes which maximizes human flourishing...and we don’t know what those weights are. And beyond that, they may vary from person to person, society to society, and environment to environment. That’s why politics (in the sense of creating good policy) is hard.
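The "weights" framing can be made concrete with a deliberately toy model. The foundation names follow Haidt, but every number below is hypothetical, invented purely for illustration:

```python
# Toy model of the "moral weights" framing: if each moral system is just a
# weight vector over the six foundations, the open question is which weights
# actually maximize flourishing. All numbers here are made up.

FOUNDATIONS = ["care", "liberty", "fairness", "loyalty", "authority", "sanctity"]

def score(weights, foundation_scores):
    """Weighted flourishing score for one society under one moral system (toy)."""
    return sum(w * s for w, s in zip(weights, foundation_scores))

# Two hypothetical weightings applied to the same society's foundation scores:
liberal_weights      = [0.35, 0.30, 0.25, 0.04, 0.03, 0.03]
conservative_weights = [0.17, 0.17, 0.17, 0.17, 0.16, 0.16]
some_society         = [0.7, 0.6, 0.5, 0.8, 0.9, 0.8]

print(score(liberal_weights, some_society))
print(score(conservative_weights, some_society))
# Same society, different verdicts - the disagreement lives in the weights.
```

The sketch isn't an argument for either weighting; it just shows that once you accept the foundations as axes, the entire prescriptive question collapses into the (unknown) weight vector.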

At the same time, though, politics is still worth doing. Why? Because “multiple optimal solutions for different individuals, societies, and environments” is not the same as “all solutions are optimal.” Acknowledging that diversity may exist in the way to construct a healthy society does not mean that moral relativism is the answer. (In his conclusion, Haidt also rejected relativism, but again didn’t really expand on how to go from his moral foundations theory to a non-relativistic moral system.)

I have the benefit/misfortune of a journey in the opposite direction from Haidt’s, going from a very traditional/conservative society (Orthodox Judaism) to a very liberal one (secular academia in Israel). And while Haidt is enamored with the benefits of traditional society, I have close acquaintance with the dark underbelly...and it’s not pretty. Many forms of Judaism, including the more liberal forms of Orthodox Judaism (like Modern Orthodoxy, my background) do a reasonable job of promoting human happiness and flourishing despite their very different weighting of the moral taste buds than secular society. And like Haidt said, the strict rules do go a long way in promoting a strong sense of community and social trust. I still maintain strong social ties to the Orthodox Jewish world, largely because the community there is stronger and more fulfilling than anything I’ve managed to find outside of it thus far. That being said, there is a point at which religious communities go too far, and I’ve seen it. The obvious examples are the extremely insular Hassidic communities in the US and the Haredi (Ultra-Orthodox) communities in Israel that prevent their adherents from studying secular subjects, including basic science, math, and English. I am sure that the sense of community is stronger among those groups than among the Modern Orthodox; insularity results in that almost by definition. At a certain point, we have to put our foot down and say...no. That is not an optimal solution. You can’t trade “strong sense of community” for literally everything else involved in the human experience, like having some basic knowledge about the world around you. The harrowing accounts of people who have left that world, like Shulem Deen’s All Who Go Do Not Return, leave you with a sense that while these communities have some positive aspects, they are dystopias, pure and simple. 

So where do I draw the line between acceptable diversity and unacceptable relativism? That’s also a hard question. I suppose I’m not really interested in lines, so much as proximity to optimal solutions. And I think that this is something that people within a society intuitively feel, and it’s something that’s quantifiable. I know that for me, living as a secular person is better than living as an Orthodox one. Yes, I miss lots of things about Orthodoxy, and I often dip back into that social world when I feel the need. But I feel more comfortable with myself living outside of the Orthodox bubble than inside it. Most importantly, as a secular person, I have true intellectual freedom; I can think what I want without it being judged according to the tenets of Orthodox theology. Of course, this is all still my personal experience. I feel like my personal experience should be generalizable, but interindividual variability is a very real thing.

Really what we need is a reasonably good battery of questions which captures human flourishing on a variety of axes (maybe coupled with some neurological measures, though that may end up in Goodhart territory), and then see how different societies measure up on average. This is, of course, the great utilitarian project, and at the moment it seems like it is being won by the Scandinavian countries, which are very heavily atheist but also keep many of the cultural trappings of religion (link). This might be a good direction to go in. On the other hand, Scandinavian countries also have a lot of other things going for them in the political and economic realm, so it’s a bit hard to disentangle their politics from their religious predilections (and the two might be causally related anyway). 

In sum...maybe all six moral foundations are kind of important, and we should think about psychology when designing policy. But as long as we have reasonably acceptable metrics for how happy people tend to be in different kinds of societies, we don’t need to resort to a priori reasoning from psychological first principles. Instead, we can directly measure what kinds of solutions work and what don’t, and then do more of the stuff that works and less of the stuff that doesn’t work. And sure, we’re talking about things that can be tough to quantify and prone to various sorts of measurement error, but that’s life. Reasoning based on evidence about the target that you’re actually interested in will always be better than armchair philosophizing about how people are wired. People are wired in complicated and diverse ways, but some moral systems seem to produce strictly better outcomes than others, and that’s what we should be looking at when it comes to prescriptive morality.

Saturday, July 14, 2018

How I Judge Scientific Studies


If it matters to you whether a new drug will cure you or leave you with debilitating side effects, whether it's sensible to pass legislation to prevent climate change, or whether drinking coffee will raise your blood pressure - it pays to know what the facts are. How do we find out what the facts* are? Usually scientific studies are the best place to look. But we've all heard about the replication crisis, we know that researchers can sometimes make mistakes, and there are some fields (like nutrition) that seem to produce contradictory results every other month. So how do we separate scientific fact from fiction?

I don't really have a great answer to this question. At a first pass, of course, the solution is to use Bayesian reasoning. Have some prior belief about the truth of the hypothesis and then update that belief according to the probability of the evidence given the hypothesis. In practice, though, this is much easier said than done because we often don't have a good estimate of the prior likelihood of the hypothesis and it's also difficult to judge the probability of the evidence given the hypothesis. As I see it, the latter problem is twofold.


  1. Studies don't directly tell you the probability of the evidence given the hypothesis. Instead, they give you p-values, which tell you the probability of the evidence given the null hypothesis. But there are many possible null hypotheses, such as the evidence being accounted for by a different variable that the authors didn't think of. This is why scientists do controls to rule out alternative hypotheses, but it's hard to think of every possible control.
     
  2. The evidence in the study isn't necessarily "true" evidence. You have to trust that the scientists collected the data faithfully, did the statistics properly, and that the sample from which they collected data is a representative sample**. 
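To make the first point concrete, here's a toy Bayesian calculation (all numbers are invented for illustration) of how a "significant" result combines with a prior. Even a clean p < 0.05 finding can leave a surprising hypothesis at roughly coin-flip odds:

```python
# Toy model: a binary hypothesis H, tested by a study that reports a
# "significant" result. What we care about is P(H | significant), which
# depends on the prior - not just on the p-value threshold.

def posterior(prior, power, alpha):
    """Bayes' rule for a significant result.

    prior: P(H) before seeing the study
    power: P(significant | H is true)
    alpha: P(significant | H is false), the false-positive rate
    """
    p_significant = power * prior + alpha * (1 - prior)
    return power * prior / p_significant

# A surprising hypothesis (prior 5%) tested decently (80% power, alpha 0.05):
print(posterior(prior=0.05, power=0.80, alpha=0.05))  # ~0.46 - a coin flip
# The same significant result for an a-priori plausible hypothesis (prior 50%):
print(posterior(prior=0.50, power=0.80, alpha=0.05))  # ~0.94
```

The point of the sketch is just that identical evidence moves different priors to very different posteriors, which is why the prior-minding heuristic below matters so much.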

In theory, the only strategy here is to be super-duper critical of everything, try to come up with every possible alternative hypothesis, and recalculate all the statistics yourself to make sure that they weren't cheating. And then replicate the experiment yourself. But, as a wise woman once said, "ain't nobody got time for that." It goes without saying that if the truth of a particular claim matters a lot to you, you should invest more effort into determining its veracity. But otherwise, in most low-stakes situations (e.g. arguing with people on Facebook) you're not going to want to do that kind of legwork. Instead, it's best to have a set of easy-to-apply heuristics that mostly work most of the time, so that you can "at a glance" decide whether something is believable or not. So I've come up with a list of heuristics that I use (sorta kinda, sometimes) to quickly evaluate whether a study is believable.
  1. Mind your priors. The best way to know whether a study is true or not is to have a pretty good idea of whether the claim is true or not before reading the study. This is kind of hard to do if it's a study outside of your field, but if it does happen to be in your field, you should have a good sense of what kind of similar work has been done before reading the study. This can give you a pretty good idea going in of how believable the claim is. If you've been part of a field for a while, you develop a sixth sense (seventh if you're a Buddhist) or a kind of intuition for what things sound plausible. At the same time...
  2. Beware of your own confirmation bias. Don't believe something just because you want it to be true or because it confirms your extant beliefs or (especially) political views. And don't reject something because it argues against your extant beliefs or political views. Don't engage in isolated demands for rigor. If you know that you have a pre-existing view supporting the hypothesis, push your brain as hard as you can to criticize the evidence. If you know that you have a pre-existing view rejecting the hypothesis, push your brain as hard as you can to defend the evidence.
  3. Beware of your own experience bias: Your personal experience is a highly biased sample. Don't disbelieve a high-powered study (with a large and appropriately drawn sample) simply because it contradicts your experience. On the other hand, your personal experience can be a good metric for how things work in your immediate context. If a drug works for 90% of people and you try it and it doesn't work for you, it doesn't work for you. At the same time, be careful, because people don't necessarily quantify their personal experience correctly.
  4.  Mind your sources. Believability largely depends on trust, so if you are familiar with the authors and think they usually do good work, be trusting; if you don't know the authors, rely on other signals. My experience is that university affiliation doesn't necessarily matter that much, so don't be overly impressed that a study came out of Harvard. In terms of journals: peer-reviewed academic journals are best, preferably high-tier and highly cited, though I think that probably gives you diminishing returns. High-tier journals often focus on exciting claims, not necessarily the best-verified ones. Unpublished or unreviewed non-partisan academic stuff is also at least worthwhile to look at and should be given some benefit of the doubt. Then come partisan academish sources, like think tanks which promote a particular agenda but are still professional and know what they're doing. You should be more skeptical of these kinds of studies, but you shouldn't reject them outright. Be skeptical if they find evidence in favor of their preferred hypothesis, more trusting if their evidence seemingly goes against their preferred hypothesis.

    Don't place especially high trust in non-professionals, including journalists. Training exists for a reason. Some publications like The Economist seem to have some people there who know what they're doing in terms of data science and visualization. Some individual journalists can be relied upon to accurately report findings of trained scientists. The best way to do source criticism is to weight sources by how close to the truth they've been in the past, as Nate Silver and the FiveThirtyEight blog do with political polling. Don't trust politicians citing studies. Ever. Doesn't matter if they're from your tribe or not. Same thing goes for political memes, chain emails forwarded from grandma, etc. You are not going to get good information from people who care about partisan goals more than a dispassionate understanding of truth. And sure, everyone is biased a bit, but there are levels, man.
  5. Replication. The obvious one. If a bunch of people have done it a bunch of times and gotten the same results, believe it. If a bunch of people have done it a bunch of times and gotten different results, assume regression to the mean. If a bunch of people have done it a bunch of times and got the opposite results of the study you're looking at, the study you're looking at is probably wrong.
  6. Eyeball the figures. If the central claim of a study is easily borne out or contradicted by a glance at the graphs, don't worry too much about p values or whatnot. This won't always work, but we're aiming for a "probably approximately correct" framework here, and a quick glance at figures will usually tell you if a claim is qualitatively believable.
    [Figure: linear regression example]
  7. Different fields have different expectations of reliability. Some fields produce a lot more sketchy research than others. Basic low-level science like cell biology tends to have fairly rigorous methodology and a lot of results are in the form of "we looked in a microscope and saw this new thing we didn't see before." So there's usually no real reason to disbelieve that sort of thing. Most of the experimental results in my field (dendritic biophysics) are more or less of that nature, and I tend to believe experimentalists. Fields like psychology, nutrition, or drug research are more high-variance. People can be very different in their psychological makeup or microbiome, so even if someone did the most methodologically rigorous study in the field it might not generalize to the overall population. Simple systems tend to be amenable to scientific induction (an electron is an electron is an electron), but it's harder to generalize from one complex system (like a whole person or a society) to other complex systems. That doesn't mean all findings in psychology or nutrition are wrong; I would just put less confidence in them. That being said, some fields involving complex systems (like election polling) are basically head-counting problems. I tend to believe this kind of data within some margin of error. This is because such data tends to be more-or-less reliable on average, and because...
  8. Bad data is better than no data. If you have a question and there's only one study that tries to answer the question, unless you have a really good reason to have an alternative prior, use the study as an anchor, even if it's sketchy. That is, unless it has glaring flaws that render it completely useless. The value of evidence is not a binary variable of "good" or "not good", it's a number on a continuous spectrum between 0 ("not correlated with reality") and 1 ("correlated with reality"). So use the data that you have until something better comes along.
  9.  The test tests what the test tests. If the central claim of a study does not seem to be prima facie falsifiable by the experiment that they did, then run in the other direction. I tend to have faith in academia, but sometimes people do design studies that don't demonstrate anything. Moreover, a lot of studies are taken further than they are meant to be. Psychology studies can inform policy, but don't expect to be able to always draw a line from a psychology experiment done in a laboratory to policy. Policy interventions based on psychology need to be tested and evaluated as policy interventions in the real world (Looking at you, IAT). Drug studies on rats need to be carried out on humans. Etc. Popular media often sensationalizes scientific results, which are usually limited in scope.
  10. Not all mistakes are invalidating. Scientists are obsessive about getting things right, because we have to defend our claims to our peers. Nevertheless, science is hard and we sometimes make mistakes. That's life. A single mistake in a study doesn't mean that the study is entirely wrong. The scope of a mistake is limited to the scope of the mistake. Some mistakes will invalidate everything, some won't. Often a qualitative result will hold even if someone messed up the statistics a bit. It's easy to point out mistakes; the challenge is to extract a signal of truth from the noise of human fallibility.
  11. When in doubt, remember Bayes. Acquire evidence, update beliefs according to reliability of evidence. The rest is commentary.
* In science, at least in my field, we hardly ever use the word "fact"; we prefer to talk about evidence. Still, there are findings that are so well-established that no one questions them. Discussions at scientific conferences usually revolve around issues that haven't been settled yet, so scientific conferences are ironically fact-free zones.

** "Sample size" is often not the issue (that's the classic low-effort criticism from non-scientists about scientific studies, which is sometimes valid, but usually the standard statistical measures take that into account; if your sample size is too small you won't get reasonable p-values). Rather, even if your sample size is large enough to produce a statistically significant effect for your particular test sample, you have to be wary of generalizing from your test population to the general population (sampling bias). If the two populations are "basically the same" - that is, your experimental population was uniformly sampled from the broader population - then you can use the standard sampling error metric to estimate how far off your results are for the general population. But if your test population is fundamentally different from the general population (e.g. you ran all your experiments on white undergraduate psychology students) there's reason to be skeptical that the results will generalize for the broader population.
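The distinction in this footnote can be sketched numerically (all numbers made up): the standard error of a sample proportion shrinks as the sample grows, but sampling bias doesn't budge.

```python
import math

# Toy sketch: suppose the true rate of some trait is 40% in the general
# population, but 55% in the subpopulation we happened to sample from
# (e.g. undergraduates). Bigger samples shrink the error bars, not the bias.

def standard_error(p, n):
    """Standard error of a sample proportion, assuming uniform sampling."""
    return math.sqrt(p * (1 - p) / n)

true_rate, biased_rate = 0.40, 0.55
for n in (100, 10_000, 1_000_000):
    se = standard_error(biased_rate, n)
    # The error bar collapses as n grows...
    print(f"n={n:>9}: estimate ~ {biased_rate:.2f} +/- {2 * se:.3f}")
# ...but every estimate stays centered on 0.55, not the true 0.40:
# more data from the wrong population just makes you confidently wrong.
```

This is why "large n" answers the statistical-power criticism but not the sampling-bias one.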

Saturday, July 7, 2018

Judaism, Buddhism, and the Sense of Self

I recently finished two books: What the Buddha Taught, by Walpola Rahula, and The Great Shift, by James Kugel. What the Buddha Taught is an introductory text to Buddhism; The Great Shift is a book about changing senses of self and God in biblical and Second Temple Judaism. I picked up the two books on different occasions and became interested in their respective topics for different reasons. As it happens, What the Buddha Taught and The Great Shift have some converging themes, so I thought I’d write a joint review of the two books and use that as a springboard to talk about the “idea of self” in Buddhism and biblical Judaism.

I.

First, a brief sketch of the two books. In The Great Shift, Kugel traces the development of how the Bible writes about individuals and their encounters with God over the course of the biblical timeline. Kugel notes that initially, biblical characters are mostly portrayed as acting, rather than thinking. We are not explicitly told of the motivations of Abraham, the doubts of Joseph, etc. Instead, biblical stories are told from a third-person omniscient perspective that focuses on the actions of the biblical characters and their consequences. Kugel suggests that this is not simply a literary choice; rather, the lack of internal dialog of biblical characters might reflect the way that people thought of themselves in biblical times. In contrast to the familiar modern idea that people generate their own thoughts, people in biblical times might have believed that their emotions and thoughts originated from outside themselves, possibly in the form of spirits that could enter a person. Kugel draws from anthropological research claiming that in many cultures, especially in the past, the idea of the individual, bounded self was very different from the Western notion we have today. The accounts that Kugel cites to this effect come off as bizarre to the Western reader, such as that of the Dinka people, who have no concept of “mind” or “memory”.

Kugel claims that biblical characters, in particular those who had prophetic or revelatory experiences, held that the self was “semi-permeable”. In other words, one’s experiences could be taken over or influenced by external forces. Moreover, the distinction between thought, imagination, and reality was blurred; to ancient man, images produced by the mind might have been considered part of the “undifferentiated outside”. Thus, visions of conversations with God, wrestling matches with angels, and burning bushes were very much a reality to ancient biblical man. And in a world with little scientific understanding of reality, both the existence of deities and the idea that they would communicate with man were eminently plausible. So it is conceivable that the personalities of the Bible - assuming they existed - actually believed they were communicating with God and angels.

As time went on, however, the individual sense of self began to congeal in the Bible, leading us to characters like Jeremiah and Job who do introspectively reflect on their emotional state. Together with the crystallization of the self emerged a distancing from God, in the sense that people no longer had direct “sensory” experiences of encountering the Divine. As history progressed, this led to a focus on law and prayer as opposed to sacrificial service at the Temple, which was predicated on a more experiential conception of God. No longer part of lived reality, God became distant from man, to the extent that religious people talked about a “re-establishing of God’s sovereignty” on Earth as opposed to the old assumption that God was ever-immanent.

II. 

What the Buddha Taught also touches on the sense of self, albeit in a different way. Rahula describes Buddhism as a system of thought and practice dedicated to eradicating dukkha, usually translated as “suffering” but actually having a broader meaning referring to the impermanence of material things (cf. the term הבל, usually translated as “futility” in Kohelet, which literally means something like “smoke” and ostensibly refers to a similar concept of impermanence or transience). In the Buddhist way of thinking, man is constantly confronted by suffering due to the unreliable nature of the material world, whether it be in the form of sickness, old age, death, or any of the myriad ways in which life can go wrong. Buddhism’s solution to dukkha is the idea of anatta, or absence of self. The only way a person can extricate himself from the suffering that arises from the impermanence of the world is to negate the self itself. In other words, Buddhism prescribes viewing the “self” as indistinct from any other sensory information (AKA “one with everything”, which is apparently what the Dalai Lama orders when he goes to a pizza shop).

Anatta in Buddhism is a practical challenge in addition to being an intellectual claim. Even if it is true that there is no such thing as the self, that is a very difficult thing to feel experientially. The solution to this practical problem that Buddhism recommends is meditation. When one meditates, he can observe thoughts emerging and dissipating with the attitude of an objective, disinterested observer. As a person advances in his meditation practice, he gains the ability to dissociate his awareness from the thoughts, feelings, and perceptions that he is aware of. When one can fully dissociate awareness from the objects of awareness, and realizes in both an intellectual and an experiential sense that the “self” is simply a conglomerate of mental objects (i.e. memories, sensory experiences, emotions, etc.), he will have achieved the state of nirvana. In this state, a person will be free from all forms of desire and lust for the material, resulting in the ultimate bliss. In this way, nirvana is a recipe for both happiness and morality. Insofar as immoral behavior stems from material desires and biological urges, being able to free oneself from the “self” allows one to act completely selflessly, devoting his thoughts to the love of all creatures and his behavior to the betterment of the lives of others.

[Aside: As a neuroscientist (uh oh, here we go) I have a strong sympathy for the idea of anatta. I think neuroscientists would generally agree that thoughts, emotions etc. are coded in the brain and are brought to awareness by deterministic(ish) neural dynamics. We haven’t solved the hard problem of consciousness, of course, but the information relayed to consciousness - the spikes that eventually become qualia - are ostensibly present for everything that you think and feel. There isn’t a categorical distinction between sensory information and self-information like emotions and memories; in the brain it’s all just spikes. So in principle the observation that self-thoughts aren’t that different from sensory perceptions and that the “self” is really just a conglomerate of a variety of sources of information is well-taken in the modern scientific view.]

III.

So where am I going with all this? First of all, I think there’s an argument to be made from both What the Buddha Taught and The Great Shift that Western people take the idea of self too seriously. I’m not sure there is a right or wrong answer to what the self is, but at least from a scientific and philosophical standpoint, the Buddhists might actually have a better framework for thinking about the question than the Western world does. 

Beyond that, though, there seem to be very real psychological ramifications for how we contextualize our ideas of self and individual identity. In biblical times, if we accept Kugel’s view, a fluid sense of self could result in “real” encounters with the divine, in a manner that even very religious people have a hard time with today (at least in the absence of psychedelic drugs, whose effects are themselves an argument for the malleability of the self). And if the Buddhists are right, annihilating the self will lead to eternal bliss (and there seems to be at least some evidence that meditation has positive effects on anxiety and depression).

Amusingly, both Buddhism and Judaism would seem to be in favor of the abnegation of a strong sense of self, albeit for different reasons. In Judaism, a weaker sense of self might make possible the lived experience of divine encounters. And in Buddhism, of course, the absence of the sense of self is the terminal goal. I’m not sure either of these visions is one we should adopt, but at the very least it would be worthwhile for the modern West to question whether we should really take the self for granted.