Why our brains tend to believe rumors

An excerpt from a book by an anthropology professor on why people spread unverified information and how evolution has influenced it.


Modern humans live amid a vast field of useless information: superstitions handed down from generation to generation, belief in magic, and other claims that withstand no test of accuracy or logic. In his book Anatomy of Human Communities, Pascal Boyer calls this phenomenon "garbage culture" and explains why people accept dubious information as reliable.


Rumors and hazard recognition

Rumors are mainly associated with negative events and their alarming explanations. They tell us that people intend to harm us, or have done so already. They describe situations that will end in disaster unless we act immediately: the government is behind terrorist attacks on its own population, doctors are conspiring to conceal the spread of mental disorders in children, foreign ethnic groups are preparing an invasion, and so on. In short, rumors report potential danger and the many situations in which we might be at risk.

Does this mean that rumors succeed because they are negative? Psychologists have long noted that many aspects of cognition show a so-called negativity bias. For example, when we read a list, words with negative meanings attract more attention than neutral or positive ones.

Negative facts are often processed more carefully than positive information, and negative impressions of another person are easier to form and harder to discard than positive ones.

But describing a tendency is not the same as explaining it. As many psychologists have noted, a possible reason for this attentiveness to negative stimuli is that our minds are attuned to information about potential dangers. This is quite obvious in cases of attentional bias: our perceptual systems detect a spider among flowers faster and more reliably than a flower among spiders. The danger signal takes priority, which suggests that specialized systems are tuned to recognize threats.

How does a mind shaped by evolution anticipate potential threats? Partly through specialized detection systems. It is an evolutionary imperative for all complex organisms to monitor potential hazards in the environment and take the necessary precautions. It is no surprise that our risk-warning systems seem tuned to dangers that have persistently threatened humans: predators, intrusion by strangers, contamination, contagion, social transgression, and harm to offspring. People are attentive to this kind of information and, conversely, tend to ignore other types of threats, even when those pose a greater danger. Children likewise notice specific threats. They are often indifferent to real sources of danger, such as weapons, electricity, swimming pools, cars, and cigarettes, while their fantasies and dreams are full of wolves and nonexistent predatory monsters, confirming that our hazard-detection systems target situations that played an important role in our evolution. … Incidentally, pathologies of hazard detection (phobias, obsessive-compulsive disorder, and post-traumatic stress) likewise fixate on specific targets, such as dangerous animals, infection and contamination, predators and aggressive enemies, that is, threats to survival in the environment in which we evolved.

In humans and animals, hazard recognition systems are characterized by a significant asymmetry between hazard and safety signals.

For people whose behavior is heavily influenced by information from their fellows, this asymmetry between danger and safety has one important consequence: warning advice is rarely tested. One of the great benefits of cultural inheritance is that it spares us from systematically surveying the environment for sources of danger. A simple example: generation after generation of Amazonian Indians passed on the knowledge that manioc tubers, a variety of cassava, are poisonous and become edible only when properly soaked and cooked. The Indians felt no urge to experiment with the cyanide contained in the plant's roots. Taking information on trust is, of course, a much broader phenomenon in cultural transmission: most technical knowledge passes from generation to generation without being deliberately tested. By following time-tested recipes, people ride for free, as it were, on the knowledge accumulated by previous generations. Warnings have a special status because, if we take them seriously, we have no reason to test them. If you believe that raw manioc is poisonous, the one thing you will never do is test the claim that it is poisonous.

This suggests that danger-related information is often treated as reliable, at least provisionally, as a precaution that does no harm.

The psychologist Daniel Fessler compared the extent to which people trust statements framed negatively, with a mention of danger ("10% of patients who have a heart attack die within ten years"), or positively ("90% of patients who have a heart attack live more than ten years"). Although the two statements are logically equivalent, subjects found the negative ones more convincing.

All these factors encourage people to pass on information about threats, which makes clear why they spread so many rumors about potential danger. Even not-so-serious urban legends follow this pattern: many of them relate what happens to those who neglect a potential threat. Scary stories about the woman who never washed her hair and ended up with spiders nesting in it, about the babysitter who dried a wet puppy in the microwave, and other urban-legend characters warn us: this is what happens if we fail to recognize the danger lurking in everyday situations and objects.

We can therefore expect people to be especially eager to acquire information of this kind. Naturally, not all of it generates rumors that are taken seriously; otherwise cultural information would consist solely of warning advice. Several factors limit the spread of rumors.

First, all other things being equal, plausible warnings take precedence over descriptions of unlikely situations. This seems obvious, but in most cases it imposes severe constraints on communication. It is much easier to convince your neighbors that the shopkeeper sells rotten meat than that he occasionally turns into a lizard. Note that listeners judge a message's plausibility by their own criteria. Some people can easily be convinced of the most unlikely things (for example, the existence of mysterious horsemen who sow disease and death) if they already held corresponding ideas (for example, about the end of the world).

Second, in the realm of unverified (and generally incorrect) warning information, the cost of the precautions must be relatively modest. To take an extreme example, it is quite easy to convince people not to walk around a cow seven times at dawn, because following that advice costs nothing. Some costs are usually involved, but they must not be too high. This explains why many common taboos and superstitions require only slight deviations from normal behavior. Tibetans pass chortens (Buddhist stupas) on the right side; in Gabon, members of the Fang people pour a few drops from a freshly opened bottle onto the ground; in both cases, this is done so as not to offend the dead. Highly costly warning advice, by contrast, invites scrutiny and therefore cannot spread as widely as these cheap prescriptions.

Third, the potential cost of ignoring the warning, that is, what may happen if we fail to take precautions, must be serious enough to trigger the listener's hazard-detection system.

If you were told that passing a stupa on the left would make you sneeze, and that this is the only consequence, you might well ignore the rule. Insulting an ancestor or a deity seems a far more serious offense, especially if it is not known exactly how they might respond.

So it appears that hazard detection is one domain in which we are willing to switch off our mechanisms of epistemic vigilance and act on warning information, especially when doing so costs us little and the danger averted is both serious and uncertain.

Why danger is moralized

When discussing "garbage" culture, it is easy to dwell on the question "Why do people (other people) believe such things?" But an equally important question is why people want to transmit such information. Why do they tell each other about penis snatchers and the role of the secret services in spreading the HIV epidemic? The question of belief is fascinating, but belief does not always play the decisive role in the inheritance of cultural traits. Yes, many people believe the rumors they spread, but belief alone is not enough. We must also account for the desire to pass the information on; without it, many people would produce worthless, empty information, but it would generate neither rumors nor a "garbage" culture.

The transmission of low-value information is often accompanied by strong emotions. People find reports about viruses, vaccinations, and government conspiracies extremely important. Those who spread such messages strive not only to inform but to persuade.

They monitor their audience's reactions, take skepticism as an insult, and attribute doubt to malicious intent.

Take, for example, the campaigns against the comprehensive vaccination of children against measles, mumps, and rubella launched in the 1990s in the UK and the USA. People spreading the claim that vaccines are dangerous because they can cause autism in healthy children were not merely reporting a perceived hazard. They also denigrated the doctors and biologists whose research contradicted anti-vaccination theory. Doctors who administered the injections were portrayed as monsters who knew perfectly well the danger they were exposing children to but preferred to take money from pharmaceutical companies. Audience reactions to such messages were also framed as a moral choice: if you agree with most doctors that minor side effects are an acceptable price for the collective protection mass vaccination provides, you are on the side of the criminals.

Why are our beliefs so heavily moralized? The obvious answer is that the moral value of spreading a message, and of accepting it, depends directly on the information transmitted. If you believe that the government has tried to exterminate certain ethnic groups or helped plan terrorist attacks against the population, or that doctors deliberately poison children with vaccines, would you not try to make this public and convince as many people as possible that you are right?

But this may be one of those self-evident explanations that raises more questions than it answers. To begin with, the connection between conviction and the need to persuade others may not be as direct as is commonly thought. The social psychologist Leon Festinger, renowned for his work on millenarian cults, observed that when the end of the world failed to arrive on schedule, the evident falsity of the original belief did not weaken but strengthened members' commitment to the cult. Why? Festinger explained this as an attempt to avoid cognitive dissonance, the tension that arises between two incompatible positions: that the prophet was right and that his prophecy was not fulfilled. But this does not explain one of the chief characteristics of millenarian cults: failed prophecies lead not only to attempts to rationalize the failure (which would suffice to minimize dissonance) but also to a drive to enlarge the group. This effect of dissonance manifests itself mainly in interactions with outsiders, and it demands explanation.

It may be worth stepping back and looking at all this from a functional point of view, on the assumption that mental systems and motivations are aimed at solving adaptive problems. From that standpoint it is unclear why our minds should seek to avoid cognitive dissonance at all, since a discrepancy between observed reality and one's beliefs is important information. And we should then ask why the response to apparent failure is to win over as many people as possible.

The phenomenon becomes clearer when you look at it from the perspective of the coalition processes and group support described in Chapter 1.

People need the support of society, and they need to involve others in collective actions, without which individual survival is impossible.

The most important part of this evolved psychological endowment is our capacity and appetite for effective coalition management. So when people convey information that could convince others to join some action, we should try to understand it in terms of coalition recruitment. That is, we should expect an important part of the motivation to be precisely the desire to persuade others to join some form of collective action.

This is why moralizing one's opinions feels intuitively acceptable to many people. Indeed, evolutionary psychologists such as Robert Kurzban and Peter DeScioli, as well as John Tooby and Leda Cosmides, have pointed out that in many situations moral intuitions and feelings are best understood in terms of support and recruitment. This is difficult to demonstrate directly, but the core idea is simple and maps clearly onto the dynamics of rumor. As Kurzban and DeScioli note, every moral violation involves not only a perpetrator and a victim but also third parties: people who approve or condemn the perpetrator's behavior, defend the victim, impose a fine or punishment, withdraw cooperation, and so on. Third parties have an interest in joining the side that is more likely to attract other supporters. For example, if someone takes too large a share of a communal meal, a neighbor's decision to ignore or punish the rule-breaker is shaped by expectations of how others will react to the misconduct. This means that the moral feeling attached to the relative wrongness of a given behavior arises automatically and is largely shared by others; each participant, drawing on his own emotions, can predict the reactions of the rest. Since people expect to find agreement, at least in broad terms, describing a situation in moral terms leads to consensus faster than any alternative interpretation of events.

People tend to condemn the side they perceive as the offender and side with the victim, in part because they assume that everyone else will make the same choice.

From this point of view, moralizing other people's behavior is an excellent tool for the social coordination necessary for collective action. Roughly speaking, the statement that someone's behavior is morally unacceptable leads to consensus more quickly than the statement that the person is behaving in such a way out of ignorance. The latter can spark discussion of the evidence and actions taken by the perpetrator, and is more likely to disrupt general agreement than reinforce it.

From this we may conclude that our everyday notions of so-called moral panics, unjustified outbursts of fear and the urge to eradicate "evil", may be false or at least incomplete. The point is not, or not only, that people become convinced that terrible things have been done and decide to call on everyone else to stop the evil. Another factor may be at work: many people intuitively (and, of course, unconsciously) adopt beliefs whose moralizing content can attract others. Millenarian cults, with their unfulfilled prophecies, are thus just a special case of a more general phenomenon in which the drive to recruit plays a major role in how people settle on their beliefs. In other words, we select our beliefs intuitively and in advance, and those that cannot attract others simply do not strike us as intuitive or attractive.

It does not follow from this speculative explanation that people who spread rumors are necessarily cynical manipulators.

In most cases, they are unaware of the mental processes that make them, and others, so receptive to moralizing descriptions of behavior and so ready to lend support. Our ancestors evolved as seekers of support and, therefore, as recruiters, so we can steer our actions toward effective cooperation with others without even knowing it. Nor should we assume that such appeals to morality invariably succeed: moralizing can facilitate recruitment, but it does not guarantee it.

Why the brain believes rumors. "Anatomy of Human Communities"

Pascal Boyer is an evolutionary psychologist and anthropologist who studies human societies. He argues that our behavior largely depends on how our ancestors evolved. Drawing on recent advances in psychology, biology, economics, and other sciences, his book Anatomy of Human Communities explains how religions arise, what the family is, and why people tend to believe pessimistic forecasts of the future.
