Introduction to Psychology (June 2021 Edition)

Elizabeth Arnott-Hill

Or'Shaundra Benson

College of DuPage Digital Press

Glen Ellyn, IL

Introduction to Psychology (June 2021 Edition) by Ken Gray, Elizabeth Arnott-Hill, and Or'Shaundra Benson is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.

Brief Table of Contents

Unit 1. Thinking Like a Psychologist

  • Module 1. How Psychologists Think
  • Module 2. How Psychologists Know What They Know
  • Module 3. How Psychologists Think About the Field of Psychology
  • Module 4. The Science of Psychology: Tension and Conflict in a Dynamic Discipline

Unit 2. Understanding and Using Principles of Memory, Learning, and Thinking

  • Module 5. Memory
  • Module 6. Learning and Conditioning
  • Module 7. Thinking, Reasoning, and Problem-Solving
  • Module 8. Tests and Intelligence
  • Module 9. Cognitive Psychology: The Revolution Goes Mainstream

Unit 3. Understanding Human Nature

  • Module 10. How Biology and Psychology are Linked
  • Module 11. Brain and Behavior
  • Module 12. How the Outside World Gets into the Brain: Sensation
  • Module 13. How the Brain Interprets Sensations: Perception
  • Module 14. Biopsychology: Bringing Human Nature into Focus

Unit 4. Developing Throughout the Lifespan

  • Module 15. Physical Development
  • Module 16. Cognitive Development
  • Module 17. Social Development
  • Module 18. Developmental Psychology: The Divide and Conquer Strategy

Unit 5. Getting Along in the Social World

  • Module 19. Personality: Who are You?
  • Module 20. Emotions and Motivation: What Moves You?
  • Module 21. Social Cognition and Influence: How Do People Interact?
  • Module 22. Intimate Relationships
  • Module 23. People in Organizations
  • Module 24. Social Psychology and Personality Psychology: Science and Society’s Problems

Unit 6. Achieving Physical and Mental Well-Being

  • Module 25. A Positive Outlook
  • Module 26. Consciousness and Sleep
  • Module 27. A Healthy Lifestyle
  • Module 28. Mood Disorders and Treatment
  • Module 29. Other Psychological Disorders and Treatment
  • Module 30. Clinical Psychology: The House that Psychology Built

This work was derived from Introduction to Psychology, Copyright © 2012 Kenneth Gray, for Cognella Academic Publishing. Cognella Academic Publishing retains the rights to reproduce the original work.

Except as otherwise noted, the current material is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).

This license allows you to copy and redistribute the material in any medium or format, and to remix, transform, and build upon the material, as long as:

  • You give appropriate credit, provide a link to the license, and indicate if changes were made.
  • You do not use the material for commercial purposes.

Unit 1: Thinking Like a Psychologist

Module 1: How Psychologists Think

1.1   Understanding the Science of Psychology

1.2   Watching Out for Errors and Biases in Reasoning

1.3   Thinking Like a Psychologist About Psychological Information

Module 2: How Psychologists Know What They Know

2.1   The Process of Psychological Research

2.2   Research Methods Used to Describe People and Determine Relationships

2.3   Research Methods Used to Determine Cause and Effect

2.4   Statistical Procedures to Make Research Data More Meaningful

2.5   Ethics in Research

Module 3: How Psychologists Think About the Field of Psychology

3.1   Psychology’s Subfields and Perspectives

3.2   Career Options for Psychology Majors                                            

Module 4: The Science of Psychology: Tension and Conflict in a Dynamic Discipline

I wish I could:

  • Go back in time and remember what it was like to be a baby.
  • See inside people’s memory.
  • Read people’s minds.
  • See what other people see.

—Doug, age 6

To that, we can only add: us too. That is why we decided to study psychology.

Do you ever wonder:

  • Why you can remember some important information but forget other equally important information?
  • Why some people seem to love school and work, while others hate it?
  • How the brain works?
  • Whether parents really understand the unintelligible sounds that come out of their two-year-old’s mouth?
  • Whether you will be the same person 15, 25, or 50 years from now? And if you will be different, how you will be different?
  • Why some people have satisfying relationships and others seem to jump from one bad relationship to the next?
  • Why some people hate?
  • Why people fall in love?
  • Whether and how advertising really works?
  • Whether you get enough sleep, and what happens if you do not?
  • What, exactly, depression, anxiety disorders, and other psychological disorders are and why some people develop them but others do not?

Us too. That is why we decided to study psychology.

Psychology  is defined as the science of behavior and mental processes. This is a broad definition because, as you will see in this course, psychology is a very broad discipline. The two main parts of the definition are (1) the subject matter, namely behavior and mental processes, and (2) the methods used to study them, which are the methods of science. This first unit of the book deals with the role of science in psychology, so we will have a chance to tell you about that very soon.

First, however, a brief description of the other part of the definition, behavior and mental processes, is in order. A behavior is any observable response in an organism, usually a person (although some psychologists study other animals). If you see two people walking down the hall together holding hands, you are observing behavior (several behaviors, actually). Likewise, a person insulting or injuring a rival is a behavior. So is answering a survey question, running to get out of the rain, eating, crying, sleeping, and so on. In short, anything a person does is a behavior and is a legitimate part of the subject matter of psychology. Behavior does not always require observation with the naked eye, by the way. As long as the response can be reliably measured, it counts as a behavior. For example, when you are nervous, your palms sweat. The sweat increases the electrical conductivity of your hand; this change, called the galvanic skin response, can be measured. Electrical activity in the brain, too, can be measured, so it counts as a behavior.

In the first part of the 20th century in the United States, psychology was almost purely the science of behavior. Modern psychologists try not just to measure behavior but also to figure out which mental processes, or functions within the brain, are responsible for producing the observed behavior. To give you a simple example, suppose you observe two people walking down the hall holding hands. As a casual observer, you might guess, or infer, from this behavior that they like each other. Liking cannot be observed directly but is taken to be a mental process associated with the observed behavior, holding hands. Although the concepts that psychologists use are a bit more complex, and the observations they make more careful and planned, their inferences of mental processes are basically the same thing that we do in our everyday lives.

Psychology is the subject that we, the three authors of this textbook, chose to devote our professional lives to decades ago. We chose it, in part, because the topics we were studying in our undergraduate psychology courses were so personally meaningful. Quite simply, we began to notice, and even use, the material from psychology courses in our everyday lives. That, in a nutshell, is our most important goal for this book: to highlight the relevance of psychology in your lives. This book, then, is organized around themes that we hope you will find personally meaningful. We will introduce you to the fascinating and complex world of psychology by dividing the topics that psychologists study into six themes relevant to everyday life, each one a unit of the textbook:

  • Unit One. Thinking Like a Psychologist
  • Unit Two. Understanding and Using Principles of Memory, Learning, and Thinking
  • Unit Three. Understanding Human Nature
  • Unit Four. Developing Throughout the Lifespan
  • Unit Five. Getting Along in the Social World
  • Unit Six. Achieving Physical and Mental Well-Being

So, the first unit of this book is “Thinking Like a Psychologist.” Why is that the first theme addressed in this book? Do psychologists really think better than other people? Perhaps. We certainly believe that psychologists have something to say about how we can better understand ourselves and others. In addition, psychology is deeply committed to scientific reasoning and critical thinking. Both of these skills will help you to evaluate research, arguments, and other claims you will encounter, in psychology and other disciplines. In short, if you begin to think like a psychologist, you will almost certainly become a more astute observer of the people and the world around you.

The unit is divided into four Modules (think of a Module as a short chapter):

Module 1, How Psychologists Think, introduces you to the role of science in psychology and describes how you should think about the psychological information you encounter in this course and elsewhere.

Module 2, How Psychologists Know What They Know, provides many details about the methods that psychologists use to learn about human behavior and mental processes. In short, it is about research design.

Module 3, How Psychologists Think About the Field of Psychology, describes how the discipline of psychology is subdivided and gives you information about career options for psychology majors.

At the end of the unit, and at the end of every unit in the book, is a special module that will look a little different from the earlier ones.  The purpose of these modules is to bring together the previous material and provide additional historical and psychological context for the material you have read. These sections often contain descriptions of research related to the material in the other modules. They provide a final link between the “personally meaningful” material emphasized in the modules and the traditional organization of the field by psychologists.

Module 4, The Science of Psychology: Tension and Conflict in a Dynamic Discipline, is that special module for this unit. In addition to giving you a description of the role that conflict played in the development of scientific psychology, it offers another reason for you to think critically about the psychological information you encounter.

behavior: any observable response in an organism

mental processes: functions within the brain

psychology: the science of behavior and mental processes

The Unit 1 introduction that you just read lists many topics that psychologists are interested in. You may have been surprised to discover such a wide range of topics. Part of the reason people tend to have such a limited view of psychology is that their exposure to psychologists is often limited. We tend to hear only about psychologists who provide professional services for people, such as therapy or counseling. Of course, many people with education in psychology are involved in these activities. More, however, are devoted to other activities. In fact, the very large majority of people who have degrees in psychology (undergraduate and graduate) devote their careers to goals other than providing therapy or counseling.

What psychologists (individuals who have a doctoral degree in psychology) really have in common, along with anyone else who has at least a college-level exposure to the discipline, is an understanding of the essential role of science and research and a commitment to the objective evaluation of ideas about human behavior and mental processes.

This module is divided into three sections. It begins by introducing you to the characteristics of a scientific discipline and explaining how they apply to psychology. The second section outlines some key ways that people mentally distort the world when they fail to take a more scientific view. The final section, acknowledging that much of what you will hear about psychology in your everyday life will come from the popular media (TV, magazines, the internet, social media, and so on), gives you advice about how to begin to evaluate the psychological claims that you might come across.

  • 1.1  Understanding the Science of Psychology
  • 1.2 Watching Out for Errors and Biases in Reasoning
  • 1.3 Thinking Like a Psychologist About Psychological Information

READING WITH A PURPOSE

Remember and understand.

By reading and studying Module 1, you should be able to remember and describe:

  • Difference between beliefs and knowledge (1.1)
  • History of how psychology came to be considered a science (1.1)
  • Five key properties of scientific observations (1.1)
  • Operational definitions (1.1)
  • Six types of reasoning errors that people typically make: statistical reasoning errors, attribution errors, overconfidence errors, hindsight bias, confirmation bias, false consensus (1.2)
  • Seven tips for evaluating psychological information (1.3)

By reading and thinking about how the concepts in Module 1 apply to real life, and practicing, you should be able to:

  • Begin thinking like a scientist in everyday life (1.1)
  • Generate simple examples of operational definitions (1.1)
  • Recognize examples of reasoning errors in your life and correct them (1.2)
  • Use the seven tips to evaluate psychological claims (note that this is also an Evaluate goal) (1.3)

Analyze, Evaluate, and Create

By reading and thinking about Module 1, participating in classroom activities, and completing out-of-class assignments, you should be able to:

  • Determine whether a particular subject or discipline is scientific or not (1.1)
  • Outline how you would change a non-scientific observation into a scientific one (1.1)
  • Separate flawed reasoning based on overconfidence errors from solid reasoning in an individual’s argument. (1.2)
  • Articulate a set of reasons why a particular psychological claim might not be trustworthy (1.3)

1.1  Understanding the Science of Psychology

  • Is each of the following a science or not? Chemistry. History. Biology. Psychology. Physics. What distinguishes the sciences from the non-sciences in this list? What does it mean for a discipline to be scientific?
  • Why do you think people care whether or not psychology is a science? Do you care whether psychology is a science or not? Why or why not?

Every day, we attempt to achieve the same goals that psychologists do. We see someone do something, and we try to explain why. For example, imagine that you encounter your best friend in the hall outside class, and he ignores you. Very likely, you would try to explain this behavior. Did he not see you? Is he angry with you? Is something troubling him? You might not stop at the question stage, however. Most people will answer the question and have high confidence that their answer, their explanation of the behavior, is correct. Psychologists do something different, though. They replace everyday observations and explanations with scientific ones. Science is nothing more than a method of gaining knowledge about the physical world. But it is a highly valued method.

Think of what it means to know something, as opposed to just believing it. Many children in the United States grow up believing in Santa Claus, the Easter Bunny, or the Tooth Fairy. As they get older they discover the many contradictions and inconsistencies that accompany belief in these characters—for example, “How does Santa get into our house? We don’t have a fireplace.” Eventually, as they realize that the beliefs are not justified, that the characters were invented to disguise gift-giving by their parents, the children discover an inescapable fact: Believing something to be true does not make it true. We are not saying that beliefs are wrong. We are saying that in order to know something, the belief must be justified. If you are approaching a railroad crossing in your car, you would much rather know that you will beat the oncoming train than simply believe it.

So, you can think about knowledge as correct, justified belief (although philosophers argue that the concept of knowledge is more complicated than that). Science  has emerged as the most important method of providing the justification for belief, bringing it closer to knowledge. A scientist believes something to be true because it has been supported by evidence, evidence produced under tightly controlled conditions designed to allow the scientist to draw valid conclusions.

Throughout this book you will encounter many explanations of psychological phenomena. We frequently use real-life examples to illustrate these phenomena. You should always remember, however, that psychologists base their explanations not on casual everyday observation but on careful scientific research.

The Importance of Science to Psychology

If you have the opportunity, take a look at some other general or introductory psychology textbooks. Many of them make a big deal out of the assertion that psychology is scientific. (If you do not have the opportunity, take our word for it; they do.) You might wonder: why does it matter if psychology is scientific or not?

Think of all of the classes you have taken in high school and college. How many of them began with a statement that the discipline you were about to study is a science? Of course, many disciplines are not sciences (for example, English, history, and foreign languages). What about biology, chemistry, or physics, though? Why doesn’t a chemistry textbook explain that chemistry is a science in its first chapter? The answer is probably obvious; it is because everyone knows that chemistry is a science. Aha, now we are on to something. The reason that psychology textbooks have to explain the link with science is that not everyone knows that psychology is a science (Lilienfeld, 2012). Unfortunately, that seems to include other scientists. As a consequence, psychology sometimes seems as if it is “fighting for respect” among the scientific disciplines (Stanovich, 2019).

Over the past few centuries, science has emerged as the most important and most widely respected way of discovering truths about the physical world—in other words, of turning belief into knowledge. Even in the 18th century, scientific ideals were held up as the model for many disciplines. Unfortunately, Immanuel Kant (2004/1786), an influential 18th-century philosopher, had asserted that a scientific psychology was impossible. Given the respect with which scientific disciplines were treated, the implication may have been that psychology was not “good enough” to be a science.

It is interesting to note, however, that many of the scholars who were interested in psychological concepts during the 18th and 19th centuries had a scientific background. To give one quick example, Hermann von Helmholtz, who in 1852 proposed a theory of color vision that is still accepted by psychologists today, was a physicist. (sec 10.1) Also, it seemed reasonable to believe that if other complex systems—for example, the universe—could be studied scientifically, why not the human mind?

Still, when psychology emerged as a legitimate discipline, it had to struggle to establish itself as a science. One reason that the German researcher Wilhelm Wundt is credited with being the first psychologist is because he worked so hard at establishing psychology as a science throughout Europe (Hunt, 2007).

Five Key Properties of Science

It is not just the word of scientists or other authorities that gives science its special power to justify people’s beliefs. Rather, it is the characteristics of scientific inquiry itself that make it so effective. It has five key properties:

  • Science is empirical.
  • Science is repeatable.
  • Science is self-correcting.
  • Science relies on rigorous observation.
  • Science strives to be objective.

As you read about these properties, try to imagine ways that you can apply them to your own attempts to understand the world. You will find that with practice, you can apply a more scientific approach to your everyday thinking (and as you will see soon, that is a good thing).

Science Is Empirical

Empirical means “derived from experience.” Simply put, science proceeds as scientists “experience” the world and make observations in it. The other kind of potential observation is an inside-the-head one, observation of one’s own consciousness and thought processes. This second technique, known as introspection, was very important in the early history of psychology. For example, you can imagine lying on a beach and relaxing, and then report how that thought makes you feel. You might report that it is a very effective way of helping you relax, but because you did not have to leave your own head, so to speak, your report is not an empirical observation.

It is probably fair to say that empirical observation is the most fundamental principle of science. These experience-based, public observations are what allow the remaining four characteristics of science to be achieved.

Science Is Repeatable 

If you were to conduct a scientific research project, you would seek to publish an article about your research in a scientific journal. One of the sections of that article, called Methods, lays out in great detail how you conducted your study. If future researchers want to repeat your study, all they would have to do is pick up your article and follow your methods like a recipe. This process, repeating a research study, is called replication.

Well, that sounds boring and useless, you might think. How do science and psychology progress if researchers spend their time repeating someone else’s study? First, replication is precisely what creates the third key property of science, the capacity for self-correction (see below). Second, relatively few studies are simple repetitions of previous studies (although that is changing somewhat; see Module 4). Instead, a replication typically repeats some key aspects of an earlier study while introducing a new wrinkle. To give you a simple example, a replication of a study done on learning in preschool children might examine the same phenomenon in children throughout the primary grades. It could show that the way preschool children learn applies to children of other ages as well.

Science Is Self-Correcting

We suggested above that replication is what allows science to be self-correcting. Let us explain. Self-correcting  means, roughly, that evidence based on good research tends to accumulate, while information based on bad research tends to fade away, forgotten.

Suppose you are watching the evening news fifteen years from now. A vaguely familiar person is being interviewed about her amazing new psychological discovery. As she is describing how her research has thrown into question everything we previously thought was true about human behavior and mental processes, you suddenly realize that you know this person. She was the person who goofed off in, rarely showed up to, and most likely failed the General Psychology class you took together back in college. “No way,” you think to yourself as she describes how the practical applications of her research finding will make her a multimillionaire. “She must have made a mistake when she did her study.” Quite simply, you do not believe that she got the correct results.

As someone who understands the science of psychology, you have a way to check up on her. Find her journal article, repeat the methods, and see if you can replicate her results. If you do, your results are another point in her favor, as an independent researcher has produced additional evidence for her findings. If you get different results, you have generated an official scientific controversy. Now a third researcher has to come along and replicate the study. The new replication may agree with you, or it may agree with your rival. Then, another researcher has to come along. And so on. Over time, the evidence will start to pile up on one side. Most of the researchers will obtain results that agree with one another, and the few that do not will be forgotten.

Here are two real-life examples of this scenario. Neither is from psychology, but it is important for you to realize that scientific principles have nothing to do with the subject matter. As long as you adhere to the principles, you are a scientist.

First, in 1989, a team of scientists claimed that they had achieved something called cold fusion, a nuclear reaction previously thought to be impossible. Observers noted that the results of these experiments, if verified, could be harnessed to solve the world’s energy supply problems (Energy Research Advisory Board, USDOE, 1989). Researchers across the world could not believe that this difficult problem, with such important potential for the human race, had finally been solved. Many tried to replicate these results in their own labs. The vast majority were unable to do so, and the original research was forgotten.

The second example is from biology. In 1997, a team of researchers again claimed that they had achieved what had previously been thought impossible. They were able to clone a higher mammal, a sheep; they named her Dolly. Doubting researchers across the world attempted to replicate these results, and this time, they were successful. Since the cloning of Dolly, researchers have cloned other sheep, as well as cats, deer, dogs, horses, mules, oxen, rabbits, rats, and rhesus monkeys (NHGRI, 2017). It is now commonly accepted scientific knowledge that cloning of higher mammals is possible.

And because this is a psychology textbook, let us conclude with a more relevant example. In 1929, Hans Berger invented the electroencephalogram (EEG). He placed electrodes on a person’s scalp and was able to amplify and therefore measure the electrical signals coming from the brain. Skeptical researchers did not believe that Berger was actually measuring brain signals; some even produced similar-looking signals from a bowl of quivering gelatin. But over the next several years, a funny thing happened. Numerous researchers were able to reproduce these EEG signals, and the technique was eventually accepted as genuine (Luck, 2014). Interestingly, EEG is still in use today as a key method of measuring brain activity.

Of course, it can take many years for enough evidence to accumulate on one side of a controversy in order to draw a firm conclusion. This lengthy time frame makes it very frustrating to be a consumer of scientific information. We may learn through media reports, for example, that a study found a particular diet to be safe and effective. Soon after, another study is reported in which the first study is contradicted. What is happening is that we are hearing about the individual pieces that compose the scientific controversy while it is still in progress.

Science Relies on Rigorous Observation

Earlier, we said that scientific evidence was produced under tightly controlled conditions designed to allow the scientist to draw valid conclusions. The conditions under which scientific observations are made are laid out by specific research methods. These methods are essentially the rules for making scientific observations. (see Module 2)

For example, you might be interested in discovering whether caffeine improves exam performance. To do this, you would probably select a research method called an experiment. There are entire courses that teach the details (that is, the rules) about this method and explain why it would be the method you should choose. The important point here is that scientists learn about phenomena by carefully controlling, recording, and analyzing their empirical observations.

Science Strives to Be Objective

You should be aware of two related but distinct senses of objective. First, scientists strive to be personally objective; they try not to let their personal beliefs influence their research. Second, the observations that scientists make must be objective, meaning that different observers would observe the same thing. For example, if a research participant answers a question on a survey by choosing a number on a 5-point scale, different observers would be able to agree on which number was chosen.

It can be very difficult to make objective observations. Imagine sending different observers out to watch a group of children and count how many aggressive acts they commit. As you might guess, the different observers might come back with very different reports. One source of difficulty can be the personal background and beliefs of the individual observers. Perhaps one observer believes that boys are more aggressive, so he watches them more carefully than he watches girls.

Another source of difficulty when trying to make objective observations is a lack of clarity about precisely what is being observed. In order to make observations more objective, researchers use operational definitions. Operational definitions specify exactly how a concept will be measured in the research study. For example, an operational definition for aggressiveness could be a checklist of behaviors that observers might see in the children they are watching: hitting, punching, kicking another child, using profanity toward another child, directing a threat toward another child, and so on. The goal is to come up with a list of behaviors that are a reasonable reflection of aggressiveness and that different observers can consistently recognize as aggressiveness. An operational definition like this gives observers a way to know what to count as an aggressive behavior so they can compare apples to apples.
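If you are comfortable reading a little code, here is a minimal sketch of the idea (our own illustration; the checklist items and numbers are hypothetical, not taken from an actual study). The operational definition is simply the list of behaviors that count, and any observer applying the same list should arrive at the same score:

# A hypothetical operational definition of "aggressiveness": the specific,
# observable behaviors that observers are told to count.
AGGRESSIVE_ACTS = {"hit", "punch", "kick", "profanity toward another child", "threat"}

def aggression_score(observed_behaviors):
    # Count only the behaviors named in the operational definition.
    return sum(1 for act in observed_behaviors if act in AGGRESSIVE_ACTS)

# Two observers watching the same child record everything they saw.
observer_1 = ["hit", "laughing", "threat", "running", "kick"]
observer_2 = ["hit", "threat", "kick", "sharing a toy"]

print(aggression_score(observer_1))  # 3
print(aggression_score(observer_2))  # 3 -- the same score, because both observers
                                     # are counting the same concrete behaviors

Because the definition spells out exactly what counts, disagreements between observers become visible and fixable rather than hidden inside vague impressions.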

The Role of Peer Review in Science

Scientific research uses a technique called peer review to help ensure that the features of good science are contained in any specific research project. Here is how it works. If you want to have a report of your research study published in a scientific journal, it will be reviewed by a small group (often three) of experts in the research area. These experts, the peers, will evaluate your article, making comments and suggestions to improve the scientific strength of the project and report. Publication decisions are based on the recommendations of the peer reviewers. As a result of peer review, a great many articles are rejected, and nearly all others are required to make significant revisions before they can be published. Peer review, then, is the basic mechanism that we use for quality control throughout the scientific disciplines. We should point out that peer review is certainly not perfect. Low-quality studies can slip through, and high-quality studies may occasionally be rejected by a powerful but biased reviewer. It is, however, the best procedure we have available for maintaining the level of scientific rigor in published research.

  • Think about some non-scientific disciplines, such as history, philosophy, and humanities. Can you imagine how they might be made scientific?
  • Would it be a good idea or a bad idea to make non-scientific disciplines more scientific?

1.2  Watching Out for Errors and Biases in Reasoning

  • In your opinion, what type of people are the worst drivers? How did you form this opinion?
  • Do more people in the US die from falling or from fires? How sure are you that you are correct?

Human beings, probably on a daily basis, try to accomplish the same goals as scientists. When we witness some event, rather than simply being passive observers, we often try to explain why it happened. Specifically, in the case of psychology, we see someone engage in a behavior and then try to explain it and the mental processes underlying it. For example, if we see someone running down the hall at school and yelling, we might wonder, “Why did he do that? Is he being chased? Is he celebrating because he just finished his finals?” We would have to call this very common human activity of searching for explanations naïve, or intuitive psychology, however, because it takes place without the benefit of scientifically gathered evidence. Other disciplines are similar; for example, researchers have discovered that people generate their own explanations for physical phenomena without relying on formal physics principles (as you might guess, it is sometimes called naïve, or intuitive physics).

Why should we care about intuitive reasoning (about psychology and the physical world)? Well, psychologists who study reasoning and thinking have discovered an important fact about it: We make many predictable sorts of errors when we try to draw conclusions about our everyday observations without thinking scientifically. And, from our selfish perspective, it is a good thing, too. After all, if your explanations about human behavior and mental processes were all correct before you took this class, psychology educators would be out of a job. In other words, if naïve psychology were always correct, there would be no need for scientific psychology.

In the following sections, we will outline a few important biases and errors. First, however, let’s talk about what we mean by biases (we will assume you know what we mean by errors). A bias is a specific tendency, a consistent way of thinking, seeing, believing, or acting. One important source of bias is one’s personal experiences and background. So now you might realize that when we spoke earlier about scientists’ need to ignore their personal backgrounds and make objective observations, we were in fact talking about the need to move beyond their biases. We distinguish between error and bias because an error, by definition, is always wrong. A bias in some specific situations might lead to a correct conclusion. For example, professors who have a bias that students are dishonest may be very successful at identifying cheaters in their classes. This can make it very difficult for people to discover that their biases might be incorrect (see also the confirmation bias below). The key idea is that if a bias is applied consistently, eventually it will lead to an error.

So with that in mind, here are a few important types of biases and errors in reasoning:

Statistical reasoning errors. There are many situations in which we try to make some judgment about the frequency or likelihood of something. For example, if we see a man running down the hall at school, we might need to judge how likely it is that he is being chased or fleeing some catastrophe. This is essentially what statisticians do, but, unlike naïve psychologists, they base their conclusions about likelihood on much more data and on the laws of probability. Statistical reasoning errors are poor judgments about likelihoods. Largely because we do not have the time or ability to calculate probabilities in our heads, we use shortcuts when trying to judge likelihood, which leads to many important errors. (sec 6.2)

Attribution errors. We also tend to make errors in the types of explanations that we come up with for people’s behavior—in short, attribution errors. For example, many people are very likely to explain someone’s behavior by attributing it to internal causes—that is, something about the person’s disposition or personality. (sec 18.1) So, observing someone running through the halls yelling, we are more likely to assume that he is a rude and obnoxious person and less likely to assume that some situational factor, such as an emergency, is responsible.

Overconfidence errors. Making matters worse, we have a set of biases that lead us to think that we are correct more often than we actually are. Individually, each bias is quite a dangerous overconfidence error.  Together, they combine to make us overconfident of our ability to explain and know things without relying on scientific research. And we can be very overconfident. In one study, research participants judged which of two kinds of events were more deadly (for example, do more people in the US die from fires or falls) and how likely their judgments were to be correct. When they said that there was a million to one chance against being wrong, they were actually wrong 10% of the time (Fischhoff, Slovic, & Lichtenstein, 1977). (For the record: according to the Centers for Disease Control, 38,707 people died from falls and 6,196 people died from fires in the US in 2018.) There is little doubt that people are rewarded for confidence, and even for overconfidence. For example, research participants judge that experts are more believable when the experts are more confident; interestingly, they even overestimate how often overconfident experts are correct (Price and Stone, 2003; Brodsky, Griffin, and Cramer, 2010).
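To get a feel for how large that miscalibration is, here is a quick back-of-the-envelope calculation (our own arithmetic, using only the figures just cited, not a reanalysis of the original study):

# Stated confidence: "a million to one against being wrong."
stated_odds_against_error = 1_000_000
implied_error_rate = 1 / (stated_odds_against_error + 1)   # about 0.0001%

# What Fischhoff, Slovic, and Lichtenstein (1977) actually observed.
observed_error_rate = 0.10                                  # wrong about 10% of the time

print(f"Implied error rate:  {implied_error_rate:.4%}")
print(f"Observed error rate: {observed_error_rate:.0%}")
print(f"Errors were roughly {observed_error_rate / implied_error_rate:,.0f} times more common than the stated confidence allows.")

In other words, participants were not just a little overconfident; their confidence was off by roughly five orders of magnitude.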

So, if you see a man running down the hall, not only is there a pretty good likelihood that you will make the wrong judgment about him, there is also a pretty good likelihood that you will be nearly sure that your wrong judgment is correct. Some of the specific biases that lead to overconfidence are the hindsight bias, confirmation bias, and false consensus effect:

Hindsight bias. Once an event has happened, it seems to have been inevitable, and people misremember and believe that they could have predicted the event (Fischhoff, 1982; Lilienfeld, 2012). This has been called the hindsight bias, or the “I knew it all along” bias. For example, on many autumn Monday mornings, football fans across the US engage in what is known as “Monday morning quarterbacking.” Fans complain about the interception that the quarterback for their favorite team threw: “It was obvious that the cornerback was going to blitz; why didn’t he just throw the ball out of bounds?” But the event was not inevitable; it could not have been predicted, and had the fans been questioned before the interception actually occurred, they would not have “known it all along.” And you need not be a sports fan to fall for the hindsight bias. One study tested participants ranging from 3 to 95 years old; the bias was common in all of the age groups (Bernstein et al., 2011). Another demonstrated the bias among Japanese and Korean participants (Yama et al., 2010). You should realize that the hindsight bias also works powerfully to make people believe that much research is unnecessary. When told that researchers have made some discovery, many people’s response is “I knew that; who needed to do research to find that out?” When people find themselves thinking, “I knew that already!” as a result of the hindsight bias, they often turn out to be overconfident about their beliefs as well.

Confirmation bias. We once asked a few friends what type of people are the worst drivers. The answers we received included teenage boys, people over 80, 20-something women with cell phones, moms in minivans, and older men wearing hats. Interestingly, several people were absolutely sure that they were right. Yet, it is impossible that they were all right. Only one group of drivers can be the worst. The strength of our friends’ beliefs results from something called the confirmation bias (Ross and Anderson, 1982). People have a tendency to notice information that confirms what they already believe. It works this way: At some point you may have picked up the belief that older men wearing hats are the worst drivers (one friend heard it on a radio show). Now, every time you see an example that confirms that belief—for example, a 70-year-old man in a bowler straddling two lanes while driving 15 miles per hour under the speed limit—you make a mental note of it. “Oh, there is another old man in a hat. They should not be allowed to drive!” The flip side of confirmation bias is that we fail to notice information that disconfirms our belief. So, we might not pay attention to the 18-year-old in the Mustang who crossed the yellow line and narrowly avoided a truck while trying to pass the older man in the hat. The confirmation bias is very common in many different situations (Nickerson, 1998). For example, people suffering from insomnia may incorrectly recall that they sleep less than they actually do, in part because of the confirmation bias (Harvey and Tang, 2012). The confirmation bias is a particularly dangerous one because it often directly leads us to draw the wrong conclusion while it is simultaneously increasing our confidence in that wrong conclusion.

By the way, according to the National Highway Traffic Safety Administration, males between 16 and 20 years old have the highest rate of involvement in automobile accidents. Females in the same age group are in second place. Sorry, there were no data on older men in hats.

False consensus. The famous developmental psychologist Jean Piaget proposed that young children have difficulty taking someone else’s point of view; he called it egocentrism (Module 16). But the characteristics that Piaget described do not apply to children only. We can find many examples of adults who fail to take other people’s point of view. False consensus, the tendency to overestimate the extent to which other people agree with us, is an important example of this failure (Pronin, Puccio, and Ross, 2002). In essence, we tend to think our point of view is more common than it actually is, failing to consider that other people might not see things the same way. In 2003, we asked approximately 100 General Psychology students to rate their degree of support for the U.S. war with Iraq, which was then near its peak. Then we asked them to estimate how many of their fellow students gave the same rating—that is, how many agreed with them. Ninety percent of the students believed that more people agreed with them than actually did, a very strong false consensus effect (Gray, 2003). Again, this error contributes to our overconfidence and to our belief that research is not necessary. It is all too tempting to believe that we have learned the truth about the whole world by observing ourselves and our small part of the world. Research is important because it helps us find out objectively how common or uncommon our personal beliefs may be.

  • Try to think of an example from your life in which you or someone you know might have committed each of the errors described in this section: statistical reasoning error, attribution error, overconfidence error, hindsight bias, confirmation bias, false consensus.

1.3  Thinking Like a Psychologist About Psychological Information

  • Have you ever read a self-help book? If so, did you follow the advice in the book, and did it help?
  • Have you ever found yourself in a discussion in which someone says, “I read somewhere that ___,” where the blank is filled with some claim about psychology (human behavior and mental processes), such as “men and women solve problems differently” or “most people are right-brained.” How did you respond to the statement?

Unless you major in psychology, this might be the only psychology class you ever take. Even if you wind up taking one or two additional classes, your most significant lifetime exposure to psychological information will be as a casual user of the information. Even psychology majors who end up earning advanced degrees will be bombarded with psychological information from the popular media and other non-academic sources—newspaper and magazine reports, or Facebook posts that summarize some new finding, commercial websites touting some remarkable relationship-saving communication strategy, psychological claims made by “experts” on television and YouTube, claims made by friends and acquaintances during conversations, and so on. So whatever you may decide to do as a student of psychology, it is important that you learn how to make sense of these claims and to evaluate them.

The basic principles of scientific thinking and time-tested research methods and statistical techniques will help you sort out the good from the bad, the sense from the nonsense. This section focuses on some critical thinking skills (sec 7.1) that will help you overcome problems you will face when you are exposed to psychological information and research in everyday life. As an added bonus, many of the tips in this section can also be applied to help you evaluate media reports of claims and research from other disciplines or even advertising and political campaigns.

Often, the only way to draw valid conclusions about some claim will be for you to enlist the thinking skills that you acquire through your education in science; remember, the whole purpose of science is to provide justification for belief. So you would need to locate scientific journal articles, read them carefully, and compare the articles to one another and to the claims from the popular media.

As you might guess, this can be an enormous undertaking, one that could be a full-time career, so even psychologists with advanced degrees do not often do all of this work. How can you decide when you should go to the trouble? You should judge how important it is for you not to be misled about each individual claim. For example, if you are currently having serious difficulty in a romantic relationship, you may want to determine whether the relationship-saving claims from someone’s website are supported by scientific research before you follow the advice (we know we would).

Another strategy is to use the suggestions from this section as a set of warning flags during your initial encounter with the psychological information. If the popular claims that you are evaluating do not pass the tests suggested by the following seven tips, you should be very cautious. It might be time for you to take a deep breath and begin wading through the scientific literature to find more authoritative information.

Tip #1. Be aware of your preconceived ideas

If you think about the confirmation bias from Section 1.2 for a minute, you might realize something important about it. If we go through life typically paying attention only to information that confirms what we already believe, it might be remarkably difficult to change our minds. Indeed, researchers have demonstrated that this is exactly what happens. It is called belief perseverance, and it is very common. People sometimes even refuse to change their minds when their beliefs are proven completely wrong (Anderson, 2008; Ross, Lepper, and Hubbard, 1975). As you might realize, the ability to critically challenge your own beliefs is one of the most important thinking skills you can develop. The reason is simple: no one is always right.

One of the greatest dangers we face when evaluating psychological claims, then, is that we tend to be very uncritical about the information that we already believe. Many people have very little interest in, and devote very little effort to, proving themselves wrong (Browne & Keeley, 2009). If we happen to be wrong, though, we will never find out. If your goal is to find the truth, sometimes you have to admit that your preconceived ideas were wrong.

Tip #2. Who is the source?

Although an advanced degree in psychology from a reputable university is certainly not a guarantee that a claim will be correct, the lack of such a degree can be a cause for caution. A person who makes psychological claims should be qualified to make those claims. Dr. Laura Schlessinger, for example, is the author of several bestselling books that dispense psychological information, as well as the host of a national call-in radio advice program. She bills herself as America’s #1 Relationship Talk Show Host. One problem: Her Ph.D. is in physiology (read that carefully; it didn’t say psychology). Although Dr. Laura, as she calls herself, has a certificate in Marriage, Family, and Child Counseling, it is the Ph.D. that qualifies someone to refer to herself as “Dr.” It seems a bit misleading to dispense psychological information as a “Dr.” in physiology.

How about Dr. John Gray, the author of the successful Mars and Venus books? According to his website, MarsVenus.com, the original book in the series, Men Are from Mars, Women Are from Venus, has sold more than 15 million copies, and Dr. Gray is the “best-selling relationship author of all time.” John Gray does indeed have a Ph.D. in psychology, so he may appear qualified. His degree, however, is from Columbia Pacific University, a school that was ordered by the state of California in 1999 to cease operations because it had been granting Ph.D. degrees to people for very low-quality work (Hamson, 1996, 2000).

Organizations can also sometimes deceive us about their true origins and purpose. Have you ever heard of the American Academy of Pediatrics? According to their website, it is a large group of pediatricians (established in 1930) with a national organization and 59 individual chapters throughout the US (and 7 in Canada). Among other activities, the Academy shares with the public medical consensus opinions about various topics intended to improve the health of children (AAP, 2020). Well, how about the American College of Pediatricians? According to their website they, too, are an organization of pediatricians (and other healthcare professionals). It was established in 2002 and now has members across the US and in other countries. (ACPEDS, 2020). If you are at a computer, please take a few minutes right now to Google ACPEDS. (Go ahead, you have time; you are almost finished with this section.) Don’t go to their website, but look at some of the other results that Google gave you. Did you find the one that states that the Southern Poverty Law Center has labeled the American College of Pediatricians a hate group? Others refer to it as a fringe group of pediatricians with an obvious ideological bias. Now, keep in mind, we are not saying that this quick Google exercise has definitely unmasked this group as a fraud. But we certainly have quite a bit more to think about before we automatically accept their information.

When you are faced with the problem of trying to figure out if an individual or group is legitimate, do what fact-checkers do. Do not simply read the “About” section of a website. Do an independent investigation of the person’s (or organization’s) background, experience, or credentials. You don’t have to hire a private detective; just do a bit of a Google search. Use Wikipedia (tell your professor we said it was ok in this case). All we are trying to do is get a sense for someone’s background and whether or not they are associated with any controversies. An informal search like this will work quite well for those purposes.

Tip #3. What is the purpose of the information?

This one might seem obvious, and sometimes it is. When the first thing you see on a website is a Buy Here button, you know that they want to sell you something. Sometimes it is not exactly obvious, though. A common persuasion technique is to disguise an attempt to persuade as information (Levine, 2020). For example, financial advisors who work on commission often try to sell annuities or other financial products by sponsoring free educational seminars or by publishing “informational booklets” about financial products in general. Other common hidden purposes include political agendas (see ACPEDS above for a possible example) and obtaining personal information about users for marketing purposes (cough-cough, Facebook).

Tip #4. Is it based on research?

If you learn nothing else from this course, we’d like you to learn this next point. No, wait. On second thought, there are a great many things we would like you to learn from this course. Among those, we would like to emphasize this next critically important nugget. There is only one reason you are allowed to say that something is true in psychology. And that reason is that someone did the research. Not just one someone, but lots of someones. You see, that is what it means to be a scientific discipline. We cannot rely on casual observation or opinion, even expert opinion. We must only draw conclusions when they are warranted by careful research conducted by a number of different researchers.

And this certainly applies to the psychological information to which you are exposed on a regular basis outside of the confines of this course. Consider self-help, for example. Self-help is an enormous industry. (Just for fun, we just Googled self-help; it returned 4,150,000,000 results in 0.75 seconds.) There are many excellent self-help resources. Unfortunately, however, there are also many that are, well, “not excellent.” The fact that a book has been published, for example, says only that the publisher believes that it will sell; sadly, it says nothing about the quality of the information.

How do you tell the good from the poor resources, then? The task involves several of the tips we have given in this section and more. Pay attention to the qualifications of the author. Look for signs that the author is oversimplifying (Tip #5 below) or relying on persuasion tricks (Tip #7).

Most importantly, does the resource have a good grounding in scientific research? Is there a section somewhere prominent that lists the studies cited in the resource? Are the studies from scientific psychological journals, such as The Journal of Personality and Social Psychology and Health Psychology? Is the underlying research described in the resource itself? Have the authors conducted any of the research themselves? If the answer to most or all of these research questions is no, we would be very cautious about accepting the claims in the resource.

Tip #5. Beware of oversimplifications

Descriptions of psychological concepts intended for the public must simplify (for that matter, so, too, must undergraduate textbooks). If they presented the information in as much detail as one typically finds in a scientific journal, very few people would ever pay attention, even if they could understand the information (scientific journal articles are notoriously difficult to read because they are typically written for an audience of Ph.D.-level psychologists). So simplification is acceptable, often even necessary. But oversimplification  is simplification that goes too far and ends up distorting or misrepresenting the original information.

An expert can usually recognize oversimplification, but how can a non-expert? It would seem that you would need to know the complicated version of the information in order to know if it is being distorted when simplified. To help you recognize oversimplification, you should get in the habit of looking for the common clues that it is occurring. For one thing, there are very few absolutes (none, never, always) in psychology. (What if we had said there are no absolutes in psychology? Would you have believed us?) If someone tells you that something is always true, it is a good bet that they are oversimplifying.

Also, be very careful when people make sweeping generalizations that seemingly apply to everyone. These are called overgeneralizations, incorrectly concluding that some fact or research finding that is true of one group is automatically true of a larger or different group. For example, a headline on an internet site we first encountered a few years ago trumpeted “This Food Makes Men Aggressive,” a statement that certainly sounds like a sweeping generalization. Clicking on the link, we discovered that the title of the article was “This Food Can Make Men Aggressive,” a bit of a hedge from the headline, but still an oversimplified overgeneralization of the research, as you will see. The article cautioned readers about the potential dangers of eating soy burgers because research had discovered a link between soy and aggression. The actual research, published in the journal Hormones and Behavior, reported that monkeys that were fed a diet high in soy isoflavones (125 mg daily) were more aggressive than monkeys fed no isoflavones (Simon et al., 2004). A typical soy burger has 7 mg of soy isoflavones (Heneman et al., 2007). So unless you are eating 18 of them per day and are as small as an average monkey, we recommend waiting before throwing away that package of soy burgers in the freezer. We call this specific kind of oversimplification the headline effect, distorting some research results by creating a very short, headline-like summary. Keep in mind, the headline effect does not happen every time someone uses a short summary; it only comes into play when that headline-like summary distorts or hides some important aspects of the larger story.

Another clue that someone may be oversimplifying is when an explanation uses a very firm either/or approach; this type of oversimplification is called creating a false dichotomy or false choice. For example, the popular media might report that some new research has uncovered a genetic component for personality, implying that environment, therefore, has no influence on personality. In other words, personality is presented as the product of either nature or nurture. As you will learn throughout this book, though, psychological phenomena appear to be a combination of biological and environmental influences. Few things in psychology have only one cause or explanation. Any report that emphasizes one explanation to the exclusion of everything else is probably oversimplifying.

Tip #6. Beware of distortions of the research process

Controversies

Science is, by nature, uncertain. For very long periods of time, some theory or claim may be quite controversial, with large groups of psychologists standing on both sides of the issue. For example, at a presentation in which the speaker asked the audience—about 125 college and high school psychology instructors—whether personality was determined more by a person’s genes or by the environment, the instructors split nearly 50-50. Obviously, these psychologists did not all agree.

When psychological claims are presented in the media, however, they often skip the disclaimer that not all psychologists agree with the claim, or that the results of a specific study represent a snapshot: one piece of information at one point in time. An honest and complete view acknowledges that progress in psychological knowledge is more of a back-and-forth process than a straight line and that individual results must be put into the context of all of the research that preceded them (science is self-correcting over time). (sec 1.1)

More dramatically, the media can sometimes present the opinions of a very small number of scientists (sometimes as few as one) as if they represent the existence of a legitimate scientific controversy. For example, in April 2020, during the height of the COVID-19 quarantine in the United States, two urgent care doctors, Dan Erickson and Artin Massihi, recorded a series of videos on YouTube that asserted (among other things) that the shelter-in-place orders had little to no effect on the spread of the coronavirus. Somehow, they and the millions of supporters of their videos seemed to think that the doctors’ conclusions were more valid than the conclusions of the World Health Organization, the Centers for Disease Control and Prevention, and highly credentialed epidemiologists throughout the world. We like to think of this as the myth of two equal sides. Although it is true that there are usually two sides to a story, it does not mean that the two sides are equally good. Two doctors who treat patients in an urgent care facility on one side do not form an equal counterweight to the entire scientific discipline of epidemiology and the most prestigious health organizations in the world. The very strong scientific consensus is that sheltering in place led to a very substantial slowing of the spread of the virus, probably saving millions of lives by largely preventing the overloading of hospitals. In this case, as sometimes happens, supporters began to treat the doctors as if they were some kind of martyrs, shunned and censored because they dared to challenge the accepted wisdom, as if they were some modern Galileo (who, you may recall, was persecuted by the establishment when he argued that the earth revolved around the sun). But as science historian Michael Shermer (2002) has pointed out, you do not get to be Galileo simply by being shunned by establishment science; you must also be correct.

Spurious correlations

In Module 2, you will learn an important point: just because two things are associated, it does not mean that one caused the other. Oh, what the heck. We might as well introduce you to the point here. Then we can review it in Module 2 in the context of interpreting statistics. Researchers who want to determine that there is a cause-and-effect relationship between two variables have the ideal research design available to them: experiments. The experimental research design allows causal conclusions precisely because the researcher manipulates one variable and measures the other variable while holding other variables constant to rule out alternative explanations. A different type of research design is correlational, in which a researcher measures the correlation, or association, between two variables. For example, students who have higher grades in high school tend to get higher grades in college (and vice versa: low-performing students in high school tend to get lower grades in college). This relationship, or association, is useful for predicting: for example, if you want to predict who will do well in college, focus first on the students who did well in high school. But, as you will see in some detail in Module 2, this does not mean that doing well in high school causes students to do well in college. Unfortunately, people sometimes make that exact sort of claim. They speak as if the association between two variables automatically means that they are causally related. For example, in 2010, a Los Angeles Times headline trumpeted, “Proximity to freeways increases autism risk, study finds,” even though the research was based on a correlation. In fact, the researchers themselves cautioned against drawing that causal conclusion, but that did not stop the headline (Roan, 2010). (We hope this sounds familiar; if not, see the Headline Effect above).

Actually, it turns out that this tip really includes most of Module 2, too. Seriously. If you want to be able to evaluate claims about research, you have to know quite a bit about different research strategies, when they are appropriate, their strengths and limitations, etc. You also need to know a bit about the use and misuse of statistics. Module 2 will give you a good solid foundation to be able to recognize some of the common distortions that can occur, so the continuation of this tip really is “most of Module 2.”

Tip #7. Beware of persuasion tricks

Do you believe that advertising does not affect you? If you said yes, well, that is exactly what they want you to believe. You might think we are joking, but we are not. In 2019, Facebook earned $70 billion (with a b) from advertising (this was even after their well-publicized mishaps handling our private information became public). This is an extraordinary amount of money. How extraordinary? Stack $1 bills on top of each other. One inch of dollar bills would be about $230. Seventy billion one-dollar bills would be a stack that reaches over 4,700 MILES in the air. And Facebook is only one company. There is a simple reason why companies earn thousands of miles of money from advertising. It works. And the less aware you are of these persuasion effects, the more likely you are to fall for them (Wegener & Petty, 1997). So, sorry to have to do this to you, but you really should read the section on Persuasion to get a more complete understanding of how these techniques are used on (or against) us (Module 21). To get you interested, let us consider one of those techniques here: testimonials.

A testimonial  is a report on the quality or effectiveness of some treatment, book, or product by an actual user. For example, diet ads often use testimonials to demonstrate the effectiveness of their plan. It is a persuasive technique because presumably the person giving the testimonial is someone just like you or me, an unpaid consumer who happened to be so impressed by the performance of the product that they just could not help but thank the company.

Some testimonials are clearly contrived. But even honest testimonials present a problem: Each testimonial is useful for describing the experiences of one single person only. The individual in question might not be all that typical or all that much like you. Buried in the small print, diet ads may tell you that the “results are not typical.” Just because one person lost 75 pounds by eating only the crusts of white bread, it does not mean that everyone, or even most people, will see similar benefits. In terms of the principles of science, a testimonial is not based on rigorous observations, and one person’s experience might not be repeatable (Schick and Vaughn, 1999). (See Section 1.1 above)

By the way, if you are paying close attention, you might realize that this sounds awfully similar to overgeneralization. They are assuming that just because something is true of one person, it is true of everyone else. If you did notice that, congratulations. You should probably get extra credit.

Have the tips in this section led you to reconsider whether some psychological claim that you believed is true or not? If so, what was the claim, and which tips led you to doubt it?

A major goal of Module 2 is to show you some details about how psychologists use research to expand their knowledge of human behavior and thinking processes. This module explains many of the nuts-and-bolts methods of conducting psychological research. As you read about psychological phenomena throughout the book, keep in mind that the research was conducted using one of these methods, sometimes a very sophisticated version.

Module 2 is divided into five sections. A section that presents the overall process of research is followed by two sections that describe the characteristics, strengths, and limitations of the major research methods in psychology. One is about the types of scientific studies that are used to describe people and determine relationships among phenomena (case studies, surveys, and naturalistic observations); the other describes in some detail the most important type of research in psychology, the experiment, which allows us to determine cause-and-effect relationships. Another section explains the use of statistical procedures to summarize and draw conclusions about research. The module would not be complete without a discussion of why ethics are important in psychological research and how to conduct research ethically.

  • 2.1 The Process of Psychological Research
  • 2.2 Research Methods Used to Describe People and Determine Relationships
  • 2.3 Research Methods Used to Determine Cause and Effect
  • 2.4 Statistical Procedures to Make Research Data More Meaningful
  • 2.5 Ethics in Research

By reading and studying Module 2, you should be able to remember and describe:

  • Three elements in psychological research: observations, theory, hypothesis (2.1)
  • Characteristics, strengths, and limitations of descriptive and correlational research methods: case studies, surveys, naturalistic observation (2.2)
  • Variables, correlations, correlation coefficient (2.2)
  • Reasons that correlations do not imply causation (2.2)
  • Experimental method: independent and dependent variables, experimental and control groups, random assignment, extraneous variables (2.3)
  • Descriptive statistics: frequency distributions, measures of central tendency, measures of variability (2.4)
  • Inferential statistics (2.4)
  • Two basic approaches to deciding whether a research project is ethical (2.5)
  • American Psychological Association ethical guidelines for research (2.5)

By reading and thinking about how the concepts in Module 2 apply to real life, you should be able to:

  • Design and conduct a simple survey project (2.2)
  • Design and conduct a simple experiment (2.3)

By reading and thinking about Module 2, participating in classroom activities, and completing out-of-class assignments, you should be able to:

  • Formulate hypotheses from scientific theories (2.1)

  • Select an appropriate research design from a set of research goals (2.2 and 2.3)
  • Explain why the conclusions drawn about a research project are warranted or unwarranted (2.2 and 2.3)
  • Identify the research method and key elements of the method from a description of a research project (2.2 and 2.3)
  • Explain why a researcher’s choice of descriptive statistics is appropriate or inappropriate (2.4)
  • Judge whether a research project is likely to be considered ethical or unethical using the basic ethical approaches and the American Psychological Association ethical guidelines (2.5)

2.1 The Process of Psychological Research

  • Do you generally trust or mistrust science and research? Why?

Research is essential to science, and so it is (or should be) conducted very carefully. You will be able to recognize the basic ideas behind scientific research, however, because chances are that you have often conducted an informal kind of research.

The process often starts with some observations  of a curious phenomenon. You see or hear or learn in some other way that something noteworthy is happening. For example:

Several years ago, while Piotr and his fiancee Zofia were driving around and looking for a place to eat dinner, she glanced up at a billboard and read, “Joe’s Pizza.” It was, however, a sign for a car dealership owned by a man named Joe Rizza.

Once, about halfway through eating what he thought was a raisin bagel, Piotr realized that it was actually a chocolate chip bagel with no raisins at all.

Try as he might, Piotr couldn’t decipher the lyrics of a song he liked. But after he read them somewhere, he could hear them clearly.

For people with an inquiring mind, the next step is to try to answer the question “Why did that happen?” A scientist uses theories  to fulfill that goal; one of the primary roles of theories is to explain observations. We also come up with informal theories (explanations) in everyday life:

  • Because she was hungry, Zofia misperceived the car billboard to be consistent with what was on her mind at the time, namely food.
  • Because he expected the bagel to have raisins in it, Piotr mistakenly thought that the chocolate chips were raisins.
  • Seeing the words of the song written down somehow put him in a frame of mind to hear them more clearly.

The second primary role of a theory is to organize different observations. In essence, scientists use theories to propose common explanations for or important relationships between different observations. In everyday life, we also may have an insight into how different phenomena are related. In each of the everyday examples we provided here, a person’s perception of some stimulus (seeing a billboard, tasting a bagel,  hearing song lyrics) is influenced by the state of mind that they are in at the time (being hungry, expecting raisins, knowing what the lyrics are from reading them). This last statement is basically a theory about perception. For now, let’s call it a theory of expectation effects in perception: If people have an expectation, they are likely to perceive some stimulus in a way that is consistent with that expectation. (sec 11.3) Please note, however, that although scientists may get their ideas for research from everyday observations, such as the examples we have been using here, they will use much more carefully controlled observations obtained during research to formulate or refine a theory.

Theories have one last important role: they allow us to make predictions about future observations. These predictions are called hypotheses, and they are used to test the theory. For example, from our theory about expectation effects, you might predict that many other kinds of perceptions would be similarly affected. This might be one hypothesis derived from the theory: Fearfulness may set up an expectation that causes a shadow on the wall to be mistakenly perceived as an intruder. If the hypothesis is verified by a well-designed research study, the theory becomes more believable. If the hypothesis is not verified, the theory may need to be modified to fit the new observation; in extreme cases, a hypothesis that is not verified can lead to a rejection of the theory.

So, to summarize, a theory is a set of statements that explain and organize separate observations and allow a researcher to make predictions, or hypotheses, about some possible future observations. Although the process may seem simple, scientific research is really a complex interplay among observations, hypotheses, and theories.

The figure is a flow chart with eight labeled boxes linked by arrows describing the relationship between observation, hypothesis, and theory. Three observations lead to a theory. The theory leads to a hypothesis. The hypothesis leads to the question: was the hypothesis verified? If yes, you will have more confidence in the original theory, which now explains four observations. If no, revise the theory or reject it for one that explains the four observations.

Finally, a scientist need not begin the process with observations. This is particularly true for a beginning researcher, who as a graduate or undergraduate student is generally considered an apprentice. Students often begin the research process by learning about theories, often reading about them in scientific journals. Then they can generate new hypotheses from the theory and continue the process from there.

hypothesis : A prediction that is generated from a theory.

  • In the introduction to this unit, we listed some questions —essentially some observations—about human behavior and mental processes (see the “Have you ever wondered?” section). Please turn to the introduction and select two observations. For each question:
  • Try to come up with a statement that explains why the type of event from the observation happens; this is your informal theory.
  • From your theory, try to generate a hypothesis about a new observation.
  • Have you ever participated in a survey? What do you think the researcher was trying to discover?
  • Have you ever been a subject of an observational study (that you are aware of)? What do you think a typical goal would be for a researcher who is observing people?

Research begins with a question. Driving home from work, you notice that solitary drivers seem to be driving faster than those with passengers, and you wonder if that is generally true, and if so, why? While trying to fall asleep, you decide to count sheep and discover that it does not help you fall asleep. You wonder, “Am I typical; is counting sheep to help sleep really a myth?” Or you notice that many of your classmates obviously enjoy school, whereas others barely endure it, and you wonder what makes some people dislike school. Many of the world’s greatest researchers are driven primarily by this kind of curiosity.

To answer their questions, researchers have many different specific research methods from which to choose; a researcher’s selection of a method will depend largely on their goals. If the goal is to describe people or determine whether different characteristics of people are related, the research method chosen will probably be a case study, survey, or naturalistic observation.

Case Studies

Let’s consider the research question of why some students dislike school. One way you might try to discover what makes people dislike school is to study in depth someone who happens to dislike it. You could interview him, his teachers, his parents, and his friends; you could observe him in school and at home; and you could study his school records. This technique is a research method known as a case study , a detailed examination of an individual person, or case. It is an excellent method for describing individuals, a reasonable goal of a research project. Psychologists use case studies, obviously, when they need to learn a great deal of information about one person.

There are many situations in which case studies are used. For example, psychologists and neuroscientists might conduct a case study on a patient who has suffered a brain injury to determine the types of mental abilities that have been compromised by the injury, as well as to suggest potential treatments or therapies that might be effective.

You might discover from a case study that our individual in question dislikes school because he had a horrible first-grade teacher who used to humiliate him in front of the other students. As he grew older, he realized that he hated being told what to do, preferring to do things his own way. Making matters worse, he had a big problem with the focus on grades; he would have liked it better if he could just learn without being graded. As a result, his grades were very low, and he had to endure a lot of harassment from his parents and teachers. In their never-ending attempts to improve their son’s grades, his parents tried threatening severe punishments when he got bad grades and offering money for good grades.

Our example reveals both the strengths and limitations of case studies. When done carefully, a case study provides us with an extraordinary amount of information about our individual. Because it relies on many specific techniques, the case study is the best method to gather this information.

Ordinarily when we do research, however, we want to know about more than a single person. In short, we would like to be able to draw conclusions about a larger group of people. In this particular case, we would like to be able to say something about people who dislike school in general. Unfortunately, however, when we use the case study method, we cannot do that. When we examine a single case, we never know whether we have picked an unusual case. For example, it is likely that many students who dislike school never had a first-grade teacher who humiliated students. Many other students dislike school even though they get good grades. In short, we have no idea how general our individual’s issues and experiences with school are. We need some kind of method that will allow us to draw conclusions about people in general.

Surveys

Perhaps if you asked a large group of people about their attitudes toward and experiences in school, you might be able to draw conclusions about people in general. For example, you could take the ideas generated from your case study and formulate them as questions to ask a large group of people: Did you ever have a teacher who humiliated you? Do you like or dislike being told what to do? And so on. This is the basic idea behind a survey: a researcher asks questions, and a group of participants respond.

There are very many details and options to be filled in, of course. For example, the questions may be asked in person, on paper, by telephone, or by computer. The questions may be open-ended, in which the participants are free to answer with any response they want, or closed-ended, in which a set of alternatives or a rating scale is provided for the participants.

Surveys serve a great many purposes for researchers. Generally, they are used to measure people’s attitudes, opinions, and behavior and to obtain demographic information (for example, gender, age, household income). Surveys’ greatest strength is efficiency. In order to draw conclusions about a group of people, it is not necessary to survey all of them. In fact, if a survey is done correctly, a very small sample of people within that group will suffice. For example, the Gallup Organization (one of the most well-respected survey companies) maintains a survey product called World Poll in which they represent 95% of the world’s population by surveying 1,000 to 2,000 respondents per country (for most countries).

There is one key limitation that is inherent to the survey method itself. People might lie. There is really no way around that. Researchers can insert questions designed to detect dishonesty in participants, but those questions can sometimes be pretty obvious. Two other problems with surveys are technically not limitations of the method, but they may just as well be because they are so common. Basically, people seem to misinterpret the simplicity of the basic survey concept to mean that surveys are simple to conduct. They are not. Researchers can fall into two major traps: They can ask the questions the wrong way, and they can ask the wrong people.

Question Wording.

Surveys are intended to measure relatively stable, sometimes permanent, characteristics of people. Unfortunately, however, people’s responses to survey questions can vary dramatically as a consequence of the way questions are worded. Biases due to question wording effects are quite common. Research has demonstrated their influence for many types of questions, including those measuring self-esteem (Dunbar et al., 2000), political party identification (Abramson and Ostrom, 1994), public support for a woman’s right to abortion (Adamek, 1994), and even respondents’ race (Kirnan, Bragge, and Brecher, 2001).

Question wording effects frequently occur without the researcher’s awareness of the problem. Even worse, if a researcher is dishonest, they can easily come up with biased questions to elicit a desired response.

Representative Samples.

In order to be able to draw conclusions about a large population from a relatively small group of respondents, the sample used in a survey must be a representative sample of the population from which it is drawn—that is, it must resemble the population in all important respects. For example, if a population of college students has 25% each of freshmen, sophomores, juniors, and seniors, so should the sample. The simplest way to ensure a representative sample is to use random sampling, in which every member of the population has an equal chance of being a participant in the survey.
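To make the idea concrete, here is a minimal Python sketch of simple random sampling from a hypothetical student roster; the roster size and the sample size of 400 are invented purely for the example.

```python
import random

# Hypothetical sampling frame: a roster of 8,000 student IDs.
# (Both the roster and the sample size are invented for illustration.)
population = [f"student_{i}" for i in range(1, 8001)]

# Simple random sampling: every student has an equal chance
# of ending up among the 400 people who get the survey.
sample = random.sample(population, k=400)

print(len(sample))   # 400
print(sample[:5])    # a peek at who was drawn
```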

Getting a representative sample is easier said than done, however, and it is difficult to accomplish even when professional survey companies are conducting the research. A great deal of surveying is still done by phone, and people are increasingly screening their calls and rejecting ones from unrecognized numbers. One company (ZipWhip) recently reported that in their research (no word on whether it was a representative sample), nearly 90% of consumers reported recently rejecting calls from unknown numbers. Although there is some research that suggests that caller ID information indicating a call is coming from a legitimate organization can increase response rates for some people (Callegaro et al., 2010), that still means that the sample might not be representative. Other difficulties commonly arise as well. Researchers often make several attempts to contact people who did not answer the first time a call was placed. Eventually, however, they will be forced to give up on some. Finally, just because the researcher gets the person on the phone, it does not mean the survey will be completed. Many people decline to participate, often believing that the call is not a legitimate survey but a telemarketing call. The result is a sample that may very well be unrepresentative of the overall population. And this is when the survey is being conducted by a high-quality, honest, professional survey company.

What if the organization conducting the survey is not being so careful? As you might guess, it is very easy to create an unrepresentative sample. For example, instead of telephoning people randomly, a researcher can purchase from a list broker a list of phone numbers for people who subscribe to a particular magazine or donate to a particular cause. In many other cases, surveys are based on what are called convenience samples, groups of people who are easily available to the researcher. For example, the researcher can go to a train station and approach people waiting for the morning train. Although convenient, this sample should not be considered representative of the adult US population. At least the researcher is maintaining some control over the sampling process by approaching potential participants. If a researcher who is using a convenience sample is careful and exercises good judgment, the resulting sample can be representative. For example, the researcher can approach individuals in several different locations and at several different times to try to capture a more representative sample.

When researchers rely on self-selected samples , however, there is no confidence at all that the sample is representative. A self-selected sample is one for which the researcher makes no attempt to control who is in the sample; inclusion in the sample requires the effort of the participants only (the participants “self-select” into the sample rather than being specifically asked to participate). For example, a survey is published in a magazine and all readers are invited to respond. Typically, very few choose to do so; those who do respond usually are more interested than most people in the topic and usually have very strong and often extreme opinions (that is probably why they responded). The opinions of these few people are often unrepresentative of the opinions of the rest of the readers.

Biased samples are an extremely common and serious flaw in a great deal of survey research. Be very cautious about accepting the results of a survey unless you know that an effort was made to obtain a representative sample.

Naturalistic Observations

Suppose you conduct a survey of a representative sample of students at your school, and you discover that students who dislike school tend to report that their instructors have unfair grading policies, often evaluating students on work that they were not aware was required. Then, thinking there might be more to the phenomenon than the survey indicates, you decide to follow your representative sample throughout the semester. By hanging around with students before, between, and after classes, you discover that the students who reported that their instructors were unfair often skipped most of their classes, including the first day when the requirements were given to students. You have just discovered the advantage of our third research method, naturalistic observation, over surveys. The researcher simply observes behavior without interfering with it at all. The goal is to try to capture the behavior as it naturally occurs.

The greatest strength of naturalistic observation is that behavior does not lie. As we indicated above, people responding to surveys sometimes do (or even if they do not lie, they are sometimes mistaken). For example, one naturalistic observation found that users of a self-serve copy machine at a university underreported the number of copies that they made (Goldstone & Chin, 1993). Imagine if the researchers had conducted a survey instead, asking people if they always reported the correct number of copies on the copy machine. Many of the people they surveyed would deny ever misreporting the number of copies. Although there are some survey procedures that make it more likely that respondents will be honest, you can never be sure. With naturalistic observation, you can be sure; behavior does not lie.

On the other hand, although behavior does not lie, it can be difficult to interpret. This is probably the greatest limitation of naturalistic observation. It can be extremely difficult to make objective observations, especially when the behavior being scrutinized is complex. For example, asking observers to count the number of aggressive acts committed by a group of children is difficult. For one thing, what counts as an aggressive act? This is a realistic example of the challenge facing researchers trying to conduct a naturalistic observation.

Correlation Coefficients: Representing Relationships

So far, we have discussed one major goal of case studies, surveys, and naturalistic observations: to describe characteristics of people, such as their behaviors, attitudes, opinions, and demographic information. All of these characteristics are called variables. A variable  is a characteristic that can take on different values for different members in a group—for example, gender is a variable, and male, female, and non-binary are three different possible values of that variable. Other variables are numerical; for example, a variable could be how much a respondent likes school rated on a 5-point scale, and the five individual ratings that can be given are the different values. When we conduct research for the purposes of describing individuals or groups, we call it  descriptive research .

Surveys and naturalistic observations can be used in correlational research , in which the goal is to discover relationships between variables, or correlations. For example, you might discover through a survey that students who give high ratings on the “liking school” question attend class more frequently than those who give low ratings. Another example: a survey conducted by an advertising agency several years ago found a correlation between golf watching and car buying; people who watch a lot of golf on television tend to buy more cars than those who do not watch much golf. When both variables are numerical, a statistic called the correlation coefficient  can be used to measure the direction and strength of the relationship between them.

First, the direction: A relationship, or correlation, can be positive or negative. A positive correlation is one in which high scores on one variable are associated with high scores on the other and low scores on one variable are associated with low scores on the other. For example, many faculty members have observed (no surprise) a positive correlation between the number of hours a student reports studying during the week prior to an exam and the student’s grade on the exam. High scores on studying (that is, a large number of hours studied) are associated with higher grades, and low scores on studying are associated with lower grades. On the other hand, faculty members have discovered a negative correlation between the number of times a student is absent from class and the student’s final grade in the class. A negative correlation, then, is one in which high scores on one variable (absences from class) are associated with low scores on the other (final grades). In a correlation coefficient, the direction of the relationship is indicated by the sign of the number: + for a positive relationship, – for a negative relationship.

Second, the strength, or size, of the relationship: It is given by the number itself. The strongest correlation will have a correlation coefficient of +1.0 or –1.0, meaning that the two variables are perfectly correlated. The closer the coefficient gets to zero, or the farther away from +1.0 or –1.0, the weaker the relationship. A correlation coefficient of 0 indicates that the two variables are unrelated.

To summarize:

  • A correlation coefficient measures the direction and strength of the relationship between two numerical variables.
  • It can take on a value between –1.0 and +1.0.
  • The sign of the correlation coefficient indicates the direction of the relationship.
  • The number itself indicates the strength of the relationship; closer to –1.0 or +1.0 indicates stronger relationships, closer to 0 indicates weaker relationships.
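If you want to see where the number comes from, here is a short Python sketch that computes a correlation coefficient (Pearson’s r) for some invented study-hours and exam-grade data; treat the specific values as illustrative only.

```python
import math

# Invented data: hours studied in the week before an exam and the
# exam grade, for ten students (numbers are for illustration only).
hours_studied = [2, 5, 1, 3, 8, 7, 4, 6, 9, 5]
exam_grades   = [61, 74, 58, 66, 90, 85, 70, 80, 92, 72]

def correlation_coefficient(x, y):
    """Pearson's r for two equal-length lists of numbers."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    covariation = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    spread_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    spread_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return covariation / (spread_x * spread_y)

r = correlation_coefficient(hours_studied, exam_grades)
print(round(r, 2))   # close to +1.0: a strong positive correlation
```

Swapping in a variable that moves in the opposite direction, such as the number of absences, would flip the sign to negative; the calculation itself stays the same.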

variable : A general characteristic of an individual that can take on a number of specific values.

Caution: Correlation Is Not Explanation

When two variables are related to each other, you can use one to predict the other. For example (assuming the relationship we suggested above is true), if you know how often a student comes to class, you can predict how much they like school. If you know how much golf someone watches on TV, you can predict how likely they are to buy a new car. The stronger the relationship is (the closer to +1 or -1 the correlation coefficient is), the more accurate your prediction will be.

Obviously, the prediction is important, and sometimes it is all a researcher wants to know. For example, it may be enough for a marketing researcher to know that golf watchers buy more cars. Armed with this knowledge, a car company can take advantage by advertising during televised golf tournaments. More commonly, however, researchers want to know whether one variable causes another. For example, if we determine that class attendance and liking school are positively correlated, we can certainly predict who will and will not like school from their attendance patterns. Ultimately, though, we want to know what causes people to like or dislike school, so we can try to help some students like it better.

Unfortunately, however, you cannot draw a causal conclusion about two variables simply because they are correlated. In other words, a relationship between two variables does not necessarily mean that one of them causes the other. We hope this information sounds familiar because it is a follow-up to Tip #6 from Module 1, the one about spurious correlations. Let us explain now why it is a mistake to assume that correlation implies causation. There are two problems with trying to infer a causal relationship from a correlation:

Directionality problem.

As the name of this problem implies, a correlation cannot indicate whether variable A causes variable B or variable B causes variable A. For example, attending class frequently might cause students to like school more. Or it could be the other way around: Liking school causes students to attend class more frequently. So the two variables might be causally related, but we do not know the direction.  

Third variable problem.

Perhaps some third variable causes the effect that you are interested in. For example, perhaps the frequency of attending class and liking school are not causally linked at all. Rather, there may be some third variable, or factor, that causes (or is related to) both of them. Perhaps it is something like student engagement. If students are engaged and involved in school, it will cause them to attend class frequently and to like school. Or, consider the golf and car buying relationship. It seems silly to think that watching golf causes people to buy cars. Rather, there is some third variable, in this case probably income, that causes people to buy more cars. Income also happens to be related to golf watching.
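The golf-and-cars example can be simulated in a few lines. In this sketch (all numbers are made up), income drives both golf watching and car buying, and the two end up correlated even though neither causes the other:

```python
import random

random.seed(1)

# Toy simulation of the third-variable problem. Income (the third
# variable) drives both golf watching and car buying; the two are
# never linked directly, yet they still end up correlated.
income = [random.uniform(30, 200) for _ in range(500)]              # in $1,000s
golf_hours = [0.05 * inc + random.uniform(0, 5) for inc in income]  # hours per month
cars_bought = [0.01 * inc + random.uniform(0, 1) for inc in income] # cars per decade

def correlation_coefficient(x, y):   # same helper as in the earlier sketch
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    covariation = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    spread_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    spread_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return covariation / (spread_x * spread_y)

print(round(correlation_coefficient(golf_hours, cars_bought), 2))   # clearly positive
```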

Remember the phrase that researchers use to remind themselves not to misinterpret correlations: Correlation does not imply causation .

Keep in mind, though, that the two variables in question might indeed be causally related. For example, many faculty members certainly believe that spending a lot of hours studying for an exam will cause you to get a better grade. All we have said is that the research methods we have discussed so far can only reveal correlations; they do not allow you to draw the causal conclusion you might seek. For that you need experiments. (sec 2.3)

  • Try to think of examples you have encountered of someone making the mistake of generalizing to a group of people from a single case.
  • Try to think of examples you have encountered of someone making the mistake of drawing a causal conclusion from a correlation (even if it was not part of a research study).
  • How do you define the word experiment? How might your definition differ from the definition that psychologists use?
  • How could you know whether ingesting caffeine improved memory for course material? Think carefully. What would you have to observe and what would you have to control to find an answer to this question?

Case studies, naturalistic observations, and surveys, although they are useful for describing phenomena and relationships between variables, may leave us wanting to know more. It may not be enough to know that two variables are related; we often want to know why they are related. Specifically, does one of them cause the other? For example, does playing violent video games cause aggression? Does high self-esteem cause happiness? In order to be able to draw a conclusion that one thing causes another, researchers turn to experiments as the method of choice.

Experiments

The term experiment has a technical meaning that is very different from the way it is used in everyday conversation. When people say that they are going to try an experiment, they mean that they are going to try out some new plan, just to see if it will work. To a researcher, an experiment is a far more precise concept.

The basic idea behind experiments, as researchers use the term, is quite simple. There are two main principles:

  • If you think one event is a cause and another its effect, you simply manipulate, or change, the cause and check what happens to the effect.
  • If you can rule out alternative reasons that the effect might have changed, you may conclude that the first event was indeed the cause.

Although the basic idea of experiments is simple, there are several important elements that make experiments a bit complicated.

Manipulate the Cause and Measure the Effect.

Now for an example so that you can see how these basic principles apply to an actual experiment.

Throughout this module, we have presented several ideas about why students might dislike school, based on potential case studies, surveys, or naturalistic observations. Of course, this is a very complex question, one with multiple answers. An individual experiment cannot examine every possible factor involved in a phenomenon, so a researcher should “divide and conquer.” (sec window 4) You should focus on a particular part of the problem, leaving other research questions for another day or another researcher. For example, you might narrow your focus to the ways that parents and teachers try to get children to complete their schoolwork.

Now it is time to move to the experiment. It seems rather unethical to purposely create children who dislike school (besides, it sometimes seems that we do a good enough job accidentally). Instead, researchers have come up with a research scenario that realistically mimics the effects we are interested in without causing any serious lasting damage. One specific experiment was conducted by Mark Lepper, David Greene, and Richard Nisbett (1973); instead of trying to cause children to dislike school, they tried to get them to dislike coloring. Lepper, Greene, and Nisbett proposed that controlling the children by providing rewards for an activity will cause them to dislike the activity—in other words, this was their hypothesis. (sec 2.1) To test this hypothesis, the researchers manipulated the cause (providing rewards) and checked whether the effect (liking coloring) changed.

The supposed cause is called the independent variable. It is the variable that was manipulated, or changed by the researchers. They had two groups of children: One was given a reward for coloring, and one was not; these are the two levels of the independent variable. The experimental group , the one the researchers were interested in exploring, was the group of children who got the reward. The control group —in this experiment, the group who did not get a reward for coloring—is a baseline group to which the experimental group can be compared.

The researchers predicted that the independent variable would influence how much the children like coloring. Liking coloring is the dependent variable —the supposed effect, or what the researcher measures. (To help you remember it, remind yourself that the dependent variable depends on the independent variable.) The researchers needed a measure of how much the children liked to color. In many cases, a survey can be used as a dependent variable—for example, simply ask the children how much they like to color. Because the children in this experiment were quite young, however, a survey seemed a rather poor choice. Instead, the researchers simply timed how long the children chose to color during a later free-play period.
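In data terms, the experiment boils down to one grouping variable (reward versus no reward) and one measured outcome (time spent coloring later). Here is a toy Python sketch of that comparison; the numbers are entirely invented and are not the study’s actual results.

```python
# Entirely invented free-play coloring times, in minutes, for two groups
# of children; the real study's numbers are not reproduced here.
reward_group    = [4, 3, 5, 2, 4, 3, 2, 4]   # experimental group: got a reward
no_reward_group = [8, 6, 9, 7, 8, 6, 7, 9]   # control group: no reward

# The independent variable is the group a child was placed in; the
# dependent variable is the time spent coloring, which gets measured
# and then compared across the two groups.
mean_reward = sum(reward_group) / len(reward_group)
mean_no_reward = sum(no_reward_group) / len(no_reward_group)

print(mean_reward, mean_no_reward)   # in this made-up data, rewarded children color less
```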

Diagram depicting the independent and dependent variables for the coloring experiment. The independent variable is whether or not a reward is given: the experimental group (the group the researchers are interested in) gets a reward, and the control group (the baseline, or comparison, group) receives no reward. The dependent variable, which helps identify the cause, is how long the kids choose to color; the experimental group colors for less time than the control group.

control group : The group to which the experimental group is compared.

dependent variable : The supposed effect. This is what the researcher measures.

experimental group : The group in which the researcher is interested.

independent variable : The supposed cause. This is what the researcher manipulates.

Rule Out Alternative Explanations.

So, the first principle of experiments is to manipulate the cause (independent variable) and measure the effect (dependent variable). In order to conclude that the manipulation is what caused any observed effect, the researcher must also apply the second principle of experiments and be able to rule out alternative explanations. The two main types of alternative explanations are:

  • The two groups were not the same at the beginning of the experiment.
  • The two groups were treated differently during the experiment.

Think about the first alternative explanation. If one group already liked coloring more than the other at the beginning of the experiment, a difference at the end could reflect this preexisting difference, not any effect of the reward. In order to ensure that the two groups are equal beforehand, the researcher can use random assignment  to groups. By simply randomly assigning children in the experiment to be in the control and experimental groups, the researchers are reasonably assured that the two groups will be equal in terms of how much they like coloring to begin with, how long their attention spans are, how much they like other activities—any variable that might be of interest. Although a very simple technique, random assignment is quite effective, as long as the groups have at least 20 members each (and even more effective when the groups are larger; Hsu, 1989).
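A minimal sketch of what random assignment looks like in practice, using an invented roster of 40 children: shuffle the list, then split it in half.

```python
import random

# Hypothetical roster of 40 children (placeholder names).
children = [f"child_{i}" for i in range(1, 41)]

# Random assignment: shuffle the roster, then split it in half, so every
# child has an equal chance of landing in either group.
random.shuffle(children)
experimental_group = children[:20]   # will be offered the reward
control_group = children[20:]        # no reward; the comparison baseline

print(len(experimental_group), len(control_group))   # 20 and 20
```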

Now think about the second alternative explanation. The two groups must be treated the same throughout the experiment so that the only difference between them is that one gets the reward and one does not. For example, suppose the group that got the reward used a different set of coloring books, one that had unappealing characters in it. If they later disliked coloring, it might be because of the reward or it might be because of the bad coloring books. There is no way of knowing for sure because the coloring book used was an extraneous variable or confounding variable, one that varies along with the independent variable (everyone who was in the experimental group got the bad coloring book, while everyone in the control group got the good coloring book). The researcher must make every effort to control these extraneous variables, ensuring that the different groups are treated the same (except for the independent variable, of course). Whatever is done to one group must be done to the other.

Good researchers exert strong control over the experimental situation—they manipulate only the suspected cause, assign participants to groups randomly, and treat the groups identically by controlling extraneous or confounding variables. As a consequence, they can conclude that the variables they are examining are indeed cause and effect with a high level of confidence.

confounding variable or extraneous variable : A variable that varies along with the independent variable. If confounding variables are not controlled, the researcher cannot conclude with confidence that any change in the dependent variable was caused by the independent variable. Same as extraneous variable.

random assignment : The division of participants into experimental and control groups so that each person has an equal chance of being in either group. It ensures that the two groups are equivalent. (Do not confuse random assignment with random selection, which applies to a person’s inclusion in a sample.) (sec 2.2)

There are three important confounding variables or extraneous variables that you want to make sure the experimenter has accounted for. First is the possibility of a placebo effect, which is a situation in which research participants’ expectations or beliefs alone can lead to a change in the dependent variable. For example, suppose you are testing whether Omega-3 fatty acid supplements can improve memory. You randomly divide your participants into two groups: your experimental group gets Omega-3 supplements and the control group gets nothing. Then, you test their memory. It turns out that the simple belief that the supplement can improve memory might do just that, and that is what the placebo effect is. You should realize that it is a confound because everyone in the experimental group knows that they have the Omega-3 and no one in the control group does.

The second issue is closely related to the placebo effect; it is called participant demand. It begins with two happy truths. First, participants are not passive lumps in research projects. Rather, they often actively try to figure out the purpose of the study they are involved in. Second, very few people are jerks. So once a participant thinks they have the study figured out, they try to produce the behavior expected of them. In the case of the Omega-3 study, they might assume the pills are supposed to improve their memory, try harder, and therefore improve their memory.

Note, in both placebo effect and participant demand, there is nothing about the Omega-3 itself that is improving memory, it is the participants’ expectations and beliefs. A third issue comes from the experimenters’ expectations and beliefs, so it is called experimenter expectations . If the experimenters strongly believe that Omega-3 will improve memory, they might subtly influence the participants to help them improve their memory, for example, by being extra encouraging with the experimental group.

Perhaps you noticed something about all three of these issues. It is the knowledge about who is in the experimental group and who is in the control group that is at the root of the problem. If we fix that, we fix the problem. The double-blind procedure fixes the problem. Double-blind means that neither the experimenters nor the participants know who is in the experimental group and who is in the control group (in other words, both are blind to the conditions). As long as the experimenter creates a control group that seems equivalent to the experimental group, these three confounds are controlled. Keep in mind, this often means creating a placebo condition for the control group—a condition that looks just like the experimental condition but without the essential part of the independent variable. You can see this most clearly in the case of an experiment with some sort of pill. The placebo would be a pill that looks and tastes exactly like the Omega-3 pill but has no Omega-3 in it.

double-blind procedure: an experimental research design in which both the experimenters and the participants do not know who is in the experimental group and who is in the control group.

experimenter expectations: a situation in which experimenters can subtly influence the outcome of an experiment because of their expectations.

participant demand: a situation in which research participants try to produce the behavior that they think is expected of them.

placebo effect:  a situation in which research participants’ expectations or beliefs alone can lead to a change in the dependent variable

Complex Experiments and External Validity

So far, we have described a situation in which researchers caused children to dislike coloring and suggested that we can use that result to draw conclusions about why some students dislike school. A researcher must be cautious about generalizing from one experiment to other situations. External validity  refers to the extent to which the experimental situation can be generalized to other situations, particularly to the real world.

One way to increase external validity is to repeat (replicate) the experiment in a number of different situations. For example, researchers can demonstrate the effects of providing rewards on liking for other activities, such as other games or sports. The more situations in which we can observe the phenomenon, the greater confidence we can have when we try to generalize it to an untested situation.

A second way that researchers can make their experiments more realistic (thus increasing external validity) is by making them more complicated. For example, instead of simply offering a reward versus no reward, the researcher can offer different levels of reward. In the original Lepper, Greene, and Nisbett experiment, there were actually three levels of the independent variable: Some children were given an expected reward, some were given an unexpected reward, and the rest were given no reward. Essentially, there were two experimental groups (the two reward groups) and one control group. Also, a researcher can introduce a second independent variable and manipulate both independent variables simultaneously. For example, a researcher might add an independent variable for “type of reward,” and possible levels could be “money” and “candy.” Experiments with more than one independent variable are called complex experiments , and they are sometimes more desirable than simple experiments because in real life the world also typically changes across multiple dimensions at the same time.

By the way, a researcher is not limited to two independent variables. It is possible to design experiments with several independent variables, each with several levels. But you should realize that the more complex the design, the more difficult it can be to interpret the results. Researchers often limit themselves to three or fewer independent variables per experiment.

complex experiment:  An experiment in which a researcher simultaneously manipulates two or more independent variables.

external validity : The degree to which the results of an experiment can be generalized to the outside world.

What if You Cannot Do an Experiment? What Then?

Third, even in situations where researchers cannot control confounds, as in a traditional survey design, they can use techniques like multiple regression that account for these confounds statistically. This technique allows the researchers to rule out some third-variable explanations (for variables that they measured), but others remain for variables that were not part of the study. For example, imagine that a team of researchers believes that sleeping too few hours per night leads to increases in stress. They could conduct a survey that measured students’ level of stress, the number of hours of sleep per night, and the number of credit hours they are taking. Using multiple regression, they can essentially estimate the relationship between sleep and stress for students who are taking the same number of credit hours, thus statistically controlling for credit hours and removing it as a possible third-variable explanation.
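As a rough sketch of what “statistically controlling” means, the following Python code fits a multiple regression (ordinary least squares, via NumPy) predicting stress from both sleep and credit hours; the survey numbers are made up for illustration, and the coefficient on sleep then reflects the sleep-stress relationship at a fixed credit load.

```python
import numpy as np

# Made-up survey responses for eight students: hours of sleep per night,
# credit hours this term, and stress rated on a 1-10 scale.
sleep_hours  = np.array([6.0, 7.0, 5.0, 8.0, 4.0, 7.0, 6.0, 5.0])
credit_hours = np.array([15.0, 12.0, 18.0, 12.0, 18.0, 15.0, 12.0, 16.0])
stress       = np.array([7.0, 5.0, 9.0, 3.0, 10.0, 5.0, 6.0, 8.0])

# Multiple regression via ordinary least squares:
# stress is modeled as b0 + b_sleep * sleep + b_credits * credits
X = np.column_stack([np.ones(len(stress)), sleep_hours, credit_hours])
(b0, b_sleep, b_credits), *_ = np.linalg.lstsq(X, stress, rcond=None)

# b_sleep estimates the sleep-stress relationship while holding credit
# hours constant, i.e., statistically controlling for credit load.
print(round(b_sleep, 2), round(b_credits, 2))
```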

longitudinal design : a correlational research design in which the same participants are measured over time.

quasi-experiment : A correlational design that has some, but not all, of the features of an experiment. It takes pre-existing group differences and treats them like an independent variable, and allows control of some confounds. It does not have random assignment to groups as in a true experiment.

  • What other strategies do you think would be useful to increase the external validity of an experiment?
  • Can you think of a situation in your life in which you might be able to adapt the rough idea of an experiment in order to figure something out?
  • Which research method (case study, survey, naturalistic observation, or experiment) do you think you would prefer if you were a research psychologist? What made you pick the one that you did?

Which opinion comes closer to your own?

  • Statistics are mostly used to lie.
  • I tend to trust research that refers to statistics.

Please give some reasons that support the opinion that you selected.

Psychological research often produces numbers, and researchers use statistics  to make sense of those numbers. Even if you never conduct research on your own, however, it is very important that you understand how statistics are used. You will undoubtedly encounter information that is based on statistical analyses throughout your career and your everyday life. The best way to evaluate that information is to have in-depth knowledge about statistical techniques. Thus, this section has a fair amount of detail about the statistical procedures discussed. This description will not make you an expert in statistics, but it is a good start on the road to understanding and evaluating statistics.

If there is one thing that many people really distrust about research, it is statistics. As Benjamin Disraeli (a British prime minister in the 1800s) once said, “There are three kinds of lies: lies, damned lies, and statistics.” (Many people attribute this quotation to Mark Twain, who admitted that Disraeli said it first.) It is true; you can lie with statistics. You know what, though? You can also lie without them. And if someone is going to try to lie to us, we hope that they use statistics. At least then we will have a fighting chance of discovering the lie because we understand the use and misuse of statistics.

The truth, however, is that most of the time statistics are not used to lie. Like any tool, if used correctly, statistics are very useful, even indispensable. Scientists use statistics for two purposes: to summarize information, usually called data, and to draw conclusions. Descriptive statistics  are used for the first purpose, inferential statistics  for the second.

descriptive statistics : Statistical procedures that are used to summarize information.

inferential statistics : Statistical procedures that are used to draw conclusions, or inferences.

statistics : Mathematical techniques that researchers use to summarize information and draw conclusions about their research.

Descriptive Statistics

Suppose you survey a random sample of students at your college about how much they like school on a 9-point scale (9 means they like it a great deal, 1 means not at all).

These are the data you collect (the individual ratings of 24 students, which are summarized in the frequency distribution chart below).

Even with a relatively small number of students, it would be hard to report the data for every single student any time someone wants to know how much the group likes school. Clearly, it would be easier to talk about your research if you could summarize these data.

One way you could summarize the data is by representing them graphically, as a frequency distribution. A frequency distribution  chart would show how many students gave each rating point. For example, from the chart below, you can see that two students rated their liking of school a 3, while four students rated it a 9.

[Frequency distribution chart: two students each rated their liking of school a 3, 5, 7, or 8; four students each rated it a 4 or a 9; and eight students rated it a 6.]
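If you wanted to build such a frequency distribution yourself, a minimal sketch using only Python's standard library might look like the following. The 24 ratings are reconstructed from the chart description above, so treat them as illustrative rather than as the original data set.

```python
from collections import Counter

# Ratings reconstructed from the chart description above (illustrative only):
# two students each gave a 3, 5, 7, and 8; four each gave a 4 and a 9;
# eight students gave a 6.
ratings = [3, 3, 4, 4, 4, 4, 5, 5,
           6, 6, 6, 6, 6, 6, 6, 6,
           7, 7, 8, 8, 9, 9, 9, 9]

# Count how many students gave each rating point -- a frequency distribution.
frequencies = Counter(ratings)
for point in range(1, 10):
    count = frequencies[point]
    print(f"{point}: {'*' * count} ({count} students)")
```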

A frequency distribution is very useful, but sometimes it is not enough of a summary. We might want to be able to report our results very briefly, using at most a number or two. In addition, if we want to conduct more advanced statistical tests designed to help us draw conclusions (inferential statistics—described below), we will need to provide the summary numerically. That is where the descriptive statistics known as measures of central tendency and measures of variability come in.

Measures of Central Tendency.

Measures of central tendency  tell you what a typical rating (or score) is, an obviously important piece of information if you want to summarize. The three specific measures of central tendency that you will encounter are the mode, median, and mean. Each represents a different way of defining what it means to be a typical rating.

One way to define a typical rating is as the most common one; this is the mode. The mode, then, is the most frequently occurring score. In the example above, the mode is 6; eight of the students rated their liking for school a 6, and no other rating point had as many students. The mode is also a useful measure when you have non-numerical data, such as gender.

The mode has a significant limitation, however. If you use it to report your typical rating, you only know what the single most common rating point is. What if there are other rating points that have nearly as many students? For example, what if our frequency distribution looked like this, with a few more students rating school a 9?

[Frequency distribution chart: eight students rated their liking of school a 6, seven students rated it a 9, and each of the other rating points had between one and three students.]

Reporting the mode, 6, as the typical rating seems somewhat misleading in cases like this, given that nearly as many students rated school a 9 as rated it a 6. In short, the mode uses a single rating only and discards information about all of the other numbers in the distribution. Thus researchers often opt to use a measure of central tendency that takes all of the ratings into account.

One measure of central tendency that accomplishes this goal, which is also the measure of central tendency most commonly used, is the mean. The mean  is what you probably know as the average; to compute it you simply add up all of the ratings in the distribution and divide by the number of students. The mean is the measure of central tendency that retains the most information about the original distribution because you use all of the ratings to calculate it. For example, the mean of the first distribution is 6.1. The mean of the second distribution (the one with the extra 9’s in it) is 6.6.

The main situation in which you should be cautious about using the mean as your measure of central tendency is when there are extreme scores. Think of it this way: If Bill Gates came to your next class, the mean net worth of the class would be approximately 30 billion dollars. Clearly, a figure like this is not exactly a typical score. In cases such as this, you are safer reporting the mode as the measure of central tendency.

In cases when the mean is not appropriate because of extreme scores, you might also choose to use the median as your measure of central tendency. The median is the rating that falls right in the middle of the distribution; half of the ratings are above and half below it. To find the median, simply order the students from lowest to highest rating and find the student that is in the middle (if the middle falls between two scores, average them to get the median). The median is more informative than the mode because it takes every score into account when locating the middle of the distribution. Still, it is not as sensitive as the mean, which uses the actual value of every score to calculate an average, rather than simply its position in the ordering. The median of both distributions in this section is 6, the same as the mode.
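Here is a minimal sketch showing how the three measures of central tendency can be computed with Python's built-in statistics module, using the same reconstructed ratings as before. The values it prints (mode 6, median 6, mean 6.1) match the figures reported in this section.

```python
import statistics

# The same 24 reconstructed ratings used in the frequency-distribution sketch.
ratings = [3, 3, 4, 4, 4, 4, 5, 5,
           6, 6, 6, 6, 6, 6, 6, 6,
           7, 7, 8, 8, 9, 9, 9, 9]

print("mode:  ", statistics.mode(ratings))            # most frequent rating -> 6
print("median:", statistics.median(ratings))          # middle rating -> 6.0
print("mean:  ", round(statistics.mean(ratings), 1))  # arithmetic average -> 6.1
```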

It is worth stepping back at this point and realizing that statistics can be misleading. Sometimes, researchers misuse statistics intentionally, sometimes inadvertently. We have already mentioned the first way that these measures can be misused: by allowing a small number of cases to influence the measure (that is, reporting the mean when the median or mode would be more appropriate because of extreme scores).

A second important misuse of central tendency is to confuse the idea of typical score with typical person. Suppose we give an exam for which half of the students get a score of 100% and half get a 50%. The mean (and median) score is 75%. We have been referring to this as the typical score, but in this situation, there is not a single person who earned a 75%. The mean is a statistical figure that need not correspond to an actual person. Hence, it would not make any sense to say that the typical student in the class received a 75%.

A related misuse is to take several measures of central tendency for different variables and use them to draw a composite, acting as if the composite was a typical person. For example, suppose the following are all true of graduates at your school:

  • Modal (most frequently occurring) gender: female
  • Modal major: business administration
  • Mean GPA: 3.2
  • Mean time to graduation: 4 years, 2 months

You cannot conclude that the typical graduate at your school is a female business administration major with a GPA of 3.2 who took 4 years, 2 months to graduate. Again, there may not be a single person at the whole school who matches this composite. Reporting a composite like this can be misleading in other ways, too. For example, although the overall mean GPA may be 3.2, the mean GPA of females is probably higher than 3.2 because females tend to have higher GPAs than males.

mean : The arithmetic average of a distribution (add up all scores and divide by the number of scores); a measure of central tendency.

measure of central tendency : A descriptive statistic that conveys what a typical score of a distribution is.

median : The score in the middle of a distribution (half the scores are above, half are below); a measure of central tendency.

mode : The most frequently occurring score in a distribution; a measure of central tendency.

Measures of Variability.

A second useful piece of summarizing information is how close the rest of the scores are to the typical score. We use  measures of variability  for this purpose.

You will generally encounter two main measures of variability: the variance and the standard deviation. The variance is the average squared difference of each individual score from the mean; it tells you how spread out the distribution of scores is. Because the variance is expressed in squared units, many researchers report the standard deviation, which is the square root of the variance and is therefore in the same units as the original scores. In addition to their usefulness in summarizing a distribution, these measures of variability are needed in order to calculate inferential statistics.
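Continuing the same sketch, Python's statistics module can also compute these measures of variability. The functions below use the population formulas (dividing by the number of scores), which matches the definition given here; many research reports instead divide by one less than the number of scores, so published values can differ slightly.

```python
import statistics

# The same 24 reconstructed ratings as before.
ratings = [3, 3, 4, 4, 4, 4, 5, 5,
           6, 6, 6, 6, 6, 6, 6, 6,
           7, 7, 8, 8, 9, 9, 9, 9]

# Variance: the average squared distance of each rating from the mean
# (pvariance uses the "population" formula, dividing by the number of scores).
variance = statistics.pvariance(ratings)

# Standard deviation: the square root of the variance, in the original units.
std_dev = statistics.pstdev(ratings)

print(f"variance:           {variance:.2f}")
print(f"standard deviation: {std_dev:.2f}")
```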

measure of variability : A descriptive statistic that conveys how spread out a distribution is.

variance : A measure of variability composed of the average squared difference of each individual score from the mean in a distribution.

standard deviation : A measure of variability calculated as the square root of the variance of a distribution.

Inferential Statistics

Once data have been summarized using descriptive statistics, researchers will usually want to draw conclusions about them. For this, they use inferential statistics. Suppose you did the survey of liking school and found that the ratings given appeared to differ by major. Specifically, business majors rated their liking of school a mean of 6.2, while psychology majors rated it an average of 8.3. You might look at the difference, 6.2 versus 8.3, and think that it seems like a pretty big difference. On the other hand, you might think that this difference is quite small, so small that it might simply have resulted from random variation. After all, people are all different from each other. If you take any two groups of students at random and compare their average ratings for liking school, you will very rarely find the exact same number for both groups. The key question is, how different do the two averages need to be in order for you to believe that it is a reliable difference, and not one that is just a random variation?

Inferential statistics allow you to draw this conclusion. A difference between groups is called statistically significant if the inferential statistics determine that it is very unlikely that you would observe a difference as large as the one you did if, in fact, there is no true difference. In other words, there must be some real, reliable difference between the groups that led to the different average scores. There are many different specific inferential statistical techniques; a researcher chooses the appropriate one to use based on the specific characteristics of the research.
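As one concrete illustration of how such a conclusion is reached, the sketch below runs an independent-samples t test, one common inferential statistic, on made-up ratings for the two majors. The data are hypothetical and were chosen only so that the group means are roughly 6.2 and 8.3, as in the example above; the scipy library is assumed to be installed. The test reports a p value: the probability of seeing a difference at least this large if there were no true difference between the groups.

```python
from scipy import stats

# Hypothetical (made-up) liking-of-school ratings; the group means are
# roughly 6.2 for business majors and 8.3 for psychology majors, as in the
# example above, but the individual numbers are ours, not real data.
business   = [5, 6, 6, 7, 6, 7, 5, 7, 6, 7]
psychology = [8, 9, 8, 8, 9, 7, 9, 8, 9, 8]

# An independent-samples t test asks: how likely is a difference at least
# this large if, in truth, there is no difference between the two groups?
t_statistic, p_value = stats.ttest_ind(business, psychology)

print(f"t = {t_statistic:.2f}, p = {p_value:.4f}")
# A small p value (conventionally below .05) is what researchers mean by a
# "statistically significant" difference.
```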

Inferential statistics can also be misused. For our money, the most important point about inferential statistics is that statistical significance and practical significance (or importance) are completely different concepts. It is an unlucky accident that the word significance is used in both statistics and everyday language, because the two senses of the word are unrelated. Statistical significance refers to the reliability of an observed difference. As we said above, a difference is called statistically significant if it is unlikely that you would observe a difference as large as the one you did in your research if, in fact, there is no true difference in the world. Practical significance refers to the importance of some event. Many people assume that the two concepts are the same, but a statistically significant research result might have no practical significance whatsoever.

Some people also act as if the use of inferential statistics automatically confers legitimacy on a research result. Although a difference may be statistically significant, if it is based on a poorly designed study, the conclusion will be suspect.

Richard Nisbett and his colleagues (1983; 2002) have found that training in statistics helps people to avoid statistical reasoning errors, which can lead to baseless beliefs. (sec 1.2) Many people dread and avoid taking statistics classes in college. But perhaps if they realized the great benefit of understanding statistics for their everyday judgments and the evaluation of information presented to them, they would approach statistics classes more eagerly.

inferential statistics : Statistical procedures that researchers use to draw conclusions about their research.

statistical significance : A judgment that a difference observed in a research project is larger than what you would expect if, in fact, there is no true difference in the world. You would therefore judge that your observed difference reflects a true difference.

Once again, which of the opinions from the Activate section comes closer to your own?

Again, please give some reasons that support the opinion that you selected. This time, please also give some reasons that support the opinion you did not select.

  • How does it make you feel if someone purposely embarrasses you, makes you feel inferior, or harms you?
  • Have you ever participated in a survey or other psychological research? Was it a pleasant or unpleasant experience?

If you are attending a large school, particularly one with a graduate program in psychology, you will likely be given the opportunity to participate in research as part of your General Psychology course (sometimes, that “opportunity” feels a little bit like a requirement). If you advance in psychology, you will even have the opportunity to conduct your own psychological research; if you go to graduate school, it will probably be a requirement. The rest of you might gain first-hand exposure to psychological research methods as potential participants in marketing research projects. In all three situations it is important to understand the goals and procedures for conducting ethical research.

Participating in research projects as part of a General Psychology course is an excellent way to see first-hand the research methods that are used to generate the psychological knowledge that is described in this book. (secs 2.2/2.3) Some students get interested enough in psychological research from their experiences as participants that they end up going to graduate school as a result.

On the other hand, if participating in studies is unpleasant, people will become hostile to research and psychology. Therefore, it is in researchers’ best interest to make their projects as pleasant, interesting, and educational for participants as possible. The ethical guidelines that have been developed by the American Psychological Association can help researchers achieve these goals.

Imagine the following scenario. As a participant in a research study, you are asked to have a conversation with five other students about your adjustment to college life. In order to help you speak honestly and freely, the conversation will take place over an intercom, so you are the only person in a room. During the conversation, one of the students lets it slip that he suffers from epilepsy. A few minutes later, he actually has an epileptic seizure and cries out for help before choking and going silent. As you sit in your room wondering whether you should help, how you should help, or whether the other participants in the conversation helped, the experimenter walks into your room and informs you that the study is over. The experimenter was not, as he had stated, interested in your adjustment to college life, but was testing whether or not you would help in an emergency. All of your anxiety and distress was for naught; the student with the seizure was a tape-recorded actor. As a consequence of participating in the study, you are embarrassed, distressed, and feeling more than a little guilty over your failure to help.

Based on what we have told you about the experiment so far, is this research ethical or unethical? In other words, should the researcher have been allowed to subject you and the other participants to such treatment?

Some people say no. The treatment of the research participants was so distressing, so upsetting that the research should not have been allowed. This is the research equivalent of the doctor’s Hippocratic oath, which is “Above all, do no harm.” Specifically, you might note that the researchers deceived the participants about the true nature of the experiment, led them into a very distressing situation, and made them realize that they would have allowed a person in need to die. It was wrong to do these bad things to the participants, so the research was unethical.

Others argue that it is too simplistic to decide that the research was unethical only because it was distressing to the participants. They wonder why the study was conducted, what we learned from it. For example, how many people failed to help the student with the seizure? In the experiment just described, originally conducted by John Darley and Bibb Latané (1968), 40% of the participants did not go for help at all during the experiment (the period after the seizure until the end of the experiment was four minutes). In addition, even those who did go for help did so fairly slowly. This sounds like an important fact to learn about people. Perhaps the distress suffered by the participants is justified by the potential benefit of learning something new and important about human behavior.

This second argument corresponds to the way that many decisions about the ethics of a research project are made. The potential benefits of a study to society are weighed against any adverse treatment of participants.

By the way, some people, upon hearing the results of research like this, are tempted to conclude that the research is ethical because the participants deserved the distress that they suffered. They failed to help someone in trouble, so they deserve to feel bad about themselves. This conclusion, however, fails to take into account the basic lesson taught by studies like this one. Very many people, in some experiments most people, fail to help. The experiment was specifically set up to create a situation in which people would fail to help. The ease with which the researchers were able to get people to fail to help is not an indication of the weakness or badness of the individual participants but of the power of a situation to influence people’s behavior.

Ethical Guidelines for Human Research

Psychologists usually conduct research under the auspices of a government agency, a corporation, or a university. Regardless of where the studies are conducted, the sponsor usually requires that research participants be treated ethically. Of course, researchers should be sensitive to their treatment of human participants in any case.

Decisions about whether a research study is ethical are made by the institutions where the research will be conducted. These decisions are often made by institutional review boards, and they are made in reference to a set of ethical guidelines that have been prepared by the American Psychological Association (APA).

Institutional Review Boards.

The U.S. federal government requires that any research that is supported or regulated by any federal agency must be approved by an institutional review board, or IRB. An IRB is a committee of institutional and community members who evaluate all proposed research that involves human participants at an institution. Most research that is conducted at colleges and universities across the United States is covered under this policy.

In addition to approving research projects, IRBs also ensure that the specific procedures conform to the guidelines for ethical research. They ensure that informed consent is obtained, that participants are treated with respect, and that they are given opportunities to withdraw from the study and to receive the results.

institutional review board (IRB) : A committee composed of members of an institution where research is to be conducted and community members, whose job it is to approve or disapprove individual research projects and to ensure that ethical guidelines are followed when those projects are conducted.

APA Research Guidelines.

In psychology, guidelines for ethical research have been established by the American Psychological Association, most recently revised in 2017 (APA, 2017). Although the full guidelines for ethical behavior of psychologists are quite detailed—there are ten individual standards, each with multiple sub-requirements—the specific guidelines that pertain to research are quite understandable and sensible. The guidelines deal with several important aspects of research procedures:

Informed consent. This guideline is the cornerstone of the ethical treatment of research participants. Basically, participants must know “what they are signing up for.” Researchers must obtain voluntary consent from participants in a research study only after they have fully informed the participants about the procedure, paying special attention to any risks involved. Any piece of information that might reasonably lead some people to decline to participate must be disclosed. Participants also must be informed whether there is any consequence if they choose to terminate participation before the study is completed. They must also be informed of the degree to which the results are confidential.

For some studies, however, fully informed consent would literally ruin the research. Consider, for example, the study on helping behavior described earlier. If participants had been informed that they were about to take part in an experiment designed to test whether they would help someone experiencing a medical emergency, they would surely have been quicker to offer help. The research would have discovered little about how people are actually inclined to act in an emergency. In short, sometimes it is necessary to disguise the true purpose of a study. These “deception studies,” as they are called, must undergo extra scrutiny by IRBs, and the researcher is obliged to take precautions to minimize any potential negative effects of the deception. First, the researcher must be able to justify that the benefits of the research outweigh the cost of deceiving participants. They must also demonstrate that there is no alternative to the deception. Then, as soon as possible after the conclusion of the research session, the participants must be told about the deception, why it was necessary, and the true purpose of the study.

Freedom from coercion . The principle of informed consent implies that potential research participants must not be forced, or coerced, to participate in a study. This is a simple-seeming requirement, but coercion can be quite subtle. Excessively large incentives for participation, for example, are considered a form of coercion. Also, it is common for psychology professors to conduct research using students from their school, sometimes from their own classes. Often, course credit or extra credit is awarded for participation. In such cases, an equivalent alternative opportunity for students to receive the credit must be provided. For example, a professor cannot require students to choose between participating in a one-hour experiment or writing a 20-page paper. This false choice is really coercion in disguise.

Respect for people’s rights, dignity and safety . Researchers must respect participants’ right to privacy or confidentiality (unless they waive those rights through informed consent). Participants must also be protected from harm. Obviously, physical harm is included in this guideline, but so is psychological harm, such as feeling embarrassed or foolish. In the study of people’s helping behavior, for example, the participants needed to be told that their failure to help is what the psychological theory predicted and that very many people put into the same situation also fail to help. If a researcher discovers that a participant was harmed in any way, they must take reasonable steps to reverse the harm. IRBs can be very conservative when it comes to the potential harm guideline. Often, they reject any study that subjects participants to greater distress than they would expect to experience in the normal course of a day.

Debriefing . After the conclusion of a study, research participants are offered the opportunity to learn about the purpose and results of the full study.

The Case for Strict Ethical Guidelines

The research section of the APA guidelines is only a portion of the entire APA Code of Ethics . The rest of the code describes the guidelines that professional psychologists must follow in the publication of their research, in their relationships with clients or patients, and in their teaching activities. You can see the entire ethics code at www.apa.org/ethics .

Most psychological research does not pose an ethical problem. Research procedures and treatment of participants are generally quite tame, usually much less distressing and threatening than many events that we experience every day. So is this great attention paid to ethics overkill? Should institutional review boards be more lenient and allow research to be conducted without seeking informed consent and without making the research educational for participants?

Probably not. As we mentioned before, if the discipline regularly allowed research participants to be treated with disrespect or misused in some way, before long all research would grind to a halt as participants became unwilling to participate (or coerced participants purposely sabotaged the research).

Besides, some research does present a sticky ethical situation, and IRBs may be the only way to identify the risks and protect potential participants. If a researcher wants to know if temporarily lowering people’s self-esteem makes them more aggressive, for example, there is no alternative to a research procedure that will make participants feel inferior, such as falsely informing them that they performed very poorly on some test. In the Darley and Latané study described above, in order to examine people’s reactions in an emergency, the researchers had to create a realistic emergency, complete with serious consequences and the accompanying anxiety and distress. The simple fact is, many important discoveries were made during research that inconvenienced, embarrassed, stressed, or even insulted research participants. When this happens, though, it is absolutely essential that the researcher make every attempt to reverse any negative effects for the participants.

Ethical Guidelines for Animal Research

Animal research is often very controversial. Animals cannot decline to participate, and they often cannot communicate to us that they are experiencing distress. Fortunately, very little research in psychology is conducted with animals. And of the psychological animal research that is done, a relatively small portion harms the animal in any way.

The decision to allow animal research is based on the same cost-benefit argument used to evaluate human research: Does the benefit outweigh the potential harm? Many important discoveries were made by conducting research on animals. For example, one of the most important theories of depression, called learned helplessness, was examined in research with dogs and rats (Seligman & Maier, 1968; Rosellini & Seligman, 1975). (sec 24.2) These discoveries have improved, even saved, the lives of millions of people. According to the cost-benefit approach, the cost of harm to a few animals is small compared to the benefit to people.

Nevertheless, the famous chimpanzee observer Jane Goodall (1999) has suggested that captive animal research, although sometimes currently necessary in order to save human lives, is unethical. Hence, we should strive to develop alternative experimental methods with the goal of eventually ending animal research.

In the meantime, we must treat captive animal research subjects as humanely as possible. The American Psychological Association provides guidelines for the ethical treatment of animals in research. Psychologists who supervise research with animals must be trained in humane research methods and experienced in caring for animals. Anyone involved in the research must receive training in research and care as well. Harming animals is allowed only when absolutely no other procedure is available. If surgery is performed, anesthesia for pain and follow-up treatment for infection must be provided. If an animal’s life must be terminated, it must be done quickly and with a minimum of pain.

When deciding whether a research project is ethical, which approach do you prefer?

  • Ethical status is decided by the features of the research project alone.
  • Ethical status is decided by an analysis of the tradeoff between the costs and benefits of a research project.

How might you apply these approaches when deciding whether your own non-research activities are ethical?

In most of this unit we described how psychologists think about the world and how they discover knowledge about human behavior and mental processes. Here we turn our gaze inward, so to speak, and examine how psychologists think about their own discipline.

We, like many psychologists, were originally drawn to the discipline because of our observations and curiosity about everyday phenomena. Even now, we are continually fascinated by the events and behaviors that we witness daily. Even more so, we are intrigued by how these everyday phenomena fit into the discipline of psychology. The field of psychology is divided into several subfields; each subfield is concerned with topics that are loosely related to a set of similar everyday phenomena.

If you decide to become a psychologist, or more likely, if you decide to major in psychology, you will have to think about the discipline in a new way, too. Specifically, you will have to consider what career options are available to you.

This module is divided into two parts. One section describes the organization of the field, and the other section describes career options for psychology majors.

3.1  Psychology’s Subfields and Perspectives

3.2  Career Options for Psychology Majors

By reading and studying Module 3, you should be able to remember and describe:

  • The major psychological subfields: biopsychology, clinical psychology, cognitive psychology, developmental psychology, industrial/organizational psychology, personality psychology, social psychology (3.1)
  • The minor psychological subfields: community psychology, consumer psychology, educational psychology, health psychology, human factors/engineering psychology, forensic psychology (3.1)
  • The main psychological perspectives: biological, cognitive, learning, psychodynamic, sociocultural (3.1)
  • Skills that employers value (3.2)
  • Common careers available to undergraduate psychology majors (3.2)
  • Career options for students with master’s and doctoral degrees in psychology (3.2)

By reading and thinking about how the concepts in Module 3 apply to real life, you should be able to:

  • Demonstrate how different classes are helping you acquire the skills that employers value (3.2)

By reading and thinking about Module 3, participating in classroom activities, and completing out-of-class assignments, you should be able to:

  • Outline how psychologists from different perspectives might approach a specific research question (3.1)

3.1 Psychology’s Subfields and Perspectives

  • Think of about a dozen questions that interest you about human behavior and mental processes. Do your questions all seem similar to you or all seem dissimilar? Try to organize them into distinct groups.
  • If someone asked you to subdivide the field of psychology, how would you do it? Do you think that your division would be the same as a psychologist’s?

As students begin to learn about different disciplines in college, many are surprised to discover how complex the organization within each discipline can be. For example, think about biology. It is divided into several subfields, such as animal physiology, biochemistry, cellular biology, molecular biology, ecology, evolutionary biology, and neurobiology. The subfields are related to each other in complex ways, and several of them are related to other disciplines, such as medicine, biotechnology, and natural resources (and psychology).

Psychology is no different. It has several major and minor subfields, divisions of the discipline based on topics. Some of the subfields are themselves divided into sub-subfields. In addition, some subfields are beginning to merge, thus creating new combination subfields. To give you an idea of the complexity, the American Psychological Association has 54 divisions; most are devoted to specific subdivisions or subfields. Altogether, the divisions in the field of psychology make for an extraordinarily complex discipline.

That is not the end of the complexity, though. Psychologists who are interested in the same topics or who labor within the same subfields may adopt different perspectives. The division of psychology into perspectives provides an alternative way to organize the field.

Psychology’s Subfields

Psychology has a small number of major subfields, reasonably broad groupings of psychologists who are interested in similar topics within the discipline. The subfields correspond to the departmental divisions that you will find in a large university’s psychology department and to the names of many second- and third-year psychology courses. They also correspond roughly to the major units within this book.

Here are descriptions of the subfields, along with some of the major topics covered in each:

Biopsychology (or biological psychology). Concerns itself with the biological underpinnings of behavior and mental processes. Biopsychology can hardly be called a subfield, however, as its content is distributed across the entire discipline. Any time psychologists are interested in brain areas, brain and nervous system activity, physiological states, hormones, or evolution, they are working in the subfield of biopsychology.

Clinical psychology . Uses psychological theory to understand and treat psychological disorders and promote adjustment and personal development. Many clinical psychologists provide therapy to individuals; others conduct research and teach.

Cognitive psychology . Studies knowledge—what it is and how it is learned, understood, communicated and used. Cognitive psychology is the psychology of everyday thinking. It includes such topics as reasoning and problem solving, memory, language, judgment and decision making, and perception.

Developmental psychology . Examines how people change and how they stay the same throughout the life-span. Like biopsychology, developmental psychology cuts across all of the other subfields. For example, a developmental psychologist with an interest in biopsychology might be interested in what happens to children’s brains as they mature. A specialist in cognitive development might be interested in the differences in memory ability for children, adolescents, young adults, and older adults. A social or personality development psychologist might examine whether a personality trait such as shyness tends to change or stay constant throughout a person’s lifetime.

Industrial/organizational psychology. Applies psychology to the workplace. It is roughly divided into human resources topics and organizational psychology. Human resources topics include selecting, training, rewarding, and retaining workers. Organizational psychology is essentially applied social psychology. It is concerned with such topics as group functioning, leadership and management, motivation, and job satisfaction.

Personality psychology. Focuses on the characteristics of individual people, such as personality traits. Personality psychologists and social psychologists (see below) are interested in many of the same topics, so the two subfields are very closely related.

Social psychology. Seeks to understand how people think about, influence, and relate to one another. Topics of interest include aggression, prejudice, persuasion, romantic attraction, friendship, group processes, and helping behavior. Social psychologists have a particular interest in how situational factors influence these phenomena.

Some minor subfields are also important to know, including community psychology, consumer psychology, educational psychology, health psychology, human factors/engineering psychology, and forensic psychology.

Psychological Perspectives

Psychologists who work in the different subfields tend to be interested in different phenomena or topics. For example, a cognitive psychologist might be interested in how information gets put into memory, while a social psychologist might be interested in how stereotypes develop. At the same time, psychologists who work in particular subfields develop characteristic approaches. For example, cognitive psychologists tend to prefer experiments as their research method, and they (obviously) focus on the cognitive causes of behavior.

Division by topic is not the only way to organize psychology. Another way is on the basis of different perspectives, the approaches or lenses through which psychologists may view a single phenomenon. For example, consider the phenomenon of depression. The subfield that is most directly related to depression is clinical psychology, of course. Depression is of interest to psychologists in a variety of subfields, however, and it can be viewed through several perspectives:

Biological perspective. Similar to the subfield of biopsychology, the biological perspective seeks to explain psychological phenomena by discovering the biological causes, such as brain and nervous system activity, brain structures, hormonal influences, and so on. A psychologist who takes a biological perspective on depression might note that it is related to an irregularity in the neural transmission process, the process through which individual cells in the nervous system send chemical signals to other cells.

Cognitive perspective. Similar to the cognitive psychology subfield, the cognitive perspective seeks to explain psychological phenomena by discovering the causes that are related to patterns and styles of thinking. From a cognitive perspective, a psychologist might note that particular patterns of thinking, such as blaming oneself for failures, seem to be related to depression.

Learning perspective. Many phenomena can be understood as examples of learning from experience. The learning perspective often focuses on observable behavior. A psychologist with a learning perspective might emphasize how a depressed person is rewarded for his or her passive behavior.

Psychodynamic perspective. In the early 20th century, Sigmund Freud developed the psychoanalytic perspective, which views human personality and behavior as a reflection of conflicts between hidden desires and social restraints. The original psychoanalytic perspective was extremely influential throughout the first half of the 20th century. It has developed into the modern psychodynamic perspective. This newer perspective retains the key assumptions about conflicts from the original psychoanalytic perspective but drops some of the more controversial aspects, such as Freud’s emphasis on childhood sexuality. A psychodynamic psychologist might emphasize how depression results from negative feelings left over from unresolved conflicts.

Sociocultural perspective. The sociocultural perspective examines the role of social forces and culture on psychological phenomena. An important piece of the sociocultural perspective is the cross-cultural view. It examines the role of culture on psychological phenomena by exploring the similarities and differences between people throughout the world. A psychologist who takes a sociocultural perspective might note that the decline in social connections that has affected the United States since 1960 correlates with the increase in depression over the same period. A psychologist taking a cross-cultural approach might compare rates of depression in different parts of the world.

No one perspective provides the answer to every psychological question. All can be correct simultaneously. Together, they give a more complete picture of a phenomenon than each perspective can alone. For example, in the study of depression, the answers suggested by all of the perspectives provide a much fuller explanation than any one perspective can by itself.

The Intersection of Subfields and Perspectives

To summarize (because the distinction between subfields and perspectives can be hard to keep straight):

  • The organization of psychology into subfields reflects psychologists’ interests in different topics. Psychologists who are interested in similar topics work in the same subfield.
  • The organization of psychology into perspectives reflects psychologists’ preferred approaches to studying a topic. Psychologists may be interested in the same topic but study it from different perspectives.

This book tends to be organized around subfields, grouping topics more or less the way professional psychologists do. But sometimes you can see signs of the psychological perspectives. For example, because the biological perspective has become so important in recent years, we often include a description from that perspective for a topic more often linked with a subfield like cognitive psychology or clinical psychology. Occasionally, particularly for complex and important phenomena, such as depression, we will draw from multiple perspectives.

  • For each of the questions you generated in #1 in the Activate section, try to pick which subfield seems the most appropriate source of answers.
  • Try to summarize how psychologists from two different perspectives might view each of the questions that you generated in #1 in the Activate section.

3.2 Career Options for Psychology Majors

  • What is your major or your intended career? (Which way are you leaning if you haven’t decided yet?) Why have you chosen the major and career that you have?
  • What kinds of skills that you are acquiring in college will help you to succeed in your intended career?
  • Have you ever heard anyone say that you cannot get a job with a bachelor’s degree in psychology? Do you believe that statement?

Most of the people who read this book will not major in psychology. Indeed, out of the more than 1 million U.S. students who take General Psychology every year, only about 94,000, or at most 9%, go on to major in psychology (Goldstein, 2010; NCES, 2010). On the other hand, 94,000 is a very large number; psychology is a common college major.

In the event that you are one of the people who are intrigued by their first course in psychology and decide to make it your major (or have already decided to major in psychology), this section provides some information about what majoring in psychology will do for you in your future career and about whether an undergraduate degree or an advanced degree is necessary for success. Even if you do not major in psychology, you can use the information in this section to start thinking about how to make the most of your undergraduate experience and about the many different career options that are available for almost any major.

In preparation for writing this module, we previewed a well-known textbook in psychology (we won’t tell you which one because we are about to criticize it). In their section on career options for students with degrees in psychology, they devoted five times as much space to graduate degrees as they did to undergraduate degrees (and the pictures were better too). That might seem sensible at first, as there are more graduate degrees to talk about and it is the career path that many future psychology grads are interested in. The truth is, however, that the majority of students who major in psychology do not end up going to graduate school. By focusing on the graduate school path, nearly to the exclusion of the more common undergraduate-only path, textbooks contribute to one of the most damaging myths about the psychology major, that you have to go to graduate school to get a job. That has never been true, as you will see.

What Useful Skills Do Psychology Majors Develop?

Many types of employers, in many different fields, routinely hire psychology majors because of the skills they cultivate in pursuit of their degree. Eric Landrum and Renee Harrold (2003) conducted a survey of 87 businesses that hire psychology majors and found that a few of the most important skills are:

  • Ability to listen
  • Ability to work on a team
  • Ability to get along with other people
  • Willingness and ability to learn

More recently, many researchers have confirmed that these skills, along with several others, are still essential for successful college graduates to possess. For example, the National Association of Colleges and Employers (2018) identified the following skills among the top ten that employers seek in college graduates:

  • Problem-solving skills
  • Written communication skills
  • Strong work ethic
  • Analytical/quantitative skills
  • Oral communication skills
  • Detail-oriented
  • Flexibility/adaptability

Whether you end up majoring in psychology or not, you should look for opportunities to develop these kinds of skills. You should be aware, however, that psychology courses not only give you opportunities to practice these skills, as do many other college courses, but also often give you the theoretical knowledge to apply them in new situations.

Careers with an Undergraduate Psychology Degree from A to Z

“You can’t get a job with a Bachelor’s degree in psychology.” That “fact” first surfaced for us back in 1982 when one of the authors was first considering majoring in psychology. It is still a common caution today. The only problem is, it is not true. Approximately 45% of psychology majors go on to earn a degree beyond a bachelor’s degree (Carnevale et al., 2015). That means a majority of psychology majors have a bachelor’s degree only, and clearly they do not all remain unemployed. Indeed, in an extensive survey of college graduates from 1993, the National Center for Education Statistics found that fewer than 5% of academic major graduates (including psychology majors) were unemployed in 2003, which was below the overall unemployment rate of 6%. Although social science majors began their careers earning below-average salaries, by 2003 many had caught up to—and in some cases passed—their peers who had majored in career-oriented fields, such as business (Choy & Bradburn, 2008).

If you still do not believe us, we offer you, as more evidence, a list of careers you can have with an undergraduate psychology degree from A to Z:

  • Advertising Assistant
  • Benefits Manager
  • Community Relations Representative
  • Delinquency Prevention Social Worker
  • Employment Agency Counselor
  • Fund Raiser
  • Group Worker (leads groups within social service sector)
  • Human Resource Advisor
  • Information Specialist
  • Job Developer
  • Keeper (of animals); this one might seem like a stretch, but a knowledge of animal behavior is essential in this industry. Some very important principles of human psychology also apply to animal behavior.
  • Labor Relations Manager
  • Market Research Analyst
  • News Writer
  • Occupational Analyst
  • Personnel Interviewer
  • Queen of a Small Country, but you might have to marry a King. OK, we admit it: we could not find a psychology-related position that starts with Q, but trust us, we could have listed about 20 more that start with P.
  • Recreational Therapist
  • Sales Representative
  • Teacher; e.g., high school, but of course, you would need to obtain a teacher certification as part of your education.
  • Union Business Representative
  • Volunteer Coordinator
  • Wage/Benefits Analyst
  • X-Men; we reserve Wolverine (one of us went to the University of Michigan), but the rest of the spots are available. Again, we could not find a real occupation that starts with X, but unless you are interested in working with X-rays or xylophones, who could?
  • Youth Corrections Officer
  • Zoo Communications Researcher; seriously, one of us was almost hired for this position at the Brookfield Zoo (near Chicago), but we all have Ph.D.s.

The sources we used to compile this list were Majoring in Psychology by Jeffrey Helms and Daniel Rogers (2011) and Occupations of Interest to Psychology Majors from the Dictionary of Occupational Titles, an online publication by Drew Appleby (2006). To find a couple of job titles, we consulted the U.S. Department of Labor’s Dictionary of Occupational Titles ourselves (and the zoo position is based on personal experience).

As you might have noticed from the list, a psychology major is an especially important route to jobs in the business world. Approximately one-third of social science majors who do not enroll in graduate school have careers in business ten years after graduation (Choy & Bradburn, 2008).

What About Pay?

Many students base their choice of major solely on the expected salary. We would like to caution you to be careful about choosing a major this way. For example, many students choose engineering because it is the highest paying major, and shun education because it is the lowest paying major. First, you should realize that money will likely not bring the level of happiness that many people expect it to (but that is a story for another module). The important point for this section is that these salary expectations are only estimates, or more technically, they are medians when a single number is given. So if petroleum engineers earn a median yearly salary of $136,000 over the course of their careers (which is true, Carnevale et al., 2015), it does not mean that all petroleum engineers make $136,000. It does mean that half earn more than that, and half earn less, sometimes much less (this is the definition of the median, remember). Suppose you choose a major for which you are ill-suited. Do you think that you will be among the high earners or the low earners in that field? Now, we are not trying to talk anyone out of majoring in engineering, or business. We are trying to talk you into choosing a major that suits you, one that will lead to a career that you will find meaningful and satisfying.

Let’s consider some actual numbers to drive this point home (from Carnevale et al., 2015). The bottom 25% of business majors earn $43,000 per year (averaged over the course of their careers). The top 25% of education majors (a famously low-paying major) earn $59,000. Students who major in business solely because it pays well, but who have no real interest in the field, stand a very good chance of ending up in that bottom 25%. On the other hand, students who pick a major that they love have a very good chance of being a top performer, and therefore a relatively high earner, in that “low paying” field.

Liberal Arts Education

As the cost of a college education continues to increase, observers have begun to question its value in general. A common target of critics is the “Liberal Arts” education. A Liberal Arts education is a well-rounded education that cuts across many different disciplines, rather than one that focuses on preparing students for one specific career. History, humanities, philosophy, and psychology, for example, are generally considered Liberal Arts degrees. A business degree, on the other hand, is by far the most common career-oriented major (and the most common college major, period). Many observers (along with quite a few parents and students with whom we have spoken) believe that because college is intended to prepare students for careers, it should be specifically focused on career training.

It is undeniable that business majors have an easier time getting their first job (Choy & Bradburn, 2008), but do not sell Liberal Arts education short. Research has found that social sciences, humanities, natural science, and mathematics majors improved the most during their college careers in critical thinking, complex reasoning, and writing. Business, education, social work, and communications majors improved the least. In the first few years after graduation, students who showed the least improvement in these skills were three times more likely to be unemployed, and more likely to live with their parents and have credit card debt, regardless of their college major (Arum et al., 2011; Arum et al., 2012).

We should tell you that the main point of the Arum et al. research was that college students in general tend to improve very little in these important skills. So, whatever your major is, look for opportunities to develop and practice them. The researchers gave the following advice:

  • Spend time studying alone (studying with a group, although useful for building relationships with classmates, is not very effective).
  • Take courses with more reading (40 or more pages per week) and writing (20 or more pages per semester).
  • Seek out professors with high standards and high expectations.

What About an Advanced Degree?

It is true that if you hope to be able to call yourself a “psychologist,” or to provide individual therapy to clients, then you will need an advanced degree (master’s degree or higher). As you have just learned, however, dozens of careers (or at least 26, which is technically dozens) exist for which an undergraduate psychology degree provides excellent qualifications.

So, what about the 45% of psychology majors who do go on to earn an advanced degree? Where do they end up employed? Even here, there are many more options than most people realize. Although about half of the psychology doctorate degrees are in clinical psychology or counseling, the other half are in the other subfields (Morgan and Korschgen, 2008). People with advanced degrees in the other subfields often end up employed in the same kinds of careers (at higher levels) as those with undergraduate psychology degrees.

About 21,000 students earn master’s degrees in psychology each year (Goldstein, 2010). These degrees typically take two years beyond a bachelor’s degree. Graduates with master’s degrees can often begin their careers at a higher level in many of the same areas that are available to students with bachelor’s degrees. In addition, a master’s degree is considered the minimum qualification that will allow you to provide any substantive one-on-one counseling or therapy. You can also teach at the community college level with a master’s degree.

If you earn a master’s degree in psychology, you cannot yet call yourself a psychologist; that title is reserved for people who have earned a doctorate. The two types of doctorate degrees are the Ph.D. and the Psy.D. To earn a Ph.D., a student attends graduate school for five to seven years beyond a bachelor’s degree. It is a research degree: it provides training for conducting research and teaching at the university level, as well as clinical training for therapists (if the Ph.D. is in clinical or counseling psychology). People with a Ph.D. in psychology also find employment in business as researchers, statisticians, or industrial/organizational psychologists. They are also employed by government and school systems (as a school psychologist, for example). A Psy.D. requires three to four years beyond a bachelor’s degree. It provides training for therapy only.

One last point about advanced degrees: Psychology is also a good choice for an undergraduate major if you plan to attend graduate school in some other discipline, such as business, law, or medicine.

Where Does Psychiatry Fit In?

A psychiatrist is an MD (medical doctor) who has specialized in the diagnosis and treatment of psychological disorders. It takes about eight years after your undergraduate degree to become a psychiatrist: four years in medical school and four years as a resident. As physicians, psychiatrists are the only mental health professionals who are authorized to prescribe medications. Psychiatrists can also provide psychotherapy. Often, however, they work as part of a team with a psychologist who provides the primary psychotherapy.

As we have described, an undergraduate degree in psychology qualifies you for dozens of careers in business, mental health, and social services, as well as for graduate study in several disciplines (including, of course, psychology). To be sure, any college major that offers you a well-rounded education can likewise prepare you for many fulfilling careers. The key is to make the most of your undergraduate experiences. Do not consider your courses a series of meaningless hurdles that you must jump over; consider them opportunities to gain skills that will help you throughout your career and your life. Try to see the value of all of your classes. Not only will doing so help turn you into a more attractive candidate when you eventually do begin your career, it will also help make the classes more enjoyable now.

  • What kinds of skills do you think that you can learn in this class that will help you in your intended career?
  • Whatever your intended major is, what are some alternative career options that you could pursue with the same major?

List four or five psychology-related careers. For each, decide which subfield seems most closely related.

Module 4: The Science of Psychology: Tension and Conflict in a Dynamic Discipline

Reading with Purpose

By reading and studying Module 4, you should be able to remember and describe:

  • Three different kinds of disagreements that occur in psychology
  • The relationship between the amount of tension in a field and important outcomes
  • The origins of psychology
  • Disagreements about theories: free will versus determinism, nature versus nurture (René Descartes and John Locke), people are good versus people are bad (René Descartes and Thomas Hobbes)
  • Disagreements about scientific and non-scientific psychology (Wilhelm Wundt, William James, and the behaviorists)
  • Disagreements about basic and applied goals in research

By reading and thinking about Module 4, participating in classroom activities, and completing out-of-class assignments, you should be able to:

  • State and defend your position on the three debates about fundamental theories
  • State and defend your position on the relative values of scientific and non-scientific psychology
  • State and defend your position on the importance of basic and applied research

Imagine three potential friends or romantic partners. One person agrees with everything you say. Most of you will find a relationship with someone like that somewhat boring. On the other hand, a potential partner or friend who disagrees with everything you say, often aggressively, would lead to a relationship that is too stormy and distressing to be fulfilling. An ideal relationship would be with someone in between these two extremes, someone with whom you have “spirited disagreements,” serious yet respectful discussions about issues that you both care about. We propose that this third imaginary relationship serves as a useful analogy for the types of interactions that should characterize a dynamic, evolving discipline such as psychology.

Throughout this unit, we have invited you to “think like a psychologist,” as if all psychologists think alike. One important goal of this module is to explain that psychologists do not all think alike. Psychologists throughout the history of the discipline have disagreed often. Here we will describe three basic kinds of disagreements that characterize the field of psychology.

  • Disagreements about the best theoretical explanations for fundamental observations about the human condition
  • Disagreements about the role of science and scientific methods of inquiry in the discipline
  • Disagreements about the relative importance of two goals: discovering basic principles of human behavior and mental processes versus applying this knowledge to help people

Although psychologists do not always see eye-to-eye, the lack of agreement is not necessarily a problem. If managed successfully, disagreement, conflict, and tension in a discipline are essential for progress.

Tension Is Good

Like that ideal imaginary romantic partner, the field of psychology has just the right amount of conflict and tension to keep things lively and interesting. There is tension in psychology as different camps of psychologists champion their favored theories and visions of the field. If there were too much tension, psychology would be too disjointed to hang together as a discipline. If there were too little tension, psychology would become too homogenized, too uniform, to be creative and dynamic.

Psychological research about people has left no doubt that extreme conflict or poorly managed conflict is very damaging and can ultimately lead to anger, pain, sadness, animosity, and even violence (Johnson, 2000). Any of these destructive outcomes can lead people to avoid one another. To be sure, occasional avoidance to manage particular conflicts has its place. If you are primarily interested in maintaining a relationship, for example, avoiding conflict may be a very appropriate temporary strategy (Johnson, 2000). This strategy is particularly common in cultures that value relationships highly, such as China (Tjosvold and Sun, 2002). As a general strategy, however, avoidance is often counterproductive or even unhealthy (De Dreu et al., 2000). When people (or disciplines) that could enrich each other’s understanding act instead as if neither has anything to offer, both suffer.

Too little conflict can be nearly as damaging as too much, however. Research in education and work organizations has indicated that conflict can lead to creativity and good decision-making (Cosier and Dalton, 1990; James et al., 1992). The key is to manage conflict so that it does not fall into unproductive infighting (Johnson and Johnson, 2003).

Although these observations about conflict and creativity have been made through research on small groups or in individual organizations, it is reasonable to expect that they would apply to a discipline like psychology as well. After all, as you will see, psychology is a collection of factions with different points of view and opinions.

Figure 4.1: Tension in the Field of Psychology. The diagram arranges factions in psychology along a tension continuum: uncreative homogeneity (the least tension) at the bottom, healthy conflict and creativity in the middle, and avoidance or destructive conflict (the most tension) at the top.

Psychology’s Long History of Tension and Conflict

The birth of scientific psychology is often taken to be 1879. What happened that year is that Wilhelm Wundt established the first psychological laboratory in Leipzig, Germany, supporting the idea that psychology is a scientific discipline. It was not really the birth of the discipline, though. The term psychology had already been in use for over 300 years, and several researchers across Europe had already been working in areas that would become part of psychology. But because Wundt established the lab and worked hard to promote psychology as a discipline throughout Europe, he is given the credit (Hunt, 2007).

The term psychology first appeared in 1520, but systematic thinking about human behavior and mental processes began long before then. Morton Hunt (2007) places the real beginning around 600 BC. Prior to that date, people simply assumed without question that human thoughts and emotions were implanted by gods. The real birth of psychology was probably the day that philosophers started to question that belief.

Hunt notes that the questions asked by the ancient Greek philosophers became the fundamental debates of psychology, debates that in many cases continue in some form even today. Thus, the great philosophical disagreements from centuries ago became the subject matter and controversies of psychology. We can see many of these philosophical disagreements at the root of some of the most fundamental conflicts in the history of psychology: conflicts about fundamental theories of human nature, methods of inquiry, and proper goals for psychologists.

Conflicts About Fundamental Theories

Scientific meetings are filled with people who disagree with one another. If a psychologist is giving a presentation about a theoretical explanation for depression at one of the meetings, for example, they can be sure that some members of the audience are there precisely because they disagree with the presenter. Occasionally, tempers flare and interpersonal conflict, not just scientific conflict, is part of the picture. One of the authors once witnessed an audience member purposely sit in the front row of a presentation given by a researcher he did not like just so he could cause a commotion when he got up in the middle and walked out.

Particular psychologists may have invested a great deal in a specific theory. They may have devoted many years to developing it, and they may have based their entire professional reputation on the success of the theory. In these cases, it can be difficult to avoid seeing an attack on your theory as a personal attack. Making matters worse, some attacks on theories are more personal than they should be. Robert Sternberg, a former president of the American Psychological Association, has noted with dismay the sometimes savage criticism that may be leveled against rivals, far harsher than is warranted by disagreements about theories. Yet interpersonal conflicts are somewhat unusual. Instead, psychologists tend to have the healthier “spirited disagreements” that are confined to the theories alone.

Quite likely, no psychological theory is universally accepted across the discipline. But that lack of agreement can be a good thing. Progress in science occurs through the production and resolution of scientific controversies, and psychology is no different. (sec 1.1)  Sometimes the disagreements are minor, focusing on a small part of a single theory. At other times the disagreements are related to very important theories or reflect fundamental beliefs about the human condition.

Conflicts about human nature have been part of many major theoretical disagreements in psychology. Let us consider three of the theory-related disagreements. Keep in mind that the conflicts are larger than disagreements about individual theories. They address basic philosophical beliefs about the nature of humanity. These disagreements had their origin in philosophy and thus illustrate our earlier point about the development of psychology from earlier philosophical debates. Some of the details about these conflicts are filled in throughout the rest of the book.

Free Will Versus Determinism. 

For many centuries prior to the 6th century BC, the questions about our mental processes and behaviors had a very simple answer: Everything was predetermined by a god or gods. The ancient Greek philosophers began to question this belief when they proposed that emotions and thoughts were not placed in the head by gods, that at least some of them came from an individual’s experience and from thinking about the world (Hunt, 2007). By asking questions such as “Does the mind rule emotions, or do emotions rule the mind?” some of these philosophers began to wonder whether human beings had free will.

When psychology emerged, it also adopted the free will versus determinism debate. For example, a group of psychologists known as the behaviorists were champions of a deterministic view. Prominent behaviorists in the 1940s and 1950s, such as B. F. Skinner, contended that human behavior was entirely determined by the environment. For example, behaviors that are rewarded persist, and those that are punished fade away. In the 1960s several groups of psychologists rejected this overly deterministic stance. For example, cognitive psychologists noted that human behavior was far too complex and novel to have emerged as a simple response to the environment. (sec Module 9)

Nature Versus Nurture. 

René Descartes, considered by many to be the father of modern philosophy, championed a viewpoint called rationalism; he believed that much of human knowledge originates within a person and can be activated through reasoning. Hence, knowledge is innate, a product of nature only (1637). John Locke is perhaps the most famous philosopher on the other side of this debate; his viewpoint is typically referred to as empiricism. He likened the mind to white paper (or a blank slate); through experience (or nurture) the paper acquires the materials of knowledge and reason (1693). Although Locke and Descartes never had conflicts themselves (Locke was still a teenager when Descartes died), Locke’s writing did take aim at Descartes’s specific ideas (Watson, 1979).

A very controversial book was published in 1994 that reignited the nature versus nurture debate. In The Bell Curve,  Richard Herrnstein and Charles Murray argued that because the genetic contribution to intelligence is so high within a population, social interventions are unlikely to help people with low intelligence. To restate the point simplistically (although the authors were careful not to state this outright), nature is more important than nurture. Many psychologists stepped up and pointed out the errors in research and reasoning that led Herrnstein and Murray to these conclusions.

And these conflicts continue today. In 2018, behavioral genetics researcher Robert Plomin published Blueprint: How DNA Makes Us Who We Are. Plomin argues, based on research over the past 50 years, much of it conducted by Plomin himself, that many of the things we consider important aspects of nurture, such as parenting styles, do not affect outcomes very much. Critics were quick to jump in and do exactly what they are supposed to: argue for the other side. For example, Richardson and Joseph (2019) contended that Plomin’s explanations glossed over subtle factors related to nurture that temper his conclusions and that his analyses minimized the changes that occur in people during their lifetimes, something that a genetics-only approach has difficulty explaining. We are not coming down on one side or the other in this debate, as it is still in progress. It is, however, a terrific example of a conflict from philosophy that continues today in scientific psychology.

People Are Good Versus People Are Bad. 

Thomas Hobbes, a contemporary of Descartes, is considered by many to be the father of modern political science. He believed that a strong monarch was necessary to control the populace because people had a natural tendency toward warfare. Without a strong leader, Hobbes believed, the lives of men were doomed to be “solitary, poor, nasty, brutish, and short” (1651). In other words, people are bad, or at least seriously flawed. René Descartes figures prominently on the other side of this debate. He believed that the innate knowledge that we derive through reason comes from God. Thus human beings are fundamentally good.

Throughout the history of psychology, this debate has resurfaced repeatedly. For example, Sigmund Freud believed that our personality emerges as we try to restrain our hidden, unacceptable desires. (sec 16.2) A group called the humanistic psychologists arose in the 1960s as a reaction against the view of people as bad and in need of restraint. (sec Mod 21)

Conflict About Methods of Inquiry: Science Versus Non-Science

As we told you in Module 1, psychology is a science. Or is it? Many people outside of psychology do not consider it a science. Scientific psychologists tend to find this belief quite bothersome, but this conflict was very important for the development of psychology into the discipline that it is today.

The ancient philosophers Socrates and Aristotle, both of whom greatly influenced the discipline of psychology, foreshadowed the difference between science and non-science. One of Socrates’ most famous quotations is “The unexamined life is not worth living.” He favored a very introspective (non-empirical, or non-scientific) approach to understanding human nature. Aristotle’s thinking, on the other hand, was guided by empirical observations of the world around him. On the basis of these observations, he formulated theories about memory, learning, perception, personality, motivation, and emotion (Hunt, 2007).

During the infancy of psychology, the early giants in the field frequently disagreed about what the appropriate subject matter of psychology is and, by implication, whether the field is scientific or not. Recall Wilhelm Wundt, who in 1879 established the first psychological laboratory. One of Wundt’s central goals was the reduction of conscious experience into basic sensations and feelings (Hunt, 2007). Wundt’s research relied on the key method known as introspection . Trained observers (often the researchers themselves) reported on their own consciousness of basic sensations while they completed tasks such as comparing weights or responding to a sound. Wundt believed that these experimental methods could be applied to immediate experience only. Recall that science requires objective observation. Wundt believed that mental processes beyond simple sensations—for example, memory and thoughts—depended too much on the individual’s interpretation to be observed objectively. Hence to him only part of what we now think of as psychology could be a true science.

The early 20th century brought several challenges to Wundt’s approach. Some psychologists began to broaden the concept of introspection. For example, in the United States, William James embraced a more expansive and personal type of introspection, in which naturally occurring thoughts and feelings, not just sensations, were examined. He believed that this method was a great way to discover psychological truths. Although James’s more inclusive definition of introspection greatly broadened the scope of psychological inquiry, this expansion made it very difficult to assert that psychology was a scientific discipline. As you might guess, different observers might very well report different observations while introspecting about the same task. That is, because introspection was subjective and unreliable, it was a poor method on which to base a new science.

By far, however, the most significant challenge to Wundt came from the early behaviorists. They sought not to expand psychology but to reduce its scope, and in so doing, they rejected virtually all of the previous efforts in psychology. John B. Watson, a young American psychologist, was the chief spokesperson for the behaviorist movement. He defined psychology as the science of behavior and declared that the goal of psychology was the prediction and control of behavior. Watson emphasized that a psychology based on introspection was subjective; different psychologists could not even agree about the definitions of key concepts, let alone examine them systematically. In short, the introspective method was not useful. Further, mental states and consciousness were not the appropriate subject matter of psychology, defined as the science of behavior. The questions that had been addressed by introspective psychology were far too speculative to be of scientific value (1913; quoted in R. I. Watson, 1979).

Watson produced an extremely persuasive case for why behaviorism was right and all previous psychology was wrong. Although he was not the first to make many of these points, he was, in essence, a top-notch salesperson and his rather narrow vision of psychology eventually took over a great deal of the field (Hunt, 2007).

In the 1950s the cognitive revolution shook psychology, as internal mental processes became acceptable topics of study. (sec Module 9) This time, however, researchers were careful to approach these topics scientifically.

Today, the field of psychology combines the scientific orientation of Wundt and the behaviorists with the broad view of the scope of the psychology of William James and the cognitive psychologists. It seems rather unlikely that these two visions would both be represented in modern psychology if they had not been so vigorously championed by the two camps during the discipline’s early history.

Some might argue that, because the field has successfully merged the interests of earlier non-scientific psychologists with the scientific approach, the non-scientific orientation to psychology is no longer necessary. In other words, they believe that psychology should be purely scientific. The problem with this view is that it ignores an important lesson from the history of psychology. Basically, how do we know that non-scientific psychology can no longer be useful? If in the past non-scientific ideas merged with scientific methods to expand the useful scope of psychology, why should we be content to stop mixing things up now? In short, today’s non-scientific psychology may be a useful source of topics and ideas that future researchers may bring into the scientific fold.

The development from humanistic psychology into positive psychology is a great example of how conflict about methods of inquiry continues—and continues to make psychology ever more useful. In the 1950s and 1960s a group of non-scientific psychologists—for example, Carl Rogers and Abraham Maslow—publicly stated their belief that human beings are individuals who have a natural tendency toward growth and self-fulfillment. They also largely rejected the scientific approach. Partly because of this disdain for science, the original humanistic psychology has little influence in modern psychology. Instead, a new perspective known as positive psychology has emerged from it. The positive psychologists have many of the same goals and basic beliefs as the former humanistic psychologists, but instead of rejecting the scientific approach, they have embraced it. Basically, positive psychology focuses on “mental health rather than mental illness” (Weil, quoted in Seligman, 2002). Positive psychology has already contributed a great deal to our understanding of what leads to happiness, health, and fulfillment (Diener, Oishi, Tay, 2020).

Conflict About Proper Goals: Research Versus Application.

People who work in a discipline like psychology can have two different types of goals. They can devote themselves to advancing knowledge in the discipline, or they can devote themselves to using the discipline to solve problems. Those who do the former are concerned with basic research . The others are interested in application . When researchers conduct research in order to solve specific problems, they are said to be involved in applied research.

Non-scientific psychologists tend to be application-oriented; scientific psychologists may be basic researchers or applied researchers. Throughout the history of psychology, both basic research and applied research have contributed to the development of the field. Basic research has built most of the knowledge base of psychology, while applied researchers have solved problems for government and business or on behalf of people who suffer from behavioral or mental problems. Practicing psychologists do not typically conduct research at all but have been concerned with using psychological knowledge to help people.

There has often been tension between basic researchers and application-oriented psychologists. The two sides are often pitted against each other. The pool of available research funds is a fixed, or even shrinking, pie. That means that if funding organizations decide to throw support behind applied research, basic research will suffer. And vice versa. Observers note that this is just what has happened over the past 40 years or so.

Probably in part because of the competition for resources between basic and applied research, psychology has sometimes not been successful at harnessing the power of the tension between the two sides to create healthy conflict. In the not-too-distant past, Ph.D. students who pursued careers in applied (and invariably higher-paying) areas were seen as “selling out” by many basic researchers. Others often referred to basic research as “pure” research, as if applied research were somehow “impure” or “dirty.”

The general public has a role in this tension, by the way. The public has a tendency to belittle basic research. People often do not understand the point of research that cannot be put to use immediately. Politicians can sometimes fuel the criticism. Former U.S. Senator William Proxmire was well-known for the Golden Fleece Award that he publicized during the 1970s and 1980s. This “award” highlighted and ridiculed activities by government agencies that, in Senator Proxmire’s opinion, were a waste of taxpayers’ money. Although he criticized what he saw as waste throughout the government, basic research was a frequent target. His very first Golden Fleece Award was for research on why people fall in love. (sec 19.1)

Many people do not realize that basic research is an investment in the future. Like any investment, a particular research project might not turn out to be useful right away. But the eventual payout may be enormous. For example, consider research about genetics. Basic research on genetics began in the 19th century and has continued since then. Over the years, we have discovered many applications of this knowledge, such as the genetic contribution to physical and psychological disorders. As we continue to expand our knowledge of how genes work, we may someday be able to cure many currently incurable diseases.

The problem is, how do you know which basic research that is being conducted today will allow us to solve important problems 10, 20, or 30 years from now? For example, who can tell whether research about environmental factors in human aggression (another “winner” of a Golden Fleece Award) may or may not someday be used to end human aggression? To ridicule basic research today is shortsighted and uninformed.

Another concrete example of how basic research may pay off in the long run concerns a 1979 winner of the Golden Fleece Award, research on the behavioral factors in vegetarianism. At the time, the U.S. government was dispensing the advice that people should eat from the “four basic food groups”—roughly, that people should eat equal amounts of meats, dairy products, grains, and fruits and vegetables. Since 1979 there has been a dramatic change in that advice; people are now advised to eat large amounts of grains and fruits and vegetables and relatively small amounts of dairy products and meats (the USDA Food Pyramid). Most people eat nowhere near enough fruits, vegetables, and grains. In short, a vegetarian diet is a much healthier diet than most Americans currently eat. The behavioral factors in vegetarianism would be a pretty useful piece of information to know now, wouldn’t it?

Very simply, there is a tradeoff between basic research and applied research. With any individual basic research study, there is a small chance that you will be able to solve a very important problem at some point in the future. With applied research, there is a good chance that you will be able to solve a less important problem soon. Rather than choose one over the other, we would like to see both valued.

The truth is, basic researchers and applied researchers need each other. When basic researchers remove themselves too much from the real world, their research may become so artificial that it distorts the way processes actually work. When applied researchers overlook the findings of basic research, they work inefficiently, examining many dead ends or “reinventing the wheel” by repeatedly demonstrating well-established phenomena. There will probably always be some friction between basic and applied researchers, however. Basic researchers may continue to covet the resources available to applied researchers, while applied researchers may continue to long for the intellectual respect accorded to basic researchers.

There is reason to believe that the separation and unhealthy tension of the past will diminish, however. In fact, the line between basic and applied research seems to be blurring these days. Basic researchers, funded by organizations that need to justify their existence, are being encouraged to propose applications of their research. Applied researchers are being encouraged to keep an eye on developments in basic research so they can more efficiently solve the human problems they encounter. The history of psychology has demonstrated that basic research is more accurate and comprehensive when it pays attention to the real world and that application is more effective when it is grounded in basic research results.

The Perils of Too Little Conflict: The Replication Crisis

You might recall (in fact, we hope you recall) that replication is one of the defining features of science. Only when multiple researchers have been able to successfully produce an effect can we begin to have confidence that the effect is reliable (think, real). The road to reaching this conclusion is, or should be, filled with the kind of conflict and tension that leads to scientific progress. Unfortunately, the field of psychology has discovered in the past few years that we have a problem in this area. For a long time, researchers paid little attention to the need for replication. In 2015, the Open Science Collaboration project reported on the results of an attempt to determine how serious this lack of attention is. The project replicated 100 studies that had been published in three prestigious psychology journals in 2008. Their results were sobering. Overall, only 39% of the statistically significant results reported in the original articles were reproduced upon replication.

Now, before everyone panics and drops psychology to take a “real science course,” this does not mean that 2/3 of the published studies were wrong. But it almost certainly means that some of them are wrong. For a given non-replicated study, we now have a legitimate scientific controversy on our hands. One set of researchers has found one result, and a second set has found another. We need a third, fourth, fifth (and so on) set of researchers to come along and settle the controversy. We certainly hope that this sounds familiar, as it is exactly how science is supposed to work. Yes, real science. The process in psychology will just be a bit faster over the next few years while we try to get caught up. So although the “replication crisis” as the problem became known is a significant issue, it is one that can definitely be addressed. Stay tuned below for some additional specific steps the field is taking to prevent a recurrence of the problem.

How did we get here?

You might wonder how the field got lost to begin with. It is an oversimplification to blame it completely on a lack of conflict, leading to misplaced trust in individual research results. Actually, the causes are complex, and it is worth going into a little bit of detail about two of the major factors. One factor explains why no one bothered to do replications, and the second explains how researchers might get “wrong” results to begin with (thus making those missing replications essential).

Factor Number 1: There was no incentive structure in place to encourage replications .

The large majority of research in psychology is conducted by professors at universities across the world. Although you may know your professors as teachers, many professors are employed more for their research than their teaching. (Note, this is not to say that they don’t care about or are not good at teaching, just that the job of a university professor is, to a large degree, based on research.) In order to keep their jobs at their universities, these professors have to publish original research. A beginning professor who published replications would literally be out of a job when tenure decisions were made. Making matters worse, many psychology journals over the years did not consider replication studies worthy of publication. So very few researchers did them.

Factor Number 2: Publication bias in favor of positive results, leading to the file drawer effect and questionable research practices.

Scientific journal articles in psychology (and other disciplines) have another bias in addition to an aversion to replications. They also have a strong preference for studies that work, or in other words, that obtain statistically significant results. This preference is so strong that many researchers who obtained results that were not statistically significant abandoned the projects. They didn’t throw the results away; they just filed them away. This phenomenon became known as the file drawer effect (years ago, the results were stored in physical file cabinets; nowadays, they are stored in a forgotten folder on a researcher’s computer). The basic idea is that for any published study that shows statistically significant results, we have no idea how many other researchers have non-significant results lurking unseen in their file drawers.

Making matters worse, because researchers’ livelihoods literally depended on publishing statistically significant results, they felt intense pressure to produce those results. As a result, many researchers began to adopt what became known as questionable research practices. Please note that we are not saying the researchers were being dishonest (although some certainly were). Rather, it became routine for researchers to inadvertently adopt techniques that were likely to produce “false positive” research results. A full description of the kinds of questionable practices that were common is beyond the scope of this textbook, so we will describe only one (if you are interested in learning more, we recommend you take courses in Statistics and Research Methods). One of the key questionable research practices is known as p-hacking. A p-value is a statistic that tells you how likely a result at least as extreme as the observed one would be if there were really no effect; by convention, a result is declared statistically significant when the p-value falls below a threshold, usually .05. In the most basic form of p-hacking, a researcher would produce a great many research results and report only those whose p-values indicated statistical significance.
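To make the logic of p-hacking concrete, here is a minimal simulation sketch; it is our own illustration, not taken from any study mentioned above, and it assumes Python with the numpy and scipy libraries. Every simulated “study” below compares two groups drawn from the same population, so there is no real effect to find. Yet if a researcher runs enough such tests and reports only the “significant” ones, a few will look convincing purely by chance.

import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
n_tests, n_per_group, alpha = 20, 30, 0.05
false_positives = 0

for _ in range(n_tests):
    # Both groups come from the same population, so the true effect is zero
    group_a = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    group_b = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    t_stat, p_value = stats.ttest_ind(group_a, group_b)
    if p_value < alpha:
        false_positives += 1  # "significant" by chance alone

print(f"{false_positives} of {n_tests} null tests came out 'significant'")

On average, about 1 in 20 such tests will fall below the conventional .05 threshold even though nothing real is happening, which is why selecting and reporting only the significant results is so misleading.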

How do we get out of here? Honestly, psychology has made a lot of progress already. The field has identified some research results that might be ready to be removed from textbooks (and we will mention some of these throughout the book, in case you encounter them elsewhere). More importantly, however, researchers have begun to embrace Open Science practices, which will make it easier for future researchers to conduct replications and less likely that research based on questionable practices will sneak through the journal article review process. Open Science practices include three key parts (APS, 2020):

  • Open Materials. Researchers make their materials freely available to other researchers to facilitate replications.
  • Open Data. Researchers make their original data freely available so that other researchers can run their own statistical analyses to ensure that the original findings were not dependent on the specific data analysis choices.
  • Preregistration. Researchers publicly commit to methods and data analysis strategy for a specific study prior to conducting it to prevent many questionable research practices from occurring.

One other practice that has been instrumental in recovering from the replication crisis is the use of meta-analyses. A meta-analysis is a study of studies, so to speak. Researchers take multiple studies on a topic and combine them into a single dataset, as if they have one giant study. Then, they perform advanced statistical analysis to allow them to conclude if a research result is stable beyond a single study. Meta-analyses are also useful for estimating the size of an effect and for determining factors that might change an observed effect. Obviously, meta-analyses are very useful, but they do not answer every question. For one, researchers cannot (or do not) always include all of the studies in an area. The exclusion criteria that they use (and sometimes, the specific data analysis procedure) can sometimes change the conclusions. Especially important is the need to find unpublished work in a research area to include in a meta-analysis to overcome the problem of publication bias.
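For readers who want to see the core arithmetic behind a meta-analysis, here is a minimal sketch of the simplest (fixed-effect) version; the effect sizes are hypothetical numbers invented for illustration, the code assumes Python with numpy, and real meta-analyses involve additional steps, such as checking how much the studies differ from one another. Each study’s effect size is weighted by the inverse of its variance, so larger, more precise studies count for more in the combined estimate.

import numpy as np

# Hypothetical effect sizes (Cohen's d) and their variances from five studies
effect_sizes = np.array([0.42, 0.10, 0.55, 0.25, 0.33])
variances = np.array([0.04, 0.02, 0.09, 0.03, 0.05])

weights = 1.0 / variances                           # inverse-variance weights
pooled_effect = np.sum(weights * effect_sizes) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))          # precision of the pooled estimate

print(f"Pooled effect size: {pooled_effect:.2f} (SE = {pooled_se:.2f})")

The pooled estimate summarizes what the five studies say together, which is exactly the “one giant study” idea described above.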

By now the field has seen quite a few attempts to replicate individual studies, many meta-analyses, as well as several large-scale organized attempts to replicate many studies at one time (e.g., the Many Labs efforts). As a general rule of thumb, it appears that approximately half of the results of published studies can be reproduced. Again, this does not mean that half of what you are reading in your textbook is wrong, but it does mean that the ongoing process of self-correction is still going on (after all, that is what ongoing means).

Oh, and one last point. Psychology is not the only science that is enduring a replication problem. Even a quick Google search will reveal that among other disciplines, economics, exercise and sports science, and even biomedical research are all dealing with similar issues. Yes, biomedical research.

Unit 2: Understanding and Using Principles of Memory, Learning, and Thinking

Please take a few minutes to list all of the skills that are necessary to succeed in school. Try to organize the list into categories. What skills and categories did you come up with?

Although there are many “right answers” to this question, one could make a strong argument that there are three main types of skills that lead to success in school:

  • Mental activity that many people would judge as the basis for academic thought, things like memory, reasoning, problem solving, and critical thinking
  • Interpersonal skills, such as getting along with instructors and classmates and being able to work in groups—in short, the ability to “work and play well with others”
  • Intrapersonal skills, which involve understanding and managing yourself—including being aware of your strengths and weaknesses and being able to maintain focus and motivation

This unit is primarily devoted to the first category, the classic academic skills. The five modules in this unit describe the basic processes of memory, learning, reasoning, problem solving, and critical thinking. All are major concerns of the sub-field of psychology known as cognitive psychology, the study of cognition. Cognition is the mental activity that deals with perception and with knowledge: what it is, and how people understand, communicate, and use it. Cognition is essentially “everyday thinking.” (Note, the perception part of cognition will be covered in Unit 3.)

This unit has two goals. The first is to show you how knowledge of the psychological principles of cognition can benefit you. The second goal is to show you how psychologists think about cognition.

By the way, although we will be talking a lot about using cognitive principles to succeed in school, these—and for that matter nearly all—psychological principles are relevant to life beyond school as well.

This unit is divided into five modules:

  • Module 5. Memory, describes many of the important discoveries about memory and shows how to use the knowledge to improve your own memory.
  • Module 6. Learning and Conditioning, introduces you to classical and operant conditioning, two of the most important types of learning. As you will see, however, they are not exactly what first comes to mind when you consider the concept learning.
  • Module 7. Thinking, Reasoning, and Problem-Solving, details the discoveries that psychologists have made about many of the thinking skills beyond memory that you use in school and throughout your life.
  • Module 8. Tests and Intelligence, places the common experience of tests into the contexts of the psychology of intelligence and the principles of test construction.
  • Module 9. Cognitive Psychology: The Revolution Goes Mainstream, describes the ups and downs of interest in cognition throughout the history of psychology and, more generally, illustrates how research helps psychologists gradually develop a better understanding of human beings.

Module 5: Memory

Memory plays a key role in many areas of our lives, not the least of which is school. To understand why we remember and forget, you need to consider the entire memory process. Here’s a very simple description: First, you have to get information into your memory systems; call this process  encoding. When you need to get information out of memory (for example, when you are taking an exam, or telling a story), you use the process called  retrieval. In between encoding and retrieval we have, of course, memory  storage.

Figure: The basic memory process. Encoding leads to Storage, and Storage leads to Retrieval.

Failure to remember information—that is, forgetting—can occur because of a breakdown at any of the three points (encoding, storage, retrieval). The typical culprits in the failure to remember, however, are encoding and retrieval problems. That’s why most of this module is devoted to encoding and retrieval. But first you need to understand the basic layout of memory, which is a key element of cognition.

This module breaks psychologists’ basic understanding of memory into five sections. The first explains that not all forms of memory are alike and describes some of the different memory systems. The second section introduces principles of encoding and explains how recoding is one of the keys to effective memory. The third section describes the processes that take place in the brain when information is encoded and stored in memory. The fourth section covers memory retrieval. The final section describes how memories are constructed and, sometimes, distorted.

5.1 Memory Systems

5.2 Encoding and Recoding

5.3 Memory Storage and Memory in the Brain

5.4 Memory Retrieval

5.5 Memory Construction and Distortion

encoding: putting information into memory systems

retrieval: taking information out of memory systems

storage:  keeping memories in the brain for future use

By reading and studying Module 5, you should be able to remember and describe:

  • Distinctions among encoding, storage, and retrieval (5 introduction)
  • Characteristics of sensory memory, working memory, and long-term memory (5.1)
  • Characteristics of procedural memory and declarative memory (5.1)
  • Methods of rehearsal for encoding: repetition, auditory encoding, semantic encoding (5.2)
  • Strategies for semantic encoding: elaborative verbal rehearsal, self-reference, mental images (5.2)
  • Organizing to encode (5.2)
  • Concept map and neural networks (5.3)
  • Parts of a neuron: axon, dendrites, cell body (5.3)
  • Synaptic plasticity (5.3)
  • Retrieval cues (5.4)
  • Memory distortion (5.5)

By reading and thinking about how the concepts in Module 5 apply to real life, you should be able to:

  • Identify different kinds of memory (5.1)
  • Characterize your own typical study strategies in terms of encoding and retrieval principles (5.2, 5.4)
  • Recognize a memory from your own life that might be distorted (5.5)

By reading and thinking about Module 5, participating in classroom activities, and completing out-of-class assignments, you should be able to:

  • Devise a strategy for studying that uses encoding and retrieval principles (5.2, 5.4)
  • Recognize a situation in which you would suspect a memory distortion (5.5)

  • Can you think of more than one kind of memory that you have drawn upon?
  • Why can you remember a birthday party you attended years ago, but forget what your instructor said seconds ago? 
  • Is it true that some memories can last a lifetime?
  • Is it true that “you never forget how to ride a bicycle?” 

When you first start to think about it, memory might seem pretty simple.  But consider some of the memories you might have:

  • What you had for breakfast this morning
  • Your 10th birthday party
  • The address someone just left on your voicemail
  • Your phone number
  • What your best friend looks like
  • What a cat is
  • How to read
  • What you read in section 1.2 of this book
  • The answer to question 3 on your History mid-term
  • The name of the person you just met
  • How to do a cartwheel

All of these phenomena are, at their core, memories, which means that they share some fundamental properties. Yet they have significant differences, too. It has been a major accomplishment of memory researchers to describe the different types of memory systems and processes, and determine the specific properties of each one.

Distinguishing by duration and purpose of the memory

We have two major memory systems that help to explain how memories are stored: working memory (sometimes referred to as short-term memory, although the two terms are not quite identical in meaning) and long-term memory. Creating a memory that you will still be able to retrieve for a test next week and beyond involves both systems working together.

Figure: Working memory and long-term memory. An arrow labeled Encoding points from working memory to long-term memory, and an arrow labeled Retrieval points from long-term memory back to working memory.

Soon after information is first encountered, it enters the system called working memory, simply by virtue of the fact that you pay attention to it (Baddeley and Hitch, 1974). The best way to understand working memory  is to think of it as the current contents of your consciousness—that is, whatever you are thinking about right now. So as you are sitting at your desk staring at a textbook, the words that you pay attention to enter into working memory. You hold information in working memory either because you are going to use it (for example, to solve some problem) or because you will be trying to transfer, or encode it, into long-term memory.

Long-term memory  is the memory system that holds information for periods of time ranging from a few minutes to many years. If you do not use or transfer the information in working memory into long-term memory, it will be forgotten, probably in less than thirty seconds (Peterson & Peterson, 1959).

One fact you should realize about working memory is that its capacity is limited. Psychologists had thought that people can generally hold about 7 pieces, or  chunks, of information in working memory at one time (Miller, 1956). A chunk is a unit of meaningful information. For example, an individual letter might be a chunk. If the letters can be ordered to form words or abbreviations, then these are the chunks. More recently, however, researchers have proposed that memory capacity is a function of time, not quantity. Specifically, our working memory may hold the amount of information that we can process in about two seconds (Baddeley, 1986, 1996).
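As a rough illustration of why chunking matters (our own example, assuming Python rather than anything from the studies cited above), the same twelve letters impose a very different load on working memory depending on how they are grouped:

letters = list("FBINBACIAUSA")          # 12 separate letters = 12 chunks
recoded = ["FBI", "NBA", "CIA", "USA"]  # 4 familiar abbreviations = 4 chunks

print(len(letters), "chunks when held letter by letter")
print(len(recoded), "chunks after recoding into abbreviations")

Recoding the letters into familiar abbreviations keeps the same information but takes up far fewer of the limited slots (or seconds) that working memory provides.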

If you manage to get the information from working memory encoded into long-term memory, it is possible that you can retain that information for many years. It can even last a lifetime; picture a 92-year-old grandmother who still tells stories about her childhood in Italy. Also, although that “I can’t study anymore because my brain is full” feeling may make you think otherwise, you can essentially store a limitless amount of information in long-term memory (Landauer, 1986).

One of the keys to good memory, then, is to have effective strategies for encoding information into long-term memory (see section 5.2). You typically store the general meaning of information in long-term memory, however, rather than precisely what you encountered (Brewer, 1977).

Working memory and long-term memory are not the only two memory storage systems. Another one is called  sensory memory, and it actually comes into play before working memory does (Sperling, 1960; Crowder & Morton, 1969). Sensory memory is an extremely accurate, very short duration system. It essentially stores the information taken in by the senses, vision and hearing, just long enough (about a second) to allow you to direct attention to it so you can get the information into working memory.

Distinguishing by the kind of information in the memory

Can you do a backflip? Former World’s Strongest Man Eddie Hall can.

You can view this video online at: https://youtu.be/huzzUtkmZ2I

Procedural Memory

This ability to do a backflip is a skill, or a memory, like riding a bicycle, tying one’s shoes, or hitting a tennis ball. These types of memories, however, seem very different from remembering what you had for dinner last night or remembering that Albany is the capital of New York.

Psychologists, too, have noticed this distinction and have given the two kinds of memories different names.  Procedural memory  refers to skills and procedures. These are memories for things that you can do.  Declarative memory  refers to facts and episodes (Cohen & Eichenbaum, 1993). Declarative memory is further subdivided into  semantic memory —your general store of knowledge, such as facts and word meanings, and  episodic memory — memory for events, or episodes from your life. So, if you remember that Bismarck is the capital of North Dakota, it is semantic memory, unless you remember the exact time that you learned this fact (in 5th-grade social studies, for example), in which case it would be episodic memory. So you see, as the details about when we first learned some piece of information fade, episodic memories can become semantic memory.

If you remember when you learned some trivia, it is episodic memory:

You can view this video online at: https://youtu.be/8Ik57i3e7NE

Declarative Memory

Procedural memory seems to operate by different rules than declarative memory. For example, when we talk about transferring information from working memory to long-term memory (encoding) and retrieving information from long-term memory back into working memory, we are talking about declarative memory only. There is no working memory for procedures. Acquiring a procedural memory typically takes much more practice than acquiring a declarative memory does. But once a skill is acquired (that is, once it becomes part of your procedural memory), it may well be there to stay. So, at least for some people, it is probably true that you never forget how to ride a bicycle.

(See Module 9 for a related distinction between explicit and implicit memory.)

chunk: a unit of meaningful information

declarative memory: memory for facts and episodes

episodic memory: the part of declarative memory that refers to specific events or episodes from someone’s life

long-term memory: an essentially unlimited, nearly permanent memory storage system

procedural memory: memory for skills and procedures

semantic memory: the part of declarative memory that refers to one’s general store of knowledge

sensory memory: a very short (about one second), extremely accurate memory system that holds information long enough for an individual to pay attention to it

working memory: a short-term memory storage system that holds information in consciousness for immediate use or to transfer it into long-term memory

  • Think about the last time you forgot something. Was the forgetting a problem with working memory or long-term memory?
  • What is your most interesting procedural memory? Have you ever tried to teach it to someone else? If so, how did you do it?
  • What is your earliest declarative memory? (Use an episode from your life rather than trying to figure out the first fact that you learned.) Do you think that your declarative memory is good or poor?

5.2 Recode to Encode

  • Have you ever finished reading a short section from a textbook and immediately realized that you have already forgotten what you just read?
  • Have you ever looked at the first question on an exam for which you thought you had studied well and thought, “I have never seen this concept before in my life; am I in the right room?”
  • Do you find yourself able to remember unimportant material for a class (for example, material not on the test) and unable to remember important material?
  • Please turn to the beginning of Module 5. Notice the description and list of all the sections that fit within the Module. Now go find a couple of textbooks from your other classes and look at the outlines in the first pages of some chapters or at least at the table of contents. (Seriously, go look! We’ll wait.) Why are these outlines included?

Think about your best friend for a moment. What were they wearing the last time you were together? You will often find yourself unable to remember information like this. Why? Because you probably never attempted to encode that information from working memory into long-term memory. You didn’t look at your friend and say, “Lisa looks so good today; I’m going to remember what she is wearing!”

Certainly, information sometimes makes it into long-term memory without you engaging in purposeful encoding. Perhaps you have an annoying song going through your head right now. It is not very likely that when you first heard the song, you said to yourself, “Hey, I better make sure I memorize this song.” (You might be interested to know that psychologists have studied this phenomenon of annoying songs you cannot get out of your head. They call them earworms; see Jakubowski et al., 2017.) But do not count on this accidental encoding to provide you with a solid memory when you need it. The simple truth is that if you want to be able to retrieve information from long-term memory, you have to do a very good job of putting it in there in the first place.

How do you effectively encode information into long-term memory?

The basic strategy that people use to encode information from working memory into long-term memory is rehearsal. All of the encoding strategies in this module are kinds of rehearsal. The simplest kind of rehearsal is straight repetition. Imagine trying to learn your French vocabulary words by mentally running through the vocabulary list over and over until you get them all right. It works OK, as long as the test comes very soon after you finish studying (about 15 seconds seems to be the most you can wait; anything more than that and you start forgetting). Although it may be one of the most common rehearsal strategies and is the one favored by many students, repetition is probably one of the least effective. Call this encoding without recoding. And the advice about it bears repeating: Encoding without recoding (in other words, straight repetition) is a poor way to encode information from working memory into long-term memory.

One specific situation in which many people have difficulty encoding is when they read textbooks. Have you ever read a paragraph, realized that you have immediately forgotten it, and as a consequence decided to re-read it? Often, the problem is that you are merely reading the words over in your head, making sure you can “hear” yourself silently saying the words. In this case, you are  recoding : transforming the information from one form into another. But the transformation, in this case, is minor and not very useful. Psychologists call it auditory encoding or  acoustic encoding . Auditory encoding is ok. Many students rely on it, and with enough effort, they do fairly well at school.

In order to remember better, however, there is no question that you should try to move to the next level of recoding, in which you transform the information into something meaningful. For example, Craik and Tulving (1975) developed the idea of semantic encoding. Semantic means “meaning,” so semantic encoding refers to mentally processing the meaning of information. For example, you should pay attention to patterns and relationships and their significance, rather than just the words or numbers themselves.

Psychologist F. I. M. Craik and his colleagues demonstrated the benefits of using semantic encoding in a famous series of experiments during the 1970s (Craik and Lockhart, 1972; Craik and Tulving, 1975). These experiments examined what Craik termed levels of processing. In a typical experiment, participants would read a list of words with instructions that would encourage one specific type of encoding. The shallowest encoding strategy (or level of processing) required participants to pay attention to the visual appearance and shapes of the letters only. For example, a shallow encoding strategy would be to count how many straight and curved letters there are in each word. Note that you do not even need to read the words in order to use this strategy, so it would seem to be quite a poor recoding strategy. Somewhat “deeper” encoding strategies were those that required participants to pay attention to more properties of the words, such as the auditory qualities. For example, judging whether the word rhymes with a specific word is a deeper encoding strategy, an acoustic one. Note that you do not need to encode the meaning of the words in order to use this strategy.

The deepest level of processing, the one that requires meaningful recoding, is semantic encoding, or paying attention to the words’ meanings. A specific task to encourage semantic encoding might be to judge whether the word makes sense in the following sentence: “The __ fell down the stairs.”

Craik’s research consistently showed that memory was better the deeper the processing. Semantic processing was better than acoustic processing, which was better than visual processing. This is a basic principle of memory that you can start using today to improve your memory: to effectively encode, you should recode information in a way that allows you to process the meaning of what you are trying to remember.

auditory (acoustic) encoding: encoding from working memory into long-term memory by paying attention to the sounds of words only

levels of processing: strategies that affect how well a memory is encoded. Craik and Tulving’s research demonstrates that deeper processing (that is, semantic encoding) leads to better memory than shallower processing (that is, encoding based on auditory and visual properties)

recoding : transforming information to be encoded into a different format

rehearsal: the basic strategy that people use to encode information from working memory into long-term memory

semantic encoding: encoding from working memory into long-term memory by paying attention to the meaning of words

How Can You Recode for Meaning?

One main reason that recoding for meaning helps to create solid memories is that it takes advantage of the format of information when it is stored in long-term memory. Try this: spend a few minutes telling the story of “Goldilocks and the Three Bears,” or any other story you know from your childhood. Did you tell the story word-for-word the way it was told to you? Probably not. But still, you remembered the characters and the sequence of events quite well. Typically (but not always), long-term memory stores information by meaning, taking advantage of patterns and creating links between concepts and people and events (Bransford et al., 1972; Brewer, 1977). This tendency allows you to recall the general story, but not the precise story, whether it is a children’s fantasy, a description in a textbook, or some event that happens to you. When you make special efforts to encode meaning, you are playing to the natural tendencies and strengths of your long-term memory.

Any way that you can make information meaningful should help make your efforts to remember more successful. Here are some useful strategies that you can use for reading textbooks and remembering lectures and other course material:

Elaborative Verbal Rehearsal and Self-Reference

Try elaborative verbal rehearsal, which is basically restating what you have just read or heard in your own words. After reading a short section or paragraph, pretend that a friend has asked you to explain it. Or pretend that you are trying to teach the material to someone. Although this can be difficult to do, the payoff is tremendous. In one study that compared high-performing and low-performing students who were taking General Psychology, the use of elaborative verbal rehearsal was the most important difference (Ratliff-Crain and Klopfleisch, 2005).

Use the  self-reference effect  by trying to apply the material to yourself (Forsyth & Wibberly, 1993; Fujita & Horiuchi, 2004; Jackson et al., 2019). Suppose you were trying to teach some course content to someone else. You might decide to use some real-life examples to help your students understand the material. Well, it turns out that this strategy is extremely powerful for remembering the material yourself. Continually ask yourself, “Can I think of an example of this concept from my own life?” or even simply, “How does this apply to me?” Creating a mental link between the course material and what it means to you is one of the very best ways to encode meaning. With practice, you should be able to use this strategy in many of your courses. The self-reference effect is very robust; it has been demonstrated with children, college students, older adults (with and without mild cognitive impairment), and adults and adolescents with autism (Jackson et al., 2019; Lind et al., 2019).

Keep in mind as you consider trying these strategies that they can be hard to do, at least at first. It is certainly harder, and more time-consuming, to do elaborative verbal rehearsal than to simply read a textbook chapter once. But it is no more time-consuming than re-reading a chapter a few times because you know you will not be able to remember it. Also keep in mind that, as you get better at using the strategies, they grow more effective and get easier to use.

elaborative verbal rehearsal: an encoding technique that encourages semantic processing by restating to-be-remembered information in your own words, as if teaching it to someone else

self-reference effect: an encoding technique that encourages semantic processing by applying to-be-remembered information to yourself

Organize Information

Imagine that you are visiting a city for the first time. You have only a vague idea of where you are and you need to get to the post office. What you need is a map. A map can help you to learn where important things are and can help you figure out how to find them.

That is what the organizational aids in this book are, as are the chapter outlines (and tables of contents) in other books and even website sitemaps: they are maps. They are useful for helping you effectively transfer information from working memory into long-term memory because they organize that information in a meaningful way.

If you can organize information meaningfully (or take advantage of a meaningful organization provided for you), it will be more effectively encoded into long-term memory (Bransford et al., 1999; Halpern, 1986). The beauty of this strategy from a practical standpoint in school is that often the work is done for you. Someone has already gone to the trouble of coming up with a meaningful organizational scheme. Use the chapter outlines to plot your route through your textbook. Pay attention during the first five minutes of the lecture when your professor gives you a preview of the day’s lecture and activities.

Signaling Meaning in Advance

One of the reasons that outlines and previews help you put information into long-term memory is that they alert you in advance to the types of information you’ll be encountering. Sometimes just a little bit of information goes a long way. Even something as simple as knowing the title of reading material before you start reading allows you to organize the information so that it makes sense and can be remembered.

John Bransford and his colleagues demonstrated this kind of effect by asking two groups of research participants to remember a paragraph. For the first group, the paragraph alone was presented. Here is one of their paragraphs. See how well you think you would remember it:

The procedure is actually quite simple. First, you arrange things into different groups. Of course, one pile may be sufficient depending on how much there is to do. If you have to go somewhere else due to lack of facilities that is the next step, otherwise you are pretty well set. It is important not to overdo things. That is, it is better to do too few things at once than too many. In the short run, this may not seem important but complications can easily arise. A mistake can be expensive as well. At first, the whole procedure will seem complicated. Soon, however, it will become just another facet of life. It is difficult to foresee any end to the necessity of this task in the immediate future, but then one never can tell. After the procedure is completed one arranges the materials into different groups again. Then they can be put into their appropriate places. Eventually, they will be used once more and the whole cycle will then have to be repeated. However, that is part of life (from Bransford and Johnson, 1972).

Do you think you would do a good job on a memory test for this paragraph? Bransford and Johnson’s participants did very poorly. Although the individual sentences are meaningful, it is difficult to see how they are related to each other—in other words, how they are organized.

The second group of participants read the same paragraph, but before doing so, they were given the title “Doing the Laundry.” Now that you know the title, go back and read the paragraph again and see if it makes sense. If you are like most of Bransford and Johnson’s participants, providing a title makes the paragraph much easier to understand and remember.

What Bransford and Johnson demonstrated is that the title allows readers to make inferences—that is, to use their background knowledge to tie the paragraph together. For example, in the second sentence, the title allows you to draw the inference that the word “things” refers to “clothes.” Inferences like these relate the formerly meaningless paragraph to the knowledge about the world that you already have. By providing a title, Bransford and Johnson allowed participants to activate their own knowledge about the way the world is organized before they started reading the paragraph. The title gave them preexisting memory hooks on which to hang the new words that they were reading.

Highlighting Relationships

In order for organizing to work as an encoding technique, you have to find the organization meaningful. That is, you have to see the organization as more than simply a list of topics. You need to learn to recognize the typical relationships between concepts. An outline or a table of contents, with items indented different amounts and different formatting for various levels of headings, also shows the relationships among the topics: which concepts can be grouped together and which are more important than others. To a very large degree, organizing information to improve encoding is simply a matter of paying attention to these types of relationships.

One very important relationship is between a general principle and an example of that principle. Look for clues in the text of your book, such as introductory phrases (“for example,” “the main idea is,” and the like). When you have identified whether a given statement is a general principle or an example, try to generate the other. If you think it is the general principle, try to come up with a new example. If you think it is an example, make sure you can identify the general principle.

Here are three other types of relationships you should make a habit of distinguishing in the materials you want to remember:

  • Causes and effects.  For example, if we were doing an experiment on violent video games and aggression, the independent variable, exposure to violent video games, is the supposed cause, and the dependent variable, aggressiveness, is the supposed effect (see sec 2.3).
  • Parts and wholes.  For example, a neuron is essentially a small part of the brain (the brain is made up of billions of neurons). Neurons themselves are composed of parts, including the cell body, dendrites, and axons (see secs 5.3/11.1).
  • Levels of a hierarchy. A hierarchy is an organization system in which lower-level, or subordinate, categories are included under higher-level, or superordinate, categories. For example, the levels of living things that you probably learned in biology—kingdom, phylum, class, order, etc.—are organized in a hierarchy.

Any organization scheme that you come up with yourself will be particularly effective. Because you find it personally meaningful, a self-generated scheme will be easily and effectively encoded into long-term memory. You would be doing yourself a tremendous favor if you adopted a good strategy for generating these organizational schemes.

  • In your own words, why is rephrasing textbook material in your own words an effective strategy for encoding information into long-term memory?
  • Why can it be difficult to assemble something using a poorly written instruction manual?
  • Try to think of a situation in your life where you were unable to understand or remember something because you did not know how it was organized.
  • Why is it difficult to understand or remember a movie for which you missed the first 30 minutes?

5.3 Memory Encoding and the Brain

What do you think of when you think of “dog”? Diagram your thoughts about “dog” by following these directions:

  • On a sheet of paper draw a small circle in the middle of the page and write the word “dog” in the circle.
  • Draw a short line out from this first circle and draw another circle at the end of the line; inside the new circle write a word that relates to the word dog (perhaps “tail”).
  • Continue to draw lines out from the concept of dog and draw circles into which you write words that are related to dog. Also, draw some lines out from some of the new concepts and add concepts related to them. For example, if you wrote down “tail” you might connect it to a circle with the word “wag.”
  • When you are finished writing down new concepts, take a few minutes to draw lines connecting some of the concepts that seem to be related.

The network of interrelated items that you have just created is a  concept map. Yours might look something like this:

A concept map has a center circle and five smaller circles connected to it. The central circle is labeled dog. Two circles labeled mammal and fur both connect to dog and also to each other. Two circles labeled tail and friendly connect to dog as well as to a third circle labeled wags. A circle labeled bark is connected to dog and also connects to a second circle labeled loud.

A concept map is, among other things, a good way to organize information for encoding into long-term memory. It signals the meanings of a number of related concepts and highlights the relationships among them (remember the earlier discussion of organizing information and highlighting relationships?). A concept map is also a simple representation of how networks of concepts are formed in the brain.
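For readers who like a more concrete picture, the same structure can be written down as a tiny bit of code. The sketch below is purely illustrative (the specific concepts, links, and the helper function are our own choices, not anything prescribed by the module); it simply records the dog concept map from the figure as a small graph of nodes and links, the same structure the drawing exercise produces on paper.

```python
# Illustrative only: the dog concept map represented as a graph.
# Each concept is a node; each line drawn between circles is a link.
concept_map = {
    "dog":      {"mammal", "fur", "tail", "friendly", "bark"},
    "mammal":   {"dog", "fur"},
    "fur":      {"dog", "mammal"},
    "tail":     {"dog", "wags"},
    "friendly": {"dog", "wags"},
    "wags":     {"tail", "friendly"},
    "bark":     {"dog", "loud"},
    "loud":     {"bark"},
}

def neighbors(concept):
    """Return every concept directly linked to the given one (hypothetical helper)."""
    return concept_map.get(concept, set())

if __name__ == "__main__":
    # Every concept linked to "dog" is a potential route back to it.
    print("Linked to 'dog':", sorted(neighbors("dog")))
```

The point is not the code but the structure it records: the more links a concept has, the more routes lead back to it, which is exactly why encoding meaning, and the relationships among concepts, makes information easier to retrieve later.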

Creating Memories in the Brain: Activation and Synaptic Plasticity

You may already know that the brain is made up of billions of cells called  neurons. For now, you can think of the brain as simply a very large collection of neurons. The neurons are all connected to each other in an extraordinarily complex pattern (one neuron can be simultaneously connected to many other neurons, all of which can be connected to many other neurons, and so on down the line). Neurons are connected to each other by axons, which look like single long branches extending from the cell body, which is the round part of the neuron, and by  dendrites, which are smaller branches splitting off from the cell body. (Each neuron has a single axon but many dendrites.) Electrical and chemical activity that takes place through pathways created by these interconnected neurons determines everything we say, think, feel, or do (see sec 11.1).

The cell body of the neuron is connected to branched extensions called dendrites. The axon extends from the cell body and splits into branches that connect to other neurons.

The neurons are involved in two significant ways when you encode information:

  • Activation . When you encode information and move it into memory, many neurons throughout the brain become active. The neural activity consists of pulses of electricity caused by ions (electrically charged particles) briefly changing locations in your brain. Sodium ions (abbreviated Na+) rush into the axon of a neuron. This movement of ions produces a brief electrical charge inside the neuron, which is then transmitted to many other neurons (see Module 11 for details).
  • Synaptic plasticity . In order to store information for a long time, the brain has to change its very structure—that is, the neurons themselves must change. Brain researchers currently believe that the change in structure can occur either within the individual neurons or through the connections among the billions of neurons in your brain. The connections are called synapses, hence the name synaptic plasticity. Changes that occur inside the neuron cause the neuron to produce more or fewer of the chemicals that it uses to communicate with other neurons, which are called neurotransmitters (see sec 11.3). The synapses are located at the spaces where the axon of one neuron is situated next to the dendrites of a neighboring neuron. Two things can happen in response to changing levels of neurotransmitters: the axons and dendrites can extend or retract, hence changing, ever so slightly, the structure of your brain; and the surface of the neuron can change by having more or fewer receptive areas for neurotransmitters.  Both of these events are forms of synaptic plasticity and occur whenever new information is encountered.

These two kinds of changes, especially activation, happen extremely quickly. And the changes of synaptic plasticity can last a very long time, perhaps even forever. Think about it: any time you have a new experience your brain immediately changes its electrical activity and changes its structure permanently.

activation: the electrical charging of a neuron, which readies it to communicate with other neurons

axon: the single tube in a neuron that carries an electrical signal away, toward other neurons

dendrite : one of the many branches on a neuron that receive incoming signals

neuron : the basic cell of the nervous system; our brain has billions of neurons

neurotransmitter: chemical that carries a neural signal from one neuron to another

synapse: the area between two adjacent neurons, where neural communication occurs

synaptic plasticity: the brain’s ability to change its structure through tiny changes in the surfaces of neurons or in their ability to produce and release neurotransmitters

Storing Memories Across the Brain: Neural Networks

So far, we have just been thinking about connections between two neurons. Let us return now to the idea that neurons are connected to each other in massive three-dimensional, dynamic, organic versions of the concept map. We call these many interconnected neurons  neural networks. Many neuroscientists believe that most memories are not stored in a specific area of the brain but are spread out in interconnected neural networks across many areas of the brain. In other words, brain activation and synaptic plasticity for memories travel throughout the brain.

This neural network idea offers an explanation for why encoding meaning works so well in forming long-lasting memories. When you start searching through your brain for information—a memory—you will have a greater chance of hitting some part of that information if its neural network is spread out and contains a lot of information. A larger, more detailed network that uses lots of neurons will be easier to activate and use than a smaller network.

  • Describe in your own words the changes that take place in your brain when you encode new information into long-term memory.
  • Draw a concept map that includes the concepts from this module.

Have any of the following ever happened to you?

  • You know a fact but can’t come up with it. You have the feeling that it is on the “tip of your tongue.”
  • You blank out on a test question. After a mighty struggle to remember, you give up and leave the question unanswered (or you make a wild guess). Then, the correct answer hits you on the way home like a slap in the head.
  • You (temporarily) forget the name of someone who you know very well.
  • You (temporarily) forget your own phone number.
  • Is it true that you always find your keys in the last place you look for them? (Answer: Yes, because most people stop looking after they find what they were looking for.)

It is the day of the big Political Science mid-term. You have been studying for days. You feel as if your head is so full of political facts, principles, and theories that it is going to explode. Your professor walks in and asks if there are any questions before she hands out the exam. “Please,” you silently beg, “hand out the exam now, before I forget everything I studied.” After ten minutes of questions from classmates (that you don’t listen to because you are too nervous), you get your exam. Question #1: How much of the U.S. government’s budget is spent on foreign aid? You know this. You just studied it last night. It is in your head somewhere if you could only find it. Why can’t you remember? You are struggling with retrieval.

Understanding (and Improving) Retrieval

Memory retrieval  (withdrawing information from long-term memory for use in working memory) is largely a matter of coming up with and using effective retrieval cues. In familiar terms, retrieval cues  are reminders, any information that automatically leads you to remember something. More scientifically, you can think of retrieval cues as entry points into the neural network associated with a particular memory (see sec 5.3).

You might also think of retrieval cues this (decidedly less scientific) way:  Any specific memory you have floating around in your head (the amount of U.S. foreign aid, for example) is slippery. To pull it out of long-term memory and into working memory, you need a hook, something attached to the specific memory that you can grab onto. A retrieval cue is that hook. The very best hooks are ones that you put there yourself during recoding.

To create potential retrieval cues for yourself while you’re studying, you can use the encoding principles we have already described: encode meaning and organize information. The more cues you create through this recoding and the better they are, the better your chances of being able to “grab onto one” when you need it.

Now you might begin to understand why straight repetition is only a mediocre study strategy. To be sure, the repetition of a concept and its definition provides you with a possible retrieval cue. A formerly meaningless term and definition, completely disconnected from the rest of the knowledge in your head, is not the world’s greatest hook, however.

In contrast, consider a retrieval cue that is based on memories from your own life. For example, suppose when trying to encode the concept  procedural memory into your long-term memory, you remembered the time you helped your little sister learn how to tie her shoes. The formerly meaningless concept, procedural memory, now becomes part of your memory for this event.

Importantly, you would probably have a fairly detailed memory of such an event. Any of these details can serve as a possible retrieval cue. Can you picture the smile on your little sister’s face when she finally got her shoes tied right? That can be your hook. Do you remember the feeling of frustration before she caught on? That can be your hook. And so on. Literally anything you might remember about the event can work to remind you of the concept procedural memory.

That is the beauty of making the information personally meaningful (remember, it is called the self-reference effect). It becomes embedded in a rich network of information that is the easiest stuff in the world for you to remember—information about yourself. The specific hook, or retrieval cue, can be any aspect of the event that you can recall. Add this to the recoding that you did based on organization (for example, attending to the relationship between procedural and declarative memory) and by rephrasing the material in your own words, and you have an extremely powerful set of potential retrieval cues, a set of hooks that give you an excellent chance of being able to grab one when you need it.

memory retrieval: withdrawing information from long-term memory into working memory

retrieval cue: a reminder that leads to the withdrawal of information from long-term memory into working memory

Providing a Match Between Encoding and Retrieval

Sometimes, even extensive encoding is not enough to give you a good retrieval cue when you need it. Or, perhaps, you didn’t do a careful job of encoding. What then? Is there still a way to make retrieval cues work in your favor? Fortunately, the answer is yes.

The general strategy that you use to make retrieval cues available and useful is to try to provide some kind of match between the encoding and retrieval situations. This idea is known as the encoding specificity principle (Tulving & Thomson, 1973). If your physiological state or the external environment (the context) is similar during both encoding and retrieval, you have a better chance of coming up with a retrieval cue (Murnane & Phelps, 1993; Smith, 1979). For example, suppose you drank four cups of coffee, each with an extra shot of espresso, when you were encoding information for a big test. You might consider ingesting a bit of caffeine before retrieval time.

Even seemingly trivial aspects of the external environment, such as your location in a room, can be just the match you need to give you a retrieval cue. But hold on before you decide to wear the same clothes every day to take advantage of the encoding specificity effect. Think about what we are saying. The encoding specificity effect allows you to remember something in a situation that closely matches the situation at encoding. That might be helpful for an exam, but is that what you really want to accomplish? For example, suppose you are studying to be a nurse. Do you really want to remember some important medical concept ONLY when you are sitting at your desk, wearing your favorite blue shirt, and chewing peppermint-flavored gum? We thought not. If you really want to learn something, to be able to retrieve it in many future situations, you would do best to simulate that variety when you encode it. In other words, engage in multiple encoding episodes, and vary the context in each (Bjork & Bjork, 2011). This is hard. In fact, it is one of a set of strategies known as desirable difficulties. These are strategies that are difficult to use and make you feel as if you are not learning, but in reality lead to much more effective (and lasting) learning (Bjork & Bjork, 2011; Smith, Glenberg, & Bjork, 1978). You might also consider some of the strategies we have recommended previously (e.g., elaborative verbal rehearsal and generating self-references) to be other types of desirable difficulties. As we said previously, they can be hard to use, but they are extremely effective.

Saving the Best for Last: Retrieval Practice (and Spacing)

So, do you think that the principles we have shared so far can help you in your quest to improve your memory? Well, we have terrific news: We have saved some of the best news for last. There is one strategy that may have been first suggested by Aristotle and has been examined in research for over 100 years. Time and again, this strategy has been found to lead to better memory than re-studying material (Brown, Roediger, & McDermott, 2014). And very few students use this strategy (Karpicke, Butler, & Roediger, 2009). OK, have we kept you in enough suspense? Here it is: If you want to be able to retrieve information from memory, one of the most important things you should do is to PRACTICE RETRIEVING THAT INFORMATION (sorry for yelling, but this is that important). And not just once. You should practice retrieval over time, spacing out your practice sessions as much as you can (Soderstrom et al., 2016; Karpicke & Roediger, 2008). Many students believe that it is more efficient to do all of their studying at one time, but the spacing effect  shows that the very opposite is true.

This is obviously great news because you do not need to recode information, come up with new examples, or struggle with organization to use these strategies. You only need to intentionally practice and organize your time.

Just as a reminder or clarification: we are certainly not saying that you should only practice retrieval with the spacing effect. We are saying that it is the one strategy that may have the largest impact on your ability to remember. So, to summarize, allow us to present a guide to studying that is based on some of the best principles of memory that psychologists have to offer.

  • Spend some time surveying the material before you start reading it. Figure out how it is organized by reading previews and summaries, and paying attention to outlines.
  • Recode for meaning while you read: periodically pause and reflect on what you have just read. Rephrase material and come up with examples from your own life (elaborative verbal rehearsal with self-reference). Note relationships between different concepts. Pay attention to how the current information fits into what you have already learned.
  • Practice retrieving while you are reading. During some of your periodic pauses, cover up what you just read. Try to retrieve the definitions of key terms. Try to generate your elaborative verbal rehearsals without looking at the text.
  • Practice retrieval after reading. Use practice quizzes, flash cards, Quizlet, etc. It is far more effective if you have to come up with the answers yourself rather than just recognize the answer (as in a multiple-choice question).
  • Come up with a schedule that allows you to take advantage of the spacing effect (a simple sketch of one possible schedule follows this list).
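To make the spacing idea concrete, here is a small, purely illustrative sketch of a review schedule. The specific numbers (five review sessions, a one-day first gap, and gaps that double each time) are our own assumptions for illustration only; the module simply recommends spreading your retrieval practice out over time rather than doing it all at once.

```python
from datetime import date, timedelta

def spaced_review_dates(start, sessions=5, first_gap_days=1, growth=2):
    """Return review dates with expanding gaps (illustrative values, not prescribed by the text)."""
    dates = []
    gap = first_gap_days
    current = start
    for _ in range(sessions):
        current = current + timedelta(days=gap)
        dates.append(current)
        gap *= growth  # each review is spaced further out than the last
    return dates

if __name__ == "__main__":
    # First study session on June 1; reviews fall on June 2, 4, 8, 16, and July 2.
    for review_day in spaced_review_dates(date(2021, 6, 1)):
        print(review_day)
```

Whether the gaps expand, stay equal, or follow some other pattern matters less than the basic point: several shorter retrieval-practice sessions spread across days beat one long cramming session.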

desirable difficulties : strategies that are difficult to use and make you feel as if you are not learning, but lead to much more effective and lasting learning

spacing effect : the finding that information that is learned and practiced over a period of time (instead of all at once) is remembered better

  • Try to remember a time that you had a temporary retrieval failure. What retrieval cue eventually helped you to remember?
  • What specific types of retrieval cues do you think work best for you?
  • Do you have any memories in which you see yourself in the third person, as if you were watching yourself on television? Doesn’t that seem odd, considering the fact that you never experience yourself that way?
  • Have you ever had an argument with someone about an event that happened in which the main point of disagreement is that the two of you remember the event differently? Were you both sure that you were right?

College student Charles was always proud of their memory. In school, they rarely took notes and often needed to read a chapter only once to remember it well enough to get a good grade on an exam. They also had many detailed autobiographical memories, several dating back to when they were a very small child. For example, they remembered their mother coming home from the hospital when their brother was born; they were two years, four months old. Or they remembered an early haircut, perhaps their first visit to the barber. They were sitting in the barber’s chair, eating a lollipop (covered with hair, no doubt), while their whole family stood around and watched.

One evening during Charles’s sophomore year, they and their family decided to watch some old family videos to celebrate their parents’ anniversary. Then, suddenly, Charles saw their memory on the television screen. It was their first haircut. Charles’s parents had obviously wanted to remember the event for the rest of their lives, so they decided to capture it on film. There in the family room, Charles saw their entire memory played out on the screen, and they realized that they did not, in fact, have a memory of their first haircut. Charles had a memory of the home movie of their first haircut and had mistakenly believed that it was a memory of the actual event. Charles recognized what had happened because they had just learned about this concept in their psychology class. Forgetting the actual source of a memory is very common; it is called source misattribution  (Schacter, 2001). It is one form of memory distortion.

The early sections of this module emphasized how employing good encoding and retrieval skills can lead you to remember information more effectively. Somewhat hidden in those discussions, however, is an important observation about the way memory works. Although it is fair to accept the existence of different memory systems, such as working memory and long-term memory, it is not fair to assume that information gets copied into these systems perfectly, to be replayed accurately and in its entirety every time the correct retrieval cue is accessed. Memory, it turns out, is much more dynamic than that.

Instead of thinking of memory as something to be recorded and played back, it is more accurate to say you construct memories of events as you go along. The idea of  memory construction  might be hard to accept at first, but it is the simplest way to explain how memories for events change over time. Not only do some of the details of memories fade (as you might realize), but new details also creep into them. For example, imagine that someone tells you a very unusual story that does not make a great deal of sense to you. The story is from a non-Western culture and is quite difficult for you to follow (assuming you are from a Western culture, of course). Over time, as you attempt to recall this story, it will begin to resemble stories that are more familiar to you, with many of the cultural idiosyncrasies forgotten and replaced by themes and details more typical of Western culture (see Window 2).

A number of factors may render a memory incomplete or inaccurate. The kind and amount of processing that takes place at encoding can have a huge impact on the contents of an eventual memory. Also, minor distortions that are consistent with one’s view of the world often creep in. Imagine that you are visiting your psychology professor’s office for the first time. After leaving, you are asked to report what was in the office. Most people have beliefs about what sorts of objects would be in a professor’s office (such as desk, telephone, books), and they would be likely to think they remembered seeing these objects even if they were not actually in the professor’s office. Nearly one-third of the participants in a study similar to the situation just described reported seeing books in a professor’s office—even though the office had been specifically set up without books to test if participants would falsely remember them (Brewer & Treyens, 1981).

Elizabeth Loftus and her colleagues have pioneered research on the  misinformation effect, perhaps the most dramatic demonstration of the way that memory can be distorted. Loftus’s research has demonstrated that information that is given to people after an event occurs, even at retrieval, can lead to memory distortions. For example, research participants who had been shown a slide show of a car accident were later misled to believe that a stop sign was pictured in one of the slides. Many of these participants on a subsequent memory test mistakenly reported that they had seen the stop sign (Loftus et al., 1978).

In another experiment, research participants were asked one of two questions after viewing a videotape of an accident between two cars. In one condition, they were asked, “How fast were the cars going when they hit each other?” In the other condition, participants were asked, “How fast were the cars going when they smashed into each other?” One week later, participants who had been asked the “smashed” version of the question were more likely to report seeing broken glass in the video (Loftus et al., 1985).

The misinformation effect has been demonstrated many times, even leading participants to remember events that did not occur at all, such as spilling a punch bowl or being lost in a mall as a child (Hyman and Pentland, 1996; Loftus and Pickrell, 1995).

memory construction: the process of building up a recollection of an event, rather than “playing” a memory, as if it were a recording

misinformation effect: a memory distortion that results when misleading information is presented to people after an event has occurred

source misattribution: a memory distortion in which a person misremembers the actual source of a memory

  • Can you think of a memory from your life that you would be willing to admit might be a memory distortion?

Module 6: Learning and Conditioning

Many students are confused when they first encounter the concept “learning” in their psychology class. We all know what learning means, having been students for at least 12 years prior to taking a college General Psychology course. Every day, we are asked, encouraged, or forced to “learn” new material in classes. Then you encounter a chapter in a General Psychology textbook called Learning, and it talks about a child who comes to fear a white rat because it is paired with a loud noise or a pigeon that pecks on a surface in order to receive a pellet of food. There seems to be some disconnect here between your experience of learning and what psychologists want to tell you about learning.

But there isn’t really a disconnect. The common thread is this idea: behavior (and knowledge) can change as a result of experience. When it happens, we call it learning.  This is an intentionally broad definition. It encompasses both of the phenomena mentioned earlier—a child learning to fear a rat and a pigeon learning to peck—plus all that you are likely to have in mind when you think of learning.

As you read this module, keep in mind that the learning with which you are most familiar, the kind that takes place in a school setting, involves remembering information in order for you to prove that you learned it (for example, for you to perform well on an exam). Thus, it is often useful for you to think of learning and memory as parts of the same process. How can you remember something if you did not learn it? And how can you say that you have learned something if you do not remember it?

This module describes several basic types of learning, but it focuses primarily on two. The first is classical conditioning, in which the learner comes to associate two events in the environment, called stimuli. The second is operant conditioning, in which the learner comes to associate a behavior with its consequences. Together, classical and operant conditioning are sometimes called associative learning, because both involve learning some association, or link. The last section in the module concludes with a description of some other phenomena that also qualify as types of learning.

6.1 Learning That Events Are Linked: Classical Conditioning

6.2 Learning That Actions Have Consequences: Operant Conditioning

6.3 Other Views of Learning

associative learning: learning based on making a connection between two events in the environment, or stimuli (classical conditioning), or between behavior and its consequences (operant conditioning)

learning: changing knowledge and behavior as a result of experience

By reading and studying Module 6, you should be able to remember and describe:

  • Learning (psychologist’s definition) (6 introduction)
  • Basic elements of classical conditioning: unconditioned stimulus, unconditioned response, conditioned stimulus, conditioned response (6.1)
  • Higher-order conditioning (6.1)
  • Generalization and discrimination (6.1)
  • Extinction and spontaneous recovery (6.1)
  • Basic elements of operant conditioning: positive reinforcement, negative reinforcement, positive punishment, negative punishment (6.2)
  • Shaping (6.2)
  • Continuous and partial reinforcement (6.2)
  • Immediate and delayed consequences (6.2)
  • Side effects of punishment (6.2)
  • Primary and secondary reinforcers (6.2)
  • Observational learning, non-associative learning, habituation, sensitization (6.3)

By reading and thinking about how the concepts in Module 6 apply to real life, you should be able to:

  • Recognize and explain examples of classical conditioning (6.1)
  • Recognize and explain examples of operant conditioning (6.2)
  • Recognize and explain examples of observational learning (6.3)
  • Recognize and explain examples of non-associative learning (6.3)
  • Explain why some bad habit in yourself or others has developed using principles from the module (6.1, 6.2, 6.3)
  • Devise a strategy for studying that uses principles from the module (6.1, 6.2, 6.3)
  • Have you ever developed an aversion to a food because of a bad experience with it? What happened?
  • Have you ever developed a fear of some object or situation because of a bad experience? What happened?
  • Do you generally eat meals at the same time every day? If so, what happens if you miss a regularly scheduled meal?

Ed is a 55-year-old former Military Police officer. He has complained that he cannot drink beer because, as an MP, he often had to break up fights at bars. Even today, decades after his duty, he finds that the smell of beer gets him too worked up.

Ciara is a dog who seems to be able to read her owner’s mind. She seems to know that her owner is going to take her running as soon as he decides to do it.

Although it may not be obvious at first, these two descriptions are both examples of the same psychological phenomenon, classical conditioning. Classical conditioning  is learning that two stimuli are associated with each other. A  stimulus  is simply an event or occurrence that takes place in the environment and leads to a response, or a reaction, in an individual. For example, suppose that you are fortunate enough to have someone feed you dinner every night. Further, suppose that this kind person does this at the same time every night, 6:00 pm. On the first day, you look up at the clock and see that it is 6:00 (a stimulus), and your benefactor makes dinner appear in front of you (another stimulus). Second day, same thing: 6:00, and dinner appears. It will not take you too many days to learn that these two stimuli are associated: every time the clock says 6:00, someone gives you dinner. This is the essence of classical conditioning, and it explains a wide variety of animal and human behavior.

Think carefully for a moment about how we could tell that someone has learned to associate the time on the clock with dinner. Consider the second part of the definition of a stimulus: it leads to a response. One way to see if someone has learned that two stimuli are associated would be to observe how he or she responds to the two. If we discover that the person responds to the clock the same way that she responds to dinner, it is reasonable to conclude that she has learned the association between the clock and dinner. Specifically, when you begin to eat dinner, your body responds in very specific ways—for example, salivation begins, the stomach begins to secrete acids, the pancreas begins to secrete insulin, and so on. To keep things simple, focus on the salivation response for a moment. If the person begins to salivate when the clock says 6:00 pm, even when dinner is not served, we can tell that she has learned to associate the two stimuli.

And precisely what association is learned? As psychologists have observed, it is that one stimulus predicts  that the other stimulus is about to occur. So, the 6:00 clock face predicts that dinner is about to occur, or the smell of beer for a military police officer predicts that he will soon encounter a fight that he will have to break up.

The mechanisms of classical conditioning were originally spelled out in the early 1900s by the Russian physiologist Ivan Pavlov. A bit later, classical conditioning was embraced by a group of psychologists known as the behaviorists. John B. Watson, the most famous and influential of the behaviorists, believed that the principles developed by Pavlov could explain all of human behavior. Classical conditioning does do a good job of explaining some very interesting aspects of human (and animal) behavior, although it falls short of being a complete explanation of human psychology (see Module 9).

classical conditioning: a type of associative learning, in which two stimuli are associated, or linked, with each other

response: a reaction to something that takes place in the environment (a stimulus)

stimulus: an event or occurrence that takes place in the environment and leads to a response in an individual

How Two Events Become Linked: Stimulus and Response

With these basic ideas in mind, we can take a closer look at the details of classical conditioning. As you begin to learn the distinctions among some important terms—what are known as the unconditioned stimulus, unconditioned response, conditioned stimulus, and conditioned response—try to avoid the temptation to memorize through repetition. Rather, recode to make these concepts meaningful (see Module 5). Two key questions will help you in this regard (a compact sketch combining them follows the list):

  • Are you considering something that originated in the environment, or is it a person’s (or animal’s) reaction to something in the environment? If it originated in the environment, it is a stimulus. If it is a reaction to the stimulus originating in the person (or animal), it is a response.
  • Are you looking at a relationship between a stimulus and response that is automatic (unlearned), or did the person (or animal) have to learn it? Conditioned means learned, so the answer to this question will tell you whether you are looking at a stimulus and response pairing that is unconditioned or conditioned. Specifically, automatic stimulus-response pairings are called unconditioned, and learned pairings are called conditioned.
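If it helps to see the two questions side by side, here is a purely illustrative sketch that combines them into one decision rule. The function and argument names are our own, not the textbook’s; the first question decides whether you are looking at a stimulus or a response, and the second decides whether the pairing is unconditioned (automatic) or conditioned (learned).

```python
def classify(originated_in_environment, automatic):
    """Combine the module's two key questions into one label (hypothetical helper)."""
    kind = "stimulus" if originated_in_environment else "response"
    learning = "unconditioned" if automatic else "conditioned"
    return learning + " " + kind

if __name__ == "__main__":
    # Ed's example from the next paragraphs:
    print(classify(True, True))    # unconditioned stimulus (the bar fight)
    print(classify(False, True))   # unconditioned response (fight-or-flight reaction)
    print(classify(True, False))   # conditioned stimulus (the smell of beer)
    print(classify(False, False))  # conditioned response (fight-or-flight to the smell alone)
```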

Let’s apply these two questions to a specific classical conditioning example. Before Ed became a Military Police officer, the smell of beer probably had very little effect on him. Being called upon to break up a fight, however, does lead to the automatic, and very dramatic, “fight-or-flight” response. For instance, the heart will begin to race, and the digestive system will shut down as blood is diverted from it to the body systems that will allow the person to face the physical danger, principally the respiratory system, circulatory system, and the large skeletal muscles of the arms and legs (see Modules 11 and 28).

A diagram describing the classical conditioning process. Before conditioning, a fight in the bar (an unconditioned stimulus) leads to a fight-or-flight reaction (unconditioned response), whereas the smell of beer (a neutral stimulus) does not lead to a particular response. During conditioning, the smell of beer is repeatedly paired with a fight: every time beer is smelled, he encounters a fight. After conditioning, a fight in a bar (unconditioned stimulus) leads to a fight-or-flight reaction (unconditioned response), and the beer smell (now a conditioned stimulus) leads to a fight-or-flight reaction (a conditioned response).

The two questions, in this case, are easy to answer:

  • The part that originated in the environment is the fight (the stimulus), and the physiological changes that Ed experiences are what the person does in reaction to the stimulus (the response).
  • The fight-or-flight response is automatic, that is, unlearned, so we are observing an unconditioned stimulus (UCS) and an unconditioned response (UCR).

Diagram showing a fight in a bar (unconditioned stimulus) resulting in a fight-or-flight reaction (unconditioned response)

Classical conditioning occurs when the unconditioned stimulus is paired with something else that originates in the environment (another stimulus), in this case, the smell of beer. Originally, this stimulus had no particular power to produce a response. In other words, it was a neutral stimulus . Over the course of his experience as an MP officer, though, every time Ed smelled beer, he found himself confronted with a fight to break up. After a few pairings of beer with fight, Ed began to have a fight-or-flight response when he smelled beer alone. This new response was learned, or conditioned, so it is called the conditioned response (CR). The stimulus that elicited it, the smell of beer, is called the  conditioned stimulus (CS).

The smell of beer used to be neutral for Ed, but because of the pairing with the bar fights, he learned to associate the two stimuli. Thus, he has been classically conditioned. Again, what he has learned is that the smell of beer—the conditioned stimulus—predicts that a fight—the unconditioned stimulus—is about to occur.

In real life, it is not always so easy to decide whether something is a stimulus or response. A stimulus may occur because of something you did, so it might seem like a response. For example, Ed might open a beer bottle to produce the smell of beer. But that beer smell itself comes from the environment and leads to a response, so it is a stimulus. Similarly, suppose you are the person who feeds yourself at 6:00 every night. Although you prepare the dinner, the food comes from the environment and leads to a response in you (the digestive response). Therefore, it is a stimulus.

Also, it can be challenging to tell the difference between conditioned and unconditioned. If you are having difficulty, consider this observation. The unconditioned stimulus is ALWAYS the unconditioned stimulus. A bar fight leads to the fight-or-flight response the first time, fifth time, tenth time, and thousandth time it happens. It does not change. The conditioned stimulus changes. At first (before conditioning), it is neutral. It leads to nothing interesting. It is only after repeated pairing of this stimulus with the unconditioned stimulus that it begins to lead to a response. In other words, it does change. It goes from being a neutral stimulus to a conditioned stimulus.

One other realization might help you keep the distinction between conditioned and unconditioned straight. Think again about what, precisely, is being learned. The individual comes to realize that some formerly meaningless stimulus (smell of beer, clock saying 6:00, etc.) has begun to predict that something important is about to occur.

conditioned response (CR): In classical conditioning, an organism’s learned response to a conditioned stimulus

conditioned stimulus (CS): In classical conditioning, an environmental event that an organism associates with an unconditioned stimulus; the conditioned stimulus begins to lead to a reaction that is similar to an unconditioned response.

neutral stimulus : In classical conditioning, an environmental event that does not lead to any particular response related to the conditioning situation. This stimulus will become a conditioned stimulus.

unconditioned response (UCR): In classical conditioning, an organism’s automatic (unlearned) reaction to an unconditioned stimulus

unconditioned stimulus (UCS): In classical conditioning, the environmental event that leads to an automatic (unlearned) response

Higher-Order Conditioning

All right. Now suppose you have a very strongly learned conditioned stimulus. As an MP officer, you are unlucky enough to be called on to break up hundreds of fights, each with its corresponding beer smell. In cases like this, the conditioned stimulus can become so well established that it can eventually become an unconditioned stimulus in a future round of classical conditioning. This type of conditioning is called  higher-order conditioning. Again, think of it as a “later round” of conditioning. A conditioned stimulus in round one that is very well established becomes the automatic, or unconditioned, stimulus in round two. Higher-order conditioning can then repeat several times until it is difficult to identify the original conditioned stimulus.

For example, consider Ciara, the dog we mentioned at the beginning of the section. She has always loved going running with her owner. This stimulus, being taken running, leads to an automatic response: she gets excited. Thus, these are the unconditioned stimulus and unconditioned response. Over time, a neutral stimulus, namely a leash, gets paired with the unconditioned stimulus (every time the owner gets the leash, Ciara gets taken running). Thus, the leash becomes a conditioned stimulus that causes Ciara to get excited (conditioned response). Round 1 is over.

Now, begin conditioning round 2. Because the leash had become a strong conditioned stimulus at the end of round 1, it will become an unconditioned stimulus in round 2. A new neutral stimulus, namely the owner putting on running shoes, now gets paired with the new unconditioned stimulus (every time they put on their running shoes, they get the leash). At the end of round 2, putting on running shoes is a conditioned stimulus that will cause Ciara to get excited.


The conditioning can continue to round 3. A new neutral stimulus, the owner going into the closet to get the shoes, gets paired with the new unconditioned stimulus (the conditioned stimulus from round 2, putting on the shoes). And so on. By the end of several rounds, Ciara has undergone 4th, 5th, or even 6th order conditioning, as she learns to associate new stimuli with previously learned stimuli.

How Conditioned Responses May Change with Time and Experience

A few more details will help you to recognize and understand the many examples of classical conditioning that you may encounter. The period during which classical conditioning occurs is called acquisition. During acquisition, in order for conditioning to occur, the conditioned stimulus must come before the unconditioned stimulus. If you recall the earlier point about prediction, it is easy to see why this is so. In order for the conditioned stimulus to be a good predictor of the unconditioned stimulus, it must come first. A predictor that occurs after, or at the same time as, the event it is supposed to predict is not very useful.

Imagine that you were once bitten by a big yellow dog named Rex. You might easily develop a fear of Rex through classical conditioning. But many people who have an experience like this go on to fear other dogs as well, even little white or black or brown ones; in some cases, they may come to fear all dogs. What has happened is that stimulus generalization has taken place. Stimulus generalization  occurs whenever a conditioned response occurs in the presence of stimuli that are similar to the original conditioned stimulus. On the other hand, what if you have a dog? In this case, although you might still develop a fear of Rex and some other dogs, it is likely that you would not come to fear your own dog. In this case, stimulus discrimination has occurred. Stimulus discrimination  is when a conditioned response does not occur in the presence of a stimulus similar to the original conditioned stimulus.

Classical conditioning effects do not last forever; they fade over time. If a conditioned stimulus is presented repeatedly without pairing it with the unconditioned stimulus, the conditioned response will grow weaker and eventually disappear. This is called extinction. For example, suppose Rex bites you, and then you adopt a new puppy. At first, because of generalization, you may have a classically conditioned fear of the new puppy. Over time, however, as this puppy (a conditioned stimulus) is presented to you without the unconditioned stimulus—she does not bite you—your fear may fade.

The concept of extinction is perhaps misnamed, however, because the conditioned response is not really dead. After a delay, it will reappear in a weakened form, a process called spontaneous recovery.

acquisition: the period during which classical conditioning occurs

extinction: in classical conditioning, the fading away of a conditioned response after repeated presentation of a conditioned stimulus without the unconditioned stimulus

spontaneous recovery: in classical conditioning, the reappearance of a formerly extinct conditioned response after a delay

stimulus discrimination: in classical conditioning, a situation in which an organism learns to not have a conditioned response in the presence of stimuli similar to the original conditioned stimulus

stimulus generalization: in classical conditioning, a situation in which an organism has a conditioned response in the presence of stimuli similar to the original conditioned stimulus

How Understanding Classical Conditioning Can Help You

You may now feel that you can recognize some examples of classical conditioning in your life. It may not be obvious, however, that you can use this knowledge to help you. The thing for you to realize is that many typical conditioned responses—good and bad habits, if you will—are classically conditioned.

For example, many people have a bad habit of falling asleep when they study, particularly if they try to study in bed. Perhaps now you can see this habit as classical conditioning. The comfortable stimulus of your bed may be an unconditioned stimulus that leads to an unconditioned response of drowsiness. If you frequently read your chemistry textbook in bed, the textbook will become a conditioned stimulus that will also make you feel drowsy (even later when you do not read it in bed). Stimulus generalization may also occur, and then you might discover that any textbook (except psychology, of course) makes you feel drowsy.

Other habits, such as being anxious or being unable to study in certain situations, may likewise be examples of classical conditioning. The trick is for you to recognize them as such and use the principles you learned in this section to break the habits. Stop pairing the conditioned stimulus with the unconditioned stimulus (for example, stop reading in bed) to encourage extinction to occur.

Better yet, you might try to make your chemistry book a conditioned stimulus for a more productive conditioned response, such as studying and being alert. This concept, called  counterconditioning, replaces an original conditioned response with a new, incompatible conditioned response; it is the basis for a common therapy that psychologists use to treat phobias (see Module 30).

  • If you were able to answer “yes” to any of the questions in the Activate exercise, describe your experiences as examples of classical conditioning, being sure to label the unconditioned stimulus, unconditioned response, conditioned stimulus, and conditioned response.
  • If you did not generate any examples in the Activate exercise, describe a new example of a time when you learned the association between two stimuli. Again, be sure you can label the UCS, UCR, CS, and CR.
  • Describe a behavior or activity that you do because you have been rewarded for it in the past.
  • Describe a behavior or activity that you used to do, but do not do any longer because you were punished for it.

Suppose you decide to study for an upcoming exam by using recoding for meaning and retrieval practice with the spacing effect. When you get your exam score, you find that you got the highest grade you have ever received. Assuming you find this consequence pleasant, you will be more likely in the future to study using the same techniques. On the other hand, suppose you insult your psychology professor by pointing out that his clothes look funny. On your next written assignment, you get the lowest grade you have ever received. Assuming this consequence is unpleasant, you are rather unlikely to insult your professor again in the future. This, in a nutshell, is operant conditioning.

Understanding Different Kinds of Consequences

In order to understand operant conditioning well, you have to learn to distinguish between the two different types of consequences, known as reinforcement and punishment. As you read the descriptions, take time to understand them and be careful; these concepts are among the most misunderstood in all of psychology.

Let’s start abstractly, with the general ideas. Again, pleasant consequences make it more likely that you will repeat a behavior in the future, and unpleasant consequences make it less likely that you will repeat a behavior in the future. Consequences that make it more likely that you will repeat a behavior are called reinforcements, whereas consequences that make it less likely that you will repeat a behavior are called punishments.

There is already a complication that makes it difficult to recognize the difference between reinforcement and punishment. Basically, there are two main ways that we could do something pleasant to you: we can give you something good, or we can take away something bad. Similarly, there are two ways that we could do something unpleasant to you: We can give you something bad, or we can take away something good. These four possibilities constitute the four main types of consequences that are important in operant conditioning; they are called  positive reinforcement, negative reinforcement, positive punishment,  and negative punishment.  This diagram shows how you can decide which type of consequence is creating the learning:

A flow chart describing the operant conditioning decision tree. The top of the tree is labeled Are you more likely (pleasant consquence) or less likely (unpleasant consequence) to repeat the behavior in the future? Depending on whether or you are more or less likely two branches extend. If more likely is chosen, the branch is labeled Reinforcement (pleasant). From here a box is linked labeled is it pleasant because something good happened or bechause something bad was removed? If good, the final box is Positive reinforcement. If something bad was removed, the final box is Negative reinforcement. The second main branch is chose if one is less likely to repeath the behavior. The next box in this tree is Punishment (unpleasant). The Punishment box leads to a box labeled Is it unpleasant because something bad happened or because something good was removed? If something bad happened, the final box is Positive punishment. If something good was removed, the final box in the decision tree is negative punishment.

Note that the terms “positive” and “negative” have nothing whatsoever to do with whether the consequence is pleasant or unpleasant. They refer only to whether something was done to you (positive) or taken away from you (negative).

Another reason that many students find it difficult to recognize examples of punishment is that they believe that someone must be doing the punishing. This misconception is consistent with the common usage of the English word punishment— as, for example, when we talk about parents punishing their children. It does not matter where the consequence comes from, however. If the consequence is unpleasant and you are less likely to repeat the behavior in the future, it is punishment.

Let us summarize and recap with an example of each of the four types of consequences:

  • Your decision to study using the Module 5 techniques in the future because of the good grade you received on an exam is an example of positive reinforcement.  The pleasant consequence occurred because something good (the high grade) happened to you. You are likely to repeat the behavior again—that is what makes the consequence a reinforcement.
  • Imagine that you are plagued by anxiety whenever you do not study hard enough for an exam. Every time you do study, you find that the anxiety goes away. You are likely to find this consequence (getting rid of the anxiety) pleasant. Therefore, you are likely to increase your studying, so this consequence is a reinforcement for your behavior. Because the reinforcement occurred by taking away something bad (the anxiety), it is negative reinforcement.
  • The example of insulting your psychology professor is positive punishment.  The consequence, getting a low grade, will make you unlikely to insult your professor again, which makes it a punishment. And getting a low grade is something bad that happened to you, not a good thing that was removed.
  • Losing driving privileges as a consequence of committing traffic violations is an example of negative punishment.  A driver who has their license suspended becomes less likely to commit the violations in the future. Thus, because the unpleasant consequence occurs by taking away something good (the right to drive), it is negative punishment.

One additional distinction that you should know is between primary and secondary reinforcers. A primary reinforcer  gains its power to increase behavior because it satisfies some biological need. The clearest examples are food and water. A secondary reinforcer  gains its power to increase behavior through learning. In short, you learn that a secondary reinforcer is valuable; hence, it is perceived as rewarding. Perhaps the best example of a secondary reinforcer is money. Both primary and secondary reinforcers can be quite effective at increasing behaviors.

negative punishment: in operant conditioning, punishment that occurs because of the removal of something good

negative reinforcement: in operant conditioning, reinforcement that occurs because of the removal of something bad

positive punishment: in operant conditioning, punishment that occurs because of the addition of something bad

positive reinforcement: in operant conditioning, reinforcement that occurs because of the addition of something good (i.e.that is, a reward)

primary reinforcer: a reinforcer that meets some biological need

punishment: in operant conditioning, a consequence of behavior that makes it less likely that the organism will repeat the behavior in the future

reinforcement: in operant conditioning, a consequence of behavior that makes it more likely that the organism will repeat the behavior in the future

secondary reinforcer: a reinforcer that has the power to increase behavior because the organism learns that it is valuable

Why Reinforcement Works Better Than Punishment

You have perhaps noticed something missing in the earlier examples of positive and negative punishment. Consider the insulting the psychology professor scenario. Although you may stop the face-to-face insults if you received a low grade on your assignment, would you then go around thinking and saying only good things about your professor? Probably not. On the contrary, you would likely be very angry and might engage in some other behavior that the professor might find objectionable (for example, complaining to the department head, or leaving a bad review on Ratemyprofessor.com). An important fact to realize about punishment is that, although it may decrease the likelihood of a specific behavior, it does not necessarily replace that behavior with a more appropriate one.

Because punishment only tells you what not to do and not what to do, many psychologists favor the use of reinforcement when trying to influence the behavior of others. Parents, for example, are advised to use punishment sparingly, because it might be followed by some other unwanted behavior. To be sure, when it is necessary to stop a dangerous behavior quickly, punishment may be the only practical means available. A parent should always keep in mind, however, that the child needs to be shown what to do after being shown what not to do. Finally, note that punishment does not refer to physical punishment( e.g., corporal punishment), which is rarely recommended by psychologists (see Module 17).

How Time Affects the Link Between Behavior and Consequence

It is not only the pleasantness or unpleasantness of a consequence that determines its influence on subsequent behavior. Two additional important factors are the amount of time that passes between the behavior and the consequence and the frequency of the consequence.

The amount of time between behavior and consequence has a very strong influence on how effective operant conditioning will be. Immediate consequences are much more effective than delayed consequences. For example, some people find it difficult to take advantage of the delayed positive reinforcement that results from working hard, such as good grades in school or recognition at work. Instead, their behavior is more likely to be influenced by the immediate consequences of goofing off, such as having fun.

As for the frequency of the consequence, suppose someone starts giving you $10 every time you answer a question during class discussions, even if you are wrong. How long do you think it would take you to start answering every question? Then suddenly, your benefactor stops paying you for talking. Now, how long do you think it would take for you to stop answering questions? You have just discovered the characteristics of what is known as continuous reinforcement —reinforcement that occurs every time the behavior does. You probably realized that with a schedule of continuous reinforcement you would acquire the behavior (answering questions in class) very quickly, which is exactly what is observed when continuous reinforcement is used. You probably also predicted that soon after the reinforcement stops coming, you would stop doing the behavior. Again, this is what happens with continuous reinforcement. Rapid learning and rapid extinction are the hallmarks of continuous reinforcement.

Suppose instead that you get money for speaking in class, but not every time you do it. This method of reinforcement is known as a partial reinforcement  schedule. It may take you a while to begin speaking in class, but what do you think will happen when the reinforcement stops? Perhaps you figured out that extinction would be much slower with partial reinforcement than with continuous reinforcement.

For example, consider a dog that continues to beg for scraps whenever someone takes food out of the refrigerator, despite the fact that the entire family has been instructed not to give the dog people food. Many years ago, however, this behavior was reinforced on an occasional basis by a well-meaning, but uninformed relative. (“Oh, every so often won’t matter, as long as I don’t feed him every time.”) He continued begging for 10 years. You can just imagine the dog thinking, “ This time, he’s going to give me the piece of cheese.”

Similar to what we saw for punishment, there is a clear parenting application to this concept. Parents who give in to their children’s tantrums only occasionally are essentially using partial reinforcement of the bad behavior. Much of the advice to parents that they must be consistent in their parenting practices relates to the pitfalls of partial reinforcement.

continuous reinforcement:  reinforcement that occurs after every appearance of a behavior. It leads to rapid learning; when the reinforcement stops, extinction is rapid

partial reinforcement:  reinforcement that occurs only after some appearances of a behavior. It leads to slow learning; when the reinforcement stops, extinction is slow

How Operant Conditioning and Classical Conditioning Work Together

The separate discussions of operant and classical conditioning in this book reflect the historical development of the two concepts, and make it easier to explain them. In the normal course of the day, however, operant and classical conditioning are not separate. They can work together to cause human and animal behavior.

For example, imagine that your family has a cat that has a habit of walking on the dining room table. In order to train the cat to get off the table, many people use a bottle to spray it with water. Eventually, your kitty would jump off the table as soon as you walked into the room with your spray bottle. The pairing of spray bottle (a CS) with the jet of water (a UCS) is a straightforward example of classical conditioning; your cat learns that the appearance of the spray bottle predicted that she was about to get wet. Getting hit between the eyes with a stream of water (you have to practice your aiming to achieve this) for jumping on the table is a good example of operant conditioning (specifically, positive punishment). Together, these two forms of conditioning can help kitty learn to stay off the table.

How Shaping Can Help You Change Behavior

It is probably fairly obvious how you can use the principles of operant conditioning in your own life. For example, there are clear ways to apply the ideas to change your own behavior to improve your study habits or to change your children’s behavior. You should know about one more concept to help you in case you ever decide to try these principles, however.

Imagine that you decide to use operant conditioning to increase the length of time that you can study without your mind wandering. Currently, you can make it for about five minutes before some distraction becomes so magnetic that you cannot resist leaving your desk. Your goal is to study for one hour at a time without interruption, so you decide to use positive reinforcement; you will reward yourself with one dollar in a “shopping spree” jar every time you are able to study for one hour. At the end of the month, you can spend that money on anything you want. After 30 days, you reach for your jar to discover with dismay that it is empty. You were never once able to make it to one hour without distraction.

The concept that you need to know to start filling that jar and increasing the length of time that you can study is called shaping. Shaping  is teaching (or learning) a new behavior by reinforcing closer and closer approximations to the desired behavior. Rather than waiting until you manage a full hour of studying to reward yourself, you can give yourself the money every time you are able to study for 10 minutes, only 5 minutes longer than your current behavior. After you are able to consistently study for 10 minutes, you reward yourself only when you are able to study for 15 minutes, another 5-minute increase over your current study time. Over several weeks, you should be able to increase your study time to an hour straight, but it will be easy because every increase was only a small bump up from what you could already do. Shaping can be used to learn very complex behaviors; the key is keeping individual steps small.

  • The situations that you described in the Activate exercise (things that happened to you) were probably examples of positive reinforcement and positive punishment. Please think of several additional examples of operant conditioning that you have experienced so you have examples of positive reinforcement, negative reinforcement, positive punishment, and negative punishment). In each case, was the reinforcement continuous or partial? Were the consequences immediate or delayed?

6.3 Some Other Types of Learning

Stop us if you have heard this one before. Mom and Dad were growing distressed that their two children,  Joey, aged 9, and Zoey, aged 11, had begun to use profanity. They realized that they needed to do something fast, so they decided to use physical punishment to stop their children’s swearing. That night at dinner, Zoey says, “Pass the f***ing salt.” Dad immediately turns to Zoey and smacks her face, knocking her glass of milk onto her lap and her food to the floor. Then, he turns to Joey and demands, “Well, what do you have to say about it?” Joey looks at the mess and replies, “Well, you can bet your a** I’m not going to ask for the f***ing salt.”

For the record: we do not condone physical punishment under any circumstances (see Module 17). We just dug up this old joke to make a point about observational learning , which is learning that occurs through watching others’ behavior. Of particular importance is the observation of operant conditioning in someone else. For example, if a child observes his sister being punished for a behavior he may learn not to do it as effectively as if he were being punished himself. Also, as the joke hilariously illustrates (we assume you are still laughing at it), because punishment does not directly tell learners what they are supposed to do, is not a particularly efficient way to change people’s behavior.

Throughout the rest of this module, we have been describing different kinds of associative learning. Even observational learning involves learning that behavior is associated with consequences, just as in operant conditioning. You might be wondering if there is such a thing as non-associative learning. The answer is yes. Non-associative learning  occurs when the repetition of a single stimulus leads to a change in an individual. Note, of course, that this stimulus is not linked with anything; it just occurs repeatedly. Over time, your experience, even your very perception of that stimulus might change. Allow us to explain with a couple of examples. Imagine that you visit a friend who lives near an airport for the first time. As your friend is making coffee for the two of you, an airplane flies overhead, and you practically jump out of your skin. Your friend, on the other hand, does not even react, continuing to make the coffee as if nothing has happened. You cannot believe it. “How can you even hear yourself think with that deafening noise?” you ask. “Oh, I got used to it. I barely even hear it anymore,” your friend answers. Your friend has experienced habituation , in which the repetition of the stimulus leads to a reduced reaction or perception over time. On the other hand, consider the opposite kind of non-associative learning, sensitization , in which the repetition of the stimulus causes a stronger reaction or perception over time. We like to call this the annoying-little-brother effect. Imagine that you have a little brother who has the worst habit of scraping his teeth on his fork when he takes it out of his mouth. You are absolutely convinced that he is doing it louder and louder just to annoy you. Maybe. Or maybe you have just experienced sensitization.

habituation : non-associative learning type in which the repetition of some stimulus over time leads to a reduced reaction to the stimulus

non-associative learning : learning, or change, that occurs because of the repetition of a single stimulus over time

observational learning : learning that occurs through watching others’ behavior

sensitization : non-associative learning type in which the repetition of some stimulus over time leads to a stronger reaction to the stimulus

Module 7: Thinking, Reasoning, and Problem-Solving

This module is about how a solid working knowledge of psychological principles can help you to think more effectively, so you can succeed in school and life. You might be inclined to believe that—because you have been thinking for as long as you can remember, because you are able to figure out the solution to many problems, because you feel capable of using logic to argue a point, because you can evaluate whether the things you read and hear make sense—you do not need any special training in thinking. But this, of course, is one of the key barriers to helping people think better. If you do not believe that there is anything wrong, why try to fix it?

The human brain is indeed a remarkable thinking machine, capable of amazing, complex, creative, logical thoughts. Why, then, are we telling you that you need to learn how to think? Mainly because one major lesson from cognitive psychology is that these capabilities of the human brain are relatively infrequently realized. Many psychologists believe that people are essentially “cognitive misers.” It is not that we are lazy, but that we have a tendency to expend the least amount of mental effort necessary. Although you may not realize it, it actually takes a great deal of energy to think. Careful, deliberative reasoning and critical thinking are very difficult. Because we seem to be successful without going to the trouble of using these skills well, it feels unnecessary to develop them. As you shall see, however, there are many pitfalls in the cognitive processes described in this module. When people do not devote extra effort to learning and improving reasoning, problem solving, and critical thinking skills, they make many errors.

As is true for memory, if you develop the cognitive skills presented in this module, you will be more successful in school. It is important that you realize, however, that these skills will help you far beyond school, even more so than a good memory will. Although it is somewhat useful to have a good memory, ten years from now no potential employer will care how many questions you got right on multiple choice exams during college. All of them will, however, recognize whether you are a logical, analytical, critical thinker. With these thinking skills, you will be an effective, persuasive communicator and an excellent problem solver.

The module begins by describing different kinds of thought and knowledge, especially conceptual knowledge and critical thinking. An understanding of these differences will be valuable as you progress through school and encounter different assignments that require you to tap into different kinds of knowledge. The second section covers deductive and inductive reasoning, which are processes we use to construct and evaluate strong arguments. They are essential skills to have whenever you are trying to persuade someone (including yourself) of some point, or to respond to someone’s efforts to persuade you. The module ends with a section about problem-solving. A solid understanding of the key processes involved in problem-solving will help you to handle many daily challenges.

7.1. Different kinds of thought

7.2. Reasoning and Judgment

7.3. Problem Solving

By reading and studying Module 7, you should be able to remember and describe:

  • Concepts and inferences (7.1)
  • Procedural knowledge (7.1)
  • Metacognition (7.1)
  • Characteristics of critical thinking:  skepticism; identify biases, distortions, omissions, and assumptions; reasoning and problem-solving skills (7.1)
  • Reasoning:  deductive reasoning, deductively valid argument, inductive reasoning, inductively strong argument, availability heuristic, representativeness heuristic  (7.2)
  • Fixation:  functional fixedness, mental set  (7.3)
  • Algorithms, heuristics, and the role of confirmation bias (7.3)
  • Effective problem-solving sequence (7.3)

By reading and thinking about how the concepts in Module 7 apply to real life, you should be able to:

  • Identify which type of knowledge a piece of information is (7.1)
  • Recognize examples of deductive and inductive reasoning (7.2)
  • Recognize judgments that have probably been influenced by the availability heuristic (7.2)
  • Recognize examples of problem-solving heuristics and algorithms (7.3)

By reading and thinking about Module 7, participating in classroom activities, and completing out-of-class assignments, you should be able to:

  • Use the principles of critical thinking to evaluate information (7.1)
  • Explain whether examples of reasoning arguments are deductively valid or inductively strong (7.2)
  • Outline how you could try to solve a problem from your life using the effective problem-solving sequence (7.3)

7.1. Different kinds of thought and knowledge

  • Take a few minutes to write down everything that you know about dogs.
  • Do you believe that:
  • Psychic ability exists?
  • Hypnosis is an altered state of consciousness?
  • Magnet therapy is effective for relieving pain?
  • Aerobic exercise is an effective treatment for depression?
  • UFO’s from outer space have visited earth?

On what do you base your belief or disbelief for the questions above?

Of course, we all know what is meant by the words  think  and  knowledge . You probably also realize that they are not unitary concepts; there are different kinds of thought and knowledge. In this section, let us look at some of these differences. If you are familiar with these different kinds of thought and pay attention to them in your classes, it will help you to focus on the right goals, learn more effectively, and succeed in school. Different assignments and requirements in school call on you to use different kinds of knowledge or thought, so it will be very helpful for you to learn to recognize them (Anderson, et al. 2001).

Factual and conceptual knowledge

Module 5 introduced the idea of declarative memory, which is composed of facts and episodes. If you have ever played a trivia game or watched Jeopardy on TV, you realize that the human brain is able to hold an extraordinary number of facts. Likewise, you realize that each of us has an enormous store of episodes, essential facts about events that happened in our own lives. It may be difficult to keep that in mind when we are struggling to retrieve one of those facts while taking an exam, however. Part of the problem is that, in contradiction to the advice from Module 5, many students continue to try to memorize course material as a series of unrelated facts (picture a history student simply trying to memorize history as a set of unrelated dates without any coherent story tying them together). Facts in the real world are not random and unorganized, however. It is the way that they are organized that constitutes a second key kind of knowledge, conceptual.

Concepts  are nothing more than our mental representations of categories of things in the world. For example, think about dogs. When you do this, you might remember specific facts about dogs, such as they have fur and they bark. You may also recall dogs that you have encountered and picture them in your mind. All of this information (and more) makes up your concept of dog. You can have concepts of simple categories (e.g., triangle), complex categories (e.g., small dogs that sleep all day, eat out of the garbage, and bark at leaves), kinds of people (e.g., psychology professors), events (e.g., birthday parties), and abstract ideas (e.g., justice). Gregory Murphy (2002) refers to concepts as the “glue that holds our mental life together” (p. 1). Very simply, summarizing the world by using concepts is one of the most important cognitive tasks that we do. Our conceptual knowledge  is  our knowledge about the world. Individual concepts are related to each other to form a rich interconnected network of knowledge. For example, think about how the following concepts might be related to each other: dog, pet, play, Frisbee, chew toy, shoe. Or, of more obvious use to you now, how these concepts are related: working memory, long-term memory, declarative memory, procedural memory, and rehearsal? Because our minds have a natural tendency to organize information conceptually, when students try to remember course material as isolated facts, they are working against their strengths.

One last important point about concepts is that they allow you to instantly know a great deal of information about something. For example, if someone hands you a small red object and says, “here is an apple,” they do not have to tell you, “it is something you can eat.” You already know that you can eat it because it is true by virtue of the fact that the object is an apple; this is called drawing an  inference, assuming that something is true on the basis of your previous knowledge (for example, of category membership or of how the world works) or logical reasoning.

Procedural knowledge

Physical skills, such as tying your shoes, doing a cartwheel, and driving a car (or doing all three at the same time, but don’t try this at home) are certainly a kind of knowledge. They are procedural knowledge, the same idea as procedural memory that you saw in Module 5. Mental skills, such as reading, debating, and planning a psychology experiment, are procedural knowledge, as well. In short, procedural knowledge is the knowledge how to do something (Cohen & Eichenbaum, 1993).

Metacognitive knowledge

Floyd used to think that he had a great memory. Now, he has a better memory. Why? Because he finally realized that his memory was not as great as he once thought it was. Because Floyd eventually learned that he often forgets where he put things, he finally developed the habit of putting things in the same place. (Unfortunately, he did not learn this lesson before losing at least 5 watches and a wedding ring.) Because he finally realized that he often forgets to do things, he finally started using the To Do list app on his phone. And so on. Floyd’s insights about the real limitations of his memory have allowed him to remember things that he used to forget.

All of us have knowledge about the way our own minds work. You may know that you have a good memory for people’s names and a poor memory for math formulas. Someone else might realize that they have difficulty remembering to do things, like stopping at the store on the way home. Others still know that they tend to overlook details. This knowledge about our own thinking is actually quite important; it is called metacognitive knowledge, or metacognition . Like other kinds of thinking skills, it is subject to error. For example, in unpublished research, one of the authors surveyed about 120 General Psychology students on the first day of the term. Among other questions, the students were asked them to predict their grade in the class and report their current Grade Point Average. Two-thirds of the students predicted that their grade in the course would be higher than their GPA. (The reality is that at our college, students tend to earn lower grades in psychology than their overall GPA.) Another example: Students routinely report that they thought they had done well on an exam, only to discover, to their dismay, that they were wrong (more on that important problem in a moment). Both errors reveal a breakdown in metacognition.

The Dunning-Kruger Effect

In general, most college students probably do not study enough. For example, using data from the National Survey of Student Engagement, Fosnacht et al. (2018) reported that first-year students at 4-year colleges in the U.S. averaged less than 14 hours per week preparing for classes. The typical suggestion is that you should spend two hours outside of class for every hour in class, or 24 – 30 hours per week for a full-time student. Clearly, students generally are nowhere near that recommended mark. Many observers, including some faculty, believe that this shortfall is a result of students being too busy or lazy. Now, it may be true that many students are too busy, with work and family obligations, for example. Others are not particularly motivated in school, and therefore might correctly be labeled lazy. A third possible explanation, however, is that some students might not think they need to spend this much time. And this is a matter of metacognition. Consider the scenario that we mentioned above, students thinking they had done well on an exam only to discover that they did not. Justin Kruger and David Dunning examined scenarios very much like this in 1999. Kruger and Dunning gave research participants tests measuring humor, logic, and grammar. Then, they asked the participants to assess their own abilities and test performance in these areas. They found that participants generally tended to overestimate their abilities, already a problem with metacognition. Importantly, the participants who scored the lowest overestimated their abilities the most. Specifically, students who scored in the bottom quarter (averaging in the 12th percentile) thought they had scored in the 62nd percentile. This has become known as the Dunning-Kruger effect . Many individual faculty members have replicated these results with their own students on their course exams, including the authors of this book. Think about it. Some students who just took an exam and performed poorly believe that they did well before seeing their score. It seems very likely that these are the very same students who stopped studying the night before because they thought they were “done.” Quite simply, it is not just that they did not know the material. They did not know that they did not know the material. That is poor metacognition.

In order to develop good metacognitive skills, you should continually monitor your thinking and seek frequent feedback on the accuracy of your thinking (Medina et al., 2017). For example, in classes get in the habit of predicting your exam grades. As soon as possible after taking an exam, try to find out which questions you missed and try to figure out why. If you do this soon enough, you may be able to recall the way it felt when you originally answered the question. Did you feel confident that you had answered the question correctly? Then you have just discovered an opportunity to improve your metacognition. Be on the lookout for that feeling and respond with caution.

concept:  a mental representation of a category of things in the world

Dunning-Kruger effect : individuals who are less competent tend to overestimate their abilities more than individuals who are more competent do

inference: an assumption about the truth of something that is not stated. Inferences come from our prior knowledge and experience, and from logical reasoning

metacognition:  knowledge about one’s own cognitive processes; thinking about your thinking

Critical thinking

One particular kind of knowledge or thinking skill that is related to metacognition is  critical thinking  (Chew, 2020). You may have noticed that critical thinking is an objective in many college courses, and thus it could be a legitimate topic to cover in nearly any college course. It is particularly appropriate in psychology, however. As the science of (behavior and) mental processes, psychology is obviously well suited to be the discipline through which you should be introduced to this important way of thinking.

More importantly, there is a particular need to use critical thinking in psychology. We are all, in a way, experts in human behavior and mental processes, having engaged in them literally since birth. Thus, perhaps more than in any other class, students typically approach psychology with very clear ideas and opinions about its subject matter. That is, students already “know” a lot about psychology. The problem is, “it ain’t so much the things we don’t know that get us into trouble. It’s the things we know that just ain’t so” (Ward, quoted in Gilovich 1991). Indeed, many of students’ preconceptions about psychology are just plain wrong. Randolph Smith (2002) wrote a book about critical thinking in psychology called  Challenging Your Preconceptions,  highlighting this fact. On the other hand, many of students’ preconceptions about psychology are just plain right! But wait, how do you know which of your preconceptions are right and which are wrong? And when you come across a research finding or theory in this class that contradicts your preconceptions, what will you do? Will you stick to your original idea, discounting the information from the class? Will you immediately change your mind? Critical thinking can help us sort through this confusing mess.

But what is critical thinking? The goal of critical thinking is simple to state (but extraordinarily difficult to achieve): it is to be right, to draw the correct conclusions, to believe in things that are true and to disbelieve things that are false. We will provide two definitions of critical thinking (or, if you like, one large definition with two distinct parts). First, a more conceptual one: Critical thinking is thinking like a scientist in your everyday life (Schmaltz et al., 2017).  Our second definition is more operational; it is simply a list of skills that are essential to be a critical thinker. Critical thinking entails solid reasoning and problem-solving skills; skepticism; and an ability to identify biases, distortions, omissions, and assumptions. Excellent deductive and inductive reasoning, and problem-solving skills contribute to critical thinking. So, you can consider the subject matter of sections 7.2 and 7.3 to be part of critical thinking. Because we will be devoting considerable time to these concepts in the rest of the module, let us begin with a discussion about the other aspects of critical thinking.

Let’s address that first part of the definition. Scientists form hypotheses, or predictions, about some possible future observations. Then, they collect data or information (think of this as making those future observations). They do their best to make unbiased observations using reliable techniques that have been verified by others. Then, and only then, they draw a conclusion about what those observations mean. Oh, and do not forget the most important part. “Conclusion” is probably not the most appropriate word because this conclusion is only tentative. A scientist is always prepared that someone else might come along and produce new observations that would require a new conclusion to be drawn. Wow! If you like to be right, you could do a lot worse than using a process like this.

A Critical Thinker’s Toolkit 

Now for the second part of the definition. Good critical thinkers (and scientists) rely on a variety of tools to evaluate information. Perhaps the most recognizable tool for critical thinking is  skepticism  (and this term provides the clearest link to the thinking like a scientist definition, as you are about to see). Some people intend it as an insult when they call someone a skeptic. But if someone calls you a skeptic, if they are using the term correctly, you should consider it a great compliment. Simply put, skepticism is a way of thinking in which you refrain from drawing a conclusion or changing your mind until good evidence has been provided. People from Missouri should recognize this principle, as Missouri is known as the Show-Me State. As a skeptic, you are not inclined to believe something just because someone said so, because someone else believes it, or because it sounds reasonable. You must be persuaded by high-quality evidence.

Of course, if that evidence is produced, you have a responsibility as a skeptic to change your belief. Failure to change a belief in the face of good evidence is not skepticism; skepticism has open-mindedness at its core. M. Neil Browne and Stuart Keeley (2018) use the term weak sense critical thinking to describe critical thinking behaviors that are used only to strengthen a prior belief. Strong sense critical thinking, on the other hand, has as its goal reaching the best conclusion. Sometimes that means strengthening your prior belief, but sometimes it means changing your belief to accommodate the better evidence.

Many times, a failure to think critically or a weak sense of critical thinking is related to a bias , an inclination, tendency, leaning, or prejudice. Everybody has biases, but many people are unaware of them. Awareness of your own biases gives you the opportunity to control or counteract them. Unfortunately, however, many people are happy to let their biases creep into their attempts to persuade others; indeed, it is a key part of their persuasive strategy. To see how these biases influence messages, just look at the different descriptions and explanations of the same events given by people of different ages or income brackets, or conservative versus liberal commentators, or by commentators from different parts of the world. Of course, to be successful, these people who are consciously using their biases must disguise them. Even undisguised biases can be difficult to identify, so disguised ones can be nearly impossible.

Here are some common sources of biases:

  • Personal values and beliefs.  Some people believe that human beings are basically driven to seek power and that they are typically in competition with one another over scarce resources. These beliefs are similar to the world-view that political scientists call “realism.” Other people believe that human beings prefer to cooperate and that, given the chance, they will do so. These beliefs are similar to the world-view known as “idealism.” For many people, these deeply held beliefs can influence, or bias, their interpretations of such wide-ranging situations as the behavior of nations and their leaders or the behavior of the driver in the car ahead of you. For example, if your worldview is that people are typically in competition and someone cuts you off on the highway, you may assume that the driver did it purposely to get ahead of you. Other types of beliefs about the way the world is or the way the world should be, for example, political beliefs, can similarly become a significant source of bias.
  • Racism, sexism, ageism and other forms of prejudice and bigotry.  These are, sadly, a common source of bias in many people. They are essentially a special kind of “belief about the way the world is.” These beliefs—for example, that women do not make effective leaders—lead people to ignore contradictory evidence (examples of effective women leaders, or research that disputes the belief) and to interpret ambiguous evidence in a way consistent with the belief.
  • Self-interest.  When particular people benefit from things turning out a certain way, they can sometimes be very susceptible to letting that interest bias them. For example, a company that will earn a profit if they sell their product may have a bias in the way that they give information about their product. A union that will benefit if its members get a generous contract might have a bias in the way it presents information about salaries at competing organizations. (Note that our inclusion of examples describing both companies and unions is an explicit attempt to control for our own personal biases). Homebuyers are often dismayed to discover that they purchased their dream house from someone whose self-interest led them to lie about flooding problems in the basement or back yard. This principle, the biasing power of self-interest, is likely what led to the famous phrase Caveat Emptor  (let the buyer beware) .  

Knowing that these types of biases exist will help you evaluate evidence more critically. Do not forget, though, that people are not always keen to let you discover the sources of biases in their arguments. For example, companies or political organizations can disguise their support of a research study by contracting with a university professor, who comes complete with a seemingly unbiased institutional affiliation, to conduct the study.

People’s biases, conscious or unconscious, can lead them to make omissions, distortions, and assumptions that undermine our ability to correctly evaluate evidence. It is essential that you look for these elements. Always ask, what is missing, what is not as it appears, and what is being assumed here? For example, consider this (fictional) chart from an ad reporting customer satisfaction at 4 local health clubs.

introduction to psychology essay questions and answers pdf

Clearly, from the results of the chart, one would be tempted to give Club C a try, as customer satisfaction is much higher than for the other 3 clubs.

There are so many distortions and omissions in this chart, however, that it is actually quite meaningless. First, how was satisfaction measured? Do the bars represent responses to a survey? If so, how were the questions asked? Most importantly, where is the missing scale for the chart? Although the differences look quite large, are they really?

Well, here is the same chart, with a different scale, this time labeled:

introduction to psychology essay questions and answers pdf

Club C is not so impressive anymore, is it? In fact, all of the health clubs have customer satisfaction ratings (whatever that means) between 85% and 88%. In the first chart, the entire scale of the graph included only the percentages between 83 and 89. This “judicious” choice of scale—some would call it a distortion—and omission of that scale from the chart make the tiny differences among the clubs seem important, however.

Also, in order to be a critical thinker, you need to learn to pay attention to the assumptions that underlie a message. Let us briefly illustrate the role of assumptions by touching on some people’s beliefs about the criminal justice system in the US. Some believe that a major problem with our judicial system is that many criminals go free because of legal technicalities. Others believe that a major problem is that many innocent people are convicted of crimes. The simple fact is, both types of errors occur. A person’s conclusion about which flaw in our judicial system is the greater tragedy is based on an assumption about which of these is the more serious error (letting the guilty go free or convicting the innocent). This type of assumption is called a value assumption (Browne and Keeley, 2018). It reflects the differences in values that people develop, differences that may lead us to disregard valid evidence that does not fit in with our particular values.

Oh, by the way, some students probably noticed this, but the seven tips for evaluating information that we shared in Module 1 are related to this. Actually, they are part of this section. The tips are, to a very large degree, a set of ideas you can use to help you identify biases, distortions, omissions, and assumptions. If you do not remember this section, we strongly recommend you take a few minutes to review it.

skepticism:  a way of thinking in which you refrain from drawing a conclusion or changing your mind until good evidence has been provided

bias: an inclination, tendency, leaning, or prejudice

  • Which of your beliefs (or disbeliefs) from the Activate exercise for this section were derived from a process of critical thinking? If some of your beliefs were not based on critical thinking, are you willing to reassess these beliefs? If the answer is no, why do you think that is? If the answer is yes, what concrete steps will you take?

7.2 Reasoning and Judgment

  • What percentage of kidnappings are committed by strangers?
  • Which area of the house is riskiest: kitchen, bathroom, or stairs?
  • What is the most common cancer in the US?
  • What percentage of workplace homicides are committed by co-workers?

An essential set of procedural thinking skills is  reasoning , the ability to generate and evaluate solid conclusions from a set of statements or evidence. You should note that these conclusions (when they are generated instead of being evaluated) are one key type of inference that we described in Section 7.1. There are two main types of reasoning, deductive and inductive.

Deductive reasoning

Suppose your teacher tells you that if you get an A on the final exam in a course, you will get an A for the whole course. Then, you get an A on the final exam. What will your final course grade be? Most people can see instantly that you can conclude with certainty that you will get an A for the course. This is a type of reasoning called  deductive reasoning , which is defined as reasoning in which a conclusion is guaranteed to be true as long as the statements leading to it are true. The three statements can be listed as an  argument , with two beginning statements and a conclusion:

Statement 1: If you get an A on the final exam, you will get an A for the course

Statement 2: You get an A on the final exam

Conclusion: You will get an A for the course

This particular arrangement, in which true beginning statements lead to a guaranteed true conclusion, is known as a  deductively valid argument . Although deductive reasoning is often the subject of abstract, brain-teasing, puzzle-like word problems, it is actually an extremely important type of everyday reasoning. It is just hard to recognize sometimes. For example, imagine that you are looking for your car keys and you realize that they are either in the kitchen drawer or in your bookbag. After looking in the kitchen drawer, you instantly know that they must be in your bookbag. That conclusion results from a simple deductive reasoning argument. In addition, solid deductive reasoning skills are necessary for you to succeed in the sciences, philosophy, math, computer programming, and any endeavor involving the use of logic to persuade others to your point of view or to evaluate others’ arguments.

Cognitive psychologists, and before them philosophers, have been quite interested in deductive reasoning, not so much for its practical applications, but for the insights it can offer them about the ways that human beings think. One of the early ideas to emerge from the examination of deductive reasoning is that people learn (or develop) mental versions of rules that allow them to solve these types of reasoning problems (Braine, 1978; Braine, Reiser, & Rumain, 1984). The best way to see this point of view is to realize that there are different possible rules, and some of them are very simple. For example, consider this rule of logic:

therefore q

Logical rules are often presented abstractly, as letters, in order to imply that they can be used in very many specific situations. Here is a concrete version of the of the same rule:

I’ll either have pizza or a hamburger for dinner tonight (p or q)

I won’t have pizza (not p)

Therefore, I’ll have a hamburger (therefore q)

This kind of reasoning seems so natural, so easy, that it is quite plausible that we would use a version of this rule in our daily lives. At least, it seems more plausible than some of the alternative possibilities—for example, that we need to have experience with the specific situation (pizza or hamburger, in this case) in order to solve this type of problem easily. So perhaps there is a form of natural logic (Rips, 1990) that contains very simple versions of logical rules. When we are faced with a reasoning problem that maps onto one of these rules, we use the rule.

But be very careful; things are not always as easy as they seem. Even these simple rules are not so simple. For example, consider the following rule. Many people fail to realize that this rule is just as valid as the pizza or hamburger rule above.

if p, then q

therefore, not p

Concrete version:

If I eat dinner, then I will have dessert

I did not have dessert

Therefore, I did not eat dinner

The simple fact is, it can be very difficult for people to apply rules of deductive logic correctly; as a result, they make many errors when trying to do so. Is this a deductively valid argument or not?

Students who like school study a lot

Students who study a lot get good grades

Jane does not like school

Therefore, Jane does not get good grades

Many people are surprised to discover that this is not a logically valid argument; the conclusion is not guaranteed to be true from the beginning statements. Although the first statement says that students who like school study a lot, it does NOT say that students who do not like school do not study a lot. In other words, it may very well be possible to study a lot without liking school. Even people who sometimes get problems like this right might not be using the rules of deductive reasoning. Instead, they might just be making judgments for examples they know, in this case, remembering instances of people who get good grades despite not liking school.

Making deductive reasoning even more difficult is the fact that there are two important properties that an argument may have. One, it can be valid or invalid (meaning that the conclusion does or does not follow logically from the statements leading up to it). Two, an argument (or more correctly, its conclusion) can be true or false. Here is an example of an argument that is logically valid, but has a false conclusion (at least we think it is false).

Either you are eleven feet tall or the Grand Canyon was created by a spaceship crashing into the earth.

You are not eleven feet tall

Therefore the Grand Canyon was created by a spaceship crashing into the earth

This argument has the exact same form as the pizza or hamburger argument above, making it is deductively valid. The conclusion is so false, however, that it is absurd (of course, the reason the conclusion is false is that the first statement is false). When people are judging arguments, they tend to not observe the difference between deductive validity and the empirical truth of statements or conclusions. If the elements of an argument happen to be true, people are likely to judge the argument logically valid; if the elements are false, they will very likely judge it invalid (Markovits & Bouffard-Bouchard, 1992; Moshman & Franks, 1986). Thus, it seems a stretch to say that people are using these logical rules to judge the validity of arguments. Many psychologists believe that most people actually have very limited deductive reasoning skills (Johnson-Laird, 1999). They argue that when faced with a problem for which deductive logic is required, people resort to some simpler technique, such as matching terms that appear in the statements and the conclusion (Evans, 1982). This might not seem like a problem, but what if reasoners believe that the elements are true and they happen to be wrong; they will believe that they are using a form of reasoning that guarantees they are correct and yet be wrong.

reasoning , the ability to generate and evaluate solid conclusions from a set of statements or evidence

deductive reasoning:  a type of reasoning in which the conclusion is guaranteed to be true any time the statements leading up to it are true

argument:  a set of statements in which the beginning statements lead to a conclusion

deductively valid argument:  an argument for which true beginning statements guarantee that the conclusion is true

Inductive reasoning and judgment

Every day, you make many judgments about the likelihood of one thing or another. Whether you realize it or not, you are practicing  inductive reasoning  on a daily basis. In inductive reasoning arguments, a conclusion is likely whenever the statements preceding it are true. The first thing to notice about inductive reasoning is that, by definition, you can never be sure about your conclusion; you can only estimate how likely the conclusion is. Inductive reasoning may lead you to focus on Memory Encoding and Recoding when you study for the exam, but it is possible the instructor will ask more questions about Memory Retrieval instead. Unlike deductive reasoning, the conclusions you reach through inductive reasoning are only probable, not certain. That is why scientists consider inductive reasoning weaker than deductive reasoning. But imagine how hard it would be for us to function if we could not act unless we were certain about the outcome.

Inductive reasoning can be represented as logical arguments consisting of statements and a conclusion, just as deductive reasoning can be. In an inductive argument, you are given some statements and a conclusion (or you are given some statements and must draw a conclusion). An argument is  inductively strong  if the conclusion would be very probable whenever the statements are true. So, for example, here is an inductively strong argument:

  • Statement #1: The forecaster on Channel 2 said it is going to rain today.
  • Statement #2: The forecaster on Channel 5 said it is going to rain today.
  • Statement #3: It is very cloudy and humid.
  • Statement #4: You just heard thunder.
  • Conclusion (or judgment): It is going to rain today.

Think of the statements as evidence, on the basis of which you will draw a conclusion. So, based on the evidence presented in the four statements, it is very likely that it will rain today. Will it definitely rain today? Certainly not. We can all think of times that the weather forecaster was wrong.

A true story: Some years ago, a psychology student was watching a baseball playoff game between the St. Louis Cardinals and the Los Angeles Dodgers. A graphic on the screen had just informed the audience that the Cardinal at bat, (Hall of Fame shortstop) Ozzie Smith, a switch hitter batting left-handed for this plate appearance, had never, in nearly 3000 career at-bats, hit a home run left-handed. The student, who had just learned about inductive reasoning in his psychology class, turned to his companion (a Cardinals fan) and smugly said, “It is an inductively strong argument that Ozzie Smith will not hit a home run.” He turned back to face the television just in time to watch the ball sail over the right-field fence for a home run. Although the student felt foolish at the time, he was not wrong. It was an inductively strong argument; 3000 at-bats is an awful lot of evidence suggesting that the Wizard of Ozz (as he was known) would not be hitting one out of the park (think of each at-bat without a home run as a statement in an inductive argument). Sadly (for the die-hard Cubs fan and Cardinals-hating student), despite the strength of the argument, the conclusion was wrong.

Given the possibility that we might draw an incorrect conclusion even with an inductively strong argument, we really want to be sure that we do, in fact, make inductively strong arguments. If we judge something probable, it had better be probable. If we judge something nearly impossible, it had better not happen. Think of inductive reasoning, then, as making reasonably accurate judgments of the probability of some conclusion given a set of evidence.

We base many decisions in our lives on inductive reasoning. For example:

Statement #1: Psychology is not my best subject.

Statement #2: My psychology instructor has a reputation for giving difficult exams.

Statement #3: My first psychology exam was much harder than I expected.

Judgment: The next exam will probably be very difficult.

Decision: I will study tonight instead of watching Netflix.

Some other examples of judgments that people commonly make in a school context include judgments of the likelihood that:

  • A particular class will be interesting/useful/difficult
  • You will be able to finish writing a paper by next week if you go out tonight
  • Your laptop’s battery will last through the next trip to the library
  • You will not miss anything important if you skip class tomorrow
  • Your instructor will not notice if you skip class tomorrow
  • You will be able to find a book that you will need for a paper
  • There will be an essay question about Memory Encoding on the next exam

Tversky and Kahneman (1983) recognized that there are two general ways that we might make these judgments; they termed them extensional (i.e., following the laws of probability) and intuitive (i.e., using shortcuts or heuristics, see below). We will use a similar distinction between Type 1 and Type 2 thinking, as described by Keith Stanovich and his colleagues (Evans and Stanovich, 2013; Stanovich and West, 2000). Type 1 thinking  is fast, automatic, effortless, and emotional. In fact, it is hardly fair to call it reasoning at all, as judgments just seem to pop into one’s head. Type 2 thinking , on the other hand, is slow, effortful, and logical. Obviously, it is more likely to lead to a correct judgment or an optimal decision. The problem is, we tend to over-rely on Type 1. Now, we are not saying that Type 2 is the right way to go for every decision or judgment we make. It seems a bit much, for example, to engage in a step-by-step logical reasoning procedure to decide whether we will have chicken or fish for dinner tonight.

Many bad decisions in some very important contexts, however, can be traced back to poor judgments of the likelihood of certain risks or outcomes that result from the use of Type 1 when a more logical reasoning process would have been more appropriate. For example:

Statement #1: It is late at night.

Statement #2: Albert has been drinking beer for the past five hours at a party.

Statement #3: Albert is not exactly sure where he is or how far away home is.

Judgment: Albert will have no difficulty walking home.

Decision: He walks home alone.

As you can see in this example, the three statements backing up the judgment do not really support it. In other words, this argument is not inductively strong because it is based on judgments that ignore the laws of probability. What are the chances that someone facing these conditions will be able to walk home alone easily? And one need not be drunk to make poor decisions based on judgments that just pop into our heads.

The truth is that many of our probability judgments do not come very close to what the laws of probability say they should be. Think about it. In order to reason in accordance with these laws, we would need to know the laws of probability, we would need to calculate how much the likelihood of some outcome should change given each new piece of evidence, and we would have to do this math in our heads. After all, that is what Type 2 requires. Needless to say, even if we were motivated, we often do not even know how to apply Type 2 reasoning in many cases.
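
To get a feel for what the “heavy math” of a genuinely Type 2 judgment would involve, here is a minimal sketch of one standard way to update a probability in light of a piece of evidence (Bayes’ rule), applied to the rain example from earlier in this section. Bayes’ rule is our illustration, not a procedure described in this module, and every number in the sketch is invented.

```python
# A rough sketch of what a "by the book" Type 2 probability update involves.
# Every number here is invented for illustration; none comes from the text.

prior_rain = 0.30          # how likely rain seemed before hearing any thunder
p_thunder_if_rain = 0.70   # chance of hearing thunder on a day it rains
p_thunder_if_dry = 0.05    # chance of hearing thunder on a day it stays dry

# Bayes' rule: P(rain | thunder) = P(thunder | rain) * P(rain) / P(thunder)
p_thunder = (p_thunder_if_rain * prior_rain
             + p_thunder_if_dry * (1 - prior_rain))
posterior_rain = p_thunder_if_rain * prior_rain / p_thunder

print(round(posterior_rain, 2))  # about 0.86: the thunder raises the estimate
```

Multiply that kind of update across several pieces of evidence and it is easy to see why nobody does this in their head, and why the Type 1 shortcuts described next are so tempting.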

So what do we do when we don’t have the knowledge, skills, or time required to make the correct mathematical judgment? Do we hold off and wait until we can get better evidence? Do we read up on probability and fire up our calculator app so we can compute the correct probability? Of course not. We rely on Type 1 thinking. We “wing it.” That is, we come up with a likelihood estimate using some means at our disposal. Psychologists use the term heuristic to describe the type of “winging it” we are talking about. A  heuristic  is a shortcut strategy that we use to make some judgment or solve some problem (see Section 7.3). Heuristics are easy and quick; think of them as the basic procedures that are characteristic of Type 1. They can absolutely lead to reasonably good judgments and decisions in some situations (like choosing between chicken and fish for dinner). They are, however, far from foolproof. There are, in fact, quite a lot of situations in which heuristics can lead us to make incorrect judgments, and in many cases the decisions based on those judgments can have serious consequences.

Let us return to the activity that begins this section. You were asked to judge the likelihood (or frequency) of certain events and risks. You were free to come up with your own evidence (or statements) to make these judgments. This is where a heuristic crops up. As a judgment shortcut, we tend to generate specific examples of those very events to help us decide their likelihood or frequency. For example, if we are asked to judge how common, frequent, or likely a particular type of cancer is, many of our statements would be examples of specific cancer cases:

Statement #1: Andy Kaufman (comedian) had lung cancer.

Statement #2: Colin Powell (US Secretary of State) had prostate cancer.

Statement #3: Bob Marley (musician) had skin and brain cancer.

Statement #4: Sandra Day O’Connor (Supreme Court Justice) had breast cancer.

Statement #5: Fred Rogers (children’s entertainer) had stomach cancer.

Statement #6: Robin Roberts (news anchor) had breast cancer.

Statement #7: Bette Davis (actress) had breast cancer.

Judgment: Breast cancer is the most common type.

Your own experience or memory may also tell you that breast cancer is the most common type. But it is not (although it is common). Actually, skin cancer is the most common type in the US. We make the same types of misjudgments all the time because we do not generate the examples or evidence according to their actual frequencies or probabilities. Instead, we have a tendency (or bias) to search for examples in memory; if they are easy to retrieve, we assume that they are common. To rephrase this in the language of the heuristic, events seem more likely to the extent that they are available to memory. This bias has been termed the  availability heuristic  (Tversky & Kahneman, 1974).

The fact that we use the availability heuristic does not automatically mean that our judgment is wrong. The reason we use heuristics in the first place is that they work fairly well in many cases (and, of course, that they are easy to use). So, the easiest examples to think of sometimes are the most common ones. Is it more likely that a member of the U.S. Senate is a man or a woman? Most people have a much easier time generating examples of male senators. And as it turns out, the U.S. Senate has many more men than women (74 to 26 in 2020). In this case, then, the availability heuristic would lead you to make the correct judgment; it is far more likely that a senator would be a man.

In many other cases, however, the availability heuristic will lead us astray. This is because events can be memorable for many reasons other than their frequency. Section 5.2, Encoding Meaning, suggested that one good way to encode the meaning of some information is to form a mental image of it. Thus, information that has been pictured mentally will be more available to memory. Indeed, an event that is vivid and easily pictured will trick many people into supposing that type of event is more common than it actually is. Repetition of information will also make it more memorable. So, if the same event is described to you in a magazine, on the evening news, on a podcast that you listen to, and in your Facebook feed, it will be very available to memory. Again, the availability heuristic will cause you to misperceive the frequency of these types of events.

Most interestingly, information that is unusual is more memorable. Suppose we give you the following list of words to remember: box, flower, letter, platypus, oven, boat, newspaper, purse, drum, car. Very likely, the easiest word to remember would be platypus, the unusual one. The same thing occurs with memories of events. An event may be available to memory because it is unusual, yet the availability heuristic leads us to judge that the event is common. Did you catch that? In these cases, the availability heuristic makes us think the exact opposite of the true frequency. We end up thinking something is common because it is unusual (and therefore memorable). Yikes.

The misapplication of the availability heuristic sometimes has unfortunate results. For example, if you went to K-12 school in the US over the past 10 years, it is extremely likely that you have participated in lockdown and active shooter drills. Of course, everyone is trying to prevent the tragedy of another school shooting. And believe us, we are not trying to minimize how terrible the tragedy is. But the truth of the matter is, school shootings are extremely rare. Because the federal government does not keep a database of school shootings, the Washington Post has maintained its own running tally. Between 1999 and January 2020 (the date of the most recent school shootings with a death in the US at the time this paragraph was written), the Post reported a total of 254 people died in school shootings in the US. Not 254 per year, 254 total. That is an average of 12 per year. Of course, that is 254 people who should not have died (particularly because many were children), but in a country with approximately 60,000,000 students and teachers, this is a very small risk.

But because of the availability heuristic, many students and teachers are terrified that they will be victims of school shootings. It is so easy to think of examples (they are very available to memory) that people believe the event is very common. It is not. And there is a downside to this. We happen to believe that there is an enormous gun violence problem in the United States. According to the Centers for Disease Control and Prevention, there were 39,773 firearm deaths in the US in 2017. Of those deaths, fifteen were in school shootings (according to the Post), and roughly 60% were suicides. When people pay attention to the school shooting risk (which is low), they often fail to notice the much larger risks.
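
A quick back-of-the-envelope check makes the point of the last two paragraphs concrete. The counts below are the ones quoted above; the division into years and the rounding are ours.

```python
# Back-of-the-envelope check of the risk figures quoted above.
# The counts come from the text; the yearly division and rounding are ours.

school_shooting_deaths = 254        # total reported by the Post, 1999 - Jan. 2020
years = 21                          # roughly 1999 through 2019
students_and_teachers = 60_000_000  # approximate population at risk

deaths_per_year = school_shooting_deaths / years        # about 12
annual_risk = deaths_per_year / students_and_teachers   # about 0.0000002

firearm_deaths_2017 = 39_773
in_school_shootings_2017 = 15
share_in_schools = in_school_shootings_2017 / firearm_deaths_2017

print(f"{annual_risk:.7f}")        # roughly 1 chance in 5 million per year
print(f"{share_in_schools:.3%}")   # well under one-tenth of one percent
```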

And examples like this are by no means unique. The authors of this book have been teaching psychology since the 1990s. We have been able to make the exact same arguments about the misapplication of the availability heuristic and keep them current simply by swapping in the “fear of the day.” In the 1990s it was children being kidnapped by strangers (known as “stranger danger”), despite the fact that kidnappings accounted for only 2% of the violent crimes committed against children, and only 24% of kidnappings were committed by strangers (US Department of Justice, 2007). This fear overlapped with the fear of terrorism that gripped the country after the 2001 terrorist attacks on the World Trade Center and the Pentagon and still plagues the US population somewhat in 2020. After a well-publicized, sensational act of violence, people are extremely likely to increase their estimates of the chances that they, too, will be victims of terror. Think about the reality, however. In October of 2001, a terrorist mailed anthrax spores to members of the US government and a number of media companies. A total of five people died as a result of this attack. The nation was nearly paralyzed by the fear of dying from the attack; in reality, the probability of any individual person dying was about 0.00000002.
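
If you want to check that last figure yourself, it is simply the five deaths divided by the US population at the time; the exact denominator of roughly 250 million is our assumption, but any reasonable estimate of the 2001 population gives essentially the same answer:

$$\frac{5}{250{,}000{,}000} = 0.00000002$$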

The availability heuristic can lead you to make incorrect judgments in a school setting as well. For example, suppose you are trying to decide whether to take a class from a particular math professor. You might try to judge how good a teacher she is by recalling the comments friends and acquaintances have made about her teaching skill. Some examples suggesting that she is a poor teacher may be very available to memory, so on the basis of the availability heuristic you judge her a poor teacher and decide to take the class from someone else. What if, however, the instances you recalled all came from the same person, and this person happens to be a very colorful storyteller? The ease of remembering those instances might not indicate that the professor is a poor teacher after all.

Although the availability heuristic is obviously important, it is not the only judgment heuristic we use. Amos Tversky and Daniel Kahneman examined the role of heuristics in inductive reasoning in a long series of studies. Kahneman received a Nobel Prize in Economics for this research in 2002, and Tversky would have certainly received one as well if he had not died of melanoma at age 59 in 1996 (Nobel Prizes are not awarded posthumously). Kahneman and Tversky demonstrated repeatedly that people do not reason in ways that are consistent with the laws of probability. They identified several heuristic strategies that people use instead to make judgments about likelihood. The importance of this work for economics (and the reason that Kahneman was awarded the Nobel Prize) is that earlier economic theories had assumed that people do make judgments rationally, that is, in agreement with the laws of probability.

Another common heuristic that people use for making judgments is the  representativeness heuristic  (Kahneman & Tversky 1973). Suppose we describe a person to you. He is quiet and shy, has an unassuming personality, and likes to work with numbers. Is this person more likely to be an accountant or an attorney? If you said accountant, you were probably using the representativeness heuristic. Our imaginary person is judged likely to be an accountant because he resembles, or is representative of the concept of, an accountant. When research participants are asked to make judgments such as these, the only thing that seems to matter is the representativeness of the description. For example, if told that the person described is in a room that contains 70 attorneys and 30 accountants, participants will still assume that he is an accountant.

inductive reasoning:  a type of reasoning in which we make judgments about likelihood from sets of evidence

inductively strong argument:  an inductive argument in which the beginning statements lead to a conclusion that is probably true

heuristic:  a shortcut strategy that we use to make judgments and solve problems. Although they are easy to use, they do not guarantee correct judgments and solutions

availability heuristic:  judging the frequency or likelihood of some event type according to how easily examples of the event can be called to mind (i.e., how available they are to memory)

representativeness heuristic:  judging the likelihood that something is a member of a category on the basis of how much it resembles a typical category member (i.e., how representative it is of the category)

Type 1 thinking: fast, automatic, effortless, and emotional thinking.

Type 2 thinking: slow, effortful, and logical thinking.

  • What percentage of workplace homicides are co-worker violence?

Many people get these questions wrong. The answers are 10%; stairs; skin; 6%. How close were your answers? Explain how the availability heuristic might have led you to make the incorrect judgments.

  • Can you think of some other judgments that you have made (or beliefs that you have) that might have been influenced by the availability heuristic?

7.3 Problem Solving

  • Please take a few minutes to list a number of problems that you are facing right now.
  • Now write about a problem that you recently solved.
  • What is your definition of a problem?

Mary has a problem. Her daughter, ordinarily quite eager to please, appears to delight in being the last person to do anything. Whether getting ready for school, going to piano lessons or karate class, or even going out with her friends, she seems unwilling or unable to get ready on time. Other people have different kinds of problems. For example, many students work at jobs, have numerous family commitments, and are facing a course schedule full of difficult exams, assignments, papers, and speeches. How can they find enough time to devote to their studies and still fulfill their other obligations? Speaking of students and their problems: Show that a ball thrown vertically upward with initial velocity v0 takes twice as much time to return as to reach the highest point (from Spiegel, 1981).

These are three very different situations, but we have called them all problems. What makes them all the same, despite the differences? A psychologist might define a  problem  as a situation with an initial state, a goal state, and a set of possible intermediate states. Somewhat more meaningfully, we might consider a problem a situation in which you are here, in one state (e.g., the daughter is always late), you want to be there, in another state (e.g., the daughter is not always late), and there is no obvious way to get from here to there. Defined this way, each of the three situations we outlined can now be seen as an example of the same general concept, a problem. At this point, you might begin to wonder what is not a problem, given such a general definition. It seems that nearly every non-routine task we engage in could qualify as a problem. As long as you realize that problems are not necessarily bad (it can be quite fun and satisfying to rise to the challenge and solve a problem), this may be a useful way to think about it.

Can we identify a set of problem-solving skills that would apply to these very different kinds of situations? That task, in a nutshell, is a major goal of this section. Let us try to begin to make sense of the wide variety of ways that problems can be solved with an important observation: the process of solving problems can be divided into two key parts. First, people have to notice, comprehend, and represent the problem properly in their minds (called  problem representation ). Second, they have to apply some kind of solution strategy to the problem. Psychologists have studied both of these key parts of the process in detail.

When you first think about the problem-solving process, you might guess that most of our difficulties would occur because we are failing in the second step, the application of strategies. Although this can be a significant difficulty much of the time, the more important source of difficulty is probably problem representation. In short, we often fail to solve a problem because we are looking at it, or thinking about it, the wrong way.

problem:  a situation in which we are in an initial state, have a desired goal state, and there are a number of possible intermediate states (i.e., there is no obvious way to get from the initial state to the goal state)

problem representation:  noticing, comprehending and forming a mental conception of a problem

Defining and Mentally Representing Problems in Order to Solve Them

So, the main obstacle to solving a problem is that we do not clearly understand exactly what the problem is. Recall the problem with Mary’s daughter always being late. One way to represent, or to think about, this problem is that she is being defiant. She refuses to get ready in time. This type of representation or definition suggests a particular type of solution. Another way to think about the problem, however, is to consider the possibility that she is simply being sidetracked by interesting diversions. This different conception of what the problem is (i.e., different representation) suggests a very different solution strategy. For example, if Mary defines the problem as defiance, she may be tempted to solve the problem using some kind of coercive tactics, that is, to assert her authority as her mother and force her to listen. On the other hand, if Mary defines the problem as distraction, she may try to solve it by simply removing the distracting objects.

As you might guess, when a problem is represented one way, the solution may seem very difficult, or even impossible. Seen another way, the solution might be very easy. For example, consider the following problem (from Nasar, 1998):

Two bicyclists start 20 miles apart and head toward each other, each going at a steady rate of 10 miles per hour. At the same time, a fly that travels at a steady 15 miles per hour starts from the front wheel of the southbound bicycle and flies to the front wheel of the northbound one, then turns around and flies to the front wheel of the southbound one again, and continues in this manner until he is crushed between the two front wheels. Question: what total distance did the fly cover?

Please take a few minutes to try to solve this problem.

Most people represent this problem as a question about a fly because, well, that is how the question is asked. The solution, using this representation, is to figure out how far the fly travels on the first leg of its journey, then add this total to how far it travels on the second leg of its journey (when it turns around and returns to the first bicycle), then continue to add the smaller distance from each leg of the journey until you converge on the correct answer. You would have to be quite skilled at math to solve this problem, and you would probably need some time and a pencil and paper to do it.

If you consider a different representation, however, you can solve this problem in your head. Instead of thinking about it as a question about a fly, think about it as a question about the bicycles. They are 20 miles apart, and each is traveling 10 miles per hour. How long will it take for the bicycles to reach each other? Right, one hour. The fly is traveling 15 miles per hour; therefore, it will travel a total of 15 miles back and forth in the hour before the bicycles meet. Represented one way (as a problem about a fly), the problem is quite difficult. Represented another way (as a problem about two bicycles), it is easy. Changing your representation of a problem is sometimes the best—sometimes the only—way to solve it.
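
To spell out the arithmetic behind the easier representation (all of the numbers come from the problem itself):

$$t = \frac{20\ \text{miles}}{10\ \text{mph} + 10\ \text{mph}} = 1\ \text{hour}, \qquad \text{distance flown} = 15\ \text{mph} \times 1\ \text{hour} = 15\ \text{miles}$$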

Unfortunately, however, changing a problem’s representation is not the easiest thing in the world to do. Often, problem solvers get stuck looking at a problem one way. This is called fixation . Most people who represent the preceding problem as a problem about a fly probably do not pause to reconsider, and consequently change, their representation. A parent who thinks her daughter is being defiant is unlikely to consider the possibility that her behavior is far less purposeful.

Problem-solving fixation was examined by a group of German psychologists called Gestalt psychologists during the 1930s and 1940s. Karl Duncker, for example, discovered an important type of failure to take a different perspective called  functional fixedness . Imagine being a participant in one of his experiments. You are asked to figure out how to mount two candles on a door and are given an assortment of odds and ends, including a small empty cardboard box and some thumbtacks. Perhaps you have already figured out a solution: tack the box to the door so it forms a platform, then put the candles on top of the box. Most people are able to arrive at this solution. Imagine a slight variation of the procedure, however. What if, instead of being empty, the box had matches in it? Most people given this version of the problem do not arrive at the solution given above. Why? Because it seems to people that when the box contains matches, it already has a function; it is a matchbox. People are unlikely to consider a new function for an object that already has a function. This is functional fixedness.

Mental set  is a type of fixation in which the problem solver gets stuck using the same solution strategy that has been successful in the past, even though the solution may no longer be useful. It is commonly seen when students do math problems for homework. Often, several problems in a row require the reapplication of the same solution strategy. Then, without warning, the next problem in the set requires a new strategy. Many students attempt to apply the formerly successful strategy on the new problem and therefore cannot come up with a correct answer.

The thing to remember is that you cannot solve a problem unless you correctly identify what it is to begin with (initial state) and what you want the end result to be (goal state). That may mean looking at the problem from a different angle and representing it in a new way. The correct representation does not guarantee a successful solution, but it certainly puts you on the right track.

A bit more optimistically, the Gestalt psychologists discovered what may be considered the opposite of fixation, namely insight . Sometimes the solution to a problem just seems to pop into your head. Wolfgang Kohler examined insight by posing many different problems to chimpanzees, principally problems pertaining to their acquisition of out-of-reach food. In one version, a banana was placed outside of a chimpanzee’s cage and a short stick inside the cage. The stick was too short to retrieve the banana but was long enough to retrieve a longer stick also located outside of the cage. This second stick was long enough to retrieve the banana. After trying, and failing, to reach the banana with the shorter stick, the chimpanzee would try a couple of random-seeming attempts, react with some apparent frustration or anger, then suddenly rush to the longer stick, the correct solution fully realized at this point. This sudden appearance of the solution, observed many times with many different problems, was termed insight by Kohler.

Lest you think insight pertains to chimpanzees only, Karl Duncker demonstrated in the 1930s that children also solve problems through insight. More importantly, you have probably experienced insight yourself. Think back to a time when you were trying to solve a difficult problem. After struggling for a while, you gave up. Hours later, the solution just popped into your head, perhaps when you were taking a walk, eating dinner, or lying in bed.

fixation:  when a problem solver gets stuck looking at a problem a particular way and cannot change their representation of it (or their intended solution strategy)

functional fixedness:  a specific type of fixation in which a problem solver cannot think of a new use for an object that already has a function

mental set:  a specific type of fixation in which a problem solver gets stuck using the same solution strategy that has been successful in the past

insight:  a sudden realization of a solution to a problem

Solving Problems by Trial and Error

Correctly identifying the problem and your goal for a solution is a good start, but recall the psychologist’s definition of a problem: it includes a set of possible intermediate states. Viewed this way, a problem can be solved satisfactorily only if one can find a path through some of these intermediate states to the goal. Imagine a fairly routine problem, finding a new route to school when your ordinary route is blocked (by road construction, for example). At each intersection, you may turn left, turn right, or go straight. A satisfactory solution to the problem (of getting to school) is a sequence of selections at each intersection that allows you to wind up at school.

If you had all the time in the world to get to school, you might try choosing intermediate states randomly. At one corner you turn left, the next you go straight, then you go left again, then right, then right, then straight. Unfortunately, trial and error will not necessarily get you where you want to go, and even if it does, it is not the fastest way to get there. For example, when a friend of ours was in college, he got lost on the way to a concert and attempted to find the venue by choosing streets to turn onto randomly (this was long before the use of GPS). Amazingly enough, the strategy worked, although he did end up missing two out of the three bands who played that night.

Trial and error is not all bad, however. B.F. Skinner, a prominent behaviorist psychologist, suggested that people often behave randomly in order to see what effect the behavior has on the environment and what subsequent effect this environmental change has on them. This seems particularly true for the very young person. Picture a child filling a household’s fish tank with toilet paper, for example. To a child trying to develop a repertoire of creative problem-solving strategies, an odd and random behavior might be just the ticket. Eventually, the exasperated parent hopes, the child will discover that many of these random behaviors do not successfully solve problems; in fact, in many cases they create problems. Thus, one would expect a decrease in this random behavior as a child matures. You should realize, however, that the opposite extreme is equally counterproductive. If the children become too rigid, never trying something unexpected and new, their problem-solving skills can become too limited.

Effective problem solving seems to call for a balance between relying on well-founded old strategies and trying new ones. The individual who recognizes a situation in which an old problem-solving strategy would work best, and who can also recognize a situation in which a new, untested strategy is necessary, is halfway to success.

Solving Problems with Algorithms and Heuristics

For many problems there is a possible strategy available that will guarantee a correct solution. For example, think about math problems. Math lessons often consist of step-by-step procedures that can be used to solve the problems. If you apply the strategy without error, you are guaranteed to arrive at the correct solution to the problem. This approach is called using an algorithm , a term that denotes the step-by-step procedure that guarantees a correct solution. Because algorithms are sometimes available and come with a guarantee, you might think that most people use them frequently. Unfortunately, however, they do not. As the experience of many students who have struggled through math classes can attest, algorithms can be extremely difficult to use, even when the problem solver knows which algorithm is supposed to work in solving the problem. In problems outside of math class, we often do not even know if an algorithm is available. It is probably fair to say, then, that algorithms are rarely used when people try to solve problems.

Because algorithms are so difficult to use, people often pass up the opportunity to guarantee a correct solution in favor of a strategy that is much easier to use and yields a reasonable chance of coming up with a correct solution. These strategies are called  problem solving heuristics . Similar to the reasoning heuristics you saw in Section 7.2, a problem-solving heuristic is a shortcut strategy that people use when trying to solve problems. It usually works pretty well but does not guarantee a correct solution to the problem. For example, one problem-solving heuristic might be “always move toward the goal” (so when trying to get to school when your regular route is blocked, you would always turn in the direction you think the school is). A heuristic that people might use when doing math homework is “use the same solution strategy that you just used for the previous problem.”
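
Here is a minimal sketch of that contrast using the blocked-route-to-school example. The street grid, the blocked corners, and the function names are invented for illustration; the first function is an algorithm (it systematically checks routes and is guaranteed to find one if any exists), while the second implements the “always move toward the goal” heuristic.

```python
# A sketch of the algorithm vs. heuristic contrast on an invented 5x5 street grid.
from collections import deque

blocked = {(1, 1), (1, 2), (1, 3), (2, 3)}   # intersections under construction
start, goal = (0, 0), (4, 4)                 # home and school

def neighbors(cell):
    x, y = cell
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < 5 and 0 <= ny < 5 and (nx, ny) not in blocked:
            yield (nx, ny)

def breadth_first(start, goal):
    """An algorithm: explores routes systematically, so it is guaranteed
    to find a route (in fact a shortest one) whenever a route exists."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # no route exists

def move_toward_goal(start, goal, max_steps=25):
    """A heuristic: at each corner, take whichever open turn gets you
    closest to school. Quick and easy, but it can wander into a dead end."""
    path = [start]
    for _ in range(max_steps):
        here = path[-1]
        if here == goal:
            return path
        options = [n for n in neighbors(here) if n not in path]
        if not options:
            return None  # stuck: the heuristic failed
        path.append(min(options,
                        key=lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])))
    return None

print(breadth_first(start, goal))      # always finds a route if one exists
print(move_toward_goal(start, goal))   # often works, but with no guarantee
```

In this particular grid the heuristic happens to reach the school, but a different pattern of blocked corners could strand it in a dead end, while the systematic search would still find whatever route exists.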

By the way, we hope these last two paragraphs feel familiar to you. They seem to parallel a distinction that you recently learned. Indeed, algorithms and problem-solving heuristics are another example of the distinction between Type 1 thinking and Type 2 thinking.

Although it is probably not worth describing a large number of specific heuristics, two observations about heuristics are worth mentioning. First, heuristics can be very general, or they can be very specific, pertaining to a particular type of problem only. For example, “always move toward the goal” is a general strategy that you can apply to countless problem situations. On the other hand, “when you are lost without a functioning GPS, pick the most expensive car you can see and follow it” is specific to the problem of being lost. Second, not all heuristics are equally useful. One heuristic that many students know is “when in doubt, choose c for a question on a multiple-choice exam.” This is a dreadful strategy because many instructors intentionally randomize the order of answer choices. Another test-taking heuristic, somewhat more useful, is “look for the answer to one question somewhere else on the exam.”

You really should pay attention to the application of heuristics to test-taking. Imagine that while reviewing your answers for a multiple-choice exam before turning it in, you come across a question for which you originally thought the answer was c. Upon reflection, you now think that the answer might be b. Should you change the answer to b, or should you stick with your first impression? Most people will apply the heuristic strategy to “stick with your first impression.” What they do not realize, of course, is that this is a very poor strategy (Lilienfeld et al., 2009). Most of the errors on exams come on questions that were answered wrong originally and were not changed (so they remain wrong). There are many fewer errors where we change a correct answer to an incorrect answer. And, of course, sometimes we change an incorrect answer to a correct answer. In fact, research has shown that it is more common to change a wrong answer to a right answer than vice versa (Bruno, 2001).

The belief in this poor test-taking strategy (stick with your first impression) is based on the  confirmation bias  (Nickerson, 1998; Wason, 1960). You first saw the confirmation bias in Module 1, but because it is so important, we will repeat the information here. People have a bias, or tendency, to notice information that confirms what they already believe. Somebody at one time told you to stick with your first impression, so when you look at the results of an exam you have taken, you will tend to notice the cases that are consistent with that belief. That is, you will notice the cases in which you originally had an answer correct and changed it to the wrong answer. You tend not to notice the other two important (and more common) cases, changing an answer from wrong to right, and leaving a wrong answer unchanged.

Because heuristics by definition do not guarantee a correct solution to a problem, mistakes are bound to occur when we employ them. A poor choice of a specific heuristic will lead to an even higher likelihood of making an error.

algorithm:  a step-by-step procedure that guarantees a correct solution to a problem

problem solving heuristic:  a shortcut strategy that we use to solve problems. Although they are easy to use, they do not guarantee correct judgments and solutions

confirmation bias:  people’s tendency to notice information that confirms what they already believe

An Effective Problem-Solving Sequence

You may be left with a big question: If algorithms are hard to use and heuristics often don’t work, how am I supposed to solve problems? Robert Sternberg (1996), as part of his theory of what makes people successfully intelligent (Module 8), described a problem-solving sequence that has been shown to work rather well:

  • Identify the existence of a problem.  In school, problem identification is often easy; problems that you encounter in math classes, for example, are conveniently labeled as problems for you. Outside of school, however, realizing that you have a problem is a key difficulty that you must get past in order to begin solving it. You must be very sensitive to the symptoms that indicate a problem.
  • Define the problem.  Suppose you realize that you have been having many headaches recently. Very likely, you would identify this as a problem. If you define the problem as “headaches,” the solution would probably be to take aspirin or ibuprofen or some other anti-inflammatory medication. If the headaches keep returning, however, you have not really solved the problem—likely because you have mistaken a symptom for the problem itself. Instead, you must find the root cause of the headaches. Stress might be the real problem. For you to successfully solve many problems it may be necessary for you to overcome your fixations and represent the problems differently. One specific strategy that you might find useful is to try to define the problem from someone else’s perspective. How would your parents, spouse, significant other, doctor, etc. define the problem? Somewhere in these different perspectives may lurk the key definition that will allow you to find an easier and permanent solution.
  • Formulate strategy.  Now it is time to begin planning exactly how the problem will be solved. Is there an algorithm or heuristic available for you to use? Remember, heuristics by their very nature guarantee that occasionally you will not be able to solve the problem. One point to keep in mind is that you should look for long-range solutions, which are more likely to address the root cause of a problem than short-range solutions.
  • Represent and organize information.  Similar to the way that the problem itself can be defined, or represented in multiple ways, information within the problem is open to different interpretations. Suppose you are studying for a big exam. You have chapters from a textbook and from a supplemental reader, along with lecture notes that all need to be studied. How should you (represent and) organize these materials? Should you separate them by type of material (text versus reader versus lecture notes), or should you separate them by topic? To solve problems effectively, you must learn to find the most useful representation and organization of information.
  • Allocate resources.  This is perhaps the simplest principle of the problem-solving sequence, but it is extremely difficult for many people. First, you must decide whether time, money, skills, effort, goodwill, or some other resource would help to solve the problem. Then, you must make the hard choice of deciding which resources to use, realizing that you cannot devote maximum resources to every problem. Very often, the solution to the problem is simply to change how resources are allocated (for example, spending more time studying in order to improve grades).
  • Monitor and evaluate solutions.  Pay attention to the solution strategy while you are applying it. If it is not working, you may be able to select another strategy. Another fact you should realize about problem-solving is that it never does end. Solving one problem frequently brings up new ones. Good monitoring and evaluation of your problem solutions can help you to anticipate and get a jump on solving the inevitable new problems that will arise.

Please note that this is an  effective problem-solving sequence, not THE  effective problem-solving sequence. Just as you can become fixated and end up representing the problem incorrectly or trying an inefficient solution, you can become stuck applying the problem-solving sequence in an inflexible way. Clearly, there are problem situations that can be solved without using these skills in this order.

Additionally, many real-world problems may require that you go back and redefine a problem several times as the situation changes (Sternberg et al., 2000). For example, consider the problem with Mary’s daughter one last time. At first, Mary did represent the problem as one of defiance. When her early strategy of pleading and threatening punishment was unsuccessful, Mary began to observe her daughter more carefully. She noticed that, indeed, her daughter’s attention would be drawn by an irresistible distraction or book. Armed with this new representation of the problem, she began a new solution strategy: every few minutes she reminded her daughter to stay on task, adding that if she was ready before it was time to leave, she could return to the book or other distraction at that time. Fortunately, this strategy was successful, so Mary did not have to go back and redefine the problem again.

Pick one or two of the problems that you listed when you first started studying this section and try to work out the steps of Sternberg’s problem-solving sequence for each one.

Module 8: Testing and Intelligence

  • Did you take the SAT or the ACT? Have you ever taken an intelligence test?
  • Do you think that tests of intellectual ability (for example, SAT, ACT, intelligence tests) do a good job of predicting who will be successful in school and in life?
  • If the answer to the previous question is “no,” what abilities (besides “intellectual” abilities) might help people to succeed in school and in life?

CogAT, Iowa Test of Basic Skills, Minnesota Multiphasic Personality Inventory, SAT, ACT, General Chemistry, Principles of Economics. At your school, students may be required to take exams to place them into the correct English and math classes, and to determine if they are skilled at college-level reading. You may face standardized tests when you begin and when you finish at the college in order to assess the effectiveness of the curriculum. And, of course, nearly every college class offers two to three exams of its own.

Even after college, you will not be done with tests. Cognitive ability tests, skills tests, even personality tests and integrity tests are all used as part of the employee selection procedure at many companies. If you decide to earn an advanced degree, you may be required to take another standardized test such as the LSAT, GMAT, MCAT, or GRE. And of course, if the current projections that most people will be required to return to school for retraining at various times during their careers are correct, you are likely to continue to face the dreaded midterm and final examinations in courses throughout your life.

For better or worse, we have entered an era of unprecedented testing. Many states require college placement exams of all high school students. Elementary schools begin preparing their students for third grade abilities tests as early as kindergarten. This module describes the good and bad aspects of tests, primarily tests of intellectual ability. Section 8.1 introduces you to the principles of test construction and how they apply to standardized tests and course exams in school. Section 8.2 takes up the question of what intelligence tests measure and fail to measure; it also discusses a couple of views of intelligence that characterize it as a set of separate abilities, rather than as a single trait. Because tests can be difficult, unpleasant, and very important, many students suffer anxiety as a result of them. This module concludes with Section 8.3, a discussion of test anxiety and the effects of stress on memory.

8.1. Understanding Tests and Test Construction

8.2. Measuring “Intelligence”

8.3. Test Anxiety

By reading and studying Module 8, you should be able to remember and describe:

  • Aptitude and achievement tests (8.1)
  • Standardization, reliability, and validity (8.1)
  • Cattell-Horn-Carroll (CHC) theory and its organization (8.2)
  • Successful Intelligence: analytical, creative, practical intelligence (8.2)
  • Test bias (8.2)
  • Stereotype threat (8.2)
  • Stressors (8.3)
  • Effects of stress and anxiety on memory and testing ability (8.3)
  • Relaxation techniques for test anxiety: STOP technique, progressive relaxation (8.3)

By reading and thinking about how the concepts in Module 8 apply to real life, you should be able to:

  • Recognize standardization, reliability, and validity in exams you encounter (8.1)
  • Recognize examples of the Cattell-Horn-Carroll (CHC) theory (8.2)
  • Recognize examples of Robert Sternberg’s Successful Intelligence (8.2)

By reading and thinking about Module 8, participating in classroom activities, and completing out-of-class assignments, you should be able to:

  • Combine elements of different definitions and theories to come up with your own definition of intelligence (8.2)
  • Identify whether anxiety helps or hurts you on exams, and devise a strategy to manage it if necessary (8.3)
  • Describe the important elements you would include in a test to predict success in school and career (8.1 and 8.2)

8.1 Understanding Tests and Test Construction

  • Have you ever taken a test and felt that you knew more than you were able to demonstrate on the test?
  • Do you consider yourself a good or a poor test taker? If good, what makes you good? If poor, what makes you poor?

Many students complain that tests—standardized tests in particular—do not reflect their actual talents, knowledge, and abilities. These students assert that they are poor test takers who know much more than their test scores indicate. They worry that being bad at taking tests will unfairly impede their academic and work careers, because so many decisions that affect a person’s status in life are based on test scores. Although it will not entirely solve the problem, understanding a few things about tests might help demystify them and help you cope with them.

Entrance, placement, and job selection tests are all designed to do one thing: predict your future success at some endeavor. Tests that are designed to predict some future performance are called  aptitude tests . College aptitude tests (SAT, ACT), for example, are designed to predict your grade point average during the first semester of your freshman year of college. They are, at least in principle, different from the kinds of tests that you take in your classes at school. These other tests are called  achievement tests . Achievement tests are designed to measure whether you have met some particular learning goals (e.g., did you learn the material from Chapter 6 of your history textbook?).

Aptitude tests are not supposed to reflect achievement, but they often do. Some of the best-known so-called aptitude tests, such as the SAT, have been found to rely too much on knowledge that people learn from their environment to be true measures of one’s potential. Thus, you should always keep in mind that the terms aptitude test and achievement test refer to a test’s intended use only, not to any principles related to its construction or the things it actually measures. In order to avoid confusion, we will rarely use these terms and will refer instead to specific types of tests—for example, college entrance tests (SAT, ACT) and course exams.

aptitude test: a test designed to predict the test taker’s future performance

achievement test: a test designed to measure whether the test taker has met particular learning goals

Three Key Testing Concepts

College entrance tests, along with other aptitude tests, are generally standardized tests. These are tests given to people under similar testing conditions and for which individual scores are compared to a group that has already taken the tests. Let us look carefully at three key concepts that apply to many tests: standardization, reliability, and validity.

Standardization  refers to the procedure through which an individual’s score is compared to the scores from people who have previously taken the test. Typically, a test will be given to a large group of people (several thousand). The scores from this standardization group are distributed in the form of the famous “bell curve.” That is, most of the scores will cluster near the middle, at the average score. There will be fewer and fewer people who score farther away from the average score. A chart of this distribution looks like the outline of a bell:

[Figure: a bell-shaped distribution of test scores, with most scores clustered around the average and fewer and fewer scores farther from the average.]

By using this distribution of scores, it is possible to estimate an individual’s score relative to the standardization group with great precision. In order for standardization to work—that is, to be able to pinpoint the individual’s performance relative to the standardization group—the testing conditions must be similar for all test takers. That is why these tests are timed, with everyone taking the test in nearly identical settings.
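
To make standardization concrete, here is a minimal sketch of how an individual score can be located within a bell-shaped standardization distribution. The mean of 500, the standard deviation of 100, and the individual score of 630 are invented numbers, and the calculation assumes the standardization group’s scores are roughly normally distributed.

```python
# A sketch of how standardization locates one score within the bell curve.
# The mean, standard deviation, and individual score are invented numbers.
from statistics import NormalDist

group_mean = 500   # average score in the standardization group
group_sd = 100     # spread of scores in the standardization group
your_score = 630

# z tells you how many standard deviations above the group average you scored;
# the percentile is the share of the standardization group scoring below you,
# assuming the group's scores follow a roughly normal (bell-shaped) curve.
z = (your_score - group_mean) / group_sd
percentile = NormalDist().cdf(z) * 100

print(round(z, 2), round(percentile))  # 1.3, about the 90th percentile
```

This is roughly what a standardized score report is doing when it tells you that you scored “in the 90th percentile.”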

The second important concept in standardized testing is reliability, which refers to the consistency of a test. If a test is to be a good predictor of your future performance, it should be a consistent, or stable, predictor. It would not be very useful if the test predicted straight A’s for you on a Tuesday but a D average for you on a Friday. In order to assess the reliability of a standardized test, psychometricians (psychologists who construct tests) examine two types of consistency:

  • Consistency over time is often assessed by measuring test-retest reliability. The concept is very simple. Give a group of people the test today, and then give it to them later, say, three months from today. If the test is reliable, individuals in the group should receive close to the same score both times.
  • Consistency within the test is assessed by measuring what is known as internal consistency reliability, or how well the different parts of a test agree with one another. If one section of a test of verbal ability indicated that you have above-average verbal ability, it would not make sense if another section of the test indicated below-average verbal ability.

There is an important relationship between reliability and standardization. Specifically, failure to standardize the procedures by which a test is administered will lead to unreliability. For example, if a group of test-takers is given more time in a more comfortable room the second time they take a test, then test-retest reliability will be low (because they will likely score much higher on the retest).
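
In practice, both kinds of reliability are usually expressed as correlation coefficients: values near 1 mean the scores agree closely. The sketch below uses made-up scores for ten test takers, and the split-half comparison shown is just one common way of estimating internal consistency.

```python
# A sketch of how the two kinds of reliability are typically estimated.
# The ten test takers' scores are invented; real studies use far larger samples.
from statistics import correlation  # Pearson correlation (Python 3.10+)

# Test-retest reliability: the same ten people take the test twice, months apart.
scores_time1 = [82, 75, 91, 68, 88, 79, 95, 72, 84, 77]
scores_time2 = [80, 78, 93, 65, 85, 81, 94, 70, 86, 75]
test_retest = correlation(scores_time1, scores_time2)

# Internal consistency, split-half version: each person's total on the
# odd-numbered items is compared with their total on the even-numbered items.
odd_item_totals  = [40, 37, 46, 33, 45, 39, 48, 35, 42, 38]
even_item_totals = [42, 38, 45, 35, 43, 40, 47, 37, 42, 39]
split_half = correlation(odd_item_totals, even_item_totals)

print(round(test_retest, 2), round(split_half, 2))  # values near 1 = consistent
```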

The third important concept in standardized testing is validity, which refers to the degree to which a test measures what it is supposed to measure. It is the most complex of the three concepts. We will focus on two particular kinds of validity:

  • Content validity  in essence rephrases the question “Does the test measure what it is supposed to measure?” to be “Does the test look like it measures what it is supposed to?” Specifically, content validity is a judgment made by a subject-matter expert that a test adequately addresses all of the important skills and knowledge that it should.
  • Predictive validity  rephrases the question to be “Does the test predict what it is supposed to predict?” Obviously, then, it is principally of interest when we are talking about aptitude tests. For example, the SAT and ACT are designed to predict your college GPA. The measure of a test’s success at doing that is its predictive validity.

There is an important relationship between reliability and validity. If a test is reliable, we can say nothing about whether it is valid or not. Think about it. A test can be extremely reliable (consistent) yet be a very poor predictor. For example, your shoe size is reliable but not a valid predictor of your grade in this class (of course, your instructor’s job would be much easier if it were valid!).

[Figure: a diagram showing that if a test is reliable, it might be valid or it might not be valid.]

On the other hand, if a test is unreliable, we can say something about its validity. An unreliable test cannot be valid. If a test is not consistently measuring anything (which is what being unreliable means), then it certainly cannot be a good predictor. Thus, reliability is necessary but not sufficient for a test to be valid.

[Figure: a diagram showing that if a test is unreliable, it cannot be valid.]

standardization: comparing a test taker’s score to the scores from a pre-tested group

reliability: the consistency of a test

test-retest reliability: a technique for measuring reliability by examining the similarity of scores when the same individuals take a test multiple times

internal consistency reliability: a technique for measuring reliability by examining the similarity of an individual’s sub-score for different parts of the test

validity: whether a test measures what it is intended to measure

content validity: a technique of estimating validity by having an expert judge whether the test samples from an appropriate range of skills and knowledge

predictive validity: a technique of estimating the validity of an aptitude test by comparing test-takers’ actual performance on some task to the performance that was predicted by the test

The Properties of Course Exams

So, how do the tests with which you may be familiar fare on the three important test construction properties? First, let us consider course exams. It is important to realize that college professors are not psychometricians. Nonetheless, an informal kind of standardization often does occur. Instructors try to administer tests in similar conditions every time, and they often adjust their results to report scores relative to the rest of the class (when you are graded on a curve). This is similar to, but not the same as, standardization.

It is difficult to make generalizations about the reliability of course exams. It is probably safe to say that reliability is a problem for many course exams because there are so many specific threats to reliability. First, although instructors try, it is difficult to standardize procedures. Some classrooms are cold and noisy, others are comfortable and quiet. Some class sections meet twice a week for an hour and a half, others 3 days a week for an hour. And so on. Other important threats to reliability can come from questions or instructions that are misunderstood by students and non-course related vocabulary words that are known to some students and not to others.

Of course, then, these threats to reliability also influence the validity of course exams. There are some reasons to be a bit more optimistic about the validity of these exams, however. The content validity of course exams is often very high. Instructors are often very careful about indicating what skills and knowledge are important to learn during the course. They subsequently do a good job of ensuring that these skills and knowledge are included on their exams. If some of the threats to validity that result from reliability problems could be addressed, then we might be very optimistic indeed about the validity of course exams. Note, however, that these qualities relate only to course exams’ validity as achievement tests. Their usefulness for predicting some future performance (hence, their predictive validity) is usually an open question.

The Properties of Standardized Tests

Now, what about standardized tests (so-called aptitude tests), such as the SAT and ACT? As you might guess, standardization tends to be very good. The administration procedures are usually precisely controlled, making them quite uniform. The comparison group (standardization group) is usually very large and quite recent.

Reliability tends to be high for standardized tests, in part because of the good control of administration procedures. For example, although some people do score differently if they take the test a second or third time, most people will have very similar scores from one time to another.

Validity, however, is a more complex and controversial issue. Let us focus on the predictive validity of college aptitude tests (SAT and ACT) to illustrate the issues. As supporters of aptitude tests are quick to point out, these tests are the best single predictor of success in a wide variety of areas. This may not be true, however. Recent research conducted by the College Board, the publisher of the SAT, found that high school GPA is a better predictor of college grades than the SAT is (Wingert, 2008). Still, if you ask us to pick someone who will succeed as a salesperson, a doctor, or a marketing manager, and tell us that we can know only one piece of information about that person, we might ask for the SAT score. Unfortunately, some people confuse the idea that “aptitude tests are the best single predictor” with “aptitude tests are better than all other predictors combined.” These are very different ideas. College aptitude tests (SAT and ACT) are said to be moderately successful at predicting a person’s college GPA during the first semester of freshman year. No other predictor (e.g., high school GPA, letters of recommendation) has as high a correlation with first-semester freshman-year college GPA.

This does not mean, however, that the SAT predicts most of college GPA, or even that it does a great job of predicting GPA during the first semester of freshman year. Only 25% of the variability in students’ first-semester freshman-year GPA is related to their college aptitude test scores (Willingham et al., 1990; Wingert, 2008). Thus, test scores are not a trivial tool for predicting success when a person first starts college, but at the same time, they are far from infallible. With 75% of the variability in first-semester freshman-year college GPA unrelated to test scores, you will find many cases of people who score high on the tests yet fail when they get to college. Similarly, community colleges across the United States have many students whose SAT or ACT scores predicted they would fail at a four-year college but who end up doing extremely well and even go on to complete a bachelor’s or master’s degree or more.
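
Figures like “25% of the variability” come from squaring a correlation: the proportion of variability two measures share is the correlation squared. The arithmetic below is purely illustrative; the .50 value is a round number chosen to yield 25%, not a statistic reported in the cited studies.

```python
# Purely illustrative: squaring a correlation gives the proportion of shared variability.
correlation = 0.50                      # hypothetical round value
variance_explained = correlation ** 2   # 0.25, i.e., 25%
print(f"{variance_explained:.0%} of the variability explained, "
      f"{1 - variance_explained:.0%} unexplained")
```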

  • Please try to recall some bad experiences with exams (both course exams and standardized tests). Try to assess the degree to which the bad experience was related to standardization, reliability, and validity.
  • What would you do to increase the reliability and validity of course exams?
  • What would you do to increase the validity of tests like the SAT or ACT?

8.2 Measuring “Intelligence”

  • What is your definition of intelligence?
  • How closely is intelligence, as you define it, related to success in life?

One way to begin thinking about intelligence is to examine the tests that are supposed to measure it. Let us look briefly at one of the most popular intelligence tests, the Wechsler Adult Intelligence Scale. The WAIS-IV (for 4th edition), which was published in 2008, is described as a test of intellectual ability. The test is administered to individual test takers by a trained examiner; it takes 60 to 90 minutes to complete. It contains 10 core subtests and 5 supplemental, unscored subtests. The scored subtests contribute to an overall intelligence test score.

WAIS-IV Sub-tests

Sub-tests preceded by an asterisk are unscored subsections.

So, now you know roughly what a typical test of intelligence looks like. (You might also think about the college entrance exams you may have taken; although they are not supposed to be intelligence tests, they are based on principles designed to measure intelligence; Gardner, 1999.) But could you now describe exactly what intelligence is? No one really has a good, complete definition of intelligence with which everyone will agree. Furthermore, as you might have noticed from the WAIS description, intelligence tests do not measure many of the types of abilities that most people, including many psychologists, would count as intelligence. That is one reason why the subject of measuring intelligence has become rather controversial.

Many would agree that standardized intelligence-type tests mostly measure your ability to solve problems in very specific areas, namely math and language. Scores on these tests have been shown to modestly predict success in other areas as well, such as on the job (at least for certain jobs). Some psychologists have argued, however, that these other types of success depend much more on abilities—what we might call non-academic intelligences—that are beyond skills in using math and language.

The most complete current theory that describes the structure and organization of the cognitive abilities that we would call intelligence is the Cattell-Horn-Carroll (CHC) theory  (Flanagan & Dixon, 2014). CHC theory divides cognitive ability into three separate levels; these levels go from broad, general abilities to narrow, more specific ones. The top-level corresponds to general intelligence, the broadest level. An intermediate level includes 16 somewhat narrower, but still quite general abilities such as processing speed, reasoning ability, memory, acquired knowledge, etc. Then, there are more than 70 narrow abilities, such as:

  • inductive reasoning,
  • quantitative reasoning,
  • communication ability,
  • mechanical knowledge,
  • reading comprehension,
  • memory span (number of items you can hold in working memory),
  • ability to remember meaningful information,
  • ability to remember unrelated information,
  • olfactory memory (memory for different odors),
  • multi-limb coordination,
  • and so on.  

Think of the three levels as a hierarchy; some of the 70 narrow abilities map onto abilities in the intermediate level, which then combine to map onto the general intelligence level. Because the narrow abilities can be measured more easily, psychologists can use these both for the insights they provide about the narrow abilities themselves and for an estimation of an individual’s abilities on the higher levels.  

One final observation about the theory: Note the number of abilities we just reported. CHC theory has 16 semi-general abilities and 70 narrow abilities. The ones we listed were just some interesting examples of those abilities. In other words, the theory is huge. And maybe that is a good thing. Given the rich diversity of human cognitive abilities, and the very many ways that one might exhibit intelligence, it makes sense that it would take a monster-sized theory to describe it. CHC theory has been very useful at guiding and organizing research, and it undergoes frequent revision as new research comes in. We hope that sounds familiar and good, as this is exactly what a theory is supposed to do (see Module 2).  

Successful Intelligence

Robert Sternberg took a slightly different track in the development of his theory of intelligence. He noted that traditional views of intelligence tended to include only problem-solving in an academic context and ignored the set of abilities that truly allow people to succeed in life. Sternberg calls his concept Successful Intelligence. It encompasses the abilities of recognizing and maximizing strengths, recognizing and compensating for weaknesses, and adapting to, shaping, and selecting environments. Sternberg defines three important components of successful intelligence (Sternberg and Grigorenko, 2000; Sternberg, 1996):

  • Analytical intelligence: The conscious direction of mental processes to solve problems. It involves identifying problems, allocating resources, representing and organizing information, formulating strategies, monitoring strategies, and evaluating solutions. The “intelligence” tested by many traditional tests of intellectual ability makes up only a small portion of analytical intelligence.
  • Creative intelligence: Generating ideas that are novel and valuable. It often involves making connections between things that other people do not see.
  • Practical intelligence: An ability to function well in the world. It involves knowing what is necessary to do to thrive in an environment and doing it. Because our world is essentially a social one, interpersonal and communication skills are keys to practical intelligence.

These three component intelligences combine to determine a person’s successful intelligence. Interestingly, Sternberg believes that successful intelligence can be improved.

A Final Word on Different Views of Intelligences

Which one of these views on intelligences is correct? This is, perhaps, not a fair question. Because intelligence is defined differently by different psychologists, both views may be considered correct. It seems very likely that a set of abilities related to managing emotions and understanding and dealing with people (including oneself) is a key component for success in life. And these abilities, more than any others, are the keys to success that are not tested by traditional intelligence tests and other intellectual aptitude tests.

Cattell-Horn-Carroll Theory of Cognitive Ability: A comprehensive theory of human cognitive ability that organizes intelligence in three levels, from the highest general intelligence level, through intermediate broad abilities, and to more than 70 narrow abilities.

Successful intelligence: Robert Sternberg’s characterization of intelligence as three separate abilities that allow an individual to succeed in the world.

Are tests unfair?

Of course, tests can be unpleasant experiences for some people. Many unpleasant experiences are fair and beneficial, however. For example, many people do not particularly like injections, but they endure them because they know that the shots are good for them. So, it is not enough to condemn testing because the experience is difficult and unpleasant. We need to take a good look at whether they treat people fairly. One exercise you may find helpful in that regard is to take a quick glance backward, to the history of intelligence, intelligence testing, and aptitude testing.

As you might know, intelligence and intelligence testing have been quite controversial over the years. Perhaps the topic was doomed to controversy from the very beginning. In the late 1800s, one of the pioneers in the area, Francis Galton, promoted the view that intelligence was entirely hereditary. He also believed that various races differed in their intelligence and took it as a given that men were more intelligent than women. Most controversially, perhaps, it was Galton who developed the concept of eugenics, a field that sought to encourage the reproduction of “genetically superior” people and discourage the reproduction of “genetically inferior” people (Hunt, 2007).

Although Galton developed his own tests of intelligence, they tended to focus on sensory abilities (this was another of his beliefs, that these sensory abilities were related to intelligence) and were not particularly influential in the fledgling field of intelligence testing. Intelligence testing as we know it really began with Alfred Binet and Theodore Simon in France at the beginning of the 20th century. Binet and Simon created a test designed to predict which children would succeed and which ones would have difficulty in school. Binet and Simon conceptualized the idea of mental age, the cognitive abilities that should correspond to a particular age. Later, the concept would be given a number, the famous IQ, or intelligence quotient. IQ is the ratio of mental age to actual age (multiplied by 100). A child with the same mental and actual age, therefore, has an IQ of 100. A child with a mental age of 10 and an actual age of 8 would have an IQ of 125 (10/8 = 1.25). Intelligence testing quickly developed into aptitude testing, testing designed to predict future performance. The practice was embraced in the US and was adopted by the US military to classify recruits during World War I.
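
The original ratio IQ described above is a one-line formula. Here is a minimal worked version of that calculation; the ages come from the example in the text, and it is worth noting that modern tests instead report deviation IQs based on age norms rather than this ratio.

```python
# Ratio IQ as Binet-era testers defined it: mental age over chronological age, times 100.
def ratio_iq(mental_age, chronological_age):
    return (mental_age / chronological_age) * 100

print(ratio_iq(10, 8))   # 125.0 -- the child from the example
print(ratio_iq(8, 8))    # 100.0 -- mental age equals actual age
```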

Fresh from its perceived success in the war effort, the testing world soon created a new controversy. The Scholastic Aptitude Test (SAT) was developed in 1926, and by the 1930s it was being used to admit students to Ivy League universities (a small group of east coast, extremely elite schools, such as Harvard and Yale). The original intention was to create a test that could discover students who would be able to excel at these institutions despite not having had the advantage of an east coast college-preparatory education. That is, the goal was to tap into a student’s potential, a potential distinct from one’s educational experience (Lemann, 1999). Unfortunately, it did not quite work out that way. The SAT and its chief competitor, the ACT, ended up largely “discovering” students from the very same advantaged backgrounds that they had sought to move beyond (Gardner, 1999). So, an important piece of the history of aptitude tests is that the individuals who had better educational experiences, particularly wealthy whites, performed better on the tests.

What does it mean for a test to be unfair? Many students might initially argue that it is essentially the same thing as being difficult, but that is not quite right. A test can be difficult and fair. As an aside, it is important to distinguish between the fairness of a test and the fairness of using the test. A test may be perfectly fair, but if it is used for an unintended purpose, that use can be unfair. For example, some personality tests that have not been designed to predict success on the job have been used for job selection purposes.

In order for a test to be fair, it should first be acceptably high on the test construction principles: standardization, reliability, and validity. Second, it should be unbiased, meaning that it should treat everyone the same. These ideas apply to both aptitude tests and course exams, by the way. We have already described the issues surrounding standardization, reliability, and validity, so let us focus on the question of bias.

In many cases, whether a test is biased or unbiased is a legal question. For example, think about job selection procedures (see sec 20.1). According to the Equal Employment Opportunity Commission, an agency of the US government, a selection procedure, including a test, is suspect if it results in a hiring rate for minority group members below 80% of the rate for the most-hired group. At that point, it is up to the employer to prove that the selection procedure is valid. So, in principle, it can be acceptable for a test to be biased against a group, as long as it does a good job at predicting success. In reality, however, it can be very difficult to justify the use of a biased test unless the validity is extraordinarily high.
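
The 80% guideline described above (often called the four-fifths rule) is simply a comparison of hiring rates. The sketch below runs through the arithmetic with made-up applicant counts; it illustrates the rule of thumb only and is not the EEOC's own procedure or legal advice.

```python
# Hypothetical applicant and hire counts, for illustration only.
majority_applicants, majority_hired = 100, 50   # hiring rate: 50%
minority_applicants, minority_hired = 80, 28    # hiring rate: 35%

majority_rate = majority_hired / majority_applicants
minority_rate = minority_hired / minority_applicants
impact_ratio = minority_rate / majority_rate

print(f"impact ratio: {impact_ratio:.2f}")      # 0.70
if impact_ratio < 0.80:
    print("Below the 80% guideline: the selection procedure is suspect.")
```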

As a result of the attention that psychometricians have paid to bias over the years, the most obvious sources of biases are gone from the widely used tests. They are probably not all gone, however. For example, some researchers note that the SAT is still biased against African Americans, Asian Americans, and members of the Latinx community (Freedle, 2003).

Let us finish this discussion of the fairness of tests by turning to a common complaint about course exams that sounds like a matter of content validity. Some students believe that an exam was unfair because they spent time studying a topic that ended up not being on the test. Although this could be a weakness of content validity (the test failing to sample from the appropriate skills and knowledge), it is more likely a misunderstanding of how course exams are supposed to work. When instructors give an exam, they would like to be able to conclude that students’ performance on it reflects their mastery of all of the associated course material, not simply the material on the exam.

The way this works is that the process of giving an exam is analogous to the process of conducting a survey with a representative sample. Survey researchers draw random samples from a population in order to generalize from a small sample to the whole population (sec 2.2). As long as each individual member of the population has an equal chance of being included in the sample, the researcher can conclude that the results of the sample reflect the opinions of the whole population. Course exams work the same way. Instructors’ “population” is all of the material from the course. They want to be able to determine whether you have learned everything. Of course, one way to do that would be to test you on everything, but that would result in an exam nearly as long as the course itself. So, your instructor samples from the material. If you are able to correctly answer questions on a sample of material—as long as you do not know exactly what that sample will be prior to the exam—your instructor can assume that you could have correctly answered questions from any possible sample of material. In other words, we can assume that you learned all of the material by testing you on a relatively small portion of it.
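
To make the sampling analogy concrete, here is a minimal sketch of an instructor drawing a random sample of questions from a pool that covers every course topic. The topic names and question counts are invented for the illustration.

```python
import random

# Hypothetical question pool: every course topic is represented.
question_pool = {
    "memory": 12, "conditioning": 10, "reliability": 8,
    "validity": 8, "intelligence": 12,
}
all_questions = [f"{topic} Q{i + 1}" for topic, n in question_pool.items() for i in range(n)]

# Sample 10 questions; because students do not know which 10 in advance,
# performance on the sample is taken to reflect mastery of the whole pool.
exam = random.sample(all_questions, k=10)
print(exam)
```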

  • Did your earlier definition of intelligence contain any of the “non-intellectual” abilities described in this section?
  • Can you think of an example of how the wording of an aptitude test question could be biased?

8.3 Test Anxiety

Think about the most important test you can remember taking.

  • How nervous were you during the test?
  • How long before the test did you start getting nervous?
  • Did your nerves cause you to do better or worse on the test than you would have had you been completely calm?

Some students seem to shine on tests. The pressure of needing to do well enhances their memory and performance. Others, however, are not so fortunate. Despite studying, in many cases, longer than the “shiners,” these students find themselves blanking out on tests, nearly paralyzed by an anxiety that makes them forget much of what they studied. Why does the stress of taking a test translate into increased performance for some and debilitating anxiety (and reduced performance) for others? In order to answer this question, it is helpful to understand how stress affects memory.

Stress, as Module 27 describes more completely, arouses the body. It is important to realize that the body does not differentiate between psychological threats, such as exams and deadlines, and physical threats, such as being held up at gunpoint. These environmental threats or challenges are called stressors. For both psychological and physical stressors, the physiological reactions that take place have collectively been called the “fight or flight” response. Essentially, stressors cause the body to prepare itself to meet a physical danger—even when the stressor is psychological—by giving it a temporary boost in its ability to fight or run away from the danger. So, your heart beats faster in order to pump blood, which sends glucose (sugar, for energy) and oxygen throughout the body faster. This blood is diverted from parts of the body not needed to face the danger, such as the digestive system, and pumped to the large muscles of the arms and legs (so you can fight or run). This is why your stomach gets jumpy and your mouth gets dry when you are under stress.

Another part of the body that gets a boost of energy is the brain, particularly the areas that are involved in memory formation. Stress hormones also affect the memory-related areas of the brain. The result is a boost in memory during stressful events, which helps explain the “I do my best testing under pressure” folks.

But what about those who suffer from test anxiety? The key observation that is missing so far is that the memory-boosting effects of stress result from short-term, mild to moderate stressors only. Long-term, severe stressors have the opposite effect: they make memory worse.

How short is short-term? After about 30 minutes of a constant stressor, the brain’s use of glucose for extra energy returns to its regular level (Sapolsky, 2004). Beyond 30 minutes, there is actually a rebound effect, a reduction in the brain’s use of glucose. The student who begins to get nervous for a 1:00 exam at 12:55 is not much affected by this rebound effect. It is not so good, though, for the student who wakes up at 7:00 am already anxious about the exam.

To make matters worse for the test-anxious student, the very same event will be more threatening, and thus more stressful, for some students than others. Think about it. If you have had a lot of success taking exams in the past, each new exam is likely to seem fairly unthreatening. On the other hand, if you frequently do poorly on exams, each new exam is very threatening. The result is that the same mid-term exam or the same college aptitude test will be a short-term, mild to moderate stressor for some students and a longer-term, severe stressor for others. The students who fall in the latter group suffer from test anxiety.

The trick for the test-anxious, then, is to turn exams and standardized tests into short-term, mild-to-moderate stressors. Easier said than done, you say. The very activity that is intended to prepare you for the exam, studying, becomes stressful itself as you begin to think about the likelihood of failure. The key is to continue to prepare, studying as hard as or harder than you have been, but with two important differences. First, pay very close attention to the information in Module 7 about metacognition. It will be very helpful for you to develop the ability to reflect on your thinking and studying, so you will have a very accurate estimate of just how prepared you really are. Second, and perhaps more importantly, you should learn how to relax when you begin to feel anxious while studying. Remember, the ideal is to become a little anxious right before the exam begins, so if you are feeling anxiety at any time before then, you should probably engage in some calming or relaxation behaviors. You should develop a repertoire of relaxation techniques, some short and simple, others longer and more involved. Depending on what the situation allows and how anxious you are feeling, you can then choose from among several alternatives.

Here are a couple of techniques to help you begin to learn how to relax in the face of test anxiety (from Davis, Robbins Eshelman, and McKay, 1995). One is very short and simple, the second a bit more involved.

  • STOP Technique. Whenever you find yourself getting anxious about a test (while studying or taking the test), silently “shout” to yourself to “STOP!” Once you have distracted yourself from the anxiety-provoking thoughts (which may take several “shouts”), start paying attention to your breathing. Take slow, deep breaths, making sure that your abdomen moves out with each inhalation. Count your breaths. Inhale. Exhale and count one. Inhale. Exhale and count two. When you reach 4, start over with one. Try to concentrate on the breaths. Don’t worry if you find yourself thinking the anxious thoughts again. Just “STOP” yourself and continue counting your breaths. Continue until you feel relaxed. You can repeat the procedure any time you start thinking anxious thoughts.
  • Progressive Relaxation. Begin in a comfortable position, either sitting or lying down. Clench your right fist tightly, paying close attention to the tension in your fist and forearm. Relax and pay attention to the difference in feeling. Repeat once more with the right fist, then twice with the left. Bend your elbows and tense both biceps. Again, pay attention to the tension. Relax and straighten your arms, paying attention to the different feeling. Repeat once. Repeat the procedure twice with different areas of the head and face, for example, forehead, jaws, eyes, tongue, and lips. Tense your neck by tucking your chin back. Feel the tension in different areas of the neck and throat as you slowly move it from side-to-side and let your chin touch your chest. Relax and return your head to a comfortable position. Tense your shoulders by shrugging them. Relax them and drop them back. Feel the relaxation in your neck, throat, and shoulders. Relax your whole body. Take a deep breath and hold it. You will feel some tension. Exhale and relax again. Repeat several times. Then, tighten your stomach. Relax. Breathe deeply, letting your stomach rise with each inhalation. Note the tension as you hold each breath and the relaxation as you exhale. Tense and then relax your buttocks, thighs, calves, and shins (tense your calves by pointing your toes, shins by pulling your toes toward your shins) twice each. As you relax, notice how heavy your whole body feels. Let the relaxation go deeper, as you feel heavy and loose through your whole body.

You should practice these techniques for a week or two when you are not feeling anxious so that they will be easier to use when facing the real thing.

  • In general, do you think that you need to increase or decrease your level of anxiety to achieve your best performance on tests?
  • If you need to increase your anxiety, what strategies do you think you can use to help you?
  • If you need to decrease anxiety, which of the strategies presented in this section seem most useful to you? Can you think of other useful strategies?

Module 9: Cognitive Psychology: The Revolution Goes Mainstream

By reading and studying Module 9, you should be able to remember and describe:

  • The place of “mental processes” in early psychology
  • The banishment of “mental processes” by the behaviorists
  • Ivan Pavlov’s work on classical conditioning
  • John Watson’s work, especially the Baby Albert demonstration
  • Edward Thorndike’s and B.F. Skinner’s extensions of behaviorist psychology
  • Shortcomings of behaviorism

  • The Cognitive Revolution

  • Progress in memory research: Ebbinghaus and Bartlett, early “post-revolution” research, working memory versus short term memory, inferences and memory, explicit and implicit memory
  • Cognitive psychology in perspective

Modules 5 through 8 showed you how being knowledgeable about some psychological principles can help you think, remember, reason, solve problems, learn, and take tests more effectively, in school and throughout life. Most of the topics contained in these modules are considered the domain of Cognitive Psychology, the psychology of cognition. The study of cognition is the study of knowledge: what it is, and how people understand, communicate, and use it. Cognition is essentially “everyday thinking.”

In the early days of psychology (the late 1800s), cognitive (mental) processes were at the forefront of psychological thinking. But by the first half of the 1900s, psychology—especially in the United States—was dominated by Behaviorism, a view that rejected the idea that internal mental processes are the appropriate subject matter of scientific psychology. Behaviorism itself fell victim to what became known as the Cognitive Revolution in the second half of the 1900s. Psychology, in a way, had come full circle, as mental processes again became a major area of interest. Today, insights about Cognition have touched all areas of Psychology. In addition, Cognitive Psychologists have collaborated with scientists in other disciplines interested in “intelligent behavior” (for example, computer science and philosophy) to create what Howard Gardner (1985) called “The Mind’s New Science,” or Cognitive Science.

Psychology as the Study of Mind

Wilhelm Wundt created the first psychological laboratory in 1879 in Germany, although for a number of years before then several scientists had been conducting research that would become part of psychology. Wundt worked very hard for decades to establish psychology as a viable discipline across Europe and the United States. His laboratory trained many of the first generation of psychologists, who began their own research throughout the western world.

Wundt believed that experimental methods could be applied to immediate experience only. Hence, only part of what we now think of as psychology could be a true science. Recall that a science requires objective observation. Wundt believed that mental processes beyond simple sensations—for example, memory and thoughts—were too variable to be observed objectively.

Mind Versus Behavior

The American psychologist William James argued that psychology should include far more than immediate sensations. He greatly expanded psychology into naturally occurring thoughts and feelings. The group of psychologists called the behaviorists, however, moved psychology in the opposite direction by reducing psychology to the study of behavior only. Behaviorism dominated psychology in the United States through most of the first half of the 20th century. Although, as we shall see, the behaviorists’ approach ended up being far too narrow to explain all of human behavior, their contributions were nevertheless extremely important.

Classical Conditioning

The first giant contributor to behaviorist psychology was Ivan Pavlov. Although Pavlov was not a psychologist—he was a Russian physiologist—his research had an enormous impact on the development of psychology as the science of observable behavior. He discovered and developed the basic ideas about classical conditioning, which he believed explained why animals, including humans, respond to their environment in certain ways.

As a physiologist, Pavlov had built quite a reputation studying digestive processes. He had done a great deal of research with dogs, for which he constructed a tube to collect saliva as it was produced in a dog’s mouth. In the course of his research, Pavlov became annoyed by a common occurrence. If you put food into a dog’s mouth, the dog will begin to salivate. After all, salivation is a natural part of the digestive process. The annoying phenomenon that Pavlov noticed (annoying because it interfered with his regular research) is that the dogs under study would begin to salivate before they were given the food, perhaps when the person who fed them walked into the room (Hunt 2007). In 1902, Pavlov realized that this annoyance was worthy of study itself, and thus embarked upon a series of examinations that would span the rest of his career. This research program made Pavlov one of the most important contributors in the history of psychology.

Pavlov was responsible for the initial investigations of the central concepts in classical conditioning: unconditioned stimulus, unconditioned response, conditioned stimulus, conditioned response (see Module 6). He typically used food, often meat powder, as the unconditioned stimulus and noted how salivation occurred in response, as an unconditioned response. He would present various neutral stimuli (e.g., sounds, sights, touches) along with the Unconditioned Stimulus (the meat powder), turning them into Conditioned Stimuli. Thus these new stimuli acquired the power to cause salivation (now Conditioned Responses) in the absence of food.

It was Pavlov who discovered that the Conditioned Stimulus must come before the Unconditioned Stimulus, that Conditioned Stimuli could be generalized and discriminated, and that extinction of the conditioned response would occur if a Conditioned Stimulus stopped being paired with the Unconditioned Stimulus.

John Watson, an early leader of the behaviorist movement, seized on Pavlov’s findings. He thought—and persuasively argued—that all animal behavior (including human behavior) could be explained using the Stimulus-Response principles discovered by Pavlov. In support of this idea, Watson and Rosalie Rayner provided a dramatic example of classical conditioning in humans, and one of the most famous psychology studies of all time. They were able to classically condition an infant who became known as Baby Albert to fear a white rat. Before conditioning, the rat was a neutral stimulus (or even an attractive stimulus); it certainly elicited no fear response in Albert. Loud noises did, though. The sound of a metal rod being hit with a hammer close behind Albert’s head automatically made him jump and cry. The loud noise, therefore, was an Unconditioned Stimulus; Albert’s automatic fear, an Unconditioned Response. By showing Albert the rat, then banging on the metal rod (i.e., pairing the neutral stimulus with the Unconditioned Stimulus), the rat was easily transformed into a Conditioned Stimulus, eliciting the Conditioned Response of fear (Watson and Rayner 1920). Two pairings of rat with noise were enough to elicit a mild avoidance response. Only five additional pairings led to quite a strong fear response. As Watson and Rayner described it:

The instant the rat was shown the baby began to cry. Almost instantly, he turned sharply to the left, fell over on left side, raised himself on all fours and began to crawl away so rapidly that he was caught with difficulty before reaching the edge of the table.

Albert’s fear was also generalized to a rabbit, a dog, a fur coat, and a Santa Claus mask.

Although this was an important observation of classical conditioning in humans, the study has since been widely criticized. Watson and Rayner apparently did nothing to decondition Albert’s fear (Hunt 2007). Today this neglect would certainly be judged an extreme violation of ethical standards (even if the study itself was judged ethical).

Operant Conditioning

There were those, including Pavlov and Watson, who believed that all human learning and behavior could be explained using the principles of classical conditioning. Certainly, these specific behaviorist principles shed light on important aspects of human and animal behavior. Edward Thorndike, working primarily with cats and chickens, demonstrated that additional principles were needed to explain behavior, however. By observing these animals learning how to find food in a maze (chickens), or escape from a puzzle box (cats), Thorndike was laying the groundwork for the second major part of behaviorist psychology, operant conditioning, which helped to explain how animals learn that a behavior has consequences (see sec 6.2).

Without a doubt, the most famous psychologist who championed operant conditioning was B. F. Skinner. He demonstrated repeatedly that if a consequence is pleasant (a reward), the behavior that preceded it becomes more likely in the future. Conversely, if a consequence is unpleasant (a punishment), the behavior that preceded it becomes less likely in the future. Working mostly with rats and pigeons, Skinner sought to show that all behavior was produced by rewards and punishments.

The Shortcomings of Behaviorism

Together, the concepts of classical and operant conditioning have played an important role in illuminating a lot of human behavior. Further, a number of applied areas have benefited greatly from behaviorism. For example, if fear can be conditioned (as it was in Baby Albert’s case), it can be counter-conditioned. Systematic desensitization, a successful psychological therapy for curing people of phobias, is a straightforward application of classical conditioning principles (see Module 30).

And yet, something was missing from the behaviorist account. Think about the following question: Why do you go to work (assuming you have a job, of course)? A behaviorist explanation is straightforward: you work because the behavior was reinforced (i.e., you worked and were given a reward, money, for doing so). But, think carefully; do you go to work because you were paid for working last week, or because you expect to be paid for working this week? Suppose your boss hands you your next paycheck and tells you that the company can no longer afford to pay you. Will you return to work tomorrow? Many people would not; they would look for a new job. Although the point seems unremarkable, it is profound. There appears not to be a direct link between a behavior and its consequence, as the behaviorists maintained. Rather, a critical mental event (the expectation of future consequences) intervenes.

Even in classical conditioning, some process like expectation was needed to explain a lot of behavior, even in non-human animals. Recall that classical conditioning is essentially learning that the Conditioned Stimulus predicts that the Unconditioned Stimulus is about to occur. For example, a dog learns that the stimulus of its owner getting the leash—the CS—predicts the stimulus of being taken for a walk—the UCS. Robert Rescorla (1969) demonstrated that it is more than simply co-occurrence of Conditioned Stimulus and Unconditioned Stimulus (i.e., the number of times both occur together) that determines whether classical conditioning will occur. Far more important is the likelihood that the Unconditioned Stimulus appears alone, without the Conditioned Stimulus (this is called the “contingency” between UCS and CS). If the UCS appears alone frequently, then the Conditioned Stimulus is not a good predictor, no matter how often the two appear together. Again, a dog will not develop a strong association between its owner getting the leash and being taken for a walk if it frequently gets to go for a walk even when the owner does not get the leash. The importance of contingency cannot be explained by a strict behaviorist account. Rescorla’s research showed that some Conditioned Stimuli (those with high contingency) were more informative than others, and thus easier to learn.
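
Rescorla's contingency idea is often summarized as a comparison of two probabilities: how likely the UCS is when the CS has occurred versus when it has not. The sketch below uses that common formalization (sometimes written as delta-P) with invented counts; it is an illustration of the idea, not Rescorla's actual procedure.

```python
# Contingency as a difference of conditional probabilities (a common formalization).
# Counts are hypothetical: how often the walk (UCS) follows the leash (CS) vs. occurs without it.

def contingency(ucs_given_cs, cs_trials, ucs_given_no_cs, no_cs_trials):
    """delta-P = P(UCS | CS) - P(UCS | no CS)."""
    return ucs_given_cs / cs_trials - ucs_given_no_cs / no_cs_trials

# High contingency: walks almost always follow the leash and rarely happen otherwise.
print(contingency(ucs_given_cs=19, cs_trials=20, ucs_given_no_cs=1, no_cs_trials=20))   # 0.90

# Low contingency: walks happen about as often without the leash, so the CS is uninformative,
# even though CS and UCS still co-occur 19 times.
print(contingency(ucs_given_cs=19, cs_trials=20, ucs_given_no_cs=18, no_cs_trials=20))  # 0.05
```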

The behaviorists also believed that any association could be learned, which turned out to be not quite true. Consider John Watson’s most famous quotation:

Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in and I’ll guarantee to take anyone at random and train him to become any type of specialist I might select—doctor, lawyer, artist, merchant-chief and, yes, even beggar-man and thief, regardless of his talents, penchants, tendencies, abilities, vocations, and race of his ancestors. (1924; quoted in R.I. Watson, 1979)

The behaviorists believed that the relationship between stimulus and response or between behavior and consequence was arbitrary. Any stimulus could become a Conditioned Stimulus for any response, and any behavior could be reinforced equally with the same consequence. But a very important experiment by John Garcia and Robert Koelling in 1966 demonstrated that, at least for classical conditioning, this is not so. Garcia and Koelling showed that animals were biologically predisposed to learn certain UCS-CS associations and not others. For example, if food is spiked with some poison that will make a rat sick, the rat will learn to associate the taste (the CS) with the poison (the UCS). The rat will develop a conditioned response to the taste of the food, and will learn to avoid it. A pigeon, on the other hand, will have great difficulty learning this association between taste and poison. If a visual stimulus, such as a light, is used as a CS, however, the pigeon will have no trouble at all learning to associate it with the poison UCS. This time, the rat will have difficulty learning. The associations that are easy for a species to learn are ones that are biologically adaptive. A bird must learn to associate the visual properties of a stimulus with its edibility because the bird needs to see its potential food, sometimes from a long distance away. A rat, in order to learn whether some potential food is edible, will take a small taste. If the substance does not make the rat sick, it must be food. Again, the fact that some associations are easy and others are difficult to learn cannot be explained by a strict behaviorist account.

Earlier challenges to behaviorism had been supplied by a very important study conducted by Edward Tolman and Charles Honzik in 1930. They demonstrated that learning can occur without reinforcement. One group of rats learned to run through a maze using positive reinforcement for each trial, a food reward at the end of the maze. A second group of rats was not reinforced; of course, they wandered aimlessly through the maze and did not learn to run through it rapidly to reach the goal. A third group was not reinforced for most of the trials. Then, suddenly, reinforcement was given for the last two trials. This third group learned to run through the maze nearly as well as the first group, even though they had many fewer reinforced trials. Clearly, these rats had learned something about the maze while wandering during the early non-reinforced trials.

A final shortcoming of the behaviorist view is that it does not acknowledge that learning sometimes occurs without causing an observable change in behavior. Think about learning in school. What if you study a section of a textbook and remember what it says, but are never given a test question about it? Does that mean you did not learn the content of the textbook? Rather than saying that behaviorist explanations shed light on learning, it is probably more correct to say that they help explain performance (Medin, Ross, & Markman, 2001). Although an explanation of performance may be a useful contribution to psychology, it is far less comprehensive than the original goals of behaviorism.

While the first wave of research that revealed the shortcomings of behaviorism was being produced in the 1930s and 1940s, other seeds for a revolution were being planted and beginning to grow. As you probably know, the most significant historical event of the 1940s was World War II. The war had a profound impact on all areas of life; science was no exception. Prominent scientists and mathematicians throughout the United States and Europe were recruited to the war effort, lending their expertise to develop computers, break enemy codes, design aircraft controls, design weapons guidance systems, etc. In the course of this applied work, researchers would observe certain types of behaviors (for example, reaction times or errors) and make inferences about the mental processes underlying those behaviors. At the same time, doctors and physiologists were learning much about the brain’s functioning from examining and rehabilitating soldiers who had suffered brain injuries in the war. These two fields of inquiry—computer science and neuroscience—led to profound observations about the nature of knowledge, information, and human thought (Gardner, 1985).

Many of these observations began to be synthesized in the late 1940s. At a scientific meeting in 1948, John von Neumann, a Princeton mathematician, gave a presentation in which he noted some striking similarities between computers and the human nervous system. Von Neumann and others (e.g., psychologist Karl Lashley, mathematician Norbert Wiener; see Gardner 1985) began to push this “the mind is like a computer” idea. The cognitive revolution had begun.

Now that psychologists had a new way of looking inside the head, many observers believed that a psychology of the mind could be scientific after all. Researchers such as Noam Chomsky, George Miller, Herbert Simon, and Allen Newell began the task of creating cognitive psychology. The view inside the head was not direct, of course, but as more and more talented researchers moved into this new psychological domain, behaviorism’s influence became smaller and smaller.

Strides in the Psychological Study of Cognition

If you look at a book that reviews the cognitive aspects of psychology, you will probably find chapters on Perception (and Sensation), Learning, Memory, Thinking, Language, and Intelligence, in more or less that order. Why is this? Does this organization make sense? It certainly does to psychologists. One way to understand the organization is to view the topics as going from “basic” to “higher” level. Basic cognitive processes—sensation and perception and, in some ways, learning (although it is more often considered a topic in behaviorist psychology)—are required to “get the outside world into the head,” that is, to create internal (mental) representations of the external world. Higher-level processes use these representations of the world to construct more complex “mental events.” For example, you might think of some kinds of memory as a stored grouping of perceptions. Then, you can consider reasoning (a sub-topic within the topic of thinking) as computing with or manipulating sets of facts and episodes (i.e., memories).

The cognitive revolution has propelled a vigorously scientific process of discovery that illuminates learning, memory, thinking, language, intelligence, problem-solving, as well as other cognitive processes. The results of that scientific process have led to psychologists’ current beliefs about cognition and form the basis for many of today’s most exciting avenues of psychological inquiry.

Memory – A Cognition “Case Study”

Memory may be the quintessential topic of the cognitive revolution. Nearly absent during the behaviorism days, memory research has been very well represented since then. Around the same time that Wundt was examining the components of sensation, Hermann Ebbinghaus began the systematic study of memory by constructing over 2000 meaningless letter strings and memorizing them under different conditions. Ebbinghaus demonstrated that repetition of material led to better memory (1913; quoted in R.I. Watson 1979). He also examined the effects on memory of such factors as the length of a list, the number of repetitions, and time (Watson, 1979). For example, he discovered that 24 hours after learning a list, only one-third of the items were still remembered. His findings and methods were extremely influential, and they are apparently the basis of several commonly held beliefs about memory—principally that the best way to memorize something is to repeat it.

Importantly, Ebbinghaus believed that it was necessary to remove all meaning from the to-be-remembered material in order to examine pure memory, uncontaminated by prior associations we might have with the material. The problem with his approach, however, is that memory appears to almost never work that way (or if it does, it certainly does not work very well). For example, Ebbinghaus used “nonsense syllables” such as bef, rak, and fim in his research. Well, when we look at these syllables, we automatically think beef, rack, and film. These syllables are not so meaningless after all; they have automatic associations for us. If asked to remember them, we would make use of these associations. That is the way memory typically works.

There was little memory research during the behaviorist era. One striking exception is the work by Frederic Bartlett in the 1930s. Bartlett found that meaning is central to memory. New material to be remembered is incorporated into, or even changed to fit, a person’s existing knowledge. Bartlett read a folk tale from an unfamiliar culture to his research participants and asked them to recall the story. The original tale is very odd to someone from a western culture, and it is difficult to understand. What Bartlett found is that over time, people’s memory for the tale lost many of its original non-western idiosyncrasies and began to resemble more typical western stories. In short, Bartlett’s participants were changing their memory of the tale to fit their particular view of the world. It turns out that Frederic Bartlett was about forty years ahead of his time. It was not until the 1970s that researchers began to think about the fluid and constructive nature of memory in earnest.

Instead, the early memory researchers focused on trying to figure out the different memory systems and describing their properties. The first “Cognitive Revolutionary” memory research took the form of an essay written by George Miller in 1956. His essay, “The Magical Number Seven, Plus or Minus Two,” has become one of the most famous papers in the history of psychology. The essay described Miller’s observation that the number 7 seemed to have a special significance for human cognitive abilities. For instance, the number of pieces of information that a person can hold in memory briefly (in short-term memory, which researchers would later reconceptualize as working memory) falls in the range of 5 to 9 (7 plus or minus 2) for nearly all people. Miller noted, however, that short-term memory capacity could be increased dramatically by using the process of chunking, grouping information together into larger bundles of meaningful information. For example, if you think of a number series as a set of three-digit numbers (rather than isolated digits), you would probably be able to remember 7 three-digit numbers, or 21 total digits.
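
Chunking is easy to demonstrate. The short sketch below regroups a 21-digit string into three-digit chunks, turning 21 items into 7, within the 7-plus-or-minus-2 range Miller described; the digit string itself is arbitrary.

```python
# Regroup 21 isolated digits into 7 three-digit chunks.
digits = "493175820364915728604"          # arbitrary 21-digit string
chunks = [digits[i:i + 3] for i in range(0, len(digits), 3)]

print(len(digits), "separate digits")      # 21 -- well beyond short-term memory capacity
print(len(chunks), "chunks:", chunks)      # 7  -- within the 7 +/- 2 range
```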

Research in the 1960s was dominated by the information processing approach. As in computer scientists’ flow-charting, in which systems and processes are drawn as boxes and arrows, the information-processing approach depicts the way information flows through the system. The most influential of the information processing descriptions of memory was developed by Richard Atkinson and Richard Shiffrin (1968). Atkinson and Shiffrin described memory as consisting of three storage systems.

A diagram with three text boxes. The first box is labelled sensory memory and an arrow connects it to the second box labelled short-term memory. An arrow labelled encoding extends from this box to the third box labelled long-term memory. A second arrow labelled retrieval extends from long-term memory back to short-term memory.

Sensory memory holds information in storage for a very brief period (around one second), just long enough for someone to pay attention to it so it can be passed on to the next system. Short-term memory is a limited capacity (about 7 items, or chunks), short-duration system, a temporary storage system for information that is to be transferred into (encoded into) long-term memory. Long-term memory is essentially permanent, essentially unlimited storage. Information is retrieved from Long-term memory back into Short-term memory.
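
The three-store description can be sketched as a tiny simulation: items enter a brief sensory store, only attended items move into a capacity-limited short-term store, and rehearsed items are encoded into a long-term store from which they can later be retrieved. The sketch below is a loose illustration of the flow in the diagram, not a formal model; the capacity constant and the item names are assumptions made for the example.

```python
# A loose illustration of the three-store flow (not a formal model of Atkinson & Shiffrin).
SHORT_TERM_CAPACITY = 7                    # roughly seven chunks

sensory_memory = ["phone number", "doorbell", "smell of coffee"]   # decays within about a second
short_term_memory = []
long_term_memory = set()

# Attention: only attended items survive the sensory store.
attended = [item for item in sensory_memory if item == "phone number"]
short_term_memory.extend(attended[:SHORT_TERM_CAPACITY])

# Encoding: rehearsed items are transferred from short-term into long-term memory.
for item in short_term_memory:
    long_term_memory.add(item)

# Retrieval: information comes back from long-term memory into short-term memory for use.
retrieved = "phone number" if "phone number" in long_term_memory else None
print(short_term_memory, long_term_memory, retrieved)
```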

Atkinson and Shiffrin’s description of memory is very close to the one used in section 5.1. The difference is that short-term memory is replaced in the module with working memory. The two concepts are not exactly interchangeable. The theory of working memory emphasizes that information is not simply held, but rather is used during short-term storage (Baddeley and Hitch 1974). So, for example, you might simply try to hold information, such as a telephone number, for a short time until you can get to a phone. Or you might have information in mind because you are using it to solve a problem. The current view of working memory also distinguishes between verbal and visual (or visuospatial) memory, which seems to capture an important distinction (Baddeley, 1996; Jonides et al., 1996; Smith et al., 1996). Whichever model you use (the original short-term memory, or updated working memory), it is clear that this temporary storage system is both limited in capacity and very temporary. Lloyd Peterson and Margaret Peterson (1959) provided a demonstration of just how temporary short-term memory can be. Their participants were given strings of 3 letters (for example, XPF) and prevented from rehearsing (by forcing them to count backwards by 3’s). Participants sometimes forgot the letters in as little as 3 seconds. By 18 seconds, very few people could remember any of the letter strings.

Researchers in the 1970s began to move away from the boxes and arrows of the information processing approach. They began to think again about the way memory functions in life and thus were picking up the long-neglected agenda of Frederic Bartlett.

Craik and Tulving’s levels of processing research and Bransford and Johnson’s “Doing the Laundry” research were important, in part, because they focused on memory not as a static, fixed storage system, but as a dynamic, fluid process. According to the levels of processing view, for example, information might last in memory for a lifetime, not because it was fixed in a long-term memory system, but because it was encoded, or processed, very elaborately. This is important, both to psychologists interested in understanding the nature of memory and to people who might be interested in improving their own memories.

Other researchers picked up on the role of inferences in determining understanding and memory. For example, Rebecca Sulin and James Dooling (1974) demonstrated how such inferences can actually become part of the memory for the story itself (similar to the way Bartlett’s subjects back in the 1930s changed their memory of the folk tale to be consistent with their views of the world). Half of the participants in their experiment were asked to remember the following paragraph, entitled “Carol Harris’s Need for Professional Help”:

Carol Harris was a problem child from birth. She was wild, stubborn, and violent. By the time Carol turned eight, she was still unmanageable. Her parents were very concerned about her mental health. There was no good institution for her problem in her state. Her parents finally decided to take some action. They hired a private teacher for Carol.

The other half of the participants read the same paragraph, but the name Carol Harris was replaced with Helen Keller. One week later, participants were given a recognition test. One of the test sentences was “She was deaf, dumb, and blind.” Only 5% of the “Carol Harris” participants mistakenly thought this sentence was in the original paragraph, but 50% of the “Helen Keller” participants made this error. Thus, participants made inferences about the story based on their knowledge of Helen Keller; the inferences later became part of the memory of the original story for many participants.

Researchers have also continued to make strides toward describing and distinguishing between different memory systems, one of the early goals of memory research. One distinction that you already saw is between declarative memory (facts and episodes) and procedural memory (skills and procedures). A related distinction is between explicit memory and implicit memory. Explicit memory is memory for which you have an intentional or conscious recall. It pertains to most of declarative memory. Explicit memory is what you are using when you say, “I remember…” Implicit memory refers to memory in which conscious recall is not involved, such as remembering how to ride a bicycle. It includes procedural memory, to be sure, but implicit memory can also be demonstrated using declarative memory. Suppose we ask you to memorize a paragraph and it takes you 30 minutes to do it. One year later, we show you the paragraph again and ask you if you still remember it. Not only do you not remember the paragraph, but you also do not even remember being asked to memorize it one year earlier (in other words, there is no conscious recall or recognition). But if you were to memorize the paragraph again, it would probably take you less time than it did originally, perhaps 20 minutes. This 10-minute “savings in relearning” (Nelson, Fehling, & Moore-Glascock 1979) indicates that you did, at some level, remember the paragraph. This memory without conscious awareness is implicit memory.
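
The “savings in relearning” idea mentioned above is usually expressed as a percentage: how much less time (or how many fewer trials) relearning takes compared with the original learning. The short calculation below uses the 30-minute and 20-minute figures from the example; expressing savings this way is a standard Ebbinghaus-style convention rather than a detail reported in the cited study.

```python
# Savings score: the reduction in learning time, as a proportion of the original time.
original_minutes = 30
relearning_minutes = 20

savings = (original_minutes - relearning_minutes) / original_minutes
print(f"savings in relearning: {savings:.0%}")   # about 33%
```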

There has been a debate among researchers about whether explicit and implicit memory are separate kinds of memory. Many experiments and case studies have been conducted that demonstrate differences (e.g., Jacoby & Dallas 1981; Rajaram & Roediger, 1993). For example, researchers have shown that some patients who suffer from brain injury-induced amnesia do not suffer deficits on implicit memory tasks, despite profound deficits on explicit memory tasks using the same information (e.g., Cohen & Squire 1980; Knowlton et al., 1994). Critics of this research, however, have suggested that the observed differences may reflect a bias in the way research participants are responding or some other phenomenon (Ratcliff & McKoon 1997; Roediger & McDermott 1993).

More recently, cognitive neuroscience, which combines traditional cognitive psychological research methodology with advanced brain imaging techniques, has begun to shed light on the controversy. It appears that different brain areas are involved in implicit and explicit memory. Specifically, a brain structure known as the hippocampus (along with related structures) is central to the processing of explicit memories (i.e., memories in which conscious awareness is present), whereas it is relatively uninvolved in the processing of implicit memories (i.e., changes in behavior that are not accompanied by conscious awareness) (Schacter, 1998; Clark & Squire 1998) (see sec 9.2). Thus, it appears that cognitive neuroscience has produced good evidence that implicit and explicit memory might be different kinds of memory. Such research promises to clarify a number of other aspects of memory in coming years.

Cognitive Psychology in Perspective

Psychology began as a systematic investigation of the mind, then became the science of behavior, and is currently the science of behavior and mental processes. These changing conceptions illustrate changes in the centrality of cognitive processes in the field of psychology. Cognition has been, in turn, the near-complete focus of the field, completely banned from the field, and now, integrated into the field.

Cognitive psychology is a set of inquiries and findings that may seem rather abstract. Basic cognitive processes, such as sensation and perception, function to get the external world into the head. Intermediate level processes, such as memory, create mental representations of the basic perceptions. Higher-level cognitive processes, such as reasoning and problem-solving, use the outputs of these basic and intermediate processes. But knowledge of cognitive psychology can benefit you in many ways, as Modules 5 through 8 suggest. If you do follow the prescriptions presented in the modules, you will have greater success in school, to be sure. With more solid thinking skills, however, you will also be equipped to succeed as a learner throughout life, which means that you will succeed throughout life, period.

Please realize that both this applied focus and the more scientific focus are important to you. First, of course, in order to succeed in later psychology classes, many of you will need to know about the traditional, academic approach. More importantly, perhaps, careful attention to the details of psychological principles will help you to apply them to your life more effectively. For example, it is a deep understanding of how memory encoding works, even at the neuron level, that allows you to see why the principles for remembering that are suggested in Module 5 are so effective. Understanding why something works is much more compelling than simply realizing that something works. Similarly, when you understand what deductive reasoning is, why it is important, and why it is so difficult to do correctly, that understanding can help motivate you to learn and practice these skills.

It is reasonable to expect that several trends related to cognitive psychology that have begun over the past decades will continue. First, the integration of a cognitive perspective into other psychology sub-fields will likely continue. As you will see throughout this book, insights about cognition have added to our understanding of psychological disorders, social problems such as aggression and prejudice, and of course education. Second, the merging of neuroscience and cognitive psychology—cognitive neuroscience—will continue. As brain imaging techniques become more accessible, more researchers will turn to them to look for brain activity and the structural underpinnings of cognitive processes. Finally, cognitive psychology will continue its strong interdisciplinary orientation and Cognitive Science will flourish. Researchers and thinkers from fields such as psychology, computer science, philosophy, linguistics, neuroscience, and anthropology all had a hand in the development of cognitive science. From the beginning, then, cognitive psychologists have been talking to and working with researchers in several other disciplines. This trend, too, is likely to continue.

But wait, do researchers still care about Classical and Operant Conditioning?

Very careful readers might have noticed some important details in the section above where we began to address the limitations of the behaviorist view. First, although the research revealed limitations of these approaches, Rescorla, and Garcia and Koelling were conducting their research on the topics of classical and operant conditioning (in other words, they were not working against classical and operant conditioning, they were working for the topics). And you might have noticed the dates: 1969 for Rescorla and 1966 for Garcia and Koelling, well after the 1956 Cognitive Revolution that we told you about. It is essential that you realize that the revolution was not complete. Cognition did not force the older topics out of the research. Indeed, researchers are still publishing studies related to both classical and operant conditioning in 2020, over 100 years after the concepts were first discovered. “What else is there to discover?” you may wonder. “Surely, researchers must have figured out everything about these concepts in 100 to 120 years, right?”

It is probably fair to say that we will never learn everything there is to learn about a topic. Instead, researchers can continue to make discoveries by going in at least two directions. First, they can examine smaller and smaller parts, or details, in the processes. For example, Honey, Dwyer, and Iliescu (2020) proposed a classical conditioning model that predicted that the association between the US and the CS is different from the reverse association between CS and US. By paying attention to this asymmetry, they were able to account for findings that had eluded prior models of classical conditioning. The second key direction that can extend the useful life of a topic indefinitely is to develop applications of a topic. For example, Frost et al. (2020) recently described the operant conditioning principles that underlie a set of behavioral interventions that have been successfully used to help children with autism spectrum disorders learn new skills.

Unit 3: Understanding Human Nature

Every species has characteristics that distinguish it from every other species. These characteristics include physical traits (including brains) and behaviors, as well as the blueprints for producing these differences, which are genes. In psychology, the science of behavior and mental processes, we are interested in the genes, brains, and behaviors that make up our unique human nature. The subfield of psychology that is most directly concerned with human nature is biopsychology.

Many of the most exciting developments in psychology over the past couple of decades are related to the discovery of the biological underpinnings of human behavior and mental processes. Throughout the 1990s researchers made countless discoveries about the brain; more recently, many important discoveries have been made in genetics. To understand modern psychology, then, you must have a firm grasp of biopsychology (and the biological perspective, the application of biological thinking to any individual topic in psychology).

A 2019 survey published by United Health Services reported that 97% of US respondents agreed that mental illnesses are medical illnesses that can be treated effectively like chronic physical disorders (UHS, 2019). This is a much higher percentage than even 20 years ago, when only 55% of respondents agreed that “depression is a disease and not a ‘state of mind that a person can snap out of.’” Now, of course, these are different questions, and there are likely other differences in the survey methods that make it hard to compare these numbers. Still, it is hard to argue against the conclusion that many more people today believe that mental illnesses are medical illnesses. Other surveys paint a somewhat different picture, however. The American Psychological Association published a survey reporting that 55% of respondents agreed that mental illnesses are different from serious physical illnesses. So, they might be medical illnesses, but they are apparently not the same type as illnesses like cancer and heart disease, according to many members of the public (APA, 2019).

In order to have any hope of formulating an informed opinion on this question, you would need at least two things. First is some knowledge of the biological workings of “human behavior and mental processes.” That, in a nutshell, is the major goal of this Unit. The second thing you would need you will find near the end of the book: a description of the biological factors involved in depression and other mental illnesses.

But first things first, the basics. We are not saying that you need to be an expert in biology, although that would certainly work. It is simply the case, however, that if you want to understand psychological phenomena and to have informed opinions about issues such as mental illness, your best defense is to learn some details about the biological underpinnings of psychology. What does it mean when researchers report that depression is an imbalance of brain chemicals? If personality is genetic, does that mean that it can never change? If your eyes convert light energy directly into brain activity, how can you be fooled by visual illusions? You can only answer these questions, and many others like them, with a solid understanding of biopsychology.

Many students are surprised to discover the importance of biology to psychology. In fact, psychology majors who intend to go on to graduate school should strongly consider taking the biology sequence that is intended for biology majors. It really is that important. It is not particularly difficult to convince students that biology is important for psychology. One only needs to look at a representative list of recent research articles in psychology to see the profound influence of biology. That does not necessarily make biopsychology easy, though, as many students struggle with it. There are at least two reasons why many students have difficulty understanding the biological perspective. Although the reasons may seem unrelated at first, they are not.

First, there is an extraordinary quantity of biological information. For example, in a typical biopsychology chapter of a General Psychology textbook, a student may be asked to memorize 15 or more divisions or parts of the brain, each with a specific function or two. Then, they have to learn the complex and confusing process by which the individual cells of the nervous system, neurons, generate and transmit electrochemical signals. Suffice it to say that many students find this an overwhelming task.

Second is the matter of what Francis Crick called the “astonishing hypothesis.” As many psychology instructors like to say, all human behavior and mental processes result from the tiny process of electrical particle exchange on the surface of a nerve cell and the chemical transfer of the signal to neighboring nerve cells. To put it mildly, that idea is just weird and very difficult to accept. As Crick, winner of the Nobel Prize with James Watson for their discovery of the structure of DNA, more eloquently put it:

“Your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules. . . This hypothesis is so alien to the ideas of most people alive today that it can truly be called astonishing.” (Crick, 1994)

Because people have difficulty with the very premise that everything we do and are comes from “the behavior of a vast assembly of nerve cells,” the whole enterprise seems unorganized and disconnected. As we explained in Module 5, material for which we do not see the organization is very difficult to remember, precisely because it is difficult to understand. We will address the “astonishing hypothesis” and offer a way to help you accept it in section 10.1 when we describe the behavior of nerve cells.

Unit 3 contains five modules:

Module 10, How Biology and Psychology Are Linked, introduces you to some of the fundamental principles, issues, and controversies associated with biopsychology.

Module 11, Brain and Behavior, leads you through the organization and parts of the nervous system, especially the brain and its most important individual cells, neurons.

Module 12, How the Outside World Gets into the Brain: Sensation, describes the important processes involved in translating stimulus energy from the world into neural signals

Module 13, How the Brain Interprets Sensations: Perception, picks up where Module 12 left off, providing details about how the brain organizes and recognizes those neural signals so that we can make sense out of the input.

Module 14, “Biopsychology: Bringing Human Nature into Focus,” places the subfield in a historical context and wraps up with some current issues and controversies.

Module 10: Biology and Psychology

The discovery that at least some mental disorders have biological causes led to what has been called the era of biological psychiatry (Seligman, 1993). The key scientific development was Richard von Krafft-Ebing’s proof that a common form of madness, known through the centuries as general paresis, was actually caused by syphilis, a physical illness. Krafft-Ebing was able to make his discovery despite the fact that researchers had not yet developed techniques to see the germ that causes syphilis. Instead, he reasoned that if general paresis really was a consequence of syphilis, then paresis patients must already have had syphilis, and because syphilis could not be caught twice, they would be immune to it. When he exposed paresis patients to syphilis and none of them contracted the disease, Krafft-Ebing had his proof (Seligman, 1993).

According to Martin Seligman (1993), adherents to biological psychiatry believe that mental illness is actually a physical illness, which can only be cured by drugs. Further, they believe that personality, being genetically determined, is fixed. The idea that we cannot change that which is biological about ourselves, at least without pharmacological intervention, is a very sweeping and, to us, pessimistic conclusion.

Fortunately, that conclusion is far too simple to be correct, and it is rejected by most researchers in biopsychology (as opposed to adherents to what Seligman called biological psychiatry). As Modules 11, 29, and 30 reveal, sometimes a drug treatment is an important, even necessary, component of a cure for a psychological disorder. It is rarely a sufficient treatment by itself, however, and many disorders can be treated with no drugs at all.

This module gives you the background to judge why the broad conclusions of biological psychiatry are oversimplified. For example, it is true that a great deal of our behavior and mental processes have genetic causes. That does not mean that genes are the only causes, however, and it does not mean that they are unchangeable.

The module is divided into two sections. Section 10.1 describes the basics about genes and heredity. It is essential background information if you want to be able to understand many claims about genetics. The section also introduces you to behavior genetics, the subfield that allows you to estimate the degree to which a given trait is determined by genes, and the important developments in epigenetics. Section 10.2 introduces you to evolutionary psychology, a relatively new, important, and controversial perspective in psychology that tries to place what we know about genetics and psychology into the context of Charles Darwin’s theory of evolution by selection.

  • 10.1 Genes and Behavior
  • 10.2 Evolutionary Psychology

By reading and studying Module 10, you should be able to remember and describe:

  • Basic concepts of heredity: genes, DNA, chromosomes (10.1)
  • Dominant and recessive genes (10.1)
  • Genotype and phenotype (10.1)
  • Interaction between genes and environment (10.1)
  • Basic ideas and research methods of behavior genetics: heritability, twin studies, adoption studies (10.1)
  • Theory of natural selection (10.2)
  • Evolutionary psychology: natural selection, sexual selection (10.2)

By reading and thinking about how the concepts in Module 10 apply to real life, you should be able to:

  • Make a reasonable prediction of the relative heritabilities of some common traits (10.2)

By reading and thinking about Module 10, participating in classroom activities, and completing out-of-class assignments, you should be able to:

  • Compare your prior beliefs about nature and nurture to the textbook material on genetics, behavior genetics, and evolutionary psychology (10.1 and 10.2)
  • Support your position in favor of or against the basic claims of evolutionary psychology (10.2)
  • Generate a possible evolutionary explanation for specific human traits and behaviors (10.2)

10.1. Genes and Behavior

  • In what ways are you similar to your closest blood relatives? In what ways are you dissimilar?
  • Do you believe that similarities and differences between people’s personalities and other psychological tendencies result more from genetic factors or environmental factors?
  • Jot down anything you can remember about genetics from a biology class you have taken.

In the introduction to this module, we raised the possibility that there are things about yourself that you cannot change. Although the idea that psychological tendencies are fixed is wrong, there definitely are some unchangeables. You cannot change your parents, and you cannot change the genes, the coded information you inherited from them. There is a genetic contribution to just about any psychological phenomenon, trait, or behavioral tendency that you can think of. As we have said, however, that does not mean that you are automatically doomed to suffer (or blessed to enjoy) the consequences of your genes.

On the other hand, you simply cannot ignore the role of heredity in psychology. In addition to helping you understand many important psychological phenomena, knowledge of the principles of heredity will certainly help you to understand yourself. A solid working knowledge of these concepts will help you make sense out of the large (and increasing) amount of information about genetics (and psychology) in the popular media.

You should be aware of two important roles for genes. As you may already know, genes are the basic unit of heredity , the biological transmission of traits from parents to offspring. They do more than just determine the color of your eyes or how tall you will be, though. They also determine many psychological tendencies, which is why we care about them in psychology. What you may not have realized is that genes continue to work throughout our lives. Although many people think that genes finish their work as soon as we are born, they actually are responsible for the building of all the cells in our bodies throughout our lives. You have perhaps heard that proteins are the building blocks of our body. Did you ever wonder where the proteins come from? They are synthesized, or built from their component ingredients, by cells that have been programmed by genes. Although in psychology we concern ourselves principally with genes’ role in heredity, it is important to keep this other function in mind, too.

How genes determine traits.

In 2003, a team of researchers completed a map of the human genome,  the complete set of all human genes, a project that began in 1990. Many casual observers believed that this impressive scientific feat would explain many human behaviors and mental processes. For example, they may have believed that somewhere in the genome we would find a gene for depression, another for happiness, another for aggression, and so on. The truth, however, is nowhere near that simple. At best, in the vast majority of cases, a particular gene might mean a predisposition, an increased likelihood that some psychological trait would be present. Further complicating matters is the fact that some genes actually have functions other than determining some trait. These genes act to turn on or off other genes, which in turn might produce the suspected predisposition only when other genes are also present and active.

As you might guess, the possible combinations are essentially limitless. Furthermore, a given behavior or trait might very well appear in many different ways. For example, consider Alzheimer’s disease, a very serious disorder that leads to profound memory loss. A few cases of Alzheimer’s strike as early as 40 years old. Three separate genes have been linked with many of these early-onset Alzheimer’s cases (Goate et al., 1991; Sherrington et al., 1995; Levy et al., 1995). Over 99% of Alzheimer’s cases begin after age 60, however. Two different genes have been linked with late-onset Alzheimer’s, and together they are related to only about 40% of the cases (Bertram et al., 2000; Stritmatter & Roses, 1995). As you might guess from this short description, Alzheimer’s disease is quite complex, and researchers are still trying to figure out what causes it; you will learn more about their progress in Module 16.

The fact that a given condition may be associated with several possible genes may help us to understand many puzzles. For example, we know that antidepressants are effective for only about half of the people who take them, but we do not know why. Perhaps there are different types of depression, with different physiological mechanisms because they result from the actions and interactions of different genes.

In order to understand genes’ role in heredity, it is important to know some details about how they are organized. Although genes are often considered the basic unit of heredity, they can themselves be subdivided into their components. Genes are made up of molecules of deoxyribonucleic acid, commonly called DNA. Nucleic acids, such as DNA, are the only kind of molecules in nature that can direct their own replication (Campbell & Reece, 2002). DNA contains all of our hereditary information, which can be reproduced and passed down to our offspring.

Genes themselves are organized into chromosomes. A chromosome is basically a doubled string of genes. Every species has a specific number of chromosomes; humans have 23 pairs. When sperm and egg meet during fertilization, each contributes one member of each pair: 22 ordinary chromosomes plus one sex chromosome (the 23rd pair is a little different, as we will describe in a moment). So you inherited half of your DNA from your mother and half from your father. Across your chromosomes, you have strings of paired genes, roughly 20,000 to 25,000 genes in total according to current estimates. Every cell in the body contains a complete copy of your 23 pairs of chromosomes; thus, every cell has all of your particular genetic material.


One pair of chromosomes is special, the sex chromosomes.  They determine your sex, along with some additional traits. There are two kinds of sex chromosomes, X and Y. The X chromosome is much larger than the Y; in addition to information about your sex, it contains genes for other characteristics, such as colorblindness, that are not on the Y chromosome. The mother always contributes one X. The father may contribute an X or a Y. If the father contributes an X, the baby will be a girl; if he contributes a Y, the baby will be a boy.

heredity: the biological transmission of traits from parents to offspring

genome: the complete set of all genes in a species

genes: the basic unit of material that gets transmitted from parents to offspring

DNA: deoxyribonucleic acid; these are the molecules that make up genes

chromosomes: a doubled string of genes; each species has a specific number of chromosomes

sex chromosomes : the chromosomes that determine your sex; there are two types, X and Y

This description of genetic heredity is rather simplified. The genes that go into each egg and each sperm may get scrambled a little bit through a process called crossing over, so the half-chromosomes you inherit from your parents are not identical to the ones they themselves have. In other words, you are not an exact genetic copy of half of each parent. Rather, you inherit large sections of DNA, along with some modified sections from the crossing over process.

In the simplest cases of trait transmission, a single pair of genes, one inherited from each parent, determines a trait. For example, one version of the gene for eye color causes brown eyes, and another causes blue eyes. If you inherit the brown version from both mother and father, you will have brown eyes. Likewise, if you inherit the blue version from both, you will have blue eyes.

What if you inherit a brown version from one parent and a blue version from the other? Usually, one version dominates the other; it is called the dominant gene , and the other is called the recessive gene . In the case of eye color, brown is dominant. This means that if you have a brown version from one parent and a blue from the other, you will have brown eyes.

Because there are two ways that you can have brown eyes (brown-blue genes or brown-brown genes), biologists must distinguish between what they refer to as the genotype, the particular combination of genes, brown-blue or brown-brown in this case, and the phenotype , the physical trait exhibited, brown eyes. The distinction between genotype and phenotype helps to explain why some children with brown-eyed parents have blue eyes. If both parents have the brown-blue genotype, any of their children could end up with a genotype for eye color consisting of one blue-eyed gene from the mother and one blue-eyed gene from the father.
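
The genotype-to-phenotype arithmetic in this simplified one-gene account can be sketched in a few lines of Python. This is our own illustration of the example above, not a model from the text; it simply enumerates the four equally likely gene combinations that two brown-blue parents could pass on:

```python
from itertools import product

# Simplified one-gene model from the text: B = brown version (dominant), b = blue version (recessive).
mother = ("B", "b")   # brown-blue genotype
father = ("B", "b")   # brown-blue genotype

# Each parent passes on one of their two gene versions, so there are four equally likely combinations.
offspring_genotypes = list(product(mother, father))
blue_eyed = [g for g in offspring_genotypes if g == ("b", "b")]  # blue phenotype requires two blue versions

print(offspring_genotypes)                        # [('B', 'B'), ('B', 'b'), ('b', 'B'), ('b', 'b')]
print(len(blue_eyed) / len(offspring_genotypes))  # 0.25 -> one chance in four of a blue-eyed child
```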

Now, keep in mind that in this simplified example, eye color is treated as if it were controlled by a single pair of genes (in reality, even eye color involves several genes). Psychological tendencies, such as shyness or irritability, are typically far more complex than this. There are many traits controlled by more than two versions of a gene or by two genes or more, and no interesting psychological tendencies have been traced to a single gene. Even in these more complicated cases, however, the relationship between dominant and recessive genes usually holds.

dominant gene: the gene version that codes the trait that the offspring will inherit when the parents contribute different versions

recessive gene: the gene version that codes the trait that the offspring will not inherit when the parents contribute different versions

genotype: the genetic coding that underlies a specific observed trait

phenotype: an observed trait, which might result from different specific gene version combinations

It’s not Nature versus Nurture, It’s Nature and Nurture

You may recall that in Module 4, we told you about one of the key historical philosophical debates that made it into the field of scientific psychology, namely, nature versus nurture. Are we a product of our genes (nature) or our experiences, upbringing, and environment (nurture)? You probably realized, even without us telling you, that this really is a false dichotomy, a kind of oversimplification (see Module 1). It is not really one or the other. It is a combination of both. That is a pretty unsatisfying answer, however, kind of like just splitting the difference. It is indeed a bit more correct to note that personality, intelligence, mental illness, etc. are a result of genes and environment. But only a bit more correct. Seriously, now that you know this, do you really feel as if you understand the relative roles of genes and environment in psychology? We thought not (we assume you said, “No.”). So, let’s talk about three key ideas that really do help us to understand what we mean by “it’s nature and nurture.”

Behavior Genetics

There is a subfield of psychology that tries to sort out the nature-nurture puzzle by estimating the contribution of genes and environment for a given trait; it is called behavior genetics . Behavior geneticists come up with numerical estimates of the relative contributions of nature and nurture. For example, they have determined that the genetic contribution to intelligence is in the 50% – 75% range. These percentages are known as the heritability of some trait, which is defined as the percentage of trait variation in a group that is accounted for by genetic variation.

Notice in this definition that heritability is a conclusion about a group, not an individual. So, it does not mean that 50% – 75% of your intelligence comes from your genes, the rest from your environment. Rather, it means that if you give an intelligence test to a group of people, 50% – 75% of the differences in the scores can be attributed to differences in the group members’ genes. The main reason this is important is because the group you are examining can change; when it does, so too can your estimate of heritability. For example, suppose you administered your intelligence test to a group of children at a single elementary school in a wealthy suburb of Chicago. It is easy to imagine that the children at this school might very well have similar environments, most of them coming from stable two-parent homes, with similar economic backgrounds and educational experiences in and out of school. In this case, precisely because the children’s environments are so similar, a great deal of variation in intelligence test scores would have to be attributed to genetic variation (simply because there is not enough environmental variation). On the other hand, suppose your group of children included students from a privileged background and students from a very poor neighborhood in the city of Chicago. The differences in environments are enormous. The genetic contribution to differences in this group’s intelligence test scores would be overwhelmed by the differences between the environments. In the first, suburb-only case, estimated heritability of intelligence would be high, closer to 75%; in the second case, it would be low, closer to 50% (or lower). Thus, the second important observation about heritability is that it is not a fixed number; it depends on the actual differences in the environment that are present for a group. This also means that when you hear about stable estimates of heritability, the numbers are averages based on many individual studies involving many thousands of people.
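
The dependence of heritability estimates on how much environmental variation exists in the group can be illustrated with a toy simulation. This is our own sketch, not a method described in the text; the "genetic" and "environmental" score components are invented numbers, but the pattern matches the suburban-school versus mixed-group example above:

```python
import random

def heritability_estimate(n, env_spread, gene_spread=15):
    """Toy variance decomposition: simulate test scores as a genetic part plus an
    environmental part and return the share of score variance due to the genetic part."""
    genes = [random.gauss(0, gene_spread) for _ in range(n)]
    envs = [random.gauss(0, env_spread) for _ in range(n)]
    scores = [g + e for g, e in zip(genes, envs)]

    def variance(xs):
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / len(xs)

    return variance(genes) / variance(scores)

random.seed(1)
print(round(heritability_estimate(10000, env_spread=8), 2))   # similar environments -> high heritability estimate
print(round(heritability_estimate(10000, env_spread=25), 2))  # widely varying environments -> low heritability estimate
```

The genetic differences are the same size in both runs; only the amount of environmental variation changes, and the heritability estimate changes with it.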

The two most important methods that behavior geneticists use for estimating heritability are twin studies and adoption studies. In twin studies, identical twins, fraternal twins, and non-twin siblings are compared to each other. Identical twins are natural-born clones; they are identical genetic copies of each other. Fraternal twins are genetically no more similar than non-twin siblings, each sharing on average half of their genes. In a twin study, the three groups could be given a survey of life satisfaction, for example. If life satisfaction is heritable at all, the correlation between scores for the identical twins will be higher than that for the fraternal twins and non-twin siblings. The size of the difference in correlations between the groups can be used to come up with the estimate of heritability.
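
One classic shortcut for turning twin correlations into a heritability estimate is Falconer's formula. The text does not spell out the formula, so treat the sketch below as background: because identical twins share roughly all of their genes and fraternal twins roughly half, doubling the gap between the two correlations approximates heritability. The correlations in the example are invented, not real data.

```python
def falconer_heritability(r_identical, r_fraternal):
    """Falconer's formula: heritability is approximately 2 * (r_MZ - r_DZ).
    Identical (MZ) twins share ~100% of their genes and fraternal (DZ) twins ~50%,
    so doubling the difference in their correlations estimates the genetic contribution."""
    return 2 * (r_identical - r_fraternal)

# Invented correlations for a hypothetical life-satisfaction survey.
print(round(falconer_heritability(r_identical=0.55, r_fraternal=0.30), 2))  # 0.5 -> heritability of about 50%
```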

Adoption studies examine the other side of the coin. They look more directly at the contribution of the environment among people who do not share genes. Two children born to different parents adopted into the same family can be compared to examine the influence of shared environment on the traits in question.

A somewhat glib summary of the conclusions of the behavior geneticists is that everything is heritable. Steven Pinker (2002) notes, however, that this statement is only a slight exaggeration. Pinker’s partial list of psychological traits and conditions that have a significant genetic component includes autism, dyslexia, language delay, learning disability, left-handedness, depression, bipolar disorder, sexual orientation, verbal ability, mathematical ability, general intelligence, degree of life satisfaction, introversion, neuroticism, openness to experience, agreeableness, and conscientiousness. We will have several opportunities to examine the link between genetics and these and other phenomena throughout the book. But keep in mind that “significant genetic component” does not mean 100%. In fact, most estimates of heritability for different psychological traits and conditions hover around or below 50%. In most cases, you are not trapped by your genes, even though they play an important role in shaping human behavior. And this brings us to the second key idea.

  • adoption studies: a method in behavior genetics in which children with different biological parents but the same adoptive family are compared in order to assess the impact of a shared environment
  • behavior genetics: the psychological subfield that estimates the contribution of genes and environment for specific psychological tendencies and traits
  • heritability: the proportion of variability in a trait throughout a group that is related to genetic differences in the group
  • twin studies : a method in behavior genetics in which identical twins, fraternal twins, and non-twin siblings are compared in order to assess the heritability of a trait

Gene x Environment interaction. 

Further complicating matters—or as we prefer to think about it, making matters more interesting—are questions about how genes and environment combine. Basically, genes and environment interact.  Let’s take a simplified example to illustrate some of the issues involved. Suppose researchers discovered a single gene for aggression (extremely unlikely at this point). Imagine that you, as someone with that aggressive gene, are placed into an environment in which no one ever makes you angry. It is possible that you might never become aggressive. Perhaps you can begin to see why you are not a slave to your genes.


A YouTube element has been excluded from this version of the text. You can view it online here: https://cod.pressbooks.pub/introductiontopsychologywinter21/?p=61

You can also access this video directly at:  https://youtu.be/QxTMbIxEj-E

(In a classic episode of The Twilight Zone, a town lives in fear of a young boy who punishes people with his telekinetic powers whenever anyone makes him angry. The townspeople spend their days making sure the boy never gets angry, a failed attempt to create the environment we just suggested. If you need a fun 5-minute break, watch the clip above.)

The best way to think about the role of genes in the development of psychological traits is that the genes may create a predisposition, a tendency to possess a particular trait. In order for the trait to be realized, however, the person must be exposed to a certain environment. For a more realistic example, Terrie Moffitt and Avshalom Caspi are frequent contributors to the research literature on gene x environment interactions. In one study of over 2,000 children, they found that those who had been bullied were more likely to develop emotional problems, but only if they possessed one specific variant of a gene that is related to regulation of serotonin concentration (a neurotransmitter implicated in mood, see Module 11).
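
The logic of a gene x environment interaction can be made concrete with a small table of risk levels. The numbers below are invented for illustration (they are not from the Moffitt and Caspi study), but they show the pattern: the environmental stressor raises risk mainly for children who also carry the genetic variant.

```python
# Invented, illustrative probabilities of developing emotional problems.
risk = {
    ("risk variant", "bullied"): 0.40,      # gene and environment together: risk jumps
    ("risk variant", "not bullied"): 0.10,
    ("other variant", "bullied"): 0.12,     # environment alone: little change
    ("other variant", "not bullied"): 0.10,
}

for (gene, environment), probability in risk.items():
    print(f"{gene:>13} / {environment:<11} -> {probability:.0%} risk")
```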

Epigenetics

And finally, the third, and perhaps most interesting, key idea to help us understand the complexities of nature and nurture. Perhaps you had already learned about the distinction between genotype and phenotype in an earlier biology class, so you were familiar with dominant and recessive genes and their role in getting from genotype to phenotype. In this section, we will describe a second essential factor in producing phenotype. This factor is epigenetics, one of the key developments in genetics over the past 40 years. Epigenetics refers to changes in gene expression that are not caused by changes to the contents of the genes themselves (i.e., to the DNA sequence) (Weaver, 2020). In other words, epigenetic mechanisms change the activity of DNA without changing the DNA itself (Lester et al., 2016).

To understand these definitions, you need to know what we mean by gene expression or gene activity. The basic concept is relatively simple. Genes provide the instructions for the body to produce proteins. A gene might be present in a particular cell of the body, but it only provides those instructions when it is expressed. Think of it as turning on the gene so that it can perform its work (producing a protein). These proteins then go on to carry out the cell’s work.

Chemicals called tags can become attached to small portions of DNA and can influence the expression of the genes (increasing or suppressing it). This collection of tags is called the epigenome. So the importance of epigenetics is that there are substances within the epigenome that can increase and others that can suppress the expression of genes, but they don’t do anything to permanently change those genes. The key idea that places this in the nature and nurture context is that research, mostly on rats and mice, suggests that the epigenome can be affected by the environment. For example, Weaver et al. (2004) showed that rat pups’ epigenome changed as a result of mothers’ licking and grooming behaviors.

Researchers have recently begun to apply epigenetics to human behavior, that is, psychology (Lester, Conradt, & Marsit, 2016). Key psychological areas in which researchers have applied epigenetics include development and mental health.

For example, Koopman-Verhoeff et al. (2020) found that epigenetic mechanisms are associated with certain types of sleep problems in children. On the other side of the lifespan, Pishva et al. (2020) discovered the role of epigenetic mechanisms in psychotic symptoms that affect some sufferers of Alzheimer’s disease.

All of this is interesting and reveals some important context to the idea of a gene x environment interaction. If you were not already familiar with epigenetics, though, we predict that the next point will amaze you. The epigenome can be transmitted across generations. In other words, suppose diet affects your epigenome (this is very likely true, by the way; see Hullar & Fu, 2015). The tags that get attached to your DNA—that is, the epigenetic changes—change the way your own genes express proteins, as we have described. But these same epigenetic changes might be transmitted to your children when they are born. How is that for pressure? Junk food is not just bad for you; it might also be bad for children you might have in the future.

  • Which psychological traits or tendencies do you think would have the largest genetic component (i.e., highest heritability)? Which would have the smallest?
  • What do you think would be the relative heritability of such well-known psychological tendencies as happiness, anxiety, depression, intelligence, and extraversion?

10.2. Evolutionary Psychology

  • Do you believe in Charles Darwin’s theory of natural selection (evolution)?
  • In your opinion, what is the main objection that people have to the concept of evolution?
  • Why might some people who believe in evolution in general object to its application to psychology?

Although evolution is one of the most important discoveries in the history of science, it has often been accompanied by controversy. Charles Darwin’s On the Origin of Species by Means of Natural Selection, which introduced the basic theory, generated an explosive public reaction. Today, criticisms of evolutionary psychology have gone so far as to include insults about the researchers’ sex lives (Pinker, 2002). Although this is not the place for a full discussion of evolution and the controversy surrounding it, a short treatment of the issues is in order.

The Controversy over Evolution

When people refer to evolution, they are referring to Charles Darwin’s theory of natural selection or one of the modern versions of it. Darwin formulated the theory of natural selection after a long voyage around South America as a young man in the early 1830s. During the voyage, he observed and collected many fossils and samples of plants and animals. He was struck by how similar South American samples were to each other and how different they were from European samples with which he was familiar. At the same time, he was reading about the then-new idea that the earth was very old and changing. Upon his return, Darwin began to realize that species must have changed over time in response to their environment—in short, that species evolve. He proposed the theory of natural selection in 1859 to explain how evolution occurs.

Natural selection means that traits that allow an organism to survive are more likely to be passed down from parents to offspring. The reason is simple; the beneficial traits are more likely to keep the organism alive long enough to reproduce. Over time (usually very long periods of time), as more and more of the organisms with the beneficial traits reproduce, and fewer and fewer of those without them do, species evolve to have only the beneficial traits. These beneficial traits are known as adaptive traits.
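
The arithmetic behind natural selection (a small reproductive advantage compounding over many generations) can be sketched in a short simulation. This is our own toy model with made-up numbers, not anything from the sources cited here:

```python
def simulate_selection(adaptive_fraction, advantage, generations):
    """Track the fraction of a population carrying an adaptive trait when carriers
    leave slightly more surviving offspring each generation (a made-up advantage)."""
    for generation in range(generations + 1):
        if generation % 50 == 0:
            print(f"generation {generation:4d}: {adaptive_fraction:.1%} carry the trait")
        carriers = adaptive_fraction * (1 + advantage)      # carriers reproduce a bit more often
        others = (1 - adaptive_fraction) * 1.0
        adaptive_fraction = carriers / (carriers + others)  # new share of the population

# Starting rare (1% of the population) with only a 5% reproductive advantage per generation.
simulate_selection(adaptive_fraction=0.01, advantage=0.05, generations=300)
```

Even with a tiny advantage, the trait goes from rare to nearly universal, which is the sense in which a species can come to have "only the beneficial traits" over very long stretches of time.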

Even back then, Darwin had an enormous amount of supporting evidence, and evolution was accepted by most biologists very quickly (Campbell & Reece, 2002). The public, however, especially in the United States, resisted the theory. Even today, many adults in the US do not accept evolution. For example, a recent survey found that 40% of US adults believe humans were created in our present form in the last 10,000 years (Gallup, 2019). On the other hand, there is an overwhelming consensus among scientists in favor of evolution (Pew Research Center, 2020). They note that evolution is supported by an enormous body of evidence, so much evidence that the theory has attained the status of fact (Futuyma, 1995). David Buss (2007) points out that there has never been a scientific observation that has falsified the basic process of evolution by selection.

Many non-scientists do not believe in evolution because it contradicts their belief in the literal interpretation of the Judeo-Christian Bible. Although this is not the place for a full discussion of this controversial issue, there is one important fact to keep in mind. Belief in evolution can co-exist with belief in the Bible. In 1950, for example, Pope Pius XII stated that “there was no opposition between evolution and the doctrine of the faith about man and his vocation, on condition that one did not lose sight of several indisputable points.” In 1996, Pope John Paul II agreed. Granted, both popes did dispute some important facets of particular theories of evolution, but they had seen the scientific evidence in favor of evolution and understood that it is overwhelming.

  • natural selection: the key concept in Charles Darwin’s theory of evolution; traits that helped an individual to survive are more likely to be passed from parent to offspring and become more common in future generations
  • adaptive traits: specific traits that help an individual to survive

The Controversy over Evolutionary Psychology

With this brief background in evolution, we can now move on to evolutionary psychology.

Of course, a public that does not accept evolution, in general, is not likely to put much stock in evolutionary psychology. The controversy runs deeper, however, as even many psychologists are not persuaded by many of the claims of evolutionary psychologists. Let us briefly examine the issues. Throughout the later modules in the book, we will have opportunities to revisit and amplify these issues as they relate to specific claims about people.

The goal of evolutionary psychology is to understand the human mind/brain from an evolutionary perspective (Buss, 2007). It is concerned with how the current form of the mind was shaped; what the components of the mind are, how they are organized, and what they are designed to do; and how the environment interacts with the mind to lead to behavior. There are two ways to apply evolution to psychology, corresponding to two mechanisms originally outlined by Darwin. The first is natural selection as described above, through which traits end up common in humans because they helped our ancestors to survive long enough to reproduce. The second is  sexual selection , through which traits are selected in a species because they helped our ancestors win a mate (Buss, 2003).

Psychologists have had little trouble accepting the first application, natural selection. For example, in order for our human ancestors to survive long enough to reproduce, they had to be able to escape from predators. Those that were able to find a boost of energy and strength during such times of danger were able to successfully fight or flee from the predator. Thus, we have evolved a stress response, commonly known as the “fight or flight response,” which increases our heart rate and blood pressure and diverts blood flow from body systems not needed to face the danger, such as digestion, to the large muscles of the arms and legs.

Sexual selection, however, has not been as free from controversy among psychologists. The types of strategies suggested by sexual selection involve competition within genders and preferences in mating partners, which are ideas that have met with a great deal of resistance. For example, evolutionary psychologists have noted that men across the world tend to prefer women whose physical appearance signals fertility, such as youthfulness and a low waist-to-hip size ratio (that is, waist significantly smaller than hips). These women, the evolutionary explanation goes, were assumed to be more likely to become impregnated, which would allow the man to reproduce. Observations like these reinforce sexist stereotypes and, consequently, have been met with a great deal of resistance from some psychologists.

From a psychologist’s perspective, the problem with these descriptions of natural selection and sexual selection so far is that we have not described a scientific approach to understanding human behavior and mental processes. Rather, an evolutionary psychology fashioned this way is simply telling stories about how some human traits came to be; some critics of evolutionary psychology have called them “just-so stories,” named after the fanciful tales by Rudyard Kipling that told how the elephant got its long trunk, for example (it was stretched by a crocodile). To be a bit more precise, the critics assert that evolutionary psychology does not generate useful scientific theories because its explanations can seemingly account for any possible phenomenon. For example, one might be able to come up with an evolutionary explanation for why women would prefer men who signaled fertility. The critics feel that evolutionary psychology is too “after the fact,” or post hoc.

Supporters claim that a careful examination of the methods of evolutionary psychology reveals that this criticism may be somewhat off base, however. To be sure, evolutionary psychologists sometimes seem to work in reverse of the typical “use theory to generate hypotheses and make observations” order. Rather, they find some interesting observation and generate a hypothesis or theory to explain it. If the evolutionary psychologists stopped there, the critics would have a valid criticism. They do not stop, however. Instead, now armed with a new theory, they go on to generate novel predictions. Further, they do so by using multiple types of data collection strategies, such as comparing different species; comparing genders within a species; and examining historical, anthropological, or paleontological evidence (Buss, 2007). But the debate continues. Philosopher Subrena Smith (2019) has tried to unravel the entire field of evolutionary psychology by noting that it is impossible to know if present-day cognitive mechanisms were actually adaptive in the past because no one really knows what the true environmental pressures were in the past. Absent that knowledge, we would at least need a “fossilized” version of an ancient human brain, something else that does not exist. She refers to this as the matching problem, and contends that it makes evolutionary psychology unscientific (not being concerned with actual empirical observations). As a result, she claims that evolutionary psychology is impossible, and the early response to her thesis has stirred up strong opinions on both sides of the debate (Mind Matters, 2020).

It is difficult to completely deny that evolution plays a role in human psychology. After all, evolution is the unifying theme of all of biology. Humans are every bit as biological as the members of any other species, and our brains are not exempt from the processes that affect all other animals’ brains. One problem is that part of the controversy seems to stem from some of the specific explanations that come from evolutionary psychology and from some of the non-scientific ideologies associated with the application of evolution to humans.

Throughout the theory’s history, people supposedly following the principles of Darwinian natural selection have initiated (and followed through on) some heinous activities. For example, the eugenics movement of the early 20th century was essentially an attempt to selectively breed humans—that is, to impose selection pressures on people (Hunt, 1993). The most horrific institution of a eugenics-like policy in modern times was the “final solution” of the Nazi-led Holocaust, Hitler’s attempt to create a genetically pure Aryan master race. Another, less dramatic, perversion of Darwinian principles was social Darwinism. According to social Darwinism, people who were economically worse off were so because they were genetically less fit. Herbert Spencer, an early proponent of this view, believed that to help the less well off could conflict with the process of evolution and ultimately hurt humanity. Steven Pinker (2002) notes, however, that the social Darwinists were confusing economic success with reproductive success; social Darwinism simply does not follow from the theory of natural selection. The history of the misuse of evolutionary theory makes many observers nervous about any application of the theory to people.

The concept of sexual selection also has some serious political baggage connected to it. For example, evolutionary psychology reframes phenomena such as conflict, aggression, sexual jealousy, and deception as adaptive solutions to problems of survival and reproduction faced by our ancestors. For example, sexual jealousy may have evolved as an adaptive solution to the problem of uncertain paternity (Daly et al., 1982). Quite simply, a male can never be completely certain that the baby born to his mate is his. Therefore, ancient males who had a strategy to prevent female infidelity—that is, males exhibiting sexual jealousy—were more effective at impregnating their mates and producing offspring. Further, one of the key strategies of dealing with sexual jealousy may have been to assault one’s mate, to ensure that she did not stray. It is an alarming prospect that something like spousal abuse might be explained by saying that the behavior was an adaptive solution to an ancient reproductive problem.

Evolutionary psychologists are sometimes seen as apologists for social ills, such as spousal abuse, gender and racial inequality, and male violence. Leaping to the defense of evolutionary psychology, Pinker’s (2002) simple observation is that we can and should have a system of values and morality that is independent of our biological predispositions. For example, evolutionary psychology suggests that men and women have key psychological differences. If our society deems discrimination against a gender wrong, however, it should be irrelevant whether women on average tend to experience basic emotions (not including anger) more intensely than men. David Buss (2003), a prominent evolutionary psychologist, argues that the fact that humans may have antisocial psychological tendencies that arose because of our ancestral past does not excuse this behavior in modern people. In fact, he notes, by acknowledging antisocial tendencies, we stand a better chance of solving social ills.

The jury is still out on many of these questions; we just do not know whether particular evolutionary explanations of human behavior are correct or not. It would be a tragedy if the fear of ideological contamination prevented the research from being done, however, especially if the evolutionary psychologists are correct. On the other hand, we must continue to be vigilant about the misuse of evolutionary principles to drive an unfair or dangerous agenda, and we must insist that proponents of a new subfield adhere to the highest scientific principles (see Module 14 for another take on this debate).

  • evolutionary psychology: the subfield of psychology that focuses on understanding the human mind/brain from an evolutionary perspective
  • sexual selection: the process through which specific traits are passed on from parents to offspring because they helped an individual win a mate
  • eugenics: a misuse of evolutionary principles that attempted to selectively breed humans to remove “unwanted” traits from humanity
  • social Darwinism: a misapplication of evolutionary principles that proposed that people who were worse off economically were so because they were evolutionarily less fit
  • stress response: commonly known as the “fight or flight response.” The physiological response that results in increased heart rate and blood pressure, diverted blood flow from body systems not needed to face the danger, and increased blood flow to the large muscles of the arms and legs.
  • Try to come up with an evolutionary explanation for: Male and female sexual infidelity, sadness, anger, anxiety, happiness

Module 11: Brain and Behavior

In section 10.1, we noted that genes are responsible for building all of the cells in our body. In this module, we will introduce you to many of the cells and groups of cells that genes build in the nervous system.

The individual cells in the nervous system are called neurons, cells that generate and transmit electrochemical signals. Neurons are the basic cells of the nervous system, including our brains. Our genetic blueprints also set up the organization of those neurons into the different divisions of the nervous system and the specific parts or functional areas within the brain.

Module 11 is divided into three sections. Section 11.1 describes the electrochemical activity that takes place in an individual neuron and allows neurons to communicate with each other. It also explains a bit more about the “astonishing hypothesis” and lays out the divisions between the parts of the nervous system. Section 11.2 is devoted to the brain; it describes the functions and locations of several of the parts that are most important in human behavior and mental processes. Section 11.3 brings together the information from the previous two sections by returning to the communication process between neurons. The section describes neurotransmitters, the chemicals that carry signals between neurons throughout the nervous system. It concludes with a short introduction to neuropsychopharmacology, the understanding of brain and behavior through the discovery of the neural actions of drugs.

11.1 Neurons and the Nervous System

11.2 The Brain and Behavior

11.3 Neurotransmitters and Neuropsychopharmacology

By reading and studying Module 11, you should be able to remember and describe:

  • Action potential and resting potential: dendrites, cell body, axon, myelin (11.1)
  • Neural communication: terminal button, vesicle, neurotransmitter, synapse, receptor site (11.1)
  • Central and peripheral, somatic and autonomic, sympathetic and parasympathetic nervous systems (11.1)
  • Major parts and functions of hindbrain: medulla, pons, reticular formation, cerebellum, thalamus (11.2)
  • Major parts and functions of forebrain: hypothalamus, pituitary gland, amygdala, hippocampus, limbic system, cerebral cortex, frontal lobe, parietal lobe, occipital lobe, temporal lobe, primary motor cortex, prefrontal cortex, primary sensory cortex, primary visual cortex, primary auditory cortex (11.2)
  • Major functions and location of midbrain  and corpus callosum  (11.2)
  • Methods of discovering functions of the brain: case studies, animal research, electroencephalogram, positron emission tomography, functional magnetic resonance imaging (11.2)
  • Common neurotransmitters: endorphins, cannabinoids, serotonin, GABA, acetylcholine, norepinephrine, dopamine (11.3)
  • Neuropsychopharmacology:  Agonists and antagonists, reuptake  (11.3)

By reading and thinking about how the concepts in Module 11 apply to real life, you should be able to:

  • Come up with your own examples of situations in which your sympathetic and parasympathetic nervous systems were active (11.1)
  • Predict the behavior changes that might result from a brain injury or disorder (11.2)

Analyze, Evaluate, or Create

By reading and thinking about Module 11, participating in classroom activities, and completing out-of-class assignments, you should be able to:

  • Evaluate the Astonishing Hypothesis in light of the computational theory of the mind (11.1)
  • Propose potential agonistic or antagonistic mechanisms of specific substances and drugs (11.3)

11.1 Neurons and the Nervous System

  • What is your opinion of the “astonishing hypothesis”? Are you comfortable with the idea that everything you do and are can be traced to the electrical and chemical activity in your nervous system?
  • Have you ever thought about how computer programs really work? Specifically, how does the computer translate the instructions from a computer programming language into a set of electrical signals that carry out the instructions?

The cognitive revolution (Module 9) showed us how mental processes, such as intentions, desires, and consciousness, could be seen as the manipulation of information within the brain; researchers call this view the computational theory of mind. It helps us begin to understand the “astonishing hypothesis,” the idea that everything we are and do starts with electrochemical processes within and among nerve cells. The brain receives input from the world through the sensory organs (details in Module 12). The input is translated into neural signals that correspond to the information from the world; then, the neural signals are transmitted to other parts of the brain for more processing.

Although you do not need to know all of the details of this process, a few observations and facts can help you to see how these neural signals might work to create a complex thinking system, such as a human brain:

  • As you will see in this section, a neuron is very much like a little switch; it is either on or off.
  • Computer programs and mental processes are both computational procedures, in which information is manipulated.
  • Any computational procedure can be expressed as a series of little on/off signals.

This last point was proven mathematically by the mathematician Alan Turing in 1936. Think of the most complicated computer program you can imagine (for example, a complex video game or a program that can recognize speech, a very difficult task). Although the program was written in a programming language, the commands are actually translated into a series of on/off signals, and that is what the computer executes. Of course, for a complicated computational procedure, it would be a very long and very complex sequence of signals. Because our mental processes are probably among the most complicated computational procedures in the universe, the sequence of on/off signals, provided by our neurons, would be extremely complex. These procedures are so complex that researchers still are not very close to figuring out what the sequence is, by the way.
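If you have done a little programming, a short sketch can make Turing’s point more concrete. The following Python fragment is our own illustrative example (the 8-bit width and the numbers being added are arbitrary choices); it shows how even ordinary addition can be carried out entirely with simple on/off operations, the software equivalent of a signal that is either present or absent.

    # Illustrative sketch: ordinary addition built entirely from on/off (bit) operations.
    # The numbers and the 8-bit width are arbitrary choices for this example.

    def full_adder(a, b, carry_in):
        """Add three single bits using only on/off logic; return (sum_bit, carry_out)."""
        sum_bit = (a ^ b) ^ carry_in                    # XOR operations
        carry_out = (a & b) | (carry_in & (a ^ b))      # AND and OR operations
        return sum_bit, carry_out

    def add_numbers(x, y, n_bits=8):
        """Add two whole numbers one bit at a time, using nothing but the full adder above."""
        result, carry = 0, 0
        for i in range(n_bits):
            bit_x = (x >> i) & 1      # the i-th on/off signal of x
            bit_y = (y >> i) & 1      # the i-th on/off signal of y
            sum_bit, carry = full_adder(bit_x, bit_y, carry)
            result |= sum_bit << i    # place the resulting on/off signal
        return result

    print(add_numbers(19, 23))  # prints 42, computed purely from sequences of on/off signals

Of course, the brain is not literally adding binary numbers; the point is only that enormously complex procedures can, in principle, be built from nothing more than long sequences of on/off signals.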

Neural Signals

Module 5 briefly introduced you to the parts of a neuron and described the process of ion exchange that constitutes neural activity. In this section, you will get many more details about this process. Although the process is complicated, remember that what the neuron is accomplishing is actually quite simple; it is either turning on or staying off, just like the signals or switches that constitute complex computer programs, or the light switch in your room.

Recall that the three main parts of a neuron, for our purposes, are the dendrites, cell body, and axon. The typical neuron has many dendrites, one cell body, and a single axon (there are other types of neurons, but we can learn the main ideas by focusing on this typical type). The purpose of dendrites is to receive incoming signals; the purpose of the axon is to carry a neural signal away from the neuron. (See Figure 11.1)

There are three main processes involved in neural activity:

  • The initiation of an electrical signal in a neuron
  • The movement of that electrical signal through the neuron’s axon
  • The transmission of the signal, in chemical form, to other neurons


axon: the single “tube” in a typical neuron that carries an electrical signal to other neurons

dendrites: the many branches that each neuron has; dendrites receive incoming information from the outside world and from other neurons

Resting Potential and Action Potential: Initiation of the Signal and Its Movement Through the Axon

A neuron receives excitatory and inhibitory signals at its dendrites and cell body; the signals come from many other neurons or from stimulation from the outside world. An excitatory signal is one that is instructing the neuron to pass along the signal. An inhibitory signal is one that is instructing the neuron not to transmit a signal. (Think of the excitatory signal as telling the neuron to “turn on,” and the inhibitory signal telling it to “stay off” if you’re thinking of the switch analogy). The neuron collects all of these signals and adds them together. If there are enough excitatory signals, the neuron fires and generates an action potential.
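This “adding up” can be captured in a very small sketch. The following Python fragment is our own simplified illustration (the threshold value and the +1/-1 signal strengths are made-up numbers, not physiological measurements); it shows a unit that fires only when its excitatory inputs sufficiently outweigh its inhibitory inputs.

    # Illustrative sketch: a neuron as a tiny summing switch.
    # The +1/-1 signal values and the threshold are made-up numbers, not real measurements.

    def neuron_fires(signals, threshold=3):
        """signals is a list of +1 (excitatory) and -1 (inhibitory) inputs arriving at the
        dendrites and cell body; the neuron fires only if the summed input reaches the threshold."""
        return sum(signals) >= threshold

    print(neuron_fires([+1, +1, +1, +1, -1]))  # True: enough excitation, so an action potential begins
    print(neuron_fires([+1, +1, -1, -1, -1]))  # False: inhibition wins, so the neuron stays "off"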

The most important concepts for you to understand in the neural signaling process are the action potential and resting potential. Potential is another word for voltage, or potential difference. It is basically a difference in electrical charge between two areas. The electrical charge is the result of the location of electrically charged particles called ions . There is a potential, or potential difference, because there is a different concentration of positive and negative ions inside and outside of the axon of a neuron.

The resting potential is the voltage on the inside versus the outside of the neuron when it is at rest (when it is “off”). The action potential is simply a voltage, or an electrical charge, that travels down the length of a neuron’s axon (when it is “on”). It begins at the part of the axon nearest to the cell body and makes its way bit by bit to the end of the axon.

Why are ions crucial to action potentials and resting potentials? Think about the concept that opposites attract and likes repel. A positively charged ion is attracted to a negatively charged ion and repelled from another positively charged ion. Similarly, a negatively charged ion is attracted to a positive one and repelled from another negative one.

Let’s delve a little deeper into the application of this concept to neural activity, so you can fully understand action and resting potentials. Neurons are floating in fluid, and they have fluid inside of them. There are electrically charged ions in both fluids. In the fluid on the outside of the neuron, there are positive ions, specifically sodium (Na+), and negative ions, specifically chloride (Cl-). There are more sodium ions than chloride ions, so the fluid outside of the cell has an overall positive electrical charge. When the neuron is at rest (i.e., resting potential), inside the cell there are positive ions, specifically potassium (K+), and negative ions, chloride (Cl-) and other anions (negatively charged particles, essentially proteins). Because of the chloride and the other anions, the fluid inside the neuron has an overall negative electrical charge.

  • So, to summarize so far: Lots of sodium (Na+) is outside, lots of chloride and protein are inside (both negative) when the neuron is at resting potential.

Remember: Opposites attract and likes repel. Negative ions will try to move away from other negative ions and toward positive ions. Positive ions try to move away from other positive ones and toward negative ones. The effect is that the negative ions are trying to get out of the neuron and the positive ions are trying to get in. At the same time, particles have a tendency to move from areas where they are highly concentrated to areas of low concentration, a process called diffusion. So, sodium, being highly concentrated on the outside, is trying to move inside because of diffusion as well as because of its attraction to the negative particles. The surface, or membrane, of the neuron blocks most of this movement, and the resulting separation of charges is the electrical potential, specifically, the resting potential.

  • During the resting potential, the inside of the neuron has an overall negative charge relative to the outside, as there are more negative ions inside and more positive ions outside. At the same time, the negative ions are trying to get out while the positive ions are trying to get in.

When the action potential begins, the surface of the axon nearest to the cell body changes. Basically, some channels open up to allow ions to move more freely into the axon. Because the sodium ions are attracted to the inside of the neuron, they immediately rush into the now opened section. This is the essence of the action potential; it is a positive electrical charge that is caused by the influx of sodium ions into one tiny section of the axon. Next, the same type of channels open in the section of the axon a little farther out, allowing a new rush of sodium into this new section. An instant later, new channels that allow ions to flow out open up in the first section of the axon. Positive ions, this time potassium, can now be forced out of the axon by the abundant sodium ions surrounding them. Thus, the first section of axon has its electrical charge returned to normal. The process then repeats itself again and again down the entire length of the axon. (See the four-part Figure 11.2, beginning below.)


Let us pause briefly to put the process so far together:

  • Dendrites and the cell body of a neuron receive excitatory and inhibitory signals, mostly from other neurons.
  • If enough excitatory signals are transmitted to the neuron, it begins an action potential at the part of the axon nearest to the cell body.
  • The action potential proceeds as sodium ions move into and potassium ions move out of the axon; the action potential travels one microscopic section at a time down the length of the axon.

anion: a negatively charged particle

diffusion: the tendency for particles to move from areas of high concentration to areas of lower concentration

ion: an electrically charged particle

action potential: an electrical signal (voltage) that travels down a neuron’s axon; it results from the movement of positive ions into and out of the axon

resting potential: the voltage of a neuron when it is at rest; it results from positive ions outside and negative ions inside the neuron

excitatory signal: a signal entering at a neuron’s dendrites or cell body instructing the neuron to transmit its own signal

inhibitory signal: a signal entering at a neuron’s dendrites or cell body instructing the neuron to not transmit its own signal

An action potential is quite slow. Ions must flow in sequence into and out of an axon throughout its entire length (an axon of a neuron involved in movement can be several feet long). Fortunately, many neurons have a method that speeds up the process. Specifically, other cells in the nervous system, called glia, form a substance called myelin, which wraps around the axon and joins together some of the microscopic sections to form larger sections (see Figure 11.1 above). The positive ions that make up the action potential quickly float through each myelinated portion of the axon, and the action potential itself takes place only at the breaks between the myelin sections (the sections where the action potential occurs are called nodes of Ranvier).
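To see why the myelin “shortcut” matters, consider a rough back-of-the-envelope sketch. The Python fragment below uses invented numbers (1,000 microscopic segments, 50 segments joined under each myelin section) purely for illustration; it simply counts how many times the signal must be regenerated with and without myelin.

    # Illustrative sketch with invented numbers: how many times must the signal be regenerated?

    segments = 1000          # hypothetical number of microscopic axon segments
    segments_per_node = 50   # hypothetical number of segments joined under each myelin section

    unmyelinated_steps = segments                      # signal rebuilt at every segment
    myelinated_steps = segments // segments_per_node   # signal rebuilt only at the nodes of Ranvier

    print(unmyelinated_steps, myelinated_steps)        # prints 1000 20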

By the way, glia are among the most interesting topics to current neuroscientists. For many years, researchers thought that glia served little purpose other than to protect, provide nutrients for, and hold together the neurons of the brain (glia means “glue”). Recently, researchers have discovered that glia actually participate in the signaling that we thought was unique to neurons. Currently, many researchers are trying to discover the full contribution of glia to the functions of the nervous system (see Module 14).

glia: types of cells, other than neurons, in the nervous system

myelin: a substance that covers the axons of many of the brain’s neurons; it protects the axon and speeds up the action potential by allowing it to jump from one non-myelinated section to the next.

Neural Communication: Transmission of the Signal to Other Neurons

When the action potential reaches the end of the axon, it must be transmitted to other neurons in order for its signal to be carried throughout the nervous system. This communication between neurons takes place at what is called a synapse, the area where two neurons come together. A synapse is where the end of the axon of one neuron is situated very close to (but not touching) a dendrite or the cell body of a neighboring neuron. Neural communication takes place when chemicals called neurotransmitters are released from the axon, float across the small space between the two neurons, and land on the dendrite or cell body of the neighboring neuron.

A few details are necessary to understand the process well. Axons end with their own little branching sections, and each tiny branch ends with a slightly swelled area called a terminal button . Neurotransmitters are stored in tiny spaces called vesicles in the terminal buttons. When an action potential reaches a terminal button, it causes the vesicles to open and release their neurotransmitters into the synaptic gap between the two neurons. The neurotransmitters float across the gap and land on receptor sites on the dendrites or cell body of the neighboring neuron. Then, the receptor sites open up to allow ions to float into or out of the neuron. These ions will cause tiny electrical charges, or potentials, in the second neuron. In this way, the signal is sent from the first to the second neuron.

Some neurotransmitters are excitatory; other neurotransmitters are inhibitory. A receiving neuron must collect all of these excitatory and inhibitory signals and decide whether or not it will generate its own signal. We hope that this sounds familiar. This, of course, is the input collection process that we described at the beginning of this section. There are many different neurotransmitters, and each one can only influence certain receptor sites, much like a key that can only fit certain locks. Keep in mind that an axon branches off into many terminals (sometimes thousands), each of which forms a synapse with a neighboring neuron, so a single neuron releases very many neurotransmitters to many different other neurons.
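The key-and-lock idea can also be illustrated with a small sketch. In the Python fragment below, the receptor sites, the neurotransmitter names, and the +1/-1 effect values are all simplified, hypothetical choices; the point is only that a released neurotransmitter influences a neighboring neuron only where a matching receptor site exists, and that each match contributes an excitatory or inhibitory nudge.

    # Illustrative sketch: neurotransmitters as keys that fit only matching receptor-site locks.
    # The receptor sites, neurotransmitter names, and +1/-1 effect values are hypothetical.

    receptor_sites = ["glutamate", "GABA", "glutamate", "serotonin"]  # "locks" on the receiving neuron
    effects = {"glutamate": +1, "GABA": -1}  # excitatory (+1) or inhibitory (-1), greatly simplified

    def deliver(neurotransmitter):
        """Total signal the receiving neuron collects when this neurotransmitter crosses the synapse."""
        return sum(effects.get(neurotransmitter, 0)
                   for site in receptor_sites if site == neurotransmitter)

    print(deliver("glutamate"))  # +2: the key fits two locks, adding excitation
    print(deliver("GABA"))       # -1: fits one lock, adding inhibition
    print(deliver("dopamine"))   #  0: no matching receptor sites on this particular neuron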


synapse: the tiny area between two neurons, where neural communication takes place

neurotransmitter: the chemicals that carry a neural signal from the axon of one neuron to the cell body or dendrites of a neighboring neuron.

terminal buttons: the end section of axon branches, from where neurotransmitters are released.

vesicles: the storage sites for neurotransmitters in the axon, before they are released.

receptor sites: the sections on cell bodies and dendrites where neurotransmitters land, thus completing the transmission of a signal from one neuron to another.

Stimuli for Neural Activity

One of the more confusing aspects of the process of neural communication may be the question of how or where the whole thing starts. We mentioned that our target neuron is collecting signals mostly from other neurons, which are collecting signals from other neurons, which are collecting signals from other neurons, and so on. So where does the process start?

Often the first neuron in the process is a sensory neuron, one that receives input from the outside world. For example, your eye receives light, which is transformed into neural signals, which are recognized as a table or some other thing as the signal flows through the brain.

What about when input from the external world does not seem to begin the process? Where do spontaneous, random-seeming thoughts come from?

Imagine taking a shower in a health club locker room. As you turn off the water and begin drying your face, the distinctive odor of the industrially washed towel instantly reminds you of summer camp when you were in middle school. Then middle school reminds you of high school, which reminds you of your favorite teacher, who hired you to work as a lifeguard at a community pool during one summer. After reminiscing about your high school summers, and in particular, the attractive person with whom you used to flirt in the guard station, you look up and wonder why you are thinking about them. This kind of thinking, known commonly as a stream of consciousness, is not random, although it might seem like it is. Each successive thought is clearly connected to the thought that preceded it. Imagine, then, a set of neural signals that leads from one thought to the next. The only thought that is not preceded by another one is the first. What preceded that thought? The odor of the towel. So, the whole stream of consciousness began when the external environment was perceived. That perception led to a thought (the initial reminding), and the whole stream of thoughts took off from there. Pay attention to it yourself sometime. Work backwards from your current stray thought. What made you think of it, what made you think of the previous thought, and so on? You may be surprised to discover how often the stream of thoughts can be traced back to some stimulus in the environment. Many thoughts that seem random are not.

What about when something really does just pop into your head, however? How does that happen? Many of these events can probably be explained by the way memories are stored and retrieved. When an event occurs and is stored in memory, many properties of the event are stored with it (colors, odors, sounds, emotions, physical feelings, and so on). Any of these properties may serve as the entry point into the memory, that is, as a retrieval cue (Module 5). For example, imagine that you receive the best grade on your next psychology exam that you have ever gotten, and the news puts you in a very happy mood. Several weeks later, a new event puts you in the same mood; the repetition of the mood reminds you of the exam. If you are reminded of some event without realizing what did the reminding, the consequent thought will seem as if it just popped into your head.

A great deal of mental processing goes on outside of conscious awareness. Often the results of that unconscious processing seem as if they just popped into your head. It is also possible, though, that sometimes a thought really is random. Neurons fire spontaneously and randomly throughout the day. Most of the time, this random firing has no effect on our thinking. Recall that in order for a neural signal to have an effect on another neuron, the signal must be combined with the signals from many other neurons. Occasionally, however, the results of random neural firing may lead to sufficient stimulation to generate a series of signals that can lead to a conscious thought.

The Organization of Neurons into the Nervous System

Just as few thoughts are truly random, neurons are not joined together randomly. There are specific pathways and clusters of cells throughout the nervous system. They make up key parts of the brain and important pathways for information flow throughout the nervous system (for example, see Module 12 about vision pathways).

In addition, there are some major divisions in the nervous system. The most basic division is between the central nervous system and the peripheral nervous system. The central nervous system is simply the brain and spinal cord, basically command central for the rest of the nervous system. The peripheral nervous system is the part that runs through the rest of the body; it is divided into the somatic nervous system and the autonomic nervous system. The somatic nervous system controls many of the muscles of the body; it consists primarily of neurons for sensation, sensory neurons , and for body movement, motor neurons . For example, as you sit in the library studying for an upcoming exam, sensory neurons within the somatic nervous system relay information about pressure and tension from the lower parts of your body and your back muscles to the central nervous system; you are uncomfortable from sitting in the same position for three solid hours. Then, the central nervous system might “answer,” sending information via motor neurons that lead you to shift in your seat (or get up) so that you are more comfortable.

The autonomic nervous system controls the glands and internal organs; it is subdivided into the sympathetic and parasympathetic nervous systems. The sympathetic nervous system causes several responses throughout the body that increase arousal and prepare it for activity. The arousal that comes from the sympathetic nervous system is commonly known as the fight-or-flight response, a set of responses that prepare the body to meet or escape from a threatening situation. The parasympathetic nervous system reverses those responses, calming the body back down. Imagine that you are running late for work one morning and driving a bit over the speed limit. Suddenly, up ahead, you see a police car on the side of the road. Instantly, your sympathetic nervous system kicks into gear and your fight-or-flight response begins. Your digestive system begins to shut down: your mouth gets dry and you get the “butterflies in the stomach” feeling. Blood that had been flowing to your digestive system is diverted to the large muscles of your arms and legs, the better to fight or flee from the imminent danger (free advice: not a good idea with a police officer). Your palms get sweaty to improve your grip (again, part of the fight response). As you get closer to the police car, you discover that it is empty. It is a decoy car, planted there to frighten violators into reducing their speed. Just as instantly, the fight-or-flight response begins to reverse as the parasympathetic nervous system takes over to return your body to its normal, unstressed state. As you will see in Modules 20 and 28, this fight-or-flight response plays important roles in stress and health, and in emotions.


central nervous system: the brain and spinal cord; the command center of the nervous system.

peripheral nervous system: the parts of the nervous system that run throughout the body (everything except the brain and spinal cord).

somatic nervous system: the part of the peripheral nervous system that controls the skeletal muscles

sensory neurons: neurons that receive input from the outside world and send sensory information to the brain

motor neurons: neurons that are responsible for producing movement

autonomic nervous system: the part of the peripheral nervous system that controls the glands and internal organs

sympathetic nervous system: the division of the autonomic nervous system that arouses the body

fight-or-flight response: the common name for the set of arousing responses produced by the sympathetic nervous system; they are designed to prepare the body to face some physical danger by fighting it or fleeing from it.

parasympathetic nervous system: the division of the autonomic nervous system that calms the body down

  • What is the easiest part of the whole action potential/neural communication process for you to understand? What is the hardest part? How would you explain the parts that you understand well to someone who is having difficulty understanding?
  • Come up with your own example of an event during which your sympathetic and parasympathetic nervous systems were at work. Can you recognize the fight-or-flight reactions?

11.2 The Brain and Behavior

  • Draw a sketch of the human brain. From memory, add any labels you can showing the names of different areas of the brain or their functions.
  • Have you or anyone you know had a brain disorder or injury? Where was it located? What was its effect on mental processes, emotion, or behavior? Was there a way to compensate for the disorder?

One of the most important and interesting discoveries about the brain has been its plasticity. In short, the brain can reorganize itself as a consequence of experience or damage. For example, taxi drivers have a larger version of a brain area that appears to be related to navigation skill than adults who do not drive taxis, and the size of that area is related to their experience as drivers (Maguire et al., 2000).

Sometimes, the plasticity can be dramatic. In a feat of nearly Frankensteinian proportions, researchers were able to rewire the brains of ferrets so that visual information was sent to the brain area that was supposed to process sounds, and vice versa (Sur et al., 1999). The ferrets’ brains were able to reorganize so that the animals were able to function correctly.

Although no one has attempted such a dramatic demonstration with a human brain, there are striking examples of human plasticity as well. Perhaps the most amazing example is when an operation called a hemispherectomy is performed. This operation, the actual removal of one hemisphere (half) of a brain, has been used on very rare occasions as a treatment for severe, degenerative seizure disorders. Research has shown that the operation can substantially reduce symptoms, and patients can function quite well, in many cases as well as people who have intact brains (Moosa et al., 2013; Vining et al., 1997). Recent research compared brain scans of six hemispherectomy patients to a large group of healthy controls. The researchers found that several functional areas of the brain had stronger interconnections for the six patients than for the healthy controls, as if their brains compensated for the missing halves by increasing connections in the remaining hemisphere (Kliemann et al., 2019).

When you first look at a brain, you see a lumpy, light grayish-brown, wrinkled mass, with few discernible parts. At first, then, it seems plausible that the brain might be infinitely plastic. A great many people do believe that the brain can basically reorganize itself without limit. With a little careful examination, however, you can begin to notice that there are separate sections. For example, on the surface of the brain, some of the wrinkles look larger than others, and some of the lumps are more pronounced than others. These separate areas are recognizable on any brain, and they are completely unrelated to any damage or experience. So, without denying that the brain can change itself, you must realize that the brain has a very intricate structure, very specific parts biologically determined to fulfill specific functions.


Just as a skilled radiologist can recognize what appear to the untrained eye to be unintelligible specks on an x-ray of the body, you can learn to recognize these different areas in the brain. If you intend to be a neuroscientist, you’ll definitely need to acquire this skill. It is also handy for psychologists, given today’s emphasis on biopsychology. But even if you never have a professional need to recognize the brain’s features, we think that you will find them interesting. And although we hope you never have this experience, somebody you know might someday have a brain disorder or injury that will make your study of the brain’s geography entirely relevant.

In this section, we will introduce you to a few of the main parts of the brain and give you some information about their functions. This is a useful backbone of knowledge for your study of the rest of the book. When appropriate in other sections, we include other descriptions of brain parts and functions as they apply to various psychological phenomena.

Brain Areas

The most basic distinction in the brain is between the hindbrain, midbrain, and forebrain. As we are sure you can guess from the names, these labels describe the locations of the areas in the brain. These locations are difficult to see in the human brain, however, because the forebrain in humans is so large that it covers the midbrain and part of the hindbrain.

Each of the three major areas of the human brain incorporates several smaller brain structures. Brain structures that are located close to each other often have similar or complementary functions. The individual sections that make up the hindbrain are largely devoted to basic survival functions. Many midbrain sections are important for processing sensory information and movements. Finally, the forebrain contains structures that further process sensory information, regulate our emotions, and carry out our higher intellectual functions.

hindbrain: the structures of the brain most closely related to basic survival functions

midbrain: structures of the brain closely related to processing sensory information and movements

forebrain: structures of the brain that process sensory information, regulate emotions, and carry out higher intellectual functions

Hindbrain and Midbrain

The hindbrain is composed of the following four individual brain parts: medulla, pons, reticular formation, and cerebellum. (See Figure 11.5)

  • As the spinal cord reaches up into the skull, it begins to widen. The medulla is the wider area at the very base of the brain; it is essentially the first brain part that is distinct from the spinal cord. The medulla controls basic survival-type functions, such as heart rate and breathing.
  • Just above the medulla is a second, more distinctly bulging area, the pons, which functions as a bridge, transferring information between the brain and the spinal cord.
  • Running through the inside of the medulla and pons is an area called the reticular formation, a brain area important for attention and arousal.
  • The cerebellum is the part that looks like a miniature brain (cerebellum means “little brain”); it is tucked behind the medulla and pons. Do not be fooled by the name, “little brain,” however. The cerebellum contains more than 50% of the brain’s neurons despite being only 10% of the brain’s total volume. So obviously, it is an extremely important part of the brain. The cerebellum helps us with posture and balance, and it helps us to learn and coordinate voluntary movements (Byrne, 1997; Leiner, Leiner & Dow, 1995). The cerebellum is quite large in humans compared to other animals, suggesting that it has an important role in cognition. The cerebellum appears to be important for learning higher-order cognitive skills, such as reasoning, by making complex thinking procedures more routine (Preuss, 2000).


Although the midbrain does contain several individual parts, we will keep things simple for now by not subdividing it. In general, the midbrain parts play important roles in our sensorimotor abilities (sensation and movement).

medulla: the structure at the base of the brain where it begins to widen after leaving the spinal cord; it is responsible for your heartbeat and breathing

pons: a bulging area above the medulla; transfers information between the brain and spinal cord

reticular formation: an area stretched inside the medulla and pons; it is involved in attention and arousal

cerebellum: a brain area located underneath and behind the main part of the brain that looks like a miniature brain; it is responsible for coordinating movements and helping fine-tune cognitive responses

Forebrain

Recall that the forebrain contains many structures that process sensory information, regulate our emotions, and carry out higher intellectual functions. Thus, the forebrain is the part of the brain that produces the behaviors that most clearly distinguish humans from other animals. It is no surprise, then, that the forebrain in humans is much larger than in other animals. The six major parts of the forebrain are the thalamus, hypothalamus, amygdala, hippocampus, limbic system, and cerebral cortex.

  • The thalamus is a roughly oval-shaped structure above the pons, reticular formation, and midbrain. Side-view drawings of the brain make many people think that there is only one lobe of the thalamus, but in reality, there is one on each side of the brain. The function of the thalamus is to route sensory information to the correct area of the brain for additional processing. For example, light that enters through the eyes is translated to neural signals at the back of the eye (see Module 12). These neural signals are sent to the thalamus, which sends them to the part of the brain that processes vision.
  • The hypothalamus is located below the thalamus (the word hypothalamus means “below thalamus”). The hypothalamus works closely with the pituitary gland, located right in front of it. Neural signals from the hypothalamus direct the pituitary gland to release chemicals called hormones into the bloodstream. These hormones travel to other parts of the body, especially to other glands, which in turn release their own hormones. As you can see in other sections throughout the book, hormones play a role in physical development, stress, sex, aggression, and other behaviors. The hypothalamus also plays an important role in motivation, including such behaviors as sex, eating and drinking, and aggression.
  • The amygdala is a small almond-shaped structure (amygdala is Latin for almond) located outside and just below the thalamus (one on each side). The amygdala is probably the most important brain part for our emotions. For example, it plays a critical role in learning fear and anxiety responses: because of the amygdala, if you get stung by a bee, you may feel uneasy and anxious when you see another one. The amygdala is also important for distinguishing different emotions, and for enhancing our memory of emotional episodes.
  • You might recall that Module 9 described the distinction between explicit and implicit memory. Explicit memory is when you have conscious or intentional recall of some information; implicit memory does not involve conscious recall. A part of the forebrain, the hippocampus , is key to explicit, but not implicit, memory. The hippocampus is located just to the outside and a bit below the thalamus. Again, just as there are two lobes of the thalamus, you have a left and a right hippocampus. It appears that the hippocampus allows us to store new explicit memories, and it aids in the reorganization of previously stored memories to allow them to be stored for longer periods (Squire & Knowlton, 2000). So, if you remember what the function of the hippocampus is, it was the hippocampus that allowed you to do so!
  • Together, the hypothalamus, amygdala, hippocampus, and a few additional parts form what is known as the limbic system , a system that appears as a ring around the thalamus (on both sides). Although the limbic system is complex and contributes to many functions, the main ones are emotion and memory. Various parts of the limbic system are involved in experiencing, expressing, and recognizing emotions. Others are important for learning, storage, and recall of information (Augustine, 2017).
  • By far the most noticeable part of the human brain is the cerebral cortex , the wrinkled surface of the brain. The cortex plays a crucial role in a great many behaviors, including perception, movement, and our higher intellectual functions such as memory and reasoning. It is also better developed in humans than in other animals. Thus it deserves a bit more attention.


thalamus: an oval-shaped forebrain structure that routes sensory information to other parts of the brain

hypothalamus: a forebrain area just below the thalamus; it plays a role in motivation and it controls the pituitary gland

amygdala: an almond-shaped forebrain area that is important for emotions

hippocampus: a forebrain area near the thalamus that is important for storing memories

limbic system: a group of forebrain areas that are important in emotions, among other functions

cerebral cortex: the wrinkled surface of the brain that plays important roles in perception, movement, and higher intellectual function

Cerebral Cortex

Because it is so large and so important, and its functions so diverse, the cortex is subdivided into four main sections, or lobes (really, eight when you consider that there is a version of each lobe in the left and right hemispheres): frontal lobe, parietal lobe, occipital lobe, and temporal lobe. The lobes are separated by some of the most prominent wrinkles in the cortex.


  • The frontal lobe , of course, is in the front of the cortex (unfortunately, the remaining three lobes are not called topal, backal, and sidal). The frontal lobes receive and integrate sensory input that originates in all of the sense organs. They use this diverse input to help produce a great deal of complex behavior, such as judgments, planning and reasoning. The rearmost part of the frontal lobe contains the primary motor cortex , the main part of the brain that controls movements. The motor cortex in the left frontal lobe controls the right side of the body, and vice versa. The left frontal lobe contains Broca’s area , which as we have seen is involved in speech production (see a few details below in the discussion of Wernicke’s area). The front part of the frontal lobe is called the prefrontal cortex. It is the seat of many judgment and reasoning processes, and is involved in our working memory, the short-term store of information that is currently in consciousness—see Module 5 (Smith and Jonides, 1997).
  • The parietal lobe is directly behind the frontal lobe. The front part of the parietal lobe is called the primary sensory cortex . It is the part of the brain that gives us our sense of touch throughout the body. The area of the parietal lobe behind the sensory cortex is important for taking in sensory information and using it to plan movements (Bizzi, 2000).
  • Directly behind the parietal lobe is the occipital lobe, the area that contains the primary visual cortex . Among the lobes of the cerebral cortex, only the occipital lobe is involved with a single function, albeit a very complex one. The entire occipital lobe is devoted to visual processing.
  • The temporal lobe is located on the side of the cerebral cortex, in front of the occipital lobe and below the parietal lobe. The area near the top of the temporal lobe is the primary auditory cortex, the area that processes sounds. Another important section in the temporal lobe is Wernicke’s area. For many years, this area was thought to be important for speech comprehension (and in some textbooks, it is still presented this way). Recent research has led brain researchers to conclude that it is actually important for speech production, together with Broca’s area (Binder, 2015). Indeed, Wernicke’s area is connected to Broca’s area by a large group of nerve fibers called the arcuate fasciculus. It appears that Wernicke’s area is responsible for the retrieval of speech sounds from memory, and Broca’s area is responsible for sending the commands to the primary motor cortex to move the muscles of the mouth and tongue. Speech comprehension, then, appears to reside in many other areas of the brain, including some sections in other parts of the temporal lobe and the prefrontal cortex (Binder, 2015).

arcuate fasciculus: a tract of nerve fibers connecting Broca’s area to Wernicke’s area

Broca’s area: an area in the left frontal lobe important for speech production; it works closely with Wernicke’s area in the temporal lobe

frontal lobes: the lobes in the front of the cortex that contain the prefrontal cortex and the primary motor cortex

prefrontal cortex: an area in the frontal lobes involved in judgment and reasoning, and in working memory

primary motor cortex: an area in the frontal lobes responsible for directing movement of the body

parietal lobes: the lobes of the cortex directly behind the frontal lobes; they contain the primary sensory cortex

primary sensory cortex: the section of the parietal lobes responsible for our sense of touch throughout the body

occipital lobes: the lobes of the cortex in the back; they contain the primary visual cortex

primary visual cortex: the area of the occipital lobes involved in the early processing of visual information

temporal lobes: the lobes of the cortex on the sides; they contain the primary auditory cortex

primary auditory cortex: the area of the temporal lobes responsible for the processing of sounds

Wernicke’s area: an area in the left temporal lobe important for speech production along with Broca’s area in the frontal lobe

Corpus Callosum

A short description of one additional part will complete our introduction to the brain. The corpus callosum is the main structure connecting the left and right hemispheres in the brain. This very large brain part contains axons that allow neural signals to be sent from left to right and from right to left so that the brain can function as a coordinated whole.


There is little doubt that the left and right hemispheres are somewhat specialized so that they typically perform different functions, and we will describe some of those differences throughout the book. For example, many verbal functions, such as speech production, are handled by the left hemisphere. It would be an oversimplification, however, to say that the left side is the language side of the brain. Consider the parts of the brain involved in listening to someone speak, for example. Much of the processing that takes place that allows us to recognize words does indeed take place in the left hemisphere. Understanding speech, however, involves much more than simply decoding words. We need to draw inferences (make conclusions about unstated information), understand subtle references and humor, read non-verbal cues and facial expressions, and so on. Many of these functions are typically handled by the right hemisphere. So, would it really be fair to say that understanding speech takes place in the left (or right) hemisphere? Because the corpus callosum is so efficient at sending information back and forth, and because so many of our behaviors are extremely complex, it is a gross oversimplification to make generalizations such as the popular notion of left-brain or right-brain dominance.

Brain Research

For most of the history of brain science, the primary method for learning about how the parts of the brain relate to human behavior and mental processes was to examine case studies of patients who had suffered some kind of brain damage. For example, about 150 years ago Paul Broca discovered that many people who suffered strokes and lost the ability to speak had damage to a specific area on the left side of the brain. Broca’s area, as the part became known, was therefore assumed to be responsible for producing speech. The functions of many brain parts have been discovered this way. A second important technique has been to conduct research on laboratory animals; researchers produce brain damage in specific areas in order to determine more precisely the function of a particular area.

Both research methodologies are quite limited, however. As you recall from the introduction to case studies in Module 2, we can never be sure if our individual case is representative of the population at large. When we are relying on individuals who are suffering from some kind of disorder or injury, we already know that there is something unusual about the people, so it becomes especially dangerous to make an automatic generalization from them. Animal research presents a similar difficulty. We must be cautious about blindly generalizing the results of animal research to humans. Many times, we find that strong relationships between brain areas and behavior in animals are much weaker in humans. On the other hand, you certainly should not automatically reject research results simply because they were based on animal research. The anatomy of nervous systems is remarkably similar across species, and neural activity, by and large, is the same across a wide variety of animals, including humans. It is definitely reasonable to draw cautious conclusions from the results of research on animals, especially if it agrees with other sources of evidence. (Recall that Module 2 contained a discussion of the specific ethical issues related to doing research with animals.)

Research that tried to assess brain activity in normal human individuals actually began about 1930 with the invention of the electroencephalogram (EEG; see Module 1 for another description). Hans Berger reported that he was able to amplify and measure electrical activity in the brain from electrodes placed on the scalp. More recently, researchers have turned to more advanced neuroimaging techniques to provide new sources of converging evidence. These techniques give us a view of brain structures and their functions, allowing researchers to examine a working human brain without being too invasive. Two common current techniques are PET and fMRI; both allow researchers to measure brain activity. PET, or positron emission tomography, allows the researcher to track glucose consumption in the brain. Glucose is the basic sugar that provides energy for body systems. Areas that are currently using a great deal of energy (because they are active) will show an increase in the consumption of glucose. In order to track the glucose, research participants are injected with a solution of radioactive glucose, which can then be detected by a radioactivity detector. The consumption of glucose can then be measured while participants complete different mental tasks. Functional magnetic resonance imaging (fMRI) measures the release of oxygen from blood cells in the brain (when brain areas use energy, one of the byproducts is oxygen, so fMRI also measures energy use in the brain). Both techniques have led to an explosion in knowledge about the functions of different parts of the brain. By the way, EEGs are still commonly used. If you are interested in learning why a technique developed nearly 100 years ago is still a commonly used brain scanning technique, you will have to wait until Module 14.

Of course, there is much left to learn. This section has provided you with merely a brief overview of the major brain parts and their main functions. Other sections of the book provide many opportunities to expand on these ideas.

functional magnetic resonance imaging (fMRI): a brain imaging technique that measures the release of oxygen from blood cells in the brain, allowing researchers to track brain structures and their functions

positron emission tomography (PET): a brain imaging technique that allows researchers to track glucose consumption in the brain

  • Was any of the information about brain structure and function from your brain sketch or your description of a brain disorder in the Activate exercise contradicted by information from this section?
  • Which separate areas of the brain do you think would have many connections between them?

11.3 Chemicals in the Brain

  • Have you ever experienced “runner’s high?” If you have, how would you describe it?
  • Think of the drugs and other substances, legal and illegal, that have an effect on the brain. What kind of effect do they have? Do they seem to slow down neural activity or speed it up? From what you’ve learned about the brain so far, what do you suppose is the mechanism that makes them work that way?

Caitlyn loves to run. She started running regularly when she was about 30. Prior to that, she thought that her knees were too fragile to withstand the constant pounding and could not quite understand why someone would want to run long distances. She had tried occasionally to take up the activity and might have made it 3 miles once or twice, but she honestly never did understand the appeal. The idea of running 10 miles, or even 5 for that matter, sounded more like a punishment than an activity that someone would choose. One day, though, after about a month of running 2 miles at a time a few days per week, she decided to try to run 4 miles. Her knees did not fall apart, and her lungs did not explode. It did not even hurt at all. In fact, and this is the most amazing part, she felt better at the end of the run than she had at the beginning. She had discovered “runner’s high,” and she was hooked. Nowadays, she particularly loves running in the morning, before going to work. It can put her in such a good mood that she even looks forward to coming to work on cold, snowy, dark Monday mornings in January.

What is it, this runner’s high? Is it some fictional concept that lonely runners have cooked up to trick unsuspecting non-exercisers into joining them on the streets, a cruel “misery loves company” ploy? Well, it is for real; it is not a trick. We are serious when we tell you that, for many runners, the first mile of a run hurts more than the tenth, and running really does put them in a great mood. And it looks like the whole thing is a result of neurotransmitters, those little chemicals that are released by vesicles at the axon terminal buttons of neurons and float across synapses to land on receptor sites of neighboring neurons. Not all neurotransmitters, but endorphins . Endorphins are neurotransmitters that appear to serve the function of relieving pain and elevating mood, and they are released in response to exercise. They are also released in response to injury. Caitlyn’s husband Jose once played a softball game with a broken hand that he suffered during the third inning (he batted twice with the broken hand and got two hits). He reported that it hurt a little but did not start throbbing until after the game was over. Again, Jose’s body’s natural painkillers were probably responsible.

Have you ever had surgery? If so, you were probably given a very powerful painkiller drug during your recovery, perhaps even morphine. Morphine is in the class of drugs called opiates, a class that also includes heroin, one of the most famous abused drugs. Both drugs are chemically similar to endorphins. Opiates land on the same receptor sites that endorphins stimulate and fool the sites into responding as if endorphins had been released. Thus, the pain relief and mood-enhancing properties of opiate drugs come from their mimicry of the effects of endorphins in the nervous system.

Some neuroscientists are skeptical about the endorphin explanation of runner’s high. Some deny that the concept exists (non-runners, we would guess!), but others have suggested that different neurotransmitters are responsible. For example, research has found that endocannabinoid neurotransmitters might be equally, if not more, important for the feeling (Basso & Suzuki, 2017). You might recognize the word cannabinoid; these neurotransmitters are chemically similar to THC, the active drug in marijuana, and both fit into the same receptor sites. Thus, they may produce a natural effect similar to the effects of smoking or ingesting marijuana (Dietrich & McDaniel, 2004).

If this phenomenon sounds interesting to you, you should enjoy this section. It’s about neurotransmitters and neuropsychopharmacology, the study of how drugs affect neurotransmission.

endorphins: a class of neurotransmitters that are chemically similar to opiate drugs; they function to relieve pain and elevate mood

endocannabinoids: neurotransmitters that are chemically similar to THC, the active drug in marijuana

neuropsychopharmacology: the study of how drugs affect the neural communication process

Neurotransmitters

Altogether, researchers have identified more than 50 different neurotransmitters, and observers think that there are probably hundreds (Moini & Piran, 2020; Sapolsky, 2004). Most neurons release only one kind of neurotransmitter, but because they receive input from many different neurons, they receive many different neurotransmitters. As we are sure you can guess, with so many neurotransmitters, each one a key for a specific receptor site, the neural communication process can get very complicated. To further complicate matters, each individual neurotransmitter may have many different types of receptors that it binds to, so the keys can open a few different locks each. Consider one of the most famous neurotransmitters, serotonin , which appears to be involved in mood, aggression, appetite, cognition, vomiting, motor function, perception, sex, and sleep, along with some additional processes (Aghajanian & Sanders-Bush, 2002). There are at least 14 different subtypes of serotonin receptors throughout the brain, which helps explain how it can be related to so many different functions.

When you contemplate this kind of complexity, you can begin to see how the astonishing hypothesis—the idea that everything you think and feel can be traced to electrochemical activity in your brain—could be true. Think about it: if there are 300 neurotransmitters, each with an average of 5 different kinds of receptor sites that it will bind to, roughly 1,500 different types of synapses are required.

Let us briefly mention some of the most important neurotransmitters and a few of the functions with which they are involved. Depending on where in the nervous system the neurotransmitters appear (and the type of receptor sites), they may contribute to many different functions.

Remember from section 11.1 that neurotransmitters may be excitatory or inhibitory. The most common excitatory neurotransmitter in the brain is glutamate , and the most common inhibitory neurotransmitter is gamma-aminobutyric acid , abbreviated  GABA  (Nestler & Duman, 2002). The release of glutamate by axons in the reticular formation leads to arousal. The release of glutamate in the hippocampus and probably other brain regions appears to be related to the brain’s ability to change permanently as a consequence of experience—that is, to learn (Malenka & Nicoll, 1999). Because they are producing changes, you can easily see how these two effects could be considered excitatory (although it is not always this obvious). The other side of the coin is the inhibitory neurotransmitter GABA. When released by neurons in the amygdala, GABA reduces anxiety. GABA released in sensory areas of the brain may help with our ability to integrate input from different senses into a coherent whole (King & Schnupp, 2000). For example, if a friend is talking to you, your brain has to combine the visual and auditory input—processed by different brain areas—into a single experience.

Acetylcholine is a very common neurotransmitter in the peripheral nervous system. It can be excitatory or inhibitory. In its main excitatory role, acetylcholine is released by motor neurons and stimulates the muscle cells that produce movement. It is also the main neurotransmitter used by the parasympathetic nervous system, the part of the autonomic nervous system that calms the body down.

Norepinephrine and dopamine are chemically very similar to each other; norepinephrine is synthesized from dopamine, in fact (Byrne, 1997). Dopamine is usually an excitatory neurotransmitter; its release in the midbrain and some areas of the forebrain is related to reward, or pleasure (Drevets et al., 2001; Wise, 2004). Dopamine in the prefrontal cortex is related to working memory (Goldman-Rakic, et al., 2000). Norepinephrine, which can be excitatory or inhibitory, is the main neurotransmitter used by the sympathetic nervous system, which is what controls the “fight or flight” response. Norepinephrine also appears to be related to mood.

A few of these neurotransmitters are part of the discussion in other sections of the book. For now, here is a summary of these important neurotransmitters and some examples of their effects:

acetylcholine: a common neurotransmitter in the peripheral nervous system

astonishing hypothesis: the idea that everything you think and feel can be traced to electrochemical activity in your brain

dopamine: a neurotransmitter released in the midbrain and some areas of the forebrain; it is related to reward

gamma-aminobutyric acid (GABA): the most common inhibitory neurotransmitter

glutamate: the most common excitatory neurotransmitter in the brain

norepinephrine: the main neurotransmitter used by the sympathetic nervous system

serotonin: a neurotransmitter that appears to be involved in mood, aggression, appetite, cognition, vomiting, motor function, perception, sex, and sleep, along with some additional processes

Neuropsychopharmacology

Long before we had any idea about neurons and neurotransmitters, human beings were using substances to influence the levels of neurotransmitters. Natural remedies, consciousness-altering substances, even some primitive weapons were used by people because the effects on neurotransmission were desirable (for example, pleasurable or, in the case of weapons, deadly). For example, people in South America have eaten the extract of coca leaves for centuries. Today, that extract is processed into cocaine. Curare is a substance that South American native tribes have used to paralyze and kill prey. Both curare and cocaine have their effects on neurotransmission (curare on acetylcholine, cocaine on dopamine).

Today we call substances that have some kind of psychological effect neuropsychopharmacological drugs ; they work by influencing the neural transmission process in some way. They include clinical drugs that are used to treat psychological disorders, clinical drugs used to treat other disorders that have psychological side effects, and recreational and abused drugs, such as nicotine, alcohol, or cocaine.

Currently, biopsychologists and medical doctors divide neuropsychopharmacological drugs into two categories, based on whether they increase or decrease the effects of a neurotransmitter. Those that increase the effects of neurotransmitters are called agonists; those that decrease the effects are called antagonists. In addition, neuropsychopharmacological drugs work on both inhibitory transmitters and excitatory transmitters. The examples below will help you understand how these two dimensions interact to create four basic types of neuropsychopharmacological drugs.

  • Agonist for inhibitory neurotransmitter: Valium (chemical name: diazepam) increases the activity of the neurotransmitter GABA, which you will recall decreases anxiety when it is released in the amygdala. Hence, Valium acts as an antianxiety drug. Because GABA also has other functions, GABA agonists, such as Valium, have other effects as well. For example, Valium can cause sleepiness and memory impairment (Rudolph et al., 1999). These unintended effects are called side effects, but they are simply the natural effects of influencing the activity of a neurotransmitter on untargeted areas of the nervous system.
  • Agonist for excitatory neurotransmitter: Cocaine increases the activity of dopamine. Recall that dopamine release is related to pleasure, so cocaine has the quite direct effect of increasing pleasurable feelings.
  • Antagonist for excitatory neurotransmitter: Many people take antihistamines to treat allergies. They work by inhibiting the activity of the excitatory neurotransmitter called histamine; histamine from the hypothalamus causes arousal. Antihistamines therefore may lead to sleepiness. (The curare used to paralyze and kill prey is also an antagonist for an excitatory neurotransmitter, acetylcholine.)
  • Antagonist for inhibitory neurotransmitter: Antagonists of inhibitory neurotransmitters are not common. One fascinating example is oil of wormwood, the special ingredient of the liqueur absinthe. Absinthe was rumored to be a powerful hallucinogen (in addition to being a very potent liqueur); it was consumed by many creative people in the 19th and early 20th centuries. Mary Shelley was purported to be under the influence of absinthe when she wrote the novel Frankenstein. Although the actual hallucinogenic properties of oil of wormwood are not known, it is clear that the substance is an antagonist for GABA (Olsen, 2002), which helps to control sensory input.

Although this two-by-two organization is fairly simple, it gets complicated very quickly when you realize that agonists and antagonists can have their effects by different mechanisms. For example, an antagonist might work by blocking the receptor sites, by blocking the presynaptic membrane so that less of the neurotransmitter is released, or by chemically breaking down the neurotransmitter while it is floating across the synapses. An agonist might work by causing more of a neurotransmitter to be released from a synapse, by landing on a receptor site and “fooling” it into opening up, or by keeping neurotransmitters in a synapse for a longer time.

In order to understand one of the most important agonistic mechanisms—keeping neurotransmitters in the synapse—you need to know one more detail. Most neurotransmitters—for example, serotonin and dopamine—are released from the receptor sites and reabsorbed by axon terminals; the process is called reuptake.  An agonist often works by interfering with the reuptake process. For example, cocaine gets its agonistic effect on dopamine this way; it encourages neural activity in part by keeping this excitatory neurotransmitter in the synapses. Similarly, Prozac and Zoloft are antidepressants that have agonistic effects on serotonin by inhibiting its reuptake. These antidepressants are called (accurately but not gracefully) selective serotonin reuptake inhibitors (SSRIs).

agonist: a drug that increases the activity of a type of neurotransmitter

antagonist: a drug that decreases the activity of a type of neurotransmitter

neuropsychopharmacological drugs: drugs that work by influencing the neural transmission process in some way

reuptake: the process of reabsorption of neurotransmitters into axon terminal bulbs after their use in a synapse

  • Can you describe any other kinds of experiences that seem similar to runner’s high?
  • Review your list of common drugs that have psychological effects. It may include, along with other drugs you may have listed, alcohol, nicotine, caffeine, and marijuana. For each of these drugs, do you now think it is more likely to be an agonist or an antagonist?

Module 12: Sensation

Throughout this book, we have been emphasizing the everyday relevance of psychological principles. The modules on human sensation and perception will have a bit of a different feel to them. You might sometimes have difficulty keeping the relevance of these topics in mind, but for the exact opposite reason you might expect. It is not that sensation and perception are far removed from everyday life, but that they are such a basic, fundamental part of it. Seeing, hearing, smelling, tasting, and feeling the world are key parts of every single everyday experience that you have. Because they are so basic, though, we rarely give them a conscious thought. Thus, perceiving the outside world seems effortless and mindless. Effortless? Yes. Mindless? Not even close. Just beneath the surface, sensation and perception are an extremely complex set of processes.

That is what we find so interesting about sensation and perception. They involve processes so basic for our daily lives that it hardly seems worth calling them processes; you open your eyes and the world appears in front of you with no effort on your part. In reality, however, these processes require the work and coordination of many different brain areas and sensory organs.

Module 11 gave you a hint of the complexity of the brain. In Modules 12 and 13, you will see how some of the brain areas work together in complex processes like sensation and perception, which may be our most important brain functions. The thalamus, primary sensory cortex, primary auditory cortex, and primary visual cortex are all major parts of the brain. The visual cortex is so important that it occupies the entire occipital lobe. Indeed, a great many brain areas contribute directly to sensation and perception.

Module 12 covers sensation, the first part of the sensation and perception duo; it is divided into three sections. Section 12.1, Visual Sensation, begins to reveal the complexity of the visual system by showing you how even the “easy” parts of the process are literally more than meets the eye. Section 12.2, The Other Senses, describes the analogous sensory processes for the other senses. Section 12.3, Sensory Thresholds, describes the beginning stages of what the brain does with the signals from the world.

12.1 Visual Sensation

12.2 The Other Senses

12.3 Sensory Thresholds

By reading and studying Module 12, you should be able to remember and describe:

  • Parts of the eye and functions: cornea, sclera, iris, pupil, lens, retina (12.1)
  • How the retina turns light into neural signals: rods, cones, transduction, bipolar cells, ganglion cells, fovea, optic nerve (12.1)
  • Seeing colors, brightness, and features: Young-Helmholtz trichromatic theory, opponent process theory, lateral inhibition (12.1)
  • The auditory system: outer, middle, and inner ear, pinna, tympanic membrane, hammer, anvil, and stirrup, oval window, cochlea, hair cells, basilar membrane (12.2)
  • How we sense pitch: frequency theory, place theory (12.2)
  • Taste, olfaction, and touch: umami, taste buds, olfactory bulb (12.2)
  • Pain sensation: Gate-control theory (12.2)
  • Balance: proprioception, vestibular system, otolith organs, semicircular canals (12.2)
  • Absolute thresholds, difference thresholds, and signal detection theory (12.3)

By reading and thinking about how the concepts in Module 12 apply to real life, you should be able to:

  • Come up with applications of difference thresholds and Weber’s Law (12.3)
  • Come up with applications of signal detection theory (12.3).

By reading and thinking about Module 12, participating in classroom activities, and completing out-of-class assignments, you should be able to:

  • Note the parallels among the different sensory modalities (12.1 and 12.2)
  • Decide when it might be appropriate to employ the concepts from gate-control theory to control someone’s pain (12.2)
  • Based on your current knowledge of cameras and human vision, make a list of similarities and differences between the two.
  • Imagine that you live in the wild and have to be able to find food and avoid predators in order to survive. What would be the most important properties of objects for you to be able to see?

Although our experience is that there is a single mental activity involved in perceiving the outside world, psychologists have traditionally distinguished between sensation and perception. Sensation consists of translating physical energy from the world into neural signals and sending those signals to the brain for further processing. Perception is the set of processes that ultimately allow us to interpret or recognize what those neural signals represent. In this section, we will introduce you to visual sensation. The two main processes to describe are how the eye focuses light onto the retina at the back of the eye and how the retina transforms that light into the neural signals that are sent to the brain.

sensation: the processes through which we translate physical energy from the world into neural signals and send the signals to the brain for further processing

perception: the processes through which we interpret or recognize neural signals from sensation

The Eye is a Bit Like a Camera

In the case of vision, the physical energy that our sensory system translates into neural signals is light. What we call “light” is simply a specific type of electromagnetic radiation, energy that spreads out in waves as it travels. Other types of electromagnetic radiation include radio waves, microwaves, x-rays, and gamma rays (the highest energy rays; they are what you would use if you were trying to create the Incredible Hulk). The types of electromagnetic radiation differ in the amount of energy that they have, which can be expressed by the wavelength, or the distance between peaks of the waves. Visible light (for humans) is electromagnetic radiation with wavelengths from 350 to 700 nanometers (a nanometer is one billionth of a meter), which is actually only a very small portion of the total range of electromagnetic radiation.

Visual sensation and perception are a very active set of mental processes, but they begin somewhat passively, with the projection of light onto a surface at the back of the eye. As you may know, there is light all around us. As light bounces off of objects in the world, it enters the eye and is focused onto the surface at the back. As you will see, different properties of that light are translated into neural signals that lead to the sensation of the visual properties of the objects, such as color and brightness.

In some ways, the eye is like a camera. Both camera and eye have a hole that lets light in, a lens that focuses the light, and a surface onto which the light is projected. The outer surface of the eyeball is called the cornea ; it is like a transparent lens cap with an added function. It protects the eye, like a lens cap, but it also begins bending the light rays so that they can be focused.

When you look at an eye, you can see a white part, a colored part, and a black part. The white part is called the sclera; the colored part, the iris; and the black part, the pupil. The iris and pupil are the important parts for the eye’s function as a light-collecting device. The iris is a muscle that controls the amount of light that enters the eye by expanding or contracting the size of the hole in the center. The pupil is nothing more than a hole that allows light to get inside the eye. In bright light, the pupil remains rather small. In dim light, the pupil opens wide to allow as much of the available light as possible to enter. (A camera controls the amount of light by varying the size of its hole, called the aperture, and the length of time it stays open.)

Directly behind the pupil is the lens which, like all lenses, bends the light rays. The result of the bending is that the light is focused onto the surface at the back of the eye, called the retina . The lens is able to get the focused light to land precisely on the retina by changing its shape, a process called accommodation . (The camera’s lens focuses the light correctly by moving forward and backward.)


So ends the relatively passive part of the process. From here on, vision involves a great deal of brain work, beginning with the way the retina turns the light focused on it into neural signals and sends them to the brain. You will notice that our camera analogy begins to break down at this point. For example, one key difference between vision and a camera is that vision takes shortcuts. One key shortcut is similar to the process of interpolation used by some digital cameras to increase their resolution, or the “effective mega-pixels.” Here is how the camera works: pixels are separate, or discrete, areas of light that are projected onto a light sensor in the camera. The more pixels the camera can squeeze into a given space, the better the picture quality. Interpolation—essentially, making a guess about what color should fill in the sections that are missing (that is, the spaces between the pixels)—can increase the effective resolution of a camera. For example, if two adjacent pixels are sky-blue, the area in between them is probably sky-blue, as well. This is similar to what the eyes do, or more precisely, what the brain does with the input from the eyes. So our new analogy is that vision is like a cheap digital camera that improves its picture quality through interpolation.
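If you happen to know a little programming, here is a minimal sketch of the interpolation idea in Python. The pixel values and the simple averaging rule are ours, chosen only for illustration; real cameras use far more sophisticated algorithms.

```python
# Illustrative sketch only: guessing a missing pixel from its neighbors.
# The RGB values and the averaging rule are made up for this example.

def interpolate(left, right):
    """Guess the color of a missing pixel by averaging its two neighbors."""
    return tuple((l + r) // 2 for l, r in zip(left, right))

sky_blue_1 = (135, 206, 235)   # one known pixel (RGB)
sky_blue_2 = (133, 204, 233)   # the next known pixel (RGB)

# The guessed pixel in between comes out sky-blue as well.
print(interpolate(sky_blue_1, sky_blue_2))   # (134, 205, 234)
```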

Our camera-vision analogy really begins to fail, however, when we consider what happens after the light hits the retina. With a camera, the reproduction of the visual scene on the film is close to the end of the process. With human vision, it is barely the beginning. The process seems simple; we look at some object and we see it. As we have hinted, however, recognizing or interpreting visual information (i.e., visual perception) is extraordinarily complex. Let’s look at some of the important parts of the process.

cornea: the transparent outer surface of the eyeball; it protects the eye and begins focusing light rays

sclera: the white part of the eye

iris: a muscle that controls the amount of light entering the eye by expanding or contracting the size of the pupil

pupil: the hole in the center of the eye that allows light to enter and reach the retina

lens: located right behind the pupil, it focuses light to land on the retina

accommodation: the process through which the lens changes its shape to focus light onto the retina

retina: the surface at the back of the eye; it contains the light receptors, rods and cones

The Role of the Retina

There are three layers of cells in the retina. At the very back are the light receptors, neurons that react to light. The second layer is composed of special neurons called bipolar cells, and the top layer, on the surface of the retina, contains neurons called ganglion cells . As you probably noticed from this brief description, light must pass through the ganglion and bipolar cells, which are transparent, before reaching the light receptors. A few details about how these three layers of cells work will help you understand a great deal about how visual sensation works.

When light hits the light receptors, a chemical reaction begins. This chemical reaction starts the process of neural signaling (action potentials and neurotransmission that you learned about in Module 11). This translation of physical energy (in this case, light) into neural signals is called transduction. Two types of light receptors, rods and cones, named for their approximate shapes, are involved in this process for vision. Cones , located mostly in the center of the retina, are responsible for our vision of fine details, called acuity , and our color vision. The rods, located mostly outside of the center of the retina, are very sensitive in dim light, so they are responsible for much of our night vision. The rods are also very sensitive to motion (but not detail), something you have probably experienced many times when you can see something moving out of the corner of your eye but cannot make out what it is.

The relationship between the receptors, on the one hand, and the ganglion and bipolar cells, on the other, explains some of these differences between rods and cones. The rods and cones send neural signals to the bipolar cells, which send neural signals to the ganglion cells, which send neural signals to the brain. The way that the rods and cones are connected to the bipolar cells is the important property to understand.

Let’s start with the rods. Multiple rods connect to a single bipolar cell, which is connected to a ganglion cell. Thus, when the ganglion cell sends a signal to the brain, it could have come from one of several different rods. The brain will be sent information that one of the rods has been stimulated by light, but not which one. Precision, or vision of details, is low, but sensitivity is high; this sensitivity is what makes rods able to see in dim light.

In contrast, many cones, especially those in the center of the retina in an area called the fovea , are each connected to a single bipolar cell, each of which is connected to a single ganglion cell. Thus, cones have a direct line to the brain, which allows for very precise information to be sent, resulting in good sensitivity to detail. The light must hit the cone exactly, however, in order for the signal to be sent.
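For readers who like to see ideas in code, here is a minimal sketch of that wiring difference. The response values and the threshold are invented; the point is the trade-off between sensitivity (rod-style pooling) and precision (cone-style direct lines).

```python
# Illustrative sketch of retinal wiring (all numbers invented). Several rods
# pool onto one ganglion cell, so weak responses add up; each cone reports
# separately, so it needs a strong response of its own.

weak_responses = [1, 0, 1, 1]   # responses of four neighboring receptors in dim light
threshold = 2                   # activity a ganglion cell needs before it fires

# Rod-style wiring: the ganglion cell sums input from all four receptors.
print(sum(weak_responses) >= threshold)      # True: the dim light is detected,
                                             # but not *which* receptor caught it

# Cone-style wiring: each receptor has its own ganglion cell.
print([r >= threshold for r in weak_responses])
# [False, False, False, False]: too dim to detect, but in bright light each
# cell would report exactly where the light landed (better acuity)
```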

The signals from the rods and cones, once sent to the ganglion cells, are routed to the brain through the axons of the ganglion cells. The axons are bundled together as the optic nerve and leave the eye through a single area on the back of the retina. Because there are no rods or cones in this area of the retina, we have a blind spot.

To experience your blind spot, draw a small moon on the left side of a piece of paper and a small sun a few inches to its right. Cover your right eye and stare directly at the sun with your left eye. Hold the page about a foot away and adjust the distance slightly until the moon disappears.


visual acuity : our ability to see fine details

light receptors: neurons at the back of the eye that react to light; there are two kinds: rods and cones

cones: light receptors located mostly in the center of the retina; they are responsible for color vision and visual acuity

rods: light receptors located mostly outside the center of the retina; they are responsible for night vision

fovea: the area in the center of the retina (with many cones); it is the area with the best visual acuity

optic nerve: the bundle of ganglion cell axons that carries neural signals out of the eye and sends them to the brain

transduction : the translation of physical energy into neural signals in sensation

Seeing Color, Brightness, and Features

In order to recreate the outside world as sensations in the brain, separate parts of the visual system process different aspects of the input to represent the visual properties of the scene. Three of the most important visual properties are color, brightness, and features, so it is worth spending some time describing how our visual system processes them.

Color Vision

The other main job of the cones is to provide color vision. Visible light differs in intensity and wavelength (from 350 to 700 nanometers). Changes in intensity correspond to changes in the sensation of brightness, and changes in wavelength correspond to our sensation of different colors. When light hits an object, much of it is absorbed by the surface of the object. Specific wavelengths of light are reflected off of the object, though. It is the processing of these wavelengths of visible light by our visual systems that gives rise to the sensation of different colors. Color, then, is not a property of an object, or even a property of the light itself.

There are three types of cones, each especially sensitive to a different wavelength of light. The rates at which these three types of cones fire give rise to the sensation of different colors. This idea is known as the Young-Helmholtz trichromatic theory. According to the trichromatic theory, we have cones that are sensitive to long, medium, and short-wavelength light. We see long-wavelength light as red, medium as green, and short as blue, so the different cones are sometimes referred to as red, green, and blue cones. The cones fire in response to a wide range of light wavelengths, but each type is most sensitive to a specific wavelength. It is the relative rates of firing of the three types of cones that give rise to the sensation of different colors. For example, if the long-wavelength cones are firing a great deal, and the medium and short ones are firing little, we will see the color red. You probably noticed that there is no cone for yellow. If the long and medium wavelength cones are firing a great deal, and the short ones firing little, we will see the color yellow. In a similar way, we see all of the different colors through the pattern of firing across the three different kinds of cones.
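Here is a rough sketch, in Python, of the relative-firing idea. The firing levels and the cutoffs are invented for illustration; the point is that the pattern across the three cone types, not any single cone, signals the color.

```python
# Illustrative sketch of the trichromatic idea: the *pattern* of firing
# across the three cone types signals color. All numbers are hypothetical.

def rough_color(long, medium, short):
    """Map relative cone activity (0 to 1) to a color name, very crudely."""
    if long > 0.7 and medium > 0.7 and short < 0.3:
        return "yellow"   # long and medium firing strongly, short quiet
    if long > 0.7 and medium < 0.3 and short < 0.3:
        return "red"      # mostly long-wavelength cones firing
    if short > 0.7 and long < 0.3 and medium < 0.3:
        return "blue"     # mostly short-wavelength cones firing
    return "some other color"

print(rough_color(0.9, 0.2, 0.1))  # red
print(rough_color(0.9, 0.8, 0.1))  # yellow
```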

The trichromatic theory was a great idea; it dates back to the early 1800s, and there are very few theories from that long ago that are still accepted by the field. It is not a complete theory of color vision, however. It is better to think of it as the first step in color processing. The theory is not able to explain a couple of interesting observations about our color vision. First, people can easily describe many colors as mixtures of other colors. For example, most people can see orange as a yellowish-red (or a reddish-yellow) and purple as a reddish-blue. Some combinations of colors are never reported, however. Specifically, there are no colors that we see as reddish-green or blueish-yellow. There is simply something incompatible about these two pairs of colors. A second observation consistent with the idea that red-green and yellow-blue are kinds of opposites is the experience of color afterimages. For example, if you stare at a yellow object for about a minute and then look at a blank white space, you will see a ghostly blue afterimage of the object for a few seconds. Try it for yourself with any brightly colored object and a blank white wall.


These additional observations can be explained by the opponent process theory, first proposed as an alternative to the Young-Helmholtz trichromatic theory by Ewald Hering in the 1800s. We have red-green and blue-yellow opponent process ganglion cells in our retina. One opponent process cell is excited by red and inhibited by green light; there is also the reverse version, excited by green and inhibited by red. The other type is excited by blue and inhibited by yellow, along with the reverse, excited by yellow and inhibited by blue. Color information in later processing areas of the brain, such as the thalamus, is also handled by opponent process cells. Opponent process cells take over after the cones; thus, they handle later stages of color vision. Although trichromatic and opponent process theories were originally in competition as explanations of color vision, most psychologists now think of them as complementary. The three types of cones provide the first level of color analysis, and the opponent process cells take over and handle the later processing in the ganglion cells and the brain.

Brightness Vision

Color is such an obvious property of the visual world that you may be tempted to think that it is the most important aspect for visual sensation. It probably is not; it is more likely that brightness is. More precisely, it is probably contrast, or areas where light and dark come together, that is the key property. Why? It is because areas of contrast often mark the separation of objects; an area of contrast is often an edge of an object. Brightness contrast, then, allows us to see an object’s shape, an extremely useful piece of information for its eventual recognition.

Our visual system is constructed to be very sensitive to contrast, and even to enhance it. The enhancement occurs through a process called lateral inhibition. Bipolar cells have inhibitory connections to each other. When one fires because it is stimulated by bright light, it inhibits its neighboring bipolar cells from firing. If those neighboring cells are not also being stimulated by bright light (which is what would happen at a contrast area), the result is a very low rate of firing. The result is an enhancement of the contrast, as the dark area looks darker. The flip side happens in the bright area, too; because of reduced lateral inhibition from the neighboring dark areas, bright areas look brighter.
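If it helps to see the arithmetic, here is a minimal sketch of lateral inhibition along a single row of cells. The stimulus values and the inhibition strength are invented; notice how the responses dip just inside the dark region and spike just inside the bright one, exaggerating the edge.

```python
# Illustrative sketch of lateral inhibition along a row of bipolar cells.
# Stimulus values and the inhibition constant are hypothetical.

stimulus = [10, 10, 10, 10, 80, 80, 80, 80]  # a dark region meets a bright region
k = 0.2                                      # strength of inhibition from neighbors

responses = []
for i, s in enumerate(stimulus):
    left = stimulus[i - 1] if i > 0 else s
    right = stimulus[i + 1] if i < len(stimulus) - 1 else s
    responses.append(s - k * (left + right))  # each cell is inhibited by its neighbors

print(responses)
# [6.0, 6.0, 6.0, -8.0, 62.0, 48.0, 48.0, 48.0]
# The cell just inside the dark region (-8.0) responds even less than its neighbors,
# and the cell just inside the bright region (62.0) responds even more:
# the edge between dark and bright is exaggerated.
```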


More generally, you could say that the absolute brightness of some aspect of a visual scene is of little importance. It is the brightness of some area in comparison to a nearby area that is important. Something that looks dark when surrounded by lighter sections may look light when surrounded by darker ones.


Detecting Features

Eventually, visual sensation allows us to end up with a meaningful perception, or recognition of a scene. A key sensory process that allows us to build up to these final perceptions is the detection of specific features. Specialized neurons in our visual cortex fire rapidly when they are stimulated by input corresponding to specific features (Hubel & Wiesel, 1959; 1998). For example, if you are looking at a vertical line, the neural signals that result from that input will cause vertical line feature detectors to fire in your visual cortex. If you are looking at a different feature—for example, a diagonal or horizontal line—the vertical line feature detectors will fire little or not at all. These features are very simple and very numerous. Each is detected by a specific kind of neural feature detector.

Some feature detectors in our brains lie in wait for features in specific locations, for specific features anywhere in our visual field, and for features in limited areas of our visual field. As the neural signals corresponding to the simple features travel throughout the visual processing system, they are sent to detectors for more complex features that result from the combination of specific features, such as angles or corners (Hegde & Van Essen, 2000). Then, these more complex features are passed on to other processing sites that have feature detectors for more complex, and specific, features. For example, there are cells in the temporal lobes that fire in response to very specific shapes characteristic of particular scenes, others that respond to familiar objects, and still others that probably respond to human faces (Allison et al., 1999; Bruce et al.,  1981; Tanaka, 1996; Vogels et al., 2001).

  • Which of the three main visual processes outlined in the module (color vision, brightness vision, and feature detection) do you wish your visual system did better? Why?
  • If you were offered the opportunity to increase the number of cones in your retina, but the only way to do it was to give up some rods, would you make the trade? Why or why not?

12.2 How the Outside World Gets into the Brain: The Other Senses

  • Which sense do you think is the most important one? Why did you pick the one that you did?
  • If you were forced to choose, which sense would you give up? Why did you pick the one that you did?

It hardly seems fair. Vision gets an entire section devoted to it, while the rest of the senses all have to share one section. Why is there such a disparity? One reason is that psychologists know a great deal more about vision than about the other senses. There is simply more to say. At the same time, it would be difficult to argue against the assertion that vision is our most important sense. For a species that developed as hunter-gatherers, the visual properties of the world would seem the most useful for finding food and avoiding danger. Also, a far greater proportion of brain mass is devoted to vision than to the other senses. Finally, there are clear parallels between vision and the other senses. We will not need to describe some aspects of the other senses in as much detail because you will be able to recognize them from the corresponding processes in vision.

Our main goals in this section, then, are to describe some of the unique facts about the other senses, including the specific sensory organs, receptors, and brain areas involved, and to remind you of the similarities between the other senses and vision.

The physical energy that our auditory system turns into sounds is vibrations of air molecules that result when some object in the world vibrates. The vibrating object bumps into the air molecules, which radiate from the source in regular pulses in what we commonly call sound waves. The sound waves have two main properties that our sensory system is equipped to discern. Intensity, or the size of the air movement, is what we end up hearing as loudness, and frequency, or the speed of the pulses, is what we hear as pitch. Right away, you should recognize these two properties as analogous to intensity and wavelength of light for vision.

Our ears turn sound waves into bone vibrations, which are then translated into neural signals for further auditory processing. The sensory organs, of course, are the ears. The ear is divided into three main parts, the outer, middle, and inner ear. The outer ear collects the sound waves from the outside world, the middle ear changes them to bone vibrations, and the inner ear generates the neural signals. The outer ear consists primarily of the pinna , the semi-soft, cartilage-filled structure that we commonly refer to as “the ear.” In other words, it is the part of the ear that you can see. It is shaped somewhat like a funnel, and its main functions are to focus surrounding sound waves into the small areas of the middle ear (much like the lens does for vision) and to help us locate the source of sounds.

The middle ear consists of three bones sandwiched between two surfaces called the tympanic membrane and the oval window. This is the area where the sound waves are translated into bone vibrations. Specifically, inside our ear canal, we have a tympanic membrane , what you probably know as the eardrum. This membrane vibrates at the same rate as the air molecules hitting it. On its other side, the tympanic membrane is connected to three bones, called the hammer , anvil , and stirrup (only the stirrup really looks like its name; you really have to use your imagination for the other two). The bones, too, vibrate in concert with the air molecules. The stirrup is connected to the oval window , which passes the vibrations on to the inner ear. The main part of the inner ear is a fluid-filled, curled tube called the cochlea . This is where the vibrations get translated to neural signals, so the cochlea is the ear’s version of the retina.

introduction to psychology essay questions and answers pdf

The auditory receptors inside the cochlea are called hair cells ; they are located on the basilar membrane running through the cochlea. It is the movement of the hair cells that generates the action potentials that are sent to the rest of the brain. Two separate characteristics of the hair cell vibrations are responsible for our sensation of different pitches. The first one is the frequency of the hair cell vibrations. According to frequency theory , the hair cells vibrate and produce action potentials at the same rate as the sound wave frequency. The principle is complicated slightly by the fact that many sound wave frequencies are higher than the maximum rate at which a neuron can fire. To compensate, the neurons use the volley principle, through which groups of neurons fire together and their action potentials are treated as if they had been generated by a single neuron.
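Here is a quick numerical sketch of the volley principle. The firing-rate ceiling used below is a round number chosen for illustration, not an exact physiological limit; the point is that several neurons taking turns can jointly match a frequency that no single neuron could.

```python
# Illustrative sketch of the volley principle. The 1,000-per-second ceiling
# is a round illustrative figure, not an exact physiological limit.

max_rate = 1000         # roughly the fastest a single neuron can fire (spikes per second)
sound_frequency = 3000  # frequency of the sound wave (cycles per second)

# One neuron alone cannot keep up with every cycle of the wave...
print(sound_frequency <= max_rate)            # False

# ...but a group of neurons firing in rotation can cover every cycle together.
group_size = -(-sound_frequency // max_rate)  # ceiling division: neurons needed
print(group_size)                             # 3
```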

The second characteristic of the hair cell vibrations is their location along the cochlea. According to place theory , high frequency sound waves lead to stronger vibrations in the section of the cochlea nearer to the oval window, while lower frequency waves lead to stronger vibrations in the farther out sections. Together, frequency theory and place theory do a better job explaining pitch perception than either one can alone. The frequency of hair cell vibrations and action potentials may be a more important determinant of pitch for low and medium frequency sounds, and the location of the hair cell vibrations may be a better determinant for medium and high frequency sounds. Because both frequency and location are used for medium frequency sounds, our pitch sensation is better for these than for high and low ones (Wever, 1970).

pinna : the semi-soft, cartilage-filled structure that is part of the outer ear

tympanic membrane: the eardrum; it vibrates at the same rate as air molecules hitting it, which begins the process of translating the energy into neural signals for sounds

hammer, anvil, and stirrup: the three bones that are connected to the tympanic membrane; they transmit vibrations from the tympanic membrane to the inner ear

oval window: the area connected to the hammer, anvil, and stirrup; it passes vibrations on to the inner ear

cochlea: a fluid-filled tube that contains hair cells, the auditory receptors

hair cells: the auditory receptors; they vibrate when stimulation from the oval window reaches them

place theory : a theory that states that high frequency sound waves lead to stronger vibrations in the section of the cochlea nearer to the oval window, while lower frequency waves lead to stronger vibrations in the farther out sections

Taste and Smell

You may recall learning at some point that taste and smell are related to each other. Indeed, they are our two chemical senses, which translate chemical differences between substances into the experiences of different odors and tastes. The obvious biological benefit of the sense of taste is to help animals to distinguish between food and poisonous substances, nearly all of which are bitter. Have you ever noticed that food often does not taste as good when you are not hungry, though? A second benefit of taste seems to be to discourage us from eating too much of any one food (Scott, 1990).

The taste receptors on the tongue are actually of many different types; each responds to a different taste, such as sweet, sour, salty, and many types of bitter (Adler et al., 2000; Lindemann, 1996). Two more recently discovered taste receptors respond to umami and fat (Chaudhari et al., 2000; Gilbertson, 1998; Nelson et al., 2002). Umami is a Japanese word, as there is no corresponding English word. It is a taste that is sometimes referred to as savory, the taste characteristic that is common to meats and cheese, for example. It comes from the chemical glutamate, which occurs naturally in some foods and has been put into spice form as monosodium glutamate. The taste receptors are located on taste buds, which are distributed throughout the tongue on tiny bumps called papillae. Each taste bud has a variety of receptors on it, so we have the ability to detect different tastes throughout the tongue, contrary to the common belief that we have taste buds for specific tastes in specific locations. People differ from each other in the number of different receptors, and because we continually replace taste receptors, our own composition of receptors may change over time.

Each receptor type responds to a different chemical property, and different taste receptors have different ways of translating the chemical signals into neural signals (Gilbertson et al., 2000). For example, salty taste can result from the movement of sodium ions into taste receptors, and sour taste can result from hydrogen ions. Both mechanisms lead to a depolarization of taste receptors, which begins the neural signal. Taste information in the brain is routed through the medulla and thalamus, and finally to the cortex and amygdala for final processing.

Our sense of smell, or olfaction , is closely related to taste. One way you can see this is to notice how taste relies on our ability to smell. As many people have noticed, foods do not taste the same when you have a cold. If you do not allow people to smell, very few can even identify such distinctive tastes as coffee, chocolate, and garlic (Mozel, et al., 1969). The biological benefit of odor is clearly similar to that of taste, as well. It acts as another safeguard against eating dangerous substances. For example, few people will ever get spoiled food into their mouths if they can smell that it is rotten first. It may also be that olfaction complements our taste warning system. Taste sensations arise when chemicals dissolve in water (contained in saliva, of course). Odor, on the other hand, is most pronounced in substances that do not dissolve well in water (Greenberg, 1981).

Olfaction receptors are located in the back of the nasal passages, deep behind the nose. There are about 1,000 different types of smell receptors, and about 6 million total receptors (Doty, 2001; Ebrahimi & Chess, 1998). These receptors send neural signals to the olfactory bulb, a brain area directly above the receptors, just below the frontal lobe of the cortex. Similar to what we saw for taste, the olfactory bulb sends neural signals on to the cortex and amygdala.

olfaction: our sense of smell

olfactory bulb: a brain area directly above olfaction receptors responsible for processing smells

The sense that we think of as touch actually consists of separate sensing abilities, such as pressure, temperature, and pain. Separate receptors located throughout the skin that covers our bodies respond to temperature, pressure, stretching, and some chemicals. The chemicals can come from outside the body, or they can be produced by the body as in an allergic reaction. Neural signals from nearby touch receptors gather together and enter the spinal cord at various locations. These signals travel to the primary sensory cortex in the parietal lobes via a route through the thalamus.

Pain is particularly important because it is a warning sensation. Simply put, pain is an indication that something is wrong, such as illness or injury. Pain leads us to stop using an injured body part and to rest when injured or ill, so that the body can heal. Animals, including humans, can quickly learn responses that allow us to stop or avoid pain. In other words, we can easily learn to avoid things that can harm us.

Pain receptors are a specific class of touch receptors called nociceptors that are located throughout the body. Nociceptors respond to stimuli that can damage the body, such as intense pressure or certain chemicals. For example, the pain we feel from inflammation is the result of the chemical histamine stimulating nociceptors after its release at the inflammation site. Some nociceptors are located deep within the skin, and others are wrapped in a myelin-like shell, so these pain receptors respond only to extreme stimuli (Perl, 1984). Two different kinds of nerve fibers carry signals from pain receptors; one produces sharp, immediate pain and the other produces a slower, dull pain (Willis, 1985). Pain signals are processed in the brain by the cortex, thalamus, and probably other brain areas as well (Coghill et al., 1994).

The final sense that we will consider is a little different from the first five. At first, our sense of balance does not really seem to be about getting the outside world into our heads, but rather about our place in the outside world. It is more similar to the other senses than it appears at first, however. Similar to the other senses, our sense of balance comes from our nervous system’s ability to translate aspects of the outside world into neural signals. One key difference is that balance does not typically undergo further processing that leads to a conscious perception in the way that looking at a chair or tasting an ice cream cone does.

To get you thinking about how our sense of balance works, try this. Stand up and balance on one foot; you will probably find it very easy. Now try it with your eyes closed. If you have never tried this before, we suggest you do it far away from sharp corners because it is much harder than with your eyes open. Although you can definitely learn to balance without visual feedback, our sense of balance ordinarily comes from the integration of information from vision, proprioception, and the vestibular system.

Balance again on one foot with your eyes closed, this time with your shoes off if possible. This time, pay close attention to what your balancing foot and lower leg are doing. Even if your body is immobile, the muscles are hard at work while you are balancing. Our proprioception system is a key component of this ability. Proprioceptors are receptors throughout the body that keep track of the body’s position and movement. Neural signals are sent from the proprioceptors to the spinal cord, which sends back messages that adjust the muscles. So, ordinarily, the tiny muscle adjustments that you make, such as when you step off of a curb onto the street below, are outside of conscious awareness. Of course, you can be aware of these adjustments if you attend to them, so there is also neural communication between proprioceptors and the brain.

The vestibular system is a bit like a specialized proprioceptor that applies to the position of the head. Five interrelated parts located in the inner ear (two otolith organs and three semicircular canals) sense tilting and acceleration of the head in different directions. The vestibular system sends neural signals to the brainstem, cerebellum, and cortex (Correia & Guedry, 1978).

proprioception: a system with receptors throughout the body that keep track of the body’s position and movement

otolith organs and semicircular canals: structures in the inner ear that sense tilting and acceleration of the head in different directions

  • Answer the questions from the Activate for this section again (Which sense is the most important, and which sense would you give up?). Were any of your answers or reasons different after reading the section?
  • Draw as many of the parallels between different sensory modalities as you can. This will help you organize them, so you can keep track of them and remember them.

12.3. Sensory Thresholds

  • Are you good at detecting faint stimuli (e.g., a dim light in the dark or a quiet sound in a silent room)?
  • Are you good at detecting differences between similar stimuli, such as the weights of two objects, or the loudness of two sounds?
  • Which sensory mode is your most sensitive?

In section 12.1, we cautioned against carrying the camera-eye analogy too far. By now it should be clear that even the straightforward parts of our sensory systems do not simply create a copy of the outside world in the brain. From lateral inhibition in vision to the differences in taste receptors across people, there is ample evidence that the information that comes from the sensory system is not a recording. Now it is time to look ahead and consider how we begin to use our sensory information. It will become clearer and clearer from this section and Module 13 on perception that sensation and perception are extremely active, fluid, and constructive processes.

Although the goal of our sensory systems is to get a neural representation of the outside world into the head, that does not mean that we need a perfect copy of it in there. Creating a perfect copy of the world in the brain would require much more mental work than we can spare. Really, what we need our sensory systems to do is to give us enough information about the surrounding world to survive in it. Sometimes, as in the case of blocking out pain when you are concentrating, survival might depend on your ability to not sense something (see Module 13). For example, if a hungry lion were chasing you, it would be helpful not to notice how much your hamstring hurts. Other times, an efficient sensory system requires making guesses about what is out there from very little evidence.

Absolute Thresholds

The task of detecting whether or not a stimulus, any stimulus, is present is one of the most fundamental jobs of our sensory system. After all, in order to do any further sensory and perceptual processing, you have to know that something  is there. Even in the case of detection, however, you will see that it is not simply a matter of turning on the recorder.

It is possible to measure the absolute sensitivity of our sensory systems, but as you will see, our actual sensitivity in any given situation can vary considerably from that. This absolute sensitivity is called the absolute threshold , the minimum amount of energy that can be detected in ideal conditions, for example, in vision or hearing in a completely dark or quiet room with no distractions. The different sensory modes, then, have their own absolute thresholds and they are very impressive. Human beings can see a candle flame from 30 miles away on a dark night, hear a watch ticking from 20 feet in a quiet room, smell one drop of perfume in a three-room apartment, taste one teaspoon of sugar in two gallons of water, and feel the touch of a bee’s wing falling on the face from a height of one centimeter (Galanter, 1962).

Of course, there are differences across people. Your absolute threshold for vision might be better than your 81-year old grandfather’s, for example. Perhaps more importantly, or at least more interestingly, there are differences within people. Basically, your own absolute threshold can be very different at different times. It is a very simple idea. Your absolute threshold can change dramatically, depending on factors such as motivation and fatigue. For example, if you are being paid $5 to sit in a dark room for five hours during a psychology experiment and press a button every time you see a dim light, you will probably miss a few. Especially as the session wears on, your motivation may be low, and fatigue will be high, leading to a relatively high absolute threshold (in other words, a relatively bright light will be required for you to detect it). On the other hand, if you are a guard watching for an approaching enemy and are supposed to report every time you see a dim light on a radar screen, you are likely to see every possible light.

The relationship between thresholds and personal factors has been expressed mathematically by signal detection theory (Tanner & Swets, 1954; Macmillan & Creelman, 1991). According to signal detection theory, there are two ways to influence your absolute threshold. First, you can increase your sensitivity, something that is possible only through some kind of enhancement (such as eyeglasses or night vision goggles). The other way is to change your strategy for reporting the detection of a signal. This is the part that varies with factors like motivation and fatigue. If you are very motivated to see a dim light, your strategy may be to say that you see one whenever there is the slightest bit of evidence, so you will be sure to see all of the lights. Because you will be saying “there it is” so many times, however, you will also have a lot of false alarms, reporting a light when none is there. If you later want to reduce your false alarms, perhaps because you have been “crying wolf” too many times, you can change your strategy again, requiring a brighter light before you report that you see it. Of course, now you will increase the number of times that you miss a dim light that is really there. This relationship between hits, misses, and false alarms is the important lesson to be gained from signal detection theory. If you cannot increase your sensitivity, there will always be this type of trade-off. If you get a lot of hits, you will also get a lot of false alarms; if you get few false alarms, you will get a lot of misses.
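For the curious, here is a minimal simulation of this trade-off in Python. The signal strength, noise, and criterion values are all invented; the pattern to notice is that a lenient criterion buys more hits at the price of more false alarms.

```python
# Illustrative simulation of signal detection: a lenient criterion produces
# more hits *and* more false alarms than a strict one. All numbers are hypothetical.
import random

random.seed(1)

def run_trials(criterion, n=10000):
    hits = false_alarms = 0
    for _ in range(n):
        signal_present = random.random() < 0.5
        # Sensory evidence = background noise, plus a boost if a signal is present.
        evidence = random.gauss(0, 1) + (1.0 if signal_present else 0.0)
        responded_yes = evidence > criterion
        if signal_present and responded_yes:
            hits += 1
        elif not signal_present and responded_yes:
            false_alarms += 1
    return hits, false_alarms

print(run_trials(criterion=0.0))  # lenient observer: many hits, many false alarms
print(run_trials(criterion=1.5))  # strict observer: few false alarms, but many misses
```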

If you think about it, there are a number of situations in which an observer is asked to detect a faint stimulus, and thus, several real-life applications of signal detection theory. For example, a friend of ours once spent a week with his hand in a cast because the doctor examining his x-ray detected a hairline fracture that was not there. Because the doctor did not want to miss a broken bone, he adopted a strategy that increased his likelihood of getting a hit, but in this case, he got a false alarm. As another example, imagine a teenager trying to sneak in silently after missing curfew. He freezes on the stairs when he hears the slightest creak of a floorboard, sure that his mother has heard him and is getting out of bed. Another false alarm resulting from a high motivation to detect a stimulus.

absolute threshold: the minimum amount of stimulus energy that can be detected in ideal conditions

signal detection theory : a mathematical model that describes the relationship between sensory thresholds and personal factors, such as motivation and fatigue

Difference Thresholds

A second fundamental use of sensory information is detecting differences. One of our favorite key terms is the alternative name for difference threshold because it may be the most self-explanatory term in all of psychology; the term is just noticeable difference. It is, of course, the smallest difference between two stimuli that can be detected. Although the principles from signal detection theory can be applied to detecting differences, there is a second important way that these thresholds vary.

The notable fact about the just noticeable difference, often abbreviated JND, is that it is not a constant. For example, suppose you are holding a pebble in one hand; you may be able to detect the difference in weight if we add another pebble. In other words, the second pebble is more than a JND. What if you were holding a bowling ball in your hand, though? If we added a pebble now, you would not notice the difference; now it is less than a JND.

Over 175 years ago, researchers discovered that the JND is roughly proportional to the size of the comparison stimulus. If you are looking at a dim light, you can detect a small difference. On the other hand, if you are looking at a bright light, you need a larger difference before you can detect it. This relationship is known as Weber’s Law, and it holds for judgments of brightness, loudness, lifted weights, distance, and the concentration of salt dissolved in water, as well as many other sensory judgments (Teghtsoonian, 1971).

Again, you do not have to think hard to realize that applications of JND’s and Weber’s Law reach far beyond judging the loudness of tones in a psychology experiment. When one of the authors used to lift weights with a friend in college, we used to joke that adding five pounds to our current bench press weight was like wearing long sleeves; we would not even notice it (of course, if we had been bench pressing 25 pounds, we probably would have noticed it). Or think about how consumer products companies may take advantage of the JND. For example, if a company is going to decrease the size of a product (a secret price increase, as they will charge the same price), they will be sure to decrease it by less than a JND. This probably happens much more than you think, precisely because the companies have been successful at staying within the JND (and sometimes because they cheat by keeping the package size the same, reducing only the contents).
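To see how this works numerically, here is a small sketch. The 2 percent Weber fraction is an assumption we made up for illustration, not a measured value for lifted weights.

```python
# Illustrative sketch of Weber's Law: the just noticeable difference (JND)
# grows with the size of the comparison stimulus. The 2% Weber fraction
# below is an assumed, illustrative value.

weber_fraction = 0.02  # assumed: differences smaller than about 2% go unnoticed

def jnd(comparison_weight):
    """Smallest weight change likely to be noticed, given the comparison weight."""
    return weber_fraction * comparison_weight

print(jnd(0.5))   # 0.01: holding a half-pound pebble, a hundredth of a pound is noticeable
print(jnd(250))   # 5.0: bench pressing 250 pounds, changes under five pounds go unnoticed
```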

difference threshold (just noticeable difference, or JND): the smallest difference between two stimuli that can be detected

Weber’s Law: a perceptual law that states that the difference threshold for a stimulus is proportional to the size of the comparison stimulus

  • Describe some other examples where signal detection theory would apply.
  • Describe some other examples of JND’s and Weber’s Law.

Module 13: Perception

You will usually find sensation and perception treated separately, as we have done in this book, but you should realize that psychologists draw this distinction for ease of explanation only. You may be tempted to think of sensation as a somewhat straightforward translation of the outside world into brain signals, and perception as a heavily brain-dependent, higher-level set of processes that have little direct contact with the original outside world. You can see the distinction is somewhat artificial from some of the topics in Module 12, however. For example, a process seemingly as straightforward as detecting whether or not a stimulus is present is affected by your decision strategy. We sense brightness not in the absolute, but by comparing nearby objects to each other. So already, the brain is taking an active role in processing the neural signals that come from the outside world. You can see, however, that sensory processes do make extensive use of that information from the outside world. In perception, the brain steps to the forefront. That certainly does not mean that perception has no contact with the information from the outside world, only that the emphasis is on procedures that the brain uses to make sense out of the input.

Recall the “surviving in the wild” question asked in the Activate exercise at the beginning of Module 12. In the module, we suggested that brightness contrast, because it helps us separate objects, might be the most important visual property to help us survive. Of course, we would want to know more than simply where one object begins and another ends. Specifically, if you were trying to find food and avoid predators, you would want to know where something is, where it is going, and what it is. For example, there is a big difference between a hungry lion 30 feet in front of you sprinting out of the forest toward you, and a cute bunny 30 feet in front of you hopping into the forest away from you. So, an expanded list of processes essential for survival includes ones that allow us to locate objects and perceive their motion, and then to recognize what they are. These are the key perceptual processes, and they are quite complex, comprising several sub-processes. They include:

Localization and organization

  • Perceiving distance using monocular and binocular cues
  • Perceiving motion
  • Grouping parts of a scene into a single object and grouping objects together

Recognition

  • Bottom-up processing, such as detecting features (which you saw briefly in module 12 already)
  • Using top-down processing (expectations and context) to recognize objects

This module has four sections. As we did in Module 12, we will cover perceptual topics for vision and the other sensory modes separately. Section 13.1 describes how we perceive distance and motion in vision, the main processes involved in localization. Section 13.2 covers organization in vision. It describes how we group different parts of a scene together to see distinct objects. Section 13.3 is about recognition in vision and about all three processes (localization, organization, and recognition) in the other senses. You will read about our brain’s remarkable ability to reach a final perception by combining sensory input from the world with its own expectations. The section concludes with a brief discussion of sensory integration, the process through which we combine the input from the different sensory modalities into a unified experience. Section 13.4 covers attention, an important precondition for turning a sensation into a full-blown perception.

13.1. Localization in vision: Where is it and where is it going?

13.2. Organization in vision: How do the pieces fit together?

13.3. Recognition: What is it? And the Other Senses

13.4 Attention

By reading and studying Module 13, you should be able to remember and describe:

  • Basic idea of localization, organization, and recognition. (13 introduction)
  • Monocular distance cues: linear perspective, interposition, relative size, relative height, texture gradient, motion parallax (13.1)
  • Binocular distance cues: retinal disparity (13.1)
  • Size-distance illusions (13.1)
  • How we perceive motion (13.1)
  • Gestalt principles of organization: similarity, proximity, figure-ground perception, good continuation, connectedness, closure, temporal segregation, common region (13.2 and 13.3)
  • Bottom-up and top-down processing (13.3)
  • Expectation and context effects (13.3)
  • Localization in the other senses (13.3)
  • Sensory integration: superior colliculus (13.3)
  • Selective attention and divided attention (13.4)
  • Multimode model of selective attention (13.4)

By reading and thinking about how the concepts in Module 13 apply to real life, you should be able to:

  • Identify monocular distance cues in scenes and art (13.1)
  • Identify Gestalt principles in real-world perceptions (13.2 and 13.3)
  • Come up with your own real life examples of context and expectation effects in recognition (13.3)
  • Generate your own real life examples of divided and selective attention tasks (13.4)

By reading and thinking about Module 13, participating in classroom activities, and completing out-of-class assignments, you should be able to:

  • Draw a picture that uses monocular cues to give the appearance of distance (13.1)
  • Explain how distance cues lead to size-distance illusions that were not covered in the text (13.1)

13.1. Localization in Vision: Where Is It and Where Is It Going?

  • Look out of a window that has a good long-distance view. How does the appearance of close objects compare to the appearance of far away objects? List as many differences as you can.

In order to visually perceive where something is, you have to perceive how far away it is and in which direction and at what speed it’s moving. Localization, then, is a matter of perceiving both distance and motion.

Distance Perception

How do you see distance? The naïve understanding of vision is that distance is something that is directly perceived. Some objects are simply farther away than others, and the eye must somehow record that difference. The problem with this idea is that the three-dimensional world needs to be projected onto a two-dimensional retina at the back of the eye. The loss of that third dimension means that distance cannot be directly “recorded” by the eye. For a simple demonstration of this fact, try looking at a car that is far away from you. The car in your visual field appears very small. Then imagine looking at a toy car sitting on a table near to you. Both cars might project the same-size image to your retina, so your brain must be able to figure out the difference in their actual sizes and their distances. Although it happens with no conscious effort on your part, it is actually a complicated task.
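One way to see why the retinal image alone is ambiguous is to work through the rough geometry. For small angles, the size of the image on the retina depends on the ratio of an object’s size to its distance. The specific numbers below are ours, chosen only for illustration (a real car roughly 4.5 meters long viewed from 90 meters, and a toy car 5 centimeters long viewed from 1 meter); they are not measurements from the text:

\[
\text{visual angle} \approx \frac{\text{object size}}{\text{object distance}}, \qquad \frac{4.5\ \text{m}}{90\ \text{m}} = \frac{0.05\ \text{m}}{1\ \text{m}} = 0.05\ \text{radians} \approx 3^{\circ}
\]

Because the two ratios are equal, the full-sized car and the toy car cast retinal images of about the same size, so the image by itself cannot tell you which one you are looking at. The brain needs the additional cues described next to resolve the ambiguity.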

Monocular Cues

The brain reconstructs distance by using information beyond the image of the single object projected on the retina. There are a number of cues to distance that the brain uses to do this; they are divided into binocular cues and monocular cues. Binocular cues work because we have two eyes; monocular cues need a single eye only.

Common monocular cues include the following:

  • Linear perspective. As you look at lines over distance, they appear to converge, or come together. This convergence of lines is called linear perspective.
  • Interposition. Although the term “interposition” is probably new for you, the concept is extremely simple. Sometimes, when you are looking at two objects at different distances, you can judge their relative distances because the closer object partially blocks your view of the farther object. So when parents complain that they cannot see the television because a child is in the way (the “dad joke” way to say it is that you make a better door than a window), they are complaining about interposition.
  • Relative size. If two objects that are the same size are different distances away from you, the farther object will appear smaller than the closer object.
  • Relative height. Objects that are farther away appear to be nearer to the horizon than closer objects. This means that above the horizon, the far away object appears lower than the close object; below the horizon, the far away object appears higher than the close object.
  • Texture gradient. A gradient is a change in something. The apparent change in texture, or the texture gradient, is a cue that you are looking at something over distance. For example, imagine standing at the edge of a long field of grass. Very close to you, the texture appears very rough; you can see individual blades of grass and many details of the surface. As you move your gaze farther away, the field begins to look smoother; you cannot see as many details, and you cannot see the individual blades of grass. Very far away, the field looks like a smooth, green surface. That change in apparent texture, from rough to smooth, is the texture gradient that tells you that you are looking at a change in distance.
  • Motion parallax. Motion parallax is the one monocular cue that requires you to be in motion to use it. If you are moving, riding along in a car, for example, the world outside appears to move in the opposite direction of your motion. The speed with which the world appears to move is a cue to how far away an object is. Houses that appear to move slowly will be perceived as farther away from you, while parked cars that appear to move very fast will be perceived as very near to you.


localization: the process of perceiving where something is – how far away and in which direction – and whether or not it is moving

monocular cues: distance cues that require the use of a single eye only. They include linear perspective, interposition, relative size, relative height, texture gradient, and motion parallax.

Binocular Cues

Did you ever wonder why animals have two eyes? One of the main reasons is that they provide binocular cues to help us to perceive distance. One major binocular distance cue is retinal disparity . Because your eyes are a few inches apart from each other, when you focus both eyes on a single object, each eye sees the object from a slightly different angle. Try this little demonstration: take your left index finger and point it at the ceiling, with the first knuckle touching the tip of your nose. Then alternate looking at the finger with your left eye and right eye. You will be able to see your fingernail with your left eye, but not with your right eye. In addition, it appears that your finger is jumping back and forth each time you switch eyes. Now hold your index finger in front of you with your arm fully extended. Again, alternate looking with your left and right eyes. You can tell that each eye has a different angle of the finger, but the difference is much less pronounced. Your finger still appears to jump back and forth, but much less than it did when it was touching your nose. You have just demonstrated that retinal disparity is reduced when the object is farther away. Quite simply, the greater the difference in view between the two eyes—that is, the more retinal disparity there is—the closer the object is to you.
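The finger demonstration can be captured with a rough approximation: the angular difference between the two eyes’ views shrinks as the object gets farther away. Assuming the commonly used textbook figure of about 6.5 centimeters between the eyes, and object distances of 30 centimeters and 3 meters that we chose purely for illustration:

\[
\text{disparity} \approx \frac{\text{distance between the eyes}}{\text{distance to the object}}, \qquad \frac{0.065\ \text{m}}{0.3\ \text{m}} \approx 0.22\ \text{radians} \approx 12^{\circ}, \qquad \frac{0.065\ \text{m}}{3\ \text{m}} \approx 0.022\ \text{radians} \approx 1.2^{\circ}
\]

An object ten times farther away produces roughly one tenth the disparity, which is exactly the relationship the brain exploits: large disparity means close, small disparity means far.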

The View-Master, a toy that has been around for more than 70 years, uses the principle of retinal disparity to give the illusion of distance. Each picture that you see when looking into the View-Master is composed of two slightly different versions of the picture, one projected to each eye. The different versions are interpreted as retinal disparity, and as a result, the scene appears to be three-dimensional.


binocular cues: distance cues that require the use of two eyes

retinal disparity: a binocular cue; the difference between the image projected to the left and right retina is a cue to how far away some object is

Because distance and size are not directly perceived, but rather figured out from cues, we might be wrong occasionally. When we are, we fall victim to one of the many size-distance illusions that affect us. Although illusions are, by definition, errors, they result from the operation of the same perceptual processes that ordinarily lead us to correct judgments. For example, look at the diagram below on the left. It looks like a woman being chased down a long hall by a taller monster. The two characters are the same size on the page, however. How are we tricked into seeing the monster as taller than the woman? Well, two distance cues, linear perspective and relative height, tell us that the woman is close and the monster far away. A third distance cue, relative size, leads us to the error, though. If the monster is the same size as the woman in real life, he should look smaller when he is judged to be farther away. His appearance is not smaller like we would expect, however. We cannot judge that he is closer because of the cues suggesting he is far away, so the only alternative is to conclude that the monster is actually larger than the woman.


We are not saying that this conclusion is a conscious decision. You do not consciously decide that something is far away, close, large, or small. Your brain uses the distance cues and draws the conclusion unconsciously; to your conscious mind, it feels as if the distance and size are perceived directly, though, despite all of the work that goes into it.

Motion Perception

When perceiving distance, the image that is projected to your retina is ambiguous. For example, a small retinal image can mean you are looking at a small, close object or a large, far away object. You needed extra distance cues to help sort out the ambiguity. Perceiving motion also involves ambiguous retinal images that require additional cues to resolve. The problem is more difficult, however, because often the retinal image is completely misleading. It seems reasonable to suppose that moving objects would cast a moving image on your retina, and stationary objects would cast a stationary image. This is not typically the case, however. If you are watching the moving object, the image will be fairly stationary, as it is maintained in the fovea in the middle of your retina (your eyes will track, or move along with the object to do this). On the other hand, if you move your eyes away from a stationary object to look at something else, the image will zip across your retina. In both cases, you are likely to perceive the motion correctly, however.

Two key cues that allow us to detect motion are contrast and eye-head movements. Contrast in general is an important concept for perception; you already saw how brightness contrast is enhanced in visual sensation. For the perception of motion, we are interested in the contrast in movement between different elements of a scene. When you move your eyes around, the whole world around you appears to move. You do not perceive that as motion, however, because everything is “moving” at the same rate and direction. It is only when some objects in your view move and others do not that you perceive motion. This is an extremely effective cue that tells us that part of the scene is moving (Wallach et al., 1985). Researchers have also shown that neurons in the temporal lobes are sensitive to this kind of contrast (Tanaka et al., 1993).

The second cue for motion perception is eye and head movements (Epstein & Hanson, 1977; Stork & Musseler, 2004). Information about eye movements is sent from the muscles of the eyes—another example of proprioception—and from the motor commands from the brain to the visual processing areas, such as the visual cortex. Both of these sources of information can provide cues that the eyes are tracking a moving object (O’Regan & Noe, 2001).

  • Draw your own picture (or pictures) illustrating the monocular distance cues.
  • Can you use the information on size-distance illusions to explain why the moon looks larger when it is on the horizon than when it is higher in the sky?

13.2. Organization in Vision: How Do the Pieces Fit Together?

  • Look at the room around you. What are the separate objects that you see? Which aspects of the scene are the most important for allowing you to see the different objects in your view?

Localization is certainly important for turning a sensation into a meaningful perception, but it is just the beginning. The next key step is to begin to assemble the different pieces of a visual sensation into a unified whole. Only then can the table be set for a final recognition. The processes through which we create this unified whole are referred to as organization, and the main ones are referred to as Gestalt principles.

Grouping Using Gestalt Principles

The most important strategies that the brain uses to organize parts of a scene into distinct objects and to group objects together were discovered by the group of German psychologists called Gestalt psychologists ; they were introduced in Unit 2 in the discussion of problem-solving. The Gestalt psychologists were interested in how human perception can construct a single, coherent, whole perception from the individual parts. They proposed that the brain must augment the physical input (the parts) by imposing its own organizing principles; it is from the Gestalt psychologists that we get the common idea that the whole is greater than the sum of its parts. Their chief concern with perception was how we decide which parts of a possible perception should be grouped together into objects or sets of objects.

You should keep these two related kinds of grouping in mind: grouping separate elements together into a single object and grouping different objects together into sets. We can use the Gestalt principles to describe both kinds of groupings. The Gestalt psychologists identified many principles, the best-known being the following. It is important to note that a given scene might require the application of more than one Gestalt principle.

Similarity, Proximity, and Connectedness

Objects will tend to be grouped together when they are similar to each other (similarity), when they are close to each other (proximity), or when they are physically connected to each other (connectedness).


Sometimes the different Gestalt principles lead us to make the same grouping, sometimes not. If you have ever watched a soccer game between two teams of six-year-olds, you can use different groupings based on similarity and proximity to help make sense of the action. All of the children with the same-colored shirt are on the same team; in other words, a similarity grouping helps you figure out which players you should be cheering for. And you can tell where the ball is on the field if you temporarily lose sight of it because most of the children tend to cluster around the ball; in those situations, you are using a grouping based on proximity (in case you have never had the pleasure to see for yourself, six-year-olds have not figured out that sometimes, you are not supposed to be next to the ball).

Figure-Ground Perception

The observation that multiple groupings are possible points out the need for another Gestalt principle, called figure-ground perception. According to this principle, we can shift our attention throughout a scene to pick one section as the object of interest, or figure, and relegate the rest of the visual information to the background, or ground. At the soccer field, you might perceive the group of children who are bunched around the ball as the figure and the rest of the children, the coaches (who are also on the field), the referee, and the field itself as part of the background. If you are paying attention to one specific kid chatting with a friend from the opposing team, they are the figure, and everything else is the background. The best-known illustrations of our ability to switch figure and ground are reversible figures, figures that can have two completely different interpretations created by switching figure and ground. Even when the objects themselves are unambiguous, we may shift our figure-ground perception at will to make one part the figure and the rest the background.


Good Continuation

Grouping by good continuation is a principle that helps us to see the simplest pattern in the input. We have a preference for groupings that allow us to see a smooth, continuous form. So you are more likely to group (or see) the dots in the diagram below as two intersecting curved lines, rather than four separate segments.


In our quest to perceive a coherent, whole perception (a Gestalt), we may have to add to what is really there. In other words, when other Gestalt principles strongly suggest a certain grouping but the picture is incomplete, we may use closure to fill in the missing gaps. For example, the similarity principle suggests that the three simple angles in the figure above should be grouped together to form a triangle that is underneath a second triangle formed by the rectangles with the angles missing. That top triangle is not really in the picture, however. Closure allows us to complete the triangle, though, and we see the whole shape.

This is probably a good time to remind you that even though most of the examples have focused on how we group separate objects of a scene into sets, they also can be used to group different parts into a single object, as in the closure example.

Recent researchers have added to the Gestalt grouping principles. For example, elements that appear at the same time tend to be grouped together, a principle known as temporal segregation (Singer, 2000). To use another soccer example, the players who run onto the field together are perceived to belong to the same team. Finally, elements that are bound into a common region tend to be grouped together (Palmer, 1992). Many soccer parks in our town have several fields on them. The children who are confined to one field constitute a single grouping, a game.

It is also worth noting that recent research results have found evidence in the brain for some of the original Gestalt principles. For example, there appear to be figure-ground cells in the cortex that respond to one figure-ground grouping and not its reversal, suggesting that “figure” is a feature coded by the visual system (Baylis and Driver, 2001).

closure: a Gestalt principle that says that we tend to fill in missing perceptual information

common region: a perceptual principle that says that objects that are found in the same space tend to be grouped together

connectedness: a Gestalt principle that says that objects that are connected to one another will be grouped together

figure-ground perception: a Gestalt principle that says that we can shift our attention to pick out one part of a scene and to shift the rest to the background

Gestalt principles: a set of principles that describe how we organize sensory input, mostly by grouping or separating individual parts; they were originally discovered by Gestalt psychologists in the early 20th century

good continuation: a Gestalt principle that says that we have a preference for seeing patterns that are smooth, continuous forms

proximity: a Gestalt principle that says that objects that are close to one another will be grouped together

similarity: a Gestalt principle that says that objects that are similar to one another will be grouped together

temporal segregation: a perceptual principle that says that objects that appear at the same time tend to be grouped together

  • Come up with some visual examples of the Gestalt principles of similarity, proximity, figure-ground perception, good continuation, and closure.

13.3. Recognition: What Is It?

  • Did you ever notice that it takes you an extra moment to recognize a familiar person in an unfamiliar location (for example, your psychology professor in the grocery store)? Why do you think that is?
  • Have you ever walked into a dark room when you were (already) frightened, and mistaken a harmless object, such as a stuffed animal, for something much more sinister and dangerous? Why do you think that happens?

We have just crossed over a fuzzy and somewhat arbitrary line. Although the ideas we have talked about already are important for localizing and organizing, they also contribute mightily to final recognition. For example, when you group the eleven children on one side of the soccer field together, it is only a small step beyond that for you to recognize them as The Blizzards (the team’s name). Nevertheless, it seems useful to separate the processes as we have done, as is typical within psychology, as long as you realize that these earlier processes contribute to recognition.

Think back even earlier, to the visual sensation processes we talked about in Module 12, such as detection of brightness contrast, features, and color. Add to those the localization processes of distance and motion perception. All of these processes help you to recognize objects. In some ways, these earlier parts of the overall recognition task are like putting together a jigsaw puzzle. Small regions of the puzzle are assembled out of individual pieces. Then, those small regions are combined into larger sections, which are assembled into the final completed puzzle. In the same way, our perceptual system builds up to a final recognition from simple features, such as colors and lines, through more and more complex features, such as angles, shapes and surfaces, all the way up to a complete scene, a soccer game. This kind of perceptual processing, in which a final recognition is “built up” from basic features is called bottom-up processing . It begins “out in the world,” with the basic properties of the objects to be perceived.

We can push the puzzle analogy a bit further to introduce you to the other major type of processing that takes place during recognition. Think about the procedure that many people use when they assemble jigsaw puzzles. They spread out the pieces on the table in front of them and prop up the cover of the puzzle box, so they can see what the completed puzzle is supposed to look like. That box cover tells them which pieces belong in which areas. For example, the brown pieces might be part of a horse’s body, which belongs on the lower left side of the puzzle, according to the picture on the box. We have a set of mental processes in perception that correspond to the puzzle box cover. It is called top-down processing, and it consists of expectation effects and context effects (the Gestalt principles are essentially top-down processes, too, as they are organizing strategies imposed by the brain). Just like referring to a picture on a box when assembling a jigsaw puzzle, the top-down processes help you predict what will go where in your final perception, or recognition. Even better, they help you to direct your attention to the appropriate areas so that you can recognize objects and scenes very quickly.

The combination of bottom-up and top-down processes typically makes final recognition efficient and effortless. We have already spent some time on the bottom-up processes, so let’s turn to top-down ones. First consider how expectation effects influence recognition. Suppose you worked your way through high school as a kids’ soccer referee. Through this experience you have come to expect certain things. For example, at the beginning of each half and after goals, the teams assemble on their respective halves of the field. After a goal is scored, then, you have an expectation. In other words, you know where to look if you want to find the different teams. This is the basic idea behind the expectation effect. Because you know what to look for, it becomes easy for you to find it. Although this seems obvious and perhaps uninteresting, it is important because these top-down processes are extremely powerful.

A simple example of the interplay between top-down and bottom-up processing will help you to see how they work together to give us such an effective recognition system. Imagine that you are trying to recognize a printed letter on a page. Bottom-up processing, such as feature detection, sends the signal that you are looking at a vertical line and a horizontal line. The fact that the letter follows two other letters, C and A, sets up an expectation. If the three letters are to form a word, only a few letters, such as B, D, M, N, P, and T will fit. Final recognition of the letter “meets in the middle,” as the powerful bottom-up effects of detecting the features and top-down effects of expecting certain letters allow you to instantly see it as the letter T. Although most real-life experiences of recognizing objects are more complicated, the same basic “meeting” of top-down and bottom-up processes occurs.


Ordinarily, the top-down processes help you to perceive the world accurately and instantly. For example, given your expectation during a soccer game, it only takes a quick glance to find the different teams after a goal. Your expectation, however, can be powerful enough to change your perception. For example, in 2003, Las Vegas magician and tiger trainer Roy Horn, of the team Siegfried and Roy was attacked, dragged offstage by his neck, mauled, and nearly killed by a 600-pound tiger during his act (Roy actually died in 2020 from COVID-19). Roy’s partner, Siegfried, reported soon after the attack that the tiger was trying to help Roy during a moment of confusion (Roy had just tripped). Animal behavior experts disagreed. They noted that the tiger went for Roy’s neck, the key killing behavior that tigers use on their prey, an interpretation shared years later by one of the team’s animal trainers (Nash, 2019). Because of their different expectations—Siegfried thought about the tiger as a partner in the act, even a friend, while the animal behavior experts thought about the tiger as an instinctual killer—they perceived the same behavior very differently. You will be able to find many similar examples of someone’s expectation changing the way that he or she perceives something.

The second key type of top-down process is context effects, in which the objects or information surrounding the target object affect perception. When you see your psychology professor walk into class every day, it is easy to recognize him or her in the familiar context of a classroom. But have you ever run into a professor (or other teacher) in an unexpected location, such as a grocery store, or even a bar? If so, you might not have recognized him or her at first because of the unusual context. As with expectation effects, context effects can be powerful enough to change your actual recognition. The middle character in the example below can look like the letter B or the number 13, depending on the context in which you find it (Biderman, Shir, & Mudrik, 2020). And again, think of the Siegfried and Roy example. A 600-pound tiger pouncing on a man, grabbing him by the neck, and dragging him to another location would certainly not be perceived as the tiger helping the man if it occurred in the context of an expedition in the wilderness of India. You should also be able to see that context and expectation are related; often it is the context that helps set up an expectation.


bottom-up processing: perceptual processing that leads to recognition by beginning with individual features in the world and “building up” a final recognition

top-down processing: perceptual processing that leads to recognition by beginning with the brain, which directs (via expectation and context effects) how recognition proceeds

expectation effects: a top-down processing effect in which having an expectation leads an individual to perceive some stimulus to be consistent with the expectation

context effects: a top-down processing effect in which the information that surrounds a target stimulus leads an individual to perceive the stimulus in a way that fits into the context

Localization, Organization, and Recognition in the Other Senses

Many of the principles that we identified for vision apply to the other senses too, so it seems unnecessary to repeat them in great detail. Basically, regardless of the sensory modality, you need to localize, organize, and recognize. Top-down processing and similar organization principles affect hearing, touch, taste, and smell. Let us spend a few minutes discovering how these ideas apply to the other senses, then. At the end of the section, we will talk about how our brain takes the input from the different senses and assembles it into a single perceptual experience.

You will recall that the outer, middle, and inner ear translate air vibrations into bone vibrations via the tympanic membrane, hammer, anvil, and stirrup, and then into neural signals via the oval window and hair cells (in the cochlea). The intensity of the vibrations is translated into loudness, and the speed, or frequency of the vibrations is translated into pitch. This is a far cry from the rich detailed auditory world in which we live. The basic sensory processes as we described them in Module 12 explain how we detect the beeps of a hearing test in an audiologist’s office, but how do we get from there to hearing complex sounds, such as speech, music, and city noise?

First, there are relatively few pure tones, made up of a single frequency, in the natural world. Complex sounds, however, can be broken down into their component frequencies, so the auditory system has a set of processes that are analogous to feature detection from vision.

Of course, localization, organization, and recognition are essential for the perception of sounds. Localization takes place through a process similar to the binocular cues from vision. When a sound comes from one side of the body, it reaches the corresponding ear sooner and is louder to that ear (Gaik, 1993; Middlebrook et al., 1989). The brain is extremely sensitive to these tiny time and loudness differences and uses them to locate the source of the sound. There are also hearing analogs of some of the monocular cues to distance. For example, close objects make louder and clearer sounds than far away objects.
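To appreciate just how tiny these time differences are, consider a back-of-the-envelope calculation. Using round numbers that are only illustrative (sound traveling at about 343 meters per second in air, and an extra travel path of roughly 0.2 meters around the head for a sound coming from directly to one side), the largest possible difference in arrival time between the two ears is about:

\[
\frac{0.2\ \text{m}}{343\ \text{m/s}} \approx 0.0006\ \text{s} \approx 0.6\ \text{milliseconds}
\]

In other words, the brain localizes sounds by detecting arrival-time differences of well under a thousandth of a second, and smaller differences still for sounds that are only slightly off to one side.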

For the organization of sounds, Gestalt principles for grouping apply as well to hearing as they do to vision. In fact, some of the best examples of figure-ground perception are auditory. Any time you try to listen to one message, such as your friend whispering in your ear, while ignoring another, such as a boring lecture, you are selecting one as the figure and the other as the background. As soon as your professor calls your name to ask you to answer a question, you can instantly switch and make the former background into the figure. A strategic pause in a string of sounds can lead to different groupings based on proximity or connectedness in time, as in “What is this thing called love?” versus “What is this thing called, love?” Good continuation (as well as figure-ground perception ) enhances our ability to follow a melody during a complex musical recording (Deutsch & Feroe, 1981). Similarity helps us to separate sounds that occur at the same time. For example, if a complex sound, such as a musical chord, contains a badly mistuned element, we may hear two separate sounds. If everything is in tune, we hear a single integrated sound (Alain, Arnott, & Picton, 2001).

Closure, too, works in hearing, as we can often hear a complete sound, such as a word, even if a small section is missing. For example, if a tone is interrupted by brief bursts of noise, we will hear the tone as continuous (Kluender & Jenison, 1992; Warren, 1984).

The top-down processes for recognition are also extremely important in hearing. Many times, we hear what we expect to hear. For example, consider song lyrics. Sometimes, you may be unable to understand the lyrics of a song until you read them. Once you have the expectation that comes from reading the lyrics, you can hear them from then on. In general, our experiences leave us with a wealth of knowledge about sounds, and we use this knowledge to help us recognize what we hear (Bregman, 1990). Of course, one key source of both context and expectation is information from other sensory modes, such as vision. For example, the sight of a violin in a companion’s hands would help you to recognize the awful screeching sound as an attempt at music rather than a sick cat.

Although we certainly have to locate odors, localizing is an unnecessary process for taste. Localization for olfaction occurs largely through sniffing and detecting increasing concentrations of the odor, essentially searching for JNDs. Some of the principles of organization and recognition apply to both taste and smell. You can certainly use Gestalt principles, such as similarity, to separate different odors and different tastes.

If you do not believe that top-down processing affects the sense of taste, ask yourself this: why do food manufacturers use artificial colors? Expectation effects have a remarkable and surprising effect on taste. Pepsi once released a clear version of its cola, called Crystal Pepsi; it was a spectacular failure (We once found an online petition to compel Pepsico to bring back Crystal Pepsi; it was signed by 79 people; the “Save Spongebob Squarepants” petition on the same site was signed by 25,000. And we didn’t even know Spongebob was in danger).

People really do think that food tastes better if top-down processing leads them to expect it. Researchers have found that if a restaurant offers a food with a descriptive name, such as Legendary Chocolate Mousse Pie, it sets up an expectation in customers that leads them to judge it to be higher quality (Wansink, Painter, & van Ittersum, 2001). Also, the fact that odor is necessary for proper tasting indicates that olfaction can function as the source of expectation effects for taste.

Some of the Gestalt principles even apply to touch. Similarity and proximity are likely important ways for us to judge whether pressure, temperature, and pain sensations result from the same or different stimuli. For example, sometimes, when people have a bad fever, they get a bad headache, as well as pain in their knees, lower back, and other joints. It can take them several hours, or even days, to connect these different sensations as part of the same illness because they are in such different locations, and they usually strike at different times during the course of the illness.

After a sensory organ has received constant input for a short time, the sensation fades away and eventually disappears. It is called sensory adaptation , and it applies to all of the senses. It even applies to vision, but you never experience it because your eyes are continually moving, ensuring a constantly changing sensation. Probably the most obvious example is from olfaction. A few minutes after entering a room with a distinctive odor, you adapt to it and no longer notice it. Sensory adaptation is also obvious in touch. If you wear glasses, you rarely notice them touching the bridge of your nose. Also, moments after getting dressed, you no longer feel the elastic waistband of your underwear. If, however, we draw your attention to it, as we just did, you are likely to be able to feel it again. This can be seen as an example of figure-ground perception; we can reverse sensory adaptation and turn a former background sensation into “figure.”

Finally, there is little doubt that top-down processing can be quite important for touch. For example, the very same sensation may be experienced as an affectionate caress or an annoying rub, depending on the context in which it occurs. Even pain can be affected by top-down processing. You probably recall that our sensation of pain comes from nociceptors that respond to potentially damaging stimuli. Our brains can override these pain signals through motivation, context or attention. For example, the motivation-decision model shows that if there is something that we judge more important than pain, our brain can suppress the signals from the nociceptors, thus reducing our perception of the pain (Fields, 2006).

Putting a Perception All Together: Sensory Integration

In real life, perception is not so neatly separable into different sensory processes as Modules 12 and 13 have implied. Very few, if any, experiences impact a single sensory mode only. Instead, we experience a coherent event, in which the input from the separate senses is integrated. This is an obvious point when you are perceiving something, but it is hard to keep in mind when each sensory modality is discussed separately in a textbook. Even within a single sense, the results of many separate processes must be integrated to give us the experience of a single perception.

We have given you an idea of one way that the different modes interact by suggesting how one sense may function as a source of expectation or context effects for another sense. In general, we often experience multisensory enhancement, in which the contributions of individual sensory modes are combined and result in a perception that is, in a way, stronger than the contributions of each sense individually (Lachs, 2020).

More importantly, our perceptual system must have some method of combining the sensations from the different modes, so that life does not seem like a foreign film badly dubbed into English. To achieve integration, it makes sense that neural signals from the separate sensory channels would be collected in specific areas of the brain, and indeed this seems to be the case. There are areas throughout the midbrain and cortex that respond only to input from multiple sensory channels. One key brain area for this integration work is a part of the midbrain called the superior colliculus (King and Schnupp, 2000). The superior colliculus receives input about timing and spacing from the different senses, and it is very sensitive to the exact timing and location of inputs. Basically, the superior colliculus is able to generate signals that allow us to conclude that sights, sounds, smells, and touches that originate at the exact same time and in the exact same location are part of the same perception.


multisensory enhancement: the process through which inputs from separate sensory modalities combine to produce a perception that is stronger than the individual contributions of the modalities

superior colliculus: an area in the midbrain that plays a key role in integrating the inputs from the different senses into a single coherent perception

  • Try to think of your own examples of the Gestalt principles of similarity, proximity, figure-ground perception, good continuation, and closure.
  • Try to think of some examples when your expectation or the context in which you encountered something influenced the way that you perceived it.

13.4. Attention

At first, you might wonder why the topic of attention is located in a module on Perception. It is true that the topic could appear elsewhere, including in its own module, as it is an important topic in its own right. But it is perhaps the most important precondition for transitioning from sensation to perception, or conscious recognition. In order to recognize something, you must direct attention to it.

You are in class on a super sailorific sunshiny day, and you just cannot keep your mind on the boring lecture being delivered by your professor, Dr. Dronesonandon. Well, at least you don’t have to write an essay.



You can also access this video directly at:  https://youtu.be/C-1O2TvnrNE

But back to Dr. D. You struggle to listen, but your mind keeps wandering, first to the activities outside, then to your shopping list and planning your afternoon workout. Before you know it, class is over, and you have not heard one word out of your professor’s mouth in the last 30 minutes. Oops. Many people (not just students) struggle with paying attention. But what is attention, exactly? For the moment, let us define it as the current contents of your consciousness, or what you are thinking about right now. We will need to be a bit more precise later, but this preliminary definition will allow us to start the conversation.

At any given time, you have a virtually limitless amount of possible conscious mental activity available to you. You can consciously perceive any subset of the world surrounding you, you can retrieve information from your episodic and semantic memory, and you can think about possible events in the future. We would like you to try a thought experiment. Try to think about ALL of those possible mental activities right now. Didn’t get very far, did you? Most of us would not even know how to start. So, one of the easiest facts to notice about attention is that it is limited. Very limited.

One way to think about attention is as a filter.

There is some information that is important, so we focus on that. Other, unimportant information, then, is filtered out or ignored. But how and when do we filter it, and how do we make the decision that something is important or not? That process is called selective attention. It was one of the earliest topics about attention that psychologists studied, and their research gave us some of the answers to those questions.

Imagine you are sitting in your dining room watching a TED Talk as background information for a paper in English class, and two people are having a conversation in the next room. You have to focus on your paper and ignore the other voices. We can achieve this filtering on the basis of physical characteristics, and those physical characteristics can make the process easier or more difficult. For example, consider the physical intensity or strength of the stimulus (see Module 12). What if the conversation in the other room is extra loud and the TED Talker very quiet? Obviously, that would make it difficult to block out the conversation and focus on the TED Talk. Or how about the similarity of the channels of information? If the TED Talker is a woman with a high-pitched voice, and the other conversation is between two men, it is fairly easy to select on the basis of the difference in pitch.

Suppose you are basically successful. You are in one room concentrating on the TED Talk, and focusing so hard that you do not even really hear the other conversation. Or do you? If the two men suddenly FaceTimed a woman into the conversation, do you think you would notice? Most people would (Broadbent, 1958). So we are filtering based on physical characteristics, but not fully blocking out the ignored channel. We can detect changes in those physical characteristics. But there is something missing in our understanding so far. For example, if the people speaking in the other room switch from English to German, do you think you would notice that? Most people do not (Cherry, 1953). If you maintain focus on the task at hand (the TED Talk), you can monitor for changes in physical characteristics, but you cannot hear what is being said. Or can you? What if they said your name? Do you think you would notice that? Most people do. And there is other personally meaningful information that people can often pick up from the non-attended channel.

How do we make sense out of these somewhat confusing findings? We notice the addition of a woman’s voice, we do not notice a change from one language to another, but we do notice our own name. It is clear that we are not simply filtering out the unwanted information, but monitoring it in the background, ready to select for different kinds of characteristics. Perhaps the best explanation comes from the multimode model of selective attention  (Johnston & Heinz, 1978). According to this model, we can change the type of information we monitor in the filtered-out information based on the demands of what we are trying to do. At nearly all times, we can monitor simple physical characteristics, such as pitch or loudness. Suppose you were aware that the other-room conversation might get interesting (for example, you are expecting it to turn to some juicy gossip). You can monitor for that information in the background so if the conversation gets interesting you might notice it. And there is some information that you are basically always on the lookout for, your name, for example.

Now that you realize that we can at least extract some information from non-attended channels, you might wonder if we can actively pay attention to it. In other words, can we pay attention to two separate channels at the same time? This is called divided attention , and we commonly refer to it as multitasking. And here the research results are a bit more straightforward. Are people good at multitasking?

It is true that research has found that some people can learn to perform two tasks at the same time—in this case taking dictation while reading unrelated text (Spelke et al., 1976). In order to be successful, though, they were trained for 17 weeks, 5 days per week for an hour on the two specific tasks. That is a far cry from trying to read psychology while posting on Instagram without any specific training. It actually appears that in most cases, people are not truly multi-tasking, but task switching  (moving back and forth rapidly between tasks). In both the simultaneous case and the task-switching case, however, it appears that performance suffers on both, at least for most people (Hirst, Spelke, & Neisser, 1978; Monsell, 2003). So, if you want to make good Instagram posts, you had better stop distracting yourself with psychology.

divided attention: the process of focusing on more than one stimulus or task at the same time, often called multitasking

multimode model of selective attention: a model of attention that suggests that our attentional filter is flexible; we can monitor the contents of filtered-out information depending on task demands

selective attention: the process of focusing on one stimulus or task and screening out others

task switching: moving back and forth rapidly between tasks

Module 14: Biopsychology: Bringing Human Nature into Focus

Scientific psychology is less than 150 years old. Although scientists had been interested in studying the brain before the discipline of psychology got started, it is safe to say that inquiries into the structure and function of the brain were in their infancy. After all, the human brain is widely considered the most complex biological organ in the universe. We are probably still in the infancy of learning about the brain.

The very early days of biopsychology yielded overly simplistic and sometimes completely wrong-headed ideas. On the other hand, a few ideas were so good that it is astounding that researchers were able to come up with them without the methods of inquiry that we have available today. It is a very small portion of the overall research, though, that we still remember and admire today.

It wasn’t that the early researchers were poor scientists. You should not be surprised to find that early discoveries about the biological bases of psychology were not always correct. Until the advent of advanced brain imaging techniques, such as PET and fMRI (see Module 11), brain researchers had to make a lot of guesses. When you think about it, these early researchers were like the rest of us when we are having trouble seeing. Perhaps it is too dark to see, the objects we are trying to see are too far away, or we have poor eyesight. We may often be able to get along despite these limitations. For example, while driving at night, we may very well be able to figure out what a blurry sign says before we can actually see the words on it. Sometimes, though, we make mistakes. When we tire of making too many mistakes, we may try to augment our natural observation abilities; we can turn on a spotlight, buy a pair of binoculars, or get fitted for eyeglasses. The scientific version of a new pair of glasses is a more advanced technology for doing research. PET and fMRI have helped to bring otherwise blurry images of the brain into focus.

The need for “a new pair of glasses” in science is not always obvious, however, because our exposure to science in everyday life does not reveal the bumps, turns, and missteps that occur along the way. Nearly all research in a scientific field is destined to be forgotten because it is overly simplistic or just plain wrong. Before the science majors among you decide to change to business or art history because of this messy truth, however, you should realize that scientific progress depends on researchers making small improvements over previous research. Although individual studies may turn out to have been too simplistic in the way they explained some phenomenon, those earlier studies were essential. New, improved research would perhaps not be attempted if old research had not already been done. Thus, scientific progress is incremental. Another important fact about science that you should keep in mind is that progress is not continuous. Many people think of science as a steady series of groundbreaking discoveries, each of which greatly advances the field. Although the broad trend may look that way, when you look at the day-by-day history of science, you find that a small minority of discoveries turn out to be the blockbusters we hear about in the news. Quite often, new theories and ideas turn out to be flat-out wrong. When that happens, the best that can happen is that researchers go off on a tangent; at worst, the whole field is set back. As you come to understand the development of the biological perspective in psychology, you will see both the incremental progress and the wrong turns.

Of course, it’s easy for us today to look at the research that turned out to be simplistic or misleading and criticize it as crackpot science. We are, however, falling victim to hindsight bias, which was introduced in Module 1. After the good ideas turn out to be good, and the bad ideas turn out to be bad, it seems obvious in hindsight that they would do so. But in reality, it is not so obvious. Some brain scientists who got things amazingly right ended up on the wrong track about something else. Consider Paul Broca, who discovered that the seat of spoken language production is in the left frontal lobe—a significant early discovery that has stood the test of time. But Broca was also a proponent of craniometry, using skull size and shape to categorize people’s race, intelligence, morality, and other characteristics (Carroll, 2003). For example, Broca believed that women are less intelligent than men because their brains are smaller. Of course, we say, craniometry was a terrible idea that was motivated by people’s personal prejudices. But what do you think about the idea that people’s brain size adjusted for body size is related to intelligence? Is this an obviously good or bad idea? In reality, this is a 150-year controversy within the scientific community. Some researchers have found a positive correlation between brain size (adjusted for body size) and intelligence (Posthuma et al., 2002; Rushton & Ankney, 1996). Others have found no correlation (Schoenemann et al., 2000). Recent meta-analyses have indicated that there is a small positive correlation between brain size (adjusted for body size) and intelligence, much smaller than some researchers had found, but not quite zero (Pietschnig et al., 2015; Woodley of Menie et al., 2016). Fifteen years from now, assuming these results hold, the meta-analyses will seem to have been obviously right, and the earlier, conflicting findings obviously wrong. That is how hindsight bias works.

So, with the benefit of hindsight, what were some of those great ideas that revolutionized our thinking about biopsychology and some of those poor ideas that hijacked the field for a time? We will look in this Window at discoveries about the structure of the neuron and about localization of brain functions, thus hitting a couple of the major topics of the modules in this unit. You will see that developments in research technology were sometimes a key to making these discoveries, just as a new pair of glasses might help us improve our game or our grades in a rather dramatic way. Other discoveries were made despite the researchers’ severely limited research techniques. We will also examine a couple of recent detours in brain research that are currently being reexamined. At the end of the Module we will address the issue of the extent to which “nature” and “nurture” affect the neural system and the role of evolutionary psychology, a new theoretical tool, in this debate. There is a current controversy in the field regarding whether evolutionary psychology is a new pair of glasses or the wrong prescription altogether.

Discovering the Structure of Neurons

As Module 9 related, the early biopsychology researchers had only very crude methods available to them. For example, they could examine individual cases of people who had suffered brain damage, they could open the skulls of dead people, or they could experiment on the nervous systems of non-human animals. Microscopes were nowhere near as powerful as those available today, and methods of examining a functioning brain, such as PET and fMRI, were not even the stuff of science fiction. Researchers relied on their ability to make ingenious inferences from observations using their limited methods.

For example, in 1850, Hermann von Helmholtz reported his discovery of the speed of neural transmission, a problem that had previously seemed unsolvable (R.I. Watson, 1979). Helmholtz made his discovery by applying an electric current to a neuron in a preserved frog’s leg. The electric current generated a neural impulse, which made its way through the neuron. Then, the signal was sent to the leg’s calf muscle, causing it to contract. When the calf muscle moved, it lifted a small weight and broke the contact in the electricity generator, thus stopping the current. The duration from onset to cessation of the current indicated how long it took the neural impulse to travel.
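The arithmetic behind Helmholtz’s inference is simple division of distance by time. With made-up but plausible numbers (they are ours, for illustration, not Helmholtz’s actual measurements), suppose the impulse must travel an extra 5 centimeters of nerve and the measured delay grows by about 1.8 milliseconds:

\[
\text{speed} = \frac{\text{distance}}{\text{time}} \approx \frac{0.05\ \text{m}}{0.0018\ \text{s}} \approx 28\ \text{m/s}
\]

That is in the same general range as the roughly 27 meters per second usually cited for Helmholtz’s frog-nerve measurement, astonishingly slow compared to electricity in a wire, which is part of what made the result so surprising.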

Advances in research techniques helped refine researchers’ focus and led to important new discoveries. In the late 1800s Camillo Golgi developed a method of staining neurons so that they could be seen under a microscope. Ramon y Cajal was able to use this method to show that neurons remained separate from each other.

In 1906 Charles Sherrington built on Cajal’s and Helmholtz’s findings by describing how neural communication through the synapse differs from the type of signaling that occurs inside the neuron. Sherrington, too, made his discovery by the ingenious inference method. He compared the speed of neural transmission within a single neuron to the speed of transmission over an equal distance when multiple neurons were involved. Because multiple-neuron transmission was slower, Sherrington inferred that a different kind of transmission takes place between neurons; Sherrington postulated a space between neurons and called the area a synapse. We now know that the synapse is where the chemical signaling involving neurotransmitters occurs. Note that between Helmholtz’s and Sherrington’s discoveries, 56 years passed. During that period a great deal of research was conducted, some of which built on the previous research, some of which wound up being a dead-end, most of which ended up forgotten.

Localizing Brain Functions

In a classic Bugs Bunny episode, Bugs dresses up as a “mind reader” and offers to read the bumps on his co-star’s head. When his victim, a gambler in search of a lucky rabbit’s foot, protests that he doesn’t have any bumps, Bugs obliges, giving him some by tapping on his head with a hammer. Many people probably laughed at the joke without realizing that it was a reference to phrenology, the analysis of people’s traits and abilities by examining the bumps on the skull. Literally a subject of ridicule, phrenology actually got something right. Franz Gall, the developer of phrenology in the early 1800s, guessed correctly that different brain areas were responsible for different functions. Unfortunately for him and his place in history, Gall also guessed, this time incorrectly, that those different functions were reflected in different sizes of brain areas, which then caused the skull to bulge from the pressure of larger sections. Phrenology captured the imagination of many people throughout the 19th century (it also led directly to craniometry); it even has proponents today, despite the complete lack of scientific evidence supporting it (Carroll, 2003). Although phrenology was a major detour from scientific progress, it did prompt a long and continuing line of research to find out which areas of the brain govern which functions. Once again, the evolution of research technology, especially recently, has helped researchers refine their focus.

Broca’s area. Paul Broca, despite his belief in craniometry, is credited with the first solid discovery of the function of a specific brain area. Broca made his discovery, in the mid-1800s, by examining case studies of patients who had lost the ability to speak. (See Broca, 1861, for a description of his most famous patient.) After the patient died, Broca found damage in the middle section of the left frontal lobe, the region now known as Broca’s area.

phrenology: the discredited belief that people’s traits and abilities could be determined by examining bumps on their skulls.

Broca’s area: an area in the left frontal lobe that plays a very important role in producing speech.

Cortex. In the 1950s, neurosurgeon Wilder Penfield electrically stimulated patients’ brains during surgery for epilepsy. Because he made a serious attempt to discover the functions of brain areas prior to cutting into them, he is responsible for some very important advances in our knowledge of the brain. Specifically, Penfield is still admired today as the neuroscientist who mapped the primary motor cortex and sensory cortex. He showed that different sections of the cortex control different parts of the body.

Penfield also stimulated other parts of his patients’ brains and was able to get the patients to report images, which he interpreted to be memories. Even today, some people believe Penfield’s original conclusion that memories are recorded permanently in specific neurons in the cortex (Penfield, 1955; Penfield & Perot, 1963). Daniel Schacter (1996), on the other hand, has pointed out that Penfield was able to get these “memories” from only a very small number of his patients, and the reports are suspect. For example, some patients reported events that clearly had not happened. Schacter suggests that these reactions to brain stimulation are more reasonably interpreted as hallucinations than memories.

Today, we do believe that specific brain areas are involved in memory, but they are thought to be involved as processing sites, not storage sites. Module 9 explains that a key processing site for working memory is in the prefrontal cortex, and a key processing site for storing memories is in the hippocampus. New tools for studying brain activity have allowed this refinement of Penfield’s ideas. Penfield’s experiments had interesting results, though, if you think about it: using a single procedure, he was able to make one of the most important discoveries as well as one of the most famous errors in mapping brain functions.

Hippocampus. The discovery of the hippocampus’s role in memory is a good example of the way scientific progress occurs as we refine our focus and discover complexities about brain areas. Probably the first breakthrough in our knowledge came from the most famous case study of memory research, that of a patient known by the initials H.M., a man whose temporal lobes were damaged by surgery that attempted to cure his epilepsy. Several specific brain parts were removed, including both hippocampi. H.M.’s seizures were reduced (but not eliminated), but he suffered several minor deficits as a result of the surgery—and one major one. He lost his memory. Not his total memory, however. He was able to remember events from long before the surgery but lost his memory of most of the 11 years immediately preceding it. In addition, he lost the ability to transfer new information into long-term memory.

On the basis of H.M.’s case, researchers began to believe that the hippocampus helps us to store new memories into long-term memory and to make those memories permanent. Other research (some on H.M.) helped to sort out what kinds of memories are involved. For example, many case studies of brain-damaged patients and research with normal people and non-human animals have suggested that the hippocampus helps with the storage of explicit memory (for facts and episodes) but not implicit memory (for skills) (Schacter & Tulving, 1994; Squire & Knowlton, 2000); see the Unit 2 Window for more on this research. Recent research has even recorded changes in individual neurons of the hippocampus of monkeys as they learn new explicit memory associations (Wirth et al., 2003).

Other researchers have discovered that the hippocampus appears especially important for spatial memories. For example, the taxi driver study mentioned in Module 9 (Maguire et al., 2000) used MRI brain scans to show that the taxi drivers had especially large hippocampi. The more we discover about the hippocampus, the more we realize that it is an extremely complex brain area, involved in many different, but certainly not all, kinds of memories.

Corpus callosum. Our left and right hemispheres are not mirror images of each other, as section 9.2 explains. Each is somewhat specialized, better equipped to handle certain functions. For example, in most people, the left hemisphere is more adept at speech production and word comprehension. The left hemisphere also does a better job of seeing details in visual scenes, and it is better at arithmetic. The right seems to beat the left in understanding the emotional content of language, seeing overall patterns in visual scenes, and processing spatial information, as in geometry. The two brain hemispheres are ordinarily joined by the massive corpus callosum. Some of our important discoveries about the differences between the left and right hemispheres come from case studies involving people whose corpus callosum has been severed. In some cases of severe epilepsy, in which seizures travel from one side of the brain to the other, the only successful treatment has been this dramatic surgery, which leaves the patients with a “split-brain.” These patients appear completely normal, but their two half-brains function independently.

Through research with these split-brain patients, Roger Sperry and his colleagues were able to demonstrate that the left hemisphere has much better ability to handle language than the right (Gazzaniga, 1967). They made this discovery by flashing words or pictures to the left visual field or the right visual field. Input to the left visual field goes to the brain’s right hemisphere, and vice versa. Split-brain patients could say a word that was flashed to the right visual field but could not say a word flashed to the left visual field (because the left hemisphere could “talk” while the right could not). They could, however, indicate with their left hand—which is controlled by the right hemisphere—that they recognized the word, perhaps by picking up an object that the word named (Nebes, 1974). In people with intact corpus callosums, information that is initiated on one side of the brain is nearly instantly transmitted to the other side, so you would certainly not be able to observe these different functions of the left and right hemispheres in casual observations.

Frontal lobe. As indicated by the case of H.M., mistakes about the functions of brain areas have sometimes had disastrous consequences. Sometimes, these mistakes resulted from researchers’ failure to make serious efforts to determine the effects of their surgery or other treatments before understanding the functions of brain areas. For example, throughout the 1940s and 1950s, 40,000 patients suffering from psychological disorders were given prefrontal lobotomies, a surgery in which the frontal lobes are separated from the rest of the brain. As amazing as it sounds, the lobotomy was tried on humans because it had been successful at calming a single chimpanzee on which the procedure was performed (Pinel, 2003). Supporters of lobotomies believed that the procedure calmed patients without serious side effects. Although lobotomies did tend to calm the patients, they also left them with very serious side effects, including loss of morality, emotional unresponsiveness, and an inability to plan. Today, we think of the prefrontal cortex as the major brain area for integrating input from many other parts of the brain so that we can perform our most complex mental activities, such as planning and reasoning.

The case of H.M. and the large-scale tragedy of prefrontal lobotomies remind us that discoveries about the localization of brain functions have not been academic exercises. Some real people who suffered very serious consequences have contributed to what we know today.

Getting Back on Track with a New Focus

Throughout this Module, we have highlighted some important discoveries and some bad mistakes along the path to learning about biopsychology. It is important that you realize that missing a turn and going down the wrong track is not simply something of historical interest. Our knowledge of the brain is currently undergoing a radical change because researchers now realize that they have gotten some facts completely wrong for many years. They have been able to see those mistakes mainly because advanced techniques, such as PET and fMRI technology, give them unprecedented means of examining the living brain while it is working. Thus, we are in the process of getting back on track from a number of detours and setbacks.

Two important recent discoveries of wrong turns are described here. But how do we know whether these current hot topics in brain research represent true progress or just new detours? The answer is, we don’t. Only in hindsight can we judge with confidence whether a development was a progression, digression, or regression. In the meantime, we must critically evaluate both sides of every scientific debate.

Mistake #1: The brain makes no new neurons after early childhood.

Most of you have undoubtedly been told that neurons, once killed, can never come back. Perhaps you first heard this “fact” as a teenager, in the assertion that drinking alcohol kills brain cells. It is true that dead brain cells do not come back to life. Researchers also believed, however, that the brain does not generate any new brain cells after early childhood, so the dead brain cells could not ever be replaced. Brain researchers’ disbelief in the possibility of neurogenesis , as it is called, has severely hampered scientific progress over the past 40 years (Gage & Van Praag, 2002).

Isolated researchers through the years did find evidence of new neuron formation in birds and in mice and rats, but it was not until 1998 that a persuasive demonstration of neurogenesis in humans was provided. Eriksson and colleagues (1998) found that some cancer patients generated new neurons in the hippocampus. The fact that the process occurs in the hippocampus suggests that neurogenesis is important for memory (Alam et al., 2008; Gage & Van Praag, 2002).

Our developing knowledge about neurogenesis has been spurred by cutting-edge research technologies. Early on, researchers relied on the electron microscope; more recently, they have used the techniques of growing neurons in a culture and tracing specific genetic markers associated with new neuron formation. Brain-imaging techniques cannot observe neurogenesis directly, but they can reveal areas with more or fewer neurons than expected, differences that are often assumed to result from different rates of neurogenesis (Shelene, 2003).

Currently, brain researchers believe that neurogenesis in the adult human brain is a daily phenomenon. The basic process is that the brain produces immature, “general purpose” cells called stem cells, which can develop into any specific type of neuron. The stem cells can move to different parts of the brain while they become specialized into particular types of neurons. Researchers are currently trying to figure out just what neurogenesis accomplishes for our brains.

neurogenesis: the creation of new neurons in the nervous system

stem cells: general purpose, immature cells that have the capacity to develop into any specific type of neuron

Mistake #2: Glia are really only glue.

For many years, researchers believed that glia play a relatively minor supporting role in the brain, despite their outnumbering neurons approximately ten to one. They likened glia to Elmer’s Glue, believing that they did little more than hold the brain together (the word glia means glue). For many years, researchers have realized that glia contain glycogen, which is how sugar is stored in the body for energy release. Thus, glia are the storage houses for the brain’s fuel. Also, as Module 11 relates, the substance myelin, which surrounds many axons, comes from glia.

The traditional belief was that these support functions were the only functions of glia. Researchers have discovered, however, that glia also participate in the neural transmission process (Magistretti & Ransom, 2002; Volterra et al., 2002). Again, advanced techniques gave researchers the tools to allow them to change their focus and make these new discoveries. For example, Yuan and Ganetsky (1999) demonstrated that glia communicate with neurons; they did so by tracing a specific kind of protein, produced by glia, that found its way to axons.

One important research methodology is ablation, in which researchers use a variety of techniques to remove the cells of interest from the adult brain of an animal and then observe the resulting changes in behavior. For example, using this method, researchers have discovered that some glial cells function as an extension of the immune system in the brain (Jakel & Dimou, 2016).

Other researchers have demonstrated that glia form synapses with neurons in the brain. These synapses were first discovered in the hippocampus but have since been found in many other areas as well (Sun & Dietrich, 2013). Neuroscientists still do not know what purpose these synapses serve, perhaps because they have an odd property: neurons have synapses through which their signal is sent to the glia, but the glia do not have synapses of their own for sending the signal on beyond that.

There are many additional functions of glia, some quite well understood, others still a mystery (Jakel & Dimou, 2016). That is quite an interesting story for cells that we used to think of as the brain’s Elmer’s Glue.

Examining the Nature-Nurture Debate Through Evolutionary Psychology: Is It a Leap Forward or a Wrong Turn?

Aggressiveness, anxiety, intelligence, happiness, depression, shyness, loneliness, obesity, and many other traits tend to run in families. Many people observe these correspondences and assert that they prove that the traits or behaviors in question are a consequence of heredity, or nature . Others are equally convinced that these observations validate their belief that the traits or behaviors are a consequence of environment, or nurture . They are both wrong. Or rather they are both right. When you observe the similarities among members of the same family, you are very likely witnessing the influences of both nature and nurture.

The nature-nurture controversy has a long history. As Module 4 describes, the debate about the influence of nature versus nurture began in philosophy, as typified by the writings of René Descartes on the side of nature and John Locke on the side of nurture. As psychology became scientific, questions about the roles of nature and nurture began to be examined empirically. As is often the case when there is a controversy between two somewhat extreme positions, the truth is somewhere in the middle. As mentioned in Module 10, behavior geneticists have discovered heritabilities for many psychological characteristics to be around 0.5; that is, about half of the variation across a group is explained by genetic differences. In essence, all human behavior and mental processes are likely a complex interaction between heredity and the environment, nature and nurture. Hence, it no longer makes sense to talk of nature versus nurture. The question is, how do nature and nurture interact?
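To make the 0.5 figure concrete: in its simplest form (setting aside complications such as gene-environment interaction), heritability is the proportion of the total variation in a trait, across a group of people, that is attributable to genetic differences among them:

\[ h^2 = \frac{V_{\text{genetic}}}{V_{\text{genetic}} + V_{\text{environment}}}. \]

A heritability of 0.5 therefore means that, in the group studied, about half of the person-to-person variation goes with genetic differences; it does not mean that half of any one individual’s trait is “caused by genes” and the other half by upbringing.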

Evolutionary psychology is one of the newer influences on the nature and nurture debate. Its rise has given some biopsychologists a new theoretical tool for examining why human behavior and mental processes are what they are. To use the “focusing” analogy, they believe that evolutionary psychology is like the Hubble space telescope, a remarkably powerful tool that can provide us with both a grand picture of the universe and very fine details.

Evolutionary psychologists essentially offer two answers to the question of how nature and nurture interact. First, they interact through the process of natural selection. The environment provided the adaptive problems that shaped our human ancestors. Those who survived the challenges of the environment were able to pass their genes on to their offspring; in a sense, nurture shaped nature. This is the grand picture. Second, nature and nurture interact on a smaller scale in all of us, providing us with the fine details. Our human nature, brought to us through many generations of natural selection, leaves us with many predispositions but no guarantees. For example, as Steven Pinker (1994) points out, human language ability is an instinct; it is part of our nature. Every human is born with a predisposition to learn language. The specific language you learn, however, is the one to which you are exposed, and this is the influence of nurture.

As we mentioned in Module 10, the evolutionary view of psychology is still controversial. Some psychologists and biologists believe that evolutionary psychology is obscuring the facts, not helping us focus. Supporters of evolutionary psychology contend that we are on the verge of a great unification of knowledge, what Edward O. Wilson (1999) has called consilience; they believe that evolutionary psychology offers an overarching explanation for human behavior and mental processes. They believe that evolutionary psychology offers us a bridge between biology and psychology. Critics have two main arguments against evolutionary psychology. Some claim that evolutionary psychology distorts our understanding of human behavior and mental processes by trying to explain them all as evolutionary adaptations; in doing so, these critics argue, it seeks to make psychology unnecessary. Others claim that evolutionary psychology is unscientific and thus built on a shaky foundation.

Steven Pinker (2002), a well-known proponent of evolutionary psychology, has tried to address the first criticism. He contends that critics have incorrectly labeled evolutionary psychology “deterministic” and “reductionistic.” As we have already seen, genes do not determine behavior; they predispose it. Applying evolution to psychology does not change that point. No one seriously denies that we can deviate from at least some of our genetic blueprints in response to our environment. Pinker also notes that evolutionary psychology does not seek to explain psychology out of existence (the “reductionism” charge). Rather, it seeks to fit psychology firmly within the hierarchy of sciences of living organisms. In other words, evolutionary psychology seeks to provide explanations for psychological phenomena that are consistent with everything we know from evolutionary biology. In this way, evolutionary psychology seeks only to become a new perspective, a new way of looking at human psychology (see Module 3).

David Buss (2007) has defended evolutionary psychology from the charge of being unscientific by pointing out that it is unlikely that anyone will overturn Darwin’s theory of evolution by natural and sexual selection. Further, one cannot plausibly deny that human beings are biological entities like any other and thus subject to Darwin’s theory. There is no reason to expect that behavioral tendencies would be exempt from selection. Thus, it seems very likely that some kind of evolutionary psychology should apply to humans. Buss argues that we should be debating specific evolutionary hypotheses. But we are, in fact, still debating whether evolutionary psychology should even exist (Smith, 2019; see Module 10 for details).

As you may have guessed, the jury is still out on the question of whether evolutionary psychology offers the potential for a valid new set of explanations. We will, at times, do as David Buss recommends and evaluate its specific claims rather than its basic existence. Evolutionary psychology may simply be a new tool, a new pair of glasses with which we might be able to bring human nature into better focus. Like advanced brain-imaging techniques, evolutionary psychology provides us with a different view. The brain imaging techniques have done wonders for helping us see how the brain is organized; evolutionary psychology may offer us the opportunity to discover why it is organized that way. Seen through the lens of evolutionary psychology, our behavior and mental processes become consistent with the most important idea in all of biology, namely natural selection. We may someday decide it is the wrong lens, but for now, at least, it is offering us a fascinating and useful view of the nature and nurture of psychology.

Sometimes You Can Keep Your Old Glasses Too

Note what we said about the emergence of interdisciplinary approaches above. Just because a new perspective or a new tool has been introduced, it does not necessarily mean that the old methods of discovery get abandoned. First, the new approaches do not typically take over a field immediately. There are a great many researchers in the field at any given time who are attached to the techniques and perspectives with which they are comfortable. Many are senior researchers in psychology who feel too invested in their approach to learn a new way of doing things. So the new and the old approaches coexist for a while, as researchers slowly migrate over, or as researchers who use the old approaches retire and are replaced by users of the new tools.

Something very different can happen too. Many people in their 50s need reading glasses because their close-up vision starts to deteriorate. But what if they already had glasses for far-away vision? They certainly do not throw those away when they start using the reading glasses. Both the old and the new glasses stick around because each offers its own unique and useful contribution to the user’s vision.

EEG and fMRI offer a great example of this “getting new glasses, but keeping the old ones, too” scenario. As you have seen a couple of times already, EEG was developed around 1930. For several decades, it was the only way to “see” regular brain activity. The view was in many ways blurry, though. Measuring electrical activity inside the brain through a small number of sensors (sometimes 19 or even fewer) placed on the scalp is not a great way to see where brain activity is taking place. This is where fMRI is quite good, however. fMRI has a spatial resolution of about 1 millimeter. In other words, a readout of an fMRI image will be able to show the location of brain activity within about 1 millimeter. The temporal resolution of fMRI is not particularly good, though. So fMRI is very good at showing where brain activity is occurring, but not very good at showing when it is occurring. And this is where EEG shines. The technique that EEG researchers use is called event-related potential (ERP). In ERP research, a participant is presented with some stimulus (the event), and a positive or negative electrical charge (the potential) is observed a short time later. The temporal resolution is excellent, as little as 1 millisecond in optimal conditions (Luck, 2014). So if we want to know both where and when brain activity occurs, we will need the results of both fMRI (the new glasses) and EEG/ERP (the old glasses).

event-related potential (ERP): the brain scanning technique used with EEG, in which a stimulus is presented and a corresponding electrical charge is detected a short time later

spatial resolution: the accuracy level of location information from a brain scanning technique

temporal resolution: the accuracy level of timing information from a brain scanning technique

Unit 4: Developing Throughout the Lifespan

Understanding and valuing what all human beings have in common is important to successfully interacting with people. And without a doubt, understanding and valuing diversity, or what makes individuals different, is also key, especially in the 21st century. A solid knowledge of psychology goes a long way toward helping you achieve these goals.

We are especially advised to attend to and respect diversity in gender, ethnicity, nationality, religion, and sexual orientation. We absolutely agree that these are extremely important areas of difference. When you think about it, though, diversity in age may be the most pronounced form that you will encounter. Terms like the “generation gap” hint at the scope of the differences between younger people and older people. A great deal of mental effort during childhood and adolescence is spent trying to understand our parents. Then, as adults, we marvel at the inexplicable workings of the minds of children and teenagers.

There is little doubt that our exposure to people who differ from us in age is increasing just as it is for other forms of diversity. For example, if you had been a college student in the early 1980s, you would have been hard pressed to meet a single student over age 25. Today, more than 30% of students at community colleges are over 30.

The significant differences among people at different ages can lead to a serious lack of understanding of other people. For example, parents and teachers are less effective, adolescents are more troubled, and siblings are less tolerant when they are unaware of the ways that different-aged people think and relate to others.

We are not saying that age diversity is the most important kind; that is essentially a value judgment that you should make for yourself. We will say, however, that it may be the most overlooked kind. One reason for this is that we sometimes forget that we are changing over time, too. We have a strong sense of a constant self; for example, every fall, faculty at colleges and universities across the US note how young the students are getting, as if the faculty are staying the same age and the students are getting younger (recall what we said a couple of paragraphs ago; in reality, the students are getting older). For many people, this feeling that “I am the same person I always was” translates into a lack of understanding of different-aged people. A father may complain that he does not understand his teenage children, but it is not simply because they are teenagers; part of the problem is his own faulty memory of his younger self. He may think, “I was never like that at that age.”

This unit is about Developmental Psychology, the subfield that examines the changes and the constants throughout our lifespans. You will be able to use your knowledge of Developmental Psychology to understand changes you have undergone, to know what to expect 10, 20, 30, even 50 years from now, and to improve your understanding of and interactions with people who are not the same age as you.

Of course, there are a great many ways that we develop throughout life. In order to examine development, we will focus on thinking and on social abilities and relationships. This division into cognitive development and social development is consistent with the way the subfield has been organized by developmental psychologists. Cognitive development includes such phenomena as gaining an understanding of the physical world in infancy, learning how to reason and solve problems, and balancing declining abilities with increasing abilities in old age. Social development includes such phenomena as forming emotional bonds between children and parents; making friends in childhood, adolescence, and adulthood; and responding to parenting practices. Developmental Psychology actually includes a third topic area in addition to cognitive and social development, namely physical development. Although the three topic areas are somewhat separate, as you will see, they interact profoundly.

There are three Modules in this unit:

  • Module 15, Physical Development Throughout the Lifespan, describes the changes that occur in our bodies and brains from conception through late adulthood. You will learn some new information about the nervous system and will be introduced to details about the body system that produces hormones, the endocrine system.
  • Module 16, Cognitive Development, covers the remarkable and surprising journey of development in our language, thinking, memory, and reasoning abilities. You will discover that infants are quite a bit more capable than you might guess and the future is not as bleak as you might think for older people.
  • Module 17, Social Development, takes off from the prerequisite cognitive developments and shows you how they allow the infant and young child to get along in the social world. You will learn about the infant’s first emotional bonds, the role of parents in their children’s social development, and the effects of these early developments throughout our lives.
  • Module 18, Developmental Psychology: The Divide and Conquer Strategy, describes the decisions that scientific psychologists must make when they choose a subfield in which to specialize for their careers. As you will see, developmental psychology, as the most complex subfield, requires budding scientists to pay more attention to its organization and to make more decisions than other subfields do.

Module 15: Physical Development

Although the bulk of Unit 4 is about cognitive and social development, people certainly develop in another obvious way, that is, physically. It is worth focusing exclusively on physical development at first, as it is one of the most obvious ways that people differ from each other. Although physical development is separated from cognitive and social development in this unit, you should realize that it does interact with them. First, many cognitive and social developments depend on prerequisite physical developments. Second, the different types of developments can influence each other. For example, in Module 17, you will learn about attachment, an infant’s emotional bond with a specific person, such as a parent. This is most clearly a social development. In order for an infant to be attached to a specific person, however, they must be able to recognize that person; this is a cognitive development. As the infant develops physically, they become able to move from location to location and can explore their environment. They can use the parent to whom they are attached as a secure base from which they feel confident to stray, so they can make discoveries that will enhance their further cognitive and social development.

This Module has three sections; it is organized principally by age. Section 15.1 describes the extraordinary changes that take place before birth and during childhood. At the moment of conception, the baby-to-be is a single cell, formed from the union of two; it divides, subdivides, and differentiates rapidly so that nine months later an infant prepared to survive and learn in the world is born. Although the rate of change slows down dramatically after birth, physical developments in childhood are also remarkable, interesting, and important. Section 15.2 covers adolescent and adult development. In a striking reversal of the trend of decreasing rates of growth and change, the adolescent develops rapidly on the path to reaching sexual maturity. Adulthood is traditionally conceived as a period of decline. As you will see, the news is not nearly so pessimistic. Section 15.3 is the exception to the chronological organization of the first two sections in the module. The last section describes the changes that the brain undergoes from the prenatal period, all the way through to late adulthood.

15.1 Prenatal and child physical development

15.2 Adolescent and adult physical development

15.3 Brain development throughout the lifespan

By reading and studying Module 15, you should be able to remember and describe:

  • Physical development in the embryo and fetus: zygote, neural tube, testes, ovaries, androgens, amniotic sac, placenta, teratogens, fetal alcohol syndrome, fetal alcohol effect (15.1)
  • Physical development in infancy and childhood (15.1)
  • Physical developments in adolescence and adulthood: adolescent growth spurt, puberty, primary sex characteristics, secondary sex characteristics (15.2)
  • Hormones and the endocrine system: hypothalamus and pituitary gland, gonads, androgens, testosterone, estrogens, progesterone, growth hormone (15.2)
  • Increasing rates of obesity in adulthood: basal metabolic rate, muscle mass (15.2)
  • Brain development before birth: neural plate and neural tube, neural stem cells, migration (15.3)
  • Brain development in infancy and childhood: myelinization, synaptogenesis (15.3)
  • Brain development in adolescence and adulthood (15.3)

By reading and thinking about how the concepts in Module 15 apply to real life, you should be able to:

  • Recognize the characteristic physical features of different aged infants, children and adolescents (15.1 and 15.2)

By reading and thinking about Module 15, participating in classroom activities, and completing out-of-class assignments, you should be able to:

  • Combine your knowledge of neurons and the brain from Unit 3 with the developments in Module 15 to predict some behavioral developments (Module 11 and 15; best done before reading the remainder of Unit 4)
  • Speculate whether the physical characteristics of people you know, especially older adults, are more likely to be a result of physical development or lifestyle changes (15.1 and 15.2)

15.1 Prenatal and Child Physical Development

  • Have you ever noticed how pregnant women often avoid some common objects and substances? Perhaps if you have been pregnant you have even done so yourself. Make a list of some of the “to be avoided” entities. Do you know what the specific risks associated with the listed entities are?
  • Did your parents ever tell you how old you were when you began to walk? If you are a parent, at what age did your children first walk? If you have more than one child, did they walk at the same age? Why do you think some children begin walking at different ages?

From two single cells—one among the largest in the human body, the other, the smallest—to a fully formed newborn infant in 266 days: It is a development in amount and form that will never be approached again in an individual human being.

Physical Development in the Embryo and Fetus

As you may recall from a biology class, when an egg is fertilized by a sperm cell, the resulting cell is called a zygote. The zygote, which contains the combined genetic information from the mother and father, quickly develops through the process of cell division. By about one week after fertilization, the cluster of about 100 cells attaches to the mother’s uterus, from which it begins to receive blood and nutrients; now it is called an embryo. At this stage, the embryo looks like a tiny, mostly-hollow ball of cells called a blastocyst .

During the following weeks, the cells of the embryo change their shapes and begin to relocate, as the embryo organizes itself. The different areas of cells develop into different body parts and organs. For example, one set of cells develops into the neural tube , which will eventually become the central nervous system (spinal cord and brain). By around five weeks, all of the organs have started developing, and although the embryo is only one-half inch long, the eyes, heart, and the beginnings of the arms and legs are visible. How does the embryo “know” how to organize itself? The embryo’s genes direct the specialization, along with hormones that are produced by the embryo itself. Because the embryo is especially sensitive during this time to hormones, which are chemicals, the period during which the major organs are first forming is also a time of great sensitivity to other chemicals, such as toxins.

zygote: the cell that results when an egg is fertilized by a sperm cell

embryo: the developing cells during the early period of gestation, the first 8 weeks in humans

blastocyst: an embryo about one week after fertilization (in humans); it resembles a hollowed-out ball of cells

neural tube: the embryonic precursor to the central nervous system

After 8 weeks, the embryo becomes a fetus; at this point many of the major organs and parts can be recognized easily. The fetus grows rapidly over the next several months, from about 2 inches (about 5 centimeters) at 12 weeks to about 12 inches (30 centimeters) at 24 weeks and about 20 inches (50 centimeters) at birth.

The sex organs are among the last parts to become differentiated in the developing fetus. Prior to the seventh week, male and female embryos have indistinguishable primitive sex organs; they resemble female organs, by the way. If the 7-week-old embryo has a Y chromosome (i.e., if it is male), the male gonads, called testes, begin to develop. If there is no Y chromosome, the female gonads, ovaries, develop. In a sexually mature person, the testes produce sperm, and the ovaries produce eggs. At this point, sex hormones begin to play a role. The newly-formed testes (in males) begin to produce androgens, a group of hormones that play a role in male traits and reproductive activity. These hormones cause the primitive sex organs to develop into male organs. In the absence of androgens, the organs develop into female organs.

fetus: the developing baby after 8 weeks of gestation

testes: male sex glands; they produce hormones and sperm

ovaries: female sex glands; they produce hormones and eggs

androgens: a group of hormones that play a role in male traits and reproductive activity; a fetus that is exposed to androgens will develop male sex organs

The developing fetus is housed in a very controlled environment, a fluid-filled sac called the amniotic sac; it protects the fetus by acting as a shock absorber and temperature regulator. Outside substances can only get in through the placenta, the structure found at the attachment point between the fetus and the mother’s uterus. The placenta allows the exchange of nutrients and waste products. To prevent harmful substances from reaching the fetus, the placenta also acts as a kind of filter. It is a remarkable system, but alas, it is not perfect.

Occasionally, harmful substances from the environment outside of the fetus can reach it; they are called teratogens. Have you ever noticed the cautions posted in x-ray areas? Women are warned to tell the x-ray technician if they might be pregnant. This is because x-rays are a teratogen; they can cause the fetus’s developing organs to become deformed. Other teratogens include cigarette smoke, some prescription drugs, other drugs such as caffeine and marijuana, lead, and paint fumes.

Alcohol is a very well-known teratogen. If the mother drinks heavily (five to six drinks or more per day) during pregnancy, the child is at a greater risk of developing fetal alcohol syndrome. Children who suffer from fetal alcohol syndrome grow slowly and have distinctive facial features, such as wide-set eyes, thin upper lip, and flattened bridge of the nose. Many fetal alcohol syndrome children catch up and lose the distinctive facial features as they develop (Steinhausen et al., 1994). They are not so fortunate with the other symptoms, however. Fetal alcohol syndrome is also characterized by brain damage and many cognitive and behavioral deficits. The damage can be severe enough to be observed using standard brain imaging techniques but is often simply inferred from behavioral and cognitive testing. The deficits include lower intelligence and academic achievement, increases in learning disabilities, poorer language skills, and increases in distractibility and hyperactivity. These effects of alcohol on a developing fetus are not all-or-none (Astley & Clarren, 2000). Rather, they are graded, and even moderate drinking during pregnancy is associated with less severe versions of many of the same effects (these less severe versions are sometimes called fetal alcohol effect ). Clearly, the best advice a pregnant woman can follow—and the advice given by the US Surgeon General—is to completely abstain from drinking alcohol. Women who drank alcohol before discovering that they were pregnant should stop immediately because further consumption would increase the risk of alcohol-related effects.

It is scary; sometimes it seems like the only way to keep a developing fetus safe is to live in a sterilized room and never go out, eat organic rice cakes only, and drink nothing except distilled water. If you have ever heard or wondered about a pregnant woman’s avoidance of wet paint, cigarette smoke (including second-hand smoke), caffeine, and even cat litter boxes, it is because of the possibility that substances contained in these common environmental elements can reach the fetus and disrupt its development. With a little bit of attention, guidance (from healthcare professionals and pregnancy books), and planning, however, the risk of damaging a fetus is actually very low. Still, many mothers-to-be choose to err on the side of caution and avoid substances that may pose little overall risk. This is probably a good idea because the consequences of a teratogen will last a lifetime.

amniotic sac: the fluid-filled sac that houses the developing fetus; it acts as a shock absorber and temperature regulator

placenta: the structure at the attachment point between the fetus and the mother’s uterus; it allows the exchange of nutrients and waste products and acts as a filter to keep out harmful substances

teratogen: a substance that can harm a developing fetus

fetal alcohol syndrome: a condition in children that results from high levels of alcohol exposure during the mother’s pregnancy

fetal alcohol effect: a condition in children that results from moderate levels of alcohol exposure during the mother’s pregnancy

During the remainder of the fetal stage, the fetus grows rapidly, and the organs develop so that the baby will be able to survive on its own when it is born. Obviously, the longer the fetus is able to develop in the uterus, the greater the chances of survival are. For example, a study in Sweden found that babies who are born at 22-26 weeks have about a 70% chance of surviving; those born at 22 weeks have only a 10% chance, while those born at 26 weeks have an 85% chance (The Express Group, 2009). In the US, infants overall have over a 99.3% chance of surviving to age 1.

This survival rate corresponds to an infant mortality rate of 5.8. This means that for every 1,000 live births in the US, 5.8 infants will die before they reach one year old. The United States’ rate is higher than you might guess. Monaco (1.6), Japan (2.0), and Iceland (2.1) have the lowest infant mortality rates in the world. The US rate is only the 55th best in the world, worse than such countries as Canada, Czechia, Ireland, Belgium, Hong Kong, France, Germany, Slovenia, and the Netherlands. A staggering 18 countries have infant mortality rates above 60. Afghanistan’s rate is 110 per 1,000 live births. Let us repeat that. In Afghanistan, for every 1,000 live births, 110 children will not survive to see their first birthday. (The infant mortality rates can be found in the CIA World Factbook, 2017.) Countries that have extremely high infant mortality rates are, without exception, very poor. The children die from disease (including AIDS), parasites, malnourishment, and poor sanitary conditions (Population Reference Bureau, 2004).
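The conversion between a survival percentage and a mortality rate expressed “per 1,000 live births” is straightforward arithmetic:

\[ \text{infant mortality rate} = \left(1 - \frac{\text{survivors to age 1}}{\text{live births}}\right) \times 1000. \]

A rate of 5.8 means 5.8 deaths for every 1,000 births, or a survival rate of (1,000 − 5.8) / 1,000 = 99.42%, which is why we said “over 99.3%” above. By the same arithmetic, Afghanistan’s rate of 110 corresponds to a survival rate of only 89%.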

Within the US there are substantial differences in infant mortality for different ethnic groups. According to the Centers for Disease Control and Prevention (2016), in the year 2016, Asian infants had a mortality rate of 3.6, White and Hispanic infants had rates around 5.0, Native Hawaiian or other Pacific Islander infants had a rate of 7.4, American Indian/Alaska Native infants had a rate of 9.4, and African American infants had a rate of 11.4, an infant death rate similar to Tonga’s (in 2017), the 99th-ranked country in the world. The US Government’s Centers for Disease Control and Prevention Office of Minority Health set a goal to eliminate the racial and ethnic differences in infant mortality by the year 2010, but they were obviously unsuccessful. They have focused on the likely causes, such as medical problems and illnesses, lack of prenatal care, poor nutrition, smoking, and substance abuse, but it is clear that more effort is necessary (CDC Office of Minority Health, 2004).

Physical Development in Infancy and Childhood

Newborns enter the world with a set of programmed behaviors. Several of these reflexes are clearly designed to help the infant survive. For example, if you stroke the cheek of a newborn, they will turn their head toward the stroke; this is called the rooting reflex, and it helps the newborn find their mother’s nipple. Newborns will also reflexively suck anything that touches their lips. Contrary to some people’s beliefs, newborns can see, just not very well (in the words of Module 12, their visual acuity is poor). Their clearest vision is for objects that are about nine inches away, almost exactly the distance between a nursing infant and their mother’s face. As you will see in Modules 16 and 17, newborns are actually quite a bit more capable than you might think, and they are prepared to make enormous strides in cognition and social relationships.


A YouTube element has been excluded from this version of the text. You can view it online here: https://cod.pressbooks.pub/introductiontopsychologywinter21/?p=99

Children are usually referred to as infants until they are two years old, although some people refer to children between one and two as toddlers. Physical development, or at least growth in size, slows dramatically during the first year, a trend that continues until the adolescent growth spurt. Think about what would happen if growth did not slow down. The fetus grew from 2 to 20 inches during the last 26 weeks before birth. If the new baby grew 18 inches every 26 weeks, it would be about 4 feet, 8 inches tall at one year. Parents complain now about their children outgrowing clothes too quickly. In the US, one-year-old babies actually average about 29 inches in height and weigh about 22 pounds, and two-year-olds average 34 inches and 28 pounds (National Center for Health Statistics, 2000).
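The arithmetic behind that “what if” is simple: a year contains two 26-week periods, so growing 18 inches per period would add 36 inches to the newborn’s 20:

\[ 20\ \text{in} + 2 \times 18\ \text{in} = 56\ \text{in} = 4\ \text{ft}\ 8\ \text{in}. \]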

Young parents sometimes have mixed feelings about one of the most important physical developments, their first child’s developing locomotion skills. Many parents compare notes with peers, swelling with pride when their child can crawl or walk earlier than another child. On the other hand, the child’s newfound ability to move to a different location shows them how unprepared they really are for the rigors of vigilant parenting. For the first six months, parents can be pretty sure that their child will be where they left them if they need to leave the child alone for a minute or two. An infant who can move around, though, requires constant attention and an extremely “child-proofed” house (e.g., electrical cords and all small objects safely out of reach, outlets covered, stairs barricaded).

Most children learn to walk sometime around 1 year of age. The stages of development on the path to walking differ little across children, but the length of time children stay at a particular stage differs a lot. Sometime around four to five months old, infants learn to roll over. By seven months, most can sit up, and they begin crawling by around eight to nine months. Many infants can stand while holding on to something by the time they learn to crawl. From there, they typically learn to walk while holding on to objects, and then progress to full, albeit extremely unsteady, walking (usually sometime around 12 to 14 months). Infants “wobble” when they walk; the side-to-side movements of each step can be larger than the forward progress. Also, each step covers a different distance, making for a very unsteady and irregular gait (Clark et al., 1988). As infants get older, these irregularities even out and their gait becomes steadier.

What about the pride that some parents feel from their children’s walking accomplishments? Do they really deserve any credit? Infants’ walking skills develop through increases in strength and balance; these increases come about via a combination of growth of the body, maturation of neurons, and experience. Experience can have some impact on the age at which an infant begins to walk, but the other two components must be in place. No matter how much practice you give a 4-month-old infant, it will not help them walk at that age. Once the infant’s body is ready, however, early practice does seem to accelerate learning to walk. Newborns have a walking reflex: if you hold them upright and allow their feet to contact a moving surface, they will move their legs as if walking. In one study, parents guided their infants through this walking reflex for about two and a half minutes of “practice” per day over the first eight weeks after birth; those infants walked about two months earlier (9 months versus 11 months) than a “passively exercised” control group whose parents pumped their legs “bicycle-style” for the same amount of time each day (Zelazo et al., 1972). So, parents might have some impact on the age at which their children begin to walk.

You should realize, however, that it is not necessarily a good thing to have an infant who walks early. First, infants who begin walking at later ages will quickly catch up to earlier walkers, and later practice is the most important factor for improving walking skills (Adolph et al.,  2003). Second, infants’ skulls are not yet fully formed; infants who walk very early may be more prone to injuries from falls because their skulls may not be ready for it (Gott, 1972).

There are really no physical milestones in later childhood as momentous as learning to walk. Rather, the remainder of the period is marked by continuing slow growth and development of more complex skills. Gross motor skills, such as running, jumping, skipping, and balancing, begin developing first and continue to develop throughout childhood. For example, children’s ability to balance, the key skill underlying all standing skills, improves throughout the first decade (Roncesvalles et al., 2001). We tested a four-year-old, an eight-year-old, and an eleven-year-old on their ability to balance on one foot with their eyes closed. The four-year-old lasted three seconds. The eight- and eleven-year-olds were able to balance for a full two minutes, but the eight-year-old needed to hop around a lot in order to make it. Fine motor skills, the ones that use the small muscles of the hands and fingers and require a fair amount of precision, begin developing later than gross motor skills. Perhaps the easiest way to see this is through children’s drawing skills. Children progress from scribbling at age two, to drawing simple shapes at three, to drawing recognizable pictures by about four or five (Kellogg, 1967).

Growth slows to about two to three inches and five pounds per year. It is as if the little body is lying in wait for the bombshell of puberty, which marks the beginning of adolescence.

  • Based on stories you might have heard from your family, do you think that your early experiences influenced the rate of any childhood physical developments, such as walking?
  • Which period of physical development do you find more interesting, the nine months before birth, or the two years after birth? Why?

15.2 Adolescent and Adult Physical Development

  • Think ahead to how your body will change over the next several decades. Be specific.
  • Are the changes generally good or bad?
  • Which changes seem in your control?
  • Assuming that there are some bad changes that you anticipate, is there anything you plan to do to prevent them?

From the prenatal period through infancy and childhood, we see a pattern in physical development, namely a slower and slower rate of change. With the advent of adolescence, there is a stark reversal of that trend. Seemingly overnight, the physical growth rate increases dramatically, and the individual who was a little boy or girl yesterday rapidly comes to resemble a man or woman. When we reach the end of adolescence and enter adulthood, physical growth stops completely. Common wisdom holds that people quickly reach their peak in adulthood, make it “over the hill,” and begin the gradual, but accelerating and inevitable decline. As is often the case, however, common wisdom is not exactly right. Let us now turn to physical development in adolescence and adulthood and see what, in fact, does typically happen.

Physical Changes in Adolescence

We’ll begin with the two immense physical changes that occur during adolescent development: sexual maturity and rapid growth, commonly known as the adolescent growth spurt. Puberty is the term used to describe the period during which the body reaches sexual maturity; it roughly corresponds to adolescence, or around the teenage years. But let us ignore these obvious signs of physical development for a moment and focus on the brain and biochemistry. Hormones help to explain how both the growth spurt and puberty take place.

In Module 11, we briefly described the hypothalamus and pituitary gland; you may recall that the hypothalamus directs the pituitary gland to release hormones. It is time to give you some details about the hormones released by the endocrine system, to which the pituitary gland belongs. The endocrine system is composed of several glands throughout the body; the principal function of these glands is to release chemicals called hormones. These hormones travel through the bloodstream to reach target areas elsewhere in the body, typically other glands or nervous system parts.

The glands that are important for sexual development and the growth spurt are the pituitary gland and the gonads, or sex glands. The pituitary gland is often called the master gland because one of its key functions is to release hormones that direct the activity of other glands, such as the gonads. The gonads, testes in males and ovaries in females, serve the dual function of producing sex hormones and producing the sperm cells and ova (eggs). The most important sex hormones are androgens (especially testosterone, one specific kind of androgen), estrogens, and progesterone. Both testes and ovaries produce all three of the sex hormone types, but the testes produce more androgens and the ovaries produce more estrogens and progesterone. Consequently, androgens are often referred to as male sex hormones, whereas estrogens and progesterone are referred to as female sex hormones.

pituitary gland: a gland, often called the master gland, that releases hormones which control vital body functions and direct the activity of other glands

endocrine system: the system of hormone-producing glands located throughout the body

gonads: sex glands; they produce sex hormones

androgens: a group of hormones that play a role in male traits and reproductive activity; the best-known androgen is testosterone

estrogens: a group of hormones that play a role in female traits and reproductive activity

progesterone: a hormone that plays a role in female traits and reproductive activity

During adolescence, sex hormones trigger the development of primary and secondary sex characteristics. Primary sex characteristics are the maturation of the reproductive organs. They become fully functioning and capable of reproduction during puberty. Secondary sex characteristics are other features that signal the maturation of the reproductive organs and distinguish men from women. They include growth of facial, body, pubic, and underarm hair; voice changes; changes in body shape; and growth of girls’ breasts. Basically, the pituitary gland increases its release of hormones that direct the testes and ovaries to release their own hormones. In boys, the increase in androgens leads to masculine physical features; in girls, the increase in estrogens leads to feminine physical features.

primary sex characteristics: the maturation of the reproductive organs

secondary sex characteristics: features that signal the maturation of the reproductive organs and help to distinguish men from women

The Adolescent Growth Spurt

A second major function of the pituitary gland is to secrete growth hormone, which travels through the bloodstream to reach muscles and bones and causes them to, well, grow. At puberty, the pituitary gland also increases the amount of growth hormone it releases, leading to the adolescent growth spurt. There is enormous variability in when the growth spurt begins; females usually start sooner than males. At the peak of the spurt, males average nearly 4 inches of growth per year, and females at their peak average just over 3 inches per year. Many parents swear that their adolescent child grew an inch overnight. That is probably not true, but it would be very difficult to prove that it could not happen. Over the whole growth spurt, the average male will add 14.5 inches in height, while the average female will add nearly 13.5 inches. Because females start the growth spurt sooner, they are shorter when their spurt begins than males are when theirs begins; this difference at the start of the spurt accounts for most of the difference in height between adult men and women (Tanner, 1991).

Different body parts grow at different rates, so body proportions change dramatically during the growing period. This sometimes leads to anxiety and embarrassment, as adolescents come to believe that their feet or hands are too big, or worry about their clumsiness or awkwardness (Downs, 1990).

Females finish growing taller at about 17, males at about 21; again, however, there is large variability. Weight and height increases occur at around the same time for males. For females, however, weight sometimes begins to increase earlier than height, leading some females and their parents to worry about weight gain (Spear, 2002).

So, at the end of the adolescent period, we see a bit of a parallel with what happened after the dramatic growth over the first year of life. This time, however, instead of a slowing of growth, there is an outright stopping. As you will see, however, the cessation of physical growth does not mean that development and change stop, and it certainly does not mean that unavoidable decline is right around the corner.

Physical Changes in Adulthood

It is common for men and women to gain weight as they age. Both physical and lifestyle changes associated with aging contribute to these common increases. Further, being overweight can lead to a reduction in physical activity that can accelerate age-related changes, making a bit of a vicious cycle. Specifically, basal metabolic rate and muscle mass both decline as we age, beginning at around age 30 (Poehlman et al., 1990; Poehlman et al., 1993). Basal metabolic rate (BMR) is the amount of energy that our body expends when it is at rest; it represents the energy requirements (or the calories burned) for the basic functions of life, such as breathing, maintaining our heart rate, and supporting the cells of our body. As we age, those basic requirements decrease, meaning we burn fewer calories at rest. The cause of the decline is at least partially related to another loss due to aging, namely muscle mass, or the amount of lean muscle tissue in our bodies. Because our bodies use more energy maintaining muscle cells than fat cells, the loss of muscle cells leads to a lower BMR.

basal metabolic rate: the energy requirements for the basic functions of life

muscle mass: the amount of lean muscle tissue in a body

Maximum aerobic capacity, bone density, and flexibility all decline gradually over time, also beginning at around age 30 (Lim, 1999). Altogether, these declines are very bad news for the minuscule portion of the population who are currently at their peak strength and aerobic fitness. For example, consider the all-time greatest NBA basketball player, Michael Jordan, a man who was not only at his own peak physical capacity but was also a world-class athlete. From age 26 to 32, Mr. Jordan averaged a steady 31.5 points per game. From 1996 to 1998, when he was 33 to 35, his scoring began to drop a little, to an average of 29 points per game. Then, he retired for three years. At 38, Michael Jordan returned to the NBA for two final seasons, during which he scored only 21.5 points per game. Although many factors, such as the quality of one's teammates and a player's role on the team, contribute to a player's scoring average, it is difficult to deny an age-related decline as part of the story. Oh, and if you disagree and think that LeBron James is the all-time greatest, you will have to write your own textbook to include that opinion.

For the rest of us who are not at our maximum physical capacities, the actual physical decline is so gradual that we barely notice it for years. When most people complain about physical decline beginning in their 30s, they are very likely reporting on the results of the lifestyle side of the equation. As people settle into careers, many of them behind a desk, and take on family and other responsibilities, they find it difficult to exercise regularly and wind up leading far more sedentary lives (this, of course, also contributes to the increase in weight). Thus, the decline they experience is more of a detraining effect than anything else. The best news about the actual age-related physical decline is that it can be slowed with physical activity. What this means in practical terms is that unless you are currently at your maximum possible fitness, you can continue to increase strength and fitness for many years, as the benefits of activity will more than offset the small declines in your maximum capacity. For example, if you engage in muscle-building exercise (i.e., strength training), you can prevent the decline in muscle mass and the consequent decrease in BMR for many years. The declines do become more noticeable around age 55 to 60 (Lim, 1999), so even if you continue to exercise strenuously, you will probably begin to notice a drop-off around then. One potential problem to keep in mind is that it becomes more difficult to exercise strenuously as we age. Range of motion and flexibility decrease, and recovery time after exertion lengthens, making injuries more likely and slower to heal and requiring more rest between workouts. Again, these declines begin gradually and accelerate as we age.

  • Did you or your parents keep track of your growth during your childhood and adolescence?
  • When did you start the rapid growth phase of adolescence?
  • What was the most you grew in any one year?
  • Do you remember when various physical developments, such as the growth of body hair and the beginnings of sexual maturity, took place for you?
  • If you are over 30, have you noticed any declines yet? Have you noticed any physical areas in which you are still improving?

15.3 Brain Development Throughout the Lifespan  

  • What do you think happens to the brain when it develops after birth? Hint: it is not the addition of new neurons.

Brain development is, in many ways, a physical development no different from the others in this module. Because the brain, as the source of all of our behavior and mental processes, bears a special relationship to psychology, however, it is worth pulling it out (so to speak) and describing its changes in a section separate from the other physical developments. As you learn about the types of developments that take place at different times during the lifespan, as well as the different brain areas involved, you will begin to understand and appreciate many of the differences in the psychology of infants, children, adolescents, and adults.

Prenatal Brain Development

Recall that when the embryo begins organizing itself, one of the new specialized areas is the neural tube. The neural tube actually develops from a section called the neural plate, which appears by three weeks after conception. The cells in the neural plate are called neural stem cells ; they have the ability to develop into any cells of the nervous system, such as glial cells (the cells that support and communicate with neurons) and immature neurons (Varoqueaux & Brose, 2002). The cells at the top of the developing neural tube will become the brain; three distinct sections that will become the hindbrain, midbrain, and forebrain can be seen during the second month after conception. The rest of the neural tube develops into the spinal cord.

Once the neural tube is formed, the number of neurons increases rapidly. Then, they begin to move, or migrate, to their eventual locations. Migration is guided by chemicals contained in the particular areas through which the neurons move (Gleeson & Walsh, 2000; Golman & Lushkin, 1998). Although the cells are changing while they travel, they will not develop into a specific type of neuron or glial cell until they reach their destinations.

The main change that the cells undergo while they are migrating is axon growth. Axon growth, along with dendrite growth (which begins in earnest after the migration is completed), prepares the developing nervous system for the process of synaptogenesis, the formation of new synapses. Recall that a synapse is the small area where neural communication takes place, a "connection" between the axon of a sending neuron and the dendrite or cell body of a receiving neuron. Some synaptogenesis takes place before birth, but the bulk of it happens after the infant is born.

neural stem cells: primitive nerve cells that have the ability to develop into any cells of the nervous system

migration: the movement of neurons from their point of origin to their eventual location in the developing brain

synaptogenesis: the formation of new synapses between neurons

Infant and Child Brain Development

After birth, the infant’s brain does continue growing rapidly, approximately tripling in weight over the first year. A newborn’s brain has about 86 billion neurons. An adult’s brain has about 86 billion neurons. So, rather than adding new neurons, infant brain growth occurs primarily within the existing neurons. Myelin sheaths develop to cover many axons (this process is called myelinization ), and dendrites develop many new branches. The increase in dendrite branches allows the massive synaptogenesis to occur.

Synaptogenesis is very selective. Specific axons hook up with specific target areas, resulting in the creation of many different types of synapses. It is among the more amazing engineering feats in the universe, with the brain ending up with some 100 trillion synapses, an extraordinarily complex network of interconnected neurons. Synapse formation is particularly massive in the cortex, the most important brain area for higher intellectual functions. At the end of the major synaptogenesis during infancy, there are no “stray” neurons. Every neuron in the brain is connected, through synapses, with many others.

The major ways the developing brain accomplishes the final wiring are through the overproduction and later pruning of synapses and through the death of unused neurons. Sometime during the first or second year, the number of synapses in particular brain areas reaches a maximum (the exact time depends on the brain area), each area containing many more synapses than will be present in the adult brain. Then, the pruning of the oversupply of synapses occurs largely through the "use it or lose it" principle. Synapses are activated by environmental stimulation; those that are not used die off, as do neurons that are left unconnected.

Research has shown that when infants are raised in impoverished environments, brain development suffers (Rutter, 1998; Shonkoff & Phillips, 2000). Many parents, and more than a few marketers, have responded to these kinds of research findings by surrounding (or recommending to parents that they surround) their children with educational toys, games, and videos. These toys aren't bad, but the trouble is that the research findings are likely being misinterpreted. Children in impoverished environments aren't struggling because of a lack of educational toys; they are struggling because of a lack of social stimulation. It appears to be social stimulation, such as talking and singing, playing, and providing consistent and loving care, that is most needed for healthy brain development. Although the more rigorous academic kind of stimulation may be beneficial (future research may be able to tell us this), it almost certainly will not be if it takes the place of a more nurturing kind of environment.

The pruning of synapses makes it sound as if there is only loss after early childhood. Recall that neurons do continue to be generated throughout life, particularly in the hippocampus. Also, after the conclusion of the synaptogenesis burst in infancy, the brain continues to create new synapses; it does so at least throughout adolescence, just never as much as it did during infancy. Throughout life, the brain will continue to strengthen some synapses, often through using them, and weaken or eliminate others (Varoqueaux & Brose, 2002).

Adolescent and Adult Brain Development

Synaptogenesis slows dramatically by adolescence. Another critical process for brain development, myelinization, continues, however. In particular, the addition of myelin sheaths to axons is most pronounced in the prefrontal cortex during adolescence and early adulthood, making it the latest area of the brain to mature. This late development may very well be related to the cognitive developments that take place in adolescence (Kwon & Lawson, 2000).

Obviously, learning continues throughout life. Learning in the adult brain appears to involve the strengthening and modification of existing synapses, rather than the creation of new ones (Bourgeois, Goldman-Rakic, & Rakic, 2000; Varoqueaux & Brose, 2002). As a consequence, although the adult brain does undergo a reduction in the number of synapses over time, the reduction does not dramatically affect our ability to learn. Although the brain does continue to produce new neurons from stem cells throughout life, researchers have not yet discovered how these neurons are put to use.

Without a doubt, the big story about the adult brain is the decline. Synapses decline, plasticity (the brain's ability to reorganize itself) declines, and so on. The story is not nearly as pessimistic as you might think, however. Biologist Robert Sapolsky (2004) notes that it is a myth that we lose enormous numbers of neurons as we age, an error based on researchers' earlier conclusion that the brain changes seen in people suffering from dementia also occur in normal aging. We do lose neurons as we age, but not nearly as many as common wisdom holds. Neuron loss is not distributed evenly throughout the brain either, which helps explain many other characteristics of aging. The hippocampus, an important structure for memory, is one of the biggest losers. Finally, because of the reduction in synaptogenesis, the brain plasticity that we observe so readily in younger people, particularly children, is less dramatic in the older brain. Older people's brains can recover from various types of damage, such as injuries or strokes, but the recovery is slow and it is usually not complete.

  • Compare the sections on physical development in this module to the one on brain development. Can you recognize parallels or mismatches between the two types of developments?

Module 16: Cognitive Development

Take a few moments to sit quietly and think about anything you want.

What just went on in your mind? If you are like many people, you heard a little “voice” in your head articulating your thoughts. Maybe it said, “Hmm, I wonder what I should think about.” What happens inside the minds of infants and children when they think? It seems pretty likely that, at least for infants, there is no “voice in the head” speaking to them, so at some level, their thinking, or cognition, must differ from ours.

But how different is it? And how could you find out? Maybe infants do not think at all. Perhaps newborns’ behavior is fixed by simple reflexes. Over time, they begin to react to the external world and through experience (i.e., learning) they begin to think. At the other extreme is the possibility that infant thought is exactly like adult thought (with the obvious exception that they cannot think in words). Psychologists who study cognitive development in children have been asking these questions and providing the answers through research for many years. As you will see, particularly in the case of infants, the psychologists have come up with some very ingenious methods of finding out what goes on inside the mind of a child.

What about preschoolers and older children? Do their thought processes differ from ours? Why might these be important questions? If you have not already, many of you will have the opportunity to teach children one day, perhaps as an educator, but more likely as a parent. No one would try to teach children subjects for which they lacked sufficient background knowledge. For example, you would not teach calculus to a child who had not yet learned arithmetic. But what if you ignore the way the child’s mind works, assuming that it works the same way that yours does? You run the same risk of being unable to reach the child. You need not be teaching an academic subject to a child to be concerned about this. Many common parenting difficulties can be improved through an understanding of child cognition. One goal of this module is to introduce you to what psychologists have discovered about child cognition and to hint at some of the ways you can use this knowledge to understand and improve relations with children.

And what about when we get older? Do we continue to think the same way? Or are we destined to break down, to decline? Modern developmental psychologists ask these questions about people’s changes in cognition throughout their whole lifespan. And their answers can help us to understand and interact with each other and can teach us what to expect for ourselves as the years go by.

16.1 begins our coverage of cognitive development with a description of what might be the greatest intellectual feat of our lives, developing and using language.

16.2 introduces us to the work of the most famous and influential researcher in the history of cognitive development, Jean Piaget. Because his work dealt exclusively with children and adolescents, this is the one section that will not include any reference to development and cognition in adulthood.

16.3 expands upon the work of Piaget and introduces some other topics in cognitive development.

16.4 concludes our coverage with another age-limited topic, namely cognitive disorders associated with aging.

16.1 Developing and Using Language

  • How many different words would you estimate that you know?
  • At what age do you think children develop the ability to formulate unique utterances that they have never heard previously?
  • Try to think back to your conversations over the past 24 – 48 hours. What kinds of things do you typically talk about? How do your conversation topics differ when you are talking to friends versus romantic partners versus family versus people you encounter at a place of work?

Language Development

We have bad news. All of us may have already had our most impressive cognitive achievement. We did most of it by the time we were three, so it has been downhill ever since. Think about it. Between birth and age three, nearly all children worldwide have developed from being completely unable to communicate beyond a reflexive cry to having a solid working knowledge of one language or more. This knowledge includes a vocabulary (in English, for example) of 1,000 words, the ability to understand almost any utterance they hear, and the ability to produce an infinite variety of complex and unique utterances that other speakers of the language can understand. By one estimate, fifth graders have learned 40,000 words (Anglin, 1993). Assume that an average college textbook has 500 key terms in it. By the time an average child reaches eleven, he or she has acquired a vocabulary equivalent to the key terms in 80 college textbooks.
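To make the arithmetic behind that comparison explicit (using the rough estimates above, which are illustrative figures rather than precise counts):

40,000 words ÷ 500 key terms per textbook = 80 textbooks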

Many mental abilities once thought to separate us from the “lower” animals have turned out not to hold up under close scrutiny. Using tools, planning, solving problems, categorizing, and many other cognitive feats have been observed in non-human primates. What about language, though? Without a doubt, non-human animals communicate with each other. But so far, no other species has been found that can duplicate our ability in language. Although there have been a few apes that have been taught sign language, the cases have been largely oversold. Although their learning is impressive, it takes a massive effort to get the apes to reach a level of facility that an average three-year-old reaches with no special instruction whatsoever.

The traditional view of language is that it is the ultimate cultural invention; because we invented large vocabularies, grammar, and syntax, the reasoning goes, we are unlike other animals in profound ways. In other words, language has transcended our biological roots and shows that humans have become something more than a really smart animal. We see this kind of thinking as another example of believing in nature versus nurture. "If something is an invention of culture, it cannot be a product of nature," the falsely dichotomous thinking goes.

We prefer to think of language as perhaps the greatest example of nature and nurture working together to produce a uniquely human ability (Pinker, 1994). The nurture of language is that each culture develops its own unique language, and children learn the language to which they are exposed. The nature of language is that children will learn a language if they are exposed to one. In other words, all children are born with the ability to learn language. In a striking parallel between humans and monkeys, Poremba et al. (2004) demonstrated that the left hemispheres of monkeys are active when they are listening to monkey vocalization, but not other similar sounds. Human beings process human vocalizations in much the same way. This evidence suggests that human language may have evolved as an innate capability.

Another indication of the nature of language is the universality of the language-learning process. Throughout the world, infants progress through the same stages in the same order on the road to learning a language, although they may reach a specific stage at a different time (this is true within a given culture, as well). Infants quickly progress by the second or third month from simple crying to producing sounds commonly known as cooing. Coos consist mostly of vowel sounds ("aah"), sometimes with a consonant in front ("goo"). Infants coo when they are interacting with other people, and parents often learn that they can increase the frequency of cooing by responding to the sounds. Around six to nine months, infants begin the babbling stage. When babbling, infants string together syllables consisting of vowels and consonants. Early on in babbling, the infant produces nearly all possible human speech sounds. You can see the influence of nurture as they progress through the stage; they begin to focus on only the speech sounds present in the language they are beginning to learn (de Boysson-Bardies et al., 1989).

Incidentally, deaf children who are learning sign language develop very similarly. Just like hearing children, they begin babbling as infants, although often a little later. They also “babble” with their hands, producing hand formations that will eventually be used in signs. Some researchers have reported that deaf infants often begin producing signs earlier than hearing infants produce words (Bonvillian et al., 1983; Orlansky & Bonvillian, 1988). Others have found that the age at which signs and words are produced are similar for deaf and hearing children (Acredolo & Goodwyn, 1991; Petitto & Marentette, 1991).

Infants enter a one-word stage at about one year old. The magical appearance of the first word is not as much of a milestone as you might expect. First, it probably is not all that important developmentally. Throughout the development of language, infants and young children consistently understand far more than they can produce, so infants know many words before they produce their first one. Second, the first word is hard to catch. Many parents miss it (even if they don't realize it) because it is difficult to distinguish the first real word from a coincidental babble. For example, imagine a parent who hears her daughter say "ba" one time in the presence of a ball when she is 8 months old. Of course, many parents would believe that it was her first word, and they might also be quite proud of her for being so advanced and of themselves for being such good parents. It does not count as a real word, though, unless the infant produces it consistently. And much of the time, they do not, especially at 8 months old. Most children's actual first spoken word probably occurs sometime around their first birthday.

Once children begin to use words consistently, parents and other people familiar with a particular child can understand what they are saying, even if the pronunciation often makes the speech unintelligible to an outsider. The learning of new words begins slowly at first and picks up speed dramatically at around 18 months. A typical 18-month-old may be able to say 50 words, while a two-year-old can say 300 or more.

During the height of the new word explosion at age two, children begin a sentence production stage. Now that they have large sets of words with which to express themselves, producing a single word at a time is too restrictive. In order to express complex thoughts, a child needs to learn how to string together words to form meaningful sentences (Bloom, 1998). At first, these are simple two-word utterances consisting of a noun and a verb or another descriptive word (e.g., "mommy go," or "daddy home"). These sentences resemble the way messages used to be sent by telegram, by including only the essential words, and the child is said to be at the telegraphic speech stage. During the third and fourth years, children's sentences become longer, as the missing words from the telegraphic speech get inserted (for example, "daddy home" becomes "daddy is coming home"), and sentences are used to express more complex ideas (for example, "mommy is going to the store to buy a cake"). From this point, language continues to grow more complex, and children still have a rapidly increasing vocabulary. As they progress through childhood, they are able to understand and produce more and more complex sentences.

Video: "The 4 to 6 Month Baby Communication Milestones to Look For"

You can view this video online at: https://cod.pressbooks.pub/introductiontopsychologywinter21/?p=101

You can also access this video directly at: https://youtu.be/d0FGHFrMRXI

How Does Language Work for Adults?

Let's skip ahead a few years and consider how we communicate with each other through language as adults. In essence, we are trying to build the same situation model between a speaker and a listener. A situation model is a mental representation of the time, space, causes, intentions, and the individuals and objects related to a conversation. In other words, it is a mental representation of the topics (Kashima, 2020; Pickering & Garrod, 2004). One key part of the process is prediction. While each person is trying to plan what to say next, they are actively trying to predict what the other person is going to say next, which can only happen accurately if the two share the same understanding of the situation, that is, the same situation model (Pickering & Garrod, 2013). Have you ever been in a conversation where the other person tries to finish your sentence for you? Although that can be annoying, it does reflect a natural outcome of the normal processing.

One way to construct the same situation model could be for both people to literally say everything. And how many conversations have you had where that happens? Right, none. Instead, we rely on the other person making the correct inferences (see Modules 5 and 7). Again, an inference is assuming that something is true based on previous knowledge or reasoning. So the two people speaking are trying to ensure that each is making the same inferences to build the same situation model while avoiding stating unnecessary things. How? Two ways are through two related ideas: audience design and common ground. Both require making an assessment of the knowledge that the other person has about the topics of the conversation. With common ground, we make a judgment of the knowledge shared between two people, which allows certain information to go unstated and unexplained. For example, two baseball fans can meaningfully share that WHIP and OPS are vastly superior to ERA and batting average for determining the value of an MLB player. If you have no idea what we just said, then you understand the need for audience design, in which a speaker assesses that different listeners require that different information be provided in order to make an utterance understandable. As a result, we tailor our utterances to the specific audience we are talking to. So, to someone new to baseball, we would begin by explaining that WHIP is a measure of the quality of a pitcher; it stands for the number of walks plus the number of hits that a pitcher gives up per inning pitched (Walks plus Hits per Innings Pitched, or WHIP). And of course, we would continue by defining the meaning of the other abbreviations and terms.
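For readers curious about the statistic itself, the definition above amounts to a simple formula (the numbers in this worked example are invented purely for illustration, not taken from any real player):

WHIP = (walks + hits) ÷ innings pitched

So a pitcher who allowed 50 walks and 150 hits over 200 innings would have a WHIP of (50 + 150) ÷ 200 = 1.00.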

We rely on priming to do much of the work for us. You can think of  priming as reminding, the activation of some idea or concept from memory by another related concept. So if we say Y-M-C-A, and you instantly think of the Village People, you can thank priming for that. And, of course, you can thank the Village People (The Village People was a musical group that had a few gigantic hits back in the 1970s, none more gigantic than YMCA).

Video: "Village People - YMCA OFFICIAL Music Video 1978"

You can also access the video directly at:  https://youtu.be/CS9OO0S5w2k

audience design: in conversation, when a speaker assesses that different listeners require that different information be provided in order to make an utterance understandable; as a result, we tailor our utterances to the specific audience we are talking to

common ground: judgment of the knowledge shared between two people, which allows certain information to go unstated and unexplained

priming: the activation of some idea or concept from memory by another related concept

situation model: a mental representation of the time, space, causes, intentions, and the individuals and objects related to a conversation

  • Think about some different-aged infants and young children that you know. Can you recognize the stage of language development in each child? Try to recall some representative utterances or sentences from each child.

16.2 Where It All Started: Jean Piaget

  • Imagine each of the following situations:
  • You are playing with a six-month-old infant and suddenly leave the room to answer the telephone.
  • You take a four-year-old child’s small cup of juice and empty it into a larger cup.
  • While trying to settle a fight over the TV between a seven-year-old and a 12-year old, you decide to let a coin flip decide. The seven-year-old loses.

For each situation: How do you think the child will react? What is going on inside the child’s mind?

The Swiss psychologist Jean Piaget (1896 – 1980) was the most influential theorist in the history of developmental psychology. His thinking has forever changed our view of how the minds of children work, and he basically invented the field of cognitive development as we know it today. Piaget’s insight was to notice that children understand the world differently than adults do. Like so many brilliant and creative ideas, Piaget’s insight came about by thinking about a commonplace phenomenon in a new way. He was working in Paris in the 1920s for the Binet Laboratory, a publisher of intelligence tests, helping them prepare a reasoning test that had been developed in English for French children. It is almost trivially true that children get questions wrong when taking reasoning tests. After all, that is how psychologists measure individual differences in reasoning ability or intelligence. What Piaget noticed is that children’s errors were not haphazard; if a child missed one specific question, he or she was likely to miss other specific questions, as well. Moreover, children at similar ages tended to err the same way. It was his realization that children made errors, not because of a lack of intelligence, but because of their developmental state that led to the vastly different conception of childhood cognition that we have today.

So, Piaget first raised the question, do children think differently from adults? Most people are surprised to learn that prior to Piaget, children were more or less considered miniature adults, whose cognitive processes were essentially the same as those of adults. The “obviousness” with which many people observe that children certainly think differently from adults is itself a testament to Piaget’s influence. His answer, of course, was “yes, they think differently,” and it explains many everyday observations about children. If you understand the differences between adult and child thought, it can help you to correct misunderstandings and miscommunications that might otherwise occur. As you will see, however, things are not so simple. Although there is good agreement among psychologists that the cognitive processes of children differ from those of adults, there is also agreement that the differences are not as dramatic as Piaget proposed. So, let us take a look at some of Jean Piaget’s major ideas and try to use them to understand the minds of children (particularly children you may know).

Piaget sought to explain two main aspects of the development of cognition:

  • How conceptual schemes are used to interpret some new experiences, and how the schemes are changed to account for others
  • How cognitive development proceeds through four stages over the first 12-15 years of life

Conceptual Schemes, Assimilation, and Accommodation

Piaget believed that throughout life, our goal is to build up an understanding of the world through establishing and modifying conceptual schemes. For Piaget (and many others who followed him), a scheme is a framework that a child uses to organize knowledge about the world and interpret new information; it is essentially the same idea as a concept (see Module 7.1). Schemes are the mental frames that allow us to comprehend the vast amount of information to which we are constantly exposed.

Schemes for infants are very simple; they are frameworks for understanding actions or simple sensory input. For example, Piaget would have called the newborn’s reflexive sucking a scheme. Later in life, conceptual schemes include frameworks for understanding more complex actions, such as laughing or walking, as well as other entities in the world, such as objects, people, and animals. It is the infant’s cognitive task to come to an understanding of what the world is and how it works by using and modifying schemes. As the child develops, conceptual schemes become more complex through the processes of assimilation and accommodation.

Sometimes, a new experience or piece of information is understood as an example of an already established scheme, the process that Piaget called assimilation. For example, suppose a child has a conceptual scheme for dogs that has been built through experiences with the family dog, a Labrador retriever. Sometime later, they encounter a corgi and are told that this, too, is a dog. The new example is assimilated into the scheme for dogs, allowing the child to understand what the new animal is.

At other times, a new experience or piece of information may not fit into a preexisting scheme. The child may initially try to assimilate it but will fail. In order to arrive at a satisfactory understanding of the world, the child will need to use the process of accommodation,  modifying the initial scheme to allow for separate concepts. For example, upon seeing a wolf at the zoo, our child may assimilate at first and think that it is a dog. They will need to accommodate, that is, change this too-inclusive scheme for dog and divide it into dogs and wolves.

Accommodation need not follow an inappropriate assimilation, as in the previous example; it can also be used to help the child make subtle distinctions between similar concepts. Again, think about our child who has formed their scheme of dog from their encounters with their very friendly family dog. Perhaps the neighbor’s dog is not so friendly. The child will need to learn to distinguish between friendly and unfriendly dogs, so that they can figure out which ones are safe to approach and which ones they should avoid. That is, they will need to accommodate and create new sub-conceptual schemes, one for friendly dogs and one for unfriendly dogs.

Assimilation and accommodation often occur at the same time. For example, while the child is assimilating the neighbor’s dog, correctly realizing that it is another example of the same conceptual scheme (i.e., it is also a dog), they can also accommodate, or subdivide their scheme of dog into friendly dogs and unfriendly dogs.

Assimilation and accommodation occur for all of the conceptual schemes that we hold, including social categories. You should realize that forming and modifying conceptual schemes are not trivial processes and can be quite difficult for a child to carry out.

Video: "Development: Schemas, Assimilation, & Accommodation"

You can also access the video directly at:  https://youtu.be/Xj0CUeyucJw

scheme:  mental framework for organizing knowledge about the world and interpreting new information; same idea as concept (Module 7)

assimilation:  interpreting a new experience or piece of information by understanding that it is an example of an existing scheme

accommodation:  changing an existing scheme to account for a new experience or piece of information that does not fit into it

Piaget’s Stage Theory of Cognitive Development

Piaget proposed that children progress through four broad stages of cognitive development. Within each stage, children continue to use the processes of assimilation and accommodation with their conceptual schemes; in fact, these processes continue throughout life. The key idea for Piaget's stage theory, however, is mental operations. These are mental procedures that can be reversed and are used for thinking, understanding, reasoning, and problem solving (see Piaget, 1942; 1957; 1970). For example, one mental operation is multiplying two numbers to obtain a product. If you start with the product, you can run the operation in reverse; of course, this is division. According to Piaget, children younger than about 2 are nowhere near using these mental operations; these children are at what he called the sensorimotor stage. From about 2 to 7, they are close, but still unable to use mental operations in most situations; hence, he called these children preoperational. Between about 7 and 11 or so, children can use mental operations, but only in certain situations; these children are at the concrete operations stage. Finally, adolescents (and adults) beyond age 11 can use the mental operations in any situation; he called this stage formal operations.

Sensorimotor stage (ages 0 – 2)

Piaget believed that during the first two years of life, the main cognitive tasks for the infant were to learn about how the physical world works and how to interact with the world. During the sensorimotor stage, the child learns how to coordinate sensory input and movements, and thus learns how the world works and their place in it. The infant progresses from producing only simple reflexes, such as sucking when a nipple is placed into the mouth, to producing increasingly complex motor responses to increasingly complex sensations.

Early in the sensorimotor stage, the infant makes many random movements. As some of these movements lead to pleasurable sensations, the infant learns over time to produce them. For example, infants will typically insert their hands into their mouths by accident during the first couple of months after birth. Although they probably find this pleasant, as evidenced by the vigorous sucking that they do, these young infants do not yet purposely put their fingers in their mouths. It is not until a bit later that the infant “discovers” their own fingers and can move them to their own mouth to produce pleasurable feelings. Thus, it is a coordination of sensation (the pleasurable feeling of the fingers) with the motor response (moving the fingers to the mouth) that is a hallmark of sensorimotor development.

At the beginning of the sensorimotor stage, the infant's attention is essentially focused on their own body. Throughout the first two years, that focus gradually shifts to the outside world. At around one year, the infant begins actively exploring the world by manipulating objects: picking objects up, stacking them, putting things inside of other things. Over the years during which one author's three children progressed through the sensorimotor stage, the family ended up losing several television remote controls, as each child delighted in discovering the countless spaces into which the small rectangular devices fit.

One essential conceptual scheme that develops during the second half of the first year is that of an object .  It is the key scheme that allows a rapid movement of thought beyond the child herself to the outside world and is an important basis of nearly all future cognitive development. Just imagine, if you did not even realize that there was such an entity as an object, how could you even think about the world? The sensorimotor child must realize that objects are separate from and independent of the self. In other words, they have to learn that objects are in the world and that the objects do not depend on the child to be there.

The realization that objects (including other people) continue to exist after the child stops looking at them is called object permanence, and it develops between about six months and one year of age. Picture a six-month-old infant sitting in a high chair and playing with a rattle. The rattle is slippery from all of the saliva on it (because, as you know, "playing with" for a six-month-old probably means putting it in their mouth), so they drop it. The typical six-month-old infant will not even look for the toy and will immediately become interested in something else, as if they forgot that they just had a rattle in their hands. That is essentially what Piaget proposed; more precisely, he proposed that the infant forgot that the rattle had ever existed. As the child advances through the second six months, you would see a developing awareness that the rattle exists after it falls. At eight months, they might strain to look or reach for it for a few seconds, but will quickly lose interest. By one year, most infants have a very clear understanding that the rattle still exists. They will very obviously look for and try to reach the rattle, and do not soon forget about it. It is at this age that infants first understand the concept of hiding, and they can begin to play games like hide-and-seek. Prior to that time, the infant would simply forget about the hiding person and fail to seek.

Video: "Piaget - Object permanence failure (Sensorimotor Stage)"

You can also access the video directly at:  https://youtu.be/rVqJacvywAQ

This realization about what an object is allows the child to make great strides in understanding the world. By the end of the sensorimotor stage, infants have learned a great deal more about objects, about the way the physical world works, and how they can interact with the world. As the infants get ready to move into the next stage, they begin to think more in symbols , meaning that they can now represent information from the world in their minds. For example, they can form a mental picture, or image, of a dog, and their conceptual schemes contain a great deal of information for individual concepts (for example, dogs bark, they have fur, they have four legs, and so on). And, of course, perhaps the greatest accomplishment related to the developing child’s use of symbols is their growing language ability.

Preoperational stage (ages 2 – 7)

To a large extent, children in the preoperational stage are defined by what they do not have, namely, mental operations. Although they have mastered the coordination of their sensory experiences and motor responses, learned many of the important principles related to physical causality, can represent the world in their minds, and are quite adept at using language, preoperational children lack the ability to apply the reversible mental operations in most cases. For example, although some very advanced five-year-olds may be able to multiply two numbers together, at least for some simple problems, few of them understand that division is the reverse of multiplication. Piaget believed that preoperational children lacked most of the important mental operations that allow older children and adults to reason logically.

One key type of mental operation that preoperational children lack pertains to physical manipulations of substances. For example, if you take a glob of clay and flatten it out so that it looks bigger, you can simply run the flattening in reverse to realize that the amount of clay has not changed. As a result, you realize that the amount of some substance does not change, or is conserved, when it is subjected to various physical manipulations. Piaget called this understanding conservation, and you can easily imagine many examples of how the form or shape of something might change without changing how much of the substance there is. For example, imagine pouring water from one container to another, spreading out pizza dough before cooking it, cutting spaghetti into small pieces, even tearing a sheet of paper into pieces. In each case, we can mentally reverse the action and know that it is still the same water, pizza dough, spaghetti, or paper. Preoperational children, on the other hand, because they have not yet acquired this operation, are bound by their senses; if something looks like it has more, it has more; if there are more pieces, there is more.

Preoperational children's failure to conserve shows up in many different everyday reasoning situations. Picture a seven-year-old deciding to use a larger-than-usual bowl for their breakfast cereal. Their three-year-old brother thinks this is an excellent idea, so he pours a normal amount of cereal into his own large bowl. When the three-year-old sees that the cereal does not fill up his bowl, he cries because he believes he does not have enough cereal. Unable to "mentally pour" the cereal back into a normal-sized bowl, the youngster does not realize that it is the same amount of cereal that he gets every day.

Video: "Conservation task"

You can also access the video directly at:  https://youtu.be/YtLEWVu815o

Piaget went further than simply describing what preoperational children cannot do; he also described the characteristics of the reasoning processes that these children do have. Recall that they have just left the sensorimotor stage, in which the children develop a basic understanding of the way the physical world works as a consequence of coordinating their senses and actions. You can think of preoperational reasoning as a step beyond this. Preoperational children are beginning to reason about the world, but in a way that is still tied to their own sensations or perceptions. As a consequence, they are egocentric, able to reason using their own point of view only. Piaget's most famous demonstration of children's perceptual egocentrism was through the use of the mountain-view problem (Piaget & Inhelder, 1969). He set up a model of some mountains and placed a doll in the display so that it faced the mountains from a position different from the child's. Piaget then asked the children what view the doll saw. The children answered by pointing to one of several pictures that showed different views of the mountains. Preoperational children usually chose the picture that showed the view that they themselves saw, regardless of the doll's position. You can see preoperational children's egocentrism frequently. For example, preoperational children are not very good at hide-and-seek. As long as they cannot see the seeker, they think they are hidden.

conservation:  the realization that the amount of a given substance does not change, even though its appearance might

egocentrism: the tendency to reason from one's own point of view only

Concrete operations stage (ages 7 – 11)

The concrete operations stage, lasting from approximately age 7 until adolescence, marks the beginning of the child's consistent, though still limited, use of mental operations. For example, if you test a nine-year-old on a conservation task, they are likely to get it right; they are able to mentally reverse an activity such as pouring liquid from one container to another. The use of these operations is limited, however, to situations involving tangible, or concrete, concepts. Just as the preoperational child's thinking was tied to current perceptions, the concrete operational child's use of mental operations is also tied to perceptions.

Concrete operational children, then, have acquired mental operations whose absence had formerly led them to make errors as preoperational children. There are several operations that pertain to mathematical reasoning. For example, you might recall from elementary school the transitive property of numbers: if A is larger than B and B is larger than C, then A is larger than C. Concrete operational children can understand transitivity. If you tell them that Jack is taller than Jill and Jill is taller than Jim, they can verify mentally that Jack is taller than Jim. You can also see that the development of these mental operations allows the concrete operational child to begin reasoning in a more logical manner (the Jack, Jill, and Jim problem is a simple logical reasoning problem).

Video: "Piaget - Stage 3 - Concrete - Reversibility"

You can also access the video directly at:  https://youtu.be/gA04ew6Oi9M

Concrete operational children's use of the operations is limited to situations in which the reasoning context is concrete. For example, although they would have no difficulty with the Jack-Jill-Jim problem, some concrete operational children might fail at the abstract A-B-C version of it. You can also see the limitations of concrete operational children by examining other aspects of their math reasoning ability. Although they may be quite skilled at using arithmetic operations (for example, understanding that addition and subtraction or multiplication and division are reverses of each other), most have difficulty understanding algebra concepts. In algebra, a symbol (e.g., the letter x) is an abstract variable that can assume any specific, or concrete, number. An understanding of this idea, which may be beyond most concrete operational children, develops in the last of Piaget's stages, formal operations.

Formal operations stage (over age 11)

Piaget suggested that children's thinking undergoes its last major change beginning around ages 11-14, when they enter the formal operations stage. The shift from concrete to formal operational thinking is marked by a release of reasoning from perceptions. Formal operational thinkers begin reasoning about abstract concepts, such as justice and fairness, in a qualitatively different, more sophisticated way than concrete operational children. To an eight-year-old, "unfair" may simply mean that they did not get the most, or that they lost a coin flip. A formal operational thinker realizes that "fairness" requires one to consider the perspectives of all of the people involved.

Along with the adolescent’s new way of thinking about abstract concepts comes an increase in hypothetical reasoning, that is, reasoning about things that are possible or that are untrue. They can imagine a better world and often wonder why we cannot achieve it. Their wondering and reasoning are also marked by an increasing skill at logical thought.

Video: "Piaget - Stage 4 - Formal - Deductive Reasoning"

You can also access the video directly at:  https://youtu.be/zjJdcXA1KH8

Evaluation of Piaget’s Theory

As we have said, Jean Piaget has been the most influential theorist in cognitive development by far. Actually, it is fair to say that he is the most influential developmental psychologist, period. Virtually all of the cognitive development research that has been conducted since Piaget’s work was discovered in the US around 1960 has been a reaction to it. What have the researchers found? Although Piaget gave us a profound new understanding of how children may understand the world differently from adults, he misjudged many specific aspects of children’s reasoning.

Is children’s thinking really so primitive?

If you have ever had the opportunity to spend time with young children, you might wonder what other psychologists think that Piaget got wrong. After all, the examples we have given you are real; young children really do make these kinds of reasoning errors. Two-year-olds really are bad at hide-and-seek; we did not make that up. Well, one way you can begin to see the problem with Piaget’s ideas is to realize that children do not always make these kinds of errors.

For example, preoperational children’s egocentrism often does not extend beyond simple perceptions. On the contrary, they can sometimes show a remarkable sensitivity to other people’s point of view. For example, four-year-olds will typically use simpler speech when talking to two-year-olds than when talking to older children or adults, something that requires them to take into account the perspective of the other person (Shatz & Gelman, 1973). Other violations of preoperational egocentrism, even the perceptual variety, are common, as well. For example, we recently held up a cereal box and asked a three-year-old to point to what they saw; they pointed to the picture on their side of the box. When we asked them to point to what we saw, they turned the box around and pointed at the side we had been looking at.

Piaget underestimated children in other respects as well. For example, Renee Baillargeon has demonstrated that infants show some understanding of object permanence as early as three months of age. In one of her experiments (Baillargeon, 1987), three-month-old infants watched a screen move 180 degrees from horizontal to vertical and to horizontal again; the infants were positioned at one end, so the screen moved away from them the whole time. While the screen was still low enough, a block was visible behind it; as the screen continued to move, the block was soon hidden behind the screen. At this point, Baillargeon was able to demonstrate that the infant had some understanding that the block was still there (object permanence). A trap door allowed the block to slip down so that the screen could continue to move. From the vantage point of the infant, though, this was an “impossible event;” the block should have stopped the screen. Infants stared longer at this event, as if surprised at what happened, than they did at a “possible event” in which the block actually did stop the screen.

Do all people develop the highest levels of thinking ability?

In one very important respect, Piaget probably overestimated people. Think again about our description of formal versus concrete operational thinkers’ conceptions of fairness. Perhaps you had the same reaction that we do when thinking about this example. To be blunt, we know a few adults whose definition of “fairness” sounds an awful lot like the eight-year-old we described.

The situation looks even worse when we consider the proposal that logical reasoning is a natural part of development. Some researchers have shown that logical reasoning ability is much more common in technologically advanced societies, suggesting that its development is dependent on educational experiences (Super, 1980). Even more dramatically, it is, in fact, very difficult for even highly educated people to reason logically (Module 7). It looks as if ascension to formal operational thinking is not exactly a sure thing. Although it is true that adolescents get better than younger children at reasoning logically (Müller et al., 2001), that is not the same thing as saying that they get good at it.

Later in his career, Piaget reevaluated his position on formal operations, concluding that many adolescents fail to use their formal operational thinking in many situations (Piaget, 1972). Others have made the more extreme assertion that many people never develop the ability to use formal operations (Leadbeater, 1991).

Is cognitive development stagelike or continuous?

The reality that children can perform some reasoning tasks earlier and others later than Piaget believed (or not at all) indicates that development may be much more continuous than stagelike. Piaget believed that all of the operations within a stage developed together, resulting in a very rapid acquisition of abilities across a wide variety of domains. For example, concrete operational children who have acquired conservation would be able to use it in all appropriate situations and would be able to use all of the other concrete operations as well. If this were true, it would make sense to characterize concrete operations as a stage, a period of development that is different in kind (i.e., qualitatively) from other periods.

It is easy to find cases in which this is not true, however. Imagine a four-year-old who fails a standard “liquid in bottle” conservation test. The same child asks you for a cookie. When you hand over the cookie, the child, pressing his advantage, asks for two cookies. You take the first cookie back, break it in half, and return it to the child. Although this trick often works on a two-year-old, it will not fool many four-year-olds. Thus, the child in this example conserves in the cookie domain, but not in the liquid-in-bottle domain.

In general, development seems to be domain-specific. Skills or abilities acquired in one area do not automatically transfer over to others. The resulting view of cognitive development is one of more continuous change, as an operation such as conservation is applied to different situations at different times.

It is also important to remember the cases of adults and older children who fail at reasoning tasks that they should have mastered years earlier. You may be surprised to realize that the abilities required for these tasks do not even always come from the formal operational stage. For example, the last time you called someone “self-centered” or “egocentric,” you were probably not talking about a four-year-old. A great many adults have difficulty understanding other people’s perspectives and consequently show a lack of empathy. Indeed, when asked to explain why violence occurred in the world, His Holiness the Dalai Lama once explained, “There is too much cruelty … or lack of compassion and empathy with our fellow human beings” (quoted in McCool, 1999).

After all of these criticisms of important aspects of Piaget’s theory, you might wonder what is left. The truth is, not many of the details of his original theory have survived without significant modification (but recall from Unit 1 that this is the way science is supposed to work). The major principle that children think differently at different points in development is alive and well, however. In the next section, we describe some more recent discoveries about cognitive development. By paying attention to the similarities to and differences from Piagetian ideas, you should be able to see Piaget’s continuing influence on the field of cognitive development.

16.3 Other Topics in Cognitive Development

Developing a Theory of Mind: Understanding What Other People Think

We hope that you are getting the impression that children’s cognitive abilities are often closer to those of adults than Piaget believed. Again, though, we are not saying that their abilities are identical. Think about words such as believe, want, intend, pretend, and, for that matter, think. When you use these words to describe other people (as in, “My psychology professor  believes  that psychology is the most important subject in the world”), you are actually doing something quite remarkable. You are assuming that other people have minds just like yours, ones that lead them to engage in certain behaviors. It is something that psychologists have called a  theory of mind  (Wellman, 1990).

A theory of mind may not seem all that remarkable at first, but if you think about it, it really is pretty impressive. Having a theory of mind is a form of mind-reading, the ability to know how other people’s thoughts direct their behavior. It is not the psychic variety, of course, but it is mind-reading nonetheless. Our closest relatives in the animal kingdom, chimpanzees, despite their many notable cognitive achievements, such as tool use and problem-solving, apparently do not understand the inner states of other minds as well as an average four-year-old human (Povinelli & Vonk, 2003; Tomasello et al., 2003). Although computer scientists have created a computer program powerful enough to defeat the world chess champion, they cannot come up with one that has a theory of mind as advanced as that of an average chimpanzee.

Here is another excellent opportunity to ask the question, “How would you know if a young child (or a chimpanzee) has a theory of mind?” We could focus on one key concept, that of  belief . Three philosophers separately suggested the following kind of test to see if young children realize that other people hold beliefs (Bennett, 1978; Dennett, 1978; Harman, 1978). Suppose you are in a room with a red box and a blue box, and you hide a ten-dollar bill under the red box and leave the room. While you are gone (and are completely unable to see the room), a prankster enters and moves the money to under the blue box. Now imagine that when you come back to retrieve your ten-dollar bill, the first place you look is under the blue box. Would that surprise you? It surprises most adults; they realize that you should have looked under the red box, where you  believed  the money still was. They have a theory of mind.

Actually, this is quite an advanced theory of mind. One study that used a version of this task found that some four-year-olds (and no three-year-olds) demonstrated this level of theory of mind (Wimmer & Perner, 1983). Other researchers have found evidence for more primitive theories of mind in younger children. Three-year-olds can reason about other people’s desires and about some beliefs (Stein & Levine, 1989). There is even evidence that infants as young as 9 months old can express their understanding of other people’s minds through their gestures (Bretherton et al., 1981).

One reason these observations are important is that they, too, reveal that Piaget underestimated children. If Piaget were correct that children under 7 (preoperational or lower) can only see the world through their own eyes (i.e., that they are egocentric), then they should not have a theory of mind, which requires an understanding of the other person’s perspective. Another reason these observations are important is that they show us that very young children are far more perceptive than we might imagine. In a sense, they are able to “read our minds,” at least for simple messages, at a very early age.

Although most psychologists agree that infants do not have an adult-like understanding of other people’s minds, infants do seem to have at least a primitive version of a theory of mind. Even very young infants will follow the gaze of another person. For example, if the infant sees a parent looking at a toy, the infant will look at the toy, too. From these early beginnings, the child’s theory of mind develops as the child realizes more about the internal states of other people. They eventually come to realize that people have perceptions, desires, and beliefs.

Let us think about an interesting implication of the development of children’s theory of mind, namely children’s ability to deceive. Have you ever heard the claim that very young children are unable to lie? Is that true? If it is true, why is it so? When two-year-old Juliana is asked, “Who broke the flowerpot?” “Who made a mess on the family room floor?” or “Who put the sock in the toilet bowl?” Juliana will often reply, “Ben (Juliana’s fifteen-year-old brother) did it.” In most cases, Ben is probably innocent, but is it fair to say that Juliana is telling a lie? How would you know? Clearly, we would need some insight into her intention. A lie is a lie because it is told with the intention to deceive. Does Juliana intend to deceive when she blames the flooding in the basement after a heavy rain on her brother?

You see, what is happening is that all of the adults Juliana spends time with think it is hilarious when Juliana says, “Ben did it.” Juliana gets very happy when she makes people laugh, and the laughter often leads the adults to forget about any transgressions that Juliana may have committed. So, Juliana’s untruths are more accurately seen as examples of operant conditioning. Juliana gets positive reinforcement (laughter) and avoids punishment by saying “Ben did it.”

But that does not really prove that Juliana is not lying. Perhaps two-year-olds are very cunning and skilled prevaricators. Well, one clue that two-year-olds do not really understand that their goal is to deceive is that they are not very good at it. Ask Juliana who made it rain and ruin the family picnic, and Juliana might very well say, “Ben did it.” If Juliana were really trying to deceive people with the lie, she would be a bit more selective in her use of the statement.

A second and more important reason to think that Juliana is not really lying is that very young children appear to lack an essential element of a theory of mind, the absence of which renders them literally unable to deceive. In order to deceive, you must have quite a sophisticated conception of what is going on in the other person’s head. You have to realize that other people have beliefs before you can try to trick them into adopting a false belief. This level of theory of mind is fairly late to develop. According to Henry Wellman (1993), two-year-olds know that other people can perceive and want, but they do not yet understand that other people hold beliefs. This appears to be an important developmental change in the theory of mind of children between around three and four years of age. Before then, children’s lack of understanding about beliefs makes it impossible for them to understand that the goal of a lie is to lead someone else to adopt a false belief.

Video: "The theory of mind test"

You can also access the video directly at:  https://youtu.be/YGSj2zY2OEM

theory of mind: the realization that other people have thoughts, beliefs, desires, etc. that guide their behavior

Developing Memory

It is very likely that memory begins before birth, and infant and child memory is quite impressive. At the same time, we can remember virtually none of the events from the first three years of our lives. How can we reconcile these seemingly contradictory points?

First, let us talk about how we know that newborns and infants can remember and that memory begins before birth. Of course, you cannot just ask the infants. Instead, researchers had to develop ingenious techniques that allowed them to get at this information indirectly. The basic idea is simple: if you can consistently get an infant to respond differently to two different stimuli, you can conclude that the infant perceives, or remembers, a difference between the stimuli. That simple idea allows us to draw many conclusions about the capabilities of infants. So, for example, consider the famous Cat in the Hat studies (DeCasper & Fifer, 1980; DeCasper & Prescott, 1984; DeCasper & Spence, 1986, 1991). Anthony DeCasper and his colleagues were able to demonstrate that newborns can recognize their mother’s voice, and, even more impressively, they can recognize a story that had been read to them before they were born. The researchers had a “wired” pacifier that could record the rate at which an infant sucked. Newborns sucked on the pacifier faster when listening to a tape of their mother than when listening to a tape of another woman. In one version of the experiment, one group of newborns had been read Dr. Seuss’s The Cat in the Hat by their mothers several times over the last weeks of pregnancy. After birth, they sucked faster only when their mothers read the familiar story.

A similar research technique called the  habituation  paradigm has led to many additional discoveries about infant abilities. The important observation that underlies this technique is that infants appear to become bored easily. When infants are shown a new object, they stare at it with apparent interest. Then, they get used to it, or habituate, and their attention is easily drawn to other things. By exposing infants to different stimuli and keeping track of whether the infant has habituated, you can tell whether the infant recognizes a stimulus as familiar. Researchers using the habituation paradigm have demonstrated that infants from three to six months old could remember visual information for periods from two weeks to a couple of months (Fagan, 1974; Bahrick & Pickens, 1995). Even newborns have demonstrated brief memories using the habituation technique (Slater et al., 1991).

Researchers have also shown that infants have impressive memory ability for associations. For example, researchers placed infants in a crib and attached a mobile to their foot with a ribbon. The infants quickly learned to associate moving their foot with the movement of the mobile. Even an eight-week-old infant could remember the association for up to two weeks if the training was given over time (Rovee-Collier & Fagen, 1981; Vander Linde et al., 1985). Six-month-olds could remember the association for six weeks if they were briefly reminded by being placed in the same situation again during the interval. The infants’ memories are heavily dependent on context; if you change the situation slightly, for example, by changing the color scheme of the crib in which the experiment is conducted, the infants are much worse at remembering the association (Rovee-Collier et al., 1992).

The types of memories we have described so far are implicit memories (memories for skills and procedures without conscious recall), and it is clear that young infants have them. Older infants begin to show signs that they have explicit memory (memory with conscious encoding and recall). Suppose an adult shows an infant how to play with a novel toy but does not let the infant play with it. After a delay, the infant is given the opportunity to play; the researchers look for the infant to imitate the behaviors previously modeled by the adult. Bauer and Wewerka (1995) used a procedure similar to this to show that one-year-old infants could remember an event 12 months later. Bauer and her colleagues have also shown that nine-month-old infants can remember events for one month if the modeled behavior is repeated one week after the first experience (Bauer et al., 2001). Researchers have demonstrated very impressive memory abilities in children of many ages, particularly if the events are meaningful to the child. For example, one study found that three- and four-year-olds could remember events from a trip to Disneyworld a year and a half earlier (Hamond & Fivush, 1991).

That is not to say that children’s memories are entirely reliable. Recall that the memories of adults can be easily distorted (Module 5). It turns out that children are even more susceptible to these kinds of memory distortions. In an extensive review of the psychological research on the issue, Bruck and Ceci (1999) concluded that children under 10, and especially preschoolers, are more easily misled into falsely remembering events than adults are. In one dramatic demonstration, Stephen Ceci and his colleagues (1994) were able to get almost 60% of preschoolers to falsely remember an event like getting their fingers caught in a mousetrap, simply by repeating a set of leading questions over an 11-week period.

In addition to this increased susceptibility to distortion, there are some clear differences between children’s and adults’ memories, with older people generally having better memory; an important source of these differences is the speed of mental processing. Other improvements seem more related to differences in memory strategy use or in some non-memory processes, rather than to a fundamental difference in the way memory works. For example, short-term or working memory capacity increases with age, but the improvement likely reflects the role of background knowledge in memory (Dempster, 1978; 1985). One way you can see this is by observing children who are experts in chess; their working memory for chessboard positions is better than it is for unrelated strings of numbers, and more like the working memory of adults (Chi, 1978; Schneider et al., 1993).

So, infants and children have quite good, but by no means perfect, memories, which brings us back to the question with which we began this section: why do adults have almost no memories of events and episodes from early childhood? We have very few memories from earlier than age six or seven, and memories of events that happened before age three and a half are extremely rare. They are so rare that it is more likely that they result from memory distortions than from actual memory. No one really knows why we have  infantile amnesia , as it is called. There are two good candidate explanations we would like to share with you; both are related to the principles of encoding through recoding from Module 5:

  • Because children younger than three and a half are still developing their language abilities, they often try to encode events into memory verbatim (in other words, exactly as they occur). A verbatim memory trace, being relatively unconnected to the rest of the knowledge in memory, may be difficult to access later on, so these memory traces die away from disuse. Older children and adults, because their language skills are more flexible, can encode an event in a richer narrative form, which makes it easier to access in the future (Ceci, 1993; Fivush & Hamond, 1990; Nelson, 1993). In essence, the memories are embedded in a network of other knowledge, so they can be retrieved again.
  • The second possibility is related to the suggestion that in order to improve your memory for material, you can make it meaningful by applying it to yourself. Because children’s self-concepts are developing over the first few years, the adult self may be trying to retrieve an event that happened to a different, child self (Fivush, 1988; Howe & Courage, 1993).

habituation (research technique): a technique that researchers use to demonstrate infant memory by showing that infants look longer at new objects than familiar ones

infantile amnesia: adults’ near complete lack of memory for events from early childhood

We do not really need to talk about memory in adults because that is essentially what you saw in Module 5. Let us turn, then, to what may happen to our memories as we age. Many people fear “mental decline” when they age, and most probably do not mean intelligence or reasoning when they talk about it. Rather, they are referring to the dreaded “senior moment,” the unfortunate and apparently inevitable memory loss that we all have to look forward to. People in their 40’s and many in their 50’s often complain of senior moments. The truth of the matter is that memory decline for most people is very minor as they age. The decline is more of a perception than a reality. Although three-quarters of people older than fifty in the US report that they suffer from memory problems, only around one-third of the over-fifty population actually do (Arnst, 2003). It is true that the older one gets, the more likely memory problems become, but very few fifty-somethings are actually suffering from age-related memory problems, as that one-third rate also includes people in their 60’s, 70’s, 80’s and beyond.

Two factors that contribute to our inflated perception of age-related memory declines are confirmation bias and expectation effects in perception (Module 1, Module 13). According to the confirmation bias, we have a tendency to notice and remember cases that confirm our belief, such as when a 60-year-old man loses his car keys. We fail to notice and remember cases that do not confirm our belief, such as a 22-year-old student who misses class because he lost his car keys. Expectation effects will lead us to perceive forgetful behavior exhibited by different-aged people in a way that is consistent with our expectations. For example, an older faculty friend of ours reports that when he was a child, he used to forget a lot; he would forget to bring home his homework, and he lost several watches until his parents gave up and stopped buying them for him. His childhood forgetfulness was perceived as a reflection of the fact that he was careless and irresponsible. When he lost his wedding ring at age 25, it was because he was (still) irresponsible. In his early 30’s, his forgetfulness was seen as evidence that he had become the prototypical “absent-minded professor.” In his early 40’s, when he used to go to the wrong parking lot at the end of the day because he forgot where he parked his car, it was a consequence of the amount of stress in his life. Now that he is in his 50’s and has just locked himself out of his office because he forgot his keys for the fifth time this semester, it is a senior moment. Although we think you get the idea, we should emphasize that our colleague’s forgetfulness has been a constant throughout his life; it is only our expectations that lead us to perceive it as something different at different ages.

We will say this: memory decline in old age is big business. At the risk of being cynical, we might suggest that this fact contributes to the salience of “senior moments.” According to Consumer Reports, sales of supplements to aid memory doubled from 2006 to 2015 (Calderone, 2018). Consider the sadly typical story of the herb ginkgo biloba. This herb, an extract from the leaves of the ginkgo tree, had been shown to slightly improve cognitive functioning in patients suffering from mild to moderate cognitive impairment, usually patients in the early stages of Alzheimer’s disease. When the herb was tested on people experiencing normal, age-related memory problems, however, the effects were weak and inconsistent (Gold et al., 2002). Undeterred by the lack of support for ginkgo’s effectiveness, many manufacturers have sold the supplement to millions of normal individuals; it has been especially popular in Europe. Still today, it is readily available, despite the current consensus that there is no conclusive evidence that ginkgo helps for ANY condition (National Center for Complementary and Integrative Health, 2016).

Still, the quest is on for a serious cure for age-related memory decline. Business, having conquered those other scourges of the aged, impotence and baldness, has turned its attention to cognition, and it continually comes to the party with a new cure. But when these supposed remedies are held up to the light of research, the results have been equally bleak for other popular supplements, such as B vitamins and omega-3 fatty acids (Kivipelto et al., 2017; Meng-Meng et al., 2014).

Many people have a preference for the quick fix, the easy solution. For example, many will take a diet drug rather than exercise and change their eating behavior in order to control their weight. This is true even though most people know how important exercise is for controlling weight. In the case of age-related cognitive decline, however, many people do not even realize that there are two non-drug solutions to the problem. The first solution is to “use it or lose it.” Quite simply, people who continue to use their cognitive abilities as they age continue to be able to use them. In fact, a growing body of evidence suggests that many different activities, even non-intellectual ones, can help people retain their cognitive functioning as they age (Kramer et al., 2004; Richards et al., 2003; Singh-Manoux et al., 2003).

The second solution is physical exercise, both aerobic exercise and strength training (Busse et al., 2009; Robitaille et al., 2014). Hmm, controlling weight and stemming cognitive decline with one solution. Maybe we should bottle this exercise thing and sell it.

Changes in Reasoning and Intelligence

Earlier in this module, we observed that reasoning develops in different domains, as children (and for that matter, adolescents and adults) gain knowledge and experience in those domains. For example, when children learn about biological categories, they begin to be able to reason about the types of properties that are essential for different kinds of animals (Gelman & Markman, 1986). As people gain more knowledge and experience in different subject areas, their reasoning often gets closer to what we would recognize as logical reasoning (Müller et al., 2001).

Perhaps because of this increased knowledge and experience, adolescents and adults are better than younger children at other types of reasoning and thinking as well. For example, they are better able to use analogies to solve problems (Moshman, 1998). In  analogical reasoning , one understands a concept or solves a problem by noting similarities to another concept or problem. For example, an individual might learn about the behavior of the parts of an atom by realizing that an atom is like the solar system (Gentner, 1983). More broadly, adolescent thinkers become more deliberate in their reasoning, and their metacognitive skills improve (“thinking about thinking,” see Module 7; Campbell & Bickhard, 1986; Inhelder & Piaget, 1958; Moshman, 1990; 1994; 1998).

Popular wisdom holds that we reach various peaks in cognitive ability around age 30, followed by a gradual but accelerating decline, a near-perfect parallel to the common beliefs about physical changes associated with aging. As we have said before, however, popular wisdom is not always correct. In Module 12 you learned that, in reality, the physical declines are barely noticeable before age 50, and they can be dramatically slowed through physical activity. The news is even better with respect to cognitive changes. Many of the declines that people talk about reflect illness. In normal, healthy adults, some aspects of reasoning, intelligence, and memory do not decline at all and may even improve throughout the lifespan.

Many instructors who teach diverse groups of undergraduates (for example, at a community college) notice a difference between “traditional” (i.e., 19 or 20 years old) and “non-traditional,” or returning, students (i.e., older). In short, the older students seem better able to apply the course content, and this seems particularly true of those in their 40’s and 50’s. These instructors’ casual observations are supported by research.

Several decades ago, Raymond Cattell (1963) proposed a distinction between fluid and crystallized intelligence, a distinction that has withstood the test of time.  Fluid intelligence  refers to your speedy reasoning ability. Think of it as your ability to solve logic and math problems or brain teasers. It does look as if fluid intelligence reaches a peak at around age 30 and then begins its long decline, a result of a reduction in speed of mental processing (Salthouse, 1991; 1996).  Crystallized intelligence  is your accumulated store of knowledge and your ability to apply that knowledge to solve problems (Lemme, 2002; Sternberg, 1996). Crystallized intelligence continues to increase, at least through the 50’s and perhaps throughout the lifespan (Baltes, 1987; Schaie, 1996). We do not believe it would be a stretch to note that crystallized intelligence is closely related to what we think of as wisdom. Thus the popular image of the wise elder may well be grounded in truth. It is also worth noting that fluid and crystallized intelligence constitute one key dimension of the CHC Theory of Intelligence you read about in Module 8.

analogical reasoning : a problem-solving technique that involves noting similarities between concepts or problems

fluid intelligence:  an individual’s speedy reasoning ability

crystallized intelligence:  an individual’s accumulated store of knowledge and the ability to apply the knowledge to solve problems

  • What is your earliest memory? Are you willing to admit that the memory might not be a genuine one? How would you explain the persistence of this particular memory?
  • Are you a good judge of other people’s goals, intentions, and beliefs, or do you often misconstrue them? How sophisticated is your theory of mind?

16.4 Cognitive Disorders of Aging

For some people, however, aging can be associated with cognitive decline. As people age, their risk of developing serious disorders that can affect their cognitive functioning does increase. One key risk is a reduction of blood flow to areas of the brain. Although these reductions can be minor, the extreme version, a  stroke , is very severe. When a blood vessel that feeds an area of the brain is blocked or bursts, the neurons die in the sections of the brain that normally receive blood from that vessel. Strokes can be deadly; they are the third leading cause of death in the US, accounting for over 160,000 deaths per year. People who survive a stroke suffer from brain damage and a consequent loss of abilities, such as memory, movement, and speech. When the lost functions include cognitive abilities, a person is said to be suffering from  dementia . Some stroke victims are able to regain some functions lost through the damage, however, a demonstration of adult brain plasticity.

stroke: a loss of blood flow to an area of the brain as a result of the blockage or bursting of a blood vessel. The brain areas die from lack of oxygen, and the consequence is brain damage and some loss of abilities.

dementia: a serious loss of cognitive abilities as a result of disease or disorder

One severe disorder of aging, and the most important source of dementia, is  Alzheimer’s disease,  a fatal and incurable affliction. There are two types of Alzheimer’s disease. Early-onset Alzheimer’s is quite rare and can strike as early as age 30. Late-onset Alzheimer’s disease by definition strikes after 65. Alzheimer’s is a progressive disease; the symptoms start slowly and gradually worsen. Its most famous symptom is memory loss. A person in the early stages of Alzheimer’s may occasionally forget the names of common objects or get lost in a familiar place. As the disease progresses, the memory problems become more severe, and Alzheimer’s patients eventually wind up unable to recognize even their closest family members. Additional symptoms may include other cognitive problems such as confusion, loss of language and judgment skills, and personality changes. Eventually, patients end up unable to care for themselves and completely unresponsive. Death occurs an average of 8 years after the disease is diagnosed.

Because of people’s awareness of the disorder and its tragic consequences, many middle-aged and elderly people fear that they are entering the early stages of Alzheimer’s after occasional memory lapses. They are probably not. Although estimates of the incidence of Alzheimer’s vary, one study estimated that about 4.3% of 75-year-olds, 8.5% of 80-year-olds, 16% of 85-year-olds, and 28.5% of 90-year-olds suffer from it (Brookmeyer et al., 1998). Another research team estimated that about 40% of 95-year-olds also suffer from Alzheimer’s (Ritchie & Kildea, 1995). Still, although the percentages of people affected are low for the younger groups, they translate into enormous numbers. In 2000, 4.5 million people in the US suffered from Alzheimer’s (15 million worldwide in 2004), and given the projected growth of the elderly population over the next several decades, the US alone can expect to have up to 16 million Alzheimer’s patients by 2050 (Hebert et al., 2003).

Because of these projections, a great deal of research effort is currently being devoted to discovering the causes of and treatments for Alzheimer’s disease. Scientists have a pretty good handle on what happens to the brains of Alzheimer’s patients, thanks to autopsies of victims and advanced brain-scanning techniques. They are still figuring out the causes, however. One key abnormality in the brains of Alzheimer’s patients is the presence of  tangles,  twisted protein fibers inside the neurons. In addition, the neurons in Alzheimer’s patients’ brains are surrounded by globs of proteins called  plaques.  Research with mice has suggested that these plaques cause the dementia of Alzheimer’s, rather than being a symptom of the disease. Masuo Ohno and colleagues (2004) genetically engineered mice to be prone to developing dementia but also to be unable to produce the specific protein that forms the plaques, called amyloid beta. The mice did not develop plaques, nor did they suffer from dementia. The plaques attack the brain somewhat selectively. Hardest hit are the hippocampus and parts of the cerebral cortex, areas that are key for memory, language, judgment, and personality. The neurons that surround the plaques deteriorate, and the levels of acetylcholine, an important neurotransmitter for learning and memory, are very low.

Although genetic causes are likely part of the Alzheimer’s story (see Module 8), scientists are still struggling to find all of the causes. And the race is on to beat the enormous projected increase in Alzheimer’s disease resulting from the aging of the US population.

Video: "How Alzheimer's Changes the Brain"

You can also access the video directly at:  https://youtu.be/0GXv3mHs9AU

Alzheimer’s disease:  a progressive, fatal disorder characterized by memory loss, other cognitive symptoms, and personality change

tangles:  twisted protein fibers inside the brain’s neurons in Alzheimer’s disease patients

plaques:  globs of protein that surround the brain’s neurons in Alzheimer’s disease patients

Module 17: Social Development

Many parents are proud of their children’s physical and cognitive developments. Milestones in social development, the third type of development, elicit very different emotions from some parents when they witness them for the first time. Sure, they are proud the first time their daughter walks across the room, but they are overjoyed the first time she runs into their arms and hugs them. An infant’s first smile, his obvious realization that you, his parent, will protect him and soothe his fears, his willingness or unwillingness to go to preschool without you, all are social developments, and for many parents, they are accompanied by strong emotions of happiness, fear, and even some sadness. And yes, some pride, too.

As we pointed out in Module 16, many key cognitive developments are closely related to social developments. This is particularly easy to see in infants. For example, think about the ability of newborn infants to recognize their mother’s voice as soon as they are born, or more generally, in infants’ apparent preference to attend to all things human. Clearly, these predispositions, abilities, and preferences are going to influence the processes through which children develop socially.

Cognitive developments might begin first, so they may be more fundamental, but one can make a good argument that social developments are really the goal. Infants and young children must use their fledgling cognitive abilities to learn how to get along in the world of other humans, that is, the social world. This module describes what some of the important social developments are and how they occur. Section 17.1 describes attachment, the emotional bonds that develop between an infant and one or more specific people, and how those bonds affect us throughout our lives. Section 17.2 covers issues related to the roles that parents and other caregivers play in their children’s early social development. It describes the effects of different parenting styles and of physical punishment. Section 17.3 focuses on Erik Erikson’s influential theory of psychosocial development and one of its major components, the development of our identity. Section 17.4 describes gender identity and explains how genders are different and how they are similar. Section 17.5 concludes the module with a discussion of the role of friendship in our social development.

Social Development

17.1 Developing Social Bonds: Attachment

17.2 Parenting Styles and Discipline

17.3 Developing Identity

17.4 Developing a Gender Identity

17.5 Friendship and Intimacy

By reading and studying Module 17, you should be able to remember and describe:

  • Attachment: Strange situation technique, secure attachment, resistant attachment, avoidant attachment, disoriented attachment (17.1)
  • Parenting styles: authoritarian, permissive, authoritative, neglecting (17.2)
  • Physical punishment (17.2)
  • Erikson’s theory of psychosocial development (17.3)
  • Components of identity: religiosity, ethnicity, nationality (17.3)
  • Different gender identities (17.4)
  • Gender differences and similarities (17.4)
  • Changes in friendship over the lifespan (17.5)

By reading and thinking about how the concepts in Module 17 apply to real life, you should be able to:

  • Recognize different attachment styles in young children (17.1)
  • Recognize different parenting styles (17.2)
  • Recognize examples of crises from Erikson’s theory (17.3)

By reading and thinking about Module 17, participating in classroom activities, and completing out-of-class assignments, you should be able to:

  • Recognize characteristics from attachment styles in your own relationships (17.1)
  • Support your opinion about the use of physical punishment (17.2)

17.1 Social Bonds: Attachment

  • Have you ever seen a group of one-year-old children with their parents? (If not, try to observe some, perhaps at a park or at the grocery store.) Have you noticed any differences in the way these children relate emotionally to their parents?
  • What purpose do you think the emotional bond between a child and parent serves?

Our descriptions of infants so far have left out two of the most noticeable facts about them. First, they cry. Boy, do they cry. Surely, there must be some reason for this universal behavior. Second, infants are, shall we say, needy. They cannot feed or clean themselves, they cannot move around, they cannot keep themselves safe, and so on. To borrow from Blanche DuBois of Tennessee Williams’s A Streetcar Named Desire,  they are completely “dependent on the kindness of strangers.” You probably realize that these two facts are related to each other. Infants’ primary way of expressing their needs is by crying. (By the way, although it is not critical for the points we want to make about crying, it is not a trivial observation that the characteristics of crying change as the infant develops, from a more-or-less automatic reflex to a conscious strategy; Thompson, 1998). What you may not realize is that these two facts are probably related to the most significant social development during the child’s first year. In essence, infants move from being interested in all people to having a strong emotional bond with one person (or a small number of people), often with an accompanying fear of others.

Consider crying for a moment. Adults are quite good at judging differences in distress or emotion in infants’ cries (Leger et al., 1996; Thompson et al., 1996). In addition, parents are good at distinguishing their own child’s cry from other children’s (Gustafson et al., 1994). Finally, listening to an infant cry hurts. Really. Jeffrey Lorberbaum and his colleagues (1999) used fMRI to record brain activity while mothers listened to the sound of babies crying. The researchers found that the mothers’ brains were active in the anterior cingulate cortex, a cortical area that is involved in the emotional distress that accompanies physical pain (Rainville et al., 1997).

Think about it. Infants cry when they need something. Parents are good at telling when the need is urgent and when it is their own child, so they can  respond appropriately. Listening to crying hurts, so they want  to respond to it. It all fits together so well. This pattern of infants expressing needs and parents responding to the need is an important component of that shift from infants’ interest in humans in general to their emotional bond to individual people, as you will see in a moment.

Why Infants Are Attached to Caregivers

The emotional bond to specific people to which we have been referring is called attachment. Think about how some young children display this attachment when they are unsure but curious about a new situation—for example, a loud but interesting-looking new person. They would grab onto their parent’s legs and cautiously peer at the person from between the parent’s knees. At these times, it is easy to think that attachment means that parent and child are physically attached. Although there are many times when the attachment seems like a physical one—for example, when you can see a young child clinging to a parent during a threatening situation— attachment is defined as the emotional  bond between the child and the other person. The other person can be, and often is, a parent, but it really can be anyone: another caregiver, a grandparent, an older sibling, and so on.

Let us think about the purposes of attachment for a moment. We just noted that physical clinging is common when a child feels threatened. Perhaps being attached to a parent provides safety for the child. You might imagine, then, that attachment serves some biological purpose. In other words, perhaps it is adaptive; offspring that stay near their parents are more likely to survive, so the tendency can be passed on to future generations (Bowlby, 1982). It is clear that some kind of bonding mechanism between parent and offspring occurs throughout the animal kingdom, suggesting that the adaptive, evolutionary explanation is correct.

What, then, might be the specific benefits that the child derives from being attached to a parent? Because the mother is the sole biological source of nourishment for an infant, it seems reasonable to suppose that attachment helps the infant stay near its food source. If that is true, then you would expect a child to be more attached to the caregiver who provides food than to other caregivers. Indeed, infants often do have a stronger attachment bond to their mothers than to anyone else. Early psychologists, such as the behaviorists, made these observations and drew the very same conclusion, namely that attachment bonds form because the mother provides nourishment for the infant. Like too many sensible and obvious conclusions, however, it is wrong. The relationship between nourishment and attachment is a correlation only. Recall that a correlation is simply a relationship between two variables, and we are not permitted to draw causal conclusions from correlations alone. Mothers, and for that matter many other kinds of caregivers, provide much more than simply nourishment.

The discovery that a separate factor was responsible for attachment required the ingenuity to separate the provision of nourishment from other factors, and the ability to conduct research designed to disrupt the attachment bond between a parent and offspring. Because such a study would be extremely unethical with human children, the important research was done by Harry Harlow with monkeys, during the 1950s. The factor that Harlow pitted against nourishment was body contact. Again, think about human infants. When they physically cling to a caregiver, it seems that they are rarely doing so to seek food. Rather, it is more likely comfort that they are seeking. Could it be that this comfort is the cause of the attachment bond? In his research, Harlow separated infant monkeys from their mothers and raised them with different kinds of “substitute mothers.” The substitutes were designed to provide nourishment, physical comfort (specifically, a soft, warm surface to cling to), or both. By varying these aspects of the substitute mothers, he was able to discover the importance of comfort over food. Specifically, some monkeys were raised with a single substitute mother that provided both comfort and food; this was a soft terrycloth “doll” that also provided food. Other monkeys were raised with the terrycloth mother and a separate “nourishment” mother, a wire model that simply provided the food. The results of the research were simple and straightforward. It did not really matter which substitute mother provided the food; the infant spent most of its time with the terrycloth “comfort” mother.

Video: "Harlow's Studies on Dependency in Monkeys"


You can also access the video directly at:  https://youtu.be/OrNBEhzjg8I

attachment: an emotional bond between a child and another specific person, often (but not necessarily) a parent

Attachment Styles

If you spend some time around different children, you will notice that they do not all seem equally attached to their parents. Furthermore, the differences among the children can be observed quite early, certainly within the first year. Some of you might think about that and conclude that the attachment differences between children must reflect some differences that they were born with. Others of you might conclude that something must have happened to the children early in life to lead to these differences. You may both be right.

First, let us describe the common differences in attachment that we can observe among children before turning to the possible reasons. Psychologist Mary Ainsworth pioneered the research technique that has been commonly used to assess what is known as attachment style (Ainsworth et al., 1978). Through this technique, known as the Strange Situation technique, four different attachment styles have been identified: secure attachment, and three styles of insecure attachment—resistant attachment, avoidant attachment, and disoriented attachment.

The Strange Situation research is frequently conducted when the child is about one year old. In a typical study, an observer watches while the child plays with the mother in the room. After a few minutes, a new person enters the room to play with the child for a few minutes; then the mother leaves. Then, the new person leaves and the mother returns. For a few more minutes, the mother and stranger alternate being in the room with the child, and the child is even left alone for a short time. Although the reactions of the child to each change in the situation are recorded, it is the response when the mother returns that is key; how does the child act during the reunion?

  • Securely attached children (about 65% of children in the US) are happily reunited with the mother. If they had been distressed by the mother leaving, they are easily calmed upon her return.
  • Resistant attached children (about 10% – 15% of children in the US) appear angry when the mother returns. They may push her away or fight to be put down.
  • Avoidant attached children (about 10% – 15% of children in the US) display little response when the mother returns. It almost looks as if the infant did not even realize the mother was gone.
  • Children with disoriented attachment (about 5% – 10% of children in the US) display inconsistent behaviors when the mother returns. They may appear disoriented and confused and may want to be picked up, only to scream when they are. They may also show distress again after calming down.

Video: "The Strange Situation - Mary Ainsworth"

You can also access the video directly at:  https://youtu.be/QTsewNrHUHU

So, what causes these differences in attachment style? Recall the discussion above about crying signifying an infant’s needs and parents responding to it. Although all parents may be motivated to respond to an infant crying, not all do so the same way. The pattern of responding by the mother when the child is in distress is closely related to the child’s attachment style. Mothers who are relatively unresponsive to their children often have children with avoidant attachment, mothers who respond inconsistently often have resistant or disoriented attached children, and mothers who respond appropriately often have securely attached children (Carbonell et al., 2002; Cox et al., 1992; Isabella, 1993). Although we have been talking primarily about attachment to the mother, it is true that infants can be attached to several individual people. For example, van IJzendoorn and De Wolff (1997) found that fathers’ responsiveness also predicts infants’ secure attachment. It appears, then, that the pattern of parent responding causes the attachment style. This conclusion, also based on a correlation, is sensible, obvious, and, this time, at least partially right.

There is additional evidence that helps us conclude that parents’ behavior influences the attachment style of an infant. For example, providing support services for at-risk mothers can increase secure attachment in their infants (Jacobson & Frye, 1991; Lyons-Ruth et al., 1990). Also, there are cross-cultural differences in attachment style, which strongly suggests that experience (that is, parent responsiveness) also plays an important role. For example, van IJzendoorn & Kroonenberg (1988) found that German infants were more likely than American and Japanese infants to have avoidant attachment. Japanese infants were more likely than American and German infants to have resistant attachment. In all three countries, however, secure attachment was the most common style.

There might be more to the story than parent responsiveness alone, however. As many parents of multiple children report, you can often recognize differences between children’s personalities soon after birth. Some children cry a lot and are difficult to console; others seem rather content, you might even say easygoing. It is possible that these differences, called temperament, are partly responsible for differences in attachment style (Kagan, 1987; 1998). Temperament refers to biologically-based differences in a person’s emotional and motor reactions to new stimuli, and tendencies regarding self-regulation. Also, because differences in temperament appear so early, they are very likely partially genetic (Ebstein et al., 2003).

To illustrate how it might work, suppose, as a parent, you have a child with a difficult temperament; she cries frequently and is very unpredictable and difficult to console. Sometimes when you pick her up, she stops crying right away; other times the crying goes on for hours. This type of behavior is essentially what a researcher would recognize as disoriented attachment. It is easy to imagine that your responding would grow to be inconsistent; sometimes you would respond right away, other times you might wait for a while. There you have it: a consistent pattern between the attachment style of the child and the responsiveness of the parent. But instead of parent responsiveness leading to attachment, we have infant temperament leading to the attachment style and then to the pattern of parent responsiveness.

Researchers, then, have been interested in the role of temperament, especially the genetic component of it, in infant attachment. Although some early studies indicated that it played a significant role, more recent research has failed to replicate some of these studies and has indicated only a minor role for genetics and temperament (Oliveara & Fearon, 2019). The conclusion that most psychologists have drawn is that infant temperament and genes combine with parent responsiveness to produce the attachment style of the infant, but genetics appears to play the relatively smaller role.

Consequences of Attachment

Before moving on to a broader discussion of parenting behaviors, we should talk a little bit about why attachment has been such an important topic for psychologists to study. If attachment style manifested itself only in Strange Situation research studies, psychologists’ attention to it would be a pretty pointless intellectual exercise. That appears not to be so. The attachment style that is established during infancy appears to be repeated in relationships with other people throughout life. It is as if infant attachment style forms a template that guides the developing child when forming later relationships (Sroufe & Fleeson, 1986). In an unexpected twist, genetics appears to play a stronger role in these attachment patterns for adolescents and adults than it does in the infant attachment we just described (Fearon et al., 2014; Franz et al., 2011).

Attachment style, as it turns out, ends up being related to a great deal of older child and adult social behavior. Researchers have examined the same children over time, a research design called a longitudinal study, in order to find out whether later behavior is indeed related to infant attachment style. A longitudinal study is a difficult and expensive way to do research, but it is the best way to discover changes in individual people. Studies have found that attachment style does predict later social behavior. For example, securely attached infants are more socially competent when they get older, are more empathetic, have higher self-esteem, and do better in school (Sroufe et al., 2005; Urban et al., 1991). The most consistent attachment style over time is disoriented attachment. Children with this style have problems with aggression and anger throughout the school years (Lyons-Ruth, 1996; Lyons-Ruth et al., 1997).

No one would claim that infant attachment style is the only determinant of social behavior during adulthood. Many life events, such as parents’ divorce, can also influence the way we form social relationships (Lewis, 1997). Even if it were the only important factor, infant attachment style can change as the child develops. Also, if the parent-child relationship changes—for example, the mother or father grows more or less responsive—the child’s attachment style might change along with it (Thompson, 2000).

  • Think about a current or recent close relationship in your life (romantic probably works best, but friendships will work, too). Use the four categories of attachment style and try to classify your relationship with the other person, as well as their relationship with you. Do the two of you seem to have the same attachment style? Do you think that your attachment style is the same or different for other relationships in your life?

17.2 Parenting Styles and Discipline

  • Think about the adults who raised you. How strict were they?
  • How many rules were there, what kind of rules, and how well were they explained to you?
  • Did they use physical punishment?
  • Describe the same aspects of your own parenting style—your expected style if you are not yet a parent, or your actual style if you are.

Parents differ from each other in much more than simply their responsiveness to their children. Some parents feel that their role is to control their children, and they employ harsh discipline, firm rules, and physical punishment. Others prefer to let children make their own choices and provide very few rules and little guidance. Still others think of themselves in more of a guiding role; they explain the purposes of rules and allow their children to develop as independent thinkers. Some parents spend a great deal of time with their children; others, because of career and work obligations, are with them for only a couple of hours each day. There are a great many aspects that we can use to describe differences among parents. Many of these differences influence the way that children develop socially, and they could be the topic of an entire book. Indeed, there are dozens of advice books covering many parenting strategies and behaviors. In this section, we will describe two key topics: parenting styles and physical punishment. Each has been the subject of substantial research and, in some cases, significant news coverage.

Parenting styles

The most heavily researched difference among parents has probably been parenting style. Diana Baumrind (1989) identified four major styles of parenting: authoritarian, authoritative, permissive, and neglecting. Researchers over the years have found several differences in the adjustment of children whose parents have different parenting styles.

Authoritarian parents impose very firm rules and harsh discipline, often using physical punishment. They seek to teach their children to obey. Permissive parents have few rules, instead letting the children set their own courses. Authoritative parents seek to guide their children to make the right choices. Although they have firm rules, similar to authoritarian parents, authoritative parents can be flexible; they allow the children some say in formulating rules. They see their role more as teaching than controlling; therefore, they spend a lot of time explaining the reasons for rules to the children. The fourth style, disengaged or neglecting parenting, is just what it sounds like, a parenting style marked by leaving the children alone. Of the four, it is clearly the worst style, and it can actually be sufficient grounds for a court to suspend someone’s parental rights.

Of course, the four styles are not equally effective. Children who have authoritative parents are better adjusted than those with authoritarian or permissive parents (and children of neglecting parents fare very poorly). Children of authoritative parents—particularly compared to children of authoritarian parents—are more independent, less anxious, friendlier, and more competent in social situations; they also have high self-esteem (Baumrind, 1989; 1991; Kaufmann et al., 2000; Maccoby & Martin, 1983). This point is worth emphasizing because some people believe that harsh discipline is necessary to have well-behaved, well-adjusted children. There is very little evidence suggesting that such parenting is superior to authoritative parenting, and a great deal of evidence suggesting the opposite. For example, one study of 10,000 high school students across a wide range of ethnicities, family structures, socioeconomic statuses, and types of community found that children of authoritative parents tended to do better in school, were more self-reliant, had less psychological distress, and had fewer behavior problems (Steinberg et al., 1991).

authoritarian parenting: parenting style characterized by demands for unquestioning obedience; often makes use of harsh and physical punishment

authoritative parenting: parenting style characterized by firm rules for children, along with explanation of the rules and an opportunity for children to have some autonomy

permissive parenting: parenting style characterized by few demands and rules for children

neglecting (disengaged) parenting: parenting style characterized by a lack of attention to and care for children

Before completely condemning authoritarian parenting, however, we have to admit that there are alternative interpretations of the research results. Perhaps you have already realized that, similar to what we saw for attachment, the relationship between parenting style and adjustment is a correlation. Again, we are not permitted to draw causal conclusions because of the directionality and third variable problems (see Module 2). Consider the directionality problem. Just as we saw in the role of temperament on attachment, it may be the case that easygoing, well-adjusted children allow parents to adopt a more flexible, authoritative parenting style. Indeed, several psychologists have suggested that such child-to-parent effects, as they are called, can explain a lot of the relationship between parenting style and adjustment (e.g., Bell, 1968; Harris, 1995; 1998; Rowe, 2002).

We also have a version of the third variable problem preventing us from concluding that parenting styles cause differences in adjustment. In short, one variable that could lead to both is genes; again, the description parallels what we just saw for attachment. Specifically, perhaps a parent is an easygoing, authoritative parent and a child an easygoing, well-adjusted child because of genes that they share. Behavioral genetics examinations of personality have revealed heritabilities for many personality characteristics in the 30%–50% range (Ebstein et al., 2003). In other words, about 30% to 50% of the variation in personality characteristics in the population can be attributed to genetic differences. The parent-child shared genetic contributions to personality could certainly account for some of the correspondence between parenting style and child outcomes.
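To make the idea of heritability a little more concrete, here is a brief, purely illustrative sketch (written in Python, using made-up numbers for a hypothetical trait rather than data from any actual study). It simply shows what the percentage means: if we pretend that a personality trait is the sum of a genetic contribution and an environmental contribution, heritability is the share of the total variation in the trait that comes from the genetic part.

import random
import statistics

# Purely illustrative simulation of a hypothetical personality trait.
# The variance values below are assumptions chosen for the example,
# not findings from Ebstein et al. (2003) or any other study.
random.seed(1)
n = 10_000
genetic = [random.gauss(0, 0.4 ** 0.5) for _ in range(n)]      # assumed genetic variance of 0.4
environment = [random.gauss(0, 0.6 ** 0.5) for _ in range(n)]  # assumed environmental variance of 0.6
trait = [g + e for g, e in zip(genetic, environment)]          # trait = genetic part + environmental part

heritability = statistics.variance(genetic) / statistics.variance(trait)
print(round(heritability, 2))  # prints a value close to 0.40, i.e., about 40%

In real behavioral genetics research, of course, the genetic and environmental contributions cannot be observed directly; they are estimated indirectly, for example from comparisons of identical and fraternal twins.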

Given the plausible directionality and third variable alternative explanations, you may be tempted to completely discount the role of parenting effects. A few psychologists have indeed assigned a minor role to parenting styles (Harris, 1995). What we really need are some experiments to help us draw the causal conclusion. Unfortunately, not many have been conducted. In one, Philip and Carolyn Cowan (2002) randomly assigned 100 couples with a child entering kindergarten to one of three groups: two groups participated in 16 weekly discussions with other parents (led by a pair of psychologists), and the third was a control group that had no discussions. One of the discussion groups had a special emphasis on parenting issues, and the other on marital issues. The Cowans found that parents in the parenting discussion group increased their authoritative parenting behavior, and as a result their children adapted to school better than the children of the control group did. Even more interesting, the parents who participated in the marital discussion group also increased their authoritative parenting, and their children fared better in school, too.

Rather than continuing to look for straightforward parenting effects, many researchers have become interested in the possibility that parenting styles do not affect all children equally (Bates et al., 1998). Most psychologists believe that children’s adjustment reflects a combination of genetic effects, child-to-parent effects, parenting styles, and other environmental forces such as peer influences (Maccoby, 2002).

Physical Punishment

A parenting tool common among authoritarian parents, but present in all styles, is physical punishment, or spanking. Estimates of the frequency of physical punishment in the US vary widely, from 37% to 80% of parents (Gershoff et al., 2018; Finkelhor et al., 2019). The discrepancy largely results from different methodologies, particularly with respect to the ages of the children included in the study (younger children are more likely to be spanked than older children are). It is probably safe to say that the majority of parents in the US have used physical punishment, though.

The key question, of course, is this: does all this spanking create better children? This has been the subject of debates over the years. One of the key problems is that ethical concerns make it difficult to conduct the type of research that allows confident causal conclusions (of course, you remember that this is the experimental research design, right? If not, we recommend that you go back to Module 2, if not now, at least before your final exam in this class). The primary research method that has been used is a longitudinal design, which is technically a correlational design.

Further complicating matters, or so it seems, is the fact that the research is not entirely consistent. There are some studies that show spanking is associated with poorer outcomes, and some that do not. And media accounts have continued to report these controversies, with article titles like “Spanking Can Be an Appropriate Form of Child Discipline” (Pingleton, 2014) and “Meet the Scientists Who Haven’t Given Up on Spanking” (Pelley, 2018). One final bit of apparent contradiction and complication: 30% of members of the American Psychological Association surveyed in 2016 did not agree that spanking is harmful to children (Gershoff, 2018). It is a requirement to have a PhD in psychology to be a member of APA, so this certainly seems like a legitimate scientific controversy that has not yet been settled.

But hold on, things are not so simple. Or is it things are not so complicated? We are starting to confuse ourselves here. Let us clarify. And we will start by simplifying: the science is largely settled. Spanking is harmful to children.

We will let the American Psychological Association speak for itself. From the Resolution on Physical Discipline of Children By Parents (APA, 2019):

“. . .the American Psychological Association recognizes that scientific evidence demonstrates the negative effects of physical discipline of children by caregivers and thereby recommends that caregivers use alternative forms of discipline that are associated with more positive outcomes for children.”

And how about the American Academy of Pediatrics, which has had a policy against physical punishment for many years? Here is an excerpt from their 2018 revision. It is their strongest statement against the practice yet:

“Parents, other caregivers, and adults interacting with children and adolescents should not use corporal punishment (including hitting and spanking), either in anger or as a punishment for or consequence of misbehavior, nor should they use any disciplinary strategy, including verbal abuse, that causes shame or humiliation.”

It is true that longitudinal studies are correlational. In essence, we are stuck with the same kinds of difficulties that we had when trying to draw conclusions about the roles of parent responsiveness on attachment and parenting style on adjustment. Physical punishment is consistently associated with negative outcomes, but is it the case that the spanking caused the poor outcomes, or that the poorly behaved children caused parents to spank them more? Gershoff et al. (2018) produced an excellent explanation of the scientific conclusion that physical punishment is harmful and ineffective, and they did it by following an important historical model. No one doubts that smoking causes lung cancer, despite the fact that there are no experimental studies on humans. Gershoff and her colleagues applied the same criteria used to evaluate the appropriateness of causal conclusions from the non-experimental smoking-lung cancer research to the research on physical punishment, producing a very convincing case that spanking causes poor outcomes and does not produce better-behaved children.

So what are these poor outcomes to which we keep referring?

A well-known meta-analysis found that children who are spanked have lower levels of moral internalization (essentially, learning that what they did was wrong and taking responsibility for it), lower quality of parent-child relationship, worse childhood and adulthood mental health, higher childhood and adulthood levels of aggression, and higher childhood antisocial behavior (Gershoff, 2002). Ouch.

The one “positive” outcome? Immediate compliance. The quotation marks, of course, are intended to convey that this only seems like a positive outcome. In reality, it is one of the key factors that lead parents to believe that spanking is effective. But even here, it is not doing what they think it is. Although spanking often stops a behavior in the short term, the long-term results are less promising. Essentially, children learn how to avoid the spanking, sometimes by making sure that they commit the behavior only in situations in which they are unlikely to be caught (Johnston, 1972). Picture the 13-year-old who is spanked for using profanity at the dinner table. He is likely to stop swearing in the presence of his parents, but unlikely to do so with his friends.

Some critics have charged that research showing negative effects combined mild physical punishment with harsher punishment that crosses over the line to abuse, and that only lower quality studies have found negative effects. Gershoff and Grogan-Kaylor (2016) conducted a new meta-analysis that addressed these concerns. They found separate effects for lower-level and harsher physical punishment. They also found no evidence that the size of the negative effects varied according to the measure of study quality they employed.

Still, spanking does have some adherents. Even they admit, however, that spanking should be occasional and mild (Baumrind et al., 2002; Larzelere & Kuhn, 2005). The problem is, children are often judged most in need of a spanking when they have committed an act that has frustrated, insulted, or angered a parent. In short, the time that the parent probably most wants to spank the child is the exact time that the “mild swat” is most likely to spiral into an anger-fueled abusive episode. If parents accept the advice that they should wait until they are not angry when they spank, they force themselves to contradict one of the principles of the effective use of punishment. You might recall that consequences are much more effective at influencing behavior if they are immediate (Module 6). If it takes a parent an hour after a child’s infraction to calm down enough to administer a controlled swat, the time for the effective use of the punishment has long passed. Also, it is worth considering what the word discipline means; it comes from the same Latin root as the word disciple. It means to teach. The goal of discipline is not simply to stop unwanted behaviors; it is also to teach wanted behaviors. Punishment, physical or otherwise, is designed only to stop specific behaviors. There is no guarantee that unwanted behaviors will be replaced by appropriate ones.

What can we conclude from these sometimes confusing results about the role of parents? First, because of the genetics/temperament/personality issues, it is safe to say that parents’ behaviors are not as strong an influence as some believe. At the same time, few psychologists have gone so far as to say that parents are unimportant; rather, they argue that parents are one of several influences. Second, there is growing agreement that a one-size-fits-all approach to parenting is inappropriate, so it can be difficult to track the individual influence of a parent’s behaviors. Third, there is widespread, but not universal, agreement that the negative aspects of physical punishment outweigh any possible benefits. There is a good possibility that physical punishment and authoritarian parenting cause problems such as aggression, antisocial behavior, and poor relationships between parents and children. Keep in mind that very few psychologists are out there advocating strongly that physical punishment (and authoritarian parenting) is better than alternative techniques, only that they might not be worse.

Let us conclude this section by returning to two lingering problems. What about the news articles in favor of spanking and the 30% of APA members who are not against spanking? Well, part of the answer comes from the seven tips for evaluating information that we shared with you in Module 1. In particular, we think a version of what we called the myth of two equal sides is going on with respect to the evaluation of research. The number of studies that find that spanking is not harmful is quite small. The overall evaluation of the whole body of research has led the two most appropriate professional associations (the American Psychological Association and the American Academy of Pediatrics) to produce unequivocal recommendations that parents NOT use physical punishment under any circumstances. As for the 30% of APA members, we cannot be sure because we do not know who the respondents are, but there are a significant number of psychologists who do not endorse a scientific, evidence-based approach to psychology. We believe this is a serious problem in the discipline and will describe it more fully in Module 31.

  • Did this section lead you to reconsider any personal decision you had made regarding your own parenting practices? Why or why not?

17.3. Development of Identity: Learning Who You Are

  • Answer the following question at least five different ways: Who are you? Include only the important aspects of your identity. At what age did these aspects solidify in your view of yourself?

Life is hard. And we are not even talking about school. Throughout your life you have had to learn how to survive in the physical and social world. You had to learn which kinds of situations were safe and how to navigate the social landscape. There were friends to make, rivals to best, enemies to avoid. Each task may require a unique set of abilities. Is it any wonder that many psychologists over the years have characterized human life as a monumental struggle?

Erik Erikson was one of those psychologists. He is one of the most famous social developmental psychologists, and his theory guided a great deal of research throughout the second half of the 20th century. Erikson divided the entire lifespan into eight separate stages; during each, he explained, we are faced with particular kinds of conflicts or challenges. Social development proceeds through our resolution of these conflicts, or the way that we meet the challenges. The “footprints” of the challenges are left on our later personalities, as they influence the way that we approach later social relationships.

Erikson’s Stages of Psychosocial Development

You can access the video directly at: https://youtu.be/aYCBdZLCDBQ

As you read the descriptions of the eight stages, you may have two reactions that correspond to the major evaluations of Erikson’s theory. First, he was right on the money about the types of challenges that many people face throughout their lives. Second, the timing seems off. Although it does seem correct that we face important challenges and that our response to those challenges will influence our later social relationships, the challenges probably do not come so neatly packaged into particular stages of life. Rather, the challenges may occur at any time throughout life and in no particular order.

This section will expand on one of Erikson’s key challenges, developing an identity, to illustrate how such challenges can span well beyond a single life stage. Your identity is your sense of self, the important aspects of your life that make you a unique person. Most people have a very strong sense that their identity is constant, but the reality is that identity is formed and modified throughout your life.

Self-Awareness in Infancy and Childhood

The first step along the path to establishing a solid identity is to realize that you are an individual person. This is not as ridiculous as it may sound. Recall from Module 16 that the first time an infant is absorbed watching her hand move, she may not even realize that it is her hand, under her control. A key accomplishment during Piaget’s sensorimotor stage is for the infant to realize that objects exist apart from the self. Part of that key cognitive development comes from the realization that there is such a thing as the self.

To be sure, infants love looking at their reflection. If you hold an infant up in front of the mirror, she is likely to kick her legs excitedly, coo, or laugh. But this response is a bit like a dog that barks at its own reflection, thinking it another dog. How do we know that? That is, how do we know that a young infant does not realize that the baby in the mirror is a reflection of her? Suppose you manage to sneak a sticker onto the child’s nose without her knowledge. When she sees the baby in the mirror, will she reach for her own nose, or will she reach for the mirror? If the infant is under 15 months old, she will probably reach for the mirror; if she is over 18 months, she will reach for her own nose. Thus, the child’s realization that she  is the baby in the mirror develops during this period (Butterworth, 1992; Schneider-Rosen & Cicchetti, 1991). After this time, the infant begins to focus on and think about her self. She can experience emotions such as embarrassment and pride, and will soon learn to recognize photographs of herself (Bullock & Lutkenhaus, 1990; Lewis, 1990).

Development of thinking about the self seems to parallel, or at least closely follow, the development of a theory of mind (Module 16; Wellman, 1993). For example, between the ages of 5 and 7, children begin to think more clearly about their own intentions and plans, and they develop the ability to think systematically about pursuing goals. Psychologists call it the 9 to 5 shift. Just kidding. They call it the 5 to 7 year shift (Sameroff & Haith, 1996).

Adolescent Identity Crisis and Emotional Turmoil

Erik Erikson thought of adolescence as a time of crisis, as the teenager struggles to figure out who he or she is. Researchers have found that he was at least partially right. Because of the physical, cognitive, and social changes that occur, identity does appear to become key in adolescence (Grotevant, 1998). Research is less supportive of the idea that the search becomes a crisis, however.

Erikson was not the first psychologist to think of adolescence as a turbulent time. Over 100 years ago, one of the pioneers of psychology, G. Stanley Hall, characterized adolescence as a period of “storm and stress.” The accepted wisdom is still that adolescence is a time of extreme turmoil, filled with risky behavior, explosive conflict with parents, and moodiness (Arnett, 1999). Public perceptions are clear: many people in the US believe that adolescence is a very difficult time (Buchanan et al., 1990; Buchanan & Holmbeck, 1998).

But is it really? Is adolescence really a time filled with frequent mood swings, excessive risk-taking, and constant fights with parents? The answer is a resounding, “sort of.” It is true that “storm and stress” are more likely during adolescence than at other times of life. The turmoil is by no means a sure thing, however, and when it does occur, it tends to be less dramatic than in the movies, which, by their very nature, must be dramatic. Come on, would you pay thirteen dollars (plus six dollars for popcorn) to watch 112 minutes of a fourteen-year-old spending quiet time with, speaking respectfully to, and not arguing with her parents? In reality, there are large individual differences, and many adolescents do not have much conflict with their parents. Even in families that do experience a lot of conflict, adolescents and their parents still report that they have a good relationship with each other (Arnett, 1999).

The famous adolescent mood swings are based in reality, however. Adolescents do report more extreme moods, especially negative ones, than adults report (Arnett, 1999; Larson and Richards, 1994). But contrary to public opinion, the mood swings are probably not the result of “raging hormones.” Researchers found that the mood swings were not related to the stage of puberty an adolescent was experiencing, which would be tied to the kinds and levels of hormones; instead, they suggested that the causes were cognitive and environmental (Larson & Richards, 1994).

Finally, it is also true that adolescents are more likely to engage in risky behavior than at other times in their lives. Risky driving and sexual behavior, criminal behavior, and substance abuse all tend to peak during the adolescent years (Arnett, 1992; 1999; Moffitt, 1993). Although there are individual differences as in the other areas of storm and stress, most adolescents occasionally engage in at least one kind of risky behavior (Arnett, 1992). It is important to note that while research suggests that risk-taking behaviors are more likely to occur during adolescence, newer research indicates that some adolescent risk-taking behaviors, including unprotected sex and substance use, have decreased in the last 30 years (Arnett, 2018).

Emotional turmoil, when it does occur, may very well be related to the struggle to form an identity independent from parents. A typical adolescent belief is that no one, especially parents, understands them. Originally, psychologists viewed this as solely a cognitive issue. Specifically, they believed it was a version of Piaget’s egocentrism that applied to adolescents (Elkind, 1985; Elkind & Bowen, 1979). More recently, some psychologists have proposed that adolescents’ feelings of being misunderstood may be more an effect of social development, specifically establishing one’s identity. Key parts of the process of establishing one’s own identity are paying extra attention to the self and separating the self from parents. During these processes, many adolescents tend to exaggerate their differences from other people. They (correctly) notice their own uniqueness, and they (probably incorrectly) believe that because they are so different from everyone else, no one—especially parents—can possibly understand them (Lapsley, 1993; Vartanian, 2000). Many adolescents struggling with their identity come to think that other people notice them as much as they notice themselves, as if they are on a stage in front of an “imaginary audience” (Elkind, 1976; O’Conner, 1995). Adolescents who have difficulty during the process of establishing an identity—in other words, those who suffer an identity crisis—are especially likely to adopt these “egocentric” beliefs.

So, identity formation can be a struggle, but is it a crisis? For example, think about one of literature’s best-known examples of an adolescent in the throes of an identity crisis, Holden Caulfield from J. D. Salinger’s The Catcher in the Rye.  During the course of a single weekend, Holden is expelled from school and reveals that he cannot relate socially to people with whom he comes in contact. He despises and ridicules his roommate, yet Holden clearly envies him. He also expresses a deep-seated need to protect the innocence of childhood, part of his identity that he is giving up as he approaches adulthood. Increasingly alienated, depressed, and hopeless, Holden ends up in a hospital unable to cope with his crisis.

How accurate was Salinger’s portrayal of an adolescent struggle for identity? One of the great appeals of the novel is that teenagers can identify with Holden Caulfield. Individual readers recognize pieces of themselves in small aspects of Holden’s experiences. Very few people experience a weekend as dramatic as Holden Caulfield’s, however. In reality, as you might have guessed from the earlier discussion about emotional turmoil in general, the search for an identity is not as much a crisis as is commonly assumed. Although a crisis may occur, it is by no means necessary (Grotevant, 1998). For many people, it is a better characterization of the process to call it an exploration. Some people choose an identity without much fanfare and searching; others seemingly never do (Grotevant, 1998; Marcia et al., 1993). Many adolescents do, however, engage in active exploration, and for some of these people, identity search can be a crisis. As we are sure you realize, identity is a very complex concept; individuals may experience a crisis for some aspects of identity, such as gender, sexual orientation, ethnicity, or religion, but not for others.

Identity Development Beyond Adolescence

Although forming an individual identity is a critical task during adolescence, refining your sense of who you are is a lifelong process. People have many opportunities throughout their lives to reassess and redefine their identities (Yoder, 2000). For example, becoming a parent can force a profound change in someone’s identity.

Erik Erikson (1950) proposed that a person’s occupational choice was a key part of his or her identity. Think of how adults in the US introduce themselves. Very often they give their name and then their title or occupation (“I’m JoAnn; I’m an accountant”) rather than referring to their geographical history or family status (“I grew up in Colorado Springs” or “I’m the middle daughter in my family”). But occupational identity is rarely constant throughout your life. Career counselors commonly advise students who are graduating from college today that they can expect to have an average of four different careers (not jobs within the same career, but completely different careers) during their lifetimes. Each career change is likely to entail a significant revision of your identity.

Aspects of Identity

Although it is clear that people’s identities develop over time, it does not really feel that way. Quite the contrary, your identity feels like the part of you that does not change; it is what makes you, you. One possible explanation of this contradiction is that some aspects of your identity seem freely chosen, such as career or religious affiliation, and others, such as sex or ethnicity, are assigned to you. It may be that the assigned aspects of one’s identity play the key role in providing that sense of continuity, despite changes in other aspects. Even the chosen aspects are often conceived in relation to the unchosen aspects (Grotevant, 1992; 1993). For example, a female adolescent may choose a career based on her gender identity. Thus, even the chosen and changing aspects of identity are tied to the invariant, assigned ones. We will finish our coverage of identity by discussing three different sources of identity, one chosen, one assigned, and one somewhere in the middle. Note that we will address a fourth aspect of identity, gender, in Module 26.

Religious affiliation: A chosen aspect of identity

The majority of people throughout the world affiliate themselves with a specific religion, and for many it is among the most important aspects of their identities. Although there are certainly areas throughout the world where people are not exactly free to choose their religion, citizens of the US and the rest of the western world do have that choice.

According to the CIA World Factbook (2020), the world and the US are represented by the following religions:

  • Note that the US figures are listed for some individual Christian religions, rather than for Christianity overall. As a consequence, some of the “Other Religions” may be Christian as well.

The two most prominent differences between the US and the rest of the world are the percentages of Christians (69% versus 31%) and Muslims (less than 1% versus 24%). Although nearly one-quarter of the world’s population is Muslim, less than 1% of the US is, according to the CIA World Factbook. (Another estimate of the number of Muslims in the US is 3.45 million, which corresponds to a bit over 1%; Pew Research Center, 2018.) Obviously, there is an extraordinary difference between the distribution of religions in the US and the rest of the world.

In one key way, however, the United States may have more in common with deeply religious Muslim-populated countries than with the Western European and North American countries that are more similar to it in terms of religious affiliations. The US is quite religious. More people believe in God, attend church regularly, pray at least occasionally, and read the Bible in the US than in any other western country. Although religious commitment in the US has remained higher than in other western countries, it has begun to decline recently (Pew Research Center, 2019).

Religion, of course, is not a monolithic concept. Some aspects of religious identity lead to good outcomes, others to bad. Religious commitment, or religiosity, is often related to good deeds. The relationships are not always strong and straightforward, but they are there. For example, religious people—especially those who adopt a flexible, questioning attitude toward their religion—are likely to help people in need (Batson et al., 1989; Batson et al., 2001). Researchers occasionally question the motives of some religious helpers. For example, some religious people help only because they want to look like helpers (Batson & Gray, 1981). Still, it is difficult to criticize them, because they are embracing an aspect of identity that encourages them to do good deeds.

Religious intolerance, on the other hand, just like any intolerance, leads to bad outcomes. One of the driving forces behind atrocities committed throughout history is an overabundance of religious identity and the unwillingness to accept alternative religious viewpoints. Religious fundamentalism is the belief that one’s own religion is the sole legitimate source of fundamental truths about humanity and deity (Altemeyer & Hunsberger, 1992). The fundamentalist is required to fight against those who oppose this truth. Thus, fundamentalism, which can be an element of any religious affiliation, essentially includes intolerance as a defining feature. Fortunately, strict fundamentalism is atypical among the world’s religious people, but the seeds of intolerance are often present in any person with a strong religious identity. For example, researchers have shown that religiosity is positively related to prejudice (Batson et al., 1993; Dittes, 1969).

Many people have a set of experiences that lead them to assess their faith and affirm religious beliefs as a key part of their identity. In the classic case, such conversion, as it is called, is swift and radical, following the Biblical story of Paul, who converted to Christianity suddenly when he received a visit from Jesus in the form of a voice and a bright light while traveling. You should realize that conversion can work both ways; people can also have revelatory experiences that lead them away from religious beliefs (Roof & Hadaway, 1979).

Perhaps the key source of religious identity is socialization. In short, children learn their religious beliefs and the strength of those beliefs from other people in their lives. Some of that learning takes place in a formal educational setting. For a great many people, however, parents play the primary socialization role. So, although people are free to choose the religious aspects of their identities, many end up adopting their parents’ beliefs. If you examine the degree to which people identify with their parents’ religion as they mature, the agreement between parents’ and children’s attitudes about religion, or the self-reported influence that parents had on their children’s religious beliefs, the conclusion is the same. Parents play a very important role in the development of religious identity (Spilka et al., 2003). As we have seen in other aspects of life, parental influence wanes through adolescence; in the case of religion, the decrease of parental influence comes fairly late, around traditional college age (Ozorak, 1989). Even during adolescence, when general disagreements with parents may be at their highest level, parents and children still tend to agree on many issues related to religion (Glass et al., 1986; Hunsberger, 1985). As you might guess, parental influence on religious identity is strongest when the parent and child have a high-quality relationship (Bao et al., 1999; Myers, 1996).

As we are sure you realize, religious identity is related to additional aspects of an individual’s social identity. For example, one researcher has identified, in Christian adults, attachment styles with God that resemble the secure and insecure attachments that researchers have observed in infants (Kirkpatrick, 1992; 2002). Some research has also found that religious individuals trust others more than non-religious individuals do. Religiosity is also related to people’s stated beliefs (but not always their behaviors) about morality, marriage, non-marital sex, love, and homosexuality (Lefevor et al., 2019; McFarland et al., 2011; Pew Research Center, 2017).

Ethnicity: An assigned aspect of identity

Another key aspect of identity for many people is ethnicity. You can choose your religion, but you cannot typically choose which ethnic group you belong to. Ethnic minority adolescents are faced with the double-sided problem of fitting into the majority culture while keeping elements of the ethnic minority group culture in their identities (Erikson, 1968; Phinney et al., 2000). Adolescents who are members of a majority ethnic group are less likely to acknowledge their ethnicity as an important aspect of their identity, so these issues pertain mostly to members of minority groups (Phinney, 1990).

Successful development of ethnic minority identity within the context of the majority culture leads to good feelings about the self, the majority culture, and one’s ethnic group. A positive ethnic identity also leads to good feelings about other ethnic groups (Phinney et al., 1997).

It is not always smooth sailing, however. Media depictions and news reports of adult members of ethnic minority groups as morally and socially bankrupt can make it difficult for children to find positive role models, an important step in developing a positive ethnic identity (Glassner, 1999). Adolescents who understand that their options in life may be limited by the ethnic group to which they belong also have trouble developing a positive ethnic identity. Indeed, the stress associated with the problems of forging an ethnic identity can lead to depression, anxiety, and other psychological problems (Caldwell et al., 2002).

Prejudice and discrimination in the larger culture against members of minority groups can also stand in the way of minority adolescents’ attempts to develop a positive attitude about their ethnic group. The problem of discrimination is compounded by the fact that many members of the majority culture are unaware of it. For example, white respondents in the US have consistently rated racial relations and racial discrimination better than black respondents over the past 20 years (Davis, 2020). (By the way, we are writing this book in the immediate aftermath of the worldwide protests against systemic racism that resulted from the George Floyd killing. There has been a rapid change in these attitudes as a result, which we will describe in more detail in Module 21.)

Because of these complications and difficulties, ethnic minority adolescents in the US often form their identity in a different way than white adolescents do. For example, ethnic minority college students may be more likely to engage in extended exploration of their identity options than white students are (Phinney & Alipuria, 1990). On the other hand, because options often seem limited for members of minority groups, many wind up doing less exploration, essentially accepting the identity designation that the dominant culture assigns to them (Streitmatter, 1988).

Nationality: An aspect of identity in the middle

Some political scientists have characterized much of the turmoil in the world as a conflict between dramatically different national and cultural identities, a sort of “clash of civilizations” (Huntington, 1998). For some people, cultural or national identity is something assigned to them by virtue of the location in which they were born. Others, such as immigrants who become citizens of their adopted countries, make a very conscious decision to change their national identity. Researchers have discovered that individuals who immigrate to the United States, for example, often do reassess their identity during adulthood (Birman & Trickett, 2001).

On what do people base a national identity? Many observers have noted that shared religion, language, and ethnicity are among the most important keys to developing a strong national identity. Paradoxically, citizens of the US have had among the strongest national identities in the world over the years despite lacking many of the characteristics commonly thought to be important. As the famous “melting pot,” we are ethnically and racially diverse, we prohibit the establishment of a single religion, and our residents speak many different languages. What we do share, though, is a common history and commitment to ideals such as democracy and freedom. And for many years that was sufficient to create a very strong national identity and a great deal of national pride among the citizens of the US. For example, surveys consistently found that US citizens report levels of pride that are among the highest of any country in the world (Smith & Jarkko, 2001). In recent years, however, national pride has been declining in the US, reaching an all-time low in 2020 of 63% who are extremely or very proud to be American (from a high of 92% in 2003; Branan, 2020). Political and racial divisions that have grown deeper over the last several years are related to (and perhaps responsible for) this dramatic and rapid decrease.

You should realize that it is not a simple matter to state that national pride is automatically good, or, as some people claim, automatically bad. When a country has extremely high levels of pride and a strong sense of ethnic identity, it can lead to abuse of people with other ethnic identities. In addition, nations that are seen as too proud can be resented by other countries throughout the world. At the same time, however, the rapid decline of pride in the US, as at least a reflection of the increasing (and increasingly damaging) political and racial divisions, cannot be seen as a positive development, at least not yet. If it triggers a serious reflection that leads to real improvement in relationships across the political aisle and among different ethnic and racial groups, then we can change our assessment. Only time will tell, however.

  • When you wrote down aspects of your identity in the Activate section, did you include your ethnicity, nationality, or religion among the important parts?
  • Have you ever suffered from an identity crisis? If so, was it for your whole identity or just for specific aspects? How difficult was the experience? When do you expect the next period of adjustment for your identity will come?

17.4. Developing Gender Identity

  • What are some important (non-anatomical) differences between men and women? What do you think the causes of those differences are?
  • Do you tend to have traditional or non-traditional attitudes about the behavior that’s appropriate for females and males?

The concept of gender is complicated. While, as previously mentioned, sex is more or less determined by nature or genes (except in cases of surgical transformation from one sex to another), gender is more obviously an interaction between nature and the environment. In essence, gender is a person’s feelings about their masculinity or femininity within a given culture. There are two related concepts you should think about when you examine gender in psychology. Gender identity is a person’s inner feelings about being male or female. Gender roles are the behaviors that a particular culture finds acceptable for males versus females.

The next thing we have to do is clarify something important. A person’s sex (based on their genitals, chromosomes, or internal organs) is not the same thing as their gender identity. The simplest way to think about it is by considering the match or mismatch between an individual’s gender identity and biological sex. People for whom the two match are referred to as cisgender. Individuals whose gender identity does not match their biological sex are transgender.

Actually, psychologists have begun to recognize that gender identity is not even really binary. Some individuals are a mixture of genders; for example, a person who is bigender identifies as both genders. Others are genderfluid, meaning that their gender identity varies over time (from one to the other to both or neither). For example, Jonathan Van Ness, one of the stars of the Netflix show Queer Eye, has been quoted: “…somedays I feel like a boy and somedays I feel like a girl” (Tirado, 2019). Still others do not identify with a gender at all, which is known as agender.

Underlying most psychological and everyday interest in the concept of gender is the obvious observation that men and women differ from each other. The important questions are how much and why?

agender: denotes a person who does not identify with a gender

bigender: denotes a person who identifies with both genders

cisgender: denotes a person who identifies as the gender that matches their biological sex

genderfluid: denotes a person whose gender identity changes over time

gender identity: a person’s inner feelings about being male or female

gender role: the behaviors that a particular culture finds acceptable for males versus females

transgender: denotes a person whose gender identity does not match their biological sex

How Different are Females and Males?

The dominant themes in all of the gender conceptions are maleness and femaleness, whether we are talking about presence and absence, stability, or degrees. For that reason, it is helpful to focus on that binary conception, as long as you remember the observations about non-binary gender identities that we just shared.

The short answer to this main question is that males and females differ enough in many characteristics that you will be able to notice the difference but won’t be able to predict anything about a person from their gender. Researchers typically report gender differences as an average difference between males and females. The size of that difference is, without exception, quite small. Furthermore, differences within a gender are, without exception, quite large. As a result, you will routinely find individuals who “violate” the gender difference.

Complicating matters somewhat is the unfortunate fact that most of the gender differences you have heard about are probably the oversimplified, “newspaper headline” kind. Let us illustrate with an example. “Everyone knows” that females are the emotional gender. Emotion is quite a complex phenomenon, however. There are specific aspects of emotions on which males and females tend to differ, such as emotional expressiveness. Females on average tend to be more emotionally expressive than males (Kring & Gordon, 1998). There is very wide variability within genders. There are females who are extremely expressive and females who are not, and there are males who are extremely expressive and males who are not. The result is that the average difference between males and females is dwarfed by the differences within the genders, and you will encounter many males, for example, who are more expressive than the average female. Thus, you cannot make any predictions about a person’s emotional expressiveness from knowing his or her gender.

At the same time, some gender differences are definitely noticeable. For example, Alice Eagly (1995) has noted that although within-gender variation is much larger than the average gender difference for frequency of smiling, 65% of females and 35% of males smile more than the average person.
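To see how a noticeable average difference can coexist with very large within-gender variability, here is a small, purely illustrative calculation in Python. It assumes that the characteristic (smiling, in this case) is normally distributed within each gender with equal spread, and it uses a hypothetical standardized difference chosen only so that the output lands near the 65% and 35% figures mentioned above; that specific number is our assumption for the sketch, not a value taken from Eagly (1995).

from statistics import NormalDist

# Illustrative sketch under assumed normal distributions with equal spread.
d = 0.77                                  # assumed difference between group means, in standard-deviation units
females = NormalDist(mu=d / 2, sigma=1)   # one group's mean sits half the difference above the overall average (0)
males = NormalDist(mu=-d / 2, sigma=1)    # the other group's mean sits half the difference below it

print(round(1 - females.cdf(0), 2))       # proportion of the first group above the overall average: about 0.65
print(round(1 - males.cdf(0), 2))         # proportion of the second group above the overall average: about 0.35

Even with a difference that large, the two distributions overlap substantially, which is why knowing a person’s gender tells you very little about how much that particular person actually smiles.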

Remember these points any time you hear, for example, that girls are better than boys in verbal ability and boys are better than girls in math. Girls do tend to learn language earlier than boys, and boys are more likely to have reading problems, but overall the differences are small and some of them tend to get smaller as children develop (Hyde & Linn, 1988). With respect to math, boys on average are better than girls at math problem solving, but only after adolescence. They also are overrepresented among the very small number of people who have very high math ability (Benbow et al., 2000). On the other hand, girls on average are better than boys at computation, and the genders are equal in how well they understand math concepts (Hyde et al., 1990).

Why are Females and Males Different?

We are going to shift back to talking about biological sex now because a portion of our explanations for gender differences relies on it. Some people think that men and women are exactly alike; others that they are so different that they may as well be from different planets (Gray, 1993; 2008, no relation to one of the co-authors of this book). Contrary to both extreme views, research has shown consistent, noticeable, if small, average differences between men and women, or between boys and girls, on various characteristics (Hyde, 2005). The question that remains is where those differences originate. We can look at both nature (genes and hormones) and nurture (social learning). In order to describe the two types of influences clearly, let us pretend for a moment that it is either nature or nurture and discuss them separately.

A temporary dichotomy, part 1: Genes and hormones

In Module 15, you saw the important role of genes and hormones in the prenatal development of sex organs. Further, you saw in Module 22 that the two probably play a role in the childhood gender nonconformity that is strongly associated with non-heterosexual orientations. Although the notion has been controversial through the years, it is clear by now that biology plays a central role in the development of gender differences between boys and girls, beyond the obvious anatomical differences. Do not make the mistake of oversimplifying, however. Accepting the role of biological factors is not to deny a role for social factors—remember, this is a temporary dichotomy. You have seen a number of times in this book—for example, in the section on parenting styles—that psychological phenomena are always an interaction between nature and nurture. In some cases, it is not even clear how to distinguish between nature and nurture. Consider the prenatal role of hormones in gender development. Hormones are clearly biological substances, produced by a fetus on indirect orders from an X or Y chromosome. On the other hand, some hormones originate in the environment outside of the fetus, and the amounts of hormones produced by the fetus can be influenced by environmental factors. When it gets right down to it, it is difficult to clearly classify hormones as nature or nurture. In Module 22, we referred to hormones as a part of the non-social environment. We include them with genes on the nature side this time to draw a sharper distinction with the social environment explanations we will be sharing soon.

Prenatal hormones, in addition to their role in differentiation of fetal sex organs, play an important role in our gender identity through differentiation of our brains (Dennis, 2004; Maccoby, 2000; Ruble & Martin, 1998). Experimental research with rats has indicated that exposing a female prenatally to high levels of male hormones leads the females to learn to navigate a maze as well as males (male rats typically outperform females). It is tricky to generalize research results like this to humans; after all, we only know a few people whom we would equate with rats. There are cases of accidental or naturally occurring human prenatal exposure to the “wrong” sex hormones, however, and they are consistent with the rat research. For example, some fetuses have been exposed to a synthetically produced hormone, progestin, which is chemically similar to some androgens. It has been used as a drug for women who are at risk for miscarriages. Female children whose mothers received the drug exhibit some behaviors more commonly considered male-typical, such as aggression and independence (Reinisch & Saunders, 1984; Reinisch, 1981). Other children have suffered from genetic conditions that have caused them to receive too much or too little of the gender-appropriate hormones. Here, too, the research has tended to confirm the experimental results with rats. For example, some genetic girls suffer from a disorder called congenital adrenal hyperplasia. Their adrenal glands produce too many androgens during prenatal development (Ruble & Martin, 1998). Sometimes, when doctors catch it early, the condition is reversed, and the infants are raised as girls. Although they have a female gender identity, it is not as strongly female as in girls who had not been exposed to high levels of androgens (Collaer & Hines, 1995).

There are likely other direct effects of genes on gender formation, as well, but researchers have not yet uncovered details about what these effects might be (Dennis, 2004). Research with mice, for example, has found that some genes begin to express themselves differently in male versus female brains before hormones begin to play a role (Dewing et al., 2003). Altogether, we have a persuasive case for a critical role of hormones and genes on the development of differences between men and women. Let us now turn to the other side of the coin, to the role of social learning.

A temporary dichotomy, part 2: Social learning

There is little doubt that parents, other relatives, peers, and society have different expectations for boys and girls; these expectations are called gender roles. It is from the communication of these expectations—which begins essentially at birth—that children learn their own gender identity. From the first days of life, boys and girls are treated differently by many people that they encounter. Although parents often try to treat children of opposite genders similarly, they do not always succeed. For example, a group of adults participating in a research study were given a one-year-old infant (not their own child) to engage in play. Each baby was addressed half the time as a girl, and half the time as a boy. When the baby was a “boy,” the adults offered more boy-typical toys and encouraged more active play. When the baby was a “girl,” the adults engaged in a more nurturing style of play (Frisch, 1979).

Many parents do not even try to treat their female and male children similarly. But among those who do, there are many other messages in the environment that convey gender roles. Even if each individual message has only a small effect, the cumulative effect can be quite substantial. For example, imagine the birth of a baby girl whose parents disapprove of “girl” versus “boy” treatment. Well-meaning friends and relatives might want to respect the wishes of the parents, but they might also long to indulge their desire to dress a girl. So, they attempt to satisfy both goals. They very generously give the infant girl two outfits each. One relative gives her a yellow outfit to satisfy the parents and the pink one that they loved so much. Another gives a multicolored jumpsuit to keep the parents happy and a pink one that they could not resist. A third even gives a blue outfit. Oh, and there is that pink one that was on sale. And so on. In the end, half of the clothes the new daughter receives as gifts are pink, and the other half are spread among several other colors.

There are many possible sources of gender role information. In addition to the differential treatment of boys and girls, there is role modeling. Parents, siblings, teachers, peers, and the media have all contributed (Ruble & Martin, 1998). For example, a young boy would develop expectations about men’s and women’s roles by observing the division of household chores between his mother and father.

How Do Thinking and Gender Identity Interact? 

When you meet a new classmate, one of the first facts you notice about the person is gender. It is worth noting that this is a cognitive phenomenon. Recall the discussion in Module 16 about Piaget’s conceptual schemes. They are used to interpret new experiences and to help us understand the world. Many modern researchers refer to these mental representations of categories as schemas. Gender, because it is emphasized in many (perhaps all) cultures, turns out to be a very important schema for people.

Sandra Bem (1981; 1993) has proposed a gender schema theory to explain how our gender-related cognitions help shape our gender identity. It is a great theory with which we can remind you of the interaction between cognitive and social development. A schema for a complex concept, such as male or female, would consist of a lot of information, such as physical characteristics, typical behaviors and activities, and so on. Gender schemas do more than simply help us recognize a particular person as male or female. They help organize and guide all of our knowledge about the concept. If a particular schema becomes active in memory, it means a great deal of related knowledge is available. The schema leads you to notice information consistent with it and to neglect information inconsistent with it. For example, suppose your schema for male contains the information that men are aggressive. You will tend to notice cases of men being aggressive. If this mental habit reminds you of the confirmation bias that we told you about in Modules 1 and 7, you should ask your professor for extra credit. The biasing power of a schema is indeed very closely related to the confirmation bias.

People also apply their gender schemas to themselves. Their expectations about the important aspects of gender can lead them to engage in “gender-appropriate” behaviors. In addition, they can compare their own behavior to their schema to help figure out aspects of their own gender. For example, an assertive woman whose gender schema contains the information that women are submissive may think of herself as less feminine. You can now begin to see how a person’s way of thinking about gender becomes an important determinant of that person’s gender identity.

  • Try to think of some errors that you have made about someone else, or that someone else has made about you as a result of assuming something from your gender. Did the error lead to any conflict or bad feelings?
  • Think about your oldest friend (meaning the friend you have had the longest). How has your relationship changed over the years? Think about your best friend (even if it is the same person). What makes you such good friends? What are the specific benefits the two of you gain from the friendship?
  • Think about a casual friend. What are the main differences between your relationship with this person and with your best friend?

Friendships begin very early in life. But there are profound differences between the friendships you can observe between young children and between a pair of twenty-year-olds. These differences roughly correspond to the person’s developing capacity for emotional intimacy. Again we will return to childhood to help us understand friendship and intimacy later in life.

What is friendship? This turns out to be a difficult question for psychologists to answer, in part because friendship can mean different things to different people. With so many different kinds of relationships between people, it is not surprising that there would be disagreement regarding what constitutes a friendship. Let me propose that we define friendship as a relationship between two people that they choose to create and that is based on mutual affection or fondness or liking. According to this definition, shared interests and activities—for example, between classmates, teammates, or co-workers—are not by themselves sufficient bases for friendship, although friendship can develop from them. Other characteristics that people might like to include in a definition of friendship, such as trusting and self-disclosing, might be more useful for judging the strength or quality of a friendship than for deciding whether or not the relationship is a friendship at all.

Childhood Friendship

Many of young infants' cognitive abilities seem especially geared to promoting the growth of social relationships. These abilities are not applied solely to caregivers and other adults, however, and they are very important components of developing friendships. Infants show a great interest in other infants, at least by the last three months of the first year; for example, they commonly look at, smile at, and imitate each other (Rubin et al., 1998). During the second year, infants' social interactions become quite a bit more active and impressive, often involving games of mutual imitation of complex behavior (Brownell, 1990; Hanna & Meltzoff, 1993; Ross, 1982). Although you cannot always tell by the behavior that goes on in our households, two-year-olds also know how to take turns. These older infants' interest in other children is selective; they choose to associate with some children and not with others (Strayer, 1990). They prefer familiar and sociable playmates, and children who have similar behavioral tendencies and other characteristics (Howes, 1983; 1988; Rubin et al., 1994). Thus, with the children's demonstration of preferences, even these early relationships contain an important component of friendship.

You can see the increasing complexity of social interactions among preschoolers by examining the ways that they play. Most children between the ages of two and five engage in solitary, group, and parallel play (playing alongside other children, but not really with them), so it is not simply whether or not the child plays alone that reveals their social development. Rather, it appears that parallel play serves a different function for older preschoolers; it is an important way for the children to approach each other. First, they play next to each other, and then they begin to play together (Bakeman & Brownlee, 1980). Another difference that emerges during the later preschool years is a large increase in the frequency of pretend play. Although literally child's play, a five-year-old's ability to share meanings and symbols with playmates through pretending shows a remarkable repertoire of cognitive and social abilities. They must be able to imagine and fulfill complementary roles, such as husband and wife when playing "house," and the children must agree with each other about the roles and the rules by which pretend scenarios will operate (Rubin, Bukowski, & Parker, 1998). Again, this sharing and collaborating between young children, important as it is for increasing bonds of affection, indicates an increased capacity to form and retain friendships.

Despite these impressive social abilities in evidence in their relationships, young children still define their friendships based on shared activities; they are friends because they do things or play with the same toys together (Berndt & Perry, 1986; Hartup, 1993). It is during the elementary and middle school years that we see a shift toward a less "object-oriented" definition of friendship. By age 10, children realize that friends need each other, that they stick up for and are loyal to each other, and that they share interests and not simply activities (Bigelow, 1977; Zarbatany, McDougall et al., 2000). They have been moving toward intimacy for a while, but children at this age are now ready to begin their attempts at building it. They make efforts to understand each other and are self-disclosing, able to share private thoughts, fears, and feelings (Bigelow, 1977; Clark & Bittle, 1992; Hamm, 2000). Think about the game "Truth or Dare." When a group of nine-year-olds plays this game, it often devolves quickly into an uninterrupted succession of goofy dares. Twelve-year-old friends, on the other hand, are likely to include many "truth" rounds, in which the players take turns disclosing their private thoughts.

Video: "Small Talk | Friendship | CBC Kids"

You may also access the video directly at:  https://youtu.be/d9HH3pTmHz8

Adolescent and Adult Friendship

Adolescents continue the trend of redefining friendship to include shared thoughts and feelings, and loyalty (Berndt & Perry, 1990). What they are doing is increasingly recognizing the importance of quality friendships (Berndt, 2002) and emphasizing intimacy in their friendships (Buhrmester, 1996). They also become less possessive. Younger children often act as if friendship is exclusive: if he is my friend, he cannot be yours. Adolescents recognize that their friends can have other friends (Rubin, Bukowski, & Parker, 1998). In part because of their increasing focus on friendship quality, adolescents depend on their friends more than younger children do. They also depend more on their friends than on their parents to meet their emotional needs (Furman & Buhrmester, 1992), and they spend much more time with friends than parents—in one study, double the amount of time (Csikszentmihalyi & Larson, 1984). As was true for younger children, adolescents are similar to their friends; similarity is more broadly defined, however. For example, adolescent friends tend to have similar ethnicities, genders, school achievement, and attitudes about many aspects of life (Hamm, 2000; Hartup, 1993; Youniss & Haynie, 1992).

Friendship in young adulthood changes in a way that most people would guess. Because parenting and work require so much time and effort in young adults’ lives, their friendships often end up tied to these activities. Newlyweds have the largest number of friends, more even than children and adolescents (Hartup & Stevens, 1999). As people get older, their circle of friends gets smaller. Time spent with friends declines steadily, from one-third of an adolescent’s waking hours to less than 10% of a middle-aged person’s day.

Friendship in adulthood resembles friendship in adolescence in important ways. Both types of friendships are characterized by intimacy and self-disclosure, and emotional support. Adolescent and adult friendships differ mainly in the contexts in which these behaviors occur; an adolescent friend may provide emotional support during a breakup with a steady boyfriend or girlfriend, while an adult friend does so during a divorce.

Psychologists have paid a great deal of attention to gender differences in friendship. On the whole, women tend to report higher levels of intimacy in their friendships than men do. Women’s friendships tend to be based more on talking and providing emotional support for each other, whereas men’s tend to be based more on sharing activities (Sapadin, 1988). In this way, then, many men’s friendships are reminiscent of young children’s friendships. Now, we are not saying that men are children. Many men do have friendships in which self-disclosure and support are key features, and the differences between men and women grow smaller later in life.

Some other characteristics of our friendships change as we age, as well. According to the socio-emotional selectivity theory (Carstensen et al., 1999), our circle of friends grows smaller and more selective.  As we age, we are more motivated to seek positive emotions and have less need for friends to serve information needs, resulting in a more carefully chosen, and therefore smaller, group of friends.

We will have quite a bit more to say about friendship and other intimate relationships in Module 22, when we take a non-developmental approach to some of these concepts, so stay tuned.

  • Think about your best male friend and your best female friend. Are the differences between the two friendships consistent with the general differences between men’s and women’s friendships?
  • At what age did emotional intimacy become an important aspect of your friendships? What were your friendships like before and after this time?

Module 18: Developmental Psychology: The Divide and Conquer Strategy

By reading and studying Module 18, you should be able to remember and describe:

  • Agenda setting by major theories in (developmental) psychology: Piaget and Erikson
  • The divide and conquer strategy in psychology research
  • Dividing and conquering in developmental psychology: development as a perspective, division by topics, division by chronology
  • Problems with dividing and conquering
  • Closing the loop: relationship between cognitive and social development, Vygotsky’s sociocultural theory of cognitive development

Developmental psychology, like several of the other subfields and the field of psychology as a whole, has progressed in an odd-seeming direction. You might expect that progress would occur by building up small theories into larger ones. One psychologist develops a theory about children’s memory, another about children’s problem solving, and a theory about childhood cognition in general emerges from an integration of these and other theories. You might recognize this process as analogous to bottom-up processing in perception, through which we recognize objects by building up, or integrating, individual features (Module 13). Instead, the early influential theories were often quite broad, attempting to explain fully some major aspect of psychology. These major theories then functioned as frameworks, setting the research agenda for later psychologists to follow—analogous to the way that top-down processing in perception guides our recognition of particular objects.

The two most famous developmental psychologists are cases in point. Jean Piaget formulated a comprehensive theory of cognitive development, and Erik Erikson, one of psychosocial development. Piaget sought to explain how a child’s understanding of and reasoning about the entire world develop, and Erikson’s theory was intended to show how challenges and crises that we encounter throughout life profoundly influence all of our future social interactions and developments. Many observers consider Piaget the first true developmental psychologist. Child development researchers prior to Piaget devoted themselves to cataloging the developmental milestones that were expected in normal children (Hunt, 2007). But the contributions of these earlier psychologists, because they were not guided by theories, are largely forgotten today. Although Piaget was not the first psychologist to study children, he was the first to offer a comprehensive theory of a major aspect of development. Nearly all research in cognitive development since Piaget’s time has been a reaction to his work. We have little doubt that one of the key aspects of Erik Erikson’s work that has led to his lasting influence is, similarly, the fact that he was the first psychologist to propose a theory of development that spanned the entire life (Hunt, 2007).

This “agenda setting” characteristic of early major theories is a key element of a psychologist’s lasting influence. Consider the big-name researchers and thinkers from early psychology, such as Piaget, B.F. Skinner, Erikson, and Sigmund Freud. Many of their theories are not currently accepted, but they set the research agendas that still guide much of the current work in psychology. For instance, it seems unlikely that James Marcia would have thought to examine what happens during the search for identity and whether the search occurs exclusively in adolescence had Erikson not proposed identity formation as a key task of adolescence. Once the search for identity was a legitimate topic of psychological inquiry, researchers like Jean Phinney could examine important questions about specific aspects of identity, such as ethnicity. The contributions of these later researchers are part of the incremental process through which scientific theories are refined and sometimes replaced (Module 14).

Starting with comprehensive theories works well because it helps individual researchers choose questions to examine that others will find interesting. For a scientist, it is not enough to conduct good research. You also have to conduct research that other scientists will want to read. If each researcher randomly chooses a topic to examine, there will be very little common ground for researchers with different specific interests to talk about. On the other hand, if two researchers begin with alternative explanations for Piaget's observation that three-year-olds have difficulty taking someone else's perspective, they automatically have something to talk (or argue) about. The two explanations may compete with each other or they may complement each other to form a more accurate explanation of phenomena originally covered by Piaget's theory. But this creative tension can only happen if Piaget's theory already exists.

The Divide and Conquer Strategy in Psychology Research

When they try to generate their own research ideas in their psychology classes, many students begin by thinking like Piaget and Erikson. They try to propose research that can answer every question about some complex phenomenon. As faculty members, we appreciate their interest and ambition, but the truth is most research does not proceed that way. Instead, researchers use a strategy that can be called divide and conquer to progress from a comprehensive, agenda-setting theory to an actual research project that they can reasonably complete. Individual researchers choose small elements from a broad theory or from a complex phenomenon, and they develop research ideas that pertain to those specific elements. They may then conduct entire programs of research, interrelated sets of studies that span years or decades, that are in-depth examinations of small chunks of the complex phenomena and theories.

For example, Renee Baillargeon, among her other research programs, has been publishing research on infants' understanding of object permanence since 1985. Her early research demonstrated that infants have at least a primitive understanding of object permanence, realizing that unseen objects still exist much earlier than Piaget proposed, at five or even three and a half months of age (Baillargeon et al., 1985; Baillargeon, 1987). As her research progressed, it demonstrated that infants as young as five months old have a solid understanding of object permanence, or at least one that lasts for 6 to 7 minutes (Luo et al., 2003). Recently, theories based on her research have begun to explain why infants retain some knowledge about unseen objects (for example, that impossible events like one object passing through another cannot occur) but do not retain knowledge about other aspects (for example, when the number of unseen objects changes) (Stavans et al., 2019).

Casual observers might believe that focusing on such a limited phenomenon is boring or, worse, trivial. What these people fail to realize is that we have gained a far more thorough understanding of the important concept of object permanence through Baillargeon's work over the years than if she had not persisted in examining this small piece of the puzzle.

Video: "Object Concept VOE Ramp Study Baillargeon"

You can access this video directly at:  https://youtu.be/hwgo2O5Vk_g

In a similar way, most modern researchers opt for depth over breadth, choosing to become an expert about a very limited set of phenomena. If there are enough researchers throughout psychology becoming experts in their own small sections of the field, the result is a rich, detailed research literature that ends up covering the whole field. In essence, you get breadth, not from a single researcher, but from all of the researchers.

When you join the ranks of scientific researchers in psychology, you have several decisions you must make that relate to dividing and conquering. The first choices are made when you apply to graduate school. As we like to tell our students, no one has a PhD in psychology. Students earn a PhD in ________ psychology, where the blank is filled in with one of the subfields. Along the way to earning the degree, you become an expert in that specific subfield and not necessarily in any others. Many students are surprised to discover that someone can earn a PhD in cognitive psychology and know very little about clinical psychology, for example. Instructors of General Psychology, by the way, are an important exception. New teachers learn a great deal about the subfields that they may have neglected as graduate students when they begin to teach General Psychology. Not everyone has the opportunity to teach General Psychology, however, and many psychologists remain relatively uninformed about topics outside of their area of expertise.

Other choices you would make as a psychology researcher are related to how the subfields themselves are organized. Developmental psychology may be the most complex subfield, with many different organizing schemes possible. Psychology subfields are usually divided according to topics. For example, a social psychologist studies topics that are related to the ways that people think about, influence, and relate to one another (Unit 5). A cognitive psychologist examines topics that are related to the use, understanding, and communication of knowledge (Unit 2). Developmental psychology is a little bit different. Although we tend to think of it as a subfield, it really is more of a perspective, in other words, a way of looking at any topic in psychology (Module 1). Whereas a cognitive psychologist may choose to study problem-solving, and a social psychologist may choose to study aggression, a developmental psychologist might choose either topic and examine how it changes or develops through different points in the lifespan. Any specific topic, then, may be examined developmentally. Indeed, if you examine a textbook in Developmental Psychology, you will find many topics that are also covered in other textbooks. This is not redundancy; it is a difference in perspective, the distinction between what a phenomenon is versus how it develops.

A good example of that difference in this book is the different treatment of identity and self-concept. There are psychologists who talk about identity and those who talk about self-concept. You might very well wonder whether there is a difference between the two. Both refer to people's sense of self, their knowledge and beliefs about what the essential aspects of their attitudes, characteristics, and behaviors are. The difference between the two concepts is one of focus and perspective. Psychologists who refer to "identity" are more likely to focus on how it emerges, following the groundbreaking work of Erik Erikson. Thus, they tend to examine identity from a developmental perspective. They are also likely to focus on the adolescent years because that is when Erikson proposed people formed their identities. Psychologists who refer to "self-concept" are more likely to be social psychologists; their focus is less on how it develops, and more on what it contains. Because this unit has been about developmental psychology, we have focused on the development of identity for now. Later, in Module 21, we will return to these ideas in the context of social psychology and talk more about the contents of people's self-concepts.

Once the subfield is chosen, a budding researcher narrows their focus by selecting specific topics to specialize in. For example, a cognitive psychologist may choose to do research on problem-solving, reasoning, or memory (among other topics). A developmental psychologist may choose to do research on attachment, identity, or object permanence. There is really no end to researchers’ opportunities to specialize and sub-specialize. For example, an identity researcher might decide to focus on the negative effects of early adoption of an ethnic/racial identity in Asian American adolescent girls.

Another important organizational distinction, one that applies only to developmental psychology, is between a chronological and a topical division of the subfield. Individual researchers in developmental psychology tend to select both topics and ages to study. For example, you may choose to study working memory in infancy, childhood, adolescence, or adulthood. Some researchers specialize even further, focusing on infants under 6 months old or young adults between 25 and 40, for example. Others broaden their view by examining changes as individuals move from one age group to another, or even throughout the lifespan.

Problems with the Divide and Conquer Strategy

So, the divide and conquer strategy affords researchers the opportunity to make incremental scientific progress and creates enclaves of psychologists who have common interests. You may have picked up on a danger associated with dividing and conquering, however. Recall that by becoming an expert in one area of psychology, researchers might end up relatively uninformed about other areas. In order for the divide and conquer strategy to be completely successful, we need to remember that it is only part of the process. After dividing and conquering individual areas, we need to close the loop, so to speak, by looking for ways to reintegrate the divided areas. Unfortunately, however, reintegrating is extremely difficult for several reasons.

Few Psychologists Work at Closing the Loop

Earlier, we presented divide and conquer as if researchers freely chose the strategy. Although it is true that how  you divide is a choice, whether  you divide is not. Quite simply, it is necessary to divide and conquer. The subject matter of psychology is too large and complex to be tackled all at once. The overwhelming quantity that makes dividing necessary is precisely what makes reintegrating an imposing prospect.

It is a full-time job for a researcher in psychology to study his or her piece of the puzzle. It is no one's full-time job to assemble the individual pieces. Reintegration requires a researcher to look beyond his or her own research area and keep current by reading research produced in other areas. Let us show you how unrealistic that task can be. The American Psychological Association maintains a database of descriptions of scholarly work in psychology, comprising journal articles, edited and authored books, and book chapters reporting research. The database is called PsycINFO; if you do not learn about it in your General Psychology course, you almost certainly will if you take additional classes in psychology. In June 2020, PsycINFO covered over 1,700 different journals. Each journal publishes from 3 to 12 issues per year, and each issue has several research articles. Using conservative estimates of four issues per year and six articles per issue, it works out to almost 41,000 journal articles published in psychology every year. On top of that, there is research presented at scientific conventions and in many scholarly books, and recently, researchers have been sharing their research on pre-print servers (brand new research that has not been peer-reviewed yet). It is, we are sure you will agree, an overwhelming quantity of research.

If a researcher tried to read even 1% of the journal articles published per year, he or she would need to finish more than one article per day, every day of the year. Psychology researchers who are employed as college professors, as the very large majority are, are paid to produce research and to teach, so the reading has to take place during their spare time. Suffice it to say that no one is able to keep up with more than a very small portion of current research. Because it is a full-time job to produce and keep up with research in your own corner of the field, there are relatively few opportunities to communicate with people outside of your area.
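To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. It is purely illustrative; the journal, issue, and article counts are the conservative estimates from the paragraphs above, not exact figures.

```python
# Rough estimate of the yearly volume of published psychology research,
# using the conservative assumptions described above (illustrative only).
journals = 1_700          # journals covered by PsycINFO (June 2020)
issues_per_year = 4       # conservative estimate
articles_per_issue = 6    # conservative estimate

articles_per_year = journals * issues_per_year * articles_per_issue
print(articles_per_year)                  # 40800 -- "almost 41,000" per year

# Even reading 1% of that output requires more than an article every day.
print(articles_per_year * 0.01 / 365)     # about 1.1 articles per day
```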

Other Viewpoints are Neglected

When researchers are not careful about reintegrating, a divide and conquer mentality can lead to serious oversights. In essence, divide and conquer sometimes becomes divide and conquer and forget about the rest. For example, throughout most of its history, the psychology of religion has really been closer to the psychology of Christianity (Hood et al., 2018). Viewpoints from other religious traditions have all too often been overlooked.

The subfields of psychology are not immune to the dangers of neglect and oversight. Over the years, many cognitive psychologists have simplified their research by excluding social factors. For example, a researcher who is interested in problem-solving from a purely cognitive point of view would likely focus on things like the speed with which the problem was solved and the way the answer was determined and not pay attention to whether or not a problem solver was anxious. Many researchers conduct studies and never stop to consider whether their research participants are actually motivated to complete the tasks they are given. By focusing too closely on the "topic" of the research study, we might leave out important details that could affect the interpretation of the results.

One key oversight that has taken place throughout the history of psychology is culture. Many researchers through the years have focused solely on psychological processes within a single culture; in some cases, researchers have devoted an entire career to studying phenomena by doing experiments with students at a single college. What if people from somewhere else are different? The cross-cultural differences that we have outlined throughout this book just scratch the surface of the differences among people across the world. You should realize that there are entire courses on cross-cultural psychology, and many psychologists who do not focus on this approach are not well informed about the findings. This can be an enormous oversight, by the way. Whenever there are large differences between Western and Non-Western cultures, for example, it is worth keeping in mind that there are many more Non-Westerners than Westerners in the world. China and India alone account for over one-third of the world's population (2.8 billion out of 7.8 billion in 2020). Anything that is true of people in the US alone can hardly be called a truth about human psychology. Critics have charged that psychology has, for far too long, been WEIRD, based on research participants from Western, Educated, Industrialized, Rich, and Democratic societies (Henrich et al., 2010).

Our Divisions Might Be Arbitrary

Perhaps when we separate phenomena and study them individually, the division changes the way the phenomena work. Let us illustrate by analogy. Imagine that you are a parent who has recently begun giving your two school-aged children an allowance contingent on their getting ready for school on time every day. A divide and conquer strategy would dictate that you consider each child separately, as the problem might not be equally bad for each child. You might be tempted to conclude that one child will need a larger reward (i.e., a bigger allowance) than the other will in order to motivate him to get ready. How do you think that individualized strategy will work out in the real world, though? You might discover that the other child judges the unequal allowances to be unfair, and she would likely end up less motivated to comply.

In a similar way, if we follow the typical division of subfields in psychology and separate phenomena such as cognitive development from the social contexts in which they occur, we may discover that when we return to the real world, we have missed something important.

This book, like every other psychology book we have ever seen, separates cognitive and social development. Each individual topic must fit neatly into one or the other. Attachment is a social development, and language is a cognitive development, for example. But the real-world phenomena that these theories attempt to explain do not observe the boundaries between cognition and social development that we have drawn. The phenomena are integrated wholes in which cognitive and social processes interact.

The artificial separation that may come from dividing and conquering can lead to a false dichotomy, an oversimplification based on seeing a phenomenon in either-or terms (Module 1). Researchers may harbor beliefs about the supremacy of cognition over social processes, or vice versa. Or, when researchers who focus exclusively on biological correlates of behavior have little contact with those who examine environmental impacts on development, the resulting research articles seem to be proposing that either nature or nurture is the explanation, not that both might work together (Harris, 1998; Maccoby, 2002).

Closing the Loop

As you have seen, dividing and conquering is both necessary and beneficial for progress in psychology, but it can lead to serious problems. Psychologists need to be vigilant about the short-sightedness and errors that can follow from a rigid adherence to the strategy. We must close the loop by encouraging approaches that cross boundaries or by choosing unconventional ways of dividing and conquering.

Fortunately, many psychologists are vigilant. Even when they cannot become experts in multiple areas of the field, they can appreciate the contributions of research that comes from different perspectives. In addition, there are striking success stories in which discoveries have crossed the traditional boundaries, as well as some general principles that can be, and in many cases are, followed to minimize the dangers associated with unchecked dividing and conquering.

When researchers reintegrate phenomena, putting back together concepts that had previously been divided, such as cognitive and social development, the results are very elegant. For example, the realization that the cognitive abilities of newborns and infants allow important social developments, such as attachment, is striking (Bowlby, 1982; Flavell & Miller, 1998). The fact that a newborn baby, despite poor vision and muscle control, is already equipped to look at the face of her mother and can recognize her mother’s voice reveals how finely tuned and closely related cognitive and social development really are.

The work of the Russian psychologist Lev Vygotsky (1978) is an excellent example of a psychological approach that has not observed the typical separation between the cognitive and the social. Although Vygotsky’s research was originally done in the 1930s, it was unpublished and unknown in the US for many years. After its discovery, the work has steadily gained in influence. Vygotsky’s contribution is called the sociocultural theory of cognitive development and it shows how cognitive developments are embedded in the social world in which we live. The young child is seen as an apprentice of sorts. His development takes place not by itself, but under the guidance of older people. In other words, the social activity of a parent or older sibling teaching a skill to a young child is an integral part of the child’s cognitive development.

The key concept that revealed the level of development that a child had reached was something Vygotsky called the zone of proximal development. This is the level at which a child can perform while being helped by someone else. For example, a 7-year-old might not be able to multiply yet. However, if a teacher guides him through a problem while explaining it along the way, he may come up with the correct answer. The skill is within his zone of proximal development. It indicates that the child is closer to being able to multiply than another child who cannot understand it even when someone guides him. For Vygotsky, that process of being guided by another person is the key that leads to cognitive developments.

Video: "ZoneOfProximalDev.mov"

You can also access the video directly at:  https://youtu.be/Zu-rr2PRNkE

In general, we can continue to encourage and value other boundary-crossing approaches, such as cross-cultural research and interdisciplinary efforts. Even if only a small number of researchers and thinkers in psychology take active roles in these endeavors, they can act as catalysts. Psychology in general can progress in its typical divided and conquered fashion while we develop an integrated understanding of psychology that comes from crossing boundaries.

Unit 5: Getting Along in the Social World

In earlier units, you learned a great deal about yourself and other people—for example, how you think, remember, and reason; how you interpret the outside world; how you change throughout the lifespan. Although much of this information can help you in your interactions with other people, not much of it is directly about your interactions with other people. This unit, however, is organized around personality psychology and social psychology, two subfields that share the same goal: to help us understand how we think about, influence, and relate to other people. To a social psychologist and a personality psychologist, the fun starts when you put two people together.

Although the two subfields share the same goal, they approach it from two different angles. Personality psychologists tend to focus on factors that are internal to the person and stable, whereas social psychologists tend to focus on situational influences on behavior. For example, when trying to explain why a student might or might not cheat on an exam, a social psychologist might examine the role of peer pressure in dishonest behavior, while a personality psychologist might focus on the role of personality traits such as honesty and conscientiousness. Social psychologists and personality psychologists do not deny the influences from the other side; they just pay less attention to them. Thus, you will find it useful to think of the two subfields as complementary perspectives, both of which can make important contributions to our understanding of how we think about, influence, and understand one another.

Many observers note that a major lesson of social psychology is the power of the situation to cause behavior. They are not saying that personality factors play no role in human behavior and everything results from the situation. Rather, they are pointing out that no one seems surprised that personality factors account for our behavior. In fact, many people assume that personality is the only influence on behavior, to the point of ignoring the potential role of situational factors. As you will see in Module 21, this tendency itself is an important discovery of social psychology. There are six modules in this unit:

  • Module 19, Personality: Who Are You?, describes people’s dispositions in thinking about, influencing, and relating to other people. It is a discussion of the approaches that personality psychologists have used to characterize different types of human beings over the years. You will learn about the trait approach, Sigmund Freud’s psychoanalytic approach and the psychodynamic approach that grew from it, and the cognitive–social learning approach.
  • Module 20, Emotions and Motivation: What Moves You?, covers two topics that are key components in thinking about, influencing, and relating with other people. Our emotional life and motivated behaviors are an integral part of helping us get along in the social world. You will learn about some general concepts in motivation and about the important relationships among emotion, bodily response, cognition, culture, and emotional expression.
  • Module 21, Social Cognition and Influence: How Do People Interact?, covers topics from the first two-thirds of the social psychology definition: thinking about and influencing other people. You will easily be able to recognize and use in everyday life the information in this module about attribution, stereotypes and prejudice, attitudes, persuasion, obedience, and group effects.
  • Module 22, Intimate Relationships, is a partial list of some important ways that we relate with other people with whom we are close. Again, you will be able to use much of the information in this module about the closest relationships that we have during our adult years; there are individual sections on love and sexual behavior, sexual orientation, and marriage and divorce.
  • Module 23, People in Organizations, is an introduction to industrial/organizational psychology, a subfield of psychology that applies many of the concepts of social psychology to people’s behavior at work and in other organizations. You will learn about human resources–related topics, such as job selection and training, as well as about motivation, leadership and influence, and workplace diversity.
  • Module 24, Social Psychology and Personality Psychology: Science and Society’s Problems, describes how the two subfields differ from some others in psychology—namely, in their stronger focus on finding real-world applications for many psychological theories and research findings.

Module 19: Personality: Who Are You?

A key element of the ways that you think about, influence, and relate to other people is the dispositions that you bring to any situation—in other words, your personality. For example, suppose you are a hostile person in general. You are likely to interpret others’ actions as challenges or threats, you may prefer to use bullying tactics to influence other people, and your relations with other people are likely to be confrontational.

In this module, we will talk about these dispositional factors. We will introduce you to the major approaches that psychologists have used to try to understand personality. These approaches correspond roughly to the ways that you might attempt to learn about the personality of a new friend. First, you would notice that your new friend has certain tendencies; she is pleasant and cooperative, hard-working, and comfortable with routines, for example. This first task, noticing and describing stable tendencies, corresponds to the trait approach. After you have known your friend for a while, you may begin to wonder why she has the tendencies, or traits, that she does. You may wonder, for example, if her comfort with routines results from an overly structured childhood or if she has found that routines help her to complete her schoolwork on time. This search for explanations corresponds to the remaining approaches to understanding personality that psychologists have developed: genetic (which we include as part of the trait approach), psychoanalytic, and cognitive–social learning.

This module has three sections. Section 19.1 introduces you to the important traits that psychologists have identified and the major method that has been developed to measure them. Together, they constitute the trait approach. Although genetic explanations for personality are not strictly part of the trait approach, we have included them in this section because the two camps have grown closer over the past couple of decades. Section 19.2 describes the cognitive–social learning approach to explaining personality, which is really three separate approaches: cognitive, social learning, and the combination of the two. Section 19.3 turns to one of the earliest concerted efforts to explain personality, the once-influential psychoanalytic ideas of Sigmund Freud.

19.1 Trait approach: Traits, genes, and continuity

19.2 Cognitive–social learning approach: Personality and the environment

19.3 Psychoanalytic approach: Unconscious conflict in personality

By reading and studying Module 19, you should be able to remember and describe:

  • Factor analysis (19.1)
  • The Big Five personality factors (19.1)
  • Self-report personality inventories (19.1)
  • Temperament (19.1)
  • Learning approach and Cognitive-Social Learning approach to personality: reciprocal determinism, learning through observation (19.2)
  • Conscious, preconscious, and unconscious (19.3)
  • Defense mechanisms: repression, denial, projection, reaction formation, sublimation, displacement (19.3)
  • Criticisms of psychoanalytic approach (19.3)

By reading and thinking about how the concepts in Module 19 apply to real life, you should be able to:

  • Identify where you likely fall on the Big Five personality factors (19.1)
  • Recognize examples of learning through reinforcement and punishment, and through observation in personality (19.2)
  • Recognize examples of defense mechanisms (19.3)
  • Identify examples of consistency and inconsistency in personality (19.1 and 19.2)

By reading and thinking about Module 19, participating in classroom activities, and completing out-of-class assignments, you should be able to:

  • Describe examples of reciprocal determinism (19.2)
  • Describe interpersonal difficulties that may result from mismatches in personality (19.1 and 19.2)

19.1. Trait approach: Traits, genes, and continuity  

  • Write your own personal ad. Focus on your personality instead of on appearance and interests. No lying unless one of the traits you would use to describe yourself is “dishonest”!
  • Have you ever taken a personality test? Did you agree or disagree with the results?

In many cases, the first thing we want to know when we meet a new person is along the lines of “What are they like?” “Are they friendly?” “Do they have a good sense of humor?” A simple description is all we want. When we ask questions like this, we are searching for a person’s personality traits , tendencies that predispose people to act consistently over time and across situations. Instead of describing individual people, many psychologists were interested in coming up with a list of the full range of traits that could be used to describe all people.

In our own lives, after we have formed an impression about the stable traits that describe a particular person, we may turn to a search for explanations. “They sure are a suspicious person; I wonder if they have cheated many times in the past or if they have always been that way, even since childhood.” Traditionally, psychologists taking the trait approach were content to work toward describing people only. More recently, we are beginning to see an explanation component to the trait approach along with the descriptive component. Trait psychologists are most interested in the genetic contribution to personality (answers to questions like “Have they always been this way?”). They do not deny the effects of the environment (as in questions like “What kinds of experiences made a person a certain way?”); they just do not pay as close attention to them as to the possible explanations of persistent, innate tendencies.

Identifying the Range of Human Traits

Take a look at the personal ad that you wrote for yourself. How many different traits did you use to describe yourself? How many do you think you would need in order to do a reasonable job of describing someone’s personality? You could take an entire book to do it; a major part of biographies and memoirs is the description of the personalities of the subjects. In 1961, Gordon Allport, one of the original trait theorists, counted over 4,000 adjectives in English that describe aspects of people’s personalities.

If you think about it, though, many of those words are very similar and probably do not really denote different traits. For example, consider shy, timid, bashful, quiet, reserved, introverted. Although they do not all mean the exact same thing, they are extremely similar. These words may really just indicate versions of the same underlying personality trait.

One way you could determine which words refer to the same trait is to examine the similarities in ratings on a personality test that asks people to rate themselves on the different items. Suppose all of the test takers gave themselves the same ratings for “shy” and “bashful.” That would suggest strongly that these two words are actually measuring the same trait. A statistical technique called factor analysis does a mathematically complicated version of that very examination. In factor analysis, the correlations between ratings for all of the items are computed. Subsets of items that have high correlations with one another are assumed to be measuring the same trait. When personality data are subjected to factor analyses like this, you can reduce hundreds of separate personality terms into a smaller number of dimensions, or factors, as they are called. For example, shy, bashful, reserved, and several other terms might be reduced to a single factor that we could call introversion.
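Here is a toy sketch of that logic in Python. The self-ratings below are invented for illustration, and the code only shows the intuition (items whose ratings correlate highly probably reflect the same underlying trait); it is not the actual procedure or data used by the personality researchers cited in this module.

```python
# Toy illustration of the logic behind factor analysis: items whose
# self-ratings correlate highly are assumed to measure the same trait.
# The ratings are invented for illustration.
import pandas as pd
from sklearn.decomposition import FactorAnalysis

ratings = pd.DataFrame({
    # 1-7 self-ratings from six hypothetical test takers
    "shy":      [6, 5, 2, 7, 3, 6],
    "bashful":  [6, 6, 1, 7, 2, 5],
    "reserved": [5, 6, 2, 6, 3, 6],
    "cheerful": [2, 3, 6, 1, 5, 2],
})

# The three "introversion" items hang together; "cheerful" runs the other way.
print(ratings.corr().round(2))

# A one-factor solution: the loadings show which items belong to the factor.
fa = FactorAnalysis(n_components=1).fit(ratings)
print(dict(zip(ratings.columns, fa.components_[0].round(2))))
```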

Over the past two decades, there has been a growing consensus that the “correct” number of personality factors is five. In other words, there seem to be five particular personality dimensions that can do a good job of describing anyone’s personality:

  • Openness to experience
  • Conscientiousness
  • Extraversion
  • Agreeableness
  • Neuroticism

(Neuroticism is sometimes called Emotional Stability, but doing so ruins the OCEAN acronym that you can use to remember these five factors.)

These five factors have turned up so many times as the key ones that they have been named the Big Five personality factors (McCrae & Costa, 1985; 1987; 1997; 2003). Keep in mind that these are not the only personality factors, just the most useful ones. They are the five factors or dimensions that will tell you the most about someone’s personality. Certainly, you can learn more by discovering additional traits, but each new one gives you much less information about the person than the Big Five will.

One important extension that has gotten some attention recently is known as the HEXACO model (Lee & Ashton, 2007). The EXACO part corresponds to the Big Five factors (E for Emotionality and X for eXtraversion), while the H is for Honesty-Humility, a trait that seems both important and somewhat separated from the original five. Honesty-Humility is reflected by a person’s willingness to break rules, willingness to manipulate others for personal gain, interest in wealth, and feelings of self-importance.

Determining Individuals’ Traits: Self-Report Personality Inventories

One of the key contributions of trait psychologists has been the development of methods to describe individuals’ personalities. Self-report personality inventories, or more commonly, personality tests, have been around for years, giving researchers ways to classify people and helping clinical, educational, and organizational psychologists figure out the best ways to help them achieve their potential.

Now you can take hundreds of “psychologist certified” personality tests for free on the internet. We should caution you about taking these tests too seriously. Let us remind you about the 7 “tips” for evaluating psychological information from Module 1; most of them can be applied to these personality tests.

Many websites that offer free personality tests are peppered with testimonials, and they may claim to be based on “brain science,” which sounds like an oversimplification and distortion of research. And they never do describe the actual research. Although the test itself might be free, you are often invited to purchase a more detailed report of the results, or a book, of course. Although the fact that someone is trying to sell something to you is not an automatic reason for you to reject the results, it is certainly a fact that should make you more cautious.

We are not prepared to condemn all of the free personality tests out there, but we certainly do suggest strongly that you consider them “for entertainment purposes only.” You will find many other personality tests that are accompanied by full disclosure about how they were constructed. In addition, good tests have been “tested,” so to speak, in order to demonstrate that they really do measure what they claim to measure—in other words, they have been tested for validity. Well-constructed tests have been assessed for reliability as well, and their results have been standardized appropriately; in other words, they are consistent and scores are compared to a previous group of test-takers (see Module 8 for details on this). It should not be too difficult for you to discover these facts about a particular test if you are looking for one you can trust.
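As a rough illustration of what two of those checks involve, here is a tiny Python sketch with invented scores. It shows test-retest reliability as the correlation between two administrations of the same test, and standardization as locating a new score relative to a norm group; real test construction is far more involved (see Module 8).

```python
# Invented scores illustrating two checks on a test's quality:
# reliability (consistency across administrations) and standardization
# (interpreting a score relative to a norm group).
import numpy as np

first_testing  = np.array([52, 61, 45, 70, 58, 49, 66, 55])
second_testing = np.array([50, 63, 47, 68, 60, 48, 64, 57])

# Test-retest reliability: do people get roughly the same scores both times?
reliability = np.corrcoef(first_testing, second_testing)[0, 1]
print(round(reliability, 2))      # close to 1.0 -> consistent scores

# Standardization: where does a new score of 64 fall in the norm group?
z_score = (64 - first_testing.mean()) / first_testing.std()
print(round(z_score, 2))          # roughly 0.9 standard deviations above the mean
```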

Let us consider three well-known personality tests.

  • Minnesota Multiphasic Personality Inventory
  • NEO Personality Inventory (and Five-Factor Inventory)
  • Myers-Briggs Type Indicator

All three are self-report personality inventories, meaning that the results are based on what the test-takers themselves think about their traits, not on what an observer thinks (although there is an observer option for the NEO).

The Minnesota Multiphasic Personality Inventory (MMPI) was originally developed in the 1930s and is being updated to a third edition, so it is now known as the MMPI-3 (release date: fall 2020). It is expected to have around 350 items, which are grouped into several separate scales (for example, Antisocial Behavior, Self-Doubt, Dominance, Aggressiveness, Compulsivity, Introversion, and many others), each of which functions like a mini-test score for a specific trait. The MMPI was originally developed to measure psychological disorders, and the names of many of the scales reflect that history. The individual scales can also be sorted into different groups to construct larger scales, such as Interpersonal, Internalizing, and Externalizing. If the names of some of the scales sound negative, it is because of the MMPI’s heavy use in clinical settings to help diagnose psychological disorders. The test is also used in non-clinical settings, however. For example, the MMPI is commonly used to help make hiring decisions about candidates for high-stress, high-risk jobs, such as police officers and firefighters. Some marriage counselors and career counselors also use the MMPI when they are advising their clients.

The MMPI-3’s greatest strength, perhaps, is its very careful construction. Each individual test item for a particular scale—for example, Depression—was selected for its ability to distinguish those suffering from depression from those who are not. The test developers administered the test to 2,200 people (including 550 Spanish speakers) from throughout the US for their standardization sample. The MMPI-2 also contains scales designed to assess the likelihood that the test taker is being truthful, which helps control for the temptation to make yourself look good when taking self-report tests.

NEO Personality Inventory

The NEO Personality Inventory (Costa & McCrae, 1992; 1995) is used to measure the Big Five factors. There are different lengths of the test, from 60 to 240 items.  The NEO Personality Inventory can be administered as a self-report test, or it can be used to allow an observer, such as a spouse, to make judgments about an individual.

When you shorten a personality test, you usually reduce its reliability; that reduced reliability is what you give up for the gain in ease of administration when you go with the short version. In addition, the full-version test provides subscales that can give you information about different facets of the Big Five factors. For example, the Neuroticism scale can be subdivided into Anxiety, Angry Hostility, Depression, Self-Consciousness, Impulsiveness, and Vulnerability. The Agreeableness scale is subdivided into Trust, Modesty, Compliance, Altruism, Straightforwardness, and Tender-Mindedness.

The publisher of the Myers-Briggs Type Indicator (MBTI) claims that it is the world’s most widely used personality inventory. That is true. This means that if you are familiar with a personality test, it is most likely the MBTI or a test based on its principles. The test sorts people into 16 different personality types based on their placement on four dimensions (Introversion-Extraversion, Sensing-Intuition, Thinking-Feeling, Judging-Perceiving).

Unfortunately, what might NOT be true is that the MBTI is a reliable and valid test of personality. Even the makers of the test themselves seem to be admitting as much when they assert that the MBTI does not measure traits, it measures “preferences” (Kerwin, no date). We are not even sure exactly what that means, but apparently, according to Kerwin, that makes it acceptable to categorize people as introvert versus extravert, for example, rather than to accept that introversion-extraversion is a dimension. In other words, you are not one or the other (which sounds a bit like a false dichotomy, doesn’t it?). The reality is that people fit somewhere along a dimension, or continuous distribution, that is labeled introvert on one end and extravert on the other.

Kerwin continues to criticize the critics for using outdated research to support their viewpoints. Fair point, but then Kerwin goes on to use NO peer-reviewed research to support his viewpoint. Here is an excellent bottom-line summary evaluation of the MBTI that Kerwin does not mention: The test does not agree with known facts and data in psychology, it is not internally consistent, it allows validity reports to be based on self-verification (in other words, it is valid because someone who took the test likes the results and agrees that it is valid), and many of its key claims cannot be tested (Stein & Swan, 2019). Yes, 2019.

One of the seven tips you might not have learned about yet is Beware of Persuasion Tricks. One key persuasion trick is to use social proof, essentially trying to persuade you by pointing out that other people agree with the persuader. The MBTI is taken by two million people per year and is used by people in 89 of the Fortune 100 companies (also from Stein & Swan, 2019). All those people can’t be wrong, can they? Yes, they can.

Debating the Trait Approach

Go back to the personal ad that you wrote down for yourself in the Activate section at the beginning of the Module. Did you find it easy to come up with a set of personality traits? Many people do not. Even if you did, reconsider some of them. Maybe you wrote down that you are serious; in reality, it depends largely on the situation. When you are relaxing at home or hanging out with friends, perhaps you are quite playful. At work or school, you are serious. When you exercise, you are somewhere in the middle. At church, you are very serious. Ouch! What is the good of using the trait “serious” to describe yourself if your seriousness depends so much on the situation?

This is a problem that critics of the trait approach have pointed out over the years. When you use personality traits to describe someone, you are assuming that traits are stable, that they apply to people consistently over time and in most, if not all, situations. The critics have argued that traits are not so consistent. Walter Mischel (1968; 1990), for instance, argued for many years that advocates of the trait approach have gone too far with their claims. Behavior, Mischel noted, is far too variable from situation to situation for psychologists to use a single personality test score to predict anything about an individual person with much confidence. Despite this supposed limitation, however, many psychologists do just that; for example, they make psychological diagnoses, hiring decisions, and career recommendations on the basis of personality test scores at a single time and place. As early as 1928, researchers discovered that behavior varies considerably from situation to situation (Hartshorne & May, 1928), so there is clearly merit to Mischel’s arguments.

Trait theorists have countered that no one claims that traits are completely consistent (no one is always serious) but that it is impossible to imagine that there is no consistency (Epstein, 1980; 1983). They point out that if there were no consistency at all, a person’s current behavior would have no relationship to her or his prior behavior, making social interaction very difficult. Imagine someone being helpful, kind, and considerate on Tuesday and suspicious, mean, and vindictive on Thursday. Clearly, say the trait theorists, there is more consistency to an individual’s personality traits than that.

Although it is dangerous, as the critics say, to predict a person’s specific behavior from a trait, if you look at the “big picture,” you will see that traits can be quite useful. Although you might be an introvert according to a trait measurement, at any given moment you might appear outgoing and sociable; perhaps you are excited about some good news or are with someone who encourages you to be open. If we look at you over a two-week period, however, we would probably discover that you exhibit more reserved behaviors than someone whose test indicated that he or she was an extravert (Epstein, 1979).

Investigating Trait Consistency: Temperament and Genes

In the past couple of decades, the trait approach has become increasingly associated with a biological, or genetic, explanation for personality. This is not an automatic or necessary association. Personality traits could arise from many different combinations of genetic and environmental causes or, for that matter, entirely from environmental causes. It seems very likely that, just like everything else we have examined in psychology, personality does result from the interaction of genes and environment. Many psychologists have begun to pay a great deal of attention to the genetic side of this equation, however. Two key ideas that have helped out with the development of genetic explanations of personality are temperament and the Big Five traits.

We first saw the concept temperament in Module 17. You may recall, then, that temperament refers to biological differences in a person's emotional and motor reactions to new stimuli, and tendencies regarding self-regulation (Rothbart & Bates, 1998). In Module 17, we noted that early excitement about the role of temperament in childhood attachment faded as the accumulation of research suggested a relatively minor role of genetics and temperament. The role of biologically based temperament in personality is still an unsettled question. Many psychologists believe that biologically based dispositions provide the basis for the development of consistent personality traits; others believe that the role of temperament in the development of personality is quite limited.

Behavior geneticists have determined that the heritabilities for many personality traits are about 50%, meaning that about half of the variation in a group can be explained by differences in the genes of the group members. In other words, of the differences in a trait such as extraversion among the students in your class, about half of the variation results from differences in genes across students. Some research has even found heritability estimates for the Big Five traits in the range of 66% to 79% (Riemann et al., 1997). Robert McCrae, Paul Costa, and their colleagues have gone so far as to assert that the Big Five factors—openness to experience, conscientiousness, extraversion, agreeableness, neuroticism—are the key dimensions of childhood temperament (and adult personality, of course). They note that research has found evidence for the existence of the Big Five factors throughout the world (Martin et al., 1997; McCrae et al., 1998; 2000; McCrae & Costa, 1997).
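To make the arithmetic behind such estimates concrete, here is a minimal sketch in Python, using Falconer's classic twin-study formula with hypothetical correlations rather than data from any study cited here. The formula estimates heritability as twice the difference between the identical-twin (MZ) and fraternal-twin (DZ) correlations for a trait.

# A minimal sketch with hypothetical numbers, not real twin data.
# Falconer's formula: heritability h^2 = 2 * (r_MZ - r_DZ)

r_mz = 0.50   # hypothetical correlation between identical twins on an extraversion score
r_dz = 0.25   # hypothetical correlation between fraternal twins on the same score

heritability = 2 * (r_mz - r_dz)   # proportion of variance attributed to genetic differences
non_genetic = 1 - heritability     # remainder attributed to environment and measurement error

print(f"Estimated heritability: {heritability:.0%}")   # prints 50%
print(f"Non-genetic variance:   {non_genetic:.0%}")    # prints 50%

With these made-up correlations, the formula attributes about half of the trait variation to genetic differences, which is what a statement such as "the heritability of extraversion is about 50%" means.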

Although using the Big Five factors as the key dimensions of temperament creates an obvious link between biology and adult personality, not all researchers agree with this choice. There have been many alternative proposals through the years regarding the key aspects of childhood temperament. Many recognize that infants differ reliably in their dispositions toward positive emotions, anger, fear, excitability, and ability to focus attention (Caspi, 2000; Belsky et al., 1996; Rothbart & Bates, 1998). These characteristics, then, are also good candidates for stable dimensions of infant temperament.

There is some research support for the idea that childhood temperament is related to adult personality and for the idea that both childhood temperament and adult personality are fairly stable. For example, several of the infant temperament measures (along with parent personality) appear important in the development of the parent-child relationship (Kochanska et al., 2004). In addition, longitudinal research has demonstrated consistency in temperament throughout early childhood, consistency from preschool temperament to personality traits in middle childhood, and consistency in adult personality over periods ranging from 6 to 30 years (Costa & McCrae, 1992; Emde et al., 1992; Finn, 1986; Hagekull & Bohlin, 1998; Saudino & Cherney, 2001; Siegler & Costa, 1999). It is true that there have been few demonstrations of direct links between infant temperament and adult personality, but there is substantial indirect support for the existence of those links (McCrae et al., 2000; Rothbart, Ahadi, & Evans, 2000).

But fairly stable does not mean that personality cannot change. In fact, it can. One key precursor to personality change is major life events, such as marriage, divorce, becoming a parent, starting a new career, and retirement (Bleidorn, Hopwood, & Lucas, 2018). Other non-genetic factors can contribute to the development of, and changes to, personality as well. To explore some of those, we will turn to the cognitive-social-learning approach in the next section.

  • Do you agree more with trait theorists or with critics like Walter Mischel?
  • Do you have any major personality traits that have changed considerably over the years?

19.2. Cognitive–Social Learning Approach: Personality and the Environment

  • If you go to a party where you know no one, are you more likely to hang back and wait for others to approach you, or to move around and try to meet people? How might this preference influence your expectations about and experience at the party?

During his first couple of years teaching full-time, Edward, a psychology professor, was disappointed that he did not meet many people. His college is a large commuter school; he rarely saw his students outside of the classroom, and informal encounters with colleagues were rare. Then, one day, he made two simple changes to his daily routine. First, instead of parking in the lot nearest to his office, he began parking in a faraway lot, so he would have to walk across campus to reach his office. Second, he made a special effort to notice the people he was passing in the halls (he had a tendency to be unobservant). The effect was immediate. He began to recognize and greet familiar faces every day. Emboldened by his success, he made more changes to his typical behavior. He began to purposely arrive at school between classes so that he could walk through the halls when they were filled with people. He started eating an occasional lunch in the main campus cafeteria. Over time, he developed to the point where he almost never makes it to his office without exchanging smiles and hellos with colleagues and with current and former students. He frequently arrives at his office later than intended because he has stopped to chat with someone for a few minutes. A "before" and "after" examination would look as if he had undergone a magical, almost Ebenezer Scrooge-like, transformation. But no ghosts were required to turn Edward from a quiet, surly, unfriendly passerby into a sociable and outgoing colleague and professor.

To be sure, the transformation was not complete. Edward still struggles with many leftovers of days past, such as an awkward feeling when someone fails to return his greeting (and an uncertainty about whether he should try to get people’s attention to greet them if they do not see him). These feelings are internal, however, and as such are invisible to others. As a consequence, despite having his own very clear answer to the question of whether he is introverted or extraverted, others are not so sure. His students are typically split about 60-40 when he asks them (the slight majority peg him as introverted, as he pegs himself).

It is these types of transformations that the cognitive–social learning view of personality intends to explain. Originally, psychologists developed a social learning perspective as a straightforward application of learning principles, which you learned about in Module 6. In essence, social learning psychologists proposed that personality emerged as a person learned to respond to the environment. For example, when a particular behavior, such as bullying on the playground, is reinforced (for example, other children are intimidated into handing over their lunch money), that behavior becomes more likely in the future, and a personality trait—in this case, aggressiveness—is born.

Often a behavior may be reinforced in one kind of situation but not another. For example, maybe the playground bully gets rewarded at school for his behavior but not at home. According to the principle of stimulus discrimination , the child would learn to be aggressive on the playground but not at home. In this way, the social learning perspective can account for some of the inconsistencies in people’s behavior across situations.

Now, these are terrific ideas; we are sure you can think of dozens or hundreds of examples of personality-like behavior that is rewarded or punished. Still, the approach is limited. There are many cases in which a behavior becomes common without ever being rewarded, and how can you explain two people who experience the same environment yet come away from it with different behaviors? To help out the straightforward learning explanations, psychologists expanded them by turning to cognition. Basically, it is the way that people think about the environment (and the rewards and punishments therein) that determines their typical behaviors in different situations—and hence their personalities.

Reciprocal Determinism

Because people think about the environment, they behave in ways that will change it, which will then later affect their behavior. The result is a slightly complex, but very interesting, interaction among cognition, behavior, and environment. Albert Bandura, one of the pioneers of this view, coined the term reciprocal determinism  to describe the way that personal factors (for example, traits, predispositions, and styles of thinking or cognition) and behavior interact with the environment. To illustrate, we will use “expectation” as a kind of personal factor. Because of experiences, traits, predispositions, or styles of thinking, you have a great many expectations about events. It was professor Edward’s expectation that he would run into people in the halls that led him to change his parking space. That change had the desired effect, and the rest, as they say, is history.

Let us look at another example so you can see how reciprocal determinism works. As you read the example, look for the following kinds of interactive effects:

  • Expectations influence our behaviors
  • Behaviors influence our environment
  • Environments influence our behaviors
  • Behaviors influence expectations
  • Environments influence expectations

It is 7:55 on Monday morning, the first day of the semester, and you are walking to your first class, History 333, The Detailed History of Boring Speeches. Because of your obvious expectation that you will be bored to tears, you decide to sit in the back row, behind a very tall student just in case you need to doze off (expectation influences behavior). From your obstructed-view seat, you find that you can neither hear what the professor is saying clearly nor see the board where she is writing (behavior influences environment). Then, the environment influences another behavior, as you fall asleep because there is nothing to see, hear, or do (environment influences behavior). Because you are falling asleep in this class, you become even more convinced that 8:00 AM on Mondays and Wednesdays is nap time (behavior influences expectations). On your way out of class one day, you overhear two students who had been sitting in the front row, bright-eyed and wide-awake. “I just love this class,” says the first. “Yes, you were right,” her companion replies. “Professor Jones has a way of making the course material fascinating.” The front-row students’ different expectations led to different behaviors and a different environment, which in turn influenced their expectations.

This is an important aspect of reciprocal determinism. Personal factors can lead to unique expectations, which will lead different people to experience a completely different environment in the same situation.

Learning Through Observation

Another key contribution of the combined cognitive-social learning perspective is the realization that learning can occur even though a behavior has not been specifically reinforced. In particular, you can learn to engage (or not engage) in a behavior by observing someone else engage in the behavior (remember, you first learned about this concept in Module 6). If you can also observe the consequences of that behavior—that is, does the behavior lead to reinforcement or punishment?—you are likely to learn very effectively whether and how to engage in it.

As a parent, you should keep in mind that modeling appropriate behavior can be more effective than using other forms of influence. For example, a naturalistic observation conducted throughout the U.S. found that only 41% of children under 14 wore helmets while biking, skating, skateboarding, or riding a scooter. Fifty percent of children wore helmets when they were with adults who were not wearing helmets, presumably because the adults tried to influence the children to wear a helmet. If the adults themselves were wearing helmets, 67% of the accompanying children did too (Cody et al., 2004). Very likely, much of the increase in children's helmet wearing results from the modeling of the appropriate behavior by adults. Modeling is the scientific equivalent of the well-known phrase "actions speak louder than words."

As we noted above, the cognitive-social learning perspective suggests that, because behavior is (in part) determined by the environment and the environment is quite varied, personality would not be completely stable. Psychologists from the social learning perspective are indeed more likely to point out that behavior can be somewhat inconsistent across situations. That is not the same thing as saying behavior is unpredictably random, however. Remember, according to the cognitive–social learning perspective, individuals choose some aspects of their environment. Most people are likely to choose environments that led to rewarding outcomes in the past, so the similarity in chosen environments will lead to some consistency in behavior. In addition, learning principles such as generalization would lead to increased consistency across similar situations.

Finally, we should note that, just as a social learning perspective may exist without reference to cognition, there is a cognitive perspective that makes little reference to social learning. It is an important current area of research among personality psychologists, as you will discover in discussions of the self-concept in Module 25.

  • Where do you typically sit in each of your classes? Try to describe how your seating choices influence and are influenced by your expectations and school experiences.
  • Can you recognize other examples of the interactions among expectations, environment, and behavior (that is, reciprocal determinism) in your life?

19.3. Psychoanalytic Approach: Unconscious Conflict in Personality

  • Can you define the following concepts? Oral fixation, anal-retentive, repressed, denial, id. If they were familiar, where did you hear about these concepts? Did you know that they all come from Sigmund Freud?
  • What do you know about Sigmund Freud’s life and work?

Although in everyday life we are likely to be interested initially in descriptions of personality and later in explanations for it, that is not the way the psychology of personality progressed. The field essentially went in reverse. Sigmund Freud was the first to address human personality in a big way; he was extremely interested in proposing explanations for human personality. He suggested that the causes are far from obvious. On the contrary, human personality, according to Freud, resulted from long-forgotten conflicts that we failed to resolve when we were younger. Although we could not recall the original conflicts, they left a permanent mark on our adult personality. The trait perspective described in Module 19.1 emerged after Freud, somewhat in reaction to his thinking.

For half a century, Freud’s ideas were the most influential in personality psychology. As such, his theories provided the agenda-setting function that we outlined in Module 18. Indeed, even today, a great deal of research in personality can be seen as a reaction to Freud’s psychoanalytic approach. You should know from the outset, though, that Freud’s influence within our field has fallen dramatically over the years; the majority of psychologists reject many of his most central ideas. It is still useful to outline some of those ideas for three reasons: First, it is reasonable to cover him simply because he is one of the most famous psychologists of all time. Second, to understand more modern theories in personality, it is helpful to understand the context in which they arose. Third, and perhaps most importantly, people outside of psychology still believe many of these ideas. You may be able to recognize that many commonly held beliefs about psychology, particularly those that pertain to the “mysterious powers of the unconscious mind,” are probably myths based on rejected aspects of Freud’s ideas. It will be your job after learning this information to correct these mistaken beliefs. We will be checking on you periodically for the rest of your life to make sure that you do.

In a nutshell, Sigmund Freud proposed that people's personalities develop through the conflicts resulting from the opposition of biologically based drives and social restraints. We must confess, however, that it is extremely unfair to Freud to try to present his ideas "in a nutshell." An extremely prolific thinker and writer, Freud continually developed and revised his ideas over a span of four decades.

The basic idea underlying the psychoanalytic view is that human beings have biological drives, such as sex and aggression, that need to be held in check because acting on many of these drives would cause social disarray. But according to Freud's conception, a biological drive remains until it is satisfied. As drive upon drive remains unsatisfied, there is a buildup. Learning to control these biological drives, to keep them from reaching consciousness, and to release them safely, without the negative social consequences of acting on them openly or the explosion that results from keeping them bottled up, is the key to healthy personality development in children and healthy psychological adjustment in adults.

Conscious, Preconscious, and Unconscious

Freud proposed that personality could be divided into conscious, preconscious, and unconscious parts. The conscious part, the smallest part of personality, consists of the thoughts that are in your mind right now. The preconscious part is the potential thoughts that are not currently in consciousness but that you can bring into consciousness at will. For example, think about what you ate at your last meal. As you did so, you moved that information from preconsciousness to consciousness. The unconscious part, which Freud believed to be the largest part of personality, consists of thoughts that cannot be brought into consciousness. Freud likened the conscious and preconscious to the tip of an iceberg. He believed that the unconscious, as the major part of personality, has an extraordinary influence on human behavior.

conscious: the part of the personality consisting of current thoughts

preconscious: the part of the personality consisting of thoughts that are not conscious but can be brought into consciousness

unconscious: the part of the personality consisting of thoughts that are not conscious and cannot be brought into consciousness

According to Freud, conflicts between biological desires and social demands play a key role in the development and maintenance of our personality. Each of us has a limited reserve of psychological energy available for both the resolution of conflicts and everyday functioning. If too much energy is used up on the conflicts, there will not be enough left over for healthy everyday adjustment. One key way that psychological energy can be tied up over the long term is through fixation . Fixation occurs when an individual gets stuck in a childhood stage because of poor resolution of the conflict when it first occurred. As a result, they have to devote psychological energy to these same conflicts throughout life. Freud suggested that a number of specific personality characteristics accompany fixations at different childhood stages.

Adult Personality, Adjustment, and Defense Mechanisms

In adulthood, the painful struggles of childhood, along with the still massive biological impulses, are locked away in the unconscious; they are repressed, to use Freud’s term. Unfortunately, we did not say they are safely locked away. These impulses are constantly swirling around in the unconscious, looking for an opportunity to be expressed. It is the adult personality’s job to keep those id impulses where they belong, in the unconscious. When the personality feels as if it is beginning to lose control like this, the person experiences anxiety. To help control that anxiety, defense mechanisms are used to keep the unwanted impulses from reaching consciousness. These are some of the most important defense mechanisms:

  • Repression. You can think of repression, the most basic defense mechanism, as a kind of motivated forgetting of conflict. For example, a teenager may be unable to remember a serious fight she had with her brother when they were younger. The goal of all defense mechanisms is either to repress the id impulses or to release them safely.
  • Denial . One strategy that the ego might use is simply to deny that the unwanted impulse exists. For example, a student who is sexually attracted to his biology professor might deny that he is. Of course, people deny things that are true all the time; it is called lying. The property that makes denial a defense mechanism is that it operates in the unconscious, and the person is unaware that he or she is using it.
  • Projection . The ego seeks relief by believing that unwanted or threatening impulses actually apply to other people instead. For example, if you have frequent aggressive impulses, you may complain that other people are aggressive drivers.
  • Reaction formation . The ego escapes anxiety by doing the complete opposite of the unwanted impulse. A man whose id leads him to want to look at pornography may become a crusader against it. The common notion of homophobia is a version of reaction formation. According to the reaction formation explanation of homophobia, men who secretly fear that they are gay are extremely biased against other gay men.
  • Sublimation . The ego redirects the unwanted impulses into socially acceptable activities. For example, aggressive impulses can be turned into competitiveness in sports or in business activities.
  • Displacement . Sometimes the ego can find relief from anxiety by redirecting an unwanted impulse toward a safer target. For example, instead of punching your instructor when he gives an unfair exam, you may come home and yell at your boyfriend, girlfriend, or spouse.

Keep in mind that defense mechanisms must be unconscious; if you were conscious of using them to repress some unwanted impulse, they would have already failed. This is our biggest objection to the concept. It does seem very likely that people engage in defense mechanism-like behavior, but they are largely aware of what they are doing. Psychologists refer to these consciously employed behaviors as coping strategies (Lazarus, 1974). For example, when people are guilty of displacement, they often realize that they are taking out their anger on an inappropriate target.

defense mechanisms: strategies that the ego uses to relieve anxiety that results from unwanted impulses

fixation: when an early-life conflict is resolved poorly, the id gets stuck in a psychosexual stage, and the adult ego must use energy to continue to try to resolve it throughout life

Other Criticisms of Freudian Concepts

Sigmund Freud’s greatest contribution to personality psychology has been through his influence on other psychologists—that is, his agenda-setting—much the way that Jean Piaget and Erik Erikson influenced developmental psychology. For nearly 75 years after Freud began formulating his ideas, almost all research and theory in personality was a reaction to the psychoanalytic approach.

Critics of the psychoanalytic approach have noted that although any fact that happens to be true can be explained in psychoanalytic terms, Freud’s theory cannot be used to generate testable predictions. Thus, the psychoanalytic approach is missing one of the two essential properties of a scientific theory. Theories must organize observations—and the psychoanalytic approach certainly excels at this—but they also must be able to generate hypotheses. In order for a hypothesis to be testable and useful, it must have the potential to prove the theory wrong, and a “theory” that can explain any fact fails that test. For example, based on a friend’s traumatic episode with his mother when he was a baby, you might predict that he would harbor bad feelings toward his mother. If your friend claims that he has a great relationship with his mother, however, you can say that he is using one of the defense mechanisms: denial or reaction formation. In short, there is no potential fact that could conceivably prove the psychoanalytic approach wrong, making it unusable as a scientific theory.

One of the biggest problems with the psychoanalytic theory is the concept of repression. Remember, according to Freud, the conscious and preconscious parts of our personality are only the tip of the iceberg; the rest consists of massive amounts of impulses, thoughts, and memories that cannot be brought into consciousness because they are repressed. The impulses that are repressed are forced into the unconscious precisely because they are too upsetting to experience consciously. According to Freud, repression was "the cornerstone, on which the whole structure of psychoanalysis rests" (1914). Well, the cornerstone is not particularly stable. Consider memories that Freud would say have been repressed. The events in question must have been extremely emotionally arousing in order to activate the need to repress them, right? The problem is that a great deal of research has shown the opposite effect. Emotionally arousing events lead to a boost in memory through the influence of stress hormones and the amygdala (McGaugh et al., 2000). Although it is true that intense and prolonged stress does interfere with memory, it is extremely unlikely that repression of too-upsetting impulses and memories could be happening on the massive scale proposed by Freud.

Freud based his discoveries primarily on case studies; as you might recall from Module 2, the danger of relying on case studies is that you can never know if it is fair to generalize to the population at large from your individual cases. Even if Freud’s observations about his cases were on the money, he may have only been discovering facts about upper-class, well-educated Europeans who were suffering from adjustment problems. And there is reason to believe that Freud’s observations were biased. Most observers have noted that Sigmund Freud had a very pessimistic view of human nature. As a result, he may have had an expectation that biased him to look for information that was consistent with his view—you should recognize this as the confirmation bias (see Module 1). We think one example goes a long way toward illustrating the possibility that Freud may have “seen what he expected to see.” Freud noted that people commonly employ the defense mechanism of projection, in which we attribute our unacceptable impulses to other people. Perhaps you noticed that this is essentially what we called the false consensus effect in Module 1, but with one key difference. The false consensus effect does not occur only for unacceptable impulses; because that was what Freud was looking for, however, that was what he saw.

Personality Approaches in Perspective

So which approach is right—trait approach, psychoanalytic approach, or cognitive–social learning approach? Well, we can give you a clue: Remember when we introduced the term perspective in Module 3? It is a specific way of viewing a problem that helps us to understand some aspects of a complex phenomenon. We think human personality qualifies as a complex phenomenon that benefits from being viewed through multiple perspectives. The merged trait-temperament-genetic perspective offers insights about the reasons for much of the consistency in people's personalities (and the difficulties we can have when trying to change). The cognitive–social learning perspective is an excellent candidate to inform us about the effects of the environment and, in particular, about why personality is sometimes not consistent. Although the psychoanalytic perspective is nowhere near as influential as it once was, even it can be useful in illuminating some of the behavior patterns we have adopted to manage anxiety and some of the influence of early childhood experiences on adult personality. It would not be a stretch to say that all of the perspectives combine to give us a much more complete picture of human personality than any single one could alone.

  • Can you think of any other everyday concepts (similar to inferiority complex or oral fixation) that may have come from the psychoanalytic approach?
  • Can you think of any images and themes from books, movies, or songs that may have come from the psychoanalytic approach?

Module 20: Emotions and Motivation: What Moves You?

Have you ever noticed the similarity between the words emotion and motivation? It is not coincidental. The Latin stem, -mot, means “move.” This linguistic similarity will help you understand the relationship between the psychological conceptions of emotion and motivation. In many cases, an emotion is something that motivates you. For example, if you are happy, you are energized to do something that you believe will help you maintain the happy feeling. If you are angry, you may be motivated to act aggressively against the person who made you angry.

This module explores two topics, emotion and motivation, that do not fit neatly into the social psychology subfield. Emotion does not fit, perhaps, because it might just deserve to be a subfield by itself. Some observers have noted that historically, cognition (a subfield) and emotion have vied for the position of psychological primacy. Although cognition has “won” in many camps, one can make a good case for the importance of emotion over cognition (Zajonc, 1998). For example, emotion seems much more critical for survival than cognition. It is because of our emotional responses that we can recognize something as threatening or safe, far earlier than we recognize what it is. If emotion does indeed deserve at least equal billing with cognition, it hardly seems fair that cognition is allotted an entire subfield while emotion is pigeon-holed into the best fit among the existing subfields, social psychology.

Motivation, as you will see, is really too broad to fit neatly into a subfield; it encompasses any behavior that is not automatic. It would be reasonable, then, to place motivation into any (really all) of the subfields. But because that is not really possible, psychologists tend to place it along with emotion because of the similarities.

This module has three sections. Section 20.1 is about general concepts in motivation and pursuing goals. You will find additional topics related to motivation throughout the rest of the book, as in the discussions of specific types of motivation in Module 22 (sex), Module 26 (sleep), Module 27 (hunger), and elsewhere. Section 20.2 is about emotions; it begins by defining emotions and describes the changes in brain, body, and cognition that mark them. Section 20.3 focuses on one particularly important emotion in everyday life, anger, and its most obvious and sometimes-explosive response, aggression.

20.1. Motivation and Goals

20.2. Emotion, Brain, Body, Cognition, Culture, and Behavior

20.3. Anger and Aggression

By reading and studying Module 20, you should be able to remember and describe:

  • Drives and incentives, basic motivations (20.1)
  • Nucleus accumbens, dopamine and reinforcement (20.1)
  • Goals, intrinsic vs. extrinsic motivation (20.1)
  • Self-regulation and self-control (20.1)
  • Key observations about emotions (20.2)
  • Emotional triggers (20.2)
  • Basic emotions: anger, fear, contempt-disgust, sadness, happiness, surprise (20.2)
  • The amygdala (and other brain areas) and emotions (20.2)
  • Autonomic nervous system arousal: problems with the polygraph (20.2)
  • Interactions between emotion and cognition (20.2)
  • Mood-congruent memory, hot cognition, and motivated skepticism (20.2)
  • Verbal and nonverbal expressions of emotion (20.2)
  • Gender differences in emotion: empathy, display rules (20.2)
  • Consequences of anger (20.3)
  • Types of aggression: instrumental and hostile (20.3)
  • Gender and aggression: relational and physical aggression (20.3)
  • Biological causes of aggression (20.3)
  • Social learning and aggression (20.3)
  • Aversive conditions and aggression: catharsis (20.3)

By reading and thinking about how the concepts in Module 20 apply to real life, you should be able to:

  • Use principles from self-regulation to improve goal-related outcomes (20.1)
  • Identify emotional triggers (20.2)
  • Find examples of the interactions between emotion and cognition (20.2)
  • Identify risk factors for aggression in yourself or others (20.3)

By reading and thinking about Module 20, participating in classroom activities, and completing out-of-class assignments, you should be able to:

  • Identify a goal and develop a plan to achieve it (20.1)
  • Judge whether verbal and non-verbal expressions of emotion match (20.2)
  • Identify specific criteria that you use to decipher facial expressions (20.2)
  • What motivates you to do your best in school?
  • Can you see any parallels between your school motivation and motivation in other areas of your life?

Motivation in general is an internal desire or need that energizes an individual and directs his or her behavior. Any time you do something that is not automatic, motivation is involved. Although this definition is a good starting point, it really does not provide us with a great deal of understanding about what motivation is. In particular, when you read this definition, you may be left with two important unanswered questions:

  • What exactly  is motivation? In other words, is there some way to recognize when an individual is motivated—for example by examining brain activity?
  • What are the different entities that create “internal desire”?

What is Motivation?

Your conscious experience when you are motivated is that you "want or need to do something." Psychologists often refer to the internal body state that characterizes a motivation as a drive, a feeling that most people would describe as a want or a need. Basically, you can think of the drive as the "energize" part of the motivation definition. Having a drive alone, however, may not be enough to direct our behavior because we may not know how to act on the drive. Often, we need some external cue, an incentive, to provide that direction. For example, imagine that you are experiencing a drive: hunger. Many different cues in the environment may be incentives for you to act on that drive. Imagine that there is a plate of fresh-baked cookies on the kitchen counter. Almost without thinking, you might pick up a cookie and eat it.

drive: the internal body state that characterizes a motivation

incentive: an external cue that directs motivated behavior

With such a broad idea of motivation, it is easy to believe that no single brain area is key to motivated behavior. Many researchers have devoted themselves to studying a particular important motivated behavior, such as sexual behavior or hunger, and examining the brain areas that are involved. This section will look at more general conceptions of motivation.

Some characteristics appear to be common to nearly all motivated behavior, and so a few brain areas may be particularly important. Many researchers have turned to the concept of reinforcement. It is probably fair to say that behavior that occurs because it has been reinforced is motivated behavior. So, if researchers could discover a brain area that is key for reinforcement, it would shed a great deal of light on motivation, agreed? Neuroscientists believe that they have found such an area. It is a section of the brain very near the hypothalamus called the nucleus accumbens. Over 60 years ago, rats that had been fitted with electrodes that electrically stimulated the brain areas near their nucleus accumbens would press a lever 2,000 times per hour in order to receive that stimulation (Olds & Milner, 1954; Olds, 1958). This area uses the neurotransmitter dopamine, so researchers believe that this particular neurotransmitter is also a key to reinforcement and thus motivation. Other researchers working with rats have verified that dopamine in the nucleus accumbens seems to be the important neurotransmitter for motivated behavior (Bassareo et al., 2003; Salamone & Correa, 2002). Dopamine is also released during sexual behavior in rats, a famously motivated behavior (Lorrain et al., 1999).

When you start to examine the research literature on the role of dopamine and the nucleus accumbens in motivation, you quickly get the impression that we know an awful lot about the sex lives of rats but not too much about humans. That is not the case, however. Studies using fMRI have consistently shown that brain activity in the nucleus accumbens increases when people engage in reward processing (Wang et al., 2016). We certainly do not mean to suggest that the nucleus accumbens is the only area that is key to motivation. Many of the additional brain areas that are likely important for motivation in general are the same ones that are key to emotions, such as the amygdala and areas throughout the cortex.

Human beings have a set of basic biological motivations, similar to the rest of the animal kingdom. These basic motivations are for behaviors that helped our ancestors survive and reproduce. The most fundamental motivations are to approach something good and to avoid something bad or dangerous. More specific basic motivations include the drive to reproduce, hunger and thirst, the need to sleep, and pain avoidance. Sex, sleep, and hunger, three very commonly discussed motivations, are covered in Modules 22, 26, and 27, respectively.

For now, it is worth noting that although humans, chimpanzees, pygmy marmosets, and jellyfish may share these biological motivations, they are scarcely recognizable across species. As you know, humans have a highly developed cerebral cortex; axons from the nucleus accumbens, a sub-cortical area (meaning it is located underneath the cortex), form synapses throughout the cortex. Both cortical and sub-cortical areas, then, are heavily involved in motivation (and emotion, for that matter). The participation of these diverse areas leads to very flexible motivated behavior in humans. For example, whereas a jellyfish probably does not have too many interesting variations in sexual behavior, a quick examination of a book like the Kama Sutra reveals an amazing diversity of sexual variation in humans. A good shorthand reminder of the difference is that non-human animals tend to engage in motivated behaviors in fixed patterns that are tightly regulated by sub-cortical brain areas and hormones. Humans, on the other hand, depend heavily on both cortical and sub-cortical areas, allowing them great variety in their motivated behaviors.

Abraham Maslow was a key early psychologist who tried to make sense of the different types of motivations that guide human behavior. He ordered these motivations in what became known as a hierarchy of needs, ranging from basic survival motivations at the bottom, through more interpersonal motivations (such as belongingness), up to what he called self-actualization, the need to devote oneself to a higher purpose, such as religious or political freedom, or to put the needs of society ahead of one's own. Although Maslow's ideas were very influential at one time, and are still well-known outside of psychology, they do not have much influence within psychology at present.

The Importance of Goals in Motivation

One important way to understand the complexity of human motivation is to focus on how motivations are related to goals. A goal is a cognitive representation of an outcome that influences our thoughts, evaluations, emotions, and behaviors (Fishbach & Ferguson, 2007). A goal is what allows us to focus and direct the energy that comes from motivation directly on specific behaviors and possible outcomes. And it is that cognitive representation part that makes our goals interesting, complex, and, well, human.

For example, consider an important distinction between types of goals that allows a basic categorization of motivation into intrinsic motivation and extrinsic motivation. If we see a specific behavior as a means to a more desired end, we call that extrinsic motivation. In other words, we have a desired goal, and we engage in an activity in order to achieve that goal. For example, suppose your goal is to be a lawyer and you work hard in school because it will allow you to achieve that goal. It is termed extrinsic because the goal is outside of (extrinsic to) the activity itself. However, consider a different possibility. What if the goal is the behavior? In other words, the activity or behavior itself is rewarding. This is called intrinsic motivation. Here, it is intrinsic because the goal is part of (intrinsic to) the behavior. A person engages in a particular behavior, in most cases, for a combination of extrinsic and intrinsic rewards. It is not an either/or proposition (no false dichotomy here!). Still, it is useful to characterize a particular behavior as driven primarily by intrinsic or extrinsic motivation. This becomes important when you consider the rest of the definition of a goal above, because the two motivation types lead to different emotions, evaluations, and behaviors. For example, one obvious benefit of intrinsic motivation is that, because the tasks you are engaging in are rewarding almost by definition, you will derive more satisfaction from them than extrinsically motivated people do. Also, intrinsic motivation is associated with more excitement and confidence, better performance, more persistence and creativity, more vitality, higher self-esteem, and better general well-being (Ryan & Deci, 2000). Tasks that are motivated only extrinsically are judged to have no value, except as they relate to the direct achievement of some other goal (have you ever heard someone talk about how worthless some particular class is because it will not be relevant to their future career?).

We are not saying that extrinsic motivation is useless. It can certainly be effective and has an important role in human behavior. For example, students who are highly motivated by the desire for a future career often study very hard indeed. And sadly, there are many situations in which unpleasant tasks must be done. Extrinsic motivation may be the only way to motivate someone to complete these tasks. Keep in mind, however, that extrinsic motivation must be supplied continually. If the external reward is removed, the behavior might stop.

As we said, people typically engage in tasks for both intrinsic and extrinsic rewards. There is, however, an important relationship between the two, a negative relationship. Specifically, the more extrinsically motivated people are, the less intrinsically motivated they tend to be. For example, people who are extremely extrinsically motivated at their jobs (they are doing it for the money only) commonly enjoy their jobs less than people who are intrinsically motivated. Certainly, some people are highly motivated both extrinsically and intrinsically, but they are far less typical than people who are high on one and low on the other. The loss of enjoyment in the task that often accompanies extrinsic rewards can be avoided. The key is to make the extrinsic rewards meaningful, to relate them clearly to good performance.

goal: cognitive representation of an outcome that influences our thoughts, evaluations, emotions, and behaviors

extrinsic motivation: motivation that is associated with the benefits of achieving a goal

intrinsic motivation: motivation that is associated with the process of pursuing a goal

Self-regulation and Achieving Goals

Self-regulation is the term that refers to the complex processes through which we change our thoughts, emotions, and actions when pursuing a goal (Baumeister & Monroe, 2014). It is how we get ourselves to do things when intrinsic motivation alone is not enough. Some of the individual processes it includes are goal setting, planning, organizing and coding information, metacognition, modifying self-motivating beliefs, managing time, deriving satisfaction or pride from activities, and controlling actions and choices (Vancouver, 2018). As you can see, self-regulation is an enormous topic, and we can only begin to describe some of its important aspects.

To begin, though, let us stay with intrinsic and extrinsic motivation for a bit. Although it is not always possible to make activities enjoyable, we might be able to modify some of our beliefs and some of our choices (i.e., self-regulate) to make unpleasant tasks more bearable. For example, perhaps you can move activities closer to intrinsic motivation by being autonomous, in other words, by making your own choices (Deci & Ryan, 2000). Maybe you have the opportunity to choose a topic or a method for achieving a course goal in a class you do not enjoy (some teachers may offer the option to write a paper or give a speech about a course topic). By emphasizing your choices, your autonomy, you may be able to move an unpleasant task closer to being intrinsically motivating. Another strategy is to look for opportunities to develop mastery of some behavior or information. What about the activities for which you just cannot come close to intrinsic motivation? How do you get through a required class that you dislike with a teacher whom you detest, for example? Clearly, in this case, you would have to resort to extrinsic motivation. The key is to find or create extrinsic rewards that are personally meaningful or consistent with your values and goals. If you can recognize how the class will help you succeed in your eventual career, you will probably have an easier time enduring it. If you can internalize extrinsic goals in this fashion, you will reap many of the benefits of intrinsic motivation (Deci & Ryan, 2000). We are not saying that you will suddenly fall in love with the task, but you may surprise yourself and enjoy it, or at least endure it, more than you expected.

Another key aspect of self-regulation is self-control, a central element (if not the central element) of controlling your actions. So far, we have been talking about goals as if they exist in isolation. That, of course, is not the case. At any given time you have many possible goals that you might pursue, and several individual self-regulation processes are devoted to helping us choose and focus on one. To illustrate the importance of self-control, consider the common situation in which you have two goals that are in conflict: one is to engage in an activity that you will enjoy right now, and another is to devote yourself to something less pleasant now that will lead to a more valued goal sometime in the future. Not that you need an example, but imagine that you have a big test in a week in your most important class (and your performance so far has not been up to your own standards). But the video game for which you have been waiting for 5 months is available tonight for the first time. You know that you really should study, but. . .

Previous research and theory had suggested that perhaps self-control is some fixed resource that you can run out of. For example, what if, right before you had to decide whether to play or study, you had to exert self-control to force yourself to exercise when you did not really want to? Ego depletion theory suggested that it might be much harder to choose to study because you had already depleted your store of self-control (Baumeister et al., 1998). This is a compelling idea and one that you can probably illustrate with an example from your own life. Unfortunately, however, the idea might be wrong. Ego depletion was controversial for several years while psychologists argued over whether it could be replicated or not. It was finally tested in a multi-lab preregistered replication effort (23 separate labs testing over 2,000 participants), and the results revealed a very small effect size that was consistent with the possibility that the effect does not exist. Indeed, 20 of the 23 individual lab replications could not rule out the conclusion that the effect size was 0. A couple of the labs (two for one dependent variable, one for a different dependent variable) even found an effect in the opposite direction (Hagger et al., 2016). So for now, at least, we are going to say that ego depletion is not correct (keep in mind, that could change with future research).
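To see why a very small pooled effect whose interval includes zero supports that conclusion, here is a minimal sketch in Python, using invented per-lab numbers rather than the actual replication data, of how effect sizes from several labs can be combined by weighting each lab by its precision.

import math

# A minimal sketch with invented numbers, not the Hagger et al. data.
# Each tuple is a hypothetical lab's effect size (Cohen's d) and its standard error.
labs = [(0.10, 0.15), (-0.05, 0.12), (0.02, 0.14), (0.06, 0.13), (-0.01, 0.16)]

# Fixed-effect pooling: weight each lab by the inverse of its variance,
# so more precise labs count for more.
weights = [1 / se ** 2 for _, se in labs]
pooled_d = sum(w * d for (d, _), w in zip(labs, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

ci_low = pooled_d - 1.96 * pooled_se
ci_high = pooled_d + 1.96 * pooled_se
print(f"Pooled d = {pooled_d:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
# If the interval includes 0, the pooled data are consistent with no effect at all.

With these made-up numbers, the pooled effect comes out close to zero and its interval includes zero, which is the same pattern the multi-lab ego depletion replication reported.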

Other famous research suggested that self-control is a trait that is revealed very early in life and has profound influences on our later success (Shoda et al., 1990). This research was dubbed "the marshmallow test"; it was detailed in a popular book (Mischel, 2015) and was even featured on Sesame Street, in a segment that was once parodied by Stephen Colbert.

You can view the Colbert Report segment at: http://www.cc.com/video-clips/ykt712/the-colbert-report-close-sesame

In the original research, preschoolers who were able to exert self-control and avoid eating a marshmallow (or other desired treat) for 15 – 20 minutes in return for two treats were much more successful years later. The problem is, a large-scale replication with more diverse participants was published in 2018 (Watts et al., 2018). This new version of the research found that the advantages enjoyed by the children who had exerted self-control disappeared when the children’s family characteristics and experiences were taken into account.

You can view the video "The marshmallow test: can children learn self-control?" at: https://youtu.be/d8M7Xzjy_m8

Right now, you may feel as if you have a pretty good idea of what self-control is not. This is not very helpful, though, is it? Right, the key question to answer is what self-control is. We can offer two good ideas. One is essentially a recap of what we said earlier, but the second is new. First, some people have managed to get closer to intrinsic motivation for tasks that they need to do. In other words, they have focused on mastery and autonomy, for example. They have been able to redefine extrinsic rewards to be consistent with their personal values. In essence, their behavior has become self-determined (Deci & Ryan, 2000), so that they have the sense that they are choosing to do the things that they have to do.

Second, some people have learned strategies to deal with temptations. One way is to redefine a temptation to try to make it less appealing than the helpful activity.  For example, an individual might get in the habit of calculating how much exercise it will take to burn off the calories contained in an unhealthy treat to make that treat seem less valued. Another strategy is to get rid of temptations before they become temptations. For example, are you irresistibly drawn to Instagram when you are trying to study? Then put your phone in another room so you cannot even see it. (Fishbach et al., 2003; Fishbach & Trope, 2007).

20.2. Emotion, Brain, Body, Cognition, Culture, and Behavior

  • How would you define emotion?
  • Which emotions do you think are the basic ones, common to all humans?
  • Make a list of the different ways that you think your emotions influence thinking processes, such as memory and reasoning.
  • How would you respond to the suggestion that sometimes, your body reactions are so similar for different emotions that you might not be able to tell the difference between them? If that is true, what else can help you distinguish the different emotions?

Emotion is among the hardest concepts for psychologists to define. Many writers talk about emotions without defining the concept at all, and others note that everything that psychologists have to say about emotions is part of the definition. One psychologist proposed that a complete definition of emotions could be found by reading the entire paper he had written in combination with the complete contents of the 10 pages of references he listed (Zajonc, 1998). In general, we agree with that approach. You can get a sense of the complexity that we are implying by looking at the title of this section (Emotion, Brain, Body, Cognition, Culture, and Behavior). If you really want to know what we mean by emotion, you will need to read the entire section (and you should probably read all of the papers and books that are referenced, too, but we will not be checking on you).

In the meantime, though, let us provide a shorthand definition to give us a jumping-off point: Emotions are brain-and-body states that are experienced as strong feelings, such as arousal, pleasure, or displeasure.

It is worth noting at the outset that one of the key questions about emotions pertains to the relationship between universality, aspects of emotions that are common across all humans because of our shared biology, and social constructivism, aspects that are created by individual cultures and learned by members of those cultures. We will use the somewhat more meaningful term culture (or cultural) to indicate social constructivist arguments. Culture is knowledge, customs, and other behavior that are created by a group (such as a society, ethnic group, or nation), and that members learn by being part of that group. You will notice throughout that we propose some tentative answers to these questions, but we are left with many that are still unanswered.

Five key facts about emotions are:

  • Emotions typically occur in response to something
  • There are different emotions
  • Emotions are marked by similar and distinct body states
  • Emotions interact with cognition
  • Emotions are related to behavior through expression and motivation

We will devote the rest of this section to expanding upon these five observations.

emotion: a brain and body state that is experienced as a strong feeling such as arousal, pleasure, or displeasure

universality: aspects of emotions that are common across all humans because of our shared biology

social constructivism: the view that aspects of emotions are created by individual cultures and learned by members of those cultures

culture: knowledge, customs and other behavior that are created by a group (such as a society, ethnic group, or nation), and that members learn by being part of that group

Emotions Typically Occur in Response to Something

Sometimes, we experience emotion-like feelings that occur spontaneously; psychologists will typically refer to these feelings, which tend to be less intense and longer-lasting than emotions, as moods (Gendolla, 2000). In contrast, an emotion usually has an easily identified cause. Paul Ekman (2004), one of the most famous emotion researchers, has noted that the most important emotional trigger is realizing that something affecting our welfare is happening, is about to happen, or has just happened. For example, if you come home late one night to a dark house and discover that a ground-floor window has been broken and a room has been messed up, this realization is likely to trigger the emotion fear. When you realize that your house has been robbed and the culprit has left already, it is likely to trigger the emotion anger.

Other than events that impact our welfare, there are other categories of emotional triggers, though they are probably not quite as important. A second type of emotional trigger is when we think or talk about emotional experiences or imagined events. For example, if you go to your high school reunion and spend the evening talking about the fun times of old, you are likely to experience the emotion happiness. If you imagine winning the lottery, buying a new car, or even getting an A on your next big exam, you might very well experience happiness. A third key trigger for emotions is to observe them in other people. This, of course, is called empathy.

There are Different Emotions

In English alone, there are nearly 600 emotion words (Johnson-Laird & Oatley, 1989; Davitz, 1970). This is an impressive demonstration of how important emotions are in our everyday lives, but psychologists are not necessarily impressed with the sheer quantity of emotion words. Many emotional words have very similar meanings, such as anger, rage, and hostility. Instead of simply listing all of the different terms that can be used to describe emotion, psychologists have found it more useful and interesting to search for basic emotions. They have typically assumed that there is a small set of fundamental, or basic, emotions, from which more complex emotions are created through mixing. For example, Johnson-Laird and Oatley (1989) conducted a linguistic analysis of the 600 emotion words and extracted five basic emotions: happiness, sadness, fear, anger, and disgust.

If we are to identify truly basic human emotions, however, we should not limit our examination to the analysis of a single language. Instead, we should look for emotions that appear throughout the world. Paul Ekman and his colleagues (Ekman et al., 1969) were perhaps the first to examine this question empirically. They found that people in many different cultures around the world could consistently recognize six separate emotions from facial expressions: Anger, Fear, Contempt-Disgust, Sadness, Happiness, and Surprise.

These six emotions were considered the basic ones, that is, the human universal ones, for many years. The only one that you might need defined is contempt; it is a feeling of being better than someone else. More recent research, however, has begun to question this conclusion. For example, some research has suggested that anger and contempt-disgust are quite similar, as are fear and surprise. The researchers suggested that, of these four, only anger and fear are basic biological emotions, and that the distinctions that lead to contempt-disgust and surprise are culturally learned (Jack et al., 2014). Other research has suggested that even the distinction between fear and anger might be culturally learned, or that some cultures agreed very little with Ekman’s original labels (Crivelli et al., 2016, 2017). So there may be only three basic emotions (Anger, Happiness, and Sadness), or four (plus Fear), or even fewer. This is an intriguing idea: there may be a minimal amount of universality in emotions. Beyond that, the complexities that arise in our emotional experiences might be culturally learned.

Emotions are Marked by Similar and Distinct Body States: Physiology and Emotion

Regardless of which emotion you are experiencing, it is accompanied by physiological responses. When you are happy about a good grade or angry because a driver just cut in front of you, a great many complex events take place throughout your body. Particular areas of the brain become active, and obvious (sometimes dramatic) changes occur throughout the body. Although some of those changes occur similarly across different emotions, others are typical of specific emotions only.

Some Brain Areas Important for Emotions

It is far too simple to say that there is an emotion area of the brain. Rather, as with all complex phenomena, an ensemble cast of many brain areas is involved. In particular, different parts of the cerebral cortex, limbic system, and the thalamus all play important roles in various aspects of our emotional experiences and expressions. That said, there is probably a star of the ensemble cast. Through the years, the one brain area that has been consistently identified as important to emotions is the amygdala, a pair of almond-shaped structures in the limbic system.

Researchers through the years have identified several different emotion-related functions of the amygdala. They can be difficult to keep track of unless you notice that there appear to be two basic functions: recognizing the emotional content of a situation and learning how to respond to similar situations in the future. In particular, the amygdala seems especially important for helping us to recognize a threatening situation. For example, there is a direct neural pathway from the thalamus—the brain area that first receives sensory input—to the amygdala (LeDoux, 1995; Zajonc, 1998). Activity in the amygdala can lead to an instant avoidance response, before you even recognize what led to the response (as when you open a door to leave a room and are startled to see an unexpected person who is about to enter). Similarly, the amygdala becomes active when you detect a new stimulus but not when you encounter a familiar one. In cases like these, the sensory input is instantly interpreted as a potential threat (Schwartz et al., 2003). Other researchers have found that the amygdala helps us to recognize different emotions from facial expressions, a useful clue if you are trying to figure out, for example, if someone is about to punch you (Adolphs & Tranel, 2003; Yang et al., 2003). Many researchers now believe that the amygdala helps us recognize many different types of emotional situations, not only threatening ones (Ono & Nishijo, 2000).

Nearly 70 years ago, researchers discovered that electrical stimulation of the amygdala led people to report feeling fear or anxiety (Chapman et al., 1954). There is little doubt that the amygdala plays an important role in fear, an obviously important emotion in threatening situations. One key piece of evidence is that drugs that change the activity of amygdala neurons also influence fear. For example, Valium is a GABA agonist—in other words, a drug that increases the activity of the major inhibitory neurotransmitter GABA. When a drug like Valium is directly applied to the amygdala, the inhibition of neural activity leads to a decrease in fear or anxiety (Davis & Lang, 2003). Valium, as you may know, is an anti-anxiety drug.

The second major function of the amygdala is to help us cope with similar situations in the future—in other words, to help with learning and memory in emotional situations. During an emotional experience, the adrenal glands release stress hormones, such as epinephrine and glucocorticoids. These hormones activate the amygdala, which is connected to memory areas such as the hippocampus. The consequence is that emotional experiences are remembered better than non-emotional ones (Cahill, 2000; McGaugh et al., 2000; Pelletier & Pare, 2004). A second key way that the amygdala contributes to learning in emotional situations is through its connections to brain areas that release dopamine in the nucleus accumbens, the area that appears to be a key for reinforcement (Everitt et al., 2000). In both the memory and reinforcement situations, the amygdala’s role is to facilitate, or boost, the processes handled by the other brain areas (McGaugh et al., 2000; Parkinson et al., 2000).

Although the amygdala plays a starring role in the emotional stage, many additional brain areas are also key actors. For instance, many areas within the cerebral cortex get into the act. A full description of these areas would be well beyond the scope of this book, but we can give you an idea of a couple of important cortex functions. You may recall that we told you about an area called the anterior cingulate cortex in Module 14; it is involved in the emotional distress that accompanies pain (Rainville et al., 1997). The anterior cingulate cortex receives a great deal of neural input from the amygdala (Vogt & Pandya, 1987). It appears that the anterior cingulate cortex is involved in the recall or the conscious generation of emotions (Ono & Nishijo, 2000). Also, areas of the prefrontal cortex appear to be involved in integrating cognitive and emotional information for decision-making (Wagar & Thagard, 2003; 2004).

Autonomic Nervous System Arousal

The key physiological responses that take place throughout the body during an emotional episode are caused by the autonomic nervous system (ANS)—specifically, the sympathetic nervous system, which is the part that arouses the body. The amygdala has neural pathways to the hypothalamus, an important brain area for starting ANS arousal (LeDoux et al., 1988).

Among the most common measures for assessing ANS arousal during emotions are the ones used in polygraph tests, commonly (but incorrectly) known as lie detector tests. The polygraph works on the assumption that when a person lies, they will experience a stress-related emotion, which will be reflected in physiological measures of sympathetic arousal, such as increases in heart rate, blood pressure, respiration, and electrical conductivity of the hand (it changes when your palms sweat). There are two major problems with polygraphs as lie detectors. First, skilled liars can learn to control some of the aroused body systems. For example, many people can reduce their heart rate and blood pressure through relaxation. Second, the sympathetic nervous system arousal that may be detected indicates that an emotion is occurring but does not indicate which emotion it is. The gross changes that the polygraph can detect are consistent with a variety of emotions. In addition, the differences in emotion between a liar and a truth-teller may be very subtle indeed, far too subtle for any technology to detect. In short, the polygraph cannot tell the difference between someone who is anxious about telling a lie and someone who is anxious that the polygraph will accuse them of telling a lie. Because of the problems associated with polygraphs, it is illegal for employers (except the federal government) to use them for making hiring decisions, and they cannot be used as evidence in courts.

Just because the polygraph cannot detect physiological differences between emotions does not mean that there are no differences. Through the years, many psychologists have believed that the type of arousal that occurs is the same for different emotions. This belief very likely stemmed from psychologists’ drawing the wrong conclusions from their limited methods of measuring arousal. Today, though, a range of methods have emerged that allow researchers to detect more subtle differences. Currently, researchers believe that ANS arousal is reliably different for different emotions (Christie & Friedman, 2004; Levenson & Ekman, 2002). For example, researchers have found greater increases in heart rate for sadness, fear, and anger compared to increases for happiness (Levenson et al., 1990). For anger and fear, the heart rate increase appears to be related to the biological “fight-or-flight” stress response (Levenson, 1992). Other physiological changes associated with fear and anger are also consistent with this idea. If you are experiencing anger, the “fight” response is activated and finger temperature increases, indicating increased blood flow to the hands so that you could use them to fight. With fear, blood flow increases to the legs with the activation of the “flight” response (Ekman, 2003; Levenson et al., 1992).

Emotion Interacts with Cognition

In the Module introduction, we noted that there has been a bit of a competition between cognition and emotion for primacy within psychology. Historically, some psychologists have argued that the fundamental processes of mental life are cognitive ones; others have argued that they are emotional ones. At this point, however, it seems clear that human mental processes and the behavior that springs from them are the products of a complex interaction between the two. You are probably not surprised to learn that emotion interacts with cognition. For example, when people are happy, they tend to interpret new information positively. When sad or angry, they interpret it negatively. So, your professor’s jokes may be funny or annoying, depending on your current emotions (Clore et al., 1993).

In addition, our memories are influenced by current emotion, a phenomenon called mood-congruent memory. We find it easy to remember other happy events when we are happy, other sad events when we are sad (Fiedler et al., 2001). We also predict the likelihood of future events with the same bias (DeSteno et al., 2000): Happy events seem more likely when we’re happy, sad events when we’re sad. Interestingly, some people can engage in mood-incongruent memory as a strategy to improve their emotional state. In other words, when feeling sad, they force themselves to think of happy memories (Rusting & DeHart, 2000).

More broadly, you might realize that other aspects of thinking can be influenced by the emotions we are feeling. Psychologists refer to this idea as hot cognition, a term that refers to changes in thinking and reasoning that result from emotions and motivations (Abelson, 1963). For example, consider the concept of weak-sense critical thinking we introduced in Module 7, in which people engage in critical thinking-like behavior only to prove their own point correct. This is similar to the idea of motivated skepticism, in which an individual’s emotions or motivation lead them to think critically only about information that disagrees with what they believe (Ditto & Lopez, 1992).

hot cognition: changes in thinking and reasoning that result from emotions and motivations

mood-congruent memory: a phenomenon in which people tend to remember events that are consistent with their current emotions or moods

motivated skepticism: a tendency in which an individual’s emotions or motivation lead them to think critically only about information that disagrees with what they believe

Also, consider language. There is no doubt that language is deeply related to cognition (see Module 16), and in some respects, it reflects the cognition that occurs. In other words, when you “think about something,” it is likely that you “talk to yourself in your mind” in terms that reflect the way you’re thinking. Different languages across the world obviously differ in their emotional vocabulary. The question is, do the differences in language imply that different cultures do not experience the same emotions? For example, if the inhabitants of a country such as Micronesia have no word for anger, does that mean that they do not experience anger?

Above, we described some research on the still-open question about the extent to which some specific emotions are universal. It does seem pretty safe to say that it is very likely that people in all cultures throughout the world experience anger. They probably do not, however, experience it the same way, and the differences are often related to the different ways that cultures think (or talk) about emotions. Anna Wierzbicka (1999) notes that Russian has two very common words that correspond to the English word sadness. They are not synonyms and reflect common differences in Russians’ experience of sadness. One word indicates deep sadness that has a clearly identifiable cause (somewhat similar to the English grief), whereas the other is light or passing sadness that may occur for no particular reason. The fact that they are very common words suggests that both are very common emotions in the Russian experience.

There are a great many linguistic differences across cultures that suggest real diversity in emotional experiences. Indeed, research has found substantial cultural differences in the frequency of different emotions. For example, North Americans report more positive emotions and fewer negative emotions than Asians, such as Japanese and Indians (Diener et al., 1995; Scollon et al., 2004). In addition, some cultures value particular types of emotions; for example, many Asian cultures value emotions that lead a person to feel engaged with or connected to other people. An emotion like pride, which tends to separate a person from others, is not valued, and indeed Asian research participants report feeling pride less often than those from Western cultures (Kitayama et al., 2000; Scollon et al., 2004). It is likely that these linguistic differences are both a reflection and a cause of these emotional differences. This interesting possibility is related to the Sapir-Whorf hypothesis, the idea that the language people use helps determine (not simply mirror) their thoughts (Sapir, 1921; Whorf, 1956).

Emotions are Related to Behavior Through Expression and Motivation  

Our fifth key observation about emotions is that they are related to motivation and expression. We have already briefly described some of the relationship between motivation and emotion, so let us focus on expression here. The idea is that we engage in behaviors that communicate the emotion we are experiencing to other people. For example, perhaps the most obvious expression of an emotion is a verbal one; you can simply tell someone that you are experiencing an emotion. If you feel angry, you say to someone, “I am angry.” Of course, there are many additional ways that we express our emotions, and it is these nonverbal expressions that have received the most attention from psychologists over the years.

Nonverbal Expression of Emotion

Some behaviors motivated by emotion are grand displays involving words and a complex series of actions. Think of how you might react to a surprise birthday party: speechless astonishment followed by repeated assurances to the guests that you were surprised, hugs and smiles and words of appreciation. But other expressions of emotion are more subtle, taking place on a smaller scale. Tone of voice, body movements, and facial expressions can all help to convey the emotions that we are experiencing.

What happens when verbal and nonverbal expressions of emotion disagree? For example, imagine that you are having a heated discussion and you decide that the other person is angry. “No, I am not angry!” comes his or her verbal expression. At the same time, though, the person’s voice is loud and strained, and his or her lips are tight and eyebrows are low. Are you going to trust the verbal expression or the nonverbal expression of emotions? Research from nearly 50 years ago suggested that only 7% of the emotional content of a message comes from the actual words; the rest comes from vocal characteristics, such as tone of voice, and nonverbal expression, such as facial expressions and gestures (Mehrabian, 1972). People are largely aware of the relative value of the different modes of expression, too; research has shown that individuals extract more emotional meaning from vocal characteristics than from the actual words (Argyle et al., 1970).

The observations about the relative unimportance of verbal expressions of emotions have led some psychologists to ignore them completely. That is probably going too far. The dominance of vocal characteristics and facial expressions over words is evident only when they disagree with each other (for example, the words say “I am not angry,” but the facial expression says otherwise). There is little doubt that the words that people use to indicate that they are experiencing an emotion can be very important, especially when they agree with the other kinds of expressions. For example, although non-verbal expression allows an observer to figure out the approximate emotion—for example, anger—words may be essential for determining the exact emotion and what triggered it—for example, “my favorite baseball team had the bases loaded with no outs and failed to score” or “my roommate came in and changed the channel from the baseball game to Say Yes to the Dress.” In addition, the verbal communication of emotions is a key component of many psychological therapies (Fussell, 2002).

It is a mistake to ignore a verbal expression of an emotion, but we should certainly be cautious if the nonverbal expression disagrees. Do not be overconfident about your ability to recognize nonverbal emotional expression, however. In general, we are good at recognizing intense emotions in others. We do not do as well with milder emotions and with subtle distinctions among emotions, and we tend to be quite poor at seeing through concealed emotions (Ekman, 2003; Ekman et al., 1999).

The type of nonverbal emotional expression that has received the most attention is facial expressions. Many aspects of facial expressions are not under our conscious control. Therefore, facial expressions cannot lie, if you know what to look for. Paul Ekman and Wallace Friesen know what to look for. They have developed a system for measuring the precise muscle movements involved when we form particular facial expressions (Ekman & Friesen, 1978; 2002). From this work, we now have detailed maps of how the parts of the face move for different emotions. For example:

  • Sadness: inner corners of eyebrows are angled upward; eyebrows are drawn together; eyelids droop; gaze is down; lips are stretched horizontally with lower lip up and corners down; cheeks are pulled up, creating a crease from nose to outside of lips
  • Anger: eyebrows are lowered and pulled together, with inner corners pointed toward nose; staring hard with eyes opened wide; jaw is pushed forward; lips are pressed together and tensed

Of course, everyone knows that the key to recognizing happiness is the smile. Although people certainly do smile when they are happy, it turns out you cannot reliably judge happiness from a smile. People smile when they want to be polite, when they want to convey that they are not a threat, and for many other reasons. Ekman notes that if you want to know whether a person’s smile really signals that they are happy, you have to look around the eyes. More precisely, the muscle that surrounds the eye causes the section of skin between the eyelid and eyebrow to be pulled down and leads to a smile that engages far more of the face than the area around the mouth.

Gender and Emotional Expression

In section 20.3, we will make the disturbing observation that aggression has a significant genetic component; hence, it is part of human nature. We also have, as part of human nature, the capacity for kindness, generosity, and sympathy. A key component of this capacity is empathy, the ability to identify with someone else’s emotions. When you experience empathy, you begin to feel the emotions that you recognize in someone else. There are large individual differences in empathy, from people who constantly cry at weddings and while watching movies to people suffering from an antisocial personality disorder (commonly known as sociopaths), who appear to experience something close to no empathy at all. According to behavior genetics research, empathy has a significant genetic component (Davis et al., 1994). It probably does not surprise you that women tend to be more empathetic than men, a difference that shows up as early as one year of age (Trobst et al., 1994; Zahn-Waxler et al., 1992).

Women also tend to express emotions more effectively than men, but they might not feel them any differently (Coats & Feldman, 1996; Hall, 1984; LaFrance & Banaji, 1992). The difference appears to be, more than anything else, one of display rules, the rules for how and when emotions should be expressed outwardly. In the US, for example, display rules for men dictate that they are not supposed to express sadness, fear, and embarrassment. Women are supposed to express emotions that improve relationships; for example, they should express happiness frequently and should not express anger (Brody, 2000; Hecht et al., 1993). Keep in mind that there are many individual men and women who violate the display rules and express the “gender inappropriate” emotions, but the rules do capture the average differences between men and women.

Incidentally, cultures also differ on many aspects of emotional expressions. In particular, cultures have different display rules (Ekman & Friesen, 1969), just as females and males in American culture do.  For example, in many Asian cultures, the display of anger is considered inappropriate, so Asians may often be seen smiling when they are angry. The use of specific gestures to express emotions is also different across the world.

empathy: the ability to identify with someone else’s emotions

display rules: rules for how and when emotions should be expressed outwardly

  • Can you think of examples when you were successful and unsuccessful at concealing an emotion that you were feeling? Was your success or failure related to your inability to conceal, an observer’s ability to read your emotions, or both?
  • Look at your list of sayings from Activate #2. What relationships can you identify between the sayings and the material you have just read about emotional expression, gender, and culture?
  • Think about something that reliably triggers an emotional response for you. How does it relate to your personal welfare?
  • Have you experienced a situation in which emotional arousal seems to have improved your memory of the event? Can you explain in your own words why these sorts of memories are so vivid?

20.3 Anger and Aggression  

  • How often do you get angry? What are some of your typical reactions when you are angry?
  • Aggression is a biological trait and therefore inevitable in humans
  • Aggression is a learned behavior and thus it can be eliminated

We often categorize emotions as positive or negative, based on how pleasant or unpleasant they feel. If we applied this categorization scheme to the list of basic emotions, we would have anger, fear, and sadness on the negative side, happiness as the one clearly positive emotion, and two unclear cases in contempt-disgust and surprise. Surprise can be pleasant (e.g., surprise party), unpleasant (e.g., threatening dog jumps at you), or neutral (e.g., you open a door to leave just as someone else is opening it to enter). Contempt and disgust, although listed together, probably differ from each other in pleasantness. Few people would judge disgust as pleasant, but regarding someone with contempt can feel very good.

This section focuses on anger, which is only one of the unpleasant emotions. Why is it getting special attention? Because it sometimes leads to aggressive behavior. Aggression has been in the news a great deal over the years, in cases like school shootings, road rage, and terrorist activities throughout the world. Many social psychologists study anger and aggression in the hope of helping to reduce their impact on members of society.

Anger: An Emotion

It is very tempting to categorize emotions as positive or negative based, not on their pleasantness, but on their consequences. We have to be careful about that temptation, however. Although a particular emotion might often lead to negative consequences, it might also often lead to positive ones. Consider anger, and the story of a college student who stormed off after an argument with his girlfriend one night. The next morning, he was the sorry recipient of a broken hand, a result of an angry encounter he had with a tree. (Yes, he punched a tree. The tree won.) In this case, it was clear that the anger was not positive.

On the other hand, consider perhaps the most famous bus passenger of all time, Rosa Parks. As the high school history class version of her story goes, Ms. Parks was tired and refused to give up her seat to a white passenger, as was required of African Americans in Montgomery, Alabama, in 1955. In reality, Rosa Parks was tired, not from work but from the mistreatment of African Americans that was accepted practice at that time (Parks, 1994).  Thus, one of the key events in a series that culminated in the massive progress in civil rights throughout the US was essentially motivated by anger. In this case, then, the anger was a positive emotion.

Aggression: A Behavior

An examination of the evening news, the history books, or the local playground will reveal something important about us; humans have a nasty habit of hurting each other. Psychologists define aggression as any behavior that is intended to harm or injure another living being. Note that it includes physical and verbal behavior, as well as physical and emotional harm. Killing, punching, insulting, teasing, and not letting someone “join in any reindeer games” all qualify as aggression when they are committed with the intention to harm or injure. Even without such a broad definition, aggression is depressingly common; it has been a primary goal of psychologists for many decades to understand aggression’s causes so that we may reduce or eliminate it or its consequences.

It is true that aggression need not necessarily follow from anger, but is anger necessary for aggression? Some psychologists draw a distinction between instrumental and hostile aggression. Instrumental aggression is used to achieve some other end and can presumably be committed without feeling angry. Hostile aggression is by definition fueled by anger. Some psychologists do claim that anger is necessary for all aggression (Zillman, 1988). Even if some aggression can be committed without anger, however, the distinction between instrumental and hostile aggression is not always clear. For example, although there are times when soldiers aggress dispassionately, essentially because they are following orders, it can be easy for the aggression to slip into out-of-control hostile aggression. Indeed, many times throughout history, military personnel have gone beyond the instrumental aggression that they were ordered to commit and engaged in anger-fueled hostile aggression (e.g., torture of Iraqis by American soldiers at Abu Ghraib).

aggression: any behavior that is intended to harm another living being

instrumental aggression: aggression that is used to achieve some other end

hostile aggression: aggression fueled by anger

It might seem odd to lump together different types of aggression, such as teasing, insulting, excluding from play, spreading false rumors, punching, and killing. There is some conceptual confusion even about the differences in physical aggression. Are the acts of punching someone and shooting someone similar in some fundamental sense, or are they different kinds of aggression altogether? Some researchers have decided to use violence and aggression interchangeably, suggesting that much of what we learn about one can be applied to the other (Ramirez, 2003). We can define violence as extreme aggression with the goal of seriously injuring or killing another living being. There may be some truth to the idea that the two are similar. Have you ever seen or been in a fight where nothing mattered except hitting the other person? In fights like that, there are definitely times when one or both of the fighters “lose it” and begin swinging and hitting uncontrollably. It is easy to imagine that if a fighter had been holding a weapon at that time, he would have used it (in our experience, it has always been a he). In some cases, at least, the difference between trying to hurt someone with your fists and shooting someone may only be a difference in the means available for acting. On the other hand, as we will see, there are some important differences between violence and aggression.

Even physical aggression and indirect aggression (e.g., verbal or relational, such as spreading false gossip) may be more similar than they appear on the surface. For example, as adolescent boys “grow out” of their aggressive phase, they appear to be replacing physical aggression with verbal taunts and other indirect kinds of aggression, so the two may serve the same function (Geen, 1998).

The possibility that different types of aggression are quite similar and perhaps substitutable for each other complicates one of the most common beliefs about aggression. “Everyone knows” that men and boys are more aggressive than women and girls. The situation is not nearly so straightforward, however. By using different conceptions of aggression, some researchers have found few differences between genders in their aggressiveness, and in some cases, they have found females to be more aggressive than males. There is no doubt that men and boys throughout the world commit more serious physical aggression and violence than women and girls (Daly & Wilson, 1988; Hilton et al., 2000; Hyde, 1986). Within the confines of a heterosexual relationship, however, women may commit more aggressive acts, while men’s aggression more commonly results in injury (Archer, 2000). Other researchers have found the genders equal in verbal aggression (Buss & Dedden, 1990). If verbal and relational aggression are included along with physical aggression, gender differences in total aggression disappear (Bjorkqvist, 1994). In light of these findings, saying that “men are the aggressive gender” seems wrong.

So, what does cause aggression? We recently did an informal search using a commercial search engine for “aggression cause.” Among the first few pages of search results, we found the following factors implicated (in no particular order):

  • Brain injury or disorder
  • Violent media
  • Frustration
  • Alcohol or drug use or withdrawal
  • Peer pressure
  • Low serotonin
  • Hypoglycemia
  • Poor relationship skills
  • Processed foods
  • 192 medical conditions
  • Severe infection
  • Touch deprivation

As implausible as it may seem, all of these could be correct. That is the nature of a complex phenomenon; it has many causes, and the fact that one proposed cause is correct usually does not rule out the possibility that another factor is also a contributing cause. Although we do not have all of the answers to the question of why people behave aggressively, we have made great progress in understanding the complex set of causes. We will talk about the two broad categories of causes that have been proposed, biological and psychological. Both types include some environmental causes, so you should not think of them as substitutes for “nature” and “nurture.”

Biological Causes of Aggression

Many people are reluctant to grant that aggression might be biological, especially genetic, because of the assumption that explaining some phenomenon as genetically influenced means that it is inevitable. Evolutionary psychologists argue that aggression has developed as a human trait because of our human ancestors’ need to compete with each other for scarce resources and for mates. Critics counter that because human aggression can be so deadly, it is hard to imagine how aggression could benefit the species (Baron & Byrne, 2000). This counterargument makes two false assumptions, however. First, the fact that a behavior is not beneficial today says nothing about the environmental conditions in which it may have evolved (Tooby & Cosmides, 1990). Second, evolution through natural selection does not operate on the level of increasing the survival of the species; it operates on the level of increasing your (and your family’s) chances of survival. Aggressive behavior to drive off or eliminate competitors for scarce resources led to an increased chance of survival for individuals and their offspring, so aggression can very well be seen as an evolutionarily adaptive solution.

Today, however, our aggression is not limited to scaring off competitors with stones or occasionally injuring or killing a small band of rivals. We have weapons that kill people from 100 yards away available for purchase on many street corners (in some states there are more gun dealers than McDonald’s restaurants). We also have weapons that are capable of killing thousands, or even millions, in a fraction of a second. Now that the human race has the ability to obliterate itself, we need to understand why people attack and sometimes kill each other. It is small comfort to believe that aggression was once an adaptive trait. We need to know whether biology plays a role if we want to have a chance to minimize it or the damage it can cause.

The fact that aggression appears throughout the world is important evidence that some aggressive behavior is biologically influenced. Consider war. Contrary to early research by anthropologists (Mead, 1928; 1940), it is clear that virtually all societies have had wars (Divale, 1972; Ember, 1978). There are certainly large cross-cultural differences in the frequency and type of aggression, suggesting that other factors play important roles as well (Fry, 1998), but human aggression is a fact throughout the world.

Probably the most common argument through the years that aggression is biological has been gender differences, but as you saw above, men are not necessarily more aggressive than women. Rather than arguing about gender differences, it is more useful to examine the behavior genetics evidence. Generally, the heritability of aggression appears to be in the 40% – 50% range, and sometimes higher (DiLalla, 2002; Miles & Carey, 1997; Niv et al., 2013; Porsch et al., 2016); in other words, about half (or more) of the differences in aggression across people are due to differences in their genes. The situation is a little complicated, however. For example, researchers have only found significant genetic contributions when they examine aggression reported by individuals, parents, or teachers. When the researchers observe aggressive behavior in a laboratory setting (that is, by putting people in situations contrived to elicit aggression), the genetic contribution has been much lower (DiLalla, 2002).

A second, related set of biological causes is epigenetics. Recall that these are changes in phenotype, or gene expression (in this case, aggressive behavior), that result from tags operating on genes without changing the genes themselves. Researchers have found that epigenetic mechanisms can lead to increases in aggression through changes in our stress response and immune system (Waltes et al., 2015).

A third source of biological evidence is hormones. Testosterone is the hormone that has shown the closest association with aggression. You have to be very careful, though, because the common belief that “testosterone causes aggression” may be an oversimplification. Several researchers have found correlations between the amount of testosterone and aggression, but the relationships are weak and not present for all people (Berman et al., 1993). Meta-analyses have shown weak correlations between testosterone and aggression (Book et al., 2001). There is some experimental evidence that testosterone can cause aggression, for example, in experiments that give people large doses (Pope et al., 2000).

But, there is also evidence suggesting that aggression causes increases in testosterone. This research has not examined aggression directly but rather competition and dominance feelings. For example, when tennis players beat their opponents by a clear margin, there is an increase in testosterone (Mazur & Lamb, 1980). Even winning a simple game testing reaction time in a psychology laboratory has been associated with increased testosterone (Gladue et al., 1989). If you are thinking that perhaps the testosterone increased first, leading to the victories, good job; that is what you should be thinking. Another team of researchers examined testosterone levels in soccer fans at the end of a game; the levels were higher in the fans of the winning team (Fielden et al., 1998). Clearly, then, testosterone can increase as a consequence of feeling dominant. If testosterone levels increase in these situations, it seems likely that they would also increase in response to aggression.

One biological factor that has received a lot of attention is the neurotransmitter serotonin. Specifically, low serotonin has been linked to an increase in aggression. Animal studies, in which mice are bred to be high or low in aggressiveness, show marked differences in serotonin neurotransmission (Veneema et al., 2004). In humans, however, the relationship is quite small. A large meta-analysis found a weak relationship between low serotonin and higher levels of aggressiveness (Duke et al., 2013). The effect appears to be indirect, as low serotonin relieves people of inhibition, leading them to act on aggressive impulses more frequently.

Our list of supposed aggression causes culled from the internet included several foreign substances—such as drugs, steroids, or processed foods—which we also count as potential biological causes. Their direct effects would be to influence the brain or physiological systems associated with aggression. Without going through all of the candidates in detail, let us just say that alcohol is the foreign substance that has the strongest evidence of a link to aggression (Bushman & Cooper, 1990). Alcohol has been specifically noted as a contributing factor to sexual violence and spousal abuse (Stith & Farley, 1993; Testa, 2002). Alcohol makes it more difficult for the drinker to interpret social cues, which may partly explain its specific role in sexual assault (Taylor & Leonard, 1983). More generally, alcohol appears to increase aggressiveness by decreasing one’s ability to focus on the self and by making reactions to conflict more extreme (Ito et al., 1996; Steele & Southwick, 1985).

Psychological Causes of Aggression

The most influential psychological explanation for aggression comes from a straightforward application of social learning and cognitive social learning, which you saw in Modules 6, 17, and 19 already. Individuals can learn to be aggressive if they are rewarded for their behavior. For example, the bully who gains the respect of his peers, as well as the lunch money he took when he punched the little guy, is likely to repeat his aggressive behaviors (Bandura, 1986; Patterson et al., 1967). In addition, we can learn aggressive behaviors through observational learning.

Two important situations where we often witness aggression are at home and in the media. As we have seen, spanking is associated with antisocial tendencies and aggression in children. The child watches the parent spank, and the parent gets rewarded for it (the child stops misbehaving). This is a classic setup for observational learning to occur. The media, too—television shows and movies, sporting events, even cartoons—are sometimes overwhelming sources of violent and aggressive images. You may occasionally hear on the news that research has been inconclusive about the role of media violence in aggression. That is not true. From decades of research (correlational, longitudinal, and, most importantly, experimental), the results are consistent, conclusive, and clear. Viewing violence in the media causes aggression (Geen & Thomas, 1986; Geen, 1998; Huesmann, 2003). Researchers have also drawn similar conclusions about violent video games. Reviews of the research found evidence that violent video games cause aggression in children and adolescents (Anderson et al., 2010; Bensley & Van Eenwyk, 2001; Gentile et al., 2004).

“But hold on,” you might say. “I have heard about this, it has been in the news. They have talked a lot about the research. Violent video games do not make people aggressive. Besides, I play violent video games all the time, and I am not aggressive.” So let us address these objections. We will take care of the second one first. Every single person is just one single person. In other words, each person is a case study, and just because something is true for one case, it does not mean it is true for everyone. By now, we hope this argument sounds very familiar. Relying on personal experience is a serious overgeneralization, and one of the most important errors to try to avoid. It is, quite simply, exactly why we have to do research: an individual’s experience is not a valid way to discover the truth.

The first objection is a bit harder to address. You will recall that when we talked about the research on spanking, we talked about the myth of two equal sides. That does not seem to be going on here. The evidence on one side feels stronger, but less overwhelmingly so in this case. In other words, there is some legitimate research that finds that violent video games are not as damaging as the critics say, so the topic is still controversial. Before we get to that, though, we have to address a confusion, or more technically, a conflation (a blending or fusing of separate ideas). To illustrate the problem: in 2019, there were articles in Psychology Today, the LA Times, the NY Times, the Guardian, PBS, Time, CBSnews.com, and many other publications with virtually the same headline: “Violent video games do not cause violence.”

We never said they did.

We said they cause aggression. This is the one key difference between violence and aggression that we hinted at in the beginning of this section. Research has not really addressed the question of violent video games and violence. So it is true that there is no scientific evidence that playing violent video games causes violence. There is evidence that playing violent games causes short-term, mild increases in aggressive thoughts and aggressive behaviors. There is a possibility that these mild increases could accumulate over time to lead to longer-term changes, but so far that evidence is missing (Allen et al., 2018).

The pro-video game side has conducted meta-analyses of their own that indicate that violent video games do not lead to increases in aggression, or that the effects are very small (Ferguson, 2015). We wish we could tell you that the use of meta-analysis should make these kinds of arguments unnecessary, but it does not. Individual researchers who conduct a meta-analysis can set up their own criteria for which studies get included, and there are still options for data analyses that can, in some cases, yield different conclusions.

And we have not addressed one of the key areas of contention. Each side has been criticizing the quality of the other side’s research. For example, one study that found no effect of violent video games only had its participants play the game for 15 minutes (Hilgard et al., 2019). Critics charged that this was too short to expect any effects. At the same time, some studies that do show an effect of the games on aggression have used, as their measure of aggression, the amount of hot sauce the participant chooses to give to a person who dislikes spicy foods. Critics here charge that this is a poor operational definition of aggressive behavior.

To be sure, the situation is messy. But that is how scientific progress often works. Currently, we are sticking with the “short-term, mild increases in aggressive thoughts and behavior” side, but honestly, that could change in the future as more convincing evidence is produced by the other side. You might recall that this is what we called skepticism in Module 7.

Let us conclude with some less contentious ideas. In many cases, aggression is a response to an aversive condition, which is one that creates a negative emotion. The way an individual responds to aversive conditions is determined by several factors, such as genes, prior learning experiences, and the situation itself (Berkowitz, 1989; 1993). One common situational factor is violent images in the media.

The most well-known aversive condition for psychologists is frustration. Often when we become frustrated, we can become aggressive if the other factors line up. For example, video games can sometimes be frustrating, particularly when the player loses. That frustration can, in some cases, lead to aggression (Breuer et al., 2015). Frustration is by no means the only aversive condition that can lead to aggression. Other common ones include fear, anger, pain, and uncomfortable temperature (Anderson et al., 1995). For example, one group of researchers examined 826 games from three seasons of major league baseball and discovered that the hotter the temperature was, the more batters were hit by pitches (Reifman et al., 1991).

When frustration was first proposed as an explanation for aggression, researchers believed that frustration always produced aggression (we now know that this is not true; for example, if there is a reasonable explanation for the frustration, people are less likely to act aggressively; Dill & Anderson, 1995). To explain how this works, they used the concept of catharsis (Dollard et al., 1939). Catharsis is the release of anger through the expression of it. In other words, in order to release your anger, you have to act aggressively. This idea, which really dates back to the philosopher Aristotle, has taken on a life of its own and is very commonly given as advice. If you want to get over your anger, let it out: yell at someone, kick a pillow, punch a heavy bag, etc. It is true that vigorous activity releases arousal, which can be helpful in reducing anger and aggressive impulses. It often does not work out that way, though, unless the problem that led to the anger is solved. More typically, we think of the event and get angry all over again (Caprara et al., 1994). If we choose the wrong kind of activity, catharsis fares even worse. Acting aggressively through some surrogate activity, such as punching an inanimate object, tends to increase aggression instead of reducing it (Bushman et al., 1999). And that is assuming we don’t choose to punch something harder than our hand. Broken hands are not the only practical risk. When we release anger explosively, for example by yelling at others, we may provoke them into an angry response, thus creating new problems for ourselves.

So what is the answer to the question, “Does testosterone/pain/low serotonin/etc. cause aggression?” In very many cases, the correct answer is “yes, but. . .”

Consider an individual who was spanked as a child, watched violent cartoons and television shows as a youngster, and graduated to violent horror movies later. This individual has played many violent video games and to this day continues to watch violent movies. He was in several fights as a teenager; he won most and received a great deal of sympathy when he was attacked by a large group (and lost). It certainly sounds as if this poor soul should be a fairly aggressive individual. But we know him; he is not. He is one of the most gentle people we know. He has never once spanked his children and has not been in a fight in over 35 years. He does not drive aggressively, and we have never even heard him utter a single aggressive sentence.

What is going on? It turns out that you can never use the causes that psychologists have uncovered for a complex behavior to predict, with a high degree of confidence, the behavior of any single individual. Whatever risk factors for aggression an individual may have in their background, there may be something that counteracts them. One critical factor may be knowledge about the causes of aggression and about the relationship between anger and aggression, which can allow people to pause long enough when they have aggressive impulses to let them pass. Such is the nature of human nature. Most interesting behaviors are extremely complex with many potential causes, making it impossible to make predictions about individuals. It is only when we look at large groups that we can observe the increased likelihood that some particular risk factor leads to an increase in aggression on average.

By the way, this very same complexity makes it extremely difficult, if not impossible, to explain any specific behavior in an individual after the fact. For example, after nearly every highly publicized mass shooting in the US, many observers are quick to blame the shooter’s feelings of alienation, or affection for “death metal” music, or fondness for violent video games as causes of their highly aggressive actions. Filmmaker Michael Moore, on the other hand, noted that the two boys who committed the mass shooting at Columbine in 1999 went bowling on the morning of their infamous act and that by the same logic (i.e., if X follows Y, Y must cause X), bowling is as likely a cause for their violence as any of the other factors commonly cited—hence, the title of his 2002 documentary, Bowling for Columbine .

  • Think about some episodes when you got angry. Can you identify positive and negative outcomes?
  • How has your opinion about aggression changed after reading this section?

Module 21: Social Cognition and Influence: How Do People Interact?

Helen Keller is reported to have said, “Blindness separates people from things, deafness separates people from people.” If you Google the quote, you will find that most people using it are companies selling hearing aids, but that is beside the point. The point is, we think, that being separated from people is worse than being separated from things. Our world, it seems, is largely a social one. As such, the purpose of cognition (see Unit 2) is not simply to understand and reason about the world of objects. Rather, a great deal of cognition concerns the social world. And an understanding of principles of social cognition can help you to navigate our social world.

Of course, we go well beyond simply thinking about other people; we also interact with them every day. In particular, we influence them, and they influence us.

These three ideas – thinking about, interacting with, and influencing other people – are the topics of this module. They are spread over five sections. Section 21.1 describes the process through which people explain others’ behavior; it is called attribution. In 21.2 you will learn about the relationship between thinking about other people (attitudes) and interacting with them (behavior). Section 21.3 is about one of the most serious and harmful problems associated with thinking about and interacting with other people: the human tendency to rely on stereotypes, prejudice, and discrimination in our social relations. Sections 21.4 and 21.5 cover the “influence” part of the module. One key source of influence is the many types of groups in which we find ourselves. Section 21.4 describes many of these group influences. Finally, section 21.5 covers specific efforts and techniques people use when they attempt to influence others—namely, persuasion and obedience to authority.

21.1 Attribution: Explaining People’s Behavior

21.2 Attitudes and Behavior

21.3 Stereotypes, Prejudice, and Discrimination

21.4 Group Effects on Individuals

21.5 Attempts to Influence Others

By reading and studying Module 21, you should be able to remember and describe:

  • Attribution: situational and dispositional attributions, correspondence bias (fundamental attribution error), actor-observer bias (21.1)
  • Attitudes (21.2)
  • Cognitive dissonance (21.2)
  • Stereotypes, prejudice, and discrimination: social identity, realistic group conflict (21.3)
  • Modern racism, implicit attitudes, implicit bias (21.3)
  • Social norms: descriptive norms, injunctive norms (21.4)
  • Conformity, groupthink, social facilitation (21.4)
  • Bystander effect: diffusion of responsibility (21.4)
  • Elaboration likelihood model of persuasion: central and peripheral routes (21.5)
  • Milgram’s obedience research (21.5)

By reading and thinking about how the concepts in Module 21 apply to real life, you should be able to:

  • Recognize dispositional and situational attributions, as well as the correspondence and actor-observer biases (21.1)
  • Explain why attitude and behavior are consistent or inconsistent in an example (21.2)
  • Recognize the use of persuasion and obedience principles (21.5)

By reading and thinking about Module 21, participating in classroom activities, and completing out-of-class assignments, you should be able to:

  • Improve your ability to make situational corrections or attributions when they are appropriate (21.1)
  • Differentiate stereotypes, prejudice, and discrimination; identify examples, and propose how they may be corrected (21.3)
  • Determine which group effects are operating in groups to which you belong (21.4)
  • Create a multi-part persuasive appeal using both the central and peripheral routes. Determine when each strategy should be employed (21.5)
  • Imagine that on the first day of the semester, your new chemistry professor walks in and shocks you with their behavior. They are rude and disrespectful, insult students, and bad-mouth their colleagues. How would you react? What would you say to yourself?

In Module 1, we noted that people commonly have the same goals that psychologists do, namely explaining others’ behavior. The process by which we try to figure out the causes of someone’s behavior is attribution, another word for explanation. There are an infinite number of specific attributions that we can make to explain any specific behavior. In the case of the rude chemistry professor of the Activate section, for example, you might conclude that they are a nasty person, that they are acting tough to set a tone for the semester, or that they had a fight with their spouse that morning. Individuals tend to think that they make attributions well, in other words, that they are “a good judge of character” (van Boven, 2003). Unfortunately, this confidence is not always well placed. Of course, if we were perfect at explaining others’ behavior, there would be little need for psychology, or at least for this part of social psychology.

We are fairly good at some kinds of attribution, however. Charles Darwin found that people are good at the simple recognition of emotions revealed through facial expressions, and others have since confirmed that observation. We make these kinds of attributions hundreds of times each day. It is important to keep these successes in mind as we focus on the errors that can creep into our other interpersonal judgments.

There are two broad types of attributions. One is a dispositional attribution , an explanation that the behavior is a result of an enduring personality trait, or disposition. For example, someone who concludes that the chemistry professor is a nasty person would be making a dispositional attribution. The second type is a situational attribution , an explanation that the behavior results from some external factor, or the situation. A student who concludes that the chemistry professor must have had a fight with their spouse is making a situational attribution. Many people initially make dispositional attributions somewhat automatically. Then, upon reflection, they may update or correct that initial attribution with situational explanations (Quattrone, 1982). So, your first reaction to the chemistry professor is likely to be, “what a jerk!” After you think about it for a while, you may realize that there might be other reasons why they might be acting like that.

attribution: the process of explaining the causes of someone else’s behavior

situational attribution: an explanation of someone’s behavior that focuses on environmental or situational causes

dispositional attribution: an explanation of someone’s behavior that focuses on stable personality traits of the actor

This is a great process when it works. The only problem is, it does not always work. For one thing, situational corrections tend to be insufficient to overcome dispositional attributions, even when the situational factors are obvious. Making matters much worse, situational factors are often hidden; often, the behavior alone is observable. If the chemistry professor’s behavior is a result of a fight with their spouse, the students do not know that for a fact. All they know for sure is the classroom behavior. Finally, because dispositional attributions are more-or-less automatic and situational ones effortful, guess which ones sometimes get skipped? Right, the situational ones.

One extremely important situational factor that often gets overlooked is the observer. When you are observing someone else, you are often part of the other person’s situation and can be a contributing factor in their behavior. For example, teachers who encounter their students out in the community can be surprised to find that the students’ behavior is very different from their behavior in the classroom (of course, we could say the same thing with the roles of teacher and student reversed). When teachers are surprised, it is because they have not realized the powerful situation that they, as teachers, impose on the behavior of their students. We are not necessarily observing our students’ dispositions, but simply how they act in our classes.

This tendency to attribute behavior to enduring dispositions was originally noticed by Gustav Ichheiser (1943), and elaborated by Fritz Heider (1958). Early researchers called it the fundamental attribution error; more recently, it has been renamed the correspondence bias (although you will still hear many psychologists using the old name). The name change came about because the tendency is not fundamental, and it is not always an error. First, it is a bit presumptuous to automatically judge that a dispositional attribution is incorrect. After all, the whole subfield of personality psychology is predicated on the observation that many behaviors do, in fact, result from dispositions. Sometimes, then, the tendency to draw dispositional attributions is correct, so it is fairer to call it a bias than an error.

Video: “Fundamental Attribution Error | Ethics Defined.” You can view it online at https://cod.pressbooks.pub/introductiontopsychologywinter21/?p=112 or directly at https://youtu.be/Y8IcYSrcaaA

Second, “fundamental” means “essential,” or “serving as a foundation.” At the very least, a fundamental bias should be one that everyone uses and that would be very difficult to counteract. Consider a famous demonstration of the correspondence bias (Jones & Harris, 1967). Participants are asked to judge the personal views of an essay writer who they know was assigned the topic; for example, the writer had been told to write an essay in support of the death penalty. Most American and Asian college students who have participated in experiments like this have fallen for the correspondence bias. In other words, even though the participants knew that the writer had been assigned the topic, they assumed that the essay reflected the writer’s actual opinion. What if, prior to participating in this demonstration, however, you were placed in the position of the essay writer and asked to write a persuasive essay on an assigned topic (which should alert you to the situational factors involved)? How would that change your dispositional judgment of another essay writer? Well, if you are like the typical American participants, the answer is that your dispositional judgment would not change at all. But when Korean participants took part in the same experiment, the correspondence bias disappeared (Choi & Nisbett, 1998). The correspondence bias, then, although common, can be eliminated in some people. Thus, it is not what you would expect from a fundamental bias.

Extraordinary, or in many cases even ordinary, situations can move people to engage in a wide variety of behaviors, as you will see throughout this unit. You are likely to be very surprised by many of these demonstrations. Why? Because of the way the correspondence bias affects our thinking. Daniel Gilbert (1998) has pointed out that the subfield of social psychology, and its scientific approach to investigating human social behavior, exists precisely to counteract the strong tendency to attribute behaviors to dispositions rather than situations.

One extremely useful point about the correspondence bias that you should keep in mind throughout school and your career is that other people often fall for it when making judgments about you. Always remember that teachers and employers are likely to make dispositional attributions about you from the behaviors that they see. Many times we have heard our colleagues complain about “lazy, disrespectful” students who fall asleep in class, “unmotivated” students who do not turn in assignments, and “none-too-bright” students who do poorly on “easy” exams. Remember, even if the instructors were aware of the midnight to 8:00 am shift you just worked, the sick child that you have at home, or the four additional classes you are currently taking, the situational corrections they might make would likely be insufficient. And, as we have been saying, the situational factors are often invisible to the instructors, so in many cases, they are unlikely to be taken into account at all.

A few moments of reflection will likely lead you to an important observation about the correspondence bias—namely, that you are less likely to mistakenly attribute your own behavior to dispositional causes. If you act rudely toward someone at the end of a bad day, you will probably realize that the misfortune you have suffered throughout the day is more the cause of the behavior than your personality is. This interesting phenomenon is known as the actor-observer bias; we are prone to the correspondence bias when making judgments about other people (when we are the observer) but not about ourselves (when we are the actor). Clearly, we judge ourselves differently from how we judge other people. Much to our chagrin, we have sometimes noticed the actor-observer bias in ourselves. For example, while driving, we might notice and complain about the carelessness of pedestrians who cross in the middle of a block or against the traffic light (a dispositional attribution). Then, when we are pedestrians ourselves, we sometimes engage in the same behaviors, but only when we are in a hurry (a situational attribution). Again, when we are observers, we have limited information; all we see is the behavior and we are quick to make a dispositional attribution. Then, when we are the actors, we know about the situations, the reasons for our specific behaviors, and take those reasons into account. Think about it; we, as psychology professors and textbook authors, are keenly aware of the actor-observer bias, and we still fall for it. Imagine how common it must be among people who have never even heard of it.

correspondence bias/fundamental attribution error: our tendency to attribute people’s behavior to dispositional causes

actor-observer bias: our tendency to attribute others’ behavior to dispositional causes, and our own behavior to situational causes

  • Can you think of any times that you may have judged someone in error because of the correspondence bias?
  • Has anyone ever judged you unfairly because of the correspondence bias?

21.2. Attitudes and Behavior

  • Who is your favorite actor or actress? What behaviors do you have that reveal this preference?
  • What kind of thoughts do you have after you have made a major purchase, especially if the purchase decision was difficult for you?

Attitude is one of those pesky terms that have a different meaning in scientific psychology than in everyday conversation (other examples include experiment, nerve, and depressed). In casual conversation we might use the word attitude to mean something like a person’s overall mental approach to life, as in “She has a great attitude, always positive, never in a bad mood.” Or, sometimes we reserve the word to denote a negative outlook only, like when parents complain that their teenager “has an attitude.” These senses of the word are both too broad and too narrow for the psychologist. When psychologists talk about attitudes, we are referring, not to a general state of mind, but to an attitude toward a specific target, for example, your attitude toward school, which may be very different from your attitude toward amusement parks. So in that way, the everyday definition is too broad. It is also too narrow, though, because the positive-negative dimension that seems to capture the whole everyday meaning is only a fraction of the psychologists’ multidimensional concept.

There are, in fact, three elements—belief, feeling, and behavior—that together compose what psychologists refer to as an attitude, a psychological tendency that people express by evaluating some entity with favor or disfavor (Eagly & Chaiken, 1993). You can certainly hold attitudes about other people, a primary concern of social psychology, but you can also have an attitude about virtually anything, including a concrete object (e.g., a soft drink brand), an abstract concept (e.g., democracy), or an activity (e.g., quilting).

We feel confident that we do not need to define beliefs and behavior for you. Let us be a little more careful about what we mean by feelings, however. This is the evaluative part of the attitude, the part that corresponds to the favor or disfavor part of the definition. In short, it is the part of the attitude that makes you say, “I like the show Tiger King ” or “I dislike politics.”

Consider, for example, your attitude toward psychology. It might consist of the following elements:

  • Belief: You believe that psychology is the most important discipline in the history of the world.
  • Feeling: You love psychology.
  • Behavior: You have filled your entire course schedule with psychology classes.

Why do we care about attitudes? Well, the fact that advertisers pay attention to them is a very important clue. Attitudes have that all-important behavior component. One can use people’s attitudes to predict their behavior and can change their attitude to influence their behavior. Marketers, of course, are interested in purchasing behavior, and in section 21.5, you will learn about some of the techniques that they use to change your attitudes in order to persuade you to purchase their products.

Do Attitudes Cause Behavior?

Many years ago, a researcher made an important observation about attitudes and behavior: people’s stated attitudes in some cases did not coincide with their behavior at all. Specifically, the researcher traveled around the US with a Chinese couple during a period of quite serious prejudice against Asians. They stayed at 66 different hotels and ate at 184 restaurants throughout the country and were refused service only once. Yet, when he returned home and wrote to the very same establishments, 90% of the owners who responded reported that they would not serve Asian people (LaPiere, 1934).

This striking example of attitude-behavior discrepancy set off a kind of crisis among those who believed that attitudes can predict what people will do. A 1969 review of research found that attitudes did a very poor job of predicting behavior (Wicker, 1969). Still today, some observers are critical of researchers’ attempts to predict behavior from attitudes. Many researchers, however, realized that the relationship between attitude and behavior is complex, and they set out to discover the conditions under which the two do coincide. Three of their findings stand out.

First, you have to look at the right kind of attitude. Specifically, if you want to predict the likelihood that someone will engage in a behavior, you have to know their attitude toward the behavior, not toward an associated object. You might express a positive attitude about a political candidate, but a negative attitude about contributing to their presidential campaign. In this case, you would be extremely unlikely to donate when you get a telemarketing call from the campaign headquarters. Another way to think about this idea is that there are many potential behaviors that might be consistent with a positive attitude toward political candidates. If you are looking for consistency, you can broaden your search to include these different possible behaviors. On average, individuals will act in ways that are consistent. So, although you might not contribute to the political campaigns of your favored politicians, you might tell your friends about them, subscribe to their email newsletters, attend their rallies, and vote for them (Fishbein & Ajzen, 1974).

A second important consideration when trying to predict behavior is the influence of other people. We often act in ways that are inconsistent with our attitudes under the influence of people who are important to us (Ajzen & Fishbein, 1980). If you do not believe us, we invite you to ask the parent of a young child who plays the game Candyland with their child every day how much they like the game.

A third factor is a bit obvious, but it is an important one. An attitude is not likely to be a good predictor of behavior if it is a weak attitude. An intense attitude about something important to the individual, one that can be easily called to mind and about which the individual has a great deal of knowledge—in other words, a strong attitude—is a very good predictor of behavior (Kraus, 1995; Krosnick et al., 1993). Attitudes that are formed through direct, personal experience are especially strong and likely to lead to consistent behavior (Fazio & Zanna, 1981; Millar & Millar, 1996). For example, an individual whose life was saved because she was wearing a seatbelt during an automobile accident is likely to have an extremely strong attitude about seatbelts.

Does Behavior Cause Attitudes?

Another group of researchers took a very different approach. They discovered that sometimes, it goes in the opposite direction; engaging in a particular behavior changes your attitude. The process through which this can happen is called cognitive dissonance (Festinger, 1957). According to cognitive dissonance theory, when we have two contradictory cognitions, we experience an aroused mental state called dissonance. We are then motivated to reduce the dissonance. The only way to do that is to change one (or both) of the cognitions so that they no longer contradict each other. Suppose one cognition is the realization that you just did some behavior, and the other is that you have a negative attitude toward the behavior. For example, you volunteer at the humane society, and one day you suddenly decide to adopt a kitten, even though you have never liked cats. You cannot change the fact that you adopted the kitten, so in order to reduce the dissonance, you are forced to change the other cognition (maybe cats are not so bad after all).

The effects of cognitive dissonance can be difficult to accept, so perhaps another example is in order. You are in the market for a car and are trying to decide between a fuel-efficient compact car and a pickup truck. Because both vehicles have features that you judge extremely important, it is a toss-up; the two alternatives are equally desirable. You realize that you need to make a decision, so you buy the pickup. At first, you will probably experience dissonance (“Maybe I should have bought the compact; fuel is over $4.00 per gallon”), specifically, postdecisional dissonance, commonly known as buyer’s remorse. You are motivated to reduce the dissonance, but you cannot change the cognition that you did, in fact, buy the pickup truck. Therefore, you change your inconsistent cognitions and begin to think of the truck as the far better alternative (Brehm, 1956).

Cognitive dissonance theory has led to some very surprising discoveries. Suppose we offer to pay you to give a speech about a stance with which you disagree, and we are secretly motivated to get you to change your opinion. Should we pay you $100 or $5? Most people’s first inclination is to predict that the large payment would lead to a larger actual opinion change. Cognitive dissonance theory, however, makes the opposite prediction. In the $100 case, there is no dissonance because all of your cognitions are consistent (I gave the speech, I don’t really agree with the content of the speech, but I was paid a lot of money for it and so I had a good excuse for making it). On the other hand, if you are paid only $5, your cognitions are inconsistent (I was paid next to nothing for giving a speech I don’t really agree with, so why did I do it?). In order to reduce the dissonance, you try to persuade yourself that perhaps you actually did agree with the content of the speech somewhat. Of course, you are unlikely to change your opinion completely, but research on cognitive dissonance over the years has shown a consistent shift in opinion in the predicted direction (Festinger & Carlsmith, 1959; Riess & Schlenker, 1977).

cognitive dissonance: an aroused feeling that results from holding two contradictory cognitions at the same time. An individual is motivated to reduce the dissonance

postdecisional dissonance: the feeling of regret or unhappiness that may occur after we make an important decision

  • Try to identify some examples of when your behavior was not consistent with your attitude. Do you recognize any of the principles from this section in the example? Were there any additional reasons for the inconsistency?
  • Can you recall a time when you may have changed your attitude to be more consistent with a behavior that you had already completed?

21.3. Stereotypes, Prejudice, and Discrimination

  • Why do you think people engage in stereotyping, prejudice, and discrimination? List as many reasons as you can.

How we think about and interact with other people is determined to a great extent by stereotypes, prejudices, and discrimination. People often use these terms as if they are interchangeable, but they are not. A stereotype is a set of beliefs about an individual person derived from their membership in a category, such as an ethnic group, age group, gender, or sexual orientation. Prejudice is a feeling, an evaluation of the person who has been stereotyped. Although a prejudice can be positive or negative, the negative ones, because they are so damaging, have much more frequently been the focus of psychologists’ attention. Discrimination is a set of behaviors, treating a person differently because of stereotypes and prejudice.

You probably noticed this already, but in case you didn’t, these definitions parallel what we just told you about attitudes very closely. So, one way you can think about stereotypes, prejudice, and discrimination is that they compose a person’s attitude toward some targeted group of people (and individual members of that group).

Anyone can be harmed by stereotypes, prejudices, and discrimination. When they are applied by a member of a more powerful group to a member of a less powerful group—such as when a white male uses a stereotype to judge a black male—the damage is greater. In particular, the discriminatory behaviors stemming from stereotypes and prejudices lead to tangible (for example, economic or, in some cases, life-or-death) consequences.

Sadly, bias and discrimination are not things of the past. For example, according to the US Census Bureau, for every dollar a white man earns, a woman earns $0.82. The New York Times (Leonhardt, 2020) reported that the wage gap between white and black men is as large today as it was in 1950 (when accounting for everyone, not just those employed or officially looking for work).

Some of the differences can be explained by different career choices and opportunities, education level, and geographic distributions (which is bad enough). Some differences even persist for the same work and same experience. There is little doubt that a great deal of these wage gaps is a consequence of discrimination. To give you an idea of just how pervasive discrimination is, researchers have even discovered that the fresh fruit purchased in stores of the same supermarket chain in the same city was lower in quality in neighborhoods with lower socioeconomic status (Topolski et al., 2003).

stereotype: a set of beliefs about an individual person derived from their membership in a category

prejudice: a feeling or evaluation of a person who has been stereotyped

discrimination: treating people differently because of stereotyping and prejudice

Stereotypes and Social Categorization

One of the most important discoveries of the past 40 years about stereotypes is that they resemble other types of concepts in many important ways (Taylor, 1981). You may recall from Module 7 that mental concepts affect our thinking about everything we encounter, so it should not surprise you that they have been used to help people understand complex social categories as well. Social categorization serves many of the same functions that categorization of the non-social realm does. The main benefits are simplicity and efficiency; we need not go to the trouble of collecting detailed information about all the people we meet. All we have to do is figure out what category they belong to, and then make inferences based on category membership. “Oh, they are a psychology professor. They must be odd, absent-minded, and geeky.”

It turns out that our social identity, the part of our identity that is based on our group memberships, is a key part of who we are (Tajfel & Turner, 1986). Our social identity leads us to make a basic social categorization of “people like me” and “people not like me,” or in other words, ingroups and outgroups. People have a tendency to see members of an outgroup as more similar to each other and less like the ingroup members (Fiske, 1998; Linville, Fischer, & Yoon, 1996). Individuals are also likely to think favorably about their ingroup and unfavorably about their outgroup, as a way to enhance their individual self-esteem through their group memberships (Lindeman, 1997; Tajfel et al., 1971; Wilder, 1981; 1984). As a result, you can already see prejudice beginning to flourish from these basic social categorizations.

Beyond the basic ingroup-outgroup distinction, our ability to create new social categories is unlimited. In other words, we can easily create infinitely many different ingroups (e.g., black, female, psychology major, runner) and just as many outgroups (e.g., white, male, history major, non-exerciser).

ingroup: social group to which a person psychologically identifies as being a member

outgroup: social group with which a person does not identify

social identity: the part of our personal identity that is based on our group memberships

The most important problem with a stereotype is that it typically gets applied to all members of the target category equally. It is, in a way, irrelevant whether a stereotype is based on truth. The error lies not so much in stereotyping but in applying the stereotype equally to all members of the target group without regard to any individual information.

In addition, many stereotypes are exaggerated at best and flat-out wrong at worst (Fiske, 1998). The problem is that many people form their stereotypes on the basis of unrepresentative cases (for example, vivid cases in the media). The availability heuristic leads them to overestimate the frequency of stereotypical behavior; once the stereotype is formed, the confirmation bias leads to its maintenance and exaggeration. For example, a person who has formed a negative stereotype about the criminal behavior of African Americans would be very likely to notice the news reports of criminal activity and fail to notice their law-abiding African American classmates.

Prejudice and Perceived Threat

Once an individual has categorized people into ingroup and outgroup, prejudice can easily increase. The emotional response of prejudice often comes with the perception of the outgroup as a threat. For example, realistic group conflict theory explains how two groups who compete for the same resources, such as jobs, often have prejudiced feelings toward each other (Esses, Jackson & Armstrong, 1998; Fiske & Ruscher, 1993; Pettigrew & Meertens, 1995).

In addition, prejudice can flourish when a person is uncomfortable with direct contact with members of the stereotyped group. This discomfort would lead the individual to avoid contact and to experience anxiety and other bad feelings when forced to come into contact (Fiske, 1998). Thus, once prejudice has started, it can feed on itself.

Did the 2020 Racial Justice Protests Surprise You?

If so, you are probably white. On May 25, 2020, 46-year-old George Floyd, a black man who was suspected of passing a counterfeit $20 bill in a store, was killed when a Minneapolis police officer (who has since been charged with murder in the case) kneeled on his neck for 8 minutes and 46 seconds. The ensuing worldwide protests against police mistreatment of people of color and against systemic racism (which included protests in all 50 US states) began a period of activism among the general population not seen since the civil rights era in the 1960s. Although a great many white Americans took part in the protests and demonstrations, some had been surprised by the depth of black Americans’ feelings about their mistreatment at the hands of the police and other systems, organizations, and people in our society. A full discussion of the concept of systemic racism, as it is known, is beyond the scope of an Introduction to Psychology textbook. One thing we can do is explain some of the psychological roots of the ongoing problem (which we have already begun through the explanation of stereotyping, prejudice, and discrimination), and perhaps explain the surprise experienced by some white Americans. By the way, this is probably as good a time as any to point out that we are using the problems faced by black Americans to illustrate the concepts. You can choose any other minority group that is a target of bias (e.g., members of the Latinx community, Muslim Americans, LGBTQ+ Americans) and make many of the same points.

For years, Blacks’ and Whites’ perceptions of race relations in the US have been markedly different. In 2016, the Pew Research Center found that 46% of white Americans and 61% of black Americans said that race relations were bad. By 2019, a clear majority of the population had begun to realize that race relations were poor (and getting worse). Still, the different perceptions between Blacks and Whites persisted (56% of Whites and 71% of Blacks said race relations in the US were generally bad).

Of particular importance have been the differences between Blacks’ and Whites’ perceptions of discrimination. Again, in 2016, Pew found gigantic gaps in perceptions of how Blacks are treated by police, banks, and employers, in stores and restaurants, and in voting (gaps ranged from a low of 23 percentage points, for voting, to 42, in the workplace). Indeed, one month before the George Floyd killing, the Pew Research Center conducted a new poll in which they found glaring differences between Blacks and Whites. For example, 84% of white respondents, but only 56% of black respondents, had a fair amount or a great deal of confidence in the police to act in the best interest of the public. Likewise, 78% of white Americans, but only 52% of black Americans, rated police officers’ ethics high or very high (Pew Research Center, 2020).

How could white Americans not see what has seemed obvious to black Americans for decades (centuries, really)? Well, laws barring discrimination had reduced cases of overt discrimination. Further, many of the open biases that people held seemed to be a thing of the past. People who used to openly tell racist jokes at work and freely shared their opinions about “lazy [insert racial epithet here]” found their audiences less receptive. Psychologists cautioned that some of the overt ethnic and racial discrimination of the past had been transformed into what they call modern racism, subtle discriminatory behavior (and stereotypes and prejudice) that can be hidden behind some other motive or belief (Pettigrew, 1998). For example, the opinion that racial discrimination was no longer a problem in the US was generally considered a reflection of modern racism (Swim et al., 1995).

At the same time, some of the discriminatory actions were not at all subtle to the targets of those acts. For example, compared to white Americans, black Americans are stopped more often by police, are more likely to be arrested, are more likely to have force used against them, and are more likely to be shot and killed. Making these biases worse is the fact that Blacks who are stopped by police are more likely to be innocent than Whites who are stopped (Bronner, 2020). (Note: we are venturing into the systemic racism part of the conversation. We have to touch on it to illustrate the scope of the problem.)

Importantly, researchers began to discover and develop ideas related to implicit attitudes and implicit bias. These are attitudes or biases that people are unable or unwilling to express overtly. These implicit biases can influence our behavior in sometimes subtle ways, so they can go unnoticed by some. But think about how this might work. An individual white person might have an implicit bias that they are at best vaguely aware of. Perhaps they encounter a black person two or three times per week, so most of the time that implicit bias is completely buried. A black individual, however, may be the target of dozens if not hundreds of other people’s implicit biases every single day. It becomes easy to see how Whites might have been surprised to discover that Blacks did not experience the same America. Then, combine that with the systemic racism part of the equation. It is a lot easier to notice when something is happening than when it is not. For example, if you are white, it is hard to notice that you are NOT getting pulled over by police with regularity for no reason. The result was that many white Americans did not realize how serious and common the problems faced by black Americans still are.

Can We End Stereotypes, Prejudice, and Discrimination?

Some observers complain that we cannot legislate our way out of this mess. Although that is true, legislation is still part of the solution. We can absolutely ensure that serious forms of discrimination are criminal acts, and we can enforce those laws. But the observers are right: legislation alone is very unlikely to be sufficient to cure society of discrimination as long as there are new ways for it to manifest itself. Ultimately, it will be more effective to address the stereotyping and prejudice that drive discrimination, essentially killing discrimination at its roots.

An understanding of how stereotyping and prejudice develop is an important first step in reducing them, both for psychologists and for individuals. We are certainly not going to put a stop to the process of social categorization, but we can make people aware of the need to get information about people that highlights their individuality. Research has shown that simply giving people different goals, such as to be accurate when judging someone or to learn about a stereotyped person as an individual, can encourage them to devote the extra effort to go beyond a simple social categorization (Brewer, 1988; Fiske & Neuberg, 1990; Stangor & Ford, 1992).

A second key to reducing prejudice and discrimination is to encourage contact between groups. Not just any contact will do, though; recall that prejudice itself sometimes emerges from contact. In order for contact to work, it should be between people who are of equal status, and it should allow members of different groups to discover their similarities and to learn about each other as individuals. Finally, the contact should take place in a context in which cooperation is expected and encouraged and the members of different groups depend on each other (Pettigrew, 1997; Schwarzwald et al., 1992; Wright et al., 1997).

A third key is to acknowledge and learn how to compensate for implicit bias, and to guard against substituting a new discriminatory behavior when an old one becomes unacceptable. For example, some research has shown that police officers who undergo some types of implicit bias training are better able to distinguish between images of black people with and without weapons in shooting training exercises. Although the effects do not last, this is a promising avenue to develop.

implicit attitudes: attitudes that people are unable or unwilling to express openly but nevertheless affect their behavior

implicit biases: biases that people are unable or unwilling to express openly but nevertheless affect their behavior

modern racism: racial stereotypes, prejudice, and discrimination that can be hidden behind some other motive or belief

  • Think about a time that you have been the target of stereotyping, prejudice, or discrimination. Which one did you experience? How did it make you feel? If you have never been the target of stereotyping, prejudice, or discrimination, imagine how you would feel.
  • Would you feel differently about being the target of stereotyping versus prejudice versus discrimination? Which one do you find more troubling?

21.4. Group Effects on Individuals  

  • What are some important groups that you belong to?
  • Have you ever “gone along with the crowd” against your own personal views or better judgment?

In Section 21.3, you saw that stereotypes and prejudice essentially emerge from a categorization of others into “people like me” and “people not like me.” In other words, we tend to categorize people in terms of whether or not they are in the same important groups as we are. It is easy to see that certain social categories, such as ethnicity and religion, are extremely influential aspects of people’s lives. What you may not realize is that belonging to a group is very important to human beings in general. One way you can see this is to note how easy it can be to get people to feel as if they are part of a group. In many cases, researchers have been able to create ingroup and outgroup categorizations by randomly assigning people to groups or basing group membership on some trivial difference such as the type of shoes that people were wearing (Billig & Tajfel, 1973; Locksley et al., 1980; Tajfel et al., 1971).

Cultures across the world differ in what we call collectivism and individualism. Obviously, in collectivistic cultures, in which membership in groups is emphasized, groups have a profound influence on people. Even in the US, however, a culture that is staunchly proud of its individualistic values, groups play essential roles in people’s lives.

Groups in general have powerful influences on individuals. This section talks about five aspects of that influence—in the language of social psychology, they are social norms, conformity, groupthink, social facilitation, and bystander effects.

Social Norms

Groups have rules; they are called social norms. The strength of the rules and penalties for violating them are a large part of the power of a particular group. The rules are very often unstated, so they become obvious only when they are violated.

Even unofficial, weak groups have a set of norms. If you do not believe us, stand up in the middle of class during your next exam and entertain your classmates by singing a Broadway show tune (just make sure you do not do it in your psychology class). You might just discover that the penalty for violating some social norms is to be kicked out of the group, by the way, so you probably should not accept our challenge. Frequently, however, the penalty for violating norms is more subtle, something like embarrassment or anxiety.

Try this yourself someday. Go out onto a crowded street corner and stare up at the sky for as long as your neck will allow. Depending on where you do this, people may question you, ignore you, or push you out of the way, but a few will join you. On another occasion, do the same thing but with one small change: bring four friends to stare at the sky along with you. Within a few minutes, you are likely to attract quite a large group of passersby perfectly willing to cramp their necks to gaze at nothing. In one famous study, 84% of people passing through a street corner fell for this very trick, a striking example of the power of descriptive norms, one of the key types of norms we observe in social environments (Cialdini et al., 1991; Milgram et al., 1969). A descriptive norm is a group rule based on the actual behavior that group members exhibit. In other words, descriptive norms simply describe the group members’ behavior.

Robert Cialdini and his colleagues have noted that, in addition to descriptive norms, we can also be influenced by injunctive norms, norms based on other members’ expressions of approval or disapproval. Basically, descriptive norms are what the group does, while injunctive norms are what the group says we are supposed to do. Both descriptive and injunctive norms can influence us, and they usually agree with each other. In other words, the behaviors that a group expresses approval about are often the ones they engage in. On the other hand, the two can contradict each other sometimes. When that happens, whichever norm seems most prominent in the situation will be the more powerful influencer (Cialdini et al., 1991). For example, consider cheating. Without a doubt, colleges in general have a strong injunctive norm against cheating. In essence, no one condones cheating. On some campuses, however, cheating may be common, indicating that the descriptive norm is the opposite of the injunctive norm. The college might try to shame students into having more integrity by drawing attention to the rampant cheating, but that tactic is likely to backfire. In essence, highlighting the descriptive norm makes it seem as if, despite the disapproval, “everyone is doing it.” Many potential cheaters would probably be encouraged to see if they could do it too.

social norms: rules for behavior in a particular group

descriptive norm: a social norm that is based on the actual behavior that group members exhibit

injunctive norm: a social norm that is based on the behaviors that are approved or disapproved by a group

Conformity

You are sitting around a table looking at a catalog with five other people. Each of them in turn marvels at the jacket on page 14. You noticed the jacket instantly because it is hideous; you cannot believe that anyone would buy it. Yet, inexplicably, your first companion loves the jacket. You look at it again, without saying anything, thinking perhaps that it might look better from another angle. Nope, still ugly. Then, the second person at the table chimes in: “It is beautiful,” she exclaims. You take off your glasses and wipe them on your shirt before looking again. Then, the third and fourth tablemates both agree that the jacket would be a bargain at twice the price. The fifth person picks up the phone to order it, and all eyes turn to you. Do you express your opinion that the jacket is the ugliest waste of cloth you have ever seen, or do you go along with the rest of the table and say that it is an attractive jacket?

The power of descriptive norms suggests that conformity would be surprisingly common. Solomon Asch (1951; 1955), in some of the most famous experiments of all time, demonstrated that many people would deny the evidence from their own senses and go along with what the rest of the crowd said. Asch used judgments of line lengths to test his hypothesis. Participants were shown a line and asked to judge which of three comparison lines was the same length. Although it was an easy judgment to get right when making judgments alone, over three-quarters of Asch’s participants conformed at least once, and over one-third of the total judgments were of the wrong line simply because other people had picked it first.

Video: “Asch Conformity Experiment.” You can view it directly at https://youtu.be/TYIh4MkcfJA

Asch and later researchers have found that not all situations lead to equal conformity. For instance, factors such as gender, culture, and minority influence can influence conformity (Bond & Smith, 1996; Griskevicius et al., 2006; Kim & Markus, 1999; Pasupathi, 1996). Moreover, if the group opinion is not unanimous, many fewer participants make the wrong judgment. The size of the group also makes a difference: Once a group has three members, conformity begins in earnest, and conformity levels continue to increase up to at least eight members (Bond & Smith, 1996). Researchers have also discovered that high-status individuals are particularly effective at inducing compliance (Harriman & Costello, 1980; Mullen et al., 1990). Finally, if group members like and admire each other, conformity pressures will be very high (Crandall, 1988; Latane & L’Herrou, 1996).

Groupthink

Most of the research through the years examining the effectiveness of groups when they are trying to accomplish some task has found that groups tend to outperform individuals. In essence, two heads are indeed better than one. A key exception to the superiority of groups was discovered in the early 1970s by Irving Janis. He noted that in some situations, groups that are trying to make decisions do very poorly. He called the phenomenon groupthink and used it to explain some of the most famous disasters of the US government over the years. Janis (1982) proposed that highly cohesive groups that are headed by a directive leader who already has a preferred decision often try to stifle disagreements, leading to a poor evaluation of alternatives, bad decisions, and over-commitment to the bad decisions. The group comes to see itself as invulnerable and morally superior, and it becomes very difficult for individual members to speak out against the apparent unanimity of the rest of the group. (We say apparent because several individual members often harbor private reservations that they are afraid to bring into the open).

Groupthink is one of the most famous concepts in social psychology; it has broad appeal in the discipline, and in political science and business. It is presented as fact throughout the land (we will bet you can guess where we are going with this). In reality, however, it has received little research support. One problem is that it is extremely difficult to do research on groupthink. One version of the groupthink model (Janis, 1983) lists seven separate preconditions, seven separate symptoms of groupthink, and seven separate symptoms of defective decision-making. The options for doing research are overwhelming. If you want to choose which combination of the seven preconditions to vary in an experiment, for example, you have 127 different options (a quick calculation, sketched at the end of this subsection, shows where that number comes from). Then, you would need to operationally define your chosen variables, another difficult task. Rather than the huge number of options spurring researchers to try many tests of the concept, it seems to have stifled research. An article commemorating the 25th anniversary of the groupthink concept found that an average of only one empirical study per year was published (Turner & Pratkanis, 1998). There have been several compelling case studies through the years describing disastrous decisions that have occurred when several groupthink preconditions and symptoms were present, but you have to remember that it is difficult to generalize from case studies. It is impossible to know whether the poor decisions resulted from groupthink or from some unknown and unnoticed factor. Still, however, the case studies are worth listing. For example:

  • The Bay of Pigs invasion of Cuba by US-backed Cuban exiles in 1961
  • Lyndon Johnson’s decision to escalate the Vietnam War
  • Richard Nixon’s decision to break into Democratic party offices, setting off the Watergate scandal
  • The Space Shuttle Challenger explosion
  • The widespread misprediction by news organizations of Donald Trump’s victory in the 2016 presidential election

Although the reasons for these decisions are complex, important elements of groupthink were present in all of them: high levels of group cohesiveness among the decision-makers, strong leadership that had a preconceived idea about the “correct” decision, and threats from the outside. According to Janis, these key preconditions lead members of the decision-making group to have a strong desire for consensus, to be closed-minded, and to believe strongly in their moral superiority. The consequences are poor decisions characterized by too little examination of options, failure to recognize the risks of the chosen action, and an over-commitment to the poor decision (Turner & Pratkanis, 1998).

Although Janis believed that cohesiveness of the group was a very important aspect of groupthink, most research has not verified that it is. Similarly, the presence of an outside threat may not be as important as Janis originally thought. The one groupthink precondition that has found the strongest support is the presence of a strong, biased leader, one who strongly favors a particular decision at the beginning of the process (Esser, 1998). Currently, many psychologists believe that groupthink is a useful concept to guide research but that it does not explain defective group decision-making nearly as well as its popularity would suggest (Kerr & Tindale, 2004; Turner & Pratkanis, 1998).
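
As promised, here is where the figure of 127 options comes from. The short sketch below (in Python, purely as an illustration; the precondition labels are our own shorthand, not Janis’s exact wording) counts every non-empty subset of seven preconditions that a researcher could choose to manipulate, which works out to 2^7 - 1 = 127.

    from itertools import combinations

    # Seven groupthink preconditions (labels are illustrative placeholders only).
    preconditions = [
        "cohesiveness", "insulation", "directive leadership", "member homogeneity",
        "lack of decision procedures", "external threat", "low collective self-esteem",
    ]

    # Count every non-empty combination a researcher could choose to vary.
    options = sum(
        1
        for k in range(1, len(preconditions) + 1)
        for _ in combinations(preconditions, k)
    )

    print(options)  # 127, that is, 2**7 - 1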

Social Facilitation

Sometimes, we are influenced by what others are doing, as indicated by the concept of descriptive norms. Sometimes, we are influenced by our expectations of how others will act or what they expect of us. And sometimes, it is literally the fact that other people are there that influences us.

Many serious fitness enthusiasts across the US discovered an unpleasant truth during the spring 2020 COVID-19 shelter-in-place. As you may know, running races around the world were canceled, from the prestigious (and enormous) Boston Marathon, to countless local 5k and 10k races. Many race organizers replaced their events with virtual races: participants strapped on their GPS watches and raced the required distance alone in their own neighborhood. And many found that their times were quite a bit slower than expected. In short, it is very difficult to run fast when you are alone; it is much easier when you are surrounded by hundreds (or thousands) of other people.

This is social facilitation; it is the social psychologists’ explanation of the common race-day excitement that leads to fast times. One of social psychology’s first discoveries, social facilitation is the tendency for people’s performance to improve when other people are present. It has been demonstrated many times and in many situations, such as the rate at which children wind a fishing reel and the ability to produce word associations and solve arithmetic problems (Allport, 1920; Dashiell, 1930; Triplett, 1898).

But wait, you might think. What about that time I gave a speech and the audience got me so nervous, I could barely remember my name, let alone what I was supposed to say? That sure does not sound like social facilitation. Well, you may have a career ahead of you as a famous social psychologist. Robert Zajonc (1965) noticed the same discrepancy (which others had observed previously) and proposed that the presence of others does not necessarily produce facilitation. What it does is increase arousal, and arousal is likely to produce the response that is most dominant. If the dominant response is the correct one for the activity you are doing, as would be the case if the activity is straightforward or one that you do well, then the result will be social facilitation. If some other response is dominant, such as being nervous because the task is difficult or you are not very good at it, the result will be social hindrance. Perhaps you did not need us to say it, but you probably should have practiced your speech more.
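
Zajonc’s account boils down to a simple conditional, which the sketch below summarizes (in Python, as our own toy illustration; the function name and its inputs are hypothetical simplifications, not a published model): the presence of others raises arousal, which helps when the dominant response is the right one and hurts when it is not.

    def predicted_effect(others_present, task_well_learned):
        """Toy summary of Zajonc's (1965) dominant-response explanation."""
        if not others_present:
            return "baseline performance"   # no audience, no extra arousal
        if task_well_learned:
            return "social facilitation"    # arousal strengthens the correct, dominant response
        return "social hindrance"           # arousal strengthens nervousness or errors instead

    print(predicted_effect(others_present=True, task_well_learned=True))   # social facilitation
    print(predicted_effect(others_present=True, task_well_learned=False))  # social hindrance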

Bystander Effect

In Module 2, we described a research study in which participants who were talking about college adjustment in a group over an intercom heard one of the group members suffer an epileptic seizure (Darley & Latane, 1968). The researchers discovered that the larger the size of the group, the less likely individuals were to try to help the student in trouble. This failure to help became known as the bystander effect . Darley and Latane began this line of research in response to a well-publicized gruesome murder that had been committed in 1964. In an interesting twist, the reports of this murder were wrong, but they still led to one of the most important findings in the history of social psychology. As the original story went, in the early morning of March 13, 1964, the manager of a bar, Kitty Genovese, was attacked and murdered on her way to her New York City apartment after work. Ms. Genovese began screaming when the attacker began stabbing her, but apparently no one came to help or called the police. The attacker was briefly frightened away and then returned to finish murdering her. The entire attack took between 30 and 45 minutes, and there were 38 witnesses watching from their windows. The original reports were that not one even called the police until Ms. Genovese was dead.

Observers were disturbed and judgmental. They said the witnesses were uncaring, callous, big-city dwellers. The country was doomed because people no longer felt enough empathy to help a stranger in need. First, let us set the record straight. Those original reports missed many crucial details. In reality, there is no evidence that there were 38 witnesses (the prosecutor in the trial found only 3 who could identify the killer); a number of neighbors did try to help immediately by calling the police and yelling to frighten the murderer away; one witness, in particular, rushed to Ms. Genovese’s aid and comforted her until the police arrived; and Ms. Genovese died in an ambulance, well after police had been called (Kassin, 2017; Manning, Levine, & Collins, 2007; Takooshian et al., 2004).

Still, at least some people who witnessed part of the attack did do nothing. Darley and Latane’s line of research discovered that key factors in the situation play a major role in people’s failure to help in times of emergency. Reverse those situational factors, and helping behavior increases dramatically.

One key situational factor that explains the Kitty Genovese case and many other failures to help is the number of bystanders. The more people there are, the less responsibility each one feels individually to act; this effect is called diffusion of responsibility . People essentially say to themselves, “someone else will call the police.” The problem is, they are all saying that. Remember the bystander effect the next time an emergency situation arises. You may be the only bystander who realizes that everyone else is assuming that another person will take responsibility to act.

Video: “The Bystander Effect: The Death of Kitty Genovese.” You can view it directly at https://youtu.be/BdpdUbW8vbw

Darley and Latane (1970) also discovered several additional factors that help determine whether or not an individual will help in an emergency. First, you have to notice the event and judge it to be an emergency (several witnesses in the Genovese murder reported that they did not realize it was an attack). Although this may seem simple, real emergencies can often be ambiguous. Have you ever seen someone lying on a sidewalk in a large city? Was it a homeless person sleeping, or was the person injured and unconscious? Then, you have to know what to do. If the person is unconscious, do they need CPR? Do you know how to administer CPR? Again, these can be very difficult to know. Finally, after all of that, you have to decide to take action. Only then will you be able to start helping in the emergency.

bystander effect: the common finding that individuals will fail to help others during an emergency

diffusion of responsibility: one of the reasons for the bystander effect; individuals fail to take responsibility to help in an emergency when other people are present

  • Try to recall an example of conformity in yourself or a friend (perhaps the episode you came up with in the Activate exercise). Which of the principles that encourage conformity can you recognize?
  • Have you ever been influenced by injunctive or descriptive norms? Which type do you think is more effective on you? Have you ever tried to use them to influence other people?

21.5. Attempts to Influence Others

Imagine that you are married or living with a life partner (easy to do if you are married or living with someone). How would you try to persuade your spouse or partner:

  • that you should go out to dinner tonight
  • that you should buy a new car
  • that you should go camping on your next vacation

Are there any important differences in your strategies for the three situations?

  • What would it take for you to administer painful electric shocks to a complete stranger?

One key source of influence, as you have just seen, is a group. The group effects discussed in section 21.4 are powerful and can be manipulated, but they take place even though no one is trying to influence us. As we are sure you realize, we are certainly prone to influence when that is someone’s goal.

The two primary kinds of “on purpose” influence are persuasion and obedience. The key distinguishing element between the two concepts is authority. If you do not have any authority over another person—for example, the two of you are peers or the other person has authority over you—your attempts to influence would be called persuasion. If you do have authority over the other person—for example, you are a parent—the influence is called obedience. A child’s attempts to get her parents to let her stay up late to watch the 1959 sci-fi classic The Brain That Wouldn’t Die would be persuasion. The parent’s insistence that the child go to bed at 8:30 is an attempt to elicit obedience.

persuasion: an attempt to influence people when you have no authority over them

obedience: influence of an authority figure over another person

Persuasion: Influence Without Authority

Suppose a sales associate is trying to persuade a customer to purchase a new $130 pair of running shoes. She begins by describing the innovative pronation control features, the extra cushioning provided by the brand-new (patented, of course) gel composite in the forefoot, and the carbon rubber outer sole that is both durable and lightweight. By the time she gets to the mesh upper material that allows a runner’s feet to breathe, some customers are already taking off their shoes to give the new ones a “test drive,” while others are dumbfounded as they begin eyeing the $65 pair on the shelf. Why does the same persuasion effort work so well for some people and fall so flat for others? You will be able to select the types of persuasion strategies that are most likely to be effective if you know the answer to one question: how likely is it that the person you are trying to persuade will be devoting a lot of effort to making the decision?

In the running shoe example, what if the customer is an often-injured, self-described serious runner who has just recovered from an Achilles tendon injury brought on by the wrong pair of shoes? He subscribes to several running magazines and is knowledgeable about most of the major technologies that running shoe companies have developed. He paid $170 for his last pair of running shoes, which he knows have 350 miles on them. This customer’s background and current situation are likely to make the choice of a new pair of running shoes a fairly important one, and he will probably be devoting a great deal of effort to the decision.

On the other hand, what if the customer was attracted to the store by the flashy window display? He is not a runner but has been thinking about starting so he can get in shape for his 20-year high school reunion. He has never spent more than $60 for a pair of athletic shoes before and has never really thought about the differences between running shoe models. He essentially thinks of all athletic shoes as “sneakers.” This customer is not likely to be devoting a great deal of effort to choosing among the different kinds of shoes. He is very unlikely to be persuaded by the technical details about the features of the shoes. He may very well be persuaded, however, if he learns that Eliud Kipchoge, the current world record holder in the marathon, runs in these shoes.

These relationships between the likelihood that an individual will be devoting effort to a decision and the appropriate persuasion strategy have been summarized by the elaboration likelihood model (ELM) of persuasion (Petty & Cacioppo, 1981; 1986). According to the ELM, there are two basic methods of persuasion, the central route and the peripheral route. The central route to persuasion (sometimes referred to as central route processing) is appropriate when the person to be persuaded is motivated to devote effort to the decision. Because the person will be scrutinizing the persuasion attempt, the central route strategy involves making solid, strong arguments that convey the important (i.e., central) features of the persuasion topic, such as the technical features of the running shoes. The peripheral route to persuasion (sometimes referred to as peripheral route processing) is appropriate when the person to be persuaded will not be devoting effort to the decision. This strategy, actually a number of strategies, involves associating the persuasion topic with peripheral (i.e., unimportant or irrelevant) cues that nevertheless may influence the person to like it. For example, the fact that a man who can run 26.2 miles at a pace of 4 minutes and 38 seconds per mile runs in a particular pair of shoes says nothing about whether you should be running in those shoes (unless you can run that fast, in which case, we apologize for underestimating your running ability). There are many different kinds of peripheral cues, such as famous endorsers, association with attractive models or catchy music, or other classical conditioning effects. Just remember, any time the persuasive attempt does not really contain solid information that will stand up under scrutiny, it is taking the peripheral route.

Of course, persuasion is not just the goal of people who are trying to sell you something, although you will certainly be able to recognize these principles in marketing efforts. You can also see (and use) them in many other everyday persuasion situations. For example, a political candidate who runs an ad in which they are surrounded by American flags without actually saying anything substantive is using the peripheral route. A friend who flatters you before asking you if they can borrow your car is using the peripheral route. A child who draws you a picture so you will forgive him for squirting yogurt on the stereo is using the peripheral route. Remember that the success of these persuasion attempts depends on the likelihood that you will be motivated to devote effort to processing the information. If you are prepared to scrutinize the merits of a political candidate’s position, and all they come up with is “I love the flag,” you are likely to be insulted rather than persuaded. On the other hand, if you are not paying very close attention, the flag appeal may be quite effective (there is a civics lesson not very subtly hidden in here).

elaboration likelihood model: a theory that explains how different attempts to persuade others will be successful based on how likely the target is to scrutinize the attempt

central route to persuasion: a persuasion strategy that employs solid reasoning and strong arguments

peripheral route to persuasion: a persuasion strategy that relies on irrelevant cues to persuade

Some Specific Persuasion Techniques and Tricks

Individuals who are skilled at persuasion often use principles that have been discovered and developed through psychological research. Sometimes, they figure out these techniques on their own. Often, however, when persuasion is part of someone’s job (for example, a salesperson), these techniques are specifically taught. You probably realize this without us saying it, but these can all be seen as specific kinds of peripheral cues.

Triad of Trustworthiness

We are sure it comes as no surprise to you to learn that persuaders who are seen as trustworthy are more effective. Psychologists have identified 3 key concepts that can make someone appear more trustworthy:

  • Authority (having an official position of authority, being well-credentialed, and appearing knowledgeable and competent)

Note that for the triad to work, the persuader only needs to appear trustworthy. Thus, these principles can be manipulated by an untrustworthy individual to fool us into trusting them.

The remaining strategies have been adapted from Robert Cialdini’s excellent book, Influence: Science and Practice (2008).

Reciprocation

“There is no duty more indispensable than that of returning a kindness.” (Cicero, quoted in Goulding, 1960).

Reciprocation might be a universal human norm. In other words, in every (or nearly every) culture in the world, people are expected to return kindness with kindness. If someone does a favor for you, you are expected to return the favor. And people can use this principle to persuade. For example, a successful door-to-door vacuum cleaner salesperson carried rolls of paper towels to give to prospective customers as soon as they opened the door. That little impulse to repay the kindness, even a small one, was often enough to get the customer to listen to the salesperson’s pitch.

This kindness to be reciprocated can take many forms. Think about the free samples that grocery stores used to hand out in the days before COVID-19. They were not only there to allow you to discover what the food tasted like. They were a small gift given to you by a pleasant person. The expectation is that you will repay this kindness with a purchase. When a website offers you valuable free information (for example, a “white paper” about investment strategies), they intend for this kindness to be repaid by your becoming a client. Keep in mind that for such a large commitment, the reciprocation alone is often not enough. That is why they will often use some of these additional strategies.

Commitment and Consistency (Foot-in-the-Door)

People like to appear consistent. So if you get someone to make a commitment, you can use this consistency desire to gradually get them to commit more and more. This technique, commitment and consistency, is also known as the foot-in-the-door technique, as the initial goal is to get a very small commitment (that is the foot-in-the-door part). Once that is achieved, the persuader will gradually escalate the commitments asked for. For example, a canvasser knocks on the door and asks the resident to sign a petition in favor of some social cause. After a couple more commitments (for example, asking which neighbors are also likely to sign the petition), the canvasser springs the real ask: a donation to support the organization spearheading the effort.

Or here, magician Derren Brown uses commitment and consistency to persuade strangers to hand over their wallets:

Video: “Russian Scam (Complete)”

You can also access the video directly at: https://youtu.be/DR4y5iX4uRY

Reject Then Retreat (Door-in-the-Face)

Let us return to our paper-towel-gifting vacuum cleaner salesperson from above. After a 45-minute demonstration with the potential customer (making great use of commitment and consistency, no doubt), it is time to make the sale. The salesperson announces that the price of the vacuum cleaner is $2000. The customer is aghast! That is way too much money! “Well, let me call my manager, and see what I can do,” is the reply. Minutes later, the customer is offered the sale price that is going to be offered next week: $1000. You see, the salesperson never expected to get $2000 (of course, they wouldn’t turn it down if the customer accepted the price though!). Rather, it was the reject-then-retreat technique: come in with an outrageous opening offer that is expected to be rejected. Then, “give in” and offer what seems like a much more reasonable price. It is like the teenager asking his parents to spend the night in Chicago, knowing that his parents will say no, so he can “settle” for spending the night with his friend across town.

Social Proof and Testimonials

Social proof is a lot like using descriptive norms to persuade. We decide what to do by looking at what others do, or did, in a similar situation. So advertisements that tell us that a particular brand of car, soft drink, or laundry soap is the best-selling brand are using social proof. We can also look beyond sheer numbers of people for other versions of social proof. For example, testimonials, in which a customer or, better yet, a celebrity sings the praises of the product we are considering, can be a very persuasive tool.

Scarcity

The fact that an item is scarce seems to make it desirable, an observation that follows from the basic economic principles of supply and demand (i.e., when demand is high, a seller can charge more for the product). Of course, persuaders have found a way to manipulate this to their benefit. Now, we admit this could be completely innocent, but doesn’t it seem a bit odd that Apple never seems to make enough iPhones when they launch a new model (or, if you like, substitute Nintendo, Sony, Tesla, or any of the countless other companies that somehow can’t seem to make enough of their products)? Even simple matters like putting a limit on the number of items you are allowed to purchase make the product seem scarce and increase sales.

Many of these techniques work, in part, because people often go through life on automatic pilot, so to speak. We operate on routine and habit without much thought. Ellen Langer (1992) called it mindlessness. When we are operating mindlessly, we are prone to respond to what Cialdini called trigger features, environmental cues that lead to a specific response without much thought. For example, when meeting an acquaintance in the hall who says, “Hi, how are you?” many people reply without thinking, almost reflexively, “Fine thanks, how are you?” In this case, the phrase (“Hi, how are you?”) is the trigger feature. If you think about it, you can probably recognize many such nearly automatic responses to common trigger features (for example, reaching for your phone when someone else’s makes a noise, taking out your notebook when you see your professor enter the room, and so on). By the way, if this sounds like classical conditioning, A-plus for you today!

There is no question that a savvy persuader can take advantage of this mindless responding to trigger features. For example, some companies produce private-label store brands that resemble name-brand products, apparently hoping that consumers will respond to the approximate appearance and pick up the look-alike.

Or again, Derren Brown has an entertaining (and amazing) demonstration:

Video: “Derren Brown - Paying with Paper”

You can also access the video directly at:  https://youtu.be/3Vz_YTNLn6w

By simply getting the vendors into a mindless conversation, and slipping in the trigger feature phrase, “Take it, it’s fine,” he was able to pass off blank paper as money. Interestingly, the one person for whom it did not work was the street vendor, who probably cannot afford to be mindless because people might try to rip him off routinely.

trigger features: environmental cues that lead to a specific response without much thought.

Obedience: Influence with Authority

Above, we introduced the concept of obedience with the example of a parent enforcing an 8:30 bedtime. Your own experience as a parent or child may have led you to notice something. Many people recall these often-futile attempts to elicit obedience in their children and think that it is difficult to accomplish. Particularly in the US, where we tend to value independence and individuality, you might think it is nearly impossible to exert authority and force people to be obedient against their will. Well, I have a surprise for you.

In what may be the most famous set of experiments in all of psychology, Stanley Milgram demonstrated dramatically that it can be easy to get ordinary people to obey commands, even commands to commit horrible acts. In a typical experiment from Milgram’s program of research, a participant would be paired with a fake participant, an actor who was working for the experimenter. The “two participants” were informed that they were going to help the experimenter examine the effects of punishment on learning. The true participant was assigned the role of “teacher,” and the actor was assigned the role of “learner.” The teacher’s job was to read a list of word pairs to the learner and then test the learner by reading the first word of the pair. The learner was to respond with the second word of the pair. If the learner was correct, fine; the teacher moved on to the next word pair. If the learner made a mistake, however, the teacher was instructed to punish the learner with a painful electric shock. Before beginning the teaching-learning process, the learner was brought to an adjacent room out of sight of the teacher and strapped into a shock-receiving contraption. Then the procedure began. Because the whole setup was fabricated (including the shocks; no real shocks were given), Milgram was sure to plan it so that the learner made lots of mistakes, and for each one, the teacher was ordered to administer a shock. For every successive mistake, the teacher was instructed to use a stronger shock than the one used previously. The shock-generating machine (also fake, of course) was clearly labeled with increasing voltage levels, and the highest levels were labeled, “Danger- Severe Shock,” and “XXX.”

As the teacher (the true participant) began to protest and complain that he or she wanted to stop the experiment, the experimenter, a stern man in a white lab coat, urged the teacher to continue, saying things like, “It is absolutely essential that you continue,” and “whether the learner likes it or not, you must continue until he has learned all of the word pairs.” I should point out that it is difficult to convey the drama and realism of this experimental situation in writing. Although you may feel that the setup is so obviously contrived that no one would believe it, video recordings of the actual experimental sessions show that the participants took it very seriously.

At this point, I would like you to recall the answer to one of the questions in the Activate section. What would you do if you were a participant in this experiment?

  • Early on, the learner is in obvious pain; he yells and demands to be released from the experiment. Behind you, the experimenter continues to order you to continue the procedure. Do you continue or stop the experiment?
  • If you continue, as the shock level increases, the learner complains more loudly. Soon, he reports that his heart is starting to bother him. Still, the experimenter commands you to go on. Do you continue or stop the experiment?
  • If you continue, the learner, in an attempt to quit the experiment, refuses to answer any more. The experimenter tells you that a non-response is considered a wrong answer, so you must administer the electric shocks. Do you continue or stop the experiment?
  • If you continue, after several refusals, each followed by a stronger shock and very loud screaming, the learner goes completely silent. He is obviously unconscious and can no longer respond. The experimenter tells you that you must continue the experiment. Do you continue or stop the experiment?
  • If you continue, after several completely silent trials, you have reached the top levels of the shock generator, the ones labeled “Danger- Severe Shock” and “XXX.” The experimenter tells you that you must continue the experiment. Do you continue or stop the experiment?

We assume that by now most of you have “dropped out” of the experiment. Well, do not be so sure. In Milgram’s original studies, more than 50% (the exact number is 65 percent) of the participants obeyed the experimenter fully, past the point of the learner passing out, all the way to “XXX”.

Many people learn about these results and conclude that Milgram’s experiment demonstrated that it is easy to get “some people” to obey orders to commit horrible acts. After all, “only” 65% complied with all of the orders. Surely, strong-willed people, people taught to question authority and think for themselves, would not obey such obviously evil commands. We think a few details are in order. First, no one refused to begin the experiment. In other words, everyone was willing to inflict some pain on the learner simply because the white-coated, bossy experimenter commanded it. Second, and more importantly, you may have missed an important point (one that we have purposely not emphasized yet). This section is about the Milgram experiments, not experiment. In reality, Milgram conducted a series of experiments that varied many aspects of the experimental situation. By introducing some changes to the situation, Milgram was able to drive obedience down to 0 (that is, nobody obeyed). At the other extreme, he was able to design a situation in which 93% of the participants complied fully. It should also be noted that in a few almost direct replications of Milgram’s studies, researchers have found that many participants were still willing to obey the experimenter all the way to “XXX” (Burger, 2009; Doliński et al., 2017).

These are easily some of the most famous experiments in all of psychology and Milgram’s original interpretation has been accepted uncritically by many for decades (with only occasional discussions of ethics). But there are old criticisms and new developments that call into question the major conclusion that Milgram’s work revealed the dangers of blind obedience to authority. As you will see, the results are still very instructive and useful, but perhaps not what we have thought all this time.

As early as 1968, critics were concerned about the possibility that participants might figure out the deception in studies like Milgram’s (Orne & Hammond, 1968). In that case, it would not really be obedience that Milgram was observing, rather just a version of participant demand, in which research participants play their expected role to perfection (Module 2). Indeed, a previously unreported analysis that Milgram conducted revealed that most (but not all) participants who really believed the learner was being shocked did not obey (Perry et al., 2019). So, it is very likely that the true level of obedience was lower than Milgram reported.

But even among the participants who believed the deception, it might not have been obedience that made them comply. A finer-grained analysis that looked at participants’ responses to specific verbal prods found that the prods that most resembled orders were rarely followed. The ones that emphasized science and encouraged identification with the experimenter and scientific process tended to be the ones that did the trick (Haslam, Reicher, & Birney, 2016). This reinterpretation is consistent, not with blind obedience, but with something closer to persuasion. Reicher, Haslam, and Smith (2012) called it engaged followership, compliance that comes from identification with the leader. In other words, the participants were likely to follow (not obey) when they believed in and were committed to the experimenter and the scientific process in the study more than they identified with the learner.

Some of the important factors that affected compliance are consistent with both the obedience and the engaged followership interpretations. For example, if the learner was depersonalized and far away from the participant, and the researcher had higher status, participants were more likely to comply. If the learner was located in the same room as the teacher, compliance dropped, and it dropped even more if the teacher had to move the learner’s hand to the shock terminal. If the learner was quiet and in another room, obedience was very high. If the authority phoned in the orders, obedience was very low. Also, if the authority had low status or was not perceived as legitimate, few people obeyed the commands.

engaged followership: compliance that comes from identification with the leader

Whether obedience or engaged followership, many observers have used Milgram’s work to help us understand how many evil acts throughout history have occurred. For example, there are chilling similarities between Milgram’s experiments and the Nazi Holocaust; atrocities committed by armies, such as the US forces in My Lai during the Vietnam War; and widespread genocide, such as Hutus slaughtering hundreds of thousands of Tutsis in Rwanda in the 1990s. If you still doubt that such effects would occur in this century, you would be wrong. Early in 2004, photographs of US military guards abusing, torturing, and humiliating Iraqi prisoners were released and publicized throughout the world.

Milgram’s experiments shed some light on how such horrible acts could be committed—not to excuse them, but to understand them so that we may someday be able to prevent their recurrence. If you put someone in a very powerful situation (and can get followers to identify with your goals), even a good person can commit a horrible act.

Social psychologists have often focused on the dangers of obedience (and now engaged followership). Given humanity’s history of ghastly atrocities committed by people following orders, this attention is warranted. We should remember, however, that these kinds of compliance can be a good thing. First, some cultures across the world and sub-cultures in the US place a great value on obedience. Second, even cultures and individuals who disdain blind obedience in general often recognize its value in some situations. For example, parents who encourage their children to think critically for themselves and to question authority (respectfully, of course) realize that sometimes children must “do what they are told.” A three-year-old does not fully understand what it means to be hit by a car and killed, so they must learn to obey a parent’s commands to stay out of the street.

We are left with a bit of a puzzle, then. Why can it be difficult to elicit obedient behavior in children when it can be easy to do so in strangers? We think that part of the answer lies in the distinction between the original obedience interpretation and the newer engaged followership interpretation. It can be difficult to get a child, particularly a young child, to identify with the parent, even if the parent is aware of the importance of engaged followership. Many parents instead just give orders, sometimes with threats that the child soon learns can be empty. For example, as soon as the child realizes that the parent has no real intention of “stopping this car and making you walk home,” that particular threat loses its power to elicit obedience.

  • Think of some commercials or other marketing communications. Do they use the central route or the peripheral route to persuasion? Does the choice of strategy seem consistent with the likelihood that consumers will be devoting effort to the decision?
  • Some observers have criticized Stanley Milgram’s obedience experiments as unethical. What is your opinion? (You may want to look at the discussion of ethics in Module 2.)

Module 22: Intimate Relationships

This module is principally about the third part of the definition of social psychology, relating to other people. It is nowhere near an exhaustive list of the different ways that we relate to each other; rather, it covers some of the close relationships that we have in our lives. In particular, we will be covering important aspects of intimate, often physical, relationships that many people have. Some related topics are covered elsewhere in the book. For example, friendship and parenting from Unit 4 are both important aspects of intimate relationships. Also, emotions and behaviors such as helping and aggression clearly are relevant to a discussion of how people relate to each other.

This module has three sections. Section 22.1 covers two key components of close physical relationships, namely love and sexual behavior. Section 22.2 describes the important progress we have made toward understanding sexual orientation, or the gender of people to whom we are sexually attracted. Section 22.3 covers some important observations about marriage, the specific long-term relationship in which love and sexual behavior play prominent roles.

22.1 Love and Sexual Behavior

22.2 Sexual Orientation

22.3 Marriage and Divorce

By reading and studying Module 22, you should be able to remember and describe:

  • Characteristics of love/kinds of love (22.1)
  • Why people fall in love: physical attraction, similarity, mere exposure (22.1)
  • Misconceptions about sex (22.1)
  • Sexuality throughout the lifespan (22.1)
  • Evolutionary psychology applied to sexual behavior: long-term and short-term mating strategies, and  evolved preferences in mates (22.1)
  • Sexual response cycle (22.1)
  • Sexual orientation: different sexual orientations, genetic factors, rejected environmental explanations (22.2)
  • Benefits of a good marriage, dangers of a bad marriage (22.3)
  • What predicts and does not predict divorce (22.3)
  • Strengthening marriage (22.3)

By reading and thinking about how the concepts in Module 22 apply to real life, you should be able to:

  • If you have ever been in love, recognize the possible roles of physical attraction, similarity, and mere exposure (22.1)
  • Recognize the phases of sexual response in yourself or a sexual partner (22.1)
  • Recognize behaviors in yourself or others that are consistent with long-term and short-term mating strategies (22.1)

By reading and thinking about Module 22, participating in classroom activities, and completing out-of-class assignments, you should be able to:

  • If you have ever been in love, describe how Sternberg’s triangular theory and Berscheid and Hatfield’s passionate and companionate love apply to your relationship. (22.1)
  • Articulate your response to the information about the causes of sexual orientation. If you agree, why do you agree? If you disagree, why do you disagree? (22.2)
  • Apply John Gottman’s ideas about marriage to a long-term romantic relationship (successful or unsuccessful) in your own life (22.3)
  • What is your opinion about the benefits of scientific research into the causes of love?
  • Have you ever been in love? If so, what led you to fall in love? If not, what do you think will lead you to fall in love with someone?
  • Do you think that US culture is too restrictive or too permissive about sexuality?

If people object to the scientific study of anything more than they object to the study of love, we cannot imagine what it would be. To many people, part of the appeal, the mystique, of love is that it is incomprehensible. Reducing something as wondrous and transcendent as love to psychological, or worse yet, biological, explanations seems to render it sterile and mundane. On top of that, it seems frivolous to try to study love. What possible benefit comes from discovering what leads to love? Recall from Module 4 that a dubious distinction, intended to signify a waste of US taxpayers’ money, was given to research about love (research, by the way, conducted by Ellen Berscheid, about whom you will read soon).

To put it mildly, we disagree with those who object to the scientific study of love. First of all, we are driven by a curiosity about what love is and what makes it happen or not happen. There is a practical reason for studying love as well. Approximately 40% of first and second marriages in the US end in divorce or separation (Bramlett & Mosher, 2002). The vast majority of these people are probably deeply in love when they marry. Later, when something in the relationship changes, a series of events is set in motion that ends in the dissolution of the marriage. Could it be that many of these people entered into the marriage with unrealistic expectations about love that they acquired from romantic depictions of it in movies, books, and poems (Segrin & Nabi, 2002)? Could it be that because they did not understand what love really is, they failed to anticipate how it might change, and were disillusioned by the changes? Could be. Let us take a look at what the science of love has to say and find out.

What is Love?

Think seriously for a moment about what it means to be in love with someone. What kinds of feelings and behaviors are essential parts of romantic love? David Buss (1988) found that both males and females view commitment as a key aspect; he suggests that it is the most important component of love.

Robert Sternberg (1986), in his triangular theory of love, added passion and emotional intimacy to the mix. Intimacy is the emotional closeness that comes from sharing private thoughts. Passion is the desire for physical closeness; it is based on physical attraction, and it includes sexual desire. Each of these components can work together to represent different types of love (e.g., intimacy, romantic, companionate, passion, fatuous, commitment, and consummate). According to Sternberg, strong relationships are those for which the partners are similar in the levels of passion, intimacy, and commitment they feel toward each other (i.e., consummate love). Sternberg (1998) has also noted that individuals harbor their own personal expectations about how romantic love is supposed to be. Relationships that meet those expectations will be satisfying, and those that do not meet them will not be satisfying.


Ellen Berscheid and Elaine Hatfield (1978) have distinguished between two kinds of romantic love. Passionate love is common early in a relationship; it is marked by intense feelings and physical desire. Over time, passionate love often fades in a relationship, leaving companionate love. Notice that we did not say, “leaving only companionate love.” That would imply that companionate love is some lukewarm leftover feeling that lingers after the “real” love has faded. Companionate love is the stuff of commitment and emotional intimacy. Partners who feel companionate love essentially regard each other as best friends. Both types of love are related to happiness but in different ways. Passionate love is related to the presence of positive emotions, whereas companionate love is related to overall life satisfaction (Kim & Hatfield, 2004).

It looks as if romantic love is a universal human experience; researchers using a limited set of criteria found it in 90% of cultures surveyed throughout the world (Jankowiak & Fischer, 1992). There are, however, significant cultural differences in how romantic love is experienced. As you might guess, individualistic and collectivistic cultures differ on several important aspects of love. For example, people from collectivistic cultures tend to value and experience the companionate aspects of love more than people from individualistic cultures, who experience more passion and place more emphasis on personal fulfillment in love relationships (Dion & Dion, 1993; Gao, 2001). In many collectivistic cultures, marriages are arranged and passion is not expected. Instead, couples are expected to develop companionate love over time (Epstein et al., 2013; Levine et al., 1995).

passionate love: love that is marked by intense feelings and physical desire

companionate love: love that is marked by high levels of commitment and emotional intimacy

triangular theory of love: Robert Sternberg’s theory that love involves passion, commitment, and emotional intimacy

Why Do People Fall in Love?

The exact reasons why any two people fall in love are still a mystery. Psychologists have been able to figure out many of the important factors that can lead to love, however. We will mention three of them; the first two you might be able to guess, but we may surprise you with the third.

What first draws you to a potential romantic partner? It is not necessarily true in all cases, but the initial attraction between two people is often physical. And although there is some truth to the common belief that “beauty is in the eye of the beholder,” people tend to agree very well about the physical attractiveness of individuals, regardless of the ethnic and racial groups of rater and rated person (Cunningham et al., 1995). For example, faces that are symmetrical—that is, the left side is close to a mirror image of the right side—are judged attractive throughout the world, as are faces that do not have extreme features (Jones et al., 2003; Mealey et al., 1999; Rhodes et al., 2002). Some combinations of physical features are very frequently found attractive. For example, females with large, widely spaced eyes and a small nose are judged attractive (Cunningham, 1986). Other features are more dependent on culture. For example, whites rate the attractiveness of heavy females lower than African Americans do (Hebl & Heatherton, 1998; Jackson & McGill, 1996).

Once you discover that you are physically attracted to someone, your next step is to talk to that person. What discovery about the other person is most likely to increase your attraction? Perhaps the biggest factor is the degree to which the other person is similar to you. When you speak to the person, the more they agree with you—on politics, music, sports, television shows, favorite color, whether you are a dog person or a cat person—the more you will like them (Byrne & Nelson, 1965). In short, the common idea that “opposites attract” is nearly entirely a myth. Similarity even extends to physical attractiveness. People very typically wind up with a partner who is close to them in physical attractiveness (Berscheid et al., 1971). Keep in mind that similarity levels in physical attractiveness do not always equate to higher levels of relationship satisfaction (Hunt et al., 2015).

Many people believe that each individual has one true soul-mate, his or her perfect match. If this were true, your soul-mate would probably live in Asia, where well over half of the world’s population resides. Instead, out of the 7.8 billion people on the planet, there are likely thousands, if not millions, of potential partners who would make excellent matches for an individual. In reality, most people fall in love with someone who is close to them, someone they met at work or school, for example (Michael et al., 1994). It is not simply that having someone nearby magically attracts you to him or her. Rather, proximity affords the opportunity for frequent contact, and that is what makes you like the other person. This effect, called mere exposure, was discovered by Robert Zajonc (1968), and it is one of the most well-known effects in psychology. It has been demonstrated many times, with many different stimuli, such as pictures, sounds, and characters from an unfamiliar alphabet. It also works with people. For example, in one study, the researchers varied the number of times (0, 5, 10, or 15 times) that four female research assistants attended a class during the semester. At the end of the semester, they asked the students how much they liked each assistant. The more times the assistant had attended, the more the students liked her (Moreland & Beach, 1992). We confess, this is the least magical reason for falling in love, but in a way, it is the most intriguing. Think about it. Your affection for someone increases simply because you come in contact with the person a number of times.

Human Sexual Behavior

By now, you may have noticed that throughout this book, we have been illustrating concepts with thinly disguised examples from our own lives, friends and families. We will not be doing that in this section.

We are simultaneously attracted to sex and shielded from it. It is mysterious and alluring, yet forbidden and shocking. That is how we can have a situation like the 2004 Super Bowl halftime show, during which two entertainers shocked hundreds of millions of people with a sexual display that may or may not have been choreographed. When Justin Timberlake grabbed at Janet Jackson’s costume, her breast was exposed on live TV. This “wardrobe malfunction” led people to express outrage at such an indecent act and launched a Federal Communications Commission investigation. Yet, the exposure was the single most frequently replayed moment ever recorded by TiVo, an early digital video recorder (DVR) that allowed viewers to pause and replay live TV, and it was the most searched-for item ever reported by the early internet search engine Lycos.

Misconceptions about sex

In light of our culture’s contradictory feelings about sexual behavior, many people’s interest in sex ends up being indulged secretly. It is, after all, an intensely private act. People’s discomfort with admitting to an interest in sex and asking open questions about it leads to a great deal of misinformation. In the case of romantic love, misinformation may contribute to the high rate of marital discord and divorce. In the case of sex, ignorance and misinformation are even more serious; they contribute to sexually transmitted diseases or infections, unwanted pregnancy, and even distorted attitudes that can lead to sexual violence.

To give you an idea of what we are talking about, let us address a few commonly held misconceptions about sex in the US. Keep in mind that this is by no means intended to be an exhaustive list.

  • Misconceptions about sex education. Many people believe that sex education leads to sexual behavior and that abstinence-only sexual education is the most (or only) effective way to reduce early sexual activity and teen pregnancy. As you might suspect, this has been a heavily researched topic, but because some of the research in this area is conducted by people who have a strong interest in the outcome, some of that research is problematic. A recent review of reviews has drawn some solid conclusions, however. The researchers identified 37 review articles containing 224 individual studies (all randomized controlled trials, the best design for drawing causal conclusions). The overall conclusions were that although abstinence-only education generally increases knowledge about the risks of sexual behavior, it does not lead to positive changes in adolescents’ actual behavior. In contrast, comprehensive education generally improves knowledge, attitudes, and behavior (Denford et al., 2017). The results are not unanimous, though. There are some studies that found no effect, and even a few that found that the education increased sexual behavior. Part of the problem is that there are a great many variations in what the education includes, and several different behaviors that might be affected.
  • Misconceptions about condoms. Many people believe that condoms have a very high failure rate. In reality, if condoms are used correctly, they have a failure rate of 3%. Unfortunately, however, another set of misconceptions concerns the correct usage of a condom. Between one-third and one-half of adolescents in the US have incorrect beliefs about condom usage that can lead to their failure (Brown et al., 2008; Crosby & Yarber, 2001).
  • Misconceptions about rape. Among the myths about rape are these: females commonly lie about being raped, only sexually promiscuous females are raped, and many females secretly enjoy the thought of being raped. A great deal of research has examined these rape myths and discovered that they can be quite harmful. First, you will not be surprised to learn that males are much more likely than females to believe rape myths (Ashton, 1982; Fonow et al., 1992; Hockett et al., 2015). People who believe rape myths are less likely to judge that a scenario meeting the legal definition of rape is actually rape; they assign less blame to a rapist and more blame to the victim than other people assign; they assign shorter sentences to males convicted of rape in “mock trial” research; they score higher on surveys that measure hostility toward females; they have more negative attitudes toward rape survivors; and they are more accepting of interpersonal violence (Lonsway & Fitzgerald, 1994). More dramatically, males who believe rape myths are more likely to admit that they might force a woman to have sex with them (Hamilton & Yee, 1990; Reilly et al., 1992). After this list, we are sure it will come as no surprise that males who believe the myths are also more likely to commit acts of sexual aggression, measured by self-report and by the actual commission of criminal sexual aggression (Murphy, Coleman, & Haynes, 1986; Fromuth et al., 1991).

Sex throughout life

There is a wide range in media portrayals of adolescent sexual behavior. In reality, it can be difficult to gauge adolescents’ sexual behavior because people hold different opinions about what constitutes “having sex.” For example, one survey indicated that 59% of college students did not consider oral-genital contact to be “having sex” (Sanders & Reinisch, 1999). Not surprisingly, rates of oral-genital contact are still fairly high among young adults, with many engaging in these activities without using safer sex practices (Holway & Hernandez, 2017). This furthers the point that young adults may not view oral-genital contact as sex and thus do not perceive it as an activity that has many risks.

Many parents have been concerned about early sexual activity in their children through the years. It is true that adolescents began having sexual intercourse at progressively earlier ages throughout most of the 20th century. The trend peaked in the early 1990s and began to reverse, however. For example, in 1990, 54% of high school students reported that they had ever had sex. By 2002, the figure had dropped to 46% among 15- to 19-year-olds. From there, it fell lower, especially for males. In 2015–2017, 42% of 15- to 19-year-old females and 38% of 15- to 19-year-old males reported that they had ever had sexual intercourse (Martinez & Abma, 2020).

In part because of a lack of consistent and correct information about sex and sexuality, adolescents are in particular danger of contracting sexually transmitted diseases/infections. According to the US Centers for Disease Control, teenagers and young adults ages 15-24  have about half of the sexually transmitted diseases in the US, despite being only one-quarter of the sexually active population (CDC, 2017; Sherman, 2004).

Sexual activity peaks in the adult years, during the early years of marriage. Contrary to many people’s beliefs and the countless “married equals no sex” jokes, married couples have more frequent sex than singles (Twenge et al., 2017). For example, in one of the most comprehensive national surveys on sexual behavior ever conducted in the US, 19% of single males versus 36% of married males and 13% of single females versus 32% of married females reported having sexual activity 2 to 3 times per week (Laumann et al., 1994). There is a gradual decline in married couples’ frequency of sex as they age, from 2.2 times per week in their 20s to 1.3 times per week in their 40s, to 0.6 times per week in their 60s and 0.3 times per week in their 70s (Smith, 1994). By the way, if you are tempted to compare yourself to these averages, do not do it. There is huge variability; a better gauge is your and your partner’s personal satisfaction with your sex life.

Biological and psychological influences on sexual behavior

As you may recall from Unit 3, evolutionary psychology has a great deal to say about human sexual behavior. As it turns out, evolutionary psychology’s explanations of male and female mating behavior are among its most provocative and controversial contributions to the field. As the evolutionary thinking goes, the specific selection pressures faced by a species lead to the adoption of, or preference for, different reproductive strategies. Because males and females throughout our evolutionary history have frequently faced different selection pressures, the two sexes have somewhat different preferences.

A basic distinction is between long-term and short-term mating strategies. A long-term mating strategy denotes a set of behaviors that promote sex only within a long-term, committed, monogamous relationship. A short-term strategy essentially means casual sex with multiple partners.

Both males and females can and do use both, but situations throughout evolutionary history have suggested some important differences related to their use.  Specifically, pregnancy involves a much higher biological investment for females than males. Males produce millions of sperm per day, whereas females have about 400 ova in their entire lives, so right from the start, the conception of a single child represents a much higher proportion of a woman’s lifetime reproductive capacity. In addition, between pregnancy, birth, and breastfeeding, a woman is unable to conceive for a period that can reach 4 years or more for each child. Biologically speaking, the man is done nine months before the birth of the child. The only time that he is unable to conceive again is during the refractory period that occurs after ejaculation, which can be as short as a few minutes (see the section below). The overall result is an enormous biological investment in a child by a female and a very small biological investment by a male. Because of the large differences in parental investment required by males and females in the case of a pregnancy, one would expect females to be more selective in mate choices, even when engaging in a short-term encounter. And this is indeed what researchers have found (Buss & Schmitt, 2011). For example, men are far more likely to agree to have sex with a stranger than women are. In one dramatic demonstration of this, a member of the opposite sex approaches a target participant and says, “I have been noticing you all day, and I find you very attractive. Will you have sex with me?” In one version of this study, 75% of college-aged men said yes, and 0% of women did (Clark & Hatfield, 1989). Similarly, in a survey across 52 different countries, men were found to consistently desire more sexual partners than women do (Schmitt, 2003).

Evolutionary psychologists point out that when men and women are engaging in long-term mating strategies, their preferences are more similar, but that the differences in parental investment lead to some lingering differences between males and females. Although we are simplifying a bit, ancient males were more successful biologically if they chose mates who could conceive and give birth, so they evolved more of a preference for females whose appearance signaled that they were fertile and healthy, such as a youthful appearance, smooth skin, and a specific waist-to-hip ratio. Because these physical cues to fertility are so important, evolutionary psychology predicts that males will place a great deal of value on physical appearance and age when choosing mates. Females, on the other hand, were more successful biologically if the few children that they did bear survived. Thus, they evolved a preference for males who could help them do that. In other words, ancient females preferred males who signaled that they could be good providers for their children. These cues to a man’s ability to provide include high social status, ambition, dependability, willingness to commit, and athletic ability (Buss, 2003).

Evolutionary psychologists have conducted very ambitious research projects to test many of these claims about mate preferences. Of course, the claims cannot be tested directly because no one knows for sure what the specific evolutionary pressures were in ancient times. Instead, the reasoning is that if the evolutionary explanations are correct, then the predicted mate preferences should be seen throughout the world, regardless of culture. In one early study, researchers conducted surveys of people aged 14 to 70 in 37 different cultures around the world, over 10,000 people total, and found strong support for the predictions of evolutionary psychology. For example, in all 37 cultures, they found that men judged the physical attractiveness of a mate more important than women did—although both genders judge it important (Buss & Schmitt, 1993). Similarly, in all 37 cultures, women judged the social status of a mate more important than men did (Buss et al., 1990).

As impressive as these results are, they are not the last word on the issue. Alice Eagly and Wendy Wood (1999; Wood & Eagly, 2012) argued that the gender roles that have developed in a specific culture can have a large impact on mate preferences. Suppose in a particular culture men are expected to work outside the home and provide material support for the family while women are expected to stay at home and care for the children. In this case, you would expect men to select women who appear to be good at fulfilling their expected role, and vice versa. Indeed, all of the cultures in the Buss et al. research have gender role expectations just like these, although the role expectations are stronger in some cultures than in others (that is, some cultures have very rigid gender roles, others are more lax). Eagly and Wood reanalyzed the Buss et al. (1990) results and found that the cultures that have stronger gender role expectations have larger sex differences in mate preferences. Eagly and Wood have drawn the conclusion that gender role inequality in general can explain the existence of the gender difference in mate preferences. Thus, they have offered an alternative explanation of the Buss et al. results that does not rely on an evolutionary theory.

Now you may be wondering which view is correct: Do people choose mates because of preferences that have evolved throughout human history (the Buss et al. view) or because of their adaptations to gender role expectations in the current culture in which they live (the Eagly and Wood view)? Perhaps this is a false dichotomy (see Module 1). In other words, they could both be correct. Similar to the way the nature and nurture issue has played itself out in other contexts, evolutionary psychology can help us to understand the mechanism in nature by which gender differences exist at all. Then, the properties of individual cultures, such as gender inequality, can explain the nurture side of the equation, through which human behavior can stray from or intensify the predispositions that nature has put in place. More recent research has pursued this line. David Schmitt was involved in a survey of nearly 17,000 people from 53 nations (even more ambitious than the Buss research; one article based on this survey project listed 99 individual co-authors!) that supported this view. In another article that described the phenomenon of “mate poaching,” that is, trying to attract someone who is already in a relationship, Schmitt found evidence that supported the impact of evolution, social roles, and individual personality, another example of the complex interaction of nature and nurture (Schmitt, 2004). Most recently, Walter et al. (2020) replicated a version of the multi-culture studies (this time with 45 different countries; 108 co-authors this time around!) and found solid support for the evolutionary psychology predictions of men having a stronger preference than women for attractive, young mates and women having a stronger preference than men for older mates with good financial resources. They also found some support for the gender role expectation side of the explanation as well, as gender equality predicted the actual age of participants’ partners.

Sexual response cycle

Another way to think about the biology of sex is to consider what happens during the sexual act itself. According to Masters and Johnson (1966), the sexual response cycle is similar in men and women. There are four phases: excitement, plateau, orgasm, and resolution. During excitement, the genitals become aroused; the penis becomes erect and the vagina becomes lubricated. In women, the tip of the clitoris swells, the upper portion of the vagina expands, and nipples become erect (as they do in some men, too). During plateau, breathing rate, pulse rate, and blood pressure increase. In men, the tip of the penis swells, and a small amount of fluid (not semen) appears. In women, the outer portion of the vagina contracts and the clitoris is retracted. During orgasm, the genitals contract rhythmically, as do many muscles throughout the body. Breathing rate, pulse rate, and blood pressure are at their highest during orgasm. The pleasurable feeling that both men and women experience is actually quite similar, contrary to what many people think. Vance and Wagner (1976) had men and women write out descriptions of how orgasms felt. A panel that included gynecologists, medical students, and clinical psychologists could not tell the difference between men’s and women’s descriptions. The final phase is resolution, during which the bodies return to normal. After orgasm, men have a refractory period, during which they cannot have another erection or orgasm. The period can last for a few minutes in some men, up to a full day in others; the refractory period tends to grow longer as men age. Women do not have a refractory period, so they are able to have multiple orgasms if they are stimulated again.

excitement: phase of sexual response when genitals become aroused

plateau: phase of sexual response before orgasm, when breathing, pulse rate and blood pressure increase

orgasm: phase of sexual response during which genitals contract rhythmically

resolution: phase of sexual response during which the body returns to normal

refractory period: period of time after orgasm in men during which they cannot have another erection or orgasm

At one level, sexual response is a simple reflex between neural centers in the spinal cord and the sex organs. In men, the spinal cord receives input from stimulation of the penis (or nearby areas), which then sends neural signals through the parasympathetic nervous system—the part of the autonomic nervous system that generally calms the body—to produce an erection (McKenna, 2000). The erection itself results when muscles that surround the arteries in the penis relax, allowing the arteries to fill with blood. With continued stimulation of the penis, the spinal cord sends a message via the sympathetic nervous system (the arousing part of the autonomic nervous system), which begins muscular contractions that control ejaculation, the expulsion of semen from the penis. In women, the clitoris and vagina participate in sympathetic and parasympathetic nervous system reflexes, but the exact mechanisms by which they work are not as well understood (Hyde & DeLamater, 2003).

For both men and women, sexual response is far more than these simple reflexes. For example, tactile stimulation of the penis while a man is washing in the shower usually does not produce an erection, whereas the gentlest touch during the opening moments of a sexual encounter often does. In fact, you may realize that both men and women can enter the excitement phase of sexual response without any physical stimulation at all. Obviously, the brain is involved in sexual response. An individual’s cognition and perception are essential features of sexual arousal (Walen & Roth, 1987). Brain areas that are important for cognition, sensation, and perception, then, such as the thalamus and certain areas of the cortex, are active during sexual behavior. Another area of the brain that appears important for sexual behavior is the limbic system, the set of brain structures that form a ring around the thalamus and are important for emotions in general. Research in which electrodes were used to electrically stimulate the limbic system of monkeys and rats, and fMRI research on humans have both found support for the role of specific limbic structures such as the hypothalamus and the cingulate cortex, usually considered part of the limbic system although it is part of the cortex (Paredes & Baum, 1997; Park et al., 2001; Van Dis & Larsson, 1971).

Hormones are also involved in sexual arousal and behavior, although they are involved far less dramatically than people believe. If you remove all testosterone from the body of a male by castration (as has been done on occasion to violent sex offenders), they will very likely lose interest in sexual behavior—but not immediately and not always completely (Carter, 1992). On the other hand, if a male already has sufficient testosterone, giving them more will probably not increase their sex drive. There is a relationship between the amount of testosterone in the bloodstream and sexual activity in adolescent males, but no solid relationships have been demonstrated in healthy adult men (Sherwin, 1988; Udry et al., 1985). Testosterone is also related to sexual behavior in females perhaps more strongly than in males. Researchers have found correlations between sexual motivation and levels of testosterone in healthy adult females (Morris et al., 1987). It is important to remember that in both males and females once the level of hormones reaches a minimum threshold, changes in the level have a relatively small effect (or no effect) on further sexual behavior.

  • What is your opinion about the mate selection ideas of evolutionary psychologists?
  • Why do you think our culture is so conflicted about sex and sexuality?
  • What, specifically, attracts you to another person romantically?

As you may realize, there has been a marked improvement in attitudes toward people with homosexual orientations recently. Some casual observers might even believe that the struggle for acceptance is over, but unfortunately, that is not quite true. Despite the significant movement in public opinion over the last ten years or so, the topic of sexual orientation is still controversial. Consider the results from 10 different countries in a worldwide survey conducted by the Pew Research Center:

[Figure: Percentage in each of 10 countries who say that homosexuality should be accepted by society (Pew Research Center)]

So maybe in Sweden, the struggle is (almost) over. Many people, often for religious reasons, still do not accept gays or lesbians and believe that same-gender sexual behavior is wrong. Many also note that it is “unnatural,” as sex without the possibility of reproduction cannot be biologically useful. Well, one thing we can tell you is that, despite many unanswered questions, sexual orientation from psychologists’ perspective is far less controversial than it is in the general population. To be sure, there are still serious disagreements about what causes sexual orientation, but there is a growing agreement about several important aspects. If you have strong religious-based attitudes against homosexuality, we are unlikely to change your mind in this short section. You should know what the science of sexual orientation has to say, however. And we will return to the “it is unnatural” argument shortly.

Let us start with some definitions and clarifications. Sexual orientation   refers to the gender to which an individual is sexually attracted and with which the individual is prone to fall in romantic love. Many people categorize sexual orientation into heterosexual and homosexual (or, to use more inclusive language, gay or straight), but that is really a false dichotomy (see Module 1). Why? Because there are more than two sexual orientations, of course. People who have a bisexual orientation are attracted to both genders, people with a heterosexual orientation are attracted to the opposite gender, and people with a homosexual orientation are attracted to the same gender. We will be adopting the term non-heterosexual orientation  to refer to sexual orientations that are not heterosexual. Of course, you probably already knew that there are many different sexual orientations (asexuality, bisexuality, homosexuality,  pansexuality, etc.). There is even the intriguing possibility that sexual orientation falls along a continuum, a position that has some scientific support (Savin-Williams, 2016). Note that sexual orientation is separate from one’s gender identity (Module 17), although there is some relationship, as we shall soon see.

Instead of focusing on understanding non-heterosexual orientations, many psychologists consider that their focus is to understand sexual orientation in general. This is not "political correctness," as some may charge. The reason is that it looks like variations of the same basic mechanisms are responsible for heterosexual and non-heterosexual orientations. Reflect on your answers to the Activate question for this section. Many straight people are puzzled by the question and respond that they do not know why they are attracted to and fall in love with the opposite gender; they just are and they just do. It is the same with people who have a same-gender orientation. Despite this change in psychologists' general focus, however, the practical focus still tends to be on understanding same-gender attraction.

How Common is Non-Heterosexual Orientation?

This has been a remarkably difficult question to answer. There are different ways to measure it (physiological measures, self-reports of behavior or attraction, self-reports of identification), the different measures and people's orientations might not be stable over time, and people might be unwilling to admit to non-heterosexual orientations because of stigmas (Bailey et al., 2016). Let us use one fairly recent, representative estimate to give us an idea of the numbers. Gates (2011) reviewed several individual estimates and concluded that about 3.5% of the US population identifies as gay, lesbian, or bisexual. Altogether, that corresponds to approximately 9 million LGBT (T for transgender) individuals in the US, which is near the population of New Jersey. Nineteen million (8%) report engaging in same-sex sexual behavior, and 25.6 million (11%) report some same-sex sexual attraction. So, we may be talking about numbers around this range: 3.5% to 11% (or 9 to 25.6 million).
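If you want to see where counts such as "9 million" come from, the arithmetic is simply a percentage applied to a population base. The short sketch below is ours; the 250-million adult-population figure is a round number assumed for illustration (it is not taken from Gates, 2011), which is why the printed counts only approximate the ones reported above.

```python
# Back-of-the-envelope sketch: converting prevalence percentages into population counts.
# The population base is an assumed round number for illustration only.
us_adult_population = 250_000_000  # assumption, not a figure from Gates (2011)

estimates = {
    "identify as gay, lesbian, or bisexual": 0.035,
    "report engaging in same-sex sexual behavior": 0.08,
    "report some same-sex sexual attraction": 0.11,
}

for label, proportion in estimates.items():
    count_millions = proportion * us_adult_population / 1_000_000
    print(f"{label}: roughly {count_millions:.1f} million people")
```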

non-heterosexual orientation: sexual orientations that are not heterosexual

sexual orientation: gender to which an individual is sexually attracted and with which the individual is prone to fall in romantic love

What Causes Sexual Orientation?

Before describing some specific possible causes of sexual orientation, let us address the "it is unnatural" argument that some advance against non-heterosexuality. It turns out that this is a relatively easy argument to refute using the rules of deductive logic that we outlined in Module 7.

If p, then q; p; therefore q.

  • If something appears in nature, it is, by definition, natural. (If p, then q)
  • Homosexual behavior, defined as genital interaction between animals of the same sex, appears in nature in hundreds of species (see below). (p)
  • Therefore, homosexual behavior is natural. (q)
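For readers who like the formal notation from Module 7, the argument is an instance of modus ponens; the line below is simply our restatement of the three steps above in symbols.

```latex
% Modus ponens, the argument form used above:
% p = "the behavior appears in nature", q = "the behavior is natural"
\bigl( (p \rightarrow q) \land p \bigr) \;\therefore\; q
```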

Indeed, same-sex genital interaction, sometimes to orgasm, has been observed at least occasionally in hundreds of species, including bonobos, mountain gorillas, macaques, baboons, and dolphins.

As we turn to describe causes of sexual orientation in humans, it might be useful to start with an important statement from a comprehensive review of the science published in 2016: “No causal theory of sexual orientation has yet gained widespread support. The most scientifically plausible causal hypotheses are difficult to test.” (Bailey et al., 2016, p. 46). The reasons for this might be obvious to you. First, it is impossible to conduct experimental research in this area. Second, even longitudinal research can be a challenge, given the political difficulties that sometimes accompany research in this area, and the relatively low prevalence of non-heterosexual orientations. For example, a researcher would need to start out with 10,000 participants to follow over decades in order to have an adult sample of 350 people with a non-heterosexual orientation. And that is assuming that everyone remains in the study for the next 20 years (which never happens). Researchers are forced to do a lot of retrospective design studies, in which adults remember experiences and thoughts from their childhood, and prospective design studies, which are similar to longitudinal studies with one key difference. In a prospective study, participants are chosen at the outset because they are likely to be interesting subjects as time goes on. You probably noticed that there are key limitations to each of these study types: retrospective studies rely on imperfect human memory, and prospective studies have a selection bias.

Even with all of the difficulties, scientific support for what Bailey and his collaborators called non-social causes is much stronger than for social causes. One reason this is important is that people’s attitudes toward non-heterosexuality are strongly related to the types of causal explanations they believe. If they believe that non-heterosexual orientations result from social causes, such as early sexual experiences and societal acceptance, they tend to hold negative attitudes. On the other hand, people who believe in non-social causes such as genes and hormones tend to hold positive attitudes.

One of the strongest correlates of adult non-heterosexuality is consistent childhood gender nonconformity, when a child (as early as pre-school) behaves like the other gender. In boys, these behaviors can include cross-dressing, playing with dolls, a desire for girl playmates, not liking competitive sports, and wanting to be a girl; in girls, the behaviors are more or less the opposite. Some of the most convincing research (to our eyes) showed research participants videos of children who later grew up to be heterosexual or non-heterosexual; the participants were much better than chance at predicting who ended up with each orientation (Rieger et al., 2008). Gender nonconformity is also more common among adults with non-heterosexual orientations. Because these differences start so early and are seen in cultures throughout the world, it is not likely that the nonconforming behaviors are culturally learned (Bailey et al., 2016).

The non-social explanations consist largely of the role of hormones on development and genetics. First, let us talk about hormones. As you may recall, hormones play an important role in the development of a fetus. For example, developing testes produce androgens, which then lead the sex organs to become male sex organs. These circulating hormones also likely help determine brain differences that are related to adult sexual behavior. It is through this mechanism that heterosexual and non-heterosexual orientations might develop. Again, without being able to experimentally manipulate levels of androgens for a developing fetus, we are left to examine the hypothesis using indirect techniques. There are three parts to the evidence stream:

  • Experimental and observational research in animals. For example, research in rodents has found that manipulating levels of androgens early in development can cause males to exhibit female-typical sexual behavior and females to exhibit male-typical behaviors (Henley et al., 2011).
  • Consistent evidence through case studies in humans . Some humans are exposed to abnormal levels of androgens prenatally because of genetic conditions. For example, women who are exposed to higher levels of androgens before birth have higher rates of gender nonconformity (Berenbaum & Beltz, 2011).
  • Evidence of other consistent effects of different levels of early hormones in humans. This might sound like the oddest piece of evidence, but "finger-length ratio" studies are consistent with this early hormone exposure hypothesis. Compare the length of your index finger and ring finger on one hand. Females tend to have a larger ratio of index to ring finger than males do. But females who have been exposed to high levels of androgen prenatally have smaller ratios (more like males), and males who have conditions that make them insensitive to androgens have larger ratios (more like females) (Honekopp & Watson, 2010; Berenbaum et al., 2009). Although finger length ratio differences for non-heterosexual versus heterosexual individuals have only been consistently observed for females, this is still a contender for part of the explanation (Bailey et al., 2016).

Taken together, most researchers agree that exposure to androgens prenatally at least contributes to gender nonconformity, which then can play a significant role in adult sexual orientation (but no one thinks that this is the whole story).

Genetics clearly also plays a significant role in sexual orientation. Recall that behavior genetics research tries to identify differences in a trait in a population (in this case sexual orientation) that result from differences in genes and differences in the environment. A stable estimate of heritability takes years to develop because it requires several individual studies. Bailey et al. (2016) arrived at the best current estimate of 0.32 for the heritability of sexual orientation. This is a moderate level, with approximately one-third of sexual orientation variation appearing to be a result of genetic differences.
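A heritability of 0.32 is easier to interpret if you recall how behavior geneticists define heritability as a proportion of variance. The formula below is a simplified sketch in our own notation (it collapses all non-genetic influences into a single environmental term); it is not taken from Bailey et al. (2016).

```latex
% Heritability (h^2): the proportion of trait variance attributable to genetic differences.
h^2 = \frac{V_{\text{genes}}}{V_{\text{genes}} + V_{\text{environment}}},
\qquad h^2 \approx 0.32 \;\Rightarrow\; \text{about one-third of the variation is genetic}
```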

This means that there is a substantial role for environmental effects. But probably not the social environment. In other words, there is no high-quality evidence that social aspects, such as observational learning, other types of social contagion, parenting, or early sexual experiences influence sexual orientation. The studies that have examined these proposed links are plagued with various combinations of weak to null results, poor quality data and methods, and plausible alternative explanations for relationships that do appear (Bailey et al., 2016).

Rather, it is what we would call the non-social environment that appears important. One of the strongest pieces of evidence of the effects of this non-social environment is the fraternal birth order effect , in which having older brothers increases the likelihood that a male will have a same-gender orientation. This effect only occurs for biological older brothers with the same mother, not with adopted, step- or half-brothers, which would seem to rule out a social environment explanation. Just to be clear, we are saying that there is something in the mother’s uterus that changes when a developing fetus has had older brothers pass through that same uterus. One key possibility is the way the immune system of the mother responds to the earlier brothers, therefore changing the uterine environment for later developing males (Blanchard, 2001).

retrospective design:  research study in which adults remember experiences and thoughts from their childhood

prospective design:  research study similar to a longitudinal study, except that participants are chosen at the outset because they are likely to be of interest as the study goes on

gender nonconformity: when a person behaves or dresses like the societal expectations of another gender

fraternal birth order effect: finding that having older brothers increases the likelihood that a male will have a same-gender orientation

How is Sexual Orientation Discovered?

A key issue about having a non-heterosexual identity is that many people are afraid to admit it. It is still quite socially acceptable in many quarters to insult a gay person openly, and the US Supreme Court ruled only in 2020 that employment discrimination against LGBTQ+ individuals is illegal.

The culture of secrecy, shame, and discrimination contributes to the still problematic issue of violence against lesbians and gay men. The Federal Bureau of Investigation reported that the number of hate crimes—crimes principally motivated by bias about the race, religion, ethnicity, gender, disability, or sexual orientation of the victim—based on sexual orientation was 1,303 in 2017 and 1,196 in 2018. Because many of these crimes are not reported as hate crimes, scholars believe that the true numbers are likely higher.

Sexual orientation is pretty well set before adolescence (Weinberg & Hammersmith, 1981). The first challenge for the adolescent is to discover what their orientation is. This would seem a relatively straightforward process of simply noticing whether you are sexually attracted to men or women. Two factors make this discovery process more difficult. First, remember, there are at least three orientations and a very real possibility that sexual orientation is on a continuum. It can be difficult to fit oneself into a box that has two compartments when the reality is far more continuous and complex. Second, because our culture is overwhelmingly heterosexual and correspondingly still somewhat disapproving of non-heterosexual orientations, adolescents do not approach this period in an unbiased way. Rather, they operate under the assumption that they are straight, and they tend to hold on to that belief for as long as they can. They may deny their orientation to themselves, or be confused about it, and only in time come to accept it (Cass, 1979). Despite the barriers in the way of gay, lesbian, bisexual, and other people with non-heterosexual orientations developing healthy identities, most do seem to succeed. For example, they are as well adjusted as the general population (Ross et al., 1988).

Even after accepting their own sexual orientation, individuals will need to decide how open they will be about it to other people. Can they tell their parents, friends, acquaintances, and employers? Being openly gay can lead to hate crimes, social rejection and disapproval, and until very recently (June 2020) discrimination on the job (Corrigan & Matthews, 2003; Ragins & Cornwell, 2001).

  • Do you have any friends who are gay? Can you see how the ideas from the stereotype, prejudice, and discrimination discussion (Module 22) apply to sexual orientation?
  • What do you think some of the “missing” environmental influences of sexual orientation are?

22.3 Marriage and Divorce

  • If you are not yet married, do you expect to marry someday? If you do expect to marry, or already are married, how likely do you think your marriage is to end in divorce?
  • What are the important differences you can see between the two types?
  • Which differences do you think might be the causes of happiness or discord, and which do you think might be the effects?

News reports in the US during the first half of 2004 were strident: the institution of marriage is under attack. Observers were referring to the decisions in San Francisco and Massachusetts to allow same-sex couples to marry. Now, there is little doubt that the institution of marriage is in decline, but it has nothing to do with same-sex marriage. Demographic changes in the US in the latter half of the 20th century led to sharply increasing numbers of divorced and never-married people. The percentage of households headed by a married couple declined from 71% in 1970 to 53% in 2000 (US Census Report, 2001). The decline is largely a result of people marrying later and divorcing more. Despite the large decline, the majority of US adults do find themselves in a long-term monogamous relationship for a large portion of their adult lives. Among all people over age 15 in the US, 27% have never been married, and fewer than 10% of people over 45 have never been married (US Census Report, 2003).

There are real physical and psychological benefits to being married. People who are married tend to have better physical health and psychological well-being than other adults (House et al., 1988; Kim & McKenry, 2002; Lee et al., 1991; Mookherjee, 1997; Murphy et al., 1997). Longitudinal research in many cases has led most observers to conclude that there is something about marriage that leads to positive outcomes, as opposed to good physical or mental health causing people to get and stay married (although this relationship is surely part of the picture).

However, not all marriages are good. Destructive marital conflict is related to many physical and psychological problems, such as depression, alcoholism, cancer, and heart disease (Beach et al., 1998; Fincham & Beach, 1999). It is also related to the poor adjustment of children, similar to the well-known negative effects of divorce (Grych & Fincham, 1990; Owen & Cox, 1997).

There is no lack of advice for people in search of help with their marriages. A search on Amazon.com for "self-help marriage" returned over 30,000 results in June 2020. However, this abundance of advice is not all worthwhile. John Norcross and colleagues (2003) have undertaken a gargantuan effort to help us wade through the immense number of self-help resources that are on the market. They have conducted a series of surveys of almost 3,500 clinical and counseling psychologists in the US in order to determine the quality of thousands of self-help resources that are available to the general public. In the category of marriage, John Gottman's books were clear favorites (Gottman, 1994; Gottman & Silver, 2000).

Gottman is not simply the author of the self-help books. He has conducted a great deal of research himself, which has appeared in the scientific literature throughout the years. Through his research, Gottman has discovered many key factors that seem to lead to marital satisfaction, dissatisfaction, and stability. The rest of this section summarizes some of them.

What Causes Marriages to Disintegrate?

Many people have heard the famous statistic that the divorce rate in the US is 50%. This is a misinterpretation of the fact that for every two marriages in a single year, there is one divorce. These are not necessarily the same people marrying and divorcing. It is a safer conclusion to predict that someone’s lifetime risk of divorce is around 40% (Glick, 1988). Half of the divorces occur in the first 7 years of marriage (Cherlin, 1982; cited in Gottman & Levenson, 2002). One way that researchers estimate the divorce rate is to measure the number of divorces in a given year for every 1000 married women. According to this measure, the divorce rate in the US hit a 40-year low in 2018, with 15.7 divorces per 1000 married women (Allred, 2019). Gottman is convinced that the divorce rate could be even lower.
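To see why "one divorce for every two marriages in a single year" does not mean that any given marriage has a 50% chance of failing, and what the per-1,000-married-women measure looks like instead, here is a minimal sketch. Every number in it is invented for illustration; none are the actual statistics cited in this section.

```python
# Minimal sketch of two different "divorce rate" calculations, using invented numbers.

marriages_this_year = 2_000_000   # hypothetical new marriages in one year
divorces_this_year = 1_000_000    # hypothetical divorces in the same year (mostly different couples)
married_women = 60_000_000        # hypothetical number of currently married women

# The misleading comparison: same-year divorces divided by same-year marriages.
# Because these are largely different couples, the ratio says little about any one couple's risk.
naive_ratio = divorces_this_year / marriages_this_year
print(f"Divorces per marriage this year: {naive_ratio:.0%}")        # 50%

# A measure researchers actually use: divorces per 1,000 married women per year.
per_1000 = divorces_this_year / married_women * 1_000
print(f"Divorces per 1,000 married women: {per_1000:.1f}")          # 16.7
```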

It can be quite difficult to determine the causes of marital discord, particularly if we try to do it simply by observing high-functioning and low-functioning marriages. For example, a marriage may sour because the partners frequently criticize each other, or the partners may frequently criticize each other because they have a poor marriage. As is so often the case when we are interested in real-world social and emotional questions, we are faced with a correlation, which does not point clearly to causes and effects. One way to help us determine which factor is the cause and which is the effect, however, is to use a longitudinal research design. If we can determine that a factor comes into play long before a couple is unhappy in their marriage, it gives us some confidence that the factor is a cause. Gottman and his colleagues were able to do just this. By observing newlyweds while they discussed an ongoing conflict, they were able to predict which couples would eventually divorce with 91% accuracy.

Although you could probably predict some of what Gottman and his colleagues found in their research, there were certainly a few surprises. First, contrary to popular opinion (and the advice given in some self-help books), anger in a marriage does not predict divorce because, in essence, everyone gets angry. Rather, it looks as if a particular kind of anger, or a particular way of expressing anger, is destructive. Specifically, when couples filled their conflict discussions with high-intensity negativity—criticism, defensiveness, contempt, listener withdrawal, and belligerence—they were much more likely to divorce (Gottman, 1994; Gottman et al., 2002). You don’t even need to be an expert to recognize some of the warning signs, if you know what to look for. In one study, college students were able to predict divorce with 85% accuracy by identifying hostility in husbands, sadness in wives, and lack of empathy in both partners (Waldinger et al., 2004).

Second, Gottman’s team found that one of the most common strategies recommended by marriage counselors was unrelated to the likelihood of divorce. Active listening is a communication strategy in which the listener paraphrases what he or she hears without evaluating. It can be very effective at clearing up miscommunication and is a key component of many therapies. Marriage counselors in particular often teach active listening to their clients so that the couples can use it when they argue. Gottman and his colleagues concluded, however, that the strategy was unrelated to marital stability, not because it did not work, but because couples rarely used it. During a conflict, couples’ emotions are running too high to allow them to employ active listening.

Gottman’s research has dispelled several additional myths. Many people believe that common interests keep marriages strong, but the research did not support this idea. Also, many think that a sign of a strong marriage is when couples reciprocate good deeds with other good deeds. Quite the contrary, Gottman found that this “quid pro quo” is more common in bad marriages; and the flip side occurs, too (bad deeds are reciprocated with bad deeds). Finally, the research has shown that the belief that avoiding conflict is unhealthy (that you should “tell it like it is”) is wrong.

Something that probably will not surprise you is that different behaviors in each partner predicted divorce. When a spouse quickly changed the tone of discussion from neutral to negative—for example, by beginning a discussion of a sensitive topic such as money disagreements by criticizing their partner—the couple was more likely to divorce. Likewise, spouses who would not let their partners influence them were more likely to divorce. Finally, a couple's ability to maintain positive behaviors, such as smiling or using humor, while they were discussing conflicts was related to marriage stability. Gottman has called these sorts of strategies to de-escalate conflict repair attempts. He notes that nearly all couples use them, but they are successful only in stable couples.

active listening: a communication strategy in which the listener paraphrases what he or she hears without evaluating.

repair attempts: a couple's attempts to maintain positive behaviors, such as smiling or using humor, while discussing conflicts.

How to Have a Successful Marriage

So far, nearly everything has been focused on what predicts divorce. This is an unfortunate side effect of the fact that when people seek help for their marriage, they are nearly always at risk for divorce; it becomes the main focus. There is, however, much positive to say. Although couples that are in trouble are the most likely to seek advice to strengthen their marriage, Gottman has noted that all couples can benefit from it. In fact, it is easier to adopt the strategies if the marriage is not yet in trouble. As John F. Kennedy famously said, “The time to repair the roof is when the sun is shining.”

So, while the sun is shining, you might pay attention to many of the characteristics that Gottman has discovered in successful marriages (of course, these will work when it is "raining," too). Perhaps the most important basic idea is that the couple needs to work on friendship first—in other words, on their companionate love. A marriage in which the couple are close friends is not devoid of passion, by the way. Gottman has noted that these marriages are much more passionate than marriages in which the couple tries to generate passion through occasional romantic gestures. Instead, the couple should work on everyday gestures, such as paying attention and responding positively to comments. They should look for opportunities to create a life together, for example, through new family traditions and rituals. Passion flows more freely when each partner feels generally positive about the other, as they do when they are intimate friends. As part of strengthening their friendship, couples need to become very familiar with each other's likes, dislikes, hopes, and fears. They must find and build on fondness and admiration; if the couple is already in trouble, they may need to find these aspects in memory. The partners need to continually remind themselves of the positives. Gottman has said that the encouragement of fondness and admiration is especially important for protecting against contempt (a feeling that you are better than someone else), which may be the most destructive emotion for a marriage.

Every day is filled with a great many opportunities to improve a couple's relationship. Imagine a couple just relaxing together in their family room after dinner. The wife turns to the husband and says, "Hey, let's go on a hike this weekend." The husband is busy watching TikTok videos on his phone and does not even respond to the request. Obviously, this is not a great interaction, but let us analyze it a bit to discover what is going on. Many times throughout the day, romantic partners make bids for connection. In this case, the invitation to go on a hike is a bid to share an adventure together (Brittle, 2015). The husband has three options, and in this case, he took the worst of the three. First, he can turn toward the bid. In other words, he can respond positively, by accepting the invitation. Second, he can turn against the bid, by acknowledging it but rejecting it, perhaps offering an alternative. Although not ideal, turning against does at least keep the lines of communication open. The worst response is turning away, in essence not even noticing or acknowledging the bid. In his research, Gottman discovered that couples who stayed married turned toward each other's bids for attention 86% of the time. Couples who divorced turned toward each other's bids only 33% of the time (Navarra & Gottman, 2013). So there are two elements to the advice: first, recognize when a statement or request is actually a bid for emotional connection. Second, turn toward that bid as often as you can.

In the discussion about predictors of divorce, you learned what a couple should not do during a conflict situation. On the positive side, couples need to learn how to distinguish between solvable problems tied to a particular situation and perpetual problems that reflect an underlying conflict and are likely to recur throughout their married lives. You can often tell the difference by how painful the problem is to discuss. When a couple recognizes that a problem is unsolvable, they should not try to ignore it. Instead, they should develop skills for coping and try to get to the point where they can discuss the problem without experiencing great distress. They should acknowledge that neither spouse will win and avoid doing anything that will make the problem worse.

Finally, Gottman also has some specific advice for couples: accept influence. He emphasizes that attitudes such as "I wear the pants in this family" are fast roads to marital problems. Even in marriages in which one partner is the "boss," that partner still needs to honor and respect the other; the goal is mutual respect.

  • Which of John Gottman’s principles seem the most useful to you?
  • Do you recognize any of Gottman’s warning signs in a current or past relationship or even as general tendencies in your personality?

Module 23: People in Organizations

If you have a full-time job, you may spend more than one third of your time working at it; many people spend much more than that. The eight hours per day that full-time employment entails is more time than most of us spend on any other activity throughout the day, including sleeping. At our jobs, we are expected to be motivated to work hard, learn new skills, cooperate with our coworkers, coddle our clients, manage subordinates, be managed by supervisors, and handle job stress. Understanding and succeeding at all of these tasks would seem quite an undertaking. To us, it sounds like a job for a psychologist.

As luck would have it, there is a subfield of psychology that addresses these concerns, along with many others that pertain to the psychology of the workplace; it is called Industrial/Organizational Psychology. Industrial/Organizational Psychology (commonly called I-O Psychology) is the primary “non-clinical” applied subfield of psychology. It deals with business applications of psychology, especially workplace applications. I-O psychologists use psychological theory and research to solve real world problems that businesses experience.

I-O Psychology can be roughly divided into two main sub-areas. The first is Human Resources-related topics. I-O Psychologists play important roles in defining jobs, selecting and training employees, and designing compensation and benefits packages that will provide appropriate incentives for employees without breaking the company’s bank. The second sub-area is essentially applied social psychology. Topics include how managers can influence their subordinates, how groups can function effectively on the job, what the causes and consequences of motivation and satisfaction on the job are, and how communication functions in workplaces. At large companies, a Human Resources department handles the functions related to human-resources topics. Managers and other employees are more directly interested in the topics from the applied social psychology side. There is definitely a conceptual overlap between the two sub-areas, however. For example, although policies on compensation and benefits typically come out of a Human Resources department, many individual managers are very aware of how these policies affect workers. In particular, these factors are likely to influence workers’ job satisfaction and motivation, two important topics of the social psychology side of I-O psychology.

According to the 2020 Occupational Outlook Handbook, a comprehensive resource about career descriptions, qualifications, and opportunities published by the US Bureau of Labor Statistics, employment for psychologists in general is expected to grow 14% from 2018 – 2028, much more rapidly than the average career; Industrial/Organizational Psychologists will lead that growth. Although a Master’s degree is required for most Industrial/Organizational psychology careers, you can find related work in the field of human resources with a Bachelor’s degree.

According to the Society for Industrial and Organizational Psychology, a division of the American Psychological Association, I-O Psychologists work in companies' departments of Human Resources, Employee Relations, Training and Development, and Organizational Development. They can also work as independent consultants. I-O Psychologists who earn a PhD may also be university professors in Psychology, Organizational Behavior, and Management. In short, particularly for those who earn advanced degrees, there are a great many opportunities for careers in I-O Psychology.

This module has four sections. Each section describes an important topic within the subfield of Industrial/Organizational Psychology. Together, the topics will give you a representative sample of the subfield. Section 23.1 describes the important tasks involved in finding the right people for the job and training them to succeed. It is the section from the Human Resources side of the subfield. Section 23.2 covers the important and related topics of motivation and satisfaction on the job. Section 23.3 touches on some of the important issues surrounding communication and influence in the workplace, issues of special (but not exclusive) importance to managers and anyone else who has to get things done by collaborating with others. Section 23.4 closes out the module with a discussion of diversity, an overarching theme in the workplace of the 21st century.

23.1 Finding the Right Person for the Job

23.2 Making Work Satisfying

23.3 Exerting Influence at Work

23.4 Dealing with Diversity in the Workplace  

By reading and studying Module 23, you should be able to remember and describe:

  • Role ambiguity and job analysis (23.1)
  • Recruiting and selecting employees (23.1)
  • Training: transfer of training, on-the-job training (23.1)
  • Motivation from the individual: need for achievement, need for power, need for affiliation (23.2)
  • Motivation from characteristics of the job (23.2)
  • Increasing intrinsic motivation (23.2)
  • Increasing workers’ job satisfaction: job enrichment (23.2)
  • Persuading peers and superiors: ingratiation (23.3)
  • Leadership styles: autocratic leadership, democratic leadership, transformational leadership, gender differences in leadership (23.3)
  • Bases of power: reward power, coercive power, expert power, referent power (23.3)
  • When diversity helps/when diversity hurts (23.4)

By reading and thinking about how the concepts in Module 23 apply to real life, you should be able to:

  • Recognize different job selection techniques (23.1)
  • Recognize examples of different bases of power (23.3)

By reading and thinking about Module 23, participating in classroom activities, and completing out-of-class assignments, you should be able to:

  • Identify characteristics in yourself or a job that lead to motivation and satisfaction. (23.2)
  • Describe how you might try to influence a supervisor, peer, and subordinate (23.3)
  • Evaluate leaders you have known with respect to leadership style and bases of power (23.3)
  • Describe whether and why your experiences with diversity have been beneficial or harmful (23.4)

23.1. Finding the Right Person for the Job

  • What is the most unusual question you have ever been asked at a job interview?
  • What criteria would you use to choose someone to:
  • care for your young children?
  • teach a college psychology class?
  • deliver the mail?
  • paint your house?
  • lead a multi-national corporation?

Did you choose different criteria for the different kinds of jobs?

If you have ever had a job that you hated, then you know how unpleasant the experience can be. The stress and dissatisfaction from work can spill over into your non-work hours and affect your whole life. An organization called Sleep Judge surveyed 1000 employed people in the US (they did not report if it was a representative sample), and found that 81% suffer work-related anxiety on Sunday nights (The Sleep Judge, 2019). Businesses do not like workers to hate their jobs—not out of benevolence but because an unhappy worker can be very expensive. One of the most costly labor issues with which a company must contend is job turnover—in other words, workers leaving. Of course, some turnover is not a problem; for example, the loss of the guy who takes a three-hour nap after lunch every day is probably good for the company. The loss of quality employees, however, is an enormous expense. One way that an organization can keep good employees is to motivate and satisfy them (see section 23.2). Prior to that, however, many smart companies devote particular effort to recruiting, hiring, and training the right people. Industrial-organizational psychologists have been involved in all of these functions.

The first step in finding the right person for the job is to define what the job is; you may be surprised to learn that this step is often neglected. Many people are employed without knowing their exact role and job responsibilities, a situation known as role ambiguity . It leads to high levels of stress and job turnover (Jackson & Schuler, 1985). An essential way to help prevent role ambiguity is to develop a precise definition of a job. Industrial/Organizational psychologists can accomplish this by conducting a job analysis, a detailed description of a job; it can contain information about the types of tasks to be performed, the skills required for a worker to succeed at the job, or both. The benefit of conducting a job analysis goes well beyond simply minimizing the possibility of role ambiguity. A job analysis is the basis for virtually all of the remaining human resources and training functions. For example, if a company discovers a mismatch between the skills of current workers and the skills listed in a job analysis, they know that widespread additional training is needed. Criteria obtained from the job analysis should be the basis for performance appraisals. And, of course, candidates are recruited and selected for a job on the basis of how well they fit the criteria laid out by the job analysis.

role ambiguity: a situation in which employees do not know their exact roles and job responsibilities

job analysis: a detailed description of a job; it can contain information about the types of tasks to be performed, the skills required for a worker to succeed at the job, or both

Recruiting and Selecting Employees

For many companies, recruitment is far more than simply placing an ad on LinkedIn or Indeed.com. In fact, in one survey, approximately 70% of employers reported that in addition to online resources, they use referrals from current employees, the company’s own website, and word-of-mouth as successful recruitment strategies (Davis, 2017).

If you have ever applied for a job, you have been subjected to at least some of the common job selection techniques that have been developed and evaluated by industrial/organizational psychologists. Application blanks, interviews, and psychological tests (e.g., intelligence, skills, personality, and integrity tests) are among the most common selection techniques used throughout the business world. In order for an employer to use a selection technique, it must be reliable (consistent) and valid (measuring what it is supposed to measure), two concepts you should recall from Module 8. Although companies might choose to ignore the reliability and validity of their selection tools, doing so opens them up to the risk of losing a discrimination lawsuit. You may be surprised and a bit dismayed to learn that, from a legal standpoint, the validity of a selection technique need not be high. In essence, as long as a specific technique allows a company to choose a successful employee better than selecting candidates at random and does not systematically reject members of minority groups at higher rates, it can be used.
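To make the idea of validity more concrete, here is a minimal sketch of how a researcher might estimate a selection test's validity as the correlation between applicants' test scores and their later job-performance ratings. The scores and variable names are invented for illustration and are not from any study cited here.

```python
# Minimal sketch: a selection test's validity estimated as the correlation between
# hiring-test scores and later job-performance ratings. All data are invented.
# Requires Python 3.10+ for statistics.correlation.
import statistics

test_scores = [62, 70, 55, 80, 90, 48, 75, 67, 83, 58]            # hypothetical hiring-test scores
performance = [3.1, 3.6, 2.8, 4.0, 4.4, 2.5, 3.8, 3.2, 4.1, 2.9]  # later supervisor ratings (1-5)

# A coefficient near 0 means the test predicts performance no better than chance;
# values closer to 1 mean the test is a more useful (valid) predictor.
validity = statistics.correlation(test_scores, performance)
print(f"Estimated validity coefficient: {validity:.2f}")
```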

For many years, personality tests were great examples of selection techniques that just barely made the grade (Schmidt et al., 1984). In fact, for many years, companies were afraid to use them because of their low validity and the consequent threat of lawsuits. Personality tests such as the MMPI-2 or Myers-Briggs Type Indicator do a famously poor job of predicting job success. More recently, however, companies have grown more confident in the validity of personality tests. Refinement of the Big Five personality traits has been a big reason for the newfound confidence. Conscientiousness in particular is a strong predictor of job success (Hurtz & Donovan, 2000). The four remaining Big Five traits—agreeableness, extraversion, openness, and neuroticism (emotional stability)—also have acceptable levels of validity for many jobs (Caldwell & Burger, 1998; Salgado, 2003; Tett et al., 1991).

Another surprise for most people is that one of the most common job selection techniques is very poor. Personal interviews, at least the way they are traditionally conducted, have notoriously low validity (Hunter & Hunter, 1984; McDaniel et al., 1994). In order for an interview to be successful at predicting future job performance, it should be based on behavior (i.e., the candidate should be asked questions about how he or she acted or would act in various situations), and the answers should be scored based on a job analysis (Day, 2003; McDaniel et al., 1994; Taylor & Small, 2002). A great many interviews are conducted by managers—often not even human resource managers—who ask any questions they want and use their own criteria to judge the quality of candidates’ answers. Researchers have consistently found that this type of interviewing strategy does a very poor job of predicting future performance (McDaniel et al., 1994).

Even before the 2020 COVID-19 shelter-in-place orders across the US made it necessary to conduct job interviews remotely, this practice was already common. At some point you will stop being surprised by our revelations, but here is another. There is little research about the effectiveness of, and the possible biases introduced by, selection techniques that allow little (computer-mediated) or no (phone) visual feedback. And the research that does exist suggests that these technologies can change the decisions that hiring managers would make (McColl & Michelotti, 2019; Silvester & Anderson, 2003).

Training Employees

An undergraduate student we knew committed an interesting symbolic act when he finished his last class before graduation. He piled up his notes and papers (and some textbooks that he could not sell) and made a small bonfire out of them. His message was clear, in part because he chanted over and over, “I am done with classes, FOREVER!” We do not condone this act. First, of course, there is the issue of the safety of setting your books and papers on fire; he was, after all, indoors at the time. More to the point, however, the soon-to-be-former student was probably wrong. You have by now heard the term “lifelong learner.” Given the rapid changes that take place in technology and in the world economy, and the reduced tenure that people have at any particular job compared to decades past, it has become necessary for the vast majority of people to continue to learn throughout their lives. A lot of that learning takes place in the context of a specific job, during training, and industrial/organizational psychologists have been involved in virtually all aspects of training.

Psychology in general, of course, should have a great deal to say about training. Principles gleaned from the study of learning and memory, thinking, and problem-solving have been very useful for companies that hope to train their employees to succeed. For example, principles of operant conditioning have played an important role in many training programs. One of the most important concepts to come from psychology is transfer of training , the degree to which training carries over to other situations, such as the job itself. Psychologists have discovered that in order to promote positive transfer, the training needs to take place in situations that match the job as closely as possible (Allen, Hays, & Buffardi, 1986). It is also important to ensure that the workers have many opportunities to use on the job the new behaviors that they have learned (Ford, Quinones, & Sego, 1992; Tracey, Tannenbaum, & Kavanagh, 1995).

Industrial/organizational psychologists have devoted a lot of effort to comparing different formats of training. In a notable parallel to what they discovered with respect to the usefulness of interviews, psychologists have concluded that on-the-job training—one of the most common training methods—is probably not the best training technique. Essentially, with on-the-job training, the new worker shows up, and for the first few days, a more senior worker teaches him or her how to do the job. Employers often prefer this kind of training because it seems very inexpensive. After all, the other worker is already on the job being paid, and the new worker can begin being productive right away. Also, it seems as if on-the-job training should promote positive transfer because training cannot get more similar to the job situation than this. There are hidden costs, however, that make on-the-job training much less efficient than it seems. First, the trainer is probably not doing his or her own job during the training period. Second, the trainer probably does not really know how to teach, so the new employee may not learn the job very well. The result is that he or she takes longer to become productive than if some kind of off-site training had been provided instead. Even after the worker has been on the job for some time, those who had undergone on-the-job training tend to be paid and promoted less than those who had other types of training (Kovach & Cohen, 1992).

  • Can you identify the strengths and weaknesses in some job selection techniques that have been used on you?
  • Can you recognize whether job training you received for a job promoted positive transfer of training?

23.2. Making Work Satisfying

  • What motivates you to work hard at a job? (If you have never had a job, you can substitute school.)
  • What is the difference between a job and a career?

Chances are you have a job to help you get through college. For nearly all of you, however, your current job is just that, a job; it is a way to pay the bills. You may very well enjoy the job (often because of a pleasant boss or co-workers), but there is likely not anything deeper than that. How does that attitude coexist with the idea that a person’s career is one of the key elements of his or her identity (Erikson, 1959)? It might help to distinguish between a job (“what you do”) and a career (“part of who you are”).

Many factors contribute to the shift from thinking in terms of job to thinking in terms of career, such as the length of time you expect to work in the field and your feelings of loyalty toward the company. In this module, we will describe two factors: motivation and job satisfaction. The question is not simply whether you are motivated and satisfied, but rather what motivates and satisfies you about a particular job. The answers to those questions reveal much about whether your employment is a job or a career.

Motivation on the job is complex and determined by many factors. They can be roughly categorized as motivational factors that come from the individual, factors that come from characteristics of the job, and factors that are an obvious interaction of worker and job characteristics.

Motivation from the Individual

Maslow's hierarchy of needs (Module 20) was one of the first theories used to explain motivation on the job; researchers soon discovered that it was limited, however, and in many cases completely wrong. For example, suppose you are driven by a need to connect with people socially on your job, a need most clearly related to Maslow's "belongingness" needs. You may very well seek out a position that would allow you to fulfill that need. Once you got that job, however, it seems unlikely that the need would dissolve and stop motivating you, which is a key part of Maslow's theory. Maslow's emphasis was on how people are motivated to continue up his hierarchy once they have satisfied a need. In reality, however, the satisfaction of a need often makes it a stronger motivator rather than eliminating it (Alderfer, 1969).

Other psychologists have noted stable differences in motivation among individuals that are not related to any kind of progress through a hierarchy. Rather, it seems that people have different motivation needs, that these differences may be considered personality differences, and that the differences determine what motivates people in the workplace. For example, one influential theory has proposed that there are three needs that are important elements of job motivation: need for achievement, need for affiliation, and need for power (McClelland, 1975). People who are high in need for achievement set high standards for themselves, and they work hard to achieve those standards. They thrive in situations that provide frequent feedback about success and are moderately challenging. Of course, then, they will seek out these situations at work, and they tend to be very successful. People who are high in need for power are motivated by the desire to influence others. They seek opportunities in the workplace to gain positions in which they can direct other people (McClelland, 1970). People who are high in need for affiliation  are similar to people whom Maslow would describe as satisfying their belongingness needs; in essence, they seek workplace situations that will allow them to satisfy their need for interpersonal relationships. Again, however, they will not “outgrow” this need once it is satisfied because the need is part of their personality.

need for achievement: a strong motivation to set high standards for oneself, and to work hard to achieve those standards

need for power: a strong motivation to influence others

need for affiliation: a strong motivation to satisfy one’s need for interpersonal relationships

Motivation from Characteristics of the Job

Other psychologists have focused on the particular characteristics of jobs that lead to motivation. For example, jobs that are meaningful, provide a high degree of autonomy, allow a worker to draw from a wide variety of skills, allow a worker to complete an entire task (rather than being an anonymous "cog" in a giant operation), and provide frequent feedback are motivating for many people (Hackman & Oldham, 1976). In our view, it is these job characteristics, particularly the meaningfulness, that mark the distinction between a job and a career. By the way, research has also shown that if college classes are restructured to have these characteristics, students are more motivated as well (Cantanzaro, 1997).

Motivation from Interactions Between Individual and Job Characteristics

If you recall the discussions of intrinsic and extrinsic motivation in section 20.1, you may realize that these concepts should apply to the workplace as well, and indeed they do. Because intrinsic motivation, motivation coming from within the individual, leads to high performance and is essentially free to the company compared with extrinsic motivators like bonuses and perks, many organizations strive to increase their employees' intrinsic motivation. Organizations can increase their workers' intrinsic motivation by providing challenging tasks, encouraging them to satisfy curiosity, and giving them a sense of control over work tasks (Lepper & Cordova, 1992; Malone & Lepper, 1987). Positive performance feedback or praise also increases intrinsic motivation, unless the worker perceives that the feedback is delivered in a controlling manner (Deci, 1972; Pittman et al., 1980; Vansteenkiste & Deci, 2003). Also, followers are likely to be intrinsically motivated if they believe that their leader is also intrinsically motivated (Wild et al., 1992).

Job Satisfaction

Perhaps the most surprising observation about job satisfaction, and one of the most often-repeated findings in industrial/organizational psychology, is that job satisfaction is only weakly related to job productivity (Iaffaldano & Muchinsky, 1985). For many years, it was simply assumed that if you are satisfied on the job, you will try to perform your work conscientiously and well. Although this assumption is likely correct, there are many other reasons why you might work hard and be productive. For example, if you fear that you will lose your job, you will probably be productive regardless of how satisfied you are. Beyond productivity, however, there are several additional reasons for an organization to enhance the job satisfaction of its employees. Workers who have high satisfaction tend to be committed to the organization, and they have lower rates of absenteeism and quitting (Jenkins, 1993; Ostroff, 1993; Pratkanis & Turner, 1994; Williams & Hazer, 1986).

Because of the positive outcomes associated with high job satisfaction, many organizations seek to enhance it. One strategy that organizations have used is to increase employees' levels of responsibility and independence; it is called job enrichment. One important element for increasing satisfaction is to increase employees' perceptions of fairness, especially with regard to pay (Miceli, 1993; Witt & Nye, 1992). Many organizations also increase their employees' satisfaction through benefits, such as flexible working hours, on-site child care, healthcare coverage, retirement plans, and profit-sharing.

Although many efforts to increase satisfaction are successful, there are limits to what the organization can accomplish because some elements of job satisfaction come from the workers themselves. In other words, some workers are likely to be dissatisfied no matter what the organization does. There is increasing evidence that workers with certain personality types—such as high alienation (feeling isolated and lonely)—will probably have low job satisfaction regardless of what the employer does and that these personality variables may even be genetic (Efraty & Sirgy, 1990; Keller et al., 1992). In addition, personality differences among workers suggest that different aspects of a job lead to satisfaction, so a one-size-fits-all approach to increasing job satisfaction will not work for all employees (Pierce et al., 1993).

  • Can you identify the different motivational principles that were being used in jobs that you have liked? The motivational principles in jobs that you disliked?
  • Do you think that you are the type of person who is generally satisfied at work?

23.3. Exerting Influence at Work

  • Think of a person in your life who you consider to be a good leader. What is it about this person that makes you want to follow him or her?

On the job, your behavior toward a goal is often energized and directed by another person. Broadly, we can think of one person’s efforts to get someone to do something as influence. Much of the information from section 22.5, then, generally applies to the workplace. If you are trying to influence a peer or someone who has authority over you, psychologists refer to your efforts as persuasion. But instead of referring to the influence exerted by an authority figure as obedience (as in section 22.5), industrial-organizational psychologists usually call it exerting power. This terminology shift allows psychologists to focus on the behaviors and characteristics of the leader who is trying to do the influencing rather than the person being influenced, as in the classic social psychological research.

Let us clarify the difference between persuasion and power (or obedience) with some examples:

  • Your co-worker friend wants you to take the afternoon off on a beautiful spring Friday so that the two of you can go to a baseball game. His attempts to talk you into going to the ballgame would be persuasion.
  • You would never take the afternoon off on a beautiful spring Friday to go to a baseball game because you are an extremely conscientious worker. You also believe that you are vastly underpaid and deserve a raise. Your attempts to get your boss to agree with these ideas would also be persuasion.
  • Your boss finds out that your friend skipped work to go to the baseball game, and she threatens to fire him if he ever does it again. Her attempts to influence him to stay at work by issuing this threat would be using power.

Of course, persuasion in the workplace is very much like persuasion in other areas of life, so the principles from section 21.5 apply to this section as well. Perhaps the most important observation about persuasion as it pertains to the workplace specifically is that different strategies are needed to influence peers and superiors. In general, persuading superiors often requires more indirect methods, or the use of peripheral cues, whereas efforts to persuade peers can be more direct, involving the central route.

One specific type of peripheral technique that employees can use with superiors is called ingratiation , trying to make oneself more appealing by using flattery or doing favors, for example (Kumar & Beyerlein, 1991). If an employee is trying to persuade his boss to give him the day off, he is likely to use ingratiation, perhaps by commenting on the attractive picture of her children on her desk. On the other hand, if he is trying to convince his boss to change some institutional policy, a proposal that the boss is likely to scrutinize closely, he is likely to employ the central route to persuasion (Ansari & Kapoor, 1987; Schmidt & Kipnis, 1984). Peers sometimes use ingratiation with each other, as well as offering to exchange favors (Kipnis et al., 1980).

Power and Leadership

You might be tempted to think that exerting power would be a snap compared to trying to persuade someone who does not have to listen to you. As Mel Brooks quipped in the movie History of the World, Part I, “It’s good to be the king!” Although there are obvious advantages to being able to compel subordinates to follow your orders, it is not as simple as it seems. You may be able to get your underlings to fetch your dry cleaning today, but what is to stop them from revising their resumes on company time and interviewing for a new job during their lunch hours? Workers who are subjected to poor leadership and exertion of power are not simply more likely to quit. They may suffer more stress, be absent more, be less intrinsically motivated and productive, and engage in negative behaviors, such as stealing, sabotaging company property, or sharing confidential information with competitors. Perhaps we should mention that the Mel Brooks character we quoted above is King Louis XVI, the king of France who was beheaded in 1793 after being overthrown in the French Revolution. Maybe it is better to say, “It’s good to be a good king!”

When a manager of other workers learns effective skills for leading and exerting power, subordinates will comply with directives with little risk of negative consequences; in many cases, the subordinates will not even feel as if they are following orders. Leaders’ success often comes from using principles of leadership and management that have been uncovered by industrial/organizational psychologists.

Leadership styles

Years ago, many psychologists believed that leadership was a quality that came naturally to some people, that great leaders are born, not made. If you were not fortunate enough to be born with leadership ability, you were destined to be one of the myriad anonymous followers. Although many people outside of psychology still believe that leaders are born and not made, a great deal of research has shown the opposite. Today, it is widely accepted throughout I/O psychology that leadership is a set of skills that can be learned like any other. Because leadership is a learnable skill, you should also realize that one can be a leader in many different situations, at work, in the family, in social circles, at school, and so on. Being a leader in one situation does not automatically mean that you will be one in other situations.

Many different styles of leadership exist. Perhaps the most important distinction is between autocratic and democratic leadership. You should think of these labels as the extreme ends of a dimension, with individual leaders lying somewhere along the dimension. Autocratic leadership , sometimes called directive or authoritarian leadership, is leading through ordering; the leader does all of the decision-making. In democratic leadership , sometimes called participatory leadership, the leader shares some of the power and allows subordinates to participate in decision making. Although you may have an intuition about which style would make the best leader, the truth is it depends on the situation and the people being led (Fiedler & House, 1994). For example, autocratic leadership may be necessary when decisions need to be made quickly or when the subordinates have insufficient knowledge or cannot reach decisions collaboratively. In situations in which subordinates work well together and are knowledgeable, democratic leadership will usually fare better.

One generally effective leadership style that psychologists have researched is transformational leadership . Under a transformational leader, followers set aside their personal goals and adopt the goals of the organization as their own. In other words, they are transformed into the ultimate team players (Bass, 1990). Transformational leaders treat their followers as individuals, and they provide intellectually challenging situations and encourage creativity. They are likable and have low levels of aggressiveness and criticism (Bass, 1985; Ross & Offerman, 1997). As you might guess, transformational leadership is linked to very good performance (Barling et al., 1996; Eagly et al., 2003).

Most people are aware of (and many believe) the stereotype that men are better leaders than women. After all, just look at the corner offices of virtually all of the big companies and the political leadership throughout the world. There is only one problem with the stereotype, however. It is wrong. In general, men and women are quite similar in their leadership behaviors; often, the stereotypic differences are observed in laboratory research only and not in research that is done in the workplace. Women do tend to use a democratic style of leadership more than men, who tend toward autocratic leadership (Eagly & Johnson, 1990). Neither style is clearly better than the other in all situations, however. The one area in which there is a consistent, albeit small, advantage for one gender is in the use of transformational leadership, perhaps the most effective leadership style, and here the advantage goes to the women (Eagly et al., 2003).

autocratic leadership: sometimes called directive or authoritarian leadership, it is leading through ordering; the leader does all of the decision making

democratic leadership: sometimes called participatory leadership, the leader shares some of the power and allows subordinates to participate in decision making

transformational leadership: leadership that encourages followers to set aside their personal goals and adopt the goals of the organization as their own

Bases of power

In addition to having a style of leading, leaders have different bases, or sources, of power. Four bases of power are especially important:

  • Reward power is the ability to offer incentives, or rewards, if subordinates follow orders. Supervisors may have the ability to recommend or grant raises, give promotions, offer choice assignments, or give a worker the first option for vacation time, for example.
  • Coercive power is the ability to threaten punishments if orders are not followed. Supervisors can fire or demote, verbally reprimand, or give poor performance evaluations, for example.
  • Expert power is when the subordinates believe that the leader has greater knowledge about the important tasks involved in the job, as, for example, when students believe that a professor knows a great deal about his or her subject matter.
  • Referent power is when the subordinates look to the leader as a role model; in other words, they identify with and like the leader.

Many leaders rely on reward and coercive power because they are such direct methods of exerting influence. They are reluctant to rely on expert and referent power because they fear that subordinates will not comply if they are not being compelled or otherwise encouraged by tangible incentives or threats. Research, however, has shown otherwise. For example, some research on the effectiveness of reward power has found no relationship between pay incentives and performance; some research has even found a negative relationship in some situations (Amabile et al., 1986; Asch, 1990; Pearce et al., 1985). Other researchers have found a positive relationship, however, perhaps because the monetary awards help workers to set goals for what they want to achieve (Abowd, 1990; Leonard, 1990; Locke & Latham, 1990). Research on the effectiveness of coercive power has been quite a bit more definitive. Although the threat of punishment can lead to high performance, it is also associated with high levels of job dissatisfaction, which can lead to absenteeism and turnover, among other ills (Hinkin & Schriesheim, 1989; O’Reilly & Weitz, 1980). On the other hand, leaders who use expert and referent power have highly motivated subordinates. Importantly, the subordinates also have high satisfaction and positive attitudes about the leader and the job (Podsakoff & Schriesheim, 1985; Rahim & Afza, 1993). In essence, leaders who wield expert and referent power, like transformational leaders, are able to get their subordinates to want to comply. We have little doubt that there is a time and a place for reward power and coercive power, but relying on them exclusively may leave little room for applying two of the most effective leadership tools.

reward power: power that comes from the ability to offer incentives, or rewards, if subordinates follow orders

coercive power: power that comes from the ability to threaten punishments if orders are not followed

expert power: power that comes from subordinates’ belief that the leader has greater knowledge about the important tasks involved in the job

referent power: power that comes from subordinates looking to the leader as a role model

  • Can you apply the two leadership principles discussed here—bases of power and leadership styles—to teachers you have had? Can you associate certain principles with your favorite teachers?

23.4. Dealing with Diversity in the Workplace

  • Would you estimate that your college is more or less diverse than average?
  • Have you had many opportunities to work with people who differ from you in terms of ethnicity or gender (at school or work)?
  • What are the advantages of working in these diverse groups?
  • What are the disadvantages of working in these diverse groups?

There are two ways to think about diversity related to the workplace. One way is that organizations have to avoid discrimination against minority groups of employees. Although laws exist to protect disadvantaged groups, discrimination is pervasive, and it extends to non-protected groups as well. A more beneficial way to think about the issue is to turn diversity into a strength that can be leveraged. Many people dismiss sensitivity to diversity as “political correctness.” What these people do not realize is that:

  • As current trends continue in the US and across the world, it will become essential for all of us to learn how to interact with diverse groups of people.
  • Successfully coping with diversity helps companies to make more money.

Increasing diversity is a fact of life. In 1950, men were 70% of the workforce in the US; in 2019, they were only 53%. As recently as 1980, White, non-Hispanic workers made up 82% of the workforce; by 2000 it had dropped to 73%, before creeping back up to 78% in 2019. The ethnic composition of the workplace is expected to continue to change substantially over the next 30 years (Toossi, 2002).

Although we typically think of diversity in terms of gender and race, there are many types of diversity. For example, disability, age, and sexual orientation are also protected from discrimination by law. Some of these less publicized kinds of diversity will be increasing as well. For example, by the year 2050, workers aged 55 and up are expected to make up 20% of the workforce. In addition, there is diversity in personality traits, problem-solving preferences, communication styles, abilities, and skills.

In short, no type of diversity seems to be decreasing, and much of it is increasing. To be successful, individuals must learn to work with people who differ from them in important ways, and organizations must learn how to cope with an increasingly diverse employee base.

Poor Management of Diversity Hurts Workers and Organizations

Given the prevalence of stereotypes, prejudice, and discrimination in society as a whole, it is not surprising to find significant problems in the workplace as well. Members of ethnic minority groups report higher levels of job stress and negative experiences, and lower job satisfaction, involvement, and psychological well-being, than majority-group members do (Forman, 2003; Mor Barak et al., 2003). Although women overall may not suffer as much as ethnic minorities (research is inconsistent), a lack of respect for gender diversity can lead to an environment ripe for sexual harassment. Even when the target of harassment does not realize that she is being harassed, the result may be reduced job satisfaction and psychological well-being and increased job stress (Fitzgerald et al., 1997). In fact, both men and women report reduced well-being when they work in a climate that is hostile to women (Miner-Rubino & Cortina, 2004).

The negative effects of discrimination, combined with the increasing diversity of the workforce, suggest that it makes good business sense to be sensitive to diverse workers. First, overt job discrimination and sexual harassment are illegal and can lead to multimillion-dollar penalties against a company (when plaintiffs are allowed to join together and form a class-action lawsuit). For example, the Coca-Cola Company paid a remarkable $192 million in 2000 to settle a racial discrimination lawsuit. Companies including Ford, BMW, Walmart, Target, and Google have paid millions in individual lawsuits over the years.

Less dramatically, but perhaps more importantly, researchers have shown that genuinely valuing diversity does not simply help a company to avoid penalties but also helps it to achieve its goals. In other words, diversity is not an inconvenience to which we must adapt but a strength that we can harness. There are distinct advantages to diversity in work groups (in school and career) if it is managed appropriately. In 1995, the Glass Ceiling Commission, a large-scale research project commissioned by the US government to assess the state of ethnic minority and gender issues in the workplace, found that companies with strong records of promoting women and ethnic minorities had double the return on investment of those with poor records.

When Does Diversity Hurt and When Does It Help?

The benefits of diversity are not automatic. You cannot simply throw diverse groups together, make sure there is no overt discrimination, and hope for the best. Many people have horror stories about a work or school group composed of very dissimilar members that disintegrated or exploded. Diversity, like any powerful force, must be managed. Think of a physical analog, rushing water. When unmanaged, a large amount of rushing water is called a flood, and it is a catastrophic event. When harnessed, however, rushing water can be used to generate electricity.

Stereotypes and discrimination, harassment, unhealthy working conditions, and poor group performance are the “flood” of unmanaged diversity. Traditional stereotypes and biases flourish when diversity is not valued and managed. For example, consider age. As you saw in Module 16, people’s common beliefs about cognitive decline as we age are greatly exaggerated. Many people hold the stereotyped belief that older workers’ performance suffers as the years pass. Performance evaluations that allow supervisors to make subjective judgments about workers are particularly prone to age-related bias (Miller et al., 1990). In fact, however, research that examined workers in over 100 different occupations showed that older workers’ performance declined only for certain clerical jobs, such as keyboarding or taking dictation. And because increasing age is often associated with increasing experience, older workers tend to make better decisions and work more efficiently than younger workers in many cases (Avolio et al., 1990).

Similarly, inaccurate stereotypes about women, racial and ethnic minorities, and lesbians and gay men abound in situations in which diversity is not valued. For example, people commonly believe that women make poorer leaders than men. The truth is, however, that men and women are more alike than different in their leadership abilities and that when there are advantages for one gender, they often depend on the type of situation, as you saw earlier in this module (Eagly and Johnson, 1990; Eagly et al., 2003).

Some kinds of diversity have been associated with poor group performance and functioning. For example, age diversity has been linked with higher turnover and absenteeism and lower innovation (Cummings et al., 1993; Wiersema & Bird, 1993; Zajac et al., 1991). In general, diversity tends to be associated with turnover and absenteeism (Jackson et al., 1991). Ethnic diversity has been related to lower group commitment and lower group performance in some cases (Kochan et al., 2003; Riordan & Shore, 1997).

When organizations know how to use it, however, diversity leads to better group performance. For example, if the organization can enhance group members’ beliefs that they depend on each other to complete the task, diverse groups perform very well, especially if the groups stay together for a long time (Sargent & Sue-Chan, 2001; Schippers et al., 2003). Also, increasing group cohesion and highlighting similarities that group members share lead to positive outcomes (Jehn et al., 1997; Sargent & Sue-Chan, 2001). Other strategies for harnessing group diversity include creating and encouraging a group identity, addressing stereotypes, and learning how to manage conflict constructively (Johnson & Johnson, 2003). Although there is still much that researchers do not know about how to gain the maximum benefit from diversity, recent results such as these have shown that we are on the right track (Williams & O’Reilly, 1998).

  • Can you recognize the strengths and challenges of diversity in your experiences working in diverse groups (at work or school)?

Module 24: Social and Personality Psychology: In Search of Hidden Solutions to Society’s Problems

By reading and studying Module 24, you should be able to remember and describe:

  • Basic and applied research
  • Social and personality psychology research in response to real-world events
  • How does terrorism work? Availability heuristic; fear, anxiety, and health problems
  • Coping with terrorism, counterconditioning
  • Who are terrorists? The failure of profiling
  • Societal, cultural, and group conditions leading to terrorism: isolation, selective moral disengagement, scapegoating, poverty and relative deprivation
  • Situations and dispositions in good and evil: authoritarian personality
  • The roles of questionable research practices and surprisingness in the replication crisis in psychology

There is a distinction between basic and applied research, as Module 4 explained. Basic research  is conducted with the goal of advancing knowledge in a discipline. Applied research  is conducted with the goal of solving some real-world problem. Scientists often think of themselves as either basic or applied researchers, not both.

A large majority of scientific research is basic research. We are much more likely to hear about applied than basic research in the media, however. Reports on applied research, whether about the conclusions of a group of climate scientists on the causes of global warming or about the clinical tests for a new treatment for depression, have an obvious appeal to the public. Casual observers may not realize that they are seeing a small portion of the total amount of actual research.

Psychology in general is no different. Most research in psychology is not conducted with any particular real-world application in mind. Personality and social psychology, on the other hand, are a little different. Although it is often still conceived as basic research, research in social and personality psychology has often been driven by a desire to understand and solve real-world problems.

Module 9 explained how World War II influenced the development of cognitive psychology. If anything, the impact of World War II was even greater on personality and social psychology. Some observers have noted that social psychology in particular did not really exist as a separate subfield until after WWII, and that Adolf Hitler was the most influential person for its development (Cartwright, 1979). Personality and social psychology have continued to keep an eye focused on the outside world ever since. From the Nazi atrocities during WWII, to the apparent failure of 38 witnesses to help Kitty Genovese in the 1960s (but see Module 21), to the Al Qaeda terrorist attacks on the US in 2001, to the 2020 global COVID-19 pandemic, large bodies of research have been undertaken in response to real-world events.

For example, a search for coronavirus in PsycInfo, the comprehensive listing of research published in psychology, returns a total of 295 articles (on June 24, 2020). The first appeared in 1992 (you may recall that some common colds are caused by coronaviruses, and some of the disease scares over the past 20 years, for example the 2002–2004 SARS outbreak, were also based on coronaviruses). Then COVID-19 came on the scene. The first paper to appear in PsycInfo related to COVID-19 was an editorial that appeared in the Journal of Crisis Intervention and Suicide Prevention in April 2020, just four months after the first person contracted COVID-19. Then, in the next two months, another 217 articles appeared, many of them full-fledged research studies. In other words, in a span of two months, nearly 75% of all of the articles on coronavirus appeared, as hundreds of researchers planned and conducted their studies, then wrote them up and got them published in peer-reviewed journals. The topics ranged from combatting misinformation and racism related to the disease, to the psychological impact of the worldwide quarantine on the general population, to psychological distress, such as post-traumatic stress disorder in health care workers. In two months!

Although this is an extraordinary mobilization and response to a global crisis, we are too close to the beginning to draw any conclusions from the COVID-19 research during summer 2020 (and realistically, we will not be able to draw solid conclusions for at least a few years). So we are going to turn back the clock a bit to see what a whole research cycle looks like. In other words, let us choose a set of real-world events that are recent enough to still be relevant and remembered, but far enough in the past that we can draw some solid conclusions from the massive mobilization of psychological researchers.

Specifically, between 1991 and 2000, there were an average of 17 articles per year listed under the subject heading Terrorism in PsycInfo. In 2000, the year prior to the September 11 terrorist attacks, there were 25 articles. In 2002, there were 475. The number increased over the next few years, peaking at 638 in 2006 before hovering in the mid-300s for about a decade. In 2019, the number was down to 210.

This module begins with a brief look at what psychologists learned about this still-present social problem.

The Psychology of Terrorism

The effort to understand terrorism was, and still is, a true interdisciplinary endeavor. Researchers from history, political science, sociology, and anthropology, as well as social, personality, clinical, and cognitive psychology, have all contributed to our current understanding. Three main goals have guided a great deal of research on terrorism:

  • Understanding how terrorism works on a population
  • Discovering how people can cope with the effects and threat of terrorism
  • Understanding why people commit acts of terrorism

How Does Terrorism Work?

Approximately 3,000 people died during the September 11, 2001 attacks on the World Trade Center and the Pentagon. These victims were not the intended targets of the attacks, however. The real target of a terrorist attack is the larger population; the murdered victims are simply tools used by the attackers to achieve their real goal (Hallett, 2004). Of course, that goal is terror. The terrorists intend to cause an entire population to fear that they, too, may become victims of an attack. In many ways, terrorism’s main effect rests on nothing more than the availability heuristic, the misperception of the likelihood of an event because it can be easily brought to mind.

Media reports of terrorist attacks unwittingly contribute to this fear. A survey conducted in Alabama in March 2002 found that many residents feared they would be victims of an attack; nearly 50% of people who watched four or more hours of television per day expressed this view (but only about 20% of infrequent television viewers). Further, 60% of men and 75% of women expressed the belief that there was nothing they could do to avoid being a victim of terrorism (Powell et al., 2004). Seeing victims as similar to yourself compounds viewers’ distress, which is also made worse by watching more media accounts (Wayment, 2004).

The unrealistic fear that terrorism causes can lead indirectly to more deaths. For example, Gerd Gigerenzer (2004) estimated that 350 Americans who would not normally have died between September 11 and the end of 2001 did die because they were afraid to fly and drove instead. Driving is far more dangerous than flying.

The fear and anxiety that the population suffers also lead to more health problems. After the September 11 attacks, many people suffered from nightmares, insomnia, grief and depression, and avoidance, symptoms that appear commonly in post-traumatic stress disorder (LeDoux & Gorman, 2001). Particularly after chemical or biological attacks, such as the 1995 sarin gas attack on the Tokyo subway system, many people complain of unexplained physical symptoms long after any possible effects of the chemical agent itself (Kawana et al., 2001). After any terrorist attack, increases in substance abuse and acute stress responses are also common (Danieli et al., 2004). Anxiety disorders are likely as well.

How Do People Cope with Terrorism?

Much of what we know about successful coping techniques is based on studies of people who have been forced to deal with terrorism for many years. According to the Israel Defense Forces, there were over 22,000 terrorist attacks against Israelis between September 2000 and July 2004. One survey of residents of Israel found that coping techniques such as using alcohol, tranquilizers, and cigarettes were associated with higher levels of stress and depression. Avoiding television and radio and maintaining faith in God were associated with lower levels of these symptoms. Many respondents also reported that searching for information about loved ones, humor, distraction through activity, and seeking social support were effective coping techniques (Bleich et al., 2003). Although these results are based on correlations, they do suggest some important techniques that an individual can try in coping with the stresses of terrorism.

Psychological counseling can also help. In the immediate aftermath of September 11, Joseph LeDoux and Jack Gorman (2001) pointed out the parallels between people’s fear reactions and classical conditioning. Many stimuli, such as a picture of the World Trade Center or even the sight of an airplane, can become a conditioned stimulus that will lead to a conditioned response of fear. One key method of coping is to break the link between the conditioned stimulus and conditioned response. In therapy, this technique is called counterconditioning (you can learn more about it in Module 29); it essentially involves replacing a fear-conditioned response with an incompatible response such as relaxation. Outside of a therapy setting, some people can achieve the same results through some kind of distraction strategy—in essence, getting on with life (LeDoux & Gorman, 2001).

Who are the Terrorists?

Researchers have struggled for many years to construct a profile of terrorists. By now, many have concluded that no such profile exists, and few are continuing to look. In essence, personalities among the terrorist population differ as much as they do in the general population (Hudson, 1999). Besides, the happy truth that there really are very few terrorists in the world makes it extremely difficult to identify them. Any profile you might come up with to describe terrorists would also fit many non-terrorists. Even if you had a profile that matched terrorists perfectly, the huge majority of people who fit that profile would not be terrorists. Let us explain. Suppose that 1 out of every 100,000 innocent people matches your terrorist profile. There are more than 7 billion non-terrorists in the world, so your profile would identify 70,000 innocent people as terrorists. This is not an implausible scenario. One frequently cited profile of a terrorist described an unmarried, healthy, strong man between 22 and 25 years old, from a middle-class background, with at least some college education (Russell & Miller, 1977). It is not much of a stretch for us to estimate that this profile matches many readers of this book, and we will bet that none of you are terrorists.
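
To make the arithmetic behind this base-rate problem concrete, here is a minimal sketch in Python. The population size, the assumed number of actual terrorists, and the rate at which innocent people happen to fit the profile are all illustrative assumptions, not real figures.

```python
# Minimal base-rate sketch; all numbers are illustrative assumptions.
population = 7_000_000_000            # roughly the world population
terrorists = 10_000                   # assumed number of actual terrorists
profile_match_rate = 1 / 100_000      # rate at which innocent people fit the profile

innocents_flagged = (population - terrorists) * profile_match_rate
terrorists_flagged = terrorists       # generously assume the profile catches every terrorist

# Of everyone the profile flags, what fraction are actually terrorists?
share_true = terrorists_flagged / (terrorists_flagged + innocents_flagged)

print(f"Innocent people flagged: {innocents_flagged:,.0f}")              # about 70,000
print(f"Share of flagged people who are terrorists: {share_true:.1%}")   # about 12.5%
```

Even under the generous assumption that the profile catches every single terrorist, the overwhelming majority of the people it flags are innocent, simply because terrorists are so rare.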

To be fair, we must admit that we left out some of the characteristics from the profile, including some psychological ones. The inclusion of these characteristics does not improve the usefulness of the profile, however. Some of the extra characteristics, such as being devoted to a religious or political cause, also apply to countless non-terrorists. Others, such as ruthlessness and a belief that one’s terrorist actions are not criminal, would be a little difficult to use for identifying potential terrorists. They are not the kind of facts about oneself that a smart terrorist would want to publicize. More generally, that is another point about terrorists. They are often recruited and trained to blend in and be inconspicuous (Hudson, 1999).

Efforts to identify societal, cultural, or group conditions that may lead to terrorism may ultimately be more fruitful. For example, physical or psychological isolation of a group may be necessary for terrorism to flourish. Isolation encourages the group to develop strong conformity pressures, and it can lead members to adopt a strict “good” versus “evil” worldview. It also makes it easy for group members to believe that the world is in desperate need of radical change and that there are no legitimate means to achieve their goals. These last two beliefs can lead to a strong “the ends justify any means” approach (Moghaddam, 2004). Albert Bandura (1990; 2004) has used the term selective moral disengagement to describe terrorists’ (and their supporters’) ability to justify their horrible actions by appealing to noble ends. Other psychologists have noted that the ingroup versus outgroup social categorization that leads to stereotyping in general is an important precursor that can spiral into hatred and, in extreme cases, violence. Scapegoating, or blaming an outgroup for current economic, religious, or cultural problems, can lead to strong negative feelings and possible violent actions (Staub, 2004).

Many observers over the years have blamed poverty as a key social condition that breeds terrorism. It is more correct to say that inequality breeds terrorism. If everyone were poor, there would be nothing to gain from terrorism. But members of a society who can see that others have more and who believe that the unequal distribution is not justified may develop feelings of relative deprivation, which can spiral into hatred and violence (Pilisuk & Wong, 2002).

Although these psychological, cultural, and social conditions are important (and perhaps necessary) for terrorism to develop, they are by no means sufficient. Throughout the world, many individuals and groups that have all of these “strikes against them” do not engage in terrorist activities. Rather, these conditions increase the likelihood that a group would turn to terrorism (Moghaddam, 2004).

scapegoating: blaming an outgroup for current economic, religious, or cultural problems

relative deprivation: negative feelings that develop when members of a society believe that others have more and that this unequal distribution is not justified

The Psychology of Good and Evil

The contributions of social and personality psychology to the understanding of terrorism should perhaps be seen in the larger context of the psychology of good and evil (perhaps you remember this as related to one of the key philosophical debates we introduced in Module 4). Within social and personality psychology, these questions about good and evil have often come down to the relative roles of the situation and the person in determining our behavior. To what extent do good and evil reside inside of people as opposed to being elicited by situations? Social psychology has contended that each person has the capacity for good or evil and that, many times, the situation is what determines which way we will go. The traditional view from personality psychology is that stable dispositions within individuals, such as the Honesty-Humility trait of the HEXACO model, largely determine good and evil. You may recognize this debate as an example of the basic problem of attribution in our everyday lives. An actor has committed some behavior. As an observer, you might explain the behavior in terms of stable dispositions or situational factors. The correspondence bias would lead you to conclude that the actor is evil.

Early efforts to explain one of the most evil acts of the 20th century, the Nazi Holocaust, attempted to identify dispositional characteristics of the German population that led them to support the Nazi atrocities. Psychologists proposed that the social structure in Germany led many of its citizens to develop an authoritarian personality —they were overly conventional, needed to submit to authority, were committed to harsh punishment, and were generally hostile (Adorno et al., 1950). The Milgram obedience line of research (discussed in section 21.5), motivated largely by Stanley Milgram’s desire to understand the Holocaust, effectively put an end to the authoritarian personality explanation and helped usher in the “power of the situation” era of classic social psychology, in which the situation was considered to be almost entirely responsible. This conclusion was shocking (no pun intended), but it has been accepted for many decades. In retrospect, however, perhaps the lesson was that we should only have muted our tendency to blame individual dispositions for evil behavior.

There were undeniable individual differences in Milgram’s research. In other words, some people were more apt to be obedient than others. It is certainly reasonable to ask whether some stable differences among the participants can explain these individual differences. In fact, one researcher found a significant relationship between participants’ scores on a measure of the authoritarian personality and their obedience levels (Elms, 1972). Even Milgram himself found that some stable differences, such as religious affiliation and education level, were related to obedience. Milgram noted, however, that these dispositional factors played only a small role (Milgram, 1974). Still, this is an important observation. In the most famous research demonstration of the power of the situation, there was evidence in favor of a contribution from the person. You may also recall that another individual difference was the degree to which a participant believed Milgram’s deception. Those who believed that they were actually giving shocks were far less likely to comply. Thus, the situation might not have been all-powerful.

Further, the reinterpretations of Milgram’s research have cast additional doubt on the idea that situations overpower all. It is helpful to describe a similarly famous experiment here, too. Philip Zimbardo is the architect of perhaps the second most famous experiment in social psychology, the Stanford Prison Experiment (Haney, Banks, & Zimbardo, 1973). Zimbardo and his colleagues apparently managed to turn ordinary college students into sadistic guards and meek, submissive prisoners simply by placing them in a simulated prison situation. This study was taken as a powerful demonstration of the ability of a situation to elicit dramatic changes in people’s behavior. Recent analyses, however, have muted these conclusions. Although several criticisms have been developed, the most serious one to our minds is that there appeared to be a great deal of participant demand behind both guards’ and prisoners’ behavior. For example, the students who were assigned as guards were not told that they were subjects in the experiment; they were led to believe that they were co-experimenters. Guards and prisoners alike reported after the study that it was always top of mind for them that they were taking part in an experiment and that they had specific roles to play as part of that experiment. Zimbardo and his colleagues made the purposes of the experiment quite obvious, and transcript records indicate that the guards were given quite specific guidance as to the behaviors that were expected of them (Le Texier, 2019).

Despite the revised conclusions of research like Milgram’s and Zimbardo’s, there is little doubt that situations can exert powerful influences over our behavior. Think about the extreme cases. On the one hand, there are probably not any dispositions that will overpower every possible situation. For example, even the most evil person could certainly be prevented from committing an evil act in some situations (like solitary confinement). On the other hand, there are probably situations that will overpower nearly every disposition. For example, it is not difficult to imagine an extremely coercive situation that practically no one could refuse: very persuasive ideological arguments to encourage engaged followership, very strong authority figures, severe punishments for defiance, no models of defiance, and so on. So it seems that good people can indeed sometimes be influenced to do bad things.

Is There a Problem with Social Psychology?

Let us return to the observation that Milgram’s and Zimbardo’s conclusions were shocking. Indeed, the very great influence of social psychology rests largely on the fact that its conclusions cannot generally be predicted by the “person on the street,” which brings us to the remaining part of this Module’s title, the hidden solutions. We pointed out in Module 21 that one reason we need social and personality psychology is that we make these kinds of judgments incorrectly on our own. For example, we have a strong tendency to explain people’s behavior using dispositional, that is, personality, attributions to the exclusion of situational factors. That tendency does not just make social and personality psychology necessary; it also makes it surprising.

Indeed, social psychology especially has thrived on revealing hidden factors in human behavior. And we cannot emphasize this enough: that has been extraordinarily valuable. Unfortunately, however, there may be a downside to the extra focus on the unexpected and hidden.

Let us explain by reminding you of a fact you first learned in Module 4. When researchers first drew widespread attention to the replication crisis in 2015, they did so by revealing that only 36% of the studies they examined could be successfully replicated. The rate of successful replication varied by subfield, though. Specifically, the researchers were able to replicate 53% and 48% of the cognitive psychology studies (from the two different journals they used), but only 29% and 23% of the social psychology studies (social and cognitive were the only two subfields included in the effort). Neither set of numbers is impressive, but why is social psychology’s so much lower?

Many critical observers of psychology have focused on the use of questionable research practices (QRPs) as one of the driving forces behind research that cannot be replicated. Indeed, John et al. (2012) conducted an anonymous survey (with special methods to encourage truth-telling) of over 2,000 active psychology researchers and found that nearly 40% admitted to QRPs (averaged over 10 separate practices), with a frightening 94% admitting to at least one. There is little doubt that this is a major factor behind the replication crisis. The only problem is that there were only tiny differences in the admission rates for QRPs between social and cognitive psychology researchers, clearly not enough to explain the roughly doubled successful replication rate for cognitive psychology.

So what else might it be?

Perhaps it is the very shockingness of some of the results that offers a clue. It would seem that how surprising a research finding is serves, at least indirectly and partially, as a signal of the probability that it is true. Imagine that your friend invites you to predict what will happen when she lets go of a book she is holding. Then, when she does let go, the book shoots up instead of dropping down. The fact that this is surprising is likely to make you doubt that what you witnessed is genuine (in this case, you might suspect some kind of illusion or trick). The surprise and doubt arise because you are probably right; it is not genuine.

The important lesson to draw from this is that the more surprising a research finding is, the more you should be surprised by it. Oh, too trivial? How about this: the more unbelievable a research finding seems, the less you should believe it. We are obviously trying to be funny, but this really is a serious point. All research needs to be replicated, but research that surprises you is less likely to be true, so it has a special obligation to demonstrate both reproducibility and scientific rigor.
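
A rough way to see why surprise should lower your confidence is to think in terms of prior probability: a surprising finding is one you would have judged unlikely beforehand, and that low prior drags down the chance that a single “significant” result reflects a real effect. The short Python sketch below illustrates the idea with assumed, purely illustrative values for statistical power and the false-positive rate; it is not a description of any particular study.

```python
# Illustrative sketch of Bayes' rule applied to research findings.
# The power and alpha values below are assumptions chosen for illustration.
def chance_finding_is_real(prior, power=0.8, alpha=0.05):
    """Probability that a statistically significant finding reflects a true effect."""
    true_positive = prior * power          # real effect, and the study detects it
    false_positive = (1 - prior) * alpha   # no real effect, but "significant" by chance
    return true_positive / (true_positive + false_positive)

# An unsurprising hypothesis: give it a 50% prior chance of being true.
print(f"Mundane finding:    {chance_finding_is_real(0.50):.0%}")   # about 94%
# A shocking hypothesis: give it only a 5% prior chance of being true.
print(f"Surprising finding: {chance_finding_is_real(0.05):.0%}")   # about 46%
```

With exactly the same quality of evidence, the surprising result is far less likely to be genuine, which is why it carries a heavier burden of replication.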

Can Psychological Knowledge Contribute to a Better World?

The short answer is yes, we believe so very strongly. Despite the missteps of non-replicable and scientifically questionable research, there are many examples where psychology has already made that contribution. For example, we are not aware of any significant criticisms that would call into question the major conclusions we shared about the psychology of terrorism earlier. And throughout the rest of this book, we have shared research that can without a doubt be used to improve individuals’ lives and society. After all, what is society, if not an organized system of individuals?

Further, even some of the reinterpretations of classic research offer important lessons. For example, consider the engaged followership explanation of Milgram’s research (an explanation that, to a certain extent, could be applied to the Stanford Prison Experiment, too). In some ways, this new explanation is even more chilling than the original. It might not be authority, coercion, and fear of punishment that lead some people to comply; it might be an identification with the leader and a true belief in the system espoused by the leader. What if that system is evil? What if the leader is malevolent? Milgram and Zimbardo have shown that it is at least possible to get people to commit horrible acts without much difficulty at all.

We should always remember that we have a responsibility to be vigilant, and that includes all of us. We, as educators and authors, have a special responsibility to keep abreast of current developments, new research, and reinterpretations of old research. We must engage our skeptical minds and sometimes change our beliefs when the evidence in favor of an alternative view becomes sufficient. You should do the same. You should neither automatically believe every study you hear about, nor automatically reject any study that surprises you or contradicts your personal experience. Each important claim deserves a critical evaluation, using the skills you are learning in this class and many other classes, too.

Unit 6: Achieving Physical and Mental Well-Being

Very commonly, when students begin studying psychology, they view it as a helping profession.  It is, of course, but as you have learned, it also includes many scientific sub-disciplines: biopsychology, cognitive psychology, developmental psychology, social psychology, and personality psychology. The facts and theories developed through research in these subfields can certainly help you, but they are not technically considered part of applied psychology. Rather, these subfields are basic divisions within the science of psychology that happen to have useful applications for people’s lives.

The subfield that encompasses most of the psychological applications is Clinical Psychology (and Counseling Psychology), which is first and foremost about helping people. Unit 6, then, is about the aspect of psychology that many of you thought of originally as representing all of psychology. In this unit, however, you will see that the knowledge gained from the basic scientific subfields helps to make the helping side of psychology better, stronger, more effective. There are six Modules in Unit 6:

  • Module 25, A Positive Outlook, is principally about pleasant-feeling emotions and beliefs, such as happiness and optimism, and about the self-concept and self-esteem. Although the psychologists who are interested in these topics are often social psychologists and personality psychologists, we are sure you can see how these concepts are critical for achieving physical or mental well-being.
  • Module 26 is about sleep and consciousness. As you may know, a great many people do not get enough sleep, and this has an enormous effect on their physical and mental health. Although sleep deprivation is not the entire contents of this module—it also includes descriptions of attention and consciousness, hypnosis, sleep in general, and dreaming—it is certainly the topic most germane to the overall goal of the unit.
  • Module 27, A Healthy Lifestyle, covers stress, obesity, and eating disorders, and offers suggestions for adopting healthier behaviors.
  • Modules 28 and 29 are the two that are most directly part of clinical psychology, as they cover psychological disorders and treatments. Module 28, Introduction to Mental Illness and Mood Disorders, begins with a discussion about psychological disorders in general and finishes with a description of the very important mood disorders (including Major Depression). Module 29, Other Psychological Disorders and Treatments, covers some of the other major categories of disorders. Both modules describe the causes and treatments for the different disorders.
  • Module 30, Clinical Psychology: The House That Psychology Built, describes how all of the subfields of psychology fit together to help people live better lives and concludes with a discussion of a current controversy among psychologists known as the scientist-practitioner gap.

Module 25 A Positive Outlook

Module 26 Sleep and Consciousness

Module 27 A Healthy Lifestyle

Module 28 Mood Disorders and Treatment

Module 29 Other Psychological Disorders and Treatment

Module 30 Clinical Psychology: The House That Psychology Built

Module 25: A Positive Outlook

Throughout its history, the “helping” side of psychology has focused on relieving suffering. For example, researchers and therapists in clinical psychology have made great progress at identifying the causes and characteristics of and treatments for psychological disorders, such as depression, schizophrenia, and anxiety disorders (Modules 28, 29, 30). The removal of disordered patterns of behavior certainly relieves people’s suffering. It allows them to function in society and in some cases, literally, to live. It does not necessarily lead them to happiness and fulfillment, however; the absence of bad does not equal good. For example, you can be non-depressed and not particularly happy at the same time. So psychology can improve people’s lives not just by alleviating negative emotions and disorders but by helping them to increase their positive emotions, such as happiness and fulfillment. These are the goals of humanistic and positive psychology, psychological perspectives on topics within personality, social, and clinical psychology that emphasize positive emotions and behaviors.

Humanistic psychology originally arose in the 1950s as a reaction against both behaviorism and Sigmund Freud’s psychoanalytic approach (Mischel & Morf, 2003). Humanistic psychologists believe that human beings have a natural orientation to develop and reach their full potential, that they are basically good (Rogers, 1951). Humanistic psychologists essentially asserted the opposite of what Freud had believed. They contended that people do not need to be restrained because of their negative and unacceptable impulses but that they are already too restrained. Behaviorists believed that people are inherently neither good nor bad but that good and bad behaviors arise as the natural consequences of reward and punishment. Humanistic psychologists, on the other hand, emphasize the role of people’s free will in determining their own behavior.

The two most important humanistic psychologists were Abraham Maslow and Carl Rogers. Maslow proposed that people are motivated by a hierarchy of needs (Module 20). According to Maslow, human beings’ ultimate goal is to become self-actualized, to reach their full potential. A self-actualized person is one who actually is what he or she has the potential to be. As Maslow put it, “What a man can be, he must be” (Maslow, 1943, p. 383). Carl Rogers similarly emphasized that human beings have a deep-seated need to develop or grow (Rogers, 1980). Both Maslow and Rogers recognized that aspects of our environments often interfere with our natural need to grow.

As humanistic psychology developed and gained followers, it began to stress that a human being must be considered as a whole person, rather than as a collection of individual mental processes, and that we are conscious and have free will. We are driven to achieve goals and to seek meaning, value, and creativity in our lives (Bugental, 1964). It was the idea that human beings are whole persons that brought humanistic psychology into its most serious conflict with mainstream scientific psychology. Psychologists throughout the academic world have built their careers on a “divide and conquer” strategy (Module 18), examining individual mental processes, an approach that humanistic psychologists rejected. As researchers made great strides in identifying those processes throughout the 1960s, 70s, 80s, and 90s, and humanistic psychology became alienated from research, its influence among mainstream scientific psychologists waned.

Positive psychology—the subject of the bulk of this module—is a newer approach that has adopted many of the core beliefs of humanistic psychology, including the idea that human beings need and want to grow and achieve their own goals, combined with a respect for and reliance on research. Although humanistic psychology still exists, it has declined in influence in recent years and positive psychology has grown more popular among psychologists. The primary focus of positive psychology is research related to positive emotions and feelings, such as happiness, contentment, optimism, and life satisfaction.

humanistic psychology: a psychological approach based on the belief that human beings have a natural orientation to develop and reach their full potential

positive psychology: a research-based psychological approach that explores how we can enhance positive emotions, such as happiness and optimism

The first two sections in this module deal with positive emotions and feelings. Section 25.1 describes the sometimes-surprising findings about what does and does not lead to happiness. Section 25.2 discusses prosocial behavior and the benefits and dangers of pleasant emotions. In the final section of the module, we turn to a discussion of self-concept, and especially self-esteem, a key element of one’s outlook on life and an important area of focus within positive psychology.

25.1 Causes and consequences of happiness and optimism

25.2 Prosocial behavior

25.3 Self-concept and self-esteem

By reading and studying Module 25, you should be able to remember and describe:

  • Goals, characteristics, and methods of humanistic and positive psychology (25 intro)
  • How we can predict our own happiness (25.1)
  • Relationships between life situations and happiness: hedonic adaptation (25.1)
  • What predicts happiness (25.1)
  • Benefits of pleasant emotions, for self and others (25.2)
  • “Bad” pleasant emotions: schadenfreude, contempt, hubris (25.2)
  • Dangers of happiness and optimism (25.2)
  • Contents of self-concept (25.3)
  • Benefits and dangers of high self-esteem (25.3)
  • Self-enhancement and self-serving biases: defensive pessimism, self-handicapping (25.3)

By reading and thinking about how the concepts in Module 25 apply to real life, you should be able to:

  • Generate examples of hedonic adaptation (25.1)
  • Describe good and bad consequences of your own pleasant emotions (25.1)

By reading and thinking about Module 25, participating in classroom activities, and completing out-of-class assignments, you should be able to:

  • Explain how your behaviors, such as choice of major, are consistent or inconsistent with your beliefs about the role of money in happiness (25.1)
  • Describe the importance of possible selves for your own self-concept and motivation. (25.3)
  • Identify self-enhancement or self-serving biases in yourself (25.3)

25.1 Causes and Consequences of Happiness and Optimism

  • You have probably heard the saying, “Be careful what you wish for; it might come true.” What does this saying mean to you?
  • What makes you happy?
  • What choices are you making today that you believe will make you happy in the future?
  • Do you consider yourself an optimist or a pessimist?
  • What do you imagine are some of the benefits of pleasant emotions?
  • What do you imagine are some of the perils associated with pleasant emotions?

We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty, and the pursuit of Happiness.

-The Declaration of Independence

I believe that the very purpose of our life is to seek happiness.  That is clear.  Whether one believes in religion or not, whether one believes in this religion or that religion, we all are seeking something better in life.  So, I think the very motion of our life is toward happiness.

-His Holiness the Dalai Lama

Suppose that you are healthy, financially secure, and living in relative comfort and safety. But somehow, something does not quite feel right. It is not that there is anything wrong, really, it just feels like something is missing. You would probably start thinking about how to have a more positive outlook on life, how to be happier, how to feel more comfortable with who you are and what you are doing with your life. You might try to find a solution by meditating or praying or talking to people you trust. Maybe you would read a self-improvement book. You might not be thinking about consulting a psychotherapist, a helping psychologist. Again, it is not that there is anything wrong, so it might not seem that psychology has anything to offer you. You are thinking about only one side of the helping part of psychology, however. If you turn to the findings of positive psychology, you just might find the answers you are seeking. Positive psychology marries a desire to promote positive emotions, including happiness, satisfaction, and optimism, with a scientific approach to acquiring psychological knowledge. What positive psychologists have discovered about happiness, optimism, and self-esteem might surprise you. More importantly, it just might help you.

Predicting Our Own Happiness

More than anything, positive psychologists are interested in what makes people happy. Maybe you’re saying to yourself, “Why would I need a psychologist to tell me how to be happy? I know what makes me happy!” But perhaps the most significant observation about happiness is that we often make serious errors when predicting what will make us happy. More generally, we refer to this process as affective forecasting, predicting our future emotions, usually in response to some present or possible future event (Kurtz, 2018). And, similar to what we saw about human reasoning, this process is prone to errors. This is bad enough if we just consider getting the prediction wrong. When we plan future events on the basis of these incorrect predictions, the problem gets much worse.

Of course, much of the time, we do not make these errors. For example, you might choose to go running because you predict it will put you in a good mood. Many people eat chocolate, or some other favorite food, for the same reason. An individual might predict that if she has an argument with her husband, she will be in a bad mood for the rest of the evening. Very commonly, predictions such as these are exactly right (Loewenstein & Schkade, 1999). Problems often occur when we think more long-term and large-scale, however. We have a tendency to be pretty bad at predicting how changes in our major life situations will or will not impact our long-term well-being. The two essential errors we make are called impact bias and durability bias. So we are not saying you are going to be completely wrong in most cases. We are saying that people have a tendency to overestimate the intensity of their future emotions (impact bias) and how long the effects will last (durability bias).

Consider careers. Suppose you are fired from a position that you liked a great deal and had hoped to keep for many years. How would that influence your happiness over the next few years? If you are at all like a group of college professors who had a similar experience, the answer is, “not as much as you might think.” Let us explain a bit. When professors begin their careers, they are essentially “on probation” for a number of years and are known as assistant professors. At the end of the probationary period, perhaps 3 to 6 years, if the assistant professor is granted tenure, he or she gets to keep the position and is promoted to associate professor (and later, to full professor). If tenure is not granted, the assistant professor is essentially fired. Daniel Gilbert and his colleagues (1998) asked 97 assistant professors to predict how much the eventual decision about their tenure would affect their long-term life satisfaction. They predicted that they would be significantly happier if they received tenure and that the feeling would last for five years. The research team compared these predictions to the actual life satisfaction of 123 former assistant professors, some of whom had received tenure and some of whom had not. Contrary to what the current assistant professors predicted, the tenure decision had had virtually no impact on overall life satisfaction, even in the first few years after the decision, an example of both impact and durability biases.

Gilbert’s research team repeated this kind of comparison for being selected or rejected for a job, breaking up with a romantic partner, seeing a preferred candidate win an election, discovering from a personality test that one has an undesirable personality, and reading about the death of a child in a newspaper article. For all of the situations, the respondents overestimated the long-term impact of these events on overall life satisfaction or happiness.

Because people do not realize that many life changes and situations have a relatively small role in happiness, they make important decisions and plan their lives based on faulty predictions. For example, many people select locations to live based on climate; in essence, they believe that a particular climate will make them happy. Indeed, students attending colleges in California are more satisfied with the climate than those attending colleges in the Midwest; but the two groups of students do not differ at all in overall happiness (Schkade & Kahneman, 1998). Other research has shown that people mispredict the impact on future emotions of personal and environmental changes, such as weight gain or increases in air pollution, major job changes, and major health changes (Loewenstein & Frederick, 1997; Wiggins et al., 1992). For example, one study found that people incorrectly predicted their emotional response to positive or negative HIV tests; people who received positive results were less distressed and those who received negative results were less happy than they had predicted (Sieff et al., 1999).

affective forecasting: predicting our future emotions, usually in response to some present or possible future event

durability bias: the tendency to overestimate how long our future emotions will last

impact bias: the tendency to overestimate the intensity of our future emotions

What Does Not Predict Happiness Very Well?

It turns out that many major differences in people’s life situations—or at least the ones that have been researched—have only small relationships with happiness. Consider, for example, what may be the most common belief, that money makes people happy.

Money and Happiness

Even if people do not come right out and state that money buys happiness, many live and make important decisions as if they believe it to be true (Seligman, 2003). The belief is strong, and it has been getting stronger. For example, the Higher Education Research Institute at UCLA has been surveying college freshmen throughout the US every year since 1967. The desire to be “very well off financially” has been the number one rated value for several years in a row, with 84% of students rating it as essential or very important in 2019 (Stolzenberg et al., 2020).

So nearly 85% of college freshmen believe that it is very important or essential to be very well off financially. Are they correct? Not really. There is almost certainly a positive relationship between money and happiness, but as we have been hinting, the relationship is weaker than many people believe. The research is quite consistent. Among the very poor, an increase in income is likely to lead to an increase in happiness. Once you reach a level at which your basic needs are met, however, more money has a smaller impact on your happiness (Diener & Diener, 1995; Diener & Seligman, 2004; Helliwell, 2003; Schyns, 2003). As Daniel Gilbert, one of the key researchers in this area, has put it, there is more difference in happiness between earners of $15,000 and $40,000 per year than between earners of $100,000 and $1,000,000 per year.

Overall, income explains about 9% of the variability in happiness ratings. Although this is a low overall amount, it does reveal that there can be substantial differences between people at the top and bottom of the income scale (Luhmann & Intelisano, 2018), so again, we are not saying there is no relationship. We are saying that it may not be worth it to choose a $60,000 job you hate over a $50,000 job you love.
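For readers who think in terms of correlations, “explains about 9% of the variability” translates to a correlation of roughly .30, because the proportion of variance explained is the square of the correlation coefficient. This is our own back-of-the-envelope conversion, not a figure reported in the cited chapter:

\[ r^{2} \approx 0.09 \;\Rightarrow\; r \approx \sqrt{0.09} = 0.30 \]

A correlation of that size is real but modest, which is exactly the point of the preceding paragraph.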

Interestingly, researchers have also discovered what they call the “dark side” of the American Dream, although by now, the research has been conducted in other areas of the world, too. Surveys of 18-year-olds and adults in the US, college students in the US and South Korea, and business school students in Singapore found that individuals who said that acquiring wealth was an important goal in their lives had lower psychological well-being and energy, were less productive in school, work, and community, and had more anxiety and depression than those not so concerned about acquiring wealth. Those who reported that their important goals were not wealth but rather personal growth, self-esteem, good relationships with family and friends, and a desire to improve the world fared better on most of the measures (Kasser & Ahuvia, 2002; Kasser & Ryan, 1993, 1996; Kim et al., 2003).

You might note that these studies measured only whether people have goals to acquire wealth, not whether they achieved those goals. Fair enough. The best way to determine the relationship between happiness and a goal of acquiring wealth is through longitudinal research (examining groups of individuals over time), which one study has managed to do. The study found that when college students who had the goal of acquiring wealth reached their late 30s, those who earned high incomes did not score lower on measures of life satisfaction, as you might expect from the earlier “dark side” research. On the other hand, they did not score higher, as you would expect for people who had achieved their life’s goals. Further, whether they had achieved their goal or not, the participants who had the goal of acquiring wealth had lower satisfaction with their family and friends, so there may still be lingering negative effects of focusing on money (Nickerson et al., 2003).

Taken together, these results suggest that materialism, placing importance on money, possessions, image, and status, is consistently associated with lower levels of happiness, a conclusion verified by a meta-analysis of 179 individual studies (Dittmar et al., 2014; Kasser, 2018). Further, longitudinal studies covering time periods ranging from one year to 12 years found that as people’s materialism increased, their happiness decreased (Kasser, 2018).

But wait, you might say. My uncle makes $3 million per year; he goes on many exotic vacations, drives a $90,000 Porsche and a Tesla, and he has many friends and is very happy. We have two thoughts for you. First, you cannot be sure that someone is happy by seeing how he spends his money. It may just be that he is going on all of those vacations and buying all those things because he is unhappy and is searching for some experience or magical purchase that will make him happy. Second, and more importantly, you cannot argue from individual people that something is true in general; you cannot generalize from testimonials and case studies. (We have lost count of how many times we have made this point, but by now we are sure you get the idea that it is an important one.) For every happy rich person you find, we may be able to find several happy poor people, several unhappy rich people, and several unhappy poor people. Psychologists do research using large numbers of people so we do not have to rely solely on individual cases to draw conclusions.

On the other hand, we have little doubt that some people can use their money to purchase vacations and expensive products that make them happy. If they had not had the money, however, perhaps they would have found another way to be happy. Consider a finding from a survey of lottery winners. These newly rich people reported that they derived less pleasure than non-winners from everyday events, such as reading or eating (Brickman et al., 1978). And anyway, it may be that buying expensive products is not the most effective way to use the money. Individuals report that they derive more happiness from purchases that lead to experiences (say, concerts, travel) than material purchases (such as clothing, jewelry, electronics), regardless of the cost of the purchase (van Boven & Gilovich, 2003).

Other Potential Sources of Happiness

You might now be able to guess that it is not just money that makes a relatively small contribution to our long-term happiness. In reality, researchers have found many additional major life circumstances that have only a small impact on happiness. Factors such as age, gender, race, income, marital status, education level, physical attractiveness, and health have only small correlations with happiness or life satisfaction (Campbell et al., 1976; Diener et al., 1995; Okun & George, 1984).

You might be tempted to think that, although individually these factors may not have much of an effect on happiness, perhaps in combination they are the formula. For example, maybe being married with a high income, good health, and lots of education together lead to happiness. There is probably some truth to this, but not as much as you might think, and the information may not be particularly useful anyway. First, many of these factors themselves are correlated with one another. For example, income tends to go up with education, so the two factors would probably not contribute independently to happiness, even if they did cause it. (Of course, because these studies are not experiments, we cannot say with confidence that any factor is a cause at all.) Second, several of the factors (for example, age, ethnicity, intelligence, and health to a degree) are not exactly under our control, so even if they were independent causes, we could not use them to enhance our own happiness.

When a Really Good Thing Is Not Enough

It is true that changes in one’s life situation, such as a sudden windfall of money, can lead to an immediate increase in happiness. The problem is that the increase tends to be short-lived. Similarly, although negative events cause an initial large increase in sadness and other negative emotions, the effects often wear off over time (see durability bias above). How can that be? Why do these major life changes not have an impact on our long-term happiness? One major reason is an extension of some ideas from research into Sensation and Perception. We judge the brightness and movement of objects by comparing them to surrounding objects. We are relatively insensitive to the absolute levels of motion or brightness, but extremely sensitive to contrast, or changes. So in order to sense some stimulus over time, it must change. If it does not change, we adapt to it and stop sensing it, a phenomenon called sensory adaptation (Module 12). It is a remarkably efficient strategy that saves our brains the trouble of paying attention to the vast majority of our world that is exactly the same as it was one second ago.

We make judgments about our experiences using a similar process. We adapt, or get used to, situations that do not change over the long-term. These unchanging aspects act essentially as a background and do not enter into our judgments about our current happiness. The idea has been dubbed hedonic adaptation by researchers to distinguish it from sensory adaptation (Frederick & Loewenstein, 1999). Basically, because of hedonic adaptation, we will often judge our happiness by comparing our current situation to the background level determined by the recent past. If it is markedly different, then we may notice that we are particularly happy or unhappy.

So, if you win the lottery tomorrow, you will jump up and down and laugh at how wrong your psychology textbook was about money and happiness (perhaps in your new BMW on your way to the Lexus dealership). Wait a year, though, until after you have had time to adapt to an opulent, luxurious lifestyle. You might find that you will be no happier than you are today.

There are important violations of this general rule, however. In other words, there are some changes to our lives and circumstances that can lead to long-lasting changes in our happiness. Generally speaking, people tend to adapt within a few years to marriage, divorce, and becoming a parent. People also can adapt to having a spouse die, but it can take longer. Research has found that unemployment and becoming disabled can lead to happiness changes that are permanent in some people (Luhmann & Intelisano, 2018). It is important to note that the events that seem to lead to permanent changes are negative life events.

hedonic adaptation: a phenomenon in which we tend to adapt to our circumstances and judge our happiness by comparing the current situation to the recent past

materialism: placing importance on money, possessions, image, and status

What Does Predict Happiness?

So far, we have told you about what does not make us happy (or at least what does not have much effect). Remember, though, that the goal of positive psychology is to discover what does lead to positive emotions. It would be quite odd, then, if we were to leave out that part of the equation. Although it is obviously important to realize that some commonly believed “causes” of happiness will not likely have the expected effects, we certainly want to know what does have a major effect on happiness.

You have seen one suggestion in the “dark side” research. The goals that the happier people had were personal growth, self-esteem, good relationships with family and friends, and a desire to improve the world. Note that the goal to be happy was not included. As the philosopher John Stuart Mill astutely noted, “Ask yourself whether you are happy, and you cease to be so.”

Genetics plays a role in nearly all psychological phenomena, so it should come as no surprise to you that happiness is no exception. Researchers have estimated that the heritability of happiness is about 30% – 40% (Røysamb & Nes, 2018). In other words, 30 – 40% of the variability in people’s tendency to be happy is related to differences in their genetic makeup, and 60 – 70% is related to differences in their environment. So, although genes play a significant role in giving us a predisposition to be generally happy or unhappy, there is room for a substantial contribution beyond that. In other words, we have many opportunities to deviate from the level of happiness that is suggested by our particular genetic makeup.

Cognitive Factors and Happiness

Two factors with reasonably strong links with happiness are subjective health and satisfaction with religion (Argyle, 1999). Note that these factors are not simply being healthy or religious but rather believing that one is healthy and being satisfied with one’s religion. So happiness in these cases is not just a matter of environmental circumstance. The beliefs that people hold and the judgment styles that they adopt can influence their happiness. As you might expect, then, when people have certain characteristics, such as optimism or hope, they are very likely to be happy (Buchanan & Seligman, 1995; Snyder, 1994).

These cognitive factors even include basic phenomena like attention. For example, people who suffer from anxiety and depression tend to pay attention (measured by eye-tracking, as where you look is a very good indicator of what you are paying attention to) to negative and threatening stimuli (Armstrong & Olatunji, 2012). When researchers have trained individuals to change what they attend to, happiness ratings changed correspondingly (Wadlinger & Isaacowitz, 2011).

One very important factor that researchers have focused on is gratitude, being thankful to an outside source for some positive situation or outcome. Correlational and, later, experimental research has consistently indicated that gratitude is associated with and causes happiness. For example, several studies have followed research participants who have been randomly assigned to express gratitude or complete some control task over time and found larger happiness increases for the gratitude groups (Margolis & Lyubomirsky, 2018). Some observers have feared that gratitude is limited in its usefulness because sometimes there is no outside source to thank. Appreciation, acknowledging the positive situation, finding meaning, and experiencing positive emotions connected to that situation, however, appears to serve the same purpose, as the two concepts appear to share the important essential features (Adler & Fagley, 2005; Wood et al., 2008).

gratitude: being thankful to an outside source for some positive situation or outcome

appreciation: acknowledging a positive situation, finding meaning, and experiencing positive emotions connected to it

Think Small

Think about what it means to be a happy person (as opposed to happy right this moment). Although there may be different definitions, one useful one is “having many episodes throughout the day that are accompanied by positive emotions, such as happiness, contentment, and satisfaction.” Thus, we need not look at general, long-term judgments of how happy an individual claims to be in life. We learn a lot by asking how many times she felt happy today and what she was doing when she did. When researchers ask the question this way, they find several activities that are associated with high levels of positive emotions. A recent survey of 1,000 women, for example, found that having sex, socializing, relaxing, praying or meditating, eating, and exercising were accompanied by the highest levels of happiness and enjoyment. The researchers concluded that daily features of life have a large influence on a person’s mood and enjoyment of life. Long-term features of one’s life situation have an influence only when you focus on them—for example, right after they happen or when something draws attention to them (Kahneman et al., 2004).

Suppose after an exam you walk out feeling very uncertain; there were several questions that you are afraid you got wrong. When you go to class on the next day, you discover that you got all of the uncertain questions right, and you got the highest grade you have ever received on an exam. You are ecstatic and cannot wait to go home and share your good news. You walk in and say, “Guess what, I got a 97% on my psychology exam. It is my highest grade ever, and it looks like I will be getting an A in the class.” Your friend (or boyfriend, girlfriend, spouse, parent) replies, “Huh, sorry, I wasn’t listening, were you talking to me? Did you see this hilarious meme?” or, worse, “Big deal. I am not in the mood to listen to your bragging. I happened to fail that test, and I need to figure out how I am going to pass this class.” All of a sudden, your ecstatic mood is gone. You may even feel angry or depressed because your friend did not appreciate your good news.

Researchers, too, have discovered this phenomenon. Although good news can make you happy, sharing that news has benefits that go way beyond the news itself—but only if it is constructively and actively received (Gable et al., 2004). So you would have gotten an extra boost if your friend had responded, “Hey, that’s great. All of that hard work you put in is really paying off. Maybe I should start studying with you more often.” We hope you remember the next time someone shares good news with you that you have the power to make the person feel even better about it.

Benefits Associated with Pleasant Emotions

Contentment and happiness are goals in and of themselves. Beyond that, happiness and optimism have several benefits. For example, they are associated with perseverance and achievement. Optimistic and happy people are also healthier (Peterson, 2000; Seligman, 1990; Taylor, 1989, 2000). Researchers have even shown that our immune systems are boosted by positive emotions (Salovey, Rothman, Detweiler, & Steward, 2000). Longitudinal experimental research has also found that happiness causes some improved outcomes. In one study, individuals who received training on a set of activities designed to increase happiness (for example, expressing gratitude) improved on happiness ratings, and on both self-reported health and number of sick days taken (Kushlev et al., 2020).

There are also benefits for others when you are happy. Research has shown that people are more cooperative, generous, and helpful when they are in a good mood (Gendolla, 2000; Isen, 1987).

We are about to move on to the dangers of pleasant emotions. Do not be fooled by the fact that the following section on the perils is longer. These benefits that we just described are extremely important to both mental and physical well-being.

Dangers Associated with Pleasant Emotions

Even if you agree with the value judgment that it is a worthwhile goal to enhance our happiness, you should be aware of the potential dangers associated with some pleasant emotions. Quite simply, the fact that something feels  good does not necessarily mean that it is  good. Think about the thoughts behind some of these pleasant-feeling emotions:

  • Schadenfreude (pronounced sha-den-froy-duh) is a German word that means “the malicious enjoyment of the misfortunes of others” (Oxford English Dictionary). For example, a disliked rival fails an important exam, and we may be secretly glad. Many people take pleasure in witnessing the failures and setbacks of others, especially when they feel threatened by the others and especially if the others’ prior successes had been judged undeserved (Feather & Sherman, 2002; Leach et al., 2003). As you may realize, schadenfreude is a serious barrier to fighting against stereotypes, prejudice, and discrimination.
  • Contempt, feeling that you are superior to another person, also feels good yet is very damaging. Contempt may be the most destructive emotion for a marriage (see Module 22; Gottman & Silver, 1999). Indeed, researchers have found that contempt is common among married couples who are violent toward each other (Holtzworth-Munroe, Smutzler, & Stuart, 1998). We are sure it comes as no surprise, then, that contempt is associated with conflict and problems in other relationships, such as between parents and adolescents (Beumont & Wagner, 2004).
  • Hubris is the dark side of pride. According to psychologist Michael Lewis (1992), pride occurs when you feel good about yourself because of a specific action. A child may be proud that she got a high grade on an exam, that she won an athletic competition, or that she cleaned her room without being asked. Hubris, on the other hand, is a good feeling about yourself that is completely unrelated to any specific actions. In essence, it is saying, “I am great because I am me.” Lewis notes that hubristic people have poor interpersonal relationships and often feel contempt toward other people and that hubris can be linked with grandiosity and narcissism in extreme cases. We tend to use the word pride  for both kinds of feelings, but it is probably worth paying attention to the distinction because hubris is clearly a good-feeling emotion that has negative consequences. Note also that, although pride is considered a positive emotion in western cultures, it is a negative one in many Asian cultures because it separates one from others in the community.
  • Unrealistic optimism is not helpful, even though we typically think of optimism as good. A college sophomore we knew a few years back turned to his best friend one day, cigarette in one hand and a beer in the other, and said, “I’m going to get in shape by next week.” Seeing as he had not exercised once in the previous 18 months and was at least 30 pounds overweight, it strikes us that his optimism was a bit unrealistic. As it turns out, he did nothing at all to get into shape. On the other hand, an individual who is optimistic that he can get in shape and lose 30 pounds over the next six months and has a realistic idea about how to accomplish it has a much greater chance of success. Part of the puzzle is a trait known as self-efficacy, the belief that one has the ability to perform a task or reach a goal (Bandura, 1977). You can think of self-efficacy as a specific kind of optimism, a (realistic) optimism, perhaps derived from past successes, about one’s own abilities (Cervone, 1997).

It turns out that emotions often interact with other behaviors, sometimes in ways you would not expect and in ways that make “positive” emotions bad and “negative” emotions good. Let us close this section with a few examples:

  • When people feel guilty, they are more likely to cooperate with others, so it is not only pleasant emotions that can increase helping behavior (Ketelaar and Au, 1999).
  • When people are happy, they are more likely to use stereotypes when making judgments about people. When they are sad, they pay more attention to specific information about individuals (Isbell et al., 1999).
  • When people are in a good mood, they are more likely to fall for the correspondence bias, in which they (often mistakenly) attribute people’s behavior to personality and ignore situational influences (Forgas, 1998).
  • When people are in a good mood, they are less accurate at recalling events that they have witnessed and make less persuasive arguments than when they are in a bad mood (Forgas, 2006).

schadenfreude: the malicious enjoyment of the misfortunes of others

contempt: the feeling that you are better than someone else

hubris: a good feeling about yourself (similar to pride) that is unrelated to any specific actions

unrealistic optimism: the overestimation of the likelihood of desirable events or outcomes and the underestimation of the likelihood of undesirable events or outcomes

self-efficacy: the belief that one has the ability to perform a task or reach a goal

  • After reading this module, what could you do to make your life happier? Is this any different from what you would have said before reading?
  • Think of the last time you tried to share good news with someone. How did the person respond? Did the response make you feel better or worse?
  • What are some of the daily events that often make you happy?
  • Do you agree or disagree with the idea that humans have a right to be happy and that the purpose of our life is to move toward happiness? Why?
  • Try to think of some examples of schadenfreude, contempt, hubris, and unrealistic optimism in yourself or others. Were the outcomes negative or positive?
  • Try to think of some examples of when you were helpful, generous, or cooperative while you were in a good mood.

25.2 Prosocial Behavior

Are people basically good or bad?

You might remember this as one of the enduring questions from philosophy that psychology has tried to answer empirically. And by now, of course, you may realize that this is a false dichotomy. The answer need not be one or the other; rather, it is some combination of “both.” Personality psychology, through its recognition of traits such as agreeableness, might suggest that part of the answer might be that some people are basically good while others are basically bad. Social psychology, on the other hand, through its recognition of the power of situations in determining behavior, might suggest that sometimes, people are basically good and sometimes, they are basically bad. Both suggestions are probably correct.

Let us remind you for a moment about the bystander effect from Module 21. Remember, this concept was developed after the discovery that some people fail to help in an emergency. We suspect that many people who are aware of the bystander effect remember only that part of the lesson, that some people fail to help. To our minds, the more important lesson is that many people do help, and many factors can increase the likelihood that people will help. In essence, that is the subject of the present section, prosocial behavior, defined simply as behavior that is intended to help other people. As you will see, there are situational, dispositional, and biological factors that are related to this important, but easy to overlook, behavior.

And it can be easy to overlook. For example, during the spring and early summer of 2020, as scientists promoted the wearing of masks to prevent the spread of COVID-19, many observers were dismayed that many people were unwilling to engage in a seemingly simple behavior that would help others (even before the benefits to the mask-wearers themselves were known). Every week, it seemed, news reports highlighted the throngs of people who refused to comply, sometimes violently. The New York Times reported that most people actually did wear masks, however. According to several (self-report) surveys, about 80% of the public wore masks frequently or always when they could not maintain physical distance in public. One survey found that only 14% of people never wore masks (Katz et al., 2020). In other words, the vast majority of the population reported prosocial behavior, and the news reports were emphasizing the behaviors of a relatively small group. When we overestimate the prevalence of not wearing masks because of the ease of recalling examples of non-wearers, we are falling victim to the availability heuristic (Module 6).

Why Do We Help Each Other?

In Module 21, we told you that reciprocation may be a human universal. And this is certainly one reason that people help others, that they expect the others to return the favor someday. How, then, can we explain the interesting case of altruism, helping behavior when no such expectation of reciprocation exists?

Helping behaviors are no different from many other human behaviors in many respects. Thus, they are governed by the same laws. To be sure, we can learn these behaviors through observation and through operant conditioning (Module 6). For example, when children watch their parents and older siblings engage in helping behavior, they can learn it themselves. We can also learn from larger elements of culture. For example, many religions teach followers the importance of charity and helping others.

Let us focus on the operant conditioning idea for a moment because it quickly gets interesting. It is a bit crass to suggest that we do good deeds for others to get a reward, but it certainly sometimes happens that we do get those rewards. Picture the middle schoolers who win citizenship awards at 8th-grade graduation because of their habitual helpfulness, or a thankful dog owner offering money to the kind person who returned a lost puppy. We once knew a college student who was given $100 in return for giving up her seat on an airplane so a married couple could sit together. It is not unreasonable to expect that these pleasant outcomes would make similar behaviors more likely in the future (positive reinforcement).

But wait, have you ever seen someone refuse to accept a reward? And frankly, the tangible rewards seem few enough that it would be difficult to expect them to consistently lead to high levels of helping behavior (remember, with partial reinforcement, it is difficult to establish a behavior, but once it is established it lasts a long time).

We are acting like this is a big puzzle, but the truth is, many of you figured it out a long time ago. Often, we do not help others because we get a reward, or even because we expect a reward. Sometimes, we just like the way it makes us feel to do good deeds. In other words, we might find reward or positive reinforcement from the pleasant emotions that result from helping others. Indeed, volunteering, caring for family members and friends, and donating money are all positively correlated with happiness (Helliwell et al., 2018). And before you remind us about correlation and causation from Module 2, we should point out that experimental research suggests that the relationship is causal. For example, one study (and don’t worry, there are others) found that research participants who were randomly assigned to do five good deeds for other people on one day during each week had more positive emotions and fewer negative emotions over a six-week period than participants instructed to do something good for themselves (Nelson et al., 2016).

OK, so it feels good to help others, even in the absence of rewards, but why? We think biology and evolution offer quite a good explanation. Humans are social animals, more social than most species. As such, we are disposed to creating, maintaining, and strengthening social connections. According to the social brain hypothesis, in fact, our brains are so large and complex compared to other animals precisely to allow us to function in complex social groups (Dunbar, 1998; 2016). Humans are not a particularly strong or fast species. Alone, a human being living in prehistoric times would seem to be easy prey and a poor predator. Together, however, a group of humans can accomplish a great deal more and thus achieve the goals to survive and reproduce. In other words, a key element of the social complexity that our brains have evolved to accommodate is the need to connect with others socially to achieve goals that we cannot achieve on our own. Evolution has a way of making behaviors that are helpful for survival and reproductive success pleasant to ensure that we will engage in them (e.g., eating and sexual behavior). So perhaps this is why we find it so enjoyable to help others. It should come as no surprise to you, then, to discover that prosocial behaviors that strengthen social connections are particularly associated with positive emotions (Helliwell, 2018).

What are Some Other Traits That Lead to Prosocial Behavior?

Earlier, we noted that people are more generous and helpful when they are in a good mood. And we learned from the bystander effect in Module 21 that people will help in an unambiguous emergency situation where they are the only helper available and they know how to help (Darley & Latane, 1970). Let us conclude this section with a couple of additional traits that are associated with prosocial behavior.

One key trait is humility . In section three of this module, we will cover self-concept and self-esteem more fully. Humility is obviously related to both, but because of its role in prosocial behavior, we will cover it in this section.

We hope you recall the HEXACO model from Module 19 (Ashton & Lee, 2007). This was the trait approach that expanded the Big Five personality factors (Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism) to include Honesty-Humility. This additional important trait is subdivided into separate facets of Sincerity, Fairness, Greed Avoidance, and Modesty. The fact that jumps out to us is that three of the four facets seem to be variations on the Honesty part of the trait. Only Modesty is obviously related to humility, which is unfortunate because humility is a remarkable trait.

The Modesty facet is related to the idea of not seeing yourself as overly important and not feeling entitled to special privileges. Other researchers have broadened the concept of humility, and we like this broader conception. Weidman et al. (2018) noted that there may be two types of humility. One is related to the appreciation of others and self. This dimension includes aspects such as being kind, compassionate, and respectful, as well as having an awareness of one’s weaknesses and limitations while maintaining a healthy regard for oneself and an appropriate pride in one’s accomplishments. The other type is closer to the Modesty facet on its own, centered on downplaying the self. It may be that both types of humble people are more generous and more likely to help people in need than non-humble people (Exline & Hill, 2012; LaBouff et al., 2012). Individuals who have too much of the modesty-only type, however, might suffer some ill effects, such as lower self-esteem (Weidman et al., 2018).

25.3 Self-Concept and Self-Esteem

  • What are some of the benefits of high self-esteem or dangers of low self-esteem that you have heard about? Are there any that you think might not be true?
  • What sorts of behaviors do you think help you to raise your own self-esteem?

Humanistic and positive psychology have a deep interest in the self: how people regard the self, how they can enhance their sense of self and their self-esteem. A person’s general outlook on life has a great deal to do with her or his sense of self. But the self or identity on its own is a concept seen throughout social and personality psychology and in developmental psychology. It was also one of the key ideas examined by the first psychologists.  William James (1890) divided the self into three parts: contents of the self-concept, feelings and emotions about the self, and the actions to which they lead. Psychologists through the years have devoted a great deal of effort to describing these elements of the self. The main focus in this section is what we have learned about maintaining a sense of self that contributes to well-being.

Contents of the Self-Concept

We want you to consider seriously the term “self-concept.” It is, literally, a concept about the self. In other words, we can understand our thinking about our self by referencing many of the principles that psychologists have learned about other concepts (Cantor & Kihlstrom, 1987; Markus, 1977). You may be resisting this idea, thinking that your self-concept is far more complex than your concept of apple, for example. What if you were the world’s leading expert on apples, though? It is reasonable to expect that your concept of apple would be far more detailed, complex, and flexible than that of someone who simply eats an apple one, two, or three days per week. Well, you are the world’s leading expert on another concept, namely you. Our self-concepts, because of the rich set of detailed knowledge that we have about ourselves, are extremely complex, but they are concepts nonetheless.

We have a sense of self that feels constant—that is, a self-essence or core self. That self is defined, in part, by our stable relationships with other people (Baumeister, 1998). At the same time, we have very different conceptions of the self in different spheres of life or in different situations. For example, you might be competitive in sports versus cooperative at work, playful with your friends versus serious with your partner. The idea of a complex self-concept helps solve this apparent puzzle. Factors such as our motivations and emotions, the strategies we use to access the knowledge, and the environmental context help determine which self-information is called to mind in any particular situation (Showers & Zeigler-Hill, 2003). For example, if you are in a good mood, you might be more likely to think about your playful side. If you are giving an important speech in front of a large group of students and faculty members, perhaps you would be aware of the serious, studious, professional part of your self.

We also extend our self-concept to include future possible selves . Each of us may have sets of ideal, expected, and feared “selves” that we might become in the future. We use these possible selves to motivate us, especially when we have strong positive and feared possible selves. For example, a student who hopes to be a doctor yet fears that she will fail to graduate and end up working at Wal-Mart for the rest of her life is likely to be very motivated to work hard at school. Our future possible selves also protect us by providing hope (Cross & Markus, 1991; Oyserman & Markus, 1990; Frazier et al., 2000; Morfei et al., 2001).

Feelings and Emotions About Self: Self-Esteem

William James (1890) defined self-esteem rather simply, as the ratio of successes or accomplishments over the opportunities that we have to succeed. For example, if you are successful in most or all of the opportunities, you have high self-esteem. According to James, each person gets to choose the aspects that enter into the calculation. For example, some of the situations that you judge important may involve accomplishments in your career or athletic pursuits and goals related to generosity and honesty. Another individual’s important situations may involve being a good friend and being a hard worker.
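Rendered as a formula, here is a rough sketch of the ratio James described in words (our notation, not his):

\[ \text{self-esteem} = \frac{\text{successes}}{\text{opportunities to succeed}} \]

Notice that the ratio implies two routes to higher self-esteem: succeed more often, or count fewer domains among the opportunities that matter to you.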

Although thinking about self-esteem has advanced a great deal since James’s time, and different researchers may define it differently, the basic idea is still the same. The degree to which you believe you “measure up” on aspects of your self-concept that you judge important is essentially your self-esteem.

“Everybody knows” how important it is to have high self-esteem. People have judged it so important that in the early 1990s the Department of Education of California made it a primary goal to improve children’s self-esteem (Baumeister, 1998). State officials believed that raising children’s self-esteem would reduce violence, crime, drug abuse, teen pregnancy, and child abuse and would improve school achievement (California Task Force, 1990).

The problem is, these conclusions were based on non-experimental research and everyday observations only. The public has assumed that high self-esteem leads to good outcomes, but in fact the reverse, that good outcomes lead to high self-esteem, is just as likely true. For example, there is no doubt that children who have high self-esteem tend to do better in school than those with low self-esteem. It may very well be, though, that success in school leads the children to feel good about themselves, so the high self-esteem is a consequence of success rather than a cause. Robyn Dawes (1996) has gone as far as labeling the belief in the power of improving people’s self-esteem as one of the key myths of psychology.

Several years ago, Roy Baumeister and his colleagues (2003) attempted to examine the research literature on the effects of self-esteem. They focused primarily on studies that used objective measures of various outcomes in school, health, work, and social interactions. Objective measures are important to prevent the outcomes from being contaminated by the participants’ own self-esteem. For example, you cannot ask people with high self-esteem whether they believe that others like them (the answer is usually yes); you have to ask the other people directly (the answer is sometimes no). The research team found that nearly all of the believed benefits of high self-esteem do not hold up under the scrutiny of comparison to actual research. In other words, Robyn Dawes was very nearly exactly correct.

Baumeister’s team concluded that high self-esteem does not lead to better school performance; it does not make one less likely to smoke, drink alcohol, use drugs, or engage in sexual activity. It does seem to protect against eating disorders and may provide some protection against depression. The most consistent positive finding is that high self-esteem makes people more persistent, so they are less likely to give up after failure. It also probably makes people happier; in short, it feels good to have high self-esteem.

Given this observation that it feels good to have high self-esteem, it would seem a reasonable idea to bolster it, even if self-esteem does not have the strong benefits so many people believe that it does. Unfortunately, however, the fact that it feels good to have high self-esteem does not necessarily mean that it is good to have high self-esteem. There is evidence showing that inflated self-esteem can be a problem for some people. For example, many people believe that low self-esteem leads to violence or aggression. In fact, there is little evidence that this is true, and often individuals with very high self-esteem commit acts of violence and aggression (Baumeister et al., 1996). Certainly, not everyone with high self-esteem is prone to aggression. People to whom high self-esteem is extremely important, however, are especially likely to become aggressive when their self-esteem is challenged (Bushman & Baumeister, 1998).

It is simply the case that not all self-esteem is alike. Some people who have high self-esteem probably do benefit from it. On the other hand, some people with high self-esteem try too hard to boost or keep their high self-esteem, and for these people, high self-esteem is likely to lead to problems (Tangney & Leary, 2003). One way to tell whether high self-esteem will be good or bad is probably to look at how secure that self-esteem is (Kernis, 2003). Some people with high self-esteem need to defend it; their self-esteem is unstable or fragile. On the other hand, when people are secure in their high self-esteem, it may be associated with more positive outcomes. For example, people with secure high self-esteem and fragile high self-esteem score on opposite ends of scales that measure anger and hostility (Kernis et al., 1989).

Actions That Enhance Self-Esteem

Originally, William James described the key actions associated with the self as self-seeking and self-preservation behaviors. Although the general self-concept plays a role in a great many of our actions throughout our lives, we would like to focus on those that enhance our self-esteem.

Roy Baumeister (1998) has noted that it certainly feels good to have high self-esteem. Thus, we have strong tendencies to try to increase it, through what are called self-enhancement behaviors, or self-serving biases. People often naturally seek situations and engage in behaviors that will increase their self-esteem, and as a consequence, their self-esteem tends to be high (Baumeister et al., 1989). For example, people have a strong tendency to take credit for their successes and to blame outside causes for their failures (Zuckerman, 1979). Consistent with this idea, students give lower course evaluation ratings to teachers when they get lower grades than expected but do not give higher ratings when they get higher grades than expected (Griffin, 2004). Our self-serving biases also lead us to overestimate the amount and importance of our contributions to group tasks, a fact you might want to remember when you participate in group projects in school or throughout your career (Tesser, 2003). These are not isolated examples; on the contrary, there is an enormous list of behaviors and attitudes for which researchers have demonstrated that we judge ourselves very favorably, including generosity, fairness, morality, cooperativeness, kindness, loyalty, sincerity, honesty, and politeness, among others (Epley & Dunning, 2000). And, in what may be our favorite example of a self-serving bias, many people believe that they are less prone to self-serving biases than others are (Pronin et al., 2002).

Another sort of bias is our tendency to choose for ourselves the basis of our self-esteem (James, 1890). It should not surprise you that we are likely to choose factors that will make us feel good about ourselves. For example, if you are an athlete, you probably do not base your self-esteem on your singing ability (unless you are a singing athlete). Of course, then, people are likely to seek out situations and engage in behaviors that will help them to feel good about themselves in the domains that they have chosen (Crocker & Wolfe, 2001; Crocker et al., 2003).

There are a number of additional strategies that people use to enhance their self-esteem; let us describe two more. You will probably be able to recognize these in yourself or other people (try not to fall for the self-serving bias of only recognizing these tendencies in other people). First, when some people face situations in which they might damage their self-esteem by failing, they lower their expectations, a behavior researchers have called defensive pessimism (Crocker & Park, 2003; Norem & Cantor, 1986). For example, you will commonly hear classmates predict that they will not do well on an upcoming exam. This is often an attempt to manage expectations, so the students are not too upset if they do poorly.

Individuals can also lower their expectations, and thus protect their self-esteem, by self-handicapping , engaging in behaviors that sabotage their chances at success (Jones & Berglas, 1978). For example, a student might agree to work when his boss calls him the night before a big exam. Then, if he does poorly on the exam, he can attribute it to the handicap of not being able to study instead of to a lack of ability. By the way, if an individual succeeds after a bout of self-handicapping, the effect on self-esteem is doubly positive. He has the benefit of succeeding at the task even in the face of the barriers that were in the way (Tice, 1991).

Although you can (and probably should) read the information about self-serving biases (and other concepts in this module) as cautionary notes, there is another perspective to keep in mind. Perhaps His Holiness the Dalai Lama was right; the very motion of our life is toward happiness. If we can avoid such pitfalls as unrealistic optimism, fragile self-esteem, pleasant emotions that damage other people, and self-serving biases that do more damage than good, most of us have the ability to create our own positive outlook.

self-serving biases: strategies that people use to increase their self-esteem

defensive pessimism: a strategy of lowering one’s expectations in a situation in which failure might damage self-esteem

self-handicapping: engaging in behaviors that sabotage people’s chances at success, so they can lower their expectations and protect their self-esteem

  • Do you have a set of possible future selves? What are they? How do they motivate or otherwise influence you?
  • Which of the self-serving biases are you able to recognize in yourself?
  • What are the important aspects of your own self-esteem? How do they affect your behavior?

Module 26: Consciousness and Sleep

In the 1968 Stanley Kubrick movie 2001: A Space Odyssey,  we are treated to a view of the possible evolution of computer thought. One of the real stars of the movie is HAL-9000, a computer that has more personality than the astronauts on the spaceship HAL controls. HAL has emotions and is intelligent and seemingly aware of itself and of other people. In short, HAL-9000 is conscious, or at least seems to be. But HAL is a character in a story taking place in 2001. It is now 2020, and real computers, able to perform 400 quadrillion operations per second (that is 400,000,000,000,000,000), probably far outstrip the fictional HAL in terms of sheer computing power. These computers can also outperform humans in a great many areas. Supercomputers are used to accurately simulate climate and weather, combine data from around the world to help simulate possible outcomes and develop vaccines and treatments for COVID-19 and other diseases, and play chess (and Jeopardy) at above world-championship level. Still, however, no one is even close to programming a computer to make it conscious.

Of course you know that you are conscious. But what does that mean? The ultimate challenge for a science of the mind may be to figure out what consciousness is. Throughout history, the concept of consciousness has been explained in terms of mystery, wonder, religion, and even weirdness. Inside and especially outside of psychology, you will discover that people have different, sometimes very different, definitions for it. For a scientist, consciousness may be an even harder concept to define than emotion is (see Module 20).

Nevertheless, section 26.1 proposes a working definition. Once we agree on a definition, we can begin to understand some of the psychological principles that underlie consciousness. The later sections, then, turn to some of the changes in consciousness that we can experience. Section 26.2 is about the sometimes mysterious state of consciousness called hypnosis. Sleep and dreaming, as complex, interesting, and important phenomena related to consciousness, deserve a bit more coverage. Section 26.3 describes a lot of our scientific knowledge about sleep and dreams, and section 26.4 covers the most important practical fact about sleep (that most adults do not get enough).

26.1 Consciousness

26.2 Hypnosis

26.3 Sleep and Dreaming

26.4 Sleep Deprivation

By reading and studying Module 26, you should be able to remember and describe:

  • Consciousness and attention, working memory, and explicit memory (26.1)
  • Hypnosis (26.2)
  • Functions of sleep (26.3)
  • Circadian rhythms: suprachiasmatic nucleus (26.3)
  • Sleep stages (26.3)
  • Sigmund Freud’s views on dreams (26.3)
  • Modern dream interpretation (26.3)
  • Dream contents (26.3)
  • Activation-synthesis theory of dreaming (26.3)
  • Neurocognitive model of dreaming (26.3)
  • Effects of sleep deprivation (26.4)
  • Sleep Hygiene (26.4)

By reading and thinking about how the concepts in Module 26 apply to real life, you should be able to:

  • Recognize parallels between dream content and waking cognition (26.3)

By reading and thinking about Module 26, participating in classroom activities, and completing out-of-class assignments, you should be able to:

  • Identify problems in your life that might be a consequence of sleep deprivation (26.4)
  • If you are sleep deprived, explain what might prevent you from increasing the amount of sleep that you get (26.4)

26.1 Consciousness

  • How do you define the word consciousness (without looking in the dictionary)?
  • Think of a time that you were trying to block out some stimulus, such as a sight, sound, or odor, in order to pay attention to something else (for example, trying to block out an argument between two friends while trying to study for an exam). What made it easy or hard to do?

Philosophers have been interested in consciousness for hundreds of years. The most famous and important philosophical debate about it has been whether the mind (the source of consciousness) is separate from the body, a viewpoint called dualism, or whether it is part of the body, a viewpoint called monism. René Descartes (1637/1931; 1641) was the most famous philosopher to argue the dualist viewpoint. The body, being essentially mechanical, must obey the laws of physics. The mind, or consciousness, according to dualists, is not a part of the body, and it is not a physical entity. Therefore, it does not obey the laws of physics, and any attempt to understand it using the same principles through which we understand physical systems will fail. Dualists often, although not necessarily, equate the mind with the soul, the essence of an individual that continues to exist after the body has died. In contrast, Thomas Hobbes, a contemporary of Descartes, championed the monist viewpoint. Monists contend that mind is simply part of the body, that there is no separation. As Hobbes (1650) put it, thinking is motion within the brain.

Today, after more than 350 years of debate, most neuroscientists agree with Hobbes (Crick, 1994): consciousness must be a result of brain activity. But which activity? The problem is that no one has made great progress in figuring out exactly what consciousness is. The last great intellectual quest of Francis Crick, the Nobel prize-winning biologist who co-discovered the structure of DNA with James Watson, was to try to understand consciousness. He ended up scaling back his goals considerably, settling for outlining a blueprint for how to examine consciousness and proposing a model of how one portion of consciousness, awareness of a visual scene, might be realized in the brain (Crick, 1994; Koch & Crick, 2000).

A Definition of Consciousness

In everyday conversation, we tend to use the word consciousness  to describe a state that a person is in. In other words, an individual is either conscious or unconscious, corresponding roughly to whether he or she is awake or asleep (or knocked out, in a coma, or something similar). This seems like a reasonable approach, but it does not take us very far. Consider wakefulness versus sleep. What is it about being awake that means you are conscious and about being asleep that means you are unconscious? Our brains are extremely active during both. You might propose that when you are awake you are aware of the external environment and that when you are asleep you are not. Even that will not work, however. At any given moment, awake or asleep, you are aware of some aspects of the environment and not aware of others. People can be unaware of things going on right in front of their eyes while awake. And even while asleep, you maintain enough awareness of the outside world that you do not fall out of bed.

As confusing as these examples seem, they may help us to solve the problem of defining consciousness. Instead of thinking about consciousness as a state that describes a whole person, we can think of consciousness as a rough synonym for awareness. Whether we are awake or asleep, we are aware of something at all times. In other words, we are always conscious of something. We may be conscious, or aware, of stimuli from the outside world, of our own thoughts and feelings, and of our selves (Schacter, 2000).

Consciousness in the Brain: Explicit-Implicit Memory, Working Memory, and Attention

We are making a little bit of progress. Using the term “awareness” in place of the more unwieldy “consciousness” helps, and we have agreed that this awareness resides solely in the physical brain, but that does not tell us where in the brain this state of mind arises or how. Attention, working memory, and explicit memory are involved, but it is not clear how central to consciousness they are (Koch & Crick, 2000). Many processes and brain areas are involved in creating consciousness. One key idea is that the sharing or exchange of information across different brain areas is important. Dehaene and Changeux (2011) proposed, for example, that awareness arises through the sharing of information among the prefrontal cortex and the parietal and occipital lobes. You may recall that the hippocampus is a brain area that is involved in explicit memory (memory with a conscious “I recall” feeling), but not implicit memory. It may be that this conscious quality is a byproduct of the connections that the hippocampus has with multiple brain areas, again allowing that exchange of information.

You might also recall that in Module 4, we defined working memory as the current contents of consciousness—in other words, what you are thinking about, or aware of, at the current moment. It is a simple-seeming idea, but a great many brain areas are involved. Networks connecting areas of the frontal lobes to the parietal lobes, the prefrontal cortex, and areas in the temporal lobes all seem to be important for working memory (Chai et al., 2018). Our descriptions here might also remind you of attention from Module 13, particularly the process of selective attention: choosing which elements of your experience to focus on and which to filter out. It turns out that attention, too, involves multiple cortical areas throughout the separate lobes (as well as some sub-cortical areas; Lindsay, 2020).

To summarize, the first key idea is sharing information across multiple areas. A second key idea that appears throughout these different descriptions is control. We select in attention, we move information into working memory, we search memory to retrieve information from explicit memory. Many researchers refer to executive control as an essential process that allows consciousness to exist. The area that is most important for executive control is the prefrontal cortex. So although, as we have been emphasizing, multiple brain areas contribute, the prefrontal cortex is a possible candidate for a “consciousness center” because of its role in executive control (Chai et al., 2018; Koch & Crick, 2000; Lindsay, 2020).

  • Do you think that your selective attention ability is pretty good or not so good? Explain why.
  • Imagine yourself as a participant in the “gorilla in the basketball game” study. Do you think that you would have noticed or failed to notice the gorilla? What is it about you that makes you say that?

26.2. Hypnosis

  • Write a short description of what you think it is like to be hypnotized. Of course, if you have been hypnotized before, you can use your experience for the description.
  • Do you think you would want to undergo hypnosis for medical or psychological therapies? Why or why not?

Many people inside and outside of psychology refer to hypnosis as an “altered state of consciousness” (Yapko, 1995). This characterization makes hypnosis sound quite mysterious. We once got our hands on a book that promised to teach us, “How to secretly hypnotize anyone.” (As a special bonus, it came with “How to secretly read anyone’s mind,” at no additional cost.) The ad for the books, in the back of a comic book if we remember correctly, promised that they would teach us how to secretly control friend and foe alike. Hey, for only ten dollars, what do you have to lose? Ten dollars, as it turns out. Apparently, the secret to both secret mind-reading and secret hypnosis was to secretly “join minds” with your unaware subject. Secretly. Unfortunately, the books secretly forgot to explain how one was supposed to accomplish the “mind-joining.”

Perhaps hypnosis is the only legitimate topic in psychology that could be touted in the same advertisement as a mind-reading book. It has an interesting fictional and real history that makes it seem almost magical. The concept of hypnosis has been associated with evil and with the supernatural, as in the creepy stare that Bela Lugosi’s Dracula used to subdue his victims and in the 1960 B-rated horror movie The Hypnotic Eye,  in which a stage hypnotist places attractive women in hypnotic trances so that his psychopathic assistant can attack and disfigure them (at long last, we have recently discovered that this movie is available to stream legally if you are interested). Movies have also depicted hypnosis as a tool to allow people to relive past lives, something that has been claimed in the real world, too. When not evil or supernatural, hypnosis is still often presented as bizarre, a fact you can verify by attending a stage hypnotist’s performance. You may not see someone cluck like a chicken, but you will walk out shaking your head and thinking, “How did the hypnotist get those people to do that?”

Given these popular images of hypnosis, it is easy to overlook the fact that hypnosis is a tool used by some legitimate and competent psychotherapists. The common view in psychology is that it is very likely real but is widely misunderstood. We have a pretty good idea what you can accomplish with hypnosis. It can be used to control pain, manage stress and anxiety, and help clients in psychotherapies change behavior (Yapko, 1995). And yes, for entertainment purposes, a stage hypnotist can probably lead many people to cluck like a chicken. We also have a pretty good idea what hypnosis is not: It is not a mysterious window to past lives or the hidden unconscious of Freudian psychoanalysis. It is not a tool to help uncover hidden or repressed memories, as it is more likely to trigger false memories than uncover genuine ones (Spanos, 1996). It is probably not even an altered state of consciousness, at least not any more than watching television is.

We are not so clear about what hypnosis is, however. Consider this observation: In PsycInfo, the American Psychological Association database of research in psychology, there were 13,862 listings under the subject heading “Hypnosis” in summer 2020. Only 736 of those also used the subject heading “Theories,” and many of those articles are theories of other phenomena that use hypnosis as part of the explanation. In other words, we do not have much to say when it comes to a theoretical explanation of what hypnosis is.

We like to think of hypnosis as a somewhat extreme version of two common phenomena. First, at all times, we focus our attention on some stimuli and block out others, a phenomenon we referred to as selective attention earlier. Second, individuals very commonly comply with requests, orders, or suggestions from other people; in other words, they are obedient or readily persuaded (Module 21). Hypnosis is simply a method that enables one person (the hypnotized subject) to focus his or her attention on another individual (the hypnotist) or on stimuli that the hypnotist emphasizes. The hypnotist then gives suggestions with which the hypnotized subject is likely to comply. Both of these phenomena, our ability to selectively attend and our suggestibility, are quite ordinary. For example, have you ever been so involved in a conversation that you walked right past your destination? This is essentially the same phenomenon as a common experience under hypnosis, a subject’s failure to see an object that should be in plain view. And in 2019, the advertising industry spent 247 billion dollars in the United States alone, counting on people’s tendency to comply with suggestions (Guttmann, 2019).

One last way to show that there is nothing magical about hypnosis is to point out that everything that we know can be accomplished with hypnosis—including controlling pain, managing stress and anxiety, helping psychotherapy clients change their behavior, and even clucking like a chicken—can also be accomplished without it. For example, if you ask people to pretend that they are hypnotized, you can get them to perform just about any behavior that a genuinely hypnotized subject will perform (Barber, 1979; Orne & Evans, 1965; Spanos, 1996). In addition, more than 40 years of research has demonstrated that many of the therapeutic benefits achieved with hypnosis can be achieved equally well without it (Barber & Hahn, 1962; Hyman et al., 1986; Lang, 1969; Spanos, 1991; Spanos et al., 1994).

  • Would you consider letting yourself be hypnotized by a stage hypnotist? Why or why not?
  • Would you consider using hypnosis to get rid of a bad habit or improve yourself? Why or why not?

26.3 Sleep and Dreams

  • What do you think the benefits of sleep are? Be as specific as you can.
  • Do you think of sleep throughout the night as steady, or can you recognize patterns of different kinds of sleep?
  • Describe some recent dreams that you can remember. What, if anything, do you think they mean?
  • Why do you think we dream?

When you are sleeping, you have a fairly dramatic shift in your awareness or consciousness. In essence, you cannot pay attention while sleeping, so there is something very different about being asleep and being awake. Still, we would not want to refer to sleeping as being unconscious, because you are certainly aware of something. For example, you are aware enough of your surroundings that you can be easily awakened at various points throughout the night (although it is quite difficult at other times), and most adults rarely if ever fall out of bed. More notably, perhaps, we become aware of an entirely new world several times every night, the world of our dreams. Thus it is reasonable to think about sleep as a large change in your consciousness but not as a shift into unconsciousness.

Although researchers have been making great progress over the past few decades in figuring out what sleep is and why we sleep, there is much that we still do not know. Sleep is a puzzle. However, it does seem to be extraordinarily important biologically. Think about it. All animals spend a portion of the day or night relatively motionless and completely defenseless. Any predator that can figure out where an animal typically sleeps would have an easy meal just ready for the taking. There must be an enormous biological benefit to sleep to offset that kind of risk.

Most people probably think that we sleep in order to rest, but there is much more to it than that. After all, simply slowing our activity down to a level that is less than usual provides our bodies with rest. The unique benefit of sleep seems to be that it helps us to rebuild and restore the body. For example, we store energy in our brains while we sleep. Glycogen, a substance that our bodies make from the food we eat, is an important store of energy, including in the brain. The longer we stay awake, the lower the levels of glycogen in the brain. When we sleep, the supply of glycogen in the brain is replenished (Kong et al., 2002). Sleeping also helps us to boost our immune system (Everson, 1993; Rechtschaffen et al., 1983; Bryant, Trinder, & Curtis, 2004). And while we sleep, our pituitary gland releases growth hormone, which is important for growth as well as repair and maintenance throughout the body (see 11.3).

Circadian Rhythms and the Biological Clock

As you know from your own experience, we have biological patterns of activity throughout the 24-hour day, which are known as circadian rhythms . At night, while we are sleeping (typically), all physiological activity is slowed: Heart rate and breathing are slow, brain activity is slow, body temperature is reduced. As morning approaches, these systems begin to speed up, peaking often in the early afternoon. Many people experience a slight dip in energy before things pick up again heading into the evening. In the late evening, the body systems start slowing down again until reaching their lowest point again, about 24 hours after the start of the cycle.

Circadian rhythms and the sleep-wake pattern are guided by a tiny section of the hypothalamus called the suprachiasmatic nucleus, which could be considered our “biological clock.” Although the basic circadian rhythm is produced inside of the body, the biological clock uses light from the outside world to fine-tune it to correspond to the 24-hour day. Basically, light hitting the retina tells the biological clock what time of day it is; as light fades, the clock sends signals that cause the pineal gland—another gland located in the brain—to release the hormone melatonin. After a while, accumulations of this hormone make us sleepy. The fine-tuning from naturally occurring light leaves our circadian rhythms a few minutes longer than 24 hours, and the electric lights to which we are exposed every day have an additional effect, stretching our actual circadian rhythms to around 25 hours (Czeisler et al., 1999).

If you use the right amounts of light at the right times, you can reset the biological clock to make an individual sleepy or awake at just about any time you want (Shanahan, Zeitzer, & Czeisler, 1997). If you expose people to light early in the evening, it delays the biological clock, making them stay awake later. Light in the early hours of the morning, say 3:00 am, sets the biological clock earlier, making it easier to wake up in the morning. An alarm clock that turned on the lights in the middle of the night might be quite effective at helping people wake up more easily in the morning. If you decide to invent such an alarm clock and grow rich doing so, remember where you got the idea.

One last interesting point about the biological clock and light: Researchers have discovered that light hitting other areas of the body can also reset the biological clock. In one study, the researchers shined bright lights on the back of participants’ knees for three hours a day and shifted their circadian rhythms as much as earlier researchers had done using light exposure to the eyes (Campbell & Murphy, 1998). Of course, this finding suggests that a “knee-wrap” version of our alarm clock would be just as effective as one that turned on the room lights, without disturbing our sleep at 3:00 am. Again, remember where you got the idea.

circadian rhythms: biological patterns of activity throughout the 24-hour day

suprachiasmatic nucleus: a tiny section of the hypothalamus that could be considered our biological clock

melatonin: a hormone that is released by the pineal gland and makes us sleepy

Stages of Sleep

Sleep is not a constant “unconscious” state; rather, there are four recognizable and somewhat distinct periods of sleep that we repeat throughout the night, and the differences between the stages give us some clues about the functions of sleep and what we lose by not getting enough of it. Although researchers have made great strides in identifying functions of different periods of sleep, a lot of work remains.

The four separate periods are called sleep stages. Altogether, the stages constitute a complete ninety-minute (or so) sleep cycle, which repeats itself—with some changes, as you will see—throughout the night. You can often recognize which stage someone is in by examining brain activity levels and the person’s responsiveness to the external environment. If you have an EEG (electroencephalograph) machine (Modules 1, 11, and 14), go hook yourself up to it and look for the patterns of brain activity associated with the separate stages of sleep. An EEG records the general level and speed of neural activity in different parts of the brain through electrodes that are placed on the scalp. The EEG can pick up on the synchronized pattern of activity that takes place as electrochemical signals flow across neurons. Researchers typically refer to these patterns of brain activity as  brain waves . They differ from each other in strength, speed, and regularity (that is, how synchronized the different areas of the brain are). Generally speaking, the less active your brain is, the slower and more regular the brain waves are. Each of the sleep stages has a characteristic type of brain wave associated with it.

EEG (electroencephalograph) machine: a machine that records the general level and speed of neural activity in different parts of the brain through electrodes that are placed on the scalp

brain waves: synchronized pattern of brain activity that takes place as electrochemical signals flow across neurons

NREM 1 or Stage 1

Right before you fall asleep, the brain waves that the EEG picks up are called alpha waves. They are reasonably fast, fairly strong waves. As you drift off to sleep, you enter Stage 1, and your brain produces slower and less regular waves called theta waves. You can think of entering Stage 1 sleep as being like taking one step into a room and keeping the door open behind you. Although you are in the room, you can get out easily. When you are in Stage 1 sleep, you are unaware of most aspects of the environment. You can become briefly aware of some stimuli, though. Frequently, these stimuli, such as the jerk of a muscle, will wake you up momentarily. This, of course, is the famous “head bob” that we often see during a lecture: people whose heads keep slowly falling forward until they jerk suddenly and startle themselves awake. During Stage 1 sleep you might also experience brief dreamlike images and sensations, such as a bright light, a loud noise, or the feeling of floating or being touched; these are called hypnagogic sensations (Dement, 1999). Because people often slip in and out of Stage 1 sleep, it is difficult to say that they are really asleep during this stage.

NREM 2 or Stage 2

Next, you slip into Stage 2 sleep. This is when you are clearly asleep, but we would call it light sleep. Stimuli from the outside environment are very unlikely to reach conscious awareness, but you can wake up easily. Theta brain waves continue (a bit slower than in Stage 1), but now two new kinds of waves are present. Sleep spindles are short bursts (about 2 seconds long) of more rapid brain waves, about twice as fast as theta waves. K-complexes are single bursts of a higher-voltage wave. We spend about half of our time sleeping in Stage 2.

NREM 3 or Stage 3

Stage 3 is characterized by slow brain waves, called delta waves. Stage 3 is deep sleep. Not only does next to nothing make it in from the outside environment, but it is also quite difficult to wake up from Stage 3 sleep. Many of the restorative functions of sleep probably occur during Stage 3; it is the stage in which most of the growth hormone is released by the pituitary gland, for example (Obal & Krueger, 2004). Also, most of the enhancement of the immune system that occurs during sleep probably happens in Stage 3 (Bryant, Trinder, & Curtis, 2004). Stage 3 is also the stage in which sleepwalking occurs for some children (some adults, too, but it becomes quite rare the older you get).

In a way, it is good that the fourth stage is not called Stage 4, because it is very different from the other three sleep stages. Besides, it does not generally happen right after Stage 3. Instead, in the first few hours of sleep we tend to cycle between Stages 1 and 3 a few times. Later in the night, deep sleep (Stage 3) tends to stop and is replaced by REM. The name REM stands for rapid eye movements, one of the more noticeable facts about this stage. While you are in REM sleep, your eyes move back and forth rapidly under your eyelids, as if you were watching some scene. A second very noticeable fact about REM sleep, at least to the person experiencing it, is that this is the stage during which most dreaming occurs. In addition, brain waves become faster, more like the waking (alpha) waves. Breathing and heart rate increase, almost as if you were awake, and genital arousal occurs. But the body is otherwise motionless, as if paralyzed. It is this mismatch between high internal activity and a motionless body that has led some observers to label REM sleep paradoxical sleep. REM sleep is clearly important—if we are deprived of this stage of sleep, we spend more time in it the next time we sleep—and researchers are beginning to unravel the mysteries of its functions. Because infants and children have a great deal more REM sleep than adults do, it is likely that this stage is important for brain development and for helping us to keep our neural synapses tuned up.

For many years, researchers have suspected that REM sleep helps us to store events into long-term memory (Feldman & Dement, 1968; Chernik, 1972). The results over the years have been inconsistent, however. One possibility is that REM sleep is important for procedural memories (for example, learning a new physical skill) but not, in most cases, for declarative memories (facts and episodes) (Rasch & Born, 2013). There is also some evidence that REM sleep improves creative problem-solving (Cai et al., 2009).

Many researchers have been examining the idea that non-REM sleep is also important for memory, and they have made some important discoveries. For example, declarative memory seems to improve with Stage 3, slow-wave sleep (Rasch & Born, 2013; Wagner et al., 2004). Even the sleep spindles of Stage 2 may help us to store memories (Gais et al., 2001). Thus there is a good possibility that all of the sleep stages are important for memory. Here is a summary of the sleep stages. Note that it does not include information about the role of different stages in different kinds of memory.

alpha waves: reasonably fast, fairly strong brain waves that occur while relaxed and awake

theta waves: slower and less regular brain waves that begin when we fall asleep

hypnagogic sensations:   brief dreamlike images or sensations that occur at sleep onset

sleep spindles: short bursts (about 2 seconds long) of more rapid brain waves that occur during Stage 2 sleep

K-complexes: bursts of a single higher-voltage wave that occur in Stage 2 sleep

delta waves: slow brain waves that occur during deep sleep (Stage 3)

paradoxical sleep: another term for REM sleep, so named because of the apparent contradiction between high levels of activity inside the body and a motionless body

Perhaps the most interesting observation about sleep is that we become conscious of a whole new world, the dream world, while we are in that state of awareness. As you may already know, dreaming occurs principally during REM sleep. Although we do experience dreamlike images during other sleep stages, only during REM sleep do the dreams seem to follow some kind of story line. Because they are so interesting, observers have long suspected that dreams serve some important psychological purpose.

Sigmund Freud proposed perhaps the first comprehensive explanation of the meaning of dreams. He believed that dream images reflect the hidden desires and impulses contained in the unconscious, chiefly related to sex and aggression. By dreaming about these impulses instead of acting on them in real life, we could release them safely. Now, you probably realize that our dreams are not entirely filled with images of sex and aggression. Freud, too, realized this. His answer was that these impulses are too threatening for us to even dream about them openly. Instead, dream images reflect these hidden impulses symbolically. For example, according to psychoanalytic interpretation of dreams, long, thin objects, such as sticks or guns, symbolically represent penises, and dreaming about dancing represents sexual intercourse.

Most current researchers reject this kind of dream analysis. Modern content analyses, detailed descriptions of the topics and images of people’s dreams, are a key source of evidence against Freud’s view. For example, a great many of our dreams are reminiscent of worries and concerns that we are conscious of during our waking lives (Domhoff, 1996; 2003; 2010; Hall & Nordby, 1972). Freud also noted that emotions in dreams commonly did not match the content (Freud, 1900). For example, you might be frightened by an everyday event in a dream. Modern content analyses have suggested that Freud was a victim of the confirmation bias, however, noticing only those cases that confirmed what he already believed. In reality, content analyses reveal that the large majority of dream content is accompanied by the appropriate emotions (Foulkes et al., 1988; Merritt et al., 1994).

Many therapists, and more than a few book authors, still enthusiastically embrace a version of symbolic dream interpretation. However, the research that we used to show how Freud’s version of dream interpretation is wrong applies equally well to the more recent versions of symbolic interpretation. The problem does not lie in the possibility that dreams have symbolic meaning. It seems very likely that many dream images do have such meanings for people. The problem is that the meaning probably differs for different people. For example, you may dream about garbage because, as one “Dream Dictionary” puts it, you have something or someone in your life you need to get rid of or “throw out.” But another individual might dream about garbage because it is Wednesday night, and they have to remember to get the garbage cans out to the front of the yard in time for trash collection on Thursday morning.

It may even be that your dreams are not as weird as you think. If you have three REM stages per night, you probably have a minimum of three dreams per night (remember, we also dream during non-REM stages—Domhoff, 2003). You do not remember them all, so think about which ones are more likely to be remembered. Suppose, for example, that you had four dreams last night. One was about sitting at the breakfast table, eating your typical breakfast of Cocoa Puffs and orange juice. The second was about sitting in psychology class, listening to a lecture about classical conditioning, and the third found you driving to McDonald’s for a quick cheeseburger. In the fourth, you rode a flying yellow penguin and were dressed in a flowered tuxedo while you battled a three-headed dragon that shot beams of vanilla, strawberry, and hydrochloric acid ink. Which one do you think that you would remember? This is, of course, the availability heuristic  at work. Aspects of an event that make it available to memory will cause us to mistakenly conclude that the event is common. Indeed, research has found that factors such as intensity, vividness, bizarreness, and length are related to the likelihood that a dream would be recalled (Cohen & MacNeilage, 1974; Schredl, 2000). In reality, the most consistent finding from content analyses of dreams is that our dreams tend to be clearly and obviously related to the events that are going on in our waking lives; in other words, they are mundane (Domhoff, 2003; 2010).

We apparently need a theory of dreams that explains not their bizarreness and symbolic meaning but their “ordinariness.” Two good candidate theories are the activation-synthesis theory and the more recent neurocognitive theory. According to activation-synthesis theory (Hobson, 1988; Hobson & McCarley, 1977; Hobson, Pace-Schott & Stickgold, 2000), dreams begin when random bursts of neural activity occur in the brainstem while in REM sleep. These neural signals reach the forebrain, especially the limbic system, where the brain tries to weave them into a coherent story. The theory does a pretty good job of explaining why dream images do not really fit together very well; it proposes that our dreams reflect the best job that the brain can do at making sense of a series of random thoughts. The theory has difficulty explaining a few notable observations, however. First, signals from the brainstem may not be essential for starting a dream. Second, not all dreaming occurs during REM sleep. Third, and perhaps most importantly, dreams are not quite random. As we pointed out earlier, there is a strong similarity between the contents of dreams and the events of waking consciousness, and they may not be as bizarre as people think. Even the well-known abrupt changes of scene that occur during dreaming are probably more similar to the way we think when we are awake than it seems at first (Chapman & Underwood, 2000). Have you ever traced your “stream of consciousness”? You might look out of the window and notice that snow is covering the ground. That makes you think about weather, warm weather in fact, so you think about going to Florida. Then, that reminds you that you will be traveling to Florida in a few months to attend a friend’s wedding. In the span of 3 seconds, your thinking jumped from snow outside to attending a wedding in Florida months from now, a change that would very likely qualify as an “abrupt change of scene” if it occurred during a dream.

These shortcomings of the activation-synthesis theory led William Domhoff (2003; 2010) to offer a neurocognitive model of dreaming. His theory proposes that a specific neural network is responsible for dreaming; the network is spread through the limbic system, areas surrounding the limbic system, and specific parts of the cortex. Dreams occur when this network becomes active without any external stimulation—in other words, when we are asleep. The idea is that dreaming is a process that is very similar to the way our minds wander when one thought reminds us of another, and another, and another. Many of the brain areas involved in the neural network for dreaming are the same ones that are active during waking cognition. Hence there should be substantial overlap between dreams and waking thought (just as content analyses have indicated that there are). In essence, Domhoff believes that there is nothing particularly special about dreaming. His neurocognitive model essentially considers dreaming an extension of normal waking cognition and consciousness. Although researchers are still very much trying to sort out how dreaming occurs and what it means, the data are beginning to converge and reveal that Domhoff may be right: There are many more similarities between dreaming and waking cognition than first appear.

activation-synthesis theory : a theory that proposes that dreams begin when random bursts of neural activity occur in the brainstem while in REM sleep. These neural signals reach the forebrain, especially the limbic system, where the brain tries to weave them into a coherent story.

neurocognitive model of dreaming: a theory that proposes that a specific neural network in the limbic system, areas surrounding the limbic system, and specific parts of the cortex is responsible for dreaming. Dreams occur when this network becomes active without any external stimulation.

  • The individual pieces of information about the sleep stages can be difficult to remember. What strategy can you devise to help you remember and understand them? (For example, you might try to draw a concept map.)
  • Think again of some recent dreams. Can you recognize the overlap between your waking thoughts and your dreams more than you did before?

26.4 Sleep Deprivation

  • How much sleep do you typically need per night? How much do you typically get?
  • How does lack of sleep affect you?

How much would you pay for an elixir that:

  • Increases your energy, eliminates fatigue, and gives you more stamina than you ever thought possible.
  • Makes you more productive at work or school—you will accomplish more in less time. Friends, colleagues, and associates will marvel at how much you can do.
  • Improves your memory, concentration, attention span, reasoning ability, critical thinking skills, vocabulary and communication skills, and decision-making skills. Your increased mental ability will astound your friends and frighten your foes.

But wait, there’s more. If you act now, we will include a free sample of an elixir that also:

  • Improves your mood. You will be less angry and irritable and will have a better sense of humor. You will be more sociable and more interested in socializing.
  • Enhances your immune system. You will succumb to viral and other illnesses far less often.
  • As an added bonus, improves your physical coordination and makes you a safer driver.

These elixirs leave no bitter aftertaste, have no unpleasant side effects, and are 100% safe and legal. Now, how much would you pay?

As overblown as these claims may seem, the truth is, there is an “elixir” that does all of this and more, and it does not cost you a penny. It is called getting enough sleep.

From a practical standpoint, the most important fact about sleep is probably not that it marks a change in your consciousness or that you spend 55% of the night in Stage 2 or even that it affects memory. Rather, it is that you probably do not experience it enough. In other words, you are probably suffering from sleep deprivation. 

The US Centers for Disease Control and Prevention reported that 35% of US adults sleep less than 7 hours per night, the minimum amount they estimated necessary for healthy functioning. Indeed, the National Sleep Foundation (2020) found that 28% of US adults feel sleepy 5 – 7 days per week (yes, essentially every day). And college students are certainly not spared. One multi-university study found that 36% of students sleep less than 7 hours per night. Sleep expert James Maas (1998) once reported that college students frequently score as badly as sufferers of serious sleep disorders on measures of alertness and daytime sleepiness.

Importantly, you do not have to know  that you are sleep deprived to be  sleep deprived. In one study, a group of college students who averaged 7 hours of sleep per night and who reported that they were not particularly sleepy were encouraged by researchers to get as much sleep as they could for a period of time. The students initially increased the amount of sleep to about 9.5 hours per night before leveling off at a bit over 8. By the end of the research study, the students showed large improvements in alertness, mood, and reaction time (Kamdar et al., 2004).

Some people have been sleep deprived for so long that they think that the way they feel is normal. Students complain that they were put to sleep by a boring professor. Professionals complain that they were put to sleep by a boring after-lunch meeting in a warm room. Half of the adults in the US believe that it is normal to feel so sleepy in mid-afternoon that it is hard to stay awake (National Sleep Foundation [NSF], 2002). In reality, there is only one reason why healthy people fall asleep when they are not supposed to. They are not getting enough sleep when they are supposed to. Well-rested people do not fall asleep when they should be awake. James Maas notes that the “sleep-inducing” situations simply serve to reveal the underlying sleepiness that is already there. So, unintended sleep is the first clue that you are sleep deprived. Researchers and clinicians often use a simple scale called the Epworth Sleepiness Scale to measure sleepiness. The scale (which requires a license to use) asks 8 questions about a person’s likelihood of dozing off while doing several everyday activities.

James Maas also offers a simple five-question test you can take to estimate whether you are sleep deprived.

There are some very serious consequences of sleep deprivation, in addition to the obvious effects on your schoolwork, job, and social relationships. Sleep-deprived people are prone to accidents. For example, medical interns who worked a 24-hour shift every three days made 36% more serious medical errors than they did when working without the long shift and with fewer overall hours (Landrigan et al., 2004). More generally, a recent survey of US parents with children under 10 found that 48% of the parents admitted to driving while drowsy, and 10% admitted that they have actually fallen asleep at the wheel (National Sleep Foundation [NSF], 2004). The National Highway Traffic Safety Administration has estimated that drowsy driving is involved in at least 100,000 police-reported automobile accidents per year (National Safety Council, 2020).

Sleep deprivation also reduces levels of leptin, a protein produced by the body that controls appetite, which may lead to overeating (Flier & Elmquist, 2004). Thus, there is a link between sleep deprivation and obesity, which was demonstrated in a survey of 18,000 adults. Those who got 6 hours of sleep per night had a 23% increased risk of obesity; 5-hour sleepers had a 50% increased risk, and people who averaged 2 to 4 hours of sleep had a 73% increased risk (National Health and Nutrition Examination Survey [NHANES], 2004).

How Much Sleep Should You Get?

Although individuals have different sleep needs, we can make some broad generalizations. Many people think that they can function just fine on 6 or 7 hours of sleep per night. Although that may be true for a small number of people, most sleep experts contend that the majority of us have just gotten used to the lower levels of functioning and quality of life that come along with getting too little sleep. There are occasional cases of people who remain healthy and alert with as few as 3 hours of sleep (Jones & Oswald, 1968). These people are probably extremely rare, however. Some can function reasonably well on 7 hours per night. Many people probably need at least 8, and others need even more.

You can use 8 hours as an approximate starting point to determine your own sleep needs. If you are currently nowhere near 8 hours, take a serious look at the sleep-deprivation effects throughout this section. You are likely to recognize yourself in there. In order to figure out how much sleep you really need, you obviously have to increase the amount that you get and find out how it affects you. It is a very simple concept; you just increase until the sleep-deprivation effects go away. You need to make the changes gradually, however, giving your body time to adjust to even small changes in your sleep pattern. Sleep experts recommend going to bed 15 to 30 minutes earlier than usual for a week. Keep adding 15 to 30 minutes of sleep every week until you can get up in the morning without an alarm clock. That is how much sleep you should be getting, and if you are like most people it is at least an hour more than you currently get.
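If it helps to see the arithmetic behind that advice, here is a minimal sketch in Python (not part of the original text) of the week-by-week adjustment just described. The starting bedtime, wake time, 20-minute weekly shift, and 8-hour target are hypothetical values chosen only for illustration; your own numbers will differ.

```python
# A hypothetical illustration (not from the text): shift bedtime 15-30 minutes
# earlier each week until time in bed reaches a target such as 8 hours.
from datetime import datetime, timedelta

def sleep_schedule(current_bedtime="00:30", wake_time="07:00",
                   weekly_shift_minutes=20, target_hours=8.0):
    """Print an illustrative week-by-week plan of gradually earlier bedtimes."""
    bedtime = datetime.strptime(current_bedtime, "%H:%M")
    wake = datetime.strptime(wake_time, "%H:%M")
    week = 0
    while True:
        # time in bed, handling bedtimes that fall after midnight
        in_bed = (wake - bedtime) % timedelta(days=1)
        hours = in_bed.total_seconds() / 3600
        print(f"Week {week}: bedtime {bedtime:%H:%M}, about {hours:.1f} hours in bed")
        if hours >= target_hours:
            break
        bedtime -= timedelta(minutes=weekly_shift_minutes)
        week += 1

sleep_schedule()
# With these made-up numbers, the plan reaches 8+ hours in bed after about 5 weeks.
```

The printout is only a way of seeing the pacing; the real test, as described above, is whether you eventually wake up in the morning without an alarm clock.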

How Can You Get Enough Sleep?

Over the years, we have not been able to persuade many of our students to increase the amount of sleep that they get. Their main objection is that there are too few hours in the day for them to do everything they need to do. Meeting school, job, and family responsibilities while trying to maintain a semblance of a social life seemingly leave little time to lie in bed unproductively for nine hours per night. In essence, these people cannot accept the idea that when well-rested you are so much more efficient that you can accomplish far more in far less time. Take another look at some of the performance and thinking-related problems associated with sleep deprivation. It really is not outrageous to suggest that if all of these abilities improved—as they very likely would if you got enough sleep—you could accomplish all that you do now, do it in less time, and do it better.

In addition to figuring out your personal sleep needs and doing your best to sleep that much every night, it is important to practice sleep hygiene , essentially habits that promote sufficient restful sleep. Some important habits to cultivate are:

  • Make your sleep environment ideal for sleep. The temperature should be on the cool side, perhaps 65 degrees Fahrenheit (about 18 degrees Celsius) for most people. The room should be completely dark, so blackout curtains or a sleep mask might be necessary. You can also minimize noise distractions by wearing earplugs. Try to use your bed for sleep only.
  • Train your circadian rhythms to make you alert when you are awake and not when you are trying to sleep by having a consistent sleep schedule. Try to go to sleep and wake up at the same time every single day, including weekends. Unfortunately, we fear that we have already lost many of you. It is, of course, impossible to do this. You know that there are some nights when you will have to stay up later than usual and some mornings when you will have to get up earlier than usual. But every time you have a significant departure from your normal routine, try to go back to it the next night. It will probably take you a few days to recover from the sleep deprivation fully, but having that routine is the key idea. If you go to sleep and wake up at the same time every day, your body gradually synchronizes itself with this consistent pattern of activity. In other words, your circadian rhythms will exactly match your sleep-wake schedule.
  • Make up for lost sleep. This rule is contrary to what many people believe. It is a myth that you cannot make up for lost sleep. Go to bed earlier than usual instead of sleeping in (sleeping in makes it harder to get to sleep the next night). And do not try to regain all of your lost sleep in one night if you have a large sleep debt.

sleep deprivation : not getting enough sleep

sleep hygiene: habits that promote sufficient restful sleep

  • Which, if any, current difficulties in your life do you think might result in part from sleep deprivation?
  • What changes in your sleep habits do you think would be useful to you?

Module 27: A Healthy Lifestyle

Let’s speculate for a few minutes about what would be the most effective way to improve people’s health. For example, if the world community made it a priority to immunize all children, the impact would be amazing. Or perhaps the impact would be greater if we ensured clean drinking water throughout the world. One factor that people often do not think about is improving mental health. The fact is, improvement in traditional “physical” health would be dramatic if mental health were given the attention it deserves. For example, major depressive disorder is probably a contributing factor to at least heart disease and diabetes, among other diseases (Module 29).

At a personal level, there is much about health that is out of your control. Good or bad genes and exposure to dangerous or infectious substances in the environment all make quite significant contributions to whether we are sick or healthy. On the other hand, we can make a dramatic impact on our health by focusing on the factors that are under our control.

We are sure you have heard of the “mind-body” connection. It is not a New Age concept but rather a concept well grounded in the scientific principles about human physiology, behavior, and mental processes. Remember that the mind is simply the brain and that the brain controls the whole body.

There are at least two ways to think about this mind-body connection. The first is to consider the role of psychological stress in physical illness. If people would learn how to manage stress effectively, their physical health would improve. Managing stress may be the single most effective action that is under your control, in no small part because it will also improve other controllable disease risk factors, such as obesity, substance abuse, and sleep deprivation. The second way to think about the mind-body connection is to realize that a great deal of our behavior has a direct impact on our physical health. For example, the types and quantities of food that we eat and the amount of exercise we get are obviously key components of having a healthy life. Psychology has a great deal to offer to help us understand, predict, and control these important behaviors.

This module has four sections. Section 27.1 describes the causes and health consequences of stress, and section 27.2 offers some of the best strategies that research has identified to help us manage stress in our lives. Section 27.3 helps you to understand eating behavior in general, as well as the causes behind the epidemic of obesity that the US is currently experiencing. Section 27.4 offers advice from one of the leading experts on behavior change about how people can make lasting changes to improve their own health.

27.1 Understanding stress

27.2 Coping with stress

27.3 Understanding obesity and eating disorders

27.4  Promoting good health and habits

By reading and studying Module 27, you should be able to remember and describe:

  • Stress response: fast and slow response, fight-or-flight, tend-and-befriend (27.1)
  • Long-term effects of stress (27.1)
  • Kinds of stressors (27.1)
  • Methods for coping with stress (27.2)
  • Body Mass Index, obesity, and overweight (27.3)
  • Biological and environmental influences on eating behavior (27.3)
  • Eating disorders: anorexia nervosa, bulimia nervosa (27.3)
  • Prochaska’s transtheoretical theory (27.4)

By reading and thinking about how the concepts in Module 27 apply to real life, you should be able to:

  • Identify sources of stress in your life (27.1)
  • Recognize fight-or-flight and tend-and-befriend responses in yourself or others (27.1)
  • Identify the ways that you already cope with stress (27.2)
  • Determine your own BMI and decide whether you are in the healthy range or not. Speculate about factors that may have influenced the number. (27.3)

By reading and thinking about Module 27, participating in classroom activities, and completing out-of-class assignments, you should be able to:

  • Describe examples of the way you think about some stressors that you could change to cope with stress (27.2)
  • Identify environmental cues that sometimes lead you to eat more than you should (27.3)
  • Identify the stage you are in from Prochaska’s transtheoretical theory for a problem behavior in your own life (27.4)

27.1 Understanding Stress

  • What are the most important sources of stress in your life?
  • How does stress affect you? What are the physical, cognitive, and emotional consequences for you?

If we may be so bold, we would like to suggest that you need to understand stress. By now it has a very bad reputation, as most people have heard that it contributes to physical illness. We fear that many people do not realize just how strong the evidence is, however. On the other hand, it is easy to forget that stress can be good, even necessary sometimes.

Let us start with a definition. Imagine a conversation with a friend who is complaining about her current situation. She tells you, “I have four exams over the next two days, I have to work until midnight, and my car is broken down. I am under a lot of stress.” When people use the word in everyday conversation, they tend to refer to the environment itself as the stress. For example, the exam is stress. For psychologists, however, the events or situations are stressors , and our response to those events is stress .

Making this distinction is not simply playing with words. It reveals a very important fact about stress, one that you can use to reduce stress in your own life. Quite simply, the very same event may lead to stress for one person and not for another. It is not the event itself that is the stress. Think of a situation like traffic. Many people hate traffic. After an hour on an expressway creeping along at 10 miles an hour, they are often tense, anxious, rushed, and angry. These individuals may be surprised to discover that some people like heavy traffic, however. They consider it one of the few chances during the day to be alone, with no family, work, or school responsibilities. They listen to music, a podcast, or an audio-book and relax. If the traffic-haters could figure out how to look at traffic the way some other people do, and not as a stressor, they could reduce their stress.

Certainly, some general characteristics of events are likely to make them stressors. In short, events that are threatening and uncontrollable are stressful.  Say you are confronted with a deadline for an important project at work or school. You can’t get out of it. Because of the project’s importance and the possibility of failure, you might find it threatening. That threat, combined with your lack of control over the deadline, is likely to make the project very stressful.

The response to stress is physical and emotional arousal. If the arousal is not too severe, many people enjoy the way stress makes them feel. They are energized and alert, and they feel ready to tackle the challenge. This is the well-known “I do my best work under pressure” crowd. Anxiety sometimes affects test performance this way (Module 8). Short-term, mild anxiety leads to better performance than being completely relaxed.

When the stressors become long term or severe, however, the troubles begin. With persistent stress we are forgetful, have difficulty concentrating, are easily moved to anger and depressed moods, and are overly anxious. We walk out of the mall during the stressful holiday shopping season and spend 15 minutes wandering through the parking lot because we cannot remember where we parked the car. We grind our teeth at night (a sleep disorder called bruxism), which can lead to temporomandibular joint disorder (TMJ), a common cause of jaw discomfort, headaches, and dizziness. We have difficulty sleeping and are more likely to overeat and use cigarettes, drugs, or alcohol. But did we really have to tell you this? There is little doubt that being under constant high stress for long periods of time is extremely bothersome. It is also dangerous. To understand how stress makes us sick and even kills us, it is important to know why and how our body reacts to stress.

stress: an individual’s physical and emotional arousal in response to a threatening event or situation

stressors: threatening events or situations that might lead to stress

The Stress Response

We have a fast and a slow physical response to stressors, a fact first observed by Hans Selye (pronounced SELL-yay) in the middle of the 20th century (Selye, 1956). The fast response  is handled by the sympathetic nervous system , the arousing part of the autonomic nervous system. When your brain recognizes a stressor, the activity of the sympathetic nervous system leads to increased heart rate and blood pressure and reduced digestive activity, among other things (Module 11). Because the nerve endings of the sympathetic nervous system release their neurotransmitters directly to the organs throughout the body, the effects are immediate. One key response is the release of epinephrine , commonly known as adrenaline, by the inside sections of the adrenal glands , which are located on top of the kidneys. Epinephrine and norepinephrine are also released by areas throughout the sympathetic nervous system. When these two chemicals act directly at synapses, they function as neurotransmitters. Some of the epinephrine and norepinephrine also travels through the bloodstream to affect other areas of the body, so they function as hormones as well (and contribute to the slow stress response described below).

[Figure: the pathway of the stress response]

sympathetic nervous system: the arousing part of the autonomic nervous system

epinephrine: commonly known as adrenaline, it functions as a neurotransmitter in the fast stress response, and a hormone in the slow stress response

adrenal glands: glands located on top of the kidneys; they release glucocorticoids and epinephrine as part of the stress response

norepinephrine: a chemical produced throughout the sympathetic nervous system; it functions as a neurotransmitter in the fast stress response, and a hormone in the slow stress response

The slow response to stress (which is actually pretty fast, just slower than the fast response; it is a difference of seconds versus minutes) is handled by the endocrine system. The hypothalamus sends signals to the pituitary gland, which releases hormones that travel to the adrenal glands. The outside sections of the adrenal glands release hormones called glucocorticoids. Altogether, glucocorticoids, epinephrine, and norepinephrine are known generally as the stress hormones.

Essentially, the sympathetic nervous system and the first release of epinephrine and norepinephrine get the stress response started. The immediate neurotransmitter activity from the sympathetic nervous system along with the slightly slower activity that occurs as these chemicals act as hormones take you through the first few minutes of a stressful situation. This fast response keeps you going until the endocrine system can catch up, a few minutes after the onset of the stressor.

endocrine system: the system of glands that releases hormones throughout the body

glucocorticoids: hormones that are released by the adrenal glands as a major part of the slow stress response

This stress response, which exists to help animals survive during a physical emergency, has been named the fight-or-flight response . Animals, including humans, that can find an extra burst of energy or strength when being attacked by a predator, for example, are more likely to survive. So, the stress response is essentially an evolutionary adaptation. Human ancestors one million years ago who could put on a burst of speed or find the strength to fight off the attacking sabre-toothed tiger were more likely to live and pass on that ability, the stress response, to their offspring.

For many years, researchers considered the fight-or-flight response to be the only stress response that humans have. Shelley Taylor and her colleagues (2000) pointed out that the large majority of research on the physiological responses to stress has been conducted on males, however. When you focus on typical female responses to stress, a different picture emerges. You probably realize that fighting and fleeing are not the only possible ways to survive in the face of a physical threat. In fact, in many situations, they are probably not even the best ways. For example, what if you need not only to survive yourself but to help your offspring to survive as well? You and your offspring might fare better if you focus your energy on seeking the company of other people who can help both you and your offspring to survive. Taylor and her colleagues have named this alternative the tend-and-befriend response, and they note that it is quite common in females. They also note that females in many animal species release more of the hormone oxytocin in response to stress, and evidence has begun to accumulate of similar effects in human females. Oxytocin appears to play an important role in maternal behavior and affiliative behavior and thus may be the key endocrine response that underlies tend-and-befriend. It is probably fair to say that both men and women exhibit both stress responses, but fight-or-flight seems to be more characteristic of men and tend-and-befriend more characteristic of women (Taylor et al., 2000).

fight-or-flight response: the name given to the stress response that prepares the body to meet a physical danger by fighting or fleeing from it

tend-and-befriend response: the name given to the stress response that helps the individual cope by nurturing others and seeking social support

oxytocin: a hormone that is released in response to stress and tends to lead to nurturing and affiliative behavior

Why and How Stress Makes Us Sick

There are two basic, serious problems with the stress response in modern humans, whether it is fight-or-flight or tend-and-befriend. First, the stress response is supposed to help us during a physical threat, not a psychological threat. Unfortunately, however, the body reacts the same way to both. Second, the stress response is supposed to take place for a few minutes, not a few weeks. Our stress response evolved to help us face short-term, physical threats, essentially to protect us during an emergency. When we are facing constant psychological stressors, such as a difficult class with daily demands and frequent high-pressure deadlines, the stress response that is designed to help us escape from a physical emergency is activated for days or weeks on end. Making matters worse, humans’ ability to imagine the future leads us to anticipate stressful situations (in other words, to worry), which greatly increases the duration of the stress response, as does our ability to relive the past through our memories. To put it simply, a set of responses that would save our lives in the short term makes us sick and may even kill us in the long term (Sapolsky, 2004).

The list of health risks associated with stress is long and frightening. Most of the risks come from problems with the cardiovascular system and the immune system, but other disorders pop up on occasion, too.

Cardiovascular Problems

A major part of the stress response, particularly the fight-or-flight version, involves the faster delivery of blood to areas of the body that need it to face the emergency. The two main ways this happens (via the sympathetic nervous system and the endocrine system) are by speeding up the heart rate and by constricting blood vessels. The result over the long term is very simple, in a way. Basically, the heart and blood vessels wear down from overuse (Sapolsky, 2004). It makes sense that long-term overactivation of the cardiovascular system would lead it to break down; it is like continuously running a car's engine at top speed (while not taking care of it, if you consider poor lifestyle choices a side effect of stress).

The result is that long-term stress is associated with a host of cardiovascular disorders. For example, there is an increased risk of clogging of arteries (known as atherosclerosis), damage to the heart itself, stroke, and sudden heart attack (Brindley, 1995; Kamarck & Jennings, 1991; Rozanski et al., 1991; Waldstein et al., 2004).

Impaired Immune Function

Animal research has shown us that stress—created in the laboratory by separating offspring from mother, disrupting the social or physical environment, or introducing predators, for example—leads to reduced immune function and illness (Ader, 2001; Cohen et al., 1992). Although researchers cannot easily conduct similar studies to manipulate the stress level in humans, several human studies have demonstrated a direct link between stress and illness. For example, one team of researchers exposed a group of research participants to the common cold virus. The participants who reported being under more stress were more likely to develop a cold (Cohen et al., 1991; Cohen et al., 1998, 1999). Other studies have shown a direct effect of stress on the functioning of the immune system (Kiecolt-Glaser et al., 1993).

Although our immune system is very complex, it is based on a simple concept: White blood cells, the major cells that fight against infection, can distinguish between normal cells in our bodies and invading foreign cells (mostly based on proteins on the surface of the cells). When white blood cells detect a foreign substance, they attack and destroy it. Different types of white blood cells have different mechanisms, and they attack different kinds of foreign cells, but the basic idea is the same.

One of the key ways that stress influences immunity is through the increase in glucocorticoids. As odd as it seems, the suppression of immunity appears to be what the stress response is supposed to do. A short explanation should make this clear. Remember the role of the stress response in helping us to survive during a physical stressor. Soon after the onset of the stressor, immune function increases, just the response you need in order to stave off infection from a bite wound, for example. If the immune system stays "switched on" for too long, however, there is a rebalancing of immune function. Prolonged immune activity increases the risk of autoimmune diseases, in which white blood cells begin to attack parts of the body in addition to foreign elements. To protect against an autoimmune response, glucocorticoids begin to suppress immune function after its initial boost. With an extended stressor, the period of suppression overwhelms the initial boost, and the result is an overall decline in immune function (Sapolsky, 1998). Other parts of the immune system do continue to function at a high level, which can cause damaging inflammation throughout the body (Segerstrom & Miller, 2004).

Researchers have suspected for many years that stress contributes to cancer, judging from animal studies and correlational research with humans. For example, one study found a 3.7 times increased risk of breast cancer for women who experienced major stress, daily stress, and depression (Kruk & Aboul Enein, 2004).  Some scientists have concluded that the suppression of the immune system contributes to some types of cancer (Reich et al., 2004). At this point, there is fairly strong evidence allowing us to conclude that stress plays a role in cancer progression. It has been more difficult to establish its role in cancer initiation, but the evidence is growing here as well (Jenkins et al., 2014; Soung & Kim, 2015). As you certainly realize, a cancer diagnosis is extremely stressful for most patients and their families (note how threatening and uncontrollable it is). There is also mounting evidence that psychological interventions can help patients and their families reduce distress and can help patients follow doctor’s orders (Andersen et al., 2004; Cillessen et al., 2019; Spiegel & Kato, 1996).

A Bit More on Stressors

Let us finish this section by focusing a bit more on the types of situations that people may find stressful. Again, any event or situation that is perceived as threatening or uncontrollable (or both) will be stressful. Some people find a great many events stressful. And some events are stressful for nearly everybody, such as being the victim of a crime, being involved in a serious accident, or having a close friend or family member die.

Whether other events are perceived as stressful depends more on the interpretation of the person experiencing them. For many people, daily hassles, such as congested traffic, are a large source of stress (Ruffin, 1993). Some daily hassles, like financial burdens, social rejection, or ethnic or racial conflict, are quite serious. Others seem trivial, such as disliking daily activities, having too many things to do at once, having difficulty dealing with computers, and being bored at work (Hennessy et al., 2000; Kohn & MacDonald, 1992). But the effects of even these trivial annoyances add up. If you personally find an event threatening or uncontrollable (or both), then it will be stressful.

  • What are some events that you find stressful that you suspect many other people do not?
  • What are some events that many other people find stressful that you do not?

27.2 Coping with Stress

  • What do you do to cope with stress?
  • Which of your stress coping activities are the most successful? Which are the least successful?

An individual experiencing stressful events, especially major stressors like the death of a loved one or relentless social rejection, should be keenly aware of the dangers of stress and then take action to combat it. The best solution is to avoid stressors, but of course, that often is not possible. This section focuses on some important aspects of coping with stress in these cases.

General Approaches to Coping

Researchers have distinguished between problem-focused coping  and emotion-focused coping  as general strategies that individuals tend to rely on (Carver, Scheier, & Weintraub, 1989). Some people favor problem-focused coping. They tackle the problem head-on and try to solve it, perhaps by increasing their effort, trying to change a situation, or learning some new skills. People who favor emotion-focused coping seek to manage their distressing feelings. You might think that problem-focused coping seems better, as it is directly intended to solve problems, while emotion-focused coping sounds like avoiding a problem and letting it get worse. You would be right, at least some of the time. People who tend to use problem-focused coping do seem to be better adjusted in general. There are exceptions to the advantage, though. For example, if a problem is unsolvable, managing distress might be the only strategy that can help us cope at all. Also, if the distress itself is too disruptive, it might be necessary to manage it before attempting to solve a problem. The best advice is to be ready to use both strategies when appropriate.

Change the Way You Think About Stressors

Do you want to reduce stress? Simple. Do not find events threatening or uncontrollable. Seriously, one important way to cope with stress is to learn how to think about our world differently. Although this can be difficult to do, it is definitely possible, especially with a little help.

Psychologist Martin Seligman (1990) has adapted techniques from cognitive therapy, which is an effective treatment for depression, and demonstrated how individuals can use them to change their style of thinking to cope better with stress. He notes that many people have a tendency to blame themselves when they experience a negative event, and they believe that it will be permanent and affect many aspects of their lives. For example, if you fail an important exam, you might think that it is all your fault, that you will never be able to pass this or any class, and that you will never be able to have a fulfilling career or social life because of your failure. We are sure you can see how this kind of thinking would make many events seem threatening and uncontrollable.

Importantly, researchers have discovered that stress reappraisal, reframing part of the stress response to change its meaning, can have remarkable effects. For example, Jamieson et al. (2016) taught students who were enrolled in developmental math courses that the stress arousal that they experienced with exams actually helps their performance. Compared to controls who were told to ignore stress feelings, the students who reappraised reported less test anxiety, and they actually performed better on exams.

Again, although it is not always easy, learning to recognize these unhelpful styles of thinking and adopting new ones is possible. New ways of thinking can help you to realize that you have more control over situations and can also help you to see situations as challenges rather than threats.

Seek Support

People who have good social connections are better able to cope with stress than those who are isolated. Having a close friend, spouse, or family member on whom they can rely during stressful times literally makes people healthier (House et al., 1988; Case et al., 1992). For one thing, having a social support network can encourage people to engage in healthier behaviors (Helgeson et al., 1998). On top of that, though, the social network itself appears to provide a direct boost to our ability to fight the negative effects of stress, at least according to research on non-human primates (Coe, 1993; Cohen et al., 1992).

You can get similar effects from simulating social support by writing about stressful experiences. When research participants were instructed to write down their thoughts and emotions about traumatic experiences, they had healthier immune systems and got sick less often than participants who wrote about some other topic (Esterling et al., 1994; Pennebaker, 1989; Smyth, 1998). Writing about stressful experiences has also improved the immune function of people suffering from HIV infection (Petrie et al., 2004).

Learn to Relax

Although relaxing is a simple concept, it can be difficult to do. You might think that you are relaxing while watching television when in reality you may still be clenching your jaw, hunching your shoulders, and just generally remaining tense.

Getting some advice or instruction about effective relaxation techniques is helpful for most people. For example, you can use biofeedback to help learn how to relax. Biofeedback is simply a tool that allows you to see aspects of your physical state as some visual stimulus, such as a number. You can learn what types of behaviors can change the numbers, thus changing the physical states. Biofeedback systems use various physical cues as signs of stress level, such as muscular tension or fingertip temperature. Perhaps the most easily accessible is heart rate. You can buy an extremely accurate heart rate monitor for as little as fifty dollars. By trying different behaviors, such as deep breathing or releasing tension in your muscles, you can learn the most effective ways to decrease your heart rate.

Other techniques and activities also promote relaxation. For example, you can use STOP or progressive relaxation techniques (see sec 8.3). The STOP technique is an easy way to distract yourself when you are feeling anxious, so you can engage in some relaxation behavior, such as deep breathing. It simply involves mentally telling yourself to "STOP!" whenever you find yourself getting anxious, then devoting yourself to a short relaxation exercise. Progressive relaxation involves tensing and then relaxing different muscle groups in sequence throughout the body. Simpler forms of relaxation also work. Many people successfully relax by listening to calming music, sometimes combined with progressive relaxation (Pelletier, 2004; Smith & Joyce, 2004). Meditation and yoga may also help reduce stress through relaxation (Canter, 2003; Gura, 2002).

emotion-focused coping: a coping strategy in which people seek to manage their distressing feelings

problem-focused coping: a coping strategy that focuses on tackling the problem head-on and trying to solve it

biofeedback: a tool that allows you to see aspects of your physical state, such as muscular tension or heart rate, as some visual stimulus, such as a number

stress reappraisal: reframing part of the stress response to change its meaning

Increase Physical Activity

There is little doubt that physical activity, and specifically aerobic exercise, improves our mood and helps us to cope with stress (Glenister, 1996; Yeung, 1996). In essence, exercise allows the body to do exactly what stress is preparing it to do. A survey of over 12,000 adults found that people who exercised least reported feeling the most psychological stress (Ng & Jeffery, 2003). Researchers Rebar, Stanton, and Geard (2015) conducted a meta-analysis of meta-analyses that indicated that physical activity leads to significant reductions in non-clinical anxiety and depression symptoms (these are milder symptoms that are common among people experiencing stress).

Researchers have been able to show how exercise helps. In one study, individuals who underwent six weeks of aerobic exercise had lower physiological responses (blood pressure and heart rate increases) during stress (Spalding et al., 2004). As you probably know, aerobic exercise improves the cardiovascular system by reducing blood pressure and heart rate in general, so these direct reductions during stressful episodes are on top of the overall benefits of exercise.

Perhaps the best news about exercise is that a little goes a long way. If you want to get the full range of cardiovascular, weight control, and stress reduction benefits, you should follow the recommendations of the US Department of Health and Human Services and get 60 minutes of exercise nearly every day (90 minutes if you need to lose weight). However, people gain most of the stress reduction benefits from getting as little as 10 minutes of exercise per day (Hansen et al., 2001). Anyone can exercise this much. All you need to do is purposely park your car at the far end of a parking lot and spend 10 minutes walking briskly to your office, classroom, or workplace, instead of driving around looking for the closest space. Walking back to your car adds another 10 minutes. If you squeeze in one more 10-minute walk, you will have reached another key threshold: The US Department of Health and Human Services reports that 30 minutes of exercise per day will improve health and psychological well-being.

  • Which of the strategies in this section for coping with stress are behaviors that you use already? Did you think of them as stress coping techniques?
  • Which of the strategies in this section for coping with stress do you not currently use? What, if anything, is preventing you from starting to use them?

27.3. Understanding Obesity and Eating Disorders

  • What do you think are the main causes of the large increase in overweight and obese people in the US over the past few decades?
  • In what kinds of situations do you probably eat more than you should?

Most people are aware that there is an epidemic of obesity in the US. Although there are a number of different ways to illustrate the alarming state of affairs, let us mention two. First, the US Centers for Disease Control and Prevention reported that the percentage of adults who were overweight or obese rose from 55% in the 1988–1994 period to 72% during 2015–2016. Obese individuals alone increased from 22% to 40% of the adult population. Second, 14% of 2–5 year-olds, 18% of 6–11 year-olds, and 21% of 12–19 year-olds were obese in 2015–2016 (CDC, 2019).

A key part of these statistics is how the terms overweight and obese are defined. Both definitions are based on the body mass index (BMI), a measure of weight in relation to height. For example, a six-foot person weighing 200 pounds would have a higher BMI (27.1) than a six-foot person weighing 180 pounds (24.4). According to the Centers for Disease Control (CDC) of the U.S. government, an individual is overweight if he or she has a BMI of 25 or above. A person with a BMI of 30 or above is obese. So, a 6-foot-tall person who weighs 185 pounds would be overweight; one who weighs 222 pounds would be obese. The BMI is only a guideline, however, because it does not actually measure body composition. People who are very athletic and muscular sometimes have BMIs indicating that they are overweight.

To calculate your own BMI, multiply your body weight in pounds by 703, then divide by your height in inches squared.
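For readers who like to see the arithmetic spelled out, here is a minimal sketch of that calculation in Python. The 703 conversion factor and the cutoffs of 25 and 30 follow the CDC guidelines described above; the function names, the category labels, and the sample weights are simply our own choices for illustration.

    def bmi(weight_lb: float, height_in: float) -> float:
        """Body mass index from weight in pounds and height in inches (CDC factor 703)."""
        return weight_lb * 703 / height_in ** 2

    def weight_category(value: float) -> str:
        """Rough CDC-style label; remember that BMI does not measure body composition."""
        if value >= 30:
            return "obese"
        if value >= 25:
            return "overweight"
        return "not overweight"

    # The six-foot (72-inch) examples discussed in the text
    for pounds in (180, 200, 222):
        value = bmi(pounds, 72)
        print(f"{pounds} lb at 72 in -> BMI {value:.1f} ({weight_category(value)})")

Running this reproduces the figures above: 180 pounds gives a BMI of about 24.4, 200 pounds gives about 27.1 (overweight), and 222 pounds crosses the 30 threshold for obesity.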

body mass index (BMI): a measure of weight in relation to height; BMI is used to estimate whether an individual is overweight or obese

overweight: an official designation that corresponds to a BMI of 25 or above

obese: an official designation that corresponds to a BMI of 30 or above

Most people are certainly aware that there are health risks of being overweight, but they might not know the scope of the problem. At the risk of overwhelming you, in order to convey the seriousness of the health risks associated with being obese or overweight, let us give you the exact list of physical ailments that are more likely, as reported by the CDC:

  • All-causes of death (mortality)
  • High blood pressure (hypertension)
  • High LDL cholesterol, low HDL cholesterol, or high levels of triglycerides (Dyslipidemia)
  • Type 2 diabetes
  • Coronary heart disease
  • Gallbladder disease
  • Osteoarthritis (a breakdown of cartilage and bone within a joint)
  • Sleep apnea and breathing problems
  • Many types of cancers
  • Low quality of life
  • Mental illness such as clinical depression, anxiety, and other mental disorders
  • Body pain and difficulty with physical functioning

The key question, of course, is why do so many people weigh too much? Although the full explanation is extremely complex, and we will outline quite a bit of that explanation, one of the most important reasons is that our biological need for food and the reasons that we eat are out of sync. For example, if your schedule allows you only 30 minutes to eat lunch every day at 12:00, you might eat every day at 12:00, whether or not your body needs the food.

Because of this very loose relationship between biological need and eating behavior, environmental cues have a large impact on our eating behavior. As a result, many people end up eating more than their bodies require, and the excess turns into extra weight. For example, suppose an individual requires 2,200 calories to maintain her current weight—a typical recommendation for adult women, although age, metabolism, and activity level affect individual needs, as does gender (National Academy of Sciences, [NAS] 1989). The average US adult is estimated to eat over 2,600 calories per day (US Department of Agriculture [USDA], 2002). You can estimate a ballpark weight gain for the woman in the example by doing a little math. One pound of weight is approximately 3,500 calories, so a 2,600-calorie daily eater might gain one pound every nine days. Although this seems extremely fast, the weight gain would be slow enough that the individual might not notice it for a while; it is less than a “just noticeable difference” (see sec 12.3). Three months later, though, she looks up and realizes that she has gained 10 pounds.
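To make that back-of-the-envelope calculation explicit, here is a minimal sketch in Python that uses the figures from the paragraph above (the 3,500-calorie rule of thumb, a 2,200-calorie maintenance level, and a 2,600-calorie intake); the function names and the 90-day horizon are our own.

    CALORIES_PER_POUND = 3500  # rough energy content of one pound of body weight

    def days_to_gain_one_pound(intake_cal: float, maintenance_cal: float) -> float:
        """Days needed to gain one pound at a steady daily calorie surplus."""
        return CALORIES_PER_POUND / (intake_cal - maintenance_cal)

    def pounds_gained(intake_cal: float, maintenance_cal: float, days: int) -> float:
        """Estimated weight gain in pounds over a given number of days."""
        return (intake_cal - maintenance_cal) * days / CALORIES_PER_POUND

    print(days_to_gain_one_pound(2600, 2200))  # about 8.75 days, i.e., roughly one pound every nine days
    print(pounds_gained(2600, 2200, 90))       # a little over 10 pounds in about three months

The 400-calorie daily surplus in this example is small enough to be easy to overlook at any single meal, which is exactly why the weight gain tends to go unnoticed until it has accumulated.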

Evolution and Obesity

Human beings have some tendencies that served us well 500,000 years ago but get us into trouble in today’s environment. Throughout evolutionary history, our human ancestors were not troubled by the need to keep off weight. On the contrary, hunters and gatherers needed to be able to preserve body weight when food was scarce.

As a consequence, we have effective physical mechanisms for saving energy but not for dealing with excess energy (Hill & Peters, 1998). For example, in response to fasting, our metabolism rate (the rate of energy use) decreases, so that we can function on fewer calories. Of course, this is a major reason that losing weight through dieting is difficult. Our body realizes that far less energy is entering the system—as if there is a drought and we cannot find enough berries to eat—and enters energy-saving mode.

Many people note, correctly, that there is a significant genetic component to body composition. As usual, however, we are not guaranteed anything by our genes. At best, some people are predisposed to being overweight. Indeed, twin studies have demonstrated that increased activity can help people who might be predisposed to being overweight prevent weight gain (Jackson et al., 2020).

Here is another sign that genes play a limited role in our current weight problems: The proportion of overweight adults in the US has skyrocketed over the past 40 years, while our genetic composition has likely changed little if at all in that time (CDC, 2019). Further, as cultures throughout the world adopt more Western behaviors—particularly diet—their rates of obesity increase very rapidly (Hebert, 2005). It is implausible that these changes would have much, if anything, to do with genetics.

Environmental Cues That Promote Weight Gain

Evolution also helps explain why there is little relationship between our need for food and our eating behavior. For our human ancestors, when food was available irregularly and was sometimes scarce, a variety of environmental cues encouraged them to eat even though they didn’t need food at the moment. But these cues hurt us today, when a scarcity of food for most people in the US can be solved by a trip to McDonald’s. Many people do not realize how powerful the effects of the “eating environment” are on the amount that we eat. The list of environmental influences on food consumption is enormous, so we will mention only a few of the big—and more surprising—ones.

Social Environment

Social norms, the rules that guide us in group situations, often suggest that overeating is the "correct" thing to do (Module 21). For example, in many families, the norm is to "clean your plate because people all over the world are starving and would do anything to have our food." (We are not quite sure how our overeating would benefit starving people on the other side of the world.) At the same time, in the broader society, the descriptive norm (the behavioral evidence all around us) is that eating lots of food is exactly what many people do. From the person in front of you at the fast-food restaurant ordering two triple cheeseburgers to the family members taking second and third helpings during holiday meals, we are continually reminded that "everyone eats this much."

Stress and Emotion

As you have probably heard (and read in the previous section), stress can lead to overeating. For example, when workers reported more stress in their jobs, they ate more calories, sugar, and saturated fat (Wardle et al., 2000). Researchers have also demonstrated that other intense emotions can influence how much we eat. For example, when people are in a depressed mood, dieters eat more and non-dieters eat less (Baucom & Aiken, 1981). Even good moods can lead people who are dieting to overeat. In one study, dieting research participants who watched a comedy or a horror movie ate more food than those watching a neutral movie (Cools et al., 1992).

The “Mount Everest” Effect

In 1923, mountain climber George Mallory explained that he wanted to climb Mount Everest “because it’s there.” Similarly, we have a strong tendency to eat food simply because it is there. One survey found that 69% of Americans completely finish their restaurant entrees all or most of the time (American Institute for Cancer Research, [AICR], 2003). The problem is those meals have been growing larger. Both for food served in the home and in restaurants, portion size increased dramatically over the past several decades (Steenhuis & Poelman, 2017). In a study of the Mount Everest effect, researchers allowed participants to eat as many potato chips as they wanted from bags ranging from 28 to 170 grams (28 grams equals 1 ounce, the official serving size for potato chips, which is about 12 to 15 chips). The bigger the bag, the more people ate. Although the potato chip snacks were eaten in the afternoon, the research participants did not correspondingly reduce the amount they ate at dinner, so overall they ended up eating significantly more when they were given the larger bags of chips (Rolls et al., 2004).

Amazingly enough, the Mount Everest effect works even when the food tastes bad. People eating popcorn while watching a movie ate 61% more from a large container than from a small container, even though they had rated the taste unfavorably (Wansink & Park, 2001). It reminds us of an old Woody Allen joke in the movie Annie Hall.  He tells of two elderly women complaining about the food at a restaurant. The first one says, “Boy, the food at this place is really terrible.” The other woman replies, “Yeah, I know, and such small portions.”

Even changing the shape of a portion can change the amount that people eat or drink. In a study reminiscent of Jean Piaget’s “failure to conserve” demonstrations with preschool-aged children, adults poured and drank larger quantities of beverages when they were given short wide glasses than when they were given tall thin ones (Wansink & Van Ittersum, 2003).

By the way, George Mallory died trying to climb Mount Everest soon after his famous quotation.

The Role of Marketing

Countless attractive aspects of food are continually communicated to us through the marketing of food products. Although marketing messages affect all of us, children are especially influenced. A task force of experts convened by the American Psychological Association reported that an average child saw more than 40,000 television commercials per year, many of which are devoted to food. (The four main categories for children’s advertising are toys, cereal, candy, and fast-food restaurants.) The task force noted that children, at least under the age of eight, are unable to critically evaluate ads, so they end up automatically accepting everything in them as true (Kunkel et al., 2004). Researchers have lamented that it is difficult to keep up with the number of advertising messages that reach children today because a great deal of advertising (through online platforms, inserted as part of games, and so on) is very difficult to track (Lapierre et al., 2017). There is no reason to expect that the situation has improved, however.

Signals to overeat and to buy food, hungry or not, work best when they are subtle. For example, marketing messages are more persuasive if you do not realize you are being marketed to—as in product placements in movies and company sponsorships of academic activities in your local elementary school. And if you do not realize that you are eating more potato chips because the bag is bigger, there is nothing to stop you from doing so.

The Other Side of the Coin: Eating Disorders

Overall, about 50% of adults in the US are trying to lose weight, including 26% of people who are normal weight or underweight (Martin et al., 2018). Perhaps it is not surprising that, given the difficulty of losing weight and the large numbers of people trying it, some would fall into the trap of going too far. The result is an eating disorder, one category of psychological disorders. There are several specific eating disorders, but the two best-known are anorexia nervosa and bulimia nervosa.

Individuals with anorexia nervosa are extremely anxious about being overweight and adopt extreme weight-control measures, such as excessive dieting or exercise. Consequently, the sufferer is extremely underweight. By definition, a person can be diagnosed with anorexia nervosa only if they weigh less than the minimum normal weight (for age, gender, and sexual development level). With anorexia nervosa, an individual's life is literally in danger; serious medical complications, including heart problems, electrolyte imbalance (which can be quite serious), low blood pressure, and increased infections, make anorexia nervosa the psychological disorder with the highest mortality rate (Edakubo & Fushimi, 2020). People who are anorexic have distorted views about their own body, which often heavily influence their self-esteem. Many of them thus have symptoms of depression, insomnia, social withdrawal, and anxiety. The very large majority of people who suffer from anorexia nervosa, more than 90%, are female; it typically begins in adolescence (between ages 14 and 18).

Bulimia nervosa is characterized by periods of binge eating (eating large amounts of food) followed by some behavior intended to counteract the overeating. Most famously, the counteracting behavior is self-induced vomiting or using laxatives. Other behaviors can be excessive exercise or fasting. Many people who suffer from anorexia nervosa also have symptoms of bulimia, although bulimia can also appear on its own. Because a bulimic person is not extremely underweight, the disorder can be extremely difficult to discover.

Psychological disorders, including eating disorders, typically have biological, psychological, and cultural or environmental causes. The cultural factors are probably most prominent in eating disorders. Eating disorders are prevalent only where thinness is especially valued, such as in wealthy nations where there is plenty of food (Polivy & Herman, 2002). In addition, specific characteristics of families and individuals are more likely to lead to eating disorders. For example, children who are insecurely attached to their parents are more likely to develop eating disorders (Ward et al., 2000). Individual characteristics, such as low self-esteem, are also related to eating disorders (Fairburn et al., 1997). Eating disorders clearly have a genetic component, as well, although heritability estimates vary widely (33%–84% for anorexia nervosa; 28%–83% for bulimia nervosa; Zerwas et al., 2015).

anorexia nervosa: an eating disorder in which the affected individual is extremely anxious about being overweight and adopts extreme weight-control measures. To be diagnosed, the individual must weigh less than the minimum normal weight (for age, gender, and sexual development level)

bulimia nervosa: an eating disorder characterized by periods of binge eating (eating large amounts of food) followed by some behavior intended to counteract the overeating

To recap, the environmental cues that promote overeating include:

  • Social environment
  • Stress or emotion
  • The Mount Everest effect
  • Marketing messages

27.4. Promoting Good Health and Habits

  • What behaviors do you think you should change in yourself to improve your health? What would it take for you to change?

By now, everyone should know that it is dangerous to smoke and to drink alcohol to excess. Most overweight and obese people realize that they should lose weight, and the message about the importance of healthy food choices has made it to most people. Although it is likely true that some public education efforts will continue to help make people aware of problem behaviors, the more interesting question is why people fail to follow the advice. For example, consider exercise. By now, nearly everyone knows that it is important to exercise. Yet according to the US Government's National Center for Health Statistics, only 23% of US adults meet the minimum recommendations of muscle strengthening two times per week and 150 minutes of moderate aerobic (or 75 minutes of intense aerobic) exercise per week (Blackwell & Clarke, 2018).

Why do people not do what they know that they should? Well, the simple, obvious, and not particularly helpful answer is that it is hard. For someone who currently exercises zero days per week, it can be quite a daunting proposal to have to exercise five days per week for at least 30 minutes, plus two days of strengthening exercises. If a non-exerciser jumps up and decides to run four miles, the immediate consequences of that behavior are not likely to be pleasant. The benefits of exercising, being healthier at some undefined time in the future, are too distant and abstract to provide much of an incentive for many people (Module 6). Similarly, the immediate effects of eating a small portion of a healthy food are a bit unsatisfying to someone who would prefer a big piece of French silk pie. Even when people do manage to lose weight over time, they are frequently disappointed when their bodies do not look like the bodies of models and actors.

One secret to getting people to change their behavior is to discover how people who have been successful did it. Psychologist James Prochaska spent many years examining the question of how people successfully change. He looked for common factors among different psychotherapies that promote behavior change, as well as the strategies and processes used by successful "self-changers" (Prochaska, 1979; Prochaska & DiClemente, 1982; Prochaska et al., 1992). His theory, the transtheoretical model of change, describes how people progress through five separate stages on the road to successful behavior change, whether the problem is overeating, lack of exercise, smoking, drinking, or any other maladaptive behavior:

Precontemplation

This is not really a stage of change as much as a stage in which the individual is not yet ready to change. The individual in the precontemplation stage resists change, perhaps becoming defensive when other people try to help, often not realizing that he or she has a problem, and thinking that it is other people who need to change. For example, a middle-aged man in precontemplation may believe that exercising is overrated and potentially dangerous, citing the research that suggests that there is a significant risk of dying of a heart attack while exercising. (There is an increased risk of sudden heart attack during exercise, but this small temporary increase is more than offset by the overall decrease in risk of death from being in good shape.) A precontemplator's occasional half-hearted attempts to change are practically guaranteed to fail, and the result is often worse behavior (Prochaska et al., 1994). Do not buy the sedentary man a $1,000 membership at the local health club for his birthday. Health clubs make a fortune on people who join and pay a monthly fee to remain a member but never once use the facilities. One study found that over 500,000 people in the United Kingdom had health club memberships that they had never used (Men's Health, 2004). The immediate goal for the precontemplator is simply to start thinking about changing and thus moving into the next stage. To do this, the individual must become aware of the problem and of his or her defenses against changing. Of course, the precontemplator is not motivated to admit to a problem. Nearly always, the awareness must be raised with the help of the outside environment. For example, a visit to the doctor or the death of a sedentary friend might get the precontemplator to start thinking about the problem. Other times, family and friends play an important role in helping the person become aware.

Contemplation

Once our non-exerciser realizes that he has a problem, he realizes that he should change and begins to consider the options seriously. He may actively seek information about the problem and is open to talking and thinking about it. Commonly, though, people get stuck in contemplation—that is, they continue to think about a problem as a way of avoiding taking any action. On the other hand, a common way to fail is to jump into action before the person has had enough contemplation. Again, the real goal is simply to move on to the next stage. To do this, the person should collect specific information about the problem and potential solutions and begin to set specific goals. For example, the non-exerciser can learn about the relative importance of strengthening and aerobic exercises, as well as the actual recommended levels of each for a man his age. He may decide that he would like to work toward two days of strength training and three days of aerobic exercise per week.

Preparation

This stage need not last long, but it is essential to successful change. It is time to turn the goals from the contemplation stage into specific steps. Preparation is also a kind of transition stage, with elements of both contemplation and action. Our exerciser may continue to collect information, as in the contemplation stage, while he begins to draw up a specific exercise plan. At the same time, he may begin to make small changes in his daily activity, such as taking the stairs instead of the elevator or parking his car farther away from the office door every morning before work. A major danger at this point is to try to change too quickly. A non-exerciser who decides after a week of walking the length of the parking lot twice a day that he can try jogging four miles on Saturday morning is not likely to be successful. For this stage, it is enough to make small changes, create a set of specific steps, anticipate some potential problems, and make a commitment to change. The commitment should be specific and public. Then, our non-exerciser may be ready to move on to the fourth stage.

Action

Even after the person has successfully made it through the previous stages, the changes will not be easy. If the person has planned for the likely difficulties, however, they can be surmounted. Two key strategies are countering and avoiding. When you counter, you engage in a behavior that is a contradiction of the problem behavior. For example, if our non-exerciser has an urge to plop down in front of the TV, that is the perfect time to go for a walk around the block. When you avoid, you remove the tempting stimuli. If you eat too many cookies, for example, stop buying cookies. You cannot eat them if they are not in the house. It is important to realize that you need not have superhuman willpower (see self-control in Module 20). If our non-exerciser is serious about stopping watching TV, he can unplug it and move it into another room, so that any time he wants to watch it he has to go to the effort of retrieving it and plugging it in. Finally, there are a great many additional techniques that can be borrowed from psychology to help initiate the changes. Perhaps the best are positive reinforcement and shaping from operant conditioning (Module 6). In other words, rewarding oneself for gradually approaching the desired behavior is an excellent strategy that can be used to facilitate nearly any change.

Maintenance

During the final stage, one attempts to make the change a permanent part of one's lifestyle. At one level, maintenance is simply a matter of realizing that difficulties, backtracking, and obstacles will continue. It may be necessary to make other changes to help maintain the new behaviors. For example, if avoiding television is how our exerciser finds time to exercise, it may be necessary to avoid television at other times as well, so that it becomes less of a habit. Our new exerciser may have to change his social routine if his old friends continue to be sedentary.

As our non-exerciser attempts to move from precontemplation through the intermediate stages to action, he will undoubtedly have many opportunities to think about the pros and cons of changing. For example, he may tell himself that if he begins exercising he will have more energy and sleep better (pro) or that he will have less time for other leisure activities (con). Successful changers are able to increase their perception of the pros as they move from precontemplation to contemplation and then decrease their perception of the cons as they move from contemplation to preparation to action and finally to maintenance. Our non-exerciser, then, should focus on his perceptions at the right times as he gathers information about, thinks about, plans for, begins, and maintains his new exercise regimen.

  • Which of Prochaska’s stages are you currently in?
  • How do you think you might be able to move yourself to the next stage?
  • What is preventing you from moving to the next stage?

Module 28: Introduction to Mental Illnesses and Mood Disorders

Before beginning their first psychology class, many students have the impression that psychology consists solely of therapy for individuals who suffer from mental illness. Some students are surprised to discover that this aspect of psychology is a relatively small part of the total content of the course. On the other hand, even though clinical psychology, the understanding and treatment of mental illness or psychological disorders, is far from the only concern of psychology, it is a very important use of psychological knowledge. Many concepts from other psychological subfields contribute to our understanding of psychological disorders and their treatment. As you will see, insights about the brain and neurotransmitters, classical and operant conditioning, cognition, emotions, motivation, human development, and social psychology have all helped psychologists to understand and effectively treat psychological disorders.

Clinical psychologists spend years learning about the hundreds of psychological disorders that have been described. Obviously, we have to make choices about which disorders to cover in this book. We will cover only the disorders that are the most important to psychology in general, because of their seriousness, commonness, or both. In addition, we will cover a few disorders that have become well known because they are controversial.

This module begins by discussing some general issues about what it means to have a psychological disorder, and how psychologists decide that someone has a disorder and which disorder it is. Then, we will turn to what may be the most common category of disorders, mood disorders.

28.1 “Normal” and “abnormal”

28.2 Major depression and other mood disorders

28.3 Treatments for mood disorders

By reading and studying Module 28, you should be able to remember and describe:

  • How psychologists decide if someone has a disorder (28.1)
  • Stigmas and stereotyping (28.1)
  • General characteristics about DSM-5 (28.1)
  • Major depressive disorder: suicide and depression (28.2)
  • Bipolar I and II disorders (28.2)
  • Biological, psychological, and sociocultural factors in mood disorders (28.2)
  • Cognitive-behavioral therapy (28.3)
  • Antidepressants (28.3)
  • Other treatments for mood disorders (28.3)

By reading and thinking about how the concepts in Module 28 apply to real life, you should be able to:

  • Come up with new examples of bias and stigma against sufferers of psychological disorders (28.1)
  • Recognize examples of major depressive disorder and bipolar disorder (28.2)

By reading and thinking about Module 28, participating in classroom activities, and completing out-of-class assignments, you should be able to:

  • Identify misperceptions or negative attitudes that you might hold toward people who suffer from psychological disorders (28.1)
  • Recognize and appreciate the differences between a depressed mood and major depressive disorder (28.2)

  28.1. “Normal” and “Abnormal”

  • What are some slang terms used to describe people who suffer from psychological disorders? Why do you think it is often still acceptable to use many of these terms?
  • Who do you think decides that someone has a mental disorder? On what basis are such decisions made?

Psychological and mental disorders are more common than you might think. In 2017, 971 million people around the world (12.7% of the world population) suffered from one (GBD, 2017). These disorders can be either contributing causes or consequences of a wide range of medical conditions, such as heart disease, cancer, diabetes, and HIV/AIDS. The WHO estimates that approximately 80% of worldwide sufferers receive no treatment. Because so few people worldwide have access to treatment for mental and psychological disorders—even though many effective treatments exist—one could argue that an effort to expand mental health programs might be the single most effective way to improve the overall health of the world's population.

How Psychologists Decide That Someone Is “Abnormal”

As we are sure you are aware, there is an absolutely limitless set of possibilities for human behavior and mental processes. Consider anxiety, for example. People can range from completely calm to wildly panicked, and it is not difficult to imagine behavior at thousands of points between the two extremes. At what point do you draw the line and say everyone on this side is normal and everyone on that side is disordered? This is essentially a subjective decision (that is, one for which different observers might disagree), opening up the whole process of defining disorders to controversy from the very first moment.

Psychologists have come up with a set of criteria that can be used to make the decision more objective:

  • How unusual and unexpected the behavior is.  If someone is doing something that is completely out of the ordinary, it might be evidence that he or she is abnormal. As Oliver Wendell Holmes once said, "If a man is in a minority of one, we lock him up." On the other hand, as Gandhi said, "Even if you are a minority of one, the truth is the truth." Try to remember the Gandhi quotation when you are tempted to judge people simply because they are doing something unusual. The first thing you might look for is an indication of whether the unusual behavior is justified or expected given the situation. For example, a panic attack in the middle of a mall might be unusual, but it might be quite justified if you are an 8-year-old child who has lost his parents. It might also be expected if you are the parent who is looking for the child.
  • How distressing the behavior is to the person who is doing it.  A person who experiences frequent attacks of panic is likely to report that the attacks are very upsetting and that they interfere with life in important ways. Likewise, an individual who cannot get out of bed because of major depressive disorder would probably report being bothered by the inability to function. Again, however, we cannot use this criterion to make a final judgment. Some disordered behavior is characterized by a lack of distress. For example, people who suffer from antisocial personality disorder are not bothered by their aggressive and mean behaviors at all. Also, psychologists may judge that behavior is abnormal if the individual is unable to meet the demands of daily functioning, even if the behavior is not distressing.
  • Whether the behavior is dangerous or damaging.  As you know, many common behaviors are dangerous. For example, according to the US Surgeon General, cigarette smoking is responsible for over 480,000 deaths per year in the US alone; the US Centers for Disease Control and Prevention reports that smoking will kill or disable half of all regular smokers. Smoking is undoubtedly dangerous and damaging. The behavior is not exactly unusual, however, as over 34 million adults in the US smoke cigarettes. Also, for many smokers, the behavior is only mildly distressing, if at all. On the other hand, a depressed individual who is suicidal is engaging in unusual, unexpected, distressing, and dangerous behavior and would be clearly judged disordered. Similarly, a person who has a severe attack of panic every time he even thinks about a vehicle and cannot work as a result is exhibiting unusual, unexpected, distressing, and damaging behavior.

These criteria remove a great deal of the doubt when trying to decide whether someone is “normal” or “disordered,” but they are not perfect. Remember that “unusual” is not the same as “disordered.” Keep in mind also that these criteria are considered in combination. One of them alone is probably not sufficient (unless it takes an extreme form) to justify a “disordered” label.

Unfortunately, assigning the label “abnormal” or “disordered” to a person brings with it a whole host of problems, commonly referred to as stigmas. The dictionary definition of stigma  is a mark of disgrace or infamy or a bad or objectionable characteristic. If the stigmas were based in reality, it would be difficult to object to them. Let us pose the issue another way, though. In the language of social psychologists, people who suffer from psychological disorders are victims of stereotypes, prejudice, and discrimination (Module 21). In other words, the stigmas are probably not based in reality but rather in preexisting notions about how people should be treated based on superficial information. Forming a stereotype based on incorrect information and applying the stereotype to all members equally is just as unfair to people who suffer from psychological disorders as it is to victims of racial, gender, socioeconomic, and other stereotypes.

The effects of stigmas and labels

Throughout history, psychological disorders have been misunderstood and feared, a situation that is quite conducive to the development of stereotypes. Probably the most common conception historically was that people suffering from some kind of mental disorder were possessed by evil spirits. There is evidence that in ancient times, individuals received “treatment” by having someone drill into their skull to release the evil spirits, a procedure known as trephination . In the 19th century, people suffering from psychological disorders were commonly placed in asylums, facilities that resembled prisons more than anything else. In essence, the goal was to keep the psychologically disordered away from everyone else.

A 2012 meta-analysis of nationwide representative surveys from around the world found that although the public's knowledge about mental illness has increased a great deal since the 1950s (the date of the first survey), the social stigma of mental illness has not declined much. The problem is that at least one key stereotyped idea about the psychologically disordered remains as strong as ever: that individuals who suffer from psychological disorders are likely to be violent. In fact, the belief in violence is even stronger in recent times than it was in the 1950s (Phelan et al., 1997; Vaughan & Hansen, 2004). Observers have noted that this deterioration may be largely a result of news reports and entertainment media (Phelan et al., 1997; Heginbotham, 1998; Angermeyer & Matschinger, 1996). Just think about all of the movies over the years that have depicted violence inflicted by people who suffer from psychological disorders, from Psycho to Silence of the Lambs to Split to Joker. A Beautiful Mind notwithstanding, movies overwhelmingly portray psychologically disordered people as violent aggressors and homicidal maniacs.

This would not be much of a problem if it were true, but studies show that the relationship between serious psychological disorder and violence is weak (Pilgrim & Rogers, 2003). People who suffer from serious psychological disorders are only a little bit more likely to commit violent acts than the general population is. The overall risk of violence at the hands of people who suffer psychological disorders is—just as it is from members of the general population—very low (Swanson et al., 1990).

When a mentally disordered person does become violent, the cause may be failure to take prescribed medications (Swartz et al., 1998). Also, one of the most important variables that predict violence in the disordered population is substance (alcohol and drug) abuse (Pilgrim & Rogers, 2003). This is very interesting because alcohol abuse is linked with aggression in the general population as well (Module 20). A main reason that violence is (slightly) higher among the psychologically disordered population is that rates of substance abuse are higher in that group (Regier et al., 1990). Again, however, even among psychologically disordered individuals who abuse substances, most do not become violent (Steadman et al., 1990).

Even on the rare occasions that psychologically disordered individuals do turn violent, they typically attack family members or other people that they know (Eronen et al., 1998; Lindqvist & Allebeck, 1989). You are at very little risk of being the victim of a homicidal attack by a stranger with mental illness. A great deal of the stereotypes and stigma associated with psychological disorders thus stems from an unrealistic fear of violence, a fear that comes largely from the misapplication of the availability heuristic , the tendency to judge the frequency of an event by how easily instances can be brought to mind.

With that in mind, let us offer the "other side of the coin," an alternative world in which the media choose to focus on the remarkable accomplishments of some individuals who suffer from psychological disorders, not on their potential for violence. We might have a very different view of these disorders indeed. For example, consider the following list:

Please note: Unless an individual has made a disorder public—as in the cases of Jim Carrey and Kanye West, for example—we cannot be certain about whether someone had a disorder or not. We can be fairly confident by examining details about their lives, but it can be difficult to pinpoint which disorder a person has or had.

Another important source of misunderstanding about psychological disorders is related to the dualism-monism distinction (Module 26). Because some people believe in dualism—in other words, that the mind and body are separate—they tend to think of physical disorders and mental disorders as different kinds of disorders (although that is one specific area where knowledge has increased over the years). Saying to a mentally disordered person that something is “all in your mind” implies that the problem is imaginary, as if the mind is not real.

Concrete Effects of Stereotypes and Labels

The stigmas associated with mental illness have real-world consequences. For example, researchers found that mothers who suffer from psychological disorders are more likely to be judged unfit parents simply because of their diagnosis rather than because of the way they treat their children (Benjet, Azar, & Kuersten-Hogan, 2003). Stigmas also commonly prevent people from seeking the treatment that could help them. For example, a systematic review of 144 studies found that the stigma associated with mental illness was one of the top barriers to treatment that people who suffer from psychological disorders face (Clement et al., 2014).

Diagnosing Disorders: The DSM-5

When a psychologist or psychiatrist needs to figure out which, if any, specific disorder an individual has, he or she uses criteria that are laid out in the book Diagnostic and Statistical Manual of Mental Disorders, which is published by the American Psychiatric Association. The current edition of the book, published in 2013, is the 5th; it is abbreviated DSM-5 (or sometimes just DSM).

DSM-5 is essentially a checklist, a list of symptoms only (and how long they need to be present), which makes it seem as if it would be easy to use to classify people. Do not give in to the temptation to pick up the DSM (which is probably in your school’s library and is available in online editions) and diagnose your friends, however. The DSM was designed to be used by trained professionals, and application of the criteria requires substantial clinical judgment. Psychological tests also play a role in making decisions. The problem is that the categories have fuzzy boundaries. Although some clear cases are fairly easy to judge, many people present symptoms that could place them into a number of different categories of disorders. Also, many disorders occur together with other disorders, making it very difficult to reach a definitive diagnosis.

Compounding the difficulty of using the DSM-5 criteria is the fact that clinical judgments are not perfectly reliable; in other words, diagnoses using the DSM-5 are not always consistent from one clinician to another. Each new edition of the DSM is an improvement over the previous editions, and judgments that clinicians make for most diagnostic categories now have good reliability. That said, DSM-5 has been criticized for encouraging false positive diagnoses (a diagnosis of a disorder when none exists). Many disorders, such as major depressive disorder, bipolar II disorder, and attention-deficit/hyperactivity disorder in adults, have lower diagnostic thresholds in DSM-5 than in earlier editions. Although this is likely to pick up some genuine cases that would have been missed using the earlier criteria, it is also likely that false positives will occur as a result (Wakefield, 2016).

For each type of disorder, the DSM-5 describes both central features, the features that are most important for diagnosis, and associated features that can help a diagnosis. These secondary features are common to the disorder but may be less helpful because they occur in fewer cases than the central ones or occur in many different disorders (making them less useful for determining which disorder a person may be suffering from). The DSM-5 also gives information about prevalence, risk factors, available diagnostic measures, the way the disorder progresses, and information to help distinguish similar disorders.

There are twenty major categories of disorders in the DSM-5, covering about 150 different possible diagnoses.

List of DSM-5 chapters:

  • Neurodevelopmental Disorders
  • Obsessive-Compulsive and Related Disorders
  • Elimination Disorders
  • Substance Use and Addictive Disorders
  • Schizophrenia Spectrum Disorders and Other Psychotic Disorders
  • Trauma- and Stressor-Related Disorders
  • Sleep-Wake Disorders
  • Neurocognitive Disorders
  • Bipolar and Related Disorders
  • Dissociative Disorders
  • Sexual Dysfunctions
  • Personality Disorders
  • Depressive Disorders
  • Somatic Symptom Disorders
  • Gender Dysphoria
  • Anxiety Disorders
  • Feeding and Eating Disorders
  • Disruptive, Impulse Control, and Conduct Disorders
  • Other Disorders

Becoming familiar with all these diagnoses and being able to tell which apply to a particular client is one of the challenges of becoming a professional clinical psychologist.

  • Try to think of some examples of when you may have perceived someone differently because of a label (even if it was not a label relating to a psychological disorder).
  • In what ways do you think society could change some of the negative attitudes toward people who suffer from psychological disorders?
  • In what ways can you help change some of the negative attitudes toward people who suffer from psychological disorders?

28.2. Depressive and Bipolar Disorders

  • If you have never suffered from major depressive disorder, how would you characterize the differences between a depressed mood and clinical depression?
  • If you have suffered from major depressive disorder, how would you describe the differences between a depressed mood and clinical depression?

It is difficult to avoid the conclusion that mood disorders are the most important psychological disorders. Mood disorders have a disturbed mood as the main feature. They include depressive disorders, such as major depressive disorder (commonly called depression) and persistent depressive disorder, and bipolar disorders, along with a few others.

Mood disorders are among the most common disorders. Worldwide, 264 million people suffer from depressive disorders, 163 million of them from major depressive disorder alone (GBD, 2018). In the US, about 17% of the population can expect to suffer from at least short-term depression during their lives (Kessler et al., 1994). Throughout the world, women are twice as likely as men to suffer from depression (Nolen-Hoeksema, 2002). College students are at particularly high risk. In one recent survey of over 30,000 college students in the US, 41% reported at least moderate depression (based on answers to a survey designed to detect it), including 21% whose depression was rated as severe (Duffy et al., 2019).

Depression touches many aspects of the lives of people who suffer from it, as well as the lives of friends, family, loved ones, and coworkers. Depressive disorders are the third leading cause of disability in the world (GBD, 2018). They are also a risk factor for cardiovascular disease, suicide, and other serious health problems.

Depressive Disorders

It is a rather unfortunate coincidence that the English language uses the same word, depression,  for a short-term, mildly sad feeling and a serious medium- or long-term psychological disorder. It is entirely natural for a person who is not suffering from major depressive disorder to say, “I’m depressed.” The surface-level similarity between a “depressed mood” and “depression” leads people to some serious misconceptions about the disorder. For example, when a healthy person feels depressed, she may be able to distract herself with some pleasant activity or otherwise reverse the feeling so she feels better. It is not difficult to imagine that this person would then wonder why someone suffering from clinical depression cannot do the same thing.

You can use your experiences of depressed moods to imagine what depression might feel like, but we should caution you: Depression is a complex disorder, and there are very many possible symptoms, so there really is not a “typical” profile.

Depression often goes away by itself, a fact that also leads some people to doubt that it is a true disorder. “How could it be a real illness,” they wonder, “if it goes away on its own?” We must confess, we have never understood this argument. Very many diseases and disorders go away on their own at least occasionally, for example, colds, the flu, and rashes; that is what our immune system is supposed to do, to make diseases go away. That does not make the diseases any less “real,” however. Even cancer sometimes spontaneously goes into remission. By the way, it is probably more correct to call depression’s disappearance a remission rather than a cure, because it frequently recurs after it goes away.

What Does Depression Look Like? 

To be diagnosed with major depressive disorder, the DSM specifies, a person must have had at least one major depressive episode: a period of two or more weeks during which five (or more) of the following nine symptoms are present, at least one of which must be one of the first two (a rough sketch of this decision rule follows the list):

  • Depressed mood, most of the day, nearly every day (can be irritability in children and adolescents)
  • Loss of interest or pleasure in nearly all activities (this is called anhedonia )
  • Weight loss when not dieting, or decrease or increase of appetite
  • Insomnia or hypersomnia  (sleeping too much)
  • Psychomotor agitation or retardation
  • Fatigue or loss of energy
  • Feelings of worthlessness or inappropriate guilt
  • Difficulty concentrating, making decisions
  • Recurrent thoughts of death or suicide ( suicide ideation ), or suicide attempt or plan
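To make the counting rule concrete, here is a minimal, purely illustrative sketch in Python. The shortened symptom names and the set-based check are our own simplification; they are not part of the DSM itself, and a real diagnosis also depends on duration, distress or impairment, and the clinical judgment discussed earlier.

```python
# Purely illustrative sketch of the counting rule described above; not a
# clinical tool. A real diagnosis also depends on duration, distress or
# impairment, ruling out other causes, and clinical judgment (see text).

CORE = {"depressed_mood", "loss_of_interest_or_pleasure"}  # at least one required

ALL_SYMPTOMS = CORE | {
    "weight_or_appetite_change",
    "insomnia_or_hypersomnia",
    "psychomotor_agitation_or_retardation",
    "fatigue_or_loss_of_energy",
    "worthlessness_or_guilt",
    "difficulty_concentrating",
    "thoughts_of_death_or_suicide",
}

def meets_episode_symptom_count(symptoms: set[str]) -> bool:
    """Five or more of the nine symptoms, including at least one core symptom."""
    present = symptoms & ALL_SYMPTOMS
    return len(present) >= 5 and bool(present & CORE)

# Four symptoms, one of them a core symptom: the count falls short.
print(meets_episode_symptom_count({
    "depressed_mood", "worthlessness_or_guilt",
    "insomnia_or_hypersomnia", "weight_or_appetite_change",
}))  # False

# Five symptoms including a core symptom: the count is satisfied.
print(meets_episode_symptom_count({
    "depressed_mood", "worthlessness_or_guilt", "insomnia_or_hypersomnia",
    "weight_or_appetite_change", "fatigue_or_loss_of_energy",
}))  # True
```

The point is only that the rule is a threshold over a checklist; as the surrounding text explains, applying it to a real person requires far more than counting.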

You might note that depression hits us with emotional and motivational symptoms, physical symptoms, and cognitive symptoms, thus spanning an extraordinary range of our everyday functioning.

This is a notable list of symptoms in another important respect, too. There are nine possible symptoms, three of which do not specify the direction of the change in behavior; sleep, appetite, and activity level might increase or decrease. In addition, people from different cultures tend to report different symptoms when they experience depression. For example, people from Latino/Latina and Mediterranean cultures often report “nerves” and headaches; people from Asian cultures report that they are weak, tired, and unbalanced. As you might guess, it can be quite difficult to recognize depression. Which of these two individuals would you say is depressed: Joe, a 27-year-old graduate student who has been sad and feels worthless, spends most of the day sleeping or sitting on the couch, and has been eating more than he usually does for the past month? Or Jennifer, a 17-year-old high school junior who unexpectedly quit the cross country team, has been irritable and feeling overly guilty about an argument she had with her mother three weeks ago, and is unable to sleep, eat, or sit still? In truth, both could be suffering from depression.

Many additional symptoms, such as anxiety and worrying, obsessive thoughts, phobias, panic attacks, and physical pain, often occur along with depression. Many people report difficulty with their social relationships, marriage, sexual functioning, occupation, or school. Also, increases in alcohol and drug use are common. Finally, when children have depression, they are more likely to suffer physical symptoms, irritability, and social withdrawal.

Depression and Suicide

It is worth spending some time on one specific symptom of depression, the recurrent thoughts of death or suicide. As we are sure you realize, some depressed individuals act on those thoughts. The CDC reports that suicide is the 10th leading cause of death in the US and the 2nd highest cause of death for people ages 15 to 24, behind only accidental death. (Note these estimates do not include deaths from COVID-19.) Although no one can know for sure, it is estimated that for every completed suicide, there are 8 to 25 attempted suicides. Three times as many women as men attempt suicide, but four times as many men as women die by suicide. The difference in completed suicides appears to be largely a result of the different methods that men and women choose; in particular, women are more likely than men to ingest poison, whereas men are more likely than women to choose a more deadly method, such as shooting.

Suicide is clearly related to depression, but the two are not synonymous. The large majority of people who suffer from depression do not attempt or complete suicide. On the other hand, the more serious the depression and the more suicidal thoughts that are present, the more likely suicide is. Also, if you look at people who do commit suicide, 60% of them suffered from a mood disorder. This can be confusing, so let us repeat it another way that highlights the difference between the two points (a brief numerical illustration follows them):

  • If someone has a mood disorder, he or she probably will not commit suicide.
  • If someone commits suicide, he or she probably had a mood disorder.
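The two statements describe different conditional probabilities, and those probabilities can differ enormously. As a rough illustration, the 60% figure comes from the paragraph above, while the numbers in the first line are deliberately round, purely hypothetical values chosen only to show the asymmetry:

```latex
% Hypothetical illustration: suppose 1,000,000 people have a mood disorder
% and 300 of them die by suicide in a given year.
\[
P(\text{suicide} \mid \text{mood disorder}) \approx \frac{300}{1{,}000{,}000} = 0.0003
\quad \text{(very unlikely)}
\]
\[
P(\text{mood disorder} \mid \text{suicide}) \approx 0.60
\quad \text{(most people who die by suicide had a mood disorder)}
\]
```

Knowing one of these probabilities tells you very little about the other, which is exactly the distinction the two statements above are making.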

It is very difficult to predict whether a person suffering from depression will commit suicide, because the same pattern exists between suicide and other risk factors as well. For example, a very small percentage of people who abuse drugs and alcohol commit suicide, but a larger percentage of people who commit suicide had abused drugs or alcohol. Other factors that increase the risk of suicide include impulsiveness, aggressive tendencies, previous suicide attempts, family history of suicide, and previous sexual abuse (NIMH, 1999). Also, some individuals, especially people under 24, attempt suicide after the suicide of a close friend or a celebrity, a phenomenon known as suicide contagion (CDC, 2002; Gould et al., 2003; Poijula et al., 2001).

The bottom line: If an individual reports that he or she has been thinking about suicide, you must take the comment seriously. Although most people who communicate that they are thinking about suicide do not follow through, most who commit suicide did communicate their intent in advance.

anhedonia : loss of interest or pleasure in nearly all activities

hypersomnia: sleeping too much

suicide contagion: an individual’s attempt at suicide following the suicide of a close friend or a celebrity

suicide ideation: recurrent thoughts of death or suicide

Persistent Depressive Disorder

You can think of persistent depressive disorder as a long-term milder form of depression. To be diagnosed, an individual must have a depressed mood most of the day for more than half of the days for two years or more. In addition, two of the following symptoms are required:

  • Poor appetite or overeating
  • Insomnia or hypersomnia
  • Low energy or fatigue
  • Low self-esteem
  • Difficulty concentrating or making decisions
  • Feelings of hopelessness

Bipolar Disorders

You may know bipolar disorder by its older name, manic-depression. You probably think of it as the disorder in which an individual has periods of mania interspersed with periods of depression. It is a bit more complicated than that, though. There are two main types, bipolar I disorder and bipolar II disorder. They are distinguished by the presence of a manic episode or a hypomanic episode, and by whether a major depressive episode is required or not.

A diagnosis of bipolar I disorder requires a manic episode only. A major depressive episode is not required (but the large majority of people with bipolar I do end up having one or more). A manic episode consists of at least one week of abnormally elevated or irritable mood and increased goal-directed activity or energy. In addition, three of the following symptoms are required (four if the mood is only irritable):

  • Inflated self-esteem or grandiosity
  • Decreased need for sleep
  • More talkative than usual
  • Racing thoughts
  • Distractibility
  • Increase in goal-directed activity or psychomotor agitation
  • Excessive high-risk behavior (e.g., buying sprees, sexual indiscretions)

Sometimes people have delusions , or false beliefs, when they are experiencing a manic episode. For example, an individual might believe that she can run a marathon without any training. It is also very common for someone experiencing a manic episode to deny that they are ill or need treatment.

A diagnosis of bipolar II disorder requires at least one hypomanic episode and at least one major depressive episode (exactly the same as in major depressive disorder). A hypomanic episode is a period of at least four days during which a person experiences the same symptoms as required for a manic episode.

manic episode: the active phase of bipolar I disorder. Often involves high energy and good mood; it can also include irritability, inflated self-esteem, feelings of grandiosity, and delusions.

delusions: false beliefs

hypomanic episode: a period of at least four days during which a person experiences the same symptoms as required for a manic episode

Causes of Depression and Mood Disorders

Befitting such hard-to-pinpoint disorders, the causes of depression and other mood disorders are complex. Biological and genetic factors are both involved, as are psychological and sociological factors. Because the concepts are complex but similar across different specific disorders, we will focus on major depressive disorder to illustrate the ideas and keep information at a manageable level.

Biological Factors

Many people have heard that depression is caused by a “chemical imbalance in the brain.” Honestly, though, that is not a very useful statement; the changes that have been observed in people who suffer from depressive disorders are far more extensive than a simple imbalance. Oakes et al. (2016) identified at least 34 separate brain areas that have been shown to be affected in major depression. Two key changes appear to be shrinkage of the hippocampus and of the cingulate cortex.

In addition, there appears to be something abnormal in neural transmission, notably in circuits involving the prefrontal cortex and the limbic system. Reduced levels of several different neurotransmitters have also been observed, including GABA, glutamate, serotonin, norepinephrine, and dopamine. In some cases, particularly for serotonin, faulty receptor sites have also been identified.

Whew! Now we think you can begin to understand why depression has such a wide array of possible symptoms (and, as you will see, only moderately effective treatments).

Even if we had a perfect account of the neurotransmitter and brain activity involved in depression, we would still not have an explanation  of the disorder. What we would have is a detailed description  of the disorder at a very fine level. A description tells you what a phenomenon looks like, while an explanation tells you how or why the phenomenon occurs. To get an explanation of depression, we would need to know why these brain and neurotransmitter changes occur.

Genetic and Epigenetic Factors

There is solid evidence that mood disorders have a genetic component, which by now should come as no surprise to you. Reviews of behavior-genetics research have discovered heritabilities around 37% for major depressive disorder. The environmental contribution appears to come largely through individual environments, rather than through environmental conditions that are common to all members of a family (Lohoff, 2010). Given the complexity of depression, it probably comes as no surprise to you that no single gene variation has been implicated in all, or even most, cases of depression. Genes involved in serotonin transport and reception have been cited frequently as important candidates, however. A recent review has also suggested that epigenetic changes in gene expression are indeed associated with depressive disorders (Li et al., 2019). There are few longitudinal studies, however, so it is not yet clear if these epigenetic changes are causes or consequences of depressive disorders.

Psychological Factors

Depressive episodes often follow stressful experiences. Of course, that does not mean that everyone who has a stressful experience will develop depression. We can begin to understand and predict who will, however, if we consider the role of genes. In essence, we can think of depression as an example of a gene x environment interaction (Module 10). For example, one line of research has found that individuals who had a particular version of a gene related to serotonin were likely to develop depression but only if they experienced serious life stressors (Caspi et al., 2010; Caspi et al., 2003). The researchers noted that many people who have the gene do not ever develop depression, and others who do not have the gene do develop depression. Clearly, something is still missing from the explanation. One likely possibility is that additional genes are involved.

A second possibility, also likely, is that cognition is involved. The thoughts of worthlessness and hopelessness that accompany depression are, for many sufferers, extreme versions of the kinds of thoughts that they have had for many years. Many researchers believe, then, that a negative style of thinking is not simply a symptom of depression but is a contributing cause. In other words, some people develop a habit of thinking in ways that make it more likely that they will develop depression.

Imagine that you are doing poorly in school and nothing you try is working. You devote every waking moment to studying or attending class, you spend many hours with your professors and in your college’s academic assistance areas, and you even hire a private tutor. Still, your grades do not improve one bit. Many people would finally give up, as they come to believe that nothing they can do will make a difference. In effect, they learn that they are helpless. This idea is one of the most important discoveries about the kind of thinking that puts people at risk for depression. Martin Seligman and his colleagues discovered it in the 1960s in some famous research with dogs.

The researchers placed two dogs in an apparatus that administered electric shocks. One dog was able to stop the shocks by pressing on a panel. This is a simple example of negative reinforcement, and the dog quickly learned to avoid the electric shocks and suffered no ill effects. The other dog was connected to the same shock generator as the first dog. This second dog received electric shocks whenever the first one did, but this dog did not have the ability to control the shocks. Although the two dogs received the same amount of shock, only the second one showed any negative effects. This dog essentially gave up and became passive and lethargic, much like a depressed person. Even if the dog could later escape the shocks, it made no effort to do so (Seligman & Maier, 1967). Seligman called the phenomenon learned helplessness .

Seligman and his colleagues have combined this concept with some of their concepts about optimism (Module 25). They have observed how differently optimists and pessimists look at the things that happen to them:

Figures: perception of successes and failures by optimists; perception of successes and failures by pessimists.

When an individual learns that he is helpless in a specific situation, he tries to explain why (Abramson et al., 1978). If he has a pessimistic style of thinking—he blames himself, thinks that it will affect other areas of life, and thinks it will last forever—he is likely to generalize the helpless feeling to many other situations. The result is a generalized learned helplessness and a very high risk of depression. This link between depression and thinking style has been demonstrated in a great deal of research (Abramson et al., 2002).

Sociocultural Factors

One way to think about the role of sociocultural factors in depression is to realize that when significant life stressors become increasingly common, they cross over from individual risk factors to cultural ones. Let us consider two briefly. First, we have already seen that a life event like unemployment can lead to long-term changes in happiness (Module 25). And unemployment can certainly qualify as a major life stressor that puts one at risk of developing depression. Consider this in a larger context. One recent study found that people who were employed at one time point were less likely to suffer from major depressive disorder at a later time point. Forty-three million workers in the US filed for unemployment between the start of the COVID-19 economic downturn and May 30, 2020. This is one (of many) reasons why experts have been concerned about an increase in mental illness as another tragic side effect of the pandemic (also see Module 29).

A second reason is loneliness. A recent meta-analysis of 88 studies (with over 40,000 individual people) found that loneliness has a moderate effect on depression (Erzen & Çikrikci, 2018).  One national survey of 20,000 individuals over 18, conducted on behalf of the insurance company Cigna, revealed the following sobering statistics:

  • 54% said that they often feel like no one knows them well
  • Nearly half said that they sometimes or always feel alone (46%) or left out (47%)
  • 43% said that their relationships lack meaning and the same number said that they sometimes or always lack companionship.

We could continue, but we want you to keep reading.

If we have put you in a bad mood, we will try to make it up to you here. The news is not all bad. Contrary to what experts had feared, the early results are that the COVID-19 physical distancing recommendations did not seem to increase loneliness in general, as indicated by a separate survey of 1,500 individuals (Luchetti et al., 2020). In other words, things are bad, but at least they are not getting worse. And in fact, there is even some cause for optimism in this study, as respondents reported that their perceived level of social support had increased over the first few months of the pandemic. This suggests that reducing loneliness and strengthening social support is a goal we can almost certainly accomplish, with an expected decline in illnesses like major depressive disorder (and, frankly, many other illnesses as well).

  • Which of the cognitive and sociocultural factors that contribute to depression can you recognize in your own life? What, if anything, do you do to protect yourself from the effects of these factors?

28.3. Treatments For Mood Disorders

  • Why do you think that antidepressant drugs like Zoloft, Prozac, and Cymbalta are so popular these days?
  • Are you generally in favor of or against the use of antidepressant drugs for the treatment of depression? Why?
  • What do you think the important elements of a psychotherapy treatment for depression should be?

The roles that neurotransmitter activity and negative thinking patterns play in depression are among the most important discoveries to advance our understanding of mood disorders. It should come as no surprise, then, that treatments based on these discoveries have emerged as the most successful ways for individuals to cope with these serious disorders. Specifically, cognitive-behavioral therapy and antidepressant medication are quite effective treatments for major depressive disorder and for the depressive symptoms of bipolar disorder.

Cognitive-Behavioral Therapy

The most effective psychotherapy for depression is cognitive-behavioral therapy, a simple combination of methods derived from cognitive theory and behavioral or learning theory. One key aspect of these therapies is that they try to help the client break habits, some of which have been established over a lifetime. It is not practical to expect that a one-hour conversation with a psychologist, once or twice per week, will have much of an impact. The client has to use the strategies throughout the week; in a sense, these therapies include a great deal of homework for the client.

The many specific therapies that make use of cognitive techniques all share important aspects. First, they are based on the observation that our thoughts are related to but separate from our feelings. Second, they help clients to recognize their own specific maladaptive or irrational thinking that contributes to depression. Third, they try to teach the client new ways of thinking. The main differences among cognitive therapies are specifics about the different kinds of thinking patterns that are common and specific techniques to recognize and stop them. For example, many cognitive therapies help clients to recognize the negative thoughts that they have about themselves, the world, and the future, an approach first described by psychologist Aaron Beck (1967).

In the behavioral part of cognitive-behavioral therapy, the therapist can help the client realize how aspects of the environment contribute to the depression. Then the client can be encouraged to avoid situations that lead to depression. When situations cannot be avoided—as, for example, with someone who becomes depressed about having to visit with relatives over the holidays—the therapist can teach the client new skills (such as relaxation) to solve problems or to cope with distress.

Antidepressants

Another common treatment for mood disorders—best used with psychotherapy but in reality often an alternative to it—is antidepressant medication. The most common type of antidepressant is called a selective serotonin reuptake inhibitor (SSRI). Commonly prescribed drugs such as Prozac and Paxil fall into this category. They seek to overcome problems in the neural transmission process. After the neurotransmitter serotonin is released into synapses, it is reabsorbed by the axon terminal button, a process called reuptake. SSRI antidepressants increase the effects of serotonin by preventing reuptake, so the serotonin that is available in a synapse stays there longer. SSRIs are very common, in large part because they are a great improvement over older kinds of antidepressants. Although SSRIs and older antidepressants are equally effective, SSRIs have much less severe side effects. SSRIs have also been used to treat other disorders, such as anxiety disorders.

One side effect that has received a great deal of well-deserved scrutiny over the years is a possible increase in suicidal thoughts among people who take SSRIs. For adults, this increase does not seem to occur, but it does for children and adolescents. Luckily, there does not seem to be an increase in completed suicides, but physicians and caregivers of children and adolescents who take SSRIs should be extra vigilant (Nischal et al., 2012).

Seventy-nine percent of antidepressants in the US are prescribed by primary care physicians, not psychiatrists (Barkil-Oteo, 2013). Although making antidepressants available through all medical doctors increases access to this important treatment option, it means that the drugs are being prescribed by people who might have little specialized training in psychological disorders. Many doctors also prescribe antidepressants without requiring that psychotherapy be used alongside them.

Many people wonder whether antidepressant treatment or psychotherapy is the better way to treat depression. The evidence suggests that both work. When research pits antidepressants against psychotherapy—specifically, cognitive-behavioral therapy—the two usually fare about equally well; about half of patients get better (Kappelmann et al., 2020). Nothing about depression implies that one needs to be limited to a single treatment option, however. The clear winner in these studies of treatment for depression, with more than 70% getting better, is the combination of antidepressants and cognitive-behavioral therapy (Keller et al., 2000; TADS, 2004).

cognitive-behavioral therapy: a simple combination of methods derived from cognitive theory and behavioral or learning theory

selective serotonin reuptake inhibitor: a class of antidepressant drugs that works by preventing the reabsorption (reuptake) of serotonin in synapses

Other Treatments for Mood Disorders

A number of other treatments have been used for mood disorders; some are reserved for serious cases:

  • Lithium  is commonly used for bipolar disorder. This drug works better for mania, so many people who have bipolar disorder also take an antidepressant for the depressive episodes. There is some evidence that cognitive-behavioral therapy is also effective for bipolar disorder (Jones, 2004).
  • Vagus nerve stimulation was approved by the FDA in 2004 for severe, treatment-resistant depression. The vagus nerve carries information to the brain from the head, neck, chest, and abdomen, and it feeds into the hypothalamus and amygdala, brain areas that seem to be involved in depression (see section 11.2). Patients have a device surgically implanted in the chest that stimulates the nerve. Research support for this treatment was shaky, and the benefits appeared in these studies only after one to two years of using the therapy, but the severity of the disorder led the FDA to approve it.
  • Electroconvulsive therapy  is occasionally used for severe depression that is resistant to all other forms of treatment. After sedatives and muscle relaxers are administered, an electric current is passed through the brain, which starts a convulsion. Treatment usually involves 6 to 12 sessions over about a two-week period. The patient suffers some memory loss, but the depression lifts in most cases. Up to 85% will become depressed again, however, and may need additional treatment (Fink, 2001).
  • Transcranial magnetic stimulation is another possible therapy for depression that has not responded to other treatments. In this therapy, a magnetic field is applied over the left prefrontal cortex. A course of twenty daily sessions seems to improve short-term symptoms in up to 50% of patients whose depression had been resistant to other forms of treatment. Although few studies have looked at longer-term results, those that did found that most patients' improvements lasted at least 2 to 5 months (Gellerson & Kedzior, 2018).
  • Once again, are you generally in favor of or against the use of antidepressant drugs for the treatment of depression? Why?
  • What is your opinion about the Food and Drug Administration’s decision to put the black box warning label on antidepressants?
  • In the Debrief for section 28.2, you are asked to describe strategies that you might have used to protect yourself from the cognitive and sociocultural factors that contribute to depression. Which of the strategies remind you of the therapies described in this section?

Module 29: Other Psychological Disorders and Treatment

We considered giving this module the subtitle “The Common, the Severe, and the Controversial.” It is extremely likely that you will be familiar with the four categories of disorders that we will describe because, well, they are common, severe, or controversial. Anxiety and related disorders, characterized by intense fear or nervousness, are “the common.” For example, a great many people suffer from social and specific phobias, and occasional panic attacks are common among the non-disordered population. Chances are quite good that you know someone who suffers from an anxiety disorder. Schizophrenia spectrum disorders are “the severe.” If you have an image of a hospitalized psychiatric patient who is confused and confusing, talks incoherently, and displays the wrong emotions, it is probably based on schizophrenia. Although long-term hospitalization is no longer the norm, sufferers of schizophrenia do indeed resemble this common image. Personality disorders and dissociative disorders are “the controversial.” They are difficult to diagnose, and their causes are disputed. Making matters worse, the cinematic “homicidal maniac” so responsible for our lingering fear of violence among the psychologically disordered is usually based on one of these controversial disorders.

Module 29 has four sections. Section 29.1 describes the extremely common anxiety and related disorders. Section 29.2 turns to perhaps the most severe of the psychological disorders, schizophrenia spectrum disorders, and section 29.3 tackles the controversial, especially antisocial personality disorder and dissociative identity disorder. Finally, section 29.4 concludes the module with a brief description of some additional therapies and a discussion of factors that different therapies share.

29.1 Anxiety disorders

29.2 Schizophrenia

29.3 Personality disorders and dissociative disorders

29.4 Therapy in sum

By reading and studying Module 29, you should be able to remember and describe:

  • Anxiety disorders: generalized anxiety disorder, panic attacks, panic disorder, agoraphobia, phobias, systematic desensitization (29.1)
  • Posttraumatic stress disorder, obsessive-compulsive disorder (29.1)
  • Schizophrenia: positive and negative symptoms, subtypes, biological factors and causes, antipsychotic medication (29.2)
  • Personality disorders, antisocial personality disorder (29.3)
  • Dissociative disorders, dissociative identity disorder (29.3)
  • Psychodynamic therapy: free association, transference, projective tests, interpersonal therapy (29.4)
  • Client-centered therapy (29.4)
  • Shared factors in therapy (29.4)

By reading and thinking about how the concepts in Module 29 apply to real life, you should be able to:

  • Recognize examples of the different anxiety disorders (29.1)
  • Recognize examples of schizophrenia, including subtypes (29.2)
  • Recognize examples of antisocial personality disorder and dissociative identity disorder (29.3)

By reading and thinking about Module 29, participating in classroom activities, and completing out-of-class assignments, you should be able to:

  • Outline how you might plan systematic desensitization for a specific fear or anxiety that you commonly experience (29.1)
  • Comment on the opinion that “psychological therapy is more an art than a science” (29.4)

29.1. Anxiety and Related Disorders

  • If you have never suffered from an anxiety disorder, how would you characterize the differences between everyday nervousness and an anxiety disorder?
  • If you have suffered from an anxiety disorder, how would you describe the differences between everyday nervousness and an anxiety disorder?

Whenever we experience an emotion or find ourselves in a physically or psychologically stressful situation, our bodies react with sympathetic nervous system arousal—the fight-or-flight response. Often, the arousal is experienced as anxiety , a feeling roughly similar to nervousness or fear. Usually, anxiety is reasonably mild, and it goes away soon after the arousing situation changes. But sometimes anxiety becomes a problem; it lasts so long or is so distressing that the individual cannot function or is able to control the anxiety only by engaging in strange, maladaptive behaviors. When that happens, the individual is said to be suffering from an anxiety disorder.

The specific anxiety disorders we will describe in this section are panic disorder and agoraphobia, generalized anxiety disorder, and phobias. We will also describe two related disorders: obsessive-compulsive disorder and posttraumatic stress disorder. On the surface, these disorders may not seem very similar. Although each disorder has features that distinguish it from the others, the feature they all share is out-of-control anxiety, anxiety that causes severe distress or interferes with daily functioning.

anxiety: a feeling roughly similar to nervousness or fear

anxiety disorder: a category of psychological disorders marked by very distressing anxiety or maladaptive behaviors to relieve anxiety

Panic Disorder

An individual suffers from panic disorder if he or she experiences repeated unexplained panic attacks. Panic disorder itself is not very common; about 1% – 2% of the population can expect to have it during their lifetime. Because panic attacks occur in all of the anxiety disorders and are common among people who suffer from depression as well, it makes sense to begin by describing them. A panic attack is a sudden dramatic increase in anxiety, marked by intense fear and (commonly) a feeling of doom or dread. It may be accompanied by physical symptoms, such as pounding heart, sweating, trembling, chest pain, nausea, abdominal distress, shortness of breath, and dizziness; it may also be accompanied by disruptive thoughts and feelings, such as feelings of unreality or being detached from oneself, fear of dying, and fear of losing control.

Panic attacks may be unexpected, or they may follow some typical cue. Cues may be external (for example, an individual has a panic attack when about to give a speech) or internal (for example, a pounding heart creates the fear of having a heart attack). Occasional panic attacks are quite common; up to 40% of young adults report that they have had them (King, Gullone, & Tonge, 1993). But if the attacks are frequent or severe and they interfere with everyday functioning, they may signal an anxiety disorder. If the sufferer has recurrent, unexpected panic attacks, this signals panic disorder. Otherwise, the panic attacks will be considered a symptom of another disorder.

Recurrent panic attacks often lead sufferers to avoid situations or people associated with the attacks. The avoidance may turn into agoraphobia, anxiety about being unable to escape from or get help in a situation in which a panic attack is expected. People often think of agoraphobia as fear of leaving the house or fear of crowds. At a more basic level, however, it is better to think of agoraphobia as fear of panic attacks and of being stranded away from safety and helpers when one strikes. Many people who suffer from agoraphobia are able to venture out when they are accompanied by someone. In addition, they may have a number of “safe places” among which they can comfortably travel.

panic attack: a sudden dramatic increase in anxiety, marked by intense fear and (commonly) a feeling of doom or dread

panic disorder: an anxiety disorder marked by repeated unexplained panic attacks

agoraphobia: an anxiety disorder marked by anxiety about being unable to escape from or get help in a situation in which a panic attack is expected

Generalized Anxiety Disorder

People who have generalized anxiety disorder seem to always be tense and anxious about everything. The disorder is characterized by six months or more of excessive anxiety or worry, more days than not, about different events or activities: not necessarily about everything, but about a variety of different parts of their lives. In addition, three or more of the following symptoms are required:

  • Feeling restless or on edge
  • Easily fatigued
  • Difficulty concentrating or mind going blank
  • Irritability
  • Muscle tension
  • Sleep disturbance

Phobias

There are three different kinds of phobias: agoraphobia, which you have already seen; specific phobias; and social anxiety disorder (social phobias). Phobia, as you probably know, means fear; the word comes from ancient Greek. (The names for different phobias were generated by tacking the Latin or Greek word for the feared object or situation to the front of phobia, as in “arachnophobia,” or fear of spiders.) A phobia, then, is an intense fear or anxiety associated with a specific object or situation. Although many people report feeling uneasy or fearful in the presence of certain objects or in certain situations, an individual suffering from a phobia has intense anxiety that is out of proportion to the threat posed. Panic attacks in the presence of the feared object or situation are common. Although sufferers realize that their fears are unreasonable or excessive, their fear is so intense that it interferes with their everyday functioning.

Agoraphobia, although technically a phobia, never appears alone; it is always a symptom of another disorder, such as panic disorder. The other two major types of phobias, specific phobias and social anxiety disorder (social phobias), do appear as their own disorders. Specific phobias are fears of particular objects or situations, such as heights, spiders, water, or flying. Specific phobias are quite common. Up to 10% of the US population experiences specific phobias, although very few seek treatment for them (Kessler et al., 1994). Social anxiety disorder is also quite common; more than 12% of the population can expect to have it during their lifetime (Kessler, Stein, & Berglund, 1998). Social anxiety disorder (social phobias) is, at its core, the fear of being judged by others or of being embarrassed; people who suffer from it are intensely fearful that they will say or do something wrong or inappropriate. Social phobias are commonly manifested as a fear of speaking in public or performing, of social situations such as parties, or of interactions such as meeting people. Because social situations are harder to avoid than specific objects or non-social situations, social phobias tend to be more burdensome than specific phobias are.

It can be easy to confuse social anxiety disorder and a specific phobia of the same type of situation. The key is the fear of being evaluated: if that fear is present, it is social anxiety disorder; otherwise, it is a specific phobia.

phobia: an anxiety disorder marked by an intense fear or anxiety associated with a specific object or situation

specific phobias: an anxiety disorder marked by fears of particular objects or situations

social anxiety disorder/social phobias: an anxiety disorder marked by the fear of being judged by others or of being embarrassed in social situations

Obsessive-Compulsive Disorder

Obsessive-compulsive disorder (OCD) used to be classified as an anxiety disorder, and it still has a significant anxiety component. The anxiety comes from obsessions: persistent, uncontrollable, inappropriate thoughts, impulses, or images. Obsessions are usually not related to any real-life problems that the individual might be facing. Common obsessions include thoughts about contamination (for example, thinking that there are germs in restaurant food), doubts (for example, wondering if the gas stove was left on), aggressive impulses (for example, an impulse to hurt one’s children), and sexual thoughts. Many people have occasional images or impulses like these, but they do not cause a great deal of distress because they can be banished from thought without much difficulty. People with obsessive-compulsive disorder try to ignore or suppress the thoughts, and if they fail, they turn to compulsions to help. Most sufferers of OCD do end up having both obsessions and compulsions.

A compulsion is a repetitive action or thought (think of it as a physical or mental act) that is intended to reduce the anxiety of an obsession. Some compulsions reduce anxiety because they address the concern contained in the obsession. For example, someone who is obsessed about leaving the stove on may check it every 10 minutes. Other compulsions might work through distraction; every time the individual thinks of hurting his children, he touches both of his knees ten times with each finger. Compulsions can be very elaborate, by the way. The most common compulsion is hand-washing; contamination by germs is a very common obsession.

In order to be diagnosed with OCD according to the DSM, the obsessions or compulsions must be very distressing, take up more than one hour per day, or interfere significantly with everyday functioning.

obsessive-compulsive disorder: a disorder marked by uncontrollable, inappropriate thoughts, impulses, or images that lead to anxiety (obsessions) and repetitive action or thought that is intended to reduce the anxiety (compulsions)

compulsion: a repetitive action or thought (think of it as a physical or mental act) that is intended to reduce the anxiety of an obsession

obsessions: persistent, uncontrollable, inappropriate thoughts, impulses, or images

Posttraumatic Stress Disorder

Nearly everyone experiences a stress response after a traumatic experience. For example, you are extremely likely to be tense and jittery in the immediate aftermath of a serious car accident, a result of the fight-or-flight response from the sympathetic nervous system. It is quite common for an individual to have lingering anxiety for a week or more after a traumatic event. For most of us, the anxiety goes away completely before too long. Others are not so fortunate; they suffer from posttraumatic stress disorder (PTSD). PTSD is characterized by complex sets of symptoms that last for at least one month (often much longer than that) after experiencing a traumatic event:

One or more intrusion symptoms:

  • Haunting memories
  • Recurrent nightmares
  • Intense psychological distress or physical reactions from stimuli that remind the individual of the trauma

One or both avoidance symptoms:

  • Trying to avoid memories, thoughts, and feelings
  • Trying to avoid reminders of the traumatic event

Two or more changes in cognition or mood:

  • Inability to remember important parts of the traumatic event
  • Persistent negative beliefs or expectations
  • Distorted thoughts about the traumatic event leading the individual to blame self or others
  • Negative emotions
  • Loss of interest in activities
  • Feeling separated from others
  • Inability to experience positive emotions

And two or more changes in arousal and reactivity:

  • Irritability and angry outbursts
  • Reckless or self-destructive behavior
  • Hypervigilance
  • Easily startled
  • Problems concentrating

Originally, PTSD was diagnosed only for individuals who experienced extreme events that directly threatened their own lives, such as being victimized in a violent crime or experiencing war or a severe natural disaster. The diagnosis has since expanded substantially. The individual must be exposed to actual or threatened death, serious injury, or sexual violence, and there are several ways this can happen: through directly experiencing it, by witnessing it happening to others, by learning about it happening to someone close, or by repeated or extreme exposure to aversive details about it.

As we are sure you can guess, the prevalence of post-traumatic stress disorder in a group is heavily dependent on the severity of the traumas that members face. In a 2003 survey of over 1,700 US Army and Marine personnel who had been deployed in the Iraq war, 12.5% met the criteria to be diagnosed with PTSD 3 to 4 months after their return home. The more combat they experienced, the more likely they were to have post-traumatic stress disorder (Hoge et al., 2004).

Figure: rates of PTSD after combat.

Only fourteen of the 1,700 soldiers and Marines studied were women. Research had previously shown that 20% of women, but only 8% of men, suffer from PTSD after a trauma (Kessler et al., 1995), but as you can see, the frequency and severity of the trauma can increase that rate dramatically.

People who experience trauma over a very long period would be likely to suffer from PTSD as well. A survey of a representative sample of the adult (over age 15) population of Afghanistan, a country that has been nearly continuously at war for over 20 years, found that over 60% of the population had experienced four or more traumatic events in the past 10 years. Overall, 42% of the population had symptoms of posttraumatic stress disorder (Lopes Cardozo et al., 2004).

As you might guess, there are early reports that PTSD is an important side effect, if you will, of COVID-19. Health care workers, patients who require intensive care treatment for long periods of time, and family members who have been separated from loved ones while they die are all in danger (Kanzler & Ogbeide, 2020; Tingley et al., 2020).

Causes of and Treatments for Anxiety Disorders

Researchers have made important discoveries about the biological and psychological factors that are related to or contribute to anxiety disorders. These discoveries, especially those about psychological factors, have led to the development of effective treatments for these disorders.

Biological Factors and Causes

Heritability estimates from twin and family studies for anxiety disorders are in the 30% – 50% range, similar to depression (Hettema et al., 2001; Smoller et al., 2009).

As you certainly realize, anxiety disorders have very different features from each other. Even the anxiety or fear itself differs across the disorders. These differences show up in brain activity, as there are relatively few similarities for different kinds of anxiety. Two specific brain areas that are key are the amygdala (which you have already seen) and the insula (which you have not). The insula is located in the area between the frontal, temporal, and parietal lobes, and is divided into up to 13 separate subdivisions (Uddin et al., 2017). These two areas are important in all three of the following disorders: posttraumatic stress disorder, specific phobias, and social anxiety disorder (Wager, 2007). The amygdala in particular might be responsible for an inflated perception of threat and emotional responses (Schmidt et al., 2018).

By now you probably realize that networks of brain areas are important for complex responses. In this case, the expectation of bad outcomes (that is, worrying or anxiety) is associated with a network including the amygdala, insula, hippocampus, cingulate cortex, and prefrontal cortex (Schmidt et al., 2018). If you are reading carefully, you might notice that several of these are the same areas implicated in depression. The front part (anterior) of the cingulate cortex along with the insula has been nicknamed the fear network (Sehlmeyer et al., 2009). Research into pharmacological and psychotherapeutic treatments for anxiety disorders has found that brain activity in all of these very same areas is affected by the treatments (Greco & Liberson, 2016).

Learning Factors

Many cases of anxiety disorder involve straightforward applications of learning principles. Through a combination of classical and operant conditioning, ordinary fears can be transformed into the intense fear of a phobia. For example, you may have an intense fear of dogs that originated as a conditioned response to being bitten as a child (Module 6). The unconditioned stimulus of being bitten by the dog led to an unconditioned response of fear. The sight of the dog was a conditioned stimulus, which led to a conditioned fear response. Generalization could then lead you to experience the conditioned fear response in the presence of any dog.

Once a fear is established through classical conditioning, it can be strengthened considerably through operant conditioning. When you feel anxiety or fear in the presence of dogs, you are very likely to avoid them. When you avoid them, the fear goes away, so you would be likely to repeat these avoidance behaviors in the future. This, of course, is the strengthening of a behavior through negative reinforcement.

Negative reinforcement helps us to understand how the compulsive behaviors of obsessive-compulsive disorder develop as well. When an individual is extremely anxious about contamination by germs, for example, he may scrub his hands, and the anxiety goes away. The next time he feels the anxiety, he will repeat the same behavior that led to its removal previously. As the negative reinforcement–fueled cycle of anxiety and hand washing to relieve the anxiety continues, the result can be a compulsion.

Psychotherapy

Anxiety disorders are often successfully treated with psychotherapy alone, except for OCD, which responds well to treatment with a combination of an SSRI antidepressant and cognitive-behavioral therapy. In addition, severe anxiety can sometimes be controlled with antianxiety drugs. These drugs are often agonists for the neurotransmitter GABA. This inhibitory neurotransmitter, when released in the amygdala, reduces anxiety, so a drug that increases or mimics its activity would enhance that effect.

The specific psychotherapies that are commonly used to treat anxiety disorders are types of cognitive-behavior therapy or simply behavior therapy. The goal of the cognitive portion of therapy is similar to that for the treatment of depression, to recognize and correct specific thoughts that contribute to the individual’s anxiety (Module 28). For example, a client may be taught to recognize the shortness of breath and palpitations that precede a panic attack so that he will not mistake it for a heart attack (as is common). Sometimes, that knowledge alone is enough to prevent the panic attack (Seligman, 1994).

Although a therapist may employ many specific behavioral techniques, the most well-known one is called systematic desensitization. This technique, quite an effective treatment for phobias, is based on two important observations. First, many specific fears have been classically conditioned (see Module 6). Second, a person cannot be anxious (or fearful) and relaxed at the same time. The goal of systematic desensitization is to replace the fear-conditioned response with a relaxation conditioned response. The therapist helps this process along by teaching the client to relax in the presence of the feared conditioned stimulus. Systematic desensitization works gradually, unlike a therapy technique called flooding , in which the client is exposed immediately to the feared stimuli in an unescapable situation—the difference between, say, letting a client who fears dogs work up to facing a barking dog over a dozen encounters and simply throwing the client into a pen with a dozen barking dogs. Systematic desensitization works by asking the client to imagine frightening situations at first rather than experiencing them directly. The client and therapist prepare a list of increasingly frightening situations to imagine, starting with a very mild one. For example, a client who fears dogs might have the following list:

  • Watching a television show about dogs
  • Meeting a gentle puppy
  • Playing with the puppy
  • Standing in the same room as a gentle larger dog
  • Petting a gentle dog
  • Facing a barking dog while standing outside a neighbor’s front door
  • Standing in the same room as a barking dog

After drawing up this list, the therapist teaches the client special techniques to relax. From here, the process is simple, although it does take some effort and persistence. The client relaxes and then imagines the first item in the list. If she is able to stay relaxed, she moves to the second item, then the third item, and so on. As soon as she feels any anxiety at all, the client stops imagining the situations and tries to relax again. Once relaxed, she can begin to imagine situations again. Eventually, the client is able to remain relaxed while imagining situations that previously caused anxiety. At this point, the client seeks some real situations that would have at one time caused anxiety while trying to maintain the relaxation response.

Recently, systematic desensitization has gone high tech. Clients in virtual reality exposure therapy face their feared situations in the form of a sort of video game, a realistic computer-generated environment with which they can interact. Research has shown that virtual reality therapy is effective for two very common phobias, heights and flying, and we are awaiting research results for other kinds of phobias (Krijn et al., 2004). In one study, clients who feared flying did as well in virtual reality exposure therapy as they did in a standard therapy that used exposure to actual situations, demonstrating how useful, efficient, and relatively inexpensive the therapy can be (Rothbaum et al., 2002).

systematic desensitization: a behavior therapy in which a client learns to relax while imagining increasingly frightening situations related to his or her phobia

flooding: a behavior therapy in which a client is exposed immediately to the feared stimuli of a phobia in an inescapable situation

virtual reality exposure therapy: a behavior therapy related to systematic desensitization in which a client interacts with feared situations in a computer-generated environment

  • Have you ever had a panic attack? If so, what brought it on? Can you describe how it felt?
  • Think about an event, object, or situation that makes you very anxious. How would you plan a program of systematic desensitization for your anxiety?

29.2. Schizophrenia Spectrum Disorders

  • Based on your prior exposure to the concept, how would you describe someone who suffers from schizophrenia?

Schizophrenia, one of the most serious of all psychological disorders, is very complex. In general, individuals with schizophrenia suffer from disturbed perceptions and disorganized thoughts. Their thoughts and perceptions are often extreme, bizarre, and irrational. Schizophrenia is actually a whole category of disorders, so symptoms can be quite different from person to person. However, many people who suffer from schizophrenia cannot function in society without treatment.

General Characteristics of Schizophrenia

Schizophrenia is characterized by two general types of symptoms, called positive and negative. Positive symptoms denote the presence of extra and inappropriate behaviors:

  • Delusions are false beliefs, particularly about sensations or experiences. Of course, many normal people have false beliefs, and the boundary between a delusion and a “normal wrong belief” is a fuzzy one. The best criterion to tell them apart is the persistence of the belief in the face of contradicting evidence. For example, one woman suffering from schizophrenia reported that she was Senator Edward Kennedy’s daughter despite the fact that she was quite obviously older than the Senator. Delusions are clearly out of touch with reality.
  • Hallucinations   are sensations and perceptions that are not based on real stimuli. The most common hallucinations in schizophrenia are auditory, especially hearing voices (two voices conversing are especially common).
  • Disorganized speech  is speech that quickly and frequently gets derailed or is incoherent.
  • Disorganized or catatonic behavior. Disorganized behavior is activity inappropriate to the current situation. An individual may react with the wrong emotion (for instance, laugh at bad news), dress bizarrely, or otherwise act in a very unusual way. Catatonic behavior is a failure to react to the environment. Individuals with schizophrenia may seem totally unaware of the things going on around them, stand or sit like a statue, or make excessive repetitive movements.

An individual who is currently experiencing positive symptoms is said to be psychotic.

Negative symptoms of schizophrenia are those that represent the loss of normal behavior. They include reduced speech (less fluent, less frequent, empty replies to questions), lack of initiative in activities, and especially loss of emotional characteristics, such as an immobile face or monotone voice. Catatonic behaviors might seem like negative symptoms, but catatonic behaviors are considered positive symptoms because they mark a difference in—and not necessarily a loss of—motor behaviors.

An individual can be diagnosed with schizophrenia if they have two or more of the above symptoms for one month or longer. One of the symptoms must be delusions, hallucinations, or disorganized speech. In addition, some signs of the disorder must be present for at least six months.

A second disorder in this class is delusional disorder, characterized by at least one month of delusions (without a diagnosis of schizophrenia). The individual’s behavior might be more-or-less normal outside of the delusions, making this disorder seem less severe. The delusions can be one of several types:

  • Erotomanic: believing that someone loves the individual
  • Grandiose: belief that one has great talent or insight
  • Jealous: beliefs that spouse or lover is unfaithful
  • Persecutory: beliefs that others are spying, cheating, conspiring against, etc.
  • Somatic: beliefs about body functions or sensations

negative symptoms: symptoms that represent the loss of normal behavior

positive symptoms: symptoms that represent the presence of extra and inappropriate behaviors

Biological Factors in Schizophrenia

There is strong evidence that schizophrenia has a significant genetic component. The number one risk factor for schizophrenia is having a close relative who suffers from it (Owen & O’Donovan, 2003). Estimates of heritability for schizophrenia by behavior geneticists range from 80% to 85% (Cardno & Gottesman, 2000; Hilker et al., 2018). Remember, even with such a high heritability, having the genes does not guarantee that an individual will develop schizophrenia. For example, if one identical twin has schizophrenia, there is only a 33% to 65% chance that the other twin will also have it.

More than for any other disorder, researchers have discovered severe abnormalities in the structure and activity of the brain associated with schizophrenia. The brains of many people with schizophrenia have larger fluid-filled spaces, called ventricles, than those of people without the disorder (Lieberman et al., 2001). Because there is fluid where there should be brain tissue, there is a corresponding reduction of brain mass. There are also many individual brain areas that have abnormal activity or structure, including the thalamus, hippocampus, amygdala, and frontal and temporal lobes (Andreasen, 2001) (Module 11). The major functions associated with these brain areas include sensation, memory, emotions, language, and planning, so you can perhaps understand how the many symptoms of schizophrenia occur.

The most important neurotransmitter that has been implicated in schizophrenia is dopamine. There appears to be too much dopamine in some areas of the brain, which may be responsible for the positive symptoms, and too little in areas such as the prefrontal cortex, which may be responsible for the negative symptoms (Conklin & Iacono, 2002; Davis et al., 1991).

Treatments for Schizophrenia

The first line of treatment in cases of schizophrenia is an antipsychotic medication. Traditional antipsychotic drugs decrease the activity of dopamine (that is, they are antagonists), often by blocking the receptor sites on dendrites. These drugs are quite effective for the positive symptoms, but they do not help negative symptoms much. Traditional antipsychotic drugs are very powerful, with serious side effects, such as involuntary facial movements (once these start they are irreversible for most patients), blurred vision, and sexual problems. Without the drugs, however, many people who suffer from schizophrenia have little chance of functioning outside of a hospital. Newer antipsychotic drugs, such as clozapine and risperidone, have slightly different effects on neurotransmitters (such as blocking only some dopamine receptors or influencing other neurotransmitters such as serotonin), and they appear to be at least as effective as traditional ones with far less serious side effects (Bondolfi et al., 1998).

Although antipsychotic drugs benefit the majority of people who suffer from schizophrenia (Bondolfi et al., 1998; Spaulding et al., 2001), there is definitely a role for other kinds of therapy. In particular, behavior therapy is extremely helpful. A system of reinforcement can help the individual develop needed social skills to help him or her function once the more dramatic positive symptoms have been controlled by medication.

  • It is extremely likely that you have some false beliefs and that they are resistant to change. How do you know that these false beliefs do not qualify as delusions?

29.3. Personality Disorders and Dissociative Disorders

  • What do you think a sociopath is?
  • Have you ever heard of multiple personality disorder? Write down everything you know about this disorder.

When “crazy people” are depicted in movies and fiction, they are often either cold-blooded criminals or “split personalities,” Dr. Jekyll and Mr. Hyde types. These depictions are generally based on two types of psychological disorders: personality disorders and dissociative disorders. Although the truth about people with these sorts of disorders is less dramatic than fictional accounts would have you believe, it is a fascinating and controversial piece of the picture of psychological disorders.

Personality disorders are inflexible patterns of behavior or thinking that reflect deviations from a culture’s expectations and lead to impairment or distress. These are not the temporary lapses in judgment and manners that we all experience at one time or another. Personality disorders are stable over time and across different life situations. There are ten different personality disorders, the most well known of which is antisocial personality disorder.

Dissociative disorders are characterized by dissociation, a split in consciousness. Again, people have dissociation experiences quite commonly. When you are preoccupied while driving home, for example, you might arrive and realize that you do not remember part of the drive. At times, your mind was so focused on your thoughts that you were not conscious of where you were going. That is an ordinary example of dissociation, a simple split in consciousness. People who suffer from dissociative disorders have severe and dramatic splits, however. They lose contact with their identity or memory, and may have feelings of being separated from themselves. As you will see, this last class of disorder is rare (affecting about 1% of the population) and controversial.

personality disorder: a category of psychological disorders marked by inflexible patterns of behavior or thinking that reflect deviations from a culture’s expectations and lead to impairment or distress

dissociative disorders: a category of psychological disorders marked by dissociation, a split in consciousness

Antisocial Personality Disorder

Commonly known as sociopaths or psychopaths, people who suffer from antisocial personality disorder have no regard for the rights of other people, a pattern present since age 15 (although the disorder can be diagnosed only after age 18). Three or more of the following symptoms are required:

  • Repeatedly performing illegal acts
  • Repeated lying, use of fake names, conning others for personal profit or pleasure
  • Impulsivity or failure to plan
  • Irritability and aggressiveness
  • Disregard for safety of self or others
  • Irresponsibility—with work behavior or financial obligations
  • Lack of remorse—being indifferent to or rationalizing behavior that hurt others

People with antisocial personality disorder often engage in illegal behavior, deception, aggressiveness, and disregard for others’ safety. They are also typically indifferent about or lack remorse for their behavior. They often lack empathy and are impulsive, arrogant, cynical, and contemptuous toward other people. In addition, alcohol abuse is common among people who have antisocial personality disorder, probably making them even more likely to act on their aggressive impulses (Kraus & Reynolds, 2001).

Antisocial personality disorder is clearly one of the disorders that contribute greatly to the stereotypes about the psychologically disordered being violent and antisocial. By definition, antisocial personality disorder includes illegal, aggressive, or other antisocial behavior, the only disorder to do so. It comes as no surprise, then, that people with antisocial personality disorder are much more likely to commit violent crimes than other people are (Hart & Hare, 1997). Not all people with antisocial personality disorder commit violent crimes, however. Some commit crimes in business, and many are not even criminals at all (Reid, 2001).

Not much is known about the causes and treatments for personality disorders in general, in part because they are difficult to diagnose, a fact that makes them a bit controversial. Antisocial personality disorder is no different. As usual, however, there is evidence for a genetic component for the disorder, but the estimates of heritability have varied widely, ranging from 40% to 70% (Carey & Goldman, 1997; Cloninger & Gottesman, 1987; Torgersen et al., 2012).

Researchers have suspected that some of the biological causes of aggression are important for antisocial personality disorder as well (Module 20). For example, low levels of serotonin, linked to impulsiveness and aggression in normal people, may play a role in antisocial personality disorder (Ferris & de Vries, 1997; Mann et al., 2001). Observers have noted that impulsiveness may be the key characteristic of antisocial personality disorder (Rutter, 1997). This observation has led to the suggestion that SSRI antidepressants be used to treat antisocial personality disorder (Krause & Reynolds, 2001) (see sec 29.3). Although research is lacking for using SSRIs for antisocial personality disorder, there has been some success at using these drugs to treat patients with other personality disorders and impulse control problems (Hollander, 1999; Markovitz, 2004).

People who have antisocial personality disorder do not have strong physiological reactions to what should be stressful situations (Herpertz et al., 2001). In a situation in which most people would have substantial sympathetic nervous system activity and feel intensely anxious or fearful, a person with antisocial personality disorder might be completely calm. In essence, it looks as if people with antisocial personality disorder do not experience much (if any) fear (Raine, 1997). People with antisocial personality disorder might seek dangerous and antisocial experiences to make up for their lack of internal arousal, and their lack of fear makes it unlikely that they are worried about the negative consequences (Eysenck, 1994).

Although there are definitely characteristic behaviors and patterns of thinking in many people with antisocial personality disorder, suggesting that psychotherapy should be effective, for the most part it has not been. The individuals who have the disorder resist treatment, believing that they do not have a problem—everyone else has the problem (Millon et al., 2000).

Dissociative Identity Disorder

Although dissociative identity disorder is very different from personality disorders, it shares with antisocial personality disorder the unfortunate distinction of being commonly represented in depictions of “homicidal maniacs” in the media. You may know dissociative identity disorder by its former name, multiple personality disorder. It is characterized by a dissociation, or split, between different parts of the personality. The dissociation becomes so complete that the person develops two or more complete personalities; each personality controls the person at different times, and the individual has significant gaps in memory about important events, personal information, or prior trauma. Although schizophrenia and dissociative identity disorder are commonly confused, you should be able to recognize some key differences. In schizophrenia, an individual may have a delusion and believe she is someone else, but she does not exhibit complete, separate personalities at different times.

The cause of dissociative identity disorder is the subject of controversy. Some believe that it results from an individual trying to repress extremely painful memories, especially memories of childhood sexual abuse (Ross, 1997). The child learns to dissociate during the abuse as a way of reducing the distress of the trauma. She (nearly all sufferers of dissociative identity disorder are female) forms a separate personality, one that does not experience the abuse.

Psychologists on the other side of the debate argue that dissociative identity disorder is best explained as an individual’s response to role expectations. More precisely, information from the media, individual beliefs about the disorder, and cues from therapists create a powerful expectation that leads some individuals to develop symptoms of the disorder (Lilienfeld et al., 1999; Spanos, 1994). Perhaps you have just realized why this debate is so explosive. Proponents of the role expectation view have suggested that therapists, in part, cause  dissociative identity disorder by using techniques that (unintentionally) encourage the client to form separate personalities. There is in fact evidence that completely normal people can rather easily be encouraged to display symptoms of dissociative identity disorder. All you have to do is hypnotize college students and ask a set of questions designed to “discover” multiple personalities (Spanos et al., 1985).

Also consistent with the idea that dissociative identity disorder is a consequence of some therapy is the observation that it commonly develops only after the therapy. For example, according to one estimate, only 20% of dissociative identity disorder patients had clear symptoms of the disorder before the therapy (Kluft, 1991). Even people who reject the role expectation explanation admit that people who have dissociative identity disorder are commonly unaware of their multiple personalities before therapy (Lilienfeld et al., 1999).

Because of the controversy about the role of therapists in causing this disorder, it is difficult to outline effective treatments for it. For example, therapies that encourage the client to confront the multiple personalities might actually make the disorder worse. As a consequence, dissociative identity disorder is difficult to treat (Ross & Ellason, 1999).

  • Laypeople are often prone to “diagnose” someone as having a personality disorder, such as antisocial personality disorder. Have you ever done this? Why do you suppose it happens?
  • Look over your description of schizophrenia from the Activate exercise for section 29.2. If you have any inaccuracies in it, to what extent did they resemble dissociative identity disorder?

29.4. COVID-19, Quarantines, and Mental Illness

  • How was your mental health (and that of your friends and family) during the COVID-19 pandemic?
  • Do you think that having a mental illness would make it more or less likely that someone might contract COVID-19? How about the reverse? Do you think contracting COVID-19 would make it more or less likely that someone might develop a mental illness?

As a way of wrapping up a general description of mental illness before concluding with some parting thoughts on therapy, let us turn to the most significant set of events in recent memory, the COVID-19 pandemic of 2020. You might recall from Module 24 that scientists have been busily producing research related to the pandemic since it began in March 2020. We have already hinted at a bit of what they have discovered. In this section, we will highlight a few more important discoveries. Please keep in mind that although there has been an explosion of research activity, the topic is still new and, as such, we might expect significant modifications of these early conclusions.

There are two types of research findings we would like to focus on:

  • General effects of the pandemic on mental health
  • Direct effects of COVID-19 on mental health and of mental health on COVID-19

Then, we will conclude with a brief summary of some psychological factors that are related to behaviors that help and hurt during the active portion of the pandemic.

In March 2020, COVID-19 was declared a worldwide pandemic. As a consequence, billions of people across the globe engaged in quarantine-like behavior, while tens of millions contracted COVID-19 and nearly 1.5 million died from the disease (by late November 2020).

There is no doubt you were, and probably still are, affected by the pandemic. Perhaps you were one of the billions who participated in a lock-down, shelter-in-place order, quarantine, or a stay-at-home order. During these quarantine-like periods, people who were not infected with or exposed to coronavirus severely limited their contact with people outside of their immediate family and spent a great deal of time in their own home. If you did this, it was hard. (That last statement has been entered into the “understatement of the book” contest.) You may also have felt anxiety about yourself or your loved ones being infected and suffering serious complications or dying. Or perhaps you or your loved ones did contract COVID. What were the effects of all of this uncertainty, loneliness, illness, and death?

Research on the psychology of the pandemic was conducted so rapidly that several meta-analyses were available during 2020. For example, Salari et al. (2020) reported on 17 separate studies with up to 63,000 participants (depending on the analysis) in the general population in Asia and Europe. These researchers noted that all of the studies reported that individuals suffered many symptoms of mental illness, including:

  • emotional distress
  • depressed mood
  • mood swings
  • irritability

Sound familiar? We thought so, because the key findings were that these symptoms, and other diagnosed mental illnesses, were widespread. Some of the specific prevalence rates were:

  • Stress: 20.6%
  • Anxiety: 31.9%
  • Depression: 33.7%

And of course, people in the United States did not escape. One study compared 5,065 participants before the pandemic to 1,441 participants in the first month of the pandemic and found that the rate of depression had tripled (Ettman et al., 2020). Another meta-analysis (this one international) found a seven-fold increase in depression in just the first two months of the pandemic (Bueno-Notivol et al., 2020). As you might guess, having a family member diagnosed with COVID-19 was associated with higher rates of anxiety and depression (this study looked only at college students; Wang et al., 2020). Young adults have been especially devastated. A US national survey conducted in late June 2020 by the CDC found that 62.9% of 18- to 24-year-olds were suffering from anxiety or depressive disorders, 24.7% had started or increased substance use to help cope, and 25.5% had seriously considered suicide (Czeisler et al., 2020). The research also identified that a COVID-19 diagnosis seems to make these problems worse.

Earlier, we noted that healthcare workers who treat COVID-19 patients were particularly prone to post-traumatic stress disorder. Unfortunately, the risks did not end there. Many also suffered from depression, anxiety, and sleep disturbances (Pappa et al., 2020; a meta-analysis based on 13 studies with 33,000 participants in the first month of the pandemic).

29.5. Therapy in Sum

  • Think about your images of psychotherapy from your prior exposure to psychology. How would you describe that psychotherapy? What are its characteristics?

Cognitive-behavioral therapy and behavior therapy, which guide clients away from maladaptive patterns of thinking and behaving, are effective treatments for many disorders, including depression, other mood disorders, and anxiety disorders. But they are not the only psychotherapy techniques that clinical psychologists use. Other popular options are psychodynamic therapy, interpersonal therapy, and therapies that combine cognitive-behavioral principles with mindfulness (such as ACT; see below). Another therapy type that is useful to know about is humanistic therapy. The specific therapy used depends in large part on the individual therapist’s training and preference. In an era of managed health care and limited dollars for mental health treatments, however, there is a growing effort to determine which therapies are really most effective in treating particular disorders. We will address that issue after describing psychodynamic therapy and interpersonal therapy, mindfulness-based therapies, and humanistic therapy.

Psychodynamic Therapy

Sigmund Freud’s method of psychoanalysis is no longer much in use in its pure form (Module 19). It has been replaced for the most part by psychodynamic therapy. Although most modern psychodynamic therapists have rejected Freud’s focus on childhood sexuality and his ideas about the role of sexual impulses in everyday life, they do still emphasize that adult maladjustment results from repressed conflicts. The goal of the psychodynamic therapist is to help the client uncover and resolve those conflicts.

The psychodynamic therapist often uses techniques designed to tap into the unconscious, such as free association or dream analysis. In free association, the client is encouraged to say whatever comes to mind. The therapist listens closely for clues that some conflict is lurking under the surface. For example, if the client begins to say something and then quickly changes the subject, the therapist might probe to try to discover if the client was avoiding an unconscious conflict. In addition, the therapist looks for evidence of transference , in which the client reveals unconscious conflicts from the past by transferring those feelings to the therapist. For example, if a client becomes very upset when the therapist slightly criticizes the way he handled a situation at work, it might be because old feelings toward a parent are being transferred to the therapist. Another common tool that some psychodynamic therapists use is projective tests, such as the Rorschach inkblot test. Proponents of these tests believe that aspects of people’s personalities are revealed by the way they interpret the irregularly shaped “ink blots” (or other images) on a series of cards. Most research, however, has found that tests like the Rorschach have poor reliability (Garb, Florio, & Grove, 1998) (see sec 8.1).

In general, little research has been done on the effectiveness of psychodynamic therapies (Wolitzky, 1995). Therapists have had success at treating depression with an offshoot of psychodynamic therapy called interpersonal therapy. This therapy, shorter than traditional psychodynamic therapies (only twelve to sixteen sessions total), focuses on conflicts or problems in current relationships rather than in past ones (Weissman, 1999; Weissman & Markowitz, 2002).

psychodynamic therapy: a type of psychotherapy in which the therapist helps the client uncover and resolve hidden conflicts from the past

free association: a common technique used in psychodynamic therapy, it involves having a client say whatever comes to mind

interpersonal therapy: a modern offshoot of psychodynamic therapy that focuses on conflicts or problems in a client’s current relationships

transference: in psychodynamic therapy, the process in which a client transfers feelings harbored about a person from the past to the therapist

projective test: a psychological test that is purported to reveal aspects of an individual’s personality by the way he or she interprets some ambiguous stimulus

Mindfulness-Based Therapies

Mindfulness is a way to train an individual to focus awareness in such a way that troubling thoughts are observed almost as an outsider and without judgment. It is traditionally used in a meditation setting, and the principles of mindfulness (non-judging, non-reactivity, and observation, for example) have made their way into several specific therapy techniques. For example, Acceptance and Commitment Therapy encourages clients to accept negative and troubling thoughts, and is used commonly to treat depression and anxiety disorders. Mindfulness-Based Cognitive Therapy explicitly combines ideas from cognitive therapy with mindfulness meditation and has strong support as a treatment for depression.

Acceptance and Commitment Therapy: a therapy that encourages clients to accept negative and troubling thoughts

Mindfulness-Based Cognitive Therapy: a therapy that explicitly combines ideas from cognitive therapy with mindfulness meditation

Humanistic Therapy

Humanistic therapies start from the assumption that people have a basic orientation toward growth—that is, to reach their full potential. When they are maladjusted, it is because some barrier is preventing them from going in their natural direction. The therapist’s main role is to help clients to find the ability to solve their problems within themselves.

The most well-known humanistic therapy, called client-centered therapy, was developed by psychologist Carl Rogers (1951). In client-centered therapy, the therapist uses three key tools in his or her role as a facilitator, someone who helps clients realize that they can, in fact, solve their own problems. The first is unconditional positive regard. The therapist listens and accepts what the client says without judging. Second, the therapist exhibits genuineness, sharing his or her thoughts openly and honestly. Third, the therapist employs empathetic understanding. Throughout the therapy, a client-centered therapist uses active listening, a technique of restating, paraphrasing, and clarifying what is heard without judging it. Although few modern therapies identify with the humanistic label, many do make use of techniques like active listening.

active listening: a technique of restating, paraphrasing, and clarifying what is heard without judging it

humanistic therapies: a type of psychological therapy that assumes that people have a basic orientation toward growth; the therapist’s main role is to help clients to find the ability to solve their problems within themselves

client-centered therapy: a humanistic therapy developed by Carl Rogers. It uses unconditional positive regard, genuineness, and empathetic understanding

Can an App Provide Therapy?

From 19th-century psychoanalysis on Sigmund Freud’s couch to the present, psychotherapy has been a face-to-face interaction between a human therapist and a human patient or group (although there is animal behavior therapy for dogs). Artificial intelligence, in which computer programs learn and perform complex tasks that used to be possible for humans only, has begun to make contributions to medical care, such as diagnosis. It seems at least possible that a computer program might be able to provide some of the benefits of psychotherapy as well.

You might be interested to know that one of the earliest famous demonstrations of artificial intelligence was a program called ELIZA. If you have an iPhone, ask Siri, “Who is ELIZA?” One of the answers Siri gives (there are a few different ones) is: “ELIZA is my good friend. She was a brilliant psychiatrist, but she’s retired now.” ELIZA was a computer program developed by Joseph Weizenbaum around 1965. It was designed to mimic a humanistic therapist. In reality, ELIZA simply had a series of canned responses to keywords (for example, if you said something with the word “no,” ELIZA usually responded with “You are being a bit negative.”). Even so, some observers believed that ELIZA was actually able to understand them, and reported some therapeutic benefit of talking to her (of course, “talking” meant typing). Even some practicing psychiatrists thought that ELIZA showed promise as an actual therapist (Weizenbaum, 1966; 1976).
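To make the keyword-and-canned-response idea concrete, here is a minimal, purely illustrative sketch in Python. It is not Weizenbaum’s actual program; aside from the “no” example above, the keywords and replies are invented for demonstration.

# A tiny ELIZA-style responder: scan the input for a keyword and return a canned reply.
# Illustrative only; the real ELIZA used more elaborate pattern-matching and
# sentence-transformation rules.
CANNED_RESPONSES = [
    ("no", "You are being a bit negative."),            # the example mentioned above
    ("mother", "Tell me more about your family."),      # invented for illustration
    ("always", "Can you think of a specific example?"),  # invented for illustration
]
DEFAULT_RESPONSE = "Please go on."

def eliza_reply(user_input):
    """Return the first canned response whose keyword appears in the input."""
    text = user_input.lower()
    for keyword, response in CANNED_RESPONSES:
        if keyword in text:
            return response
    return DEFAULT_RESPONSE

print(eliza_reply("No, I don't think that would help."))  # prints: You are being a bit negative.

Even this toy version hints at why some users felt "understood": a handful of keyword rules can produce replies that sound responsive without any understanding at all.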


Were these 1960s psychiatrists onto something? Can a computer program provide psychotherapy? Because computer-based, or smartphone app-based, therapy is quite new itself, you might realize that research into its effectiveness is just beginning to accumulate. For example, one group of researchers examined 555 different apps that are promoted for the treatment of PTSD (Sander et al., 2020). Only 69 followed principles from current therapy methods—most were based on CBT methods. Only one was evaluated in an experimental study. It was a small study, but it did show informal results (on outcomes such as adherence to treatment requirements and usage of different elements of the program) similar to those of participants who received traditional CBT. So, in this case, the results showed that the apps have the potential to be an effective part of treatment, but it is still too early to know how effective.

The results are similar for other apps and other disorders. Wang, Fagan, and Yu (2020) examined 28 apps for treating depression and found only five that had research support. Again, studies tended to be small, and the results limited (for example, several studies showed immediate effects but did not do any follow-up assessments). Again, at this point, we would have to conclude that app-based therapy shows promise, but the evidence to support its widespread adoption is not there yet.

Common Factors in Therapy

Although we have focused on the differences between therapies, the truth is they share a lot in common. And research has generally found only modest differences in effectiveness between different therapy styles (Cuijpers et al., 2008). If you are interested, Division 12 of the American Psychological Association is devoted to Clinical Psychology. It offers a terrific resource listing 86 different therapy-disorder combinations and the level of research support for each. For example, Cognitive-Behavioral Therapy for Depression has strong research support; Psychoanalytic Treatment for Panic Disorder is listed as modest support/controversial.

At least 40 of the 86 are CBT or another cognitive- or behavioral-based therapy. They all have strong, or at least modest, research support. The number 40 is also important, though, because it suggests that treatments based on cognitive and behavioral principles have been subjected to the most research. That is another point in favor of CBT, exposure therapies like systematic desensitization, and ACT. But again, the differences in effectiveness across therapies tend to be modest. One reason is probably that therapies, regardless of their specific orientation, often have important factors in common:

  • Any successful therapy provides a client with a new way of looking at a previously unsolvable problem. One of the difficulties of solving real-world problems is that we get stuck representing or defining them incorrectly, a pattern of thinking called fixation (Module 7).
  • Before beginning therapy, people commonly avoid their negative thoughts and emotions because of the distress they cause. As therapy clients, they will be forced to face those thoughts and emotions, and over time they can become less threatening. In essence, any therapy provides a desensitizing effect, working much like systematic desensitization does for anxiety (Garfield, 1992).
  • Perhaps the most important shared factor across successful therapies is the trusting and caring relationship that develops between the therapist and client (Blatt et al., 1996; Teyber & McClure, 2000). Just as you may benefit most from advice when it is sincerely given by an individual whom you like, trust, and respect, therapy administered the same way tends to be effective.

It is likely that these shared factors will lead to a degree of success for very many competently administered therapies (Messer & Wampold, 2003; Wampold, 2001). However, we would caution you against relying on therapies that have not been demonstrated through research to be effective.

  • To what extent did your image of psychotherapy resemble psychodynamic or client-centered therapy? Why do you think these images are more common than those of cognitive-behavioral therapies?
  • Which of the “shared factors” of therapy can you recognize in problem-solving or advising situations in your own life?

Module 30: Clinical Psychology: The House that Psychology Built

By reading and studying Module 30, you should be able to remember and describe:

  • The relationships among the various subfields and topics in psychology
  • The scientist-practitioner gap
  • Evidence-based practice
  • The dangers of letting untested therapies proliferate
  • Relevant research results
  • Clinical judgment versus statistical prediction
  • Alternative therapies
  • Self-help books

Each of the six units in this book ends with a special module like this one. Each of the ending modules uses one of the subfields of psychology to explain and illustrate some principles about the way science works. Specifically:

  • Unit 1 (Module 4) uses discussions, arguments, and debates among psychologists to illustrate the essential role that tension and conflict play in a scientific discipline.
  • Unit 2 (Module 9) uses the story of how cognitive psychology supplanted behaviorism to illustrate the way scientific revolutions often occur in a discipline.
  • Unit 3 (Module 14) uses the subdiscipline of biopsychology to show that scientific progress is not a smooth, unceasing process, that technological and methodological advances may lead to very rapid progress or to dead-ends and wrong turns.
  • Unit 4 (Module 18) uses the development of developmental psychology as an example of the process through which individual researchers carve out a niche, “dividing and conquering” small portions of larger, agenda-setting theories.
  • Unit 5 (Module 24) uses social psychology and personality psychology to illustrate how even a “basic research” sub-discipline can take on real-life problems.

In a way, this Unit 6 concluding module is a little bit backwards, as one of our goals here is to explain something about the organization of psychology itself, rather than to use the organization of psychology to explain some fact about the way science works. Specifically, we want you to understand the unique role that clinical psychology plays in the discipline.

Clinical Psychology and the House That Psychology Built

The different subfields of psychology—cognitive psychology, biopsychology, developmental psychology, social and personality psychology, clinical psychology, and so on—are not randomly situated within the field. Rather, they are related to one another, and these relationships reveal the way that the overall discipline is organized. You can think of this organization as a little bit like building a house. We can build up a discipline of psychology brick by brick, floor by floor, subfield by subfield.

Suppose we are building a house of psychology. Let’s think of the research methods on which the whole discipline of scientific psychology rests (Modules 1 and 2) as the foundation and biopsychology (Modules 10 and 11) as the bricks with which we will build the rest of the house. All of the other subfields are built upon the foundation of research and out of the bricks of biopsychology. The basic scientific information about the way neurons generate and transmit signals, the electrical activity throughout the brain, the rest of the nervous system, and the endocrine system underlies everything else in the discipline. For example, every single section of this book could include a short description of action potentials and neural communication, focusing on the specific brain areas, neural networks, and neurotransmitters involved (as well as hormonal and evolutionary considerations). These details constitute a large part of what you might study if you were to take a full course in biopsychology.

Now let’s think about the topics or subfields that would make up the first floor of the house of psychology. We would use the processes that function to get the outside world into the head, namely sensation and perception (Modules 12 and 13). These basic processes, similar to the biological underpinnings themselves, underlie nearly all of the rest of psychology.

The second floor, built on top of the first floor, incorporates most of the topics from the basic subfields, namely cognitive psychology, developmental psychology, social psychology, and personality psychology. To link the first floor and the second floor, we need some stairs, some processes that allow us to get from an inside-the-head representation of the outside world to higher-level processes that use that representation. The stairs are memory, learning, categorization, emotion, and motivation (Modules 5, 6, 7, and 20). These stairs are the basic units of thought and feeling. They provide the bridge between the straightforward reflection of the outside world in our heads and the complex ways that we manipulate, change, and use that reflection to plan, reason, solve problems, self-regulate, interact, and function in our world.

Our second floor, then, composed of the higher-order cognitive and social processes, builds on and uses the outputs of the “stairs.” Our second floor is filled with processes like problem solving, reasoning, critical thinking, attribution, and influence (nearly all of the topics not yet listed from Units 2 through 5). Think about problem solving, for example. A parent might be trying to solve the problem that his son is not enthusiastic about school. The son often complains about going and often exaggerates minor aches so that they seem like illnesses. The ideas contained in the problem (son, school, illness, enthusiasm, and so on) are concepts, the result of categorization and memory. The particular strategies that the parent might generate to solve the problem are manipulations or combinations of these and other concepts he has in memory. For example, he might try to take advantage of his son’s love of learning and try to persuade him that school can be fun. Because education is important to most parents, this problem is likely to lead to strong negative emotions, which are likely to influence the parent’s motivation to solve it.

That brings us to the third floor and the roof of the house, the helping side of psychology. Just as the second floor built on and used the outputs of the first floor, stairs, and foundation, the top of our house builds on and uses all of the underlying knowledge about psychology. For example, think about one of the most important topics in clinical psychology, the understanding and treatment of major depressive disorder. The discovery from biopsychology of the role of low levels of serotonin and faulty serotonin receptor sites in depression has led to the use of selective serotonin reuptake inhibitor antidepressants. The discovery that specific brain areas, such as the hippocampus and cingulate cortex, are especially important in depression sets the table for the development of drugs that are even more selective, ones that may someday be able to have pinpoint effects, increasing neural activity only in the faulty areas. Discoveries from cognition and learning (such as learned helplessness and negative thinking styles) have contributed immensely to our understanding of the psychological factors that contribute to depression, as well as to effective psychotherapeutic treatments for the disorders.

As you can probably tell, psychology needs all of its subfields and research methods. Just as you cannot remove any part of a real house without rendering it unrecognizable and unusable, you cannot get rid of any piece of our discipline and still have a recognizable and useful psychology.

The Scientist-Practitioner Gap

There is, unfortunately, a problem in the house of psychology that we have not yet discussed. To borrow an idea from Module 4, there is tension in the discipline of psychology. Remember, some tension is useful; if there is too little, a discipline grows stale and uncreative. If there is too much, though, a discipline can suffer from avoidance of difficult issues or even, if things get bad enough, explosive conflict. We are at the point in psychology where there is definitely some avoidance going on. Observers have referred to the situation as the scientist-practitioner gap. Although there has always been tension between researchers and practitioners in psychology (similar to other fields where basic researchers and applied practitioners must co-exist), some worry that the current gap is substantial and getting larger (Patihis et al., 2015; Tavris, 2003).

On one side of the gap are some therapists who contend that academic, scientific psychology is of little use in their efforts to help people solve their psychological problems in the real world. These therapists reject the idea that the effectiveness of psychotherapy should be demonstrated through research. They contend that the process of therapy is too variable and fluid to expect that its effects can be measured (Carey, 2004). Some therapists feel strongly that psychotherapy is more of an art than a science.

On the other side of the gap are many academic and scientific researchers in psychology as well as a significant number of therapists and clinical psychologists. They insist that scientific psychology is intimately related to clinical psychology and that research is the only reasonable way to judge whether psychotherapy is effective. Note that, because some clinical psychologists also favor what has become known as evidence-based practice, which is the use of therapies that have been justified by research, this is not exactly a “scientist-practitioner gap.”

We have to reveal a very strong bias in this controversy. We are firmly on the side of the academic, scientific, and clinical psychologists who favor the use of research as a foundation for therapy. We simply cannot understand why anyone would want to ignore everything we have learned from the basic science of behavior and mental processes in favor of a non-scientific version of psychological therapy. It seems to us that ignoring or rejecting the role of research and scientific psychology would be like disbanding the Food and Drug Administration, the government agency that is charged with ensuring that drugs are safe and effective treatments for disorders. To return to the house of psychology metaphor, it is as if some clinical psychologists believe that the roof and third floor should be floating in mid-air, completely separate from, unsupported by, and unrelated to the rest of the house.

scientist-practitioner gap: tension between researchers and practitioners in psychology

evidence-based practice: the use of therapies that have been justified by research

The Rallying Cry of the Scientific Side: How Do You Know?

We think of some psychotherapy techniques, the ones not based on research, as a lot like advice. Advice is very unpredictable. Sometimes it is very good; for example, if you are a psychology undergraduate interested in going to graduate school, you might be advised to get involved in research (this is excellent advice). Sometimes it is not so good; many guidance counselors (and unfortunately, some well-known politicians) advise high school students that a psychology degree is a waste of time and money (if we may be so bold, we would like to suggest that this is dreadful advice). Sometimes, the exact same advice that would be good for one person would be very poor for someone else. That is what many untested therapies can be like; think of them as hit or miss. Without independent evidence of the effectiveness of a therapy technique, “how do you know” whether it will be a hit or a miss? Even if the outcome is favorable, “how do you know” that it wouldn’t have turned out that way without the advice or that some other advice wouldn’t have made the outcome even more favorable? For example, maybe you can get into graduate school without getting involved in research, and perhaps there is some other activity that would have given you an even better chance.

The alternative to advice is knowledge based on research. Suppose you are depressed and someone tells you to eat five pieces of dark chocolate per day because “chocolate releases endorphins.” How do you know if that is good or poor advice? Well, basically, you need to take a couple of hundred depressed people and randomly assign half of them to eat chocolate and half to eat a placebo (fake chocolate, sounds delicious!). After six months, as long as both researchers and participants are not aware of whether they are in the experimental group (chocolate) or the control group (placebo), we could measure whether the chocolate group is less depressed than the placebo group. If the chocolate group is less depressed, we could have some confidence that eating chocolate is an effective treatment for depression.

No doubt, there are some excellent non-scientific therapists and some currently untested therapies that will turn out to be effective (Lilienfeld, Lynn, & Lohr, 2003). Again, however, without research, how do you know? Unfortunately, it is difficult to rely on the judgments of the effectiveness of therapy from the therapists themselves. They are as prone to reasoning errors, such as the availability heuristic and the confirmation bias, as any individual (Module 1). For example, because of the confirmation bias, therapists might be very likely to recall only the clients they treated successfully and forget about the clients who were not helped (the ones who came for two sessions and never returned). The availability heuristic might lead them to overestimate their rate of success. Paul Meehl (1993), a psychoanalyst who was perhaps the first to demonstrate the fallibility of clinical judgment, noted that earning a PhD does not automatically cure one of the misperceptions, memory distortions, and judgment biases that are common in all people. Similarly, you cannot use individual cases or testimonials. You cannot rely on clients’ memories and judgments, as they are just as prone to distortions and errors as anyone else. You cannot use “reasonableness.” Ideas that seem equally plausible can be on opposite ends of the “correctness spectrum.” What is it that makes the idea of judging one’s intelligence by the shape of the skull ridiculous and the idea of judging by one’s body-adjusted brain size a reasonable possibility? Research. We have research for psychotherapies so that we need not rely on the fallible memories or judgments of therapists or clients, on testimonials, or on plausibility arguments.

You have to realize that there is a real risk in letting untested therapies proliferate unchecked. If a scientific researcher has no regard at all for how her research could be applied, it may put the discipline of psychology at risk; the discipline may be in danger of becoming obscure and irrelevant (more likely, though, it is the researcher that will become obscure and irrelevant). On the other hand, the applied practitioner who ignores research puts people at risk. When psychotherapists choose ineffective therapies, they forfeit the opportunity to use treatments that would be effective. And we have unfortunately discovered that some therapeutic techniques are worse than ineffective; they are actually damaging. For example, therapies that rely on hypnosis to help recover repressed memories may actually contribute to the development of false memories (Lilienfeld et al., 2003; Spanos, 1994). And there are still a significant number of practicing clinical psychologists who believe in the concept of repressed memories despite decades of research debunking the phenomenon (Patihis et al., 2015).

What Research Has Shown Us

Let us turn now to some of the results of research that speak to the effectiveness of specific therapies and diagnostic techniques. This small sample will work well to demonstrate how wrong we can be if we embrace clinical psychology that rejects research.

Clinical Judgment

For many years, it has been known that, as we suggested above, the intuitive judgments of clinicians can be wrong. Clinical psychologists often are called upon to predict people’s behavior (is an individual at risk for suicide, is he a danger to society, and so on) or to make diagnoses or predict outcomes (for example, will an individual get better). When the judgments of clinicians are pitted against statistical predictions, the clinicians rarely come out on top. Even when very experienced clinicians are making the judgments, they often do not do well. And among the worst kinds of judgments are those that involve clinical interviews (Bonta et al., 1998; Meehl, 1954; Grove et al., 2000; Swets et al., 2000). Just talking to a client is not that good a way to figure out what is wrong or what is going to help the person.

As you might guess, different psychologists can disagree about the causes of a particular disorder. For example, a psychodynamic psychologist may believe that depression is caused by anger toward another person unconsciously directed toward the self, while a cognitively trained psychologist might believe that depression results from a pattern of self-defeating thinking. Whatever your preferred explanation is, it should not influence your decision about whether someone has the disorder, however. In other words, a therapist’s diagnosis should not be affected by his or her theoretical orientation. But at least some of the time, it does. For example, one study demonstrated that clinicians were more likely to decide that a hypothetical patient suffered from a particular disorder if the symptom list included symptoms that were important for their own preferred theory rather than other, equally diagnostic symptoms (Kim & Ahn, 2002).

Alternative Therapies

Perhaps you were unimpressed with our “chocolate cures depression” example. Perhaps it seems a bit far-fetched that someone would propose something so preposterous. Well, a Google search for “alternative psychotherapy” returns many directories containing unscientific therapeutic techniques, including:

  • Music and gong therapy
  • Sand tray therapy (playing in a miniature sandbox)
  • Primal therapy (re-experiencing and expressing childhood problems)
  • Acupuncture
  • Aromatherapy
  • Bowen technique (apparently, the therapist, ahem, touches the client, through clothes of course)

And although you have to search a bit more, you can find information on therapy techniques based on:

  • Enneagram (an ancient method of determining personality)
  • Transformative dreamwork (a technique that reveals hidden meaning in our dreams, some of which are “sent” to us)
  • Color therapy (the use of colors or colored light to correct imbalances)
  • Voice therapy (retraining of the vocal cords to produce empowerment, liberation, and pleasure)
  • Toning and chanting (vocalization of pure sounds, sometimes repetitively, to enter a higher state of consciousness and heal oneself physically, spiritually, and emotionally)
  • Past-life therapy (a technique in which the therapist guides you to visit your past lives to heal old wounds and disturbing memories)
  • Light therapy (the use of bright lights for 30 minutes per day to relieve symptoms of winter depression)

Oh, by the way, that last one, light therapy, is a little different from the others. We know that it works. Guess how we know. Right, research. Several experiments have demonstrated that people suffering from winter depression do indeed improve when exposed to light, compared to those given placebo treatments (Eastman et al., 1998; Lewy, 1998; Terman et al., 1998, 2001). A recent meta-analysis even found that it is effective for both seasonal depression and non-seasonal depression (Geoffroy et al., 2019).

The rest of the list, and dozens if not hundreds of additional alternative psychotherapy techniques, are available to cure nearly anything. At best, they are untested; at worst, they have been demonstrated worthless by research. Occasionally, they are downright dangerous. In 2000, a 10-year-old child in Colorado was suffocated by therapists during a rebirthing treatment, part of a larger approach known as attachment therapy. Jean Mercer (2002) reported that there were no published studies testing attachment therapy in a randomized, placebo-controlled design. Poor design and even statistical errors in the few studies that have been conducted have made for a very weak body of evidence in favor of attachment therapy. Mercer concluded from the evidence that the technique is not effective and is dangerous. The rebirthing technique is now illegal in Colorado.

Oh, and about the depression-chocolate idea? It is suggested in the book Depression for Dummies (Smith & Elliot, 2003). There are exactly zero scientific journal articles in PsycInfo, the exhaustive listing of research in psychology, examining this link experimentally, although there was one study, a large survey, that showed a correlation between consumption of dark chocolate and reduced symptoms of depression (Jackson et al., 2019). We hope we do not have to remind you about correlation and causation. A second study, this time an experiment, found that cocoa polyphenols (almost chocolate) improved mood more than a placebo did in healthy individuals. But that is a far cry from treating clinically significant depressive symptoms. Incidentally, this study described itself as the first to provide experimental evidence of the ability of cocoa polyphenols to help regulate mood (Pase et al., 2013).

Self-Help Books

Many scientific psychologists lament the state of affairs in our local and online bookstores. One local store near us devotes more than three times the shelf space to self-help as to psychology. There are certainly excellent self-help books based on concepts and techniques that have been supported by research. Unfortunately, however, the large majority do not appear to be based on psychological theory and research (Stanovich, 2004).

Making matters worse, very few research studies have examined the effectiveness of self-help books, even the ones that are based in psychological research and theory. For example, Gerald Rosen and his colleagues searched an entire decade and identified only fifteen such studies (Rosen et al., 2003). That is a pretty frightening state of affairs when you consider that typing “self-help” into the search box at internet bookseller Amazon.com returns more than 100,000 results.

Final Thoughts

Scott Lilienfeld of Emory University was one of the chief proponents of evidence-based practice in psychology. He was the editor-in-chief of the journal Clinical Psychological Science and co-editor of the book Science and Pseudoscience in Clinical Psychology. Unfortunately, his life and career were cut short by pancreatic cancer at age 59 in 2020. He left behind an extraordinary body of work that helps us cut through the pseudoscience and identify treatments that have been demonstrated to be safe and effective. Always remember, the goal of the evidence-based movement is not to reject all clinical psychology as quackery. It is to help us sort through the hundreds of possible therapeutic techniques, allowing us to direct our attention to the ones that are most likely to be effective because they have a body of research supporting them.

Our goal, of course, is a more effective psychology, one that puts the myriad research discoveries about human behavior and mental processes to work helping people. We also believe that this close relationship between clinical practice and research will improve psychology’s reputation. Keith Stanovich (2019) has referred to psychology as the Rodney Dangerfield of the sciences. (Rodney Dangerfield is the late comic who was famous for his signature line: “I don’t get no respect.”) One of the key reasons that Stanovich cites for this lack of respect is that the non-scientific, even fringe, wing of psychology is often its most visible face.

Several years ago (for some of us, more years than for others!), the three of us decided to devote our professional lives to psychology. Looking back, we would each have to say that it was one of our top five all-time decisions. The house of psychology is something that we are proud to be a part of. The whole discipline, complete with all of its basic and applied subfields, is the psychology that we know, love, and respect. That is the psychology that we want you to know, remember, and use.

Abramson, P. R., & Ostrom, C. W. (1994). Question wording and partisanship: Change and continuity in party loyalties during the 1992 election campaign. Public Opinion Quarterly, 58, 21-48.

Adamek, R. J. (1994). Review: Public opinion and Roe v. Wade: Measurement difficulties. Public Opinion Quarterly, 58 (3), 409-418.

Ader, R., Felten, D. L., & Cohen, N. (2001). Psychoneuroimmunology, Vols 1 & 2 (3rd ed.) . San Diego, CA US: Academic Press.

Adler, M. G., & Fagley, N. S. (2005). Appreciation: Individual Differences in Finding Value and Meaning as a Unique Predictor of Subjective Well-Being.  Journal of Personality, 73 (1), 79–114.  https://doi.org/10.1111/j.1467-6494.2004.00305.x

Adler, E., Hoon, M. A., Mueller, K. L., Chandrashekar, J., Ryba, N. J. P., & Zuker, C. S. (2000). A novel family of mammalian taste receptors. Cell, 100 , 693-702.

Adolph, K. E., Vereijken, B., & Shrout, P. E. (2003). What changes in infant walking and why. Child Development, 74 (2), 475-497.

Adolphs, R., & Tranel, D. (2003). Amygdala damage impairs emotion recognition from scenes only when they contain facial expressions. Neuropsychologia, 41 (10), 1281-1289.

Adorno, T. W., Frenkel-Brunswik, E., Levinson, D. J., & Sanford, R. N. (1950). The authoritarian personality . Oxford England: Harpers.

Ainsworth, M. S., Blehar, M. C., Waters, E., & Wall, S. (1978). Patterns of attachment: A psychological study of the strange situation . Oxford England: Lawrence Erlbaum.

Ajzen, I., & Fishbein, M. (1980). Understanding Attitudes and Predicting Social Behavior . Englewood Cliffs, NJ: Prentice-Hall.

Alderfer, C. P. (1969a). An empirical test of a new theory of human needs. Organizational Behavior & Human Performance, 4 (2), 142-175.

Alderfer, C. P. (1969b). Effects of task factors on job attitudes and behavior: A symposium: II. Job enlargement and the organizational context. Personnel Psychology, 22 (4), 418-426.

Allen, J. A., Hays, R. T., & Buffardi, L. C. (1986). Maintenance training simulator fidelity and individual differences in transfer of training. Human Factors, 28 (5), 497-509.

Allison, T., Puce, A., Spencer, D. D., & McCarthy, G. (1999). Electrophysiological studies of human face perception. I: Potentials generated in occipitotemporal cortex by face and non-face stimuli. Cerebral Cortex, 9 (5), 415-430.

Allport, F. H. (1920). The influence of the group upon association and thought. Journal of Experimental Psychology, 3 (3), 159-182.

Allport, G. W. (1961). Pattern and growth in personality . Oxford England: Holt, Reinhart & Winston.

Allred, C. (2019). Divorce rate in the U.S.: Geographic variation, 2018.  Family Profiles , FP-19-23. Bowling Green, OH: National Center for Family & Marriage Research. https://doi.org/10.25035/ncfmr/fp-18-23

Altemeyer, B., & Hunsberger, B. E. (1992). Authoritarianism, religious fundamentalism, quest, and prejudice. International Journal for the Psychology of Religion, 2 (2), 113-133.

Amabile, T. M., Hennessey, B. A., & Grossman, B. S. (1986). Social influences on creativity: The effects of contracted-for reward. Journal of Personality and Social Psychology, 50 (1), 14-23.

Anderson, C. A., Shibuya, A., Ihori, N., Swing, E. L., Bushman, B.J., Sakamoto, A., Rothstein, H. R., & Saleem, M. (2010). Violent video-game effects on aggression, empathy, and prosocial behavior in Eastern and Western countries. Psychological Bulletin , 136 (pp. 151-173).

Anderson, C. A., Deuser, W. E., & DeNeve, K. M. (1995). Hot temperatures, hostile affect, hostile cognition, and arousal: Tests of a general model of affective aggression. Personality and Social Psychology Bulletin, 21 (5), 434-448.

Anderson, L., Krathwohl, D., Airasian, P., Cruikshank, K., Mayer, R., Pintrich, P., et al. (2001). A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives . New York: Addison Wesley Longman.

Andreasen, N. C. (2001). Brave new brain: Conquering mental illness in the era of the genome . New York, NY US: Oxford University Press.

Anglin, J. M. (1993). Vocabulary development: A morphological analysis. Monographs of the Society for Research in Child Development, 58 (10), v-165.

Ansari, M. A., & Kapoor, A. (1987). Organizational context and upward influence tactics. Organizational Behavior and Human Decision Processes, 40 (1), 39-49.

Archer, J. (2000). Sex differences in physical aggression to partners: A reply to Frieze (2000), O’Leary (2000), and White, Smith, Koss, and Figueredo (2000). Psychological Bulletin, 126 (5), 697-702.

Argyle, M. (1970). The communication of inferior and superior attitudes by verbal and non-verbal signals. British Journal of Social & Clinical Psychology, 9 (3), 222-231.

Argyle, M. (1999). Causes and correlates of happiness. In D. Kahneman, E. Diener, & N. Schwarz (Eds.), Well-being: The foundations of hedonic psychology (pp. 353-373). New York, NY: Russell Sage Foundation.

Armony, J. L., LeDoux, J. E., McGaugh, J. L., Roozendaal, B., Cahill, L., Ono, T., et al. (2000). Emotion The new cognitive neurosciences (2nd ed.). (pp. 1067-1159). Cambridge, MA US: The MIT Press.

Armstrong, T., & Olatunji, B. O. (2012). Eye tracking of attention in the affective disorders: a meta-analytic review and synthesis.  Clinical Psychology Review ,  32 (8), 704–723. https://doi.org/10.1016/j.cpr.2012.09.004

Arnett, J. (1992). Reckless behavior in adolescence: A developmental perspective. Developmental Review, 12 , 339-373.

Arnett, J. J. (1999). Adolescent storm and stress, reconsidered. American Psychologist, 54 (5), 317-326.

Arnst, C. (2001). Relax, Mom. Day care won’t ruin the kids. Business Week Online . Retrieved from http://www.businessweek.com/magazine/content/01_19/b3731063.htm

Arnst, C. (2003, August 21, 2003; retrieved December 26, 2003). I Can’t Remember. Business Week Online .

Aron, A., Dutton, D. G., Aron, E. N., & Iverson, A. (1989). Experiences of falling in love. Journal of Social and Personal Relationships, 6 (3), 243-257.

Aron, A., Norman, C. C., Aron, E. N., McKenna, C., & Heyman, R. E. (2000). Couples’ shared participation in novel and arousing activities and experienced relationship quality. Journal of Personality and Social Psychology, 78 (2), 273-284.

Asch, S. E. (1951). Effects of group pressure upon the modification and distortion of judgment. In H. Guetzkow (Ed.), Groups, Leadership, and Men . Pittsburgh: Carnegie.

Asch, S. E. (1955). Opinions and social pressure. Scientific American, 193 (5), 31-35.

Ashton, N. L. (1982). Validation of Rape Myth Acceptance Scale. Psychological Reports, 50 (1), 252-252.

Atkinson, R. C., & Shiffrin, R. M. (1968). Human memory: A proposed system and its control processes. In K. W. Spence & J. T. Spence (Eds.), The psychology of learning and motivation (Vol. 2, pp. 89-195). New York, NY: Academic Press.

Augustine, J. (2017). Human Neuroanatomy. John Wiley and Sons.

Avolio, B. J., Waldman, D. A., & McDaniel, M. A. (1990). Age and work performance in nonmanagerial jobs: The effects of experience and occupational type. Academy of Management Journal, 33 (2), 407-422.

Aydin, O., & Sahin, D. (2003). The Role of Cognitive Factors in Modifying the Intensity of Emotional Responses Produced by Facial Expressions. Psychology and Education: An Interdisciplinary Journal, 40 (3), 50-56.

Baddeley, A. (1986). Working memory . New York, NY US: Clarendon Press/Oxford University Press.

Baddeley, A. (1996). Exploring the central executive. The Quarterly Journal of Experimental Psychology A: Human Experimental Psychology (1), 5-28.

Baddeley, A. D., & Hitch, G. J. (1974). Working memory. In G. H. Bower (Ed.), The psychology of learning and motivation (Vol. 8). New York: Academic Press.

Bahrick, L. E., & Pickens, J. N. (1995). Infant memory for object motion across a period of three months: Implications for a four-phase attention function. Journal of Experimental Child Psychology, 59 (3), 343-371.

Bailey, D. S., Leonard, K. E., Cranston, J. W., & Taylor, S. P. (1983). Effects of alcohol and self-awareness on human physical aggression. Personality and Social Psychology Bulletin, 9 (2), 289-295.

Bailey, J. M., Vasey, P. L., Diamond, L. M., Breedlove, S. M., Vilain, E., & Epprecht, M. (2016). Sexual orientation, controversy, and science.  Psychological Science in the Public Interest , 17, 45-101

Bailey, J. M., Bobrow, D., Wolfe, M., & Mikach, S. (1995). Sexual orientation of adult sons of gay fathers. Developmental Psychology, 31 (1), 124-129.

Bailey, J. M., Dunne, M. P., & Martin, N. G. (2000). Genetic and environmental influences on sexual orientation and its correlates in an Australian twin sample. Journal of Personality and Social Psychology, 78 (3), 524-536.

Bailey, J. M., & Pillard, R. C. (1991). A genetic study of male sexual orientation. Archives of General Psychiatry, 48 (12), 1089-1096.

Bailey, J. M., & Pillard, R. C. (1995). Genetics of human sexual orientation. Annual Review of Sex Research , 126-150.

Baillargeon, R. (1987). Object permanence in 3½- and 4½-month-old infants. Developmental Psychology, 23(5), 655-664.

Baillargeon, R., Spelke, E. S., & Wasserman, S. (1985). Object permanence in five-month-old infants. Cognition, 20(3), 191-208.

Bakeman, R., & Brownlee, J. R. (1980). The strategic use of parallel play: A sequential analysis. Child Development, 51 (3), 873-878.

Baldwin, A. Y., Vialle, W., Clarke, C., Freeman, J., Borland, J. H., Wright, L., et al. (2000). Part V: Counseling and nurturing giftedness and talent International handbook of giftedness and talent (2nd ed.). (pp. 565-670). New York, NY US: Elsevier Applied Science Publishers/Elsevier Science Publishers.

Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84 (2), 191-215.

Bandura, A. (1978). The self system in reciprocal determinism. American Psychologist, 33 (4), 344-358.

Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory . Englewood Cliffs, NJ US: Prentice-Hall, Inc.

Bandura, A. (2004). The role of selective moral disengagement in terrorism and counterterrorism. In F. M. Moghaddam & A. J. Marsella (Eds.), Understanding terrorism: Psychosocial roots, consequences, and interventions (pp. 121-150). Washington, DC: American Psychological Association.

Bandura, A. (1990). Mechanisms of moral disengagement. In W. Reich (Ed.), Origins of terrorism: Psychologies, ideologies, theologies, states of mind (pp. 161-191). New York, NY: Cambridge University Press; Washington, DC: Woodrow Wilson International Center for Scholars.

Bao, W. N. L., Whitbeck, D. H., Hoyt, D., & Conger, R. C. (1999). Perceived parental acceptance as a moderator of religious transmission among adolescent boys and girls. Journal of Marriage and Family, 61 , 362-374.

Barber, T. X., & Hahn, K. W., Jr. (1962). Physiological and subjective responses to pain producing stimulation under hypnotically-suggested and waking-imagined ‘analgesia.’ The Journal of Abnormal and Social Psychology, 65(6), 411-418.

Barling, J., Weber, T., & Kelloway, E. K. (1996). Effects of transformational leadership training on attitudinal and financial outcomes: A field experiment. Journal of Applied Psychology, 81 (6), 827-832.

Baron, R. A., Byrne, D., & Branscombe, N. R. (2006). Social psychology (11th edition) . Upper Saddle River, NJ: Pearson Education.

Barrera, M., Fleming, C. F., & Khan, F. S. (2004). The role of emotional social support in the psychological adjustment of siblings of children with cancer. Child: Care, Health and Development, 30 (2), 103-111.

Bartlett, F. (1932). Remembering . Cambridge, UK: Cambridge University Press.

Bass, B. M. (1985). Leadership: Good, better, best. Organizational Dynamics, 13 (3), 26-40.

Bass, B. M. (1990). From transactional to transformational leadership: Learning to share the vision. Organizational Dynamics, 18 (3), 19-31.

Basso, J. C., & Suzuki, W. A. (2017). The Effects of Acute Exercise on Mood, Cognition, Neurophysiology, and Neurochemical Pathways: A Review.  Brain plasticity (Amsterdam, Netherlands) ,  2 (2), 127–152. https://doi.org/10.3233/BPL-160040

Barkil-Oteo A. (2013). Collaborative care for depression in primary care: how psychiatry could “troubleshoot” current treatments and practices.  The Yale Journal of Biology and Medicine ,  86 (2), 139–146.

Bates, J. E., Pettit, G. S., Dodge, K. A., & Ridge, B. (1998). Interaction of temperamental resistance to control and restrictive parenting in the development of externalizing behavior. Developmental Psychology, 34 (5), 982-995.

Batson, C. D., Ahmad, N., Yin, J., Bedell, S. J., Johnson, J. W., Templin, C. M., et al. (2001). Two threats to the common good: Self-interested egoism and empathy-induced altruism The next phase of business ethics: Integrating psychology and ethics. (pp. 165-191). US: Elsevier Science/JAI Press.

Batson, C. D., Batson, J. G., Griffitt, C. A., Barrientos, S., Brandt, J. R., Sprengelmeyer, P., et al. (1989). Negative-state relief and the empathy–altruism hypothesis. Journal of Personality and Social Psychology, 56 (6), 922-933.

Batson, C. D., Eidelman, S. H., Higley, S. L., & Russel, S. A. (2001). ‘And who is my neighbor?’ II: Quest religion as a source of universal compassion. Journal for the Scientific Study of Religion, 40 (1), 39-50.

Batson, C. D., Eisenberg, N., Reykowski, J., & Staub, E. (1989). Personal values, moral principles, and a three-path model of prosocial motivation Social and moral values: Individual and societal perspectives. (pp. 213-228). Hillsdale, NJ England: Lawrence Erlbaum Associates, Inc.

Batson, C. D., & Gray, R. A. (1981). Religious orientation and helping behavior: Responding to one’s own or the victim’s needs? Journal of Personality and Social Psychology, 40 (3), 511-520.

Batson, C. D., Oleson, K. C., Weeks, J. L., Healy, S. P., Reeves, P. J., Jennings, P., et al. (1989). Religious prosocial motivation: Is it altruistic or egoistic? Journal of Personality and Social Psychology, 57 (5), 873-884.

Batson, C. D., Schoenrade, P., & Ventis, W. L. (1993). Religion and the individual: A social-psychological perspective .

Batson, C. D., Schoenrade, P. A., Pych, V., & Brown, L. B. (1985). Brotherly love or self-concern?: Behavioural consequences of religion Advances in the psychology of religion. (pp. 185-208). Elmsford, NY US: Pergamon Press.

Batson, C. D., & Thompson, E. R. (2001). Why don’t moral people act morally? Motivational considerations. Current Directions in Psychological Science, 10 (2), 54-57.

Baucom, D. H., & Aiken, P. A. (1981). Effect of depressed mood on eating among obese and nonobese dieting and nondieting persons. Journal of Personality and Social Psychology, 41 (3), 577-585.

Bauer, P. J., & Wewerka, S. S. (1995). One- to two-year-olds’ recall of events: The more expressed, the more impressed. Journal of Experimental Child Psychology, 59 (3), 475-496.

Bauer, P. J., Wiebe, S. A., Waters, J. M., & Bangston, S. K. (2001). Reexposure breeds recall: Effects of experience on 9-month-olds’ ordered recall. Journal of Experimental Child Psychology, 80 (2), 174-200.

Baumeister, R. F., Bratslavsky, E., Muraven, M., & Tice, D. M. (1998). Ego depletion: Is the active self a limited resource? Journal of Personality and Social Psychology , 74 , 1252– 1265. doi:10.1037/0022-3514.74.5.1252

Baumeister, R., & Monroe, A. (2014). Recent Research on Free Will.  Advances In Experimental Social Psychology , 1-52. doi: 10.1016/b978-0-12-800284-1.00001-1

Baumeister, R. F., Boden, J. M., Geen, R. G., & Donnerstein, E. (1998). Aggression and the self: High self-esteem, low self-control, and ego threat Human aggression: Theories, research, and implications for social policy. (pp. 111-137). San Diego, CA US: Academic Press.

Baumeister, R. F., Hutton, D. G., & Tice, D. M. (1989). Cognitive processes during deliberate self-presentation: How self-presenters alter and misinterpret the behavior of their interaction partners. Journal of Experimental Social Psychology, 25 (1), 59-78.

Baumeister, R. F., Smart, L., & Boden, J. M. (1996). Relation of threatened egotism to violence and aggression: The dark side of high self-esteem. Psychological Review, 103 (1), 5-33.

Baumeister, R. F., Tice, D. M., & Hutton, D. G. (1989). Self-presentational motivations and personality differences in self-esteem. Journal of Personality, 57 (3), 547-579.

Baumrind, D. (1991). The influence of parenting style on adolescent competence and substance use. The Journal of Early Adolescence, 11 (1), 56-95.

Baumrind, D., Cowan, P. A., & Hetherington, E. M. (1991). Effective parenting during the early adolescent transition Family transitions. (pp. 111-163). Hillsdale, NJ England: Lawrence Erlbaum Associates, Inc.

Baumrind, D., & Damon, W. (1989). Rearing competent children Child development today and tomorrow. (pp. 349-378). San Francisco, CA US: Jossey-Bass.

Baumrind, D., Larzelere, R. E., & Cowan, P. A. (2002). Ordinary physical punishment: Is it harmful? Comment on Gershoff (2002). Psychological Bulletin, 128 (4), 580-589.

Baylis, G. C., & Driver, J. (2001). Shape-encoding in IT cells generalizes over contrast and mirror reversal but not figure-ground reversal. Nature Neuroscience, 4 , 937-942.

Beach, S. R. H., Davey, A., & Fincham, F. D. (1999). The time has come to talk of many things: A commentary on Kurdek (1998) and the emerging field of marital processes in depression. Journal of Family Psychology, 13 (4), 663-668.

Beach, S. R. H., Fincham, F. D., & Katz, J. (1998). Marital therapy in the treatment of depression: Toward a third generation of therapy and research. Clinical Psychology Review, 18 (6), 635-661.

Bear, G. G., & Rys, G. S. (1994). Moral reasoning, classroom behavior, and sociometric status among elementary school children. Developmental Psychology, 30 (5), 633-638.

Before Head Start: Income and ethnicity, family characteristics, child care experiences, and child development (2001). Early Education and Development, 12 (4), 545-576.

Bell, R. Q. (1968). A reinterpretation of the direction of effects in studies of socialization. Psychological Review, 75 (2), 81-95.

Belsky, J., Hsieh, K.-H., & Crnic, K. (1996). Infant positive and negative emotionality: One dimension or two? Developmental Psychology, 32 (2), 289-298.

Bem, D. J. (1996). Exotic becomes erotic: A developmental theory of sexual orientation. Psychological Review, 103 (2), 320-335.

Bem, S. (1993). The Lenses of Gender . New Haven, CT: Yale University Press.

Bem, S. L. (1981). Gender schema theory: A cognitive account of sex typing. Psychological Review, 88 (4), 354-364.

Bem, S. L. (1993). The lenses of gender: Transforming the debate on sexual inequality . New Haven, CT US: Yale University Press.

Benjamin, A. (2020). Factors influencing learning. In R. Biswas-Diener & E. Diener (Eds),  Noba textbook series: Psychology . Champaign, IL: DEF publishers. Retrieved from  http://noba.to/rnxyg6wp

Benjamin, J., Ebstein, R. P., & Belmaker, R. H. (2003). Review of Molecular Genetics and the Human Personality. American Journal of Psychiatry, 160 (4), 802-802.

Bennett, J. (1978). Some remarks about concepts. Behavioral and Brain Sciences, 1 , 557-560.

Bensley, L., & Van Eenwyk, J. (2001). Video games and real-life aggression: Review of the literature. Journal of Adolescent Health, 29 (4), 244-257.

Berg, C. A., & Sternberg, R. J. (1985). A triarchic theory of intellectual development during adulthood. Developmental Review, 5 (4), 334-370.

Berkowitz, L. (1993). Aggression: Its causes, consequences, and control . New York, NY England: Mcgraw-Hill Book Company.

Berkowitz, L., & Heimer, K. (1989). On the construction of the anger experience: Aversive events and negative priming in the formation of feelings Advances in experimental social psychology, Vol. 22. (pp. 1-37). San Diego, CA US: Academic Press.

Berman, M., Gladue, B., & Taylor, S. (1993). The effects of hormones, Type A behavior pattern, and provocation on aggression in men. Motivation and Emotion, 17 (2), 125-138.

Berndt, T. J. (2002). Friendship quality and social development. Current Directions in Psychological Science, 11 (1), 7-10.

Berndt, T. J., Murphy, L. M., & Kail, R. V. (2002). Influences of friends and friendships: Myths, truths, and research recommendations Advances in child development and behavior, Vol. 30. (pp. 275-310). San Diego, CA US: Academic Press.

Berndt, T. J., & Perry, T. B. (1986). Children’s perceptions of friendships as supportive relationships. Developmental Psychology, 22 (5), 640-648.

Berndt, T. J., Perry, T. B., Montemayor, R., Adams, G. R., & Gullotta, T. P. (1990). Distinctive features and effects of early adolescent friendships From childhood to adolescence: A transitional period? (pp. 269-287). Thousand Oaks, CA US: Sage Publications, Inc.

Bernhardt, P. C., Dabbs, J. M., Jr., Fielden, J. A., & Lutter, C. D. (1998). Testosterone changes during vicarious experiences of winning and losing among fans at sporting events. Physiology & Behavior, 65 (1), 59-62.

Berscheid, E., Dion, K., Walster, E., & Walster, G. W. (1971). Physical attractiveness and dating choice: A test of the matching hypothesis. Journal of Experimental Social Psychology, 7 (2), 173-189.

Bertenthal, B. I., Proffitt, D. R., & Cutting, J. E. (1984). Infant sensitivity to figural coherence in biomechanical motions. Journal of Experimental Child Psychology, 37 (2), 213-230.

Betzer, P. D.-S. (1996). The relation between adolescent perceptions of attachment to mother, attachment to father and attachment to closest same-gender friend. ProQuest Information & Learning, US.

Biasco, F., Goodwin, E. A., & Vitale, K. L. (2001). College students’ attitudes toward racial discrimination. College Student Journal, 35 (4), 523-528.

Bibby, P. A., & Payne, S. J. (1993). Internalization and the use specificity of device knowledge. Human-Computer Interaction, 8 (1), 25-56.

Bibby, R. W. (1993). Unknown gods: The ongoing story of religion in Canada .

Bibby, R. W. (2001). Canada’s teens: Today, yesterday, and tomorrow .

Biderman, D., Shir, Y., & Mudrik, L. (2020). B or 13? Unconscious Top-Down Contextual Effects at the Categorical but Not the Lexical Level. Psychological Science, 31(6), 663–677. https://doi.org/10.1177/0956797620915887

Bigelow, A. E. (1977). The development of self recognition in young children. ProQuest Information & Learning, US.

Billig, M., & Tajfel, H. (1973). Social categorization and similarity in intergroup behaviour. European Journal of Social Psychology, 3 (1), 27-52.

Billy, J. O., & Udry, J. R. (1985). The influence of male and female best friends on adolescent sexual behavior. Adolescence, 20 (77), 21-32.

Binder, J. R. (2015). The Wernicke area. Neurology, 85(24), 2170-2175.

Birman, D., & Trickett, E. J. (2001). Cultural transitions in first-generation immigrants: Acculturation of Soviet Jewish refugee adolescents and parents. Journal of Cross-Cultural Psychology, 32 (4), 456-477.

Bizzi, M. I. (2000). Motor learning through the combination of primitives. Philosophical Transactions: Biological Sciences, 355 (1404), 1755-1769.

Bjork, E. L., & Bjork, R. A. (2011). Making things hard on yourself, but in a good way: Creating desirable difficulties to enhance learning .  In M. A. Gernsbacher, R. W. Pew, L. M. Hough, J. R. Pomerantz (Eds.) & FABBS Foundation,  Psychology and the real world: Essays illustrating fundamental contributions to society  (p. 56–64). Worth Publishers.

Bjorkqvist, K. (1994). Sex differences in physical, verbal, and indirect aggression: A review of recent research. Sex Roles, 30 (3), 177-188.

Bjorkqvist, K., Österman, K., & Lagerspetz, K. M. J. (1994). Sex differences in covert aggression among adults. Aggressive Behavior, 20(1), 27-33.

Blatt, S. J., Sanislow, C. A., III, Zuroff, D. C., & Pilkonis, P. A. (1996). Characteristics of effective therapists: Further analyses of data from the National Institute of Mental Health Treatment of Depression Collaborative Research Program. Journal of Consulting and Clinical Psychology, 64 (6), 1276-1284.

Bleich, A., Gelkopf, M., & Solomon, Z. (2003). Exposure to Terrorism, Stress-Related Mental Health Symptoms, and Coping Behaviors Among a Nationally Representative Sample in Israel. JAMA: Journal of the American Medical Association, 290 (5), 612-620.

Bloom, L., & Damon, W. (1998). Language acquisition in its developmental context Handbook of child psychology: Volume 2: Cognition, perception, and language. (pp. 309-370). Hoboken, NJ US: John Wiley & Sons Inc.

Bokhorst, C. L., Bakermans-Kranenburg, M. J., Fearon, R. M. P., van Ijzendoorn, M. H., Fonagy, P., & Schuengel, C. (2003). The importance of shared environment in mother-infant attachment security: A behavioral genetic study. Child Development, 74 (6), 1769-1782.

Bond, R., & Smith, P. B. (1996). Culture and conformity: A meta-analysis of studies using Asch’s (1952b, 1956) line judgment task. Psychological Bulletin, 119 (1), 111-137.

Bondolfi, G., Dufour, H., Patris, M., Billeter, U., Eap, C. B., Baumann, P., et al. (1998). Risperidone versus clozapine in treatment-resistant chronic schizophrenia: A randomized double-blind study. American Journal of Psychiatry, 155 (4), 499-504.

Bonnel, A.-M., & Hafter, E. R. (1998). Divided attention between simultaneous auditory and visual signals. Perception & Psychophysics, 60 (2), 179-190.

Bonta, J., Law, M., & Hanson, K. (1998). The prediction of criminal and violent recidivism among mentally disordered offenders: A meta-analysis. Psychological Bulletin, 123 (2), 123-142.

Bonvillian, J. D., Orlansky, M. D., & Novack, L. L. (1983). Developmental milestones: Sign language acquisition and motor development. Child Development, 54 (6), 1435-1445.

Bouchard, T. J., Arvey, R. D., Keller, L. M., & Segal, N. L. (1992). Genetic influences on job satisfaction: A reply to Cropanzano and James. Journal of Applied Psychology, 77 (1), 89-93.

Bower, G. H. (1972). Mental imagery and associative learning Cognition in learning and memory. Oxford England: John Wiley & Sons.

Bowlby, J. (1982). Attachment and loss: Retrospect and prospect. American Journal of Orthopsychiatry, 52 (4), 664-678.

Braine, M. D. S. (1978). On the relation between the natural logic of reasoning and standard logic. Psychological Review, 85 (1), 1-21.

Braine, M. D. S., Reiser, B. J., & Rumain, B. (1984). Some empirical justification for a theory of natural propositional logic. In G. H. Bower (Ed.), The psychology of learning and motivation (Vol. 18). Orlando, FL: Academic Press.

Brand, A. (1985). Hot Cognition: Emotions and Writing Behavior.  Journal of Advanced Composition,   6 , 5-15. Retrieved January 23, 2021, from http://www.jstor.org/stable/20865583

Bransford, J. D., Barclay, J. R., & Franks, J. J. (1972). Sentence memory: A constructive versus interpretive approach. Cognitive Psychology, 3 (2), 193-209.

Bransford, J. D., Brown, A. L., & Cocking, R. R. (1999). How people learn: Brain, mind, experience, and school . Washington, DC US: National Academy Press.

Bransford, J. D., & Johnson, M. K. (1972). Contextual prerequisites for understanding: Some investigations of comprehension and recall. Journal of Verbal Learning & Verbal Behavior, 11 (6), 717-726.

Breger, L. (2000). Freud: Darkness in the midst of vision . Hoboken, NJ US: John Wiley & Sons Inc.

Bregman, A. S. (1990). Auditory scene analysis: The perceptual organization of sound . Cambridge, MA US: The MIT Press.

Brehm, J. W. (1956). Postdecision changes in the desirability of alternatives. The Journal of Abnormal and Social Psychology, 52 (3), 384-389.

Brenan, M. (2020). U.S. national pride falls to record low. Gallup. https://news.gallup.com/poll/312644/national-pride-falls-record-low.aspx

Bretherton, I., McNew, S., & Beeghly-Smith, M. (1981). Early person knowledge as expressed in gestural and verbal communication: When do infants acquire a “theory of mind”? In M. Lamb & L. Sherrod (Eds.), Social cognition in infancy . Hillsdale, NJ: Erlbaum.

Breuer, J., Scharkow, M., & Quandt, T. (2015). Sore losers? A reexamination of the frustration–aggression hypothesis for colocated video game play.  Psychology of Popular Media Culture, 4 (2), 126–137.  https://doi.org/10.1037/ppm0000020

Brewer, M. B., Miller, N., Katz, P. A., & Taylor, D. A. (1988). Contact and cooperation: When do they work? Eliminating racism: Profiles in controversy. (pp. 315-326). New York, NY US: Plenum Press.

Brewer, W. F. (1977). Memory for the pragmatic implications of sentences. Memory & Cognition, 5 (6), 673-678.

Brewer, W. F., & Treyens, J. C. (1981). Role of schemata in memory for places. Cognitive Psychology, 13 (2), 207-230.

Brickman, P., Coates, D., & Janoff-Bulman, R. (1978). Lottery winners and accident victims: Is happiness relative? Journal of Personality and Social Psychology, 36 (8), 917-927.

Brittle, Z. (2015). Turn Towards Instead of Away. The Gottman Institute. Retrieved from https://www.gottman.com/blog/turn-toward-instead-of-away/

Broca, P. P. (1861). Loss of Speech, Chronic Softening and Partial Destruction of the Anterior Left Lobe of the Brain. Classics in the History of Psychology. Retrieved from http://psychclassics.yorku.ca/Broca/perte-e.htm

Brody, L. R., & Fischer, A. H. (2000). The socialization of gender differences in emotional expression: Display rules, infant termperament, and differentiation Gender and emotion: Social psychological perspectives. (pp. 24-47). New York, NY US: Cambridge University Press.

Brown, P., Roediger, H., & McDaniel, M. (2014). Make It Stick: The Science of Successful Learning. Cambridge, MA: Belknap Press of Harvard University Press.

Brown, B. B., Clasen, D. R., & Eicher, S. A. (1986). Perceptions of peer pressure, peer conformity dispositions, and self-reported behavior among adolescents. Developmental Psychology, 22 (4), 521-530.

Browne, N. M., & Keeley, S. M. (2009). Asking the Right Questions (9th ed.). Upper Saddle River, NJ: Prentice Hall.

Browne, N. M., & Keeley, S. M. (2018). Asking the Right Questions: A Guide to Critical Thinking (12th Edition ed.). Boston, MA: Pearson.

Brownell, C. A. (1990). Peer social skills in toddlers: Competencies and constraints illustrated by same-age and mixed-aged interaction. Child Development, 61 (3), 838-848.

Brownell, C. A., & Carriger, M. S. (1990). Changes in cooperation and self-other differentiation during the second year. Child Development, 61 (4), 1164-1174.

Brownell, K. D. (2000). Dieting. In A. E. Kazdin (Ed.), Encyclopedia of psychology (Vol. 3, pp. 35-38). Washington, DC: American Psychological Association; New York, NY: Oxford University Press.

Bruce, C., Desimone, R., & Gross, C. G. (1981). Visual properties of neurons in a polysensory area in superior temporal sulcus of the macaque. Journal of Neurophysiology, 46, 369-384.

Bruck, M., & Ceci, S. J. (1999). The suggestibility of children’s memory. Annual Review of Psychology, 50 , 419-439.

Bruno, F. (2001). Going Back to School: College Survival Strategies for Adult Students (3rd ed.). Lawrenceville, NJ: Thomson.

Buchanan, C. M., Eccles, J. S., Flanagan, C., & Midgley, C. (1990). Parents’ and teachers’ beliefs about adolescents: Effects of sex and experience. Journal of Youth and Adolescence, 19 (4), 363-394.

Buchanan, C. M., & Holmbeck, G. N. (1998). Measuring beliefs about adolescent personality and behavior. Journal of Youth and Adolescence, 27 (5), 607-627.

Bueno-Notivol, J., Gracia-García, P., Olaya, B., Lasheras, I., López-Antón, R., & Santabárbara, J. (2021). Prevalence of depression during the COVID-19 outbreak: A meta-analysis of community-based studies.  International Journal of Clinical and Health Psychology ,  21 (1), 100196. doi: 10.1016/j.ijchp.2020.07.007

Bugental, D. B., & Goodnow, J. J. (1998). Socialization processes. In N. Eisenberg (Ed.), Handbook of child psychology: Vol. 3. Social, emotional, and personality development (5th ed.). New York, NY: Wiley.

Bugental, J. F. T. (1964). Investigations into the self-concept: III. Instructions for the W-A-Y method. Psychological Reports, 15 (2), 643-650.

Bullock, M., & Lutkenhaus, P. (1990). Who am I? Self-understanding in toddlers. Merrill-Palmer Quarterly, 36 (2), 217-238.

Bureau of Justice Statistics. (2007). Incident-Based Statistics. Retrieved from http://www.ojp.usdoj.gov/bjs/ibrs.htm

Population Reference Bureau. (2004). Human Population: Fundamentals of Growth/World Health. Retrieved February 6, 2004, from http://www.prb.org/Content/NavigationMenu/PRB/Educators/Human_Population/Health2/World_Health1.htm

Bushman, B. J., & Baumeister, R. F. (1998). Threatened egotism, narcissism, self-esteem, and direct and displaced aggression: Does self-love or self-hate lead to violence? Journal of Personality and Social Psychology, 75 (1), 219-229.

Bushman, B. J., Baumeister, R. F., & Stack, A. D. (1999). Catharsis, aggression, and persuasive influence: Self-fulfilling or self-defeating prophecies? Journal of Personality and Social Psychology, 76 (3), 367-376.

Bushman, B. J., & Cooper, H. M. (1990). Effects of alcohol on human aggression: An integrative research review. Psychological Bulletin, 107(3), 341-354.

Buss, D. M. (1995). Evolutionary Psychology: A New Paradigm for Psychological Science. Psychological Inquiry, 6 (1), 1-29.

Buss, D. M. (2001). Human Nature and Culture: An Evolutionary Psychological Perspective. Journal of Personality, 69 (6), 955-978.

Buss, D. M. (2003). The Evolution of Desire . New York: Basic Books.

Buss, D. M. (2007). Evolutionary Psychology: The New Science of the Mind (3rd ed.). Needham Heights, MA: Allyn & Bacon.

Buss, D. M., Abbott, M., Angleitner, A., & Asherian, A. (1990). International preferences in selecting mates: A study of 37 cultures. Journal of Cross-Cultural Psychology, 21 (1), 5-47.

Buss, D. M., & Dedden, L. A. (1990). Derogation of competitors. Journal of Social and Personal Relationships, 7 (3), 395-422.

Buss, D. M., & Kenrick, D. T. (1998). Evolutionary social psychology. In D. T. Gilbert, S. T. Fiske, & G. Lindzey (Eds.), The handbook of social psychology (4th ed., Vols. 1-2, pp. 982-1026). New York, NY: McGraw-Hill.

Buss, D. M., & Schmitt, D. P. (1993). Sexual Strategies Theory: An evolutionary perspective on human mating. Psychological Review, 100 (2), 204-232.

Buss, D. M. (1988). Love acts: The evolutionary biology of love. In R. J. Sternberg & M. L. Barnes (Eds.), The psychology of love (pp. 100-118). New Haven, CT: Yale University Press.

Busse, A. L., Gil, G., Santarém, J. M., & Jacob Filho, W. (2009). Physical activity and cognition in the elderly: A review.  Dementia & neuropsychologia ,  3 (3), 204–208. https://doi.org/10.1590/S1980-57642009DN30300005

Butterworth, G. (1992). Origins of self-perception in infancy. Psychological Inquiry, 3 (2), 103-111.

Byrne, D., & Nelson, D. (1965). Attraction as a linear function of proportion of positive reinforcements. Journal of Personality and Social Psychology, 1 (6), 659-663.

Byrne, J. H. (1997). Neuroscience Online: An Electronic Textbook for the Neurosciences. Department of Neurobiology and Anatomy, McGovern Medical School at The University of Texas Health Science Center at Houston (UTHealth). http://nba.uth.tmc.edu/neuroscience/

Cacioppo, J. T., Petty, R. E., Kao, C. F., & Rodriguez, R. (1986). Central and peripheral routes to persuasion: An individual difference perspective. Journal of Personality and Social Psychology, 51 (5), 1032-1043.

Cahill, L., & McGaugh, J. L. (2000). Emotional learning. In A. E. Kazdin (Ed.), Encyclopedia of psychology (Vol. 3, pp. 175-177). Washington, DC: American Psychological Association; New York, NY: Oxford University Press.

Cai, D. J., Mednick, S. A., Harrison, E. M., Kanady, J. C., & Mednick, S. C. (2009). REM, not incubation, improves creativity by priming associative networks.  Proceedings of the National Academy of Sciences of the United States of America ,  106 (25), 10130–10134. https://doi.org/10.1073/pnas.0900271106

Calderone, J. (2018). Do memory supplements really work? Consumer Reports, Feb 8, 2018.

Caldwell, C. H., Zimmerman, M. A., Bernat, D. H., Sellers, R. M., & Notaro, P. C. (2002). Racial identity, maternal support, and psychological distress among African American adolescents. Child Development, 73 (4), 1322-1336.

Caldwell, D. F., & Burger, J. M. (1998). Personality characteristics of job applicants and success in screening interviews. Personnel Psychology, 51 (1), 119-136.

Callegaro, M., McCutcheon, A. L., & Ludwig, J. (2010). Who’s Calling? The Impact of Caller ID on Telephone Survey Response.  Field Methods ,  22 (2), 175–191.  https://doi.org/10.1177/1525822X09356046

Campbell, A., Converse, P. E., & Rodgers, W. L. (1976). The quality of American life: Perceptions, evaluations, and satisfactions . New York, NY US: Russell Sage Foundation.

Campbell, N., & Reece, J. (2002). Biology (6th ed.). San Francisco, CA: Benjamin Cummings.

Campbell, R. L., & Bickhard, M. H. (1986). Knowing levels and developmental stages. Contributions to Human Development, 16 , 146-146.

Cantanzaro, D. (1997). Course enrichment and the job characteristics model. Teaching of Psychology, 24 (2), 85-87.

Canter, P. H. (2003). The therapeutic effects of meditation. BMJ: British Medical Journal, 326 (7398), 1049-1050.

Cantor, N., Norem, J. K., Niedenthal, P. M., Langston, C. A., & Brower, A. M. (1987). Life tasks, self-concept ideals, and cognitive strategies in a life transition. Journal of Personality and Social Psychology, 53 (6), 1178-1191.

Caprara, G. V., Barbaranelli, C., Pastorelli, C., & Perugini, M. (1994). Individual differences in the study of human aggression. Aggressive Behavior, 20 (4), 291-303.

Caprara, G. V., Perugini, M., Barbaranelli, C., Potegal, M., & Knutson, J. F. (1994). Studies of individual differences in aggression The dynamics of aggression: Biological and social processes in dyads and groups. (pp. 123-153). Hillsdale, NJ England: Lawrence Erlbaum Associates, Inc.

Carey, F., Bishop, B., Foster, A., Klein, J., & O’Connell, V. (2004). Therapy by design: style in the therapeutic encounter Elusive elements in practice. (pp. 51-65). London England: Karnac Books.

Carey, G., Goldman, D., Stoff, D. M., Breiling, J., & Maser, J. D. (1997). The genetics of antisocial behavior Handbook of antisocial behavior. (pp. 243-254). Hoboken, NJ US: John Wiley & Sons Inc.

Carroll, R. T. (2003). The Skeptic’s Dictionary: A Collection of Strange Beliefs, Amusing Deceptions, and Dangerous Delusions . Hoboken, NJ: Wiley.

Carstensen, L. L., Isaacowitz, D. M., & Charles, S. T. (1999). Taking time seriously: A theory of socioemotional selectivity.  American Psychologist, 54 (3), 165–181.  https://doi.org/10.1037/0003-066X.54.3.165

Cartwright, D. (1979). Contemporary social psychology in historical perspective. Social Psychology Quarterly, 42 (1), 82-93.

Carver, C. S., Scheier, M. F., & Weintraub, J. K. (1989). Assessing coping strategies: A theoretically based approach. Journal of Personality and Social Psychology, 56, 267–283.

Caspi, A., Hariri, A. R., Holmes, A., Uher, R., & Moffitt, T. E. (2010). Genetic sensitivity to the environment: the case of the serotonin transporter gene and its implications for studying complex diseases and traits.  The American journal of psychiatry ,  167 (5), 509–527. https://doi.org/10.1176/appi.ajp.2010.09101452

Caspi, A. (2000). The child is father of the man: Personality continuities from childhood to adulthood. Journal of Personality and Social Psychology, 78 (1), 158-172.

Caspi, A., Lynam, D., Moffitt, T. E., & Silva, P. A. (1993). Unraveling girls’ delinquency: Biological, dispositional, and contextual contributions to adolescent misbehavior. Developmental Psychology, 29 (1), 19-30.

Caspi, A., Sugden, K., Moffitt, T., Taylor, A., Craig, I., Harrington, H., et al. (2003). Influence of Life Stress on Depression: Moderation by a Polymorphism in the 5-HTT Gene. Science, 301 , 386-389.

Cass, V. C. (1979). Homosexual identity formation: A theoretical model. Journal of Homosexuality, 4 (3), 219-235.

Catania, A. C., & Zimbardo, G. G. (1972). Applications of matrix switching. Journal of the Experimental Analysis of Behavior, 17 (1), 23-24.

Cattell, R. B. (1963). Theory of fluid and crystallized intelligence: A critical experiment. Journal of Educational Psychology, 54 (1), 1-22.

CDC (2017). Short Sleep Duration Among US Adults. Retrieved from https://www.cdc.gov/sleep/data_statistics.html

Ceci, S. J., Huffman, M. L. C., Smith, E., & Loftus, E. F. (1994). Repeatedly thinking about a non-event: Source misattributions among preschoolers. Consciousness and Cognition: An International Journal, 3 (3), 388-407.

Cervone, D. (1997). Social-cognitive mechanisms and personality coherence: Self-knowledge, situational beliefs, and cross-situational coherence in perceived self-efficacy. Psychological Science, 8 (1), 43-50.

Chai, W. J., Abd Hamid, A. I., & Abdullah, J. M. (2018). Working Memory From the Psychological and Neurosciences Perspectives: A Review.  Frontiers in psychology ,  9 , 401. https://doi.org/10.3389/fpsyg.2018.00401

Chapman, A. H. (1954). Paranoid psychoses associated with amphetamine usage: a clinical note. American Journal of Psychiatry, 111 , 43-45.

Chapman, P., & Underwood, G. (2000). Forgetting near-accidents: The roles of severity, culpability and experience in the poor recall of dangerous driving situations. Applied Cognitive Psychology, 14 (1), 31-44.

Characteristics of infant child care: Factors contributing to positive caregiving (1996). Early Childhood Research Quarterly, 11 (3), 269-306.

Charest, J., & Grandner, M. (2020). Sleep and Athletic Performance.  Sleep Medicine Clinics ,  15 (1), 41-57. doi: 10.1016/j.jsmc.2019.11.005

Chaudhari, N., Landin, A. M., & Roper, S. D. (2000). A metabotropic glutamate receptor variant functions as a taste receptor. Nature Neuroscience, 3 , 113-119.

Chen, L., Taishi, P., Majde, J. A., Peterfi, Z., Obal, F., Jr., & Krueger, J. M. (2004). The role of nitric oxide synthases in the sleep responses to tumor necrosis factor-α. Brain, Behavior, and Immunity, 18 (4), 390-398.

Chen, Y., Spagna, A., Wu, T. et al .  (2019). Testing a Cognitive Control Model of Human Intelligence.  Sci Rep   9,  2898. https://doi.org/10.1038/s41598-019-39685-2

Chernik, D. A. (1972). Effect of REM sleep deprivation on learning and recall by humans. Perceptual and Motor Skills, 34 (1), 283-294.

Chew, S. (2020). What Pandemics Can Teach Us about Critical Thinking. www.improvewithmetacognition.com

Chi, M. T. H., & Siegler, R. S. (1978). Knowledge structures and memory development Children’s thinking: What develops? (pp. 73-96). Hillsdale, NJ England: Lawrence Erlbaum Associates, Inc.

Choi, I., & Nisbett, R. E. (1998). Situational salience and cultural differences in the correspondence bias and actor-observer bias. Personality and Social Psychology Bulletin, 24 (9), 949-960.

Christie, I. C., & Friedman, B. H. (2004). Autonomic specificity of discrete emotion and dimensions of affective space: A multivariate approach. International Journal of Psychophysiology, 51 (2), 143-153.

Cialdini, R. B. (2008). Influence: Science and Practice (5th ed.). S.l.: Allyn and Bacon.

Cialdini, R. B., Kallgren, C. A., & Reno, R. R. (1991). A focus theory of normative conduct. Advances in Experimental Social Psychology, 24 , 201-234.

Cianciolo, A. T., Grigorenko, E. L., Jarvin, L., Gil, G., Drebot, M. E., & Sternberg, R. J. (2006). Practical intelligence and tacit knowledge: Advancements in the measurement of developing expertise. Learning and Individual Differences, 16 (3), 235-253.

Cinciripini, P. M., Gritz, E. R., Tsoh, J. Y., Skaar, K. L., Lundberg, J. C., Passik, S. D., et al. (1998). Psychological and behavioral factors in cancer risk Psycho-oncology. (pp. 27-143). New York, NY US: Oxford University Press.

Clark, M. L., & Bittle, M. L. (1992). Friendship expectations and the evaluation of present friendships in middle childhood and early adolescence. Child Study Journal, 22 (2), 115-135.

Clark, R. D., & Hatfield, E. (1989). Gender differences in receptivity to sexual offers. Journal of Psychology & Human Sexuality, 2 (1), 39-55.

Clark, R. E., & Squire, L. R. (1998). Classical conditioning and brain systems: The role of awareness. Science, 280 (5360), 77-81.

Clarke-Stewart, K. A. (1991). Does day care affect development? Journal of Reproductive and Infant Psychology, 9 (2), 67-78.

Clement, S., Schauman, O., Graham, T., Maggioni, F., Evans-Lacko, S., Bezborodovs, N., Morgan, C., Rüsch, N., Brown, J. S., & Thornicroft, G. (2015). What is the impact of mental health-related stigma on help-seeking? A systematic review of quantitative and qualitative studies.  Psychological medicine ,  45 (1), 11–27. https://doi.org/10.1017/S0033291714000129

Clore, G. L., Schwarz, N., & Conway, M. (1994). Affective causes and consequences of social information processing. In R. S. Wyer, Jr., & T. K. Srull (Eds.), Handbook of social cognition (2nd ed., pp. 323-417). Hillsdale, NJ: Lawrence Erlbaum Associates.

Coats, E. J., & Feldman, R. S. (1996). Gender differences in nonverbal correlates of social status. Personality and Social Psychology Bulletin, 22 (10), 1014-1022.

Cody, B. E., Quraishi, A. Y., & Mickalide, A. D. (2004). Headed for injury: An observational survey of helmet use among children ages 5 to 14 participating in wheeled sports. Retrieved June 5, 2004, from http://www.safekids.org/content_documents/ACFC7.pdf

Coe, C. L. (1993). Psychosocial factors and immunity in nonhuman primates: A review. Psychosomatic Medicine, 55 (3), 298-308.

Coghill, R. C., Talbot, J. D., Evans, A. C., Meyer, E., Gjedde, A., Bushnell, M. C., et al. (1994). Distributed processing of pain and vibration by the human brain. Journal of Neuroscience, 14 , 4095-4108.

Cohen, D. B., & MacNeilage, P. F. (1974). A test of the salience hypothesis of dream recall. Journal of Consulting and Clinical Psychology, 42 (5), 699-703.

Cohen, F., Kearney, K. A., Zegans, L. S., Kemeny, M. E., Neuhaus, J. M., & Stites, D. P. (1999). Differential immune system changes with acute and persistent stress for optomists vs pessimists. Brain, Behavior, and Immunity, 13 (2), 155-174.

Cohen, N. J., & Eichenbaum, H. (1993). Memory, amnesia, and the hippocampal system . Cambridge, MA US: The MIT Press.

Cohen, N. J., & Squire, L. R. (1980). Preserved learning and retention of pattern-analyzing skill in amnesia: Dissociation of knowing how and knowing that. Science, 210 (4466), 207-210.

Cohen, S., Frank, E., Doyle, W. J., Skoner, D. P., Rabin, B. S., & Gwaltney, J. M., Jr. (1998). Types of stressors that increase susceptibility to the common cold in healthy adults. Health Psychology, 17 (3), 214-223.

Cohen, S., Kaplan, J. R., Cunnick, J. E., & Manuck, S. B. (1992). Chronic social stress, affiliation, and cellular immune response in nonhuman primates. Psychological Science, 3 (5), 301-304.

Cohen, S., Tyrrell, D. A., & Smith, A. P. (1991). Psychological stress and susceptibility to the common cold. New England Journal of Medicine, 325 (9), 606-612.

Cold Fusion Research.  (1989).

Collaer, M. L., & Hines, M. (1995). Human behavioral sex differences: A role for gonadal hormones during early development? Psychological Bulletin, 118 (1), 55-107.

Conklin, H. M., & Iacono, W. G. (2002). Schizophrenia: A neurodevelopmental perspective. Current Directions in Psychological Science, 11 (1), 33-37.

Cools, J., Schotte, D. E., & McNally, R. J. (1992). Emotional arousal and overeating in restrained eaters. Journal of Abnormal Psychology, 101 (2), 348-351.

Correia, M. J., & Guedry, F. E. (1978). The vestibular system: Basic biophysical and physiological mechanisms. In R. B. Masterton (Ed.), Handbook of Sensory Neurobiology, Volume I, Sensory Integration. . New York: Plenum Press.

Corrigan, P. W., & Matthews, A. K. (2003). Stigma and disclosure: Implications for coming out of the closet. Journal of Mental Health, 12 (3), 235-248.

Cosier, R. A., & Dalton, D. R. (1990). Positive effects of conflict: A field assessment. International Journal of Conflict Management, 1 (1), 81-92.

Costa, P. T., Jr., McCrae, R. R., Siegler, I. C., & Cloninger, C. R. (1999). Continuity and change over the adult life cycle: Personality and personality disorders Personality and psychopathology. (pp. 129-154). Washington, DC US: American Psychiatric Association.

Costa, P. T., & McCrae, R. R. (1992). The five-factor model of personality and its relevance to personality disorders. Journal of Personality Disorders, 6 (4), 343-359.

Costanzo, P. R., & Shaw, M. E. (1966). Conformity as a function of age level. Child Development, 37(4), 967-975.

Cowan, P. A., Cowan, C. P., Borkowski, J. G., Ramey, S. L., & Bristol-Power, M. (2002). What an intervention design reveals about how parents affect their children’s academic achievement and behavior problems Parenting and the child’s world: Influences on academic, intellectual, and social-emotional development. (pp. 75-97). Mahwah, NJ US: Lawrence Erlbaum Associates Publishers.

Cox, M. J., Owen, M. T., Henderson, V. K., & Margand, N. A. (1992). Prediction of infant-father and infant-mother attachment. Developmental Psychology, 28 (3), 474-483.

Craik, F. I., & Lockhart, R. S. (1972). Levels of processing: A framework for memory research. Journal of Verbal Learning & Verbal Behavior, 11 (6), 671-684.

Craik, F. I., & Tulving, E. (1975). Depth of processing and the retention of words in episodic memory. Journal of Experimental Psychology: General, 104 (3), 268-294.

Crandall, C. S. (1988). Social contagion of binge eating. Journal of Personality and Social Psychology, 55 (4), 588-598.

Crick, F., & Koch, C. (2000). The unconscious homunculus. Neuro-Psychoanalysis, 2 (1), 3-11.

Crick, F. H. C. (1994). The astonishing hypothesis: The scientific search for the soul . New York, NY England: Charles Scribner’S Sons.

Crivelli, C., Russell, J. A., Jarillo, S., & Fernández-Dols, J.-M. (2017). Recognizing spontaneous facial expressions of emotion in a small-scale society of Papua New Guinea.  Emotion, 17 (2), 337–347.  https://doi.org/10.1037/emo0000236

Crivelli, C., Russell, J., Jarillo, S., & Fernández-Dols, J. (2016). The fear gasping face as a threat display in a Melanesian society.  Proceedings of The National Academy Of Sciences ,  113 (44), 12403-12407. doi: 10.1073/pnas.1611622113

Crocker, J., & Luhtanen, R. K. (2003). Level of self-esteem and contingencies of self-worth: Unique effects on academic, social and financial problems in college students. Personality and Social Psychology Bulletin, 29 (6), 701-712.

Crocker, J., & Major, B. (2003). The Self-Protective Properties of Stigma: Evolution of a Modern Classic. Psychological Inquiry, 14 (3), 232-237.

Crocker, J., Park, L. E., Leary, M. R., & Tangney, J. P. (2003). Seeking self-esteem: Construction, maintenance, and protection of self-worth Handbook of self and identity. (pp. 291-313). New York, NY US: Guilford Press.

Crocker, J., & Wolfe, C. T. (2001). Contingencies of self-worth. Psychological Review, 108 (3), 593-623.

Crosby, R. A., & Yarber, W. L. (2001). Perceived versus actual knowledge about correct condom use among U.S. adolescents: Results from a national study. Journal of Adolescent Health, 28 (5), 415-420.

Cross, S., & Markus, H. (1991). Possible selves across the life span. Human Development, 34 (4), 230-255.

Crowder, R. G., & Morton, J. (1969). Precategorical acoustic storage (PAS). Perception & Psychophysics, 5 (6), 365-373.

Csikszentmihalyi, M. (1997a). Creativity: Flow and the psychology of discovery and invention . New York, NY US: HarperCollins Publishers.

Csikszentmihalyi, M. (1997b). Finding flow: The psychology of engagement with everyday life . New York, NY US: Basic Books.

Cuijpers, P., van Straten, A., Andersson, G., & van Oppen, P. (2008). Psychotherapy for depression in adults: A meta-analysis of comparative outcome studies. Journal of Consulting and Clinical Psychology, 76(6), 909-922. https://doi.org/10.1037/a0013075

Cunningham, J. G., & Odom, R. D. (1986). Differential salience of facial features in children’s perception of affective expression. Child Development, 57 (1), 136-142.

Cunningham, M. R., Roberts, A. R., Barbee, A. P., Druen, P. B., & Wu, C.-H. (1995). ‘Their ideas of beauty are, on the whole, the same as ours’: Consistency and variability in the cross-cultural perception of female physical attractiveness. Journal of Personality and Social Psychology, 68 (2), 261-279.

Czeisler, M., Lane, R., Petrosky, E., Wiley, J., Christensen, A., & Njai, R. et al. (2020). Mental Health, Substance Use, and Suicidal Ideation During the COVID-19 Pandemic — United States, June 24–30, 2020.  MMWR. Morbidity And Mortality Weekly Report ,  69 (32), 1049-1057. doi: 10.15585/mmwr.mm6932a1

Daly, M., & Wilson, M. (1988). Homicide . Hawthorne, NY US: Aldine de Gruyter.

Daly, M., Wilson, M. I., & Weghorst, S. J. (1982). Male sexual jealousy. Ethology & Sociobiology, 3 (1), 11-27.

Danieli, Y., Engdahl, B., Schlenger, W. E., Moghaddam, F. M., & Marsella, A. J. (2004). The psychosocial aftermath of terrorism. In Understanding terrorism: Psychosocial roots, consequences, and interventions (pp. 223-246). Washington, DC US: American Psychological Association.

Darley, J. M., & Latane, B. (1968a). Bystander intervention in emergencies: Diffusion of responsibility. Journal of Personality and Social Psychology, 8 (4), 377-383.

Darley, J. M., & Latane, B. (1968b). When will people help in a crisis? Psychology Today, 2 (7), 54.

Dashiell, J. F. (1930). An experimental analysis of some group effects. The Journal of Abnormal and Social Psychology, 25 (2), 190-199.

Davis, J. (2017). 2017 Annual Recruiting Survey. HR Daily Advisor. Retrieved from https://hrdailyadvisor.blr.com/2017/03/07/2017-annual-recruiting-survey/

Davis, J. L. (2004). Antidepressant Rx: Careful Monitoring Needed. Retrieved May 10, 2004, from http://webcenter.health.webmd.netscape.com/content/article/86/99206.htm

Davis, K. L., Kahn, R. S., Ko, G., & Davidson, M. (1991). Dopamine in schizophrenia: A review and reconceptualization. American Journal of Psychiatry, 148 (11), 1474-1486.

Davis, M., Lang, P. J., Gallagher, M., & Nelson, R. J. (2003). Emotion. In Handbook of psychology: Biological psychology, Vol. 3 (pp. 405-439). Hoboken, NJ US: John Wiley & Sons Inc.

Davis, M., Robbins Eshelman, E., & McKay, M. (1995). The Relaxation and Stress Reduction Workbook (4 ed.). Oakland, CA: New Harbinger Publications.

Davis, M. H., Luce, C., & Kraus, S. J. (1994). The heritability of characteristics associated with dispositional empathy. Journal of Personality, 62 (3), 369-391.

Davis Jr, T. (2020). Perceptions of Racial Progress and Mistreatment Today: The Implications for the Kerner Commission’s “Two Nations” Thesis.  National Review of Black Politics ,  1 (2), 251-270.

Dawes, R. M., & Mulford, M. (1996). The false consensus effect and overconfidence: Flaws in judgment or flaws in how we study judgment? Organizational Behavior and Human Decision Processes, 65 (3), 201-211.

Day, D. V., & Sessa, V. I. (2003). Accounting for Choice: How Committees Justify Executive Selection Decisions. Psychologist-Manager Journal, 6 (2), 79-96.

de Boysson-Bardies, B., Halle, P., Sagart, L., & Durand, C. (1989). A crosslinguistic investigation of vowel formants in babbling. Journal of Child Language, 16 (1), 1-17.

De Dreu, C. K. W., Weingart, L. R., & Kwon, S. (2000). Influence of social motives on integrative negotiation: A meta-analytic review and test of two theories. Journal of Personality and Social Psychology, 78 (5), 889-905.

De Wolff, M., & van Ijzendoorn, M. H. (1997). Sensitivity and attachment: A meta-analysis on parental antecedents of infant attachment. Child Development, 68 (4), 571-591.

DeCasper, A. J., & Fifer, W. P. (1980). Of human bonding: Newborns prefer their mothers’ voices. Science, 208 (4448), 1174-1176.

DeCasper, A. J., & Prescott, P. A. (1984). Human newborns’ perception of male voices: Preference, discrimination, and reinforcing value. Developmental Psychobiology, 17 (5), 481-491.

DeCasper, A. J., & Spence, M. J. (1986). Prenatal maternal speech influences newborns’ perception of speech sounds. Infant Behavior & Development, 9 (2), 133-150.

DeCasper, A. J., Spence, M. J., Weiss, M. J. S., & Zelazo, P. R. (1991). Auditorily mediated behavior during the perinatal period: A cognitive view. In Newborn attention: Biological constraints and the influence of experience (pp. 142-176). Westport, CT US: Ablex Publishing.

Deci, E. L. (1972). Intrinsic motivation, extrinsic reinforcement, and inequity. Journal of Personality and Social Psychology, 22 (1), 113-120.

Dement, W. C., & Vaughan, C. (1999). The promise of sleep: A pioneer in sleep medicine explores the vital connection between health, happiness, and a good night’s sleep . New York, NY US: Dell Publishing Co.

Dempster, F. (1985). Short-term memory development in childhood and adolescence. In C. J. Brainerd & M. Pressley (Eds.), Basic Processes in Memory Development: Progress in Cognitive Development Research. New York: Springer.

Dempster, F. N. (1978). Memory span and short-term memory capacity: A developmental study. Journal of Experimental Child Psychology, 26 (3), 419-431.

Denford, S., Abraham, C., Campbell, R., & Busse, H. (2017). A comprehensive review of reviews of school-based interventions to improve sexual-health.  Health Psychology Review ,  11 (1), 33–52. https://doi-org.cod.idm.oclc.org/10.1080/17437199.2016.1240625

Dennett, D. (1978). Beliefs about beliefs. Behavioral and Brain Sciences, 1, 568-570.

Descartes, R. (1637/1998). Discourse on Method (D. A. Cress, Trans.). Indianapolis, IN: Hackett Publishing Company.

Dessureau, B. K., Kurowski, C. O., & Thompson, N. S. (1998). A reassessment of the role of pitch and duration in adults’ responses to infant crying. Infant Behavior & Development, 21 (2), 367-371.

DeSteno, D., Petty, R. E., Wegener, D. T., & Rucker, D. D. (2000). Beyond valence in the perception of likelihood: The role of emotion specificity. Journal of Personality and Social Psychology, 78 (3), 397-416.

Deutsch, D., & Feroe, J. (1981). The internal representation of pitch sequences in tonal music. Psychological Review, 88 (6), 503-522.

Dewing, P., Shi, T., Horvath, S., & Vilain, E. (2003). Sexually dimorphic gene expression in mouse brain precedes gonadal differentiation. Molecular Brain research, 118 (1), 82-90.

DiCenso, A., Guyatt, G., Willan, A., & Griffith, L. (2002). Interventions to reduce unintended pregnancies among adolescents: Systematic review of randomised controlled trials. BMJ: British Medical Journal, 324 (7351), 1426-1430.

Dickson, D. H., & Kelly, I. W. (1985). The ‘Barnum effect’ in personality assessment: A review of the literature. Psychological Reports, 57 (2), 367-382.

DiClemente, C. C., & Prochaska, J. O. (1982). Self-change and therapy change of smoking behavior: A comparison of processes of change in cessation and maintenance. Addictive Behaviors, 7 (2), 133-142.

Diener, E., & Diener, M. (1995). Cross-cultural correlates of life satisfaction and self-esteem. Journal of Personality and Social Psychology, 68 (4), 653-663.

Diener, E., Diener, M., & Diener, C. (1995). Factors predicting the subjective well-being of nations. Journal of Personality and Social Psychology, 69 (5), 851-864.

Diener, E., Wolsic, B., & Fujita, F. (1995). Physical attractiveness and subjective well-being. Journal of Personality and Social Psychology, 69 (1), 120-129.

Dietrich, A. (2004). Endocannabinoids and exercise.  British Journal Of Sports Medicine ,  38 (5), 536-541. doi: 10.1136/bjsm.2004.011718

DiLalla, L. F. (2002). Behavior genetics of aggression in children: Review and future directions. Developmental Review, 22 (4), 593-622.

Dill, J. C., & Anderson, C. A. (1995). Effects of frustration justification on hostile aggression. Aggressive Behavior, 21 (5), 359-369.

Dion, K. K., & Dion, K. L. (1993). Individualistic and collectivistic perspectives on gender and the cultural context of love and intimacy. Journal of Social Issues, 49 (3), 53-69.

Dittes, J. E. (1969). Review of From Cry to Word: Contributions toward a Psychology of Prayer. PsycCRITIQUES, 14 (5), 304-304.

Ditto, P. H., & Lopez, D. F. (1992). Motivated skepticism: Use of differential decision criteria for preferred and nonpreferred conclusions.  Journal of Personality and Social Psychology, 63 , 568–584.

Divale, W. T. (1972). System population control in the middle and upper Paleolithic: Inferences based on contemporary hunter-gatherers. World Archaeology, 4 , 222-243.

Dodds, P. S., Muhamad, R., & Watts, D. J. (2003). An Experimental Study of Search in Global Social Networks. Science, 301 (5634), 827-828.

Does quality of child care affect child outcomes at age 4 1/2? (2003). Developmental Psychology, 39 (3), 451-469.

Dollard, J., Miller, N. E., Doob, L. W., Mowrer, O. H., & Sears, R. R. (1939). Frustration and aggression . New Haven, CT US: Yale University Press.

Domhoff, G. W. (1996). Finding meaning in dreams: A quantitative approach . New York, NY US: Plenum Press.

Domhoff, G. W. (2003a). New ways to study meaning in dreams. In The scientific study of dreams: Neural networks, cognitive development, and content analysis (pp. 107-134). Washington, DC US: American Psychological Association.

Domhoff, G. W. (2003b). The Hall-Van de Castle system. In The scientific study of dreams: Neural networks, cognitive development, and content analysis (pp. 67-94). Washington, DC US: American Psychological Association.

Doty, R. L. (2001). Olfaction. Annual Review of Psychology, 52 , 423-452.

Downs, A. C. (1990). Perceptions of physical appearance and adolescents’ social alienation. Psychological Reports, 67 (3), 1305-1306.

Drevets, W. C., Gautier, C., Price, J. C., Kupfer, D. J., Kinahan, P. E., Grace, A. A., et al. (2001). Amphetamine-induced dopamine release in human ventral striatum correlates with euphoria. Biological Psychiatry, 49 (2), 81-96.

Dunbar, M., Ford, G., Hunt, K., & Der, G. (2000). Question wording effects in the assessment of global self-esteem. European Journal of Psychological Assessment, 16 (1), 13-19.

Dunbar, R. I. M. (1998). The social brain hypothesis.  Evolutionary Anthropology ,  6 , 178–190.

Dunbar, R. I. M. (2016). The social brain hypothesis and human evolution. Oxford Research Encyclopedia, Psychology. DOI:10.1093/acrefore/9780190236557.013.44

Eagly, A. H. (1995). The science and politics of comparing women and men. American Psychologist, 50 (3), 145-158.

Eagly, A. H., & Carli, L. L. (2003). Finding gender advantage and disadvantage: Systematic research integration is the solution. Leadership Quarterly, 14 (6), 851-859.

Eagly, A. H., & Chaiken, S. (1993). The psychology of attitudes . Orlando, FL US: Harcourt Brace Jovanovich College Publishers.

Eagly, A. H., Johannesen-Schmidt, M. C., & van Engen, M. L. (2003). Transformational, transactional, and laissez-faire leadership styles: A meta-analysis comparing women and men. Psychological Bulletin, 129 (4), 569-591.

Eagly, A. H., & Johnson, B. T. (1990). Gender and leadership style: A meta-analysis. Psychological Bulletin, 108 (2), 233-256.

Eagly, A. H., & Wood, W. (1999). The origins of sex differences in human behavior: Evolved dispositions versus social roles. American Psychologist, 54 (6), 408-423.

Easterbrook, M. A., Kisilevsky, B. S., Hains, S. M. J., & Muir, D. W. (1999). Faceness or complexity: Evidence from newborn visual tracking of facelike stimuli. Infant Behavior & Development, 22 (1), 17-35.

Eastman, C. I., Young, M. A., Fogg, L. F., Liu, L., & Meaden, P. M. (1998). Bright light treatment of winter depression: A placebo-controlled trial. Archives of General Psychiatry, 55 (10), 883-889.

Ebrahimi, F. A. W., & Chess, A. (1998). The specification of olfactory neurons. Current Opinion in Neurobiology, 8, 453-457.

Ebstein, R. P., Benjamin, J., Belmaker, R. H., Plomin, R., DeFries, J. C., Craig, I. W., et al. (2003). Behavioral genetics, genomics, and personality. In Behavioral genetics in the postgenomic era (pp. 365-388). Washington, DC US: American Psychological Association.

Edakubo, S., & Fushimi, K. (2020). Mortality and risk assessment for anorexia nervosa in acute-care hospitals: A nationwide administrative database analysis. BMC Psychiatry, 20, 19. https://doi.org/10.1186/s12888-020-2433-8

Efklides, A., Demetriou, A., & Metallidou, Y. (1994). The structure and development of propositional reasoning ability: Cognitive and metacognitive aspects. In Intelligence, mind, and reasoning: Structure and development (pp. 151-172). Amsterdam Netherlands: North-Holland/Elsevier Science Publishers.

Efraty, D., & Sirgy, M. J. (1990). The effects of quality of working life (QWL) on employee behavioral responses. Social Indicators Research, 22 (1), 31-47.

Eisenberger, N. I., Lieberman, M. D., & Williams, K. D. (2003). Does Rejection Hurt? An fMRI Study of Social Exclusion. Science, 302 (5643), 290-292.

Ekman, P., Campos, J. J., Davidson, R. J., & de Waal, F. B. M. (2003). Darwin, deception, and facial expression. In Emotions inside out: 130 years after Darwin’s The expression of the emotions in man and animals (pp. 205-221). New York, NY US: New York University Press.

Ekman, P., & Friesen, W. V. (1978/2002). Facial Action Coding System: A Technique for the Measurement of Facial Movement . Palo Alto, CA: Consulting Psychologists Press.

Ekman, P., Manstead, A. S. R., Frijda, N., & Fischer, A. (2004). What we become emotional about. In Feelings and emotions: The Amsterdam symposium (pp. 119-135). New York, NY US: Cambridge University Press.

Ekman, P., O’Sullivan, M., & Frank, M. G. (1999). A few can catch a liar. Psychological Science, 10 (3), 263-266.

Ekman, P., Sorenson, E. R., & Friesen, W. V. (1969). Pan-cultural elements in facial displays of emotion. Science, 164 (3875), 86-88.

Eliminate Disparities in Infant Mortality (2004).  Retrieved February 6, 2004, from http://www.cdc.gov/omh/AMH/factsheets/infant.htm

Elkind, D. (1976). Conceptualizing Adolescence. PsycCRITIQUES, 21 (8), 538-539.

Elkind, D. (1985). Egocentrism Redux. Developmental Review, 5 , 218-226.

Elkind, D., & Bowen, R. (1979). Imaginary audience behavior in children and adolescents. Developmental Psychology, 15 (1), 38-44.

Ellason, J. W., & Ross, C. A. (1997). Childhood trauma and psychiatric symptoms. Psychological Reports, 80 (2), 447-450.

Elms, A. C. (1972). Allport, Freud, and the clean little boy. Psychoanalytic Review, 59 (4), 627-632.

Ember, C. (1978). Myths about hunter-gatherers. Ethnology, 27 , 239-248.

Emde, R. N., Plomin, R., Robinson, J., & Corley, R. (1992). Temperament, emotion, and cognition at fourteen months: The MacArthur Longitudinal Twin Study. Child Development, 63 (6), 1437-1455.

Epley, N., & Dunning, D. (2000). Feeling ‘holier than thou’: Are self-serving assessments produced by errors in self- or social prediction? Journal of Personality and Social Psychology, 79 (6), 861-875.

Epstein, S. (1979). The stability of behavior: I. On predicting most of the people much of the time. Journal of Personality and Social Psychology, 37 (7), 1097-1126.

Epstein, S. (1980). The stability of behavior: II. Implications for psychological research. American Psychologist, 35 (9), 790-806.

Epstein, S. (1983). Aggregation and beyond: Some basic issues on the prediction of behavior. Journal of Personality, 51 (3), 360-392.

Epstein, W., & Hanson, S. (1977). Is the discrimination of motion-path length unique? Perception & Psychophysics, 22 (2), 152-158.

Erikson, E. H. (1968). Identity: youth and crisis . Oxford England: Norton & Co.

Eriksson, P. S., Perfilieva, E., Bjork-Eriksson, T., Alborn, A.-M., Nordborg, C., Peterson, D. A., et al. (1998). Neurogenesis in the adult human hippocampus. Nature Medicine, 4 , 1313-1317.

Erzen, E., & Çikrikci, Ö. (2018). The effect of loneliness on depression: A meta-analysis.  The International journal of social psychiatry ,  64 (5), 427–435. https://doi.org/10.1177/0020764018776349

Esser, J. K. (1998). Alive and well after 25 years: A review of groupthink research. Organizational Behavior and Human Decision Processes, 73 (2), 116-141.

Esses, V. M., Jackson, L. M., & Armstrong, T. L. (1998). Intergroup competition and attitudes toward immigrants and immigration: An instrumental model of group conflict. Journal of Social Issues, 54 (4), 699-724.

Esterling, B. A., Antoni, M. H., Fletcher, M. A., Margulies, S., & Schneiderman, N. (1994). Emotional disclosure through writing or speaking modulates latent Epstein-Barr virus antibody titers. Journal of Consulting and Clinical Psychology, 62 (1), 130-140.

Etkin, A., & Wager, T. D. (2007). Functional neuroimaging of anxiety: a meta-analysis of emotional processing in PTSD, social anxiety disorder, and specific phobia.  The American journal of psychiatry ,  164 (10), 1476–1488. https://doi.org/10.1176/appi.ajp.2007.07030504

Ettman, C. K., Abdalla, S. M., Cohen, G. H., Sampson, L., Vivier, P. M., & Galea, S. (2020). Prevalence of depression symptoms in US adults before and during the COVID-19 pandemic. JAMA Network Open, 3 (9), e2019686. https://doi.org/10.1001/jamanetworkopen.2020.19686

Evans, F. J., & Orne, M. T. (1965). Motivation, performance, and hypnosis. International Journal of Clinical and Experimental Hypnosis, 13 (2), 103-116.

Evans, J. S. B. (1982). On statistical intuitions and inferential rules: A discussion of Kahneman and Tversky. Cognition, 12 (3), 319-323.

Everitt, B. J., & Robbins, T. W. (2000). Second-order schedules of drug reinforcement in rats and monkeys: Measurement of reinforcing efficacy and drug-seeking behaviour. Psychopharmacology, 153 (1), 17-30.

Exline, J. J., & Hill, P. C. (2012). Humility: A consistent and robust predictor of generosity.  The Journal of Positive Psychology, 7 (3), 208–218.  https://doi.org/10.1080/17439760.2012.671348

Eysenck, H. J., & Puka, B. (1994). The biology of morality. In Defining perspectives in moral development (pp. 212-229). New York, NY US: Garland Publishing.

Fagan, J. F. (1974). Infant recognition memory: The effects of length of familiarization and type of discrimination task. Child Development, 45 (2), 351-356.

Fairburn, C. G., & Clark, D. M. (1997). Eating disorders. In Science and practice of cognitive behaviour therapy (pp. 209-241). New York, NY US: Oxford University Press.

Fairburn, C. G., Welch, S. L., Doll, H. A., & Davies, B. A. (1997). Risk factors for bulimia nervosa: A community-based case-control study. Archives of General Psychiatry, 54 (6), 509-517.

Fantz, R. L. (1961). The origin of form perception. Scientific American, 204 (5), 66-72.

Fearon, P., Shmueli-Goetz, Y., Viding, E., Fonagy, P., & Plomin, R. (2014). Genetic and environmental influences on adolescent attachment.  Journal of child psychology and psychiatry, and allied disciplines ,  55 (9), 1033–1041. https://doi.org/10.1111/jcpp.12171

Feather, N. T., & Sherman, R. (2002). Envy, resentment, Schadenfreude, and sympathy: Reactions to deserved and underserved achievement and subsequent failure. Personality and Social Psychology Bulletin, 28 (7), 953-961.

Feldman, R., & Dement, W. (1968). Possible relationships between REM sleep and memory consolidation. Psychophysiology, 5 (2), 243-243.

Feldman, S. S., & Cauffman, E. (1999). Sexual betrayal among late adolescents: Perspectives of the perpetrator and the aggrieved. Journal of Youth and Adolescence, 28 (2), 235-258.

Ferris, C. F., De Vries, G. J., Stoff, D. M., Breiling, J., & Maser, J. D. (1997). Ethological models for examining the neurobiology of aggressive and affiliative behaviors. In Handbook of antisocial behavior (pp. 255-268). Hoboken, NJ US: John Wiley & Sons Inc.

Festinger, L. (1957). A theory of cognitive dissonance . Oxford England: Row, Peterson.

Festinger, L., & Carlsmith, J. M. (1959). Cognitive consequences of forced compliance. The Journal of Abnormal and Social Psychology, 58 (2), 203-210.

Fiedler, F. E., House, R. J., Cooper, C. L., & Robertson, I. T. (1994). Leadership theory and research: A report of progress. In Key reviews in managerial psychology: Concepts and research for practice (pp. 97-116). Oxford England: John Wiley & Sons.

Fiedler, K., Nickel, S., Muehlfriedel, T., & Unkelbach, C. (2001). Is mood congruency an effect of genuine memory or response bias? Journal of Experimental Social Psychology, 37 (3), 201-214.

Field, T., Stoller, S., Vega-Lahr, N., & Scafidi, F. (1986). Maternal unavailability effects on very young infants in homecare vs. daycare. Infant Mental Health Journal, 7 (4), 274-280.

Fields, H. L. (2006). A motivation-decision model of pain: The role of opioids. In H. Flor, E. Kalso, & J. O. Dostrovsky (Eds.), Proceedings of the 11th World Congress on Pain (pp. 449–459). Seattle, WA: IASP.

Finkel, D., Wille, D. E., & Matheny, A. P., Jr. (1998). Preliminary results from a twin study of infant-caregiver attachment. Behavior Genetics, 28 (1), 1-8.

Finn, S. E. (1986). Stability of personality self-ratings over 30 years: Evidence for an age/cohort interaction. Journal of Personality and Social Psychology, 50 (4), 813-818.

Fishbach, A., & Ferguson, M. F. (2007). The goal construct in social psychology. In A. W. Kruglanski & E. T. Higgins (Eds.),  Social psychology: Handbook of basic principles  (pp. 490–515). New York, NY: Guilford Press.

Fishbach, A., & Trope, Y. (2007). Implicit and explicit mechanisms of counteractive self-control. In J. Shah and W. Gardner (Eds.), Handbook of motivation science (pp. 281–294). New York, NY: Guilford Press.

Fishbach, A., Friedman, R. S., & Kruglanski, A. W. (2003). Leading us not unto temptation: Momentary allurements elicit overriding goal activation.  Journal of Personality and Social Psychology, 84 (2), 296–309.

Fischhoff, B. (1982). For those condemned to study the past: Heuristics and biases in hindsight. In D. Kahneman, P. Slovic & A. Tversky (Eds.), Judgment Under Uncertainty: Heuristics and Biases. Cambridge, UK: Cambridge University Press.

Fischhoff, B., Slovic, P., & Lichtenstein, S. (1977). Knowing with certainty: The appropriateness of extreme confidence. Journal of Experimental Psychology: Human Perception and Performance, 3 (4), 552-564.

Fishbein, M., & Ajzen, I. (1974). Attitudes towards objects as predictors of single and multiple behavioral criteria. Psychological Review, 81 (1), 59-74.

Fiske, A. P., Kitayama, S., Markus, H. R., Nisbett, R. E., Gilbert, D. T., Fiske, S. T., et al. (1998). The cultural matrix of social psychology. In The handbook of social psychology, Vols. 1 and 2 (4th ed., pp. 915-981). New York, NY US: McGraw-Hill.

Fiske, S. T., Gilbert, D. T., & Lindzey, G. (1998). Stereotyping, prejudice, and discrimination. In The handbook of social psychology, Vols. 1 and 2 (4th ed., pp. 357-411). New York, NY US: McGraw-Hill.

Fiske, S. T., & Neuberg, S. L. (1990). A continuum model of impression formation, from category-based to individuating processes: Influence of information and motivation on attention and interpretation. In M. P. Zanna (Ed.), Advances in Experimental Social Psychology (Vol. 23). New York: Academic Press.

Fiske, S. T., Ruscher, J. B., Mackie, D. M., & Hamilton, D. L. (1993). Negative interdependence and prejudice: Whence the affect? In Affect, cognition, and stereotyping: Interactive processes in group perception (pp. 239-268). San Diego, CA US: Academic Press.

Fitzgerald, L. F., Swan, S., Magley, V. J., & O’Donohue, W. (1997). But was it really sexual harassment?: Legal, behavioral, and psychological definitions of the workplace victimization of women. In Sexual harassment: Theory, research, and treatment (pp. 5-28). Needham Heights, MA US: Allyn & Bacon.

Fivush, R., Hamond, N. R., & Hudson, J. A. (1990). Autobiographical memory across the preschool years: Toward reconceptualizing childhood amnesia. In Knowing and remembering in young children (pp. 223-248). New York, NY US: Cambridge University Press.

Fivush, R., Neisser, U., & Winograd, E. (1988). The functions of event memory: Some comments on Nelson and Barsalou. In Remembering reconsidered: Ecological and traditional approaches to the study of memory (pp. 277-282). New York, NY US: Cambridge University Press.

Flanagan, D.P. and Dixon, S.G. (2014). The Cattell‐Horn‐Carroll Theory of Cognitive Abilities. In Encyclopedia of Special Education (eds. C.R. Reynolds, K.J. Vannest and E. Fletcher‐Janzen).  https://doi.org/10.1002/9781118660584.ese0431

Flavell, J. H., Miller, P. H., & Damon, W. (1998). Social cognition. In Handbook of child psychology: Volume 2: Cognition, perception, and language (pp. 851-898). Hoboken, NJ US: John Wiley & Sons Inc.

Fonow, M. M., Richardson, L., & Wemmerus, V. A. (1992). Feminist rape education: Does it work? Gender & Society, 6 (1), 108-121.

Ford, J. K., Quiñones, M. A., Sego, D. J., & Sorra, J. S. (1992). Factors affecting the opportunity to perform trained tasks on the job. Personnel Psychology, 45 (3), 511-527.

Forer, B. R. (1949). The fallacy of personal validation: a classroom demonstration of gullibility. The Journal of Abnormal and Social Psychology, 44 (1), 118-123.

Forgas, J. P. (1998). Asking nicely? The effects of mood on responding to more or less polite requests. Personality and Social Psychology Bulletin, 24 (2), 173-185.

Forgas, J. P. (2006a). Affect in social thinking and behavior . New York, NY US: Psychology Press.

Forgas, J. P. (2006b). Affective influences on interpersonal behavior: Towards understanding the role of affect in everyday interactions. In Affect in social thinking and behavior (pp. 269-289). New York, NY US: Psychology Press.

Forman, T. A. (2003). The Social Psychological Costs of Racial Segmentation in the Workplace: A Study of African Americans’ Well-being. Journal of Health and Social Behavior, 44 (3), 332-352.

Forsyth, D. R., & Wibberly, K. H. (1993). The self-reference effect: Demonstrating schematic processing in the classroom. Teaching of Psychology, 20 (4), 237-238.

Fosnacht, K., McCormick, A. C., & Lerma, R. (2018). First-year students’ time use in college: A latent profile analysis. Research in Higher Education, 59, 958-978. https://doi.org/10.1007/s11162-018-9497-z

Foulkes, D., Sullivan, B., Kerr, N. H., & Brown, L. (1988). Appropriateness of dream feelings to dreamed situations. Cognition & Emotion, 2 (1), 29-39.

Frable, D. E., & Bem, S. L. (1985). If you are gender schematic, all members of the opposite sex look alike. Journal of Personality and Social Psychology, 49 (2), 459-468.

Fraley, B., & Aron, A. (2004). The effect of a shared humorous experience on closeness in initial encounters. Personal Relationships, 11 (1), 61-78.

Franz, C. E., York, T. P., Eaves, L. J., et al. (2011). Adult romantic attachment, negative emotionality, and depressive symptoms in middle aged men: A multivariate genetic analysis. Behavior Genetics, 41, 488-498. https://doi.org/10.1007/s10519-010-9428-z

Frazier, L. D., Hooker, K., Johnson, P. M., & Kaus, C. R. (2000). Continuity and change in possible selves in later life: A 5-year longitudinal study. Basic and Applied Social Psychology, 22 (3), 237-243.

Freedle, R. O. (2003). Correcting the SAT’s ethnic and social-class bias: A method for reestimating SAT scores. Harvard Educational Review, 73 (1), 1-43.

French, G. M., & Harlow, H. (1955). Locomotor reaction decrement in normal and brain-damaged rhesus monkeys. Journal of Comparative and Physiological Psychology, 48 (6), 496-501.

Freud, S., Milman, D. S., & Goldman, G. D. (1987). Resistance and repression Techniques of working with resistance. (pp. 25-40). Lanham, MD US: Jason Aronson.

Frisch, H. (1977). Sex Stereotypes in Adult-Infant Play. Child Development, 48(4), 1671-1675. doi:10.2307/1128533

Fromuth, M. E., Burkhart, B. R., & Jones, C. W. (1991). Hidden child molestation: An investigation of adolescent perpetrators in a nonclinical sample. Journal of Interpersonal Violence, 6 (3), 376-384.

Fry, D. P. (1998). Anthropological perspectives on aggression: Sex differences and cultural variation. Aggressive Behavior, 24 , 81-95.

Fujita, T., & Horiuchi, T. (2004). Self-reference effect in an independence/remember-know procedure. Japanese Journal of Psychology, 74 (6), 547-551.

Furman, C. E., & Duke, R. A. (1988). Effect of majority consensus on preferences for recorded orchestral and popular music. Journal of Research in Music Education, 36 (4), 220-231.

Furman, W., & Buhrmester, D. (1992). Age and sex differences in perceptions of networks of personal relationships. Child Development, 63 (1), 103-115.

Fussell, S. R. (2002a). The verbal communication of emotion: Introduction and overview. In S. R. Fussell (Ed.), The Verbal Communication of Emotions: Interdisciplinary Perspectives (pp. 1-15). Mahwah, NJ: Lawrence Erlbaum Associates.

Fussell, S. R. (2002b). The verbal communication of emotions: Interdisciplinary perspectives . Mahwah, NJ US: Lawrence Erlbaum Associates Publishers.

Futuyma, D. (1995). Science on Trial . Sunderland, MA: Sinauer Associates.

Gable, S. L., Reis, H. T., Impett, E. A., & Asher, E. R. (2004). What Do You Do When Things Go Right? The Intrapersonal and Interpersonal Benefits of Sharing Positive Events. Journal of Personality and Social Psychology, 87 (2), 228-245.

Gaik, W. (1993). Combined evaluation of interaural time and intensity differences: Psychoacoustic results and computer modeling. Journal of the Acoustical Society of America, 94 (1), 98-110.

Gais, S., & Born, J. (2004a). Declarative memory consolidation: Mechanisms acting during human sleep. Learning & Memory, 11 (6), 679-685.

Gais, S., & Born, J. (2004b). Multiple Processes Strengthen Memory During Sleep. Psychologica Belgica, 44 (1), 105-120.

Gao, G. (2001). Intimacy, passion and commitment in Chinese and US American romantic relationships. International Journal of Intercultural Relations, 25 (3), 329-342.

Garb, H. N., Florio, C. M., & Grove, W. M. (1998). The validity of the Rorschach and the Minnesota Multiphasic Personality Inventory: Results from meta-analyses. Psychological Science, 9 (5), 402-404.

Garcia, J., & Koelling, R. A. (1966). Relation of cue to consequence in avoidance learning. Psychonomic Science, 4 (3), 123-124.

Gardner, H. (1985). The mind’s new science: A history of the cognitive revolution . New York, NY US: Basic Books.

Gardner, H. (1999). Intelligence reframed: Multiple intelligences for the 21st century . New York, NY US: Basic Books.

Garfield, S. (1992). The outcome problem in psychotherapy: Response . Maidenhead, BRK England: Open University Press.

Gates, G. J. (2011). How Many People are Lesbian, Gay, Bisexual and Transgender?  UCLA: The Williams Institute . https://escholarship.org/uc/item/09h684x2

Gazzaniga, M. S. (1967). The split brain in man. Scientific American, 217 (2), 24-29.

GBD 2017 Disease and Injury Incidence and Prevalence Collaborators (2018). Global, regional, and national incidence, prevalence, and years lived with disability for 354 diseases and injuries for 195 countries and territories, 1990-2017: a systematic analysis for the Global Burden of Disease Study 2017.  Lancet (London, England) ,  392 (10159), 1789–1858. https://doi.org/10.1016/S0140-6736(18)32279-7

Geen, R. G., & Donnerstein, E. (1998). Human aggression: Theories, research, and implications for social policy . San Diego, CA US: Academic Press.

Geen, R. G., Gilbert, D. T., Fiske, S. T., & Lindzey, G. (1998). Aggression and antisocial behavior. In The handbook of social psychology, Vols. 1 and 2 (4th ed., pp. 317-356). New York, NY US: McGraw-Hill.

Geen, R. G., & Thomas, S. L. (1986). The immediate effects of media violence on behavior. Journal of Social Issues, 42 (3), 7-27.

Gelman, S. A., & Markman, E. M. (1986). Categories and induction in young children.  Cognition, 23 (3), 183–209.  https://doi.org/10.1016/0010-0277(86)90034-X

Gellersen, H., & Kedzior, K. (2018). An update of a meta-analysis on the clinical outcomes of deep transcranial magnetic stimulation (DTMS) in major depressive disorder (MDD). Zeitschrift für Psychologie, 226, 30-44. https://doi.org/10.1027/2151-2604/a000320

Gendolla, G. H. E. (2000). On the impact of mood on behavior: An integrative theory and a review. Review of General Psychology, 4 (4), 378-408.

Gentile, D. A., Lynch, P. J., Linder, J. R., & Walsh, D. A. (2004). The effects of violent video game habits on adolescent hostility, aggressive behaviors, and school performance. Journal of Adolescence, 27 (1), 5-22.

Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. Cognitive Science: A Multidisciplinary Journal, 7 (2), 155-170.

Geoffroy, P. A., Schroder, C. M., Reynaud, E., & Bourgin, P. (2019). Efficacy of light therapy versus antidepressant drugs, and of the combination versus monotherapy, in major depressive episodes: A systematic review and meta-analysis.  Sleep medicine reviews ,  48 , 101213. https://doi.org/10.1016/j.smrv.2019.101213

Gershoff, E., & Grogan-Kaylor, A. (2016). Spanking and child outcomes: Old controversies and new meta-analyses. Journal of Family Psychology, 30 (4), 453-469. https://doi.org/10.1037/fam0000191

Gershoff, E. T., Lansford, J. E., Sexton, H. R., Davis-Kean, P., & Sameroff, A. J. (2012). Longitudinal links between spanking and children’s externalizing behaviors in a national sample of White, Black, Hispanic, and Asian American families.  Child development ,  83 (3), 838–843. https://doi.org/10.1111/j.1467-8624.2011.01732.x

Gershoff, E. T., Goodman, G. S., Miller-Perrin, C. L., Holden, G. W., Jackson, Y., & Kazdin, A. E. (2018). The strength of the causal evidence against physical punishment of children and its implications for parents, psychologists, and policymakers.  American Psychologist, 73 (5), 626–638.  https://doi.org/10.1037/amp0000327

Gershoff, E. T. (2002a). Corporal punishment by parents and associated child behaviors and experiences: A meta-analytic and theoretical review. Psychological Bulletin, 128 (4), 539-579.

Gershoff, E. T. (2002b). Corporal punishment, physical abuse, and the burden of proof: Reply to Baumrind, Larzelere, and Cowan (2002), Holden (2002), and Parke (2002). Psychological Bulletin, 128 (4), 602-611.

Gigerenzer, G. (2004). Dread risk, September 11, and fatal traffic accidents. Psychological Science, 15 (4), 286-287.

Gigerenzer, G., Augier, M., & March, J. G. (2004). Striking a blow for sanity in theories of rationality. In Models of a man: Essays in memory of Herbert A. Simon (pp. 389-409). Cambridge, MA US: MIT Press.

Gilbert, D. T., Fiske, S. T., & Lindzey, G. (1998). The handbook of social psychology, Vols. 1 and 2 (4th ed.) . New York, NY US: McGraw-Hill.

Gilbertson, T. A. (1998). Gustatory mechanisms for the detection of fat. Current Opinion in Neurobiology, 8 , 447-452.

Gilbertson, T. A., Damak, S., & Margolskii, R. F. (2000). The molecular physiology of taste transduction. Current Opinion in Neurobiology, 10 , 519-527.

Gill, R., Hadaway, C. K., & Marler, P. L. (1998). Is religious belief declining in Britain? Journal for the Scientific Study of Religion, 37 (3), 507-516.

Gilovich, T. (1991). How we know what isn’t so: The fallibility of human reason in everyday life . New York, NY US: Free Press.

Gladue, B. A., Boechler, M., & McCaul, K. D. (1989). Hormonal response to competition in human males. Aggressive Behavior, 15 (6), 409-422.

Glass, J., Bengtson, V. L., & Dunham, C. C. (1986). Attitude similarity in three-generation families: Socialization, status inheritance, or reciprocal influence? American Sociological Review, 51 (5), 685-698.

Glassner, B. (1999). The culture of fear: Why Americans are afraid of the wrong things . New York, NY US: Basic Books.

Glick, P. C. (1988). Fifty years of family demography: A record of social change. Journal of Marriage & the Family, 50 (4), 861-873.

Gold, P. E., Cahill, L., & Wenk, G. L. (2002). Ginkgo biloba: A cognitive enhancer? Psychological Science in the Public Interest, 3 (1), 2-11.

Goldman-Rakic, P. S., Scalaidhe, S., & Chafee, M. V. (2000). Domain specificity in cognitive systems. In M. S. Gazzaniga (Ed.), The new cognitive neurosciences (2nd ed.). (pp. 733-742). Cambridge, MA US: The MIT Press.

Goldstone, R. L., & Chin, C. (1993). Dishonesty in self-report of copies made: Moral relativity and the copy machine. Basic and Applied Social Psychology, 14 (1), 19-32.

Goleman, D. (1995). Emotional intelligence. New York, NY: Bantam Books, Inc.

Golombok, S., & Tasker, F. (1996). Do parents influence the sexual orientation of their children? Findings from a longitudinal study of lesbian families. Developmental Psychology, 32 (1), 3-11.

Goodall, J. (1999). Reason for Hope: A Spiritual Journey . New York: Warner Books.

Gordon, D. A., Arbuthnot, J., Gustafson, K. E., & McGreen, P. (1988). Home-based behavioral-systems family therapy with disadvantaged juvenile delinquents. American Journal of Family Therapy, 16 (3), 243-255.

Gottman, J. M., & Levenson, R. W. (2002). A two-factor model for predicting when a couple will divorce: Exploratory analyses using 14-year longitudinal data. Family Process, 41 (1), 83-96.

Gray, J. (1993). Men Are From Mars, Women Are From Venus . New York: Harper Collins.

Gray, J. (2008). Why Mars and Venus Collide: Improving Relationships by Understanding How Men and Women Cope Differently with Stress . New York: Harper Paperbacks.

Greco, J. A., & Liberzon, I. (2016). Neuroimaging of Fear-Associated Learning.  Neuropsychopharmacology : official publication of the American College of Neuropsychopharmacology ,  41 (1), 320–334. https://doi.org/10.1038/npp.2015.255

Greenberg, M. J. (1981). The dependence of odor intensity on the hydrophobic properties of molecules. In H. R. Moskowitz & C. B. Warren (Eds.), Odor Quality and Chemical Structure (pp. 177-194). Washington, D.C.: American Chemical Society.

Griffin, J. (2004). Research on Students and Museums: Looking More Closely at the Students in School Groups. Science Education, 88 , S59-s70.

Griggs, R. A. (2017). Milgram’s Obedience Study: A Contentious Classic Reinterpreted. Teaching of Psychology, 44(1), 32–37. https://doi.org/10.1177/0098628316677644

Griggs, R. A., Blyler, J., & Jackson, S. L. (2020, June 11). New Revelations About Rosenhan’s Pseudopatient Study: Scientific Integrity in Remission. Scholarship of Teaching and Learning in Psychology . Advance online publication. http://dx.doi.org/10.1037/stl0000202

Grotevant, H. D., Adams, G. R., Gullotta, T. P., & Montemayor, R. (1992). Assigned and chosen identity components: A process perspective on their integration. In Adolescent identity formation (pp. 73-90). Thousand Oaks, CA US: Sage Publications, Inc.

Grotevant, H. D., & Eisenberg, N. (1998). Adolescent development in family contexts. In Handbook of child psychology, 5th ed.: Vol 3. Social, emotional, and personality development (pp. 1097-1149). Hoboken, NJ US: John Wiley & Sons Inc.

Grotevant, H. D., & Kroger, J. (1993). The integrative nature of identity: Bringing the soloists to sing in the choir. In Discussions on ego identity (pp. 121-146). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.

The EXPRESS Group. (2009). One-year survival of extremely preterm infants after active perinatal care in Sweden. Journal of the American Medical Association, 301 (21), 2225-2233.

Grove, W. M., Zald, D. H., Lebow, B. S., Snitz, B. E., & Nelson, C. (2000). Clinical versus mechanical prediction: A meta-analysis. Psychological Assessment, 12 (1), 19-30.

Grych, J. H., & Fincham, F. D. (1990). Marital conflict and children’s adjustment: A cognitive-contextual framework. Psychological Bulletin, 108 (2), 267-290.

Gura, S. T. (2002). Yoga for stress reduction and injury prevention at work. Work: Journal of Prevention, Assessment & Rehabilitation, 19 (1), 3-7.

Gustafson, G. E., Green, J. A., & Cleland, J. W. (1994). Robustness of individual identity in the cries of human infants. Developmental Psychobiology, 27 (1), 1-9.

Guttmann, A. (2019). Media advertising spending in the United States from 2015 – 2022. Retrieved from https://www.statista.com/statistics/272314/advertising-spending-in-the-us/

Hagekull, B., & Bohlin, G. (1998). Preschool temperament and environmental factors related to the five-factor model of personality in middle childhood. Merrill-Palmer Quarterly, 44 (2), 194-215.

Hagger, M. S., et al. (2016). A multilab preregistered replication of the ego-depletion effect. Perspectives on Psychological Science, 11 , 546–573. DOI: 10.1177/1745691616652873

Hall, J. A. (1984). Nonverbal sex differences: Communication accuracy and expressive style . Baltimore: Johns Hopkins University Press.

Hallett, B., Moghaddam, F. M., & Marsella, A. J. (2004). Dishonest crimes, dishonest language: An argument about terrorism. In Understanding terrorism: Psychosocial roots, consequences, and interventions (pp. 49-67). Washington, DC US: American Psychological Association.

Halpern, A. R. (1986). Memory for tune titles after organized or unorganized presentation. American Journal of Psychology, 99 , 57 – 70.

Hamilton, M., & Yee, J. (1990). Rape knowledge and propensity to rape. Journal of Research in Personality, 24 (1), 111-122.

Hamm, J. V. (2000). Do birds of a feather flock together? The variable bases for African American, Asian American, and European American adolescents’ selection of similar friends. Developmental Psychology, 36 (2), 209-219.

Hamond, N. R., & Fivush, R. (1991). Memories of Mickey Mouse: Young children recount their trip to Disneyworld. Cognitive Development, 6 (4), 433-448.

Haney, C., Banks, C., & Zimbardo, P. (1973). Interpersonal dynamics in a simulated prison. International Journal of Criminology & Penology, 1 (1), 69-97.

Hanna, E., & Meltzoff, A. N. (1993). Peer imitation by toddlers in laboratory, home, and day-care contexts: Implications for social learning and memory. Developmental Psychology, 29 (4), 701-710.

Harman, G. (1978). Studying the chimpanzee’s theory of mind. Behavioral and Brain Sciences, 1, 576-577.

Harris, J. R. (1995). Where is the child’s environment? A group socialization theory of development. Psychological Review, 102 , 458-489.

Harris, J. R. (1998). The Nurture Assumption . New York: Free Press.

Hartshorne, H., & May, M. A. (1928). Studies in deceit. Book I. General methods and results. Book II. Statistical methods and results . Oxford England: Macmillan.

Hartup, W. W., & Laursen, B. (1993). Adolescents and their friends. In Close friendships in adolescence (pp. 3-22). San Francisco, CA US: Jossey-Bass.

Hartup, W. W., & Stevens, N. (1999). Friendships and adaptation across the life span. Current Directions in Psychological Science, 8 (3), 76-79.

Haslam, S. A., Reicher, S. D., & Birney, M. E. (2016). Questioning authority: New perspectives on Milgram’s ‘obedience’ research and its implications for intergroup relations. Current Opinion in Psychology, 11, 6-9. https://doi.org/10.1016/j.copsyc.2016.03.007

Hebert, L. E., Scherr, P. A., Bienias, J. L., Bennett, D. A., & Evans, D. A. (2003). Alzheimer Disease in the US Population. Archives of Neurology, 60 (8), 1119-1122.

Hecht, M. L., Marston, P. J., & Larkey, L. K. (1994). Love ways and relationship quality in heterosexual relationships. Journal of Social and Personal Relationships, 11 (1), 25-43.

Hedley, A. A., Ogden, C. L., Johnson, C. L., Carroll, M. D., Curtin, L. R., & Flegal, K. M. (2004). Prevalence of Overweight and Obesity Among US Children, Adolescents, and Adults, 1999-2002. JAMA: Journal of the American Medical Association, 291 (23), 2847-2850.

Hegde, J., & Van Essen, D. C. (2000). Selectivity for complex shapes in primate visual area V2. Journal of Neuroscience, 20 , RC61:61-66.

Heider, F. (1958). The psychology of interpersonal relations . Hoboken, NJ US: John Wiley & Sons Inc.

Helliwell, J. F., Aknin, L. B., Shiplett, H., Huang, H., & Wang, S. (2018). Social capital and prosocial behavior as sources of well-being. In E. Diener, S. Oishi, & L. Tay (Eds.),  Handbook of well-being . Salt Lake City, UT: DEF Publishers. DOI:nobascholar.com

Heneman, K., Steinberg, F., & Zidenberg-Cherr, S. (2007). Some facts about soy. Nutrition and Health Info-Sheet for Health Professionals.

Hennessy, D. A., Wiesenthal, D. L., & Kohn, P. M. (2000). The influence of traffic congestion, daily hassles, and trait stress susceptibility on state driver stress: An interactive perspective. Journal of Applied Biobehavioral Research, 5 (2), 162-179.

Henrich, J., Heine, S., & Norenzayan, A. (2010). The weirdest people in the world?  Behavioral and Brain Sciences,   33 (2-3), 61-83. doi:10.1017/S0140525X0999152X

Herpertz, S. C., Werth, U., Lucas, G., Qunaibi, M., Schuerkens, A., Kunert, H.-J., et al. (2001). Emotion in criminal offenders with psychopathy and borderline personality disorders. Archives of General Psychiatry, 58 (8), 737-745.

Herman-Stabl, M. A., Stemmler, M., & Petersen, A. C. (1995). Approach and avoidant coping: Implications for adolescent mental health.  Journal of Youth and Adolescence, 24 , 649–665.

Herrnstein, R. J., & Murray, C. A. (1994). The bell curve: Intelligence and class structure in American life . New York, NY US: Free Press.

Hettema, J. M., Neale, M. C., & Kendler, K. S. (2001). A review and meta-analysis of the genetic epidemiology of anxiety disorders.  The American journal of psychiatry ,  158 (10), 1568–1578. https://doi.org/10.1176/appi.ajp.158.10.1568

Hilgard, J., Engelhardt, C. R., Rouder, J. N., Segert, I. L., & Bartholow, B. D. (2019). Null Effects of Game Violence, Game Difficulty, and 2D:4D Digit Ratio on Aggressive Behavior. Psychological Science, 30(4), 606–616. https://doi.org/10.1177/0956797619829688

Hilker, R., Helenius, D., Fagerlund, B., et al. (2018). Heritability of schizophrenia and schizophrenia spectrum based on the nationwide Danish twin register. Biological Psychiatry, 83 (6), 492-498. https://doi.org/10.1016/j.biopsych.2017.08.017

Hilton, N. Z., Harris, G. T., & Rice, M. E. (2000). The functions of aggression by male teenagers. Journal of Personality and Social Psychology, 79 (6), 988-994.

Hinkin, T. R., & Schriesheim, C. A. (1989). Development and application of new scales to measure the French and Raven (1959) bases of social power. Journal of Applied Psychology, 74 (4), 561-567.

Hobbes, T. (1651/2006). Leviathan . Mineola, NY: Dover Publications.

Hobson, A., Clark, P., & Wright, C. (1988). Psychoanalytic dream theory: A critique based upon modern neurophysiology. In Mind, psychoanalysis and science (pp. 277-308). Cambridge, MA US: Basil Blackwell.

Hobson, J. A. (1988). The dreaming brain . New York, NY US: Basic Books.

Hobson, J. A., & McCarley, R. W. (1977). The brain as a dream state generator: An activation-synthesis hypothesis of the dream process. American Journal of Psychiatry, 134 (12), 1335-1348.

Hobson, J. A., Pace-Schott, E. F., & Stickgold, R. (2000). Dreaming and the brain: Toward a cognitive neuroscience of conscious states. Behavioral and Brain Sciences, 23 (6), 793-842.

Hoge, C. W., Castro, C. A., Messer, S. C., McGurk, D., Cotting, D. I., & Koffman, R. L. (2004). Combat Duty in Iraq and Afghanistan, Mental Health Problems, and Barriers to Care. New England Journal of Medicine, 351 (1), 13-22.

Hollander, E. (1999). Managing aggressive behavior in patients with obsessive-compulsive disorder and borderline personality disorder. Journal of Clinical Psychiatry, 60 (15), 38-44.

Holtzworth-Munroe, A., Smutzler, N., & Stuart, G. L. (1998). Demand and withdraw communication among couples experiencing husband violence. Journal of Consulting and Clinical Psychology, 66 (5), 731-743.

Hood, R. W., Hill, P. C., & Spilka, B. (2018). The psychology of religion: An empirical approach . New York, NY: Guilford Press.

House, J. S., Landis, K. R., & Umberson, D. (1988). Social relationships and health. Science, 241 (4865), 540-545.

House, J. S., Landis, K. R., Umberson, D., Salovey, P., & Rothman, A. J. (2003). Social relationships and health. In Social psychology of health (pp. 218-226). New York, NY US: Psychology Press.

Howard, A. (1986). College Experiences and Managerial Performance. Journal of Applied Psychology, 71 (3), 530-552.

Howe, M. L., & Courage, M. L. (1993). On resolving the enigma of infantile amnesia. Psychological Bulletin, 113 (2), 305-326.

Howes, C. (1983). Patterns of friendship. Child Development, 54 (4), 1041-1053.

Howes, C. (1988). Peer interaction of young children. Monographs of the Society for Research in Child Development, 53 (1), 94-94.

Hubel, D. H., & Wiesel, T. N. (1959). Receptive fields of single neurones in the cat’s striate cortex. Journal of Physiology, 148 , 574-591.

Huesmann, L. R., Moise-Titus, J., Podolski, C.-L., & Eron, L. D. (2003). Longitudinal relations between children’s exposure to TV violence and their aggressive and violent behavior in young adulthood: 1977-1992. Developmental Psychology, 39 (2), 201-221.

Hunsberger, B. (1985). Religion, age, life satisfaction, and perceived sources of religiousness: A study of older persons. Journal of Gerontology, 40 (5), 615-620.

Hunt, M. M. (1993). The story of psychology . New York, NY US: Doubleday & Co.

Hunter, J. E., & Hunter, R. F. (1984). Validity and utility of alternative predictors of job performance. Psychological Bulletin, 96 (1), 72-98.

Huntington, S. (1998). The Clash of Civilizations and the Remaking of World Order . New York: Simon & Schuster.

Hurtz, G. M., & Donovan, J. J. (2000). Personality and job performance: The Big Five revisited. Journal of Applied Psychology, 85 (6), 869-879.

Hsu, L. M. (1989). Random sampling, randomization, and equivalence of contrasted groups in psychotherapy outcome research.  Journal of Consulting and Clinical Psychology, 57 (1), 131–137.  https://doi.org/10.1037/0022-006X.57.1.131

Hyde, J. S. (1986). The Sophisticated Second Generation of the Psychology of Gender. PsycCRITIQUES, 31 (5), 355-356.

Hyde, J. S., Fennema, E., & Lamon, S. J. (1990). Gender differences in mathematics performance: A meta-analysis. Psychological Bulletin, 107 (2), 139-155.

Hyde, J. S., & Linn, M. C. (1988). Gender differences in verbal ability: A meta-analysis. Psychological Bulletin, 104 (1), 53-69.

Hyman, G. J., Stanley, R. O., Burrows, G. D., & Horne, D. J. (1986). Treatment effectiveness of hypnosis and behaviour therapy in smoking cessation: A methodological refinement. Addictive Behaviors, 11 (4), 355-365.

Hyman, I. E., Jr., & Pentland, J. (1996). The role of mental imagery in the creation of false childhood memories. Journal of Memory and Language, 35 (2), 101-117.

Iaffaldano, M. T., & Muchinsky, P. M. (1985). Job satisfaction and job performance: A meta-analysis. Psychological Bulletin, 97 (2), 251-273.

Inhelder, B., & Piaget, J. (1958). Adolescent thinking. In The growth of logical thinking: From childhood to adolescence (pp. 334-350). New York, NY US: Basic Books.

Inhelder, B., Piaget, J., Parsons, A., & Milgram, S. (1958). The growth of logical thinking: From childhood to adolescence. New York, NY US: Basic Books.

Isabella, R. A. (1993). Origins of attachment: Maternal interactive behavior across the first year. Child Development, 64 (2), 605-621.

Isen, A. M., & Berkowitz, L. (1987). Positive affect, cognitive processes, and social behavior. In Advances in experimental social psychology, Vol. 20 (pp. 203-253). San Diego, CA US: Academic Press.

Isen, A. M., & Geva, N. (1987). The influence of positive affect on acceptable level of risk: The person with a large canoe has a large worry. Organizational Behavior and Human Decision Processes, 39 (2), 145-154.

Ito, T. A., Miller, N., & Pollock, V. E. (1996). Alcohol and aggression: A meta-analysis on the moderating effects of inhibitory cues, triggering events, and self-focused attention. Psychological Bulletin, 120 (1), 60-82.

Jäkel, S. & Dimou, L. (2017). Glial Cells and Their Function in the Adult Brain: A Journey through the History of Their Ablation. Frontiers in Cellular Neuroscience , 11.   https://doi.org/10.3389/fncel.2017.00024

Jack, R., Garrod, O., & Schyns, P. (2014). Dynamic facial expressions of emotion transmit an evolving hierarchy of signals over time. Current Biology, 24 (2), 187-192. https://doi.org/10.1016/j.cub.2013.11.064

Jackson, S. E., Smith, L., Firth, J., Grabovac, I., Soysal, P., Koyanagi, A., Hu, L., Stubbs, B., Demurtas, J., Veronese, N., Zhu, X., & Yang, L. (2019). Is there a relationship between chocolate consumption and symptoms of depression? A cross‐sectional survey of 13,626 us adults.  Depression and Anxiety . https://doi-org.cod.idm.oclc.org/10.1002/da.22950

Jackson, D. A., Jackson, N. F., Bennett, M. L., Bynum, D. M., & Faryna, E. (1991). Learning to get along: Social effectiveness training for people with developmental disabilities . Champaign, IL US: Research Press.

Jackson, J. S., McCullough, W. R., Gurin, G., & Broman, C. L. (1991). Race identity. In Life in black America (pp. 238-253). Thousand Oaks, CA US: Sage Publications, Inc.

Jackson, L. A., & McGill, O. D. (1996). Body type preferences and body characteristics associated with attractive and unattractive bodies by African Americans and Anglo Americans. Sex Roles, 35 (5), 295-307.

Jackson, S. E., & Schuler, R. S. (1985). A meta-analysis and conceptual critique of research on role ambiguity and role conflict in work settings. Organizational Behavior and Human Decision Processes, 36 (1), 16-78.

Jacobson, S. W., & Frye, K. F. (1991). Effect of maternal social support on attachment: Experimental evidence. Child Development, 62 (3), 572-582.

Jacoby, L. L., & Dallas, M. (1981). On the relationship between autobiographical memory and perceptual learning. Journal of Experimental Psychology: General, 110 (3), 306-340.

James, K., Chen, J., & Goldberg, C. (1992). Organizational conflict and individual creativity. Journal of Applied Social Psychology, 22 (7), 545-566.

James, W. (1890a). Principles of Psychology. New York, NY US: Henry Holt and Company.

James, W. (1890b). The principles of psychology, Vol I . New York, NY US: Henry Holt and Co.

Jamieson, J. P., Peters, B. J., Greenwood, E. J., & Altose, A. J. (2016). Reappraising Stress Arousal Improves Performance and Reduces Evaluation Anxiety in Classroom Exam Situations. Social Psychological and Personality Science, 7(6), 579–587. https://doi.org/10.1177/1948550616644656

Janis, I. (1982). Groupthink: Psychological studies of policy decisions and fiascoes . Boston: Houghton Mifflin.

Jenkins, F. J., Van Houten, B., & Bovbjerg, D. H. (2014). Effects on DNA Damage and/or Repair Processes as Biological Mechanisms Linking Psychological Stress to Cancer Risk.  Journal of applied biobehavioral research ,  19 (1), 3–23. https://doi.org/10.1111/jabr.12019

Jenkins, J. M. (1993). Self-monitoring and turnover: The impact of personality on intent to leave. Journal of Organizational Behavior, 14 (1), 83-91.

Jepson, C., Krantz, D. H., & Nisbett, R. E. (1983). Inductive reasoning: Competence or skill? Behavioral and Brain Sciences, 6 (3), 494-501.

John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling. Psychological Science, 23(5), 524–532. https://doi.org/10.1177/0956797611430953

Johns, M. (2010). A new perspective on sleepiness.  Sleep and Biological Rhythms ,  8 (3), 170-179. doi: 10.1111/j.1479-8425.2010.00450.x

Johnson, D. W. (2000). Reaching Out: Interpersonal Effectiveness and Self-Actualization . Needham Heights, MA: Allyn & Bacon.

Johnson, D. W., & Johnson, F. P. (2003). Joining together: Group theory and group skills (8th ed.). Needham Heights, MA US: Allyn & Bacon.

Johnson, L. J., Zorn, D., Tam, B. K. Y., Lamontagne, M., & Johnson, S. A. (2003). Stakeholders’ views of factors that impact successful interagency collaboration. Exceptional Children, 69 (2), 195-209.

Johnson-Laird, P. N. (1999). Deductive reasoning. Annual Review of Psychology, 50 , 109-135.

Johnson-Laird, P. N., & Oatley, K. (1989). The language of emotions: An analysis of a semantic field. Cognition & Emotion, 3 (2), 81-123.

Johnston, J. M. (1972). Punishment of human behavior. American Psychologist, 27 (11), 1033-1054.

Jones, B. C., Little, A. C., Perrett, D. I., & Shohov, S. P. (2003). Why are symmetrical faces attractive? In Advances in psychology research, Vol. 19 (pp. 145-166). Hauppauge, NY US: Nova Science Publishers.

Jones, E. E., & Berglas, S. (1978). Control of attributions about the self through self-handicapping strategies: The appeal of alcohol and the role of underachievement. Personality and Social Psychology Bulletin, 4 (2), 200-206.

Jones, E. E., & Harris, V. A. (1967). The Attribution of Attitudes. Journal of Experimental Social Psychology, 3 (1), 1-24.

Jones, H. S., & Oswald, I. (1968). Two cases of healthy insomnia. Electroencephalography & Clinical Neurophysiology, 24 (4), 378-380.

Jonides, J., Reuter-Lorenz, P. A., Smith, E. E., Awh, E., Barnes, L. L., Drain, M., et al. (1996). Verbal and spatial working memory in humans. In The psychology of learning and motivation: Advances in research and theory, Vol. 35. (pp. 43-88). San Diego, CA US: Academic Press.

Kagan, J. (1987). Review of Temperament in Clinical Practice. PsycCRITIQUES, 32 (9), 831-831.

Kagan, J., & Eisenberg, N. (1998). Biology and the child Handbook of child psychology, 5th ed.: Vol 3. Social, emotional, and personality development. (pp. 177-235). Hoboken, NJ US: John Wiley & Sons Inc.

Kahneman, D., Krueger, A. B., Schkade, D. A., Schwarz, N., & Stone, A. A. (2004). A Survey Method for Characterizing Daily Life Experience: The Day Reconstruction Method. Science, 306 (5702), 1776-1780.

Kahneman, D., & Tversky, A. (1973). On the psychology of prediction. Psychological Review, 80 (4), 237-251.

Kalat, J. W. (2004). Biological Psychology (8 ed.): Thomson Wadsworth.

Kamarck, T. W., & Jennings, J. R. (1991). Biobehavioral factors in sudden cardiac death. Psychological Bulletin, 109 (1), 42-75.

Kant, I. (2004/1786). Metaphysical Foundations of Natural Science (M. Friedman, Trans.). Cambridge, UK: Cambridge University Press.

Kanzler, K. E., & Ogbeide, S. (2020). Addressing trauma and stress in the COVID-19 pandemic: Challenges and the promise of integrated primary care.  Psychological Trauma: Theory, Research, Practice, and Policy, 12 (S1), S177-S179. http://dx.doi.org/10.1037/tra0000761

Kappelmann, N., Rein, M., Fietz, J., Mayberg, H. S., Craighead, W. E., Dunlop, B. W., Nemeroff, C. B., Keller, M., Klein, D. N., Arnow, B. A., Husain, N., Jarrett, R. B., Vittengl, J. R., Menchetti, M., Parker, G., Barber, J. P., Bastos, A. G., Dekker, J., Peen, J., Keck, M. E., … Kopf-Beck, J. (2020). Psychotherapy or medication for depression? Using individual symptom meta-analyses to derive a Symptom-Oriented Therapy (SOrT) metric for a personalised psychiatry.  BMC medicine ,  18 (1), 170. https://doi.org/10.1186/s12916-020-01623-9

Karpicke, J. D., & Roediger, H. L., III (2008). The critical importance of retrieval for learning. Science, 319(5865), 966–968. https://doi.org/10.1126/science.1152408

Kashima, Y. (2020). Language and language use. In R. Biswas-Diener & E. Diener (Eds),  Noba textbook series: Psychology.  Champaign, IL: DEF publishers. Retrieved from  http://noba.to/gq62cpam

Kasser, T., & Ahuvia, A. (2002). Materialistic values and well-being in business students. European Journal of Social Psychology, 32 (1), 137-146.

Kasser, T., & Ryan, R. M. (1993). A dark side of the American dream: Correlates of financial success as a central life aspiration. Journal of Personality and Social Psychology, 65 (2), 410-422.

Kasser, T., & Ryan, R. M. (1996). Further examining the American dream: Differential correlates of intrinsic and extrinsic goals. Personality and Social Psychology Bulletin, 22 (3), 280-287.

Kassin, S. M. (2017). The Killing of Kitty Genovese: What Else Does This Case Tell Us? Perspectives on Psychological Science, 12(3), 374–381. https://doi.org/10.1177/1745691616679465

Katz, J., Sanger-Katz, M., & Quealy, K. (2020, July 17). A Detailed Map of Who Is Wearing Masks in the US. New York Times.

Kaufmann, D., Gesten, E., Santa Lucia, R. C., Salcedo, O., Rendina-Gobioff, G., & Gadd, R. (2000). The relationship between parenting style and children’s adjustment: The parents’ perspective. Journal of Child and Family Studies, 9 (2), 231-245.

Kawabata, Y., Alink, L., Tseng, W., van IJzendoorn, M., & Crick, N. (2011). Maternal and paternal parenting styles associated with relational aggression in children and adolescents: A conceptual analysis and meta-analytic review.  Developmental Review ,  31 (4), 240-278. doi: 10.1016/j.dr.2011.08.001

Kawana, N., Ishimatsu, S.-i., & Kanda, K. (2001). Psycho-physiological effects of the terrorist sarin attack on the Tokyo subway system. Military Medicine, 166 (12), 23-26.

Kawas, C., Gray, S., Brookmeyer, R., Fozard, J., & Zonderman, A. (2000). Age-specific incidence rates of Alzheimer’s disease: The Baltimore Longitudinal Study of Aging. Neurology, 54 (11), 2072-2077.

Kazdin, A. E., & Benjet, C. (2003). Spanking children: evidence and issues. Current Directions in Psychological Science, 12 (3), 99-103.

Kellogg, R. (1967). Understanding children’s art. Psychology Today, 1 (1), 16-25.

Kernis, M. H., Goldman, B. M., Leary, M. R., & Tangney, J. P. (2003). Stability and variability in self-concept and self-esteem Handbook of self and identity. (pp. 106-127). New York, NY US: Guilford Press.

Kernis, M. H., Grannemann, B. D., & Barclay, L. C. (1989). Stability and level of self-esteem as predictors of anger arousal and hostility. Journal of Personality and Social Psychology, 56 (6), 1013-1022.

Kerr, N. L., & Tindale, R. S. (2004). Group performance and decision making. Annual Review of Psychology, 55 , 623-655.

Kessler, R. C., Sonnega, A., Bromet, E., & Hughes, M. (1995). Posttraumatic stress disorder in the National Comorbidity Survey. Archives of General Psychiatry, 52 (12), 1048-1060.

Kessler, R. C., Stein, M. B., & Berglund, P. (1998). Social phobia subtypes in the National Comorbidity Survey. American Journal of Psychiatry, 155 (5), 613-619.

Kiecolt-Glaser, J. K., Malarkey, W. B., Chee, M., & Newton, T. (1993). Negative behavior during marital conflict is associated with immunological down-regulation. Psychosomatic Medicine, 55 (5), 395-409.

Kier, C., & Lewis, C. (1997). Infant-mother attachment in separated and married families. Journal of Divorce & Remarriage, 26 (3), 185-194.

Kim, H. K., & McKenry, P. C. (2002). The relationship between marriage and psychological well-being: A Longitudinal analysis. Journal of Family Issues, 23 (8), 885-911.

Kim, J., & Hatfield, E. (2004). Love types and subjective well-being: A cross cultural study. Social Behavior and Personality, 32 (2), 173-182.

Kim, N. S., & Ahn, W. (2002). Clinical psychologists’ theory-based representations of mental disorders predict their diagnostic reasoning and memory. Journal of Experimental Psychology: General, 131 (4), 451-476.

Kim, Y., Kasser, T., & Lee, H. (2003). Self-concept, aspirations, and well-being in South Korea and the United States. Journal of Social Psychology, 143 (3), 277-290.

King, A. J., & Schnupp, J. W. H. (2000). Sensory convergence in neural function and development. In M. S. Gazzaniga (Ed.), The New Cognitive Neurosciences (2 ed.). Cambridge, MA: The MIT Press.

King, N. J., Gullone, E., Tonge, B. J., & Ollendick, T. H. (1993). Self-reports of panic attacks and manifest anxiety in adolescents. Behaviour Research and Therapy, 31 (1), 111-116.

Kipnis, D., Schmidt, S. M., Swaffin-Smith, C., & Wilkinson, I. (1984). Patterns of managerial influence: Shotgun managers, tacticians, and bystanders. Organizational Dynamics, 12 (3), 58-67.

Kipnis, D., Schmidt, S. M., & Wilkinson, I. (1980). Intraorganizational influence tactics: Explorations in getting one’s way. Journal of Applied Psychology, 65 (4), 440-452.

Kirby, D. (2002). Effective approaches to reducing adolescent unprotected sex, pregnancy, and childbearing. Journal of Sex Research, 39 (1), 51-57.

Kirkpatrick, L. A., & Shaver, P. R. (1992). An attachment-theoretical approach to romantic love and religious belief. Personality and Social Psychology Bulletin, 18 (3), 266-275.

Kirnan, J., Bragge, J. D., Brecher, E., & Johnson, E. (2001). What race am I? The need for standardization in race question wording. Public Personnel Management, 30 (2), 211-220.

Kitayama, S., Markus, H. R., & Kurokawa, M. (2000). Culture, emotion, and well-being: Good feelings in Japan and the United States. Cognition & Emotion, 14 (1), 93-124.

Kivipelto, M., Mangialasche, F., & Ngandu, T. (2017). Can lifestyle changes prevent cognitive impairment?.  The Lancet. Neurology ,  16 (5), 338–339. https://doi.org/10.1016/S1474-4422(17)30080-7

Kliegl, R., Baltes, P. B., Schooler, C., & Schaie, K. W. (1987). Theory-guided analysis of mechanisms of development and aging through testing-the-limits and research on expertise Cognitive functioning and social structure over the life course. (pp. 95-119). Westport, CT US: Ablex Publishing.

Kluender, K. R., & Jenison, R. L. (1992). Effects of glide slope, noise intensity, and noise duration on the extrapolation of FM glides through noise. Perception & Psychophysics, 51 (3), 231-238.

Kluft, R. P., Tasman, A., & Goldfinger, S. M. (1991). Multiple personality disorder American Psychiatric Press review of psychiatry, Vol. 10. (pp. 161-188). Washington, DC US: American Psychiatric Association.

Knowlton, B. J., Squire, L. R., & Gluck, M. A. (1994). Probabilistic classification learning in amnesia. Learning & Memory, 1 (2), 106-120.

Kochan, T., Bezrukova, K., Ely, R., Jackson, S., Joshi, A., Jehn, K., et al. (2003). The effects of diversity on business performance: Report of the Diversity Research Network. Human Resource Management, 42 (1), 3-21.

Kochanska, G., Friesenborg, A. E., Lange, L. A., & Martel, M. M. (2004). Parents’ Personality and Infants’ Temperament as Contributors to Their Emerging Relationship. Journal of Personality and Social Psychology, 86 (5), 744-759.

Koffel, E., Kuhn, E., Petsoulis, N., Erbes, C. R., Anders, S., Hoffman, J. E., … Polusny, M. A. (2018). A randomized controlled pilot study of CBT-I Coach: Feasibility, acceptability, and potential impact of a mobile phone application for patients in cognitive behavioral therapy for insomnia. Health Informatics Journal, 24(1), 3–13. https://doi.org/10.1177/1460458216656472

Kohlberg, L. (1963). The development of children’s orientations toward a moral order: I. Sequence in the development of moral thought. Vita Humana, 6 (1), 11-33.

Kohn, P. M., & MacDonald, J. E. (1992). Hassles, anxiety, and negative well-being. Anxiety, Stress & Coping: An International Journal, 5 (2), 151-163.

Kohn, P. M., & Macdonald, J. E. (1992). The Survey of Recent Life Experiences: A decontaminated hassles scale for adults. Journal of Behavioral Medicine, 15 (2), 221-236.

Koopman‐Verhoeff, M., Mulder, R., Saletin, J., Reiss, I., Horst, G., & Felix, J. et al. (2020). Genome‐wide DNA methylation patterns associated with sleep and mental health in children: a population‐based study.  Journal of Child Psychology And Psychiatry ,  61 (10), 1061-1069. doi: 10.1111/jcpp.13252

Kovach, K. A., & Cohen, D. J. (1992). The relationship of on-the-job, off-the-job and refresher training to human resource outcomes and variables. Human Resource Development Quarterly, 3 (2), 157-174.

Kramer, A. F., Bherer, L., Colcombe, S. J., Dong, W., & Greenough, W. T. (2004). Environmental Influences on Cognitive and Brain Plasticity During Aging. Journals of Gerontology: Series A: Biological Sciences and Medical Sciences (9), 940-957.

Kraus, G., & Reynolds, D. J. (2001). The ‘A-B-C’s’ of the Cluster B’s: Identifying, understanding, and treating Cluster B personality disorders. Clinical Psychology Review, 21 (3), 345-373.

Kraus, S. J. (1995). Attitudes and the prediction of behavior: A meta-analysis of the empirical literature. Personality and Social Psychology Bulletin, 21 (1), 58-75.

Kring, A. M., & Gordon, A. H. (1998). Sex differences in emotion: Expression, experience, and physiology. Journal of Personality and Social Psychology, 74 (3), 686-703.

Krosnick, J. A., Boninger, D. S., Chuang, Y. C., Berent, M. K., & Carnot, C. G. (1993). Attitude strength: One construct or many related constructs? Journal of Personality and Social Psychology, 65 (6), 1132-1151.

Kumar, K., & Beyerlein, M. (1991). Construction and validation of an instrument for measuring ingratiatory behaviors in organizational settings. Journal of Applied Psychology, 76 (5), 619-627.

Kurtz, J. L. (2018). Affective forecasting. In E. Diener, S. Oishi, & L. Tay (Eds.),  Handbook of well-being . Salt Lake City, UT: DEF Publishers. DOI:nobascholar.com

Kushlev, K., Heintzelman, S. J., Lutes, L. D., Wirtz, D., Kanippayoor, J. M., Leitner, D., & Diener, E. (2020). Does Happiness Improve Health? Evidence From a Randomized Controlled Trial. Psychological Science, 31(7), 807–821. https://doi.org/10.1177/0956797620919673

Kwon, Y., & Lawson, A. E. (2000). Linking brain growth with the development of scientific reasoning ability and conceptual change during adolescence. Journal of Research in Science Teaching, 37 (1), 44-62.

LaBouff, J., Rowatt, W., Johnson, M., Tsang, J., & McCullough Willerton, G. (2012). Humble persons are more helpful than less humble persons: Evidence from three studies. The Journal of Positive Psychology, 7(1), 16-29. https://doi.org/10.1080/17439760.2011.626787

Lachs, L. (2020). Multi-modal perception. In R. Biswas-Diener & E. Diener (Eds),  Noba textbook series: Psychology.  Champaign, IL: DEF publishers. Retrieved from  http://noba.to/cezw4qyn

LaFrance, M., Banaji, M., & Clark, M. S. (1992). Toward a reconsideration of the gender-emotion relationship Emotion and social behavior. (pp. 178-201). Thousand Oaks, CA US: Sage Publications, Inc.

Landauer, T. K. (1986). How much do people remember? Some estimates of the quantity of learned information in long-term memory. Cognitive Science: A Multidisciplinary Journal, 10 (4), 477-493.

Landrum, R. E., & Harrold, R. (2003). What employers want from psychology graduates. Teaching of Psychology, 30 (2), 131-133.

Langer, E. (1992). Matters of mind: Mindfulness/mindlessness in perspective.  Consciousness and Cognition ,  1 (3), 289-305. doi: 10.1016/1053-8100(92)90066-j

LaPiere, R. T., Fazio, R. H., & Petty, R. E. (2008). Attitudes vs. actions Attitudes: Their structure, function, and consequences. (pp. 403-409). New York, NY US: Psychology Press.

Lapsley, D. K. (1993). Toward an integrated theory of adolescent ego development: The “new look” at adolescent egocentrism. American Journal of Orthopsychiatry, 63 , 562-571.

Aknin, L. B., Norton, M. I., & Dunn, E. W. (2009). From wealth to well-being? Money matters, but less than people think. The Journal of Positive Psychology, 4(6), 523-527. https://doi.org/10.1080/17439760903271421

Larson, R., Csikszentmihalyi, M., & Freeman, M. (1984). Alcohol and marijuana use in adolescents’ daily lives: A random sample of experiences. International Journal of the Addictions, 19 (4), 367-381.

Larson, R., & Richards, M. H. (1994). Divergent realities: The emotional lives of mothers, fathers, and adolescents . New York, NY US: Basic Books.

Larson, R. W., & Richards, M. H. (1994). Family emotions: Do young adolescents and their parents experience the same states? Journal of Research on Adolescence, 4 (4), 567-583.

Latané, B., & Bourgeois, M. J. (1996). Experimental evidence for dynamic social impact: The emergence of subcultures in electronic groups. Journal of Communication, 46 (4), 35-47.

Latane, B., & Darley, J. M. (1968). Group inhibition of bystander intervention in emergencies. Journal of Personality and Social Psychology, 10 (3), 215-221.

Latane, B., & Rodin, J. (1969). A lady in distress: Inhibiting effects of friends and strangers on bystander intervention. Journal of Experimental Social Psychology, 5 (2), 189-202.

Lazarus, R. S. (1974). Psychological stress and coping in adaptation and illness. International Journal of Psychiatry in Medicine, 5 (4), 321-333.

Leach, C. W., Spears, R., Branscombe, N. R., & Doosje, B. (2003). Malicious pleasure: Schadenfreude at the suffering of another group. Journal of Personality and Social Psychology, 84 (5), 932-943.

Leana, C. R., Feldman, D. C., & Tan, G. Y. (1998). Predictors of coping behavior after a layoff. Journal of Organizational Behavior, 19 (1), 85-97.

Leary, M. R., & Tangney, J. P. (2003). Handbook of self and identity . New York, NY US: Guilford Press.

LeDoux, J. E. (1995). Emotion: Clues from the brain. Annual Review of Psychology, 46 , 209-235.

LeDoux, J. E., Friedman, M. J., Charney, D. S., & Deutch, A. Y. (1995). Setting ‘stress’ into motion: Brain mechanisms of stimulus evaluation Neurobiological and clinical consequences of stress: From normal adaptation to post-traumatic stress disorder. (pp. 125-134). Philadelphia, PA US: Lippincott Williams & Wilkins Publishers.

Ledoux, J. E., & Gazzaniga, M. S. (1995). In search of an emotional system in the brain: Leaping from fear to emotion and consciousness The cognitive neurosciences. (pp. 1049-1061). Cambridge, MA US: The MIT Press.

LeDoux, J. E., & Gorman, J. M. (2001). A call to action: Overcoming anxiety through active coping. American Journal of Psychiatry, 158 (12), 1953-1955.

Lee, G. R., Seccombe, K., & Shehan, C. L. (1991). Marital status and personal happiness: An analysis of trend data. Journal of Marriage & the Family, 53 (4), 839-844.

Lefevor, G. T., Sorrell, S. A., Virk, H. E., Huynh, K. D., Paiz, J. Y., Stone, W.-M., & Franklin, A. (2019). How do religious congregations affect congregants’ attitudes toward lesbian women and gay men?  Psychology of Religion and Spirituality. Advance online publication.  https://doi.org/10.1037/rel0000290

Leger, D. W., Thompson, R. A., Merritt, J. A., & Benz, J. J. (1996). Adult perception of emotional intensity in human infant cries: Effects of infant age and cry acoustics. Child Development, 67 (6), 3238-3249.

Leiner, H. C., Leiner, A. L., & Dow, R. S. (1995). The underestimated cerebellum. Human Brain Mapping, 2 , 244-254.

Lemann, N. (1999). The Big Test: The Secret History of the American Meritocracy . New York: Farrar, Straus, & Giroux.

Lemme, B. H. (1995). Development in adulthood . Needham Heights, MA US: Allyn & Bacon.

Leonard, A. (2003). Earth to Bill Gates: Thank you. Salon. Retrieved August 31, 2004, from http://www.salon.com/tech/feature/2003/05/09/gates/index_np.html

Leonhardt, D. (2020). The Black-White Wage Gap is as Big as It Was in 1950. New York Times, June 25, 2020.

Lepper, M. R., & Cordova, D. I. (1992). A desire to be taught: Instructional consequences of intrinsic motivation. Motivation and Emotion, 16 (3), 187-208.

Lepper, M. R., Greene, D., & Nisbett, R. E. (1973). Undermining children’s intrinsic interest with extrinsic reward: A test of the ‘overjustification’ hypothesis. Journal of Personality and Social Psychology, 28 (1), 129-137.

Levenson, R. W. (1992). Autonomic nervous system differences among emotions. Psychological Science, 3 (1), 23-27.

Levenson, R. W., & Ekman, P. (2002). Difficulty does not account for emotion-specific heart rate changes in the directed facial action task. Psychophysiology, 39 (3), 397-405.

Levenson, R. W., Ekman, P., & Friesen, W. V. (1990). Voluntary facial action generates emotion-specific autonomic nervous system activity. Psychophysiology, 27 (4), 363-384.

Levenson, R. W., Ekman, P., Heider, K., & Friesen, W. V. (1992). Emotion and autonomic nervous system activity in the Minangkabau of West Sumatra. Journal of Personality and Social Psychology, 62 (6), 972-988.

Levine, R. V. (2020). Persuasion: so easily fooled. In R. Biswas-Diener & E. Diener (Eds),  Noba textbook series: Psychology.  Champaign, IL: DEF publishers. Retrieved from  http://noba.to/y73u6ta8

Levine, R., Sato, S., Hashimoto, T., & Verma, J. (1995). Love and marriage in eleven cultures. Journal of Cross-Cultural Psychology, 26 (5), 554-571.

Levy-Lahad, E., Wijsman, E. M., Nemens, E., & Anderson, L. (1995). A familial Alzheimer’s disease locus on chromosome 1. Science, 269 (5226), 970-977.

Lewis, M., Alessandri, S. M., & Sullivan, M. W. (1992). Differences in shame and pride as a function of children’s gender and task difficulty. Child Development, 63 (3), 630-638.

Lewy, A. J., Bauer, V. K., Cutler, N. L., Sack, R. L., Ahmed, S., Thomas, K. H., et al. (1998). Morning vs evening light treatment of patients with winter depression. Archives of General Psychiatry, 55 (10), 890-896.

Li, M., D’Arcy, C., Li, X., Zhang, T., Joober, R., & Meng, X. (2019). What do DNA methylation studies tell us about depression? A systematic review.  Translational psychiatry ,  9 (1), 68. https://doi.org/10.1038/s41398-019-0412-y

Li, M. M., Yu, J. T., Wang, H. F., Jiang, T., Wang, J., Meng, X. F., Tan, C. C., Wang, C., & Tan, L. (2014). Efficacy of vitamins B supplementation on mild cognitive impairment and Alzheimer’s disease: a systematic review and meta-analysis.  Current Alzheimer research ,  11 (9), 844–852.

Lieberman, J. A., Chakos, M., Wu, H., Alvir, J., Hoffman, E., Robinson, D., et al. (2001). Longitudinal study of brain morphology in first episode schizophrenia. Biological Psychiatry, 49 (6), 487-499.

Lilienfeld, S. O., Kirsch, I., Sarbin, T. R., Lynn, S. J., Chaves, J. F., Ganaway, G. K., et al. (1999). Dissociative identity disorder and the sociocognitive model: Recalling the lessons of the past. Psychological Bulletin, 125 (5), 507-523.

Lilienfeld, S. O., Lynn, S. J., & Lohr, J. M. (2003a). Science and pseudoscience in clinical psychology: Concluding thoughts and constructive remedies Science and pseudoscience in clinical psychology. (pp. 461-465). New York, NY US: Guilford Press.

Lilienfeld, S. O., Lynn, S. J., & Lohr, J. M. (2003b). Science and pseudoscience in clinical psychology: Initial thoughts, reflections, and considerations Science and pseudoscience in clinical psychology. (pp. 1-14). New York, NY US: Guilford Press.

Lilienfeld, S. O., Lynn, S. J., Ruscio, J., & Beyerstein, B. L. (2009). 50 Great myths of popular psychology: shattering widespread misconceptions about human behavior . Wiley-Blackwell.

Lim, J. Y. (1999). Exercise Testing and Prescription for the Senior Population. The Sport Journal, 2 (1).

Lindemann, B. (1996). Taste reception. Physiological Reviews, 76 , 719-766.

Linderman, A. L. (1997). The deaf story: Themes of culture and coping. ProQuest Information & Learning, US.

Lindsay, G. (2020). Attention in Psychology, Neuroscience, and Machine Learning.  Frontiers In Computational Neuroscience ,  14 . doi: 10.3389/fncom.2020.00029

Linville, P. W., Fischer, G. W., & Yoon, C. (1996). Perceived covariation among the features of ingroup and outgroup members: The outgroup covariation effect. Journal of Personality and Social Psychology, 70 (3), 421-436.

Lloyd, M. (2004, July 31). Careers in Psychology Page. Retrieved July 14, 2009, from www.psywww.com/careers

Locke, E. A., Latham, G. P., Kleinbeck, U., Quast, H.-H., Thierry, H., & Häcker, H. (1990). Work motivation: The high performance cycle Work motivation. (pp. 3-25). Hillsdale, NJ England: Lawrence Erlbaum Associates, Inc.

Locke, J. (1693/1995). An Essay Concerning Human Understanding . Amherst, New York: Prometheus Books.

Locksley, A., Ortiz, V., & Hepburn, C. (1980). Social categorization and discriminatory behavior: Extinguishing the minimal intergroup discrimination effect. Journal of Personality and Social Psychology, 39 (5), 773-783.

Loewenstein, G., Frederick, S., Bazerman, M. H., Messick, D. M., Tenbrunsel, A. E., & Wade-Benzoni, K. A. (1997). Predicting reactions to environmental change Environment, ethics, and behavior: The psychology of environmental valuation and degradation. (pp. 52-72). San Francisco, CA US: The New Lexington Press/Jossey-Bass Publishers.

Loewenstein, G., Read, D., & Baumeister, R. (2003). Time and decision: Economic and psychological perspectives on intertemporal choice . New York, NY US: Russell Sage Foundation.

Loftus, E. F., Miller, D. G., & Burns, H. J. (1978). Semantic integration of verbal information into a visual memory. Journal of Experimental Psychology: Human Learning and Memory, 4 (1), 19-31.

Loftus, E. F., & Pickrell, J. E. (1995). The formation of false memories. Psychiatric Annals, 25 (12), 720-725.

Loftus, E. F., Schooler, J. W., & Wagenaar, W. A. (1985). The fate of memory: Comment on McCloskey and Zaragoza. Journal of Experimental Psychology: General, 114 (3), 375-380.

Lohoff F. W. (2010). Overview of the genetics of major depressive disorder.  Current psychiatry reports ,  12 (6), 539–546. https://doi.org/10.1007/s11920-010-0150-6

Lonsway, K. A., & Fitzgerald, L. F. (1995). Attitudinal antecedents of rape myth acceptance: A theoretical and empirical reexamination. Journal of Personality and Social Psychology, 68 (4), 704-711.

Lorberbaum, J. P., Newman, J. D., Dubno, J. R., Horwitz, A. R., Nahas, Z., Teneback, C. C., et al. (1999). Feasibility of using fMRI to study mothers responding to infant cries. Depression and Anxiety, 10 (3), 99-104.

Lorrain, D. S., Riolo, J. V., Matuszewich, L., & Hull, E. M. (1999). Lateral hypothalamic serotonin inhibits nucleus accumbens dopamine: Implications for sexual satiety. Journal of Neuroscience, 19 (17), 7648-7652.

Luchetti, M., Lee, J. H., Aschwanden, D., Sesker, A., Strickhouser, J. E., Terracciano, A., & Sutin, A. R. (2020, June 22). The Trajectory of Loneliness in Response to COVID-19. American Psychologist . Advance online publication. http://dx.doi.org/10.1037/amp0000690

Luck, S. (2014). An Introduction to the Event-Related Potential Technique (2nd ed.). Cambridge, MA: MIT Press.

Luhmann, M., & Intelisano, S. (2018). Hedonic adaptation and the set point for subjective well-being. In E. Diener, S. Oishi, & L. Tay (Eds.), Handbook of well-being. Salt Lake City, UT: DEF Publishers. DOI:nobascholar.com

Luo, Y., Baillargeon, R., Brueckner, L., & Munakata, Y. (2003). Reasoning about a hidden object after a delay: Evidence for robust representations in 5-month-old infants. Cognition, 88 (3), B23-B32.

Lykken, D., & Tellegen, A. (1996). Happiness is a stochastic phenomenon. Psychological Science, 7 (3), 186-189.

Lyons-Ruth, K. (1996). Attachment relationships among children with aggressive behavior problems: The role of disorganized early attachment patterns. Journal of Consulting and Clinical Psychology, 64 (1), 64-73.

Lyons-Ruth, K., & Block, D. (1996). The disturbed caregiving system: Relations among childhood trauma, maternal caregiving, and infant affect and attachment. Infant Mental Health Journal, 17 (3), 257-275.

Lyons-Ruth, K., Connell, D. B., Grunebaum, H. U., & Botein, S. (1990). Infants at social risk: Maternal depression and family support services as mediators of infant development and security of attachment. Child Development, 61 (1), 85-98.

Lyons-Ruth, K., Easterbrooks, M. A., & Cibelli, C. D. (1997). Infant attachment strategies, infant mental lag, and maternal depressive symptoms: Predictors of internalizing and externalizing problems at age 7. Developmental Psychology, 33 (4), 681-692.

Maccoby, E. E. (2000). Perspectives on gender development. International Journal of Behavioral Development, 24 (4), 398-406.

Maccoby, E. E., Laursen, B., & Graziano, W. G. (2002). Gender and social exchange: A developmental perspective Social exchange in development. (pp. 87-105). San Francisco, CA US: Jossey-Bass.

Maccoby, E. E., & Martin, J. A. (1983). Socialization in the context of the family: Parent-child interaction. In P. H. Mussen (Ed.), Handbook of Child Psychology (4th ed., Vol. 4). New York: Wiley.

Macmillan, N. A., & Creelman, C. D. (1991). Detection theory: A user’s guide . New York, NY US: Cambridge University Press.

Maguire, E., Gadian, D., Johnsrude, I., Good, C., Ashburner, J., Frackowiak, R., et al. (2000). Navigation-related structural change in the hippocampi of taxi drivers. Proceedings of the National Academy of Sciences, USA, 97 (8), 4398-4403.

Malenka, R. C., & Nicoll, R. A. (1999). Long-term potentiation–A decade of progress? Science, 285 , 1870-1874.

Mann, J. J., Brent, D. A., & Arango, V. (2001). The neurobiology and genetics of suicide and attempted suicide: A focus on the serotonergic system. Neuropsychopharmacology, 24 (5), 467-477.

Manning, R., Levine, M., & Collins, A. (2007). The Kitty Genovese murder and the social psychology of helping: The parable of the 38 witnesses.  American Psychologist, 62 (6), 555–562.  https://doi.org/10.1037/0003-066X.62.6.555

Marcia, J. E., & Kroger, J. (1993). The relational roots of identity Discussions on ego identity. (pp. 101-120). Hillsdale, NJ England: Lawrence Erlbaum Associates, Inc.

Margolis, S., & Lyubomirsky, S. (2018). Cognitive outlooks and well-being. In E. Diener, S. Oishi, & L. Tay (Eds.), Handbook of well-being. Salt Lake City, UT: DEF Publishers. DOI:nobascholar.com

Markovits, H., & Bouffard-Bouchard, T. (1992). The belief-bias effect in reasoning: The development and activation of competence. British Journal of Developmental Psychology, 10 (3), 269-284.

Markovitz, P. J. (2004). Recent Trends in the Pharmacotherapy of Personality Disorders. Journal of Personality Disorders, 18 (1), 90-101.

Markus, H. (1977). Self-schemata and processing information about the self. Journal of Personality and Social Psychology, 35 (2), 63-78.

Martin, C. B., Herrick, K. A., Sarafrazi, N., & Ogden, C. L. (2018). Attempts to lose weight among adults in the United States, 2013–2016. NCHS Data Brief, no. 313. Hyattsville, MD: National Center for Health Statistics.

Martinez, G. M., & Abma, J. C. (2020). Sexual activity and contraceptive use among teenagers aged 15–19 in the United States, 2015–2017. NCHS Data Brief, no. 366. Hyattsville, MD: National Center for Health Statistics.

Maslow, A. H. (1943). A theory of human motivation. Psychological Review, 50 (4), 370-396.

Masters, W. H., & Johnson, V. E. (1966). Human sexual response . Oxford England: Little, Brown.

Mazur, A., & Lamb, T. A. (1980). Testosterone, status, and mood in human males. Hormones and Behavior, 14 (3), 236-246.

McAdam, R., & Reid, R. (2001). SME and large organisation perceptions of knowledge management: Comparisons and contrasts. Journal of Knowledge Management, 5 (3), 231-241.

McClelland, D. C., Rhinesmith, S., & Kristensen, R. (1975). The effects of power training on community action agencies. Journal of Applied Behavioral Science, 11 (1), 92-115.

McClure, E. B., Monk, C. S., Nelson, E. E., Zarahn, E., Leibenluft, E., Bilder, R. M., et al. (2004). A Developmental Examination of Gender Differences in Brain Engagement During Evaluation of Threat. Biological Psychiatry, 55 (11), 1047-1055.

McColl, R., & Michelotti, M. (2019). Sorry, could you repeat the question? Exploring video‐interview recruitment practice in HRM.  Human Resource Management Journal ,  29 (4), 637-656. doi: 10.1111/1748-8583.12249

McCool, G. (1999). Dalai Lama says “lack of compassion” to blame for violence. World Tibet Network News. Retrieved Feb 7, 2004, from http://www.tibet.ca/wtnarchive/1999/8/12_2.html

McCrae, R. R., Bar-On, R., & Parker, J. D. A. (2000). Emotional intelligence from the perspective of the five-factor model of personality The handbook of emotional intelligence: Theory, development, assessment, and application at home, school, and in the workplace. (pp. 263-276). San Francisco, CA US: Jossey-Bass.

McCrae, R. R., & Costa, P. T. (1985). Comparison of EPI and psychoticism scales with measures of the five-factor model of personality. Personality and Individual Differences, 6 (5), 587-597.

McCrae, R. R., & Costa, P. T. (1987). Validation of the five-factor model of personality across instruments and observers. Journal of Personality and Social Psychology, 52 (1), 81-90.

McCrae, R. R., & Costa, P. T. (1995). Positive and negative valence within the five-factor model. Journal of Research in Personality, 29 (4), 443-460.

McCrae, R. R., & Costa, P. T., Jr. (1997). Personality trait structure as a human universal. American Psychologist, 52 (5), 509-516.

McCrae, R. R., & Costa, P. T., Jr. (2003). Personality in adulthood: A five-factor theory perspective (2nd ed.) . New York, NY US: Guilford Press.

McCrae, R. R., Costa, P. T., Jr., Del Pilar, G. H., Rolland, J.-P., & Parker, W. D. (1998). Cross-cultural assessment of the five-factor model: The Revised NEO Personality Inventory. Journal of Cross-Cultural Psychology, 29 (1), 171-188.

McCrae, R. R., Costa, P. T., Jr., Ostendorf, F., Angleitner, A., Hřebíčková, M., Avia, M. D., et al. (2000). Nature over nurture: Temperament, personality, and life span development. Journal of Personality and Social Psychology, 78 (1), 173-186.

McDaniel, M. A., Whetzel, D. L., Schmidt, F. L., & Maurer, S. D. (1994). The validity of employment interviews: A comprehensive review and meta-analysis. Journal of Applied Psychology, 79 (4), 599-616.

McFarland, M. J., Uecker, J. E., & Regnerus, M. D. (2011). The Role of Religion in Shaping Sexual Frequency and Satisfaction: Evidence from Married and Unmarried Older Adults. Journal of Sex Research, 297–308. https://doi.org/10.1080/00224491003739993

McLoyd, V. C., & Smith, J. (2002). Physical discipline and behavior problems in African American, European American, and Hispanic children: Emotional support as a moderator. Journal of Marriage and Family, 64 (1), 40-53.

Mead, M. (1928). An inquiry into the question of cultural stability in Polynesia . Oxford England: Columbia Univ. Press.

Mead, M. (1940). Character formation in two South Sea societies. Transactions of the American Neurological Association, 66 , 99-103.

Mealey, L., Bridgstock, R., & Townsend, G. C. (1999). Symmetry and perceived facial attractiveness: A monozygotic co-twin comparison. Journal of Personality and Social Psychology, 76 (1), 151-158.

Medina, M., Castleberry, A., & Persky, A. (2017). Strategies for Improving Learner Metacognition in Health Professional Education.  American Journal of Pharmaceutical Education ,  81 (4), 78. doi: 10.5688/ajpe81478

Medin, D. L., Ross, B., & Markman, A. (2001). Cognitive Psychology (3 ed.). Fort Worth, TX: Harcourt College Publishers.

Meehl, P. E. (1954a). Remarks on clinical intuition Clinical versus statistical prediction: A theoretical analysis and a review of the evidence. (pp. 69-82). Minneapolis, MN US: University of Minnesota Press.

Meehl, P. E. (1954b). The special powers of the clinician Clinical versus statistical prediction: A theoretical analysis and a review of the evidence. (pp. 24-28). Minneapolis, MN US: University of Minnesota Press.

Meehl, P. E. (1993). Philosophy of science: Help or hindrance? Psychological Reports, 72 (3), 707-733.

Mehrabian, A. (1972). Nonverbal communication . Oxford England: Aldine-Atherton.

Meltzoff, A. N., & Moore, M. K. (1977). Imitation of facial and manual gestures by human neonates. Science, 198 (4312), 75-78.

Meltzoff, A. N., & Moore, M. K. (1994). Imitation, memory, and the representation of persons. Infant Behavior & Development, 17 (1), 83-99.

Melzack, R., & Wall, P. D. (1965). Pain mechanisms: A new theory. Science, 150 , 971-979.

Mercer, J. (2002). Attachment Therapy: A Treatment without Empirical Support. The Scientific Review of Mental Health Practice, 1 (2), 105-112.

Merritt, J. M., Stickgold, R., Pace-Schott, E., & Williams, J. (1994). Emotion profiles in the dreams of men and women. Consciousness and Cognition: An International Journal, 3 (1), 46-60.

Meyers-Levy, J., & Tybout, A. M. (1989). Schema Congruity as a Basis for Product Evaluation. Journal of Consumer Research, 16 (1), 39-54.

Micallef, J., & Blin, O. (2001). Neurobiology and clinical pharmacology of obsessive-compulsive disorder. Clinical Neuropharmacology, 24 (4), 191-207.

Miceli, M. P., & Cropanzano, R. (1993). Justice and pay system satisfaction Justice in the workplace: Approaching fairness in human resource management. (pp. 257-283). Hillsdale, NJ England: Lawrence Erlbaum Associates, Inc.

Middlebrooks, J. C., Makous, J. C., & Green, D. M. (1989). Directional sensitivity of sound-pressure levels in the human ear canal. Journal of the Acoustical Society of America, 86 (1), 89-108.

Miles, D. R., & Carey, G. (1997). Genetic and environmental architecture on human aggression. Journal of Personality and Social Psychology, 72 (1), 207-217.

Milgram, J. I., & Sciarra, D. J. (1974). Childhood revisited . Oxford England: Macmillan.

Milgram, S. (1963). Behavioral Study of obedience. The Journal of Abnormal and Social Psychology, 67 (4), 371-378.

Milgram, S. (1974). Obedience to Authority . New York: Harper Perennial.

Milgram, S., Bickman, L., & Berkowitz, L. (1969). Note on the drawing power of crowds of different size. Journal of Personality and Social Psychology, 13 (2), 79-82.

Millar, M. G., & Millar, K. U. (1996). The effects of direct and indirect experience on affective and cognitive responses and the attitude–behavior relation. Journal of Experimental Social Psychology, 32 (6), 561-579.

Miller, C. S., Kaspin, J. A., & Schuster, M. H. (1990). The impact of performance appraisal methods on Age Discrimination in Employment Act cases. Personnel Psychology, 43 (3), 555-578.

Miller, G. A. (1956). The magical number seven, plus or minus two: some limits on our capacity for processing information. Psychological Review, 63 (2), 81-97.

Mills, K. I. (2019). Resolution on physical discipline of children by parents. American Psychological Association. https://www.apa.org/news/press/releases/2019/02/physical-discipline

Millon, T., Davis, R., Millon, C., Escovar, L., & Meagher, S. (2000). Personality disorders in modern life . Hoboken, NJ US: John Wiley & Sons Inc.

Miner-Rubino, K., & Cortina, L. M. (2004). Working in a Context of Hostility Toward Women: Implications for Employees’ Well-Being. Journal of Occupational Health Psychology, 9 (2), 107-122.

Mischel, W. (1968). Personality and assessment . Hoboken, NJ US: John Wiley & Sons Inc.

Mischel, W. (2015). The Marshmallow Test. Little, Brown Spark.

Mischel, W., Morf, C. C., Leary, M. R., & Tangney, J. P. (2003). The self as a psycho-social dynamic processing system: A meta-perspective on a century of the self in psychology Handbook of self and identity. (pp. 15-43). New York, NY US: Guilford Press.

Mischel, W., & Pervin, L. A. (1990). Personality dispositions revisited and revised: A view after three decades Handbook of personality: Theory and research. (pp. 111-134). New York, NY US: Guilford Press.

Mitchell, R. W. (1992). Developing concepts in infancy: Animals, self-perception, and two theories of mirror self-recognition. Psychological Inquiry, 3 (2), 127-130.

Moghaddam, F. M., & Marsella, A. J. (2004). Understanding terrorism: Psychosocial roots, consequences, and interventions . Washington, DC US: American Psychological Association.

Mookherjee, H. N. (1997). Marital status, gender, and perception of well-being. Journal of Social Psychology, 137 (1), 95-105.

Moosa, A. N., Jehi, L., Marashly, A., Cosmo, G., Lachhwani, D., Wyllie, E., Kotagal, P., Bingaman, W., & Gupta, A. (2013). Long-term functional outcomes and their predictors after hemispherectomy in 115 children.  Epilepsia ,  54 (10), 1771–1779. https://doi.org/10.1111/epi.12342

Mor Barak, M. E., Findler, L., & Wind, L. H. (2003). Cross-Cultural Aspects of Diversity and Well-Being in the Workplace: An International Perspective. Journal of Social Work Research and Evaluation, 4 (2), 145-169.

Moreland, R. L., & Beach, S. R. (1992). Exposure effects in the classroom: The development of affinity among students. Journal of Experimental Social Psychology, 28 (3), 255-276.

Morfei, M. Z., Hooker, K., Fiese, B. H., & Cordeiro, A. M. (2001). Continuity and change in parenting possible selves: A longitudinal follow-up. Basic and Applied Social Psychology, 23 (3), 217-223.

Morgan, B. L., & Korschgen, A. J. (2008). Majoring in Psych?: Career Options for Psychology Undergraduates (4 ed.). Needham Heights, MA US: Allyn & Bacon.

Morris, N. M., Udry, J. R., Khan-Dawood, F., & Dawood, M. Y. (1987). Marital sex frequency and midcycle female testosterone. Archives of Sexual Behavior, 16 (1), 27-37.

Moshman, D. (1990). Rationality as a goal of education. Educational Psychology Review, 2 (4), 335-364.

Moshman, D. (1994). Reason, reasons and reasoning: A constructivist account of human rationality. Theory & Psychology, 4 (2), 245-260.

Moshman, D., & Damon, W. (1998). Cognitive development beyond childhood Handbook of child psychology: Volume 2: Cognition, perception, and language. (pp. 947-978). Hoboken, NJ US: John Wiley & Sons Inc.

Moshman, D., Demetriou, A., & Efklides, A. (1994). Reasoning, metareasoning, and the promotion of rationality Intelligence, mind, and reasoning: Structure and development. (pp. 135-150). Amsterdam Netherlands: North-Holland/Elsevier Science Publishers.

Moshman, D., & Franks, B. A. (1986). Development of the concept of inferential validity. Child Development, 57 (1), 153-165.

Moynihan, J. A., Brenner, G. J., Cocke, R., Karp, J. D., Breneman, S. M., Dopp, J. M., et al. (1994). Stress-induced modulation of immune function in mice Handbook of human stress and immunity. (pp. 1-22). San Diego, CA US: Academic Press.

Mozel, M. M., Smith, B., Smith, P., Sullivan, R., & Swender, P. (1969). Nasal chemoreception in flavor identification. Archives of Otolaryngology, 90 , 367-373.

Mueller, U., Overton, W. F., & Reene, K. (2001). Development of conditional reasoning: A longitudinal study. Journal of Cognition and Development, 2 (1), 27-49.

Mullen, B., Copper, C., & Driskell, J. E. (1990). Jaywalking as a function of model behavior. Personality and Social Psychology Bulletin, 16 (2), 320-330.

Murnane, K., & Phelps, M. P. (1993). A global activation approach to the effect of changes in environmental context on recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 19 (4), 882-894.

Murphy, G. (2002). The Big Book of Concepts . Cambridge, MA: The MIT Press.

Murphy, W. D., Coleman, E. M., & Haynes, M. R. (1986). Factors related to coercive sexual behavior in a nonclinical sample of males. Violence and Victims, 1 (4), 255-278.

Myers, S. M. (1996). An interactive model of religiosity inheritance: The importance of family context. American Sociological Review, 61 (5), 858-866.

Nasar, S. (1998). A beautiful mind. New York, NY US: Simon & Schuster.

National Center for Education Statistics Report (2001). Retrieved from http://nces.ed.gov/

National Center for Education Statistics Report (2003). Retrieved from http://nces.ed.gov/

National Safety Council (2020). Drivers are falling asleep behind the wheel. Retrieved from https://www.nsc.org/road-safety/safety-topics/fatigued-driving

National Sleep Foundation (2020). Sleep in America Poll 2020. Retrieved from http://www.thensf.org/wp-content/uploads/2020/03/SIA-2020-Report.pdf

Navarra, R., & Gottman, J. (2013). Gottman Method Couple Therapy: From Theory to Practice. In Carson & Casado-Kehoe (Eds.), Case Studies in Couples Therapy: Theory-Based Approaches.

Silvester, J., & Anderson, N. (2003). Technology and Discourse: A Comparison of Face‐to‐face and Telephone Employment Interviews. https://doi.org/10.1111/1468-2389.00244

Nebes, R. D. (1974). Hemispheric specialization in commissurotomized man. Psychological Bulletin, 81 , 1-14.

Neisser, U. (1979). Is Psychology Ready for Consciousness? PsycCRITIQUES, 24 (2), 99-100.

Nelson, K. (1993). The psychological and social origins of autobiographical memory. Psychological Science, 4 (1), 7-14.

Nelson, S. K., Layous, K., Cole, S. W., & Lyubomirsky, S. (2016). Do unto others or treat yourself? The effects of prosocial and self-focused behavior on psychological flourishing.  Emotion, 16 (6), 850–861.  https://doi.org/10.1037/emo0000178

Nelson, T. O., Fehling, M. R., & Moore-Glascock, J. (1979). The nature of semantic savings for items forgotten from long-term memory. Journal of Experimental Psychology: General, 108 (2), 225-250.

NICHD Early Child Care Research Network. (2002). Child-care structure, process, outcome: Direct and indirect effects of child-care quality on young children’s development. Psychological Science, 13 (3), 199-206.

Allhusen, V., Appelbaum, M., Belsky, J., Booth, C. L., Bradley, R., et al. (2001). Nonmaternal care and family factors in early development: An overview of the NICHD Study of Early Child Care. Journal of Applied Developmental Psychology, 22 (5), 457-492.

Neumann, S. A., Waldstein, S. R., Sellers, J. J., III, Thayer, J. F., & Sorkin, J. D. (2004). Hostility and Distraction Have Differential Influences on Cardiovascular Recovery From Anger Recall in Women. Health Psychology, 23 (6), 631-640.

Newell, A., & Simon, H. A. (1972). Human problem solving . Oxford England: Prentice-Hall.

Newport, F., Saad, L., & Moore, D. (1997). Where America Stands . New York: John Wiley & Sons.

Ng, D. M., & Jeffery, R. W. (2003). Relationships Between Perceived Stress and Health Behaviors in a Sample of Working Adults. Health Psychology, 22 (6), 638-642.

Nickerson, C., Schwarz, N., Diener, E., & Kahneman, D. (2003). Zeroing in on the dark side of the American Dream: A closer look at the negative consequences of the goal for financial success. Psychological Science, 14 (6), 531-536.

Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2 (2), 175-220.

Nielsen, S. J., & Popkin, B. M. (2003). Patterns and trends in food portion sizes, 1977-1998. JAMA: Journal of the American Medical Association, 289 (4), 450-453.

Nisbett, R. E., Krantz, D. H., Jepson, C., Kunda, Z., Gilovich, T., Griffin, D., et al. (2002). The use of statistical heuristics in everyday inductive reasoning Heuristics and biases: The psychology of intuitive judgment. (pp. 510-533). New York, NY US: Cambridge University Press.

Nischal, A., Tripathi, A., Nischal, A., & Trivedi, J. K. (2012). Suicide and antidepressants: what current evidence indicates.  Mens sana monographs ,  10 (1), 33–44. https://doi.org/10.4103/0973-1229.87287

Niv, S., Tuvblad, C., Raine, A., & Baker, L. (2013). Aggression and rule-breaking: Heritability and stability of antisocial behavior problems in childhood and adolescence.  Journal Of Criminal Justice ,  41 (5), 285-291. doi: 10.1016/j.jcrimjus.2013.06.014

Nolan, R. P., Spanos, N. P., Hayward, A. A., & Scott, H. A. (1994). The efficacy of hypnotic and nonhypnotic response-based imagery for self-managing recurrent headache. Imagination, Cognition and Personality, 14 (3), 183-201.

Norcross, J. C., Santrock, J. W., Campbell, L. F., Smith, T. P., Sommer, R., & Zuckerman, E. L. (2003). Authoritative guide to self-help resources in mental health (rev. ed.) . New York, NY US: Guilford Press.

Norem, J. K., & Cantor, N. (1986). Anticipatory and post hoc cushioning strategies: Optimism and defensive pessimism in ‘risky’ situations. Cognitive Therapy and Research, 10 (3), 347-362.

Oakes, P., Loukas, M., Oskouian, R. J., & Tubbs, R. S. (2017). The neuroanatomy of depression: A review. Clinical Anatomy, 30(1), 44-49. https://doi.org/10.1002/ca.22781

O’Connor, B. P. (1995). Identity development and perceived parental behavior as sources of adolescent egocentrism. Journal of Youth and Adolescence, 24 (2), 205-227.

O’Connor, T. G., & Croft, C. M. (2001). A twin study of attachment in preschool children. Child Development, 72 (5), 1501-1511.

O’Regan, J. K., & Noe, A. (2001). A sensorimotor account of vision and visual consciousness. Behavioral and Brain Sciences, 24 (5), 939-1031.

O’Reilly, C. A., III, Williams, K. Y., Barsade, S., & Gruenfeld, D. H. (1998). Group demography and innovation: Does diversity help? Composition. (pp. 183-207). US: Elsevier Science/JAI Press.

O’Reilly, C. A., & Weitz, B. A. (1980). Managing marginal employees: The use of warnings and dismissals. Administrative Science Quarterly, 25 (3), 467-484.

Occupational Outlook Handbook, 2004-05 Edition, Psychologists (2004, May 18). Retrieved May 24, 2004, from http://www.bls.gov/oco/ocos056.htm

Ohno, M., Chang, L., Tseng, W., Oakley, H., Citron, M., Klein, W. L., et al. (2006). Temporal memory deficits in Alzheimer’s mouse models: rescue by genetic deletion of BACE1. European Journal of Neuroscience, 23 (1), 251-260.

Okano, K., Kaczmarzyk, J. R., Dave, N., et al. (2019). Sleep quality, duration, and consistency are associated with better academic performance in college students. npj Science of Learning, 4, 16. https://doi.org/10.1038/s41539-019-0055-z

Olds, J. (1958). Satiation effects in self-stimulation of the brain. Journal of Comparative and Physiological Psychology, 51 (6), 675-678.

Olds, J., & Milner, P. (1954). Positive reinforcement produced by electrical stimulation of septal area and other regions of rat brain. Journal of Comparative and Physiological Psychology, 47 (6), 419-427.

Oliveira, P., & Fearon, P. (2019). The biological bases of attachment . Adoption & Fostering , 43(3), 274–293. https://doi.org/10.1177/0308575919867770

Olsen, R. W. (2000). Absinthe and gamma-aminobutyric acid receptors. Proceedings of the National Academy of Sciences of the United States of America, 97 (9), 4417-4418.

Orlansky, M. D., Bonvillian, J. D., Smith, M. D., & Locke, J. L. (1988). Early sign language acquisition The emergent lexicon: The child’s development of a linguistic vocabulary. (pp. 263-292). San Diego, CA US: Academic Press.

Orne, M. T., & Evans, F. J. (1965). Social control in the psychological experiment: Antisocial behavior and hypnosis. Journal of Personality and Social Psychology, 1 (3), 189-200.

Ostroff, C. (1993). Relationships between person–environment congruence and organizational effectiveness. Group & Organization Management, 18 (1), 103-122.

Owen, M. J., O’Donovan, M. C., Plomin, R., DeFries, J. C., Craig, I. W., & McGuffin, P. (2003). Schizophrenia and genetics Behavioral genetics in the postgenomic era. (pp. 463-480). Washington, DC US: American Psychological Association.

Owen, M. T., & Cox, M. J. (1997). Marital conflict and the development of infant-parent attachment relationships. Journal of Family Psychology, 11 (2), 152-164.

Oyserman, D., & Markus, H. (1990). Possible selves in balance: Implications for delinquency. Journal of Social Issues, 46 (2), 141-157.

Ozorak, E. W. (1989). Social and cognitive influences on the development of religious beliefs and commitment in adolescence. Journal for the Scientific Study of Religion, 28 (4), 448-463.

Pack, D. A. (1999). A case study comparing the effects of Student-Directed Testing (S-DT) and Teacher-Directed Testing (T-DT) on test anxiety, academic achievement, and course satisfaction. ProQuest Information & Learning, US.

Palmer, S. E. (1992). Common region: A new principle of perceptual grouping. Cognitive Psychology, 24 (3), 436-447.

Panksepp, J., & Gordon, N. (2003). The instinctual basis of human affect: Affective imaging of laughter and crying. Consciousness & Emotion, 4 (2), 197-205.

Pappa, S., Ntella, V., Giannakas, T., Giannakoulis, V. G., Papoutsi, E., & Katsaounou, P. (2020). Prevalence of depression, anxiety, and insomnia among healthcare workers during the COVID-19 pandemic: A systematic review and meta-analysis.  Brain, behavior, and immunity ,  88 , 901–907. https://doi.org/10.1016/j.bbi.2020.05.026

Patihis, L., Ho, L. Y., Tingen, I. W., Lilienfeld, S. O., & Loftus, E. F. (2014). Are the “memory wars” over? A scientist-practitioner gap in beliefs about repressed memory.  Psychological science ,  25 (2), 519–530. https://doi.org/10.1177/0956797613510718

Parkinson, J. A., Willoughby, P. J., Robbins, T. W., & Everitt, B. J. (2000). Disconnection of the anterior cingulate cortex and nucleus accumbens core impairs Pavlovian approach behavior: Further evidence for limbic cortical-ventral striatopallidal systems. Behavioral Neuroscience, 114 (1), 42-63.

Parks, R., & Haskins, J. (1999). Rosa Parks: My Story . New York: Puffin.

Pase, M. P., Scholey, A. B., Pipingas, A., Kras, M., Nolidin, K., Gibbs, A., Wesnes, K., & Stough, C. (2013). Cocoa polyphenols enhance positive mood states but not cognitive performance: a randomized, placebo-controlled trial.  Journal of psychopharmacology (Oxford, England) ,  27 (5), 451–458. https://doi.org/10.1177/0269881112473791

Patterson, G. R., & Fagot, B. I. (1967). Selective responsiveness to social reinforcers and deviant behavior in children. Psychological Record, 17 (3), 369-378.

Paul, P. (2003, Sept 1, 2003). We’re Just Friends. Really! Time .

Pearce, J. L., Stevenson, W. B., & Perry, J. L. (1985). Managerial compensation based on organizational performance: A time series analysis of the effects of merit pay. Academy of Management Journal, 28 (2), 261-278.

Peisner-Feinberg, E. S., Burchinal, M. R., Clifford, R., M., Culkin, M. L., Howes, C., Kagan, S. L., et al. (2001). The Relation of Preschool Child-Care Quality to Children’s Cognitive and Social Developmental Trajectories through Second Grade. Child Development, 72 (5), 1534-1553.

Pelley, V. (2018). Meet the Scientists Who Haven’t Given Up On Spanking. https://www.fatherly.com/parenting/meet-scientists-havent-given-spanking/

Pelletier, C. L. (2004). The Effect of Music on Decreasing Arousal Due to Stress: A Meta-Analysis. Journal of Music Therapy, 41 (3), 192-214.

Pelletier, J. G., & Paré, D. (2004). Role of Amygdala Oscillations in the Consolidation of Emotional Memories. Biological Psychiatry, 55 (6), 559-562.

Penfield, W. (1955). The permanent record of the stream of consciousness. Acta Psychologica, 11 , 47-69.

Penfield, W., & Perot, P. (1963). The brain’s record of auditory and visual experience. Brain, 86 , 595-596.

Pennebaker, J. W., & Berkowitz, L. (1989). Confession, inhibition, and disease Advances in experimental social psychology, Vol. 22. (pp. 211-244). San Diego, CA US: Academic Press.

Perl, E. R. (1984). Characterization of nociceptors and their activation of neurons in the superficial dorsal horn: First steps for the sensation of pain. In L. Kruger & J. C. Liebeskind (Eds.), Neural Mechanisms of Pain (pp. 23-52). New York: Raven Press.

Perry, G., Brannigan, A., Wanner, R. A., & Stam, H. (2020). Credibility and Incredulity in Milgram’s Obedience Experiments: A Reanalysis of an Unpublished Test. Social Psychology Quarterly, 83(1), 88–106. https://doi.org/10.1177/0190272519861952

Peterson, C., & Gillham, J. E. (2000). Optimistic explanatory style and health The science of optimism and hope: Research essays in honor of Martin E. P. Seligman. (pp. 145-161). West Conshohocken, PA US: Templeton Foundation Press.

Peterson, L., & Peterson, M. J. (1959). Short-term retention of individual verbal items. Journal of Experimental Psychology, 58 (3), 193-198.

Petitto, L. A., & Marentette, P. F. (1991). Babbling in the manual mode: Evidence for the ontogeny of language. Science, 251 (5000), 1493-1496.

Petrie, K. J., Fontanilla, I., Thomas, M. G., Booth, R. J., & Pennebaker, J. W. (2004). Effect of Written Emotional Expression on Immune Function in Patients With Human Immunodeficiency Virus Infection: A Randomized Trial. Psychosomatic Medicine, 66 (2), 272-275.

Pettigrew, T. F. (1997). Generalized intergroup contact effects on prejudice. Personality and Social Psychology Bulletin, 23 (2), 173-185.

Pettigrew, T. F., Eberhardt, J. L., & Fiske, S. T. (1998). Prejudice and discrimination on the college campus Confronting racism: The problem and the response. (pp. 263-279). Thousand Oaks, CA US: Sage Publications, Inc.

Pettigrew, T. F., & Meertens, R. W. (1995). Subtle and blatant prejudice in western Europe. European Journal of Social Psychology, 25 (1), 57-75.

Petty, R. E., Cacioppo, J. T., & Goldman, R. (1981). Personal involvement as a determinant of argument-based persuasion. Journal of Personality and Social Psychology, 41 (5), 847-855.

Phinney, J. S. (1990). Ethnic identity in adolescents and adults: Review of research. Psychological Bulletin, 108 (3), 499-514.

Phinney, J. S., & Alipuria, L. L. (1990). Ethnic identity in college students from four ethnic groups. Journal of Adolescence, 13 (2), 171-183.

Phinney, J. S., Ferguson, D. L., & Tate, J. D. (1997). Intergroup attitudes among ethnic minority adolescents: A causal model. Child Development, 68 (5), 955-969.

Phinney, J. S., Ong, A., & Madden, T. (2000). Cultural values and intergenerational value discrepancies in immigrant and non-immigrant families. Child Development, 71 (2), 528-539.

Piaget, J. (1942). Les trois structures fondamentales de la vie psychique: rythme, regulation et groupement [The three fundamental structures of psychic life: Rhythm, regulation, and grouping]. Psychologie v Ekonomickic Praxi, 1 , 9-21.

Piaget, J. (1957). Logic and psychology . Oxford England: Basic Books.

Piaget, J. (1970). Science of education and the psychology of the child. Trans. D. Coltman . Oxford England: Orion.

Piaget, J. (1972). Essay on operative logic (2nd ed.). Paris, France: Dunod.

Piaget, J. (1972). Intellectual evolution from adolescence to adulthood. Human Development, 15 (1), 1-12.

Pickering, M., & Garrod, S. (2013). An integrated theory of language production and comprehension.  Behavioral and Brain Sciences ,  36 (4), 329-347. doi: 10.1017/s0140525x12001495

Pickering, M. J., & Garrod, S. (2004). Toward a mechanistic psychology of dialogue.  The Behavioral and Brain Sciences ,  27 (2), 169–226. https://doi.org/10.1017/s0140525x04000056

Pierce, J. L., Gardner, D. G., Dunham, R. B., & Cummings, L. L. (1993). Moderation by organization-based self-esteem of role condition-employee response relationships. Academy of Management Journal, 36 (2), 271-288.

Pietschnig, J., Penke, L., Wicherts, J. M., Zeiler, M., & Voracek, M. (2015). Meta-analysis of associations between human brain volume and intelligence differences: How strong are they and what do they mean?.  Neuroscience and biobehavioral reviews ,  57 , 411–432. https://doi.org/10.1016/j.neubiorev.2015.09.017

Pinel, J. P. J. (2003). Biopsychology (5th ed.). Allyn and Bacon.

Pinker, S. (1994). The language instinct . New York: HarperCollins.

Pinker, S. (2002). The Blank Slate: The Modern Denial of Human Nature . New York: Viking.

Pishva, E., Creese, B., Smith, A., Viechtbauer, W., Proitsi, P., & van den Hove, D. et al. (2020). Psychosis-associated DNA methylomic variation in Alzheimer’s disease cortex.  Neurobiology of Aging ,  89 , 83-88. doi: 10.1016/j.neurobiolaging.2020.01.001

Pittman, T. S., & Pittman, N. L. (1980). Deprivation of control and the attribution process. Journal of Personality and Social Psychology, 39 (3), 377-389.

Plomin, R. (2018). Blueprint: How DNA Makes Us Who We Are . Cambridge, MA. The MIT Press.

Podsakoff, P. M., & Schriescheim, C. A. (1985). Field studies of French and Raven’s bases of power: Critique, reanalysis, and suggestions for future research. Psychological Bulletin, 97 (3), 387-411.

Polivy, J., & Herman, C. P. (2002). Causes of eating disorders. Annual Review of Psychology, 53 (1), 187-213.

Pope, H. G., Jr., Kouri, E. M., & Hudson, J. I. (2000). Effects of supraphysiologic doses of testosterone on mood and aggression in normal men: A randomized controlled trial. Archives of General Psychiatry, 57 (2), 133-140.

Porsch, R. M., Middeldorp, C. M., Cherny, S. S., Krapohl, E., van Beijsterveldt, C. E., Loukola, A., Korhonen, T., Pulkkinen, L., Corley, R., Rhee, S., Kaprio, J., Rose, R. R., Hewitt, J. K., Sham, P., Plomin, R., Boomsma, D. I., & Bartels, M. (2016). Longitudinal heritability of childhood aggression.  American journal of medical genetics. Part B, Neuropsychiatric genetics : the official publication of the International Society of Psychiatric Genetics ,  171 (5), 697–707. https://doi.org/10.1002/ajmg.b.32420

Posthuma, D., De Geus, E. J. C., Baare, W. F. C., Pol, H. E. H., Kahn, R. S., & Boomsma, D. I. (2002). The association between brain volume and intelligence is of genetic origin. Nature Neuroscience, 5 , 83-84.

Povinelli, D. J., & Vonk, J. (2003). Chimpanzee minds: Suspiciously human? Trends in Cognitive Sciences, 7 (4), 157-160.

Powell, L., & Self, W. R. (2004). Personalized Fear, Personalized Control, and Reactions to the September 11 Attacks. North American Journal of Psychology, 6 (1), 55-70.

Pratkanis, A. R., & Turner, M. E. (1994). Of what value is a job attitude? A socio-cognitive analysis. Human Relations, 47 (12), 1545-1576.

Preuss, T. (2000). Preface: From Basic Uniformity to Diversity in Cortical Organization. Brain, Behavior and Evolution, 55 (6), 283-286.

Prochaska, J. O. (1979). Systems of psychotherapy: A transtheoretical analysis . Oxford England: Dorsey.

Prochaska, J. O., DiClemente, C. C., & Norcross, J. C. (1992). In search of how people change: Applications to addictive behaviors. American Psychologist, 47 (9), 1102-1114.

Pronin, E., Lin, D. Y., & Ross, L. (2002). The bias blind spot: Perceptions of bias in self versus others. Personality and Social Psychology Bulletin, 28 (3), 369-381.

Pronin, E., Puccio, C., Ross, L., Gilovich, T., Griffin, D., & Kahneman, D. (2002). Understanding misunderstanding: Social psychological perspectives. In Heuristics and biases: The psychology of intuitive judgment (pp. 636-665). New York, NY US: Cambridge University Press.

Quattrone, G. A. (1982). Overattribution and unit formation: When behavior engulfs the person. Journal of Personality and Social Psychology, 42 (4), 593-607.

Ragins, B. R., & Cornwell, J. M. (2001). Pink triangles: Antecedents and consequences of perceived workplace discrimination against gay and lesbian employees. Journal of Applied Psychology, 86 (6), 1244-1261.

Rahim, M. A., & Afza, M. (1993). Leader power, commitment, satisfaction, compliance, and propensity to leave a job among U.S. accountants. Journal of Social Psychology, 133 (5), 611-625.

Raine, A., Stoff, D. M., Breiling, J., & Maser, J. D. (1997). Antisocial behavior and psychophysiology: A biosocial perspective and a prefrontal dysfunction hypothesis. In Handbook of antisocial behavior (pp. 289-304). Hoboken, NJ US: John Wiley & Sons Inc.

Rainville, P., Duncan, G. H., Price, D. D., Carrier, B., & Bushnell, M. C. (1997). Pain affect encoded in human anterior cingulate but not somatosensory cortex. Science, 277 (5328), 968-971.

Rajaram, S., & Roediger, H. L. (1993). Direct comparison of four implicit memory tests. Journal of Experimental Psychology: Learning, Memory, and Cognition, 19 (4), 765-776.

Rakic, P., Levitt, P., Chalupa, L. M., Wefers, C. J., Bourgeois, J.-P., Goldman-Rakic, P. S., et al. (2000). Development. In The new cognitive neurosciences (2nd ed., pp. 7-115). Cambridge, MA US: The MIT Press.

Ramirez, J. M. (2003). Hormones and aggression in childhood and adolescence. Aggression and Violent Behavior, 8 (6), 621-644.

Ramirez, J. M., & Andreu, J. M. (2003). Aggression’s typologies. Revue Internationale de Psychologie Sociale, 16 (3), 145-159.

Ratcliff, R., & McKoon, G. (1997). A counter model for implicit priming in perceptual word identification. Psychological Review, 104 (2), 319-343.

Ratliff-Crain, J., & Klopfleisch, K. (2005). Studying for Introductory Psychology Exams: Lessons Learned from Successful and Unsuccessful Students . Paper presented at the Annual Conference of the Midwestern Psychological Association.

Rebar, A. L., Stanton, R., & Geard, D. (2015). A meta-meta-analysis of the effect of physical activity on depression and anxiety in non-clinical adult populations. Health Psychology Review, 9, 366-378. https://doi.org/10.1080/17437199.2015.1022901

Reicher, S. D., Haslam, S. A., & Smith, J. R. (2012). Working Toward the Experimenter: Reconceptualizing Obedience Within the Milgram Paradigm as Identification-Based Followership. Perspectives on Psychological Science, 7(4), 315–324. https://doi.org/10.1177/1745691612448482

Rechtschaffen, A., Gilliland, M. A., Bergmann, B. M., & Winter, J. B. (1983). Physiological correlates of prolonged sleep deprivation in rats. Science, 221 (4606), 182-184.

Reifman, A. S., Larrick, R. P., & Fein, S. (1991). Temper and temperature on the diamond: The heat-aggression relationship in major league baseball. Personality and Social Psychology Bulletin, 17 (5), 580-585.

Reilly, M. E., Lott, B., Caldwell, D., & DeLuca, L. (1992). Tolerance for sexual harassment related to self-reported sexual victimization. Gender & Society, 6 (1), 122-138.

Reinisch, J. M. (1981). Prenatal exposure to synthetic progestins increases potential for aggression in humans. Science, 211 (4487), 1171-1173.

Reinisch, J. M., Sanders, S. A., Nemeroff, C. B., & Loosen, P. T. (1987). Behavioral influences of prenatal hormones. In Handbook of clinical psychoneuroendocrinology (pp. 431-448). New York, NY US: Guilford Press.

Rentschler, C. A. (2011). An urban physiognomy of the 1964 Kitty Genovese murder. Space and Culture, 14 (3), 310-329.

Rescorla, R. A., Bower, G. H., & Spence, J. T. (1969). Information variables in Pavlovian conditioning. In The psychology of learning and motivation. Oxford England: Academic Press.

Rhodes, G., & Zebrowitz, L. A. (2002). Facial attractiveness: Evolutionary, cognitive, and social perspectives. Westport, CT US: Ablex Publishing.

Richards, M., Hardy, R., & Wadsworth, M. E. J. (2003). Does active leisure protect cognition? Evidence from a national birth cohort. Social Science & Medicine, 56 (4), 785-792.

Richardson, K. & Joseph, J. (2019). Hail the polygenic republic: Critical review of  Plomin, R. ( 2018). Blueprint: How DNA makes us who we are.  Cambridge, MA: MIT Press. British Journal of Psychology, 111(1) 148-150.

Rieger, G., Linsenmeier, J. A. W., Gygax, L., & Bailey, J. M. (2008). Sexual orientation and childhood gender nonconformity: Evidence from home videos.  Developmental Psychology, 44 (1), 46–58.  https://doi.org/10.1037/0012-1649.44.1.46

Riemann, R., Angleitner, A., & Strelau, J. (1997). Genetic and environmental influences on personality: A study of twins reared together using the self- and peer-report NEO-FFI scales. Journal of Personality, 65 (3), 449-475.

Riess, M., & Schlenker, B. R. (1977). Attitude change and responsibility avoidance as modes of dilemma resolution in forced-compliance situations. Journal of Personality and Social Psychology, 35 (1), 21-30.

Riordan, C. M., & Shore, L. M. (1997). Demographic diversity and employee attitudes: An empirical examination of relational demography within work units. Journal of Applied Psychology, 82 (3), 342-358.

Robitaille, A., Muniz, G., Lindwall, M., Piccinin, A., Hoffman, L., Johansson, B., & Hofer, S. (2014). Physical activity and cognitive functioning in the oldest old: within- and between-person cognitive activity and psychosocial mediators.  European Journal Of Ageing ,  11 (4), 333-347. doi: 10.1007/s10433-014-0314-z

Roediger, H. L., & McDermott, K. B. (1993). Implicit memory in normal human subjects. In H. Spinnler & F. Boller (Eds.), Handbook of Neuropsychology . Amsterdam: Elsevier.

Rogaev, E. I., Sherrington, R., Rogeaeva, E. A., & Levesque, G. (1995). Familial Alzheimer’s disease in kindreds with missense mutations in a gene on chromosome 1 related to the Alzheimer’s disease type 3 gene. Nature, 376 (6543), 775-778.

Rogers, C. R. (1951a). Perceptual reorganization in client-centered therapy. In Perception: An approach to personality (pp. 307-327). Oxford England: Ronald.

Rogers, C. R. (1951b). Where are we going in clinical psychology? Journal of Consulting Psychology, 15 (3), 171-177.

Rogers, T. B. (1980). Models of man: The beauty and/or the beast? Personality and Social Psychology Bulletin, 6 (4), 582-590.

Roggman, L. A., Langlois, J. H., Hubbs-Tait, L., & Rieser-Danner, L. A. (1994). Infant day-care, attachment, and the ‘file drawer problem.’. Child Development, 65 (5), 1429-1443.

Rolls, B. J., Roe, L. S., Kral, T. V. E., Meengs, J. S., & Wall, D. E. (2004). Increasing the portion size of a packaged snack increases energy intake in men and women. Appetite, 42 (1), 63-69.

Roncesvalles, M. N. C., Woollacott, M. H., & Jensen, J. L. (2001). Development of lower extremity kinetics for balance control in infants and young children. Journal of Motor Behavior, 33 (2), 180-192.

Roof, W. C., & Hadaway, C. (1979). Denominational Switching in the Seventies: Going Beyond Stark and Glock. Journal for the Scientific Study of Religion, 18 , 363-379.

Rosellini, R. A., & Seligman, M. E. (1975). Frustration and learned helplessness. Journal of Experimental Psychology: Animal Behavior Processes, 1 (2), 149-157.

Ross, C. A., & Ellason, J. (1999). Comment on the effectiveness of treatment for dissociative identity disorder. Psychological Reports, 84 (3), 1109-1110.

Ross, L., & Anderson, C. A. (1982). Shortcomings in the attribution process: On the origins and maintenance of erroneous social assessments. In D. Kahneman, P. Slovic & A. Tversky (Eds.), Judgment Under Uncertainty: Heuristics and Biases . Cambridge, UK: Cambridge University Press.

Ross, M. W., Paulsen, J. A., & Stålström, O. W. (1988). Homosexuality and mental health: A cross-cultural review. Journal of Homosexuality, 15 (1), 131-152.

Rothbart, M. K., Ahadi, S. A., & Evans, D. E. (2000). Temperament and personality: Origins and outcomes. Journal of Personality and Social Psychology, 78 (1), 122-135.

Rothbart, M. K., Bates, J. E., & Eisenberg, N. (1998). Temperament. In Handbook of child psychology, 5th ed.: Vol 3. Social, emotional, and personality development (pp. 105-176). Hoboken, NJ US: John Wiley & Sons Inc.

Rothbaum, B. O., Hodges, L., Anderson, P. L., Price, L., & Smith, S. (2002). Twelve-month follow-up of virtual reality and standard exposure therapies for the fear of flying. Journal of Consulting and Clinical Psychology, 70 (2), 428-432.

Rovee-Collier, C., Schechter, A., Shyi, G. C., & Shields, P. J. (1992). Perceptual identification of contextual attributes and infant memory retrieval. Developmental Psychology, 28 (2), 307-318.

Rovee-Collier, C. K., & Fagen, J. W. (1981). The retrieval of memory in early infancy. Advances in Infancy Research, 1 , 225-254.

Rowatt, W. C., & Kirkpatrick, L. A. (2002). Two dimensions of attachment to God and their relation to affect, religiosity, and personality constructs. Journal for the Scientific Study of Religion, 41 (4), 637-651.

Rowe, D. C., Borkowski, J. G., Ramey, S. L., & Bristol-Power, M. (2002). What twin and adoption studies reveal about parenting. In Parenting and the child’s world: Influences on academic, intellectual, and social-emotional development (pp. 21-34). Mahwah, NJ US: Lawrence Erlbaum Associates Publishers.

Røysamb, E., & Nes, R. B. (2018). The genetics of wellbeing. In E. Diener, S. Oishi, & L. Tay (Eds.), Handbook of well-being. Salt Lake City, UT: DEF Publishers. DOI:nobascholar.com

Rubin, K. H., Bukowski, W., Parker, J. G., & Eisenberg, N. (1998). Peer interactions, relationships, and groups. In Handbook of child psychology, 5th ed.: Vol 3. Social, emotional, and personality development (pp. 619-700). Hoboken, NJ US: John Wiley & Sons Inc.

Rubin, K. H., Lynch, D., Coplan, R., & Rose-Krasnor, L. (1994). ‘Birds of a feather . . .’: Behavioral concordances and preferential personal attraction in children. Child Development, 65 (6), 1778-1785.

Ruble, D. N., Martin, C. L., & Eisenberg, N. (1998). Gender development. In Handbook of child psychology, 5th ed.: Vol 3. Social, emotional, and personality development (pp. 933-1016). Hoboken, NJ US: John Wiley & Sons Inc.

Rudolph, U., Crestani, F., Benke, D., Brunig, I., Benson, J. A., Fritschy, J. M., et al. (1999). Benzodiazepine actions mediated by specific γ-aminobutyric acid A receptor subtypes. Nature, 401, 796-800.

Ruffin, C. L. (1993). Stress and health: Little hassles vs. major life events. Australian Psychologist, 28 (3), 201-208.

Rushton, J. P., & Ankney, C. D. (1996). Brain size and cognitive ability: Correlations with age, sex, social class, and race. Psychonomic Bulletin & Review, 3 (1), 21-36.

Rusting, C. L., & DeHart, T. (2000). Retrieving positive memories to regulate negative mood: Consequences for mood-congruent memory. Journal of Personality and Social Psychology, 78 (4), 737-752.

Rutter, M. (1998). Developmental catch-up, and deficit, following adoption after severe global early privation. Journal of Child Psychology and Psychiatry, 39 (4), 465-476.

Rutter, M., Stoff, D. M., Breiling, J., & Maser, J. D. (1997). Antisocial behavior: Developmental psychopathology perspectives. In Handbook of antisocial behavior (pp. 115-124). Hoboken, NJ US: John Wiley & Sons Inc.

Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55 (1), 68-78.

Saari, L. M., Johnson, T. R., McLaughlin, S. D., & Zimmerle, D. M. (1988). A survey of management training and education practices in U.S. companies. Personnel Psychology, 41 (4), 731-743.

Sagi, A., Koren-Karie, N., Gini, M., Ziv, Y., & Joels, T. (2002). Shedding Further Light on the Effects of Various Types and Quality of Early Child Care on Infant-Mother Attachment Relationship: The Haifa Study of Early Child Care. Child Development, 73 (4), 1166-1186.

Salamone, J. D., & Correa, M. (2002). Motivational views of reinforcement: Implications for understanding the behavioral functions of nucleus accumbens dopamine. Behavioural Brain Research, 137 (1), 3-25.

Salari, N., Hosseinian-Far, A., Jalali, R., Vaisi-Raygani, A., Rasoulpoor, S., Mohammadi, M., Rasoulpoor, S., & Khaledi-Paveh, B. (2020). Prevalence of stress, anxiety, depression among the general population during the COVID-19 pandemic: a systematic review and meta-analysis.  Globalization and health ,  16 (1), 57. https://doi.org/10.1186/s12992-020-00589-w

Salgado, J. F., Moscoso, S., & Lado, M. (2003). Test-retest reliability of ratings of job performance dimensions in managers. International Journal of Selection and Assessment, 11 (1), 98-101.

Salovey, P., & Mayer, J. (1990). Emotional Intelligence. Imagination, Cognition, and Personality, 9 , 185-211.

Salovey, P., Rothman, A. J., Detweiler, J. B., & Steward, W. T. (2000). Emotional states and physical health. American Psychologist, 55 (1), 110-121.

Salthouse, T. A. (1991). Mediation of adult age differences in cognition by reductions in working memory and speed of processing. Psychological Science, 2 (3), 179-183.

Salthouse, T. A. (1996). The processing-speed theory of adult age differences in cognition. Psychological Review, 103 (3), 403-428.

Salthouse, T. A., & Babcock, R. L. (1991). Decomposing adult age differences in working memory. Developmental Psychology, 27 (5), 763-776.

Sameroff, A. J., & Haith, M. M. (Eds.), (1996).  The five to seven year shift . Chicago, IL: University of Chicago Press.

Sander, L. B., Schorndanner, J., Terhorst, Y., Spanhel, K., Pryss, R., Baumeister, H., & Messner, E.-M. (2020). ‘Help for trauma from the app stores?’ a systematic review and standardised rating of apps for Post-Traumatic Stress Disorder (PTSD).  European Journal of Psychotraumatology, 11 (1), Article 1701788.  https://doi.org/10.1080/20008198.2019.1701788

Sanders, S. A., & Reinisch, J. M. (1999). Would you say you ‘had sex’ if . . . ? JAMA: Journal of the American Medical Association, 281 (3), 275-277.

Santarelli, L., Saxe, M., Gross, C., Surget, A., Battaglia, F., Dulawa, S., et al. (2003). Requirement of Hippocampal Neurogenesis for the Behavioral Effects of Antidepressants. Science, 301 (5634), 805-809.

Santor, D. A., Messervey, D., & Kusumakar, V. (2000). Measuring peer pressure, popularity, and conformity in adolescent boys and girls: Predicting school performance, sexual attitudes, and substance abuse. Journal of Youth and Adolescence, 29 (2), 163-182.

Sapadin, L. A. (1988). Friendship and gender: Perspectives of professional men and women. Journal of Social and Personal Relationships, 5 (4), 387-403.

Sapolsky, R. (1999). Glucocorticoids, stress, and their adverse neurological effects: relevance to aging. Experimental Gerontology, 34 (6), 721-732.

Sapolsky, R. (2004). Why Zebras Don’t Get Ulcers (3rd ed.). New York: Henry Holt.

Sargent, L. D., & Sue-Chan, C. (2001). Does Diversity Affect Group Efficacy? The Intervening Role of Cohesion and Task Interdependence. Small Group Research, 32 (4), 426-450.

Saudino, K. J., Cherny, S. S., Emde, R. N., & Hewitt, J. K. (2001). Sources of continuity and change in observed temperament. In Infancy to early childhood: Genetic and environmental influences on developmental change (pp. 89-110). New York, NY US: Oxford University Press.

Schachter, S., & Singer, J. (1962). Cognitive, social, and physiological determinants of emotional state. Psychological Review, 69 (5), 379-399.

Schacter, D. (1996). Illusory memories: A cognitive neuroscience analysis. Proceedings of the National Academy of Sciences of the United States of America, 93 (24), 13527-13534.

Schacter, D. L. (1998). Memory and awareness. Science, 280 (5360), 59-60.

Schacter, D. L. (2001). The seven sins of memory: How the mind forgets and remembers . Boston, MA US: Houghton, Mifflin and Company.

Schacter, D. L., Norman, K. A., Koutstaal, W., & Bjorklund, D. F. (2000). The cognitive neuroscience of constructive memory. In False-memory creation in children and adults: Theory, research, and implications (pp. 129-168). Mahwah, NJ US: Lawrence Erlbaum Associates Publishers.

Schacter, D. L., & Tulving, E. (1994). Memory Systems 1994 . Cambridge: MIT Press.

Schaie, K. W., Birren, J. E., Schaie, K. W., Abeles, R. P., Gatz, M., & Salthouse, T. A. (1996). Intellectual development in adulthood. In Handbook of the psychology of aging (4th ed., pp. 266-286). San Diego, CA US: Academic Press.

Schardt, D. (2004, January/February). Dr. Phil’s Pills. Nutrition Action Healthletter, 5.

Schick, T., & Vaughn, L. (1999). How to Think About Weird Things (2nd ed.). Mountain View, CA: Mayfield Publishing Company.

Schimmack, U. (2020). A meta-psychological perspective on the decade of replication failures in social psychology.  Canadian Psychology/Psychologie canadienne, 61 (4), 364–376.  https://doi.org/10.1037/cap0000246

Schippers, M. C., Den Hartog, D. N., Koopman, P. L., & Wienk, J. A. (2003). Diversity and team outcomes: The moderating effects of outcome interdependence and group longevity and the mediating effect of reflexivity. Journal of Organizational Behavior, 24 (6), 779-802.

Schkade, D. A., & Kahneman, D. (1998). Does living in California make people happy? A focusing illusion in judgments of life satisfaction. Psychological Science, 9 (5), 340-346.

Schmaltz, R., Jansen, E., & Wenckowski, N. (2017). Redefining Critical Thinking: Teaching Students to Think like Scientists.  Frontiers in Psychology ,  8 . doi: 10.3389/fpsyg.2017.00459

Schmidt, F. L., & Hunter, J. E. (1984). A within setting empirical test of the situational specificity hypothesis in personnel selection. Personnel Psychology, 37 (2), 317-326.

Schmidt, C. K., Khalid, S., Loukas, M., & Tubbs, R. S. (2018). Neuroanatomy of Anxiety: A Brief Review.  Cureus ,  10 (1), e2055. https://doi.org/10.7759/cureus.2055

Schmidt, S. M., & Kipnis, D. (1984). Managers’ pursuit of individual and organizational goals. Human Relations, 37 (10), 781-794.

Schmitt, D. P. (2003). Universal sex differences in the desire for sexual variety: Tests from 52 nations, 6 continents, and 13 islands. Journal of Personality and Social Psychology, 85 (1), 85-104.

Schmitt, D. P. (2004). The Price of Mr. Right: Evolutionary Biology and Modern Mating Markets. PsycCRITIQUES, 49 (4), 478-480.

Schneider, W., Gruber, H., Gold, A., & Opwis, K. (1993). Chess expertise and memory for chess positions in children and adults. Journal of Experimental Child Psychology, 56 (3), 328-349.

Schneider-Rosen, K., & Cicchetti, D. (1991). Early self-knowledge and emotional development: Visual self-recognition and affective reactions to mirror self-images in maltreated and non-maltreated toddlers. Developmental Psychology, 27 (3), 471-478.

Schoenemann, P. T., Budinger, T. F., Sarich, V. M., & Wang, W. S. Y. (2000). Brain size does not predict general cognitive ability within families. Proceedings of the National Academy of Sciences, USA, 97 , 4932-4937.

Scholte, W. F., Olff, M., Ventevogel, P., de Vries, G.-J., Jansveld, E., Lopes Cardozo, B., et al. (2004). Mental Health Symptoms Following War and Repression in Eastern Afghanistan. JAMA: Journal of the American Medical Association, 292 (5), 585-593.

Schomerus, G., et al. (2012). Evolution of public attitudes about mental illness: A systematic review and meta-analysis. Acta Psychiatrica Scandinavica, 125, 440-452. https://doi.org/10.1111/j.1600-0447.2012.01826.x

Schredl, M. (2000a). Continuity between waking life and dreaming: Are all waking activities reflected equally often in dreams? Perceptual and Motor Skills, 90 (3), 844-846.

Schredl, M. (2000b). Time series analysis in dream research. Perceptual and Motor Skills, 91 (3), 915-916.

Schwartz, C. E., Wright, C. I., Shin, L. M., Kagan, J., & Rauch, S. L. (2003). Inhibited and Uninhibited Infants “Grown Up”: Adult Amygdalar Response to Novelty. Science, 300 , 1952-1953.

Schwarzwald, J., Amir, Y., & Crain, R. L. (1992). Long-term effects of school desegregation experiences on interpersonal relations in the Israeli Defense Forces. Personality and Social Psychology Bulletin, 18 (3), 357-368.

Scollon, C. N., Diener, E., Oishi, S., & Biswas-Diener, R. (2004). Emotions Across Cultures and Methods. Journal of Cross-Cultural Psychology, 35 (3), 304-326.

Scott, T. R., & Stricker, E. M. (1990). Gustatory control of food selection. In Neurobiology of food and fluid intake (pp. 243-263). New York, NY US: Plenum Press.

Sege, R. (2018). AAP policy opposes corporal punishment, draws on recent evidence.  Child Abuse & Neglect . https://www.aappublications.org/news/2018/11/05/discipline110518

Segerstrom, S. C., & Miller, G. E. (2004). Psychological Stress and the Human Immune System: A Meta-Analytic Study of 30 Years of Inquiry.  Psychological Bulletin, 130 (4), 601–630.  https://doi.org/10.1037/0033-2909.130.4.601

Sehlmeyer, C., Schöning, S., Zwitserlood, P., Pfleiderer, B., Kircher, T., Arolt, V., & Konrad, C. (2009). Human Fear Conditioning and Extinction in Neuroimaging: A Systematic Review.  Plos ONE ,  4 (6), e5865. doi: 10.1371/journal.pone.0005865

Segrin, C., & Nabi, R. L. (2002). Does television viewing cultivate unrealistic expectations about marriage? Journal of Communication, 52 (2), 247-263.

Seligman, M. E., & Maier, S. F. (1967). Failure to escape traumatic shock. Journal of Experimental Psychology, 74 (1), 1-9.

Seligman, M. E. P. (1993). What You Can Change. . .And What You Can’t . New York: Fawcett Books.

Seligman, M. E. P. (2002). Authentic happiness: Using the new positive psychology to realize your potential for lasting fulfillment . New York, NY US: Free Press.

Seligman, M. E. P. (2003). Positive psychology: Fundamental assumptions. The Psychologist, 16 (3), 126-127.

Selye, H. (1956). Stress and psychiatry. American Journal of Psychiatry, 113 , 423-427.

Shanahan, T. L., Kronauer, R. E., Duffy, J. F., Williams, G. H., & Czeisler, C. A. (1999). Melatonin rhythm observed throughout a three-cycle bright-light stimulus designed to reset the human circadian pacemaker. Journal of Biological Rhythms, 14 (3), 237-253.

Shanahan, T. L., Zeitzer, J. M., & Czeisler, C. A. (1997). Resetting the melatonin rhythm with light in humans. Journal of Biological Rhythms, 12 (6), 556-567.

Shatz, M., & Gelman, R. (1973). The development of communication skills: Modifications in the speech of young children as a function of listener. Monographs of the Society for Research in Child Development, 38 (5), 1-37.

Sherman, M. (2004). STDs unevenly high in teens, young adults. AP Wire report. Retrieved February 25, 2004.

Shermer, M. (2002). Why people believe weird things: Pseudoscience, superstition, and other confusions of our time . New York, NY US: W H Freeman/Times Books/ Henry Holt & Co.

Sherwin, B. B. (1988). A comparative analysis of the role of androgen in human male and female sexual behavior: Behavioral specificity, critical thresholds, and sensitivity. Psychobiology, 16 (4), 416-425.

Shoda, Y., Mischel, W., & Peake, P. K. (1990). Predicting adolescent cognitive and self-regulatory competencies from preschool delay of gratification: Identifying diagnostic conditions. Developmental Psychology, 26 (6), 978-986.

Shonkoff, J. P., & Phillips, D. A. (2000). From neurons to neighborhoods: The science of early childhood development . Washington, DC US: National Academy Press.

Showers, C. J., Zeigler-Hill, V., Leary, M. R., & Tangney, J. P. (2003). Organization of self-knowledge: Features, functions, and flexibility. In Handbook of self and identity (pp. 47-67). New York, NY US: Guilford Press.

Sieff, E. M., Dawes, R. M., & Loewenstein, G. (1999). Anticipated versus actual reaction to HIV test results. American Journal of Psychology, 112 (2), 297-311.

Siegel, J. M. (2001). The REM sleep-memory consolidation hypothesis. Science, 294 (5544), 1058-1063.

Silva, M. (2002). The effectiveness of school-based sex education programs in the promotion of abstinent behavior: A meta-analysis. Health Education Research, 17 (4), 471-481.

Simon, N. G., Kaplan, J. R., Hu, S., Register, T. C., & Adams, M. R. (2004). Increased aggressive behavior and decreased affiliative behavior in adult male monkeys after long-term consumption of diets rich in soy protein and isoflavones. Hormones & Behavior, 45 (4), 278.

Simons, D. J., & Chabris, C. F. (1999). Gorillas in our midst: Sustained inattentional blindness for dynamic events. Perception, 28 (9), 1059-1074.

Singer, W. (2000). Why use more than one electrode at a time? Trends in Biochemical Sciences – Regular Edition. New Technologies, 25 (12), 12-18.

Singh-Manoux, A., Richards, M., & Marmot, M. (2003). Leisure activities and cognitive function in middle age: Evidence from the Whitehall II study. Journal of Epidemiology & Community Health, 57 (11), 907-913.

Slater, A., Mattock, A., Brown, E., & Burnham, D. (1991). Visual processing of stimulus compounds in newborn infants. Perception, 20 (1), 29-33.

Sleek, S., Rosenberg, S., Cowley, G., Begley, S., Gottman, J., Silver, N., et al. (2000). Unit 5: Social life, the Internet, and romantic relationships. In Annual editions: Social psychology 2000/2001 (4th ed., pp. 106-125). Guilford, CT US: Dushkin/McGraw-Hill.

Smith, E. A., Udry, J. R., & Morris, N. M. (1985). Pubertal development and friends: A biosocial explanation of adolescent sexual behavior. Journal of Health and Social Behavior, 26 (3), 183-192.

Smith, E. E., Jonides, J., & Koeppe, R. A. (1996). Dissociating verbal and spatial working memory using PET. Cerebral Cortex, 6 (1), 11-20.

Smith, J. C., & Joyce, C. A. (2004). Mozart versus New Age Music: Relaxation States, Stress, and ABC Relaxation Theory. Journal of Music Therapy, 41 (3), 215-224.

Smith, R. A. (2002). Challenging your preconceptions: Thinking critically about psychology (2nd ed.) . Belmont, CA US: Wadsworth/Thomson Learning.

Smith, S. M. (1979). Remembering in and out of context. Journal of Experimental Psychology: Human Learning and Memory, 5 (5), 460-471.

Smith, T. W. (1998). American Sexual Behavior: Trends, Socio-Demographic Differences, and Risk Behavior. GSS Topical Report No. 25   Retrieved February 11, 2004, from http://cloud9.norc.uchicago.edu/dlib/t-25.htm

Smith, S. E. (2020). Is evolutionary psychology possible? Biological Theory, 15 (1), 39-49. https://doi.org/10.1007/s13752-019-00336-4

Smith, S. M., Glenberg, A., & Bjork, R. A. (1978). Environmental context and human memory. Memory & Cognition, 6, 342-353. https://doi.org/10.3758/BF03197465

Smoller, J. W., Block, S. R., & Young, M. M. (2009). Genetics of anxiety disorders: the complex road from DSM to DNA.  Depression and anxiety ,  26 (11), 965–975. https://doi.org/10.1002/da.20623

Smyer, M. A., Schaie, K. W., & Kapp, M. B. (1996). Older adults’ decision-making and the law . New York, NY US: Springer Publishing Co.

Smyth, J. M. (1998). Written emotional expression: Effect sizes, outcome types, and moderating variables. Journal of Consulting and Clinical Psychology, 66 (1), 174-184.

Soderstrom, N. C., Kerr, T. K., & Bjork, R. A. (2016). The Critical Importance of Retrieval—and Spacing—for Learning. Psychological Science, 27(2), 223–230. https://doi.org/10.1177/0956797615617778

Soung, N. K., & Kim, B. Y. (2015). Psychological stress and cancer. Journal of Analytical Science and Technology, 6, 30. https://doi.org/10.1186/s40543-015-0070-5

Spalding, T. W., Lyon, L. A., Steel, D. H., & Hatfield, B. D. (2004). Aerobic exercise training and cardiovascular reactivity to psychological stress in sedentary young normotensive men and women. Psychophysiology, 41 (4), 552-562.

Spanos, N. P. (1996). Hypnosis: Mythology versus reality. In Multiple identities & false memories: A sociocognitive perspective (pp. 17-28). Washington, DC US: American Psychological Association.

Spanos, N. P., Burgess, C. A., Wallace-Capretta, S., & Ouaida, N. (1996). Simulation, surreptitious observation and the modification of hypnotizability: Two tests of the compliance hypothesis. Contemporary Hypnosis, 13 (3), 161-176.

Spanos, N. P., Gabora, N. J., & Hyndford, C. (1991). Expectations and interpretations in hypnotic responding. Australian Journal of Clinical & Experimental Hypnosis, 19 (2), 87-96.

Spanos, N. P., Liddy, S. J., Baxter, C. E., & Burgess, C. A. (1994). Long-term and short-term stability of behavioral and subjective indexes of hypnotizability. Journal of Research in Personality, 28 (3), 301-313.

Spanos, N. P., Weekes, J. R., & Bertrand, L. D. (1985). Multiple personality: A social psychological perspective. Journal of Abnormal Psychology, 94 (3), 362-376.

Spaulding, W. D., Poland, J. S., Corrigan, P. W., & Penn, D. L. (2001). Cognitive rehabilitation for schizophrenia: Enhancing social cognition by strengthening neurocognitive functioning. In Social cognition and schizophrenia (pp. 217-247). Washington, DC US: American Psychological Association.

Spear, B. A. (2002). Adolescent growth and development. Journal of the American Dietetic Association (retrieved January 5, 2004).

Sperling, G. (1960). The information available in brief visual presentations. Psychological Monographs, 74 (11), 1-29.

Spiegel, D., & Kato, P. M. (1996). Psychosocial influences on cancer incidence and progression. Harvard Review of Psychiatry, 4 (1), 10-26.

Spiegel, M. (1981). Applied Differential Equations . Englewood Cliffs, NJ: Prentice-Hall.

Spilka, B., Hood, R. W., Hunsberger, B., & Gorsuch, R. (2003). The psychology of religion: An empirical approach (3rd ed.) . New York, NY US: Guilford Press.

Squire, L. R., & Knowlton, B. J. (2000). The medial temporal lobe, the hippocampus, and the memory systems of the brain. In M. S. Gazzaniga (Ed.), The New Cognitive Neurosciences (2 ed.). Cambridge, MA: The MIT Press.

Sroufe, L. A., Egeland, B., Carlson, E. A., & Collins, W. A. (2005). The development of the person: The Minnesota study of risk and adaptation from birth to adulthood . New York, NY US: Guilford Publications.

Sroufe, L. A., Egeland, B., Carlson, E. A., Collins, W. A., & Laursen, B. (1999). One social world: The integrated development of parent-child and peer relationships Relationships as developmental contexts. (pp. 241-261). Mahwah, NJ US: Lawrence Erlbaum Associates Publishers.

Sroufe, L. A., & Fleeson, J. (1986). Attachment and the construction of relationships. In W. W. Hartup & Z. Rubin (Eds.), Relationships and Development . Hillsdale, NJ: Erlbaum.

Stachyra, M. F. (1993). Psychological differentiation, identity development, and pregnancy risk in late adolescent college women. ProQuest Information & Learning, US.

Stage, F. K., Okun, M. A., Stock, W. A., & George, L. K. (1984). Analyzing educational data from a cohort-sequential design: An illustration using GPA. Journal of Experimental Education, 52 (4), 227-230.

Stangor, C., Ford, T. E., Stroebe, W., & Hewstone, M. (1992). Accuracy and expectancy-confirming processing orientations and the development of stereotypes and prejudice. In European review of social psychology, Vol. 3 (pp. 57-89). Oxford England: John Wiley & Sons.

Stanovich, K. E. (2004). The robot’s rebellion: Finding meaning in the age of Darwin . Chicago, IL US: University of Chicago Press.

Stanovich, K. E. (2019). How to think straight about psychology (11th ed.). Hoboken, NJ: Pearson Education.

Stanovich, K., & West, R. (2000). Individual differences in reasoning: Implications for the rationality debate?  Behavioral and Brain Sciences,   23 (5), 645-665. doi:10.1017/S0140525X00003435

Staub, E., Moghaddam, F. M., & Marsella, A. J. (2004). Understanding and responding to group violence: Genocide, mass killing, and terrorism. In Understanding terrorism: Psychosocial roots, consequences, and interventions (pp. 151-168). Washington, DC US: American Psychological Association.

Stavans, M., Lin, Y., Wu, D., & Baillargeon, R. (2019). Catastrophic individuation failures in infancy: A new model and predictions.  Psychological Review, 126 (2), 196–225.  https://doi.org/10.1037/rev0000136

Steele, C. M., & Aronson, J. (1995). Stereotype threat and the intellectual test performance of African Americans. Journal of Personality and Social Psychology, 69 (5), 797-811.

Steele, C. M., & Southwick, L. (1985). Alcohol and social behavior: I. The psychology of drunken excess. Journal of Personality and Social Psychology, 48 (1), 18-34.

Steenhuis, I., & Poelman, M. (2017). Portion Size: Latest Developments and Interventions.  Current obesity reports ,  6 (1), 10–17. https://doi.org/10.1007/s13679-017-0239-x

Stein, N. L., & Levine, L. J. (1989). The causal organisation of emotional knowledge: A developmental study. Cognition & Emotion, 3 (4), 343-378.

Steinberg, L., Mounts, N. S., Lamborn, S. D., & Dornbusch, S. M. (1991). Authoritative parenting and adolescent adjustment across varied ecological niches. Journal of Research on Adolescence, 1 (1), 19-36.

Steinhausen, H. C., Willms, J., & Spohr, H. (1994). Correlates of psychopathology and intelligence in children with Fetal Alcohol Syndrome. Journal of Child Psychology and Psychiatry, 35 (2), 323-331.

Sternberg, R. J. (1986). A triangular theory of love. Psychological Review, 93 (2), 119-135.

Sternberg, R. J. (1996). Successful Intelligence . New York: Plume.

Sternberg, R. J. (1998). Love is a story: A new theory of relationships . New York, NY US: Oxford University Press.

Sternberg, R. J. (2003). American Psychological Association 111th Convention Opening Ceremony Address. Toronto, Canada.

Sternberg, R. J., Wagner, R. K., Williams, W. M., & Horvath, J. A. (1995). Testing common sense. American Psychologist, 50 (11), 912-927.

Stith, S. M., & Farley, S. C. (1993). A predictive model of male spousal violence. Journal of Family Violence, 8 (2), 183-201.

Stork, S., & Musseler, J. (2004). Perceived localizations and eye movements with action-generated and computer-generated vanishing points of moving stimuli. Visual Cognition, 11 (2), 299-314.

Strack, F., Martin, L. L., & Stepper, S. (1988). Inhibiting and facilitating conditions of the human smile: A nonobtrusive test of the facial feedback hypothesis. Journal of Personality and Social Psychology, 54 (5), 768-777.

Straus, M. A., & Stewart, J. H. (1999). Corporal punishment by American parents: National data on prevalence, chronicity, severity and duration, in relation to child and family characteristics. Clinical Child and Family Psychology Review, 2, 55-70.

Strayer, F. F. (1990). The social ecology of toddler play groups and the origins of gender discrimination. In F. F. Strayer (Ed.), Social Interaction and Behavioral Development During Early Childhood . Montreal: La Maison D’Ethologie de Montreal.

Streitmatter, J. L. (1988). Ethnicity as a mediating variable of early adolescent identity development. Journal of Adolescence, 11 (4), 335-346.

Sulin, R. A., & Dooling, D. J. (1974). Intrusion of a thematic idea in retention of prose. Journal of Experimental Psychology, 103 (2), 255-262.

Sun, W., & Dietrich, D. (2013). Synaptic integration by NG2 cells.  Frontiers in cellular neuroscience ,  7 , 255. https://doi.org/10.3389/fncel.2013.00255

The Sleep Judge. (2019). Sunday scaries. Retrieved from https://www.thesleepjudge.com/sunday-scaries/

Sur, M., Angelucci, A., & Sharma, J. (1999). Rewiring cortex: The role of patterned activity in development and plasticity of neocortical circuits. Journal of Neurobiology, 41 (1), 33-43.

Swain, J. E., Lorberbaum, J. P., Kose, S., & Strathearn, L. (2007). Brain basis of early parent-infant interactions: Psychology, physiology, and in vivo functional neuroimaging studies. Journal of Child Psychology and Psychiatry, 48 (3), 262-287.

Swets, J. A., Dawes, R. M., & Monahan, J. (2000). Psychological science can improve diagnostic decisions. Psychological Science in the Public Interest, 1 (1), 1-26.

Swim, J. K., Aikin, K. J., Hall, W. S., & Hunter, B. A. (1995). Sexism and racism: Old-fashioned and modern prejudices. Journal of Personality and Social Psychology, 68 (2), 199-214.

Tajfel, H., Billig, M. G., Bundy, R. P., & Flament, C. (1971). Social categorization and intergroup behaviour. European Journal of Social Psychology, 1 (2), 149-178.

Tajfel, H., & Turner, J. C. (1986). The social identity theory of intergroup behavior. In S. Worchel & W. Austin (Eds.), Psychology of Intergroup Relations . Chicago: Nelson-Hall.

Tamir, M., Robinson, M. D., Clore, G. L., Martin, L. L., & Whitaker, D. J. (2004). Are we puppets on a string? The contextual meaning of unconscious expressive cues. Personality and Social Psychology Bulletin, 30 (2), 237-249.

Tanaka, K. (1996). Inferotemporal cortex and object vision. Annual Review of Neuroscience, 19 , 109-139.

Tanaka, K., Sugita, Y., Moriya, M., & Saito, H. (1993). Analysis of object motion in the ventral part of the medial superior temporal area of the macaque visual cortex. Journal of Neurophysiology (69), 128-142.

Tanner, W. P., Jr., & Swets, J. A. (1954). A decision-making theory of visual detection. Psychological Review, 61 (6), 401-409.

Taylor, P. J., & Small, B. (2002). Asking applicants what they would do versus what they did do: A meta-analytic comparison of situational and past behaviour employment interview questions. Journal of Occupational and Organizational Psychology, 75 (3), 277-294.

Taylor, S. E., Klein, L. C., Lewis, B. P., Gruenewald, T. L., Gurung, R. A. R., & Updegraff, J. A. (2000). Biobehavioral responses to stress in females: Tend-and-befriend, not fight-or-flight. Psychological Review, 107 (3), 411-429.

Teghtsoonian, R. (1971). On the exponents in Stevens’ law and the constant in Ekman’s law. Psychological Review, 78 (1), 71-80.

Terman, J. S., Terman, M., Lo, E.-S., & Cooper, T. B. (2001). Circadian time of morning light administration and therapeutic response in winter depression. Archives of General Psychiatry, 58 (1), 69-75.

Terman, M., Terman, J. S., & Ross, D. C. (1998). A controlled trial of timed bright light and negative air ionization for treatment of winter depression. Archives of General Psychiatry, 55 (10), 875-882.

Terpstra, D. E., Mohamed, A. A., & Rozell, E. J. (1996). A model of human resource information, practice choice, and organizational outcomes. Human Resource Management Review, 6 (1), 25-46.

Tesser, A., Leary, M. R., & Tangney, J. P. (2003). Self-evaluation. In Handbook of self and identity (pp. 275-290). New York, NY US: Guilford Press.

Testa, M. (2002). The impact of men’s alcohol consumption on perpetration of sexual aggression. Clinical Psychology Review, 22 (8), 1239-1263.

Tett, R. P., Jackson, D. N., & Rothstein, M. (1991). Personality measures as predictors of job performance: A meta-analytic review. Personnel Psychology, 44 (4), 703-742.

Teyber, E., McClure, F., Snyder, C. R., & Ingram, R. E. (2000). Therapist variables. In Handbook of psychological change: Psychotherapy processes & practices for the 21st century (pp. 62-87). Hoboken, NJ US: John Wiley & Sons Inc.

American Psychological Association. Ethical principles of psychologists and code of conduct. Retrieved from www.apa.org/ethics

Thompson, R. A. (1982). Continuity and change in socioemotional development during the second year. ProQuest Information & Learning, US.

Thompson, R. A. (2000). The legacy of early attachments. Child Development, 71 (1), 145-152.

Tice, D. M. (1991). Esteem protection or enhancement? Self-handicapping motives and attributions differ by trait self-esteem. Journal of Personality and Social Psychology, 60 (5), 711-725.

Tingey, J. L., Bentley, J. A., & Hosey, M. M. (2020). COVID-19: Understanding and mitigating trauma in ICU survivors.  Psychological trauma : theory, research, practice and policy ,  12 (S1), S100–S104. https://doi.org/10.1037/tra0000884

Tjosvold, D., & Sun, H. F. (2002). Understanding conflict avoidance: Relationship, motivations, actions and consequences. International Journal of Conflict Management, 13 (2), 142-164.

Tirado, F. (2019). Queer Eye’s Jonathan Van Ness: “I’m nonbinary.” Out. Retrieved from https://www.out.com/lifestyle/2019/6/10/queer-eyes-jonathan-van-ness-im-nonbinary

Tolman, E. C., & Honzik, C. H. (1930a). ‘Insight’ in rats. University of California Publications in Psychology, 4 , 215-232.

Tolman, E. C., & Honzik, C. H. (1930b). Introduction and removal of reward, and maze performance in rats. University of California Publications in Psychology, 4 , 257-275.

Tomasello, M., Call, J., & Hare, B. (2003). Chimpanzees versus humans: It’s not that simple. Trends in Cognitive Sciences, 7 (6), 239-240.

Tooby, J., & Cosmides, L. (1990). The past explains the present: Emotional adaptations and the structure of ancestral environments. Ethology & Sociobiology, 11 (4), 375-424.

Topolski, R., Boyd-Bowman, K. A., & Ferguson, H. (2003). Grapes of wrath: Discrimination in the produce aisle. Analyses of Social Issues and Public Policy (ASAP), 3 (1), 111-119.

Torgersen, S., Myers, J., Reichborn-Kjennerud, T., Røysamb, E., Kubarych, T. S., & Kendler, K. S. (2012). The heritability of Cluster B personality disorders assessed both by personal interview and questionnaire.  Journal of personality disorders ,  26 (6), 848–866. https://doi.org/10.1521/pedi.2012.26.6.848

Toups, M. L., & Holmes, W. R. (2002). Effectiveness of abstinence-based sex education curricula: A review. Counseling and Values, 46 (3), 237-240.

Tracey, J. B., Tannenbaum, S. I., & Kavanagh, M. J. (1995). Applying trained skills on the job: The importance of the work environment. Journal of Applied Psychology, 80 (2), 239-252.

Triplett, N. (1898). The dynamogenic factors in pacemaking and competition. American Journal of Psychology, 9 (4), 507-533.

Trobst, K. K., Collins, R. L., & Embree, J. M. (1994). The role of emotion in social support provision: Gender, empathy and expressions of distress. Journal of Social and Personal Relationships, 11 (1), 45-62.

Tulving, E., & Thomson, D. M. (1973). Encoding specificity and retrieval processes in episodic memory. Psychological Review, 80 (5), 352-373. https://doi.org/10.1037/h0020071

Turing, A. M. (1936). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 42, 230-265.

Turner, M. E., & Pratkanis, A. R. (1998). Twenty-five years of groupthink theory and research: Lessons from the evaluation of a theory. Organizational Behavior and Human Decision Processes, 73 (2), 105-115.

Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5 (2), 207-232.

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185 (4157), 1124-1131.

Tversky, A., & Kahneman, D. (1983). Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment.  Psychological Review, 90 (4), 293–315.  https://doi.org/10.1037/0033-295X.90.4.293

Uddin, L. Q., Nomi, J. S., Hébert-Seropian, B., Ghaziri, J., & Boucher, O. (2017). Structure and Function of the Human Insula.  Journal of clinical neurophysiology : official publication of the American Electroencephalographic Society ,  34 (4), 300–306. https://doi.org/10.1097/WNP.0000000000000377

Urban, J., Carlson, E., Egeland, B., & Sroufe, L. A. (1991). Patterns of individual adaptation across childhood. Development and Psychopathology, 3 (4), 445-460.

Van Boven, L., & Epley, N. (2003). The unpacking effect in evaluative judgments: When the whole is less than the sum of its parts. Journal of Experimental Social Psychology, 39 (3), 263-269.

Van Boven, L., & Gilovich, T. (2003). To Do or to Have? That Is the Question. Journal of Personality and Social Psychology, 85 (6), 1193-1202.

Vancouver, J. (2018). Self-Efficacy’s Role in Unifying Self-Regulation Theories.  Advances in Motivation Science , 203-230. doi: 10.1016/bs.adms.2018.01.005

Van Dis, H., & Larsson, K. (1971). Induction of sexual arousal in the castrated male rat by intracranial stimulation. Physiology & Behavior, 6 (1), 85-86.

Van Duijn, C. M., Van Broeckhoven, C., Hardy, J. A., & Goate, A. M. (1991). Evidence for allelic heterogeneity in familial early-onset Alzheimer’s disease. British Journal of Psychiatry, 158 , 471-474.

van Halteren, H. K., Bongaerts, G. P. A., & Wagener, D. J. T. (2004). Cancer and psychosocial stress: Frequent companions. Lancet, 364 (9437), 824-825.

Van Ijzendoorn, M. H., & Kroonenberg, P. M. (1988). Cross-cultural patterns of attachment: A meta-analysis of the strange situation. Child Development, 59 (1), 147-156.

Van Praag, H., & Gage, F. H. (2002). Genetics of Childhood Disorders: XXXVI. Stem Cell Research, Part 1: New Neurons in the Adult Brain. Journal of the American Academy of Child and Adolescent Psychiatry, 41 (3), 354-357.

Vance, E. B., & Wagner, N. N. (1976). Written descriptions of orgasm: A study of sex differences. Archives of Sexual Behavior, 5 (1), 87-98.

Vander Linde, E., Morrongiello, B. A., & Rovee-Collier, C. (1985). Determinants of retention in 8-week-old infants. Developmental Psychology, 21 (4), 601-613.

Vansteenkiste, M., & Deci, E. L. (2003). Competitively contingent rewards and intrinsic motivation: Can losers remain motivated? Motivation and Emotion, 27 (4), 273-299.

Vartanian, L. R. (2000). Revisiting the imaginary audience and personal fable constructs of adolescent egocentrism: A conceptual review. Adolescence, 35 (140), 639-651.

Vining, E., Freeman, J., Pillas, D., Uematsu, S., Carson, B., Brandt, J., et al. (1997). Why Would You Remove Half a Brain? The Outcome of 58 Children After Hemispherectomy–The Johns Hopkins Experience: 1968-1996. Pediatrics, 100 (2), 163-172.

Vogels, R., Biederman, I., Bar, M., & Lorincz, A. (2001). Inferior temporal neurons show greater sensitivity to nonaccidental than to metric shape differences. Journal of Cognitive Neuroscience (13), 444-453.

Volterra, A., Magistretti, P. J., & Haydon, P. G. (2002). Glia in Synaptic Transmission . Oxford, UK: Oxford University Press.

Wade-Benzoni, K. A. (1997). Predicting reactions to environmental change. In Environment, ethics, and behavior: The psychology of environmental valuation and degradation (pp. 52-72). San Francisco, CA US: The New Lexington Press/Jossey-Bass Publishers.

Wade, C., & Tavris, C. (2003). Psychology (7th ed.) . Upper Saddle River, NJ US: Prentice Hall/Pearson Education.

Wadlinger, H. A., & Isaacowitz, D. M. (2011). Fixing our focus: Training attention to regulate emotion. Personality and Social Psychology Review, 15, 75-102.

Wagar, B. M., & Thagard, P. (2003). Using computational neuroscience to investigate the neural correlates of cognitive-affective integration during covert decision making. Brain and Cognition, 53 (2), 398-402.

Wagar, B. M., & Thagard, P. (2004). Spiking Phineas Gage: A Neurocomputational Theory of Cognitive-Affective Integration in Decision Making. Psychological Review, 111 (1), 67-79.

Wagner, U., Gais, S., & Born, J. (2001). Emotional memory formation is enhanced across sleep intervals with high amounts of rapid eye movement sleep. Learning & Memory, 8 (2), 112-119.

Wakefield, J. (2016). Diagnostic issues and controversies in DSM-5: Return of the false positives problem. Annual Review of Clinical Psychology, 12, 105-132.

Waldinger, R. J., Schulz, M. S., Hauser, S. T., Allen, J. P., & Crowell, J. A. (2004). Reading Others’ Emotions: The Role of Intuitive Judgments in Predicting Marital Satisfaction, Quality, and Stability. Journal of Family Psychology, 18 (1), 58-71.

Walker, M., Harriman, S., & Costello, S. (1980). The influence of appearance on compliance with a request. Journal of Social Psychology, 112 (1), 159-160.

Wallach, H., Becklen, R., & Nitzberg, D. (1985). Vector analysis and process combination in motion perception. Journal of Experimental Psychology: Human Perception and Performance, 11 , 93-102.

Walter, K. V., Conroy-Beam, D., Buss, D. M., Asao, K., Sorokowska, A., Sorokowski, P., … Zupančič, M. (2020). Sex Differences in Mate Preferences Across 45 Countries: A Large-Scale Replication. Psychological Science, 31(4), 408–423. https://doi.org/10.1177/0956797620904154

Waltes, R., Chiocchetti, A. G., & Freitag, C. M. (2016). The neurobiological basis of human aggression: A review on genetic and epigenetic mechanisms.  American journal of medical genetics. Part B, Neuropsychiatric genetics : the official publication of the International Society of Psychiatric Genetics ,  171 (5), 650–675. https://doi.org/10.1002/ajmg.b.32388

Wampold, B. E. (2001a). Contextualizing psychotherapy as a healing practice: Culture, history, and methods. Applied & Preventive Psychology, 10 (2), 69-86.

Wampold, B. E. (2001b). The great psychotherapy debate: Models, methods, and findings . Mahwah, NJ US: Lawrence Erlbaum Associates Publishers.

Wang, Z., Yang, H., Yang, Y., Liu, D., Li, Z., & Zhang, X. et al. (2020). Prevalence of anxiety and depression symptom, and the demands for psychological knowledge and interventions in college students during COVID-19 epidemic: A large cross-sectional study.  Journal Of Affective Disorders ,  275 , 188-193. doi: 10.1016/j.jad.2020.06.034

Wansink, B., Painter, J., & Van Ittersum, K. (2001). Descriptive Menu Labels’ Effect on Sales. Cornell Hotel & Restaurant Administration Quarterly, 42 (6), 68.

Wansink, B., & Van Ittersum, K. (2003). Bottoms up! The influence of elongation on pouring and consumption volume. Journal of Consumer Research, 30 (3), 455-463.

Ward, K., Lee, S., Pace, G., Grogan-Kaylor, A., & Ma, J. (2020). Attachment Style and the Association of Spanking and Child Externalizing Behavior.  Academic Pediatrics ,  20 (4), 501-507. doi: 10.1016/j.acap.2019.06.017

Wardle, J., Steptoe, A., Oliver, G., & Lipsey, Z. (2000). Stress, dietary restraint and food intake. Journal of Psychosomatic Research, 48 (2), 195-202.

Warren, R. M. (1984). Perceptual restoration of obliterated sounds. Psychological Bulletin, 96 (2), 371-383.

Wason, P. C. (1960). On the failure to eliminate hypotheses in a conceptual task. The Quarterly Journal of Experimental Psychology, 12 , 129-140.

Watson, J. B. (1913). Psychology as the behaviourist views it. Psychological Review, 20 (2), 158-177.

Watson, J. B. (1924). Behaviorism . Chicago: University of Chicago Press.

Watson, J. B., & Rayner, R. (1920). Conditioned emotional reactions. Journal of Experimental Psychology, 3 , 1-14.

Watson, R. I. (1979). Basic Writings in the History of Psychology .

Watts, T. W., Duncan, G. J., & Quan, H. (2018). Revisiting the Marshmallow Test: A Conceptual Replication Investigating Links Between Early Delay of Gratification and Later Outcomes. Psychological Science , 29(7), 1159–1177. https://doi.org/10.1177/0956797618761661

Wayment, H. A. (2004). It could have been me: Vicarious victims and disaster-focused distress. Personality and Social Psychology Bulletin, 30 (4), 515-528.

Wegener, D. T., & Petty, R. E. (1997). The flexible correction model: The role of naive theories of bias in bias correction.  Advances in Experimental Social Psychology , 29, 142-208.

Weidman, A. C., Cheng, J. T., & Tracy, J. L. (2018). The psychological structure of humility.  Journal of Personality and Social Psychology, 114 (1), 153–178.  https://doi.org/10.1037/pspp0000112

Weissman, M. M., Markowitz, J. C., Gotlib, I. H., & Hammen, C. L. (2002). Interpersonal psychotherapy for depression. In Handbook of depression (pp. 404-421). New York, NY US: Guilford Press.

Weizenbaum, J. (1966). ELIZA: A computer program for the study of natural language communication between man and machine. Communications of the Association for Computing Machinery, 9, 36-45.

Weizenbaum, J. (1976). Computer power and human reason. New York: W. H. Freeman and Company.

Wellman, H. (1990). The Child’s Theory of Mind . Cambridge, MA: MIT Press.

Wellman, H. (1993). Early understanding of mind: The normal case. In S. Baron-Cohen, H. Tager-Flusberg & D. Cohen (Eds.), Understanding Other Minds: Perspectives from Autism . Oxford, UK: Oxford University Press.

Wever, E. G. (1970). Theory of Hearing . New York: Wiley.

Wicker, A. W. (1969). Attitudes versus actions: The relationship of verbal and overt behavioral responses to attitude objects. Journal of Social Issues, 25 (4), 41-78.

Bleidorn, W., Hopwood, C. J., & Lucas, R. E. (2018). Life Events and Personality Trait Change.  Journal of personality ,  86 (1), 83–96. https://doi.org/10.1111/jopy.12286

Wiersema, M. F., & Bird, A. (1993). Organizational demography in Japanese firms: Group heterogeneity, individual dissimilarity, and top management team turnover. Academy of Management Journal, 36 (5), 996-1025.

Wierzbicka, A. (1999). Emotions across languages and cultures: Diversity and universals . New York, NY US: Cambridge University Press.

Wiggins, J. S., & Pincus, A. L. (1992). Personality: Structure and assessment. Annual Review of Psychology, 43 , 473-504.

Wild, T. C., Enzle, M. E., & Hawkins, W. L. (1992). Effects of perceived extrinsic versus intrinsic teacher motivation on student reactions to skill acquisition. Personality and Social Psychology Bulletin, 18 (2), 245-251.

Wilder, D. A. (1981). Reluctant Group in Study of Group Influence. PsycCRITIQUES, 26 (6), 445-446.

Wilder, D. A., & Shapiro, P. N. (1984). Role of out-group cues in determining social identity. Journal of Personality and Social Psychology, 47 (2), 342-348.

Williams, L. J., & Hazer, J. T. (1986). Antecedents and consequences of satisfaction and commitment in turnover models: A reanalysis using latent variable structural equation methods. Journal of Applied Psychology, 71 (2), 219-231.

Willingham, W. W., Rock, D. A., & Pollack, J. (1990). Predictability of college grades: Three tests and three national samples. In Predicting college grades: An analysis of institutional trends over two decades (pp. 239-252). Princeton, NJ US: Educational Testing Service.

Willis, W. D. (1985). The Pain System: The Neural Basis of Nociceptive Transmission in the Mammalian Nervous System . Basel: Karger.

Willner, P., Benton, D., Brown, E., Cheeta, S., Davies, G., Morgan, J., et al. (1998). ‘Depression’ increases ‘craving’ for sweet rewards in animal and human models of depression and craving. Psychopharmacology, 136 (3), 272-283.

Wilson, E. O. (1999). Consilience . New York: Vintage.

Wimmer, H., & Perner, J. (1983). Beliefs about beliefs: Representation and constraining function of wrong beliefs in young children’s understanding of deception. Cognition, 13 (1), 103-128.

Winer, J. R., Mander, B. A., Kumar, S., Reed, M., Baker, S. L., Jagust, W. J., & Walker, M. P. (2020). Sleep disturbance forecasts β-amyloid accumulation across subsequent years. Current Biology, 30 (21), 4291-4298.e3.

Wirth, S., Yanike, M., Frank, L. M., Smith, A. C., Brown, E. N., & Suzuki, W. A. (2003). Single Neurons in the Monkey Hippocampus and Learning of New Associations. Science, 300 (5625), 1578-1582.

Wise, R. A. (2004). Dopamine, learning and motivation. Nature Reviews Neuroscience, 5 (6), 483-494.

Witt, L. A., & Nye, L. G. (1992). Gender and the relationship between perceived fairness of pay or promotion and job satisfaction. Journal of Applied Psychology, 77 (6), 910-917.

Wolitzky, D. L. (1995). The theory and practice of traditional psychoanalytic psychotherapy. In A. S. Gurman & S. B. Messer (Eds.), Essential psychotherapies: Theory and practice (pp. 12-54). New York, NY US: Guilford Press.

Wood, A. M., Maltby, J., Stewart, N., & Joseph, S. (2008). Conceptualizing gratitude and appreciation as a unitary personality trait. Personality and Individual Differences, 44, 621-632.

Wood, W., & Eagly, A. (2012). Biosocial Construction of Sex Differences and Similarities in Behavior.  Advances in Experimental Social Psychology , 55-123. doi: 10.1016/b978-0-12-394281-4.00002-7

Woodley of Menie, M., te Nijenhuis, J., Fernandes, H., & Metzen, D. (2016). Small to medium magnitude Jensen effects on brain volume: A meta-analytic test of the processing volume theory of general intelligence.  Learning and Individual Differences ,  51 , 215-219. doi: 10.1016/j.lindif.2016.09.007

Wright, S. C., Aron, A., McLaughlin-Volpe, T., & Ropp, S. A. (1997). The extended contact effect: Knowledge of cross-group friendships and prejudice. Journal of Personality and Social Psychology, 73 (1), 73-90.

Wyer, R. S., Jr., Clore, G. L., & Isbell, L. M. (1999). Affect and information processing. In M. P. Zanna (Ed.), Advances in experimental social psychology (Vol. 31, pp. 1-77). San Diego, CA US: Academic Press.

Yang, T. T., Menon, V., Reid, A. J., Gotlib, I. H., & Reiss, A. L. (2003). Amygdalar Activation Associated With Happy Facial Expressions in Adolescents: A 3-T Functional MRI Study. Journal of the American Academy of Child & Adolescent Psychiatry, 42 (8), 979-985.

Yapko, M. D. (1995). Essentials of hypnosis . Philadelphia, PA US: Brunner/Mazel.

Yeung, R. R. (1996). The acute effects of exercise on mood state. Journal of Psychosomatic Research, 40 (2), 123-141.

Yoder, A. E. (2000). Barriers to ego identity status formation: A contextual qualification of Marcia’s identity status paradigm. Journal of Adolescence, 23 (1), 95-106.

Youniss, J., & Haynie, D. L. (1992). Friendship in adolescence. Journal of Developmental & Behavioral Pediatrics, 13 (1), 59-66.

Yuan, L., & Ganetsky, B. (1999). A glial-neuronal signaling pathway revealed by mutations in a neurexin-related protein. Science, 283 , 1343-1344.

Zahn-Waxler, C., Robinson, J. L., & Emde, R. N. (1992). The development of empathy in twins. Developmental Psychology, 28 (6), 1038-1047.

Zajonc, R. B. (1965). The requirements and design of a standard group task. Journal of Experimental Social Psychology, 1 (1), 71-88.

Zajonc, R. B. (1968). Attitudinal effects of mere exposure. Journal of Personality and Social Psychology, 9 (2), 1-27.

Zajonc, R. B. (1998). Emotions. In D. T. Gilbert, S. T. Fiske, & G. Lindzey (Eds.), The handbook of social psychology (4th ed., Vols. 1-2, pp. 591-632). New York, NY US: McGraw-Hill.

Zanna, M. P., Olson, J. M., & Fazio, R. H. (1981). Self-perception and attitude-behavior consistency. Personality and Social Psychology Bulletin, 7 (2), 252-256.

Zarbatany, L., McDougall, P., & Hymel, S. (2000). Gender-differentiated experience in the peer culture: Links to intimacy in preadolescence. Social Development, 9 (1), 62-79.

Zelazo, P. R., Zelazo, N. A., & Kolb, S. (1972). ‘Walking’ in the newborn. Science, 176 (4032), 314-315.

Zhou, J., Oldham, G. R., & Cummings, A. (1998). Employee reactions to the physical work environment: The role of childhood residential attributes. Journal of Applied Social Psychology, 28 (24), 2213-2238.

Zillmann, D. (1988). Cognition-excitation interdependencies in aggressive behavior. Aggressive Behavior, 14 (1), 51-64.

Zimbardo, P. G. (1972). Emotional Response Paradigm. PsycCRITIQUES, 17 (4), 210-211.

Zuckerman, M. (1979). Attribution of success and failure revisited: or The motivational bias is alive and well in attribution theory. Journal of Personality, 47 (2), 245-287.

How to Write a Psychology Essay

Saul McLeod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul McLeod, PhD, is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


Before you write your essay, it’s important to analyse the task and understand exactly what the essay question is asking. Your lecturer may give you some advice – pay attention to this as it will help you plan your answer.

Next, conduct preliminary reading based on your lecture notes. At this stage, it’s not crucial to have a robust understanding of key theories or studies, but you should at least have a general “gist” of the literature.

After reading, plan a response to the task. This plan could be in the form of a mind map, a summary table, or by writing a core statement (which encompasses the entire argument of your essay in just a few sentences).

After writing your plan, conduct supplementary reading, refine your plan, and make it more detailed.

It is tempting to skip these preliminary steps and write the first draft while reading at the same time. However, reading and planning first will make the essay-writing process easier and quicker, and will ensure a higher-quality essay.

Components of a Good Essay

Now, let us look at what constitutes a good essay in psychology. There are a number of important features.
  • Global Structure – structure the material to allow for a logical sequence of ideas. Each paragraph / statement should follow sensibly from its predecessor. The essay should “flow”. The introduction, main body and conclusion should all be linked.
  • Each paragraph should comprise a main theme, which is illustrated and developed through a number of points (supported by evidence).
  • Knowledge and Understanding – recognize, recall, and show understanding of a range of scientific material that accurately reflects the main theoretical perspectives.
  • Critical Evaluation – arguments should be supported by appropriate evidence and/or theory from the literature. Evidence of independent thinking, insight, and evaluation of the evidence.
  • Quality of Written Communication – writing clearly and succinctly with appropriate use of paragraphs, spelling, and grammar. All sources are referenced accurately and in line with APA guidelines.

In the main body of the essay, every paragraph should demonstrate both knowledge and critical evaluation.

There should also be an appropriate balance between these two essay components. Try to aim for about a 60/40 split if possible.

Most students make the mistake of writing too much knowledge and not enough evaluation (which is the difficult bit).

It is best to structure your essay according to key themes. Themes are illustrated and developed through a number of points (supported by evidence).

Choose relevant points only, ones that most reveal the theme or help to make a convincing and interesting argument.


Knowledge and Understanding

Remember that an essay is simply a discussion / argument on paper. Don’t make the mistake of writing all the information you know regarding a particular topic.

You need to be concise, and clearly articulate your argument. A sentence should contain no unnecessary words, a paragraph no unnecessary sentences.

Each paragraph should have a purpose / theme, and make a number of points – which need to be supported by high-quality evidence. Be clear why each point is relevant to the argument. It would be useful at the beginning of each paragraph if you explicitly outlined the theme being discussed (e.g., cognitive development, social development, etc.).

Try not to overuse quotations in your essays. It is more appropriate to use original content to demonstrate your understanding.

Psychology is a science so you must support your ideas with evidence (not your own personal opinion). If you are discussing a theory or research study make sure you cite the source of the information.

Note this is not the author of a textbook you have read – but the original source / author(s) of the theory or research study.

For example:

Bowlby (1951) claimed that mothering is almost useless if delayed until after two and a half to three years and, for most children, if delayed till after 12 months, i.e. there is a critical period.
Maslow (1943) stated that people are motivated to achieve certain needs. When one need is fulfilled, a person seeks to fulfil the next one, and so on.

As a general rule, make sure there is at least one citation (i.e. name of psychologist and date of publication) in each paragraph.

Remember to answer the essay question. Underline the keywords in the essay title. Don’t make the mistake of simply writing everything you know of a particular topic, be selective. Each paragraph in your essay should contribute to answering the essay question.

Critical Evaluation

In simple terms, this means outlining the strengths and limitations of a theory or research study.

There are many ways you can critically evaluate:

Methodological evaluation of research

Is the study valid/reliable? Is the sample biased, or can we generalize the findings to other populations? What are the strengths and limitations of the method used and data obtained?

Be careful to ensure that any methodological criticisms are justified and not trite.

Rather than hunting for weaknesses in every study, only highlight limitations that make you doubt the conclusions the authors have drawn – e.g., where an alternative explanation might be equally likely because something hasn’t been adequately controlled.

Compare or contrast different theories

Outline how the theories are similar and how they differ. This could be two (or more) theories of personality / memory / child development etc. Also try to communicate the value of the theory / study.

Debates or perspectives

Refer to debates such as nature or nurture, reductionism vs. holism, or the perspectives in psychology. For example, would they agree or disagree with a theory or the findings of the study?

What are the ethical issues of the research?

Does a study involve ethical issues such as deception, privacy, psychological or physical harm?

Gender bias

If research is biased towards men or women, it does not provide a clear view of the behavior that has been studied. A predominantly male perspective is known as androcentric bias.

Cultural bias

Is the theory / study ethnocentric? Psychology is predominantly a white, Euro-American enterprise. In some texts, over 90% of studies have US participants, who are predominantly white and middle class.

Does the theory or study being discussed judge other cultures by Western standards?

Animal Research

This raises the issue of whether it’s morally and/or scientifically right to use animals. The main criterion is that benefits must outweigh costs. But benefits are almost always to humans and costs to animals.

Animal research also raises the issue of extrapolation. Can we generalize from studies on animals to humans, given that animal anatomy and physiology differ from ours?

The PEC System

It is very important to elaborate on your evaluation. Don’t just write a shopping list of brief (one or two sentence) evaluation points.

Instead, make sure you expand on your points. Remember, quality of evaluation is more important than quantity.

When you are writing an evaluation paragraph, use the PEC system.

  • Make your Point.
  • Explain how and why the point is relevant.
  • Discuss the Consequences / implications of the theory or study. Are they positive or negative?

For Example

  • Point: It is argued that psychoanalytic therapy is only of benefit to an articulate, intelligent, affluent minority.
  • Explain: Because psychoanalytic therapy involves talking and gaining insight, and is costly and time-consuming, it is argued that it is only of benefit to an articulate, intelligent, affluent minority. Evidence suggests psychoanalytic therapy works best if the client is motivated and has a positive attitude.
  • Consequences: A depressed client’s apathy, flat emotional state, and lack of motivation limit the appropriateness of psychoanalytic therapy for depression.

Furthermore, the levels of dependency of depressed clients mean that transference is more likely to develop.

Using Research Studies in your Essays

Research studies can serve as either knowledge or evaluation.
  • If you refer to the procedures and findings of a study, this shows knowledge and understanding.
  • If you comment on what the study shows, and what it supports and challenges about the theory in question, this shows evaluation.

Writing an Introduction

It is often best to write your introduction when you have finished the main body of the essay, so that you have a good understanding of the topic area.

If there is a word count for your essay try to devote 10% of this to your introduction.

Ideally, the introduction should;

Identify the subject of the essay and define the key terms. Highlight the major issues which “lie behind” the question. Let the reader know how you will focus your essay by identifying the main themes to be discussed. “Signpost” the essay’s key argument (and, if possible, how this argument is structured).

Introductions are very important as first impressions count, and they can create a halo effect in the mind of the lecturer grading your essay. If you start off well, you are more likely to be forgiven for the odd mistake later on.

Writing a Conclusion

So many students either forget to write a conclusion or fail to give it the attention it deserves.

If there is a word count for your essay try to devote 10% of this to your conclusion.

Ideally the conclusion should summarize the key themes / arguments of your essay. State the take-home message – don’t sit on the fence; instead, weigh up the evidence presented in the essay and decide which side of the argument has more support.

Also, you might like to suggest what future research may need to be conducted and why (read the discussion section of journal articles for this).

Don’t include new information / arguments (only information discussed in the main body of the essay).

If you are unsure of what to write read the essay question and answer it in one paragraph.

Points that unite or embrace several themes can be used to great effect as part of your conclusion.

The Importance of Flow

Obviously, what you write is important, but how you communicate your ideas / arguments has a significant influence on your overall grade. Most students may have similar information / content in their essays, but the better students communicate this information concisely and articulately.

When you have finished the first draft of your essay you must check if it “flows”. This is an important feature of quality of communication (along with spelling and grammar).

This means that the paragraphs follow a logical order (like the chapters in a novel). Have a global structure with themes arranged in a way that allows for a logical sequence of ideas. You might want to rearrange (cut and paste) paragraphs to a different position in your essay if they don’t appear to fit in with the essay structure.

To improve the flow of your essay, make sure the last sentence of one paragraph links to the first sentence of the next paragraph. This will help the essay flow and make it easier to read.

Finally, only repeat citations when it is unclear which study / theory you are discussing. Repeating citations unnecessarily disrupts the flow of an essay.

Referencing

The reference section is the list of all the sources cited in the essay (in alphabetical order). It is not a bibliography (a list of the books you used).

In simple terms every time you cite/refer to a name (and date) of a psychologist you need to reference the original source of the information.

If you have been using textbooks this is easy as the references are usually at the back of the book and you can just copy them down. If you have been using websites, then you may have a problem as they might not provide a reference section for you to copy.

References need to be set out in APA style:

Books

Author, A. A. (year). Title of work . Location: Publisher.

Journal Articles

Author, A. A., Author, B. B., & Author, C. C. (year). Article title. Journal Title, volume number (issue number), page numbers
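If you keep the details of each source in a file or spreadsheet, a few lines of code can slot them into the journal-article pattern above. The short sketch below (in Python) is purely illustrative – the function name and fields are invented for this example, and it is not an official APA tool – but it makes the order of the elements explicit:

    # Illustrative sketch only: build a reference string following the
    # journal-article pattern above. Check the result against APA guidelines.
    def apa_journal_reference(authors, year, title, journal, volume, issue, pages):
        # APA reference lists join authors with commas and an ampersand before the last name.
        if len(authors) > 1:
            author_part = ", ".join(authors[:-1]) + ", & " + authors[-1]
        else:
            author_part = authors[0]
        return f"{author_part} ({year}). {title}. {journal}, {volume}({issue}), {pages}."

    print(apa_journal_reference(
        authors=["Author, A. A.", "Author, B. B."],
        year=2020,
        title="Article title",
        journal="Journal Title",
        volume=12,
        issue=3,
        pages="45-67",
    ))
    # Author, A. A., & Author, B. B. (2020). Article title. Journal Title, 12(3), 45-67.

However you produce them, always check the finished references by eye – capitalization, italics, and punctuation still need to match the templates above.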

A simple way to write your reference section is to use Google Scholar. Just type the name and date of the psychologist in the search box and click on the “cite” link.


Next, copy and paste the APA reference into the reference section of your essay.


Once again, remember that references need to be in alphabetical order according to surname.



Before/During Class

Before class, skim through the material that will be covered to give yourself an overview; this will make it much easier to follow the class.  During class, do NOT try to write down everything the professor says!  Pick out the important points, but write enough so that your notes will make sense later. Something like “studied rats” is not likely to help you remember the material a few weeks later, but “rats able to remember maze for 24 hours” might.

After Class

Whenever possible, you should review your notes a few hours after the class.  If they don’t make sense now, you should at least still remember enough to improve them.

Studying for Exams

Exams will be much less stressful if you spread your studying out over a few weeks, or at least a few days, rather than waiting until the night before the exam.  Research has shown that reviewing material multiple times is a great way to help you remember it.


It is also VERY IMPORTANT to frequently test your memory for the material: flash cards, practice questions, explaining the material to a friend, etc. will show you whether you actually know your stuff!

Remember to ask your Prof. about the exam.  What type of questions will be included (essay, multiple choice, etc.)?  Are you expected to know names and dates?

Finally, when it comes to the exam – try to relax!  Think of this as an opportunity to show off what you know.

You may already know how to take great notes, but if not, there is lots of help out there.

Taking Notes in Class provides a good introduction to taking notes in a classroom setting with some real-life examples of what to do AND what not to do!

Video Intro-1: Taking Notes in Class (https://www.youtube.com/watch?v=Yc5qyMTjO3k&feature=youtu.be). Uploaded by Indiana University Bloomington, Student Academic Center.

Once the semester gets underway you will probably find yourself with too little time and too much to do.  Time Management provides detailed instructions on managing your time both as a student and in your non-student life!

Video Intro-2: Time Management (https://www.youtube.com/watch?v=9Q1KELLwaSk). Uploaded by Indiana University Bloomington, Student Academic Center.

Then, of course, there are EXAMS!  Again, you may already have developed some great strategies for both revising for and writing exams, but if not (or if you think there may be some tricks you’re missing), help is out there.  The video Preparing for Exams is short but to the point:

Video Intro-3: Study Skills: Preparing for Exams (https://www.youtube.com/watch?v=khhjXkzXaZA). Uploaded by the Ottawa University Student Academic Success Service

FINALLY: Don’t ignore real-life resources such as your professor, other students, librarians, student services, etc. They can help, and human contact is nice!

Psychology Specific Resources

Several great podcast series to dip into:

  • Hidden Brain : for discussion on the unconscious patterns that drive human behaviour. (http://www.npr.org/podcasts/510308/hidden-brain)
  • Arming the Donkeys: an advice column on psychology-related issues. (http://danariely.com/tag/arming-the-donkeys/)
  • Waking up Podcast : an exploration of the human mind, society and current events. (https://www.samharris.org/podcast)
  • All in the Mind: a discussion on the brain and behaviour. (http://www.abc.net.au/radionational/programs/allinthemind/)

The Crash Course series contains 40 short (10–15 minute) lessons and covers pretty much the entire Intro Psych syllabus. Start with Introduction to Psychology: Crash Course Psychology #1.

Video Intro-4: Introduction to Psychology: Crash Course Psychology #1 (https://www.youtube.com/watch?v=vo4pMVb0R6M). Uploaded by Crash Course.

American Psychological Association. (2018). Retrieved from http://www.apa.org/

Ariely, D. (2018).  Tag: Arming the Donkeys .  Retrieved from http://danariely.com/tag/arming-the-donkeys/

Australian Broadcasting Corporation, Radio National. (2018).   All in the Mind – Podcasts . [Audio podcast]. Retrieved from http://www.abc.net.au/radionational/programs/allinthemind/

Canadian Psychological Association. (2018).  Retrieved from http://www.cpa.ca/

Crash Course. (February 3, 2014). Introduction to Psychology: Crash Course Psychology #1 . [Video file]. Retrieved from https://youtu.be/vo4pMVb0R6M

Harris, S. (2018).  Waking Up – Podcast . [Video podcast]. Retrieved from https://www.samharris.org/podcast

Indiana University Bloomington, Student Academic Center. (September 13, 2013). Taking Notes in Class. [Video file]. Retrieved from https://www.youtube.com/watch?v=Yc5qyMTjO3k

Indiana University Bloomington, Student Academic Center. (September 11, 2013). Time Management . [Video file]. Retrieved from https://youtu.be/9Q1KELLwaSk

National Public Radio. (2018).  Hidden Brain.  [Audio podcast]. Retrieved from http://www.npr.org/podcasts/510308/hidden-brain

Ottawa University, Student Academic Success Service. (August 28, 2014). Study Skills: Preparing for Exams . [Video file]. Retrieved from https://youtube.com/khhjXkzXaZA

Introduction to Psychology Study Guide Copyright © 2021 by Sarah Murray is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License , except where otherwise noted.


Introduction to Psychology (MIT OpenCourseWare)

Prof. John D. E. Gabrieli, Department of Brain and Cognitive Sciences

Exam 2

This exam covers material from Memory I through Emotion & Motivation .

Once you are comfortable with the content of these sessions, you can review further by trying some of the practice questions before proceeding to the exam.

These optional practice questions and solutions are from prior years’ exams.

  • 2010: Practice Exam 2 Questions (PDF) ; Practice Exam 2 Solutions (PDF)
  • 2009: Practice Exam 2 Questions (PDF) ; Practice Exam 2 Solutions (PDF)

The exam should be completed in 90 minutes. This is a closed book exam. You are not allowed to use notes, equation sheets, books or any other aids.

  • Exam 2 Questions (PDF)
  • Exam 2 Solutions (PDF)



Chapter 1: Introduction to Psychology

Multiple Choice

  • The word “psychology” comes from: a. Latin b. Spanish c. Greek d. Italian
  • Psychology is defined as the scientific study of: a. people and things b. emotions and beliefs c. perception and religion d. mind and behaviour
  • The scientific approach is more useful at answering questions about ______ than questions about ______. a. facts, values b. ideas, emotions c. values, facts d. emotions, facts
  • According to the text, the lower level of explanation corresponds to ______ processes. a. social b. cultural c. biological d. interpersonal
  • A psychologist exploring the impact of a new drug on activity in the brain is working on the ______ level of explanation. a. lower b. middle c. upper d. all of the above
  • A psychologist studying what makes people laugh in different countries around the world is working on the ______ level of explanation. a. lower b. middle c. higher d. none of the above
  • Different people react differently to the same situation. This is referred to as: a. multiple determinants b. nativism c. the Simpson effect d. individual differences
  •  ______ is to nature as ______ is to nurture. a. environment, genes b. conscious, unconscious c. inaccuracy, accuracy d. biology, experience
  • The term “tabula rasa” highlights the importance of ______ in shaping behaviour. a. genes b. experience c. nature d. predestination
  • The Greek philosopher ______ believed that knowledge is acquired through experience and learning. a. Archimedes b. Rousseau c. Plato d. Aristotle
  •  ______ is to nature as ______ is to nurture. a. Plato, Aristotle b. Aristotle, Plato c. Pliny, Archimedes d. Stavros, Pliny
  •  ______ is the belief that the mind is fundamentally different from the body. a. mindism b. dualism c. centralism d. specialism
  • The school of psychology whose goal was to identify the basic elements of experience was called: a. experientialism b. dualism c. functionalism d. structuralism
  • Which of the following was most closely associated with the structuralist school of psychology? a. Titchener b. James c. Descartes d. Watson
  • Darwin’s theory of ______ argued that physiological characteristics evolve because they are useful to the organism. a. extreme usefulness b. natural endowment c. natural selection d. natural wellbeing
  • ______ was to structuralism as ______ was to functionalism. a. Wundt, Titchener b. Wundt, James c. James, Titchener d. Milner, Thompson
  • Freud championed ______ psychology. a. psychodynamic b. cultural c. conscious d. biodynamic
  • Which school of psychology believes that it is impossible to objectively study the mind? a. functionalism b. behaviorism c. humanism d. socialism
  • Receiving an electric shock would be an example of a ______ whereas being frightened would be an example of a ______. a. stimulus, response b. punishment, reward c. reaction, emotion d. reinforcement, stimulus
  • Dr Pula wants to explore differences in child-rearing practices between British and Chinese parents. She is most likely a: a. cognitive psychologist b. physiological psychologist c. cognitive-ergonomic psychologist d. social-cultural psychologist
  • Nature is to ________ as nurture is to ________. a. environment/genes b. conscious/unconscious c. genes/environment d. unconscious/conscious
  • Freud emphasized the role of ________ in shaping people’s personality. a. free will b. unconscious desires c. hormones d. group influence
  • Evolutionary psychology has its roots in: a. behaviourism b. collectivism c. functionalism d. structuralism
  • Most human behaviour: a. can be easily explained b. has multiple causes c. stems from unconscious desires d. depends on social influence
  • A forensic psychologist would be most likely to study: a. the accuracy of eyewitness memory b. the impact of advertising on shopping behaviour c. the effect of hormones on decision making d. gender differences in learning styles
  • The behaviourists rejected introspection because: a. it was too slow b. it invaded people’s privacy c. it yielded too much data d. it was too subjective
  • Another term for reinforcement is: a. stimulus b. reward c. response d. condition
  • East Asian cultures tend to be more oriented toward ________ while Western cultures tend to be more oriented toward ________. a. individualism/collectivism b. collectivism/individualism c. cultural norms/social norms d. social norms/cultural norms
  • Watson and Skinner both contributed to which school of psychology? a. functionalism b. cognitive c. social-cultural d. behaviourism
  • Which field of psychology would be most likely to study the influence of over-crowding on conformity? a. personality b. cognitive c. clinical d. social

Introduction to Psychology Study Guide Copyright © by Sarah Murray is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License , except where otherwise noted.
