Cognitive Approach in Psychology

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD, is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.

Cognitive psychology is the scientific study of the mind as an information processor. It concerns how we take in information from the outside world, and how we make sense of that information.

Cognitive psychology focuses on studying mental processes, including how people perceive, think, remember, learn, solve problems, and make decisions.

Cognitive psychologists try to build up cognitive models of the information processing that goes on inside people’s minds, including perception, attention, language, memory, thinking, and consciousness.

Cognitive psychology rose to prominence in the mid-1950s. Several factors were important in this shift:
  • Dissatisfaction with the behaviorist approach and its simple emphasis on external behavior rather than internal processes.
  • The development of better experimental methods.
  • The comparison between human and computer information processing: computers allowed psychologists to try to understand the complexities of human cognition by comparing it with artificial intelligence.

The emphasis of psychology shifted away from the study of conditioned behavior and psychoanalytical notions about the study of the mind, towards the understanding of human information processing using strict and rigorous laboratory investigation.



Theoretical Assumptions

Mediational processes occur between stimulus and response:

The behaviorist approach studies only external, observable behavior (stimulus and response) that can be objectively measured.

Behaviorists believe that internal mental processes cannot be studied because we cannot see what happens in a person’s mind (and therefore cannot objectively measure it).

However, cognitive psychologists regard it as essential to look at the mental processes of an organism and how these influence behavior.

Cognitive psychology assumes a mediational process occurs between stimulus/input and response/output. 


These are called mediational processes because they mediate (i.e., come in between) the stimulus and the response: they occur after the stimulus and before the response.

Instead of the simple stimulus-response links proposed by behaviorism, the mediational processes of the organism are essential to understand. Without this understanding, psychologists cannot have a complete understanding of behavior.

The mediational (i.e., mental) event could be memory, perception, attention, problem-solving, etc.

For example, the cognitive approach suggests that problem gambling is the result of maladaptive thinking and faulty cognitions. Both lead to illogical conclusions: gamblers misjudge the amount of skill involved in games of chance, so they play with the mindset that the odds are in their favor and that they have a good chance of winning.
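The independence of chance outcomes can be demonstrated with a short simulation (a hypothetical sketch; the strategy names and numbers are invented for illustration): in a fair game, no betting strategy shifts the long-run win rate away from 50%, which is exactly what the gambler's faulty cognition denies.

```python
import random

def play_chance_game(strategy, trials=100_000, seed=42):
    """Simulate a fair coin-flip betting game and return the win rate.

    `strategy` receives the previous outcome (or None) and returns a guess.
    In a pure chance game, no strategy can move the win rate away from 0.5.
    """
    rng = random.Random(seed)
    wins = 0
    previous = None
    for _ in range(trials):
        outcome = rng.choice(("heads", "tails"))
        if strategy(previous) == outcome:
            wins += 1
        previous = outcome
    return wins / trials

# Two "skilled" strategies a gambler might believe in:
def always_heads(prev):
    return "heads"

def follow_streak(prev):
    # bet that the last outcome repeats
    return prev or "heads"

print(round(play_chance_game(always_heads), 3))   # ~0.5
print(round(play_chance_game(follow_streak), 3))  # ~0.5
```

Both win rates hover around 0.5: the perceived skill has no effect, illustrating the cognitive error rather than any real psychological data.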

Therefore, cognitive psychologists say that if you want to understand behavior, you must understand these mediational processes.

Psychology should be seen as a science:

The cognitive approach holds that internal mental processes can be scientifically studied using controlled experiments. Cognitive psychologists use the results of their investigations as the basis for making inferences about mental processes.

Cognitive psychology uses highly controlled laboratory experiments to avoid the influence of extraneous variables. This allows the researcher to establish a causal relationship between the independent and dependent variables.

Cognitive psychologists measure behavior that provides information about cognitive processes (e.g., verbal protocols of thinking aloud). They also measure physiological indicators of brain activity, such as neuroimages (PET and fMRI).

For example, brain-imaging techniques such as fMRI and PET map areas of the brain to cognitive functions, allowing the processing of information by centers in the brain to be observed directly. Such processing causes the brain area involved to increase its metabolism and “light up” on the scan.

These controlled experiments are replicable, and the data obtained is objective (not influenced by an individual’s judgment or opinion) and measurable. This gives psychology more credibility.

Replicability is a crucial feature of science: it allows others to validate research by repeating an experiment and checking that the same conclusion is reached.

Without replicability, a finding cannot be falsified and may be invalid. Scientific research also relies on peer review to ensure that the research is justifiable.

Humans are information processors:

Cognitive psychology has been influenced by developments in computer science, and analogies are often made between how a computer works and how we process information.

Information processing in humans resembles that in computers, and is based on transforming information, storing and processing information, and retrieving information from memory.

Information processing models of cognitive processes such as memory and attention assume that mental processes follow a linear sequence.

For example:

  • Input processes are concerned with the analysis of the stimuli.
  • Storage processes cover everything that happens to stimuli internally in the brain and can include coding and manipulation of the stimuli.
  • Output processes are responsible for preparing an appropriate response to a stimulus.

This has led to models which show information flowing through the cognitive system, such as the multi-store model of memory.
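The assumed linear sequence can be sketched in code (a toy illustration; the function names and the string-based “processing” are invented for this sketch, not part of any psychological model):

```python
def input_process(stimulus):
    """Analyze the raw stimulus (e.g., perceive and attend to it)."""
    return stimulus.strip().lower()

def storage_process(percept, memory):
    """Code and store the percept; internal manipulation happens here."""
    memory.append(percept)
    return percept

def output_process(percept):
    """Prepare an appropriate response to the processed stimulus."""
    return f"response to '{percept}'"

def cognitive_system(stimulus, memory):
    """Information flows through the stages in a fixed, linear order."""
    return output_process(storage_process(input_process(stimulus), memory))

memory = []
print(cognitive_system("  LOUD NOISE  ", memory))  # response to 'loud noise'
print(memory)                                      # ['loud noise']
```

The fixed nesting of the three calls is the point: in these models, output depends on storage, which depends on input, with no stage skipped or reordered.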

Information Processing Paradigm

The cognitive approach began to revolutionize psychology in the late 1950s and early 1960s and became the dominant approach (i.e., perspective) in psychology by the late 1970s. Interest in mental processes was gradually restored through the work of Jean Piaget and Edward Tolman.

Tolman was a ‘soft behaviorist’. His 1932 book Purposive Behavior in Animals and Men described research that behaviorism found difficult to explain. The behaviorists’ view was that learning occurred through associations between stimuli and responses.

However, Tolman suggested that learning was based on the relationships formed amongst stimuli. He referred to these relationships as cognitive maps.

But the arrival of the computer gave cognitive psychology the terminology and metaphor it needed to investigate the human mind.

The advent of computers allowed psychologists to try to understand the complexities of human cognition by comparing it with something simpler and better understood: an artificial system such as a computer.

The use of the computer as a tool for thinking about how the human mind handles information is known as the computer analogy. Essentially, a computer codes (i.e., changes) information, stores information, uses information and produces an output (retrieves info).

The idea of information processing was adopted by cognitive psychologists as a model of how human thought works.


The information processing approach is based on several assumptions, including:

  • Information made available from the environment is processed by a series of processing systems (e.g., attention, perception, short-term memory);
  • These processing systems transform, or alter the information in systematic ways;
  • The aim of research is to specify the processes and structures that underlie cognitive performance;
  • Information processing in humans resembles that in computers.

The Role of Schemas

Schemas (mental frameworks of beliefs and expectations developed from experience) can affect cognitive processing. As you get older, they become more detailed and sophisticated.

A schema is a “packet of information”, or cognitive framework, that helps us organize and interpret information. Schemas are based on our previous experience.

Schemas help us to interpret incoming information quickly and effectively; this prevents us from being overwhelmed by the vast amount of information we perceive in our environment.

However, it can also lead to distortion of this information as we select and interpret environmental stimuli using schemas that might not be relevant.

This could be the cause of inaccuracies in areas such as eyewitness testimony. It can also explain some errors we make when perceiving optical illusions.
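As a loose illustration of how a schema speeds up, and can distort, interpretation (the restaurant schema and its slots are invented for this sketch):

```python
# A "restaurant" schema: expected defaults fill gaps in incoming information.
restaurant_schema = {"has_menu": True, "pay_at_end": True, "servers": True}

def interpret(observation, schema):
    """Combine observed details with schema defaults.

    Defaults make interpretation fast, but they can distort: unobserved
    slots are filled with expectations that may not fit the situation.
    """
    interpretation = dict(schema)       # start from expectations
    interpretation.update(observation)  # observed details override defaults
    return interpretation

# At a fast-food counter we pay first, but unless that detail is actually
# observed, the schema wrongly "remembers" that we paid at the end.
seen = {"has_menu": True}
print(interpret(seen, restaurant_schema)["pay_at_end"])  # True (a distortion)
```

The gap-filling step is the same mechanism proposed to distort eyewitness memory: missing details are reconstructed from expectations rather than from the event itself.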

History of Cognitive Psychology

  • Köhler (1925) published The Mentality of Apes, in which he reported observations suggesting that animals could show insightful behavior. He rejected behaviorism in favor of an approach that became known as Gestalt psychology.
  • Norbert Wiener (1948) published Cybernetics: or Control and Communication in the Animal and the Machine, introducing terms such as input and output.
  • Tolman’s (1948) work on cognitive maps: by training rats in mazes, he showed that animals had an internal representation of their environment.
  • The birth of cognitive psychology is often dated to George Miller’s (1956) “The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information.” Miller argued that short-term memory could only hold about seven pieces of information, called chunks.
  • In 1960, Miller founded the Center for Cognitive Studies at Harvard with the famous cognitive developmental psychologist Jerome Bruner.
  • Ulric Neisser (1967) published Cognitive Psychology, which marks the official beginning of the cognitive approach.
  • Process models of memory emerged, such as Atkinson and Shiffrin’s (1968) multi-store model.
  • Newell and Simon (1972) developed the General Problem Solver.
  • The cognitive approach is highly influential in all areas of psychology (e.g., biological, social, neuroscience, developmental, etc.).

Issues and Debates

Free Will vs. Determinism

The position of the cognitive approach is unclear. On the one hand, it argues that the way we process information is determined by our experience (schemas).

On the other hand, the therapy derived from the approach (CBT) argues that we can change the way we think.

Nature vs. Nurture

The cognitive approach takes an interactionist view of this debate: it argues that our behavior is influenced by learning and experience (nurture), but also by some of the brain’s innate capacities as information processors, e.g., language acquisition (nature).

Holism vs. Reductionism

The cognitive approach tends to be reductionist: when studying a variable, it isolates processes such as memory from other cognitive processes.

However, in normal life we use many cognitive processes simultaneously, so this approach lacks validity.

Idiographic vs. Nomothetic

It is a nomothetic approach as it focuses on establishing theories on information processing that apply to all people.

Critical Evaluation

B.F. Skinner criticized the cognitive approach, believing that only external stimulus-response behavior should be studied, as this can be scientifically measured.

In his view, mediational processes (between stimulus and response) cannot be studied because they cannot be seen and measured. Skinner also objected to cognitive research methods such as introspection (as used by Wilhelm Wundt) on the grounds that they are subjective and unscientific.

Humanistic psychologist Carl Rogers argued that cognitive psychology’s use of laboratory experiments creates an artificial environment (due to the control of variables) and therefore has low ecological validity. Rogers emphasized a more holistic approach to understanding behavior.

The cognitive approach uses scientific methods that are controlled and replicable, so the results are reliable. However, its experiments lack ecological validity because of the artificiality of the tasks and environment, so they might not reflect the way people process information in everyday life.

For example, Baddeley (1966) used lists of words to investigate encoding in long-term memory (LTM). However, these words had no meaning to the participants, so the way they used their memory in this task was probably very different from how they would have used it had the words been meaningful. This is a weakness, as the theories might not explain how memory works outside the laboratory.

Case studies of rare conditions, such as Clive Wearing and HM, provide insight into the workings of some mental processes. Although case studies involve very small samples, so their results cannot be generalized to the wider population (they are influenced by individual characteristics), they allow us to study cases that could not be produced experimentally for ethical and practical reasons.

The information processing paradigm of cognitive psychology views the mind as a computer when processing information. However, although there are similarities between the human mind and the operations of a computer (inputs and outputs, storage systems, the use of a central processor), the computer analogy has been criticized by many.

The approach is also guilty of machine reductionism: it does not consider emotions and motivation, which influence the processing of information and memory. For example, according to the Yerkes-Dodson law, anxiety can influence our memory. Such reductionism ignores the influence of human emotion and motivation on the cognitive system and how this may affect our ability to process information.

Behaviorism assumes that people are born a blank slate (tabula rasa) and are not born with cognitive functions like schemas, memory, or perception.

The cognitive approach does not always recognize physical ( biological psychology ) and environmental (behaviorist approach) factors in determining behavior.

Cognitive psychology has influenced and integrated with many other approaches and areas of study to produce, for example, social learning theory , cognitive neuropsychology, and artificial intelligence (AI).

Another strength is that the research conducted in this area of psychology very often has applications in the real world.

For example, cognitive behavioral therapy (CBT) has been very effective in treating depression (Hollon & Beck, 1994), and moderately effective for anxiety problems (Beck, 1993). CBT’s basis is to change how the person processes their thoughts to make them more rational or positive.

By highlighting the importance of cognitive processing, the cognitive approach can explain mental disorders such as depression, where Beck argues that it is the negative schemas we hold about the self, the world, and the future which lead to depression rather than external events.

References

Atkinson, R. C., & Shiffrin, R. M. (1968). Human memory: A proposed system and its control processes. In K. W. Spence & J. T. Spence (Eds.), The psychology of learning and motivation (Vol. 2, pp. 89–195). New York: Academic Press.

Beck, A. T., & Steer, R. A. (1993). Beck Anxiety Inventory manual. San Antonio: Harcourt Brace and Company.

Hollon, S. D., & Beck, A. T. (1994). Cognitive and cognitive-behavioral therapies. In A. E. Bergin & S. L. Garfield (Eds.), Handbook of psychotherapy and behavior change (pp. 428–466). New York: Wiley.

Köhler, W. (1925). An aspect of Gestalt psychology. The Pedagogical Seminary and Journal of Genetic Psychology, 32(4), 691–723.

Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2), 81–97.

Neisser, U. (1967). Cognitive psychology. New York: Appleton-Century-Crofts.

Newell, A., & Simon, H. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall.

Tolman, E. C., Hall, C. S., & Bretnall, E. P. (1932). A disproof of the law of effect and a substitution of the laws of emphasis, motivation and disruption. Journal of Experimental Psychology, 15(6), 601.

Tolman, E. C. (1948). Cognitive maps in rats and men. Psychological Review, 55, 189–208.

Wiener, N. (1948). Cybernetics: or control and communication in the animal and the machine. Cambridge, MA: MIT Press.

Further Reading

  • Why Your Brain is Not a Computer
  • Cognitive Psychology Historical Development



People also looked at

Review article, the use of research methods in psychological research: a systematised review.

evaluate the research methods used by cognitive psychologists

  • 1 Community Psychosocial Research (COMPRES), School of Psychosocial Health, North-West University, Potchefstroom, South Africa
  • 2 WorkWell Research Institute, North-West University, Potchefstroom, South Africa

Research methods play an imperative role in research quality as well as educating young researchers, however, the application thereof is unclear which can be detrimental to the field of psychology. Therefore, this systematised review aimed to determine what research methods are being used, how these methods are being used and for what topics in the field. Our review of 999 articles from five journals over a period of 5 years indicated that psychology research is conducted in 10 topics via predominantly quantitative research methods. Of these 10 topics, social psychology was the most popular. The remainder of the conducted methodology is described. It was also found that articles lacked rigour and transparency in the used methodology which has implications for replicability. In conclusion this article, provides an overview of all reported methodologies used in a sample of psychology journals. It highlights the popularity and application of methods and designs throughout the article sample as well as an unexpected lack of rigour with regard to most aspects of methodology. Possible sample bias should be considered when interpreting the results of this study. It is recommended that future research should utilise the results of this study to determine the possible impact on the field of psychology as a science and to further investigation into the use of research methods. Results should prompt the following future research into: a lack or rigour and its implication on replication, the use of certain methods above others, publication bias and choice of sampling method.

Introduction

Psychology is an ever-growing and popular field ( Gough and Lyons, 2016 ; Clay, 2017 ). Due to this growth and the need for science-based research to base health decisions on ( Perestelo-Pérez, 2013 ), the use of research methods in the broad field of psychology is an essential point of investigation ( Stangor, 2011 ; Aanstoos, 2014 ). Research methods are therefore viewed as important tools used by researchers to collect data ( Nieuwenhuis, 2016 ) and include the following: quantitative, qualitative, mixed method and multi method ( Maree, 2016 ). Additionally, researchers also employ various types of literature reviews to address research questions ( Grant and Booth, 2009 ). According to literature, what research method is used and why a certain research method is used is complex as it depends on various factors that may include paradigm ( O'Neil and Koekemoer, 2016 ), research question ( Grix, 2002 ), or the skill and exposure of the researcher ( Nind et al., 2015 ). How these research methods are employed is also difficult to discern as research methods are often depicted as having fixed boundaries that are continuously crossed in research ( Johnson et al., 2001 ; Sandelowski, 2011 ). Examples of this crossing include adding quantitative aspects to qualitative studies ( Sandelowski et al., 2009 ), or stating that a study used a mixed-method design without the study having any characteristics of this design ( Truscott et al., 2010 ).

The inappropriate use of research methods affects how students and researchers improve and utilise their research skills ( Scott Jones and Goldring, 2015 ), how theories are developed ( Ngulube, 2013 ), and the credibility of research results ( Levitt et al., 2017 ). This, in turn, can be detrimental to the field ( Nind et al., 2015 ), journal publication ( Ketchen et al., 2008 ; Ezeh et al., 2010 ), and attempts to address public social issues through psychological research ( Dweck, 2017 ). This is especially important given the now well-known replication crisis the field is facing ( Earp and Trafimow, 2015 ; Hengartner, 2018 ).

Due to this lack of clarity on method use and the potential impact of inept use of research methods, the aim of this study was to explore the use of research methods in the field of psychology through a review of journal publications. Chaichanasakul et al. (2011) identify reviewing articles as the opportunity to examine the development, growth and progress of a research area and overall quality of a journal. Studies such as Lee et al. (1999) as well as Bluhm et al. (2011) review of qualitative methods has attempted to synthesis the use of research methods and indicated the growth of qualitative research in American and European journals. Research has also focused on the use of research methods in specific sub-disciplines of psychology, for example, in the field of Industrial and Organisational psychology Coetzee and Van Zyl (2014) found that South African publications tend to consist of cross-sectional quantitative research methods with underrepresented longitudinal studies. Qualitative studies were found to make up 21% of the articles published from 1995 to 2015 in a similar study by O'Neil and Koekemoer (2016) . Other methods in health psychology, such as Mixed methods research have also been reportedly growing in popularity ( O'Cathain, 2009 ).

A broad overview of the use of research methods in the field of psychology as a whole is however, not available in the literature. Therefore, our research focused on answering what research methods are being used, how these methods are being used and for what topics in practice (i.e., journal publications) in order to provide a general perspective of method used in psychology publication. We synthesised the collected data into the following format: research topic [areas of scientific discourse in a field or the current needs of a population ( Bittermann and Fischer, 2018 )], method [data-gathering tools ( Nieuwenhuis, 2016 )], sampling [elements chosen from a population to partake in research ( Ritchie et al., 2009 )], data collection [techniques and research strategy ( Maree, 2016 )], and data analysis [discovering information by examining bodies of data ( Ktepi, 2016 )]. A systematised review of recent articles (2013 to 2017) collected from five different journals in the field of psychological research was conducted.

Grant and Booth (2009) describe systematised reviews as the review of choice for post-graduate studies, which is employed using some elements of a systematic review and seldom more than one or two databases to catalogue studies after a comprehensive literature search. The aspects used in this systematised review that are similar to that of a systematic review were a full search within the chosen database and data produced in tabular form ( Grant and Booth, 2009 ).

Sample sizes and timelines vary in systematised reviews (see Lowe and Moore, 2014 ; Pericall and Taylor, 2014 ; Barr-Walker, 2017 ). With no clear parameters identified in the literature (see Grant and Booth, 2009 ), the sample size of this study was determined by the purpose of the sample ( Strydom, 2011 ), and time and cost constraints ( Maree and Pietersen, 2016 ). Thus, a non-probability purposive sample ( Ritchie et al., 2009 ) of the top five psychology journals from 2013 to 2017 was included in this research study. Per Lee (2015) American Psychological Association (APA) recommends the use of the most up-to-date sources for data collection with consideration of the context of the research study. As this research study focused on the most recent trends in research methods used in the broad field of psychology, the identified time frame was deemed appropriate.

Psychology journals were only included if they formed part of the top five English journals in the miscellaneous psychology domain of the Scimago Journal and Country Rank ( Scimago Journal & Country Rank, 2017 ). The Scimago Journal and Country Rank provides a yearly updated list of publicly accessible journal and country-specific indicators derived from the Scopus ® database ( Scopus, 2017b ) by means of the Scimago Journal Rank (SJR) indicator developed by Scimago from the algorithm Google PageRank™ ( Scimago Journal & Country Rank, 2017 ). Scopus is the largest global database of abstracts and citations from peer-reviewed journals ( Scopus, 2017a ). Reasons for the development of the Scimago Journal and Country Rank list was to allow researchers to assess scientific domains, compare country rankings, and compare and analyse journals ( Scimago Journal & Country Rank, 2017 ), which supported the aim of this research study. Additionally, the goals of the journals had to focus on topics in psychology in general with no preference to specific research methods and have full-text access to articles.

The following list of top five journals in 2018 fell within the abovementioned inclusion criteria (1) Australian Journal of Psychology, (2) British Journal of Psychology, (3) Europe's Journal of Psychology, (4) International Journal of Psychology and lastly the (5) Journal of Psychology Applied and Interdisciplinary.

Journals were excluded from this systematised review if no full-text versions of their articles were available, if journals explicitly stated a publication preference for certain research methods, or if the journal only published articles in a specific discipline of psychological research (for example, industrial psychology, clinical psychology etc.).

The researchers followed a procedure (see Figure 1 ) adapted from that of Ferreira et al. (2016) for systematised reviews. Data collection and categorisation commenced on 4 December 2017 and continued until 30 June 2019. All the data was systematically collected and coded manually ( Grant and Booth, 2009 ) with an independent person acting as co-coder. Codes of interest included the research topic, method used, the design used, sampling method, and methodology (the method used for data collection and data analysis). These codes were derived from the wording in each article. Themes were created based on the derived codes and checked by the co-coder. Lastly, these themes were catalogued into a table as per the systematised review design.

www.frontiersin.org

Figure 1 . Systematised review procedure.

According to Johnston et al. (2019) , “literature screening, selection, and data extraction/analyses” (p. 7) are specifically tailored to the aim of a review. Therefore, the steps followed in a systematic review must be reported in a comprehensive and transparent manner. The chosen systematised design adhered to the rigour expected from systematic reviews with regard to full search and data produced in tabular form ( Grant and Booth, 2009 ). The rigorous application of the systematic review is, therefore discussed in relation to these two elements.

Firstly, to ensure a comprehensive search, this research study promoted review transparency by following a clear protocol outlined according to each review stage before collecting data ( Johnston et al., 2019 ). This protocol was similar to that of Ferreira et al. (2016) and approved by three research committees/stakeholders and the researchers ( Johnston et al., 2019 ). The eligibility criteria for article inclusion was based on the research question and clearly stated, and the process of inclusion was recorded on an electronic spreadsheet to create an evidence trail ( Bandara et al., 2015 ; Johnston et al., 2019 ). Microsoft Excel spreadsheets are a popular tool for review studies and can increase the rigour of the review process ( Bandara et al., 2015 ). Screening for appropriate articles for inclusion forms an integral part of a systematic review process ( Johnston et al., 2019 ). This step was applied to two aspects of this research study: the choice of eligible journals and articles to be included. Suitable journals were selected by the first author and reviewed by the second and third authors. Initially, all articles from the chosen journals were included. Then, by process of elimination, those irrelevant to the research aim, i.e., interview articles or discussions etc., were excluded.

To ensure rigorous data extraction, data were first extracted by one reviewer, and an independent person verified the results for completeness and accuracy ( Johnston et al., 2019 ). The research question served as a guide for efficient, organised data extraction ( Johnston et al., 2019 ). Data were categorised according to the codes of interest, along with article identifiers for audit trails, such as the authors, title, and aims of each article. The categorised data were based on the aim of the review ( Johnston et al., 2019 ) and synthesised in tabular form under the methods used, how these methods were used, and for what topics in the field of psychology.

The initial search produced a total of 1,145 articles from the five journals identified. The inclusion and exclusion criteria resulted in a final sample of 999 articles ( Figure 2 ). Articles were co-coded into 84 codes, from which 10 themes were derived ( Table 1 ).


Figure 2 . Journal article frequency.


Table 1 . Codes used to form themes (research topics).

These 10 themes represent the topic section of our research question ( Figure 3 ). All these topics, except for the final one, psychological practice , were found to concur with the research areas in psychology identified by Weiten (2010) . These research areas were chosen to represent the derived codes because they provide broad definitions that allowed clear, concise categorisation of the vast amount of data. Article codes were categorised under a particular theme/topic if they adhered to the research area definitions created by Weiten (2010) . It is important to note that these areas of research do not refer to specific disciplines in psychology, such as industrial psychology, but to broader fields that may encompass sub-interests of those disciplines.


Figure 3 . Topic frequency (international sample).

In the case of developmental psychology , researchers conduct research into human development from childhood to old age. Social psychology includes research on behaviour governed by social drivers. Researchers in the field of educational psychology study how people learn and the best ways to teach them. Health psychology aims to determine the effect of psychological factors on physiological health. Physiological psychology , on the other hand, looks at the influence of physiological aspects on behaviour. Experimental psychology , although not the only theme to use experimental research, focuses on the traditional core topics of psychology (for example, sensation). Cognitive psychology studies the higher mental processes. Psychometrics is concerned with measuring capacity or behaviour. Personality research aims to assess and describe consistency in human behaviour ( Weiten, 2010 ). The final theme, psychological practice , refers to the experiences, techniques, and interventions employed by practitioners, researchers, and academics in the field of psychology.

Articles under these themes were further subdivided by methodology: method, sampling, design, data collection, and data analysis. Categorisation was based on information stated in the articles, not inferred by the researchers. Data were compiled into two sets of results presented in this article. The first set addresses the aim of this study from the perspective of the topics identified; the second provides a broad overview of the results from the perspective of the methodology employed. The second set of results is discussed in this article, while the first is presented in table format. The discussion thus provides a broad overview of method use in psychology (across all themes), while the tables give readers in-depth insight into the methods used within each identified theme. We believe that presenting the data from both perspectives allows readers a broad understanding of the results. Because of the large amount of information that made up our results, we followed Cichocka and Jost (2014) in simplifying them. Please note that the methodology counts indicated in the tables differ from the total number of articles, as some articles employed more than one method, sampling technique, design, data collection method, or data analysis in their studies.

What follows are the results for what methods are used, how these methods are used, and which topics in psychology they are applied to . Percentages are reported to two decimal places in order to highlight small differences in the occurrence of methodology.
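Because an article can contribute more than one method, the percentages that follow are taken over occurrences rather than over articles. A minimal sketch of that computation, using made-up occurrence counts rather than the study's actual figures:

```python
from collections import Counter

# Hypothetical occurrence list: one entry per method *use*, so an article
# that employs two methods contributes two entries. Percentages are taken
# over occurrences, not articles, and rounded to two decimal places.
occurrences = ["quantitative"] * 9 + ["qualitative"] * 1

def percentages(items):
    """Return each item's share of all occurrences, to two decimal places."""
    counts = Counter(items)
    total = sum(counts.values())
    return {k: round(100 * v / total, 2) for k, v in counts.items()}

pct = percentages(occurrences)    # {'quantitative': 90.0, 'qualitative': 10.0}
```

Dividing by the occurrence total (rather than the article count) is why the tabulated percentages can be reconciled with counts that exceed the 999 included articles.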

Firstly, with regard to the research methods used, our results show that researchers were far more likely to use quantitative research methods (90.22%) than any other. Qualitative research was the second most common method but made up only 4.79% of general method usage. Reviews occurred almost as often as qualitative studies (3.91%), making them the third most popular method. Mixed methods studies (0.98%) occurred across most themes, whereas multi-method research was indicated in only one study, amounting to 0.10% of the methods identified. The specific use of each method in the topics identified is shown in Table 2 and Figure 4 .


Table 2 . Research methods in psychology.


Figure 4 . Research method frequency in topics.

Secondly, in the case of how these research methods are employed , our study indicated the following.

Sampling: 78.34% of the studies in the collected articles did not specify a sampling method. From the remaining studies, 13 types of sampling method were identified. These included broad categorisations of a sample as, for example, a probability or non-probability sample. General convenience samples were the most likely to be applied (10.34%), followed by random sampling (3.51%), snowball sampling (2.73%), and purposive (1.37%) and cluster sampling (1.27%). The remaining sampling methods each occurred to a more limited extent (0–1.0%). See Table 3 and Figure 5 for the sampling methods employed in each topic.


Table 3 . Sampling use in the field of psychology.


Figure 5 . Sampling method frequency in topics.

Designs were categorised based on what the articles themselves stated. It is therefore important to note that, in the case of quantitative studies, non-experimental designs (25.55%) were often recorded because no experiment or other design was indicated, which, according to Laher (2016) , is a reasonable categorisation. Non-experimental designs should thus be compared with experimental designs only when describing the data, as the category may include correlational/cross-sectional designs that were not overtly stated by the authors. For the remaining research methods, "not stated" (7.12%) was assigned to articles with no design type indicated.

Of the 36 identified designs, the most popular were experimental (25.64%) and cross-sectional (23.17%), which concurred with the high number of quantitative studies. Longitudinal studies (3.80%), the third most popular design, were used in both quantitative and qualitative studies. Qualitative designs consisted of ethnography (0.38%), interpretative phenomenological designs/phenomenology (0.28%), and narrative designs (0.28%). Studies that employed the review method were mostly categorised as "not stated," the most frequently stated review design being the systematic review (0.57%). The few mixed methods studies employed exploratory, explanatory (0.09%), and concurrent designs (0.19%), with some studies referring to separate designs for the qualitative and quantitative methods. The one study that identified itself as a multi-method study used a longitudinal design. How these designs were employed in each specific topic is shown in Table 4 and Figure 6 .


Table 4 . Design use in the field of psychology.


Figure 6 . Design frequency in topics.

Data collection and analysis: data collection included 30 methods, the method most often employed being questionnaires (57.84%). The experimental task (16.56%) was the second most preferred collection method; this category included both established tasks and unique tasks designed by the researchers. Cognitive ability tests (6.84%) were also regularly used, along with various forms of interviewing (7.66%). Table 5 and Figure 7 represent data collection use in the various topics. Data analysis consisted of 3,857 occurrences categorised into ±188 data analysis techniques, shown in Table 6 and Figures 1 – 7 . Descriptive statistics were the most commonly used (23.49%), along with correlational analysis (17.19%). When using a qualitative method, researchers generally employed thematic analysis (0.52%) or other forms of analysis leading to coding and the creation of themes. Review studies presented few data analysis methods, with most studies categorising their results. Mixed method and multi-method studies followed the analysis methods identified for the qualitative and quantitative studies included.


Table 5 . Data collection in the field of psychology.


Figure 7 . Data collection frequency in topics.


Table 6 . Data analysis in the field of psychology.

Results for the topics researched in psychology can be seen in the tables, as previously stated in this article. It is noteworthy that, of the 10 topics, social psychology accounted for 43.54% of the studies, with cognitive psychology the second most popular research topic at 16.92%. The remaining topics each occurred in only 4.0–7.0% of the articles considered. A list of the 999 included articles is available under the section "View Articles" on the following website: https://methodgarden.xtrapolate.io/ . This website was created by Scholtz et al. (2019) to visually present a research framework based on this article's results.

This systematised review categorised full-length articles from five international journals across a span of 5 years to provide insight into the use of research methods in the field of psychology. The results indicate what methods are used, how these methods are being used, and for what topics (why) in the included sample of articles. Given the limited sample, the results should be seen as providing insight into method use rather than as a comprehensive representation of the field. To our knowledge, this is the first research study to address this topic in this manner. Our discussion attempts to promote a productive way forward in terms of the key results for method use in psychology, especially in the field of academia ( Holloway, 2008 ).

With regard to the methods used, our data accorded with the literature, finding only common research methods ( Grant and Booth, 2009 ; Maree, 2016 ) that varied in the degree to which they were employed. Quantitative research was found to be the most popular method, as indicated by the literature ( Breen and Darlaston-Jones, 2010 ; Counsell and Harlow, 2017 ) and by previous studies in specific areas of psychology (see Coetzee and Van Zyl, 2014 ). Its long history as the first research method ( Leech et al., 2007 ) in the field of psychology, as well as researchers' current application of mathematical approaches in their studies ( Toomela, 2010 ), might contribute to its popularity today. Whatever the case may be, our results show that, despite the growth in qualitative research ( Demuth, 2015 ; Smith and McGannon, 2018 ), quantitative research remains the first choice for article publication in these journals, even though the included journals indicate openness to articles applying any research method. This finding may be due to qualitative research still being seen as a new method ( Burman and Whelan, 2011 ) or to reviewers' standards being higher for qualitative studies ( Bluhm et al., 2011 ). Future research into possible bias in the publication of research methods is encouraged; additionally, further investigation of the proclaimed growth of qualitative research with a different sample may provide different results.

Review studies were found to outnumber multi-method and mixed methods studies. To this effect, Grant and Booth (2009) state that increased awareness, journal calls for contributions, and the efficiency of reviews in procuring research funds all promote their popularity. The low frequency of mixed methods studies contradicts the view in the literature that this is the third most utilised research method ( Tashakkori and Teddlie, 2003 ). Its low occurrence in this sample could be due to opposing views on mixing methods ( Gunasekare, 2015 ), to authors preferring to publish in mixed methods journals when using this method, or to its relative novelty ( Ivankova et al., 2016 ). Despite its low occurrence, the application of the mixed methods design was methodologically clear in all cases, which was not the case for the remainder of the research methods.

Additionally, a substantial number of studies used a combination of methodologies without being mixed or multi-method studies. According to the literature, perceived fixed boundaries between methods are often set aside in order to address the aim of a study, as this result confirms, which could create a new and helpful way of understanding the world ( Gunasekare, 2015 ). According to Toomela (2010) , this is not unheard of and could be considered a form of "structural systemic science," as in the case of qualitative methodology (observation) applied in quantitative studies (experimental design). Based on this result, further research into this phenomenon, as well as its implications for research methods such as multi-method and mixed methods research, is recommended.

Discerning how these research methods were applied presented some difficulty. In the case of sampling, most studies, regardless of method, mentioned some form of inclusion and exclusion criteria, but no definite sampling method. This result, along with the fact that samples often consisted of students from the researchers' own academic institutions, can contribute to literature and debates among academics ( Peterson and Merunka, 2014 ; Laher, 2016 ). Samples of convenience, and students as participants in particular, raise questions about the generalisability and applicability of results ( Peterson and Merunka, 2014 ). Attention to sampling is important because inappropriate sampling can undermine the legitimacy of interpretations ( Onwuegbuzie and Collins, 2017 ). Future investigation into the possible implications of this reported popularity of convenience samples for the field of psychology, as well as the reasons for their use, could provide interesting insight and is encouraged by this study.

Additionally, as indicated in Table 6 , articles seldom report the research designs used, which highlights a pressing concern: the lack of rigour in the included sample. Rigour with regard to the applied empirical method is imperative in promoting psychology as a science ( American Psychological Association, 2020 ). Omitting parts of the research process from publication, when they could have been used to inform others' research skills, should be questioned, and the influence on the process of replicating results should be considered. Publications are often rejected due to a lack of rigour in the applied method and designs ( Fonseca, 2013 ; Laher, 2016 ), calling for increased clarity and knowledge of method application. Replication is a critical part of any field of scientific research and requires the "complete articulation" of the study methods used ( Drotar, 2010 , p. 804). The lack of thorough description could be explained by the requirements of certain journals to report only certain aspects of a research process, especially with regard to the applied design ( Laher, 2016 ). However, naming aspects such as sampling and designs is a requirement according to the APA's Journal Article Reporting Standards (JARS-Quant) ( Appelbaum et al., 2018 ). With very little information on how a study was conducted, authors lose a valuable opportunity to enhance research validity, enrich the knowledge of others, and contribute to the growth of psychology and methodology as a whole. In the case of this research study, it also restricted our results to reported samples and designs, which indicated a preference for certain designs, such as cross-sectional designs for quantitative studies.

Data collection and analysis were, for the most part, clearly stated. A key result was the versatile use of questionnaires: researchers applied questionnaires in various ways, for example as questionnaire interviews, online surveys, and written questionnaires, across most research methods. This may highlight a trend for future research.

With regard to the topics these methods were employed for, our research study found a new field, named "psychological practice." This result may show the growing consciousness of researchers as part of the research process ( Denzin and Lincoln, 2003 ), of psychological practice, and of knowledge generation. The most popular of these topics was social psychology, which is generously covered by journals and learned societies, a testament to the institutional support and richness social psychology enjoys in the field of psychology ( Chryssochoou, 2015 ). The APA's perspective on 2018 trends in psychology likewise identifies an increased focus on how social determinants influence people's health ( Deangelis, 2017 ).

This study was not without limitations, and the following should be taken into account. Firstly, this study used a sample of five specific journals to address its aim. Despite the journals' general aims (as stated on their websites), this selection biased the results towards the research methods published in these specific journals and limited generalisability. A broader sample of journals over a different period of time, or a single journal over a longer period, might provide different results. A second limitation is the use of Excel spreadsheets and an electronic system to log articles, a manual process that left room for error ( Bandara et al., 2015 ). To address this potential issue, co-coding was performed to reduce error. Lastly, this article categorised data based on the information presented in the article sample; there was no interpretation of what methodology could have been applied or of whether the stated methods adhered to the criteria for those methods. Thus, the large number of articles that did not clearly indicate a research method or design could influence the results of this review, although this in itself was also a noteworthy result. Future research could review the research methods of a broader sample of journals with an interpretive review tool that increases rigour. Additionally, the authors encourage the future use of systematised review designs as a way to promote a concise procedure in applying this design.

Our research study presented the use of research methods in published articles in the field of psychology, as well as recommendations for future research based on these results. Insight was gained into the complex questions identified in the literature regarding what methods are used, how these methods are being used, and for what topics (why). This sample preferred quantitative methods, used convenience sampling, and presented a lack of rigorous accounts of the remaining methodologies. All methodologies that were clearly indicated in the sample were tabulated to give researchers insight into the general use of methods, not only the most frequently used ones. The lack of rigorous accounts of research methods in articles was represented in depth for each step in the research process and can be of vital importance in addressing the current replication crisis within the field of psychology. The recommendations for future research aim to motivate research into the practical implications of these results for psychology, for example publication bias and the use of convenience samples.

Ethics Statement

This study was cleared by the North-West University Health Research Ethics Committee: NWU-00115-17-S1.

Author Contributions

All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Aanstoos, C. M. (2014). Psychology . Available online at: http://eds.a.ebscohost.com.nwulib.nwu.ac.za/eds/detail/detail?sid=18de6c5c-2b03-4eac-94890145eb01bc70%40sessionmgr4006&vid=1&hid=4113&bdata=JnNpdGU9ZWRzLWxpdmU%3d#AN=93871882&db=ers


American Psychological Association (2020). Science of Psychology . Available online at: https://www.apa.org/action/science/

Appelbaum, M., Cooper, H., Kline, R. B., Mayo-Wilson, E., Nezu, A. M., and Rao, S. M. (2018). Journal article reporting standards for quantitative research in psychology: the APA Publications and Communications Board task force report. Am. Psychol. 73:3. doi: 10.1037/amp0000191


Bandara, W., Furtmueller, E., Gorbacheva, E., Miskon, S., and Beekhuyzen, J. (2015). Achieving rigor in literature reviews: insights from qualitative data analysis and tool-support. Commun. Ass. Inform. Syst. 37, 154–204. doi: 10.17705/1CAIS.03708


Barr-Walker, J. (2017). Evidence-based information needs of public health workers: a systematized review. J. Med. Libr. Assoc. 105, 69–79. doi: 10.5195/JMLA.2017.109

Bittermann, A., and Fischer, A. (2018). How to identify hot topics in psychology using topic modeling. Z. Psychol. 226, 3–13. doi: 10.1027/2151-2604/a000318

Bluhm, D. J., Harman, W., Lee, T. W., and Mitchell, T. R. (2011). Qualitative research in management: a decade of progress. J. Manage. Stud. 48, 1866–1891. doi: 10.1111/j.1467-6486.2010.00972.x

Breen, L. J., and Darlaston-Jones, D. (2010). Moving beyond the enduring dominance of positivism in psychological research: implications for psychology in Australia. Aust. Psychol. 45, 67–76. doi: 10.1080/00050060903127481

Burman, E., and Whelan, P. (2011). Problems in / of Qualitative Research . Maidenhead: Open University Press/McGraw Hill.

Chaichanasakul, A., He, Y., Chen, H., Allen, G. E. K., Khairallah, T. S., and Ramos, K. (2011). Journal of Career Development: a 36-year content analysis (1972–2007). J. Career. Dev. 38, 440–455. doi: 10.1177/0894845310380223

Chryssochoou, X. (2015). Social Psychology. Inter. Encycl. Soc. Behav. Sci. 22, 532–537. doi: 10.1016/B978-0-08-097086-8.24095-6

Cichocka, A., and Jost, J. T. (2014). Stripped of illusions? Exploring system justification processes in capitalist and post-Communist societies. Inter. J. Psychol. 49, 6–29. doi: 10.1002/ijop.12011

Clay, R. A. (2017). Psychology is More Popular Than Ever. Monitor on Psychology: Trends Report . Available online at: https://www.apa.org/monitor/2017/11/trends-popular

Coetzee, M., and Van Zyl, L. E. (2014). A review of a decade's scholarly publications (2004–2013) in the South African Journal of Industrial Psychology. SA. J. Psychol . 40, 1–16. doi: 10.4102/sajip.v40i1.1227

Counsell, A., and Harlow, L. (2017). Reporting practices and use of quantitative methods in Canadian journal articles in psychology. Can. Psychol. 58, 140–147. doi: 10.1037/cap0000074

Deangelis, T. (2017). Targeting Social Factors That Undermine Health. Monitor on Psychology: Trends Report . Available online at: https://www.apa.org/monitor/2017/11/trend-social-factors

Demuth, C. (2015). New directions in qualitative research in psychology. Integr. Psychol. Behav. Sci. 49, 125–133. doi: 10.1007/s12124-015-9303-9

Denzin, N. K., and Lincoln, Y. (2003). The Landscape of Qualitative Research: Theories and Issues , 2nd Edn. London: Sage.

Drotar, D. (2010). A call for replications of research in pediatric psychology and guidance for authors. J. Pediatr. Psychol. 35, 801–805. doi: 10.1093/jpepsy/jsq049

Dweck, C. S. (2017). Is psychology headed in the right direction? Yes, no, and maybe. Perspect. Psychol. Sci. 12, 656–659. doi: 10.1177/1745691616687747

Earp, B. D., and Trafimow, D. (2015). Replication, falsification, and the crisis of confidence in social psychology. Front. Psychol. 6:621. doi: 10.3389/fpsyg.2015.00621

Ezeh, A. C., Izugbara, C. O., Kabiru, C. W., Fonn, S., Kahn, K., Manderson, L., et al. (2010). Building capacity for public and population health research in Africa: the consortium for advanced research training in Africa (CARTA) model. Glob. Health Action 3:5693. doi: 10.3402/gha.v3i0.5693

Ferreira, A. L. L., Bessa, M. M. M., Drezett, J., and De Abreu, L. C. (2016). Quality of life of the woman carrier of endometriosis: systematized review. Reprod. Clim. 31, 48–54. doi: 10.1016/j.recli.2015.12.002

Fonseca, M. (2013). Most Common Reasons for Journal Rejections . Available online at: http://www.editage.com/insights/most-common-reasons-for-journal-rejections

Gough, B., and Lyons, A. (2016). The future of qualitative research in psychology: accentuating the positive. Integr. Psychol. Behav. Sci. 50, 234–243. doi: 10.1007/s12124-015-9320-8

Grant, M. J., and Booth, A. (2009). A typology of reviews: an analysis of 14 review types and associated methodologies. Health Info. Libr. J. 26, 91–108. doi: 10.1111/j.1471-1842.2009.00848.x

Grix, J. (2002). Introducing students to the generic terminology of social research. Politics 22, 175–186. doi: 10.1111/1467-9256.00173

Gunasekare, U. L. T. P. (2015). Mixed research method as the third research paradigm: a literature review. Int. J. Sci. Res. 4, 361–368. Available online at: https://ssrn.com/abstract=2735996

Hengartner, M. P. (2018). Raising awareness for the replication crisis in clinical psychology by focusing on inconsistencies in psychotherapy Research: how much can we rely on published findings from efficacy trials? Front. Psychol. 9:256. doi: 10.3389/fpsyg.2018.00256

Holloway, W. (2008). Doing intellectual disagreement differently. Psychoanal. Cult. Soc. 13, 385–396. doi: 10.1057/pcs.2008.29

Ivankova, N. V., Creswell, J. W., and Plano Clark, V. L. (2016). “Foundations and Approaches to mixed methods research,” in First Steps in Research , 2nd Edn. K. Maree (Pretoria: Van Schaick Publishers), 306–335.

Johnson, M., Long, T., and White, A. (2001). Arguments for British pluralism in qualitative health research. J. Adv. Nurs. 33, 243–249. doi: 10.1046/j.1365-2648.2001.01659.x

Johnston, A., Kelly, S. E., Hsieh, S. C., Skidmore, B., and Wells, G. A. (2019). Systematic reviews of clinical practice guidelines: a methodological guide. J. Clin. Epidemiol. 108, 64–72. doi: 10.1016/j.jclinepi.2018.11.030

Ketchen, D. J. Jr., Boyd, B. K., and Bergh, D. D. (2008). Research methodology in strategic management: past accomplishments and future challenges. Organ. Res. Methods 11, 643–658. doi: 10.1177/1094428108319843

Ktepi, B. (2016). Data Analytics (DA) . Available online at: https://eds-b-ebscohost-com.nwulib.nwu.ac.za/eds/detail/detail?vid=2&sid=24c978f0-6685-4ed8-ad85-fa5bb04669b9%40sessionmgr101&bdata=JnNpdGU9ZWRzLWxpdmU%3d#AN=113931286&db=ers

Laher, S. (2016). Ostinato rigore: establishing methodological rigour in quantitative research. S. Afr. J. Psychol. 46, 316–327. doi: 10.1177/0081246316649121

Lee, C. (2015). The Myth of the Off-Limits Source . Available online at: http://blog.apastyle.org/apastyle/research/

Lee, T. W., Mitchell, T. R., and Sablynski, C. J. (1999). Qualitative research in organizational and vocational psychology, 1979–1999. J. Vocat. Behav. 55, 161–187. doi: 10.1006/jvbe.1999.1707

Leech, N. L., and Onwuegbuzie, A. J. (2007). A typology of mixed methods research designs. Qual. Quant. 43, 265–275. doi: 10.1007/s11135-007-9105-3

Levitt, H. M., Motulsky, S. L., Wertz, F. J., Morrow, S. L., and Ponterotto, J. G. (2017). Recommendations for designing and reviewing qualitative research in psychology: promoting methodological integrity. Qual. Psychol. 4, 2–22. doi: 10.1037/qup0000082

Lowe, S. M., and Moore, S. (2014). Social networks and female reproductive choices in the developing world: a systematized review. Rep. Health 11:85. doi: 10.1186/1742-4755-11-85

Maree, K. (2016). “Planning a research proposal,” in First Steps in Research , 2nd Edn, ed K. Maree (Pretoria: Van Schaik Publishers), 49–70.

Maree, K., and Pietersen, J. (2016). “Sampling,” in First Steps in Research, 2nd Edn , ed K. Maree (Pretoria: Van Schaik Publishers), 191–202.

Ngulube, P. (2013). Blending qualitative and quantitative research methods in library and information science in sub-Saharan Africa. ESARBICA J. 32, 10–23. Available online at: http://hdl.handle.net/10500/22397 .

Nieuwenhuis, J. (2016). “Qualitative research designs and data-gathering techniques,” in First Steps in Research , 2nd Edn, ed K. Maree (Pretoria: Van Schaik Publishers), 71–102.

Nind, M., Kilburn, D., and Wiles, R. (2015). Using video and dialogue to generate pedagogic knowledge: teachers, learners and researchers reflecting together on the pedagogy of social research methods. Int. J. Soc. Res. Methodol. 18, 561–576. doi: 10.1080/13645579.2015.1062628

O'Cathain, A. (2009). Editorial: mixed methods research in the health sciences—a quiet revolution. J. Mix. Methods 3, 1–6. doi: 10.1177/1558689808326272

O'Neil, S., and Koekemoer, E. (2016). Two decades of qualitative research in psychology, industrial and organisational psychology and human resource management within South Africa: a critical review. SA J. Indust. Psychol. 42, 1–16. doi: 10.4102/sajip.v42i1.1350

Onwuegbuzie, A. J., and Collins, K. M. (2017). The role of sampling in mixed methods research enhancing inference quality. Köln Z Soziol. 2, 133–156. doi: 10.1007/s11577-017-0455-0

Perestelo-Pérez, L. (2013). Standards on how to develop and report systematic reviews in psychology and health. Int. J. Clin. Health Psychol. 13, 49–57. doi: 10.1016/S1697-2600(13)70007-3

Pericall, L. M. T., and Taylor, E. (2014). Family function and its relationship to injury severity and psychiatric outcome in children with acquired brain injury: a systematized review. Dev. Med. Child Neurol. 56, 19–30. doi: 10.1111/dmcn.12237

Peterson, R. A., and Merunka, D. R. (2014). Convenience samples of college students and research reproducibility. J. Bus. Res. 67, 1035–1041. doi: 10.1016/j.jbusres.2013.08.010

Ritchie, J., Lewis, J., and Elam, G. (2009). “Designing and selecting samples,” in Qualitative Research Practice: A Guide for Social Science Students and Researchers , 2nd Edn, ed J. Ritchie and J. Lewis (London: Sage), 1–23.

Sandelowski, M. (2011). When a cigar is not just a cigar: alternative perspectives on data and data analysis. Res. Nurs. Health 34, 342–352. doi: 10.1002/nur.20437

Sandelowski, M., Voils, C. I., and Knafl, G. (2009). On quantitizing. J. Mix. Methods Res. 3, 208–222. doi: 10.1177/1558689809334210

Scholtz, S. E., De Klerk, W., and De Beer, L. T. (2019). A data generated research framework for conducting research methods in psychological research.

Scimago Journal & Country Rank (2017). Available online at: http://www.scimagojr.com/journalrank.php?category=3201&year=2015

Scopus (2017a). About Scopus . Available online at: https://www.scopus.com/home.uri (accessed February 01, 2017).

Scopus (2017b). Document Search . Available online at: https://www.scopus.com/home.uri (accessed February 01, 2017).

Scott Jones, J., and Goldring, J. E. (2015). 'I'm not a quants person': key strategies in building competence and confidence in staff who teach quantitative research methods. Int. J. Soc. Res. Methodol. 18, 479–494. doi: 10.1080/13645579.2015.1062623

Smith, B., and McGannon, K. R. (2018). Developing rigor in quantitative research: problems and opportunities within sport and exercise psychology. Int. Rev. Sport Exerc. Psychol. 11, 101–121. doi: 10.1080/1750984X.2017.1317357

Stangor, C. (2011). Introduction to Psychology . Available online at: http://www.saylor.org/books/

Strydom, H. (2011). “Sampling in the quantitative paradigm,” in Research at Grass Roots; For the Social Sciences and Human Service Professions , 4th Edn, eds A. S. de Vos, H. Strydom, C. B. Fouché, and C. S. L. Delport (Pretoria: Van Schaik Publishers), 221–234.

Tashakkori, A., and Teddlie, C. (2003). Handbook of Mixed Methods in Social & Behavioural Research . Thousand Oaks, CA: SAGE publications.

Toomela, A. (2010). Quantitative methods in psychology: inevitable and useless. Front. Psychol. 1:29. doi: 10.3389/fpsyg.2010.00029

Truscott, D. M., Swars, S., Smith, S., Thornton-Reid, F., Zhao, Y., Dooley, C., et al. (2010). A cross-disciplinary examination of the prevalence of mixed methods in educational research: 1995–2005. Int. J. Soc. Res. Methodol. 13, 317–328. doi: 10.1080/13645570903097950

Weiten, W. (2010). Psychology Themes and Variations . Belmont, CA: Wadsworth.

Keywords: research methods, research approach, research trends, psychological research, systematised review, research designs, research topic

Citation: Scholtz SE, de Klerk W and de Beer LT (2020) The Use of Research Methods in Psychological Research: A Systematised Review. Front. Res. Metr. Anal. 5:1. doi: 10.3389/frma.2020.00001

Received: 30 December 2019; Accepted: 28 February 2020; Published: 20 March 2020.

Reviewed by:

Copyright © 2020 Scholtz, de Klerk and de Beer. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Salomé Elizabeth Scholtz, 22308563@nwu.ac.za

  • Skip to main content
  • Skip to primary sidebar

Cognitive Psychology Research Methods

The methodological tools that cognitive psychologists use depend in large part upon the area of study. Thus, we provide an overview of the methods used in a number of distinct areas including perception, memory, attention, and language processing, along with some discussion of methods that cut across these areas.

Perceptual Methods

During the initial stage of stimulus processing, an individual encodes/perceives the stimulus. Encoding can be viewed as the process of translating the sensory energy of a stimulus into a meaningful pattern. However, before a stimulus can be encoded, a minimum or threshold amount of sensory energy is required to detect that stimulus. In psychophysics, the method of limits and the method of constant stimuli have been used to determine sensory thresholds. The method of limits converges on sensory thresholds by using sub- and suprathreshold intensities of stimuli. From these anchor points, the intensity of a stimulus is gradually increased or decreased until it is at its sensory threshold and is just detectable by the participant. In contrast, the method of constant stimuli converges on a sensory threshold by using a series of trials in which participants decide whether a stimulus was presented or not, and the experimenter varies the intensity of the stimulus. At the sensory threshold, participants are at chance of discriminating between the presence and absence of a stimulus.
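The logic of the method of constant stimuli can be sketched in a few lines of code. The intensity scale, trial counts, and the 50% criterion below are illustrative assumptions, not data from any particular study:

```python
def threshold_from_constant_stimuli(yes_counts, trials_per_intensity):
    """Method of constant stimuli: present fixed intensities many times
    each, then take the threshold as the lowest intensity at which the
    participant reports "yes" on at least 50% of trials."""
    for intensity in sorted(yes_counts):
        if yes_counts[intensity] / trials_per_intensity >= 0.5:
            return intensity
    return None  # never detected reliably at any tested intensity

# Illustrative data: 20 trials at each of five intensities
yes_counts = {1: 2, 2: 5, 3: 9, 4: 12, 5: 18}
threshold = threshold_from_constant_stimuli(yes_counts, 20)  # -> 4
```

In practice researchers usually fit a smooth psychometric function through the detection proportions rather than taking the first crossing, but the difference-from-chance logic is the same.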

Although sensory threshold procedures have been important, these methods fail to recognize the role of nonsensory factors in stimulus processing. Thus signal detection theory was developed to take into account an individual's biases in responding to a given signal in a particular context (Green & Swets, 1966). The notion is that target stimuli produce some signal that is always available in a background of noise and that the payoffs for hits (correctly responding "yes" when the stimulus is presented) and correct rejections (correctly responding "absent" when the stimulus is not presented) modulate the likelihood of an individual reporting that a stimulus is present or absent. One example of this is a sonar operator in a submarine hearing signals that could be interpreted as an enemy ship or background noise. Because it is very important to detect a signal in this situation, the sonar operator may be biased to say "yes," another ship is present, even when the stimulus intensity is very low and could just be background noise. This bias will not only lead to a high hit probability, but it will also lead to a high false-alarm probability (i.e., incorrectly reporting that a ship is there when there is only noise). Signal detection theory allows researchers to tease apart the sensitivity that the participant has in discriminating between the noise and signal-plus-noise distributions (reflected by changes in a statistic called d prime) and any bias that the individual may bring into the decision-making situation (reflected by changes in a statistic called beta).
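These statistics fall out of the standard equal-variance Gaussian model via the inverse normal (z) transform of the hit and false-alarm rates. A minimal sketch follows; the rates for the "sonar operator" are made up for illustration:

```python
import math
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    """Equal-variance Gaussian signal detection measures: sensitivity
    (d'), criterion location (c), and likelihood-ratio bias (beta)."""
    z = NormalDist().inv_cdf               # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(fa_rate)     # separation of the two distributions
    c = -0.5 * (z(hit_rate) + z(fa_rate))  # 0 = neutral; negative = liberal
    beta = math.exp(d_prime * c)           # likelihood ratio at the criterion
    return d_prime, c, beta

# A liberal sonar operator: many hits, but also many false alarms
d_prime, c, beta = sdt_measures(hit_rate=0.90, fa_rate=0.30)
```

Raising the payoff for hits shifts c and beta (the criterion moves) while leaving d' untouched, which is exactly the sensitivity/bias separation described above.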

Signal detection theory has been used to illustrate the independent roles of signal strength and response bias not only in perceptual experiments, but also in other domains such as memory and decision making. Consistent with the distinction between sensitivity and bias, variables such as subject motivation and the proportion of signal trials have been shown to influence the placement of the decision criterion but not the distance between the signal-plus-noise and noise distributions on the sensory energy scale. On the other hand, variables such as stimulus intensity have been shown to influence the distance between the signal-plus-noise and noise distributions but not the placement of the decision criterion.

Memory Methods

One of the first studies of human cognition was the work of Ebbinghaus (1885/1913), who demonstrated that one could experimentally investigate distinct aspects of memory. One of the methods that Ebbinghaus developed was the savings-in-learning technique, in which he studied lists of nonsense syllables (e.g., puv) to a criterion of perfect recitation. Memory was defined as the reduction in the number of trials necessary to relearn a list relative to the number of trials necessary to first learn a list. Since the work of Ebbinghaus, there has been considerable development in the methods used to study memory.
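Ebbinghaus's savings measure is simple enough to express directly. The trial counts below are invented for illustration, not his data:

```python
def savings_percent(trials_to_learn, trials_to_relearn):
    """Savings in relearning: the percentage reduction in trials needed
    to relearn a list relative to the trials needed to learn it initially."""
    return 100 * (trials_to_learn - trials_to_relearn) / trials_to_learn

# e.g., 20 trials to first learn a list, 12 to relearn it after a delay
saving = savings_percent(20, 12)  # -> 40.0 percent saved
```

A savings score of zero means relearning took as long as original learning, i.e., no measurable retention.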

Researchers often attempt to distinguish between three different aspects of memory: encoding (the initial storage of information), retention (the delay between storage and the use of information), and retrieval (the access of the earlier stored information). For example, one way of investigating encoding processes is to manipulate the participants' expectancies. In an intentional memory task, participants are explicitly told that they will receive a memory test. In contrast, during an incidental memory test, participants are given a secondary task that may vary with respect to the types of processes engaged (e.g., making a deep semantic decision about a word versus simply counting the letters). Hyde and Jenkins (1969) found that both the intentionality of learning and the type of encoding during incidental memory tasks influenced later memory performance.

Studies of the retention of information most often involve varying the delay between study and test to investigate the influences of the passage of time on memory performance. However, researchers soon realized that it is not simply the passage of time that is important but also what occurs during the passage of time. In order to address the influence of interfering information, researchers developed retroactive interference paradigms, in which the similarity of the information presented during a retention interval was manipulated. Results from such studies indicate that interference is a powerful modulator of memory performance (see Anderson & Neely, 1996, for a review).

There are two general classes of methods used in memory research to tap into retrieval processes. On an explicit memory test, the participants are presented a list of materials during an encoding stage, and at some later point in time they are given a test in which they are asked to retrieve the earlier presented material. There are three common measures of explicit memory: recall, recognition, and cued recall. During a recall test (akin to a classroom essay test), participants attempt to remember material presented earlier either in the order that it was presented (serial recall) or in any order (free recall). Researchers often compare the order of information during recall to the initial order of presentation (serial recall functions) and also the organizational strategies that individuals invoke during the retrieval process (measures of subjective organization and clustering). In order to investigate more complex materials such as stories and discourse processing, researchers sometimes measure the propositional structure of the recalled information. The notion is that in order to comprehend a story, individuals rely on a network of interconnected propositions. A proposition is a flexible representation of a sentence that contains a predicate (e.g., an adjective or a verb) and an argument (e.g., a noun or a pronoun). By looking at the recall of the propositions, one can provide insights into the representation that the individuals may have gleaned from a story (Kintsch & van Dijk, 1978).

Of course, there may be memories available that the individual may not be able to produce in a free recall test. Thus, researchers sometimes employ a cued recall test, which is quite similar to free recall, with the exception that the participant is provided with a cue at the time of recall that may aid in the retrieval of the information that was presented earlier. In a recognition task (akin to a classroom multiple-choice test), participants are given the information presented earlier and are asked to discriminate this information from new information. The two most common types of recognition tests are the forced-choice recognition test and the free-choice or yes/no recognition test. On a forced-choice recognition test, a participant chooses which of two or more items is old. On a yes/no recognition test, a participant indicates whether each item in a large set of items is old or new.

A second general class of memory tests has some similarity to Ebbinghaus's classic savings method. These are called implicit memory tests. The distinguishing aspect of implicit tests is that participants are not directly asked to recollect an earlier episode. Rather, participants are asked to engage in a task where performance often benefits from earlier exposure to the stimulus items. For example, participants might be presented with a list of words (e.g., elephant, library, assassin) to name aloud during encoding, and then later they would be presented with a list of word fragments (e.g., _le_a_t) or word stems (e.g., ele_) to complete. Some of these fragments or stems might reflect earlier presented items while others may reflect new items. In this way, one can measure the benefit (also called priming) of previous exposure to the items compared to novel items. Interestingly, amnesics are often unimpaired in such implicit memory tests, while showing considerable impairment in explicit memory tests, such as free recall.
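Priming on such a test is typically scored as a difference in completion rates between old and new items. The counts below are hypothetical:

```python
def priming_score(old_completed, old_total, new_completed, new_total):
    """Implicit-memory priming: completion rate for previously studied
    (old) fragments minus the baseline rate for new fragments."""
    return old_completed / old_total - new_completed / new_total

# Hypothetical stem-completion data: 18 of 30 old stems completed
# versus 9 of 30 new stems completed
priming = priming_score(18, 30, 9, 30)  # -> 0.30
```

A score near zero would indicate no measurable benefit of the earlier exposure; amnesic patients often show normal scores here despite failing free recall.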

Chronometric Methods

In addition to relying on experiments to discriminate among mental operations, cognitive psychologists have attempted to provide information regarding the speed of mental operations. Interestingly, this work began over a century ago with the work of Donders (1868/1969), who was the first to use reaction times to measure the speed of mental operations. In an attempt to isolate the speed of mental processes, Donders developed a set of response time tasks that would appear to differ only in a simple component of processing. For example, task A might require Process 1 (stimulus encoding), whereas task B might require Process 1 (stimulus encoding) and Process 2 (binary decision). According to Donders's subtractive method, cognitive operations can be added and removed without influencing other cognitive operations. This has been referred to as the assumption of pure insertion and deletion. In the previous example, the duration of the binary decision process can be estimated by subtracting the reaction time in task A from the reaction time in task B.
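Under the pure-insertion assumption, the subtraction itself is trivial. The mean reaction times below are illustrative values, not Donders's data:

```python
def subtractive_estimate(rt_without_ms, rt_with_ms):
    """Donders's subtractive method: the duration of the inserted stage
    is estimated as the mean RT of the task containing it minus the mean
    RT of the otherwise-identical task lacking it (pure insertion)."""
    return rt_with_ms - rt_without_ms

# Task A (encoding only): 220 ms; task B (encoding + binary decision): 300 ms
decision_duration = subtractive_estimate(220, 300)  # -> 80 ms for the decision
```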

Sternberg (1969) pointed out that the pure insertion assumptions of subtractive factors have some inherent difficulties. For example, it is possible that the speed of a given process might change when coupled with other processes. Therefore, one cannot provide a pure estimate of the speed of a given process. As an alternative, Sternberg introduced additive factors logic. According to additive factors logic, if a task contains distinct processes, there should be variables that selectively influence the speed of each process. Thus, if two variables influence different processes, their effects should be statistically additive. However, if two variables influence the same process, their effects should statistically interact. In this way, additive factor methods allow one to use studies of response latency to provide information regarding the sequence of stages and the manner in which such processes are influenced by independent variables.
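The additivity test amounts to an interaction contrast on a 2x2 design. The cell means below are invented to illustrate the additive case:

```python
def interaction_contrast(cell_means):
    """Additive-factors check on a 2x2 design keyed by (levelX, levelY):
    the interaction contrast is ~0 when the two variables' effects are
    additive, consistent with their influencing different stages."""
    return ((cell_means[(1, 1)] - cell_means[(1, 0)])
            - (cell_means[(0, 1)] - cell_means[(0, 0)]))

# Hypothetical mean RTs (ms): variable X adds 50 ms, variable Y adds
# 30 ms, and their effects do not interact
rts = {(0, 0): 500, (1, 0): 550, (0, 1): 530, (1, 1): 580}
contrast = interaction_contrast(rts)  # -> 0: effects are additive
```

A reliably nonzero contrast (evaluated with the usual inferential statistics on real data) would instead suggest the two variables converge on a shared processing stage.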

Unfortunately, even additive factors logic has some difficulties. Specifically, additive factors logic works if one assumes a discrete serial stage model of information processing in which the output of a processing stage is not passed on to the next stage until that stage is complete. However, there is a second class of models that assume that the output of a given stage can begin exerting an influence on the next stage of processing before completion. These are called cascade models to capture the notion that the flow of mental processes (like a stream over multiple stones) can occur simultaneously across multiple stages. McClelland (1979) has shown that if one assumes a cascade model, then one cannot use additive factors logic to unequivocally determine the locus of the effects of independent variables.

One cannot consider reaction time measures without considering accuracy because there is an inherent tradeoff between speed and accuracy. Specifically, most behaviors are less accurate when completed too quickly (e.g., consider the danger associated with driving too fast, or the errors associated with solving a set of arithmetic problems under time demands). Most chronometric researchers attempt to ensure that accuracy is quite high, most often above 90% correct, thereby minimizing the concern about accuracy. However, Pachella (1974) has developed an idealized speed-accuracy tradeoff function that provides estimates of changes in speed across conditions and how such changes might relate to changes in accuracy. The importance of Pachella's work is that at some locations of the speed-accuracy tradeoff function, very small changes in accuracy can lead to large changes in response latency and vice versa. More recently, researchers have capitalized on the relation between speed and accuracy to empirically obtain estimates of speed-accuracy functions across different conditions. In these deadline experiments, participants are given a probe that signals the participant to terminate processing at a given point in time. By varying the delay of the deadline, one can track changes in the speed-accuracy function across conditions and thereby determine if an effect of a variable is in encoding and/or retrieval of information (see Meyer, Osman, Irwin, & Yantis, 1988, for a review).

It is important to note that although the temporal dynamics of virtually all cognitive processes can (and probably should) be measured, studies of attention and language processing are the areas that have relied most heavily on chronometric methods. For example, in the area of word recognition, researchers have used the lexical decision task (participants make word/nonword judgments) and speeded naming performance (the time taken to begin the overt pronunciation of a word) to develop models of word recognition. These studies have looked at variables such as the frequency of the stimulus (e.g., orb versus dog), the concreteness of the stimulus (e.g., faith versus truck), or the syntactic class (e.g., dog versus run). In addition, eye-tracking methods have been developed that allow one to measure how long the reader is looking at a particular word (e.g., fixation and gaze measures) while engaged in more natural reading. Eye-tracking methods have allowed important insights into the semantic and syntactic processes that modulate the speed of recognizing and integrating a word with other words in the surrounding text.

Researchers in the area of attention have also relied quite heavily on speeded tasks. For example, two common techniques in attention research are interference paradigms and cueing paradigms. In interference paradigms, at least two stimuli are presented that compete for output. A classic example of this is the Stroop task, in which a person is asked to name the ink color of a printed word. Under conditions of conflict, that is, when the word green is printed in red ink, there is a considerable increase in response latency compared to nonconflict conditions (e.g., the word deep printed in red ink). In the second class of speeded attention tasks, individuals are presented with visual cues to orient attention to specific locations in the visual field. A target is either presented at that location or at a different location. The difference in response latency to cued and uncued targets is used to measure the effectiveness of the attentional cue.
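Both paradigms reduce to difference scores on mean response latency. The latencies below are fabricated for illustration:

```python
def mean(xs):
    return sum(xs) / len(xs)

def interference_ms(conflict_rts, nonconflict_rts):
    """Stroop interference: mean color-naming latency on conflict trials
    (e.g., "green" printed in red ink) minus mean latency on
    non-conflict trials."""
    return mean(conflict_rts) - mean(nonconflict_rts)

def cueing_benefit_ms(uncued_rts, cued_rts):
    """Spatial cueing benefit: how much faster targets at cued locations
    are responded to than targets at uncued locations."""
    return mean(uncued_rts) - mean(cued_rts)

# Hypothetical latencies in milliseconds
stroop = interference_ms([720, 750, 705], [640, 655, 625])    # -> 85.0
cueing = cueing_benefit_ms([480, 500, 490], [440, 450, 430])  # -> 50.0
```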

Cross-Population Studies

Although cognitive psychologists rely most heavily on college students as their target sample, there is an increasing interest in studying cognitive operations across quite distinct populations. For example, there are studies of cognition from early childhood to older adulthood that attempt to trace developmental changes in specific operations such as memory, attention, and language processing. In addition, there are studies of special populations that may have a breakdown in a particular cognitive operation. Specifically, there has been considerable work attempting to understand the attentional breakdowns that occur in schizophrenia and the memory breakdowns that occur in Alzheimer's disease. Thus, researchers have begun to explore distinct populations to provide further leverage in isolating cognitive activity.

Case Studies

After a trauma to the brain, there are sometimes breakdowns in apparently isolated components of cognitive performance. Thus, one may provide insights into the cognitive architecture by studying these individuals and the degree to which such cognitive processes are isolated. For example, there is the classic case of H.M. in memory research. H.M. are the initials of an individual who, because of an operation to relieve epilepsy, acquired severe memory loss on explicit tests, although performance on implicit memory tests was relatively intact. In addition, there are classic dissociations across individuals with different types of language breakdowns. For example, Broca's aphasics have relatively spared comprehension processes but difficulty producing fluent speech. In contrast, Wernicke's aphasics have impaired comprehension processes but relatively fluent speech production.

Measures of Brain Activity

With the increasing technical sophistication of the neurosciences, there has been an influx of studies that measure the correlates of mental activity in the brain (Posner & Raichle, 1994). Although other methods are available, we will only review the three most common here. The first is the evoked potential method. In this method, the researcher measures the electrical activity of systems of neurons (i.e., brain waves) as the individual is engaged in some cognitive task. This procedure has excellent temporal resolution, but the specific locus in the brain that is producing the activity can be relatively equivocal.

An approach that has much better spatial resolution is positron emission tomography (PET). In this approach, the individual receives an injection of a radioactive isotope that emits signals that are measured by a scanner. The notion is that there will be increased blood flow (which carries the isotope) to the most active areas of the brain. In this way, one can isolate mental operations by measuring brain activity under specific task demands. Typically, these scans involve about a minute of some form of cognitive processing (e.g., generating verbs to nouns), which is compared to other scans that involve some other cognitive process (e.g., reading nouns aloud). Given the window of time necessary for such scans, the PET approach has some obvious temporal limitations.

A third, more recent approach is functional magnetic resonance imaging (fMRI). This procedure is less invasive because it does not involve a radioactive injection. Moreover, there has been some progress made in this area, which suggests that one can look at a more fine-grained temporal resolution in fMRI, at least compared to PET techniques. Ultimately, the wedding of evoked potential and fMRI signals may provide the necessary temporal and spatial resolution of the neural signals that underlie cognitive processes.

Computational Modeling

Most models of cognition, although grounded in the experimental method, are metaphorical and noncomputational in nature (e.g., short-term versus long-term memory stores). However, there is also an important method in cognitive psychology that uses computationally explicit models. One example of this approach is connectionist/neural network modeling, in which relatively simple processing units are often layered in a highly interconnected network (Rumelhart & McClelland, 1986). Activation patterns across the simple processing units are computationally tracked across time to make specific predictions regarding the effects of stimulus and task manipulations. Computational models are used in a number of ways to better understand the cognitive architecture. First, these models force researchers to be very explicit regarding the underlying assumptions of metaphorical models. Second, these models often can be used to help explain differences across conditions. Specifically, if a manipulation has a given effect in the data, then one may be able to trace that effect in the architecture within the model. Third, these models can provide important insights into different ways of viewing a given set of data. For example, as noted earlier, McClelland (1979) demonstrated that cascade models can handle data that were initially viewed as supportive of serial stage models.
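A minimal sketch of the connectionist idea, assuming logistic units and hand-picked weights (nothing here corresponds to any published model):

```python
import math

def logistic(x):
    """Standard logistic squashing function, mapping any sum to (0, 1)."""
    return 1 / (1 + math.exp(-x))

def forward(inputs, hidden_weights, output_weights):
    """One forward pass through a tiny two-layer network: each hidden
    unit passes its weighted input sum through the logistic function,
    and the output unit does the same over the hidden activations."""
    hidden = [logistic(sum(i * w for i, w in zip(inputs, ws)))
              for ws in hidden_weights]
    return logistic(sum(h * w for h, w in zip(hidden, output_weights)))

# Two input units -> two hidden units -> one output unit
activation = forward([1.0, 0.0],
                     hidden_weights=[[0.5, -0.3], [0.8, 0.1]],
                     output_weights=[1.2, -0.7])
```

In a real modeling effort the weights would be learned (e.g., by backpropagation) and activations tracked over time, but even this skeleton shows why such models force assumptions to be explicit: every connection and unit must be specified numerically.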

The goal of this article is to provide an encapsulated review of some of the methods that cognitive psychol­ogists use to better understand mental operations. Re­searchers in cognitive psychology have developed a set of research tools that are almost as rich and diverse as cognition itself.


What Is Cognitive Psychology?

The Science of How We Think

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."


Steven Gans, MD is board-certified in psychiatry and is an active supervisor, teacher, and mentor at Massachusetts General Hospital.


Topics in Cognitive Psychology

  • Current Research
  • Cognitive Approach in Practice
  • Careers in Cognitive Psychology
  • How Cognitive Psychology Differs From Other Branches of Psychology
  • Frequently Asked Questions

Cognitive psychology involves the study of internal mental processes—all of the workings inside your brain, including perception, thinking, memory, attention, language, problem-solving, and learning.

Cognitive psychology, the study of how people think and process information, helps researchers understand the human brain. It also allows psychologists to help people deal with psychological difficulties.

This article discusses what cognitive psychology is, the history of this field, and current directions for research. It also covers some of the practical applications for cognitive psychology research and related career options you might consider.

Findings from cognitive psychology help us understand how people think, including how they acquire and store memories. By knowing more about how these processes work, psychologists can develop new ways of helping people with cognitive problems.

Cognitive psychologists explore a wide variety of topics related to thinking processes. Some of these include: 

  • Attention: our ability to process information in the environment while tuning out irrelevant details
  • Choice-based behavior: actions driven by a choice among other possibilities
  • Decision-making
  • Information processing
  • Language acquisition: how we learn to read, write, and express ourselves
  • Problem-solving
  • Speech perception: how we process what others are saying
  • Visual perception: how we see the physical world around us

History of Cognitive Psychology

Although cognitive psychology is a relatively young branch of psychology, it has quickly grown to become one of the most popular subfields, rising to prominence between the 1950s and 1970s.

Prior to this time, behaviorism was the dominant perspective in psychology. This theory holds that we learn all our behaviors from interacting with our environment. It focuses strictly on observable behavior, not thought and emotion. Then, researchers became more interested in the internal processes that affect behavior instead of just the behavior itself. 

This shift is often referred to as the cognitive revolution in psychology. During this time, a great deal of research on topics including memory, attention, and language acquisition began to emerge. 

In 1967, the psychologist Ulric Neisser introduced the term cognitive psychology, which he defined as the study of the processes behind the perception, transformation, storage, and recovery of information.

Cognitive psychology became more prominent after the 1950s as a result of the cognitive revolution.

Current Research in Cognitive Psychology

The field of cognitive psychology is both broad and diverse. It touches on many aspects of daily life. There are numerous practical applications for this research, such as providing help coping with memory disorders, making better decisions, recovering from brain injury, treating learning disorders, and structuring educational curricula to enhance learning.

Current research on cognitive psychology helps play a role in how professionals approach the treatment of mental illness, traumatic brain injury, and degenerative brain diseases.

Thanks to the work of cognitive psychologists, we can better pinpoint ways to measure human intellectual abilities, develop new strategies to combat memory problems, and decode the workings of the human brain—all of which ultimately have a powerful impact on how we treat cognitive disorders.

The field of cognitive psychology is a rapidly growing area that continues to add to our understanding of the many influences that mental processes have on our health and daily lives.

From understanding how cognitive processes change as a child develops to looking at how the brain transforms sensory inputs into perceptions, cognitive psychology has helped us gain a deeper and richer understanding of the many mental events that contribute to our daily existence and overall well-being.

The Cognitive Approach in Practice

In addition to adding to our understanding of how the human mind works, the field of cognitive psychology has also had an impact on approaches to mental health. Before the 1970s, many mental health treatments were focused more on psychoanalytic, behavioral, and humanistic approaches.

The so-called "cognitive revolution" put a greater emphasis on understanding the way people process information and how thinking patterns might contribute to psychological distress. Thanks to research in this area, new approaches to treatment were developed to help treat depression, anxiety, phobias, and other psychological disorders.

Cognitive behavioral therapy and rational emotive behavior therapy are two methods in which clients and therapists focus on the underlying cognitions, or thoughts, that contribute to psychological distress.

What Is Cognitive Behavioral Therapy?

Cognitive behavioral therapy (CBT) is an approach that helps clients identify irrational beliefs and other cognitive distortions that are in conflict with reality and then aid them in replacing such thoughts with more realistic, healthy beliefs.

If you are experiencing symptoms of a psychological disorder that would benefit from the use of cognitive approaches, you might see a psychologist who has specific training in these cognitive treatment methods.

These professionals frequently go by titles other than cognitive psychologists, such as psychiatrists, clinical psychologists, or counseling psychologists, but many of the strategies they use are rooted in the cognitive tradition.

Many cognitive psychologists specialize in research with universities or government agencies. Others take a clinical focus and work directly with people who are experiencing challenges related to mental processes. They work in hospitals, mental health clinics, and private practices.

Research psychologists in this area often concentrate on a particular topic, such as memory. Others work directly on health concerns related to cognition, such as degenerative brain disorders and brain injuries.

Treatments rooted in cognitive research focus on helping people replace negative thought patterns with more positive, realistic ones. With the help of cognitive psychologists, people are often able to find ways to cope and even overcome such difficulties.

Reasons to Consult a Cognitive Psychologist

  • Alzheimer's disease, dementia, or memory loss
  • Brain trauma treatment
  • Cognitive therapy for a mental health condition
  • Interventions for learning disabilities
  • Perceptual or sensory issues
  • Therapy for a speech or language disorder

Whereas behavioral and some other realms of psychology focus on actions, which are external and observable, cognitive psychology is instead concerned with the thought processes behind the behavior. Cognitive psychologists see the mind as if it were a computer, taking in and processing information, and seek to understand the various factors involved.

A Word From Verywell

Cognitive psychology plays an important role in understanding the processes of memory, attention, and learning. It can also provide insights into cognitive conditions that may affect how people function.

Being diagnosed with a brain or cognitive health problem can be daunting, but it is important to remember that you are not alone. Together with a healthcare provider, you can come up with an effective treatment plan to help address brain health and cognitive problems.

Your treatment may involve consulting with a cognitive psychologist who has a background in the specific area of concern that you are facing, or you may be referred to another mental health professional who has training and experience with your particular condition.

Ulric Neisser is considered the founder of cognitive psychology. He was the first to introduce the term and to define the field of cognitive psychology. His primary interests were in the areas of perception and memory, but he suggested that all aspects of human thought and behavior were relevant to the study of cognition.

A cognitive map refers to a mental representation of an environment. Such maps can be formed through observation as well as through trial and error. These cognitive maps allow people to orient themselves in their environment.

Although they share some similarities, there are important differences between cognitive neuroscience and cognitive psychology. Cognitive psychology focuses on thinking processes, whereas cognitive neuroscience focuses on finding connections between thinking and specific brain activity. Cognitive neuroscience also looks at the underlying biology that influences how information is processed.

Cognitive psychology is a form of experimental psychology. Cognitive psychologists use experimental methods to study the internal mental processes that play a role in behavior.


By Kendra Cherry, MSEd Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."



The Cognitive Approach

Last updated 5 Sept 2022


The idea that humans conduct mental processes on incoming information – i.e. human cognition – came to the fore of psychological thought during the mid-twentieth century, displacing the stimulus-response focus of the behaviourist approach. A dominant cognitive approach evolved, advocating that sensory information is manipulated internally before a response is made – influenced by, for instance, our motivations and beliefs.

Introspection – a subjective method predominantly used by philosophical and psychodynamic approaches – was rejected in favour of experimental methodology to study internal processes scientifically.

The cognitive approach assumes

  • The mind actively processes information from our senses (touch, taste etc.).
  • Between stimulus and response are complex mental processes, which can be studied scientifically.
  • Humans can be seen as data processing systems.
  • The workings of a computer and the human mind are alike – they encode and store information, and they have outputs.

The Study of Internal Mental Processes

Using experimental research methods, the cognitive approach studies internal mental processes such as attention, memory and decision-making. For example, an investigation might compare groups' ability to memorise a list of words presented either verbally or visually, in order to infer which type of sensory information is easiest to process, and could further investigate whether this changes with different word types or between individuals.

Theoretical and computer models are proposed to attempt to explain and infer information about mental processes. For example, the Information-Processing Model (Figure 1) describes the mind as if it were a computer, in terms of the relationship between incoming information to be encoded (from the senses), the mental manipulation of that information (e.g. storage, a decision), and the consequent output (e.g. a behaviour, an emotion). An example might be an artist looking at a picturesque landscape and deciding which paint colour suits a given area, before brushing the selected colour onto a canvas.
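To make the computer analogy concrete, here is a toy sketch of the encoding, processing and output stages. This is illustrative only: the function names `encode`, `process` and `respond` are our own invention, not part of any cognitive-modelling library.

```python
# Toy sketch of the information-processing analogy: input -> process -> output.
# All names and data here are invented for illustration.

def encode(stimulus: str) -> dict:
    """Sensory input is encoded into an internal representation."""
    return {"stimulus": stimulus, "features": stimulus.lower().split()}

def process(representation: dict, memory: dict) -> str:
    """Central processing: compare the input with stored knowledge and decide."""
    for feature in representation["features"]:
        if feature in memory:
            return memory[feature]   # retrieved association guides the decision
    return "explore"                 # default strategy when nothing is recognised

def respond(decision: str) -> str:
    """Output stage: the decision is expressed as behaviour."""
    return f"action: {decision}"

# The artist example from the text: stored colour associations guide the output.
memory = {"sky": "paint blue", "field": "paint green"}
print(respond(process(encode("A wide sky over a field"), memory)))  # action: paint blue
```

Here the stored colour associations play the role of memory: the same stimulus would produce a different output given different stored knowledge, which is the point of modelling behaviour as internal processing rather than as a fixed stimulus-response link.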


In recent decades, newer models including Computational and Connectionist models have taken some attention away from the previously dominant information-processing analogy:

  • The Computational model similarly compares the mind with a computer, but focuses more on how we structure the process of reaching the behavioural output (i.e. the aim, strategy and action taken), without specifying when or how much information is dealt with.
  • The Connectionist model takes a neural line of thought; it looks at the mind as a complex network of neurons, which activate in regular configurations that characterize known associations between stimuli.
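As a loose illustration of the connectionist idea, associations between stimuli can be pictured as weighted links in a network. The weights below are invented for illustration; real connectionist models learn their weights from data.

```python
# Illustrative-only sketch of the connectionist idea: concepts linked by
# weighted associations, where stronger weights mean stronger activation.
# (Hypothetical weights; genuine models learn these from experience.)
weights = {
    ("bread", "butter"): 0.9,   # strongly associated pair
    ("bread", "stone"):  0.1,   # weakly associated pair
}

def activation(stimulus: str, concept: str) -> float:
    """Spread of activation from a stimulus to an associated concept."""
    return weights.get((stimulus, concept), 0.0)

print(activation("bread", "butter"))  # 0.9
print(activation("bread", "car"))     # 0.0 (no learned association)
```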

The role of Schema

A key concept in the approach is the schema, an internal ‘script’ for how to act in, or what to expect from, a given situation. For example, gender schemas carry assumptions about how males and females behave and how best to respond accordingly, e.g. a child may assume that all boys enjoy playing football. Schemas operate like stereotypes and alter the mental processing of incoming information; their role in eyewitness testimony can be negative, as what somebody expects to see may distort their memory of what was actually witnessed.

Cognitive Neuroscience emergence

This related field became prevalent over the latter half of the twentieth century, incorporating neuroscience techniques such as brain scanning to study the impact of brain structures on cognitive processes.

Evaluation of the cognitive approach

  • Models have presented a useful means to help explain internal mental processes.
  • The approach provides a strong focus on internal mental processes, which behaviourists before did not.
  • The experimental methods used by the approach are considered scientific.
  • It could be argued that cognitive models over-simplify explanations for complex mental processes.
  • The data supporting cognitive theories often come from unrealistic tasks used in laboratory experiments, which puts the ecological validity of theories into question (i.e. whether or not they are truly representative of our normal cognitive patterns).
  • Comparing a human mind to a machine or computer is arguably an unsophisticated analogy.



Cognitive psychology and self-reports: models and methods

Affiliation: Behavioral Medicine Scientific Research Group, National Heart, Lung, and Blood Institute, Bethesda, MD 20892-7936, USA. PMID: 12769134. DOI: 10.1023/a:1023279029852

This article describes the models and methods that cognitive psychologists and survey researchers use to evaluate and experimentally test cognitive issues in questionnaire design and subsequently improve self-report instruments. These models and methods assess the cognitive processes underlying how respondents comprehend and generate answers to self-report questions. Cognitive processing models are briefly described. Non-experimental methods (expert cognitive review, cognitive task analysis, focus groups, and cognitive interviews) are described. Examples are provided of how these methods were effectively used to identify cognitive self-report issues. Experimental methods (cognitive laboratory experiments, field tests, and experiments embedded in field surveys) are described. Examples are provided of: (a) how laboratory experiments were designed to test the capability and accuracy of respondents in performing the cognitive tasks required to answer self-report questions, (b) how a field experiment was conducted in which a cognitively designed questionnaire was effectively tested against the original questionnaire, and (c) how a cognitive experiment embedded in a field survey was conducted to test cognitive predictions.



Ch 2: Psychological Research Methods

Figure: Children sit in front of a bank of television screens. A sign on the wall says, “Some content may not be suitable for children.”

Have you ever wondered whether the violence you see on television affects your behavior? Are you more likely to behave aggressively in real life after watching people behave violently in dramatic situations on the screen? Or, could seeing fictional violence actually get aggression out of your system, causing you to be more peaceful? How are children influenced by the media they are exposed to? A psychologist interested in the relationship between behavior and exposure to violent images might ask these very questions.

The topic of violence in the media today is contentious. Since ancient times, humans have been concerned about the effects of new technologies on our behaviors and thinking processes. The Greek philosopher Socrates, for example, worried that writing—a new technology at that time—would diminish people’s ability to remember because they could rely on written records rather than committing information to memory. In our world of quickly changing technologies, questions about the effects of media continue to emerge. Is it okay to talk on a cell phone while driving? Are headphones good to use in a car? What impact does text messaging have on reaction time while driving? These are types of questions that psychologist David Strayer asks in his lab.

Watch this short video to see how Strayer utilizes the scientific method to reach important conclusions regarding technology and driving safety.

You can view the transcript for “Understanding driver distraction” here (opens in new window) .

How can we go about finding answers that are supported not by mere opinion, but by evidence that we can all agree on? The findings of psychological research can help us navigate issues like this.

Introduction to the Scientific Method

Learning Objectives

  • Explain the steps of the scientific method
  • Describe why the scientific method is important to psychology
  • Summarize the processes of informed consent and debriefing
  • Explain how research involving humans or animals is regulated


Scientists are engaged in explaining and understanding how the world around them works, and they are able to do so by coming up with theories that generate hypotheses that are testable and falsifiable. Theories that stand up to their tests are retained and refined, while those that do not are discarded or modified. In this way, research enables scientists to separate fact from simple opinion. Having good information generated from research aids in making wise decisions both in public policy and in our personal lives. In this section, you’ll see how psychologists use the scientific method to study and understand behavior.

The Scientific Process


The goal of all scientists is to better understand the world around them. Psychologists focus their attention on understanding behavior, as well as the cognitive (mental) and physiological (body) processes that underlie behavior. In contrast to other methods that people use to understand the behavior of others, such as intuition and personal experience, the hallmark of scientific research is that there is evidence to support a claim. Scientific knowledge is empirical : It is grounded in objective, tangible evidence that can be observed time and time again, regardless of who is observing.

While behavior is observable, the mind is not. If someone is crying, we can see the behavior. However, the reason for the behavior is more difficult to determine. Is the person crying due to being sad, in pain, or happy? Sometimes we can learn the reason for someone’s behavior by simply asking a question, like “Why are you crying?” However, there are situations in which an individual is either uncomfortable or unwilling to answer the question honestly, or is incapable of answering. For example, infants would not be able to explain why they are crying. In such circumstances, the psychologist must be creative in finding ways to better understand behavior. This module explores how scientific knowledge is generated, and how important that knowledge is in forming decisions in our personal lives and in the public domain.

Process of Scientific Research

Figure: Flowchart of the scientific method. It begins with making an observation, then asking a question, forming a hypothesis that answers the question, making a prediction based on the hypothesis, doing an experiment to test the prediction, analyzing the results, finding the hypothesis supported or not supported, then reporting the results.

Scientific knowledge is advanced through a process known as the scientific method. Basically, ideas (in the form of theories and hypotheses) are tested against the real world (in the form of empirical observations), and those empirical observations lead to more ideas that are tested against the real world, and so on.

The basic steps in the scientific method are:

  • Observe a natural phenomenon and define a question about it
  • Make a hypothesis, or potential solution to the question
  • Test the hypothesis
  • If the hypothesis is true, find more evidence or find counter-evidence
  • If the hypothesis is false, create a new hypothesis or try again
  • Draw conclusions and repeat–the scientific method is never-ending, and no result is ever considered perfect

In order to ask an important question that may improve our understanding of the world, a researcher must first observe natural phenomena. By making observations, a researcher can define a useful question. After finding a question to answer, the researcher can then make a prediction (a hypothesis) about what he or she thinks the answer will be. This prediction is usually a statement about the relationship between two or more variables. After making a hypothesis, the researcher will then design an experiment to test his or her hypothesis and evaluate the data gathered. These data will either support or refute the hypothesis. Based on the conclusions drawn from the data, the researcher will then find more evidence to support the hypothesis, look for counter-evidence to further strengthen the hypothesis, revise the hypothesis and create a new experiment, or continue to incorporate the information gathered to answer the research question.

Basic Principles of the Scientific Method

Two key concepts in the scientific approach are theory and hypothesis. A theory is a well-developed set of ideas that propose an explanation for observed phenomena that can be used to make predictions about future observations. A hypothesis is a testable prediction that is arrived at logically from a theory. It is often worded as an if-then statement (e.g., if I study all night, I will get a passing grade on the test). The hypothesis is extremely important because it bridges the gap between the realm of ideas and the real world. As specific hypotheses are tested, theories are modified and refined to reflect and incorporate the result of these tests.

Figure: A diagram has four boxes: the top is labeled “theory,” the right is labeled “hypothesis,” the bottom is labeled “research,” and the left is labeled “observation.” Arrows flow clockwise from top to right to bottom to left and back to the top. The top right arrow is labeled “use the hypothesis to form a theory,” the bottom right arrow is labeled “design a study to test the hypothesis,” the bottom left arrow is labeled “perform the research,” and the top left arrow is labeled “create or modify the theory.”

Other key components in following the scientific method include verifiability, predictability, falsifiability, and fairness. Verifiability means that an experiment must be replicable by another researcher. To achieve verifiability, researchers must make sure to document their methods and clearly explain how their experiment is structured and why it produces certain results.

Predictability in a scientific theory implies that the theory should enable us to make predictions about future events. The precision of these predictions is a measure of the strength of the theory.

Falsifiability refers to whether a hypothesis can be disproved. For a hypothesis to be falsifiable, it must be logically possible to make an observation or do a physical experiment that would show that there is no support for the hypothesis. Even when a hypothesis cannot be shown to be false, that does not necessarily mean it is not valid. Future testing may disprove the hypothesis. This does not mean that a hypothesis has to be shown to be false, just that it can be tested.

To determine whether a hypothesis is supported or not supported, psychological researchers must conduct hypothesis testing using statistics. Hypothesis testing is a set of statistical procedures for assessing how likely the observed results would be if there were no real effect (the null hypothesis). If hypothesis testing reveals that results were “statistically significant,” there was support for the hypothesis, and the researchers can be reasonably confident that their result was not due to random chance. If the results are not statistically significant, the researchers’ hypothesis was not supported.
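As an illustration of the arithmetic behind such a test, the sketch below computes a t statistic for two hypothetical groups of memory-test scores. The data are invented for this example; a real analysis would normally use a statistics package (such as scipy.stats) and report an exact p-value.

```python
# Minimal sketch of the logic behind a significance test, using made-up
# memory-test scores for an experimental and a control group.
import math
from statistics import mean, stdev

control      = [48, 52, 50, 47, 53, 49, 51, 50]   # no intervention
experimental = [57, 60, 55, 58, 62, 56, 59, 61]   # received the manipulation

# Welch's t statistic: the group difference scaled by its standard error.
se = math.sqrt(stdev(control) ** 2 / len(control)
               + stdev(experimental) ** 2 / len(experimental))
t = (mean(experimental) - mean(control)) / se

print(f"t = {t:.2f}")  # t ≈ 7.60; a value this large is very unlikely by chance alone
```

The larger the absolute value of t, the less plausible it is that the observed group difference arose by random chance, which is exactly the judgment that “statistical significance” formalizes.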

Fairness implies that all data must be considered when evaluating a hypothesis. A researcher cannot pick and choose what data to keep and what to discard or focus specifically on data that support or do not support a particular hypothesis. All data must be accounted for, even if they invalidate the hypothesis.

Applying the Scientific Method

To see how this process works, let’s consider a specific theory and a hypothesis that might be generated from that theory. As you’ll learn in a later module, the James-Lange theory of emotion asserts that emotional experience relies on the physiological arousal associated with the emotional state. If you walked out of your home and discovered a very aggressive snake waiting on your doorstep, your heart would begin to race and your stomach churn. According to the James-Lange theory, these physiological changes would result in your feeling of fear. A hypothesis that could be derived from this theory might be that a person who is unaware of the physiological arousal that the sight of the snake elicits will not feel fear.

Remember that a good scientific hypothesis is falsifiable, or capable of being shown to be incorrect. Recall from the introductory module that Sigmund Freud had lots of interesting ideas to explain various human behaviors (Figure 5). However, a major criticism of Freud’s theories is that many of his ideas are not falsifiable; for example, it is impossible to imagine empirical observations that would disprove the existence of the id, the ego, and the superego, the three elements of personality described in Freud’s theories. Despite this, Freud’s theories are widely taught in introductory psychology texts because of their historical significance for personality psychology and psychotherapy, and their influence can still be seen in some modern forms of therapy.

Figure 5: (a) A photograph shows Freud holding a cigar. (b) The mind’s conscious and unconscious states are illustrated as an iceberg floating in water. Beneath the water’s surface in the “unconscious” area are the id, ego, and superego. The area just below the water’s surface is labeled “preconscious.” The area above the water’s surface is labeled “conscious.”

In contrast, the James-Lange theory does generate falsifiable hypotheses, such as the one described above. Some individuals who suffer significant injuries to their spinal columns are unable to feel the bodily changes that often accompany emotional experiences. Therefore, we could test the hypothesis by determining how emotional experiences differ between individuals who have the ability to detect these changes in their physiological arousal and those who do not. In fact, this research has been conducted and while the emotional experiences of people deprived of an awareness of their physiological arousal may be less intense, they still experience emotion (Chwalisz, Diener, & Gallagher, 1988).

Why the Scientific Method Is Important for Psychology

The use of the scientific method is one of the main features that separates modern psychology from earlier philosophical inquiries about the mind. Compared to chemistry, physics, and other “natural sciences,” psychology has long been considered one of the “social sciences” because of the subjective nature of the things it seeks to study. Many of the concepts that psychologists are interested in—such as aspects of the human mind, behavior, and emotions—are subjective and cannot be directly measured. Psychologists often rely instead on behavioral observations and self-reported data, which are considered by some to be illegitimate or lacking in methodological rigor. Applying the scientific method to psychology, therefore, helps to standardize the approach to understanding its very different types of information.

The scientific method allows psychological data to be replicated and confirmed in many instances, under different circumstances, and by a variety of researchers. Through replication of experiments, new generations of psychologists can reduce errors and broaden the applicability of theories. It also allows theories to be tested and validated instead of simply being conjectures that could never be verified or falsified. All of this allows psychologists to gain a stronger understanding of how the human mind works.

Scientific articles published in journals and psychology papers written in the style of the American Psychological Association (i.e., in “APA style”) are structured around the scientific method. These papers include an Introduction, which introduces the background information and outlines the hypotheses; a Methods section, which outlines the specifics of how the experiment was conducted to test the hypothesis; a Results section, which includes the statistics that tested the hypothesis and states whether it was supported or not supported; and a Discussion and Conclusion, which state the implications of finding support for, or no support for, the hypothesis. Writing articles and papers that adhere to the scientific method makes it easy for future researchers to repeat the study and attempt to replicate the results.

Ethics in Research

Today, scientists agree that good research is ethical in nature and is guided by a basic respect for human dignity and safety. However, as you will read in the Tuskegee Syphilis Study, this has not always been the case. Modern researchers must demonstrate that the research they perform is ethically sound. This section presents how ethical considerations affect the design and implementation of research conducted today.

Research Involving Human Participants

Any experiment involving the participation of human subjects is governed by extensive, strict guidelines designed to ensure that the experiment does not result in harm. Any research institution that receives federal support for research involving human participants must have access to an institutional review board (IRB) . The IRB is a committee of individuals often made up of members of the institution’s administration, scientists, and community members (Figure 6). The purpose of the IRB is to review proposals for research that involves human participants. The IRB reviews these proposals with the principles mentioned above in mind, and generally, approval from the IRB is required in order for the experiment to proceed.

Figure 6: A photograph shows a group of people seated around tables in a meeting room.

An institution’s IRB requires several components in any experiment it approves. For one, each participant must sign an informed consent form before they can participate in the experiment. An informed consent  form provides a written description of what participants can expect during the experiment, including potential risks and implications of the research. It also lets participants know that their involvement is completely voluntary and can be discontinued without penalty at any time. Furthermore, the informed consent guarantees that any data collected in the experiment will remain completely confidential. In cases where research participants are under the age of 18, the parents or legal guardians are required to sign the informed consent form.

While the informed consent form should be as honest as possible in describing exactly what participants will be doing, sometimes deception is necessary to prevent participants’ knowledge of the exact research question from affecting the results of the study. Deception involves purposely misleading experiment participants in order to maintain the integrity of the experiment, but not to the point where the deception could be considered harmful. For example, if we are interested in how our opinion of someone is affected by their attire, we might use deception in describing the experiment to prevent that knowledge from affecting participants’ responses. In cases where deception is involved, participants must receive a full debriefing  upon conclusion of the study—complete, honest information about the purpose of the experiment, how the data collected will be used, the reasons why deception was necessary, and information about how to obtain additional information about the study.

Dig Deeper: Ethics and the Tuskegee Syphilis Study

Unfortunately, the ethical guidelines that exist for research today were not always applied in the past. In 1932, poor, rural, black, male sharecroppers from Tuskegee, Alabama, were recruited to participate in an experiment conducted by the U.S. Public Health Service, with the aim of studying syphilis in black men (Figure 7). In exchange for free medical care, meals, and burial insurance, 600 men agreed to participate in the study. A little more than half of the men tested positive for syphilis, and they served as the experimental group (given that the researchers could not randomly assign participants to groups, this represents a quasi-experiment). The remaining syphilis-free individuals served as the control group. However, those individuals that tested positive for syphilis were never informed that they had the disease.

While there was no treatment for syphilis when the study began, by 1947 penicillin was recognized as an effective treatment for the disease. Despite this, no penicillin was administered to the participants in this study, and the participants were not allowed to seek treatment at any other facilities if they continued in the study. Over the course of 40 years, many of the participants unknowingly spread syphilis to their wives (and subsequently their children born from their wives) and eventually died because they never received treatment for the disease. This study was discontinued in 1972 when the experiment was discovered by the national press (Tuskegee University, n.d.). The resulting outrage over the experiment led directly to the National Research Act of 1974 and the strict ethical guidelines for research on humans described in this chapter. Why is this study unethical? How were the men who participated and their families harmed as a function of this research?

Figure 7: A photograph shows a person administering an injection.

Learn more about the Tuskegee Syphilis Study on the CDC website .

Research Involving Animal Subjects


Many psychological studies rely on animal subjects, often because the research could not ethically or practically be conducted with humans. This does not mean that animal researchers are immune to ethical concerns. Indeed, the humane and ethical treatment of animal research subjects is a critical aspect of this type of research. Researchers must design their experiments to minimize any pain or distress experienced by animals serving as research subjects.

Whereas IRBs review research proposals that involve human participants, animal experimental proposals are reviewed by an Institutional Animal Care and Use Committee (IACUC) . An IACUC consists of institutional administrators, scientists, veterinarians, and community members. This committee is charged with ensuring that all experimental proposals require the humane treatment of animal research subjects. It also conducts semi-annual inspections of all animal facilities to ensure that the research protocols are being followed. No animal research project can proceed without the committee’s approval.

Introduction to Approaches to Research

  • Differentiate between descriptive, correlational, and experimental research
  • Explain the strengths and weaknesses of case studies, naturalistic observation, and surveys
  • Describe the strength and weaknesses of archival research
  • Compare longitudinal and cross-sectional approaches to research
  • Explain what a correlation coefficient tells us about the relationship between variables
  • Describe why correlation does not mean causation
  • Describe the experimental process, including ways to control for bias
  • Identify and differentiate between independent and dependent variables


Psychologists use descriptive, experimental, and correlational methods to conduct research. Descriptive methods include the case study, naturalistic observation, surveys, archival research, longitudinal research, and cross-sectional research.

Experiments are conducted in order to determine cause-and-effect relationships. In ideal experimental design, the only difference between the experimental and control groups is whether participants are exposed to the experimental manipulation. Each group goes through all phases of the experiment, but each group will experience a different level of the independent variable: the experimental group is exposed to the experimental manipulation, and the control group is not exposed to the experimental manipulation. The researcher then measures the changes that are produced in the dependent variable in each group. Once data is collected from both groups, it is analyzed statistically to determine if there are meaningful differences between the groups.
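The last step above can be made concrete with a minimal sketch, not part of the original text: the group scores below are invented, and Welch's two-sample t-statistic is used as one standard way to compare group means, written with only Python's standard library.

```python
# Hypothetical sketch: comparing an experimental and a control group.
# The scores are invented for illustration (e.g., counts of some
# behavior on the dependent variable).
import math
import statistics

experimental = [8, 9, 7, 10, 9, 8, 11, 9]  # exposed to the manipulation
control = [6, 7, 5, 6, 8, 6, 7, 6]         # not exposed

def welch_t(a, b):
    """Two-sample t-statistic that does not assume equal variances."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    return (mean_a - mean_b) / math.sqrt(var_a / len(a) + var_b / len(b))

t = welch_t(experimental, control)
print(round(t, 2))
```

A larger absolute t value indicates a bigger difference between the group means relative to their variability; a researcher would then consult the corresponding p-value to judge whether the difference is statistically meaningful.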

When scientists passively observe and measure phenomena it is called correlational research. Here, psychologists do not intervene and change behavior, as they do in experiments. In correlational research, they identify patterns of relationships, but usually cannot infer what causes what. Importantly, a correlation describes the relationship between exactly two variables at a time, no more and no less.

Watch It: More on Research

If you enjoy learning through lectures and want an interesting and comprehensive summary of this section, then click on the YouTube link to watch a lecture given by MIT Professor John Gabrieli. Start at the 30:45 mark and watch through the end to hear examples of actual psychological studies and how they were analyzed. Listen for references to independent and dependent variables, experimenter bias, and double-blind studies. In the lecture, you’ll learn about breaking social norms, “WEIRD” research, why expectations matter, how a warm cup of coffee might make you nicer, why you should change your answer on a multiple choice test, and why praise for intelligence won’t make you any smarter.

You can view the transcript for “Lec 2 | MIT 9.00SC Introduction to Psychology, Spring 2011” here (opens in new window) .

Descriptive Research

There are many research methods available to psychologists in their efforts to understand, describe, and explain behavior and the cognitive and biological processes that underlie it. Some methods rely on observational techniques. Others involve interactions between the researcher and the individuals being studied, ranging from a series of simple questions to extensive, in-depth interviews. Still others are well-controlled experiments.

The three main categories of psychological research are descriptive, correlational, and experimental research. Research studies that do not test specific relationships between variables are called descriptive, or qualitative, studies. These studies are used to describe general or specific behaviors and attributes that are observed and measured. In the early stages of research it might be difficult to form a hypothesis, especially when there is not any existing literature in the area. In these situations designing an experiment would be premature, as the question of interest is not yet clearly defined as a hypothesis. Often a researcher will begin with a non-experimental approach, such as a descriptive study, to gather more information about the topic before designing an experiment or correlational study to address a specific hypothesis. Descriptive research is distinct from correlational research, in which psychologists formally test whether a relationship exists between two or more variables. Experimental research goes a step further beyond descriptive and correlational research and randomly assigns people to different conditions, using hypothesis testing to make inferences about how these conditions affect behavior. It aims to determine if one variable directly impacts and causes another. Correlational and experimental research both typically use hypothesis testing, whereas descriptive research does not.

Each of these research methods has unique strengths and weaknesses, and each method may only be appropriate for certain types of research questions. For example, studies that rely primarily on observation produce incredible amounts of information, but the ability to apply this information to the larger population is somewhat limited because of small sample sizes. Survey research, on the other hand, allows researchers to easily collect data from relatively large samples. While this allows for results to be generalized to the larger population more easily, the information that can be collected on any given survey is somewhat limited and subject to problems associated with any type of self-reported data. Some researchers conduct archival research by using existing records. While this can be a fairly inexpensive way to collect data that can provide insight into a number of research questions, researchers using this approach have no control over how or what kind of data were collected.

Correlational research can find a relationship between two variables, but the only way a researcher can claim that the relationship between the variables is cause and effect is to perform an experiment. In experimental research, which will be discussed later in the text, there is a tremendous amount of control over variables of interest. While this is a powerful approach, experiments are often conducted in very artificial settings. This calls into question the validity of experimental findings with regard to how they would apply in real-world settings. In addition, many of the questions that psychologists would like to answer cannot be pursued through experimental research because of ethical concerns.

The three main types of descriptive studies are naturalistic observation, case studies, and surveys.

Naturalistic Observation

If you want to understand how behavior occurs, one of the best ways to gain information is to simply observe the behavior in its natural context. However, people might change their behavior in unexpected ways if they know they are being observed. How do researchers obtain accurate information when people tend to hide their natural behavior? As an example, imagine that your professor asks everyone in your class to raise their hand if they always wash their hands after using the restroom. Chances are that almost everyone in the classroom will raise their hand, but do you think hand washing after every trip to the restroom is really that universal?

This is very similar to the phenomenon mentioned earlier in this module: many individuals do not feel comfortable answering a question honestly. But if we are committed to finding out the facts about hand washing, we have other options available to us.

Suppose we send a classmate into the restroom to actually watch whether everyone washes their hands after using the restroom. Will our observer blend into the restroom environment by wearing a white lab coat, sitting with a clipboard, and staring at the sinks? We want our researcher to be inconspicuous—perhaps standing at one of the sinks pretending to put in contact lenses while secretly recording the relevant information. This type of observational study is called naturalistic observation : observing behavior in its natural setting. To better understand peer exclusion, Suzanne Fanger collaborated with colleagues at the University of Texas to observe the behavior of preschool children on a playground. How did the observers remain inconspicuous over the duration of the study? They equipped a few of the children with wireless microphones (which the children quickly forgot about) and observed while taking notes from a distance. Also, the children in that particular preschool (a “laboratory preschool”) were accustomed to having observers on the playground (Fanger, Frankel, & Hazen, 2012).

A photograph shows two police cars driving, one with its lights flashing.

It is critical that the observer be as unobtrusive and as inconspicuous as possible: when people know they are being watched, they are less likely to behave naturally. If you have any doubt about this, ask yourself how your driving behavior might differ in two situations: In the first situation, you are driving down a deserted highway during the middle of the day; in the second situation, you are being followed by a police car down the same deserted highway (Figure 9).

It should be pointed out that naturalistic observation is not limited to research involving humans. Indeed, some of the best-known examples of naturalistic observation involve researchers going into the field to observe various kinds of animals in their own environments. As with human studies, the researchers maintain their distance and avoid interfering with the animal subjects so as not to influence their natural behaviors. Scientists have used this technique to study social hierarchies and interactions among animals ranging from ground squirrels to gorillas. The information provided by these studies is invaluable in understanding how those animals organize socially and communicate with one another. The primatologist Jane Goodall, for example, spent nearly five decades observing the behavior of chimpanzees in Africa (Figure 10). As an illustration of the types of concerns that a researcher might encounter in naturalistic observation, some scientists criticized Goodall for giving the chimps names instead of referring to them by numbers—using names was thought to undermine the emotional detachment required for the objectivity of the study (McKie, 2010).

(a) A photograph shows Jane Goodall speaking from a lectern. (b) A photograph shows a chimpanzee’s face.

The greatest benefit of naturalistic observation is the validity, or accuracy, of information collected unobtrusively in a natural setting. Having individuals behave as they normally would in a given situation means that we have a higher degree of ecological validity, or realism, than we might achieve with other research approaches. Therefore, our ability to generalize  the findings of the research to real-world situations is enhanced. If done correctly, we need not worry about people or animals modifying their behavior simply because they are being observed. Sometimes, people may assume that reality programs give us a glimpse into authentic human behavior. However, the principle of inconspicuous observation is violated as reality stars are followed by camera crews and are interviewed on camera for personal confessionals. Given that environment, we must doubt how natural and realistic their behaviors are.

The major downside of naturalistic observation is that such studies are often difficult to set up and control. In our restroom study, what if you stood in the restroom all day prepared to record people’s hand washing behavior and no one came in? Or, what if you have been closely observing a troop of gorillas for weeks only to find that they migrated to a new place while you were sleeping in your tent? The benefit of realistic data comes at a cost. As a researcher you have no control over when (or if) you have behavior to observe. In addition, this type of observational research often requires significant investments of time, money, and a good dose of luck.

Sometimes studies involve structured observation. In these cases, people are observed while engaging in set, specific tasks. An excellent example of structured observation comes from the Strange Situation, developed by Mary Ainsworth (you will read more about this in the module on lifespan development). The Strange Situation is a procedure used to evaluate attachment styles that exist between an infant and caregiver. In this scenario, caregivers bring their infants into a room filled with toys. The Strange Situation involves a number of phases, including a stranger coming into the room, the caregiver leaving the room, and the caregiver’s return to the room. The infant’s behavior is closely monitored at each phase, but it is the behavior of the infant upon being reunited with the caregiver that is most telling in terms of characterizing the infant’s attachment style with the caregiver.

Another potential problem in observational research is observer bias. Generally, people who act as observers are closely involved in the research project and may unconsciously skew their observations to fit their research goals or expectations. To protect against this type of bias, researchers should have clear criteria established for the types of behaviors recorded and how those behaviors should be classified. In addition, researchers often compare observations of the same event by multiple observers, in order to test inter-rater reliability: a measure of reliability that assesses the consistency of observations by different observers.
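As a purely hypothetical illustration of checking inter-rater reliability, the sketch below compares the codes that two observers assigned to the same ten events (the codes are invented; 1 = "behavior occurred", 0 = "did not occur"). Cohen's kappa, used here, is one standard chance-corrected agreement statistic.

```python
# Hypothetical sketch: quantifying agreement between two observers.
from collections import Counter

observer_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
observer_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]

def cohens_kappa(a, b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement: probability both raters pick the same category
    counts_a, counts_b = Counter(a), Counter(b)
    expected = sum(counts_a[c] / n * counts_b[c] / n
                   for c in set(a) | set(b))
    return (observed - expected) / (1 - expected)

print(round(cohens_kappa(observer_a, observer_b), 2))
```

Raw agreement here is 9 out of 10 observations, but kappa comes out lower than 0.9 because some of that agreement would be expected by chance alone.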

Case Studies

In 2011, the New York Times published a feature story on Krista and Tatiana Hogan, Canadian twin girls. These particular twins are unique because Krista and Tatiana are conjoined twins, connected at the head. There is evidence that the two girls are connected in a part of the brain called the thalamus, which is a major sensory relay center. Most incoming sensory information is sent through the thalamus before reaching higher regions of the cerebral cortex for processing.

The implications of this potential connection mean that it might be possible for one twin to experience the sensations of the other twin. For instance, if Krista is watching a particularly funny television program, Tatiana might smile or laugh even if she is not watching the program. This particular possibility has piqued the interest of many neuroscientists who seek to understand how the brain uses sensory information.

These twins represent an enormous resource in the study of the brain, and since their condition is very rare, it is likely that as long as their family agrees, scientists will follow these girls very closely throughout their lives to gain as much information as possible (Dominus, 2011).

In observational research, scientists are conducting a clinical or case study when they focus on one person or just a few individuals. Indeed, some scientists spend their entire careers studying just 10–20 individuals. Why would they do this? Obviously, when they focus their attention on a very small number of people, they can gain a tremendous amount of insight into those cases. The richness of information that is collected in clinical or case studies is unmatched by any other single research method. This allows the researcher to have a very deep understanding of the individuals and the particular phenomenon being studied.

If clinical or case studies provide so much information, why are they not more frequent among researchers? As it turns out, the major benefit of this particular approach is also a weakness. As mentioned earlier, this approach is often used when studying individuals who are interesting to researchers because they have a rare characteristic. Therefore, the individuals who serve as the focus of case studies are not like most other people. If scientists ultimately want to explain all behavior, focusing attention on such a special group of people can make it difficult to generalize any observations to the larger population as a whole. Generalizing refers to the ability to apply the findings of a particular research project to larger segments of society. Again, case studies provide enormous amounts of information, but since the cases are so specific, the potential to apply what’s learned to the average person may be very limited.

Surveys

Often, psychologists develop surveys as a means of gathering data. Surveys are lists of questions to be answered by research participants, and can be delivered as paper-and-pencil questionnaires, administered electronically, or conducted verbally (Figure 11). Generally, the survey itself can be completed in a short time, and the ease of administering a survey makes it easy to collect data from a large number of people.

Surveys allow researchers to gather data from larger samples than may be afforded by other research methods . A sample is a subset of individuals selected from a population , which is the overall group of individuals that the researchers are interested in. Researchers study the sample and seek to generalize their findings to the population.
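The sample/population distinction can be sketched in code. This is purely illustrative (the population and sample sizes are invented); `random.sample` is the standard-library way to draw a subset without replacement.

```python
# Illustrative sketch: a sample is a subset drawn from a population.
import random

random.seed(42)  # fixed seed so the example is reproducible
population = list(range(10_000))         # the whole group of interest
sample = random.sample(population, 500)  # the subset actually studied

# Every sampled unit comes from the population, and no unit repeats.
assert len(sample) == 500
assert set(sample) <= set(population)
assert len(set(sample)) == len(sample)
```

Researchers then measure the sample and, if it was drawn representatively, generalize the findings back to the population.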

A sample online survey reads, “Dear visitor, your opinion is important to us. We would like to invite you to participate in a short survey to gather your opinions and feedback on your news consumption habits. The survey will take approximately 10-15 minutes. Simply click the “Yes” button below to launch the survey. Would you like to participate?” Two buttons are labeled “yes” and “no.”

Surveys have both strengths and weaknesses in comparison to case studies. By using surveys, we can collect information from a larger sample of people. A larger sample is better able to reflect the actual diversity of the population, thus allowing better generalizability. Therefore, if our sample is sufficiently large and diverse, we can assume that the data we collect from the survey can be generalized to the larger population with more certainty than the information collected through a case study. However, given the greater number of people involved, we are not able to collect the same depth of information on each person that would be collected in a case study.

Another potential weakness of surveys is something we touched on earlier in this chapter: people don’t always give accurate responses. They may lie, misremember, or answer questions in a way that they think makes them look good. For example, people may report drinking less alcohol than is actually the case.

Any number of research questions can be answered through the use of surveys. One real-world example is the research conducted by Jenkins, Ruppel, Kizer, Yehl, and Griffin (2012) about the backlash against the US Arab-American community following the terrorist attacks of September 11, 2001. Jenkins and colleagues wanted to determine to what extent these negative attitudes toward Arab-Americans still existed nearly a decade after the attacks occurred. In one study, 140 research participants filled out a survey with 10 questions, including questions asking directly about the participant’s overt prejudicial attitudes toward people of various ethnicities. The survey also asked indirect questions about how likely the participant would be to interact with a person of a given ethnicity in a variety of settings (such as, “How likely do you think it is that you would introduce yourself to a person of Arab-American descent?”). The results of the research suggested that participants were unwilling to report prejudicial attitudes toward any ethnic group. However, there were significant differences between their pattern of responses to questions about social interaction with Arab-Americans compared to other ethnic groups: they indicated less willingness for social interaction with Arab-Americans compared to the other ethnic groups. This suggested that the participants harbored subtle forms of prejudice against Arab-Americans, despite their assertions that this was not the case (Jenkins et al., 2012).

Archival Research

In archival research, researchers make use of existing records, such as census data, court documents, or patient files, to answer new research questions.

(a) A photograph shows stacks of paper files on shelves. (b) A photograph shows a computer.

In comparing archival research to other research methods, there are several important distinctions. For one, the researcher employing archival research never directly interacts with research participants. Therefore, the investment of time and money to collect data is considerably less with archival research. Additionally, researchers have no control over what information was originally collected. Therefore, research questions have to be tailored so they can be answered within the structure of the existing data sets. There is also no guarantee of consistency between the records from one source to another, which might make comparing and contrasting different data sets problematic.

Longitudinal and Cross-Sectional Research

Sometimes we want to see how people change over time, as in studies of human development and lifespan. When we test the same group of individuals repeatedly over an extended period of time, we are conducting longitudinal research. Longitudinal research is a research design in which data are gathered from the same participants repeatedly over an extended period of time. For example, we may survey a group of individuals about their dietary habits at age 20, retest them a decade later at age 30, and then again at age 40.

Another approach is cross-sectional research. In cross-sectional research, a researcher compares multiple segments of the population at the same time. Using the dietary habits example above, the researcher might directly compare different groups of people by age. Instead of observing a group of people for 20 years to see how their dietary habits changed from decade to decade, the researcher would study a group of 20-year-old individuals and compare them to a group of 30-year-old individuals and a group of 40-year-old individuals. While cross-sectional research requires a shorter-term investment, it is also limited by differences that exist between the different generations (or cohorts) that have nothing to do with age per se, but rather reflect the social and cultural experiences of different generations of individuals that make them different from one another.

To illustrate this concept, consider the following survey findings. In recent years there has been significant growth in the popular support of same-sex marriage. Many studies on this topic break down survey participants into different age groups. In general, younger people are more supportive of same-sex marriage than are those who are older (Jones, 2013). Does this mean that as we age we become less open to the idea of same-sex marriage, or does this mean that older individuals have different perspectives because of the social climates in which they grew up? Longitudinal research is a powerful approach because the same individuals are involved in the research project over time, which means that the researchers need to be less concerned with differences among cohorts affecting the results of their study.

Often longitudinal studies are employed when researching various diseases in an effort to understand particular risk factors. Such studies often involve tens of thousands of individuals who are followed for several decades. Given the enormous number of people involved in these studies, researchers can feel confident that their findings can be generalized to the larger population. The Cancer Prevention Study-3 (CPS-3) is one of a series of longitudinal studies sponsored by the American Cancer Society aimed at determining predictive risk factors associated with cancer. When participants enter the study, they complete a survey about their lives and family histories, providing information on factors that might cause or prevent the development of cancer. Then every few years the participants receive additional surveys to complete. In the end, hundreds of thousands of participants will be tracked over 20 years to determine which of them develop cancer and which do not.

Clearly, this type of research is important and potentially very informative. For instance, earlier longitudinal studies sponsored by the American Cancer Society provided some of the first scientific demonstrations of the now well-established links between increased rates of cancer and smoking (American Cancer Society, n.d.) (Figure 13).

A photograph shows pack of cigarettes and cigarettes in an ashtray. The pack of cigarettes reads, “Surgeon general’s warning: smoking causes lung cancer, heart disease, emphysema, and may complicate pregnancy.”

As with any research strategy, longitudinal research is not without limitations. For one, these studies require an incredible time investment by the researcher and research participants. Given that some longitudinal studies take years, if not decades, to complete, the results will not be known for a considerable period of time. In addition to the time demands, these studies also require a substantial financial investment. Many researchers are unable to commit the resources necessary to see a longitudinal project through to the end.

Research participants must also be willing to continue their participation for an extended period of time, and this can be problematic. People move, get married and take new names, get ill, and eventually die. Even without significant life changes, some people may simply choose to discontinue their participation in the project. As a result, attrition rates, or reductions in the number of research participants due to dropouts, in longitudinal studies are quite high and increase over the course of a project. For this reason, researchers using this approach typically recruit many participants fully expecting that a substantial number will drop out before the end. As the study progresses, they continually check whether the sample still represents the larger population, and make adjustments as necessary.

Correlational Research

Did you know that as sales in ice cream increase, so does the overall rate of crime? Is it possible that indulging in your favorite flavor of ice cream could send you on a crime spree? Or, after committing a crime do you think you might decide to treat yourself to a cone? There is no question that a relationship exists between ice cream and crime (e.g., Harper, 2013), but it would be pretty foolish to decide that one thing actually caused the other to occur.

It is much more likely that both ice cream sales and crime rates are related to the temperature outside. When the temperature is warm, there are lots of people out of their houses, interacting with each other, getting annoyed with one another, and sometimes committing crimes. Also, when it is warm outside, we are more likely to seek a cool treat like ice cream. How do we determine if there is indeed a relationship between two things? And when there is a relationship, how can we discern whether it is attributable to coincidence or causation?

Three scatterplots are shown. Scatterplot (a) is labeled “positive correlation” and shows scattered dots forming a rough line from the bottom left to the top right; the x-axis is labeled “weight” and the y-axis is labeled “height.” Scatterplot (b) is labeled “negative correlation” and shows scattered dots forming a rough line from the top left to the bottom right; the x-axis is labeled “tiredness” and the y-axis is labeled “hours of sleep.” Scatterplot (c) is labeled “no correlation” and shows scattered dots having no pattern; the x-axis is labeled “shoe size” and the y-axis is labeled “hours of sleep.”
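The strength and direction of relationships like those in the scatterplots are quantified by the Pearson correlation coefficient r, which ranges from -1 to +1. The sketch below is illustrative only (the height/weight values are invented), but the formula is the standard one.

```python
# Illustrative sketch: computing a Pearson correlation coefficient r
# for a height/weight relationship like the one in scatterplot (a).
# The data points are invented.
import math

weight = [60, 65, 70, 75, 80, 85]        # kg
height = [160, 165, 172, 174, 180, 185]  # cm

def pearson_r(xs, ys):
    """Standard Pearson correlation coefficient."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sy = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(weight, height)
print(round(r, 2))
```

An r near +1, as with these made-up data, matches the upward pattern in scatterplot (a); an r near -1 would match the downward pattern in (b), and an r near 0 would match the patternless cloud in (c).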

Correlation Does Not Indicate Causation

Correlational research is useful because it allows us to discover the strength and direction of relationships that exist between two variables. However, correlation is limited because establishing the existence of a relationship tells us little about cause and effect . While variables are sometimes correlated because one does cause the other, it could also be that some other factor, a confounding variable , is actually causing the systematic movement in our variables of interest. In the ice cream/crime rate example mentioned earlier, temperature is a confounding variable that could account for the relationship between the two variables.
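The ice cream/crime confound can be demonstrated with a small simulation. Everything below is invented for illustration (the coefficients and noise levels are arbitrary): temperature causally drives both outcomes, and the two outcomes end up strongly correlated even though neither causes the other.

```python
# Hypothetical simulation of a confounding variable. All numbers are
# invented: temperature drives both ice cream sales and crime, while
# the two outcomes have no causal link to each other.
import random

random.seed(0)  # fixed seed for reproducibility
temps = [random.uniform(0, 35) for _ in range(200)]  # daily temperature, in C

# Each outcome depends only on temperature, plus independent noise:
ice_cream = [10 + 3.0 * t + random.gauss(0, 5) for t in temps]
crime = [5 + 0.5 * t + random.gauss(0, 2) for t in temps]

def pearson_r(xs, ys):
    """Standard Pearson correlation coefficient."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# The two outcomes correlate substantially despite no causal link.
r = pearson_r(ice_cream, crime)
assert r > 0.5
```

Comparing only days with similar temperatures would largely remove the apparent relationship, which is how researchers statistically control for a suspected confound.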

Even when we cannot point to clear confounding variables, we should not assume that a correlation between two variables implies that one variable causes changes in another. This can be frustrating when a cause-and-effect relationship seems clear and intuitive. Think back to our discussion of the research done by the American Cancer Society and how their research projects were some of the first demonstrations of the link between smoking and cancer. It seems reasonable to assume that smoking causes cancer, but if we were limited to correlational research , we would be overstepping our bounds by making this assumption.

A photograph shows a bowl of cereal.

Unfortunately, people mistakenly make claims of causation as a function of correlations all the time. Such claims are especially common in advertisements and news stories. For example, recent research found that people who eat cereal on a regular basis achieve healthier weights than those who rarely eat cereal (Frantzen, Treviño, Echon, Garcia-Dominic, & DiMarco, 2013; Barton et al., 2005). Guess how the cereal companies report this finding. Does eating cereal really cause an individual to maintain a healthy weight, or are there other possible explanations, such as that someone at a healthy weight is more likely to regularly eat a healthy breakfast than someone who is obese or someone who avoids meals in an attempt to diet (Figure 15)? While correlational research is invaluable in identifying relationships among variables, a major limitation is the inability to establish causality. Psychologists want to make statements about cause and effect, but the only way to do that is to conduct an experiment to answer a research question. The next section describes how scientific experiments incorporate methods that eliminate, or control for, alternative explanations, which allow researchers to explore how changes in one variable cause changes in another variable.

Watch this clip from Freakonomics for an example of how correlation does not indicate causation.

You can view the transcript for “Correlation vs. Causality: Freakonomics Movie” here (opens in new window) .

Illusory Correlations

The temptation to make erroneous cause-and-effect statements based on correlational research is not the only way we tend to misinterpret data. We also tend to make the mistake of illusory correlations, especially with unsystematic observations. Illusory correlations , or false correlations, occur when people believe that relationships exist between two things when no such relationship exists. One well-known illusory correlation is the supposed effect that the moon’s phases have on human behavior. Many people passionately assert that human behavior is affected by the phase of the moon, and specifically, that people act strangely when the moon is full (Figure 16).

A photograph shows the moon.

There is no denying that the moon exerts a powerful influence on our planet. The ebb and flow of the ocean’s tides are tightly tied to the gravitational forces of the moon. Many people believe, therefore, that it is logical that we are affected by the moon as well. After all, our bodies are largely made up of water. A meta-analysis of nearly 40 studies consistently demonstrated, however, that the relationship between the moon and our behavior does not exist (Rotton & Kelly, 1985). While we may pay more attention to odd behavior during the full phase of the moon, the rates of odd behavior remain constant throughout the lunar cycle.

Why are we so apt to believe in illusory correlations like this? Often we read or hear about them and simply accept the information as valid. Or, we have a hunch about how something works and then look for evidence to support that hunch, ignoring evidence that would tell us our hunch is false; this is known as confirmation bias . Other times, we find illusory correlations based on the information that comes most easily to mind, even if that information is severely limited. And while we may feel confident that we can use these relationships to better understand and predict the world around us, illusory correlations can have significant drawbacks. For example, research suggests that illusory correlations—in which certain behaviors are inaccurately attributed to certain groups—are involved in the formation of prejudicial attitudes that can ultimately lead to discriminatory behavior (Fiedler, 2004).

We all have a tendency to make illusory correlations from time to time. Try to think of an illusory correlation that is held by you, a family member, or a close friend. How do you think this illusory correlation came about and what can be done in the future to combat it?

Experiments

Causality: conducting experiments, using the data, and testing an experimental hypothesis.

In order to conduct an experiment, a researcher must have a specific hypothesis to be tested. As you’ve learned, hypotheses can be formulated either through direct observation of the real world or after careful review of previous research. For example, if you think that children should not be allowed to watch violent programming on television because doing so would cause them to behave more violently, then you have basically formulated a hypothesis—namely, that watching violent television programs causes children to behave more violently. How might you have arrived at this particular hypothesis? You may have younger relatives who watch cartoons featuring characters using martial arts to save the world from evildoers, with an impressive array of punching, kicking, and defensive postures. You notice that after watching these programs for a while, your young relatives mimic the fighting behavior of the characters portrayed in the cartoon (Figure 17).

A photograph shows a child pointing a toy gun.

These sorts of personal observations are what often lead us to formulate a specific hypothesis, but we cannot use limited personal observations and anecdotal evidence to rigorously test our hypothesis. Instead, to find out if real-world data supports our hypothesis, we have to conduct an experiment.

Designing an Experiment

The most basic experimental design involves two groups: the experimental group and the control group. The two groups are designed to be the same except for one difference—the experimental manipulation. The experimental group gets the experimental manipulation—that is, the treatment or variable being tested (in this case, violent TV images)—and the control group does not. Since the experimental manipulation is the only difference between the experimental and control groups, we can be confident that any differences between the two are due to the experimental manipulation rather than to preexisting differences between the groups.

In our example of how violent television programming might affect violent behavior in children, we have the experimental group view violent television programming for a specified time and then measure their violent behavior. It is important for the control group to be treated similarly to the experimental group, with the exception that the control group does not receive the experimental manipulation. Therefore, we have the control group watch nonviolent television programming for the same amount of time, and then we measure their violent behavior in the same way.

We also need to precisely define, or operationalize, what is considered violent and nonviolent. An operational definition is a description of how we will measure our variables, and it is important in allowing others to understand exactly how and what a researcher measures in a particular experiment. In operationalizing violent behavior, we might choose to count only physical acts like kicking or punching as instances of this behavior, or we may also choose to include angry verbal exchanges. Whatever we determine, it is important that we operationalize violent behavior in such a way that anyone who hears about our study for the first time knows exactly what we mean by violence. This aids people’s ability to interpret our data as well as their capacity to repeat our experiment should they choose to do so.

Once we have operationalized what is considered violent television programming and what is considered violent behavior from our experiment participants, we need to establish how we will run our experiment. In this case, we might have participants watch a 30-minute television program (either violent or nonviolent, depending on their group membership) before sending them out to a playground for an hour where their behavior is observed and the number and type of violent acts are recorded.

Ideally, the people who observe and record the children’s behavior are unaware of who was assigned to the experimental or control group, in order to control for experimenter bias. Experimenter bias refers to the possibility that a researcher’s expectations might skew the results of the study. Remember, conducting an experiment requires a lot of planning, and the people involved in the research project have a vested interest in supporting their hypotheses. If the observers knew which child was in which group, it might influence how much attention they paid to each child’s behavior as well as how they interpreted that behavior. By being blind to which child is in which group, we protect against those biases. This situation is a single-blind study, meaning that one of the parties involved—here, the participants—is unaware of group assignments (experimental or control), while the researcher who developed the experiment knows which participants are in each group.

A photograph shows three glass bottles of pills labeled as placebos.

In a double-blind study, both the researchers and the participants are blind to group assignments. Why would a researcher want to run a study where no one knows who is in which group? Because by doing so, we can control for both experimenter and participant expectations. If you are familiar with the phrase placebo effect, you already have some idea as to why this is an important consideration. The placebo effect occurs when people’s expectations or beliefs influence or determine their experience in a given situation. In other words, simply expecting something to happen can actually make it happen.

The placebo effect is commonly described in terms of testing the effectiveness of a new medication. Imagine that you work in a pharmaceutical company, and you think you have a new drug that is effective in treating depression. To demonstrate that your medication is effective, you run an experiment with two groups: The experimental group receives the medication, and the control group does not. But you don’t want participants to know whether they received the drug or not.

Why is that? Imagine that you are a participant in this study, and you have just taken a pill that you think will improve your mood. Because you expect the pill to have an effect, you might feel better simply because you took the pill and not because of any drug actually contained in the pill—this is the placebo effect.

To make sure that any effects on mood are due to the drug and not due to expectations, the control group receives a placebo (in this case a sugar pill). Now everyone gets a pill, and once again neither the researcher nor the experimental participants know who got the drug and who got the sugar pill. Any differences in mood between the experimental and control groups can now be attributed to the drug itself rather than to experimenter bias or participant expectations (Figure 18).

Independent and Dependent Variables

In a research experiment, we strive to study whether changes in one thing cause changes in another. To achieve this, we must pay attention to two important variables, or things that can be changed, in any experimental study: the independent variable and the dependent variable. An independent variable is manipulated or controlled by the experimenter. In a well-designed experimental study, the independent variable is the only important difference between the experimental and control groups. In our example of how violent television programs affect children’s display of violent behavior, the independent variable is the type of program—violent or nonviolent—viewed by participants in the study (Figure 19). A dependent variable is what the researcher measures to see how much effect the independent variable had. In our example, the dependent variable is the number of violent acts displayed by the experimental participants.

A box labeled “independent variable: type of television programming viewed” contains a photograph of a person shooting an automatic weapon. An arrow labeled “influences change in the…” leads to a second box. The second box is labeled “dependent variable: violent behavior displayed” and has a photograph of a child pointing a toy gun.

We expect that the dependent variable will change as a function of the independent variable. In other words, the dependent variable depends on the independent variable. A good way to think about the relationship between the independent and dependent variables is with this question: What effect does the independent variable have on the dependent variable? Returning to our example, what effect does watching a half hour of violent television programming or nonviolent television programming have on the number of incidents of physical aggression displayed on the playground?

Selecting and Assigning Experimental Participants

Now that our study is designed, we need to obtain a sample of individuals to include in our experiment. Our study involves human participants, so we need to determine whom to include. Participants are the subjects of psychological research, and as the name implies, individuals who are involved in psychological research actively participate in the process. Often, psychological research projects rely on college students to serve as participants. In fact, the vast majority of research in psychology subfields has historically involved students as research participants (Sears, 1986; Arnett, 2008). But are college students truly representative of the general population? College students tend to be younger, more educated, more liberal, and less diverse than the general population. Although using students as test subjects is an accepted practice, relying on such a limited pool of research participants can be problematic because it is difficult to generalize findings to the larger population.

Our hypothetical experiment involves children, and we must first generate a sample of child participants. Samples are used because populations are usually too large to reasonably involve every member in our particular experiment (Figure 20). If possible, we should use a random sample (there are other types of samples, but for the purposes of this section, we will focus on random samples). A random sample is a subset of a larger population in which every member of the population has an equal chance of being selected. Random samples are preferred because if the sample is large enough, we can be reasonably sure that the participating individuals are representative of the larger population. This means that the percentages of characteristics in the sample—sex, ethnicity, socioeconomic level, and any other characteristics that might affect the results—are close to those percentages in the larger population.

In our example, let’s say we decide our population of interest is fourth graders. But the population of all fourth graders is very large, so we need to be more specific; instead, we might say our population of interest is all fourth graders in a particular city. We should include students from various income brackets, family situations, races, ethnicities, religions, and geographic areas of town. With this more manageable population, we can work with the local schools in selecting a random sample of around 200 fourth graders whom we want to participate in our experiment.

In summary, because we cannot test all of the fourth graders in a city, we want to find a group of about 200 that reflects the composition of that city. With a representative group, we can generalize our findings to the larger population without fear of our sample being biased in some way.

(a) A photograph shows an aerial view of crowds on a street. (b) A photograph shows a small group of children.

Now that we have a sample, the next step of the experimental process is to split the participants into experimental and control groups through random assignment. With random assignment , all participants have an equal chance of being assigned to either group. There is statistical software that will randomly assign each of the fourth graders in the sample to either the experimental or the control group.
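The kind of random assignment such software performs can be sketched in a few lines of Python. This is a minimal illustration, not any particular statistical package; the student identifiers are hypothetical:

```python
import random

def random_assignment(participants, seed=None):
    """Shuffle the participants and split them into two equal-sized groups."""
    rng = random.Random(seed)
    shuffled = participants[:]  # copy so the original list is untouched
    rng.shuffle(shuffled)       # every ordering is equally likely
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (experimental, control)

# Hypothetical sample of 200 fourth graders, identified here by number.
sample = [f"student_{i}" for i in range(200)]
experimental, control = random_assignment(sample, seed=42)
print(len(experimental), len(control))  # 100 100
```

Because the shuffle gives every participant an equal chance of landing in either half, each child is equally likely to end up in the experimental or the control group.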

Random assignment is critical for sound experimental design. With sufficiently large samples, random assignment makes it unlikely that there are systematic differences between the groups. So, for instance, it would be very unlikely that we would get one group composed entirely of males, a given ethnic identity, or a given religious ideology. This is important because if the groups were systematically different before the experiment began, we would not know the origin of any differences we find between the groups: Were the differences preexisting, or were they caused by manipulation of the independent variable? Random assignment allows us to assume that any differences observed between experimental and control groups result from the manipulation of the independent variable.

Issues to Consider

While experiments allow scientists to make cause-and-effect claims, they are not without problems. True experiments require the experimenter to manipulate an independent variable, and that can complicate many questions that psychologists might want to address. For instance, imagine that you want to know what effect sex (the independent variable) has on spatial memory (the dependent variable). Although you can certainly look for differences between males and females on a task that taps into spatial memory, you cannot directly control a person’s sex. We categorize this type of research approach as quasi-experimental and recognize that we cannot make cause-and-effect claims in these circumstances.

Experimenters are also limited by ethical constraints. For instance, you would not be able to conduct an experiment designed to determine if experiencing abuse as a child leads to lower levels of self-esteem among adults. To conduct such an experiment, you would need to randomly assign some experimental participants to a group that receives abuse, and that experiment would be unethical.

Introduction to Statistical Thinking

Psychologists use statistics to analyze their data and to determine whether the patterns they find are statistically significant. Analyzing data using statistics enables researchers to find patterns, make claims, and share their results with others. In this section, you’ll learn about some of the tools that psychologists use in statistical analysis.

  • Define reliability and validity
  • Describe the importance of distributional thinking and the role of p-values in statistical inference
  • Describe the role of random sampling and random assignment in drawing cause-and-effect conclusions
  • Describe the basic structure of a psychological research article

Interpreting Experimental Findings

Once data are collected from both the experimental and the control groups, a statistical analysis is conducted to find out if there are meaningful differences between the two groups. A statistical analysis determines how likely it is that any difference found is due to chance (and thus not meaningful). In psychology, group differences are considered meaningful, or significant, if the odds that these differences occurred by chance alone are 5 percent or less. Stated another way, a difference this large would be expected to arise by chance alone in fewer than 5 of every 100 repetitions of the experiment.

The greatest strength of experiments is the ability to assert that any significant differences in the findings are caused by the independent variable. This occurs because random selection, random assignment, and a design that limits the effects of both experimenter bias and participant expectancy should create groups that are similar in composition and treatment. Therefore, any difference between the groups is attributable to the independent variable, and now we can finally make a causal statement. If we find that watching a violent television program results in more violent behavior than watching a nonviolent program, we can safely say that watching violent television programs causes an increase in the display of violent behavior.

Reporting Research

When psychologists complete a research project, they generally want to share their findings with other scientists. The American Psychological Association (APA) publishes a manual detailing how to write a paper for submission to scientific journals. Unlike an article that might be published in a magazine like Psychology Today, which targets a general audience with an interest in psychology, scientific journals generally publish peer-reviewed journal articles aimed at an audience of professionals and scholars who are actively involved in research themselves.

A peer-reviewed journal article is read by several other scientists (generally anonymously) with expertise in the subject matter. These peer reviewers provide feedback—to both the author and the journal editor—regarding the quality of the draft. Peer reviewers look for a strong rationale for the research being described, a clear description of how the research was conducted, and evidence that the research was conducted in an ethical manner. They also look for flaws in the study’s design, methods, and statistical analyses. They check that the conclusions drawn by the authors seem reasonable given the observations made during the research. Peer reviewers also comment on how valuable the research is in advancing the discipline’s knowledge. This helps prevent unnecessary duplication of research findings in the scientific literature and, to some extent, ensures that each research article provides new information. Ultimately, the journal editor will compile all of the peer reviewer feedback and determine whether the article will be published in its current state (a rare occurrence), published with revisions, or not accepted for publication.

Peer review provides some degree of quality control for psychological research. Poorly conceived or executed studies can be weeded out, and even well-designed research can be improved by the revisions suggested. Peer review also ensures that the research is described clearly enough to allow other scientists to replicate it, meaning they can repeat the experiment using different samples to determine reliability. Sometimes replications involve additional measures that expand on the original finding. In any case, each replication serves to provide more evidence to support the original research findings. Successful replications of published research make scientists more apt to adopt those findings, while repeated failures tend to cast doubt on the legitimacy of the original article and lead scientists to look elsewhere. For example, it would be a major advancement in the medical field if a published study indicated that taking a new drug helped individuals achieve a healthy weight without changing their diet. But if other scientists could not replicate the results, the original study’s claims would be questioned.

Dig Deeper: The Vaccine-Autism Myth and the Retraction of Published Studies

Some scientists have claimed that routine childhood vaccines cause some children to develop autism, and, in fact, several peer-reviewed publications published research making these claims. Since the initial reports, large-scale epidemiological research has suggested that vaccinations are not responsible for causing autism and that it is much safer to have your child vaccinated than not. Furthermore, several of the original studies making this claim have since been retracted.

A published piece of work can be retracted when its data are called into question because of falsification, fabrication, or serious research design problems. Once a paper is retracted, the scientific community is informed that there are serious problems with the original publication. Retractions can be initiated by the researcher who led the study, by research collaborators, by the institution that employed the researcher, or by the editorial board of the journal in which the article was originally published. In the vaccine-autism case, the retraction was made because of a significant conflict of interest in which the leading researcher had a financial interest in establishing a link between childhood vaccines and autism (Offit, 2008). Unfortunately, the initial studies received so much media attention that many parents around the world became hesitant to have their children vaccinated (Figure 21). For more information about how the vaccine/autism story unfolded, as well as the repercussions of this story, take a look at Paul Offit’s book, Autism’s False Prophets: Bad Science, Risky Medicine, and the Search for a Cure.

A photograph shows a child being given an oral vaccine.

Reliability and Validity

Dig Deeper: Everyday Connection: How Valid Is the SAT?

Standardized tests like the SAT are supposed to measure an individual’s aptitude for a college education, but how reliable and valid are such tests? Research conducted by the College Board suggests that scores on the SAT have high predictive validity for first-year college students’ GPA (Kobrin, Patterson, Shaw, Mattern, & Barbuti, 2008). In this context, predictive validity refers to the test’s ability to effectively predict the GPA of college freshmen. Given that many institutions of higher education require the SAT for admission, this high degree of predictive validity might be comforting.

However, the emphasis placed on SAT scores in college admissions has generated some controversy on a number of fronts. For one, some researchers assert that the SAT is a biased test that places minority students at a disadvantage and unfairly reduces the likelihood of being admitted into a college (Santelices & Wilson, 2010). Additionally, some research has suggested that the predictive validity of the SAT is grossly exaggerated in how well it is able to predict the GPA of first-year college students. In fact, it has been suggested that the SAT’s predictive validity may be overestimated by as much as 150% (Rothstein, 2004). Many institutions of higher education are beginning to consider de-emphasizing the significance of SAT scores in making admission decisions (Rimer, 2008).

In 2014, College Board president David Coleman expressed his awareness of these problems, recognizing that college success is more accurately predicted by high school grades than by SAT scores. To address these concerns, he has called for significant changes to the SAT exam (Lewin, 2014).

Statistical Significance

A photograph shows a coffee cup with heart-shaped cream inside.

Does drinking coffee actually increase your life expectancy? A recent study (Freedman, Park, Abnet, Hollenbeck, & Sinha, 2012) found that men who drank at least six cups of coffee a day had a 10% lower chance of dying (women’s chances were 15% lower) than those who drank none. Does this mean you should pick up or increase your own coffee habit? We will explore these results in more depth in the next section about drawing conclusions from statistics. Modern society has become awash in studies such as this; you can read about several such studies in the news every day.

Conducting such a study well, and interpreting its results, requires understanding the basic ideas of statistics, the science of gaining insight from data. The key components of a statistical investigation are:

  • Planning the study: Start by asking a testable research question and deciding how to collect data. For example, how long was the study period of the coffee study? How many people were recruited for the study, how were they recruited, and from where? How old were they? What other variables were recorded about the individuals? Were changes made to the participants’ coffee habits during the course of the study?
  • Examining the data: What are appropriate ways to examine the data? What graphs are relevant, and what do they reveal? What descriptive statistics can be calculated to summarize relevant aspects of the data, and what do they reveal? What patterns do you see in the data? Are there any individual observations that deviate from the overall pattern, and what do they reveal? For example, in the coffee study, did the proportions differ when we compared the smokers to the non-smokers?
  • Inferring from the data: What are valid statistical methods for drawing inferences “beyond” the data you collected? In the coffee study, is the 10%–15% reduction in risk of death something that could have happened just by chance?
  • Drawing conclusions: Based on what you learned from your data, what conclusions can you draw? Who do you think these conclusions apply to? (Were the people in the coffee study older? Healthy? Living in cities?) Can you draw a cause-and-effect conclusion about your treatments? (Are scientists now saying that the coffee drinking is the cause of the decreased risk of death?)

Notice that the numerical analysis (“crunching numbers” on the computer) comprises only a small part of overall statistical investigation. In this section, you will see how we can answer some of these questions and what questions you should be asking about any statistical investigation you read about.

Distributional Thinking

When data are collected to address a particular question, an important first step is to think of meaningful ways to organize and examine the data. Let’s take a look at an example.

Example 1: Researchers investigated whether cancer pamphlets are written at an appropriate level to be read and understood by cancer patients (Short, Moriarty, & Cooley, 1995). Tests of reading ability were given to 63 patients. In addition, readability level was determined for a sample of 30 pamphlets, based on characteristics such as the lengths of words and sentences in the pamphlet. The results, reported in terms of grade levels, are displayed in Figure 23.

A table shows the patients’ reading levels and the pamphlets’ readability levels.

  • Data vary. More specifically, values of a variable (such as the reading level of a cancer patient or the readability level of a cancer pamphlet) vary.
  • Analyzing the pattern of variation, called the distribution of the variable, often reveals insights.

Addressing the research question of whether the cancer pamphlets are written at appropriate levels for the cancer patients requires comparing the two distributions. A naïve comparison might focus only on the centers of the distributions. Both medians turn out to be ninth grade, but considering only medians ignores the variability and the overall distributions of these data. A more illuminating approach is to compare the entire distributions, for example with a graph, as in Figure 24.

Bar graph showing that the reading level of pamphlets is typically higher than the reading level of the patients.

Figure 24 makes clear that the two distributions are not well aligned at all. The most glaring discrepancy is that many patients (17/63, or 27%, to be precise) have a reading level below that of the most readable pamphlet. These patients will need help to understand the information provided in the cancer pamphlets. Notice that this conclusion follows from considering the distributions as a whole, not simply measures of center or variability, and that the graph contrasts those distributions more immediately than the frequency tables.
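The contrast between matching medians and mismatched distributions can be illustrated with a short Python sketch. The grade-level values below are hypothetical stand-ins, not the actual data from Short, Moriarty, & Cooley (1995):

```python
from statistics import median

# Hypothetical reading levels (grade levels) for patients and
# readability levels for pamphlets; both medians land at ninth grade.
patients  = [6, 6, 7, 7, 8, 9, 9, 9, 10, 10, 11, 12]
pamphlets = [8, 8, 9, 9, 9, 9, 10, 11, 12, 13]

print(median(patients), median(pamphlets))  # both medians: ninth grade

# Yet the full distributions tell a different story: some patients
# read below the level of even the most readable pamphlet.
below_easiest = sum(p < min(pamphlets) for p in patients)
print(f"{below_easiest} of {len(patients)} patients read below the easiest pamphlet")
```

A comparison that stopped at the medians would miss exactly the readers who most need help, which is the point the whole-distribution comparison makes.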

Finding Significance in Data

Even when we find patterns in data, often there is still uncertainty in various aspects of the data. For example, there may be potential for measurement errors (even your own body temperature can fluctuate by almost 1°F over the course of the day). Or we may only have a “snapshot” of observations from a more long-term process or only a small subset of individuals from the population of interest. In such cases, how can we determine whether the patterns we see in our small set of data are convincing evidence of a systematic phenomenon in the larger process or population? Let’s take a look at another example.

Example 2: In a study reported in the November 2007 issue of Nature, researchers investigated whether pre-verbal infants take into account an individual’s actions toward others in evaluating that individual as appealing or aversive (Hamlin, Wynn, & Bloom, 2007). In one component of the study, 10-month-old infants were shown a “climber” character (a piece of wood with “googly” eyes glued onto it) that could not make it up a hill in two tries. Then the infants were shown two scenarios for the climber’s next try, one where the climber was pushed to the top of the hill by another character (“helper”), and one where the climber was pushed back down the hill by another character (“hinderer”). The infant was alternately shown these two scenarios several times. Then the infant was presented with two pieces of wood (representing the helper and the hinderer characters) and asked to pick one to play with.

The researchers found that of the 16 infants who made a clear choice, 14 chose to play with the helper toy. One possible explanation for this clear majority result is that the helping behavior of the one toy increases the infants’ likelihood of choosing that toy. But are there other possible explanations? What about the color of the toy? Well, prior to collecting the data, the researchers arranged it so that each color and shape (red square and blue circle) would be seen by the same number of infants. Or maybe the infants had right-handed tendencies and so picked whichever toy was closer to their right hand?

Well, prior to collecting the data, the researchers arranged it so half the infants saw the helper toy on the right and half on the left. Or, maybe the shapes of these wooden characters (square, triangle, circle) had an effect? Perhaps, but again, the researchers controlled for this by rotating which shape was the helper toy, the hinderer toy, and the climber. When designing experiments, it is important to control for as many variables that might affect the responses as possible. It is beginning to appear that the researchers accounted for all the other plausible explanations. But there is one more important consideration that cannot be controlled—if we did the study again with these 16 infants, they might not make the same choices. In other words, there is some randomness inherent in their selection process.

Maybe each infant had no genuine preference at all, and it was simply “random luck” that led to 14 infants picking the helper toy. Although this random component cannot be controlled, we can apply a probability model to investigate the pattern of results that would occur in the long run if random chance were the only factor.

If the infants were equally likely to pick between the two toys, then each infant had a 50% chance of picking the helper toy. It’s like each infant tossed a coin, and if it landed heads, the infant picked the helper toy. So if we tossed a coin 16 times, could it land heads 14 times? Sure, it’s possible, but it turns out to be very unlikely. Getting 14 (or more) heads in 16 tosses is about as likely as tossing a coin and getting 9 heads in a row. This probability is referred to as a p-value. The p-value is the probability of obtaining results at least as extreme as those observed if chance alone were at work. Within psychology, the most common standard for p-values is “p < .05”, meaning that there is less than a 5% probability that results this extreme would arise by random chance alone. A result that passes this threshold is said to have statistical significance.

So, in the study above, if we assume that each infant was choosing equally, then the probability that 14 or more out of 16 infants would choose the helper toy is found to be 0.0021. We have only two logical possibilities: either the infants have a genuine preference for the helper toy, or the infants have no preference (50/50) and an outcome that would occur only 2 times in 1,000 iterations happened in this study. Because this p-value of 0.0021 is quite small, we conclude that the study provides very strong evidence that these infants have a genuine preference for the helper toy.
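That 0.0021 figure can be reproduced directly from the chance model described above: treat each of the 16 choices as a fair coin flip and add up the binomial probabilities of 14, 15, or 16 “heads.” A minimal sketch in Python:

```python
from math import comb

n, k = 16, 14  # 16 infants made a choice; 14 chose the helper toy

# P(X >= 14) under the chance model: each infant independently
# picks the helper toy with probability 0.5, so there are 2**n
# equally likely outcomes in total.
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
print(round(p_value, 4))  # 0.0021
```

The sum counts the outcomes at least as extreme as the observed one (14, 15, or 16 helper-toy choices), which is exactly what a p-value measures.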

If we compare the p-value to a cut-off value, like 0.05, we see that the p-value is smaller. Because the p-value is smaller than the cut-off value, we reject the hypothesis that only random chance was at play here. In this case, these researchers would conclude that significantly more than half of the infants in the study chose the helper toy, giving strong evidence of a genuine preference for the toy with the helping behavior.

Drawing Conclusions from Statistics

Generalizability.


One limitation to the study mentioned previously about the babies choosing the “helper” toy is that the conclusion only applies to the 16 infants in the study. We don’t know much about how those 16 infants were selected. Suppose we want to select a subset of individuals (a sample ) from a much larger group of individuals (the population ) in such a way that conclusions from the sample can be generalized to the larger population. This is the question faced by pollsters every day.

Example 3: The General Social Survey (GSS) is a survey on societal trends conducted every other year in the United States. Based on a sample of about 2,000 adult Americans, researchers make claims about what percentage of the U.S. population consider themselves to be “liberal,” what percentage consider themselves “happy,” what percentage feel “rushed” in their daily lives, and many other issues. The key to making these claims about the larger population of all American adults lies in how the sample is selected. The goal is to select a sample that is representative of the population, and a common way to achieve this goal is to select a random sample that gives every member of the population an equal chance of being selected for the sample. In its simplest form, random sampling involves numbering every member of the population and then using a computer to randomly select the subset to be surveyed. Most polls don’t operate exactly like this, but they do use probability-based sampling methods to select individuals from nationally representative panels.

In 2004, the GSS reported that 817 of 977 respondents (or 83.6%) indicated that they always or sometimes feel rushed. This is a clear majority, but we again need to consider variation due to random sampling. Fortunately, we can use the same probability model we did in the previous example to investigate the probable size of this error. (Note: we can use the coin-tossing model when the actual population size is much, much larger than the sample size, because then we can still consider the probability to be the same for every individual in the sample.) This probability model predicts that the sample result will be within 3 percentage points of the population value (roughly 1 over the square root of the sample size); this is called the margin of error. A statistician would conclude, with 95% confidence, that between 80.6% and 86.6% of all adult Americans in 2004 would have responded that they sometimes or always feel rushed.
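
As a check on the arithmetic, the conservative margin-of-error approximation described in the text (1 over the square root of the sample size) can be computed directly. This is a sketch of the approximation only, not the GSS’s own methodology:

```python
from math import sqrt

n = 977
p_hat = 817 / n        # sample proportion, about 0.836
margin = 1 / sqrt(n)   # conservative margin of error, about 0.032

# The text rounds the margin to 3 percentage points, giving the
# interval 80.6% to 86.6% around the sample value of 83.6%.
low, high = p_hat - margin, p_hat + margin
print(f"{p_hat:.1%} ± {margin:.1%}")
```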

The key to the margin of error is that when we use a probability sampling method, we can make claims about how often (in the long run, with repeated random sampling) the sample result would fall within a certain distance from the unknown population value by chance (meaning by random sampling variation) alone. Conversely, non-random samples are often susceptible to bias, meaning the sampling method systematically over-represents some segments of the population and under-represents others. We also still need to consider other sources of bias, such as individuals not responding honestly. These sources of error are not measured by the margin of error.

Cause and Effect

In many research studies, the primary question of interest concerns differences between groups. The question then becomes how the groups were formed (e.g., selecting people who already drink coffee vs. those who don’t). In some studies, the researchers actively form the groups themselves. But then we have a similar question—could any differences we observe in the groups be an artifact of that group-formation process? Or is the difference we observe in the groups so large that we can discount a “fluke” in the group-formation process as a reasonable explanation for what we find?

Example 4 : A psychology study investigated whether people tend to display more creativity when they are thinking about intrinsic (internal) or extrinsic (external) motivations (Ramsey & Schafer, 2002, based on a study by Amabile, 1985). The subjects were 47 people with extensive experience with creative writing. Subjects began by answering survey questions about either intrinsic motivations for writing (such as the pleasure of self-expression) or extrinsic motivations (such as public recognition). Then all subjects were instructed to write a haiku, and those poems were evaluated for creativity by a panel of judges. The researchers conjectured beforehand that subjects who were thinking about intrinsic motivations would display more creativity than subjects who were thinking about extrinsic motivations. The creativity scores from the 47 subjects in this study are displayed in Figure 26, where higher scores indicate more creativity.

Figure 26. Dot plot of creativity scores, ranging from about 5 to 27, for subjects prompted with either extrinsic or intrinsic motivations.

In this example, the key question is whether the type of motivation affects creativity scores. In particular, do subjects who were asked about intrinsic motivations tend to have higher creativity scores than subjects who were asked about extrinsic motivations?

Figure 26 reveals that both motivation groups saw considerable variability in creativity scores, and these scores have considerable overlap between the groups. In other words, it’s certainly not always the case that those with extrinsic motivations have higher creativity than those with intrinsic motivations, but there may still be a statistical tendency in this direction. (Psychologist Keith Stanovich (2013) refers to people’s difficulties with thinking about such probabilistic tendencies as “the Achilles heel of human cognition.”)

The mean creativity score is 19.88 for the intrinsic group, compared to 15.74 for the extrinsic group, which supports the researchers’ conjecture. Yet comparing only the means of the two groups fails to consider the variability of creativity scores within the groups. We can measure variability with statistics such as the standard deviation: 5.25 for the extrinsic group and 4.40 for the intrinsic group. The standard deviations tell us that most of the creativity scores are within about 5 points of the mean score in each group. We also see that the mean score for the intrinsic group lies within one standard deviation of the mean score for the extrinsic group. So, although there is a tendency for the creativity scores to be higher in the intrinsic group, on average, the difference is not extremely large.

We again want to consider possible explanations for this difference. The study only involved individuals with extensive creative writing experience. Although this limits the population to which we can generalize, it does not explain why the mean creativity score was a bit larger for the intrinsic group than for the extrinsic group. Maybe women tend to receive higher creativity scores? Here is where we need to focus on how the individuals were assigned to the motivation groups. If only women were in the intrinsic motivation group and only men in the extrinsic group, then this would present a problem because we wouldn’t know if the intrinsic group did better because of the different type of motivation or because they were women. However, the researchers guarded against such a problem by randomly assigning the individuals to the motivation groups. Like flipping a coin, each individual was just as likely to be assigned to either type of motivation. Why is this helpful? Because this random assignment  tends to balance out all the variables related to creativity we can think of, and even those we don’t think of in advance, between the two groups. So we should have a similar male/female split between the two groups; we should have a similar age distribution between the two groups; we should have a similar distribution of educational background between the two groups; and so on. Random assignment should produce groups that are as similar as possible except for the type of motivation, which presumably eliminates all those other variables as possible explanations for the observed tendency for higher scores in the intrinsic group.

But does this always work? No; just by the “luck of the draw,” the groups may be a little different prior to answering the motivation survey. So then the question is, is it possible that an unlucky random assignment is responsible for the observed difference in creativity scores between the groups? In other words, suppose each individual’s poem would have received the same creativity score no matter which group they were assigned to, so that the type of motivation in no way impacted their score. Then how often would the random-assignment process alone lead to a difference in mean creativity scores as large as (or larger than) 19.88 – 15.74 = 4.14 points?

We again want to apply a probability model to approximate a p-value, but this time the model will be a bit different. Think of writing everyone’s creativity scores on index cards, shuffling the cards, dealing out 23 to the extrinsic motivation group and 24 to the intrinsic motivation group, and finding the difference in the group means. We (better yet, the computer) can repeat this process over and over to see how often, when the scores don’t change, random assignment leads to a difference in means at least as large as 4.14. Figure 27 shows the results from 1,000 such hypothetical random assignments for these scores.
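
The card-shuffling procedure can be sketched as a short simulation. The study’s 47 individual creativity scores are not reproduced in this text, so the function below is generic: `scores`, `n_group1`, and `observed_diff` are placeholders to be filled in with the actual data.

```python
import random

def randomization_p_value(scores, n_group1, observed_diff, n_sims=1000):
    """Shuffle the scores, deal n_group1 of them to group 1 and the rest
    to group 2, and count how often the mean of group 2 exceeds the mean
    of group 1 by at least observed_diff. The proportion of such shuffles
    approximates the p-value."""
    scores = list(scores)  # work on a copy
    count = 0
    for _ in range(n_sims):
        random.shuffle(scores)
        group1, group2 = scores[:n_group1], scores[n_group1:]
        diff = sum(group2) / len(group2) - sum(group1) / len(group1)
        if diff >= observed_diff:
            count += 1
    return count / n_sims
```

With the study’s 47 scores (23 dealt to the extrinsic group, 24 to the intrinsic group) and `observed_diff = 4.14`, only about 2 of every 1,000 shuffles would be expected to reach the observed difference, matching the p-value reported below.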

Figure 27. Distribution of the differences in group means from 1,000 simulated random assignments, forming a typical bell curve.

Only 2 of the 1,000 simulated random assignments produced a difference in group means of 4.14 or larger. In other words, the approximate p-value is 2/1000 = 0.002. This small p-value indicates that it would be very surprising for the random assignment process alone to produce such a large difference in group means. Therefore, as with Example 2, we have strong evidence that focusing on intrinsic motivations tends to increase creativity scores, as compared to thinking about extrinsic motivations.

Notice that the previous statement implies a cause-and-effect relationship between motivation and creativity score; is such a strong conclusion justified? Yes, because of the random assignment used in the study. Random assignment should have balanced out any other variables between the two groups, so now that the small p-value convinces us that the higher mean in the intrinsic group wasn’t just a coincidence, the only reasonable explanation left is the difference in the type of motivation. Can we generalize this conclusion to everyone? Not necessarily. We could cautiously generalize it to individuals with extensive experience in creative writing similar to the individuals in this study, but we would still want to know more about how these individuals were selected to participate.


Statistical thinking involves the careful design of a study to collect meaningful data to answer a focused research question, detailed analysis of patterns in the data, and drawing conclusions that go beyond the observed data. Random sampling is paramount to generalizing results from our sample to a larger population, and random assignment is key to drawing cause-and-effect conclusions. With both kinds of randomness, probability models help us assess how much random variation we can expect in our results, in order to determine whether our results could happen by chance alone and to estimate a margin of error.

So where does this leave us with regard to the coffee study mentioned previously? (Freedman, Park, Abnet, Hollenbeck, & Sinha, 2012, found that men who drank at least six cups of coffee a day had a 10% lower chance of dying, and women a 15% lower chance, than those who drank none.) We can answer many of the questions:

  • This was a 14-year study conducted by researchers at the National Cancer Institute.
  • The results were published in the June issue of the New England Journal of Medicine , a respected, peer-reviewed journal.
  • The study reviewed coffee habits of more than 402,000 people ages 50 to 71 from six states and two metropolitan areas. Those with cancer, heart disease, and stroke were excluded at the start of the study. Coffee consumption was assessed once at the start of the study.
  • About 52,000 people died during the course of the study.
  • People who drank between two and five cups of coffee daily showed a lower risk as well, but the amount of reduction increased for those drinking six or more cups.
  • The sample sizes were fairly large and so the p-values are quite small, even though percent reduction in risk was not extremely large (dropping from a 12% chance to about 10%–11%).
  • Whether coffee was caffeinated or decaffeinated did not appear to affect the results.
  • This was an observational study, so no cause-and-effect conclusions can be drawn between coffee drinking and increased longevity, contrary to the impression conveyed by many news headlines about this study. In particular, it’s possible that those with chronic diseases don’t tend to drink coffee.

This study needs to be reviewed in the larger context of similar studies and consistency of results across studies, with the constant caution that this was not a randomized experiment. Although a statistical analysis can “adjust” for other potential confounding variables, we are not yet convinced that researchers have identified them all or completely isolated why this decrease in death risk is evident. Researchers can now take the findings of this study and develop more focused studies that address new questions.

Explore these outside resources to learn more about applied statistics:

  • Video about p-values:  P-Value Extravaganza
  • Interactive web applets for teaching and learning statistics
  • Inter-university Consortium for Political and Social Research  where you can find and analyze data.
  • The Consortium for the Advancement of Undergraduate Statistics
  • Find a recent research article in your field and answer the following: What was the primary research question? How were individuals selected to participate in the study? Were summary results provided? How strong is the evidence presented in favor or against the research question? Was random assignment used? Summarize the main conclusions from the study, addressing the issues of statistical significance, statistical confidence, generalizability, and cause and effect. Do you agree with the conclusions drawn from this study, based on the study design and the results presented?
  • Is it reasonable to use a random sample of 1,000 individuals to draw conclusions about all U.S. adults? Explain why or why not.

How to Read Research

In this course and throughout your academic career, you’ll be reading journal articles (articles written by experts and published in peer-reviewed journals) and reports that explain psychological research. It’s important to understand the format of these articles so that you can read them strategically and understand the information presented. Scientific articles vary in content and structure, depending on the type of journal to which they will be submitted. Psychological articles and many papers in the social sciences follow the writing guidelines and format dictated by the American Psychological Association (APA). In general, the structure follows: abstract, introduction, methods, results, discussion, and references.

  • Abstract : the abstract is the concise summary of the article. It summarizes the most important features of the manuscript, providing the reader with a global first impression on the article. It is generally just one paragraph that explains the experiment as well as a short synopsis of the results.
  • Introduction : this section provides background information about the origin and purpose of performing the experiment or study. It reviews previous research and presents existing theories on the topic.
  • Method : this section covers the methodologies used to investigate the research question, including the identification of participants , procedures , and  materials  as well as a description of the actual procedure . It should be sufficiently detailed to allow for replication.
  • Results : the results section presents key findings of the research, including reference to indicators of statistical significance.
  • Discussion : this section provides an interpretation of the findings, states their significance for current research, and derives implications for theory and practice. Alternative interpretations are also provided, particularly when it is not possible to draw conclusions about the directionality of the effects. In the discussion, authors also acknowledge the strengths and limitations/weaknesses of the study and offer concrete directions for future research.

Watch this 3-minute video for an explanation of how to read scholarly articles. Look closely at the example article shared just before the two-minute mark.

https://digitalcommons.coastal.edu/kimbel-library-instructional-videos/9/

Practice identifying these key components in the following experiment: Food-Induced Emotional Resonance Improves Emotion Recognition.

In this chapter, you learned to

  • define and apply the scientific method to psychology
  • describe the strengths and weaknesses of descriptive, experimental, and correlational research
  • define the basic elements of a statistical investigation

Putting It Together: Psychological Research

Psychologists use the scientific method to examine human behavior and mental processes. Some of the methods you learned about include descriptive, experimental, and correlational research designs.

Watch the CrashCourse video to review the material you learned, then read through the following examples and see if you can come up with your own design for each type of study.

You can view the transcript for “Psychological Research: Crash Course Psychology #2” here (opens in new window).

Case Study: a detailed analysis of a particular person, group, business, event, etc. This approach is commonly used to learn more about rare examples with the goal of describing that particular thing.

  • Ted Bundy was one of America’s most notorious serial killers who murdered at least 30 women and was executed in 1989. Dr. Al Carlisle evaluated Bundy when he was first arrested and conducted a psychological analysis of Bundy’s development of his sexual fantasies merging into reality (Ramsland, 2012). Carlisle believes that there was a gradual evolution of three processes that guided his actions: fantasy, dissociation, and compartmentalization (Ramsland, 2012). Read   Imagining Ted Bundy  (http://goo.gl/rGqcUv) for more information on this case study.

Naturalistic Observation : a researcher unobtrusively collects information without the participant’s awareness.

  • Drain and Engelhardt (2013) observed the evoked and spontaneous communicative acts of six nonverbal children with autism. Each child attended a school for children with autism and was in a different class. They were observed for 30 minutes of each school day. By observing the children without their knowledge, the researchers were able to see true communicative acts without any external influences.

Survey : participants are asked to provide information or responses to questions on a survey or structured assessment.

  • Educational psychologists can ask students to report their grade point average and what, if anything, they eat for breakfast on an average day. A healthy breakfast has been associated with better academic performance (DiGangi, 1999).
  • Anderson (1987) tried to find the relationship between uncomfortably hot temperatures and aggressive behavior, which was then looked at with two studies done on violent and nonviolent crime. Based on previous research that had been done by Anderson and Anderson (1984), it was predicted that violent crimes would be more prevalent during the hotter time of year and the years in which it was hotter weather in general. The study confirmed this prediction.

Longitudinal Study: researchers   recruit a sample of participants and track them for an extended period of time.

  • In a study of a representative sample of 856 children, Eron and his colleagues (1972) found that a boy’s exposure to media violence at age eight was significantly related to his aggressive behavior ten years later, after he graduated from high school.

Cross-Sectional Study:  researchers gather participants from different groups (commonly different ages) and look for differences between the groups.

  • In 1996, Russell surveyed people of varying age groups and found that people in their 20s tend to report being more lonely than people in their 70s.

Correlational Design:  two different variables are measured to determine whether there is a relationship between them.

  • Thornhill et al. (2003) had people rate how physically attractive they found other people to be. They then had them separately smell t-shirts those people had worn (without knowing which clothes belonged to whom) and rate how good or bad their body odor was. They found that the more attractive someone was, the more pleasant their body odor was rated to be.
  • Clinical psychologists can test a new pharmaceutical treatment for depression by giving some patients the new pill and others an already-tested one to see which is the more effective treatment.


CC licensed content, Original

  • Psychological Research Methods. Provided by : Karenna Malavanti. License : CC BY-SA: Attribution ShareAlike

CC licensed content, Shared previously

  • Psychological Research. Provided by : OpenStax College. License : CC BY: Attribution . License Terms : Download for free at https://openstax.org/books/psychology-2e/pages/1-introduction. Located at : https://openstax.org/books/psychology-2e/pages/2-introduction .
  • Why It Matters: Psychological Research. Provided by : Lumen Learning. License : CC BY: Attribution   Located at: https://pressbooks.online.ucf.edu/lumenpsychology/chapter/introduction-15/
  • Introduction to The Scientific Method. Provided by : Lumen Learning. License : CC BY: Attribution   Located at:   https://pressbooks.online.ucf.edu/lumenpsychology/chapter/outcome-the-scientific-method/
  • Research picture. Authored by : Mediterranean Center of Medical Sciences. Provided by : Flickr. License : CC BY: Attribution   Located at : https://www.flickr.com/photos/mcmscience/17664002728 .
  • The Scientific Process. Provided by : Lumen Learning. License : CC BY-SA: Attribution ShareAlike   Located at:  https://pressbooks.online.ucf.edu/lumenpsychology/chapter/reading-the-scientific-process/
  • Ethics in Research. Provided by : Lumen Learning. License : CC BY: Attribution   Located at:  https://pressbooks.online.ucf.edu/lumenpsychology/chapter/ethics/
  • Ethics. Authored by : OpenStax College. Located at : https://openstax.org/books/psychology-2e/pages/2-4-ethics . License : CC BY: Attribution . License Terms : Download for free at https://openstax.org/books/psychology-2e/pages/1-introduction .
  • Introduction to Approaches to Research. Provided by : Lumen Learning. License : CC BY-NC-SA: Attribution NonCommercial ShareAlike   Located at:   https://pressbooks.online.ucf.edu/lumenpsychology/chapter/outcome-approaches-to-research/
  • Lec 2 | MIT 9.00SC Introduction to Psychology, Spring 2011. Authored by : John Gabrieli. Provided by : MIT OpenCourseWare. License : CC BY-NC-SA: Attribution-NonCommercial-ShareAlike Located at : https://www.youtube.com/watch?v=syXplPKQb_o .
  • Paragraph on correlation. Authored by : Christie Napa Scollon. Provided by : Singapore Management University. License : CC BY-NC-SA: Attribution-NonCommercial-ShareAlike Located at : http://nobaproject.com/modules/research-designs?r=MTc0ODYsMjMzNjQ%3D . Project : The Noba Project.
  • Descriptive Research. Provided by : Lumen Learning. License : CC BY-SA: Attribution ShareAlike   Located at: https://pressbooks.online.ucf.edu/lumenpsychology/chapter/reading-clinical-or-case-studies/
  • Approaches to Research. Authored by : OpenStax College.  License : CC BY: Attribution . License Terms : Download for free at https://openstax.org/books/psychology-2e/pages/1-introduction. Located at : https://openstax.org/books/psychology-2e/pages/2-2-approaches-to-research
  • Analyzing Findings. Authored by : OpenStax College. Located at : https://openstax.org/books/psychology-2e/pages/2-3-analyzing-findings . License : CC BY: Attribution . License Terms : Download for free at https://openstax.org/books/psychology-2e/pages/1-introduction.
  • Experiments. Provided by : Lumen Learning. License : CC BY: Attribution   Located at:  https://pressbooks.online.ucf.edu/lumenpsychology/chapter/reading-conducting-experiments/
  • Research Review. Authored by : Jessica Traylor for Lumen Learning. License : CC BY: Attribution Located at:  https://pressbooks.online.ucf.edu/lumenpsychology/chapter/reading-conducting-experiments/
  • Introduction to Statistics. Provided by : Lumen Learning. License : CC BY: Attribution   Located at:  https://pressbooks.online.ucf.edu/lumenpsychology/chapter/outcome-statistical-thinking/
  • histogram. Authored by : Fisher’s Iris flower data set. Provided by : Wikipedia. License : CC BY-SA: Attribution-ShareAlike Located at : https://en.wikipedia.org/wiki/Wikipedia:Meetup/DC/Statistics_Edit-a-thon#/media/File:Fisher_iris_versicolor_sepalwidth.svg .
  • Statistical Thinking. Authored by : Beth Chance and Allan Rossman. Provided by : California Polytechnic State University, San Luis Obispo. License : CC BY-NC-SA: Attribution-NonCommercial-ShareAlike . License Terms : http://nobaproject.com/license-agreement Located at : http://nobaproject.com/modules/statistical-thinking . Project : The Noba Project.
  • Drawing Conclusions from Statistics. Authored by: Pat Carroll and Lumen Learning. Provided by : Lumen Learning. License : CC BY: Attribution   Located at: https://pressbooks.online.ucf.edu/lumenpsychology/chapter/reading-drawing-conclusions-from-statistics/
  • Statistical Thinking. Authored by : Beth Chance and Allan Rossman, California Polytechnic State University, San Luis Obispo. Provided by : Noba. License: CC BY-NC-SA: Attribution-NonCommercial-ShareAlike Located at : http://nobaproject.com/modules/statistical-thinking .
  • The Replication Crisis. Authored by : Colin Thomas William. Provided by : Ivy Tech Community College. License: CC BY: Attribution
  • How to Read Research. Provided by : Lumen Learning. License : CC BY: Attribution   Located at:  https://pressbooks.online.ucf.edu/lumenpsychology/chapter/how-to-read-research/
  • What is a Scholarly Article? Kimbel Library First Year Experience Instructional Videos. 9. Authored by:  Joshua Vossler, John Watts, and Tim Hodge.  Provided by : Coastal Carolina University  License :  CC BY NC ND:  Attribution-NonCommercial-NoDerivatives Located at :  https://digitalcommons.coastal.edu/kimbel-library-instructional-videos/9/
  • Putting It Together: Psychological Research. Provided by : Lumen Learning. License : CC BY: Attribution   Located at:  https://pressbooks.online.ucf.edu/lumenpsychology/chapter/putting-it-together-psychological-research/
  • Research. Provided by : Lumen Learning. License : CC BY: Attribution   Located at:

All rights reserved content

  • Understanding Driver Distraction. Provided by : American Psychological Association. License : Other. License Terms: Standard YouTube License Located at : https://www.youtube.com/watch?v=XToWVxS_9lA&list=PLxf85IzktYWJ9MrXwt5GGX3W-16XgrwPW&index=9 .
  • Correlation vs. Causality: Freakonomics Movie. License : Other. License Terms : Standard YouTube License Located at : https://www.youtube.com/watch?v=lbODqslc4Tg.
  • Psychological Research – Crash Course Psychology #2. Authored by : Hank Green. Provided by : Crash Course. License : Other. License Terms : Standard YouTube License Located at : https://www.youtube.com/watch?v=hFV71QPvX2I .

Public domain content

  • Researchers review documents. Authored by : National Cancer Institute. Provided by : Wikimedia. Located at : https://commons.wikimedia.org/wiki/File:Researchers_review_documents.jpg . License : Public Domain: No Known Copyright

grounded in objective, tangible evidence that can be observed time and time again, regardless of who is observing

well-developed set of ideas that propose an explanation for observed phenomena

(plural: hypotheses) tentative and testable statement about the relationship between two or more variables

an experiment must be replicable by another researcher

implies that a theory should enable us to make predictions about future events

able to be disproven by experimental results

implies that all data must be considered when evaluating a hypothesis

committee of administrators, scientists, and community members that reviews proposals for research involving human participants

process of informing a research participant about what to expect during an experiment, any risks involved, and the implications of the research, and then obtaining the person’s consent to participate

purposely misleading experiment participants in order to maintain the integrity of the experiment

when an experiment involves deception, participants are told complete and truthful information about the experiment at its conclusion

committee of administrators, scientists, veterinarians, and community members that reviews proposals for research involving non-human animals

research studies that do not test specific relationships between variables

research investigating the relationship between two or more variables

research method that uses hypothesis testing to make inferences about how one variable impacts and causes another

observation of behavior in its natural setting

inferring that the results for a sample apply to the larger population

when observations may be skewed to align with observer expectations

measure of agreement among observers on how they record and classify a particular event

observational research study focusing on one or a few people

list of questions to be answered by research participants—given as paper-and-pencil questionnaires, administered electronically, or conducted verbally—allowing researchers to collect data from a large number of people

subset of individuals selected from the larger population

overall group of individuals that the researchers are interested in

method of research using past records or data sets to answer various research questions, or to search for interesting patterns or relationships

studies in which the same group of individuals is surveyed or measured repeatedly over an extended period of time

compares multiple segments of a population at a single time

reduction in number of research participants as some drop out of the study over time

relationship between two or more variables; when two variables are correlated, one variable changes as the other does

number from -1 to +1, indicating the strength and direction of the relationship between variables, and usually represented by r

two variables change in the same direction, both becoming either larger or smaller

two variables change in different directions, with one becoming larger as the other becomes smaller; a negative correlation is not the same thing as no correlation
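
The correlation definitions above can be made concrete with a short sketch. This is a minimal, self-contained illustration (the data values are invented): it computes r from its standard formula and shows one positive and one negative relationship.

```python
# A minimal sketch (with invented data) of the correlation coefficient r
# defined above: a number from -1 to +1 giving the strength and direction
# of the relationship between two variables.
from statistics import mean

def pearson_r(xs, ys):
    """Pearson's r, computed from its standard formula."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Positive correlation: both variables become larger together.
print(pearson_r([1, 2, 3, 4], [2, 4, 5, 9]))   # close to +1
# Negative correlation: one becomes larger as the other becomes smaller.
print(pearson_r([1, 2, 3, 4], [9, 6, 4, 1]))   # close to -1
```

Note that r near 0 would indicate little or no linear relationship, which, as the definition above stresses, is not the same thing as a negative correlation.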

changes in one variable cause the changes in the other variable; can be determined only through an experimental research design

unanticipated outside factor that affects both variables of interest, often giving the false impression that changes in one variable cause changes in the other variable, when, in actuality, the outside factor causes changes in both variables

seeing relationships between two things when in reality no such relationship exists

tendency to ignore evidence that disproves ideas or beliefs

group designed to answer the research question; experimental manipulation is the only difference between the experimental and control groups, so any differences between the two are due to experimental manipulation rather than chance

serves as a basis for comparison and controls for chance factors that might influence the results of the study—by holding such factors constant across groups so that the experimental manipulation is the only difference between groups

description of what actions and operations will be used to measure the dependent variables and manipulate the independent variables

researcher expectations skew the results of the study

experiment in which the researcher knows which participants are in the experimental group and which are in the control group

experiment in which both the researchers and the participants are blind to group assignments

people's expectations or beliefs influencing or determining their experience in a given situation

variable that is influenced or controlled by the experimenter; in a sound experimental study, the independent variable is the only important difference between the experimental and control group

variable that the researcher measures to see how much effect the independent variable had

subjects of psychological research

subset of a larger population in which every member of the population has an equal chance of being selected

method of experimental group assignment in which all participants have an equal chance of being assigned to either group
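
The two random procedures defined above, a random sample and random assignment, can be sketched in a few lines. The participant IDs, population size, and group sizes below are invented for illustration.

```python
# A minimal sketch of random sampling and random assignment as defined
# above. All names and numbers are hypothetical.
import random

random.seed(42)  # fixed seed so the illustration is reproducible

population = [f"P{i:03d}" for i in range(500)]   # hypothetical population

# Random sample: every member of the population has an equal chance
# of being selected.
sample = random.sample(population, 40)

# Random assignment: every sampled participant has an equal chance of
# being placed in either group.
shuffled = random.sample(sample, len(sample))
experimental_group = shuffled[:20]
control_group = shuffled[20:]

print(len(experimental_group), len(control_group))  # 20 20
```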

consistency and reproducibility of a given result

accuracy of a given result in measuring what it is designed to measure

determines how likely any difference between experimental groups is due to chance

statistical probability that represents the likelihood that experimental results happened by chance
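
One way to see what this probability means is a simple randomization test: shuffle the scores many times and count how often a group difference at least as large as the observed one arises by chance. The scores below are invented, and this sketch illustrates the idea rather than the exact test any particular study would use.

```python
# A minimal sketch of estimating a p-value by randomization: how often
# does a difference at least as large as the observed one appear when
# group labels are assigned purely by chance? All scores are invented.
import random

random.seed(1)

experimental = [14, 15, 13, 16, 17, 15, 18, 16]
control      = [12, 11, 13, 12, 10, 13, 11, 12]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(experimental) - mean(control)

pooled = experimental + control
count = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)                       # scramble group labels
    fake_diff = mean(pooled[:8]) - mean(pooled[8:])
    if abs(fake_diff) >= abs(observed):
        count += 1

p_value = count / trials
print(round(observed, 2), p_value)  # a p-value well below 0.05 here
```

A small p-value, conventionally below 0.05, means a difference this large would rarely occur by chance alone, which is what "statistically significant" expresses.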

Psychological Science is the scientific study of mind, brain, and behavior. We will explore what it means to be human in this class. It has never been more important for us to understand what makes people tick, how to evaluate information critically, and the importance of history. Psychology can also help you in your future career; indeed, there are very few jobs out there with no human interaction!

Because psychology is a science, we analyze human behavior through the scientific method. There are several ways to investigate human phenomena, such as observation, experiments, and more. We will discuss the basics, pros and cons of each! We will also dig deeper into the important ethical guidelines that psychologists must follow in order to do research. Lastly, we will briefly introduce ourselves to statistics, the language of scientific research. While reading the content in these chapters, try to find examples of material that can fit with the themes of the course.

To get us started:

  • The study of the mind moved away from introspection toward reaction-time studies as we learned more about empiricism
  • Psychologists work in careers outside of the typical "clinician" role. We advise in human factors, education, policy, and more!
  • While completing an observation study, psychologists will work to aggregate common themes to explain the behavior of the group (sample) as a whole. In doing so, we still allow for normal variation from the group!
  • The IRB and IACUC are important in ensuring ethics are maintained for both human and animal subjects

Psychological Science: Understanding Human Behavior Copyright © by Karenna Malavanti is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

Share This Book

Cognitive psychology and self-reports: Models and methods

  • Published: May 2003
  • Volume 12 , pages 219–227, ( 2003 )

  • Jared B. Jobe 1  

This article describes the models and methods that cognitive psychologists and survey researchers use to evaluate and experimentally test cognitive issues in questionnaire design and subsequently improve self-report instruments. These models and methods assess the cognitive processes underlying how respondents comprehend and generate answers to self-report questions. Cognitive processing models are briefly described. Non-experimental methods – expert cognitive review, cognitive task analysis, focus groups, and cognitive interviews – are described. Examples are provided of how these methods were effectively used to identify cognitive self-report issues. Experimental methods – cognitive laboratory experiments, field tests, and experiments embedded in field surveys – are described. Examples are provided of: (a) how laboratory experiments were designed to test the capability and accuracy of respondents in performing the cognitive tasks required to answer self-report questions, (b) how a field experiment was conducted in which a cognitively designed questionnaire was effectively tested against the original questionnaire, and (c) how a cognitive experiment embedded in a field survey was conducted to test cognitive predictions.

Author information

Authors and affiliations.

National Heart, Lung, and Blood Institute, Bethesda, MD, USA

Jared B. Jobe

About this article

Jobe, J.B. Cognitive psychology and self-reports: Models and methods. Qual Life Res 12 , 219–227 (2003). https://doi.org/10.1023/A:1023279029852

  • Autobiographical memory
  • Cognitive interviews
  • Focus groups
  • Information processing models

Get Revising

  • Created by: Paulina Gorska
  • Created on: 10-04-16 00:40

 Describe the cognitive approach in psychology.

Evaluate the research methods used by cognitive psychologists.

The cognitive approach is concerned with mental processes to help us understand why people behave in certain ways. The approach uses scientific and experimental methods to measure mental processes, which results in a rejection of introspection. The cognitive approach has three types of enquiry: neuropsychology, which looks into brain damage, amnesia, etc. and how these impact our behaviour; cognitive science, which is concerned with theories and theoretical developments of the approach; and experimental cognitive psychology, which investigates all types of mental processes in healthy people in controlled laboratory conditions, such as the Ebbinghaus study of 1885. The aim of that study was to investigate forgetting and long-term memory decay, which involved learning nonsense syllables like PEZ and URB. It showed that if the words weren't practised a day, a week, and two weeks after first learning, they would be forgotten much faster. The experiment was conducted under controlled time conditions where the variables were highly controlled. Another strength of the research methods is the fact that standardised procedures are easy to replicate because the variables can be easily adjusted and the data is objective.   However, the experiments are highly artificial and unnatural; when people are briefed beforehand demand characteristics may occur and data can be …


  • Open access
  • Published: 30 April 2024

Designing, implementation and evaluation of story reading: a solution to increase general empathy in medical students

  • Masoumeh Mahmoudi 1 ,
  • Ali Asghar Ghorbani 2 ,
  • Mehdi Pourasghar 3 ,
  • Azita Balaghafari 4 ,
  • Jamshid Yazdani Charati 5 ,
  • Nassim Ghahrani 6 &
  • Farzaneh Amini 7  

BMC Medical Education volume  24 , Article number:  477 ( 2024 ) Cite this article

Communication and mutual understanding among healthcare providers are significant concerns within the healthcare system, and enhancing empathy is one way to foster effective communication and mutual understanding. The aim of this research is to evaluate and compare the impact of story reading on the level of empathy in medical students at Mazandaran University of Medical Sciences.

The study employed an intervention educational design (a quasi-experimental), with a convenience sample of 51 medical students selected as the statistical population. The process of story reading was conducted over six two-hour virtual sessions in the students' classroom, spanning six weeks. Selected stories were discussed in an online virtual class under the supervision of an instructor, focusing on story elements. To assess students' empathy in this educational program, the Davis General Empathy Questionnaire was administered before each of the six sessions, after, and one week later at the end of the course. Statistical analysis of the collected data was performed using repeated measures analysis of variance and Bonferroni's post hoc test through SPSS version 28 software, with a significance level set at 0.05.
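
The Bonferroni post hoc logic mentioned in the analysis above can be sketched briefly: with three pairwise comparisons of the measurement points (before, after, and one week later), the 0.05 significance level is divided by the number of comparisons. The p-values below are invented for illustration only; they are not results from this study.

```python
# A minimal sketch of the Bonferroni correction used after a repeated
# measures ANOVA: the alpha level is divided by the number of pairwise
# comparisons. The comparison names and p-values are hypothetical.
alpha = 0.05
comparisons = {
    "before vs. after": 0.0004,
    "before vs. one week later": 0.0021,
    "after vs. one week later": 0.4100,
}
adjusted_alpha = alpha / len(comparisons)  # 0.05 / 3, roughly 0.0167

for name, p in comparisons.items():
    verdict = "significant" if p < adjusted_alpha else "not significant"
    print(f"{name}: p = {p} -> {verdict} at the adjusted alpha")
```

Dividing alpha this way keeps the overall chance of a false positive across all three comparisons at about the original 0.05 level.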

The findings revealed that 27 participants (58.7%) were female students, with the remaining being male students, having an average age of 19.5 ± 0.86 years. The level of general empathy among the students significantly increased after the intervention compared to before the intervention ( P <0.001). Furthermore, the analysis of variance with repeated measures indicated a significant effect of the story reading program on enhancing empathy in terms of emotional and cognitive transfer among students in the intervention group ( P <0.001).

Conclusions

The research findings suggest that the story reading program effectively enhances the overall sense of empathy among medical students at the University of Medical Sciences. Therefore, implementing this method in universities, higher education centers, libraries, and psychology centers for teaching empathy can be valuable in fostering empathy skills and improving healthcare.

Introduction

In the realm of medicine, even with remarkable technological progress, it remains imperative to maintain effective communication and personal interactions when interacting with and providing care for patients. Research outcomes underscore the importance of the doctor-patient relationship in comprehending and addressing the suffering caused by illnesses, thereby emphasizing the necessity to enhance this relationship [ 1 ]. Many researchers acknowledge the importance of empathy among healthcare professionals and the value of training in developing this skill [ 2 , 3 ]. Furthermore, investigations into complaints related to doctors confirm that many of these grievances are not linked to the doctors' scientific skills or efficiency but rather arise from their communication approach with patients [ 4 ]. In simple terms, it can be said that communication errors are the primary cause of most medical complaints and violations. While information and communication technology (ICT) have made obtaining health and medical information easier, medical technology cannot validate the doctor-patient relationship, which is built upon their mutual attitudes towards one another. As a result, a therapist's behavior may vary depending on their perception of and attitude towards their patient since people's behavior is influenced by their orientation towards others and their perceptions.

Carl Rogers, a renowned humanist in the early 20th century, believed that empathy was a crucial and effective process for facilitating psychological changes within the doctor-patient relationship [ 5 ]. Furthermore, doctor-patient empathy holds a significant ethical importance within the medical community [ 6 ], which has led to discussions on this topic during medical ethics congresses. The need for well-designed educational plans to enhance empathetic abilities is highlighted by research that indicates a lack of satisfaction in this area [ 7 ]. Empathy training can strengthen this moral virtue and improve the mental and spiritual well-being of patients by incorporating it into the medical curriculum [ 8 ]. Empathy skills will help healthcare providers better understand and address their patients’ emotional needs, thereby fostering more positive therapeutic outcomes.

We are narrative creatures, and we grow by telling stories and listening to stories [ 9 ]. Many believe that storytelling has numerous benefits: as an educational tool it can improve word retention and vocabulary, encourage children to learn English, increase their moral values, provide inexpensive media for rich language experience, increase students' interest in reading, and improve listening and writing abilities [ 10 ]. Narration and storytelling are also considered useful for treatment and intervention, and as a technique for collecting qualitative data about treatment processes [ 11 ]. There is a view that reading a story and getting to know the structure of a story text in literature has similarities with narrative medicine. In fact, just as literature deals with the sufferings and problems of humans, narrative medicine deals mainly with the sufferings and problems of patients, patients' families, or health care providers. Therefore, the narrative reading techniques used in literary study can be applied in narrative medicine to better understand the suffering of patients, and thus help medical students and health care professionals communicate with patients; this can be effective in creating empathy [ 12 ]. To apply the narrative approach in teaching, we must answer the key questions necessary to understand fiction: Who, what, where, when, and why? What is the storyline? How will it begin, develop, and end? [ 13 ]. Indeed, the field of narrative medicine has provided a useful way to open up our understanding of clinical reasoning, as we realize that the task of arriving at a diagnosis and treatment plan is largely narrative in nature [ 14 ].

In addition, many researchers in the medical humanities [ 15 ] and medical professionalism [ 16 ] have highlighted the value of studying literary fiction in creating and developing the clinical abilities needed for compassionate and professional interaction with the patient, alongside phenomenological research on the lived experience of illness [ 17 ]. Experimental methods such as fMRI have been used to track the effect of reading a story text on brain activity, with positive results [ 18 ]. Since reading literary fiction is a cheap and effective way to develop empathy, story-reading and discussion sessions about the plots of tales have been held for years for hospital staff [ 19 ].

Of course, many of these studies are dedicated to reading a text, or part of a literary text, in a single session, and their research populations consist of people already inclined to read fiction, so the influence of the text may be due to their readiness and preparation to learn empathy. Another important aspect is that while these studies highlight the impact of reading literary stories on readers, improving their communication skills and fostering empathy, none of them was specifically designed to incorporate educational lessons centered on text analysis and the teaching of story elements, as the current study does. Although we have not discovered any evidence of a similar process being implemented in educational institutions nationwide, internal studies have suggested that reading stories can reduce behavioral inconsistencies or enhance learning abilities [ 20 , 21 ].

Considering the research that has raised the need for training and creating empathy among therapists (8, 40, 41 and 42), we decided to include reading fictional texts and familiarization with the elements of story structure in the curriculum of medical students. Our research has several distinctive features: it uses Persian stories, reads the stories along with explaining and discussing their story elements, and places these training sessions in the students' curriculum, implemented in six two-hour sessions.

Four learning theories (cognitive, behaviorist, constructivist, and connectivist) can inform storytelling. According to cognitive theory, internal and external factors can affect learners; behaviorist learning theory emphasizes that learners' behavior depends on how they interact with their environment; in constructivist learning theory, learners design their learning based on previous experiences; and connectivist learning theory focuses on the idea that people learn and grow by creating relationships [ 10 ].

Therefore, storytelling and active exposure to stories can be accommodated by each of these theories, and, given the relationship between understanding the structure of a fictional text and the stories of patients and diseases, storytelling and knowledge of story elements and structure can be effective in building narrative skills in learners.

Currently, although the significance of empathy skills is acknowledged in courses such as medical ethics, the teaching of this skill is not officially sanctioned in medical courses. Given the impact of accurately understanding a story in cultivating empathy, it is worth considering the inclusion of indirect instruction on this skill in the Persian literature course. This course is a three-unit general course. Medical students are mandated to successfully complete three units of Persian literature as part of their curriculum. The course content has been authorized by the Ministry of Science, although the Ministry of Health's educational planning office has yet to designate an official curriculum for it. Despite being approved by the Ministry of Science with the aim of providing comprehensive education in the Persian language and developing proficient reading and writing skills, critics argue that the general Persian course falls short in adhering to the approved curriculum. They claim that the lecturers, disregarding the audience's needs, tend to design educational content based on their personal preferences and areas of expertise [ 22 , 23 ]. In recent years, the importance of focusing on empathy skills among medical staff and the need for their training has consistently been emphasized in medical ethics meetings and conferences. Therefore, the authors of this article recognize the significance of enabling medical students to enhance their empathy skills. They also acknowledge the potential impact that reading literary stories can have on improving this skill. Consequently, they have decided to focus on teaching the art of accurately interpreting stories. Therefore, this study aims to evaluate and compare the impact of story reading on the level of empathy in medical students at Mazandaran University of Medical Sciences.

We designed a quasi-experimental educational intervention using the ADDIE model, which comprises five phases: Analysis, Design, Development, Implementation, and Evaluation. Of the 51 second-year medical students enrolled at the medical school of Mazandaran University of Medical Sciences in Sari, 46 participated in this study during the first semester of the 2021 academic year. In the design phase, the researchers selected the stories and defined the delivery and testing methods. Before each session, the students read the stories assigned to that session; they were then required to take an exam based on the stories and the session content. To assess the students' empathy levels before and after the sessions, we used three subscales of Davis's Interpersonal Reactivity Index (IRI; Davis, 1980): empathic concern (EC), perspective-taking (PT), and the Fantasy Scale (FS). Each subscale consists of seven items answered on a 5-point scale (from "does not describe me well" to "describes me very well").

The EC subscale measures feelings of compassion and concern for others, also known as emotional empathy (e.g., "I often become emotionally affected by witnessing events"). The PT subscale measures the inclination to adopt the psychological viewpoint of others, also known as cognitive empathy (e.g., "I make an effort to pay attention to each person's opposing perspective before engaging in a debate"). The FS subscale measures an individual's tendency to identify with fictional characters from books, films, or video games (e.g., "When I am reading an interesting story or novel, I imagine how I would feel if the events in the story were happening to me").
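As an illustration, scoring subscales of this kind can be sketched as follows. This is a minimal sketch only: the item-to-subscale mapping and the reverse-keyed items below are hypothetical placeholders, not Davis's official scoring key, and items are assumed to be coded 0-4.

```python
# Hedged sketch of IRI-style subscale scoring. The mapping of item numbers to
# subscales and the set of reverse-keyed items are ILLUSTRATIVE, not the
# official Davis (1980) key. Items are assumed coded 0 ("does not describe me
# well") to 4 ("describes me very well"); reverse-keyed items are flipped.

SUBSCALES = {
    "EC": [1, 2, 3, 4, 5, 6, 7],        # empathic concern (illustrative items)
    "PT": [8, 9, 10, 11, 12, 13, 14],   # perspective-taking
    "FS": [15, 16, 17, 18, 19, 20, 21], # fantasy
}
REVERSED = {3, 9, 16}  # hypothetical reverse-keyed items

def score_iri(responses):
    """responses: dict item_number -> raw score (0-4).
    Returns each subscale's sum (range 0-28 for seven items)."""
    scores = {}
    for name, items in SUBSCALES.items():
        total = 0
        for item in items:
            raw = responses[item]
            total += (4 - raw) if item in REVERSED else raw
        scores[name] = total
    return scores
```

With a neutral response (2) on every item, each seven-item subscale sums to 14, the scale midpoint.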

In the development phase, we gathered stories drawing on studies that have established the impact of reading stories on empathy. We selected stories from prestigious literary festivals or by renowned authors who have attracted the attention of literary criticism theorists; the stories were carefully chosen by literature experts, and their suitability for teaching story elements in the assigned sessions was also considered. In the implementation phase, students who did not wish to participate in the research were instructed to study the material covered in the in-person classes and answer tailored questions in the final examination. Note that this alternative content did not include criticism and analysis of stories; it consisted of the typical educational texts of this course, including ancient Persian texts rendered into modern Persian poetry and prose.

In this study, the reading of literary texts was conducted online, in a virtual classroom for medical students, in six 2-hour sessions over six weeks. This arrangement was necessary because of the Covid-19 pandemic and the resulting limits on in-person teaching. Two hours per session over six sessions was chosen to make the training equivalent to one lesson unit, to be included in the proposed syllabus of the three-unit Persian literature course for medical students. To assess the students' empathy during this course, the Davis general empathy questionnaire was administered before the first session, after the last session, and again one week after the end of the course.

During these sessions, the teacher also explained story elements with reference to relevant specialized books. The stories used in this study were selected by the literature expert (MM), who taught the training sessions, on the basis of the authors' esteemed reputation and literary recognition, as supported by research in this field [ 24 , 25 , 26 , 27 , 28 , 29 ]. These stories are well known in Persian literature and were written by famous authors. As listed in Additional file 1 : Appendix 1, some of them have won festival awards, and the rest are by well-known Persian writers, selected from the book "Short Story in Iran" by Dr. Hossein Payandeh, a prominent Iranian critic who holds the chair of literary criticism and theorizing at Allameh Tabataba'i University.

After every session, a test on the topics covered in that session was administered so that the instructor could verify the students' attention, listening, and active presence in class; this supported the interpretation of the empathy measured after reading the stories. Student satisfaction with the educational material and the conduct of the sessions was gauged through interviews and students' self-expression. To assess the durability of empathy, students completed the Davis general empathy questionnaire again one week after the end of the course.

We evaluated 46 students, of whom 27 (58.7%) were female, with a mean age of 19.5 ± 0.8 years (range 18 to 22). Empathy was evaluated at three time points: before the intervention, after the intervention, and 7 days after the end of the intervention. The results below relate to the overall empathy scores (Table 1 ).

To compare overall empathy across the three time points, repeated-measures analysis of variance was used. Because Mauchly's test of sphericity was significant ( P <0.001), the Greenhouse-Geisser correction was applied, and a significant difference was observed across the three measurements of overall empathy ( P <0.001).
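For readers unfamiliar with the correction, the Greenhouse-Geisser epsilon-hat (Greenhouse & Geisser, 1959) can be computed directly from the covariance matrix of the k repeated measures; the degrees of freedom of the F-test are then multiplied by epsilon. The function below is a minimal pure-Python sketch for illustration only; the function name and input format are our own, not the software used in the study.

```python
def greenhouse_geisser_epsilon(S):
    """Greenhouse-Geisser epsilon-hat from the k x k sample covariance matrix
    S of the repeated measures (list of lists). Epsilon ranges from 1/(k-1)
    (maximal violation of sphericity) to 1 (sphericity holds); the ANOVA
    degrees of freedom are multiplied by this factor."""
    k = len(S)
    grand = sum(sum(row) for row in S) / (k * k)   # grand mean of all entries
    diag = sum(S[i][i] for i in range(k)) / k      # mean of the diagonal
    row_means = [sum(row) / k for row in S]
    num = (k * (diag - grand)) ** 2
    den = (k - 1) * (
        sum(S[i][j] ** 2 for i in range(k) for j in range(k))
        - 2 * k * sum(m ** 2 for m in row_means)
        + k * k * grand ** 2
    )
    return num / den
```

When the covariance matrix is compound-symmetric (e.g., the identity), epsilon is exactly 1 and no correction is needed; heterogeneous variances pull it toward the lower bound 1/(k-1).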

Pairwise comparisons between the three measurement times were made with paired t-tests. With Bonferroni's correction for the three comparisons, the 5% significance level was adjusted to 0.015; the results are listed in Table 2 .
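The comparison procedure can be sketched as follows, using only Python's standard library. The data values are invented for illustration, and this is not the software actually used in the study.

```python
import math
from statistics import mean, stdev

def paired_t(before, after):
    """Paired t statistic for two dependent samples (df = n - 1)."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Bonferroni correction: with m = 3 pairwise comparisons among the three
# measurement times, each paired t-test is judged against alpha / m.
alpha, m = 0.05, 3
alpha_adj = alpha / m  # 0.05 / 3 ~= 0.0167

# Invented example: empathy scores of five students before and after.
pre = [20, 22, 19, 25, 21]
post = [24, 27, 23, 29, 26]
t_pre_post = paired_t(pre, post)
```

The resulting t statistic would be compared against the t distribution with n - 1 degrees of freedom, declaring significance only when the p-value falls below the adjusted threshold.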

The results showed that the average empathy score increased by 18.2 ± 0.97 after the intervention ( P < 0.001). At the third measurement it was 16.25 ± 0.95 higher than at the first ( P <0.001), and 1.94 ± 0.54 lower than at the second ( P =0.003); the trend is shown in Fig. 1 .

figure 1

The changing trend of the overall average empathy score in three time periods

The results showed that the overall average empathy score was higher at the third measurement than at the first. For the first dimension of empathy (EC=empathic concern), the results were as follows (Table 3 ).

Table 3 presents the scores, with 95% confidence intervals, for the first dimension of empathy. To compare this component across the three time points, repeated-measures analysis of variance was used; because Mauchly's test of sphericity was significant ( P <0.001), the Greenhouse-Geisser correction was applied, and a significant difference was observed across the three measurements of the first component ( P <0.001). Pairwise comparisons were made with paired t-tests at a 5% significance level, adjusted to 0.015 with Bonferroni's correction for the three measurement times (Table 4 ).

The results showed that the average score of the first dimension of empathy increased by 5.92 ± 0.4997 after the intervention ( P < 0.001). At the third measurement it was 5.07 ± 0.56 higher than at the first ( P <0.001), and 0.85 ± 0.28 lower than at the second ( P =0.013); the trend is shown in Fig. 2 .

figure 2

The changing trend of the average empathy score in the dimension of empathic concern in three time periods

The results showed that the average empathy score in the empathic concern dimension was higher at the third measurement than at the first. For the second dimension of empathy (PT=perspective-taking), the results were as follows (Table 5 ).

Table 5 presents the scores, with 95% confidence intervals, for the second dimension of empathy. To compare this component across the three time points, repeated-measures analysis of variance was used; because Mauchly's test of sphericity was significant ( P <0.001), the Greenhouse-Geisser correction was applied, and a significant difference was observed across the three measurements of the second component ( P <0.001). Pairwise comparisons were made with paired t-tests at a 5% significance level, adjusted to 0.015 with Bonferroni's correction for the three measurement times (Table 6 ).

The results showed that the average score of the second dimension of empathy increased by 6.37 ± 0.46 after the intervention ( P <0.001). At the third measurement it was, on average, 5.94 ± 0.48 higher than at the first ( P <0.001), while no significant difference was observed between the third and second measurements ( P =0.61); the trend is shown in Fig. 3 .

figure 3

The changing trend of the average empathy score in the dimension of perspective-taking in three time periods

The results showed that the average empathy score in the perspective-taking dimension was higher at the third measurement than at the first. For the third dimension of empathy (FS= Fantasy Scale), the results were as follows (Table 7 ).

Table 7 presents the scores, with 95% confidence intervals, for the third dimension of empathy. To compare this component across the three time points, repeated-measures analysis of variance was used; because Mauchly's test of sphericity was significant ( P <0.001), the Greenhouse-Geisser correction was applied, and a significant difference was observed across the three measurements of the third component ( P <0.001). Pairwise comparisons were made with paired t-tests at a 5% significance level, adjusted to 0.015 with Bonferroni's correction for the three measurement times (Table 8 ).

The results showed that the average score of the third dimension of empathy increased by 5.92 ± 0.43 after the intervention ( P <0.001). At the third measurement it was, on average, 5.23 ± 0.47 higher than at the first ( P < 0.001), while the difference between the third and second measurements was not significant at the adjusted level ( P =0.028); the trend is shown in Fig. 4 . Thus, the average empathy score in the Fantasy Scale dimension was higher at the third measurement than at the first.

figure 4

The changing trend of the average empathy score in the dimension of Fantasy Scale in three time periods

This study investigated the effectiveness of reading stories in the Persian literature course for increasing empathy in medical students. The results indicated that reading stories increased empathy in the emotional transfer (FS), cognitive, and emotional dimensions among students in the intervention group. In line with the current research, Pino and Mazza (2016) found that reading stories increases empathy among students [ 30 ], and Bal and Veltkamp (2013) and Djikic et al. (2013) likewise showed the effectiveness of reading stories on empathy [ 31 , 32 ].

The findings also indicated that the story-reading program increased empathy in the emotional transfer dimension among medical students in the intervention group, and that the effect remained stable at follow-up. Consistent with this, Mumper and Gerrig (2017) concluded that reading stories can improve empathy [ 33 ]. A Dutch university study (2013) found that reading a story carefully, with emotional transportation into it, can increase readers' empathy [ 31 ], and the research of Djikic et al. (2013) confirmed the same result [ 32 ].

The findings further showed that the story-reading program had a significant effect on empathy in the cognitive dimension among students in the intervention group. In line with this, Panero et al. (2016) concluded that reading stories can affect the cognitive dimension and improve empathy [ 34 ]. One explanation is that, because reading a story draws on human mental and cognitive abilities, it can develop cognitive skills [ 35 ] and increase people's awareness of others' cognitive and emotional states [ 36 ]. The research of Kidd et al. (2016) likewise showed that reading stories over the long term can increase our understanding of the cognitive and emotional states of others [ 37 ]. The findings also showed that reading stories had a significant impact on emotional empathy, in line with the results of Koopman and of Djikic et al. [ 32 , 38 ]. This may reflect the fact that empathy is a central concept in psychology and interpersonal communication: as a personal characteristic and an ability, it leads a person to show appropriate emotional reactions by perceiving the emotional reactions of others. To establish empathetic relationships, a person must be able to put themselves in the place of others. Empathetic people are kind and caring toward others; they worry about others who are hurt, and their sensitivity to the behavior of the people around them, and their effort to understand it, make them feel close and empathetic toward a person to whom something has happened [ 39 , 40 ].

The findings of Rafati et al. (2015) also showed that medical students' empathy with patients varies across the academic years, decreasing with increasing age and educational level, which is in line with the findings of the present study. These researchers further argue that, given the importance of empathy as a moral virtue, it is necessary to plan to strengthen empathy and include this concept in the medical curriculum to improve the mental and spiritual health of patients [ 8 ]. Khairabadi et al.'s research likewise points to weakness in empathy and the importance of training it [ 41 ]. Shariat and Kaykhavoni (2010), in a study measuring empathy among clinical residents at Iranian medical universities, reported a decline in this skill during training [ 42 ]. Although empathy training workshops have been held for treatment staff at some universities to address this shortcoming, planning in this area is still needed [ 43 ]. One important limitation of the present study is the involvement of uncontrollable confounding factors in medical students' empathy. Another limitation is the imbalance between female and male participants, which does not allow conclusions about gender differences. A further limitation is that, because participation was voluntary, only students interested in fiction may have taken part, and such students may be more receptive to empathy training and to the influence of literary texts.
Finally, given the relationship between reading fiction and general empathy, future research should compare self-reported general empathy with empathy shown in real encounters, and examine how empathy evolves over time.

The findings of this research showed that story reading significantly increased medical students' general empathy in the emotional transfer, cognitive, and emotional dimensions from the pre-intervention to the post-intervention phase. This implicitly confirms the positive effect of reading stories on empathy. These results point to the need to plan for including story reading, and instruction in how to read stories, in the medical curriculum as a low-cost way to increase empathy. A literature course already exists as a general course in the students' curriculum, but it lacks a specific educational framework. Given the importance of storytelling in narrative medicine and the role of stories in creating empathy, we suggest that story reading be given greater weight in this course, especially during the externship and internship periods.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Samuel CA, Mbah O, Schaal J, Eng E, Black KZ, Baker S, et al. The role of patient-physician relationship on health-related quality of life and pain in cancer patients. Supportive Care Cancer. 2020;28:2615–26.

Halpern J. Empathy and patient–physician conflicts. J Gen Intern Med. 2007;22(5):696–700.

Yune SJ, Kang SH, Park K. Medical students' perceptions of patient-doctor relationship in South Korea: concept mapping analysis. Front Public Health. 2021:1606.

Van Der Merwe J. Physician-patient communication using ancestral spirits to achieve holistic healing. Am J Obstet Gynecol. 1995;172(4):1080–7.

Rogers CR. Significant learning in therapy and in education. Educ Leader. 1959;16(4):232–42.

Khodabakhsh MR, Mansoori P. Empathy and its impact on promoting physician-patients relationship. Iran J Med Ethics History Med. 2011;4(3):38–46.

Kazemipoor M, Sattar BS, Hakimian R. Patient empathy and related factors in undergraduate and postgraduate dental students. 2018.

Shiva R, Nahid R, Ali D, Forotani F. Empathic perspective of medical students based on Jefferson Empathy Scale. J Med Sci Res Ethics. 2016;10(36):25–34.

Loftus S, Greenhalgh T. Towards a narrative mode of practice. Education for future practice: Brill; 2010. p. 85-94.

Linda NC, Clement M. The Application of Storytelling in Teaching and Learning: Implication on Pupil’s Performance and Enrolment in Schools. 2023.

Cersosimo G. Storytelling in medical education programs. Ital J Sociol Educ. 2019;11(3):212–25.

Liao H-C, Wang Y-H. Storytelling in medical education: narrative medicine as a resource for interdisciplinary collaboration. Int J Environ Res Public Health. 2020;17(4):1135.

Kamel-ElSayed S, Loftus S. Using and combining learning theories in medical education. Med Sci Educ. 2018;28:255–8.

Loftus S. The language of clinical reasoning. Clinical Reasoning in the Health Professions E-Book. 2018:129.

Jones AH. Why teach literature and medicine? Answers from three decades. J Med Humanit. 2013;34:415–28.

Shapiro J, Nixon LL, Wear SE, Doukas DJ. Medical professionalism: what the study of literature can contribute to the conversation. Philos Ethics Humanit Med. 2015;10:1–8.

Sklar DP. Health humanities and medical education: joined by a common purpose. Acad Med. 2017;92(12):1647–9.

Tamir DI, Bricker AB, Dodell-Feder D, Mitchell JP. Reading fiction and reading minds: the role of simulation in the default network. Soc Cogn Affect Neurosci. 2016;11(2):215–24.

Bonebakker V. Literature & medicine: Humanities at the heart of health care: a hospital-based reading and discussion program developed by the Maine Humanities Council. Acad Med. 2003;78(10):963–7.

Panahifar S, Nouriani JM. The Effectiveness of Narrative therapy on Behavioral Maladaptation and Psychological Health of Children with ADHD in Kerman. 2021.

Shahabizadeh F, Khageaminiyan F. The effectiveness of narrative therapy based on cognitive-behavioral perspective on symptoms depression and dysthymic disorders in children. J Psychol Achieve. 2019;26(1):39–58.

Azmi HJSA. General Persian: importance and pathology [in Persian]. In: 1st Conference on Teaching the Persian Language; 2016. p. 195–200.

Kiyani Barforoshi HR, Ghodsiye. Pathological criticism of the general Persian course [in Persian]. Critical Studies in Texts and Programs of Human Sciences. 2019;19(3):181–205.

Boulter A. Writing fiction: creative and critical approaches: Bloomsbury Publishing; 2007.

Damrosch D. How to read world literature: John Wiley & Sons; 2017.

Foster TC. How to Read Literature Like a Professor. Revised. New York: HarperCollins; 2014.

Rasley A. The Power of Point of View: Make Your Story Come to Life: Penguin. 2008.

Robert S. Elements of Fiction: An Anthology. Oxford University Press; 1981.

Truby J. The anatomy of story: 22 steps to becoming a master storyteller: Farrar, Straus and Giroux; 2008.

Pino MC, Mazza M. The use of “literary fiction” to promote mentalizing ability. PloS one. 2016;11(8):e0160254.

Bal PM, Veltkamp M. How does fiction reading influence empathy? An experimental investigation on the role of emotional transportation. PloS one. 2013;8(1):e55341.

Djikic M, Oatley K, Moldoveanu MC. Reading other minds: effects of literature on empathy. Sci Study Lit. 2013;3(1):28–47.

Mumper ML, Gerrig RJ. Leisure reading and social cognition: a meta-analysis. Psychol Aesthetics Creativity Arts. 2017;11(1):109.

Panero ME, Weisberg DS, Black J, Goldstein TR, Barnes JL, Brownell H, et al. Does reading a single passage of literary fiction really improve theory of mind? An attempt at replication. J Personal Soc Psychol. 2016;111(5):e46.

Dodell-Feder D, Lincoln SH, Coulson JP, Hooker CI. Using fiction to assess mental state understanding: a new task for assessing theory of mind in adults. PloS one. 2013;8(11):e81279.

Beaudoin C, Leblanc É, Gagner C, Beauchamp MH. Systematic review and inventory of theory of mind measures for young children. Front Psychol. 2020;10:2905.

Kidd D, Ongis M, Castano E. On literary fiction and its effects on theory of mind. Sci Study Lit. 2016;6(1):42–58.

Koopman EME. Effects of “literariness” on emotions and on empathy and reflection after reading. Psychol Aesthetics Creativity Arts. 2016;10(1):82.

Engbretson AM, Poehlmann-Tynan JA, Zahn-Waxler CJ, Vigna AJ, Gerstein ED, Raison CL. Effects of cognitively-based compassion training on parenting interactions and children’s empathy. Mindfulness. 2020;11:2841–52.

Stansfield J, Bunce L. The relationship between empathy and reading fiction: separate roles for cognitive and affective components. J Eur Psychol Students. 2014;5(3).

Kheirabadi G H-RM, Mahki B, Masaiely N, Yahaei M, Golshani L, Kheirabadi D. Empathy with patients in medical sciences faculty physicians at Isfahan University of Medical Sciences, Iran. J Res Behav Sci. 2016;14(2):154–60.

Shariat SV, Kaykhavoni A. Empathy in medical residents at Iran University of Medical Sciences. Iran J Psychiatry Clin Psychol. 2010;16(3):248–56.

Managheb E, Bagheri S. The impact of empathy training workshops on empathic practice of family physicians of Jahrom University of Medical Sciences. Iran J Med Educ. 2013;13(2):114–22.

Acknowledgements

The authors would like to express their gratitude to all those who helped with this research and participated in the intervention.

Funding

This research was conducted with the financial support of the National Center for Strategic Research in Medical Education, Tehran, Iran (project number 1400130).

Author information

Authors and affiliations

Department of General Education, School of Paramedical Sciences, Mazandaran University of Medical Sciences, Sari, Iran

Masoumeh Mahmoudi

School of Paramedical Sciences, Mazandaran University of Medical Sciences, Sari, Iran

Ali Asghar Ghorbani

Psychiatry and Behavioral Sciences Research Center, Addiction Institute, Mazandaran University of Medical Sciences, Sari, Iran

Mehdi Pourasghar

Department of Health Information Technology, School of Paramedical Sciences, Mazandaran University of Medical Sciences, Sari, Iran

Azita Balaghafari

Department of Biostatistics and Epidemiology, School of Health, Health Sciences Research Center, Mazandaran University of Medical Sciences, Sari, Iran

Jamshid Yazdani Charati

Mazandaran University of Medical Sciences, Sari, Iran

Nassim Ghahrani

Department of Biostatistics and Epidemiology, Student Research Committee, School of Health, Mazandaran University of Medical Sciences, Sari, Iran

Farzaneh Amini

Contributions

MM, MP: developed the stories. AB, NGH, and AAGH: study design, intervention implementation, and interpretation of the results. JY and FA: data analysis and interpretation of the results. All authors approved the final version for submission. We confirm that the journal's instructions for authors have been read carefully and complied with throughout the manuscript.

Corresponding author

Correspondence to Ali Asghar Ghorbani .

Ethics declarations

Ethics approval and consent to participate

The intervention was conducted after coordination with the relevant authorities and with the consent and approval of the participants; confidentiality was observed, and written informed consent was obtained from all participants.

All authors confirm that all experiments and methods were performed in accordance with relevant guidelines and regulations.

The authors confirm that the experimental protocols were approved by the Ethics Committee of the National Center for Strategic Research in Medical Education, Tehran, Iran (Ethical Code: 1400130).

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: Appendix 1.

The Stories.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Mahmoudi, M., Ghorbani, A.A., Pourasghar, M. et al. Designing, implementation and evaluation of story reading: a solution to increase general empathy in medical students. BMC Med Educ 24 , 477 (2024). https://doi.org/10.1186/s12909-024-05384-4

Received : 07 September 2023

Accepted : 03 April 2024

Published : 30 April 2024

DOI : https://doi.org/10.1186/s12909-024-05384-4


Keywords

  • Story reading
  • General empathy
  • Medical students
  • Interventional education

BMC Medical Education

ISSN: 1472-6920

evaluate the research methods used by cognitive psychologists

IMAGES

  1. An Introduction to the Types Of Psychological Research Methods

    evaluate the research methods used by cognitive psychologists

  2. Evaluative Research: Definition, Methods & Types

    evaluate the research methods used by cognitive psychologists

  3. Describe the Different Research Methods Used by Psychologists

    evaluate the research methods used by cognitive psychologists

  4. A Psychologist Guide: An Introduction to Cognitive Psychology

    evaluate the research methods used by cognitive psychologists

  5. Research Methods in Cognitive Psychology.

    evaluate the research methods used by cognitive psychologists

  6. COGNITIVE PSYCHOLOGY

    evaluate the research methods used by cognitive psychologists

VIDEO

  1. Research methods in cognitive psychology

  2. Cognitive Psychology Research Methods Experiments & Case Studies

  3. Psychology Research

  4. Cognitive analytic therapy (CAT)

  5. Specialized Training for Psychologists

  6. Specialized Training for Psychologists

COMMENTS

  1. Cognitive Approach In Psychology

    The cognitive approach began to revolutionize psychology in the late 1950s and early 1960s to become the dominant approach (i.e., perspective) in psychology by the late 1970s. Interest in mental processes was gradually restored through the work of Jean Piaget and Edward Tolman. Tolman was a 'soft behaviorist'.

  2. The Use of Research Methods in Psychological Research: A Systematised

    Introduction. Psychology is an ever-growing and popular field (Gough and Lyons, 2016; Clay, 2017).Due to this growth and the need for science-based research to base health decisions on (Perestelo-Pérez, 2013), the use of research methods in the broad field of psychology is an essential point of investigation (Stangor, 2011; Aanstoos, 2014).Research methods are therefore viewed as important ...

  3. EVALUATING RESEARCH METHODS IN PSYCHOLOGY

    1. Psychology - Research - Methodology. 2. Psychology - Research - Case studies. I. Title. BF76.5.D85 2005 150'.72-dc22 2004029159 A catalogue record for this title is available from the British Library. Set in 10.5/12.5pt Photina by Graphicraft Ltd., Hong Kong Printed and bound in the United Kingdom by TJ International, Padstow, Cornwall

  4. PDF APA Handbook of Research Methods in Psychology

    Research Methods in Psychology AP A Han dbook s in Psychology VOLUME Research Designs: Quantitative, Qualitative, Neuropsychological, and Biological SECOND EDITION Harris Cooper, Editor-in-Chief Marc N. Coutanche, Linda M. McMullen, A. T. Panter, sychological Association. Not for further distribution.

  5. Topics, Methods, and Research-Based Strategies for Teaching ...

    Abstract. In this chapter, we review the basic contents and structure of our courses in cognition and cognitive psychology as well as pedagogical approaches to teaching. Topics range from an historical overview of the areas of science that lead up to the formation of cognitive science to detailed discussions of published articles within each of ...

  6. Start

    Psychology: 85-310: Research Methods in Cognitive Psychology: Start. This guide will help you find literature in databases, with a focus on PsycINFO, and manage citations in Zotero. Start; Finding Research Articles; ... Evaluating and offering advice on the display of visual content, such as presentations, poster designs, and web design.

  7. Research Methods in Cognition

    Three Metatheoretical Assumptions Shared by Cognitive Researchers. General Strategies for Studying Unobservable Knowledge and Cognitive Processes. Research Methods Used to Study Cognition. Research Methods in Cognitive Neuroscience. Concluding Comments: A Useful Conceptual Framework and a Cautionary Note

  10. Cognitive Approach

    The cognitive approach uses experimental research methods to study internal mental processes such as attention, perception, memory, and decision-making. Cognitive psychologists assume that the mind actively processes information from our senses (touch, taste, etc.) and that between stimulus and response lies a series of complex mental processes, which can be studied scientifically.

  11. Describe the Cognitive Approach. Evaluate the research methods used by

    The study of cognitive neuroscience, and the psychology behind it, relies on making inferences (educated guesses). Because we cannot study mental processes directly, an inference based on scientific data is made. To aid this, scientists like to create theoretical models (visual representations of mental processes).

  12. Cognitive Psychology: History, Theories, Research Methods

    The methods used by cognitive psychologists have been developed to experimentally tease apart mental operations. At the outset, it should be noted that cognitive psychologists rely most heavily on the experimental method, in which independent variables are manipulated and dependent variables are measured to provide insights into the cognitive architecture.
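The snippet above describes the core of the experimental method: manipulate an independent variable, measure a dependent variable, and test whether the conditions differ. A minimal sketch of that logic, using only the Python standard library, is a pooled-variance two-sample t statistic. The variable names and recall scores below are invented purely for illustration, not taken from any study cited here.

```python
import math
import statistics

def independent_t(group_a, group_b):
    """Pooled-variance two-sample t statistic: the difference in mean DV
    between two levels of a manipulated IV, scaled by its standard error."""
    na, nb = len(group_a), len(group_b)
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    # Pool the two sample variances, weighting by degrees of freedom.
    pooled = ((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)
    se = math.sqrt(pooled * (1 / na + 1 / nb))
    return (mean_a - mean_b) / se

# Hypothetical recall scores (the DV) under two study conditions (the IV):
control = [12, 9, 11, 10, 8]
imagery = [14, 13, 15, 11, 16]
t = independent_t(imagery, control)
```

The t value would then be compared against a t distribution with `na + nb - 2` degrees of freedom; in practice a library routine such as SciPy's `ttest_ind` handles both steps.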

  13. PDF Approaches to Psychology Cognitive Psychology The cognitive approach

    Cognitive psychologists follow the example of the behaviourists in preferring objective, controlled, scientific methods for investigating behaviour. They use the results of their investigations as the basis for making inferences about mental processes. One strand of cognitive research involves conducting case studies of people with brain damage.

  14. Cognitive Psychology: The Science of How We Think

    Cognitive psychology involves the study of internal mental processes: all of the workings inside your brain, including perception, thinking, memory, attention, language, problem-solving, and learning. Cognitive psychology, the study of how people think and process information, helps researchers understand the human brain.

  15. The Cognitive Approach

    The Cognitive Approach. The idea that humans conduct mental processes on incoming information - i.e. human cognition - came to the fore of psychological thought during the mid-twentieth century, displacing the stimulus-response focus of the behaviourist approach. A dominant cognitive approach evolved, advocating that sensory information is ...

  16. Cognitive psychology and self-reports: models and methods

    Abstract. This article describes the models and methods that cognitive psychologists and survey researchers use to evaluate and experimentally test cognitive issues in questionnaire design and subsequently improve self-report instruments. These models and methods assess the cognitive processes underlying how respondents comprehend and generate ...

  17. Ch 2: Psychological Research Methods

    Psychologists focus their attention on understanding behavior, as well as the cognitive (mental) and physiological (body) processes that underlie behavior. In contrast to other methods that people use to understand the behavior of others, such as intuition and personal experience, the hallmark of scientific research is that there is evidence to ...

  18. Cognitive Approach Flashcards

    Explain two limitations of the cognitive approach. (8 marks) Describe the cognitive approach in psychology. Evaluate the research methods used by cognitive psychologists. (16 marks) Give two assumptions of the cognitive approach. For each assumption, illustrate your answer with reference to a topic in psychology. Use a different topic for each ...

  20. 1. Outline and evaluate the cognitive approach in psychology ...

    1. Outline and evaluate the cognitive approach in psychology (16 marks). - The cognitive approach believes behaviour is explained through internal mental processes, e.g. thought. - Behaviourists are not interested in what happens between stimulus and response; they only study observable behaviours. - Cognitive psychologists, in contrast, extend this idea and say ...

  21. Evaluating Research Methods in Psychology

    "An excellent supplement to courses in experimental research methods, critical thinking, problem solving, and cognitive psychology. Instructors can easily select course-appropriate cases to increase the depth of student's knowledge and understanding of material." Dr Kirsten Rewey, psychology research methods instructor, Minnesota

  22. Describe the cognitive approach in psychology. Evaluate the research

    Evaluate the research methods used by cognitive psychologists. The cognitive approach is concerned with mental processes, to help us understand why people behave in certain ways. The approach uses scientific and experimental methods to measure mental processes, which results in a rejection of unscientific methods such as introspection.

  23. Designing, implementation and evaluation of story reading: a solution

    To compare overall empathy, a repeated-measures analysis of variance was used. Because Mauchly's test of sphericity was significant (P < 0.001), the Greenhouse-Geisser correction was applied, and a significant difference in overall empathy was observed across the three measurement periods (P < 0.001). For comparisons across the three empathy measurements, the results are listed in the following ...
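The snippet above applies the Greenhouse-Geisser correction when Mauchly's test shows sphericity is violated. The correction hinges on an epsilon estimate computed from the covariance matrix of the repeated measures: epsilon equals 1 when sphericity holds exactly and falls toward 1/(k-1) as the violation worsens. A minimal NumPy sketch of the epsilon-hat estimator (not the full corrected ANOVA, which a package such as pingouin or statsmodels would provide):

```python
import numpy as np

def greenhouse_geisser_epsilon(cov):
    """Greenhouse-Geisser epsilon-hat from the k x k covariance matrix of
    k repeated measures: (tr S)^2 / ((k-1) * sum(S**2)), where S is the
    double-centered covariance matrix. Equals 1 under perfect sphericity;
    bounded below by 1/(k-1)."""
    cov = np.asarray(cov, dtype=float)
    k = cov.shape[0]
    H = np.eye(k) - np.ones((k, k)) / k   # centering matrix
    S = H @ cov @ H                       # double-centered covariance
    return np.trace(S) ** 2 / ((k - 1) * np.sum(S ** 2))
```

The corrected test then multiplies both ANOVA degrees of freedom by epsilon-hat before looking up the F statistic's p-value, which is what "Greenhouse-Geisser was used" means in the passage above.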