AP Psychology: Research Methods Notes

Key Takeaways: Research Methods

  • The study of psychology relies on a diverse array of qualitative and quantitative research methods, including observations, case studies, surveys, and controlled experiments.
  • Psychological research is carefully designed so that researchers can be confident about using results to draw conclusions about real-life phenomena. This is done by controlling variables, creating representative samples, controlling for internal and external validity, and operationalizing definitions and measurements.
  • Researchers use statistics to analyze and make sense of the data gathered in a research study. This involves the use of descriptive statistics like measures of central tendency and dispersion, as well as inferential statistics for making generalizations based on the data (a brief illustration of descriptive statistics follows this list).
  • Because psychological study often involves the participation of human subjects, researchers must abide by established ethical principles and practices as well as legal guidelines while conducting research.
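As a rough illustration of the descriptive statistics mentioned above, the sketch below uses Python's built-in statistics module on an invented set of reaction times; the data and variable names are hypothetical, not taken from these notes.

```python
import statistics

# Hypothetical reaction-time data (milliseconds) from ten participants
reaction_times = [512, 498, 530, 605, 498, 470, 523, 551, 498, 540]

# Measures of central tendency
mean_rt = statistics.mean(reaction_times)      # arithmetic average
median_rt = statistics.median(reaction_times)  # middle value
mode_rt = statistics.mode(reaction_times)      # most frequent value

# Measures of dispersion
range_rt = max(reaction_times) - min(reaction_times)
stdev_rt = statistics.stdev(reaction_times)    # sample standard deviation

print(f"mean={mean_rt:.1f}, median={median_rt}, mode={mode_rt}")
print(f"range={range_rt}, standard deviation={stdev_rt:.1f}")
```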

Research Methods Key Terms

Types of psychological research.

  • Quantitative research: Research that uses operational measurements and statistical techniques to reach conclusions on the basis of numerical data, such as correlational studies and experiments.
  • Qualitative research: Research that does not rely on numerical representations of data, such as naturalistic observations, unstructured interviews, and case studies.
  • Correlation coefficient: A number (symbolized by r) between −1 and +1, which represents the strength and direction of the correlation between two variables. The closer the coefficient is to −1 or +1, the stronger the correlation between the variables.
  • Positive correlation: An r value above 0, which indicates that two variables have a direct relationship: when one variable increases, the other also increases.
  • Negative correlation: An r value below 0, which indicates that two variables have an inverse relationship: when one variable increases, the other decreases.
  • Naturalistic observation: A research method, typically qualitative in nature and usually covert and undisclosed, that attempts to document behavior as it spontaneously occurs in a real-world setting.
  • Structured observation: A type of observational research typically conducted in a laboratory setting, where the researcher can control some aspects of the environment.
  • Coding: The classification of behaviors into discrete categories, used especially in structured observations to achieve a level of consistency in recording and describing observations.
  • Inter-rater reliability: A statistical measure of the degree of agreement between different codings of the same phenomena.
  • Participant observation: A mostly qualitative research method in which the researcher becomes a member of a studied group, either overtly or covertly.
  • Hawthorne effect: A phenomenon in which research subjects tend to alter their behavior in response to knowledge of being observed.
  • Longitudinal study: A research design that examines how individuals develop by studying the same sample over a long period of time.
  • Cross-sectional study: A research design conducted at a single point in time, comparing groups of differing ages to arrive at conclusions about development.
  • Case study: A research design involving an in-depth and detailed examination of a single subject, or case, usually an individual or a small group.
  • Survey: A mostly quantitative research method involving a list of questions filled out by a group of people to assess attitudes or opinions.
  • Nonresponse bias: A distortion of data that can occur in surveys with a low response rate.
  • Surveyor bias: A distortion of data that can occur when survey questions are written in a way that prompts respondents to answer a certain way.
  • Experiments: Deliberately designed procedures used to test research hypotheses.
  • Hypothesis: A proposed, testable explanation for a phenomenon, often constructed in the form of a statement about the relationship between two or more variables.
  • Controlled experiment: A research design for testing a causal hypothesis, in which all aspects of the study are deliberately controlled and only independent variables are manipulated to isolate their effects on dependent variables.
  • Field experiment: An experiment conducted out in the real world, with fewer controls than would be found in a lab.


Research Methods In Psychology

Saul McLeod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul McLeod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.

Research methods in psychology are systematic procedures used to observe, describe, predict, and explain behavior and mental processes. They include experiments, surveys, case studies, and naturalistic observations, ensuring data collection is objective and reliable to understand and explain psychological phenomena.


Hypotheses are statements that predict the results of a study and can be verified or disproved by investigation.

There are four types of hypotheses:
  • Null hypothesis (H0) – predicts that no difference will be found in the results between the conditions. Typically written ‘There will be no difference…’
  • Alternative hypothesis (Ha or H1) – predicts that there will be a significant difference in the results between the two conditions. Also known as the experimental hypothesis.
  • One-tailed (directional) hypotheses – state the specific direction the researcher expects the results to move in, e.g. higher, lower, more, less. In a correlational study, the predicted direction of the correlation can be either positive or negative.
  • Two-tailed (non-directional) hypotheses – state that a difference will be found between the conditions of the independent variable but do not state the direction of the difference or relationship. Typically written ‘There will be a difference…’ (a short sketch contrasting one- and two-tailed tests follows this list).
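To make the one-tailed/two-tailed distinction concrete, here is a minimal, hypothetical sketch in Python; the scores are invented and SciPy is assumed to be installed (it is not part of the original notes).

```python
from scipy import stats

# Hypothetical test scores from two conditions of an experiment
condition_a = [14, 17, 15, 18, 16, 19, 15, 17]  # e.g. studied with flashcards
condition_b = [12, 14, 13, 15, 12, 14, 13, 15]  # e.g. re-read notes only

# Two-tailed (non-directional): "There will be a difference..."
t_two, p_two = stats.ttest_ind(condition_a, condition_b)

# One-tailed (directional): "Condition A will score HIGHER than condition B"
# (the `alternative` argument requires SciPy >= 1.6)
t_one, p_one = stats.ttest_ind(condition_a, condition_b, alternative="greater")

print(f"two-tailed p = {p_two:.4f}")
print(f"one-tailed p = {p_one:.4f}")  # half the two-tailed p when the effect is in the predicted direction
```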

All research has an alternative hypothesis (either one-tailed or two-tailed) and a corresponding null hypothesis.

Once the research is conducted and results are found, psychologists must accept one hypothesis and reject the other. 

So, if a difference is found, the psychologist would accept the alternative hypothesis and reject the null. The opposite applies if no difference is found.

Sampling techniques

Sampling is the process of selecting a representative group from the population under study.


A sample is the participants you select from a target population (the group you are interested in) to make generalizations about.

Representativeness means the extent to which a sample mirrors the researcher’s target population and reflects its characteristics.

Generalisability means the extent to which findings from a sample can be applied to the larger population from which the sample was drawn.

  • Volunteer sample: where participants pick themselves through newspaper adverts, noticeboards or online.
  • Opportunity sampling: also known as convenience sampling, uses people who are available at the time the study is carried out and willing to take part. It is based on convenience.
  • Random sampling: when every person in the target population has an equal chance of being selected. An example of random sampling would be picking names out of a hat.
  • Systematic sampling: when a system is used to select participants, such as picking every Nth person from all possible participants, where N = the number of people in the research population / the number of people needed for the sample (see the sketch after this list).
  • Stratified sampling: when you identify the subgroups and select participants in proportion to their occurrence in the population.
  • Snowball sampling: when researchers find a few participants, and then ask them to find further participants, and so on.
  • Quota sampling: when researchers are told to ensure the sample fits certain quotas, for example they might be told to find 90 participants, with 30 of them being unemployed.
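The sketch below illustrates how random, systematic, and stratified sampling could be simulated in Python; the population, subgroup labels, and sample size are invented for illustration only.

```python
import random

population = [f"participant_{i:03d}" for i in range(1, 201)]  # hypothetical target population of 200
sample_size = 20

# Random sampling: every member has an equal chance of selection
random_sample = random.sample(population, sample_size)

# Systematic sampling: pick every Nth person, where
# N = size of the research population / number of people needed for the sample
n = len(population) // sample_size  # here N = 10
systematic_sample = population[::n][:sample_size]

# Stratified sampling: sample each subgroup in proportion to its occurrence
strata = {"employed": population[:150], "unemployed": population[150:]}  # invented subgroups
stratified_sample = []
for group in strata.values():
    k = round(sample_size * len(group) / len(population))
    stratified_sample.extend(random.sample(group, k))

print(len(random_sample), len(systematic_sample), len(stratified_sample))
```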

Experiments always have an independent and a dependent variable.

  • The independent variable is the one the experimenter manipulates (the thing that changes between the conditions the participants are placed into). It is assumed to have a direct effect on the dependent variable.
  • The dependent variable is the thing being measured, or the results of the experiment.


Operationalization of variables means making them measurable/quantifiable. We must use operationalization to ensure that variables are in a form that can be easily tested.

For instance, we can’t really measure ‘happiness’, but we can measure how many times a person smiles within a two-hour period. 

By operationalizing variables, we make it easy for someone else to replicate our research. Remember, this is important because we can check if our findings are reliable.

Extraneous variables are all variables other than the independent variable that could affect the results of the experiment.

These can be natural characteristics of the participant, such as intelligence, gender, or age, or situational features of the environment, such as lighting or noise.

Demand characteristics are a type of extraneous variable that occurs when participants work out the aims of the research study and begin to behave in the way they think is expected.

For example, in Milgram’s research, critics argued that participants worked out that the shocks were not real and administered them because they thought this was what was required of them.

Extraneous variables must be controlled so that they do not affect (confound) the results.

Randomly allocating participants to their conditions or using a matched pairs experimental design can help to reduce participant variables. 

Situational variables are controlled by using standardized procedures, ensuring every participant in a given condition is treated in the same way.

Experimental Design

Experimental design refers to how participants are allocated to each condition of the independent variable, such as a control or experimental group.
  • Independent design (between-groups design): each participant is selected for only one group. With the independent design, the most common way of deciding which participants go into which group is by means of randomization.
  • Matched participants design: each participant is selected for only one group, but the participants in the two groups are matched for some relevant factor or factors (e.g. ability, sex, age).
  • Repeated measures design (within-groups design): each participant appears in both groups, so that exactly the same participants take part in each group.
  • The main problem with the repeated measures design is that there may well be order effects: participants’ experiences during the experiment may change them in various ways.
  • They may perform better when they appear in the second group because they have gained useful information about the experiment or about the task. On the other hand, they may perform less well on the second occasion because of tiredness or boredom.
  • Counterbalancing is the best way of preventing order effects from disrupting the findings of an experiment; it involves ensuring that each condition is equally likely to be used first and second by the participants (a brief sketch follows this list).
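Here is a minimal sketch of counterbalancing condition order across participants; the participant IDs and condition labels are hypothetical.

```python
# Counterbalancing: alternate the order of conditions across participants so that
# each condition is equally often experienced first and second.
participants = ["P01", "P02", "P03", "P04", "P05", "P06"]
orders = [("A", "B"), ("B", "A")]  # the two possible condition orders

schedule = {p: orders[i % 2] for i, p in enumerate(participants)}

for participant, order in schedule.items():
    print(participant, "->", " then ".join(order))
# Half the participants do A then B, the other half B then A, so any order
# effects (practice, fatigue, boredom) are spread evenly across conditions.
```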

If we wish to compare two groups with respect to a given independent variable, it is essential to make sure that the two groups do not differ in any other important way. 

Experimental Methods

All experimental methods involve an IV (independent variable) and a DV (dependent variable).

The researcher decides where the experiment will take place, at what time, with which participants, and in what circumstances, using a standardized procedure.

  • Field experiments are conducted in the everyday (natural) environment of the participants. The experimenter still manipulates the IV, but in a real-life setting. It may be possible to control extraneous variables, though such control is more difficult than in a lab experiment.
  • Natural experiments investigate a naturally occurring IV that isn’t deliberately manipulated; it exists anyway. Participants are not randomly allocated, and the natural event may only occur rarely.

Case studies are in-depth investigations of a person, group, event, or community. They use information from a range of sources, such as the person concerned as well as their family and friends.

Many techniques may be used such as interviews, psychological tests, observations and experiments. Case studies are generally longitudinal: in other words, they follow the individual or group over an extended period of time. 

Case studies are widely used in psychology and among the best-known ones carried out were by Sigmund Freud . He conducted very detailed investigations into the private lives of his patients in an attempt to both understand and help them overcome their illnesses.

Case studies provide rich qualitative data and have high levels of ecological validity. However, it is difficult to generalize from individual cases as each one has unique characteristics.

Correlational Studies

Correlation means association; it is a measure of the extent to which two variables are related. One of the variables can be regarded as the predictor variable with the other one as the outcome variable.

Correlational studies typically involve obtaining two different measures from a group of participants, and then assessing the degree of association between the measures. 

The predictor variable can be seen as occurring before the outcome variable in some sense. It is called the predictor variable, because it forms the basis for predicting the value of the outcome variable.

Relationships between variables can be displayed on a graph or as a numerical score called a correlation coefficient.

Figure: scatter plots showing positive, negative, and no correlation.

  • If an increase in one variable tends to be associated with an increase in the other, then this is known as a positive correlation .
  • If an increase in one variable tends to be associated with a decrease in the other, then this is known as a negative correlation .
  • A zero correlation occurs when there is no relationship between variables.

After looking at the scattergraph, if we want to be sure that a significant relationship does exist between the two variables, a statistical test of correlation can be conducted, such as Spearman’s rho.

The test will give us a score, called a correlation coefficient. This is a value between −1 and +1; the closer it is to −1 or +1, the stronger the relationship between the variables. The value can be positive (e.g. +0.63) or negative (e.g. −0.63).

Figure: examples of strong, weak, and perfect positive and negative correlations, and of no correlation.

A correlation between variables, however, does not automatically mean that the change in one variable is the cause of the change in the values of the other variable. A correlation only shows if there is a relationship between variables.

Correlation does not always prove causation, as a third variable may be involved. 
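As an illustration of how a correlation coefficient is actually computed, the sketch below uses invented paired data and assumes SciPy is available; it is not part of the original article.

```python
from scipy import stats

# Hypothetical paired measurements from ten participants
hours_of_sleep = [5, 6, 6, 7, 7, 8, 8, 9, 9, 10]   # predictor variable
mood_rating    = [3, 4, 5, 5, 6, 6, 7, 8, 8, 9]    # outcome variable

r, p_pearson = stats.pearsonr(hours_of_sleep, mood_rating)      # Pearson's r
rho, p_spearman = stats.spearmanr(hours_of_sleep, mood_rating)  # Spearman's rho

print(f"Pearson r = {r:.2f} (p = {p_pearson:.4f})")
print(f"Spearman rho = {rho:.2f} (p = {p_spearman:.4f})")
# Values close to +1 or -1 indicate a strong relationship, but even a strong
# correlation does not by itself establish causation.
```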


Interview Methods

Interviews are commonly divided into two types: structured and unstructured.

Structured interview: a fixed, predetermined set of questions is put to every participant in the same order and in the same way.

Responses are recorded on a questionnaire, and the researcher presets the order and wording of questions, and sometimes the range of alternative answers.

The interviewer stays within their role and maintains social distance from the interviewee.

Unstructured interview: there are no set questions, and the participant can raise whatever topics he/she feels are relevant and discuss them in their own way. Follow-up questions are posed in response to the participant’s answers.

Unstructured interviews are most useful in qualitative research to analyze attitudes and values.

Though they rarely provide a valid basis for generalization, their main advantage is that they enable the researcher to probe social actors’ subjective point of view. 

Questionnaire Method

Questionnaires can be thought of as a kind of written interview. They can be carried out face to face, by telephone, or post.

The choice of questions is important because of the need to avoid bias or ambiguity in the questions, ‘leading’ the respondent or causing offense.

  • Open questions are designed to encourage a full, meaningful answer using the subject’s own knowledge and feelings. They provide insights into feelings, opinions, and understanding. Example: “How do you feel about that situation?”
  • Closed questions can be answered with a simple “yes” or “no” or specific information, limiting the depth of response. They are useful for gathering specific facts or confirming details. Example: “Do you feel anxious in crowds?”

The questionnaire’s other practical advantages are that it is cheaper than face-to-face interviews and can be used to contact many respondents scattered over a wide area relatively quickly.

Observations

There are different types of observation methods :
  • Covert observation is where the researcher doesn’t tell the participants they are being observed until after the study is complete. This method can raise ethical problems around deception and informed consent.
  • Overt observation is where a researcher tells the participants they are being observed and what they are being observed for.
  • Controlled : behavior is observed under controlled laboratory conditions (e.g., Bandura’s Bobo doll study).
  • Natural : Here, spontaneous behavior is recorded in a natural setting.
  • Participant : Here, the observer has direct contact with the group of people they are observing. The researcher becomes a member of the group they are researching.  
  • Non-participant (aka “fly on the wall”): The researcher does not have direct contact with the people being observed. The observation of participants’ behavior is from a distance.

Pilot Study

A pilot study is a small-scale preliminary study conducted in order to evaluate the feasibility of the key steps in a future, full-scale project.

A pilot study is an initial run-through of the procedures to be used in an investigation; it involves selecting a few people and trying out the study on them. It is possible to save time, and in some cases, money, by identifying any flaws in the procedures designed by the researcher.

A pilot study can help the researcher spot any ambiguities or confusion in the information given to participants, or problems with the task devised.

Sometimes the task is too hard, and the researcher may get a floor effect, because none of the participants can score at all or can complete the task – all performances are low.

The opposite effect is a ceiling effect, when the task is so easy that all achieve virtually full marks or top performances and are “hitting the ceiling”.

Research Design

In cross-sectional research, a researcher compares multiple segments of the population at the same time.

Sometimes, we want to see how people change over time, as in studies of human development and lifespan. Longitudinal research is a research design in which data-gathering is administered repeatedly over an extended period of time.

In cohort studies, the participants must share a common factor or characteristic such as age, demographic, or occupation. A cohort study is a type of longitudinal study in which researchers monitor and observe a chosen population over an extended period.

Triangulation means using more than one research method to improve the study’s validity.

Reliability

Reliability is a measure of consistency: if a particular measurement is repeated and the same result is obtained, then it is described as being reliable.

  • Test-retest reliability: assessing the same person on two different occasions, which shows the extent to which the test produces the same answers.
  • Inter-observer reliability: the extent to which there is agreement between two or more observers (a brief sketch follows this list).
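A minimal sketch of how inter-observer agreement might be quantified, using invented codings; percentage agreement is computed directly, and Cohen's kappa (which corrects for chance agreement) is computed by hand.

```python
# Two observers independently coded the same 10 observation intervals
observer_1 = ["play", "play", "rest", "play", "rest", "rest", "play", "play", "rest", "play"]
observer_2 = ["play", "rest", "rest", "play", "rest", "rest", "play", "play", "play", "play"]

# Simple percentage agreement
agreements = sum(a == b for a, b in zip(observer_1, observer_2))
percent_agreement = agreements / len(observer_1)
print(f"Percentage agreement: {percent_agreement:.0%}")

# Cohen's kappa corrects the observed agreement for agreement expected by chance
categories = set(observer_1) | set(observer_2)
p_chance = sum(
    (observer_1.count(c) / len(observer_1)) * (observer_2.count(c) / len(observer_2))
    for c in categories
)
kappa = (percent_agreement - p_chance) / (1 - p_chance)
print(f"Cohen's kappa: {kappa:.2f}")
```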

Meta-Analysis

Meta-analysis is a statistical procedure used to combine and synthesize findings from multiple independent studies to estimate the average effect size for a particular research question.

Meta-analysis goes beyond traditional narrative reviews by using statistical methods to integrate the results of several studies, leading to a more objective appraisal of the evidence.

Relevant studies are identified by searching various databases, and decisions are then made about which studies to include or exclude.

  • Strengths: increases the validity of the conclusions, as they are based on a wider range of studies and participants.
  • Weaknesses: the research designs of the included studies can vary, so they are not truly comparable.
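To illustrate the core calculation behind a meta-analysis, the sketch below combines invented effect sizes using a simple fixed-effect (inverse-variance) weighting; real meta-analyses involve many further steps.

```python
# Hypothetical effect sizes (Cohen's d) and their variances from five independent studies
studies = [
    {"d": 0.42, "variance": 0.030},
    {"d": 0.25, "variance": 0.050},
    {"d": 0.60, "variance": 0.020},
    {"d": 0.10, "variance": 0.080},
    {"d": 0.35, "variance": 0.040},
]

# Fixed-effect model: weight each study by the inverse of its variance, so that
# more precise (usually larger) studies count for more in the pooled estimate.
weights = [1 / s["variance"] for s in studies]
pooled_d = sum(w * s["d"] for w, s in zip(weights, studies)) / sum(weights)

print(f"Pooled effect size d = {pooled_d:.2f}")
```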

Peer Review

A researcher submits an article to a journal. The choice of the journal may be determined by the journal’s audience or prestige.

The journal selects two or more appropriate experts (psychologists working in a similar field) to peer review the article without payment. The peer reviewers assess the methods and design used, the originality and validity of the findings, and the article’s content, structure, and language.

Feedback from the reviewers determines whether the article is accepted. The article may be accepted as it is, accepted with revisions, sent back to the author to revise and resubmit, or rejected without the possibility of resubmission.

The editor makes the final decision on whether to accept or reject the research report, based on the reviewers’ comments and recommendations.

Peer review is important because it helps prevent faulty data from entering the public domain, it provides a way of checking the validity of findings and the quality of the methodology, and it is used to assess the research rating of university departments.

Peer review may be an ideal; in practice there are several problems. For example, it slows publication down and may prevent unusual, new work from being published. Some reviewers might use it as an opportunity to prevent competing researchers from publishing work.

Some people doubt whether peer review can really prevent the publication of fraudulent research.

The advent of the internet means that more research and academic comment is being published without official peer review than before, though systems are evolving online that give everyone a chance to offer their opinions and police the quality of research.

Types of Data

  • Quantitative data is numerical data, e.g. reaction time or number of mistakes. It represents how much, how long, or how many of something there is. Tallies of behavioral categories and closed questions in a questionnaire collect quantitative data.
  • Qualitative data is virtually any type of information that can be observed and recorded that is not numerical in nature and can be in the form of written or verbal communication. Open questions in questionnaires and accounts from observational studies collect qualitative data.
  • Primary data is first-hand data collected for the purpose of the investigation.
  • Secondary data is information that has been collected by someone other than the person who is conducting the research e.g. taken from journals, books or articles.

Validity means how well a piece of research actually measures what it sets out to, or how well it reflects the reality it claims to represent.

Validity is whether the observed effect is genuine and represents what is actually out there in the world.

  • Concurrent validity is the extent to which a psychological measure relates to an existing similar measure and obtains close results. For example, a new intelligence test compared to an established test.
  • Face validity: whether the test appears, ‘on the face of it’, to measure what it is supposed to measure. This is assessed by ‘eyeballing’ the measure or by passing it to an expert to check.
  • Ecological validity is the extent to which findings from a research study can be generalized to other settings / real life.
  • Temporal validity is the extent to which findings from a research study can be generalized to other historical times.

Features of Science

  • Paradigm – A set of shared assumptions and agreed methods within a scientific discipline.
  • Paradigm shift – The result of a scientific revolution: a significant change in the dominant unifying theory within a scientific discipline.
  • Objectivity – When all sources of personal bias are minimised so as not to distort or influence the research process.
  • Empirical method – Scientific approaches that are based on the gathering of evidence through direct observation and experience.
  • Replicability – The extent to which scientific procedures and findings can be repeated by other researchers.
  • Falsifiability – The principle that a theory cannot be considered scientific unless it admits the possibility of being proved untrue.

Statistical Testing

A significant result is one where there is a low probability that chance factors were responsible for any observed difference, correlation, or association in the variables tested.

If our test is significant, we can reject our null hypothesis and accept our alternative hypothesis.

If our test is not significant, we can accept our null hypothesis and reject our alternative hypothesis. A null hypothesis is a statement of no effect.

In psychology, we typically use p < 0.05, as it strikes a balance between the risks of Type I and Type II errors; p < 0.01 is used in research where an error could cause harm, such as trials of a new drug.

A type I error is when the null hypothesis is rejected when it should have been accepted (happens when a lenient significance level is used, an error of optimism).

A type II error is when the null hypothesis is accepted when it should have been rejected (happens when a stringent significance level is used, an error of pessimism).
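The simulation below illustrates what the significance level means: when the null hypothesis is actually true, about 5% of tests will still come out significant at p < 0.05, and each such result is a Type I error. The data are randomly generated and SciPy is assumed to be installed; this is an illustrative sketch, not part of the original notes.

```python
import random
from scipy import stats

# When the null hypothesis is true (both "groups" are drawn from the same
# population), roughly alpha of all tests will still be significant purely
# by chance; each such result is a Type I error.
random.seed(1)
alpha = 0.05
false_positives = 0
n_simulations = 2000

for _ in range(n_simulations):
    group_a = [random.gauss(100, 15) for _ in range(20)]
    group_b = [random.gauss(100, 15) for _ in range(20)]  # same population: no real effect
    _, p = stats.ttest_ind(group_a, group_b)
    if p < alpha:
        false_positives += 1  # rejecting a true null hypothesis

print(f"Observed Type I error rate = {false_positives / n_simulations:.3f} (expected about {alpha})")
```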

Ethical Issues

  • Informed consent means that participants are able to make an informed judgment about whether to take part. However, providing full information may lead them to guess the aims of the study and change their behavior.
  • To deal with this, researchers can gain presumptive consent or ask participants to formally indicate their agreement to participate, but this may invalidate the purpose of the study, and it is not guaranteed that participants fully understand what they are agreeing to.
  • Deception should only be used when it is approved by an ethics committee, as it involves deliberately misleading or withholding information. Participants should be fully debriefed after the study but debriefing can’t turn the clock back.
  • All participants should be informed at the beginning that they have the right to withdraw if they ever feel distressed or uncomfortable.
  • Withdrawal can bias the results, as the participants who stay may be more obedient, and some may not withdraw because they have been given incentives or feel they would be spoiling the study. Researchers can also offer the right to withdraw data after participation.
  • Participants should all have protection from harm. The researcher should avoid risks greater than those experienced in everyday life and should stop the study if any harm is suspected. However, the harm may not be apparent at the time of the study.
  • Confidentiality concerns the communication of personal information. Researchers should not record any names, using numbers or false names instead, though complete anonymity may not be possible because it is sometimes possible to work out who the participants were.



AP Psychology: Understanding Research Methods


In AP Psychology, a deep understanding of research methods is essential for interpreting psychological studies and conducting empirical research. Here's a comprehensive guide to the key research methods studied in AP Psychology:

1. Experimental Research:

   - Objective: Establish cause-and-effect relationships between variables.

   - Design: Random assignment of participants to conditions, manipulation of an independent variable, and measurement of dependent variables.

2. Correlational Research:

   - Objective: Examine relationships between variables without manipulating them.

   - Design: Measure variables to determine the degree and direction of correlation. No manipulation of variables occurs.

3. Descriptive Research:

   - Objective: Observe and describe behavior without manipulating variables.

   - Design: Includes naturalistic observation, case studies, and surveys to gather information about behavior.

4. Longitudinal Studies:

   - Objective: Examine changes in behavior or traits over an extended period.

   - Design: Data collected from the same participants over time to observe developmental changes.

5. Cross-Sectional Studies:

   - Objective: Compare individuals of different ages to assess differences.

   - Design: Data collected from participants of different age groups at a single point in time.

6. Quasi-Experimental Designs:

   - Objective: Investigate cause-and-effect relationships without random assignment.

   - Design: Participants are not randomly assigned to conditions due to ethical or practical reasons.

7. Surveys and Questionnaires:

   - Objective: Gather self-report data on opinions, attitudes, or behaviors.

   - Design: Participants respond to a set of questions, providing quantitative or qualitative data.

8. Naturalistic Observation:

   - Objective: Observe and record behavior in its natural setting.

   - Design: Researchers avoid interfering with the environment, allowing for a more authentic representation of behavior.

9. Case Studies:

   - Objective: In-depth analysis of an individual or small group.

   - Design: Intensive examination of a person's history, behavior, and experiences.

10. Independent and Dependent Variables:

    - Objective: Identify the manipulated and measured aspects in an experiment.

    - Design: The independent variable is manipulated, and the dependent variable is measured to observe the effect.

11. Random Assignment:

    - Objective: Minimize pre-existing differences among participants in different experimental conditions.

    - Design: Participants are randomly assigned to experimental and control groups.
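A brief, hypothetical sketch of random assignment in Python; the participant labels are invented and the code simply illustrates the idea of dividing a sample into groups by chance alone.

```python
import random

sample = [f"participant_{i:02d}" for i in range(1, 21)]  # hypothetical sample of 20

random.shuffle(sample)                  # order is now determined by chance alone
midpoint = len(sample) // 2
experimental_group = sample[:midpoint]  # receives the treatment (manipulated IV)
control_group = sample[midpoint:]       # receives no treatment (or a placebo)

print("Experimental:", experimental_group)
print("Control:     ", control_group)
# Because assignment is random, pre-existing participant differences are
# expected to spread roughly evenly across the two conditions.
```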

12. Sampling Methods:

    - Objective: Ensure the selected sample is representative of the population.

    - Design: Techniques like random sampling, stratified sampling, or convenience sampling are used.

13. Ethical Considerations:

    - Objective: Ensure the well-being of participants and the integrity of research.

    - Design: Adherence to ethical guidelines, including informed consent, debriefing, and protection from harm.

14. Reliability and Validity:

    - Objective: Assess the consistency and accuracy of measurements.

    - Design: Researchers employ techniques to ensure that data collection methods are reliable and valid.

15. Statistical Analysis:

    - Objective: Draw meaningful conclusions from data.

    - Design: Utilize statistical tests like t-tests, ANOVA, or correlation coefficients to analyze and interpret results.

16. Replication:

    - Objective: Confirm the reliability of study findings.

    - Design: Repeated studies with similar methodologies to ensure the consistency of results.

By mastering these research methods, AP Psychology students can critically evaluate psychological studies, design their own experiments, and contribute to the scientific understanding of behavior and mental processes. Understanding the strengths and limitations of each method is crucial for becoming a proficient consumer and producer of psychological research.


APA Handbook of Research Methods in Psychology


With significant new and updated content across dozens of chapters, this second edition presents the most exhaustive treatment available of the techniques psychologists and others have developed to help them pursue a shared understanding of why humans think, feel, and behave the way they do.

The initial chapters in this indispensable three-volume handbook address broad, crosscutting issues faced by researchers: the philosophical, ethical, and societal underpinnings of psychological research. Next, chapters detail the research planning process, describe the range of measurement techniques that psychologists most often use to collect data, consider how to determine the best measurement techniques for a particular purpose, and examine ways to assess the trustworthiness of measures.

Additional chapters cover various aspects of quantitative, qualitative, neuropsychological, and biological research designs, presenting an array of options and their nuanced distinctions. Chapters on techniques for data analysis follow, and important issues in writing up research to share with the community of psychologists are discussed in the handbook’s concluding chapters.

Among the newly written chapters in the second edition, the handbook’s stellar roster of authors cover literature searching, workflow and reproducibility, research funding, neuroimaging, various facets of a wide range of research designs and data analysis methods, and updated information on the publication process, including research data management and sharing, questionable practices in statistical analysis, and ethical issues in manuscript preparation and authorship.

Volume 1. Foundations, Planning, Measures, and Psychometrics

Editorial Board

About the Editors

Contributors

A Note from the Publisher

Introduction: Objectives of Psychological Research and Their Relations to Research Methods

Part I. Philosophical, Ethical, and Societal Underpinnings of Psychological Research

  • Chapter 1. Perspectives on the Epistemological Bases for Qualitative Research Carla Willig
  • Chapter 2. Frameworks for Causal Inference in Psychological Science Peter M. Steiner, William R. Shadish, and Kristynn J. Sullivan
  • Chapter 3. Ethics in Psychological Research: Guidelines and Regulations Adam L. Fried and Kate L. Jansen
  • Chapter 4. Ethics and Regulation of Research With Nonhuman Animals Sangeeta Panicker, Chana K. Akins, and Beth Ann Rice
  • Chapter 5. Cross-Cultural Research Methods David Matsumoto and Fons J. R. van de Vijver
  • Chapter 6. Research With Populations that Experience Marginalization George P. Knight, Rebecca M. B. White, Stefanie Martinez-Fuentes, Mark W. Roosa, and Adriana J. Umaña-Taylor

Part II. Planning Research

  • Chapter 7. Developing Testable and Important Research Questions Frederick T. L. Leong, Neal Schmitt, and Brent J. Lyons
  • Chapter 8. Searching With a Purpose: How to Use Literature Searching to Support Your Research Diana Ramirez and Margaret J. Foster
  • Chapter 9. Psychological Measurement: Scaling and Analysis Heather Hayes and Susan E. Embretson
  • Chapter 10. Sample Size Planning Ken Kelley, Samantha F. Anderson, and Scott E. Maxwell
  • Chapter 11. Workflow and Reproducibility Oliver Kirchkamp
  • Chapter 12. Obtaining and Evaluating Research Funding Jonathan S. Comer and Amanda L. Sanchez

Part III. Measurement Methods

  • Chapter 13. Behavioral Observation Roger Bakeman and Vicenç Quera
  • Chapter 14. Question Order Effects Lisa Lee, Parvati Krishnamurty, and Struther Van Horn
  • Chapter 15. Interviews and Interviewing Techniques Anna Madill
  • Chapter 16. Using Intensive Longitudinal Methods in Psychological Research Masumi Iida, Patrick E. Shrout, Jean-Philippe Laurenceau, and Niall Bolger
  • Chapter 17. Automated Analyses of Natural Language in Psychological Research Laura K. Allen, Arthur C. Graesser, and Danielle S. McNamara
  • Chapter 18. Objective Tests as Instruments of Psychological Theory and Research David Watson
  • Chapter 19. Norm- and Criterion-Referenced Testing Kurt F. Geisinger
  • Chapter 20. The Current Status of "Projective" "Tests" Robert E. McGrath, Alec Twibell, and Elizabeth J. Carroll
  • Chapter 21. Brief Instruments and Short Forms Emily A. Atkinson, Carolyn M. Pearson Carter, Jessica L. Combs Rohr, and Gregory T. Smith
  • Chapter 22. Eye Movements, Pupillometry, and Cognitive Processes Simon P. Liversedge, Sara V. Milledge, and Hazel I. Blythe
  • Chapter 23. Response Times Roger Ratcliff
  • Chapter 24. Psychophysics: Concepts, Methods, and Frontiers Allie C. Hexley, Takuma Morimoto, and Manuel Spitschan
  • Chapter 25. The Perimetric Physiological Measurement of Psychological Constructs Louis G. Tassinary, Ursula Hess, Luis M. Carcoba, and Joseph M. Orr
  • Chapter 26. Salivary Hormone Assays Linda Becker, Nicholas Rohleder, and Oliver C. Schultheiss
  • Chapter 27. Electro- and Magnetoencephalographic Methods in Psychology Eddie Harmon-Jones, David M. Amodio, Philip A. Gable, and Suzanne Dikker
  • Chapter 28. Event-Related Potentials Steven J. Luck
  • Chapter 29. Functional Neuroimaging Megan T. deBettencourt, Wilma A. Bainbridge, Monica D. Rosenberg
  • Chapter 30. Noninvasive Stimulation of the Cerebral Cortex Dennis J. L. G. Schutter
  • Chapter 31. Combined Neuroimaging Methods Marius Moisa and Christian C. Ruff
  • Chapter 32. Neuroimaging Analysis Methods Yanyu Xiong and Sharlene D. Newman

Part IV. Psychometrics

  • Chapter 33. Reliability Sean P. Lane, Elizabeth N. Aslinger, and Patrick E. Shrout
  • Chapter 34. Generalizability Theory Xiaohong Gao and Deborah J. Harris
  • Chapter 35. Construct Validity Kevin J. Grimm and Keith F. Widaman
  • Chapter 36. Item-Level Factor Nisha C. Gottfredson, Brian D. Stucky, and A. T. Panter
  • Chapter 37. Item Response Theory Steven P. Reise and Tyler M. Moore
  • Chapter 38. Measuring Test Performance With Signal Detection Theory Techniques Teresa A. Treat and Richard J. Viken

Volume 2. Research Designs: Quantitative, Qualitative, Neuropsychological, and Biological

Part I. Qualitative Research Methods

  • Chapter 1. Developments in Qualitative Inquiry Sarah Riley and Andrea LaMarre
  • Chapter 2. Metasynthesis of Qualitative Research Sally Thorne
  • Chapter 3. Grounded Theory and Psychological Research Robert Thornberg, Elaine Keane, and Malgorzata Wójcik
  • Chapter 4. Thematic Analysis Virginia Braun and Victoria Clarke
  • Chapter 5. Phenomenological Methodology, Methods, and Procedures for Research in Psychology Frederick J. Wertz
  • Chapter 6. Narrative Analysis Javier Monforte and Brett Smith
  • Chapter 7. Ethnomethodology and Conversation Analysis Paul ten Have
  • Chapter 8. Discourse Analysis and Discursive Psychology Chris McVittie and Andy McKinlay
  • Chapter 9. Ethnography in Psychological Research Elizabeth Fein and Jonathan Yahalom
  • Chapter 10. Visual Research in Psychology Paula Reavey, Jon Prosser, and Steven D. Brown
  • Chapter 11. Researching the Temporal Karen Henwood and Fiona Shirani

Part II. Working Across Epistemologies, Methodologies, and Methods

  • Chapter 12. Mixed Methods Research in Psychology Timothy C. Guetterman and Analay Perez
  • Chapter 13. The "Cases Within Trials" (CWT) Method: An Example of a Mixed-Methods Research Design Daniel B. Fishman
  • Chapter 14. Researching With American Indian and Alaska Native Communities: Pursuing Partnerships for Psychological Inquiry in Service to Indigenous Futurity Joseph P. Gone
  • Chapter 15. Participatory Action Research as Movement Toward Radical Relationality, Epistemic Justice, and Transformative Intervention: A Multivocal Reflection Urmitapa Dutta, Jesica Siham Fernández, Anne Galletta, and Regina Day Langhout

Part III. Sampling Across People and Time

  • Chapter 16. Introduction to Survey Sampling Roger Tourangeau and Ting Yan
  • Chapter 17. Epidemiology Rumi Kato Price and Heidi H. Tastet
  • Chapter 18. Collecting Longitudinal Data: Present Issues and Future Challenges Simran K. Johal, Rohit Batra, and Emilio Ferrer
  • Chapter 19. Using the Internet to Collect Data Ulf-Dietrich Reips

Part IV. Building and Testing Models

  • Chapter 20. Statistical Mediation Analysis David P. MacKinnon, Jeewon Cheong, Angela G. Pirlott, and Heather L. Smyth
  • Chapter 21. Structural Equation Modeling with Latent Variables Rick H. Hoyle and Nisha C. Gottfredson
  • Chapter 22. Mathematical Psychology Parker Smith, Yanjun Liu, James T. Townsend, and Trisha Van Zandt
  • Chapter 23. Computational Modeling Adele Diederich
  • Chapter 24. Fundamentals of Bootstrapping and Monte Carlo Methods William Howard Beasley, Patrick O'Keefe, and Joseph Lee Rodgers
  • Chapter 25. Designing Simulation Studies Xitao Fan
  • Chapter 26. Bayesian Modeling for Psychologists: An Applied Approach Fred M. Feinberg and Richard Gonzalez

Part V. Designs Involving Experimental Manipulations

  • Chapter 27. Randomized Designs in Psychological Research Larry Christensen, Lisa A. Turner, and R. Burke Johnson
  • Chapter 28. Nonequivalent Comparison Group Designs Henry May and Zachary K. Collier
  • Chapter 29. Regression Discontinuity Designs Charles S. Reichardt and Gary T. Henry
  • Chapter 30. Treatment Validity for Intervention Studies Dianne L. Chambless and Steven D. Hollon
  • Chapter 31. Translational Research Michael T. Bardo, Christopher Cappelli, and Mary Ann Pentz
  • Chapter 32. Program Evaluation: Outcomes and Costs of Putting Psychology to Work Brian T. Yates

Part VI. Quantitative Research Designs Involving Single Participants or Units

  • Chapter 33. Single-Case Experimental Design John M. Ferron, Megan Kirby, and Lodi Lipien
  • Chapter 34. Time Series Designs Bradley J. Bartos, Richard McCleary, and David McDowall

Part VII. Designs in Neuropsychology and Biological Psychology

  • Chapter 35. Case Studies in Neuropsychology Randi C. Martin, Simon Fischer-Baum, and Corinne M. Pettigrew
  • Chapter 36. Group Studies in Experimental Neuropsychology Avinash R Vaidya, Maia Pujara, and Lesley K. Fellows
  • Chapter 37. Genetic Methods in Psychology Terrell A. Hicks, Daniel Bustamante, Karestan C. Koenen, Nicole R. Nugent, and Ananda B. Amstadter
  • Chapter 38. Human Genetic Epidemiology Floris Huider, Lannie Ligthart, Yuri Milaneschi, Brenda W. J. H. Penninx, and Dorret I. Boomsma

Volume 3. Data Analysis and Research Publication

Part I. Quantitative Data Analysis

  • Chapter 1. Methods for Dealing With Bad Data and Inadequate Models: Distributions, Linear Models, and Beyond Rand R. Wilcox and Guillaume A. Rousselet
  • Chapter 2. Maximum Likelihood and Multiple Imputation Missing Data Handling: How They Work, and How to Make Them Work in Practice Timothy Hayes and Craig K. Enders
  • Chapter 3. Exploratory Data Analysis Paul F. Velleman and David C. Hoaglin
  • Chapter 4. Graphic Displays of Data Leland Wilkinson
  • Chapter 5. Estimating and Visualizing Interactions in Moderated Multiple Regression Connor J. McCabe and Kevin M. King
  • Chapter 6. Effect Size Estimation Michael Borenstein
  • Chapter 7. Measures of Clinically Significant Change Russell J. Bailey, Benjamin M. Ogles, and Michael J. Lambert
  • Chapter 8. Analysis of Variance and the General Linear Model James Jaccard and Ai Bo
  • Chapter 9. Generalized Linear Models David Rindskopf
  • Chapter 10. Multilevel Modeling for Psychologists John B. Nezlek
  • Chapter 11. Longitudinal Data Analysis Andrew K. Littlefield
  • Chapter 12. Event History Analysis Fetene B. Tekle and Jeroen K. Vermunt
  • Chapter 13. Latent State-Trait Models Rolf Steyer, Christian Geiser, and Christiane Loßnitzer
  • Chapter 14. Latent Variable Modeling of Continuous Growth David A. Cole, Jeffrey A. Ciesla, and Qimin Liu
  • Chapter 15. Dynamical Systems and Differential Equation Models of Change Steven M. Boker and Robert G. Moulder
  • Chapter 16. A Multivariate Growth Curve Model for Three-Level Data Patrick J. Curran, Chris L. Strauss, Ethan M. McCormick, and James S. McGinley
  • Chapter 17. Exploratory Factor Analysis and Confirmatory Factor Analysis Keith F. Widaman and Jonathan Lee Helm
  • Chapter 18. Latent Class and Latent Profile Models Brian P. Flaherty, Liying Wang, and Cara J. Kiff
  • Chapter 19. Decision Trees and Ensemble Methods in the Behavioral Sciences Kevin J. Grimm, Ross Jacobucci, and John J. McArdle
  • Chapter 20. Using the Social Relations Model to Understand Interpersonal Perception and Behavior P. Niels Christensen, Deborah A. Kashy, and Katelin E. Leahy
  • Chapter 21. Dyadic Data Analysis Richard Gonzalez and Dale Griffin
  • Chapter 22. The Data of Others: New and Old Faces of Archival Research Sophie Pychlau and David T. Wagner
  • Chapter 23. Social Network Analysis in Psychology: Recent Breakthroughs in Methods and Theories Wei Wang, Tobias Stark, James D. Westaby, Adam K. Parr, and Daniel A. Newman
  • Chapter 24. Meta-Analysis Jeffrey C. Valentine, Therese D. Pigott, and Joseph Morris

Part II. Publishing and the Publication Process

  • Chapter 25. Research Data Management and Sharing Katherine G. Akers and John A. Borghi
  • Chapter 26. Questionable Practices in Statistical Analysis Rex B. Kline
  • Chapter 27. Ethical Issues in Manuscript Preparation and Authorship Jennifer Crocker

Harris Cooper, PhD, is the Hugo L. Blomquist professor, emeritus, in the Department of Psychology and Neuroscience at Duke University. His research interests concern research synthesis and research methodology, and he also studies the application of social and developmental psychology to education policy. His book Research Synthesis and Meta-Analysis: A Step-by-Step Approach (2017) is in its fifth edition. He is the coeditor of the Handbook of Research Synthesis and Meta-Analysis (3rd ed., 2019).

In 2007, Dr. Cooper was the recipient of the Frederick Mosteller Award for Contributions to Research Synthesis Methodology, and in 2008 he received the Ingram Olkin Award for Distinguished Lifetime Contribution to Research Synthesis from the Society for Research Synthesis Methodology.

He served as the chair of the Department of Psychology and Neuroscience at Duke University from 2009 to 2014, and from 2017 to 2018 he served as the dean of social science at Duke. Dr. Cooper chaired the first APA committee that developed guidelines for information about research that should be included in manuscripts submitted to APA journals. He currently serves as the editor of American Psychologist, the flagship journal of APA.

Marc N. Coutanche, PhD, is an associate professor of psychology and research scientist in the Learning Research and Development Center at the University of Pittsburgh. Dr. Coutanche directs a program of cognitive neuroscience research and develops and tests new computational techniques to identify and understand the neural information present within neuroimaging data.

His work has been funded by the National Institutes of Health, National Science Foundation, American Psychological Foundation, and other organizations, and he has published in a variety of journals.

Dr. Coutanche received his PhD from the University of Pennsylvania, and conducted postdoctoral training at Yale University. He received a Howard Hughes Medical Institute International Student Research Fellowship and Ruth L. Kirschstein Postdoctoral National Research Service Award, and was named a 2019 Rising Star by the Association for Psychological Science.

Linda M. McMullen, PhD, is professor emerita of psychology at the University of Saskatchewan, Canada. Over her career, she has contributed to the development of qualitative inquiry in psychology through teaching, curriculum development, and pedagogical scholarship; original research; and service to the qualitative research community.

Dr. McMullen introduced qualitative inquiry into both the graduate and undergraduate curriculum in her home department, taught courses at both levels for many years, and has published articles, coedited special issues, and written a book ( Essentials of Discursive Psychology ) that is part of APA’s series on qualitative methodologies, among other works. She has been engaged with building the Society for Qualitative Inquiry in Psychology (SQIP; a section of Division 5 of the APA) into a vibrant scholarly society since its earliest days, and took on many leadership roles while working as a university professor.

Dr. McMullen’s contributions have been recognized by Division 5 of the APA, the Canadian Psychological Association, and the Saskatchewan Psychological Association.

Abigail Panter, PhD, is the senior associate dean for undergraduate education and a professor of psychology in the L. L. Thurstone Psychometric Laboratory at University of North Carolina at Chapel Hill. She is past president of APA’s Division 5, Quantitative and Qualitative Methods.

As a quantitative psychologist, she develops instruments, research designs and data-analytic strategies for applied research questions in higher education, personality, and health. She serves as a program evaluator for UNC’s Chancellor’s Science Scholars Program, and was also principal investigator for The Finish Line Project, a $3 million grant from the U.S. Department of Education that systematically investigated new supports and academic initiatives, especially for first-generation college students.

Her books include the  APA Dictionary of Statistics and Research Methods  (2014), the APA Handbook of Research Methods in Psychology  (first edition; 2012), the Handbook of Ethics in Quantitative Methodology  (2011), and the SAGE Handbook of Methods in Social Psychology (2004), among others.

David Rindskopf, PhD, is distinguished professor at the City University of New York Graduate Center, specializing in research methodology and statistics. His main interests are in Bayesian statistics, causal inference, categorical data analysis, meta-analysis, and latent variable models.

He is a fellow of the American Statistical Association and the American Educational Research Association, and is past president of the Society of Multivariate Experimental Psychology and the New York Chapter of the American Statistical Association.

Kenneth J. Sher, PhD, is chancellor’s professor and curators’ distinguished professor of psychological sciences, emeritus, at the University of Missouri. He received his PhD in clinical psychology from Indiana University (1980) and his clinical internship training at Brown University (1981).

His primary areas of research focus on etiological processes in the development of alcohol dependence, factors that affect the course of drinking and alcohol use disorders throughout adulthood, longitudinal research methodology, psychiatric comorbidity, and nosology. At the University of Missouri he directed the predoctoral and postdoctoral training program in alcohol studies, and his research has been continually funded by the National Institute on Alcohol Abuse and Alcoholism for more than 35 years.

Dr. Sher’s research contributions have been recognized by professional societies including the Research Society on Alcoholism and APA, and throughout his career, he has been heavily involved in service to professional societies and scholarly publications.



Research Methods Flashcards

Use these cards to study different research methods of psychology, including experiments and correlational research. The AP Psych exam, along with most introductory undergrad psych courses, devotes 8–10% of its multiple-choice questions to the content in this deck.

scientific method

general procedures psychologists use for gathering and interpreting data

Define theory as it relates to research methods.

organized, testable explanation of phenomena

Other researchers must be able to replicate the results of an experiment to validate its conclusions.

What is replication?

obtaining similar results to a previous study using the same methods

What is hindsight bias?

The tendency of people to overestimate their ability to have predicted an event after it has happened (the "I knew it all along" phenomenon).

What is a controlled experiment?

researchers systematically manipulate a variable and observe the response in a laboratory

What is a hypothesis?

prediction of how two or more factors are related

What is the difference between an independent variable and a dependent variable in an experiment?

The factor being manipulated is the independent variable. The factor being measured is the dependent variable.

Identify the independent and dependent variables:

If students use Brainscape to study, rather than simple flash cards, then they will get higher test scores.

  • independent: method of studying
  • dependent: test score

Define population as it relates to research methods.

all the individuals to which the study applies

Define sample as it relates to research methods.

subgroup of a population that constitutes participants of a study

What type of sample should be used in research?

A large, representative sample is ideal; larger samples tend to reflect the population more accurately.

The amount of difference between the sample and population is called __________.

sampling error

Define random selection as it relates to research methods.

every individual from a population has an equal chance of being chosen for the sample

Which individuals are in the experimental group?

subjects who receive the treatment or manipulation of the independent variable

Which individuals are in the control group?

subjects who do not receive any treatment or manipulation

Subjects who receive the treatment are part of the __________, while those who do not receive the treatment belong to the __________.

experimental group; control group

What type of experimental design uses experimental and control groups?

A between-subjects design (such as a matched-pairs design) uses an experimental group and a control group to compare the effect of the independent variable.

What process is used to ensure there are no preexisting differences between the control group and the experimental group?

Random assignment fairly divides the sample participants into the two groups.

confounding variable

  • any difference between the experimental group and the control group, besides the effect of the independent variable
  • a.k.a. third variable

List four types of confounding variables.

  • experimenter bias
  • demand characteristics
  • placebo effect
  • lack of counterbalancing

Define experimenter bias as it relates to confounding variables.

Experimenter bias occurs when a researcher’s expectations or preferences about the results of the study influence the experiment.

Define demand characteristics as they relate to confounding variables.

clues the participants discover about the intention of the study that alter their responses

Define placebo effect as it relates to confounding variables.

responding to an inactive drug with a change in behavior because the subject believes it contains the active ingredient

Define counterbalancing as it relates to confounding variables.

Researchers using a within-subjects design eliminate the effects of treatment order by assigning half the participants to treatment A first and the other half to treatment B first.
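To make counterbalancing concrete, here is a minimal Python sketch (hypothetical participant IDs, treatments simply labeled A and B) of how a within-subjects sample could be split into the two treatment orders:

```python
import random

def counterbalance(participants, seed=None):
    """Randomly assign half of the participants to receive treatment A first
    and the other half to receive treatment B first (hypothetical labels)."""
    rng = random.Random(seed)
    shuffled = participants[:]          # copy so the original list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {
        "A_then_B": shuffled[:half],    # first half gets treatment order A -> B
        "B_then_A": shuffled[half:],    # second half gets treatment order B -> A
    }

orders = counterbalance(["P01", "P02", "P03", "P04", "P05", "P06"], seed=1)
print(orders)
```

Because each order is used equally often, any effect of going first or second averages out across the two halves of the sample.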

What type of experimental design uses each participant as his/her own control?

A within-subjects design exposes each participant to the treatment and compares their pre-test and post-test results. This design can also compare the results of two different treatments administered.

How do researchers specifically define what variables mean?

Researchers use operational definitions to precisely describe variables in relation to their study. For example, “effectiveness of studying” can be operationally defined with a test score.

What is a single-blind procedure?

research design in which the subjects are unaware if they are in the control or experimental group

What is a double-blind procedure?

research design in which neither the experimenter nor the subjects are aware who is in the control or experimental group

Single-blind procedures aim to eliminate the effects of __________, while double-blind procedures use a third party researcher to omit the effects of __________.

demand characteristics; experimenter bias

What is the Hawthorne effect?

individuals who know they are being observed or studied behave differently than they do in everyday life

How are quasi-experiments different from controlled experiments?

Random assignment is not possible in quasi-experiments.

What types of research are considered quasi-experiments?

Studies comparing preexisting groups that cannot be randomly assigned, such as differences in behavior between:

  • males and females
  • various age groups
  • students in different classes

correlational research

  • establishes a relationship between two variables
  • does not determine cause and effect
  • used to make predictions and generate future research

List three types of correlational research.

  • naturalistic observation
  • surveys
  • tests

Define naturalistic observation as it relates to correlational research.

Naturalistic observation consists of field observation of naturally occurring behavior, such as the way students behave in the classroom. There is no manipulation of variables.

What are surveys and why are they not always accurate?

  • type of correlational research
  • questionnaires and interviews given to a large group of people about their thoughts or behavior
  • individuals often answer in socially desirable ways (aiming to appear politically correct or socially accepted), which can lead to false answers

Define tests as they relate to correlational research.

research method that measures individual traits at a specific time and place

__________ studies start by looking at an effect and then attempt to determine the cause.

Ex post facto

What is the difference between the reliability and validity of a test?

A test is reliable if it is consistent and repeatable, meaning if you could take the same test a second time, you would get the same result(s)

A test is valid if it measures what it is intended to measure. It is often said that a test cannot be valid if it is not reliable.

For example, a bathroom scale is reliable because you can weigh yourself, step off, step back on, then get the same measurement. The bathroom scale is valid for measuring how much you weigh, but invalid for measuring your IQ.

What is a case study?

  • detailed examination of one person or a small group
  • beneficial for understanding rare and complex phenomena in clinical research
  • not always representative of the larger population

What are the strengths and weaknesses of each research method?

experiments

Strengths:

  • determine cause-and-effect relationships between variables
  • control over confounding variables

Weaknesses:

  • it can be difficult to generalize from the lab to the real world
  • time-consuming

correlational research (surveys, tests, naturalistic observation)

Strengths:

  • easy to administer surveys or tests
  • inexpensive
  • minimal time needed
  • substantial real-world generalizability

Weaknesses:

  • no control over confounding variables
  • skewed or biased results
  • establishes a relationship, not causation

What are statistics?

analysis of numerical data regarding representative samples

__________ data includes numerical measurements and __________ data includes descriptive words.

Quantitative; qualitative

What are the four scales of measurement?

nominal, ordinal, interval, and ratio

nominal scale

numbers have no meaning except as labels

Example: Girls are designated as 1 and boys are designated as 2.

ordinal scale

numbers are used as ranks

Example: The highest score is designated as 1, second highest as 2, third highest as 3, and so on.

interval scale

numbers that have a meaningful difference between them

Example: Temperature: The difference between 10°F and 20°F is the same as between 30°F and 40°F.

ratio scale

numbers that have a meaningful ratio between them on a scale with a real zero point

Example: Weight and height: If you weigh zero pounds, you have no weight. 100 pounds is twice as heavy as 50 pounds.

Would temperature be measured on an interval scale or a ratio scale?

An interval scale. If the temperature is 0°F, there is not "no temperature," and there is not a meaningful ratio between values: 100°F is not twice as hot as 50°F.

What are descriptive statistics?

numbers that summarize a set of research data from a sample

frequency distribution

an orderly arrangement of scores indicating the frequency of each score

What is the difference between a histogram and a frequency polygon?

A histogram is a bar graph and a frequency polygon is a line graph or a bell curve.
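For illustration only (not part of the original card), here is a short sketch, assuming matplotlib and NumPy are installed, that draws the same made-up scores both as a histogram and as a frequency polygon:

```python
import numpy as np
import matplotlib.pyplot as plt

scores = np.random.default_rng(0).normal(loc=75, scale=10, size=200)  # hypothetical test scores

# Histogram: frequencies shown as bars.
counts, bin_edges, _ = plt.hist(scores, bins=10, alpha=0.5, label="histogram")

# Frequency polygon: the same frequencies plotted as a line at each bin midpoint.
midpoints = (bin_edges[:-1] + bin_edges[1:]) / 2
plt.plot(midpoints, counts, marker="o", label="frequency polygon")

plt.xlabel("score")
plt.ylabel("frequency")
plt.legend()
plt.show()
```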

Define and list the three types of:

central tendency

Measures of central tendency describe the most typical scores for a set of research data. The three measures are the mode, the median, and the mean.

Define in terms of central tendency: mode, median, and mean.

  • Mode: the most frequently occurring score in the data set
  • Median: the middle score when the data is ordered by size
  • Mean: the arithmetic average of the scores in the data set

If two scores appear most frequently, the distribution is __________, and if there are three or more appearing most frequently, it is __________.

bimodal; multimodal

Which measure of central tendency is the most representative? The least representative?

  • mean is usually most representative, unless there are extreme outliers that pull the mean in a particular direction
  • median is less affected by outliers, but uses less information from the data set
  • mode is the least representative
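As a quick worked example (made-up scores, using Python's built-in statistics module), notice how one extreme outlier pulls the mean upward while the median barely moves:

```python
from statistics import mean, median, mode

scores = [70, 72, 75, 75, 78, 80]          # hypothetical quiz scores
with_outlier = scores + [200]              # add one extreme outlier

print(mean(scores), median(scores), mode(scores))   # 75.0 75.0 75
print(mean(with_outlier), median(with_outlier))     # about 92.9 versus 75
```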

normal distribution

bell-shaped, symmetric curve that represents data about many human characteristics throughout the population

When most of the scores are compacted on one side of the bell curve, the distribution is said to be __________.

skewed. Positively skewed distributions include a lot of small values (with a tail toward the high end), and negatively skewed distributions include a lot of large values (with a tail toward the low end).

measures of variability

Measures of variability describe the dispersion of scores for a set of research data.

  • range
  • variance
  • standard deviation

Define in terms of variability:

Range: the difference between the largest score and the smallest score

What do variance and standard deviation measure?

how spread out the scores are around the mean: variance is the average squared deviation of each score from the mean, and standard deviation is the square root of the variance

Taller, narrow curves have less variance than short, wider curves.
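Here is a minimal sketch (hypothetical scores, Python's statistics module) of the range, the variance, and the standard deviation; pvariance and pstdev treat the list as a whole population rather than a sample:

```python
from statistics import pvariance, pstdev

scores = [2, 4, 4, 4, 5, 5, 7, 9]        # hypothetical data set (mean = 5)

data_range = max(scores) - min(scores)   # range: largest minus smallest
print(data_range)                        # 7
print(pvariance(scores))                 # 4 (average squared deviation from the mean)
print(pstdev(scores))                    # 2.0 (square root of the variance)
```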

What is a z score (a.k.a. standard score)?

  • allows for comparison between different scales
  • subtract mean from each score and divide by standard deviation
  • mean has a z score of zero
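A short sketch (made-up data) of the z-score arithmetic, subtracting the mean from each score and dividing by the standard deviation:

```python
from statistics import mean, pstdev

scores = [2, 4, 4, 4, 5, 5, 7, 9]          # hypothetical data (mean 5, standard deviation 2)
m, sd = mean(scores), pstdev(scores)

z_scores = [(x - m) / sd for x in scores]
print(z_scores)          # a raw score of 9 becomes z = (9 - 5) / 2 = 2.0
print(mean(z_scores))    # 0.0 -- the mean always has a z score of zero
```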

percentile score

the percentage of scores at or below a particular score; percentile ranks range from 1 to 99

Example: If you are in the 70th percentile, 70% of the scores are the same as or below yours.
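As a rough sketch (hypothetical scores, simple "at or below" counting), a percentile rank can be computed like this:

```python
def percentile_rank(score, scores):
    """Percentage of scores in the data set that are at or below the given score."""
    return 100 * sum(1 for s in scores if s <= score) / len(scores)

scores = [55, 60, 64, 68, 70, 72, 75, 80, 85, 90]   # hypothetical test scores
print(percentile_rank(75, scores))   # 70.0 -> a score of 75 is at the 70th percentile
```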

Pearson correlation coefficient

  • statistical linear measure of the relationship between two sets of data
  • varies from -1 to +1
  • helps to make predictions about variables

Name the correlation coefficient for each and describe the relationship:

  • perfect positive correlation
  • no relationship
  • perfect negative correlation
  • r = +1 direct relationship: as one variable increases or decreases, the other does the same
  • r = 0 no relationship
  • r = -1 inverse relationship: as one variable increases or decreases, the other does the opposite
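For a concrete picture, this sketch (hypothetical hours-studied and test-score data) computes Pearson's r directly from its formula, the covariance divided by the product of the two standard deviations; on Python 3.10+ the statistics.correlation function would give the same result:

```python
from math import sqrt

hours  = [1, 2, 3, 4, 5, 6]        # hypothetical hours studied
scores = [55, 61, 68, 70, 79, 84]  # hypothetical test scores

def pearson_r(x, y):
    """Pearson correlation coefficient; always falls between -1 and +1."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx  = sqrt(sum((a - mx) ** 2 for a in x))
    sy  = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(round(pearson_r(hours, scores), 3))   # close to +1: a strong positive correlation
```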

What type of graph plots single points to show the strength and direction of correlations?

scatterplot

What is the term for the line on a scatterplot that follows the trend of the points?

line of best fit or regression line

inferential statistics

  • used to interpret data and draw conclusions
  • indicate generalizability to population
  • indicate real relationship, not due to chance

What is the difference between a null and an alternative hypothesis?

Null hypotheses state that a treatment had no effect, while alternative hypotheses state the treatment did have an effect in the experiment.

What is the difference between a Type I and Type II error?

Type I errors, or false positives, occur if the researcher rejects a true null hypothesis. Type II errors, or false negatives, occur if the researcher fails to reject a false null hypothesis.

The variable p represents __________.

the probability that the result occurred by chance, which is used to judge statistical significance

When is a finding statistically significant?

when the probability that the finding is due to chance is less than the alpha level, typically 1 in 20 (p < 0.05)

Said another way, when you are 95% confident that the result was not due to chance
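As an illustration of how the p < .05 rule is applied in practice, the sketch below (made-up groups) runs an independent-samples t test with SciPy, assuming it is installed:

```python
from scipy import stats

control      = [72, 68, 75, 70, 66, 74, 69, 71]   # hypothetical control-group scores
experimental = [78, 80, 75, 83, 79, 77, 81, 76]   # hypothetical experimental-group scores

result = stats.ttest_ind(experimental, control)   # independent-samples t test
print(round(result.pvalue, 4))

if result.pvalue < 0.05:
    print("Statistically significant: reject the null hypothesis.")
else:
    print("Not statistically significant: fail to reject the null hypothesis.")
```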

What method statistically combines the results of several research studies to reach a conclusion?

meta-analysis

Why did the American Psychological Association (APA) implement ethical guidelines?

  • Guidelines were set in place in the late 20th century to stress responsibility and morality in research and clinical practice
  • Dangerous and inhumane experiments, such as Harlow's rhesus monkey studies, Zimbardo's Stanford prison study, and Milgram's obedience (shock) experiments, led to the implementation of rules

What are the purposes of an Institutional Review Board (IRB)?

  • approve research being conducted at their particular institution
  • require participants give informed consent after hearing the risks and procedures
  • require debriefing of participants afterward with results of research
  • ensure humane and ethical treatment of animal and human subjects

__________ psychology is practical and designed for real world application, while __________ psychology is focused on research of fundamental principles and theories.

Applied; basic

AP® Psychology (14 decks)

  • History and Approaches
  • Research Methods
  • Biological Bases of Behavior
  • Sensation and Perception
  • States of Consciousness
  • Motivation and Emotion
  • Developmental Psychology
  • Personality
  • Testing and Individual Differences
  • Abnormal Behavior
  • Treatment of Abnormal Behavior
  • Social Psychology

Understanding Methods for Research in Psychology

A Psychology Research Methods Study Guide

Types of Research in Psychology

  • Cross-Sectional vs. Longitudinal Research
  • Reliability and Validity

Glossary of Terms

Research in psychology focuses on a variety of topics, ranging from the development of infants to the behavior of social groups. Psychologists use the scientific method to investigate questions both systematically and empirically.

Research in psychology is important because it provides us with valuable information that helps to improve human lives. By learning more about the brain, cognition, behavior, and mental health conditions, researchers are able to solve real-world problems that affect our day-to-day lives.

At a Glance

Knowing more about how research in psychology is conducted can give you a better understanding of what those findings might mean to you. Psychology experiments can range from simple to complex, but there are some basic terms and concepts that all psychology students should understand.

Start your studies by learning more about the different types of research, the basics of experimental design, and the relationships between variables.

Research in Psychology: The Basics

The first step in your review should include a basic introduction to psychology research methods. Psychology research can have a variety of goals. What researchers learn can be used to describe, explain, predict, or change human behavior.

Psychologists use the scientific method to conduct studies and research in psychology. The basic process of conducting psychology research involves asking a question, designing a study, collecting data, analyzing results, reaching conclusions, and sharing the findings.

The Scientific Method in Psychology Research

The steps of the scientific method in psychology research are:

  • Make an observation
  • Ask a research question and make predictions about what you expect to find
  • Test your hypothesis and gather data
  • Examine the results and form conclusions
  • Report your findings

Research in psychology can take several different forms. It can describe a phenomenon, explore the causes of a phenomenon, or look at relationships between one or more variables. Three of the main types of psychological research focus on:

Descriptive Studies

This type of research can tell us more about what is happening in a specific population. It relies on techniques such as observation, surveys, and case studies.

Correlational Studies

Correlational research is frequently used in psychology to look for relationships between variables. While researchers look at how variables are related, they do not manipulate any of them.

While correlational studies can suggest a relationship between two variables, finding a correlation does not prove that one variable causes a change in another. In other words, correlation does not equal causation.

Experimental Research Methods

Experiments are a research method that can look at whether changes in one variable cause changes in another. The simple experiment is one of the most basic methods of determining if there is a cause-and-effect relationship between two variables.

A simple experiment utilizes a control group of participants who receive no treatment and an experimental group of participants who receive the treatment.

Experimenters then compare the results of the two groups to determine if the treatment had an effect.

Cross-Sectional vs. Longitudinal Research in Psychology

Research in psychology can also involve collecting data at a single point in time, or gathering information at several points over a period of time.

Cross-Sectional Research

In a cross-sectional study, researchers collect data from participants at a single point in time. This is a descriptive type of research and cannot be used to determine cause and effect because researchers do not manipulate the independent variables.

However, cross-sectional research does allow researchers to look at the characteristics of the population and explore relationships between different variables at a single point in time.

Longitudinal Research

A longitudinal study is a type of research in psychology that involves looking at the same group of participants over a period of time. Researchers start by collecting initial data that serves as a baseline, and then collect follow-up data at certain intervals. These studies can last days, months, or years. 

The longest longitudinal study in psychology was started in 1921, and the study is planned to continue until the last participant dies or withdraws. As of 2003, more than 200 of the participants were still alive.

The Reliability and Validity of Research in Psychology

Reliability and validity are two concepts that are also critical in psychology research. In order to trust the results, we need to know if the findings are consistent (reliability) and that we are actually measuring what we think we are measuring (validity).

Reliability

Reliability is a vital component of a valid psychological test. What is reliability? How do we measure it? Simply put, reliability refers to the consistency of a measure. A test is considered reliable if we get the same result repeatedly.

Validity

When determining the merits of a psychological test, validity is one of the most important factors to consider. What exactly is validity? One of the greatest concerns when creating a psychological test is whether or not it actually measures what we think it is measuring.

For example, a test might be designed to measure a stable personality trait but instead measures transitory emotions generated by situational or environmental conditions. A valid test ensures that the results accurately reflect the dimension undergoing assessment.

Review some of the key terms that you should know and understand about psychology research methods. Spend some time studying these terms and definitions before your exam. Some key terms that you should know include:

  • Correlation
  • Demand characteristic
  • Dependent variable
  • Hawthorne effect
  • Independent variable
  • Naturalistic observation
  • Placebo effect
  • Random assignment
  • Replication
  • Selective attrition


By Kendra Cherry, MSEd. Kendra Cherry is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."

AP Psychology Community

Research Methods Of Psychology

Objectives:

Key Vocabulary: hindsight bias, critical thinking, theory, hypothesis, operational definition, replication, case study, survey, false consensus effect, population, random sample, naturalistic observation, correlational coefficient, scatterplot, illusory correlation, experiment, placebo, double blind procedure, experimental condition, control condition, random assignment, independent variable, dependent variable, mode, mean, median, range, standard deviation.

All Subjects

1.4 Selecting a Research Method

4 min read • June 18, 2024

By Sadiyya Holsey and Dalia Savy

When selecting a research method, the researcher should write down their goals and what their research question is.

Selecting a research method depends on what a researcher wants to show or prove. For example, if a researcher only wants to show a correlation between two variables, then an experiment would not be optimal. If a researcher wants to show cause and effect between two variables, then an experiment is the best method. 

When performing any research, the researcher should design the study carefully so they can be confident that the results reflect true relationships. ✔️

Selecting a research method also depends on the validity of the experiments. There are two types of validity: external validity and internal validity . 

  • External validity refers to how generalizable the results of the experiment are. For example, if the study on a drug is done on an Asian, middle-aged, average-weight man with high blood pressure, can the results be generalized to the population?
  • Internal validity  is when a study shows a truthful cause-and-effect relationship and the researcher is confident that the changes in the dependent variable were produced  only  by the independent variable. A confounding variable hurts the internal validity because it creates lower confidence in the research conclusion.

Confounding Variables

Confounding variables limit the confidence that researchers have in their conclusions. To recall from the last key topic, a confounding variable is an outside variable that influences both the independent and dependent variables, distorting their apparent relationship. For example, we looked at the correlation between crime ⛓️ and the sale of ice cream 🍨. As the crime rate increases, ice cream sales also increase. So, one might suggest that criminals cause people to buy ice cream or that purchasing ice cream causes people to commit crimes. However, both are extremely unlikely: a third variable, hot weather, drives both crime and ice cream sales upward.


In the classic coffee-and-cancer example (often shown as a diagram), smoking is a very clear confounding variable when thinking about what causes many chronic illnesses and cancers. It may look like coffee is directly causing pancreatic cancer, since coffee is what you are focused on and measuring, but smoking contributes as well without you realizing it. That raises the question: how accurate is the claim that exposure to coffee leads to pancreatic cancer, or even that the two are correlated? This is why confounding variables are so important!

Here is a quick reminder of the different research methods and their strengths/weaknesses:

Research method: Descriptive
  • Basic purpose: to observe and record behavior
  • How conducted: case studies, naturalistic observations, or surveys
  • What is manipulated: nothing
  • Strengths: case studies require only one participant; naturalistic observations may be done when it is not ethical to manipulate variables; surveys may be done quickly and inexpensively (compared with experiments)
  • Weaknesses: uncontrolled variables mean cause and effect cannot be determined; single cases may be misleading

Research method: Correlational
  • Basic purpose: to detect naturally occurring relationships; to assess how well one variable predicts another
  • How conducted: collect data on two or more variables; no manipulation
  • What is manipulated: nothing
  • Strengths: works with large groups of data, and may be used in situations where an experiment would not be ethical or possible
  • Weaknesses: does not specify cause and effect

Research method: Experimental
  • Basic purpose: to explore cause and effect
  • How conducted: manipulate one or more variables; use random assignment
  • What is manipulated: the independent variable(s)
  • Strengths: specifies cause and effect, and variables are controlled
  • Weaknesses: sometimes not feasible; results may not generalize to other contexts; not ethical to manipulate certain variables

👉 Need a review of any of these research methods? Be sure to check out our study guide all about them to get a more in-depth read!

Practice AP Question

The following prompt is from the  2013 AP Psychology Exam (#1). 

  • In response to declining reading scores in local schools, John wrote an editorial suggesting that schools need to increase interest in reading books by providing students with incentives. Based on research showing a relation between use of incentives and student reading, he recommended providing a free pizza coupon for every ten books a student reads. 🍕 This question had several parts, but let's relate it to research methods. 

Could we trust his argument? Does he have the right evidence?

Not really, right? He never ran an experiment, and he implied causation, which is something we can never do with correlational evidence. Association does not equal causation.

Just because he found a "relation" between incentives and student reading, it doesn't mean that using incentives will increase student reading. Therefore, we can challenge John's argument: the correlational evidence he cites cannot show that providing an incentive would increase interest in reading books. 📚

When taking a look at a free-response question on the AP Psychology exam, be sure to look out for words such as "caused" or "correlated." You always want to see what is being implied and test its accuracy. Here, we saw the word "relation" and quickly thought: correlation does not equal causation.


FRQ Question - Research Methods Tutorial / Examples

FRQ - Research Methods: The 2020 AP Exam Question 2 will be similar in structure to a traditional research methods question and will have 6 tasks in 15 minutes.


FRQ - Research Methods - Video Tutorials

2018 AP Psychology Test FRQ - Research Methods - Positive Correlation


Unit 1 Progress Check Research Methods FRQ Review


2013 AP Psychology Exam - FRQ Research Methods Tutorial

Types of Graphs on the AP Psychology Exam - PDF Worksheet

Sample Research Design Practice FRQs - Google Doc

Students must log in with their Wm S. Hart District Google account to view this Google Document



Guide to the AP Psychology Exam

AP Psychology Exam

Interested in the scientific study of behavior and mental processes? The AP® Psychology Exam is a college-level exam administered every year in May upon completion of an Advanced Placement Psychology course taken at your high school. If you score high enough, your AP score could earn you college credit!

Check out our AP Psychology Guide for the essential info you need about the exam:

  • Exam Overview

Sections & Question Types

  • How to Prepare

What's on the AP Psychology Exam?

The College Board requires your AP teacher to cover certain topics in the AP Psychology course. As you complete your Psych review, make sure you are familiar with the following topics:

  • Scientific Foundations of Psychology: Introducing Psychology; Research Methods in Psychology; Defining Psychological Science: The Experimental Method; Selecting a Research Method; Statistical Analysis in Psychology; Ethical Guidelines in Psychology
  • Biological Bases of Behavior: Interaction of Hereditary and Environment; The Endocrine System; Overview of the Nervous System and the Neuron; Neural Firing; Influence of Drugs on Neural Firing; The Brain; Tools for Examining Brain Structure and Function; The Adaptable Brain; Sleep and Dreaming
  • Sensation and Perception: Principles of Sensation; Principles of Perception; Visual Anatomy; Visual Perception; Auditory Sensation and Perception; Chemical Senses; Body Sense
  • Learning: Introduction to Learning; Classical Conditioning; Operant Conditioning; Social and Cognitive Factors in Learning
  • Cognitive Psychology: Introduction to Memory; Encoding; Storing; Retrieving; Forgetting and Memory Distortion; Biological Bases of Memory; Introduction to Thinking and Problem Solving; Biases and Errors in Thinking; Introduction to Intelligence; Psychometric Principles and Intelligence Testing; Components of Language and Language Acquisition
  • Developmental Psychology: The Lifespan and Physical Development in Childhood; Social Development in Childhood; Cognitive Development in Childhood; Adolescent Development; Adulthood and Aging; Moral Development; Gender and Sexual Orientation
  • Motivation, Emotion, and Personality: Theories of Motivation; Specific Topics of Motivation; Theories of Emotion; Stress and Coping; Introduction to Personality; Psychoanalytic Theories of Personality; Behaviorism and Social Cognitive Theories of Personality; Humanistic Theories of Personality; Trait Theories of Personality; Measuring Personality
  • Clinical Psychology: Introduction to Psychological Disorders; Psychological Perspectives and Etiology of Disorders; Neurodevelopmental and Schizophrenic Spectrum Disorders; Bipolar, Depressive, Anxiety, and Obsessive-Compulsive and Related Disorders; Trauma- and Stressor- Related, Dissociative, and Somatic Symptom and Related Disorders; Feeding and Eating, Substance and Addictive, and Personality Disorders; Introduction to Treatment of Psychological Disorders; Psychological Perspectives and Treatment of Disorders; Treatment of Disorders from the Biological Perspective; Evaluating Strengths, Weaknesses, and Empirical Support for Treatments of Disorders
  • Social Psychology: Attribution Theory and Person Perception; Attitude Formation and Attitude Change; Conformity, Compliance, and Obedience; Group Influences on Behavior and Mental Processes; Bias, Prejudice, and Discrimination; Altruism and Aggression; Interpersonal Attraction

Read More: Review for the exam with our AP Psychology Crash Courses

The AP Psych exam is 2 hours long and has two sections: a multiple-choice section and a free-response section.

  • Multiple-choice section: 70 minutes, 100 multiple-choice questions, 66.7% of the exam score
  • Free-response section: 50 minutes, 2 free-response questions, 33.3% of the exam score

Multiple-Choice Questions

The AP Psychology multiple-choice questions test the following skills:

  • Concept Understanding
  • Data Analysis
  • Scientific Investigation

Free-Response Questions

The AP Psych FRQs consists of two questions:

  • Question 1 is about Concept Application, assessing a student’s ability to explain and apply theories and perspectives in authentic contexts
  • Question 2 is about Research Design, assessing a student’s ability to analyze psychological research studies that include quantitative data.

For a comprehensive content review, check out our book,  AP Psychology Premium Prep

What’s a good AP Psychology Score?

AP scores are reported from 1 to 5. Colleges are generally looking for a 4 or 5 on the AP Psychology exam, but some may grant credit for a 3. Here’s how students scored on the May 2020 test:

  • 5 (Extremely qualified): 22.4%
  • 4 (Well qualified): 25.4%
  • 3 (Qualified): 23.5%
  • 2 (Possibly qualified): 9.6%
  • 1 (No recommendation): 19.1%

Source: College Board

How can I prepare?

AP classes are great, but for many students they’re not enough! For a thorough review of AP Psychology content and strategy, pick the  AP prep option  that works best for your goals and learning style. 


  • The Albert Team
  • Last Updated On: March 1, 2022

11 Tough Vocab Terms for AP® Psychology Research Methods

Research methods is one of the most term-heavy topics within AP® Psychology. Being able to differentiate between the terms will benefit you on your exam and reduce confusion once you begin running experiments.

1. Operational Definition

The operational definition is a term that is used to describe the procedure of a study and the research variables. When thinking about the operational definition, it is beneficial to visualize what the experiment is measuring and how it is going to be measured.

An example of this would be an experiment measuring whether Timmy laughs more at girls or at boys. The operational definition for this experiment would state what the experiment counts as a laugh; this experiment might operationally define a laugh as a smile accompanied by a sound.

By having this operational definition, other psychologists are able to replicate the experiment. The operational definition lets the reader of the experiment know what was deemed a positive or negative result, thereby opening the experiment up for replication and expansion by other psychologists.

2. Random Sample

A random sample is a group of subjects chosen so that it accurately represents the population. It should fairly reflect the overall population, covering various ethnicities, socioeconomic classes, genders, and ages.

An example of this would be if we use Timmy's study again. To have a truly random sample of people, we must select girls and boys who represent all of the different types of people: all races, socioeconomic groups, and ages. Random sampling also dictates that the experimenter must have little to no bias in choosing the people in the sample, so participants should be chosen in an impartial way. One approach that approximates this is systematic sampling, such as taking every third person in a given population for the study.


3. Random Assignment

Random assignment is different from random sampling: random sampling deals with choosing who participates in the study, while random assignment dictates which members of the selected sample go into the control group and which go into the experimental group.

This is important because random assignment keeps the person running the experiment from placing people they think will be affected into the experimental group. By randomly assigning people to each group, the experiment retains validity.

An example of this would be placing a random half of the random sampling, or the selected population, into a placebo group and the other half into the experimental group for a drug trial.
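Here is a minimal sketch of that idea (hypothetical participant IDs), using Python's random module to split an already-selected sample into placebo and experimental groups:

```python
import random

sample = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]   # hypothetical selected sample

rng = random.Random(42)                                              # seeded only so the example repeats
experimental_group = rng.sample(sample, k=len(sample) // 2)          # half chosen purely at random
placebo_group = [p for p in sample if p not in experimental_group]   # everyone else

print("experimental:", experimental_group)
print("placebo:     ", placebo_group)
```

Because chance, not the experimenter, decides who receives the treatment, preexisting differences tend to be spread evenly across the two groups.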

4. Correlation Coefficient

The correlation coefficient is a number that lies between negative one (-1) and positive one (+1). This number represents the strength and direction of the relationship between two variables. A value of +1 represents a perfect relationship in the positive direction, and -1 represents a perfect relationship in the negative direction.

In practice the relationship is usually not perfect, making the correlation coefficient a decimal. The closer the decimal is to -1 or +1, the stronger the correlation. For example, a correlation coefficient of .78 indicates a much stronger relationship than a correlation of .34.

It is important, however, to note that correlation does not equal causation. Just because two variables are strongly correlated, it does not mean that changes in one variable are causing the changes in the other.

5. Illusory Correlation

The illusory correlation is a phenomenon that psychologists must avoid during experimentation. This correlation is when the person believes that a relationship exists between two variables when it does not.

A great example of this is a superstition, like believing that an unwashed, favorite jersey will lead to a win for a sports team. There is no actual relationship between a fan wearing an unwashed jersey and the team winning the game, but the fan believes that there is.

6. Dependent Variable

The dependent variable is the variable that measures the outcome of the experiment. This is the response. For example, if we are measuring which comedian makes the children laugh, then we will be measuring how many times the children laugh for the dependent variable.

The experimenter should have no direct influence on the dependent variable; otherwise the test would be skewed.

7. Independent Variable

The independent variable is what causes the dependent variable. This independent variable would be the comedian in the case of recording the funniest comedian to children. The comedian causes the laughter, which is the dependent variable, making that comedian or comedians the independent variable.

The independent variable must be controlled by the experimenter, because the psychologist must craft the independent variable so that other variables do not influence the dependent variable; otherwise, the experiment contains error.

A great way to control the independent variable and the experiment as a whole is by utilizing random sampling and random assignment.

8. Confounding Variable

The confounding variable, often referred to as the extraneous variable, is unwanted in the experiment, although it unfortunately ends up in many experiments unintentionally. It is a variable that the experimenter did not account for initially but that affected the dependent variable. For example, the random sampling may result in a not-so-random sample: if the sample contained mostly one social class and that affected the experiment's outcome, then social class would be the confounding variable.

If the confounding variable is too influential towards the dependent variable, then the experiment could be deemed invalid.

9. Standard Deviation

(Figure: standard deviation diagram of U.S. men's heights.)

Standard deviation is a statistical measure of how far scores fall from the average (mean) result. It is a way for the experimenter to tell how much variation is in the results: the higher the standard deviation, the more variation is occurring in the data.

For the AP® Psychology exam it is important to know the percentage of scores that fall within one standard deviation of the mean, which is about sixty-eight percent, and within two standard deviations, which is about ninety-five percent. Roughly 99.7 percent of scores fall within three standard deviations, as shown by the bell-shaped (normal) curve.
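The short simulation below (illustration only, not from the source) draws a large normal sample and counts how many values fall within one, two, and three standard deviations of the mean, approximately reproducing the 68-95-99.7 percentages:

```python
import random
from statistics import mean, pstdev

rng = random.Random(0)
data = [rng.gauss(0, 1) for _ in range(100_000)]   # simulated normally distributed scores

m, sd = mean(data), pstdev(data)
for k in (1, 2, 3):
    share = sum(1 for x in data if abs(x - m) <= k * sd) / len(data)
    print(f"within {k} SD of the mean: {share:.1%}")   # roughly 68%, 95%, and 99.7%
```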

10. The Double Blind Procedure

In the double-blind procedure, neither the participant in the study nor the person administering the study knows who is in the control group and who is in the experimental group. This allows the study to detect the placebo effect, which occurs when a group of people feel an effect of a drug when they have actually only ingested a placebo, often a sugar pill with no active ingredient.

The double blind procedure keeps as much bias out of the procedure as possible, allowing the psychologists doing the procedure to more accurately determine if the result is accurate.

11. Internal Validity

Internal validity is a term that represents the confidence that can be placed in an experiment's results. For an experiment to have internal validity, all of the confounding variables must have been acknowledged and controlled by the experimenter, and there must be a statistical relationship between the independent variable and the dependent variable.

Key Takeaway for AP® Psychology

These eleven terms from the research methods portion of AP® Psychology may seem unneeded or like common sense; however, to construct a reliable experiment with internal validity, you must keep all of them in mind while designing the experiment.

Being able to pick out the parts of an experiment and explain why it is valid or invalid is also a large part of many of the free-response questions. Being able to do this will be sure to boost your score on exam day.

Let’s put everything into practice. Try this AP® Psychology practice question:

Circadian Rhythm AP® Psychology Practice Question

Looking for more AP® Psychology practice?

Check out our other articles on  AP® Psychology .

You can also find thousands of practice questions on Albert.io. Albert.io lets you customize your learning experience to target practice where you need the most help. We’ll give you challenging practice questions to help you achieve mastery of AP Psychology.

Start practicing here .


IMAGES

  1. statistical techniques ap psychology

    types of research methods ap psychology

  2. Types of Research Archives

    types of research methods ap psychology

  3. Research Methods AP Psychology-Unit ppt download

    types of research methods ap psychology

  4. image

    types of research methods ap psychology

  5. 15 Types of Research Methods (2024)

    types of research methods ap psychology

  6. Research methods in applied social psychology-1

    types of research methods ap psychology

VIDEO

  1. The scientific approach and alternative approaches to investigation

  2. Unit 1: Intro to Research (AP Psychology)

  3. AP Psych Unit 2 Descriptive Research Methods

  4. Unit 0 Part 4 Types of Research Methods

  5. Unit 1: Descriptive Research (AP Psychology)

  6. PSY 2120: Why study research methods in psychology?

COMMENTS

  1. AP Psychology RESEARCH METHODS Flashcards

    Experiment. A research method in which an investigator manipulates one or more factors (independent variables) to observe the effects on some behavior or mental process (the dependent variable). By random assignment of participants, the experimenter aims to control other relevant variable. Replication.

  2. Research Methods in Psychology

    1.5 Statistical Analysis in Psychology. 1.6 Ethical Guidelines in Psychology. Unit 2 - Biological Basis of Behavior. Unit 3 - Sensation & Perception. Unit 4 - Learning. Unit 5 - Cognitive Psychology. Unit 6 - Developmental Psychology. Unit 7 - Motivation, Emotion, & Personality. Unit 8 - Clinical Psychology.

  3. PDF APA Handbook of Research Methods in Psychology

    Research Methods in Psychology AP A Han dbook s in Psychology VOLUME Research Designs: Quantitative, Qualitative, Neuropsychological, and Biological SECOND EDITION Harris Cooper, Editor-in-Chief Marc N. Coutanche, Linda M. McMullen, A. T. Panter, sychological Association. Not for further distribution.

  4. AP Psychology: Research Methods Notes

    AP Psychology: Research Methods Notes. The study of psychology relies on a diverse array of qualitative and quantitative research methods, including observations, case studies, surveys, and controlled experiments. Psychological research is carefully designed so that researchers can be confident about using results to draw conclusions about real ...

  5. Research Methods In Psychology

    Olivia Guy-Evans, MSc. Research methods in psychology are systematic procedures used to observe, describe, predict, and explain behavior and mental processes. They include experiments, surveys, case studies, and naturalistic observations, ensuring data collection is objective and reliable to understand and explain psychological phenomena.

  6. AP Psychology: Research Methods Flashcards

    This is the type of research that describes the strength and direction of relationships between variables. The ___________ hypothesis states that there is no change, no difference, or no new finding. Study with Quizlet and memorize flashcards containing terms like experimental, Negative or inverse, explanatory variable and more.

  7. AP Psychology: Understanding Research Methods

    Here's a comprehensive guide to the key research methods studied in AP Psychology: 1. Experimental Research: - Objective: Establish cause-and-effect relationships between variables. - Design: Random assignment of participants to conditions, manipulation of an independent variable, and measurement of dependent variables. 2.

  8. APA Handbook of Research Methods in Psychology

    Volume 1: Foundations, Planning, Measures, and PsychometricsVolume 2: Research Designs: Quantitative, Qualitative, Neuropsychological, and BiologicalVolume 3: Data Analysis and Research Publication, Second Edition. With significant new and updated content across dozens of chapters, this second edition presents the most exhaustive treatment ...

  9. Research Methods

    Observational Study: This is when researchers observe and record behavior without manipulating or controlling the situation.. Survey Method: A technique where questions are asked to subjects who report their own answers. It's like taking a poll to gather information about people's opinions or behaviors. Case Study: An in-depth study of one person, group, or event.

  10. APA Handbook of Research Methods in Psychology

    With significant new and updated content across dozens of chapters, the second edition of the APA Handbook of Research Methods in Psychology presents the most exhaustive treatment available of the techniques psychologists and others have developed to help them pursue a shared understanding of why humans think, feel, and behave the way they do. Across three volumes, the chapters in this ...

  11. Research Methods for AP® Psychology

    Use these cards to study different research methods of psychology, including experiments and correlational research. The AP Psych exam, along with most introductory undergrad psych courses, devote 8-10% of their multiple choice questions to the content in this deck.

  12. Research in Psychology: Methods You Should Know

    Research in Psychology: The Basics. The first step in your review should include a basic introduction to psychology research methods. Psychology research can have a variety of goals. What researchers learn can be used to describe, explain, predict, or change human behavior. Psychologists use the scientific method to conduct studies and research ...

  13. Research Methods In Psychology [AP Psychology Review Unit 1 ...

    More From Mr. Sinn! Ultimate Review Packets: AP Psychology: https://bit.ly/3vs9s43; AP Human Geography: https://bit.ly/3JNaRqM. Each of these packets comes with un...

  14. AP Psychology (Research Methods) Flashcards

    Basic research: one of the two main types of research; pure research that aims to confirm an existing theory or to learn more about a concept or phenomenon. Scientific method: a general approach to gathering information and answering questions so that errors and biases are minimized. Applied research: ...

  15. Research Methods

    There are basically two types of research: applied and basic. Applied research is when the scientist has clear and practical reasons and applications for her study; if a psychologist were trying to come up with a new behavior therapy to stop heroin use, that would be applied research. Basic research explores questions that are ...

  16. Research Methods Of Psychology

    Objectives: Describe hindsight bias and explain how it often leads us to perceive psychological research as common sense. Discuss how overconfidence contaminates our everyday judgments. Compare and contrast case studies, surveys, naturalistic observation, and the experimental method ...

  17. Selecting a Research Method

    Part of a course outline that includes 1.5 Statistical Analysis in Psychology and 1.6 Ethical Guidelines in Psychology, followed by Unit 2 (Biological Basis of Behavior), Unit 3 (Sensation & Perception), Unit 4 (Learning), Unit 5 (Cognitive Psychology), Unit 6 (Developmental Psychology), Unit 7 (Motivation, Emotion, & Personality), and Unit 8 (Clinical Psychology).

  18. AP Psychology

    Flashcard definitions include debriefing (giving participants in a research study a complete explanation of the study after the study is completed) and standardization (defining meaningful scores by comparison with the performance of a pretested standardization group). AP Psychology - Research Methods; for more information visit: www.APStudyGuides.weebly.com.

  19. AP Psychology

    Key Takeaways: Research Methods. The study of psychology relies on a diverse array of qualitative and quantitative research methods, including observations, case studies, surveys, and controlled experiments. Psychological research is carefully designed so that researchers can be confident about using results to draw conclusions about real-life ...

  20. Guide to the AP Psychology Exam

    The AP Psych FRQ section consists of two questions. Question 1 covers Concept Application, assessing a student's ability to explain and apply theories and perspectives in authentic contexts. Question 2 covers Research Design, assessing a student's ability to analyze psychological research studies that include quantitative data.

  21. Research Methods in Psychology: Study Guide

    Continue your study of Research Methods in Psychology with these useful links, including a Research Methods in Psychology quiz and review questions. From a general summary to chapter summaries to explanations of famous quotes, the SparkNotes Research Methods in Psychology Study Guide has everything you need to ace quizzes, tests, and essays.

  22. AP Psychology Research Methods Flashcards

    Quizlet flashcards covering the Model of Scientific Inquiry (1. asking questions, 2. identifying the important factors, 3. formulating the hypothesis and null, 4. collecting relevant information, 5. testing the data, 6. working with the hypothesis, 7. working with the theory), clinical research or biographical study, and more.

  23. 11 Tough Vocab Terms for AP® Psychology Research Methods

    The research methods unit is one of the most term-heavy topics within AP® Psychology. Being able to differentiate between the terms will benefit you on your exam and reduce confusion once you begin running experiments. 1. Operational Definition ...
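
To make the "strength and direction of relationships between variables" idea from the flashcard set in item 6 concrete, here is a minimal Python sketch of the standard Pearson correlation coefficient (r). The variable names and numbers are hypothetical, invented only for illustration.

    from math import sqrt

    def pearson_r(xs, ys):
        # r = sum of cross-products of deviations from the means, divided by
        # the square root of the product of the sums of squared deviations
        # (the 1/n factors cancel, so this equals the covariance divided by
        # the product of the standard deviations).
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        sxx = sum((x - mean_x) ** 2 for x in xs)
        syy = sum((y - mean_y) ** 2 for y in ys)
        return sxy / sqrt(sxx * syy)

    # Hypothetical data: hours studied vs. practice-test score.
    hours = [1, 2, 3, 4, 5, 6]
    scores = [48, 55, 61, 70, 74, 80]

    print(f"r = {pearson_r(hours, scores):.2f}")  # positive and close to +1

The sign of r gives the direction of the relationship and its distance from 0 gives the strength, which are exactly the two properties the flashcard prompt describes.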
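
Likewise, the "random assignment of participants to conditions" step described in item 7 can be sketched in a few lines. The participant IDs and group sizes below are hypothetical; this illustrates only the assignment step, not a full experimental protocol.

    import random

    # Hypothetical participant pool for an experiment.
    participants = [f"P{i:02d}" for i in range(1, 21)]

    # Random assignment: shuffle the pool, then split it evenly so every
    # participant has an equal chance of landing in either condition.
    random.shuffle(participants)
    half = len(participants) // 2
    experimental_group = participants[:half]  # receives the manipulated independent variable
    control_group = participants[half:]       # does not; the dependent variable is measured in both

    print("Experimental:", experimental_group)
    print("Control:     ", control_group)

Because assignment is random, pre-existing differences among participants tend to spread evenly across the conditions, which is what allows the design to support cause-and-effect conclusions.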