Dissertation surveys: Questions, examples, and best practices

Collect data for your dissertation with little effort and great results.

Dissertation surveys are one of the most powerful tools for gathering valuable insights and data for the culmination of your research. However, creating one can also be one of the most stressful and time-consuming tasks you’ll face. You want useful data from a representative sample that you can analyze and present as part of your dissertation. At SurveyPlanet, we’re committed to making it as easy and stress-free as possible to get the most out of your study.

With an intuitive and user-friendly design, our templates and premade questions can be your allies while creating a survey for your dissertation. Explore all the options we offer by simply signing up for an account—and leave the stress behind.

How to write dissertation survey questions

The first thing to do is to figure out which group of people is relevant for your study. When you know that, you’ll also be able to adjust the survey and write questions that will get the best results.

The next step is to write down the goal of your research and define it properly. Online surveys are one of the best and most inexpensive ways to reach respondents and achieve your goal.

Before writing any questions, think about how you’ll analyze the results. You don’t want to write and distribute a survey without keeping how to report your findings in mind. When your thesis questionnaire is out in the real world, it’s too late to conclude that the data you’re collecting might not be any good for assessment. Because of that, you need to create questions with analysis in mind.

You may find our five survey analysis tips for better insights helpful. We recommend reading them before analyzing your results.

Once you understand the parameters of your representative sample, goals, and analysis methodology, then it’s time to think about distribution. Survey distribution may feel like a headache, but you’ll find that many people will gladly participate.

Find communities where your targeted group hangs out and share the link to your survey with them. If you’re not sure how large your research sample should be, gauge it easily with the survey sample size calculator.
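If you’d like to see the arithmetic such a calculator performs, here is a minimal sketch (my own illustration, not SurveyPlanet’s implementation) using the standard Cochran formula with a finite-population correction; the z-score of 1.96 assumes a 95% confidence level.

```python
import math

def sample_size(population: int, margin_of_error: float = 0.05,
                z: float = 1.96, p: float = 0.5) -> int:
    """Estimate the required sample size for a survey.

    Uses Cochran's formula n0 = z^2 * p * (1 - p) / e^2, then applies a
    finite-population correction. p = 0.5 is the most conservative choice.
    """
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    n = n0 / (1 + (n0 - 1) / population)  # finite-population correction
    return math.ceil(n)

# e.g., a population of 2,000 students, 95% confidence, 5% margin of error
print(sample_size(2000))  # roughly 323 respondents
```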

Need help with writing survey questions? Read our guide with well-written examples of good survey questions.

Dissertation survey examples

Whatever field you’re studying, we’re sure the following questions will prove useful when crafting your own.

At the beginning of every questionnaire, inform respondents of your topic and provide a consent form. After that, start with questions like:

  • Please select your gender:
  • What is the highest educational level you’ve completed?
      • High school
      • Bachelor’s degree
      • Master’s degree
  • On a scale of 1-7, how satisfied are you with your current job?
  • Please rate the following statements (Strongly disagree, Neither agree nor disagree, Strongly agree):
      • I always wait for people to text me first.
      • My friends always complain that I never invite them anywhere.
      • I prefer spending time alone.
  • Rank the following personality traits by how important they are when choosing a partner, from 1 (most important) to 7 (least important):
      • Flexibility
      • Independence
  • How openly do you share feelings with your partner? (Almost never to Almost always)
  • In the last two weeks, how often did you experience headaches?

Dissertation survey best practices

There are a lot of DOs and DON’Ts you should keep in mind when conducting any survey, especially for your dissertation. To get valuable data from your targeted sample, follow these best practices:

Use a consent form

A consent form is a must when distributing a research questionnaire. Respondents need to know how you’ll use their answers and whether the survey is anonymous.

Avoid leading and double-barreled questions

Leading and double-barreled questions will produce inconclusive results—and you don’t want that. A question such as: “Do you like to watch TV and play video games?” is double-barreled because it has two variables.

On the other hand, leading questions such as “On a scale from 1 to 10, how would you rate the amazing experience with our customer support?” influence respondents to answer in a certain way, which produces biased results.

Use easy and straightforward language and questions

Don’t use terms and professional jargon that respondents won’t understand. Take into consideration their educational level and demographic traits and use easy-to-understand language when writing questions.

Mix close-ended and open-ended questions

Too many open-ended questions will tire respondents, and their responses are harder to analyze. For the best results, rely mostly on close-ended questions and include only a few open-ended ones.

Strategically use different types of responses

Likert scale, multiple-choice, and ranking are all response types you can use to collect data, but some suit certain questions better than others. Match each question with the response type that fits it best.

Ensure that data privacy is a priority

Make sure to use an online survey tool that has SSL encryption and secure data processing. You don’t want to risk all your hard work going to waste because of poorly managed data security. Ensure that you only collect data that’s relevant to your dissertation survey and leave out any questions (such as name) that can identify the respondents.

Create dissertation questionnaires with SurveyPlanet

Overall, surveys are a great way to collect data for your research study. You have all the tools required for creating a dissertation survey with SurveyPlanet—you only need to sign up. With powerful features like question branching, custom formatting, multiple languages, image choice questions, and easy export, you will find everything needed to create, distribute, and analyze a dissertation survey.

Happy data gathering!

Harvard University Program on Survey Research

Questionnaire Design Tip Sheet

This PSR Tip Sheet provides some basic tips about how to write good survey questions and design a good survey questionnaire.


Grad Coach

Dissertation & Thesis Survey Design 101

5 Common Mistakes To Avoid (+ Examples)

By: David Phair (PhD) & Kerryn Warren (PhD) | April 2022

Surveys are a powerful way to collect data for your dissertation, thesis or research project. Done right, a good survey allows you to collect large swathes of useful data with (relatively) little effort. However, if not designed well, you can run into serious issues.

Over the years, we’ve encountered numerous common mistakes students make when it comes to survey design. In this post, we’ll unpack five of these costly mistakes.

Overview: 5 Survey Design Mistakes

  • Having poor overall survey structure and flow
  • Using poorly constructed questions and/or statements
  • Implementing inappropriate response types
  • Using unreliable and/or invalid scales  and measures
  • Designing without consideration for analysis techniques

Mistake #1: Having poor structure and flow

One of the most common issues we see is poor overall survey structure and flow. If a survey is designed badly, it will discourage participants from completing it. As a result, few participants will take the time to respond to the survey, which can lead to a small sample size and poor or even unusable results. Let’s look at a few best practices to ensure good overall structure and flow.

1. Make sure your survey is aligned with your study’s “golden thread”.

The first step might seem obvious, but it’s important to develop survey questions that are tightly aligned with your research question(s), aims and objectives – in other words, “your golden thread”. Your survey serves to generate the data that will answer these key ideas in your thesis; if it doesn’t do that, you’ve got a serious problem. To put it simply, it’s critically important to design your survey questions with the golden thread of your study front of mind at all times.

2. Order your questions in an intuitive, logical way.

The types of questions you ask and when you ask them are vital aspects when designing an effective survey. To avoid losing respondents, you need to order your questions clearly and logically.

In general, it’s a good idea to ask exclusion questions upfront. For example, if your research is focused on an aspect of women’s lives, your first question should be one that determines the gender of the respondent (and filters out unsuitable respondents). Once that’s out of the way, the exclusion questions can be followed by questions related to the key constructs or ideas and/or the dependent and independent variables in your study.

Lastly, the demographics-related questions are usually positioned at the end of the survey. These are questions related to the characteristics of your respondents (e.g., age, race, occupation). It’s a good idea to position these questions at the end of your survey because respondents can get caught up in these identity-related questions as they move through the rest of your survey. Placing them at the end of your survey helps ensure that the questions related to the core constructs of your study will have the respondents’ full attention.

It’s always a good idea to ask exclusion questions upfront, so that unsuitable respondents are filtered out as early as possible.

3. Design for user experience and ease of use.

This might seem obvious, but it’s essential to carefully consider your respondents’ “journey” when designing your survey. In other words, you need to keep user experience and engagement front of mind when designing your survey.

One way of creating a good user experience is to have a clear introduction or cover page upfront. On this intro page, it’s good to communicate the estimated time required to complete the survey (generally, 15 to 20 minutes is reasonable). Also, make use of headings and short explainers to help respondents understand the context of each question or section in your survey. It’s also helpful to provide a progress indicator that shows respondents how far along they are.

Naturally, readability is important to a successful survey. So, keep the survey content as concise as possible, as people tend to drop out of long surveys. A general rule of thumb is to make use of plain, easy-to-understand language. Related to this, always carefully edit and proofread your survey before launching it. Typos, grammar and formatting issues will heavily detract from the credibility of your work and will likely increase respondent dropout.

In cases where you have no choice but to use a technical term or industry jargon, be sure to explain the meaning (define the term) first. You don’t want respondents to be distracted or confused by the technical aspects of your survey. In addition to this, create a logical flow by grouping related topics together and moving from general to more specific questions.

You should also think about what devices respondents will use to access your survey. Because many people will complete your survey on their phones, making it mobile-friendly means more people will be able to respond, which is hugely beneficial. By hosting your survey on a trusted provider (e.g., SurveyMonkey or Qualtrics), the mobile aspect should be taken care of, but always test your survey on a few devices. Aside from making data collection easier, using a well-established survey platform will also make processing your survey data easier.

4.  Prioritise ethics and data privacy.

The last (and very important) point to consider when designing your survey is the ethical requirements. Your survey design must adhere to all ethics policies and data protection laws of your country. If you (or your respondents) are in Europe for instance, you’ll need to comply with GDPR. It’s also essential to highlight to your respondents that all data collected will be handled and stored securely, to minimise any concerns about the confidentiality and safety of their data.

Since many respondents will be completing your survey on their phones, it's very important to ensure that your survey is mobile-friendly.

Mistake #2: Using poorly constructed questions

Another common survey design issue we encounter is poorly constructed questions and statements. There are a few ways in which questions can be poorly constructed. These usually fall into four broad categories: 

  • Loaded questions
  • Leading questions
  • Double-barreled questions
  • Vague questions 

Let’s look at each of these. 

A loaded question assumes something about the respondent without having any data to support that assumption. For example, the question “Where is your favourite place to eat steak?” assumes that the respondent eats steak. Clearly, this is problematic for respondents who are vegetarian or vegan, or who simply don’t like steak.

A leading question pushes the respondent to answer in a certain way. For example, a question such as, “How would you rate the excellent service at our restaurant?” is trying to influence the way that the respondent thinks about the service at the restaurant. This can be annoying to the respondent (at best) or lead them to respond in a way they wouldn’t have, had the question been more objective.

A double-barreled question is a question that contains two (or more) variables within it. It essentially tries to ask two questions at the same time. An example of this is:

“Do you enjoy eating peanut butter and cheese on bread?”

As you can see, this question makes it unclear whether respondents are being asked if they like eating the two together on bread, or whether they like eating each one on its own. This is problematic, as there are multiple ways to interpret the question, which means that the resultant data will be unusable.

A vague question, as the name suggests, is one where it is unclear what is being asked, or one that is very open-ended. Of course, sometimes you do indeed want open questions, as they can provide richer information from respondents. However, if you ask a vague question, you’ll likely get a vague answer. So, you need to be careful. Consider the following fairly vague question:

“What was your experience at this restaurant?”

A respondent could answer this question by just saying “good” or “bad”—or nothing at all. This isn’t particularly helpful. Alternatively, someone might respond extensively about something unrelated to the question. If you want to ask open-ended questions, interviewing may be a better (or additional) data collection method to consider, so give some thought to what you’re trying to achieve. Only use open-ended questions in a survey if they’re central to your research aims.

To make sure that your questions don’t fall into one of these problematic categories, it’s important to keep your golden thread (i.e., your research aims, objectives and research questions) in mind and consider the type of data you want to generate. Also, it’s always a good idea to make use of a pilot study to test your survey questions and responses to see whether any questions are problematic and whether the data generated is useful.

If you want to ask open-ended questions, you may want to consider complementing your survey with a small round of interviews.

Mistake #3: Using inappropriate response types

When designing your survey, it’s essential to choose the best-suited response type/format for each question. In other words, you need to consider how the respondents will input their responses into your survey. Broadly speaking, there are three response types .

The first response type is categorical.  

These are questions where the respondent will choose one of the pre-determined options that you provide, for example: yes/no, gender, ethnicity, etc.

For categorical responses, there will be a limited number of choices and respondents will only be able to pick one. This is useful for basic demographic data where all potential responses can be easily grouped into categories. 

The second response type is scales . 

Scales offer respondents the opportunity to express their opinion on a spectrum . For example, you could design a 3-point scale with the options of agree, neutral and disagree. Scales are useful when you’re trying to assess the extent to which respondents agree with specific statements or claims. This data can then be statistically analysed in powerful ways. 

Scales can, however, be problematic if they have too many or too few points . For example, if you only have “strongly agree”, “neutral” and “strongly disagree”, your respondent might resort to selecting “neutral” because they don’t feel strongly about the subject. Conversely, if there are too many points on the scale, your respondents might take too much time to complete the survey and become frustrated in the process of agonising over what exactly they feel. 

The third response type is the free form text box (open-ended response). 

We mentioned open-ended questions earlier and looked at some of the ways in which they can be problematic. But, because free-form responses are useful for understanding nuances and finer details, this response type does have its benefits. For example, some respondents might have a problem with how the other questions in your survey are presented or asked, and therefore an open-ended response option gives them an opportunity to respond in a way that reflects their true feelings. 

As you can see, it’s important to carefully consider which response types you use, as each one has its own purpose, pros and cons . Make sure that each response option is appropriate for the type of question and generates data that you will be able to analyse in a meaningful way.

It’s also good to keep in mind that you as the researcher will need to process all the data generated by the survey. Therefore, you need to consider how you will analyse the data from each response type. Use the response type that makes sense for the specific question and keep the analysis aspect in mind when choosing your response types.

It's essential to use the best-suited response type for each question to ensure the data that you collect is  both meaningful and analysable.

Mistake #4: Using poorly designed scales/measures

We’ve spoken about the design of the survey as a whole, but it’s also important to think carefully about the design of individual measures/scales. Theoretical constructs are typically measured using Likert scales. To measure these constructs effectively, you’ll need to ensure that your scales produce valid and reliable data.

Validity refers to whether the scale measures what you’re trying to measure . This might sound like a no-brainer, but oftentimes people can interpret questions or statements in diverse ways. Therefore, it’s important to think of whether the interpretations of the responses to each measure are sound relative to the original construct you are measuring and the existing literature relating to it.

Reliability, on the other hand, is related to whether multiple scales measuring the same construct get the same response (on average, of course). In other words, if you have three scales measuring employee satisfaction, they should correlate, as they all measure the same construct. A good survey should make use of multiple scales to measure any given construct, and these should “move” together – in other words, be “reliable”.
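To make the reliability idea concrete, here is a minimal sketch (my own illustration, not from the original post) of how you might check that several Likert items intended to measure the same construct “move together,” using pandas to compute inter-item correlations and Cronbach’s alpha; the column names and data are hypothetical.

```python
import pandas as pd

# Hypothetical responses: three 5-point Likert items intended to
# measure the same construct (employee satisfaction).
df = pd.DataFrame({
    "satisfaction_1": [4, 5, 2, 3, 5, 1, 4],
    "satisfaction_2": [4, 4, 2, 3, 5, 2, 5],
    "satisfaction_3": [5, 4, 1, 3, 4, 2, 4],
})

# Items measuring the same construct should correlate positively.
print(df.corr())

# Cronbach's alpha: a common summary of internal consistency.
k = df.shape[1]
item_variances = df.var(axis=0, ddof=1).sum()
total_variance = df.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_variances / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")  # values above ~0.7 are usually considered acceptable
```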

If you’re designing a survey, you’ll need to demonstrate the validity and reliability of your measures. This can be done in several ways, using both statistical and non-statistical techniques. We won’t get into detail about those here, but it’s important to remember that validity and reliability are central to making sure that your survey is measuring what it is meant to measure.

Importantly, when thinking about the scales for your survey, you don’t need to reinvent the wheel. There are pre-developed and tested scales available for most areas of research, and it’s preferable to use a “tried and tested” scale, rather than developing one from scratch. If there isn’t already something that fits your research, you can often modify existing scales to suit your specific needs.

To measure your theoretical constructs effectively, you’ll need to ensure (and show) that your scales produce valid and reliable data.

Mistake #5: Not designing with analysis in mind

Naturally, you’ll want to use the data gathered from your survey as effectively as possible. Therefore, it’s always a good idea to start with the end (i.e., the analysis phase) in mind when designing your survey. The analysis methods that you’ll be able to use in your study will be dictated by the design of the survey, as it will produce certain types of data. Therefore, it’s essential that you design your survey in a way that will allow you to undertake the analyses you need to achieve your research aims. 

Importantly, you should have a clear idea of what statistical methods you plan to use before you start designing your survey. Be clear about which specific descriptive and inferential tests you plan to do (and why). Make sure that you understand the assumptions of all the statistical tests you’ll be using and the type of data (i.e., nominal, ordinal, interval, or ratio ) that each test requires. Only once you have that level of clarity can you get started designing your survey. 
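As a small illustration of planning the analysis up front (my own sketch, not part of the original post), the example below pairs two common question formats with tests they typically feed, using scipy; the variables and counts are hypothetical.

```python
from scipy import stats

# Nominal question (e.g., "Did you work from home? yes/no") crossed with
# another nominal variable -> chi-square test of independence on counts.
observed = [[30, 20],   # yes: counts per group
            [15, 35]]   # no:  counts per group
chi2, p, dof, expected = stats.chi2_contingency(observed)
print(f"chi-square p-value: {p:.3f}")

# Interval-like question (e.g., a 0-10 satisfaction rating) compared
# across two groups -> independent-samples t-test.
group_a = [7, 8, 6, 9, 7, 5, 8]
group_b = [5, 6, 4, 6, 5, 7, 4]
t, p = stats.ttest_ind(group_a, group_b)
print(f"t-test p-value: {p:.3f}")
```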

Finally, and as we’ve emphasized before, it’s essential that you keep your study’s golden thread front of mind during the design process. If your analysis methods don’t aid you in answering your research questions, they’ll be largely useless. So, keep the big picture and the end goal front of mind from the outset.

Recap: Survey Design Mistakes

In this post, we’ve discussed some important aspects of survey design and five common mistakes to avoid while designing your own survey. To recap, these include:

  • Having poor overall survey structure and flow
  • Using poorly constructed questions and/or statements
  • Implementing inappropriate response types
  • Using unreliable and/or invalid scales and measures
  • Designing without consideration for analysis techniques

If you have any questions about these survey design mistakes, drop a comment below. Alternatively, if you’re interested in getting 1-on-1 help with your research, check out our dissertation coaching service or book a free initial consultation with a friendly coach.


13.1 Writing effective survey questions and questionnaires

Learning objectives.

Learners will be able to…

  • Describe some of the ways that survey questions might confuse respondents and how to word questions and responses clearly
  • Create mutually exclusive, exhaustive, and balanced response options
  • Define fence-sitting and floating
  • Describe the considerations involved in constructing a well-designed questionnaire
  • Discuss why pilot testing is important

In the previous chapter, we reviewed how researchers collect data using surveys. Guided by their sampling approach and research context, researchers should choose the survey approach that provides the most favorable tradeoffs in strengths and challenges. With this information in hand, researchers need to write their questionnaire and revise it before beginning data collection. Each method of delivery requires a questionnaire, but they vary a bit based on how they will be used by the researcher. Since phone surveys are read aloud, researchers will pay more attention to how the questionnaire sounds than how it looks. Online surveys can use advanced tools to require the completion of certain questions, present interactive questions and answers, and otherwise afford greater flexibility in how questionnaires are designed. As you read this chapter, consider how your method of delivery impacts the type of questionnaire you will design.


Start with operationalization

The first thing you need to do to write effective survey questions is identify what exactly you wish to know. As silly as it sounds to state what seems so completely obvious, we can’t stress enough how easy it is to forget to include important questions when designing a survey. Begin by looking at your research question and refreshing your memory of the operational definitions you developed for those variables from Chapter 11. You should have a pretty firm grasp of your operational definitions before starting the process of questionnaire design. You may have taken those operational definitions from other researchers’ methods, found established scales and indices for your measures, or created your own questions and answer options.

TRACK 1 (IF YOU ARE CREATING A RESEARCH PROPOSAL FOR THIS CLASS)

STOP! Make sure you have a complete operational definition for the dependent and independent variables in your research question. A complete operational definition contains the variable being measured, the measure used, and how the researcher interprets the measure. Let’s make sure you have what you need from Chapter 11 to begin writing your questionnaire.

List all of the dependent and independent variables in your research question.

  • It’s normal to have one dependent or independent variable. It’s also normal to have more than one of either.
  • Make sure that your research question (and this list) contains all of the variables in your hypothesis. Your hypothesis should only include variables from your research question.

For each variable in your list:

  • If you don’t have questions and answers finalized yet, write a first draft and revise it based on what you read in this section.
  • If you are using a measure from another researcher, you should be able to write out all of the questions and answers associated with that measure. If you only have the name of a scale or a few questions, you need access to the full text and some documentation on how to administer and interpret it before you can finish your questionnaire.
  • For example, an interpretation might be “there are five 7-point Likert scale questions…point values are added across all five items for each participant…and scores below 10 indicate the participant has low self-esteem” (see the short scoring sketch after this list)
  • Don’t introduce other variables into the mix here. All we are concerned with is how you will measure each variable by itself. The connection between variables is done using statistical tests, not operational definitions.
  • Detail any validity or reliability issues uncovered by previous researchers using the same measures. If you have concerns about validity and reliability, note them, as well.
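To show what an interpretation like the one above can look like in practice, here is a minimal scoring sketch under the stated assumptions (five 7-point items summed per participant, with totals below 10 flagged as low self-esteem); the item names and cutoff are purely illustrative.

```python
# Hypothetical scoring for a five-item, 7-point Likert scale (values 1-7).
responses = {"item_1": 2, "item_2": 1, "item_3": 3, "item_4": 1, "item_5": 2}

total = sum(responses.values())   # add point values across all five items
low_self_esteem = total < 10      # interpretation rule from the operational definition

print(f"total score = {total}, low self-esteem = {low_self_esteem}")
```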

TRACK 2 (IF YOU  AREN’T CREATING A RESEARCH PROPOSAL FOR THIS CLASS)

You are interested in researching the decision-making processes of parents of elementary-aged children during the beginning of the COVID-19 pandemic in 2020. Specifically, you want to know if and how parents’ socioeconomic class impacted their decisions about whether to send their children to school in person or instead opt for online classes or homeschooling.

  • Create a working research question for this topic.
  • What is the dependent variable in this research question? The independent variable? What other variables might you want to control?

For the independent variable, dependent variable, and at least one control variable from your list:

  • What measure (the specific question and answers) might you use for each one? Write out a first draft based on what you read in this section.

If you completed the exercise above and listed out all of the questions and answer choices you will use to measure the variables in your research question, you have already produced a pretty solid first draft of your questionnaire! Congrats! In essence, questionnaires are all of the self-report measures in your operational definitions for the independent, dependent, and control variables in your study arranged into one document and administered to participants. There are a few questions on a questionnaire (like name or ID#) that are not associated with the measurement of variables. These are the exception, and it’s useful to think of a questionnaire as a list of measures for variables. Of course, researchers often use more than one measure of a variable (i.e., triangulation ) so they can more confidently assert that their findings are true. A questionnaire should contain all of the measures researchers plan to collect about their variables by asking participants to self-report.

Sticking close to your operational definitions is important because it helps you avoid an everything-but-the-kitchen-sink approach that includes every possible question that occurs to you. Doing so puts an unnecessary burden on your survey respondents. Remember that you have asked your participants to give you their time and attention and to take care in responding to your questions; show them your respect by only asking questions that you actually plan to use in your analysis. For each question in your questionnaire, ask yourself how this question measures a variable in your study. An operational definition should contain the questions, response options, and how the researcher will draw conclusions about the variable based on participants’ responses.


Writing questions

So, almost all of the questions on a questionnaire are measuring some variable. For many variables, researchers will create their own questions rather than using one from another researcher. This section will provide some tips on how to create good questions to accurately measure variables in your study. First, questions should be as clear and to the point as possible. This is not the time to show off your creative writing skills; a survey is a technical instrument and should be written in a way that is as direct and concise as possible. As I’ve mentioned earlier, your survey respondents have agreed to give their time and attention to your survey. The best way to show your appreciation for their time is to not waste it. Ensuring that your questions are clear and concise will go a long way toward showing your respondents the gratitude they deserve. Pilot testing the questionnaire with friends or colleagues can help identify these issues. This process is commonly called pretesting, but to avoid any confusion with pretesting in experimental design, we refer to it as pilot testing.

Related to the point about not wasting respondents’ time, make sure that every question you pose will be relevant to every person you ask to complete it. This means two things: first, that respondents have knowledge about whatever topic you are asking them about, and second, that respondents have experienced the events, behaviors, or feelings you are asking them to report. If you are asking participants for second-hand knowledge—asking clinicians about clients’ feelings, asking teachers about students’ feelings, and so forth—you may want to clarify that the variable you are asking about is the key informant’s perception of what is happening in the target population. A well-planned sampling approach ensures that participants are the most knowledgeable population to complete your survey.

If you decide that you do wish to include questions about matters with which only a portion of respondents will have had experience, make sure you know why you are doing so. For example, if you are asking about MSW student study patterns and you decide to include a question on studying for the social work licensing exam, you may only have a small subset of participants who have begun studying for the graduate exam or who took the bachelor’s-level exam. If you decide to include this question that speaks to a minority of participants’ experiences, think about why you are including it. Are you interested in how studying for class and studying for licensure differ? Are you trying to triangulate study skills measures? Researchers should carefully consider whether questions relevant to only a subset of participants are likely to produce enough valid responses for quantitative analysis.

Many times, questions that are relevant to a subsample of participants are conditional on an answer to a previous question. A participant might select that they rent their home, and as a result, you might ask whether they carry renter’s insurance. That question is not relevant to homeowners, so it would be wise not to ask them to respond to it. In that case, the question of whether someone rents or owns their home is a filter question , designed to identify some subset of survey respondents who are asked additional questions that are not relevant to the entire sample. Figure 13.1 presents an example of how to accomplish this on a paper survey by adding instructions to the participant that indicate what question to proceed to next based on their response to the first one. Using online survey tools, researchers can use filter questions to only present relevant questions to participants.

[Figure 13.1: Example of a filter question, where a “yes” answer directs the respondent to additional follow-up questions.]
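Online survey tools implement this skip logic for you, but a minimal sketch of the idea (my own illustration, not from the chapter or Figure 13.1) looks like this; the questions are hypothetical.

```python
def administer_survey() -> dict:
    """Ask a filter question and only present the follow-up when relevant."""
    answers = {}
    answers["rents_home"] = input("Do you rent your home? (yes/no) ").strip().lower() == "yes"

    # Follow-up is only relevant to renters; homeowners skip it.
    if answers["rents_home"]:
        answers["renters_insurance"] = (
            input("Do you carry renter's insurance? (yes/no) ").strip().lower() == "yes"
        )
    return answers
```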

To minimize confusion, researchers should eliminate questions that ask about things participants don’t know. Assuming the question is relevant to the participant, other sources of confusion come from how the question is worded. The use of negative wording can be a source of potential confusion. Taking the question from Figure 13.1 about drinking as our example, what if we had instead asked, “Did you not abstain from drinking during your first semester of college?” This is a double negative, and it’s not clear how to answer the question accurately. It is a good idea to avoid negative phrasing when possible. For example, “Did you not drink alcohol during your first semester of college?” is less clear than “Did you drink alcohol during your first semester of college?”

Another issue arises when you use jargon, or technical language, that people do not commonly know. For example, if you asked adolescents how they experience imaginary audience, they would find it difficult to link those words to the concepts from David Elkind’s theory. The words you use in your questions must be understandable to your participants. If you find yourself using jargon or slang, break it down into terms that are more universal and easier to understand.

Asking multiple questions as though they are a single question can also confuse survey respondents. There’s a specific term for this sort of question; it is called a double-barreled question . Figure 13.2 shows a double-barreled question. Do you see what makes the question double-barreled? How would someone respond if they felt their college classes were more demanding but also more boring than their high school classes? Or less demanding but more interesting? Because the question combines “demanding” and “interesting,” there is no way to respond yes to one criterion but no to the other.

[Figure 13.2: A double-barreled question that asks more than one thing at a time.]

Another thing to avoid when constructing survey questions is the problem of social desirability . We all want to look good, right? And we all probably know the politically correct response to a variety of questions whether we agree with the politically correct response or not. In survey research, social desirability refers to the idea that respondents will try to answer questions in a way that will present them in a favorable light. (You may recall we covered social desirability bias in Chapter 11. )

Perhaps we decide that, to understand the transition to college, our research project needs to know whether respondents ever cheated on an exam in high school or college. We all know that cheating on exams is generally frowned upon (at least I hope we all know this). So, it may be difficult to get people to admit to cheating on a survey. But if you can guarantee respondents’ confidentiality, or even better, their anonymity, chances are much better that they will be honest about having engaged in this socially undesirable behavior. Another way to avoid problems of social desirability is to try to phrase difficult questions in the most benign way possible. Earl Babbie (2010) [1] offers a useful suggestion for helping you do this—simply imagine how you would feel responding to your survey questions. If you would be uncomfortable, chances are others would as well.

Try to step outside your role as researcher for a second, and imagine you were one of your participants. Evaluate the following:

  • Is the question too general? Sometimes, questions that are too general may not accurately convey respondents’ perceptions. If you asked someone how well they liked a certain book and provided a response scale ranging from “not at all” to “extremely well,” what does it mean if that person selected “extremely well”? Instead, ask more specific behavioral questions, such as “Will you recommend this book to others?” or “Do you plan to read other books by the same author?”
  • Is the question too detailed? Avoid unnecessarily detailed questions that serve no specific research purpose. For instance, do you need the age of each child in a household or is just the number of children in the household acceptable? However, if unsure, it is better to err on the side of details than generality.
  • Is the question presumptuous? Does your question make assumptions? For instance, if you ask, “what do you think the benefits of a tax cut would be?” you are presuming that the participant sees the tax cut as beneficial. But many people may not view tax cuts as beneficial. Some might see tax cuts as a precursor to less funding for public schools and fewer public services such as police, ambulance, and fire department. Avoid questions with built-in presumptions.
  • Does the question ask the participant to imagine something? A popular question on many television game shows is “if you won a million dollars on this show, how would you plan to spend it?” Most participants have never been faced with this large an amount of money and have never thought about this scenario. In fact, most don’t even know that, after taxes, the value of the million dollars will be greatly reduced. In addition, some game shows spread the amount over a 20-year period. Without understanding this “imaginary” situation, participants may not have the background information necessary to provide a meaningful response.

Try to step outside your role as researcher for a second, and imagine you were one of your participants. Use the prompts above to evaluate your draft questions from the previous exercise.

Cultural considerations

When researchers write items for questionnaires, they must be conscientious to avoid culturally biased questions that may be inappropriate or difficult for certain populations.

Demographic questions deserve particular care. Being asked about characteristics such as gender, race or ethnicity, religion, or disability can make some respondents uncomfortable, especially when the options offered don’t reflect how they identify. Where these questions are genuinely needed for your analysis, offer response options that are inclusive and exhaustive, include choices such as “prefer to self-describe” or “prefer not to answer,” and briefly explain why the information is being collected and how it will be used.

You should also avoid using terms or phrases that may be regionally or culturally specific (unless you are absolutely certain all your respondents come from the region or culture whose terms you are using). When I first moved to southwest Virginia, I didn’t know what a holler was. Where I grew up in New Jersey, to holler means to yell. Even then, in New Jersey, we shouted and screamed, but we didn’t holler much. In southwest Virginia, my home at the time, a holler also means a small valley in between the mountains. If I used holler in that way on my survey, people who live near me may understand, but almost everyone else would be totally confused.

Testing questionnaires before using them

Finally, it is important to get feedback on your survey questions from as many people as possible, especially people who are like those in your sample. Now is not the time to be shy. Ask your friends for help, ask your mentors for feedback, ask your family to take a look at your survey as well. The more feedback you can get on your survey questions, the better the chances that you will come up with a set of questions that are understandable to a wide variety of people and, most importantly, to those in your sample.

In sum, in order to pose effective survey questions, researchers should do the following:

  • Identify how each question measures an independent, dependent, or control variable in their study.
  • Keep questions clear and succinct.
  • Make sure respondents have relevant lived experience to provide informed answers to your questions.
  • Use filter questions to avoid getting answers from uninformed participants.
  • Avoid questions that are likely to confuse respondents—including those that use double negatives, use culturally specific terms or jargon, and pose more than one question at a time.
  • Imagine how respondents would feel responding to questions.
  • Get feedback, especially from people who resemble those in the researcher’s sample.

Table 13.1 offers one model for writing effective questionnaire items.

Let’s complete a first draft of your questions.

  • In the first exercise, you wrote out the questions and answers for each measure of your independent and dependent variables. Evaluate each question using the criteria listed above on effective survey questions.
  • Type out questions for your control variables and evaluate them, as well. Consider what response options you want to offer participants.

Now, let’s revise any questions that do not meet your standards!

  •  Use the BRUSO model in Table 13.1 for an illustration of how to address deficits in question wording. Keep in mind that you are writing a first draft in this exercise, and it will take a few drafts and revisions before your questions are ready to distribute to participants.
  • In the first exercise, you wrote out the question and answers for your independent, dependent, and at least one control variable. Evaluate each question using the criteria listed above on effective survey questions.
  •  Use the BRUSO model in Table 13.1 for an illustration of how to address deficits in question wording. In real research, it will take a few drafts and revisions before your questions are ready to distribute to participants.


Writing response options

While posing clear and understandable questions in your survey is certainly important, so too is providing respondents with unambiguous response options. Response options are the answers that you provide to the people completing your questionnaire. Generally, respondents will be asked to choose a single (or best) response to each question you pose. We call questions in which the researcher provides all of the response options closed-ended questions . Keep in mind, closed-ended questions can also instruct respondents to choose multiple response options, rank response options against one another, or assign a percentage to each response option. But be cautious when experimenting with different response options! Accepting multiple responses to a single question may add complexity when it comes to quantitatively analyzing and interpreting your data.

Surveys need not be limited to closed-ended questions. Sometimes survey researchers include open-ended questions in their survey instruments as a way to gather additional details from respondents. An open-ended question does not include response options; instead, respondents are asked to reply to the question in their own way, using their own words. These questions are generally used to find out more about a survey participant’s experiences or feelings about whatever they are being asked to report in the survey. If, for example, a survey includes closed-ended questions asking respondents to report on their involvement in extracurricular activities during college, an open-ended question could ask respondents why they participated in those activities or what they gained from their participation. While responses to such questions may also be captured using a closed-ended format, allowing participants to share some of their responses in their own words can make the experience of completing the survey more satisfying to respondents and can also reveal new motivations or explanations that had not occurred to the researcher. This is particularly important for mixed-methods research. It is possible to analyze open-ended response options quantitatively using content analysis (i.e., counting how often a theme is represented in a transcript looking for statistical patterns). However, for most researchers, qualitative data analysis will be needed to analyze open-ended questions, and researchers need to think through how they will analyze any open-ended questions as part of their data analysis plan. Open-ended questions cannot be operationally defined because you don’t know what responses you will get. We will address qualitative data analysis in greater detail in Chapter 19.
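As a rough illustration of the quantitative side of content analysis mentioned above (my own sketch, not from the chapter), you could count how often hypothetical theme keywords appear across open-ended responses:

```python
from collections import Counter

# Hypothetical open-ended responses about extracurricular involvement.
responses = [
    "I joined the club to make friends and build leadership skills.",
    "Mostly for my resume, but I also made good friends.",
    "Leadership experience and networking.",
]

# Hypothetical coding scheme: theme -> keywords that indicate it.
themes = {
    "friendship": ["friend", "friends"],
    "leadership": ["leadership", "leader"],
    "career": ["resume", "networking", "career"],
}

counts = Counter()
for text in responses:
    lowered = text.lower()
    for theme, keywords in themes.items():
        if any(word in lowered for word in keywords):
            counts[theme] += 1  # count each theme at most once per response

print(counts)  # e.g., Counter({'friendship': 2, 'leadership': 2, 'career': 2})
```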

To write effective response options for closed-ended questions, there are a couple of guidelines worth following. First, be sure that your response options are mutually exclusive. Look back at Figure 13.1, which contains questions about how often and how many drinks respondents consumed. Do you notice that there are no overlapping categories in the response options for these questions? This is another one of those points about question construction that seems fairly obvious but that can be easily overlooked. Response options should also be exhaustive. In other words, every possible response should be covered in the set of response options that you provide. For example, note that in question 10a in Figure 13.1, we have covered all possibilities—those who drank, say, an average of once per month can choose the first response option (“less than one time per week”) while those who drank multiple times a day each day of the week can choose the last response option (“7+”). All the possibilities in between these two extremes are covered by the middle three response options, and every respondent fits into one of the response options we provided.

Earlier in this section, we discussed double-barreled questions. Response options can also be double barreled, and this should be avoided. Figure 13.3 is an example of a question that uses double-barreled response options. Other tips about questions are also relevant to response options, including that participants should be knowledgeable enough to select or decline a response option as well as avoiding jargon and cultural idioms.

[Figure 13.3: Double-barreled response options that provide more than one answer per option.]

Even if you phrase questions and response options clearly, participants are influenced by how many response options are presented on the questionnaire. For Likert scales, five or seven response options generally allow about as much precision as respondents are capable of. However, numerical scales with more options can sometimes be appropriate. For dimensions such as attractiveness, pain, and likelihood, a 0-to-10 scale will be familiar to many respondents and easy for them to use. Regardless of the number of response options, the most extreme ones should generally be “balanced” around a neutral or modal midpoint. An example of an unbalanced rating scale measuring perceived likelihood might look like this:

Unlikely  |  Somewhat Likely  |  Likely  |  Very Likely  |  Extremely Likely

Because we have four rankings of likely and only one ranking of unlikely, the scale is unbalanced and most responses will be biased toward “likely” rather than “unlikely.” A balanced version might look like this:

Extremely Unlikely  |  Somewhat Unlikely  |  As Likely as Not  |  Somewhat Likely  | Extremely Likely

In this example, the midpoint is halfway between likely and unlikely. Of course, a middle or neutral response option does not have to be included. Researchers sometimes choose to leave it out because they want to encourage respondents to think more deeply about their response and not simply choose the middle option by default. Fence-sitters are respondents who choose neutral response options even if they have an opinion. Some people will be drawn to respond “no opinion” even if they have an opinion, particularly if their true opinion is not a socially desirable one. Floaters, on the other hand, are those who choose a substantive answer to a question when, really, they don’t understand the question or don’t have an opinion.

As you can see, floating is the flip side of fence-sitting. Thus, the solution to one problem is often the cause of the other. How you decide which approach to take depends on the goals of your research. Sometimes researchers specifically want to learn something about people who claim to have no opinion. In this case, allowing for fence-sitting would be necessary. Other times researchers feel confident their respondents will all be familiar with every topic in their survey. In this case, perhaps it is okay to force respondents to choose one side or another (e.g., agree or disagree) without a middle option (e.g., neither agree nor disagree), or to not include an option like “don’t know enough to say” or “not applicable.” There is no always-correct solution to either problem, but in general, including a middle option in a response set provides a more exhaustive set of response options than one that excludes it.


The number of response options on a typical rating scale is usually five or seven, though it can range from three to 11. Five-point scales are best for unipolar scales where only one construct is tested, such as frequency (Never, Rarely, Sometimes, Often, Always). Seven-point scales are best for bipolar scales where there is a dichotomous spectrum, such as liking (Like very much, Like somewhat, Like slightly, Neither like nor dislike, Dislike slightly, Dislike somewhat, Dislike very much). For bipolar questions, it is useful to offer an earlier question that branches respondents into one side of the scale; if asking about liking ice cream, first ask “Do you generally like or dislike ice cream?” Once the respondent chooses like or dislike, refine it by offering them the relevant choices from the seven-point scale. Branching improves both reliability and validity (Krosnick & Berent, 1993). [2] Although you often see scales with numerical labels, it is best to present only verbal labels to the respondents and convert them to numerical values in the analyses. Avoid partial labels and lengthy or overly specific labels. In some cases, the verbal labels can be supplemented with (or even replaced by) meaningful graphics, such as a visual-analog scale, on which participants make a mark somewhere along a horizontal line to indicate the magnitude of their response.

Finalizing Response Options

The most important check before you finalize your response options is to align them with your operational definitions. As we’ve discussed before, your operational definitions include your measures (questions and response options) as well as how to interpret those measures in terms of the variable being measured. In particular, you should be able to interpret all response options to a question based on your operational definition of the variable it measures. If you wanted to measure the variable “social class,” you might ask one question about a participant’s annual income and another about family size. Your operational definition would need to provide clear instructions on how to interpret response options. Your operational definition is basically like this social class calculator from Pew Research, though they include a few more questions in their definition.

To drill down a bit more, as Pew specifies in the section titled “how the income calculator works,” the interval/ratio data respondents enter are interpreted using a formula that combines a participant’s four responses to the questions posed by Pew and categorizes their household into one of three categories—upper, middle, or lower class. So, the operational definition includes the four questions comprising the measure and the formula or interpretation that converts responses into the three final categories we are familiar with: lower, middle, and upper class.

It’s perfectly normal for operational definitions to change levels of measurement, and it’s also perfectly normal for the level of measurement to stay the same. The important thing is that each response option a participant can provide is accounted for by the operational definition. Throw any combination of family size, location, or income at the Pew calculator, and it will define you into one of those three social class categories.
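To make the idea of “formula plus interpretation” concrete, here is a minimal sketch in the spirit of such a calculator. It is not Pew’s actual formula—the income adjustment and cutoffs below are invented for illustration—but it shows how answers to two questions can be converted into a single categorical variable.

```python
def social_class(household_income: float, family_size: int) -> str:
    """Toy operationalization of social class (illustrative cutoffs, not Pew's).

    Adjusts income for family size, then maps the adjusted figure to a category.
    """
    # Scale income relative to a three-person household (rough equivalence adjustment).
    adjusted = household_income / (family_size / 3) ** 0.5

    if adjusted < 52_000:        # hypothetical lower-class cutoff
        return "lower"
    elif adjusted <= 156_000:    # hypothetical middle-class band
        return "middle"
    else:
        return "upper"

print(social_class(household_income=48_000, family_size=4))   # lower
print(social_class(household_income=85_000, family_size=3))   # middle
```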

Unlike Pew’s definition, the operational definitions in your study may not need their own webpage to define and describe. For many questions and answers, interpreting response options is easy. If you were measuring “income” instead of “social class,” you could simply operationalize the term by asking people to list their total household income before taxes are taken out. Higher values indicate higher income, and lower values indicate lower income. Easy. Regardless of whether your operational definitions are simple or more complex, every response option to every question on your survey (with a few exceptions) should be interpretable using an operational definition of a variable. Just like we want to avoid an everything-but-the-kitchen-sink approach to questions on our questionnaire, you want to make sure your final questionnaire only contains response options that you will use in your study.

One note of caution on interpretation (sorry for repeating this). We want to remind you again that an operational definition should not mention more than one variable. In our example above, your operational definition could not say “a family of three making under $50,000 is lower class; therefore, they are more likely to experience food insecurity.” That last clause about food insecurity may well be true, but it’s not a part of the operational definition for social class. Each variable (food insecurity and class) should have its own operational definition. If you are talking about how to interpret the relationship between two variables, you are talking about your data analysis plan . We will discuss how to create your data analysis plan beginning in Chapter 14 . For now, one consideration is that depending on the statistical test you use to test relationships between variables, you may need nominal, ordinal, or interval/ratio data. Your questions and response options should match the level of measurement you need with the requirements of the specific statistical tests in your data analysis plan. Once you finalize your data analysis plan, return to your questionnaire to confirm the level of measurement matches with the statistical test you’ve chosen.

In summary, to write effective response options researchers should do the following:

  • Avoid wording that is likely to confuse respondents—including double negatives, culturally specific terms or jargon, and double-barreled response options.
  • Ensure response options are relevant to participants’ knowledge and experience so they can make an informed and accurate choice.
  • Present mutually exclusive and exhaustive response options.
  • Consider fence-sitters and floaters, and the use of neutral or “not applicable” response options.
  • Define how response options are interpreted as part of an operational definition of a variable.
  • Check that the level of measurement matches the operational definitions and the statistical tests in the data analysis plan (once you develop one).

Look back at the response options you drafted in the previous exercise. Make sure you have a first draft of response options for each closed-ended question on your questionnaire.

  • Using the criteria above, evaluate the wording of the response options for each question on your questionnaire.
  • Revise your questions and response options until you have a complete first draft.
  • Do your first read-through and provide a dummy answer to each question. Make sure you can link each response option and each question to an operational definition.


From this discussion, we hope it is clear why researchers using quantitative methods spell out all of their plans ahead of time. Ultimately, there should be a straight line from operational definition through measures on your questionnaire to the data analysis plan. If your questionnaire includes response options that are not aligned with operational definitions or not included in the data analysis plan, the responses you receive back from participants won’t fit with your conceptualization of the key variables in your study. If you do not fix these errors and proceed with collecting unstructured data, you will lose out on many of the benefits of survey research and face overwhelming challenges in answering your research question.


Designing questionnaires

Based on your work in the previous section, you should have a first draft of the questions and response options for the key variables in your study. Now, you’ll also need to think about how to present your written questions and response options to survey respondents. It’s time to write a final draft of your questionnaire and make it look nice. Designing questionnaires takes some thought. First, consider the route of administration for your survey. What we cover in this section will apply equally to paper and online surveys, but if you are planning to use online survey software, you should watch tutorial videos and explore the features of the survey software you will use.

Informed consent & instructions

Writing effective items is only one part of constructing a survey. For one thing, every survey should have a written or spoken introduction that serves two basic functions (Peterson, 2000). [3] One is to encourage respondents to participate in the survey. In many types of research, such encouragement is not necessary either because participants do not know they are in a study (as in naturalistic observation) or because they are part of a subject pool and have already shown their willingness to participate by signing up and showing up for the study. Survey research usually catches respondents by surprise when they answer their phone, go to their mailbox, or check their e-mail—and the researcher must make a good case for why they should agree to participate. Thus, the introduction should briefly explain the purpose of the survey and its importance, provide information about the sponsor of the survey (university-based surveys tend to generate higher response rates), acknowledge the importance of the respondent’s participation, and describe any incentives for participating.

The second function of the introduction is to establish informed consent. Remember that this involves describing to respondents everything that might affect their decision to participate. This includes the topics covered by the survey, the amount of time it is likely to take, the respondent’s option to withdraw at any time, confidentiality issues, and other ethical considerations we covered in Chapter 6. Written consent forms are not always used in survey research (when the research poses minimal risk, the IRB often accepts completion of the survey instrument as evidence of consent to participate), so it is important that this part of the introduction be well documented and presented clearly and in its entirety to every respondent.

Organizing items to be easy and intuitive to follow

The introduction should be followed by the substantive questionnaire items. But first, it is important to present clear instructions for completing the questionnaire, including examples of how to use any unusual response scales. Remember that the introduction is the point at which respondents are usually most interested and least fatigued, so it is good practice to start with the most important items for purposes of the research and proceed to less important items. Items should also be grouped by topic or by type. For example, items using the same rating scale (e.g., a 5-point agreement scale) should be grouped together if possible to make things faster and easier for respondents. Demographic items are often presented last. This can be because they are easy to answer in the event respondents have become tired or bored, because they are least interesting to participants, or because they can raise concerns for respondents from marginalized groups who may see questions about their identities as a potential red flag. Of course, any survey should end with an expression of appreciation to the respondent.

Questions are often organized thematically. If our survey were measuring social class, perhaps we’d have a few questions asking about employment, others focused on education, and still others on housing and community resources. Those may be the themes around which we organize our questions. Or perhaps it would make more sense to present any questions we had about parents’ income and then present a series of questions about estimated future income. Grouping by theme is one way to be deliberate about how you present your questions. Keep in mind that you are surveying people, and these people will be trying to follow the logic in your questionnaire. Jumping from topic to topic can give people a bit of whiplash and may make participants less likely to complete it.

Using a matrix is a nice way of streamlining response options for similar questions. A matrix is a question type that lists a set of questions for which the answer categories are all the same. If you have a set of questions for which the response options are the same, it may make sense to create a matrix rather than posing each question and its response options individually. Not only will this save you some space in your survey but it will also help respondents progress through your survey more easily. A sample matrix can be seen in Figure 13.4.

Figure 13.4. A sample matrix question using agree/disagree response options to measure opinions about social class.

Once you have grouped similar questions together, you’ll need to think about the order in which to present those question groups. Most survey researchers agree that it is best to begin a survey with questions that will make respondents want to continue (Babbie, 2010; Dillman, 2000; Neuman, 2003). [4] In other words, don’t bore respondents, but don’t scare them away either. There’s some disagreement over where on a survey to place demographic questions, such as those about a person’s age, gender, and race. On the one hand, placing them at the beginning of the questionnaire may lead respondents to think the survey is boring, unimportant, and not something they want to bother completing. On the other hand, if your survey deals with some very sensitive topic, such as child sexual abuse or criminal convictions, you don’t want to scare respondents away or shock them by beginning with your most intrusive questions.

Your participants are human. They will react emotionally to questionnaire items, and they will also try to uncover your research questions and hypotheses. In truth, the order in which you present questions on a survey is best determined by the unique characteristics of your research. When feasible, you should consult with key informants from your target population to determine how best to order your questions. If it is not feasible to do so, think about the unique characteristics of your topic, your questions, and most importantly, your sample. Keeping in mind the characteristics and needs of the people you will ask to complete your survey should help guide you as you determine the most appropriate order in which to present your questions. None of your decisions will be perfect, and all studies have limitations.

Questionnaire length

You’ll also need to consider the time it will take respondents to complete your questionnaire. Surveys vary in length, from just a page or two to a dozen or more pages, which means they also vary in the time it takes to complete them. How long to make your survey depends on several factors. First, what is it that you wish to know? Wanting to understand how grades vary by gender and year in school certainly requires fewer questions than wanting to know how people’s experiences in college are shaped by demographic characteristics, college attended, housing situation, family background, college major, friendship networks, and extracurricular activities. Keep in mind that even if your research question requires a sizable number of questions be included in your questionnaire, do your best to keep the questionnaire as brief as possible. Any hint that you’ve thrown in a bunch of useless questions just for the sake of it will turn off respondents and may make them not want to complete your survey.

Second, and perhaps more important, how long are respondents likely to be willing to spend completing your questionnaire? If you are studying college students, asking them to use their very limited time to complete your survey may mean they won’t want to spend more than a few minutes on it. But if you ask them to complete your survey during down-time between classes and there is little work to be done, students may be willing to give you a bit more of their time. Think about places and times that your sampling frame naturally gathers and whether you would be able to either recruit participants or distribute a survey in that context. Estimate how long your participants would reasonably have to complete a survey presented to them during this time. The more you know about your population (such as what weeks have less work and more free time), the better you can target questionnaire length.

The time that survey researchers ask respondents to spend on questionnaires varies greatly. Some researchers advise that surveys should not take longer than about 15 minutes to complete (as cited in Babbie 2010), [5] whereas others suggest that up to 20 minutes is acceptable (Hopper, 2010). [6] As with question order, there is no clear-cut, always-correct answer about questionnaire length. The unique characteristics of your study and your sample should be considered to determine how long to make your questionnaire. For example, if you planned to distribute your questionnaire to students in between classes, you will need to make sure it is short enough to complete before the next class begins.

When designing a questionnaire, a researcher should consider:

  • Weighing strengths and limitations of the method of delivery, including the advanced tools in online survey software or the simplicity of paper questionnaires.
  • Grouping together items that ask about the same thing.
  • Moving any questions about sensitive items to the end of the questionnaire, so as not to scare respondents off.
  • Moving questions that engage the respondent to the beginning of the questionnaire, so as not to bore them.
  • Keeping the questionnaire to a length of time you can reasonably ask of your participants.
  • Dedicating time to visual design and ensuring the questionnaire looks professional.

Type out a final draft of your questionnaire in a word processor or online survey tool.

  • Evaluate your questionnaire using the guidelines above, revise it, and get it ready to share with other student researchers.
  • Take a look at the question drafts you have completed and decide on an order for your questions. Evaluate your draft questionnaire using the guidelines above, and revise as needed.


Pilot testing and revising questionnaires

A good way to estimate the time it will take respondents to complete your questionnaire (and other potential challenges) is through pilot testing . Pilot testing allows you to get feedback on your questionnaire so you can improve it before you actually administer it. It can be quite expensive and time consuming if you wish to pilot test your questionnaire on a large sample of people who very much resemble the sample to whom you will eventually administer the finalized version of your questionnaire. But you can learn a lot and make great improvements to your questionnaire simply by pilot testing with a small number of people to whom you have easy access (perhaps you have a few friends who owe you a favor). By pilot testing your questionnaire, you can find out how understandable your questions are, get feedback on question wording and order, find out whether any of your questions are boring or offensive, and learn whether there are places where you should have included filter questions. You can also time pilot testers as they take your survey. This will give you a good idea about the estimate to provide respondents when you administer your survey and whether you have some wiggle room to add additional items or need to cut a few items.
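If you record how long each pilot tester takes, a few lines of code (or a spreadsheet) will summarize what estimate to give respondents; the times below are made up for illustration.

```python
# A small sketch for summarizing pilot-test completion times (in minutes),
# recorded by hand or pulled from your survey tool's timestamps.
from statistics import mean, median

pilot_times = [8.5, 11.0, 9.2, 14.5, 10.1, 9.8]  # invented example data

print(f"Average: {mean(pilot_times):.1f} min, median: {median(pilot_times):.1f} min")
print(f"Slowest pilot tester: {max(pilot_times):.1f} min")
# If the slowest tester is well beyond the 15-20 minute guideline discussed above,
# that is a sign you may need to cut items rather than add them.
```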

Perhaps this goes without saying, but your questionnaire should also have an attractive design. A messy presentation style can confuse respondents or, at the very least, annoy them. Be brief, to the point, and as clear as possible. Avoid cramming too much into a single page. Make your font size readable (at least 12 point or larger, depending on the characteristics of your sample), leave a reasonable amount of space between items, and make sure all instructions are exceptionally clear. If you are using an online survey, ensure that participants can complete it via mobile, computer, and tablet devices. Think about books, documents, articles, or web pages that you have read yourself—which were relatively easy to read and easy on the eyes and why? Try to mimic those features in the presentation of your survey questions. While online survey tools automate much of visual design, word processors are designed for writing all kinds of documents and may need more manual adjustment as part of visual design.

Realistically, your questionnaire will continue to evolve as you develop your data analysis plan over the next few chapters. By now, you should have a complete draft of your questionnaire grounded in an underlying logic that ties together each question and response option to a variable in your study. Once your questionnaire is finalized, you will need to submit it for ethical approval from your IRB. If your study requires IRB approval, it may be worthwhile to submit your proposal before your questionnaire is completely done. Revisions to IRB protocols are common and it takes less time to review a few changes to questions and answers than it does to review the entire study, so give them the whole study as soon as you can. Once the IRB approves your questionnaire, you cannot change it without their okay.

Key Takeaways

  • A questionnaire is comprised of self-report measures of variables in a research study.
  • Make sure your survey questions will be relevant to all respondents and that you use filter questions when necessary.
  • Effective survey questions and responses take careful construction by researchers, as participants may be confused or otherwise influenced by how items are phrased.
  • The questionnaire should start with informed consent and instructions, flow logically from one topic to the next, engage but not shock participants, and thank participants at the end.
  • Pilot testing can help identify any issues in a questionnaire before distributing it to participants, including language or length issues.

It’s a myth that researchers work alone! Get together with a few of your fellow students and swap questionnaires for pilot testing.

  • Use the criteria in each section above (questions, response options, questionnaires) and provide your peers with the strengths and weaknesses of their questionnaires.
  • See if you can guess their research question and hypothesis based on the questionnaire alone.

It’s a myth that researchers work alone! Get together with a few of your fellow students and compare draft questionnaires.

  • What are the strengths and limitations of your questionnaire as compared to those of your peers?
  • Is there anything you would like to use from your peers’ questionnaires in your own?
  1. Babbie, E. (2010). The practice of social research (12th ed.). Belmont, CA: Wadsworth.
  2. Krosnick, J. A., & Berent, M. K. (1993). Comparisons of party identification and policy preferences: The impact of survey question format. American Journal of Political Science, 27(3), 941-964.
  3. Peterson, R. A. (2000). Constructing effective questionnaires. Thousand Oaks, CA: Sage.
  4. Babbie, E. (2010). The practice of social research (12th ed.). Belmont, CA: Wadsworth; Dillman, D. A. (2000). Mail and Internet surveys: The tailored design method (2nd ed.). New York, NY: Wiley; Neuman, W. L. (2003). Social research methods: Qualitative and quantitative approaches (5th ed.). Boston, MA: Pearson.
  5. Babbie, E. (2010). The practice of social research (12th ed.). Belmont, CA: Wadsworth.
  6. Hopper, J. (2010). How long should a survey be? Retrieved from http://www.verstaresearch.com/blog/how-long-should-a-survey-be

According to the APA Dictionary of Psychology, an operational definition is "a description of something in terms of the operations (procedures, actions, or processes) by which it could be observed and measured. For example, the operational definition of anxiety could be in terms of a test score, withdrawal from a situation, or activation of the sympathetic nervous system. The process of creating an operational definition is known as operationalization."

Triangulation of data refers to the use of multiple types, measures or sources of data in a research project to increase the confidence that we have in our findings.

Testing out your research materials in advance on people who are not included as participants in your study.

items on a questionnaire designed to identify some subset of survey respondents who are asked additional questions that are not relevant to the entire sample

a question that asks more than one thing at a time, making it difficult to respond accurately

When a participant answers in the way they believe is most socially acceptable, rather than truthfully.

the answers researchers provide to participants to choose from when completing a questionnaire

questions in which the researcher provides all of the response options

Questions for which the researcher does not include response options, allowing for respondents to answer the question in their own words

respondents to a survey who choose neutral response options, even if they have an opinion

respondents to a survey who choose a substantive answer to a question when really, they don’t understand the question or don’t have an opinion

An ordered outline that includes your research question, a description of the data you are going to use to answer it, and the exact analyses, step-by-step, that you plan to run to answer your research question.

A process through which the researcher explains the research process, procedures, risks, and benefits to a potential participant, usually through a written document, which the participant then signs as evidence of their agreement to participate.

a type of survey question that lists a set of questions for which the response options are all the same in a grid layout

Doctoral Research Methods in Social Work Copyright © by Mavs Open Press. All Rights Reserved.


Doing Survey Research | A Step-by-Step Guide & Examples

Published on 6 May 2022 by Shona McCombes. Revised on 10 October 2022.

Survey research means collecting information about a group of people by asking them questions and analysing the results. To conduct an effective survey, follow these six steps:

  • Determine who will participate in the survey
  • Decide the type of survey (mail, online, or in-person)
  • Design the survey questions and layout
  • Distribute the survey
  • Analyse the responses
  • Write up the results

Surveys are a flexible method of data collection that can be used in many different types of research .

Table of contents

  • What are surveys used for?
  • Step 1: Define the population and sample
  • Step 2: Decide on the type of survey
  • Step 3: Design the survey questions
  • Step 4: Distribute the survey and collect responses
  • Step 5: Analyse the survey results
  • Step 6: Write up the survey results
  • Frequently asked questions about surveys

Surveys are used as a method of gathering data in many different fields. They are a good choice when you want to find out about the characteristics, preferences, opinions, or beliefs of a group of people.

Common uses of survey research include:

  • Social research: Investigating the experiences and characteristics of different social groups
  • Market research: Finding out what customers think about products, services, and companies
  • Health research: Collecting data from patients about symptoms and treatments
  • Politics: Measuring public opinion about parties and policies
  • Psychology: Researching personality traits, preferences, and behaviours

Surveys can be used in both cross-sectional studies , where you collect data just once, and longitudinal studies , where you survey the same sample several times over an extended period.


Before you start conducting survey research, you should already have a clear research question that defines what you want to find out. Based on this question, you need to determine exactly who you will target to participate in the survey.

Populations

The target population is the specific group of people that you want to find out about. This group can be very broad or relatively narrow. For example:

  • The population of Brazil
  • University students in the UK
  • Second-generation immigrants in the Netherlands
  • Customers of a specific company aged 18 to 24
  • British transgender women over the age of 50

Your survey should aim to produce results that can be generalised to the whole population. That means you need to carefully define exactly who you want to draw conclusions about.

It’s rarely possible to survey the entire population of your research – it would be very difficult to get a response from every person in Brazil or every university student in the UK. Instead, you will usually survey a sample from the population.

The sample size depends on how big the population is. You can use an online sample calculator to work out how many responses you need.
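If you prefer to see the arithmetic such calculators typically perform, here is a minimal sketch using Cochran’s formula with a finite population correction. The 95% confidence level, 5% margin of error, and p = 0.5 are common defaults assumed for illustration, not values taken from this article.

```python
# Sketch of a standard sample-size calculation (Cochran's formula plus a
# finite population correction). Defaults: z = 1.96 (95% confidence),
# 5% margin of error, p = 0.5 (maximum variability).
import math

def sample_size(population: int, confidence_z: float = 1.96,
                margin_of_error: float = 0.05, p: float = 0.5) -> int:
    n0 = (confidence_z ** 2) * p * (1 - p) / margin_of_error ** 2
    n = n0 / (1 + (n0 - 1) / population)  # finite population correction
    return math.ceil(n)

print(sample_size(population=20_000))  # roughly 377 responses needed
print(sample_size(population=500))     # roughly 218 responses needed
```

Note how the required sample grows only slowly with population size, which is why even very large populations rarely need more than a few hundred well-sampled responses at these settings.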

There are many sampling methods that allow you to generalise to broad populations. In general, though, the sample should aim to be representative of the population as a whole. The larger and more representative your sample, the more valid your conclusions.

There are two main types of survey:

  • A questionnaire , where a list of questions is distributed by post, online, or in person, and respondents fill it out themselves
  • An interview , where the researcher asks a set of questions by phone or in person and records the responses

Which type you choose depends on the sample size and location, as well as the focus of the research.

Questionnaires

Sending out a paper survey by post is a common method of gathering demographic information (for example, in a government census of the population).

  • You can easily access a large sample.
  • You have some control over who is included in the sample (e.g., residents of a specific region).
  • The response rate is often low.

Online surveys are a popular choice for students doing dissertation research , due to the low cost and flexibility of this method. There are many online tools available for constructing surveys, such as SurveyMonkey and Google Forms .

  • You can quickly access a large sample without constraints on time or location.
  • The data is easy to process and analyse.
  • The anonymity and accessibility of online surveys mean you have less control over who responds.

If your research focuses on a specific location, you can distribute a written questionnaire to be completed by respondents on the spot. For example, you could approach the customers of a shopping centre or ask all students to complete a questionnaire at the end of a class.

  • You can screen respondents to make sure only people in the target population are included in the sample.
  • You can collect time- and location-specific data (e.g., the opinions of a shop’s weekday customers).
  • The sample size will be smaller, so this method is less suitable for collecting data on broad populations.

Oral interviews are a useful method for smaller sample sizes. They allow you to gather more in-depth information on people’s opinions and preferences. You can conduct interviews by phone or in person.

  • You have personal contact with respondents, so you know exactly who will be included in the sample in advance.
  • You can clarify questions and ask for follow-up information when necessary.
  • The lack of anonymity may cause respondents to answer less honestly, and there is more risk of researcher bias.

Like questionnaires, interviews can be used to collect quantitative data : the researcher records each response as a category or rating and statistically analyses the results. But they are more commonly used to collect qualitative data : the interviewees’ full responses are transcribed and analysed individually to gain a richer understanding of their opinions and feelings.

Next, you need to decide which questions you will ask and how you will ask them. It’s important to consider:

  • The type of questions
  • The content of the questions
  • The phrasing of the questions
  • The ordering and layout of the survey

Open-ended vs closed-ended questions

There are two main forms of survey questions: open-ended and closed-ended. Many surveys use a combination of both.

Closed-ended questions give the respondent a predetermined set of answers to choose from. A closed-ended question can include:

  • A binary answer (e.g., yes/no or agree/disagree )
  • A scale (e.g., a Likert scale with five points ranging from strongly agree to strongly disagree )
  • A list of options with a single answer possible (e.g., age categories)
  • A list of options with multiple answers possible (e.g., leisure interests)

Closed-ended questions are best for quantitative research . They provide you with numerical data that can be statistically analysed to find patterns, trends, and correlations .

Open-ended questions are best for qualitative research. This type of question has no predetermined answers to choose from. Instead, the respondent answers in their own words.

Open questions are most common in interviews, but you can also use them in questionnaires. They are often useful as follow-up questions to ask for more detailed explanations of responses to the closed questions.

The content of the survey questions

To ensure the validity and reliability of your results, you need to carefully consider each question in the survey. All questions should be narrowly focused with enough context for the respondent to answer accurately. Avoid questions that are not directly relevant to the survey’s purpose.

When constructing closed-ended questions, ensure that the options cover all possibilities. If you include a list of options that isn’t exhaustive, you can add an ‘other’ field.

Phrasing the survey questions

In terms of language, the survey questions should be as clear and precise as possible. Tailor the questions to your target population, keeping in mind their level of knowledge of the topic.

Use language that respondents will easily understand, and avoid words with vague or ambiguous meanings. Make sure your questions are phrased neutrally, with no bias towards one answer or another.

Ordering the survey questions

The questions should be arranged in a logical order. Start with easy, non-sensitive, closed-ended questions that will encourage the respondent to continue.

If the survey covers several different topics or themes, group together related questions. You can divide a questionnaire into sections to help respondents understand what is being asked in each part.

If a question refers back to or depends on the answer to a previous question, they should be placed directly next to one another.

Before you start, create a clear plan for where, when, how, and with whom you will conduct the survey. Determine in advance how many responses you require and how you will gain access to the sample.

When you are satisfied that you have created a strong research design suitable for answering your research questions, you can conduct the survey through your method of choice – by post, online, or in person.

There are many methods of analysing the results of your survey. First you have to process the data, usually with the help of a computer program to sort all the responses. You should also cleanse the data by removing incomplete or incorrectly completed responses.
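As a rough illustration of that processing and cleaning step, here is a minimal sketch using pandas; the file name, column names, and the 60-second speed cut-off are all invented for the example.

```python
# Minimal data-cleaning sketch on a hypothetical survey export.
import pandas as pd

df = pd.read_csv("responses.csv")  # hypothetical export from your survey tool

# Drop rows where required questions were left blank.
required = ["q1_satisfaction", "q2_frequency", "q3_age_group"]
df = df.dropna(subset=required)

# Drop implausibly fast completions, if the tool records duration in seconds.
df = df[df["duration_seconds"] >= 60]

# Drop exact duplicate submissions.
df = df.drop_duplicates()

print(f"{len(df)} usable responses remain")
df.to_csv("responses_clean.csv", index=False)
```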

If you asked open-ended questions, you will have to code the responses by assigning labels to each response and organising them into categories or themes. You can also use more qualitative methods, such as thematic analysis , which is especially suitable for analysing interviews.

Statistical analysis is usually conducted using programs like SPSS or Stata. The same set of survey data can be subject to many analyses.

Finally, when you have collected and analysed all the necessary data, you will write it up as part of your thesis, dissertation , or research paper .

In the methodology section, you describe exactly how you conducted the survey. You should explain the types of questions you used, the sampling method, when and where the survey took place, and the response rate. You can include the full questionnaire as an appendix and refer to it in the text if relevant.

Then introduce the analysis by describing how you prepared the data and the statistical methods you used to analyse it. In the results section, you summarise the key results from your analysis.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviours. It is made up of four or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey , you present participants with Likert-type questions or statements, and a continuum of items, usually with five or seven possible responses, to capture their degree of agreement.

Individual Likert-type questions are generally considered ordinal data , because the items have clear rank order, but don’t have an even distribution.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.

The type of data determines what statistical tests you should use to analyse your data.
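To make the scoring concrete, here is a minimal sketch of how Likert-type items are commonly combined into an overall scale score; the five-point coding and the reverse-scoring of a negatively worded item are standard conventions assumed for illustration, not requirements of any particular tool.

```python
# Combine several Likert-type items into one scale score per participant.
LIKERT_CODES = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neither agree nor disagree": 3,
    "Agree": 4,
    "Strongly agree": 5,
}

def scale_score(responses, reverse_items=frozenset()):
    """Sum item scores; reverse-keyed (negatively worded) items are flipped first."""
    total = 0
    for i, answer in enumerate(responses):
        score = LIKERT_CODES[answer]
        if i in reverse_items:
            score = 6 - score  # flip on a 1-5 scale
        total += score
    return total

one_participant = ["Agree", "Strongly agree", "Neither agree nor disagree", "Disagree"]
# Suppose the fourth item (index 3) is negatively worded, so it is reverse-scored.
print(scale_score(one_participant, reverse_items={3}))  # 4 + 5 + 3 + 4 = 16
```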

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analysing data from people using questionnaires.


McCombes, S. (2022, October 10). Doing Survey Research | A Step-by-Step Guide & Examples. Scribbr. Retrieved 25 March 2024, from https://www.scribbr.co.uk/research-methods/surveys/


Surveys for Master and Bachelor Degree Thesis

This template will help you get information on how parents can provide support for their children’s educational development.

Try Other Survey Templates for Education

Qualitative or quantitative research is an important and sometimes even necessary element of a master’s or bachelor’s thesis. Startquestion allows you to create student surveys on any topic without limiting the number of questions. The questionnaire can be shared via a link or sent directly to the indicated e-mail address. The tool also allows you to end the survey after reaching the specified response limit.

Each student can automatically generate a report containing charts and data summaries from their personal account on the platform. The report can be downloaded in DOC or PDF format and pasted directly into the thesis.

An example of a survey template for a thesis is presented below, including an introduction informing about the purpose of the study, sample types of questions, a certificate, and a thank you note.

Do you want to know specific examples of surveys for your master’s thesis? See sample surveys for students and research reports for diploma theses created on Startquestion.


The Essential Guide to Writing Effective Survey Questions

Jennifer Leigh Brown

User surveys are popping up on websites and mobile apps everywhere. Well-designed ones yield helpful data. Poorly executed ones are a waste of time for users and researchers. Crafting a good questionnaire is a science, but luckily, substantial published research is available on the topic. It’s easy to write good questions when “survey best practices” are just a Google search away…

Indeed. There are about 68,300,000 results. All that information can be overwhelming to those new to survey writing. UX Booth is here to help.

In this article, I will focus solely on writing effective questions and answers—the critically important and challenging part between a proper introduction and conclusion. Skipping the basics of question types , I’ll dive into proven techniques researchers rely on to craft effective inquiries and response options to gather useful data and results.

(If you need help with survey planning, defining goals, or understanding the basics of structure and question types, check out How to Create an Effective Survey .)

Question Wording and Structure

The creation of effective survey questions is essential to accurately measure the opinions of the participants. If the questions are poorly worded, unclear or biased, the responses will be useless. A well-written question will mean the same thing to all respondents. It will communicate the desired information so that all participants interpret it the same way and understand the expected type of response.

Use these guidelines for writing survey questions to yield informative and accurate information.

Be clear, specific, and direct

Failure to clearly explain the intent of the question can lead to confusion and misinterpretation. Be very specific and avoid imprecise or vague words. Present the topic and define the behaviors, events, or timeframe. This will help ensure every participant is providing the same type of response.

Vague: What is your income?

For what time period? For just the respondent or the entire household? Before or after taxes?

Specific: What was your household’s yearly income before taxes in 2016?

Use the participants’ vocabulary

Consider the education level of the survey audience, and use words that will be easily understood. Avoid jargon, complex terms, and undefined abbreviations and acronyms. Use simple language and never assume knowledge; always provide the necessary information for the respondent to understand what is being asked. Define any concepts or terms that the respondent needs to understand in order to answer. If referencing something participants might not be familiar with, be sure to add details to help explain it.

Unclear: How likely would you be to subscribe to UIE’s library?

Whose library? The International Union for Electricity? What kind of library–documentation, podcasts, ebooks?

Clear: User Interface Engineering’s library offers online seminars by experts in UX design. You can access the recordings anytime for only $25 a month. How likely would you be to subscribe?

Tip: If the question requires a lengthy explanation, consider separating it from the question itself to help make the information easier to digest.

Talk like a real person and treat the questions like a conversation

Group similar topics together and order the questions in a logical way to create a natural flow, as if having a conversation. The voice and tone of the survey should match who it is from and who it is designed for. The writing can be friendly and familiar, but don’t sacrifice clarity for cutesy. Consider the MailChimp writing tone guideline, “It’s always more important to be clear than entertaining.”

Formal: Would you be willing to participate in our 10-question feedback survey? Your responses to this questionnaire will help us improve your experience with Corporation ABC’s website.

Informal: Hi! Are you willing to answer a few quick questions? It won’t take more than five minutes. (And there’s a really good prize!)

Tip: Although I’m focusing on introductions and not question writing, it’s worth noting that being up front about the time-investment and offering incentives can also help with response rates.

Ask only one question at a time

Each question should focus on a single item or concept. This generally means that questions should have one subject and verb. Double-barrel questions ask a respondent to evaluate more than one thing in a question yet only allow for a single response.

Double-barrel: Was the content marketing seminar worthwhile and entertaining?

What if the seminar was educational but the presenter was a dreadful bore, and the response options are Yes or No? A double-barrel question is also known as a compound question. This is a common mistake, which can be corrected by breaking questions into two. Let’s look at an example with how to correct it:

Double-barrel: How satisfied are you with your work environment and compensation?

Single and separate:

  • How satisfied are you with your work environment?
  • How satisfied are you with your compensation?

By breaking the double-barrel question into two questions, one about satisfaction with the work environment and another question about pay, the participant is now able to provide a response to both inquiries separately.

Practice good grammar

Keep the questions simple and grammatically correct. Maintaining a parallel structure and consistently using the same words and phrases improves respondents’ comprehension. Avoid two-part or complex questions, which can be hard to interpret, as can double negatives.

Double Negative: Do you agree or disagree that user interface designers should never not know how to code?

Better: User interface designers should know how to code.

An agreement scale goes well with this reworked question—more on that later.

Avoid bias and loaded words

A biased question will lead participants in the direction of a particular answer. Some phrases, particularly adjectives and adverbs, may add bias to questions. Depending on how a question is presented, people can react in different ways (for example, asking a question using the word “loss” versus “gain”). The use of emotional, politically-charged, or sensitive words can also trigger a different response. Remain neutral regardless of topic and watch for wording that skews positive or negative.

Biased: We think this UX Booth article on Survey Question Writing is very helpful. How helpful do you think this article is?

Unbiased: What do you think of this UX Booth article on Survey Question Writing?

Start with broad, general questions and progress to specific and harder ones

Beginning with basic, easier questions can encourage a respondent to continue. When possible, try to balance simple and complex inquiries. Interspersing easier questions among more challenging ones can make it seem less burdensome and help reduce abandonment. And remember to save sensitive questions like income for the end and make them optional.

Keep the survey short and don’t be greedy!

Don’t waste people’s time: only ask for what you really need. (Requiring answers to questions will slow people down, won’t necessarily get you what you want, and will increase drop-off rates.) If there aren’t too many questions, and respondents can immediately understand what is being asked, they are more likely to be willing and able to provide useful information. And if the answers are also well-crafted, the data will be that much more useful, which brings us to answer wording and structure.


Answer Wording and Structure

Since this article is concentrated on writing, I’ll focus on answers to closed questions , where responses need to be crafted by the survey designer. When providing responses to closed-ended questions, how each answer is described, the number of options, and the order can all influence how people respond. Whether presented as multiple choice, checklists, or in scales, just like when writing questions, the answers should use precise, clear wording. Here’s how to make that happen.

Present all the possibilities

The number of answers should be kept relatively small but include all the possible choices. Answers need to be balanced both ways (e.g. positive to negative, high to low frequency).

All respondents need to be able to find an answer that fits their situation—including opting out. If there could be a situation where none of the answers apply, provide the option to select “don’t know,” “not applicable” or “prefer not to answer” for sensitive questions. Including an “Other,” with a free-form text field to provide a custom answer, is a great way to learn about alternative responses not provided in the defined answer set.

Incomplete and Unbalanced:

  • Very Important
  • Moderately important
  • Slightly important

What if it is not important at all? Or not even applicable to the participant?

Complete and Balanced:

  • Extremely important
  • Very important
  • Not at all important
  • Not applicable

Say “no” only when necessary

Dichotomous questions present only two options that are clearly distinct. These answers, like yes/no and true/false, can produce less helpful data because they don’t provide context or specificity. (Though when using skip logic , these responses can often be appropriate.) Formatting responses to use scales that measure things like attitudes or frequency yields more information-rich results. These answers can make a single question work harder.

Yes/No: Do you use the mobile app?

Frequency: How often do you use the mobile app?

Tip: These answers also follow the first guideline to cover all the possibilities in a balanced way, ranging from high to low or not at all. An even stronger set of choices would include references for the time period to clearly define what “sometimes” is versus “rarely.” Check out the UX Booth blog example below.

Keep answers mutually exclusive

If a participant can only select one response, then each answer should be distinct and not overlap. For example, options might be 0-5 or 6-10 rather than 0-5 or 5-10. Having the “5” in both answers makes them not mutually exclusive:

Not Distinct: Where is your work location?

  • In an office building.
  • From my home.
  • In the city.

The person could work in an office building in the city or from their home in the city.
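Returning to the overlapping number ranges above (0-5 versus 5-10), a short script can catch overlaps and gaps in numeric answer options before the survey goes out; the ranges here are made up for illustration.

```python
# Check numeric answer ranges for overlaps and gaps -- the two ways options
# fail to be mutually exclusive and exhaustive.
def check_ranges(ranges):
    """ranges: list of (low, high) tuples, inclusive on both ends."""
    ranges = sorted(ranges)
    for (lo1, hi1), (lo2, hi2) in zip(ranges, ranges[1:]):
        if lo2 <= hi1:
            print(f"Overlap: {lo1}-{hi1} and {lo2}-{hi2} share values")
        elif lo2 > hi1 + 1:
            print(f"Gap: no option covers {hi1 + 1} to {lo2 - 1}")

check_ranges([(0, 5), (5, 10), (12, 20)])
# Overlap: 0-5 and 5-10 share values
# Gap: no option covers 11 to 11
```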

Remove universal statements

Words like “never, none, always, all” are extreme choices that respondents might be hesitant to commit to. Rather than absolute words, provide answers with specific references for behaviors or timeframes.

Universal statements:

  • I always read UX Booth’s blog.
  • I never read UX Booth’s blog.

Referenced Alternatives: I read UX Booth’s blog:

  • Once a week
  • 2-3 times a week
  • 4 or more times a week
  • I do not read UX Booth’s blog.

Use ratings and scales

The Likert Scale , where respondents indicate their level of agreement or disagreement, is the most commonly used approach to scaling options when measuring attitudes or behaviors. Likert scales should be symmetrical and balanced. They should contain equal numbers of positive and negative responses, with the distance between each item being the same.

The experts’ debate about scales, including the number of levels (5, 7, 10) and the inclusion of a neutral midpoint (neither agree nor disagree), is too overwhelming to tackle in this article. Consult the many resources for Likert Scale best practices . SurveyMonkey suggests five scale points for unipolar and seven for bipolar. (My personal opinion is that five to seven is best; the higher the number, the harder it is for people to gauge.) Always include word labels, not just numbers, to identify what each point on the scale means.

Common Scales:

  • Agreement: Disagree to Agree
  • Familiarity: Not Familiar to Very Familiar
  • Frequency: Never to Always
  • Important: Not Important to Extremely Important
  • Likelihood: Not Likely to Extremely Likely
  • Quality: Poor to Excellent
  • More Examples (Iowa State PDF)

Use the expected, “natural” order for answer scales because it makes responding easier. For ranges (e.g., excellent to poor) it’s okay to reverse the order, such as starting with the favorable answer and ending with the unfavorable one, but keep in mind that order can also influence choices.

Tip: Read “ There is a Right Way and Wrong Way to Number Rating Scales .”

Good survey design leads to good data.

The unfortunate result of bad survey design is bad data. Asking good questions and providing solid answers is not easy. Take advantage of what other researchers and academics have done and use starter templates when appropriate. It is the survey designer’s responsibility to be clear and unbiased. Make sure the questions are informative, the answers accurate, and the insights you gain will lead to actionable results.




Great survey questions: How to write them & avoid common mistakes

Learning how to write survey questions is both art and science. The wording you choose can make the difference between accurate, useful data and just the opposite. Fortunately, we’ve got a raft of tips to help.

Figuring out how to make a good survey that yields actionable insights is all about sweating the details. And writing effective questionnaire questions is the first step.

Essential for success is understanding the different types of survey questions and how they work. Each format needs a slightly different approach to question-writing.

In this article, we’ll share how to write survey questionnaires and list some common errors to avoid so you can improve your surveys and the data they provide.


Survey question types

Did you know that Qualtrics provides 23 question types you can use in your surveys ? Some are very popular and used frequently by a wide range of people from students to market researchers, while others are more specialist and used to explore complex topics. Here’s an introduction to some basic survey question formats, and how to write them well.

Multiple choice

Familiar to many, multiple choice questions ask a respondent to pick from a range of options. You can set up the question so that only one selection is possible, or allow more than one to be ticked.

When writing a multiple choice question


  • Be clear about whether the survey taker should choose one (“pick only one”) or several (“select all that apply”).
  • Think carefully about the options you provide, since these will shape your results data.
  • The phrase “of the following” can be helpful for setting expectations. For example, if you ask “What is your favorite meal” and provide the options “hamburger and fries”, “spaghetti and meatballs”, there’s a good chance your respondent’s true favorite won’t be included. If you add “of the following” the question makes more sense.

Rank order

Asking participants to rank things in order, whether it’s order of preference, frequency or perceived value, is done using a rank structure. There can be a variety of interfaces, including drag-and-drop, radio buttons, text boxes and more.

When writing a rank order question


  • Explain how the interface works and what the respondent should do to indicate their choice. For example “drag and drop the items in this list to show your order of preference.”
  • Be clear about which end of the scale is which. For example, “With the best at the top, rank these items from best to worst”
  • Be as specific as you can about how the respondent should consider the options and how to rank them. For example, “thinking about the last 3 months’ viewing, rank these TV streaming services in order of quality, starting with the best”

Slider

Slider structures ask the respondent to move a pointer or button along a scale, usually a numerical one, to indicate their answers.

When writing a slider question


  • Consider whether the question format will be intuitive to your respondents, and whether you should add help text such as “click/tap and drag on the bar to select your answer”
  • Qualtrics includes the option for an open field where your respondent can type their answer instead of using a slider. If you offer this, make sure to reference it in the survey question so the respondent understands its purpose.

Text entry

Also known as an open field question, this format allows survey-takers to answer in their own words by typing into the comments box.

When writing a text entry question


  • Use open-ended question structures like “How do you feel about 
?”, “If you said x, why?” or “What makes a good x?”
  • Open-ended questions take more effort to answer, so use these types of questions sparingly.
  • Be as clear and specific as possible in how you frame the question. Give them as much context as you can to help make answering easier. For example, rather than “How is our customer service?”, write “Thinking about your experience with us today, in what areas could we do better?”

Matrix table

Matrix structures allow you to address several topics using the same rating system, for example a Likert scale (Very satisfied / satisfied / neither satisfied nor dissatisfied / dissatisfied / very dissatisfied).

When writing a matrix table question


  • Make sure the topics are clearly differentiated from each other, so that participants don’t get confused by similar questions placed side by side and answer the wrong one.
  • Keep text brief and focused. A matrix includes a lot of information already, so make it easier for your survey-taker by using plain language and short, clear phrases in your matrix text.
  • Add detail to the introductory static text if necessary to help keep the labels short. For example, if your introductory text says “In the Philadelphia store, how satisfied were you with the 
” you can make the topic labels very brief, for example “staff friendliness” “signage” “price labeling” etc.

Now that you know your rating scales from your open fields, here are the 7 most common mistakes to avoid when you write questions. We’ve also added plenty of survey question examples to help illustrate the points.

Likert Scale Questions

Likert scales are commonly used in market research when dealing with single-topic surveys. They’re simple and, when worded carefully, among the most reliable formats for combatting survey bias . For each question or statement, subjects choose from a range of possible responses, which typically include:

  • Strongly agree
  • Agree
  • Neither agree nor disagree
  • Disagree
  • Strongly disagree

7 survey question examples to avoid.

There are countless great examples of writing survey questions but how do you know if your types of survey questions will perform well? We've highlighted the 7 most common mistakes when attempting to get customer feedback with online surveys.

Survey question mistake #1: Failing to avoid leading words / questions

Subtle wording differences can produce great differences in results. For example, non-specific words and ideas can cause a certain level of confusing ambiguity in your survey. “Could,” “should,” and “might” all sound about the same, but may produce a 20% difference in agreement to a question.

In addition, strong words such as “force” and “prohibit” represent control or action and can bias your results.

Example: The government should force you to pay higher taxes.

No one likes to be forced, and no one likes higher taxes. This agreement scale question makes it sound doubly bad to raise taxes. When survey questions read more like normative statements than questions looking for objective feedback, any ability to measure that feedback becomes difficult.

Wording alternatives can be developed. How about simple statements such as: The government should increase taxes, or the government needs to increase taxes.

Example: How would you rate the career of legendary outfielder Joe Dimaggio?

This survey question tells you Joe Dimaggio is a legendary outfielder. This type of wording can bias respondents.

How about replacing the word “legendary” with “baseball” as in: How would you rate the career of baseball outfielder Joe Dimaggio? A rating scale question like this gets more accurate answers from the start.

Survey question mistake #2: Failing to give mutually exclusive choices

Multiple choice response options should be mutually exclusive so that respondents can make clear choices. Don’t create ambiguity for respondents.

Review your survey and identify ways respondents could get stuck with either too many or no single, correct answers to choose from.

Example: What is your age group?

What answer would you select if you were 10, 20, or 30? Survey questions like this will frustrate a respondent and invalidate your results.
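The fix is to make the ranges mutually exclusive so every age falls into exactly one group. Here is a minimal Python sketch of that idea; the cut-off points are illustrative assumptions, not a recommendation from the article:

```python
# Minimal sketch: mutually exclusive age groups, so every age maps to exactly
# one answer choice. The cut-off points below are illustrative assumptions.

def age_group(age: int) -> str:
    """Map an exact age to exactly one survey answer choice."""
    if age < 18:
        return "Under 18"
    if age <= 29:
        return "18-29"
    if age <= 44:
        return "30-44"
    if age <= 59:
        return "45-59"
    return "60 or older"

# 10, 20, and 30 are no longer ambiguous: each lands in exactly one bucket.
print([age_group(a) for a in (10, 20, 30)])  # ['Under 18', '18-29', '30-44']
```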

Example: What type of vehicle do you own?

This question has the same problem. What if the respondent owns a truck, hybrid, convertible, cross-over, motorcycle, or no vehicle at all?

Survey question mistake #3: Not asking direct questions

Questions that are vague and do not communicate your intent can limit the usefulness of your results. Make sure respondents know what you’re asking.

Example: What suggestions do you have for improving Tom’s Tomato Juice?

This question may be intended to obtain suggestions about improving taste, but respondents will offer suggestions about texture, the type of can or bottle, about mixing juices, or even suggestions relating to using tomato juice as a mixer or in recipes.

Example: What do you like to do for fun?

Finding out that respondents like to play Scrabble may not be what the researcher is looking for, but it could well be the response received. If the researcher is really asking about movies versus other forms of paid entertainment, the question doesn’t make that clear, and a respondent could take it in many directions.

Survey question mistake #4: Forgetting to add a “prefer not to answer” option

Sometimes respondents may not want you to collect certain types of information or may not want to provide you with the types of information requested.

Questions about income, occupation, personal health, finances, family life, personal hygiene, and personal, political, or religious beliefs can be too intrusive and be rejected by the respondent.

Privacy is an important issue to most people. Incentives and assurances of confidentiality can make it easier to obtain private information.

While current research does not show that “Prefer Not to Answer” (PNA) options increase data quality or response rates, many respondents appreciate having this non-disclosure option.

Furthermore, different cultural groups may respond differently. One recent study found that while U.S. respondents skip sensitive questions, Asian respondents often discontinue the survey entirely. Questions that commonly raise privacy concerns include:

  • What is your race?
  • What is your age?
  • Did you vote in the last election?
  • What are your religious beliefs?
  • What are your political beliefs?
  • What is your annual household income?

These types of questions should be asked only when absolutely necessary. In addition, they should always include an option not to answer (e.g., “Prefer Not to Answer”).

Survey question mistake #5: Failing to cover all possible answer choices

Do you have all of the options covered? If you are unsure, conduct a pretest version of your survey using “Other (please specify)” as an option.

If more than 10% of respondents (in a pretest or otherwise) select “other,” you are probably missing an answer. Review the “Other” text your test respondents have provided and add the most frequently mentioned new options to the list.

Example: You indicated that you eat at Joe's fast food once every 3 months. Why don't you eat at Joe's more often?

  ‱ There isn't a location near my house
  ‱ I don't like the taste of the food
  ‱ Never heard of it

This question doesn’t include other options, such as healthiness of the food, price/value or some “other” reason. Over 10% of respondents would probably have a problem answering this question.
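If you run a pretest with an “Other (please specify)” choice, the 10% check above is simple arithmetic. The sketch below is an illustration only; the response format and write-in labels are assumptions made for the example:

```python
# Minimal sketch of the pretest check described above: flag the question when
# more than 10% of respondents pick "Other", then surface the most common
# write-ins so they can be promoted to real answer choices.
from collections import Counter

pretest_answers = [
    "There isn't a location near my house",
    "Other: too expensive",
    "I don't like the taste of the food",
    "Other: too expensive",
    "Never heard of it",
    "Other: not very healthy",
    "There isn't a location near my house",
    "I don't like the taste of the food",
    "Never heard of it",
    "Other: no vegetarian options",
]

others = [a for a in pretest_answers if a.startswith("Other: ")]
other_share = len(others) / len(pretest_answers)
print(f"'Other' share: {other_share:.0%}")  # 40% -> the answer list is missing options

if other_share > 0.10:
    write_ins = Counter(a.removeprefix("Other: ") for a in others)
    print("Most frequent write-ins:", write_ins.most_common(2))
    # e.g. [('too expensive', 2), ('not very healthy', 1)]
```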

Survey question mistake #6: Not using unbalanced scales carefully

Unbalanced scales may be appropriate for some situations and promote bias in others.

For instance, a hospital might use an Excellent - Very Good - Good - Fair scale where “Fair” is the lowest customer satisfaction point because they believe “Fair” is absolutely unacceptable and requires correction.

The key is to correctly interpret your analysis of the scale. If “Fair” is the lowest point on a scale, then a result slightly better than fair is probably not a good one.

Additionally, scale points should be equidistant. That is, the conceptual distance from one point to the next should be the same across the whole scale.

For example, researchers have shown the points to be nearly equi-distant on the strongly disagree–disagree–neutral–agree–strongly agree scale.

Set your bottom point as the worst possible situation and top point as the best possible, then evenly spread the labels for your scale points in-between.

Example: What is your opinion of Crazy Justin's auto-repair?

  ‱ Pretty good
  ‱ Great
  ‱ Fantastic
  ‱ Incredible
  ‱ The Best Ever

This question puts the center of the scale at fantastic, and the lowest possible rating as “Pretty Good.” This question is not capable of collecting true opinions of respondents.

Survey question mistake #7: Not asking only one question at a time

There is often a temptation to ask multiple questions at once. This can cause problems for respondents and influence their responses.

Review each question and make sure it asks only one clear question.

Example: What is the fastest and most economical internet service for you?

This is really asking two questions. The fastest is often not the most economical.

Example: How likely are you to go out for dinner and a movie this weekend?

Even though “dinner and a movie” is a common phrase, this is really two questions as well. It is best to separate the activities into different questions or give respondents these options:

  ‱ Dinner and a movie
  ‱ Dinner only
  ‱ Movie only
  ‱ Neither

5 more tips on how to write a survey

Here are 5 easy ways to help ensure your survey results are unbiased and actionable.

1. Use the Funnel Technique

Structure your questionnaire using the “funnel” technique. Start with broad, general interest questions that are easy for the respondent to answer. These questions serve to warm up the respondent and get them involved in the survey before giving them a challenge. The most difficult questions are placed in the middle – those that take time to think about and those that are of less general interest. At the end, we again place general questions that are easier to answer and of broad interest and application. Typically, these last questions include demographic and other classification questions.

2. Use “Ringer” questions

In social settings, are you more introverted or more extroverted?

That was a ringer question and its purpose was to recapture your attention if you happened to lose focus earlier in this article.

Questionnaires often include “ringer” or “throw away” questions to increase interest and willingness to respond to a survey. These questions are about hot topics of the day and often have little to do with the survey. While these questions will definitely spice up a boring survey, they require valuable space that could be devoted to the main topic of interest. Use this type of question sparingly.

3. Keep your questionnaire short

Questionnaires should be kept short and to the point. Most long surveys are not completed, and the ones that are completed are often answered hastily. A quick look at a survey containing page after page of boring questions produces a response of “there is no way I’m going to complete this thing.” If a questionnaire is long, the person must either be very interested in the topic, be an employee, or be paid for their time. Web surveys have some advantages because the respondent often can't view all of the survey questions at once. However, if your survey's navigation sends them page after page of questions, your response rate will drop off dramatically.

How long is too long? The sweet spot is to keep the survey to less than five minutes, which translates into about 15 questions. The average respondent can complete about three multiple-choice questions per minute, and an open-ended text response question counts for about three multiple-choice questions, depending, of course, on its difficulty. While only a rule of thumb, this formula gives you a good estimate of the limits of your survey.
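As a quick illustration of that rule of thumb (a sketch under the stated assumptions, with function names of our own choosing), you can estimate completion time from the question mix:

```python
# Minimal sketch of the rule of thumb above: ~3 multiple-choice questions per
# minute, with each open-ended question counted as roughly 3 multiple-choice
# questions. The 5-minute "sweet spot" is used as the default limit.

def estimated_minutes(multiple_choice: int, open_ended: int) -> float:
    """Estimate survey completion time in minutes."""
    weighted_questions = multiple_choice + 3 * open_ended
    return weighted_questions / 3

def within_sweet_spot(multiple_choice: int, open_ended: int, limit: float = 5.0) -> bool:
    """Check whether the survey stays under the suggested five-minute limit."""
    return estimated_minutes(multiple_choice, open_ended) <= limit

print(estimated_minutes(12, 1))   # 5.0 minutes -> roughly the 15-question ceiling
print(within_sweet_spot(15, 3))   # False (~8 minutes)
```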

4. Watch your writing style

The best survey questions are always easy to read and understand. As a rule of thumb, the level of sophistication in your survey writing should be at the 9th to 11th grade level. Don’t use big words. Use simple sentences and simple choices for the answers. Simplicity is always best.

5. Use randomization

We know that being listed first on an election ballot increases a candidate’s chance of being elected. A similar bias occurs in questionnaires when the same answer appears at the top of the list for every respondent. Randomization corrects this bias by rotating the order of answer choices (and of matrix question rows) at random for each respondent.
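Most survey tools can randomize answer order for you, but the idea is simple enough to sketch. The example below is an illustration, not any specific tool's implementation; it shuffles the choices of a nominal question per respondent, seeding on an assumed respondent ID so each person sees a stable order:

```python
# Minimal sketch of per-respondent answer-order randomization. Shuffle nominal
# answer choices (not ordinal scales) so no option always benefits from being
# listed first. Seeding on the respondent ID keeps the order stable per person.
import random

CHOICES = ["Sedan", "SUV", "Truck", "Motorcycle", "Other"]

def randomized_choices(choices: list[str], respondent_id: str) -> list[str]:
    rng = random.Random(respondent_id)  # deterministic per respondent
    shuffled = list(choices)
    rng.shuffle(shuffled)
    return shuffled

print(randomized_choices(CHOICES, "respondent-001"))
print(randomized_choices(CHOICES, "respondent-002"))  # very likely a different order
```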


While not an exhaustive list, these seven mistakes are among the most common offenders when writing survey questions. And the five tips above should steer you in the right direction.

Focus on creating clear questions and having an understandable, appropriate, and complete set of answer choices. Great questions and great answer choices lead to great research success. To learn more about survey question design, download our eBook, The Qualtrics survey template guide, or get started with a free survey account with our world-class survey software.


Survey questions 101: 70+ survey question examples, types of surveys, and FAQs

How well do you understand your prospects and customers—who they are, what keeps them awake at night, and what brought them to your business in search of a solution? Asking the right survey questions at the right point in their customer journey is the most effective way to put yourself in your customers’ shoes.


This comprehensive intro to survey questions contains over 70 examples of effective questions, an overview of different types of survey questions, and advice on how to word them for maximum effect. Plus, we’ll toss in our pre-built survey templates, expert survey insights, and tips to make the most of AI for Surveys in Hotjar. ✹

Surveying your users is the simplest way to understand their pain points, needs, and motivations. But first, you need to know how to set up surveys that give you the answers you—and your business—truly need. Impactful surveys start here:

❓ The main types of survey questions : most survey questions are classified as open-ended, closed-ended, nominal, Likert scale, rating scale, and yes/no. The best surveys often use a combination of questions.

💡 70+ good survey question examples : our top 70+ survey questions, categorized across ecommerce, SaaS, and publishing, will help you find answers to your business’s most burning questions

✅ What makes a survey question ‘good’ : a good survey question is anything that helps you get clear insights and business-critical information about your customers 

❌ The dos and don’ts of writing good survey questions : remember to be concise and polite, use the foot-in-door principle, alternate questions, and test your surveys. But don’t ask leading or loaded questions, overwhelm respondents with too many questions, or neglect other tools that can get you the answers you need.

👍 How to run your surveys the right way : use a versatile survey tool like Hotjar Surveys that allows you to create on-site surveys at specific points in the customer journey or send surveys via a link

đŸ› ïž 10 use cases for good survey questions : use your survey insights to create user personas, understand pain points, measure product-market fit, get valuable testimonials, measure customer satisfaction, and more

Use Hotjar to build your survey and get the customer insight you need to grow your business.

6 main types of survey questions

Let’s dive into our list of survey question examples, starting with a breakdown of the six main categories your questions will fall into:

Open-ended questions

Closed-ended questions

Nominal questions

Likert scale questions

Rating scale questions

'Yes' or 'no' questions

1. Open-ended survey questions

Open-ended questions  give your respondents the freedom to  answer in their own words , instead of limiting their response to a set of pre-selected choices (such as multiple-choice answers, yes/no answers, 0–10 ratings, etc.). 

Examples of open-ended questions:

What other products would you like to see us offer?

If you could change just one thing about our product, what would it be?

When to use open-ended questions in a survey

The majority of example questions included in this post are open-ended, and there are some good reasons for that:

Open-ended questions help you learn about customer needs you didn’t know existed , and they shine a light on areas for improvement that you may not have considered before. If you limit your respondents’ answers, you risk cutting yourself off from key insights.

Open-ended questions are very useful when you first begin surveying your customers and collecting their feedback. If you don't yet have a good amount of insight, answers to open-ended questions will go a long way toward educating you about who your customers are and what they're looking for.

There are, however, a few downsides to open-ended questions:

First, people tend to be less likely to respond to open-ended questions in general because they take comparatively more effort to answer than, say, a yes/no one

Second, but connected: if you ask consecutive open-ended questions during your survey, people will get tired of answering them, and their answers might become less helpful the more you ask

Finally, the data you receive from open-ended questions will take longer to analyze compared to easy 1-5 or yes/no answers—but don’t let that stop you. There are plenty of shortcuts that make it easier than it looks (we explain it all in our post about how to analyze open-ended questions , which includes a free analysis template.)

💡 Pro tip: if you’re using Hotjar Surveys, let our AI for Surveys feature analyze your open-ended survey responses for you. Hotjar AI reviews all your survey responses and provides an automated summary report of key findings, including supporting quotes and actionable recommendations for next steps.

2. Closed-ended survey questions

Closed-ended questions limit a user’s response options to a set of pre-selected choices. This broad category of questions includes:

Nominal questions

Likert scale questions

Rating scale questions

‘Yes’ or ‘no’ questions

When to use closed-ended questions

Closed-ended questions work brilliantly in two scenarios:

To open a survey, because they require little time and effort and are therefore easy for people to answer. This is called the foot-in-the-door principle: once someone commits to answering the first question, they may be more likely to answer the open-ended questions that follow.

When you need to create graphs and trends based on people’s answers. Responses to closed-ended questions are easy to measure and use as benchmarks. Rating scale questions, in particular (e.g. where people rate customer service or a product on a scale of 1-10), allow you to gather customer sentiment and compare your progress over time.

3. Nominal questions

A nominal question is a type of survey question that presents people with multiple answer choices; the answers are  non-numerical in nature and don't overlap  (unless you include an ‘all of the above’ option).

Example of nominal question:

What are you using [product name] for?

Business use

Personal use

Both business and personal use

When to use nominal questions

Nominal questions work well when there is a limited number of categories for a given question (see the example above). They’re easy to create graphs and trends from, but the downside is that you may not be offering enough categories for people to reply.

For example, if you ask people what type of browser they’re using and only give them three options to choose from, you may inadvertently alienate everybody who uses a fourth type and now can’t tell you about it.

That said, you can add an open-ended component to a nominal question with an expandable ’other’ category, where respondents can write in an answer that isn’t on the list. This way, you essentially ask an open-ended question that doesn’t limit them to the options you’ve picked.

4. Likert scale questions

The Likert scale is typically a 5- or 7-point scale that evaluates a respondent’s level of agreement with a statement or the intensity of their reaction toward something.

The scale develops symmetrically: the median number (e.g. a 3 on a 5-point scale) indicates a point of neutrality, the lowest number (always 1) indicates an extreme view, and the highest number (e.g. a 5 on a 5-point scale) indicates the opposite extreme view.

Example of a Likert scale question:

Image: The British Museum uses a Likert scale Hotjar survey to gauge visitors’ reactions to their website optimizations

When to use Likert scale questions

Likert-type questions are also known as ordinal questions because the answers are presented in a specific order. Like other multiple-choice questions, Likert scale questions come in handy when you already have some sense of what your customers are thinking. For example, if your open-ended questions uncover a complaint about a recent change to your ordering process, you could use a Likert scale question to determine how the average user felt about the change.

A series of Likert scale questions can also be turned into a matrix question. Since they share identical response options, they can easily be combined into a single matrix, which breaks up the monotony of answering one standalone question after another.

5. Rating scale questions

Rating scale questions are questions where the answers map onto a numeric scale (such as rating customer support on a scale of 1-5, or likelihood to recommend a product from 0-10).

Examples of rating questions:

How likely are you to recommend us to a friend or colleague on a scale of 0-10?

How would you rate our customer service on a scale of 1-5?

When to use rating questions

Whenever you want to assign a numerical value to your survey or visualize and compare trends , a rating question is the way to go.

A typical rating question is used to determine Net Promoter ScoreÂź (NPSÂź) : the question asks customers to rate their likelihood of recommending products or services to their friends or colleagues, and allows you to look at the results historically and see if you're improving or getting worse. Rating questions are also used for customer satisfaction (CSAT) surveys and product reviews.

When you use a rating question in a survey, be sure to explain what the scale means (e.g. 1 for ‘Poor’, 5 for ‘Amazing’). And consider adding a follow-up open-ended question to understand why the user left that score.

Example of a rating question (NPS):

Image: Hotjar's Net Promoter ScoreÂź (NPSÂź) survey template lets you add open-ended follow-up questions so you can understand the reasons behind users' ratings

6. ‘Yes’ or ‘no’ questions

These dichotomous questions are super straightforward, requiring a simple ‘yes’ or ‘no’ reply.

Examples of yes/no questions:

Was this article useful? (Yes/No)

Did you find what you were looking for today? (Yes/No)

When to use ‘yes’ or ‘no’ questions

‘Yes’ and ‘no’ questions are a good way to quickly segment your respondents . For example, say you’re trying to understand what obstacles or objections prevent people from trying your product. You can place a survey on your pricing page asking people if something is stopping them, and follow up with the segment who replied ‘yes’ by asking them to elaborate further.

These questions are also effective for getting your foot in the door: a ‘yes’ or ‘no’ question requires very little effort to answer. Once a user commits to answering the first question, they tend to become more willing to answer the questions that follow, or even leave you their contact information.

Image: Web design agency NerdCow used Hotjar Surveys to add a yes/no survey on The Transport Library’s website, and followed it up with an open-ended question for more insights

70+ more survey question examples

Below is a list of good survey questions, categorized across ecommerce, software as a service (SaaS), and publishing. You don't have to use them word-for-word, but hopefully, this list will spark some extra-good ideas for the surveys you’ll run immediately after reading this article. (Plus, you can create all of them with Hotjar Surveys—stick with us a little longer to find out how. 😉)

📊 9 basic demographic survey questions

Ask these questions when you want context about your respondents and target audience, so you can segment them later. Consider including demographic information questions in your survey when conducting user or market research as well. 

But don’t ask demographic questions just for the sake of it—if you're not going to use some of the data points from these sometimes sensitive questions (e.g. if gender is irrelevant to the result of your survey), move on to the ones that are truly useful for you, business-wise. 

Take a look at the selection of examples below, and keep in mind that you can convert most of them to multiple choice questions:

What is your name?

What is your age?

What is your gender?

What company do you work for?

What vertical/industry best describes your company?

What best describes your role?

In which department do you work?

What is the total number of employees in your company (including all locations where your employer operates)?

What is your company's annual revenue?

🚀 Get started: gather more info about your users with our product-market fit survey template .

đŸ‘„ 20+ effective customer questions

These questions are particularly recommended for ecommerce companies:

Before purchase

What information is missing or would make your decision to buy easier?

What is your biggest fear or concern about purchasing this item?

Were you able to complete the purpose of your visit today?

If you did not make a purchase today, what stopped you?

After purchase

Was there anything about this checkout process we could improve?

What was your biggest fear or concern about purchasing from us?

What persuaded you to complete the purchase of the item(s) in your cart today?

If you could no longer use [product name], what’s the one thing you would miss the most?

What’s the one thing that nearly stopped you from buying from us?

👉 Check out our 7-step guide to setting up an ecommerce post-purchase survey .

Other useful customer questions

Do you have any questions before you complete your purchase?

What other information would you like to see on this page?

What were the three main things that persuaded you to create an account today?

What nearly stopped you from creating an account today?

Which other options did you consider before choosing [product name]?

What would persuade you to use us more often?

What was your biggest challenge, frustration, or problem in finding the right [product type] online?

Please list the top three things that persuaded you to use us rather than a competitor.

Were you able to find the information you were looking for?

How satisfied are you with our support?

How would you rate our service/support on a scale of 0-10? (0 = terrible, 10 = stellar)

How likely are you to recommend us to a friend or colleague? ( NPS question )

Is there anything preventing you from purchasing at this point?

🚀 Get started: learn how satisfied customers are with our expert-built customer satisfaction and NPS survey templates .

Set up a survey in seconds

Use Hotjar's free survey templates to build virtually any type of survey, and start gathering valuable insights in moments.

🛍 30+ product survey questions

These questions are particularly recommended for SaaS companies:

Questions for new or trial users

What nearly stopped you from signing up today?

How likely are you to recommend us to a friend or colleague on a scale of 0-10? (NPS question)

Is our pricing clear? If not, what would you change?

Questions for paying customers

What convinced you to pay for this service?

What’s the one thing we are missing in [product type]?

What's one feature we can add that would make our product indispensable for you?

If you could no longer use [name of product], what’s the one thing you would miss the most?

🚀 Get started: find out what your buyers really think with our pricing plan feedback survey template .

Questions for former/churned customers

What is the main reason you're canceling your account? Please be blunt and direct.

If you could have changed one thing in [product name], what would it have been?

If you had a magic wand and could change anything in [product name], what would it be?

🚀 Get started: find out why customers churn with our free-to-use churn analysis survey template .

Other useful product questions

What were the three main things that persuaded you to sign up today?

Do you have any questions before starting a free trial?

What persuaded you to start a trial?

Was this help section useful?

Was this article useful?

How would you rate our service/support on a scale of 0-10? (0 = terrible, 10 = stellar)

Is there anything preventing you from upgrading at this point?

Is there anything on this page that doesn't work the way you expected it to?

What could we change to make you want to continue using us?

If you did not upgrade today, what stopped you?

What's the next thing you think we should build?

How would you feel if we discontinued this feature?

What's the next feature or functionality we should build?

🚀 Get started: gather feedback on your product with our free-to-use product feedback survey template .

🖋 20+ effective questions for publishers and bloggers

Questions to help improve content

If you could change just one thing in [publication name], what would it be?

What other content would you like to see us offer?

How would you rate this article on a scale of 1–10?

If you could change anything on this page, what would you have us do?

If you did not subscribe to [publication name] today, what was it that stopped you?

🚀 Get started: find ways to improve your website copy and messaging with our content feedback survey template .

New subscriptions

What convinced you to subscribe to [publication] today?

What almost stopped you from subscribing?

What were the three main things that persuaded you to join our list today?

Cancellations

What is the main reason you're unsubscribing? Please be specific.

Other useful content-related questions

What’s the one thing we are missing in [publication name]?

What would persuade you to visit us more often?

How likely are you to recommend us to someone with similar interests? (NPS question)

What’s missing on this page?

What topics would you like to see us write about next?

How useful was this article?

What could we do to make this page more useful?

Is there anything on this site that doesn't work the way you expected it to?

What's one thing we can add that would make [publication name] indispensable for you?

If you could no longer read [publication name], what’s the one thing you would miss the most?

💡 Pro tip: do you have a general survey goal in mind, but are struggling to pin down the right questions to ask? Give Hotjar’s AI for Surveys a go and watch as it generates a survey for you in seconds with questions tailored to the exact purpose of the survey you want to run.

What makes a good survey question?

We’ve run through more than 70 of our favorite survey questions—but what is it that makes a good survey question, well, good ? An effective question is anything that helps you get clear insights and business-critical information about your customers , including

Who your target market is

How you should price your products

What’s stopping people from buying from you

Why visitors leave your website

With this information, you can tailor your website, products, landing pages, and messaging to improve the user experience and, ultimately, maximize conversions .

How to write good survey questions: the DOs and DON’Ts

To help you understand the basics and avoid some rookie mistakes, we asked a few experts to give us their thoughts on what makes a good and effective survey question.

Survey question DOs

✅ DO focus your questions on the customer

It may be tempting to focus on your company or products, but it’s usually more effective to put the focus back on the customer. Get to know their needs, drivers, pain points, and barriers to purchase by asking about their experience. That’s what you’re after: you want to know what it’s like inside their heads and how they feel when they use your website and products.

Rather than asking, “Why did you buy our product?” ask, “What was happening in your life that led you to search for this solution?” Instead of asking, “What's the one feature you love about [product],” ask, “If our company were to close tomorrow, what would be the one thing you’d miss the most?” These types of surveys have helped me double and triple my clients.

✅ DO be polite and concise (without skimping on micro-copy)

Put time into your micro-copy—those tiny bits of written content that go into surveys. Explain why you’re asking the questions, and when people reach the end of the survey, remember to thank them for their time. After all, they’re giving you free labor!

✅ DO consider the foot-in-the-door principle

One way to increase your response rate is to ask an easy question upfront, such as a ‘yes’ or ‘no’ question, because once people commit to taking a survey—even just the first question—they’re more likely to finish it.

✅ DO consider asking your questions from the first-person perspective

Disclaimer: we don’t do this here at Hotjar. You’ll notice all our sample questions are listed in second-person (i.e. ‘you’ format), but it’s worth testing to determine which approach gives you better answers. Some experts prefer the first-person approach (i.e. ‘I’ format) because they believe it encourages users to talk about themselves—but only you can decide which approach works best for your business.

I strongly recommend that the questions be worded in the first person. This helps create a more visceral reaction from people and encourages them to tell stories from their actual experiences, rather than making up hypothetical scenarios. For example, here’s a similar question, asked two ways: “What do you think is the hardest thing about creating a UX portfolio?” versus “My biggest problem with creating my UX portfolio is
” 

The second version helps get people thinking about their experiences. The best survey responses come from respondents who provide personal accounts of past events that give us specific and real insight into their lives.

✅ DO alternate your questions often

Shake up the questions you ask on a regular basis. Asking a wide variety of questions will help you and your team get a complete view of what your customers are thinking.

✅ DO test your surveys before sending them out

A few years ago, Hotjar created a survey we sent to 2,000 CX professionals via email. Before officially sending it out, we wanted to make sure the questions really worked. 

We decided to test them out on internal staff and external people by sending out three rounds of test surveys to 100 respondents each time. Their feedback helped us perfect the questions and clear up any confusing language.

Survey question DON’Ts

❌ DON’T ask closed-ended questions if you’ve never done research before

If you’ve just begun asking questions, make them open-ended questions since you have no idea what your customers think about you at this stage. When you limit their answers, you just reinforce your own assumptions.

There are two exceptions to this rule:

Using a closed-ended question to get your foot in the door at the beginning of a survey

Using rating scale questions to gather customer sentiment (like an NPS survey)

❌ DON’T ask a lot of questions if you’re just getting started

Having to answer too many questions can overwhelm your users. Stick with the most important points and discard the rest.

Try starting off with a single question to see how your audience responds, then move on to two questions once you feel like you know what you’re doing.

How many questions should you ask? There’s really no perfect answer, but we recommend asking as few as you need to ask to get the information you want. In the beginning, focus on the big things:

Who are your users?

What do potential customers want?

How are they using your product?

What would win their loyalty?

❌ DON’T just ask a question when you can combine it with other tools

Don’t just use surveys to answer questions that other tools (such as analytics) can also answer. If you want to learn about whether people find a new website feature helpful, you can also observe how they’re using it through traditional analytics, session recordings , and other user testing tools for a more complete picture.

Don’t use surveys to ask people questions that other tools are better equipped to answer. I’m thinking of questions like “What do you think of the search feature?” with pre-set answer options like ‘Very easy to use,’ ‘Easy to use,’ etc. That’s not a good question to ask. 

Why should you care about what people ‘think’ about the search feature? You should find out whether it helps people find what they need and whether it helps drive conversions for you. Analytics, user session recordings, and user testing can tell you whether it does that or not.

❌ DON’T ask leading questions

A leading question is one that prompts a specific answer. Avoid asking leading questions because they’ll give you bad data. For example, asking, “What makes our product better than our competitors’ products?” might boost your self-esteem, but it won’t get you good information. Why? You’re effectively planting the idea that your own product is the best on the market.

❌ DON’T ask loaded questions

A loaded question is similar to a leading question, but it does more than just push a bias—it phrases the question such that it’s impossible to answer without confirming an underlying assumption.

A common (and subtle) form of loaded survey question would be, “What do you find useful about this article?” If we haven’t first asked you whether you found the article useful at all, then we’re asking a loaded question.

❌ DON’T ask about more than one topic at once

For example, “Do you believe our product can help you increase sales and improve cross-collaboration?”

This complex question, also known as a ‘double-barreled question’, requires a very complex answer as it begs the respondent to address two separate questions at once:

Do you believe our product can help you increase sales?

Do you believe our product can help you improve cross-collaboration?

Respondents may very well answer 'yes', but actually mean it for the first part of the question, and not the other. The result? Your survey data is inaccurate, and you’ve missed out on actionable insights.

Instead, ask two specific questions to gather customer feedback on each concept.

How to run your surveys

The format you pick for your survey depends on what you want to achieve and also on how much budget or resources you have. You can:

Use an on-site survey tool , like Hotjar Surveys , to set up a website survey that pops up whenever people visit a specific page: this is useful when you want to investigate website- and product-specific topics quickly. This format is relatively inexpensive—with Hotjar’s free forever plan, you can even run up to 3 surveys with unlimited questions for free.


Use Hotjar Surveys to embed a survey as an element directly on a page: this is useful when you want to grab your audience’s attention and connect with customers at relevant moments, without interrupting their browsing. (Scroll to the bottom of this page to see an embedded survey in action!) This format is included on Hotjar’s Business and Scale plans—try it out for 15 days with a free Ask Business trial .

Use a survey builder and create a survey people can access in their own time: this is useful when you want to reach out to your mailing list or a wider audience with an email survey (you just need to share the URL the survey lives at). Sending in-depth questionnaires this way allows for more space for people to elaborate on their answers. This format is also relatively inexpensive, depending on the tool you use.

Place survey kiosks in a physical location where people can give their feedback by pressing a button: this is useful for quick feedback on specific aspects of a customer's experience (there’s usually plenty of these in airports and waiting rooms). This format is relatively expensive to maintain due to the material upkeep.

Run in-person surveys with your existing or prospective customers: in-person questionnaires help you dig deep into your interviewees’ answers. This format is relatively cheap if you do it online with a user interview tool or over the phone, but it’s more expensive and time-consuming if done in a physical location.

💡 Pro tip: looking for an easy, cost-efficient way to connect with your users? Run effortless, automated user interviews with Engage , Hotjar’s user interview tool. Get instant access to a pool of 200,000+ participants (or invite your own), and take notes while Engage records and transcribes your interview.

10 survey use cases: what you can do with good survey questions

Effective survey questions can help improve your business in many different ways. We’ve written in detail about most of these ideas in other blog posts, so we’ve rounded them up for you below.

1. Create user personas

A user persona is a character based on the people who currently use your website or product. A persona combines psychographics and demographics and reflects who they are, what they need, and what may stop them from getting it.

Examples of questions to ask:

Describe yourself in one sentence, e.g. “I am a 30-year-old marketer based in Dublin who enjoys writing articles about user personas.”

What is your main goal for using this website/product?

What, if anything, is preventing you from doing it?

👉 Our post about creating simple and effective user personas in four steps highlights some great survey questions to ask when creating a user persona.

🚀 Get started: use our user persona survey template or AI for Surveys to inform your user persona.

2. Understand why your product is not selling

Few things are more frightening than stagnant sales. When the pressure is mounting, you’ve got to get to the bottom of it, and good survey questions can help you do just that.

What made you buy the product? What challenges are you trying to solve?

What did you like most about the product? What did you dislike the most?

What nearly stopped you from buying?

👉 Here’s a detailed piece about the best survey questions to ask your customers when your product isn’t selling , and why they work so well.

🚀 Get started: our product feedback survey template helps you find out whether your product satisfies your users. Or build your surveys in the blink of an eye with Hotjar AI.

3. Understand why people leave your website

If you want to figure out why people are leaving your website , you’ll have to ask questions.

A good format for that is an exit-intent pop-up survey, which appears when a user clicks to leave the page, giving them the chance to leave website feedback before they go.

Another way is to focus on the people who did convert, but just barely—something Hotjar founder David Darmanin considers essential for taking conversions to the next level. By focusing on customers who bought your product (but almost didn’t), you can learn how to win over another set of users who are similar to them: those who almost bought your products, but backed out in the end.

Example of questions to ask:

Not for you? Tell us why. ( Exit-intent pop-up —ask this when a user leaves without buying.)

What almost stopped you from buying? (Ask this post-conversion .)

👉 Find out how HubSpot Academy increased its conversion rate by adding an exit-intent survey that asked one simple question when users left their website: “Not for you? Tell us why.”

🚀 Get started: place an exit-intent survey on your site. Let Hotjar AI draft the survey questions by telling it what you want to learn.

I spent the better half of my career focusing on the 95% who don’t convert, but it’s better to focus on the 5% who do. Get to know them really well, deliver value to them, and really wow them. That’s how you’re going to take that 5% to 10%.

4. Understand your customers’ fears and concerns

Buying a new product can be scary: nobody wants to make a bad purchase. Your job is to address your prospective customers’ concerns, counter their objections, and calm their fears, which should lead to more conversions.

👉 Take a look at our no-nonsense guide to increasing conversions for a comprehensive write-up about discovering the drivers, barriers, and hooks that lead people to converting on your website.

🚀 Get started: understand why your users are tempted to leave and discover potential barriers with a customer retention survey .

5. Drive your pricing strategy

Are your products overpriced and scaring away potential buyers? Or are you underpricing and leaving money on the table?

Asking the right questions will help you develop a pricing structure that maximizes profit, but you have to be delicate about how you ask. Don’t ask directly about price, or you’ll seem unsure of the value you offer. Instead, ask questions that uncover how your products serve your customers and what would inspire them to buy more.

How do you use our product/service?

What would persuade you to use our product more often?

What’s the one thing our product is missing?

👉 We wrote a series of blog posts about managing the early stage of a SaaS startup, which included a post about developing the right pricing strategy —something businesses in all sectors could benefit from.

🚀 Get started: find the sweet spot in how to price your product or service with a Van Westendorp price sensitivity survey or get feedback on your pricing plan .

6. Measure and understand product-market fit

Product-market fit (PMF) is about understanding demand and creating a product that your customers want, need, and will actually pay money for. A combination of online survey questions and one-on-one interviews can help you figure this out.

What's one thing we can add that would make [product name] indispensable for you?

If you could change just one thing in [product name], what would it be?

👉 In our series of blog posts about managing the early stage of a SaaS startup, we covered a section on product-market fit , which has relevant information for all industries.

🚀 Get started: discover if you’re delivering the best products to your market with our product-market fit survey .

7. Choose effective testimonials

Human beings are social creatures—we’re influenced by people who are similar to us. Testimonials that explain how your product solved a problem for someone are the ultimate form of social proof. The following survey questions can help you get some great testimonials.

What changed for you after you got our product?

How does our product help you get your job done?

How would you feel if you couldn’t use our product anymore?

👉 In our post about positioning and branding your products , we cover the type of questions that help you get effective testimonials.

🚀 Get started: add a question asking respondents whether you can use their answers as testimonials in your surveys, or conduct user interviews to gather quotes from your users.

8. Measure customer satisfaction

It’s important to continually track your overall customer satisfaction so you can address any issues before they start to impact your brand’s reputation. You can do this with rating scale questions.

For example, at Hotjar, we ask for feedback after each customer support interaction (which is one important measure of customer satisfaction). We begin with a simple, foot-in-the-door question to encourage a response, and use the information to improve our customer support, which is strongly tied to overall customer satisfaction.

How would you rate the support you received? (1-5 scale)

If 1-3: How could we improve?

If 4-5: What did you love about the experience?
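That branching logic is easy to express. Here is a minimal sketch (illustrative names only, not Hotjar's implementation) of routing respondents to the right follow-up question based on their rating:

```python
# Minimal sketch of the conditional follow-up described above: a 1-3 rating
# triggers an improvement question, a 4-5 rating triggers a positive prompt.

def follow_up_question(rating: int) -> str:
    """Pick the open-ended follow-up for a 1-5 support rating."""
    if not 1 <= rating <= 5:
        raise ValueError("rating must be between 1 and 5")
    return "How could we improve?" if rating <= 3 else "What did you love about the experience?"

print(follow_up_question(2))  # How could we improve?
print(follow_up_question(5))  # What did you love about the experience?
```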

👉 Our beginner’s guide to website feedback goes into great detail about how to measure customer service, NPS , and other important success metrics.

🚀 Get started: gauge short-term satisfaction level with a CSAT survey .

9. Measure word-of-mouth recommendations

Net Promoter Score is a measure of how likely your customers are to recommend your products or services to their friends or colleagues. NPS is a higher bar than customer satisfaction because customers have to be really impressed with your product to recommend you.

Example of NPS questions (to be asked in the same survey):

How likely are you to recommend this company to a friend or colleague? (0-10 scale)

What’s the main reason for your score?

What should we do to WOW you?

👉 We created an NPS guide with ecommerce companies in mind, but it has plenty of information that will help companies in other industries as well.

🚀 Get started: measure whether your users would refer you to a friend or colleague with an NPS survey . Then, use our free NPS calculator to crunch the numbers.
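The arithmetic behind the score itself is simple: respondents who answer 9-10 are promoters, 7-8 are passives, and 0-6 are detractors, and NPS is the percentage of promoters minus the percentage of detractors. A minimal sketch of that standard calculation (with made-up scores for illustration) looks like this:

```python
# Minimal sketch of the standard NPS arithmetic: promoters score 9-10,
# detractors score 0-6, and NPS = %promoters - %detractors.

def net_promoter_score(scores: list[int]) -> float:
    """Compute NPS from 0-10 'how likely are you to recommend us?' answers."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

sample_scores = [10, 9, 8, 7, 10, 6, 9, 3, 10, 8]
print(net_promoter_score(sample_scores))  # 30.0 -> 5 promoters, 2 detractors out of 10
```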

10. Redefine your messaging

How effective is your messaging? Does it speak to your clients' needs, drives, and fears? Does it speak to your strongest selling points?

Asking the right survey questions can help you figure out what marketing messages work best, so you can double down on them.

What attracted you to [brand or product name]?

Did you have any concerns before buying [product name]?

Since you purchased [product name], what has been the biggest benefit to you?

If you could describe [brand or product name] in one sentence, what would you say?

What is your favorite thing about [brand or product name]?

How likely are you to recommend this product to a friend or colleague? (NPS question)

👉 We talk about positioning and branding your products in a post that’s part of a series written for SaaS startups, but even if you’re not in SaaS (or you’re not a startup), you’ll still find it helpful.

Have a question for your customers? Ask!

Feedback is at the heart of deeper empathy for your customers and a more holistic understanding of their behaviors and motivations. And luckily, people are more than ready to share their thoughts about your business— they're just waiting for you to ask them. Deeper customer insights start right here, with a simple tool like Hotjar Surveys.

Build surveys faster with AIđŸ”„

Use AI in Hotjar Surveys to build your survey, place it on your website or send it via email, and get the customer insight you need to grow your business.

FAQs about survey questions

How many people should I survey? What should my sample size be?

A good rule of thumb is to aim for at least 100 replies that you can work with.

You can use our  sample size calculator  to get a more precise answer, but understand that collecting feedback is research, not experimentation. Unlike experimentation (such as A/B testing ), all is not lost if you can’t get a statistically significant sample size. In fact, as little as ten replies can give you actionable information about what your users want.
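If you do want a statistically grounded target, the arithmetic behind most sample size calculators is straightforward. The sketch below uses the common n = zÂČp(1−p)/eÂČ formula with an optional finite-population correction; it is an illustration of that textbook formula, not the calculator linked above:

```python
# Minimal sketch of the standard sample-size formula n = z^2 * p * (1 - p) / e^2,
# with an optional finite-population correction. Defaults assume 95% confidence,
# a +/-5 point margin of error, and the most conservative proportion (0.5).
import math

def required_sample_size(confidence_z: float = 1.96,
                         margin_of_error: float = 0.05,
                         proportion: float = 0.5,
                         population: int | None = None) -> int:
    """Estimate how many completed responses a survey needs."""
    n = (confidence_z ** 2) * proportion * (1 - proportion) / (margin_of_error ** 2)
    if population is not None:
        # Finite-population correction for small audiences
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)

print(required_sample_size())                 # 385 for a large population
print(required_sample_size(population=1000))  # 278 for a 1,000-person audience
```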

How many questions should my survey have?

There’s no perfect answer to this question, but we recommend asking as few as you need to ask in order to get the information you want. Remember, you’re essentially asking someone to work for free, so be respectful of their time.

Why is it important to ask good survey questions?

A good survey question is asked in a precise way at the right stage in the customer journey to give you insight into your customers’ needs and drives. The qualitative data you get from survey responses can supplement the insight you can capture through other traditional analytics tools (think Google Analytics) and behavior analytics tools (think heatmaps and session recordings , which visualize user behavior on specific pages or across an entire website).

The format you choose for your survey—in-person, email, on-page, etc.—is important, but if the questions themselves are poorly worded you could waste hours trying to fix minimal problems while ignoring major ones a different question could have uncovered. 

How do I analyze open-ended survey questions?

A big pile of  qualitative data  can seem intimidating, but there are some shortcuts that make it much easier to analyze. We put together a guide for  analyzing open-ended questions in 5 simple steps , which should answer all your questions.

But the fastest way to analyze open questions is to use the automated summary report with Hotjar AI in Surveys . AI turns the complex survey data into:

Key findings

Actionable insights

Will sending a survey annoy my customers?

Honestly, the real danger is  not  collecting feedback. Without knowing what users think about your page and  why  they do what they do, you’ll never create a user experience that maximizes conversions. The truth is, you’re probably already doing something that bugs them more than any survey or feedback button would.

If you’re worried that adding an on-page survey might hurt your conversion rate, start small and survey just 10% of your visitors. You can stop surveying once you have enough replies.


Writing survey questions

Perhaps the most important part of the survey process is the creation of questions that accurately measure the opinions, experiences and behaviors of the public. Accurate random sampling will be wasted if the information gathered is built on a shaky foundation of ambiguous or biased questions. Creating good measures involves both writing good questions and organizing them to form the questionnaire.

Questionnaire design is a multistage process that requires attention to many details at once. Designing the questionnaire is complicated because surveys can ask about topics in varying degrees of detail, questions can be asked in different ways, and questions asked earlier in a survey may influence how people respond to later questions. Researchers are also often interested in measuring change over time and therefore must be attentive to how opinions or behaviors have been measured in prior surveys.

Surveyors may conduct pilot tests or focus groups in the early stages of questionnaire development in order to better understand how people think about an issue or comprehend a question. Pretesting a survey is an essential step in the questionnaire design process to evaluate how people respond to the overall questionnaire and specific questions, especially when questions are being introduced for the first time.

For many years, surveyors approached questionnaire design as an art, but substantial research over the past forty years has demonstrated that there is a lot of science involved in crafting a good survey questionnaire. Here, we discuss the pitfalls and best practices of designing questionnaires.

Question development

There are several steps involved in developing a survey questionnaire. The first is identifying what topics will be covered in the survey. For Pew Research Center surveys, this involves thinking about what is happening in our nation and the world and what will be relevant to the public, policymakers and the media. We also track opinion on a variety of issues over time so we often ensure that we update these trends on a regular basis to better understand whether people’s opinions are changing.

At Pew Research Center, questionnaire development is a collaborative and iterative process where staff meet to discuss drafts of the questionnaire several times over the course of its development. We frequently test new survey questions ahead of time through qualitative research methods such as focus groups, cognitive interviews, pretesting (often using an online, opt-in sample), or a combination of these approaches. Researchers use insights from this testing to refine questions before they are asked in a production survey, such as on the American Trends Panel (ATP).

Measuring change over time

Many surveyors want to track changes over time in people’s attitudes, opinions and behaviors. To measure change, questions are asked at two or more points in time. A cross-sectional design surveys different people in the same population at multiple points in time. A panel, such as the ATP, surveys the same people over time. However, it is common for the set of people in survey panels to change over time as new panelists are added and some prior panelists drop out. Many of the questions in Pew Research Center surveys have been asked in prior polls. Asking the same questions at different points in time allows us to report on changes in the overall views of the general public (or a subset of the public, such as registered voters, men or Black Americans), or what we call “trending the data”.

When measuring change over time, it is important to use the same question wording and to be sensitive to where the question is asked in the questionnaire to maintain a similar context as when the question was asked previously (see  question wording  and  question order  for further information). All of our survey reports include a topline questionnaire that provides the exact question wording and sequencing, along with results from the current survey and previous surveys in which we asked the question.

The Center’s transition from conducting U.S. surveys by live telephone interviewing to an online panel (around 2014 to 2020) complicated some opinion trends, but not others. Opinion trends that ask about sensitive topics (e.g., personal finances or attending religious services ) or that elicited volunteered answers (e.g., “neither” or “don’t know”) over the phone tended to show larger differences than other trends when shifting from phone polls to the online ATP. The Center adopted several strategies for coping with changes to data trends that may be related to this change in methodology. If there is evidence suggesting that a change in a trend stems from switching from phone to online measurement, Center reports flag that possibility for readers to try to head off confusion or erroneous conclusions.

Open- and closed-ended questions

One of the most significant decisions that can affect how people answer questions is whether the question is posed as an open-ended question, where respondents provide a response in their own words, or a closed-ended question, where they are asked to choose from a list of answer choices.

For example, in a poll conducted after the 2008 presidential election, people responded very differently to two versions of the question: “What one issue mattered most to you in deciding how you voted for president?” One was closed-ended and the other open-ended. In the closed-ended version, respondents were provided five options and could volunteer an option not on the list.

When explicitly offered the economy as a response, more than half of respondents (58%) chose this answer; only 35% of those who responded to the open-ended version volunteered the economy. Moreover, among those asked the closed-ended version, fewer than one-in-ten (8%) provided a response other than the five they were read. By contrast, fully 43% of those asked the open-ended version provided a response not listed in the closed-ended version of the question. All of the other issues were chosen at least slightly more often when explicitly offered in the closed-ended version than in the open-ended version. (Also see  “High Marks for the Campaign, a High Bar for Obama”  for more information.)


Researchers will sometimes conduct a pilot study using open-ended questions to discover which answers are most common. They will then develop closed-ended questions based on that pilot study that include the most common responses as answer choices. In this way, the questions may better reflect what the public is thinking and how they view a particular issue, or bring to light issues the researchers were not aware of.
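
To make this workflow concrete, here is a minimal sketch in Python: it tallies made-up open-ended pilot answers and promotes the most frequent ones to closed-ended answer choices. The answers, the light normalization step, and the number of categories kept are all assumptions for illustration, not part of the Pew example.

```python
from collections import Counter

# Hypothetical pilot data: open-ended answers to "What issue mattered most to you?"
pilot_answers = [
    "the economy", "Economy", "health care", "the economy",
    "education", "terrorism", "economy", "health care", "taxes",
]

# Normalize lightly so near-duplicates collapse into one category
normalized = [a.strip().lower().removeprefix("the ") for a in pilot_answers]

# The most common responses become the closed-ended answer choices
top_choices = [answer for answer, _ in Counter(normalized).most_common(4)]
closed_ended_options = [c.title() for c in top_choices] + ["Other (please specify)"]

print(closed_ended_options)
# ['Economy', 'Health Care', 'Education', 'Terrorism', 'Other (please specify)']
```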

When asking closed-ended questions, the choice of options provided, how each option is described, the number of response options offered, and the order in which options are read can all influence how people respond. One example of the impact of how categories are defined can be found in a Pew Research Center poll conducted in January 2002. When half of the sample was asked whether it was “more important for President Bush to focus on domestic policy or foreign policy,” 52% chose domestic policy while only 34% said foreign policy. When the category “foreign policy” was narrowed to a specific aspect – “the war on terrorism” – far more people chose it; only 33% chose domestic policy while 52% chose the war on terrorism.

In most circumstances, the number of answer choices should be kept to a relatively small number – just four or perhaps five at most – especially in telephone surveys. Psychological research indicates that people have a hard time keeping more than this number of choices in mind at one time. When the question is asking about an objective fact and/or demographics, such as the religious affiliation of the respondent, more categories can be used. In fact, they are encouraged to ensure inclusivity. For example, Pew Research Center’s standard religion questions include more than 12 different categories, beginning with the most common affiliations (Protestant and Catholic). Most respondents have no trouble with this question because they can expect to see their religious group within that list in a self-administered survey.

In addition to the number and choice of response options offered, the order of answer categories can influence how people respond to closed-ended questions. Research suggests that in telephone surveys respondents more frequently choose items heard later in a list (a “recency effect”), and in self-administered surveys, they tend to choose items at the top of the list (a “primacy” effect).

Because of concerns about the effects of category order on responses to closed-ended questions, many sets of response options in Pew Research Center’s surveys are programmed to be randomized so that questions or items in a list are not presented in the same order to each respondent. Answers to questions are sometimes affected by the questions that precede them. By presenting questions in a different order to each respondent, we ensure that each question gets asked in the same context as every other question the same number of times (e.g., first, last or any position in between). This does not eliminate the potential impact of previous questions on the current question, but it does ensure that this bias is spread randomly across all of the questions or items in the list. For instance, in the example discussed above about what issue mattered most in people’s vote, the order of the five issues in the closed-ended version of the question was randomized so that no one issue appeared early or late in the list for all respondents.

Questions with ordinal response categories – those with an underlying order (e.g., excellent, good, only fair, poor OR very favorable, mostly favorable, mostly unfavorable, very unfavorable) – are generally not randomized because the order of the categories conveys important information to help respondents answer the question. Generally, these types of scales should be presented in order so respondents can easily place their responses along the continuum, but the order can be reversed for some respondents. For example, in one of Pew Research Center’s questions about abortion, half of the sample is asked whether abortion should be “legal in all cases, legal in most cases, illegal in most cases, illegal in all cases,” while the other half of the sample is asked the same question with the response categories read in reverse order, starting with “illegal in all cases.” Again, reversing the order does not eliminate the recency effect but distributes it randomly across the population.
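
As a rough illustration of how such per-respondent randomization might be scripted (outside of any particular survey platform), the sketch below shuffles nominal option lists and reverses an ordinal scale for a random half of respondents. The option lists are examples only.

```python
import random

def present_options(options, ordinal=False, rng=random):
    """Return the answer choices in the order shown to one respondent.

    Nominal lists are fully shuffled; ordinal scales keep their order
    but are reversed for a random half of respondents.
    """
    if ordinal:
        return options if rng.random() < 0.5 else list(reversed(options))
    shuffled = options[:]          # copy so the master list stays untouched
    rng.shuffle(shuffled)
    return shuffled

issues = ["Economy", "Health care", "Education", "Terrorism", "Energy policy"]
abortion_scale = ["Legal in all cases", "Legal in most cases",
                  "Illegal in most cases", "Illegal in all cases"]

print(present_options(issues))                 # random order for this respondent
print(present_options(abortion_scale, True))   # forward or reversed scale
```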

Question wording

The choice of words and phrases in a question is critical in expressing the meaning and intent of the question to the respondent and ensuring that all respondents interpret the question the same way. Even small wording differences can substantially affect the answers people provide.

An example of a wording difference that had a significant impact on responses comes from a January 2003 Pew Research Center survey. When people were asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule,” 68% said they favored military action while 25% said they opposed military action. However, when asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule  even if it meant that U.S. forces might suffer thousands of casualties, ” responses were dramatically different; only 43% said they favored military action, while 48% said they opposed it. The introduction of U.S. casualties altered the context of the question and influenced whether people favored or opposed military action in Iraq.

There has been a substantial amount of research to gauge the impact of different ways of asking questions and how to minimize differences in the way respondents interpret what is being asked. The issues related to question wording are more numerous than can be treated adequately in this short space, but below are a few of the important things to consider:

First, it is important to ask questions that are clear and specific and that each respondent will be able to answer. If a question is open-ended, it should be evident to respondents that they can answer in their own words and what type of response they should provide (an issue or problem, a month, number of days, etc.). Closed-ended questions should include all reasonable responses (i.e., the list of options is exhaustive) and the response categories should not overlap (i.e., response options should be mutually exclusive). Further, it is important to discern when it is best to use forced-choice closed-ended questions (often denoted with a radio button in online surveys) versus “select-all-that-apply” lists (or check-all boxes). A 2019 Center study found that forced-choice questions tend to yield more accurate responses, especially for sensitive questions. Based on that research, the Center generally avoids using select-all-that-apply questions.

It is also important to ask only one question at a time. Questions that ask respondents to evaluate more than one concept (known as double-barreled questions) – such as “How much confidence do you have in President Obama to handle domestic and foreign policy?” – are difficult for respondents to answer and often lead to responses that are difficult to interpret. In this example, it would be more effective to ask two separate questions, one about domestic policy and another about foreign policy.

In general, questions that use simple and concrete language are more easily understood by respondents. It is especially important to consider the education level of the survey population when thinking about how easy it will be for respondents to interpret and answer a question. Double negatives (e.g., do you favor or oppose  not  allowing gays and lesbians to legally marry) or unfamiliar abbreviations or jargon (e.g., ANWR instead of Arctic National Wildlife Refuge) can result in respondent confusion and should be avoided.

Similarly, it is important to consider whether certain words may be viewed as biased or potentially offensive to some respondents, as well as the emotional reaction that some words may provoke. For example, in a 2005 Pew Research Center survey, 51% of respondents said they favored “making it legal for doctors to give terminally ill patients the means to end their lives,” but only 44% said they favored “making it legal for doctors to assist terminally ill patients in committing suicide.” Although both versions of the question are asking about the same thing, the reaction of respondents was different. In another example, respondents have reacted differently to questions using the word “welfare” as opposed to the more generic “assistance to the poor.” Several experiments have shown that there is much greater public support for expanding “assistance to the poor” than for expanding “welfare.”

We often write two versions of a question and ask half of the survey sample one version of the question and the other half the second version. Thus, we say we have two  forms  of the questionnaire. Respondents are assigned randomly to receive either form, so we can assume that the two groups of respondents are essentially identical. On questions where two versions are used, significant differences in the answers between the two forms tell us that the difference is a result of the way we worded the two versions.
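
For readers who want to see the mechanics, here is a hedged sketch of such a split-form experiment: respondents are assigned to form A or B at random, and a two-proportion z statistic (computed by hand rather than with any particular statistics package) gauges whether the difference between the two wordings is larger than chance would explain. All counts are invented.

```python
import math
import random

def assign_form(respondent_ids, rng=random):
    """Randomly split respondents between form A and form B of a question."""
    return {rid: rng.choice(["A", "B"]) for rid in respondent_ids}

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Z statistic for the difference between two sample proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Illustrative numbers: 510 of 1,000 favor on form A vs. 440 of 1,000 on form B
z = two_proportion_z(510, 1000, 440, 1000)
print(round(z, 2))  # |z| > 1.96 suggests the wording difference is significant at p < .05
```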


One of the most common formats used in survey questions is the “agree-disagree” format. In this type of question, respondents are asked whether they agree or disagree with a particular statement. Research has shown that, compared with the better educated and better informed, less educated and less informed respondents have a greater tendency to agree with such statements. This is sometimes called an “acquiescence bias” (since some kinds of respondents are more likely to acquiesce to the assertion than are others). This behavior is even more pronounced when there’s an interviewer present, rather than when the survey is self-administered. A better practice is to offer respondents a choice between alternative statements. A Pew Research Center experiment with one of its routinely asked values questions illustrates the difference that question format can make. Not only does the forced choice format yield a very different result overall from the agree-disagree format, but the pattern of answers between respondents with more or less formal education also tends to be very different.

One other challenge in developing questionnaires is what is called “social desirability bias.” People have a natural tendency to want to be accepted and liked, and this may lead people to provide inaccurate answers to questions that deal with sensitive subjects. Research has shown that respondents understate alcohol and drug use, tax evasion and racial bias. They also may overstate church attendance, charitable contributions and the likelihood that they will vote in an election. Researchers attempt to account for this potential bias in crafting questions about these topics. For instance, when Pew Research Center surveys ask about past voting behavior, it is important to note that circumstances may have prevented the respondent from voting: “In the 2012 presidential election between Barack Obama and Mitt Romney, did things come up that kept you from voting, or did you happen to vote?” The choice of response options can also make it easier for people to be honest. For example, a question about church attendance might include three of six response options that indicate infrequent attendance. Research has also shown that social desirability bias can be greater when an interviewer is present (e.g., telephone and face-to-face surveys) than when respondents complete the survey themselves (e.g., paper and web surveys).

Lastly, because slight modifications in question wording can affect responses, identical question wording should be used when the intention is to compare results to those from earlier surveys. Similarly, because question wording and responses can vary based on the mode used to survey respondents, researchers should carefully evaluate the likely effects on trend measurements if a different survey mode will be used to assess change in opinion over time.

Question order

Once the survey questions are developed, particular attention should be paid to how they are ordered in the questionnaire. Surveyors must be attentive to how questions early in a questionnaire may have unintended effects on how respondents answer subsequent questions. Researchers have demonstrated that the order in which questions are asked can influence how people respond; earlier questions can unintentionally provide context for the questions that follow (these effects are called “order effects”).

One kind of order effect can be seen in responses to open-ended questions. Pew Research Center surveys generally ask open-ended questions about national problems, opinions about leaders and similar topics near the beginning of the questionnaire. If closed-ended questions that relate to the topic are placed before the open-ended question, respondents are much more likely to mention concepts or considerations raised in those earlier questions when responding to the open-ended question.

For closed-ended opinion questions, there are two main types of order effects: contrast effects (where the order results in greater differences in responses) and assimilation effects (where responses are more similar as a result of their order).


An example of a contrast effect can be seen in a Pew Research Center poll conducted in October 2003, a dozen years before same-sex marriage was legalized in the U.S. That poll found that people were more likely to favor allowing gays and lesbians to enter into legal agreements that give them the same rights as married couples when this question was asked after one about whether they favored or opposed allowing gays and lesbians to marry (45% favored legal agreements when asked after the marriage question, but 37% favored legal agreements without the immediate preceding context of a question about same-sex marriage). Responses to the question about same-sex marriage, meanwhile, were not significantly affected by its placement before or after the legal agreements question.


Another experiment embedded in a December 2008 Pew Research Center poll also resulted in a contrast effect. When people were asked “All in all, are you satisfied or dissatisfied with the way things are going in this country today?” immediately after having been asked “Do you approve or disapprove of the way George W. Bush is handling his job as president?”, 88% said they were dissatisfied, compared with only 78% without the context of the prior question.

Responses to presidential approval remained relatively unchanged whether national satisfaction was asked before or after it. A similar finding occurred in December 2004 when both satisfaction and presidential approval were much higher (57% were dissatisfied when Bush approval was asked first vs. 51% when general satisfaction was asked first).

Several studies also have shown that asking a more specific question before a more general question (e.g., asking about happiness with one’s marriage before asking about one’s overall happiness) can result in a contrast effect. Although some exceptions have been found, people tend to avoid redundancy by excluding the more specific question from the general rating.

Assimilation effects occur when responses to two questions are more consistent or closer together because of their placement in the questionnaire. We found an example of an assimilation effect in a Pew Research Center poll conducted in November 2008 when we asked whether Republican leaders should work with Obama or stand up to him on important issues and whether Democratic leaders should work with Republican leaders or stand up to them on important issues. People were more likely to say that Republican leaders should work with Obama when the question was preceded by the one asking what Democratic leaders should do in working with Republican leaders (81% vs. 66%). However, when people were first asked about Republican leaders working with Obama, fewer said that Democratic leaders should work with Republican leaders (71% vs. 82%).

The order questions are asked is of particular importance when tracking trends over time. As a result, care should be taken to ensure that the context is similar each time a question is asked. Modifying the context of the question could call into question any observed changes over time (see  measuring change over time  for more information).

A questionnaire, like a conversation, should be grouped by topic and unfold in a logical order. It is often helpful to begin the survey with simple questions that respondents will find interesting and engaging. Throughout the survey, an effort should be made to keep the survey interesting and not overburden respondents with several difficult questions right after one another. Demographic questions such as income, education or age should not be asked near the beginning of a survey unless they are needed to determine eligibility for the survey or for routing respondents through particular sections of the questionnaire. Even then, it is best to precede such items with more interesting and engaging questions. One virtue of survey panels like the ATP is that demographic questions usually only need to be asked once a year, not in each survey.


7+ Reasons to Use Surveys in Your Dissertation


Writing a dissertation is a serious milestone. Your degree depends on it, so it takes a lot of effort and time to figure out what direction to choose. Everything starts with the topic: you read background literature, consult with your supervisor and seek approval before you start writing the first draft. After that, you need to decide how you will collect the data that is supposed to contribute to the research field.

This is where it gets complicated. If you have never conducted primary research (i.e., worked with human subjects), it can seem quite intimidating. Analyzing existing articles may sound like the safest and easiest option. Yet there might not be enough information for you to claim that your research is in any way novel.

To make sure it is, you might need to conduct primary research, and the survey method is the most widespread tool for doing so. Surveys offer a huge number of advantages, though the specific perks depend on the approach you pursue. So let’s go through all of them before you settle for a dissertation that does nothing but review existing literature.

In quantitative primary research, students analyze the data collected from typical a, b, c, d questionnaires. These provide precise answers and help confirm or reject the formulated hypothesis. For the research to be sound, there are several stages to go through, such as:

  • Discarding irrelevant or subjective questions/answers included in questionnaires.
  • Setting criteria for credible answers.
  • Composing an explanation of how you will manage ethical concerns (for participants and university committee).

However, all this is done to prevent issues in the future. Provided you have taken care of all the points above, you will get to enjoy the following benefits.

Data Collection Is Less Tedious

There are numerous services, like SurveyMonkey, that can help you distribute your questionnaire among potential participants. These platforms simplify the data collection process. You don’t have to arrange calls or convince someone that they can safely share information. Just upload the consent letter each participant has to sign and let the platform guide them further.

Data Analysis Is Fast

In quantitative analysis, all you have to take care of is mainly data entry. It requires focus and accuracy, but the rest can be done with the help of software. Whether it’s ordinary Excel or something like SPSS, you don’t have to reread loads of text. Just make sure you download the collected data from the platform correctly, remove irrelevant fields, and feed the rest to your computer.
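
If you export the responses as a spreadsheet, a few lines of Python with pandas are usually enough for a first pass. The file name and column names below are invented for illustration; adjust them to whatever your platform exports.

```python
import pandas as pd

# Hypothetical export from a survey platform; column names are made up for illustration
df = pd.read_csv("dissertation_survey_export.csv")

# Remove fields that are irrelevant to the analysis (metadata added by the platform)
df = df.drop(columns=["ip_address", "collector_id"], errors="ignore")

# Quick descriptive statistics for numeric items (e.g., 1-7 satisfaction ratings)
print(df.describe())

# Frequency counts for a categorical item, e.g., highest education level
print(df["education_level"].value_counts(dropna=False))
```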


Numbers Rule

Numbers don’t lie (unless you miscalculated them, of course). They give a clear answer: it’s either ‘yes’ or ‘no’. Moreover, they leave more room for creating good visuals and making your paper less boring. Just make sure you explain the numbers properly and compare the results between various graphs and charts.

No Room For Subjectivity

A quantitative dissertation is mostly a technical paper. It’s not about creativity or your ability to impress; it’s about following particular procedures. The analysis is also less abstract.

Qualitative-oriented surveys are about conducting full-fledged personal interviews, working with focus groups, or distributing open-ended questionnaires requiring short but unique answers. Let’s talk about what makes this approach worth trying!


First-Hand Experience

The ability to gain a unique perspective is what distinguishes interviews from other surveys. Closed-ended questions may be too rigid and make participants omit a lot of information that might help the research. In an interview, you can also adjust some of your questions and add more detail to them, thus improving the outcomes.

More Diverse and Honest Answers

When participants are limited by only several options, they might choose something they cannot fully relate to. So, there is no guarantee that the results will be authentic. Meanwhile, with open-ended questions, participants share a lot of details.

Sure, some of them may be less relevant to your topic, but the researcher gains a deeper understanding of the issues lying beneath the topic. Of course, all of it is guaranteed only if the researcher provides anonymity and a safe space for the interviewees to share their thoughts freely.

No Need For Complex Software

In contrast to quantitative analysis, you won’t have to use formulae or learn how to perform complex tests. You might not even need Excel, except for storing some data about your participants. No calculations are needed, which is a relief for those who are not used to working with that kind of data.

Both types of research also have other advantages:

  • With surveys, you have more chances to fill the literature gap you’ve discovered.
  • Primary research may not be quite easy, but it’s highly valued at the doctoral level of education.
  • You receive a lot of new information and stay away from retelling literature that has been published before.
  • Primary research is less boring.

However, there is a must-remember thing: not every supervisor or university committee approves of surveys and primary research in general. It depends on numerous aspects like topic and subject, the conditions of research, your approach to handling human subjects, etc.

It means that the methodology you are going to use should be approved by your professor first. Otherwise, you may have to discard some parts of your draft and lose time gathering data you won’t be able to use. So, take care and good luck!

7+ Reasons to Use Surveys in Your Dissertation FAQ

What are the benefits of using surveys in a dissertation?
Surveys can provide a large amount of data in a short amount of time, they are cost-effective, they can allow for anonymity, they can reach a wide audience, and they can be used to obtain feedback from participants.

How can I ensure that my survey results are accurate?
Ask questions that are clear and concise, make sure there is no bias in the questions, and ensure the sample size and response rate are high enough to provide accurate results.

How can I analyze the survey results?
Depending on the type of survey, various analysis techniques can be used. These include descriptive statistics, inferential statistics, correlation analysis, and regression analysis.

What are the limitations of surveys?
Surveys can be subject to sampling errors, response bias, and interviewer effects. They may also not capture the full range of opinions and attitudes of the population.




Top 15 Demographic Survey Questions for Questionnaire


Suppose you want to conduct market research and understand your target audience. In that case, demographic survey questions are vital to revealing valuable information and insights about the individuals who matter most to your research.

Market researchers can learn more about their target audience’s demographics, interests, and lifestyle choices by asking carefully crafted demographic survey questions. These demographic questions reveal your audience’s complicated makeup and provide vital demographic data that can be useful in developing your marketing strategies and generating unique experiences.


Don’t underestimate the value of demographic surveys; they could be the key to unlocking your audience’s full potential. You can get a deeper insight into your customers and their requirements by asking the correct questions. It allows you to adjust your strategies and develop stronger relationships.

In this blog, we will go through 15 demographic survey question examples that can help you understand your target audience in depth and ease your research.


What are demographic survey questions?

Demographic survey questions are usually a part of market research or market segmentation surveys that give survey creators insights into respondents’ age, gender, or marital status.

Demographic information can provide details about users that other question types might fail to capture. This way, marketers can conduct focused and reliable survey research, with filtered responses from their target audience.

While designing an online survey, including a few demographic questions is essential irrespective of the core topic. These questions should be in line with the purpose of the study and should help draw detailed information from respondents.

A clear understanding of the demographic questions that correspond to a survey is pivotal to getting to the heart of customer feedback and opinions. These questions enable survey creators to compare two or more different demographic segments.


For example, if you’re surveying school children, you are highly unlikely to ask questions about qualifications, marital status, or information about kids.

Demographic survey questions are easily created using multiple-choice questions within a few minutes. The demographic survey includes questions on age, ethnicity, gender, marital status, basic qualifications, employment, household income, and other such parameters.


Why Are Demographic Survey Questions Important?

Demographic survey questions are essential for understanding and connecting with your target audience. The following are some of the reasons why demographic questions are important:


  • Audience segmentation
  • Targeted marketing
  • Decision making
  • Personalization
  • Market research
  • Social impact


How to Design Demographic Survey Questions?

Designing demographic survey questions requires carefully considering the information you want to collect and the unique demographic aspects important to your research or objectives. Following are some key things to focus on while creating beneficial demographic survey questions:

  • Determine research objectives and demographic factors
  • Determine question format
  • Provide comprehensive response options
  • Consider skip logic or branching features
  • Ensure privacy and confidentiality

15 Demographic Survey Questions to Ease Your Research

Demographic survey questions cover a wide range of topics, allowing you to learn more about the respondents’ relevant characteristics. Let’s explore 15 different kinds of important demographic survey questions:

01. Age

This is the first question that comes to mind when discussing demographics. Depending on the accuracy you need, you can offer broad or narrow ranges as answer options.

Age-related demographic answer options are shown as radio buttons. Answer options do not overlap each other, i.e., the range mentioned in each option has to be unique.
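
A tiny sanity check along these lines, assuming the brackets are stored as (low, high) pairs, might look like the following; the age brackets themselves are just an example.

```python
def ranges_overlap(ranges):
    """Return True if any of the (low, high) age brackets overlap."""
    ordered = sorted(ranges)
    return any(prev_high >= low for (_, prev_high), (low, _) in zip(ordered, ordered[1:]))

age_brackets = [(18, 24), (25, 34), (35, 44), (45, 54), (55, 64), (65, 120)]
print(ranges_overlap(age_brackets))  # False: every respondent fits exactly one option
```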

02. Race and Ethnicity

These two are distinct parameters that need to be mentioned separately to get customer feedback. Race refers to physical appearance, while ethnicity refers to culture, historical roots, and geography.

These are highly sensitive questions that need special attention. Provide checkboxes allowing survey respondents to choose multiple options if they wish to. Questions related to ethnicity will be:

  • “Are you of Spanish or Latino origin?” which will usually be a Yes/No radio button. A question on race will follow it.
  • African American
  • Native American

03. Gender or Sex

These demographic questions are as sensitive as race and ethnicity survey questions. There are two ways in which these questions can be asked:

  • Gender? (Be respectful towards respondents’ thoughts regarding gender and subtly ask questions about them.)
  • Prefer not to say


04. Educational Qualification

This very subjective question needs prior evaluation: consider whether many college or school students may take your survey and whether this feedback on qualifications will be useful to you.

After analyzing these factors, add demographic questions on qualification, such as:

  • Less than a high school diploma
  • High school diploma or equivalent degree
  • Bachelor’s degree
  • Master’s degree

05. Marital Status

Questions about relationships and love are often considered less sensitive than questions about race, ethnicity, or sex. But they can be far more complicated and need to be handled carefully too.

Thus, a deep-rooted analysis must go into whether the fact that someone is married or not would hamper the purpose of a survey. If it’s highly critical to know the marital status, only then should you include this demographic survey question.

If you want to segment your target audience on this basis, curate questions in this manner:

  • Unmarried 

06. Employment Information

This demographic question can include various sub-sections of employment status, such as the number of hours worked, the type of job or employer, and other parameters. The most common classification of employment status options is whether or not the respondent is employed.

  • Full-time employment 
  • Part-time employment 
  • Self-employed
  • Student 

Suppose respondents inform you that they’re employed. In that case, you can apply the skip logic feature to eliminate options that aren’t applicable and include options that might provide further insights about the participant’s employment status.
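
A minimal sketch of that kind of skip logic in plain Python (the follow-up questions and routing rules are invented for illustration and are not taken from any particular survey tool):

```python
# Minimal sketch of skip logic: the follow-up shown depends on the employment answer.
# Question wording and routing below are hypothetical.
FOLLOW_UPS = {
    "Full-time employment": "About how many hours do you work in a typical week?",
    "Part-time employment": "About how many hours do you work in a typical week?",
    "Self-employed": "In what industry is your business?",
    "Student": "Are you currently looking for paid work?",
}

def next_question(employment_answer):
    """Route the respondent to the relevant follow-up, or skip the block entirely."""
    return FOLLOW_UPS.get(employment_answer)  # None means the block is skipped

print(next_question("Self-employed"))   # industry follow-up
print(next_question("Unemployed"))      # None -> skip to the next section
```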

07. Family or Household Income

This question type is very similar to the age demographic question. How you ask about household income depends on what you will use the information for and in how much detail you want to receive it. Here’s an example of a household income demographic question:

  • Less than $20,000
  • $21,000 to $30,000
  • $31,000 to $40,000
  • $41,000 to $50,000
  • $51,000 to $60,000
  • Above $60,000

08. Household Size

This type of question asks how many people live in one house. It informs researchers, organizations, and governments about population dynamics, resource allocation, and housing demands.

Provide a set of response alternatives that cover a range of household sizes, and consider providing “Prefer not to say” as a response option to respect survey respondents’ privacy.

  • 6 or more people

09. Parental Status

This demographic question helps to learn about a person’s parental status or if they have children or are expecting one. It gives data on family structures and dynamics, which can help research social developments, educational demands, and household resource allocation. Here’s an example:

  • Expecting a child
  • Not a parent

10. Homeownership

This demographic question asks if respondents own or rent their homes. It is used to better understand people’s or households’ housing situations and helps in housing, real estate, market research, or social planning surveys.


  • I am a homeowner.
  • I am a renter.
  • I live with family or friends.
  • Other (please specify).
  • Prefer not to say.

11. Disability

This demographic question provides information about the rate and distribution of disabilities in the target group. Including a “Prefer not to say” option alongside the other options protects the privacy of people who do not want to reveal their disability status.

  • Yes, I have a disability.
  • No, I do not have a disability.

12. ZIP Code of Residence or Location

If you want to know where a survey respondent lives, collecting ZIP codes is a quick and straightforward approach. If you want to know where your customers are from, you might ask about their geographical location. These kinds of questions can be asked as follows:

  • What is your ZIP code? (Please enter your 5-digit ZIP code)
  • State/Province

13. Religion

The religion demographic question asks about a person’s religion. Respecting different viewpoints and backgrounds is essential when asking it. Make sure respondents feel comfortable choosing whether or not to reveal their religious affiliation.

Include a variety of commonly practiced religions, an “Other” option for respondents to specify their religious affiliation if it is not listed among the provided options, a “None/No religion” option for atheists, agnostics, and secular people, and a “Prefer not to say” option to allow respondents to keep their religious affiliation private.

  • Christianity (Catholic, Protestant, Orthodox, etc.)
  • Other (please specify)
  • None/No religion

14. Language

Language is important for organizations since it allows you to build customer relationships by speaking their language. The Language demographic question seeks to identify persons’ primary language(s).

You can include a language proficiency question depending on the research targets or survey context. This can reveal information about a respondent’s fluency or skill in speaking, reading, and writing in several languages.

  • Mandarin Chinese
  • Native/Fluent
  • Intermediate

15. Political Preferences

The Political Preferences demographic question seeks political and ideological information. To get honest answers, frame the question neutrally. Be careful with any language that might lead respondents toward a biased response.

  • Conservative
  • Libertarian
  • Progressive
  • Green Party
  • Independent


When to ask demographic survey questions in surveys?

When conducting a survey, the appropriate time to ask demographic questions depends on several factors. Here are a few things to keep in mind when deciding when it is appropriate to ask demographic questions:

  • At the beginning of the survey
  • After an introduction
  • Before sensitive or personal questions
  • Before skip logic or branching
  • At the end, if needed

Create Your Demographic Surveys with QuestionPro

QuestionPro enables you to design highly effective and insightful demographic surveys easily. Whether you’re performing market research, academic studies, or gathering data for organizational purposes, QuestionPro gives you the tools and capabilities you need to create surveys that capture crucial demographic information.

  • User-friendly interface
  • Pre-built demographic survey templates
  • Advanced survey features
  • Data security and privacy
  • Analytics and reporting
  • Survey distribution


QuestionPro offers a seamless and efficient way to create demographic surveys. With our platform, you can experience the ease and effectiveness of crafting surveys that gather valuable insights from your target audience. You can also use single ease questions: a single-ease question is a straightforward query that elicits a concise and uncomplicated response.


Sign up free and unlock a wide range of powerful features, customizable options, and robust analytics to provide the demographic insights you need for your research, business, or organizational endeavors. You can use QuestionPro for survey questions like income survey questions, quantitative survey questions, ethnicity survey questions, and life survey questions. Take advantage of this opportunity to take your projects to the next level.



Saudi Journal of Anaesthesia, vol. 16(1), Jan-Mar 2022

How short or long should be a questionnaire for any research? Researchers dilemma in deciding the appropriate questionnaire length

Hunny Sharma

Department of Community and Family Medicine, All India Institute of Medical Sciences, Raipur, Chhattisgarh, India

A questionnaire plays a pivotal role in various surveys. Within the realm of biomedical research, questionnaires serve a role in epidemiological surveys and mental health surveys and help obtain information about knowledge, attitude, and practice (KAP) on various topics of interest. Questionnaires, in a broader perspective, can be of different types, such as self-administered or professionally administered and, according to the mode of delivery, paper-based or electronic media-based. Various studies have been conducted to assess the appropriateness of a questionnaire in a particular field and methods to translate and validate them. But very little is known regarding the appropriate length and number of questions in a questionnaire and what role these have in data quality, reliability, and response rates. Hence, this narrative review explores the critical issue of the appropriate length and number of questions in a questionnaire during questionnaire design.

Introduction

A questionnaire is an essential tool in epidemiological surveys and mental health surveys and to assess knowledge, attitude, and practice (KAP) on a particular topic of interest. In general, it is a set of predefined questions based on the aim of the research.[ 1 ]

Designing a questionnaire is an art which unfortunately is neglected by most researchers.[ 2 ] A well-designed questionnaire not only saves time for a researcher but also helps to obtain relevant information efficiently; designing such a questionnaire, however, is complex and time-consuming.[ 3 , 4 ]

The quality of the data obtained by a specific questionnaire depends on the length and number of questions in the questionnaire, the language and ease of comprehension of the questions, the relevance of the population to which it is administered, and the mode of administration, i.e., the self-administered paper method or the electronic method [ Figure 1 ].[ 5 , 6 ]

Figure 1. Qualities of a well-designed questionnaire

Response rate is defined as the number of people who responded to a question asked divided by the number of total potential respondents. Response rate which is a crucial factor in determining the quality and generalizability of the outcome of the survey depends indirectly on the length and number of questions in a questionnaire.[ 7 , 8 ]
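
As a quick worked example of that definition (with invented numbers):

```python
def response_rate(completed, invited):
    """Share of potential respondents who actually answered, as a percentage."""
    return 100 * completed / invited

print(response_rate(350, 1000))  # 35.0 -> 350 completed questionnaires out of 1,000 sent
```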

Several studies have been conducted to assess the appropriateness of the questionnaire in a particular field and methods to translate and validate them. But very little is known regarding the appropriate length and number of questions in a questionnaire and what role it has in data quality and reliability. Hence, this narrative review is to explore the critical issue of appropriate length and number of questions in a questionnaire while questionnaire designing.

What is a questionnaire

Merriam-Webster defines the questionnaire as “a set of questions for obtaining statistically useful or personal information from individuals,” whereas Collins defines a questionnaire as “a questionnaire is a written list of questions which are answered by a lot of people to provide information for a report or a survey.” The Oxford Learner’s Dictionaries also give a somewhat similar definition, which states that a questionnaire is “a written list of questions that are answered by several people so that information can be collected from the answers.”[ 9 , 10 , 11 ]

Thus, in simpler terms, a questionnaire is a collection of questions that can be used to collect information from various individuals relevant to the research aims.

Where are questionnaires generally applied?

A questionnaire, in general, can be applied in a wide variety of research, either quantitative or qualitative, depending largely on how many open-ended questions are asked and how they are framed.[ 12 ]

Questionnaires are generally applied when a large population has to be assessed or surveyed with relative ease where they play a crucial role in gathering information on the perspectives of individuals in the population.

Questionnaires have a variety of applications in opinion polls, marketing surveys, and politics, whereas in the context of biomedical research they are generally used in epidemiological surveys, mental health surveys, surveys on attitudes to a health service or on health service utilization, and to conduct knowledge, attitude, and practice (KAP) studies on a particular issue or topic of interest.[ 13 , 14 ]

What are the types of questionnaire?

Questionnaires, in general, are of two types: those in paper format and those in electronic format. A questionnaire can further be of two types, i.e., self-administered or professionally administered via interview. The paper format can be administered easily in both self-administered and professionally administered modes via direct administration when the population is relatively small, as it is cumbersome to manage and store physical questionnaires; the paper format can also be administered to a larger population via postal surveys. Electronic questionnaires can be easily administered to a larger population in self-administered mode via Internet-based services like Google Forms, e-mail, SurveyMonkey, or Survey Junkie. For professionally administered questionnaires, professional telephonic services can be utilized to interview a larger population in a shorter duration of time.[ 15 , 16 , 17 ]

What is required to answer individual questions in the questionnaire, or the burden imposed on respondents

As mentioned by Bowling, in general, there are at least four intricate steps required in answering a particular question in a questionnaire; these steps are comprehension, recall of the information asked for by the question from memory, judgment on the link between the question asked and the recalled information, and, at last, communication of the information to the questionnaire or evaluator [ Figure 2 ].[ 18 ]

Figure 2. Steps involved in answering a particular question in the questionnaire

In the case of a self-administered questionnaire, there is also a need for critical reading skills, which are not required in a one-to-one or face-to-face interview; the latter only requires listening and verbal skills to respond to questions in the same language in which they are asked.[ 18 ]

Many other crucial factors play an important role in deciding the utility of a questionnaire in research. One such factor is the literacy of the participants, which is a major limiting factor in self-administered questionnaires. Other factors include the respondent’s age, maturity, and level of understanding and cognition, all of which relate to comprehension of the questions.[ 19 ]

Does the length of the questionnaire matter?

The length and number of items in a questionnaire play a crucial role in questionnaire-based studies or surveys; they have a direct effect on the time taken by the respondent to complete the questionnaire, the cost of the survey or study, the response rate, and the quality of the data obtained.[ 20 ]

As evident from the study conducted by Iglesias and Torgerson in 2000 on the response rate of a mailed questionnaire, an increase in the length of the questionnaire from five pages to seven pages reduced the response rate among women aged 70 years and over but, on the contrary, did not seem to affect the quality of responses to questions.[ 21 ]

Another study, conducted by Koitsalu et al.[ 22 ] in 2018, reported that they were able to increase overall participation and the information gathered through a long questionnaire with the help of prenotification and the use of a reminder, without risking a lower response rate.

Sahlqvist et al.[ 23 ], in 2011, reported that participants were more likely to respond to a short version of the questionnaire than to a long one.

Testing of ultrashort, short, and long surveys of 13, 25, and 75 questions, respectively, by Kost et al.[ 24 ] in 2018 revealed that a shorter survey utilizing a short questionnaire was reliable and produced higher response and completion rates than a long survey.

Bolt, on the other hand, found in 2014, somewhat surprisingly, that reducing the length of a long questionnaire in a physician survey does not necessarily improve the response rate; hence, to improve the response rate among nonresponders, researchers may consider utilizing a drastically shortened version of the questionnaire to obtain some relevant information rather than none.[ 25 ]

But the most interesting finding comes from the web-based survey giant SurveyMonkey, which states that there is a nonlinear relationship between the number of questions in a survey and the time spent answering each question. In other words, the more questions there are in a survey, the less time respondents spend answering each one, which is known as “speeding up” or “satisficing” through the questions. It is also observed that as the length and number of questions asked increase, the nonresponse rate increases. This in turn affects the quantity and reliability of the data gathered.[ 26 ]
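
One practical, if rough, way to screen for such speeding in your own data is to flag respondents whose total completion time falls below a per-question threshold. The threshold and the timing data below are purely illustrative assumptions, not an established standard.

```python
# Rough sketch: flag respondents who "speed" through a questionnaire.
def flag_speeders(completion_times, n_questions, min_seconds_per_question=2.0):
    """Return IDs of respondents who finished faster than the chosen threshold."""
    threshold = n_questions * min_seconds_per_question
    return [rid for rid, secs in completion_times.items() if secs < threshold]

completion_times = {"r01": 240, "r02": 35, "r03": 410}  # total seconds per respondent
print(flag_speeders(completion_times, n_questions=30))  # ['r02']
```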

What happens when respondents lose interest?

When there is a loss of interest, as in the case of a lengthy questionnaire or an extensive interview, bored respondents provide unconsidered and unreliable answers, or it may lead to high nonresponse to questions. On one side, a high nonresponse rate may lead to difficulty in data analysis or an unacceptable reduction in sample size; on the other, unconsidered or unreliable answers may defeat the whole purpose of the research [ Figure 3 ].[ 19 ]

Figure 3. Consequences of loss of interest in research participants

Considerations while using a long questionnaire

While using a long questionnaire, a high nonresponse rate should always be expected; hence, appropriate measures to address the missing data, such as data trimming or data imputation, should be considered, depending on the amount of data missing.[ 27 , 28 ]
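
A minimal sketch of both options in pandas, with an invented two-item dataset, might look like this:

```python
import pandas as pd

# Hypothetical responses with missing values on two 1-5 items
df = pd.DataFrame({"q1": [4, 5, None, 3, None], "q2": [2, None, 4, 4, 5]})

# Option 1: "trimming" - drop respondents with any missing answers
trimmed = df.dropna()

# Option 2: simple imputation - fill gaps with each item's median
imputed = df.fillna(df.median(numeric_only=True))

print(len(trimmed), imputed["q1"].tolist())
```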

Loss of interest can be counteracted by dividing the questionnaire into sections and administering each section separately, which helps avoid respondent fatigue or boredom.[19]

It is generally advised that a telephone interview–based questionnaire be kept short, about 30 minutes, to prevent fatigue or inattention that may adversely affect the quality of the data. For a very long telephone interview, the questions can be divided into sections administered on separate days or in shifts of about 30 minutes each. A long questionnaire is preferably administered through face-to-face interviews.

Designing a questionnaire is an art that requires time and dedication, and a well-designed questionnaire offers the easiest way to measure the relevant information on a topic of interest. Yet this crucial step in biomedical research is often neglected by researchers. This narrative review offers a glimpse of what makes a good questionnaire: roughly 25 to 30 questions that can be administered within 30 minutes, so that participants' interest and attention remain intact. As the number of questions increases, participants tend to speed or satisfice through them, which severely affects quality, reliability, and response rates. If a long questionnaire is essential, it should be divided into sections of 25 to 30 questions each, delivered at different times or on different days. With a questionnaire of more than 30 questions, a larger amount of missing data and higher nonresponse rates must be anticipated, and provisions should be made to address them. Finally, shortening a relatively lengthy questionnaire significantly increases the response rate.

Financial support and sponsorship

Conflicts of interest

There are no conflicts of interest.

Getting Survey Permissions

Instrument Permissions FAQ

Permissions to Use and Reproduce Instruments in a Thesis/Dissertation: Frequently Asked Questions

  • Determine whether you need permission
  • Identify the copyright holder
  • Ask for permission
  • Keep a record
  • What if I can't locate the copyright holder?

Why might I need permission to use an instrument in my thesis/dissertation?

If you want to use surveys, questionnaires, interview questions, tests, measures, or other instruments created by other people, you are required to locate and follow usage permissions. The instrument may be protected by copyright and/or licensing restrictions.

Copyright Protection

Copyright provides authors of original creative work with limited control over the reproduction and distribution of that work. Under United States law, all original expressions that are “fixed in a tangible medium” are automatically protected by copyright at the time of their creation. In other words, it is not necessary to formally state a declaration of copyright, to use the © symbol, or to register with the United States Copyright Office.

Therefore, you must assume that any material you find is copyrighted, unless you have evidence otherwise. This is the case whether you find the instrument openly on the web, in a library database, or reproduced in a journal article. It is your legal and ethical responsibility to obtain permission to use, modify, and/or reproduce the instrument.

If you use and/or reproduce material in your thesis/dissertation beyond the limits outlined by the “fair use” doctrine, which allows for limited use of a work, without first gaining the copyright holder’s permission, you may be infringing copyright.

Licensing/Terms of Use

Some instruments are explicitly distributed under a license agreement or terms of use. Unlike copyright, which applies automatically, users must agree to these terms in order to use the instrument. In exchange for abiding by the terms, the copyright holder grants the licensee specific and limited rights, such as the right to use the instrument in scholarly research, or to reproduce the instrument in a publication.

When you ask a copyright holder for permission to use or reproduce an instrument, you are in effect asking for a license to do those things.

How do I know if I need permission to use instruments in my thesis/dissertation research? (Adapted from Hathcock & Crews)

Follow the four-step process below:

1. Determine whether you need permission

There are different levels of permissions for using an instrument:

a)  No permission required

i. The copyright holder has explicitly licensed the use of the instrument for any purpose, without requiring you to obtain permission.

ii. If you are only using a limited portion of the instrument, your use may be covered under the Fair Use Doctrine. See more here:  https://uhcl.libguides.com/copyright/fairuse .

iii. If the instrument was developed by the federal government or under a government grant it may be in the public domain, and permission is therefore not required.

iv. If the document was created before 1977, it may be in the public domain, and permission is therefore not required. See the Stanford Public Domain Flowchart at https://fairuse.stanford.edu/wp-content/uploads/2014/06/publicdomainflowchart.png .

b)  Non-commercial/educational use: The copyright holder has licensed the instrument only for non-commercial research or educational purposes, without requiring you to obtain the permission of the copyright holder. Any other usage requires permission.

Sample Permission for Educational Use:

Test content may be reproduced and used for non-commercial research and educational purposes without seeking written permission. Distribution must be controlled, meaning only to the participants engaged in the research or enrolled in the educational activity. Any other type of reproduction or distribution of test content is not authorized without written permission from the author and publisher. Always include a credit line that contains the source citation and copyright owner when writing about or using any test.

Source: Marta Soto, “How Permissions Work in PsycTests,” APA Databases & Electronic Resources Blog. American Psychological Association. http://blog.apapubs.org/2016/12/21/how-permissions-work-in-psyctests/ .

Even if you are not required to obtain permission to use the instrument, consider contacting the author for ideas on how to administer and analyze the test. Authors often welcome further use of their work, and may request you send them a copy of your final work.

c)  Permission required:  Instruments that require you to obtain the permission of the copyright holder, regardless of whether the use is for educational or commercial purposes. This may be because the copyright holder

  • has important directions for how the test must be administered and analyzed
  • wants to make sure the most current version is being used
  • charges users a fee in order to administer the test

If you cannot locate the permissions, you are required to identify the copyright holder and contact them to ask about permission to use the instrument.

2. Identify the copyright holder (Adapted from Crews)

The next step is to identify who owns the copyright. The copyright holder is usually the creator of the work. If the copyright owner is an individual, you will need to do the usual Internet and telephone searches to find the person. Be ready to introduce yourself and to explain carefully what you are seeking.

Some authors transfer copyright to another entity, such as a journal publisher or an organization. In these cases, you must obtain permission from that entity to use or reproduce the instrument. You can often identify the owner by locating a © copyright notice, but as mentioned above, not all copyrighted works have a notice.

Check the following sources to locate instruments, their copyright holders, and their permission statements:

  • Mental Measurements Yearbook: https://uhcl.idm.oclc.org/login?url=https://search.ebscohost.com/login.aspx?authtype=ip,uid&profile=ehost&defaultdb=mmt
  • PsycTESTS: https://uhcl.idm.oclc.org/login?url=https://search.ebscohost.com/login.aspx?authtype=ip,uid&profile=ehost&defaultdb=pst
  • Neumann Library Tests & Measures help: https://uhcl.libguides.com/PSYC/tests
  • Library assistance e-mail: [email protected]

​You may need to contact the author or publisher directly to find out who owns the copyright. Publishers often have websites that prescribe a method for contacting the copyright owner, so search the publisher website for a permissions department or contact person. Be sure to confirm the exact name and address of the addressee, and call/e-mail the person or publishing house to confirm the copyright ownership.

  • The copyright owner may prefer or require that permission requests be made using a certain medium (e.g., fax, mail, or a web form). If you do not follow these instructions, you may not get a reply.
  • Telephone calls may be the quickest method for getting a response from the owner, but they should be followed up with a letter or e-mail in order to document the exact scope of the permission. E-mail permissions are legally acceptable in most cases, but getting a genuine signature is usually best.
  • The request should be sent to the individual copyright holder (when applicable) or permissions department of the publisher in question. Be sure to include your return address, telephone and fax numbers, e-mail address, and the date at the top of your letter or message. If you send the permission request by mail, include a self-addressed, stamped return envelope.
  • Make the process easy for the copyright owner. The less effort the owner has to put forth, the more likely you will get the permission you need. If you are using conventional mail, include a second copy of your request for the owner’s records.
  • State clearly who you are, your institutional affiliation (e.g., University of Houston-Clear Lake), and the general nature of your thesis/dissertation research.

Do not send permissions letters to all possible rightsholders simultaneously. Taking the time to find the person who most likely holds the copyright will better yield success. If you do not have much information about who actually owns the copyright, be honest with your contacts, and they may be able to help you find the right person.

3. Ask for permission (Adapted from Crews)

Once you have identified the copyright holder, you must determine the scope of your permission request. Some copyright owners furnish their own permission form that you may download from their website.

If the copyright owner does not provide a permission agreement form, you may write your own letter (click here to download a template). Requests should be made in writing; e-mail is fine for this purpose. An effective letter will include detailed information about your request for permission to use the work. Include the following information:

  • Who: Introduce yourself. Tell who you are, your degree program, and a brief overview of your research.
  • Why: Tell why you are contacting that person or entity for permission.
  • What: Be as specific as possible when you cite and describe the instrument you wish to use. Include whether you plan to use the entire instrument, or if you plan on modifying or adapting any of the questions.
  • How: Tell how you plan to use the instrument. Specify the parameters of your research study, and include any important information about the way you will administer the instrument and/or analyze the results.
  • When: State the expected length of the project and the time needed to complete the thesis/dissertation.

Important: Obtaining permission to use an instrument is not the same as obtaining permission to reproduce the instrument in your appendix. If you intend to provide a copy of the instrument in an appendix, ask for separate permission to do that.

Click here to download a template letter. Feel free to modify and adapt this template for your purposes.

4. Keep a record

After securing permission to use and/or reproduce the instrument, save a copy of the correspondence and the agreement. Documentation allows you to demonstrate to others that you have the legal right to use the owner's work. In the unlikely event that your use of the work is ever challenged, you will need to demonstrate your good faith efforts. That challenge could arise far in the future, so keep a permanent file of the records. Moreover, you might need to contact that same copyright owner again for a later use of the work, and your notes from the past will make the task easier.

Upload a copy of your permission letter in Vireo with your thesis/dissertation, or include it as an appendix in the document itself.

What if I can't locate the copyright holder? (Adapted from Hathcock, Crews, & Pantalony)

In some cases, you may never get a response from the copyright holder or you may never even be able to identify who they are or how to contact them. It can be difficult to know how to proceed when you reach a dead end. Unfortunately, no matter how diligently you have tried to get permission, these efforts cannot completely eliminate the risk of infringement should you proceed to use the work.

Assuming you have diligently investigated your alternatives, do not want to change your project, and remain in need of the elusive copyright permission, the remaining alternative is to explore a risk-benefit analysis. You need to balance the benefits of using that particular material in your given project against the risks that a copyright owner may see your project, identify the materials, and assert the owner’s legal claims against you. Numerous factual circumstances may be important in this evaluation. The “benefit” may depend upon the importance of your project and the importance of using that particular material. The “risks” may depend upon whether your project will be published or available on the Internet for widespread access—as theses and dissertations will. You ought to investigate whether the work is registered with the U.S. Copyright Office and weigh the thoroughness of your search for the copyright owner and your quest for appropriate permission.

Undertaking this analysis can be sensitive and must be advanced with caution and with careful documentation. You may be acting to reduce the risk of liability, but you have not eliminated liability. A copyright owner may still hold rights to the material. Members of the University of Houston-Clear Lake community should consult with their chair or the Neumann Library to discuss their options.

Portions of this FAQ are used and adapted from:

Crews, Kenneth and Rina Elster Pantalony. “Special Cases.” Columbia University Copyright Advisory Services. https://copyright.columbia.edu/basics/special-cases.html . Licensed under Creative Commons Attribution 4.0 International (CC BY 4.0).

Crews, Kenneth. “Asking for Permission.” Columbia University Advisory Services. https://copyright.columbia.edu/basics/permissions-and-licensing.html . Licensed under Creative Commons Attribution 4.0 International (CC BY 4.0).

Hathcock, April. “Getting Permission.” NYU Libraries Copyright Library Guide, https://guides.nyu.edu/c.php?g=276785&p=1845968 . Licensed under Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0).



Research Paper Appendix | Example & Templates

Published on August 4, 2022 by Tegan George and Kirsten Dingemanse. Revised on July 18, 2023.

An appendix is a supplementary document that facilitates your reader’s understanding of your research but is not essential to your core argument. Appendices are a useful tool for providing additional information or clarification in a research paper , dissertation , or thesis without making your final product too long.

Appendices help you provide more background information and nuance about your thesis or dissertation topic without disrupting your text with too many tables and figures or other distracting elements.

We’ve prepared some examples and templates for you, for inclusions such as research protocols, survey questions, and interview transcripts. All are worthy additions to an appendix. You can download these in the format of your choice below.


Table of contents

  • What is an appendix in a research paper?
  • What to include in an appendix
  • How to format an appendix
  • How to refer to an appendix
  • Where to put your appendices
  • Other components to consider
  • Appendix checklist
  • Frequently asked questions about appendices

What is an appendix in a research paper?

In the main body of your research paper, it’s important to provide clear and concise information that supports your argument and conclusions . However, after doing all that research, you’ll often find that you have a lot of other interesting information that you want to share with your reader.

While including it all in the body would make your paper too long and unwieldy, this is exactly what an appendix is for.

As a rule of thumb, any detailed information that is not immediately needed to make your point can go in an appendix. This helps to keep your main text focused but still allows you to include the information you want to include somewhere in your paper.

What to include in an appendix

An appendix can be used for different types of information, such as:

  • Supplementary results : Research findings  are often presented in different ways, but they don’t all need to go in your paper. The results most relevant to your research question should always appear in the main text, while less significant results (such as detailed descriptions of your sample or supplemental analyses that do not help answer your main question), can be put in an appendix.
  • Statistical analyses : If you conducted statistical tests using software like Stata or R, you may also want to include the outputs of your analysis in an appendix (see the sketch after this list for one way to export such output).
  • Further information on surveys or interviews : Written materials or transcripts related to things such as surveys and interviews can also be placed in an appendix.
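
As one illustration of the statistical analyses point above, here is a minimal sketch, written in Python rather than Stata or R, of saving a regression summary to a plain-text file that can then be placed in an appendix. The variables, example data, and output filename are invented for this sketch, and it assumes the NumPy, pandas, and statsmodels packages are installed.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Invented example data: predicting a satisfaction score from two survey items.
rng = np.random.default_rng(0)
data = pd.DataFrame({
    "workload": rng.integers(1, 8, size=100),
    "autonomy": rng.integers(1, 8, size=100),
})
data["satisfaction"] = (
    4 + 0.5 * data["autonomy"] - 0.3 * data["workload"]
    + rng.normal(0, 1, size=100)
)

# Fit an ordinary least squares model and write the full summary to a text
# file that can be dropped into an appendix (e.g., "Appendix C. Regression output").
X = sm.add_constant(data[["workload", "autonomy"]])
model = sm.OLS(data["satisfaction"], X).fit()

with open("appendix_regression_output.txt", "w") as f:
    f.write(model.summary().as_text())
```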

You can opt to have one long appendix, but separating components (like interview transcripts, supplementary results, or surveys ) into different appendices makes the information simpler to navigate.

How to format an appendix

Here are a few tips to keep in mind:

  • Always start each appendix on a new page.
  • Assign it both a number (or letter) and a clear title, such as “Appendix A. Interview transcripts.” This makes it easier for your reader to find the appendix, as well as for you to refer back to it in your main text.
  • Number and title the individual elements within each appendix (e.g., “Transcripts”) to make it clear what you are referring to. Restart the numbering in each appendix at 1.

How to refer to an appendix

It is important that you refer to each of your appendices at least once in the main body of your paper. This can be done by mentioning the appendix and its number or letter, either in parentheses or within the main part of a sentence. It’s also possible to refer to a particular component of an appendix.

Example 1. Referring to an entire appendix: “Appendix B presents the correspondence exchanged with the fitness boutique.”

Example 2. Referring to an appendix component: “These results (see Appendix 2, Table 1) show that …”


It is common to capitalize “Appendix” when referring to a specific appendix, but it is not mandatory. The key is just to make sure that you are consistent throughout your entire paper, similarly to consistency in  capitalizing headings and titles in academic writing .

However, note that lowercase should always be used if you are referring to appendices in general. For instance, “The appendices to this paper include additional information about both the survey and the interviews .”

Where to put your appendices

The simplest option is to add your appendices after the main body of your text, after you finish citing your sources in the citation style of your choice. If this is what you choose to do, simply continue with the next page number. Another option is to put the appendices in a separate document that is delivered with your dissertation.

Location of appendices

Remember that any appendices should be listed in your paper’s table of contents .

Other components to consider

There are a few other supplementary components related to appendices that you may want to consider. These include:

  • List of abbreviations : If you use a lot of abbreviations or field-specific symbols in your dissertation, it can be helpful to create a list of abbreviations .
  • Glossary : If you utilize many specialized or technical terms, it can also be helpful to create a glossary .
  • Tables, figures and other graphics : You may find you have too many tables, figures, and other graphics (such as charts and illustrations) to include in the main body of your dissertation. If this is the case, consider adding a figure and table list .

Checklist: Appendix

All appendices contain information that is relevant, but not essential, to the main text.

Each appendix starts on a new page.

I have given each appendix a number and clear title.

I have assigned any specific sub-components (e.g., tables and figures) their own numbers and titles.

My appendices are easy to follow and clearly formatted.

I have referred to each appendix at least once in the main text.

Your appendices look great! Use the other checklists to further improve your thesis.


Frequently asked questions about appendices

Yes, if relevant you can and should include APA in-text citations in your appendices. Use author-date citations as you do in the main text.

Any sources cited in your appendices should appear in your reference list . Do not create a separate reference list for your appendices.

An appendix contains information that supplements the reader’s understanding of your research but is not essential to it. For example:

  • Interview transcripts
  • Questionnaires
  • Detailed descriptions of equipment

Something is only worth including as an appendix if you refer to information from it at some point in the text (e.g., quoting from an interview transcript). If you don’t, it should probably be removed.

When you include more than one appendix in an APA Style paper , they should be labeled “Appendix A,” “Appendix B,” and so on.

When you only include a single appendix, it is simply called “Appendix” and referred to as such in the main text.

Appendices in an APA Style paper appear right at the end, after the reference list and after your tables and figures if you’ve also included these at the end.

You may have seen both “appendices” or “appendixes” as pluralizations of “ appendix .” Either spelling can be used, but “appendices” is more common (including in APA Style ). Consistency is key here: make sure you use the same spelling throughout your paper.

Source: George, T. & Dingemanse, K. (2023, July 18). Research Paper Appendix | Example & Templates. Scribbr. Retrieved March 25, 2024, from https://www.scribbr.com/dissertation/appendix/


Theses & Dissertations Overview

Guiding Principles: The Graduate School Style Manual

The  Graduate School Style Manual  establishes a set of standards designed to ensure consistency, legibility, and professional appearance of theses and dissertations. These standards are not intended to comprehensively address all the minutiae of style and formatting. Students should refer to their academic department’s choice of style manuals for such specifics.  Note:  You must follow these guidelines to format your thesis/dissertation for the first format check. If it is apparent that you have not made a reasonable attempt to do so, your document will not be checked and your graduation may be delayed until a future semester.

Preparing for Electronic Submission

  • Choosing a software package
  • Creating single or multiple files
  • Formatting your document
  • Front Matter Templates
  • Converting your document to PDF
  • Using Adobe Acrobat

Submitting your documents

  • ETD Format Check Submission  You must have a format check done before you can submit your official copy.
  • ETD Final Submission   Please note: Proof-reading changes cannot be made to the document once it has been accepted as final. Please make sure that you are happy with the document you submit and do not submit until you are sure no additional edits to the content will be needed. Please do NOT create a brand-new ProQuest account when it is time to submit the final version of your thesis or dissertation. Simply log into your original ProQuest account, visit the document upload area, and replace the PDF that is there with the revised and final PDF.

Doctoral Students Only

Survey of Earned Doctorates:   The Survey of Earned Doctorates (SED)  gathers data from all doctorate graduates each year. The responses become part of the Doctorate Records File, a virtually complete databank on doctorate recipients from 1920 to the present. These data serve policymakers at the federal, state, local and university levels.  Privacy : Information you provide is kept confidential and is safeguarded in accordance with the Privacy Act of 1974, as amended. The survey data are reported only in aggregate form or in a manner that does not identify information about any individual. Your information is vital to future program development and funding. Please register for the survey on the  SED Registration Website . You will receive a pin and password to complete the secure survey.

Questions About Formatting Your Thesis or Dissertation

If you cannot find answers to your formatting questions in the Graduate School Style Manual, you may address your question to [email protected].

References

  • The Graduate School Style Manual  All ETDs must conform to the Graduate School style requirements
  • Human Subjects Guidelines  The Human Subjects Office at the Office of the Vice-President for Research Policies and Procedures for Electronic Theses and Dissertations
  • ETD Library  Theses and Dissertations that have been submitted to the Graduate School are available via the UGA Libraries Galileo System; any inquiries regarding the ETD Library site and its contents should be directed to the Library (note that it may take some time for the Library system to process new theses and dissertations after they have been accepted by the Graduate School)

After You’ve Finished

Graduating students who wish to purchase copies of their work have a choice to either order bound copies from ProQuest or UGA Print and Copy Services. If you wish to order from ProQuest, you may place your order during the creation of your ProQuest account. If you would like copies of your thesis or dissertation from UGA,  Print and Copy Services at the Tate Student Center  will print and bind your thesis or dissertation in the traditional black hard cover with gold lettering, or the format of your choice. Please note that UGA Print and Copy Services will not have access to your document until the Graduate School has approved the  final  copy of your thesis or dissertation.

