
1 Introduction to Quantitative Analysis

Chris Bailey, PhD, CSCS, RSCC

Chapter Learning Objectives

  • Understand the justification for quantitative analysis
  • Learn how data and the scientific process can be used to inform decisions
  • Learn and differentiate between some of the commonly used terminology in quantitative analysis
  • Introduce the functions of quantitative analysis
  • Introduce some of the technology used in quantitative analysis

Why is quantitative analysis important?

Let’s begin by answering the “who cares” question. When will you ever use any of this? As we will soon demonstrate, you likely already are using it, though perhaps not in the most objective way. Quantitative data are objective in nature, which is a benefit when we are trying to make decisions based on data without the influence of anything else. Much of what we will learn in quantitative analysis enables us to become more objective, so that our individual experiences, traditions, and biases [1] do not influence our decisions.

No matter what career path you are on, you will need to be able to justify your actions and decisions with data. Whether you are a sport performance professional, personal trainer, or physical therapist, you are likely tracking progression and using the data to influence your future plans for athletes, clients, or patients. Data from the individuals you work with may justify your current plan, or they could illustrate an area that needs to be adjusted to meet certain goals.

If we are not collecting data, we have to rely on our short memories and subjective feelings. These can be biased whether we realize it or not. For example, as a physical therapist (PT), we want our rehabilitation plan to work, so we may only see and remember the positives and miss the negatives. If we had a set regimen of tests, we could look at progress in a more objective way that is less likely to be influenced by our own biases.


Let’s look at an example of how you might use analysis on a regular basis. In this scenario, your cell phone is outdated, has a cracked screen, and takes terrible photos compared to currently available options. What factors would you consider when thinking about your new phone purchase?

Here are some ways you might approach your decision:

  • Price
  • Brand loyalty
  • Read reviews
  • Watch YouTube video reviews
  • Check out your friend’s phone

First, and most often foremost, is price. What can you afford? You’ll need to research the different phones available and which are in your price range.

What about the type of phone you currently have? Does that play a role? Many cell phone users like to stick to the same operating system they are used to. For example, if you currently have an iPhone, you are probably more likely to stick with an iPhone for your next purchase as opposed to switching to an Android device. This is referred to as brand loyalty.

The next step might be to read reviews or watch video reviews on YouTube.

Finally, maybe you are jealous of the phone your friend just got. So you’ll just get the same one or the slightly newer version. Of course, you may come up with other factors that play a role in your decision-making process.

Each of these is a way of collecting data to influence your decision, even if you don’t realize you are collecting data. As we discussed, the decision-making process is likely a multi-factor one. In kinesiology, we can answer questions in a similar way by creating methods of data collection that help us answer questions and make informed decisions.

A more kinesiology-specific example

Let’s look at a more specific example in kinesiology: tracking physical activity…or the lack thereof. What if we wanted to evaluate the physical inactivity of adults in the United States at the state level and examine whether there are differences according to race or ethnicity? Fortunately, the United States Centers for Disease Control and Prevention (CDC) has compiled such data. According to the CDC, every state in the United States had more than 15% of adults considered physically inactive as of 2018 [2] .

Let’s break this down a little further, because this statistic is actually worse than it sounds, and the results differ depending on race/ethnicity. The CDC defines physical inactivity as not participating in any physical activity during the past month (via self-report and excluding work-related activities). The actual percentage of physically inactive adults ranged from 17.3% to 47.7% across states. There were 22 states with more than 25% of their population classified as physically inactive. Interestingly, these results differ slightly when race or ethnicity is considered. This study classified its sample into 3 groups: Hispanic adults, non-Hispanic black adults, and non-Hispanic white adults. Of the 3, those that would be considered minorities in the United States had higher percentages of physical inactivity. Hispanic adults had physical inactivity of 30% or greater in 22 states plus Puerto Rico, and non-Hispanic black adults had physical inactivity of 30% or higher in 23 states plus Washington, D.C. If we compare that to non-Hispanic white adults, only 5 states plus Puerto Rico had physical inactivity of 30% or higher. [3]

In this example, we just used some data to answer a question about the prevalence of physical inactivity in the United States. But we shouldn’t stop there. We should come up with some sort of practical application. A very simple one based on the data is that we should encourage more physical activity in the U.S. Said another way, we should discourage physical inactivity, as the data suggest that many adults are physically inactive. Looking a bit deeper at the results, we might suggest that health educators target their efforts in specific areas and populations, since the results suggest that geographic and population disparities exist. This study did not evaluate why these disparities exist, but we should consider them in potential solutions.

While this may seem fairly straightforward, there are many other factors we need to consider in quantitative analysis. For example, do we know whether or not the data are valid and reliable? Do you know the difference between validity and reliability? It’s okay if you don’t. As we will see later, many people confuse these two on a regular basis. What issues do you see with the data collection? Many may take issue with the data being acquired via self-report. We will discuss surveys/questionnaires later in this book, but they are a great way to reach a very wide and large sample of a population. Obviously, more objective methods (e.g., an accelerometer or pedometer) would be better, but when we have a very large sample, potential error is less of a concern since a greater proportion of the population is being measured.

Using Data and the Scientific Process to Inform

As we have just seen, data collected on a specific topic are used as information to help us understand more about that topic. This is part of the scientific process of acquiring knowledge, sometimes referred to as the scientific method, which you’ve likely heard of before. While the scientific method was popularized in the 20th century, its development is often credited to Aristotle. [4] [5]

Figure 1.1: Steps of the scientific method

While the number of steps and their naming may differ depending on the source, they are often similar to Figure 1.1 shown above. First, one might wonder about a specific question based on an observation. Consider an example where Elise, an athletic trainer with a professional baseball team, notices that injuries and treatment times are highest each year during Spring Training [6] . Anecdotally, she observes that several of the injured players did not follow the off-season training program. She wonders if the sudden increase in workload plays a role. In this example, she is at the first step we described above.

Moving forward, she should examine previously published relevant research. In doing so, she notices there are quite a few studies in this area and many specifically look at the ratio of recent (acute) workloads to the accumulated (chronic) workloads and some have found higher risks of injury associated with higher levels of these ratios. [7] [8]

Now that she has enough information, she can finalize a hypothesis. Elise hypothesizes that elevated ratios will increase the risk of injury, but that the increased risk may differ from previous research because that research was not done on baseball players.

Now she is on to the experiment stage, and she must design a way to test her hypothesis. She utilizes a smartphone application that helps athletes and coaches track their workloads during the off-season and during Spring Training. She also uses the team’s injury data to see if those who incurred injuries during Spring Training had higher acute:chronic workload ratios compared to those who did not get injured. Once Spring Training is over, she can analyze the results. She finds that there is no statistical difference in acute:chronic workload ratios between the injured and non-injured groups.
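The acute:chronic workload ratio itself is simple to compute once daily workloads are recorded. The Python sketch below is a minimal illustration assuming daily workload values in arbitrary units and the commonly used 7-day (acute) and 28-day (chronic) rolling windows; the specific windows and numbers are assumptions for illustration, not details taken from Elise’s app.

```python
import pandas as pd

# Hypothetical daily workload values (arbitrary units) for one athlete,
# covering four weeks; the final week contains a deliberate spike.
daily_load = pd.Series([300, 320, 0, 350, 400, 380, 0,
                        310, 330, 0, 360, 410, 390, 0,
                        320, 340, 0, 370, 420, 400, 0,
                        500, 520, 0, 550, 600, 580, 0])

# Acute load: mean of the most recent 7 days; chronic load: mean of the last 28 days.
acute = daily_load.rolling(window=7).mean()
chronic = daily_load.rolling(window=28).mean()

# Acute:chronic workload ratio (ACWR); values well above 1.0 flag a sudden
# increase in workload relative to what the athlete is accustomed to.
acwr = acute / chronic
print(round(acwr.iloc[-1], 2))
```

A group comparison like Elise’s would then compare these ratios between the injured and non-injured athletes.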

Moving to the next stage, she must draw conclusions based on the results. The results did not support Elise’s hypothesis, so she cannot say that a sudden increase in workload increases the risk of injury. But as she is contemplating this, she realizes that she did not take different injury types into consideration. Her sample included all athletes who were injured during Spring Training, which includes ligament (for example, ulnar collateral ligament sprain), muscle (for example, hamstring strain), and tendon (for example, patellar tendon strain) injuries. She now recognizes that injury type may play a role in the relationship between workload accumulation and injury risk.

Now it’s time to report the results. This step may take different forms depending on your occupation. In Elise’s case, this may be a written report or a presentation to the team’s staff and front office executives. This could also be formally written up as a research paper and submitted for publication.

Hopefully you noticed that this step is also followed by an arrow that leads back to the first step. The scientific process is a cycle and we often finish the last step with more questions, which lead right back into more research. This was also the case with Elise’s example. She can now repeat the study and examine if injury type is important to her previous research question.

This text will focus on working with and analyzing data, but many of the other stages are dependent on these data. Also, the data analysis stage depends on the stages that happen before it. Can you spot the data used in the example above? It primarily used workloads and injury status. If the data we need to answer a question aren’t available, we must find ways to collect them, and that is what Elise did in the example above. There may be other times where the data are already available, but they aren’t recorded in the same source (table or spreadsheet), which means they need to be combined. Many times, the data are not recorded in an immediately usable format, so we may need to reorganize them (often referred to as data wrangling), as sketched below. Once we have the data in a usable format, we can then move on to analysis. Overwhelmingly, this text will focus on the analysis stage and all of the different techniques that can be used when appropriate. But how the other stages are influenced by the analysis stage, and how they influence it, will also be addressed.
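As a hedged sketch of what the combining step can look like, the Python example below merges two hypothetical tables, one holding workload ratios and one holding injury status, into a single analysis-ready table; the column names and values are made up for illustration.

```python
import pandas as pd

# Hypothetical tables stored in two separate sources.
workloads = pd.DataFrame({
    "athlete_id": [101, 102, 103],
    "acwr": [1.45, 0.95, 1.10],
})
injuries = pd.DataFrame({
    "athlete_id": [101, 103],
    "injured": [True, True],
})

# Combine the two sources on the shared athlete_id column.
combined = workloads.merge(injuries, on="athlete_id", how="left")

# Athletes with no injury record were not injured; make that explicit.
combined["injured"] = combined["injured"].fillna(False)
print(combined)
```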

Terminology in Quantitative Analysis

There are many terms that are frequently used in statistical and quantitative analysis that are often confused and used interchangeably, but should not be. We have already used several of them, so now is a good time to begin defining some of our frequently used terms so that we avoid confusion. Of course, we will encounter more important terminology later on, and we will define it when we get there.

  • Population: includes every single member of a specific group. If we were to measure the body mass index (BMI) of the entire U.S. population, we would need to collect both the height and body mass of roughly 332.4 million people [9] .
  • Parameter: a variable of interest measured in the population. In the example above using the BMI of the entire U.S. population, the BMI would be a parameter.
  • Sample: a subset of the population that should generally be representative of that population. Samples are often used when collecting data on the entire population is unrealistic. Rather than measuring the BMI of the entire U.S. population, we might randomly sample only 1% of the U.S. population (≈ 3.3 million people).
  • Statistic: a variable of interest measured in the sample. In the example above using the BMI in a sample of the U.S. population, the BMI would be a statistic (see the short sketch after this list).
  • Validity: how well a measurement measures what it is supposed to measure. For example, imagine a new device is created to evaluate your heart rate variability (HRV) via the camera sensors on your smartphone. To make sure it is actually measuring accurately, we might compare the new data to a well-known and accepted standard way to measure HRV.
  • Reliability: the consistency of data, which includes various types: test-retest (across time), between raters (interrater), within rater (intrarater), and internal consistency (across items). In order to evaluate the between-trial reliability of the newly created HRV device described above, we might collect data at 2 or 3 different times throughout the early morning to see how similar they are (or aren’t).
  • Anecdotal: evidence that is collected through personal experience and not in a systematic manner. It is most often considered of lower value in scientific occupations and may lead to error.
  • Empirical: evidence that is collected and documented by systematic experimentation.
  • Hypothesis: a research- and science-based guess to answer a specific question or explain a phenomenon.
  • Evaluation: a statement about quality that is generally decided upon after comparing other observations. For example, we might compare the jump performance results of one athlete to other athletes to say that he or she is a superior performer. Or we could use these results in a rehab setting to determine whether our patient is progressing in their rehabilitation as they should, compared to data previous patients have produced at the same stage of recovery.
  • Measurement: quantification of a specific quality being assessed. For example, measuring vertical jump performance likely results in a measure of jump height or vertical power.
  • Test (instrument): a tool used to measure a specific quality. Following the example above, we could use a jump and reach device, a switch mat, or a force plate to measure vertical jumping performance. Not all measurements in kinesiology are physical in nature, so these instruments may take other forms.
  • Formative evaluation: a pretest, mid-test, or any evaluation prior to the final evaluation that helps to track changes in the quantity being measured.
  • Summative evaluation: a final evaluation that helps to demonstrate achievement.
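To make the population/parameter and sample/statistic distinction concrete, here is a minimal Python sketch. The BMI values are randomly simulated for illustration only (the mean and spread are arbitrary assumptions); the point is simply that the parameter is computed from every member of the population, while the statistic is computed from a 1% sample and only estimates the parameter.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Simulated BMI values for a hypothetical "population" of 332,400 people.
# The distribution (mean 26.5, SD 5.0) is an arbitrary assumption for illustration.
population_bmi = rng.normal(loc=26.5, scale=5.0, size=332_400)

# Parameter: the mean BMI measured on every member of the population.
parameter = population_bmi.mean()

# Statistic: the mean BMI measured on a random 1% sample (3,324 people).
sample_bmi = rng.choice(population_bmi, size=3_324, replace=False)
statistic = sample_bmi.mean()

print(f"Population parameter: {parameter:.2f}")
print(f"Sample statistic:     {statistic:.2f}")
```

Run repeatedly with different random samples, the statistic varies slightly from sample to sample while the parameter stays fixed, which is exactly why the two terms are kept separate.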

Time series plot depicting change in strength asymmetry of the knee at various stages of the rehabilitation process.

Examine the data plot above, which shows a measurement of strength asymmetry as a percentage for an athlete returning from an ACL knee injury. Positive values indicate a left-side asymmetry and negative values indicate a right-side asymmetry. Can you guess which side was injured based on the initial data? This athlete had a right knee injury. Initially, the athlete was roughly 17% stronger on the left side, which should have given you a clue to answer the previous question. Based on what we discussed in the previous 2 terms, would you say these data were created by formative or summative assessments?
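Asymmetry can be quantified in several ways, and the plot does not state which formula was used. The Python sketch below uses one common convention (the left-right difference divided by the right-side value, so positive percentages mean the left side is stronger); the strength values are hypothetical and chosen only to reproduce a roughly 17% left-side asymmetry like the first point in the plot.

```python
def asymmetry_pct(left: float, right: float) -> float:
    """Percent asymmetry; positive values mean the left side is stronger (one common convention)."""
    return (left - right) / right * 100

# Hypothetical isometric knee strength values (in newtons) early in rehab
# after a right knee injury: the uninjured left side is stronger.
print(round(asymmetry_pct(left=820.0, right=700.0), 1))  # ~17.1% left-side asymmetry
```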

Criterion-referencing: compares a performance to a specific preset requirement.

For example, consider passing a strength and conditioning, personal trainer, or other fitness-related certification exam. Generally, these exams require test-takers to achieve a minimum score that represents mastery of the content. Some may even require that test-takers achieve a specific score in several areas, not just the overall score. Either way, there may be a set score that represents the “criterion” necessary for certification, such as 70% or better. Some other examples of criterion-referenced evaluations include the athletic training Board of Certification (BOC) exam, a CPR exam, or a U.S. driver’s learner’s permit exam (which may vary by state).

Norm-referencing: compares performance(s) to the sample that the performer tested with or with a similar population.

Examples of norm-referenced standards include the interpretation of SAT, ACT, GRE, and IQ test scores. All of these may express results relative to those who take the exam. For example, a score of 100 on an IQ (intelligence quotient) test represents the average score based on a normal distribution. We’ll learn about the normal distribution later, but this means that roughly 68% of test-takers will score between 85 and 115. This is because all scores are transformed to make the current average score equal 100 with a standard deviation of 15 [10] . This means that a test-taker’s score might change based on the intelligence of the others who also take the exam in a similar time period. This also means that comparing the IQ of someone who took the exam today to someone who took the test 10 or more years ago is meaningless, as a score of 135 may show that you are in the 99th percentile of your current time period. Furthermore, IQs have been shown to rise substantially over time [11] . So, you could argue that an IQ of 100 as tested in 2020 is superior to an IQ of 100 in 2000.
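The transformation described above can be sketched in a few lines of Python. The raw scores below are hypothetical; the sketch simply standardizes a raw score against the norming sample’s mean and standard deviation, then rescales it to the IQ convention of mean 100 and SD 15.

```python
import statistics

# Hypothetical raw test scores from a norming sample.
raw_scores = [42, 55, 38, 61, 47, 52, 44, 58, 49, 50]

mean = statistics.mean(raw_scores)
sd = statistics.stdev(raw_scores)

def to_iq(raw: float) -> float:
    """Rescale a raw score to a norm-referenced scale with mean 100 and SD 15."""
    z = (raw - mean) / sd          # SDs above or below the norming sample mean
    return 100 + 15 * z

print(round(to_iq(61), 1))  # a raw score about 1.6 SD above the mean maps to roughly 124
```

Because the mean and standard deviation come from whoever happens to be in the norming sample, the same raw performance can map to a different norm-referenced score when the comparison group changes, which is the point made above about comparing IQ scores across decades.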

Functions of Quantitative Analysis

Overall, professionals (and future professionals) in the field of kinesiology are required to make informed decisions, which often means using quantitative data. We can break this down further into several functions of quantitative analysis. Morrow and colleagues (2016) [12] recognize the following functions of quantitative analysis in Human Performance: [16]

  • Placement: Professionals may be able to group athletes, patients, or students following an evaluation of their abilities, which may help facilitate development. For example, an initial assessment may help a youth softball coach group athletes based on skill level and experience.
  • Prediction: The ability to predict future events may be the “Holy Grail” of many fields of research and business, but it requires large amounts of data that are often hard to come by (especially in sport performance). A very common example of this is the effort and money spent on predicting injury in sports. Intuitively, the notion makes sense. If we can predict an injury, we should be able to prevent it. Currently, much of this research lies in the area of training loads and the rate at which an athlete increases them. [13] [14] [15]
  • Achievement: Many coaches and trainers set goals for their athletes and clients. Many physical therapists set goals for their patients. Many individuals set goals for themselves. Without doing this and measuring a specific quality, there will be no knowledge of improvement or progress.
  • Motivation: For many, scores on a specific test may provide motivation to perform better. This may be because they did not perform as well as they thought they should, because they performed well and want to set another personal record, or because they are competing with other participants. As another example, consider a situation where you have been running a 5k every other week, but don’t know your time when you finish. Would you train harder if you did? What if you knew your overall placement among those who ran?
  • Program evaluation: Similar to achievement, programs themselves should be evaluated. Imagine you are a strength coach and you want to demonstrate that you are doing a great job developing your athletes. If your team is very successful on the field, court, or pitch, this may not be much more difficult than pointing to your win-loss record. But what if you are working with a team that is very young and not yet performing to its full potential? This is precisely where demonstrating improvement in key areas that are related to competition performance could demonstrate your value to those who pay your salary.

Technology in Quantitative Analysis

Data storage and analysis.

There are many different types of technology that will be beneficial in analysis and several will be introduced in this text. Microsoft Excel and JASP will primarily be used here due to their availability and price tag (often $0), but there are many other software programs and technologies that may be useful in your future careers. Depending on the specific type of work you are doing, some programs may be better than others. Or, more than likely, you may end up using a combination of resources. Each resource has its own advantages and disadvantages. This text will make an effort to highlight those along with potential solutions for any issues.

As mentioned previously, attributes such as availability and cost are quite important for many when selecting statistical analysis software. Historically, SPSS from IBM has been the most widely used software, but that is changing. SPSS can do quite a lot, but it carries a large price tag for those not affiliated with a university where they can get affordable access. Free and open-source resources such as R are increasing in usage in published research, as is Python in quantitative job requirements. Meanwhile, programs such as SPSS are declining in usage and in desirability among potential employers. [17] [18] [19] Many still prefer the SPSS “point and click” style over learning coding syntax, so it will likely stick around. Many learn to use SPSS during their time as a student at a university that provides access. Once they graduate, however, they are confronted with the fact that they will need to pay for SPSS, which can be expensive (≈ $1,200/year as of 2021 [20] ). This pushes more users to options such as Excel or a coding-based solution like R and Python. JASP, a relatively new and free product with a user interface similar to SPSS’s, is another option that many may prefer. For many of the reasons above, this text will focus on the usage of Excel and JASP. Each technique described in this text will include solutions in both programs, [21] so readers can follow the path they find most useful in their specific situations. Solution tutorials for Excel will be shown in green/teal boxes, while solutions in JASP will be shown in purple boxes (examples below).

Example MS Excel Solution Tutorial

All solutions in Excel will be in this color scheme and will have the word “Excel” somewhere in the title.

Example JASP Solution Tutorial

Data collection.

Along with data storage and analysis software, we might also use technology in the data collection process. Take a look at the image below. Here we see an example of data collection happening in a boxing training session. Notice that the coach is viewing near real-time data on his tablet. How is this occurring? It’s not magic. In fact, many of you probably use this technological process daily. If you have a smart watch that is connected to your phone, it is continuously sending data via Bluetooth throughout the day. The same process is happening in the picture below. Each of the punching bags is instrumented with an accelerometer, which measures the acceleration of the bag after it is hit, and is connected to the tablet via Bluetooth. These data are often automatically saved to a cloud storage account so they can be retrieved later. Many of our data collection instruments are now equipped with some form of telemetry (WiFi or Bluetooth) that can send the collected data directly to a storage site. Can you think of another example besides your smart watch and the one pictured below?

Image of boxing coach holding a tablet displaying boxing related data, while several students are hitting punching bags in the distance.

Specifically concerning the field of kinesiology, the usage of technology and the digitization of data have solved quite a few issues from the past. Previously, data had to be manually tabulated by hand and then transcribed into a computer for analysis. This could result in many typing errors that could negatively impact our results. Now, much of our data collection involves equipment that automatically collects the digital data for us and often saves it in the cloud. Many patient and athlete management systems utilize these methods to track progress and performance.

Actually, we could go back a couple of decades before this, when much of the analysis was also done by hand. Thankfully, we won’t have to worry about that. We can now utilize computers and software to run the analysis for us and we rarely have to recall any formulas.

Beyond directly collecting data, computers and technology can be used to collect data in other ways. Public data can be taken from websites and other sources digitally through a process known as “web scraping.” This can be done in MS Excel, but it is more often done with coding languages such as R or Python, which can more precisely pull the data and then reformat it into a usable format. There are also many freely available and open databases that we can use for research purposes. Many sports organizations and leagues produce these. Many data and sport scientists are trained to retrieve and analyze these types of data on a regular basis.
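As a hedged illustration of what web scraping can look like, the short Python sketch below uses pandas to pull any HTML tables it finds on a web page into data tables. The URL is a placeholder, not a real data source, and pandas.read_html requires an HTML parser such as lxml to be installed; treat this as the general pattern rather than a ready-made scraper for a specific site.

```python
import pandas as pd

# Placeholder URL: substitute a page that actually contains an HTML <table>.
url = "https://example.com/physical-activity-by-state"

# read_html returns a list of DataFrames, one for each table found on the page.
tables = pd.read_html(url)

first_table = tables[0]
print(first_table.head())  # inspect the first few rows before doing any analysis
```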

Data Tables and Spreadsheets

While data tables and spreadsheets are terms that are often used interchangeably, they are not the same thing. A data table is simply a way to organize data into rows and columns, where each intersection of a row and column is a cell; this may also be referred to as a data set. Many who use MS Excel, Google Sheets, or Apple’s Numbers may refer to this as a spreadsheet, but this is technically incorrect, as spreadsheets also allow for formatting and manipulation of the data in each cell. A simple spreadsheet can be used as a data table, or it may include a data table.

Spreadsheet software incorporates many of the analysis processes into the same spot, which can be a benefit depending on the complexity of your analyses. If you want to go further than some of the more basic analyses, you may not be able to complete the job with products such as MS Excel. This creates a potential issue for those who have stored their data in the standard .xlsx or .xls formats in MS Excel, as many other programs cannot import the data. Fortunately, MS Excel provides many options for saving your files with different extensions that are more usable in other programs. Currently, the most common among these is the .csv file extension, which stands for comma separated values. If you were to open this file in a text editor, you would literally see a list of all the data with each cell separated by a comma. Unfortunately, the .csv format will not save any of the equations one might use to manipulate data, any plots, or formatting. So it is a good idea to save the data tables created in Excel as a .csv file, but also to save any analysis files in the standard format (.xlsx).
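To make the .csv idea concrete, here is a minimal sketch using Python’s built-in csv module. It writes a tiny, hypothetical data table to a file and then prints the raw text, showing that only comma-separated values are stored: no formulas, plots, or formatting.

```python
import csv

# A tiny hypothetical data table: one variable per column, one athlete per row.
rows = [
    ["athlete_id", "jump_height_cm"],   # header row with the variable names
    [1, 45.2],
    [2, 51.0],
    [3, 38.7],
]

# Write the table as comma separated values.
with open("jump_data.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)

# Opening the same file as plain text shows each cell separated by a comma.
print(open("jump_data.csv").read())
```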

Data Table Organization

No matter what software you use to store your data, it is always a good idea to standardize the organization. While you may like a specific format at one point in time, it’s important to remember that it needs to make sense to everyone who views it, and other programs may not recognize it if it’s not organized in a traditional manner. That being said, there are some best practices for organizing our data. Within a data table or dataset, we have 3 main pieces: variables, observations, and values.

  • Variables are the specific attributes being measured. These are generally set up as columns.
  • Observations are all of the measures taken on a specific entity (for example, a name or date). These are generally set up as rows.
  • Values are the intersections of our variables and observations. You would consider this an individual cell in a spreadsheet. Each value is one specific measure of an attribute for a specific date or individual.

Consider the table below in Figure 1.4, which depicts some objective and subjective data on exercise intensity collected at exhaustion during a graded treadmill test. Notice that each column is a variable. So we have 3 variables, which include the subject ID, % HRmax, and RPE. We also have several observations shown as rows. Each subject has 1 ID number, 1 % HRmax value, and 1 RPE value. Speaking of values, a specific value for a given variable and observation can be found at their intersection. For example, if we want to know what subject 314159’s RPE value is, we must find where they intersect. The observation is shaded in red, the variable is shaded in blue, and the value (intersection) of 17 is shaded in purple for emphasis.

Figure 1.4: A table demonstrating best practices of data table organization
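The same variables/observations/values structure maps directly onto code. Below is a minimal Python/pandas sketch that rebuilds a table like Figure 1.4, with each variable as a column and each observation as a row, and then looks up the value at the intersection of subject 314159 and the RPE variable. Only the pairing of subject 314159 with an RPE of 17 comes from the text; the other rows and the % HRmax values are hypothetical.

```python
import pandas as pd

# Each column is a variable; each row is one observation (one subject's test).
data = pd.DataFrame({
    "subject_id": [314159, 141421, 173205],
    "pct_hrmax":  [98, 95, 99],   # hypothetical % HRmax values at exhaustion
    "rpe":        [17, 19, 18],   # RPE at exhaustion; 314159's value of 17 is from the text
})

# A value sits at the intersection of an observation (row) and a variable (column).
value = data.loc[data["subject_id"] == 314159, "rpe"].item()
print(value)  # 17
```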

An Important Caveat for MS Excel/Spreadsheet Users

Consider a sample of 200 university students who were enrolled in a study measuring resting heart rates during finals week. How many rows should there be? If all 200 were tested once, we should have 200 rows. One caveat is that if you are working in MS Excel or a similar spreadsheet application, the first row is often used to name your variables. So, row 1 wouldn’t contain any data yet. This means you would technically have 201 rows if you had 200 observations, and your first row of data would be row 2. For other programs, variable names may be included separately, and the type of data will also need to be selected. Data types will be discussed in the next chapter.

When logging data for use in an analysis program, it can be perfectly straightforward for many variables like weight or height (in cm); you just type in the value. But what about gender or class? Can you just type those in as words? Most often you can’t. Many analysis programs do not know how to deal with strings or words, so you might code them as numbers. For example, a value of 1 might refer to freshman, 2 might refer to sophomore, and so on. This will be discussed further later on, when segmenting data into groups is desired.
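Here is a hedged sketch of that kind of coding in Python. Class standing stored as words is mapped to the numeric codes described above (1 = freshman, 2 = sophomore, and so on); the resting heart rate values are hypothetical.

```python
import pandas as pd

# Hypothetical resting heart rate data with class standing stored as words.
students = pd.DataFrame({
    "resting_hr": [62, 71, 58, 66],
    "class_standing": ["freshman", "sophomore", "junior", "senior"],
})

# Map the word labels to numeric codes so analysis software can group on them.
class_codes = {"freshman": 1, "sophomore": 2, "junior": 3, "senior": 4}
students["class_code"] = students["class_standing"].map(class_codes)

print(students)
```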

Enabling the Data Analysis Toolpak in MS Excel

Excel can handle many of the same analyses that other statistical programs can, although it’s not always as easy as in the other programs. But it is much more available than those programs, so there are tradeoffs. In order to run many of these types of analyses, you will need to enable the “Data Analysis Toolpak,” as it is not automatically available. Please refer to the Microsoft support page, which has step-by-step instructions for PCs and Macs.

Enable the Data Analysis Toolpak for MS Excel

Installing JASP

If you choose to utilize a true statistical analysis software, JASP is a good option. It is free and has easy solutions for nearly all types of analyses. JASP can be installed on PC, Mac, and Linux operating systems.

Download and Install JASP

1. Bias means that we lean more towards a specific notion, and it is often thought of in a negative light. From a statistical perspective, the motivation for why we think a certain way does not matter. It can be negative or positive. All that matters is that our biases could result in beliefs that are not consistent with what the data actually tell us. For example, we might think very highly of a specific person we are testing and therefore give them a slightly better score than if we did not know that person at all. This type of bias may not be considered negative in motivation, but it is negative in that we are potentially misleading ourselves and others. Whether or not we like to admit it, we all have biases, and relying on quantitative data to justify our decisions may help us to avoid them or avoid making decisions because of them.
2. Centers for Disease Control and Prevention. 2020. Adult Physical Inactivity Prevalence Maps by Race/Ethnicity. https://www.cdc.gov/physicalactivity/data/inactivity-prevalence-maps/index.html
3. If you would like to take a more granular look at this data, please visit https://www.cdc.gov/physicalactivity/data/inactivity-prevalence-maps/index.html.
4. Riccardo Pozzo (2004). The Impact of Aristotelianism on Modern Philosophy. CUA Press. p. 41.
5. https://en.wikipedia.org/wiki/Scientific_method
6. https://en.wikipedia.org/wiki/Spring_training
7. Bowen L, Gross AS, Gimpel M, Bruce-Low S, Li FX. Spikes in acute:chronic workload ratio (ACWR) associated with a 5-7 times greater injury rate in English Premier League football players: a comprehensive 3-year study. Br J Sports Med. 2020 Jun;54(12):731-738. doi: 10.1136/bjsports-2018-099422. Epub 2019 Feb 21. PMID: 30792258; PMCID: PMC7285788. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7285788/
8. Bowen L, Gross AS, Gimpel M, Li FX. Accumulated workloads and the acute:chronic workload ratio relate to injury risk in elite youth football players. Br J Sports Med. 2017 Mar;51(5):452-459. doi: 10.1136/bjsports-2015-095820. Epub 2016 Jul 22. PMID: 27450360; PMCID: PMC5460663. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5460663/
9. Current U.S. population as checked in 2021. https://www.census.gov/popclock/
10. https://en.wikipedia.org/wiki/Intelligence_quotient#Precursors_to_IQ_testing
11. Flynn effect. https://en.wikipedia.org/wiki/Flynn_effect
12. Morrow J, Mood D, Disch J, Kang M. 2016. Measurement and Evaluation in Human Performance. Human Kinetics, Champaign, IL.
13. Gabbett TJ. The training-injury prevention paradox: should athletes be training smarter and harder? Br J Sports Med. 2016 Mar;50(5):273-80. doi: 10.1136/bjsports-2015-095788. Epub 2016 Jan 12. PMID: 26758673; PMCID: PMC4789704.
14. Bourdon PC, Cardinale M, Murray A, Gastin P, Kellmann M, Varley MC, Gabbett TJ, Coutts AJ, Burgess DJ, Gregson W, Cable NT. Monitoring Athlete Training Loads: Consensus Statement. Int J Sports Physiol Perform. 2017 Apr;12(Suppl 2):S2161-S2170. doi: 10.1123/IJSPP.2017-0208. PMID: 28463642.
15. Eckard TG, Padua DA, Hearn DW, Pexa BS, Frank BS. The Relationship Between Training Load and Injury in Athletes: A Systematic Review. Sports Med. 2018 Aug;48(8):1929-1961. doi: 10.1007/s40279-018-0951-z. Erratum in: Sports Med. 2020 Jun;50(6):1223. PMID: 29943231.
16. Morrow et al. (2016) also include Diagnosis as a function of quantitative analysis, but that is not included here as most professionals in human performance and kinesiology do not possess the authority to diagnose. They may be asked to perform a test, and those results may help diagnose an issue, but diagnosis is generally reserved for those practicing medicine.
17. http://r4stats.com/2014/08/20/r-passes-spss-in-scholarly-use-stata-growing-rapidly/
18. http://r4stats.com/2019/04/01/scholarly-datasci-popularity-2019/
19. https://lindeloev.net/spss-is-dying/
20. https://www.ibm.com/products/spss-statistics/pricing
21. When possible. There are some instances when MS Excel does not have the capability to run the same analyses as JASP.
22. Wickham H. (2014). Tidy Data. Journal of Statistical Software. https://www.jstatsoft.org/article/view/v059i10/


Quantitative Analysis in Exercise and Sport Science by Chris Bailey, PhD, CSCS, RSCC is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License , except where otherwise noted.




3.1 What is Quantitative Research?

Quantitative research is a research method that uses numerical data and statistical analysis to study phenomena. 1 Quantitative research plays an important role in scientific inquiry by providing a rigorous, objective, systematic process using numerical data to test relationships and examine cause-and-effect associations between variables. 1, 2 The goal is to make generalisations about a population (extrapolate findings from the sample to the general population). 2 The data and variables are predetermined and measured as consistently and accurately as possible, and statistical analysis is used to evaluate the outcomes. 2 Quantitative research is based on the scientific method, wherein deductive reductionist reasoning is used to formulate hypotheses about a particular phenomenon.

An Introduction to Research Methods for Undergraduate Health Profession Students Copyright © 2023 by Faith Alele and Bunmi Malau-Aduli is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License , except where otherwise noted.


J Korean Med Sci. 2022 Apr 25;37(16).

A Practical Guide to Writing Quantitative and Qualitative Research Questions and Hypotheses in Scholarly Articles

Edward Barroga

1 Department of General Education, Graduate School of Nursing Science, St. Luke’s International University, Tokyo, Japan.

Glafera Janet Matanguihan

2 Department of Biological Sciences, Messiah University, Mechanicsburg, PA, USA.

The development of research questions and the subsequent hypotheses are prerequisites to defining the main research purpose and specific objectives of a study. Consequently, these objectives determine the study design and research outcome. The development of research questions is a process based on knowledge of current trends, cutting-edge studies, and technological advances in the research field. Excellent research questions are focused and require a comprehensive literature search and in-depth understanding of the problem being investigated. Initially, research questions may be written as descriptive questions which could be developed into inferential questions. These questions must be specific and concise to provide a clear foundation for developing hypotheses. Hypotheses are more formal predictions about the research outcomes. These specify the possible results that may or may not be expected regarding the relationship between groups. Thus, research questions and hypotheses clarify the main purpose and specific objectives of the study, which in turn dictate the design of the study, its direction, and outcome. Studies developed from good research questions and hypotheses will have trustworthy outcomes with wide-ranging social and health implications.

INTRODUCTION

Scientific research is usually initiated by posing evidenced-based research questions which are then explicitly restated as hypotheses. 1 , 2 The hypotheses provide directions to guide the study, solutions, explanations, and expected results. 3 , 4 Both research questions and hypotheses are essentially formulated based on conventional theories and real-world processes, which allow the inception of novel studies and the ethical testing of ideas. 5 , 6

It is crucial to have knowledge of both quantitative and qualitative research 2 as both types of research involve writing research questions and hypotheses. 7 However, these crucial elements of research are sometimes overlooked; if not overlooked, then framed without the forethought and meticulous attention it needs. Planning and careful consideration are needed when developing quantitative or qualitative research, particularly when conceptualizing research questions and hypotheses. 4

There is a continuing need to support researchers in the creation of innovative research questions and hypotheses, as well as for journal articles that carefully review these elements. 1 When research questions and hypotheses are not carefully thought of, unethical studies and poor outcomes usually ensue. Carefully formulated research questions and hypotheses define well-founded objectives, which in turn determine the appropriate design, course, and outcome of the study. This article then aims to discuss in detail the various aspects of crafting research questions and hypotheses, with the goal of guiding researchers as they develop their own. Examples from the authors and peer-reviewed scientific articles in the healthcare field are provided to illustrate key points.

DEFINITIONS AND RELATIONSHIP OF RESEARCH QUESTIONS AND HYPOTHESES

A research question is what a study aims to answer after data analysis and interpretation. The answer is written in length in the discussion section of the paper. Thus, the research question gives a preview of the different parts and variables of the study meant to address the problem posed in the research question. 1 An excellent research question clarifies the research writing while facilitating understanding of the research topic, objective, scope, and limitations of the study. 5

On the other hand, a research hypothesis is an educated statement of an expected outcome. This statement is based on background research and current knowledge. 8 , 9 The research hypothesis makes a specific prediction about a new phenomenon 10 or a formal statement on the expected relationship between an independent variable and a dependent variable. 3 , 11 It provides a tentative answer to the research question to be tested or explored. 4

Hypotheses employ reasoning to predict a theory-based outcome. 10 These can also be developed from theories by focusing on components of theories that have not yet been observed. 10 The validity of hypotheses is often based on the testability of the prediction made in a reproducible experiment. 8

Conversely, hypotheses can also be rephrased as research questions. Several hypotheses based on existing theories and knowledge may be needed to answer a research question. Developing ethical research questions and hypotheses creates a research design that has logical relationships among variables. These relationships serve as a solid foundation for the conduct of the study. 4 , 11 Haphazardly constructed research questions can result in poorly formulated hypotheses and improper study designs, leading to unreliable results. Thus, the formulations of relevant research questions and verifiable hypotheses are crucial when beginning research. 12

CHARACTERISTICS OF GOOD RESEARCH QUESTIONS AND HYPOTHESES

Excellent research questions are specific and focused. These integrate collective data and observations to confirm or refute the subsequent hypotheses. Well-constructed hypotheses are based on previous reports and verify the research context. These are realistic, in-depth, sufficiently complex, and reproducible. More importantly, these hypotheses can be addressed and tested. 13

There are several characteristics of well-developed hypotheses. Good hypotheses are 1) empirically testable 7 , 10 , 11 , 13 ; 2) backed by preliminary evidence 9 ; 3) testable by ethical research 7 , 9 ; 4) based on original ideas 9 ; 5) have evidenced-based logical reasoning 10 ; and 6) can be predicted. 11 Good hypotheses can infer ethical and positive implications, indicating the presence of a relationship or effect relevant to the research theme. 7 , 11 These are initially developed from a general theory and branch into specific hypotheses by deductive reasoning. In the absence of a theory to base the hypotheses, inductive reasoning based on specific observations or findings form more general hypotheses. 10

TYPES OF RESEARCH QUESTIONS AND HYPOTHESES

Research questions and hypotheses are developed according to the type of research, which can be broadly classified into quantitative and qualitative research. We provide a summary of the types of research questions and hypotheses under quantitative and qualitative research categories in Table 1 .

Research questions in quantitative research

In quantitative research, research questions inquire about the relationships among variables being investigated and are usually framed at the start of the study. These are precise and typically linked to the subject population, dependent and independent variables, and research design. 1 Research questions may also attempt to describe the behavior of a population in relation to one or more variables, or describe the characteristics of variables to be measured ( descriptive research questions ). 1 , 5 , 14 These questions may also aim to discover differences between groups within the context of an outcome variable ( comparative research questions ), 1 , 5 , 14 or elucidate trends and interactions among variables ( relationship research questions ). 1 , 5 We provide examples of descriptive, comparative, and relationship research questions in quantitative research in Table 2 .

Hypotheses in quantitative research

In quantitative research, hypotheses predict the expected relationships among variables. 15 Relationships among variables that can be predicted include 1) between a single dependent variable and a single independent variable ( simple hypothesis ) or 2) between two or more independent and dependent variables ( complex hypothesis ). 4 , 11 Hypotheses may also specify the expected direction to be followed and imply an intellectual commitment to a particular outcome ( directional hypothesis ) 4 . On the other hand, hypotheses may not predict the exact direction and are used in the absence of a theory, or when findings contradict previous studies ( non-directional hypothesis ). 4 In addition, hypotheses can 1) define interdependency between variables ( associative hypothesis ), 4 2) propose an effect on the dependent variable from manipulation of the independent variable ( causal hypothesis ), 4 3) state a negative relationship between two variables ( null hypothesis ), 4 , 11 , 15 4) replace the working hypothesis if rejected ( alternative hypothesis ), 15 explain the relationship of phenomena to possibly generate a theory ( working hypothesis ), 11 5) involve quantifiable variables that can be tested statistically ( statistical hypothesis ), 11 6) or express a relationship whose interlinks can be verified logically ( logical hypothesis ). 11 We provide examples of simple, complex, directional, non-directional, associative, causal, null, alternative, working, statistical, and logical hypotheses in quantitative research, as well as the definition of quantitative hypothesis-testing research in Table 3 .

Research questions in qualitative research

Unlike research questions in quantitative research, research questions in qualitative research are usually continuously reviewed and reformulated. The central question and associated subquestions are stated more than the hypotheses. 15 The central question broadly explores a complex set of factors surrounding the central phenomenon, aiming to present the varied perspectives of participants. 15

There are varied goals for which qualitative research questions are developed. These questions can function in several ways, such as to 1) identify and describe existing conditions ( contextual research question s); 2) describe a phenomenon ( descriptive research questions ); 3) assess the effectiveness of existing methods, protocols, theories, or procedures ( evaluation research questions ); 4) examine a phenomenon or analyze the reasons or relationships between subjects or phenomena ( explanatory research questions ); or 5) focus on unknown aspects of a particular topic ( exploratory research questions ). 5 In addition, some qualitative research questions provide new ideas for the development of theories and actions ( generative research questions ) or advance specific ideologies of a position ( ideological research questions ). 1 Other qualitative research questions may build on a body of existing literature and become working guidelines ( ethnographic research questions ). Research questions may also be broadly stated without specific reference to the existing literature or a typology of questions ( phenomenological research questions ), may be directed towards generating a theory of some process ( grounded theory questions ), or may address a description of the case and the emerging themes ( qualitative case study questions ). 15 We provide examples of contextual, descriptive, evaluation, explanatory, exploratory, generative, ideological, ethnographic, phenomenological, grounded theory, and qualitative case study research questions in qualitative research in Table 4 , and the definition of qualitative hypothesis-generating research in Table 5 .

Qualitative studies usually pose at least one central research question and several subquestions starting with How or What . These research questions use exploratory verbs such as explore or describe . These also focus on one central phenomenon of interest, and may mention the participants and research site. 15

Hypotheses in qualitative research

Hypotheses in qualitative research are stated in the form of a clear statement concerning the problem to be investigated. Unlike in quantitative research where hypotheses are usually developed to be tested, qualitative research can lead to both hypothesis-testing and hypothesis-generating outcomes. 2 When studies require both quantitative and qualitative research questions, this suggests an integrative process between both research methods wherein a single mixed-methods research question can be developed. 1

FRAMEWORKS FOR DEVELOPING RESEARCH QUESTIONS AND HYPOTHESES

Research questions followed by hypotheses should be developed before the start of the study. 1 , 12 , 14 It is crucial to develop feasible research questions on a topic that is interesting to both the researcher and the scientific community. This can be achieved by a meticulous review of previous and current studies to establish a novel topic. Specific areas are subsequently focused on to generate ethical research questions. The relevance of the research questions is evaluated in terms of clarity of the resulting data, specificity of the methodology, objectivity of the outcome, depth of the research, and impact of the study. 1 , 5 These aspects constitute the FINER criteria (i.e., Feasible, Interesting, Novel, Ethical, and Relevant). 1 Clarity and effectiveness are achieved if research questions meet the FINER criteria. In addition to the FINER criteria, Ratan et al. described focus, complexity, novelty, feasibility, and measurability for evaluating the effectiveness of research questions. 14

The PICOT and PEO frameworks are also used when developing research questions. 1 The following elements are addressed in these frameworks, PICOT: P-population/patients/problem, I-intervention or indicator being studied, C-comparison group, O-outcome of interest, and T-timeframe of the study; PEO: P-population being studied, E-exposure to preexisting conditions, and O-outcome of interest. 1 Research questions are also considered good if these meet the “FINERMAPS” framework: Feasible, Interesting, Novel, Ethical, Relevant, Manageable, Appropriate, Potential value/publishable, and Systematic. 14

As we indicated earlier, research questions and hypotheses that are not carefully formulated result in unethical studies or poor outcomes. To illustrate this, we provide some examples of ambiguous research question and hypotheses that result in unclear and weak research objectives in quantitative research ( Table 6 ) 16 and qualitative research ( Table 7 ) 17 , and how to transform these ambiguous research question(s) and hypothesis(es) into clear and good statements.

a These statements were composed for comparison and illustrative purposes only.

b These statements are direct quotes from Higashihara and Horiuchi. 16

a This statement is a direct quote from Shimoda et al. 17

The other statements were composed for comparison and illustrative purposes only.

CONSTRUCTING RESEARCH QUESTIONS AND HYPOTHESES

To construct effective research questions and hypotheses, it is very important to 1) clarify the background and 2) identify the research problem at the outset of the research, within a specific timeframe. 9 Then, 3) review or conduct preliminary research to collect all available knowledge about the possible research questions by studying theories and previous studies. 18 Afterwards, 4) construct research questions to investigate the research problem. Identify variables to be accessed from the research questions 4 and make operational definitions of constructs from the research problem and questions. Thereafter, 5) construct specific deductive or inductive predictions in the form of hypotheses. 4 Finally, 6) state the study aims . This general flow for constructing effective research questions and hypotheses prior to conducting research is shown in Fig. 1 .

Fig. 1. General flow for constructing research questions and hypotheses prior to conducting research.

Research questions are used more frequently in qualitative research than objectives or hypotheses. 3 These questions seek to discover, understand, explore or describe experiences by asking “What” or “How.” The questions are open-ended to elicit a description rather than to relate variables or compare groups. The questions are continually reviewed, reformulated, and changed during the qualitative study. 3 Research questions are also used more frequently in survey projects than hypotheses in experiments in quantitative research to compare variables and their relationships.

Hypotheses are constructed based on the variables identified and as an if-then statement, following the template, ‘If a specific action is taken, then a certain outcome is expected.’ At this stage, some ideas regarding expectations from the research to be conducted must be drawn. 18 Then, the variables to be manipulated (independent) and influenced (dependent) are defined. 4 Thereafter, the hypothesis is stated and refined, and reproducible data tailored to the hypothesis are identified, collected, and analyzed. 4 The hypotheses must be testable and specific, 18 and should describe the variables and their relationships, the specific group being studied, and the predicted research outcome. 18 Hypotheses construction involves a testable proposition to be deduced from theory, and independent and dependent variables to be separated and measured separately. 3 Therefore, good hypotheses must be based on good research questions constructed at the start of a study or trial. 12

In summary, research questions are constructed after establishing the background of the study. Hypotheses are then developed based on the research questions. Thus, it is crucial to have excellent research questions to generate superior hypotheses. In turn, these would determine the research objectives and the design of the study, and ultimately, the outcome of the research. 12 Algorithms for building research questions and hypotheses are shown in Fig. 2 for quantitative research and in Fig. 3 for qualitative research.

Fig. 2. Algorithm for building research questions and hypotheses in quantitative research.

EXAMPLES OF RESEARCH QUESTIONS FROM PUBLISHED ARTICLES

  • EXAMPLE 1. Descriptive research question (quantitative research)
  • - Presents research variables to be assessed (distinct phenotypes and subphenotypes)
  • “BACKGROUND: Since COVID-19 was identified, its clinical and biological heterogeneity has been recognized. Identifying COVID-19 phenotypes might help guide basic, clinical, and translational research efforts.
  • RESEARCH QUESTION: Does the clinical spectrum of patients with COVID-19 contain distinct phenotypes and subphenotypes? ” 19
  • EXAMPLE 2. Relationship research question (quantitative research)
  • - Shows interactions between dependent variable (static postural control) and independent variable (peripheral visual field loss)
  • “Background: Integration of visual, vestibular, and proprioceptive sensations contributes to postural control. People with peripheral visual field loss have serious postural instability. However, the directional specificity of postural stability and sensory reweighting caused by gradual peripheral visual field loss remain unclear.
  • Research question: What are the effects of peripheral visual field loss on static postural control?” 20
  • EXAMPLE 3. Comparative research question (quantitative research)
  • - Clarifies the difference among groups with an outcome variable (patients enrolled in COMPERA with moderate PH or severe PH in COPD) and another group without the outcome variable (patients with idiopathic pulmonary arterial hypertension (IPAH))
  • “BACKGROUND: Pulmonary hypertension (PH) in COPD is a poorly investigated clinical condition.
  • RESEARCH QUESTION: Which factors determine the outcome of PH in COPD?
  • STUDY DESIGN AND METHODS: We analyzed the characteristics and outcome of patients enrolled in the Comparative, Prospective Registry of Newly Initiated Therapies for Pulmonary Hypertension (COMPERA) with moderate or severe PH in COPD as defined during the 6th PH World Symposium who received medical therapy for PH and compared them with patients with idiopathic pulmonary arterial hypertension (IPAH).” 21
  • EXAMPLE 4. Exploratory research question (qualitative research)
  • - Explores areas that have not been fully investigated (perspectives of families and children who receive care in clinic-based child obesity treatment) to have a deeper understanding of the research problem
  • “Problem: Interventions for children with obesity lead to only modest improvements in BMI and long-term outcomes, and data are limited on the perspectives of families of children with obesity in clinic-based treatment. This scoping review seeks to answer the question: What is known about the perspectives of families and children who receive care in clinic-based child obesity treatment? This review aims to explore the scope of perspectives reported by families of children with obesity who have received individualized outpatient clinic-based obesity treatment.” 22
  • EXAMPLE 5. Relationship research question (quantitative research)
  • - Defines interactions between dependent variable (use of ankle strategies) and independent variable (changes in muscle tone)
  • “Background: To maintain an upright standing posture against external disturbances, the human body mainly employs two types of postural control strategies: “ankle strategy” and “hip strategy.” While it has been reported that the magnitude of the disturbance alters the use of postural control strategies, it has not been elucidated how the level of muscle tone, one of the crucial parameters of bodily function, determines the use of each strategy. We have previously confirmed using forward dynamics simulations of human musculoskeletal models that an increased muscle tone promotes the use of ankle strategies. The objective of the present study was to experimentally evaluate a hypothesis: an increased muscle tone promotes the use of ankle strategies. Research question: Do changes in the muscle tone affect the use of ankle strategies?” 23

EXAMPLES OF HYPOTHESES IN PUBLISHED ARTICLES

  • EXAMPLE 1. Working hypothesis (quantitative research)
  • - A hypothesis that is initially accepted for further research to produce a feasible theory
  • “As fever may have benefit in shortening the duration of viral illness, it is plausible to hypothesize that the antipyretic efficacy of ibuprofen may be hindering the benefits of a fever response when taken during the early stages of COVID-19 illness.” 24
  • “In conclusion, it is plausible to hypothesize that the antipyretic efficacy of ibuprofen may be hindering the benefits of a fever response. The difference in perceived safety of these agents in COVID-19 illness could be related to the more potent efficacy to reduce fever with ibuprofen compared to acetaminophen. Compelling data on the benefit of fever warrant further research and review to determine when to treat or withhold ibuprofen for early stage fever for COVID-19 and other related viral illnesses.” 24
  • EXAMPLE 2. Exploratory hypothesis (qualitative research)
  • - Explores particular areas deeper to clarify subjective experience and develop a formal hypothesis potentially testable in a future quantitative approach
  • “We hypothesized that when thinking about a past experience of help-seeking, a self-distancing prompt would cause increased help-seeking intentions and more favorable help-seeking outcome expectations.” 25
  • “Conclusion
  • Although a priori hypotheses were not supported, further research is warranted as results indicate the potential for using self-distancing approaches to increasing help-seeking among some people with depressive symptomatology.” 25
  • EXAMPLE 3. Hypothesis-generating research to establish a framework for hypothesis testing (qualitative research)
  • “We hypothesize that compassionate care is beneficial for patients (better outcomes), healthcare systems and payers (lower costs), and healthcare providers (lower burnout).” 26
  • “Compassionomics is the branch of knowledge and scientific study of the effects of compassionate healthcare. Our main hypotheses are that compassionate healthcare is beneficial for (1) patients, by improving clinical outcomes, (2) healthcare systems and payers, by supporting financial sustainability, and (3) HCPs, by lowering burnout and promoting resilience and well-being. The purpose of this paper is to establish a scientific framework for testing the hypotheses above. If these hypotheses are confirmed through rigorous research, compassionomics will belong in the science of evidence-based medicine, with major implications for all healthcare domains.” 26
  • EXAMPLE 4. Statistical hypothesis (quantitative research)
  • - An assumption is made about the relationship among several population characteristics (gender differences in sociodemographic and clinical characteristics of adults with ADHD). Validity is tested by statistical experiment or analysis (chi-square test, Student's t-test, and logistic regression analysis); a minimal code sketch of these kinds of tests follows this list.
  • “Our research investigated gender differences in sociodemographic and clinical characteristics of adults with ADHD in a Japanese clinical sample. Due to unique Japanese cultural ideals and expectations of women's behavior that are in opposition to ADHD symptoms, we hypothesized that women with ADHD experience more difficulties and present more dysfunctions than men. We tested the following hypotheses: first, women with ADHD have more comorbidities than men with ADHD; second, women with ADHD experience more social hardships than men, such as having less full-time employment and being more likely to be divorced.” 27
  • “Statistical Analysis
  • (text omitted) Between-gender comparisons were made using the chi-squared test for categorical variables and Student's t-test for continuous variables… (text omitted). A logistic regression analysis was performed for employment status, marital status, and comorbidity to evaluate the independent effects of gender on these dependent variables.” 27
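
The following is a minimal Python sketch of the kinds of analyses named in Example 4 (a chi-square test, Student's t-test, and logistic regression). All variable names and values are invented for illustration and are not data from the cited study; scipy and statsmodels are assumed to be available.

```python
# Hypothetical illustration of the analyses named in Example 4.
import numpy as np
from scipy import stats
import statsmodels.api as sm

# Chi-squared test for a categorical comparison (e.g., full-time employment by gender)
#                        employed  not employed   (hypothetical counts)
contingency = np.array([[30, 70],                 # women with ADHD
                        [55, 45]])                # men with ADHD
chi2, p_chi, dof, expected = stats.chi2_contingency(contingency)
print(f"chi-squared = {chi2:.2f}, p = {p_chi:.3f}")

# Student's t-test for a continuous comparison (e.g., a symptom score by gender)
women_scores = [22, 27, 31, 19, 25, 28, 24]
men_scores = [18, 21, 24, 17, 20, 23, 19]
t_stat, p_t = stats.ttest_ind(women_scores, men_scores)
print(f"t = {t_stat:.2f}, p = {p_t:.3f}")

# Logistic regression: does gender predict a binary outcome such as employment status?
gender = np.array([0] * 7 + [1] * 7)                      # 0 = women, 1 = men
employed = np.array([0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0])
X = sm.add_constant(gender)                               # adds the intercept term
model = sm.Logit(employed, X).fit(disp=False)
print(model.params)                                       # intercept and gender coefficient
```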

EXAMPLES OF HYPOTHESIS AS WRITTEN IN PUBLISHED ARTICLES IN RELATION TO OTHER PARTS

  • EXAMPLE 1. Background, hypotheses, and aims are provided
  • “Pregnant women need skilled care during pregnancy and childbirth, but that skilled care is often delayed in some countries… (text omitted). The focused antenatal care (FANC) model of WHO recommends that nurses provide information or counseling to all pregnant women… (text omitted). Job aids are visual support materials that provide the right kind of information using graphics and words in a simple and yet effective manner. When nurses are not highly trained or have many work details to attend to, these job aids can serve as a content reminder for the nurses and can be used for educating their patients (Jennings, Yebadokpo, Affo, & Agbogbe, 2010) (text omitted). Importantly, additional evidence is needed to confirm how job aids can further improve the quality of ANC counseling by health workers in maternal care… (text omitted)” 28
  • “This has led us to hypothesize that the quality of ANC counseling would be better if supported by job aids. Consequently, a better quality of ANC counseling is expected to produce higher levels of awareness concerning the danger signs of pregnancy and a more favorable impression of the caring behavior of nurses.” 28
  • “This study aimed to examine the differences in the responses of pregnant women to a job aid-supported intervention during ANC visit in terms of 1) their understanding of the danger signs of pregnancy and 2) their impression of the caring behaviors of nurses to pregnant women in rural Tanzania.” 28
  • EXAMPLE 2. Background, hypotheses, and aims are provided
  • “We conducted a two-arm randomized controlled trial (RCT) to evaluate and compare changes in salivary cortisol and oxytocin levels of first-time pregnant women between experimental and control groups. The women in the experimental group touched and held an infant for 30 min (experimental intervention protocol), whereas those in the control group watched a DVD movie of an infant (control intervention protocol). The primary outcome was salivary cortisol level and the secondary outcome was salivary oxytocin level.” 29
  • “We hypothesize that at 30 min after touching and holding an infant, the salivary cortisol level will significantly decrease and the salivary oxytocin level will increase in the experimental group compared with the control group.” 29
  • EXAMPLE 3. Background, aim, and hypothesis are provided
  • “In countries where the maternal mortality ratio remains high, antenatal education to increase Birth Preparedness and Complication Readiness (BPCR) is considered one of the top priorities [1]. BPCR includes birth plans during the antenatal period, such as the birthplace, birth attendant, transportation, health facility for complications, expenses, and birth materials, as well as family coordination to achieve such birth plans. In Tanzania, although increasing, only about half of all pregnant women attend an antenatal clinic more than four times [4]. Moreover, the information provided during antenatal care (ANC) is insufficient. In the resource-poor settings, antenatal group education is a potential approach because of the limited time for individual counseling at antenatal clinics.” 30
  • “This study aimed to evaluate an antenatal group education program among pregnant women and their families with respect to birth-preparedness and maternal and infant outcomes in rural villages of Tanzania.” 30
  • “The study hypothesis was if Tanzanian pregnant women and their families received a family-oriented antenatal group education, they would (1) have a higher level of BPCR, (2) attend antenatal clinic four or more times, (3) give birth in a health facility, (4) have less complications of women at birth, and (5) have less complications and deaths of infants than those who did not receive the education.” 30

Research questions and hypotheses are crucial components of any type of research, whether quantitative or qualitative, and should be developed at the very beginning of the study. Excellent research questions lead to superior hypotheses, which, like a compass, set the direction of research and often determine the successful conduct of the study. Many research studies have floundered because the development of research questions and subsequent hypotheses was not given the thought and meticulous attention needed. Developing them is an iterative process based on extensive knowledge of the literature and an insightful grasp of the knowledge gap. Focused, concise, and specific research questions provide a strong foundation for constructing hypotheses, which serve as formal predictions about the research outcomes. Careful construction of research questions and hypotheses when planning research helps avoid unethical studies and poor outcomes by defining well-founded objectives that determine the design, course, and outcome of the study.

Disclosure: The authors have no potential conflicts of interest to disclose.

Author Contributions:

  • Conceptualization: Barroga E, Matanguihan GJ.
  • Methodology: Barroga E, Matanguihan GJ.
  • Writing - original draft: Barroga E, Matanguihan GJ.
  • Writing - review & editing: Barroga E, Matanguihan GJ.


Doing Quantitative Research in Education with SPSS

  • By: Daniel Muijs
  • Publisher: SAGE Publications Ltd
  • Publication year: 2011
  • Online pub date: December 20, 2013
  • Discipline: Education
  • Methods: Secondary data analysis, Quantitative data collection, Dependent variables
  • DOI: https://doi.org/10.4135/9781446287989
  • Keywords: achievement, attitudes, population, pupils, scale, self-concept, teaching
  • Print ISBN: 9781849203241
  • Online ISBN: 9781446287989

Doing Quantitative Research in Education with SPSS, Second Edition, an accessible and authoritative introduction, is essential for education students and researchers needing to use quantitative methods for the first time.

Using datasets from real-life educational research and avoiding the use of mathematical formulae, the author guides students through the essential techniques that they will need to know, explaining each procedure using the latest version of SPSS. The datasets can also be downloaded from the book's website, enabling students to practice the techniques for themselves.

This revised and updated second edition now also includes more advanced methods such as log linear analysis, logistic regression, and canonical correlation.

Written specifically for those with no prior experience of quantitative research, this book is ideal for education students and researchers in this field.

Front Matter

  • Education at SAGE
  • List of Figures
  • List of Tables
  • Key Product Names of SPSS
  • Chapter 1 | Introduction to Quantitative Research
  • Chapter 2 | Experimental and Quasi-Experimental Research
  • Chapter 3 | Designing Non-Experimental Studies
  • Chapter 4 | Validity, Reliability and Generalisability
  • Chapter 5 | Introduction to SPSS and the Data Set
  • Chapter 6 | Univariate Statistics
  • Chapter 7 | Bivariate Analysis: Comparing Two Groups
  • Chapter 8 | Bivariate Analysis: Looking at the Relationship Between Two Variables
  • Chapter 9 | Multivariate Analysis: Using Regression Models to Look at the Relationship Between Several Predictors and One Dependent Variable
  • Chapter 10 | Using Analysis of Variance to Compare More Than Two Groups
  • Chapter 11 | Developing Scales and Measures: Item and Factor Analysis
  • Chapter 12 | One Step Beyond: Introduction to Multilevel Modelling and Structural Equation Modelling

Back Matter


Chapter 1: Introduction to Research Methods

Learning Objectives

At the end of this chapter, you will be able to:

  • Define the term “research methods”.
  • List the nine steps in undertaking a research project.
  • Differentiate between applied and basic research.
  • Explain where research ideas come from.
  • Define ontology and epistemology and explain the difference between the two.
  • Identify and describe five key research paradigms in social sciences.
  • Differentiate between inductive and deductive approaches to research.

Welcome to Introduction to Research Methods. In this textbook, you will learn why research is done and, more importantly, about the methods researchers use to conduct research. Research comes in many forms and, although you may feel that it has no relevance to you and/or that you know nothing about it, you are exposed to research multiple times a day. You also undertake research yourself, perhaps without even realizing it. This course will help you to understand the research you are exposed to on a daily basis, and how to be more critical of the research you read and use in your own life and career.

This text is intended as an introduction. A plethora of resources exists related to more detailed aspects of conducting research; it is not our intention to replace any of these more comprehensive resources. Keep notes and build your own reading list of articles as you go through the course. Feedback is appreciated and helps to improve this open-source textbook.

Research Methods, Data Collection and Ethics Copyright © 2020 by Valerie Sheppard is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Module 3 Chapter 1: From Research Questions to Research Approaches

The approaches that social work investigators adopt in their research studies are directly related to the nature of the research questions being addressed. In Module 2 you learned about exploratory, descriptive, and explanatory research questions. Let’s consider different approaches to finding answers to each type of question.

In this chapter we build on what was learned in Module 2 about research questions, examining how investigators’ approaches to research are determined by the nature of those questions. The approaches we explore are all systematic, scientific approaches, and when properly conducted and reported, they all contribute empirical evidence to build knowledge.  In this chapter you will read about:

  • qualitative research approaches for understanding diverse populations, social problems, and social phenomena,
  • quantitative research approaches for understanding diverse populations, social problems, and social phenomena,
  • mixed methods research approaches for understanding diverse populations, social problems, and social phenomena.

Overview of Qualitative Approaches

Questions of a descriptive or exploratory nature are often asked and addressed through qualitative research. The specific aim in these studies is to understand diverse populations, social work problems, or social phenomena as they naturally occur, situated in their natural environments, providing rich, in-depth, participant-centered descriptions of the phenomena being studied. Qualitative research approaches have been described as “humanistic” in aiming to study the world from the perspective of those who are experiencing it themselves; this also contributes to a social justice commitment in that the approaches give “voice” to the individuals who are experiencing the phenomena of interest (Denzin & Lincoln, 2011). As such, qualitative research approaches are also credited with being sensitive and responsive to diversity—embracing feminist, ethnic, class, critical race, queer, and ability/disability theory and lenses.

In qualitative research, the investigator is engaged as an observer and interpreter, being acutely aware of the subjectivity of the resulting observations and interpretations.

“At this level, qualitative research involves an interpretive, naturalistic approach to the world” (Denzin & Lincoln, 2011, p. 3)

Because the data are rich and deep, a lot of information is collected by involving relatively few participants; otherwise, the investigator would be overwhelmed by a tremendous volume of information to collect, sift through, process, interpret, and analyze. Thus, a single qualitative study has a relatively low level of generalizability  to the population as a whole because of its methodology, but that is not the aim or goal of this approach.

In addition, because the aim is to develop understanding of the participating individuals’ lived experiences, the investigator in a qualitative study seldom imposes structure with standardized measurement tools. The investigator may not even start with preconceived theory and hypotheses. Instead, the methodologies involve a great deal of open-ended triggers, questions, or stimuli to be interpreted by the persons providing insight:

“Qualitative research’s express purpose is to produce descriptive data in an individual’s own written or spoken words and/or observable behavior” (Holosko, 2006, p. 12).

Furthermore, investigators often become a part of the qualitative research process: they maintain awareness of their own influences on the data being collected and on the impact of their own experiences and processes in interpreting the data provided by participants. In some qualitative methodologies, the investigator actually enters into/becomes immersed in the events or phenomena being studied, to both live and observe the experiences first-hand.

Qualitative data and interpretations are recognized as being subjective in nature—that is the purpose—rather than assuming objectivity. Qualitative research is based on experientially derived data and is interpretive, meaning it is “concerned with understanding the meaning of human experience from the subject’s own frame of reference” (Holosko, 2006, p. 13). In this approach, conclusions about the nature of reality are specific to each individual study participant, following his or her own interpretation of that reality. These approaches are considered to flow from an inductive reasoning process in which general themes or patterns are derived from specific data (Creswell & Poth, 2018).

Several purposes of qualitative approaches in social work include:

  • describing and exploring the nature of phenomena, events, or relationships at any system level (individual to global)
  • generating theory
  • initially testing ideas or assumptions (in theory or about practices)
  • evaluating participants’ lived experiences with practices, programs, policies, or participation in a research study, particularly with diverse participants
  • exploring the “fit” of quantitative research conclusions with participants’ lived experiences, particularly with diverse participants
  • informing the development of clinical or research assessment/measurement tools, particularly with diverse participants.

Overview of Quantitative Approaches

Questions of the exploratory, descriptive, or explanatory type are often asked and addressed through quantitative research approaches, particularly questions that have a numeric component. Exploratory and descriptive quantitative studies rely on objective measures for data collection, a major difference from qualitative studies, which aim to understand subjective perspectives and experiences. Explanatory quantitative studies often begin with theory and hypotheses, and proceed to empirically test the hypotheses that investigators generated. Because the data are numeric, statistical hypothesis testing is possible in many types of quantitative studies.

Quantitative research studies utilize methodologies that enhance generalizability of results to the greatest extent possible—individual differences are de-emphasized, similarities across individuals are emphasized. These studies can be quite large in terms of participant numbers, and the study samples need to be developed in such a manner as to support generalization to the larger populations of interest.

The process is generally described as following a deductive logical system, in which specific, testable predictions are derived from general theory and then evaluated against the data to reach a generalizable conclusion. The philosophical roots (epistemology) underlying quantitative approaches lie in positivism, involving the seeking of empirical “facts or causes of social phenomena based on experimentally derived evidence and/or valid observations” (Holosko, 2006, p. 13). The empirical orientation is objective in that investigators attempt to be detached from the collection and interpretation of data in order to minimize their own influences and biases. Furthermore, investigators utilize objective measurement tools to the greatest extent possible in the process of collecting quantitative study data.

Several purposes of quantitative approaches in social work include:

  • describing and exploring the dimensions of diverse populations, phenomena, events, or relationships at any system level (individual to global)—how much, how many, how large, how often, etc. (including epidemiology questions and methods)
  • testing theory (including etiology questions)
  • experimentally determining the existence of relationships between factors that might influence phenomena or relationships at any system level (including epidemiology and etiology questions)
  • testing causal pathways between factors that might influence phenomena or relationships at any system level (including etiology questions)
  • evaluating quantifiable outcomes of practices, programs, or policies
  • assessing the reliability and validity of clinical or research assessment/measurement tools.

Overview of Mixed-Method Approaches

Important dimensions distinguish between qualitative and quantitative approaches. First, qualitative approaches rely on “insider” perspectives, whereas quantitative approaches are directed by “outsiders” in the role of investigator (Padgett, 2008). Second, qualitative results are presented holistically, whereas quantitative approaches present results in terms of specific variables dissected from the whole for close examination; qualitative studies emphasize the context of individuals’ experiences, whereas quantitative studies tend to decontextualize the phenomena under study (Padgett, 2008). Third, quantitative research approaches tend to follow a positivist philosophy, seeking objectivity and representation of what actually exists; qualitative research approaches follow from a post-positivist philosophy, recognizing that observation is always shaped by the observer, therefore is always subjective in nature and this should be acknowledged and embraced. In post-positivist qualitative research traditions, realities are perceived as being socially constructed, whereas in positivist quantitative research, a single reality exists, waiting to be discovered or understood. The quantitative perspective on reality has a long tradition in the physical and natural sciences (physics, chemistry, anatomy, physiology, astronomy, and others). The social construction perspective has a strong hold in social science and understanding social phenomena. But what if an investigator’s questions are relevant to both qualitative and quantitative approaches?

Given the fundamental philosophical and practical differences, some scholars argue that there can be no mixing of the approaches, that the underlying paradigms are too different. However, mixed-methods research  has also been described as a new paradigm (since the 1980s) for social science:

“Like the mythology of the phoenix, mixed methods research has arisen out of the ashes of the paradigm wars to become the third methodological movement. The fields of applied social science and evaluation are among those which have shown the greatest popularity and uptake of mixed methods research designs” (Cameron & Miller, 2007, p. 3). 


Mixed-methods research approaches are used to address, in a single study, the acknowledged limitations of both quantitative and qualitative approaches. Mixed methods research combines elements of both qualitative and quantitative approaches for the purpose of achieving both depth and breadth of understanding, along with corroboration of results (Johnson, Onwuegbuzie, & Turner, 2007, p. 123). One mixed-methods strategy is related to the concept of triangulation: understanding an event or phenomenon through the use of varied data sources and methods all applied to the same phenomenon (Denzin & Lincoln, 2011; see Figure 1-1).

Figure 1-1. Depiction of triangulation as synthesis of different data sources


For example, in a survey research study of student debt load experienced by social work doctoral students, the investigators gathered quantitative data concerning demographics, dollar amounts of debt and resources, and other numeric data from students and programs (Begun & Carter, 2017). In addition, they collected qualitative data about the experience of incurring and managing debt load, how debt shaped students’ career path decisions, practices around mentoring doctoral students about student debt load, and ideas for addressing the problem. Triangulation came into play in two ways: first, data were collected from both students and programs about the same topics; and second, a sub-sample of the originally surveyed participants engaged in qualitative interviews concerning the “fit,” or validity, of conclusions drawn from the prior qualitative and quantitative data.

Three different types of mixed methods approaches are used:

  • Convergent designs involve the simultaneous collection of both qualitative and quantitative data, followed by analysis of both data sets, and merging the two sets of results in a comparative manner.
  • Explanatory sequential designs use quantitative methods first, and then apply qualitative methods to help explain and further interpret the quantitative results.
  • Exploratory sequential designs first explore a problem or phenomenon through qualitative methods, especially if the topic is previously unknown or the population is understudied and unfamiliar. These qualitative findings are then used to build the quantitative phase of a project (Creswell, 2014, p. 6).

Mixed methods approaches are useful in developing and testing new research or clinical measurement tools. For example, this is done in an exploratory sequential process whereby detail-rich qualitative data inform the creation of a quantitative instrument. The quantitative instrument is then tested in both quantitative and qualitative ways to confirm that it is adequate for its intended use. This iterative process is depicted in Figure 1-2.

Figure 1-2. Iterative qualitative and quantitative process of instrument development


One example of how this mixed-methods approach was utilized was in development of the Safe-At-Home instrument for assessing individuals’ subjective readiness to change their intimate partner violence behavior (Begun et al., 2003; 2008). The transtheoretical model of behavior change (TMBC) underlies the instrument’s development: identifying stages in readiness to change one’s behavior and matching these stages to the most appropriate type of intervention strategy (Begun et al., 2001). The first step in developing the intimate partner violence Safe-At-Home instrument for assessing readiness to change was to qualitatively generate a list of statements that could be used in a quantitative rating scale. Providers of treatment services to individuals arrested for domestic or relationship violence were engaged in mutual teaching/learning with the investigators concerning the TMBC as it might relate to the perpetration of intimate partner violence. They independently generated lists of the kinds of statements they heard from individuals in their treatment programs, statements they believed were demonstrative of what they understood as the different stages in the change process. The investigators then worked with them to reduce the amassed list of statements into stage-representative categories, eliminating duplicates and ambiguous statements, and retaining the original words and phrases they heard to the greatest extent possible. The second phase was both quantitative and qualitative in nature: testing the instrument with a small sample of men engaged in batterer treatment programs and interviewing the men about the experience of using the instrument. Based on the results and their feedback, the instrument was revised. This process was followed through several iterations. The next phases were quantitative: determining the psychometric characteristics of the instrument and using it to quantitatively evaluate batterer treatment programs—the extent to which individuals were helped to move forward in stages of the change cycle.

Interactive Excel Workbook Activities

Complete the following Workbook Activity:

  • SWK 3401.3-1.1 Getting Started: Understanding an Excel Data File

Chapter Summary

In this chapter you were introduced to three general approaches for moving from research question to research method. You were provided with a brief overview of the philosophical underpinnings and uses of qualitative, quantitative, and mixed-methods approaches. Next, you are provided with more detailed descriptions of qualitative and quantitative traditions and their associated methodologies.


Social Work 3401 Coursebook Copyright © by Dr. Audrey Begun is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License , except where otherwise noted.


Chapter Four: Quantitative Methods (Part 1)

Once you have chosen a topic to investigate, you need to decide which type of method is best to study it. This is one of the most important choices you will make on your research journey. Understanding the value of each of the methods described in this textbook to answer different questions allows you to plan your own studies with more confidence, critique the studies others have done, and provide advice to your colleagues and friends on what type of research they should do to answer questions they have. After briefly reviewing quantitative research assumptions, this chapter is organized in three parts or sections. These parts can also be used as a checklist when working through the steps of your study. Specifically, part one focuses on planning a quantitative study (collecting data), part two explains the steps involved in doing a quantitative study, and part three discusses how to make sense of your results (organizing and analyzing data).


Quantitative Worldview Assumptions: A Review

In chapter 2, you were introduced to the unique assumptions quantitative research holds about knowledge and how it is created, or what the authors referred to in chapter one as "epistemology." Understanding these assumptions can help you better determine whether you need to use quantitative methods for a particular research study in which you are interested.

Quantitative researchers believe there is an objective reality, which can be measured. "Objective" here means that the researcher is not relying on their own perceptions of an event. S/he is attempting to gather "facts" which may be separate from people's feelings or perceptions about the facts. These facts are often conceptualized as "causes" and "effects." When you ask research questions or pose hypotheses with words in them such as "cause," "effect," "difference between," and "predicts," you are operating under assumptions consistent with quantitative methods. The overall goal of quantitative research is to develop generalizations that enable the researcher to better predict, explain, and understand some phenomenon.

Because quantitative methods try to establish cause-effect relationships that can be generalized to the population at large, the research process and related procedures are very important. Research should be consistently and objectively conducted, without bias or error, in order to be considered valid (accurate) and reliable (consistent). Perhaps this emphasis on accurate and standardized methods is because the roots of quantitative research are in the natural and physical sciences, both of which have at their base the need to test hypotheses and theories in order to better understand the world in which we live. When a person goes to a doctor and is prescribed some medicine to treat an illness, that person is glad such research has been done to know what the effects of taking this medicine are on others' bodies, so s/he can trust the doctor's judgment and take the medicine.

As covered in chapters 1 and 2, the questions you are asking should lead you to a certain research method choice. Students sometimes want to avoid doing quantitative research because of fear of math/statistics, but if their questions call for that type of research, they should forge ahead and use it anyway. If a student really wants to understand what the causes or effects are for a particular phenomenon, they need to do quantitative research. If a student is interested in what sorts of things might predict a person's behavior, they need to do quantitative research. If they want to confirm the finding of another researcher, most likely they will need to do quantitative research. If a student wishes to generalize beyond their participant sample to a larger population, they need to be conducting quantitative research.

So, ultimately, your choice of methods really depends on what your research goal is. What do you really want to find out? Do you want to compare two or more groups, look for relationships between certain variables, predict how someone will act or react, or confirm some findings from another study? If so, you want to use quantitative methods.

A topic such as self-esteem can be studied in many ways. Listed below are some example RQs about self-esteem. Which of the following research questions should be answered with quantitative methods?

  • Is there a difference between men's and women's levels of self-esteem?
  • How do college-aged women describe their ups and downs with self-esteem?
  • How has "self-esteem" been constructed in popular self-help books over time?
  • Is there a relationship between self-esteem levels and communication apprehension?

What are the advantages of approaching a topic like self-esteem using quantitative methods? What are the disadvantages?

For more information, see the following website: Analyse This!!! Learning to analyse quantitative data

Answers:  1 & 4

Quantitative Methods Part One: Planning Your Study

Planning your study is one of the most important steps in the research process when doing quantitative research. As seen in the diagram below, it involves choosing a topic, writing research questions/hypotheses, and designing your study. Each of these topics will be covered in detail in this section of the chapter.

[Diagram: planning a quantitative study involves choosing a topic, writing research questions/hypotheses, and designing the study.]

Topic Choice

Decide on topic.

How do you go about choosing a topic for a research project? One of the best ways to do this is to research something about which you would like to know more. Your communication professors will probably also want you to select something that is related to communication and things you are learning about in other communication classes.

When the authors of this textbook select research topics to study, they choose things that pique their interest for a variety of reasons, sometimes personal and sometimes because they see a need for more research in a particular area. For example, April Chatham-Carpenter studies adoption return trips to China because she has two adopted daughters from China and because there is very little research on this topic for Chinese adoptees and their families; she studied home vs. public schooling because her sister home schools, and at the time she started the study very few researchers had considered the social network implications for home schoolers (cf.  http://www.uni.edu/chatham/homeschool.html ).

When you are asked in this class and other classes to select a topic to research, think about topics that you have wondered about, that affect you personally, or that you know have gaps in the research. Then start writing down questions you would like answered about this topic. These questions will help you decide whether the goal of your study is to understand something better, explain causes and effects of something, gather the perspectives of others on a topic, or look at how language constructs a certain view of reality.

Review Previous Research

In quantitative research, you do not rely on your conclusions to emerge from the data you collect. Rather, you start out looking for certain things based on what the past research has found. This is consistent with what was called in chapter 2 a deductive approach (Keyton, 2011), which also leads a quantitative researcher to develop a research question or research problem from reviewing a body of literature, with the previous research framing the study that is being done. So, reviewing previous research done on your topic is an important part of the planning of your study. As seen in chapter 3 and the Appendix, to do an adequate literature review, you need to identify portions of your topic that could have been researched in the past. To do that, you select key terms or concepts related to your topic.

Some people use concept maps to help them identify useful search terms for a literature review. For example, see the following website: Concept Mapping: How to Start Your Term Paper Research .

Narrow Topic to Researchable Area

Once you have selected your topic area and reviewed relevant literature related to your topic, you need to narrow your topic to something that can be researched practically and that will take the research on this topic further. You don't want your research topic to be so broad or large that you are unable to research it. Plus, you want to explain some phenomenon better than has been done before, adding to the literature and theory on a topic. You may want to test out what someone else has found, replicating their study, and therefore adding to the body of knowledge already created.

To see how a literature review can be helpful in narrowing your topic, see the following sources: Narrowing or Broadening Your Research Topic and How to Conduct a Literature Review in Social Science.

Research Questions & Hypotheses

Write Your Research Questions (RQs) and/or Hypotheses (Hs)

Once you have narrowed your topic based on what you learned from doing your review of literature, you need to formalize your topic area into one or more research questions or hypotheses. If the area you are researching is a relatively new area, and no existing literature or theory can lead you to predict what you might find, then you should write a research question. Take a topic related to social media, for example, which is a relatively new area of study. You might write a research question that asks:

"Is there a difference between how 1st year and 4th year college students use Facebook to communicate with their friends?"

If, however, you are testing out something you think you might find based on the findings of a large amount of previous literature or a well-developed theory, you can write a hypothesis. Researchers often distinguish between null and alternative hypotheses. The alternative hypothesis is what you are trying to test or prove is true, while the null hypothesis assumes that the alternative hypothesis is not true. For example, if the use of Facebook had been studied a great deal, and there were theories that had been developed on the use of it, then you might develop an alternative hypothesis, such as: "First-year students spend more time using Facebook to communicate with their friends than fourth-year students do." Your null hypothesis, on the other hand, would be: "First-year students do not spend any more time using Facebook to communicate with their friends than fourth-year students do." Researchers, however, only state the alternative hypothesis in their studies, and actually call it "hypothesis" rather than "alternative hypothesis."
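
To make the null/alternative distinction concrete, here is a minimal Python sketch of how the Facebook hypothesis above could be tested with an independent-samples t-test. The numbers are invented for illustration, and the scipy-based test is just one common way to run this comparison (the alternative argument requires scipy 1.6 or later).

```python
# H (alternative): first-year students spend MORE time on Facebook than fourth-year students.
# H0 (null): first-year students do NOT spend more time on Facebook than fourth-year students.
from scipy import stats

# Hypothetical minutes per day spent on Facebook communicating with friends
first_year = [45, 60, 30, 75, 50, 65, 40, 55]
fourth_year = [35, 25, 40, 30, 45, 20, 50, 30]

# One-tailed independent-samples t-test (alternative: first-year mean is greater)
t_stat, p_value = stats.ttest_ind(first_year, fourth_year, alternative="greater")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# If p falls below the chosen alpha level (commonly .05), the null hypothesis is
# rejected in favor of the alternative; otherwise we fail to reject the null.
```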

Process of Writing a Research Question/Hypothesis.

Once you have decided to write a research question (RQ) or hypothesis (H) for your topic, you should go through the following steps to create your RQ or H.

Name the concepts from your overall research topic that you are interested in studying.

RQs and Hs have variables, or concepts that you are interested in studying. Variables can take on different values. For example, in the RQ above, there are at least two variables – year in college and use of Facebook (FB) to communicate. Both of them have a variety of levels within them.

When you look at the concepts you identified, are there any concepts which seem to be related to each other? For example, in our RQ, we are interested in knowing if there is a difference between first-year students and fourth-year students in their use of FB, meaning that we believe there is some connection between our two variables.

  • Decide what type of a relationship you would like to study between the variables. Do you think one causes the other? Does a difference in one create a difference in the other? As the value of one changes, does the value of the other change?

Identify which one of these concepts is the independent (or predictor) variable, the concept that is perceived to be the cause of change in the other variable, and which one is the dependent (criterion) variable, the one that is affected by changes in the independent variable. In the above example RQ, year in school is the independent variable, and amount of time spent on Facebook communicating with friends is the dependent variable. The amount of time spent on Facebook depends on a person's year in school.

If you're still confused about independent and dependent variables, check out the following site: Independent & Dependent Variables .
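
As a purely hypothetical illustration of how the independent and dependent variables from the Facebook example might look in an actual data set, the sketch below organizes them as columns, one row per participant. The column names and values are assumptions made for this example only.

```python
# Hypothetical data: 'year_in_school' is the independent (predictor) variable,
# 'fb_minutes_per_day' is the dependent (criterion) variable.
import pandas as pd

data = pd.DataFrame({
    "year_in_school": ["first", "first", "first", "fourth", "fourth", "fourth"],
    "fb_minutes_per_day": [60, 45, 75, 30, 25, 40],
})

# Compare the dependent variable across the levels of the independent variable
print(data.groupby("year_in_school")["fb_minutes_per_day"].mean())
```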

Express the relationship between the concepts as a single sentence – in either a hypothesis or a research question.

For example, "is there a difference between international and American students on their perceptions of the basic communication course," where cultural background and perceptions of the course are your two variables. Cultural background would be the independent variable, and perceptions of the course would be your dependent variable. More examples of RQs and Hs are provided in the next section.

APPLICATION: Try the above steps with your topic now. Check with your instructor to see if s/he would like you to send your topic and RQ/H to him/her via e-mail.

Types of Research Questions/Hypotheses

Once you have written your RQ/H, you need to determine what type of research question or hypothesis it is. This will help you later decide what types of statistics you will need to run to answer your question or test your hypothesis. There are three possible types of questions you might ask, and two possible types of hypotheses. The first type of question cannot be written as a hypothesis, but the second and third types can.

Descriptive Question.

The first type of question is a descriptive question. If you have only one variable or concept you are studying, OR if you are not interested in how the variables you are studying are connected or related to each other, then your question is most likely a descriptive question.

This type of question is the closest to looking like a qualitative question, and often starts with a "what" or "how" or "why" or "to what extent" type of wording. What makes it different from a qualitative research question is that the question will be answered using numbers rather than qualitative analysis. Some examples of a descriptive question, using the topic of social media, include the following.

"To what extent are college-aged students using Facebook to communicate with their friends?"
"Why do college-aged students use Facebook to communicate with their friends?"

Notice that neither of these questions has a clear independent or dependent variable, as there is no clear cause or effect being assumed by the question. The question is merely descriptive in nature. It can be answered by summarizing the numbers obtained for each category, such as by providing percentages, averages, or just the raw totals for each type of strategy or organization. This is true also of the following research questions found in a study of online public relations strategies:

"What online public relations strategies are organizations implementing to combat phishing" (Baker, Baker, & Tedesco, 2007, p. 330), and
"Which organizations are doing most and least, according to recommendations from anti- phishing advocacy recommendations, to combat phishing" (Baker, Baker, & Tedesco, 2007, p. 330)

The researchers in this study reported statistics in their results or findings section, making it clearly a quantitative study, but without an independent or dependent variable; therefore, these research questions illustrate the first type of RQ, the descriptive question.
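
Because descriptive questions are answered by summarizing numbers rather than by comparing groups or relating variables, the analysis can be as simple as the following Python sketch. The survey responses here are invented solely to show the kind of summary (percentages, averages, raw totals) described above.

```python
# Hypothetical survey responses for a descriptive question about Facebook use
import pandas as pd

responses = pd.DataFrame({
    "uses_facebook_daily": ["yes", "yes", "no", "yes", "no", "yes", "yes", "no"],
    "minutes_per_day": [45, 60, 0, 30, 0, 90, 20, 0],
})

# Percentage of respondents in each category
print(responses["uses_facebook_daily"].value_counts(normalize=True) * 100)

# Average minutes per day across all respondents
print(responses["minutes_per_day"].mean())

# Raw totals for each category
print(responses["uses_facebook_daily"].value_counts())
```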

Difference Question/Hypothesis.

The second type of question is a question/hypothesis of difference, and will often have the word "difference" as part of the question. The very first research question in this section, asking if there is a difference between 1st year and 4th year college students' use of Facebook, is an example of this type of question. In this type of question, the independent variable is some type of grouping or categories, such as age. Another example of a question of difference is one April asked in her research on home schooling: "Is there a difference between home vs. public schoolers on the size of their social networks?" In this example, the independent variable is home vs. public schooling (a group being compared), and the dependent variable is size of social networks. Hypotheses can also be difference hypotheses, as the following example on the same topic illustrates: "Public schoolers have a larger social network than home schoolers do."

Relationship/Association Question/Hypothesis.

The third type of question is a relationship/association question or hypothesis, and will often have the word "relate" or "relationship" in it, as the following example does: "There is a relationship between number of television ads for a political candidate and how successful that political candidate is in getting elected." Here the independent (or predictor) variable is number of TV ads, and the dependent (or criterion) variable is the success at getting elected. In this type of question, there is no grouping being compared, but rather the independent variable is continuous (ranges from zero to a certain number) in nature. This type of question can be worded as either a hypothesis or as a research question, as stated earlier.
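
For a relationship/association question or hypothesis such as the television-ad example above, the analysis examines how two continuous variables vary together. The sketch below uses invented numbers and a Pearson correlation, one common choice for this kind of question; vote share stands in here as a rough, continuous proxy for electoral success.

```python
# Hypothetical data: number of TV ads aired and percent of the vote received
from scipy import stats

tv_ads = [10, 25, 40, 55, 70, 85, 100]
vote_share = [32, 38, 41, 47, 52, 55, 61]

r, p_value = stats.pearsonr(tv_ads, vote_share)
print(f"r = {r:.2f}, p = {p_value:.3f}")

# A positive r means candidates who aired more ads tended to receive a larger
# vote share in this invented sample; p indicates whether the association is
# statistically significant at the chosen alpha level.
```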

Test out your knowledge of the above information by answering the following questions about the RQs/Hs listed below. (Remember, for a descriptive question there are no clear independent & dependent variables.)

  • What is the independent variable (IV)?
  • What is the dependent variable (DV)?
  • What type of research question/hypothesis is it? (descriptive, difference, relationship/association)
  • "Is there a difference on relational satisfaction between those who met their current partner through online dating and those who met their current partner face-to-face?"
  • "How do Fortune 500 firms use focus groups to market new products?"
  • "There is a relationship between age and amount of time spent online using social media."

Answers: RQ1 is a difference question, with how partners met (online vs. face-to-face) being the IV and relational satisfaction being the DV. RQ2 is a descriptive question with no IV or DV. RQ3 is a relationship hypothesis with age as the IV and amount of time spent online as the DV.

Design Your Study

The third step in planning your research project, after you have decided on your topic/goal and written your research questions/hypotheses, is to design your study, which means to decide how to proceed in gathering data to answer your research question or to test your hypothesis. This step includes six things to do. [NOTE: The terms used in this section will be defined as they are used.]

  • Decide type of study design: Experimental, quasi-experimental, non-experimental.
  • Decide kind of data to collect: Survey/interview, observation, already existing data.
  • Operationalize variables into measurable concepts.
  • Determine type of sample: Probability or non-probability.
  • Decide how you will collect your data: face-to-face, via e-mail, an online survey, library research, etc.
  • Pilot test your methods.

Types of Study Designs

With quantitative research being rooted in the scientific method, traditional research is structured in an experimental fashion. This is especially true in the natural sciences, where they try to prove causes and effects on topics such as successful treatments for cancer. For example, the University of Iowa Hospitals and Clinics regularly conduct clinical trials to test for the effectiveness of certain treatments for medical conditions ( University of Iowa Hospitals & Clinics: Clinical Trials ). They use human participants to conduct such research, regularly recruiting volunteers. However, in communication, true experiments with treatments the researcher controls are less necessary and thus less common. It is important for the researcher to understand which type of study s/he wishes to do, in order to accurately communicate his/her methods to the public when describing the study.

There are three possible types of studies you may choose to do, when embarking on quantitative research: (a) True experiments, (b) quasi-experiments, and (c) non-experiments.

For more information to read on these types of designs, take a look at the following website and related links in it: Types of Designs .

The following flowchart should help you distinguish between the three types of study designs described below.

[Flowchart: distinguishing among true experiments, quasi-experiments, and non-experiments.]

True Experiments.

The first two types of study designs use difference questions/hypotheses, as the independent variable for true and quasi-experiments is nominal or categorical (based on categories or groupings), meaning you have groups that are being compared. As seen in the flowchart above, what distinguishes a true experiment from the other two designs is a concept called "random assignment." Random assignment means that the researcher controls to which group the participants are assigned. April's study of home vs. public schooling was NOT a true experiment, because she could not control which participants were home schooled and which ones were public schooled, and instead relied on already existing groups.
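
Random assignment itself is easy to carry out in code. The sketch below, with made-up participant IDs and condition labels, shows one simple way a researcher might randomly split participants between two lecture-video conditions like those in the study described next.

```python
# Randomly assign hypothetical participants to one of two conditions
import random

participants = [f"P{i:03d}" for i in range(1, 21)]   # 20 hypothetical participant IDs
random.seed(42)                                      # fixed seed so the split is reproducible
random.shuffle(participants)

half = len(participants) // 2
assignment = {
    "traditional_examples": participants[:half],
    "contemporary_examples": participants[half:],
}

for condition, group in assignment.items():
    print(condition, group)
```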

An example of a true experiment reported in a communication journal is a study investigating the effects of using interest-based contemporary examples in a lecture on the history of public relations, in which the researchers had the following two hypotheses: "Lectures utilizing interest-based examples should result in more interested participants" and "Lectures utilizing interest-based examples should result in participants with higher scores on subsequent tests of cognitive recall" (Weber, Corrigan, Fornash, & Neupauer, 2003, p. 118). In this study, the 122 college student participants were randomly assigned by the researchers to one of two lecture video viewing groups: a video lecture with traditional examples and a video with contemporary examples. (To see the results of the study, look it up using your school's library databases).

A second example of a true experiment in communication is a study of the effects of viewing either a dramatic narrative television show or a nonnarrative television show about the consequences of an unexpected teen pregnancy. The researchers randomly assigned their 367 undergraduate participants to view one of the two types of shows.

Moyer-Gusé, E., & Nabi, R. L. (2010). Explaining the effects of narrative in an entertainment television program: Overcoming resistance to persuasion. Human Communication Research, 36, 26-52.

A third example of a true experiment done in the field of communication can be found in the following study.

Jensen, J. D. (2008). Scientific uncertainty in news coverage of cancer research: Effects of hedging on scientists' and journalists' credibility. Human Communication Research, 34, 347-369.

In this study, Jakob Jensen had three independent variables. He randomly assigned his 601 participants to 1 of 20 possible conditions formed by crossing those three independent variables: (a) whether the message was hedged or not hedged, (b) the source of the hedging (research attributed to the primary scientists vs. unaffiliated scientists), and (c) which of five randomly selected news stories about cancer research was used. Although this study was fairly complex, it does illustrate the true experiment in our field, since the participants were randomly assigned to read a particular news story with certain characteristics.

Quasi-Experiments.

If the researcher is not able to randomly assign participants to one of the treatment groups (the levels of the independent variable), because the participants already belong to one of them (e.g., age group; home vs. public schooling), then the design is called a quasi-experiment. Here you still have an independent variable with groups, but the participants already belong to a group before the study starts, and the researcher has no control over which group they belong to.

An example of a hypothesis found in a communication study is the following: "Individuals high in trait aggression will enjoy violent content more than nonviolent content, whereas those low in trait aggression will enjoy violent content less than nonviolent content" (Weaver & Wilson, 2009, p. 448). In this study, the researchers could not assign the participants to a high or low trait aggression group since this is a personality characteristic, so this is a quasi-experiment. It does not have any random assignment of participants to the independent variable groups. Read their study, if you would like to, at the following location.

Weaver, A. J., & Wilson, B. J. (2009). The role of graphic and sanitized violence in the enjoyment of television dramas. Human Communication Research, 35(3), 442-463.

Benoit and Hansen (2004) likewise did not randomly assign participants to groups in their study of a national presidential election survey, which examined differences between debate viewers and non-viewers on several dependent variables, such as which candidate they supported. If you are interested in discovering the results of this study, take a look at the following article.

Benoit, W. L., & Hansen, G. J. (2004). Presidential debate watching, issue knowledge, character evaluation, and vote choice. Human Communication Research, 30(1), 121-144.

Non-Experiments.

The third type of design is the non-experiment. Non-experiments are sometimes called survey designs, because their primary way of collecting data is through surveys. This is not enough to distinguish them from true experiments and quasi-experiments, however, as both of those types of designs may use surveys as well.

What makes a study a non-experiment is that the independent variable is not a grouping or categorical variable. Researchers observe or survey participants in order to describe them as they naturally exist without any experimental intervention. Researchers do not give treatments or observe the effects of a potential natural grouping variable such as age. Descriptive and relationship/association questions are most often used in non-experiments.

Some examples of this type of commonly used design for communication researchers include the following studies.

  • Serota, Levine, and Boster (2010) used a national survey of 1,000 adults to determine the prevalence of lying in America (see Human Communication Research, 36, pp. 2-25).
  • Nabi (2009) surveyed 170 young adults about their perceptions of reality television programs on cosmetic surgery, looking at several things: for example, whether viewing cosmetic surgery makeover programs relates to body satisfaction (p. 6), finding no significant relationship between those two variables (see Human Communication Research, 35, pp. 1-27).
  • Derlega, Winstead, Mathews, and Braitman (2008) collected stories from 238 college students on reasons why they would or would not disclose personal information within close relationships (see Communication Research Reports, 25, pp. 115-130). They coded the participants' answers into categories so they could count how often specific reasons were mentioned, using a method called content analysis, to answer the following research questions:

RQ1: What are research participants' attributions for the disclosure and nondisclosure of highly personal information?

RQ2: Do attributions reflect concerns about rewards and costs of disclosure or the tension between openness with another and privacy?

RQ3: How often are particular attributions for disclosure/nondisclosure used in various types of relationships? (p. 117)

What all of these non-experimental studies have in common is that the researcher neither manipulated an independent variable nor compared naturally occurring groups on an independent variable.

Identify which design discussed above should be used for each of the following research questions.

  1. Is there a difference between generations in how much they use MySpace?
  2. Is there a relationship between the age at which a person first started using Facebook and the amount of time they currently spend on Facebook daily?
  3. Is there a difference in perceptions of an organization between potential customers who are shown the organization's Facebook page and those who are not?

[HINT: Try to identify the independent and dependent variable in each question above first, before determining what type of design you would use. Also, try to determine what type of question it is – descriptive, difference, or relationship/association.]

Answers: 1. Quasi-experiment; 2. Non-experiment; 3. True experiment.

Data Collection Methods

Once you decide the type of quantitative research design you will be using, you will need to determine which of the following types of data you will collect: (a) survey data, (b) observational data, and/or (c) already existing data, as in library research.

Using the survey data collection method means you will talk to people or survey them about their behaviors, attitudes, perceptions, and demographic characteristics (e.g., biological sex, socio-economic status, race). This type of data usually consists of a series of questions related to the concepts you want to study (i.e., your independent and dependent variables). Both of April's studies on home schooling and on taking adopted children on a return trip back to China used survey data.

On a survey, you can have both closed-ended and open-ended questions. Closed-ended questions can be written in a variety of forms. Some of the most common response options include the following.

Likert responses – for example: For the following statement, ______, do you: strongly agree / agree / neutral / disagree / strongly disagree?

Semantic differential – for example: Does the following ______ make you: Happy ..................................... Sad

Yes-no answers – for example: I use social media daily. Yes / No

One site to check out for possible response options is  http://www.360degreefeedback.net/media/ResponseScales.pdf .
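
If you are curious how closed-ended responses like these end up as numbers you can analyze, here is a minimal sketch in Python (this text does not use code, so the labels, the 1-5 coding, and the tiny set of responses are illustrative assumptions rather than part of any published scale):

```python
# Minimal sketch: converting Likert labels collected on a survey into numbers.
# The five labels and the 1-5 coding below are illustrative assumptions.

LIKERT_CODES = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

responses = ["agree", "strongly agree", "neutral", "disagree", "agree"]

# Code each label as a number so the item can be summarized statistically.
coded = [LIKERT_CODES[r] for r in responses]
mean_score = sum(coded) / len(coded)

print(coded)        # [4, 5, 3, 2, 4]
print(mean_score)   # 3.6
```

Yes-no items can be handled the same way, typically coded as 1 and 0.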

Researchers often follow up some of their closed-ended questions with an "other" category, in which they ask participants to "please specify" their response if none of the options provided are applicable. They may also ask open-ended questions about "why" a participant chose a particular answer or ask participants for more information about a particular topic. If the researcher wants to use the open-ended responses as part of his/her quantitative study, the answers are usually coded into categories and counted, in terms of the frequency of a certain answer, using a method called content analysis, which will be discussed when we talk about already-existing artifacts as a source of data.

Surveys can be done face-to-face, by telephone, mail, or online. Each of these methods has its own advantages and disadvantages, primarily in the form of the cost in time and money to do the survey. For example, if you want to survey many people, then online survey tools such as surveygizmo.com and surveymonkey.com are very efficient, but not everyone has access to taking a survey on the computer, so you may not get an adequate sample of the population by doing so. Plus you have to decide how you will recruit people to take your online survey, which can be challenging. There are trade-offs with every method.

For more information on things to consider when selecting your survey method, check out the following website:

Selecting the Survey Method .

There are also many good sources for developing a good survey, such as the following websites:

  • Constructing the Survey
  • Survey Methods
  • Designing Surveys

Observation.

A second type of data collection method is observation. With this method, you make observations of the phenomenon you are studying and then code those observations so that you can count what you observe. When data are collected by observing people's behavior, this approach is often called interaction analysis. For example, if you want to study the phenomenon of mall-walking, you could go to a mall and count characteristics of mall-walkers. A researcher in the area of health communication could study the occurrence of humor in an operating room, for example, by coding and counting the use of humor in such a setting.

One extended research study using observational data collection methods, which is cited often in interpersonal communication classes, is John Gottman's research, which started out in what is now called "The Love Lab." In this lab, researchers observe interactions between couples, including physiological responses, using coders who look for certain behaviors found to predict relationship problems and success.

Take a look at the YouTube video about "The Love Lab" at the following site to learn more about the potential of using observation in collecting data for a research study:  The "Love" Lab .

Already-Existing Artifacts.

The third method of quantitative data collection is the use of already-existing artifacts. With this method, you choose certain artifacts (e.g., newspaper or magazine articles, television programs, webpages) and code their content, resulting in a count of whatever you are studying. With this data collection method, researchers most often use what is called quantitative content analysis. Basically, the researcher counts the frequency of something that occurs in an artifact of study, such as the number of times something is mentioned on a webpage. Content analysis can also be used in qualitative research, where a researcher identifies and creates text-based themes but does not count the occurrences of those themes. It can likewise be applied to the open-ended questions from a survey, to identify countable themes within the responses.

Content analysis is a very common method in media studies, given that researchers are interested in studying already-existing media artifacts. There are many good sources that illustrate how to do content analysis, such as those in the box below.

See the following sources for more information on content analysis:

  • Writing Guide: Content Analysis
  • A Flowchart for the Typical Process of Content Analysis Research
  • What is Content Analysis?

With content analysis, and with any method in which you code something into categories, one key concept you need to remember is inter-coder or inter-rater reliability, in which multiple coders (at least two) are trained to code the observations into categories. This check is important because you need to make sure that the way you code an observation or open-ended answer is the same way other coders would code that item. To establish this kind of inter-coder or inter-rater reliability, researchers prepare codebooks (to train their coders on how to code the materials) and coding forms for their coders to use.

To see some examples of actual codebooks used in research, see the following website:  Human Coding--Sample Materials .

There are also online inter-coder reliability calculators some researchers use, such as the following:  ReCal: reliability calculation for the masses .
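
To get a feel for what an inter-coder reliability check involves, here is a minimal sketch that computes simple percent agreement and Cohen's kappa for two coders. The categories and codings are invented for illustration; in practice researchers typically rely on established tools such as the ReCal calculator mentioned above.

```python
from collections import Counter

# Hypothetical codings of the same 10 open-ended answers by two trained coders.
coder1 = ["reward", "cost", "privacy", "reward", "cost",
          "reward", "privacy", "cost", "reward", "cost"]
coder2 = ["reward", "cost", "privacy", "cost", "cost",
          "reward", "privacy", "cost", "reward", "privacy"]

n = len(coder1)

# Percent agreement: proportion of items both coders placed in the same category.
observed_agreement = sum(a == b for a, b in zip(coder1, coder2)) / n

# Cohen's kappa corrects for agreement expected by chance,
# based on how often each coder used each category.
counts1, counts2 = Counter(coder1), Counter(coder2)
categories = set(coder1) | set(coder2)
expected_agreement = sum((counts1[c] / n) * (counts2[c] / n) for c in categories)
kappa = (observed_agreement - expected_agreement) / (1 - expected_agreement)

print(f"Percent agreement: {observed_agreement:.2f}")  # 0.80 for these made-up data
print(f"Cohen's kappa:     {kappa:.2f}")               # about 0.70
```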

Regardless of which method of data collection you choose, you need to decide even more specifically how you will measure the variables in your study, which leads us to the next planning step in the design of a study.

Operationalization of Variables into Measurable Concepts

When you look at your research question/s and/or hypotheses, you should already know what your independent and dependent variables are. Both of these need to be measured in some way. We call that way of measuring operationalizing a variable. One way to think of it is writing a step-by-step recipe for how you plan to obtain data on this topic. How you choose to operationalize your variable (or write the recipe) is an all-important decision, one that will make or break your study. In quantitative research, you have to measure your variables in a valid (accurate) and reliable (consistent) manner, which we discuss in this section. You also need to determine the level of measurement you will use for your variables, which will later help you decide what statistical tests to run to answer your research question/s or test your hypotheses. We will start with the last topic first.

Level of Measurement

Level of measurement has to do with whether you measure your variables using categories or groupings OR whether you measure your variables using a continuous level of measurement (range of numbers). The level of measurement that is considered to be categorical in nature is called nominal, while the levels of measurement considered to be continuous in nature are ordinal, interval, and ratio. The only ones you really need to know are nominal, ordinal, and interval/ratio.


Nominal variables are categories that do not have meaningful numbers attached to them but are broader groupings, such as male and female, home schooled and public schooled, Caucasian and African-American. Ordinal variables do have numbers attached to them, in that the numbers are in a certain order, but there are not equal intervals between the numbers (e.g., when you rank a group of 5 items from most to least preferred, the gap in actual preference between the items ranked 2 and 3 may be much larger than the gap between the items ranked 1 and 2). Interval/ratio variables have equal intervals between the numbers (e.g., weight, age).

For more information about these levels of measurement, check out one of the following websites:

  • Levels of Measurement
  • Measurement Scales in Social Science Research
  • What is the difference between ordinal, interval and ratio variables? Why should I care?

Validity and Reliability

When developing a scale/measure or survey, you need to be concerned about validity and reliability. Readers of quantitative research expect to see researchers justify their research measures using these two terms in the methods section of an article or paper.

Validity.   Validity  is the extent to which your scale/measure or survey adequately reflects the full meaning of the concept you are measuring. Does it measure what you say it measures? For example, if researchers wanted to develop a scale to measure "servant leadership," the researchers would have to determine what dimensions of servant leadership they wanted to measure, and then create items which would be valid or accurate measures of these dimensions. If they included items related to a different type of leadership, those items would not be a valid measure of servant leadership. When doing so, the researchers are trying to prove their measure has internal validity. Researchers may also be interested in external validity, but that has to do with how generalizable their study is to a larger population (a topic related to sampling, which we will consider in the next section), and has less to do with the validity of the instrument itself.

There are several types of validity you may read about, including face validity, content validity, criterion-related validity, and construct validity. To learn more about these types of validity, read the information at the following link: Validity .

To improve the validity of an instrument, researchers need to fully understand the concept they are trying to measure. This means they know the academic literature surrounding that concept well and write several survey questions on each dimension measured, to make sure the full idea of the concept is being measured. For example, Page and Wong (n.d.) identified four dimensions of servant leadership: character, people-orientation, task-orientation, and process-orientation ( A Conceptual Framework for Measuring Servant-Leadership ). All of these dimensions (and any others identified by other researchers) would need multiple survey items developed if a researcher wanted to create a new scale on servant leadership.

Before you create a new survey, it can be useful to see if one already exists with established validity and reliability. Such measures can be found by seeing what other respected studies have used to measure a concept and then doing a library search to find the scale/measure itself (such instruments are sometimes collected in reference books of published research measures).

Reliability. Reliability is the second criterion you will need to address if you choose to develop your own scale or measure. Reliability is concerned with whether a measurement is consistent and reproducible. If you have ever wondered why, when taking a survey, a question is asked more than once or very similar questions are asked multiple times, it is because the researchers are concerned with proving their study has reliability. Are you, for example, answering all of the similar questions similarly? If so, the measure/scale may have good reliability, or consistency over time.

Researchers can use a variety of approaches to show that their measure/scale is reliable, including the test-retest method, the split-half method, and inter-coder/rater reliability (a brief illustration of the test-retest idea follows the links below). See the following websites for explanations of these approaches:

  • Types of Reliability
  • Reliability
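
As a small illustration of one of these approaches, the sketch below treats test-retest reliability as the correlation between two administrations of the same scale; the scores are invented for the example.

```python
import numpy as np

# Hypothetical scores from the same 8 participants on the same scale,
# administered two weeks apart (test-retest reliability).
time1 = np.array([22, 30, 18, 25, 27, 33, 20, 29])
time2 = np.array([24, 29, 17, 26, 25, 34, 21, 30])

# The Pearson correlation between the two administrations is one common
# way to express consistency over time; values near 1.0 suggest good reliability.
r = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest correlation: {r:.2f}")
```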

To understand the relationship between validity and reliability, see the visual explained at the following website (Trochim, 2006, para. 2): Reliability & Validity.

Self-Quiz/Discussion:

Take a look at one of the surveys found at the following poll reporting sites on a topic which interests you. Critique one of these surveys, using what you have learned about creating surveys so far.

  • http://www.pewinternet.org/
  • http://pewresearch.org/
  • http://www.gallup.com/Home.aspx
  • http://www.kff.org/

One of the things you might have critiqued in the previous self-quiz/discussion may have had less to do with the actual survey itself than with how the researchers got their participants, or sample. How participants are recruited is just as important to doing a good study as how valid and reliable a survey is.

Imagine that in the article you chose for the last "self-quiz/discussion" you read the following quote from the Pew Research Center's Internet and American Life Project: "One in three teens sends more than 100 text messages a day, or 3000 texts a month" (Lenhart, 2010, para. 5). How would you know whether you could trust this finding to be true? Would you compare it to what you know about texting from your own and your friends' experiences? Would you want to know what types of questions people were asked to determine this statistic, or whether the survey the statistic is based on is valid and reliable? Would you want to know what type of people were surveyed for the study? As a critical consumer of research, you should ask all of these types of questions, rather than just accepting such a statement as indisputable fact. For example, if only people shopping at an Apple Store were surveyed, the results might be skewed high.

In particular, related to the topic of this section, you should ask about the sampling method the researchers used. Often, the researchers will provide information related to the sample, stating how many participants were surveyed (in this case 800 teens, aged 12-17, who were a nationally representative sample of the population) and how large the "margin of error" is (in this case +/- 3.8%). Why do they state such things? It is because they know the importance of a sample in making the case for their findings being legitimate and credible. Margin of error indicates how confident we can be that our findings represent the population at large. The larger the margin of error, the less precisely the poll or survey pins down the true population value. Margin of error typically assumes a 95% confidence level that what we found from our study represents the population at large.

For more information on margin of error, see one of the following websites:

  • Answers.com Margin of Error
  • Stats.org Margin of Error
  • Americanresearchgroup.com Margin of Error [this last site is a margin of error calculator, which shows that margin of error is directly tied to the size of your sample, in relationship to the size of the population, two concepts we will talk about in the next few paragraphs]

This section on sampling will cover the following topics: (a) the difference between a population and a sample; (b) concepts of error and bias, or "it's all about significance"; (c) probability vs. non-probability sampling; and (d) sample size issues.

Population vs. Sample

When doing quantitative studies, such as the study of cell phone usage among teens, you are never able to survey the entire population of teenagers, so you survey a portion of the population. If you study every member of a population, then you are conducting a census, such as the United States government does every 10 years. When, however, this is not possible (because you do not have the money the U.S. government has!), you attempt to get as good a sample as possible.

Characteristics of a population are summarized in numerical form, and technically these numbers are called  parameters . However, numbers which summarize the characteristics of a sample are called  statistics .

Error and Bias

If a sample is not done well, then you may not have confidence in how the study's results can be generalized to the population from which the sample was taken. That confidence is often expressed through the margin of error of the survey. As noted earlier, a study's margin of error refers to the degree to which the sample's results may differ from those of the total population you are studying. In the Pew survey, the margin of error was +/- 3.8%. So, for example, when the Pew survey said 33% of teens send more than 100 texts a day, the margin of error means the researchers were 95% sure that between 29.2% and 36.8% of teens send this many texts a day.
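
If you want to see where numbers like these come from, the sketch below applies the textbook approximation for a 95% margin of error on a proportion from a simple random sample. Note that this simple formula gives roughly +/- 3.3% for 800 respondents, a bit smaller than Pew's reported +/- 3.8%, which is common when pollsters adjust for how their sample was actually drawn; treat the code as an illustration, not a reconstruction of Pew's calculation.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Pew example: 33% of 800 teens reported sending more than 100 texts a day.
p_hat, n = 0.33, 800
moe = margin_of_error(p_hat, n)
print(f"Approximate margin of error: +/- {moe * 100:.1f} percentage points")

# Confidence interval using the survey's reported +/- 3.8% margin of error.
reported_moe = 0.038
low, high = p_hat - reported_moe, p_hat + reported_moe
print(f"95% confidence interval: {low * 100:.1f}% to {high * 100:.1f}%")  # 29.2% to 36.8%
```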

Margin of error is tied to sampling error, which is how much difference there is between your sample's results and what you would have obtained if you had surveyed the whole population. Sampling error is linked to a very important concept for quantitative researchers: the notion of significance. Here, significance does not refer to whether a finding is morally or practically significant; it refers to whether a finding is statistically significant, meaning the findings are not due to chance but actually represent something that exists in the population. Statistical significance is about how much risk you, as the researcher, are willing to take of saying you found something real and being wrong.

For the difference between statistical significance and practical significance, see the following YouTube video:  Statistical and Practical Significance .

Scientists set certain conventional (and somewhat arbitrary) standards based on the probability that they could be wrong in reporting their findings. These are called significance levels and are commonly reported in the literature as p < .05 or p < .01 or some other probability (or p) level.

If an article says a statistical test reported that  p < .05 , it simply means that they are most likely correct in what they are saying, but there is a 5% chance they could be wrong and not find the same results in the population. If p < .01, then there would be only a 1% chance they were wrong and would not find the same results in the population. The lower the probability level, the more certain the results.
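
To see what reading a p-value looks like in practice, here is a minimal, hypothetical sketch comparing two groups with an independent-samples t-test (one common test for a difference question) and checking the result against the conventional .05 level; the scores are invented.

```python
from scipy import stats

# Hypothetical test scores for two lecture groups (e.g., traditional vs. contemporary examples).
group_traditional = [72, 68, 75, 70, 66, 74, 69, 71]
group_contemporary = [78, 74, 80, 77, 73, 82, 76, 79]

# Independent-samples t-test for a difference question with two groups.
result = stats.ttest_ind(group_traditional, group_contemporary)

print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
if result.pvalue < 0.05:
    print("Statistically significant at the .05 level (5% risk of a Type I error).")
else:
    print("Not statistically significant at the .05 level.")
```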

When researchers are wrong, or make that kind of decision error, it often implies either (a) that their sample was biased and was not representative of the true population in some way, or (b) that something they did in collecting the data biased the results. There are two kinds of decision error commonly discussed in quantitative research: Type I and Type II error. Type I error is what happens when you think you found something statistically significant and claim there is a significant difference or relationship, when there really is not one in the actual population; something about your sample led you to find something that is not there. (The risk of Type I error is the same as the probability level, or .05, if you are using the traditional p-level accepted by most researchers.) Type II error happens when you do not find a statistically significant difference or relationship, yet there actually is one in the population at large; once again, your sample is not representative of the population.

For more information on these two types of error, check out the following websites:

  • Hypothesis Testing: Type I Error, Type II Error
  • Type I and Type II Errors - Making Mistakes in the Justice System
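
One way to get an intuition for Type I error is a small simulation: if you repeatedly draw two samples from the same population (so there is no real difference) and test them at the .05 level, about 5% of the tests will come out "significant" purely by chance. The sketch below, using made-up population values, illustrates this.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Both "groups" are drawn from the same population, so any significant
# result is a false positive (a Type I error).
n_tests, alpha, false_positives = 2000, 0.05, 0
for _ in range(n_tests):
    a = rng.normal(loc=50, scale=10, size=30)
    b = rng.normal(loc=50, scale=10, size=30)
    if stats.ttest_ind(a, b).pvalue < alpha:
        false_positives += 1

# Should land near 5%, matching the chosen significance level.
print(f"False positive rate: {false_positives / n_tests:.3f}")
```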

Researchers want to select a sample that is representative of the population in order to reduce the likelihood of having a biased sample. Two types of bias are particularly troublesome for researchers in terms of sampling error. The first is selection bias, in which each person in the population does not have an equal chance of being chosen for the sample; this happens frequently in communication studies, because we often rely on convenience samples (whoever we can get to complete our surveys). The second is response bias, in which those who volunteer for a study have different characteristics than those who do not volunteer, another common challenge for communication researchers. By relying only on volunteers, you may end up with a biased sample that is not representative of the population from which you are trying to sample.

Probability vs. Non-Probability Sampling

One of the best ways to lower your sampling error and reduce the possibility of bias is to do probability or random sampling. This means that every person in the population has an equal chance of being selected to be in your sample. Another way of looking at this is to attempt to get a  representative  sample, so that the characteristics of your sample closely approximate those of the population. A sample needs to contain essentially the same variations that exist in the population, if possible, especially on the variables or elements that are most important to you (e.g., age, biological sex, race, level of education, socio-economic class).

There are many different ways to draw a probability/random sample from the population. One of the most common is the simple random sample, where you use a random numbers table or random number generator to select your sample from the population.

There are several examples of random number generators available online. See the following example of an online random number generator:  http://www.randomizer.org/ .

A systematic random sample takes every n-th member of the population, depending on how many people you would like to have in your sample. A stratified random sample does random sampling within groups, and a multi-stage or cluster sample is used when there are multiple groups within a large area and a large population, so the researcher does random sampling in stages.
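
To make the first two of these approaches concrete, here is a minimal sketch that draws a simple random sample and a systematic random sample from a hypothetical list of population members; the population size, sample size, and naming are arbitrary choices for the example.

```python
import random

random.seed(1)

# Hypothetical sampling frame: a numbered list of 500 population members.
population = [f"student_{i}" for i in range(1, 501)]
sample_size = 25

# Simple random sample: every member has an equal chance of selection.
simple_random_sample = random.sample(population, sample_size)

# Systematic random sample: pick a random starting point, then take every n-th member.
interval = len(population) // sample_size          # here, every 20th member
start = random.randrange(interval)
systematic_sample = population[start::interval]

print(simple_random_sample[:5])
print(systematic_sample[:5])
```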

If you are interested in understanding more about these types of probability/random samples, take a look at the following website: Probability Sampling .

However, communication researchers often use whoever they can find to participate in their study, such as college students in their classes, since these people are easily accessible. Many of the studies in interpersonal communication and relationship development, for example, used this type of sample, which is called a convenience sample. In doing so, researchers are using a non-probability or non-random sample. In these types of samples, each member of the population does not have an equal opportunity to be selected. For example, if you decide to ask your Facebook friends to participate in an online survey you created about how college students in the U.S. use cell phones to text, you are using a non-random type of sample. You are unable to randomly sample the whole U.S. population of college students who text, so you attempt to find participants more conveniently. Some common non-random or non-probability samples are:

  • accidental/convenience samples, such as the Facebook example illustrates;
  • quota samples, in which you do convenience sampling within subgroups of the population, such as biological sex, looking for a certain number of participants in each group being compared; and
  • snowball or network sampling, where you ask current participants to pass your survey on to their friends.

For more information on non-probability sampling, see the following website: Nonprobability Sampling .

Researchers, such as communication scholars, often use these types of samples because of the nature of their research. Most research designs used in communication are not true experiments, such as would be required in the medical field where they are trying to prove some cause-effect relationship to cure or alleviate symptoms of a disease. Most communication scholars recognize that human behavior in communication situations is much less predictable, so they do not adhere to the strictest possible worldview related to quantitative methods and are less concerned with having to use probability sampling.

They do recognize, however, that with either probability or non-probability sampling, there is still the possibility of bias and error, although much less with probability sampling. That is why all quantitative researchers, regardless of field, will report statistical significance levels if they are interested in generalizing from their sample to the population at large, to let the readers of their work know how confident they are in their results.

Size of Sample

The larger the sample, the more likely the sample is going to be representative of the population. If there is a lot of variability in the population (e.g., lots of different ethnic groups in the population), a researcher will need a larger sample. If you are interested in detecting small possible differences (e.g., in a close political race), you need a larger sample. However, the bigger your population, the less you have to increase the size of your sample in order to have an adequate sample, as is illustrated by an example sample size calculator such as the one found at http://www.raosoft.com/samplesize.html.

Using the example sample size calculator, see how you might determine how large a sample you would need in order to study how college students in the U.S. use texting on their cell phones. You would first have to determine approximately how many college students are in the U.S. According to ANEKI, there are a little over 14,000,000 college students in the U.S. (Countries with the Most University Students). When inputting that figure into the sample size calculator (using no commas for the population size), you would need a sample size of approximately 385 students. If the population size were 20,000, you would need a sample of 377 students. If the population were only 2,000, you would need a sample of 323. For a population of 500, you would need a sample of 218.
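
The figures above can be reproduced with a standard sample-size formula for estimating a proportion (95% confidence, +/- 5% margin of error, and maximum variability of p = .5) combined with a finite-population correction, which is essentially what calculators like Raosoft implement. The sketch below is an approximation of that calculation, not the calculator's actual code.

```python
import math

def required_sample_size(population_size, margin_of_error=0.05, z=1.96, p=0.5):
    """Sample size for estimating a proportion, with a finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)   # infinite-population estimate (~384)
    n = n0 / (1 + (n0 - 1) / population_size)              # adjust for the population size
    return math.ceil(n)

for N in (14_000_000, 20_000, 2_000, 500):
    print(N, required_sample_size(N))
# Prints 385, 377, 323, and 218 -- matching the figures discussed above.
```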

It is not enough, however, to just have an adequate or large sample. If there is bias in the sampling, you can have a very bad large sample, one that also does not represent the population at large. So, having an unbiased sample is even more important than having a large sample.

So, what do you do, if you cannot reasonably conduct a probability or random sample? You run statistics which report significance levels, and you report the limitations of your sample in the discussion section of your paper/article.

Pilot Testing Methods

Now that we have talked about the different elements of your study design, you should try out your methods by doing a pilot test of some kind. This means that you try out your procedures with someone in order to catch any mistakes in your design before you start collecting data from actual participants. This will save you time and money in the long run, and spare you unneeded angst over design mistakes discovered during data collection. There are several ways you might do this.

You might ask an expert who knows about this topic (such as a faculty member) to try out your experiment or survey and provide feedback on what they think of your design. You might ask some participants who are like your potential sample to take your survey or be a part of your pilot test; then you could ask them which parts were confusing or needed revising. You might have potential participants explain to you what they think your questions mean, to see if they are interpreting them like you intended, or if you need to make some questions clearer.

The main thing is that you do not just assume your methods will work or are the best type of methods to use until you try them out with someone. As you write up your study, in your methods section of your paper, you can then talk about what you did to change your study based on the pilot study you did.

Institutional Review Board (IRB) Approval

The last step of your planning takes place when you take the necessary steps to get your study approved by your institution's review board. As you read in chapter 3, this step is important if you are planning on using the data or results from your study beyond just the requirements for your class project. See chapter 3 for more information on the procedures involved in this step.

Conclusion: Study Design Planning

Once you have decided what topic you want to study, you plan your study. Part 1 of this chapter has covered the following steps you need to follow in this planning process:

  • decide what type of study you will do (i.e., experimental, quasi-experimental, non-experimental);
  • decide what data collection method you will use (i.e., survey, observation, or already existing data);
  • operationalize your variables into measurable concepts;
  • determine what type of sample you will use (probability or non-probability);
  • pilot test your methods; and
  • get IRB approval.

At that point, you are ready to commence collecting your data, which is the topic of the next section in this chapter.
