
Comparative Research


Although not everyone would agree, comparing is not always bad; it can bring real benefits. There are times in life when we feel lost. You may not be getting the job you want, or the fit body you have been working toward for a long time. Then you cross paths with an old friend who happens to have landed the job you always wanted. This scenario may lower your self-esteem, knowing that this friend got what you wanted while you didn't. Or you can choose to look at your friend as proof that your goal is actually attainable. Come up with a plan to achieve your personal development goal. Perhaps ask for tips from this person or from other people who inspire you. According to an article posted on brit.co, licensed master social worker and therapist Kimberly Hershenson said that comparing yourself to someone successful can be excellent self-motivation to work on your goals.

Aside from self-improvement, as a researcher you should know that comparison is an essential method in scientific studies, such as experimental research and descriptive research. Through this method, you can uncover the relationship between two or more variables of your project in the form of a comparative analysis.


What is Comparative Research?

Comparative research aims to compare two or more variables of an experimental project. Experts usually apply it in the social sciences to compare countries and cultures across a particular area or the entire world. Despite its proven effectiveness, keep in mind that different countries have different rules about sharing data, so it helps to consider such factors when gathering specific information.

Quantitative and Qualitative Research Methods in Comparative Studies

In comparing variables, the statistical and mathematical data collection and analysis that quantitative research naturally uses can be essential for uncovering correlational connections between the variables. Additionally, since quantitative research requires a specific research question, this method can help you quickly come up with one particular comparative research question.

The goal of comparative research is to draw a solution out of the similarities and differences between the focused variables. You can also include non-experimental or qualitative research methods in your comparative research design.

13+ Comparative Research Examples

Learn more about comparative research by going over the following examples. You can download these zipped documents in PDF and MS Word formats.

1. Comparative Research Report Template

2. Business Comparative Research Template

3. Comparative Market Research Template

4. Comparative Research Strategies Example

5. Comparative Research in Anthropology Example

6. Sample Comparative Research Example

7. Comparative Area Research Example

8. Comparative Research on Women's Employment Example

9. Basic Comparative Research Example

10. Comparative Research in Medical Treatments Example

11. Comparative Research in Education Example

12. Formal Comparative Research Example

13. Comparative Research Designs Example

14. Causal Comparative Research in DOC

Best Practices in Writing an Essay for Comparative Research in Visual Arts

If you are going to write an essay for a comparative research paper, this section is for you. Be aware of the common mistakes students make in essay writing. To avoid those mistakes, follow these pointers.

1. Compare the Artworks, Not the Artists

One of the mistakes students make when writing a comparative essay is comparing the artists instead of their artworks. Unless your instructor asks you to write a biographical essay, focus your writing on the works of the artists you choose.

2. Consult Your Instructor

There is broad coverage of information on the internet for your project. Some students, however, choose their images randomly, and in doing so they may not create a successful comparative study. We therefore recommend discussing your selections with your teacher.

3. Avoid Redundancy

It is common for students to repeat ideas they have already listed in the comparison part. Keep in mind that the space for this activity is limited, so it is crucial to reserve each section for more thoroughly argued ideas.

4. Be Minimal

Unless instructed otherwise, it is practical to include only a few items (artworks). In this way, you can focus on developing well-argued information for your study.

5. Master the Assessment Method and the Goals of the Project

We get it. You are doing this project because your instructor told you to. However, you can make your study more valuable by understanding the goals of the project and knowing how to apply what you learn. You should also know the criteria your teachers use to assess your output; it will give you a chance to maximize the grade you can get from this project.

Comparing things is one way to discover what to improve in various aspects of life. Whether you are aiming to attain a personal goal or attempting to find a solution to a certain task, you can accomplish it by knowing how to conduct a comparative study. Use this content as a tool to expand your knowledge of this research methodology.

Comparing and Contrasting in an Essay | Tips & Examples

Published on August 6, 2020 by Jack Caulfield. Revised on July 23, 2023.

Comparing and contrasting is an important skill in academic writing . It involves taking two or more subjects and analyzing the differences and similarities between them.



When should I compare and contrast?

Many assignments will invite you to make comparisons quite explicitly, as in these prompts.

  • Compare the treatment of the theme of beauty in the poetry of William Wordsworth and John Keats.
  • Compare and contrast in-class and distance learning. What are the advantages and disadvantages of each approach?

Some other prompts may not directly ask you to compare and contrast, but present you with a topic where comparing and contrasting could be a good approach.

For instance, a prompt might ask you to analyze the effects of the Great Depression. One way to approach such an essay might be to contrast the situation before the Great Depression with the situation during it, to highlight how large a difference it made.

Comparing and contrasting is also used in all kinds of academic contexts where it’s not explicitly prompted. For example, a literature review involves comparing and contrasting different studies on your topic, and an argumentative essay may involve weighing up the pros and cons of different arguments.


Making effective comparisons

As the name suggests, comparing and contrasting is about identifying both similarities and differences. You might focus on contrasting quite different subjects or comparing subjects with a lot in common—but there must be some grounds for comparison in the first place.

For example, you might contrast French society before and after the French Revolution; you’d likely find many differences, but there would be a valid basis for comparison. However, if you contrasted pre-revolutionary France with Han-dynasty China, your reader might wonder why you chose to compare these two societies.

This is why it’s important to clarify the point of your comparisons by writing a focused thesis statement . Every element of an essay should serve your central argument in some way. Consider what you’re trying to accomplish with any comparisons you make, and be sure to make this clear to the reader.

Comparing and contrasting as a brainstorming tool

Comparing and contrasting can be a useful tool to help organize your thoughts before you begin writing any type of academic text. You might use it to compare different theories and approaches you’ve encountered in your preliminary research, for example.

Let’s say your research involves the competing psychological approaches of behaviorism and cognitive psychology. You might make a table to summarize the key differences between them.

Or say you’re writing about the major global conflicts of the twentieth century. You might visualize the key similarities and differences in a Venn diagram.

A Venn diagram showing the similarities and differences between World War I, World War II, and the Cold War.

These visualizations wouldn’t make it into your actual writing, so they don’t have to be very formal in terms of phrasing or presentation. The point of comparing and contrasting at this stage is to help you organize and shape your ideas to aid you in structuring your arguments.

Structuring your comparisons

When comparing and contrasting in an essay, there are two main ways to structure your comparisons: the alternating method and the block method.

The alternating method

In the alternating method, you structure your text according to what aspect you’re comparing. You cover both your subjects side by side in terms of a specific point of comparison. Your text is structured like this:

  • Introduction
  • Point of comparison A: subject 1, then subject 2
  • Point of comparison B: subject 1, then subject 2
  • Conclusion

The example paragraph below shows how this approach works.

One challenge teachers face is identifying and assisting students who are struggling without disrupting the rest of the class. In a traditional classroom environment, the teacher can easily identify when a student is struggling based on their demeanor in class or simply by regularly checking on students during exercises. They can then offer assistance quietly during the exercise or discuss it further after class. Meanwhile, in a Zoom-based class, the lack of physical presence makes it more difficult to pay attention to individual students’ responses and notice frustrations, and there is less flexibility to speak with students privately to offer assistance. In this case, therefore, the traditional classroom environment holds the advantage, although it appears likely that aiding students in a virtual classroom environment will become easier as the technology, and teachers’ familiarity with it, improves.

The block method

In the block method, you cover each of the overall subjects you’re comparing in a block. You say everything you have to say about your first subject, then discuss your second subject, making comparisons and contrasts back to the things you’ve already said about the first. Your text is structured like this:

  • Introduction
  • Subject 1: point of comparison A, point of comparison B
  • Subject 2: point of comparison A, point of comparison B, each compared back to subject 1
  • Conclusion

The most commonly cited advantage of distance learning is the flexibility and accessibility it offers. Rather than being required to travel to a specific location every week (and to live near enough to feasibly do so), students can participate from anywhere with an internet connection. This allows not only for a wider geographical spread of students but for the possibility of studying while travelling. However, distance learning presents its own accessibility challenges; not all students have a stable internet connection and a computer or other device with which to participate in online classes, and less technologically literate students and teachers may struggle with the technical aspects of class participation. Furthermore, discomfort and distractions can hinder an individual student’s ability to engage with the class from home, creating divergent learning experiences for different students. Distance learning, then, seems to improve accessibility in some ways while representing a step backwards in others.

Note that these two methods can be combined; these two example paragraphs could both be part of the same essay, but it’s wise to use an essay outline to plan out which approach you’re taking in each paragraph.



Frequently asked questions about comparing and contrasting

Some essay prompts include the keywords “compare” and/or “contrast.” In these cases, an essay structured around comparing and contrasting is the appropriate response.

Comparing and contrasting is also a useful approach in all kinds of academic writing: you might compare different studies in a literature review, weigh up different arguments in an argumentative essay, or consider different theoretical approaches in a theoretical framework.

Your subjects might be very different or quite similar, but it’s important that there be meaningful grounds for comparison. You can probably describe many differences between a cat and a bicycle, but there isn’t really any connection between them to justify the comparison.

You’ll have to write a thesis statement explaining the central point you want to make in your essay, so be sure to know in advance what connects your subjects and makes them worth comparing.

Comparisons in essays are generally structured in one of two ways:

  • The alternating method, where you compare your subjects side by side according to one specific aspect at a time.
  • The block method, where you cover each subject separately in its entirety.

It’s also possible to combine both methods, for example by writing a full paragraph on each of your topics and then a final paragraph contrasting the two according to a specific metric.



Writing a Paper: Comparing & Contrasting

A compare and contrast paper discusses the similarities and differences between two or more topics. The paper should contain an introduction with a thesis statement, a body where the comparisons and contrasts are discussed, and a conclusion.

Address Both Similarities and Differences

Because this is a compare and contrast paper, both the similarities and differences should be discussed. This will require analysis on your part, as some topics will appear to be quite similar, and you will have to work to find the differing elements.

Make Sure You Have a Clear Thesis Statement

Just like any other essay, a compare and contrast essay needs a thesis statement. The thesis statement should not only tell your reader what you will do, but it should also address the purpose and importance of comparing and contrasting the material.

Use Clear Transitions

Transitions are important in compare and contrast essays, where you will be moving frequently between different topics or perspectives.

  • Examples of transitions and phrases for comparisons: as well, similar to, consistent with, likewise, too
  • Examples of transitions and phrases for contrasts: on the other hand, however, although, differs, conversely, rather than

For more information, check out our transitions page.

Structure Your Paper

Consider how you will present the information. You could present all of the similarities first and then present all of the differences. Or you could go point by point and show the similarity and difference of one point, then the similarity and difference for another point, and so on.

Include Analysis

It is tempting to just provide summary for this type of paper, but analysis will show the importance of the comparisons and contrasts. For instance, if you are comparing two articles on the topic of the nursing shortage, help us understand what this will achieve. Did you find consensus between the articles that will support a certain action step for people in the field? Did you find discrepancies between the two that point to the need for further investigation?

Make Analogous Comparisons

When drawing comparisons or making contrasts, be sure you are dealing with similar aspects of each item. To use an old cliché, are you comparing apples to apples?

  • Example of poor comparisons: Kubista studied the effects of a later start time on high school students, but Cook used a mixed methods approach. (This example does not compare similar items. It is not a clear contrast because the sentence does not discuss the same element of the articles. It is like comparing apples to oranges.)
  • Example of analogous comparisons: Cook used a mixed methods approach, whereas Kubista used only quantitative methods. (Here, methods are clearly being compared, allowing the reader to understand the distinction.)



Sociology Group

How to Do Comparative Analysis in Research (Examples)

Comparative analysis is a method widely used in social science. It compares two or more items with the aim of uncovering and discovering new ideas about them. It often compares and contrasts social structures and processes around the world to grasp general patterns. Comparative analysis tries to understand and explain every element of the data being compared.

Comparative Analysis in Social Science Research

We often compare and contrast in our daily lives, so it is natural to compare and contrast cultures and human societies. We often hear that ‘our culture is better than theirs’ or ‘their lifestyle is better than ours’. In social science, social scientists compare primitive, barbarian, civilized, and modern societies. They use comparison to understand and discover the evolutionary changes that happen to a society and its people. It is used not only to understand evolutionary processes but also to identify the differences, changes, and connections between societies.

Most social scientists are involved in comparative analysis. As Macfarlane observed, in the case of history the comparisons are usually over time, while in the other social sciences they are predominantly across space. The historian takes their own society, compares it with a past society, and analyzes how far the two differ from each other.

The comparative method of social research is a product of 19th-century sociology and social anthropology. Sociologists like Emile Durkheim, Herbert Spencer, and Max Weber used comparative analysis in their works. For example, Max Weber compared the Protestants of Europe with Catholics, and also with other religions such as Islam, Hinduism, and Confucianism.

To make a systematic comparison, we need to attend to several elements of the method.

1. The method of comparison

In social science, we can draw comparisons in different ways, depending on the topic and field of study. Emile Durkheim, for example, compared societies in terms of organic and mechanical solidarity. The famous sociologist also provides us with three different approaches to the comparative method, which are:

  • The first approach is to identify and select one particular society in a fixed period. By doing so, we can identify and determine the relationships, connections, and differences that exist within that particular society alone, such as its religious practices, traditions, laws, and norms.
  • The second approach is to consider various societies which have common or similar characteristics but vary in some ways. We can select societies at a specific period, or societies in different periods, which share characteristics but differ in others. For example, we can take European and American societies (which are broadly similar) in the 20th century and compare and contrast them in terms of law, custom, tradition, and so on.
  • The third approach he envisaged is to take different societies from different times that may share some similar characteristics or may show revolutionary changes. For example, we can compare modern and primitive societies, which show us revolutionary social changes.

2. The unit of comparison

We cannot compare every aspect of society; there are many things that simply cannot be compared. The very success of the comparative method rests on the unit or element that we select to compare, because we can only compare things that have some attributes in common. For example, we can compare the existing family system in America with the existing family system in Europe, but we cannot compare food habits in China with the divorce rate in America. So the next thing to remember is the unit of comparison; you have to select it with the utmost care.

3. The motive of comparison

Comparative analysis is one method of study among the many available to the social scientist. The researcher who uses the comparative method must know on what grounds they are adopting it, and must consider its strengths, limitations, and weaknesses, as well as how to carry out the analysis.

Steps of the comparative method

1. Setting up a unit of comparison

As mentioned earlier, the first step is to consider and determine the unit of comparison for your study, in all its dimensions. This is where you place the two things you need to compare so that you can properly analyze them. It is not an easy step; it must be done systematically and scientifically, with proper methods and techniques. You have to set your objectives and variables, make some assumptions, ask yourself what you need to study, or form a hypothesis for your analysis.

The best frames of reference are built from specific sources rather than your own musings or perceptions. To do that, you can select attributes of the societies, such as marriage, law, customs, and norms; by doing this you can easily compare and contrast the two societies you selected for your study. You can pose questions such as: are the marriage practices of Catholics different from those of Protestants? Do men and women get an equal voice in their choice of partner? You can set as many questions as you want, because they will help you explore the truth about that particular topic. A comparative analysis must have such attributes to study; a social scientist who wishes to compare must develop the research questions that come to mind. A study without them is not going to be a fruitful one.

2. Grounds of comparison

The grounds of comparison should be clear to the reader: you must acknowledge why you selected these units for your comparison. It is natural for a reader to ask why you chose this particular society and not another one. If a social scientist chooses primitive Asian society and primitive Australian society for comparison, they must state the grounds of comparison for the readers. The comparison in your work must be self-explanatory, without complications.

If you choose two particular societies for your comparative analysis, you must convey to the reader why you chose them and what you intend to show through that choice.

3. Report or thesis

The main element of the comparative analysis is the thesis or report. The report is the most important part, and it must contain your whole frame of reference: your research questions, the objectives of your topic, the characteristics of your two units of comparison, the variables in your study, and, last but not least, your findings and conclusion. The findings must be self-explanatory, so that the reader understands to what extent the units are connected and how they differ. For example, in his theory of the division of labour, Emile Durkheim distinguished organic solidarity from mechanical solidarity, identifying primitive society with mechanical solidarity and modern society with organic solidarity. In the same way, you have to state your findings in the thesis.

4. Relationship and linking one to another

Your paper must link each point in the argument; without that, the reader cannot follow the logical and rational advance of your analysis. In a comparative analysis, you need to compare the ‘x’ and ‘y’ in your paper (x and y being the two units or things in your comparison). To do that, you can use phrases such as ‘likewise’, ‘similarly’, and ‘on the contrary’. For example, comparing primitive society with modern society, we could say: ‘in primitive society the division of labour is based on gender and age; in modern society, on the contrary, the division of labour is based on a person’s skill and knowledge’.

Demerits of comparison

Comparative analysis is not always successful; it has limitations. The broad use of comparative analysis can easily create the impression that this technique is a firmly established, smooth, and unproblematic method of investigation which, thanks to its apparently logical status, can produce reliable knowledge once certain technical preconditions are adequately met.

Perhaps the most fundamental issue here concerns the independence of the units chosen for comparison. As different types of entities are taken up for analysis, there is frequently a basic and implicit assumption of their independence, and a quiet tendency to disregard the mutual influences and common impacts among the units.

Another basic issue with broad ramifications concerns the choice of the units being analyzed. The main concern is that, far from being an innocent or simple task, the choice of comparison units is a critical and precarious one. The trouble with this sort of comparison is that the descriptions of the cases chosen for comparison with the principal one tend to become unreasonably streamlined, shallow, and stylised, with distorted arguments and conclusions as a consequence.

However, comparative analysis is still a strategy with exceptional benefits, essentially due to its capacity to make us perceive the limits of our own minds and to guard against the weaknesses and harmful consequences of localism and provincialism. We may nonetheless have something to learn from historians’ hesitancy in using comparison, and from their regard for the uniqueness of settings and the histories of peoples. Above all, by making comparisons we discover truths: the underlying and undiscovered connections and differences that exist in society.



The Writing Center • University of North Carolina at Chapel Hill

Comparing and Contrasting

What this handout is about

This handout will help you first to determine whether a particular assignment is asking for comparison/contrast and then to generate a list of similarities and differences, decide which similarities and differences to focus on, and organize your paper so that it will be clear and effective. It will also explain how you can (and why you should) develop a thesis that goes beyond “Thing A and Thing B are similar in many ways but different in others.”

Introduction

In your career as a student, you’ll encounter many different kinds of writing assignments, each with its own requirements. One of the most common is the comparison/contrast essay, in which you focus on the ways in which certain things or ideas—usually two of them—are similar to (this is the comparison) and/or different from (this is the contrast) one another. By assigning such essays, your instructors are encouraging you to make connections between texts or ideas, engage in critical thinking, and go beyond mere description or summary to generate interesting analysis: when you reflect on similarities and differences, you gain a deeper understanding of the items you are comparing, their relationship to each other, and what is most important about them.

Recognizing comparison/contrast in assignments

Some assignments use words—like compare, contrast, similarities, and differences—that make it easy for you to see that they are asking you to compare and/or contrast. Here are a few hypothetical examples:

  • Compare and contrast Frye’s and Bartky’s accounts of oppression.
  • Compare WWI to WWII, identifying similarities in the causes, development, and outcomes of the wars.
  • Contrast Wordsworth and Coleridge; what are the major differences in their poetry?

Notice that some topics ask only for comparison, others only for contrast, and others for both.

But it’s not always so easy to tell whether an assignment is asking you to include comparison/contrast. And in some cases, comparison/contrast is only part of the essay—you begin by comparing and/or contrasting two or more things and then use what you’ve learned to construct an argument or evaluation. Consider these examples, noticing the language that is used to ask for the comparison/contrast and whether the comparison/contrast is only one part of a larger assignment:

  • Choose a particular idea or theme, such as romantic love, death, or nature, and consider how it is treated in two Romantic poems.
  • How do the different authors we have studied so far define and describe oppression?
  • Compare Frye’s and Bartky’s accounts of oppression. What does each imply about women’s collusion in their own oppression? Which is more accurate?
  • In the texts we’ve studied, soldiers who served in different wars offer differing accounts of their experiences and feelings both during and after the fighting. What commonalities are there in these accounts? What factors do you think are responsible for their differences?

You may want to check out our handout on understanding assignments for additional tips.

Using comparison/contrast for all kinds of writing projects

Sometimes you may want to use comparison/contrast techniques in your own pre-writing work to get ideas that you can later use for an argument, even if comparison/contrast isn’t an official requirement for the paper you’re writing. For example, if you wanted to argue that Frye’s account of oppression is better than both de Beauvoir’s and Bartky’s, comparing and contrasting the main arguments of those three authors might help you construct your evaluation—even though the topic may not have asked for comparison/contrast and the lists of similarities and differences you generate may not appear anywhere in the final draft of your paper.

Discovering similarities and differences

Making a Venn diagram or a chart can help you quickly and efficiently compare and contrast two or more things or ideas. To make a Venn diagram, simply draw some overlapping circles, one circle for each item you’re considering. In the central area where they overlap, list the traits the two items have in common. Assign each one of the areas that doesn’t overlap; in those areas, you can list the traits that make the things different. Here’s a very simple example, using two pizza places:

Venn diagram indicating that both Pepper's and Amante serve pizza with unusual ingredients at moderate prices, despite differences in location, wait times, and delivery options
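If you'd rather generate the diagram than draw it by hand, a small script can do the same job. Here is a minimal Python sketch; it assumes the third-party matplotlib and matplotlib-venn packages are installed, and it reuses the traits from the Pepper's/Amante example above.

    # Minimal sketch: a two-item Venn diagram in Python. Assumes the
    # third-party packages matplotlib and matplotlib-venn are installed
    # (pip install matplotlib matplotlib-venn).
    import matplotlib.pyplot as plt
    from matplotlib_venn import venn2

    # Traits unique to each pizza place, and traits they share
    # (drawn from the example in this handout).
    peppers_only = {"funky, lively atmosphere", "downtown Chapel Hill"}
    amante_only = {"quiet atmosphere", "downtown Carrboro"}
    shared = {"unusual ingredients", "moderate prices"}

    v = venn2(
        subsets=(len(peppers_only), len(amante_only), len(shared)),
        set_labels=("Pepper's", "Amante"),
    )
    # Replace the default region counts with the traits themselves,
    # so the figure reads as a brainstorming aid rather than a tally.
    v.get_label_by_id("10").set_text("\n".join(peppers_only))
    v.get_label_by_id("01").set_text("\n".join(amante_only))
    v.get_label_by_id("11").set_text("\n".join(shared))
    plt.show()

Labeling each region with the traits, rather than the counts venn2 shows by default, keeps the diagram useful for the kind of informal brainstorming described here.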

To make a chart, figure out what criteria you want to focus on in comparing the items. Along the left side of the page, list each of the criteria. Across the top, list the names of the items. You should then have a box per item for each criterion; you can fill the boxes in and then survey what you’ve discovered.

Here’s an example, this time using three pizza places:

As you generate points of comparison, consider the purpose and content of the assignment and the focus of the class. What do you think the professor wants you to learn by doing this comparison/contrast? How does it fit with what you have been studying so far and with the other assignments in the course? Are there any clues about what to focus on in the assignment itself?

Here are some general questions about different types of things you might have to compare. These are by no means complete or definitive lists; they’re just here to give you some ideas—you can generate your own questions for these and other types of comparison. You may want to begin by using the questions reporters traditionally ask: Who? What? Where? When? Why? How? If you’re talking about objects, you might also consider general properties like size, shape, color, sound, weight, taste, texture, smell, number, duration, and location.

Two historical periods or events

  • When did they occur—do you know the date(s) and duration? What happened or changed during each? Why are they significant?
  • What kinds of work did people do? What kinds of relationships did they have? What did they value?
  • What kinds of governments were there? Who were important people involved?
  • What caused events in these periods, and what consequences did they have later on?

Two ideas or theories

  • What are they about?
  • Did they originate at some particular time?
  • Who created them? Who uses or defends them?
  • What is the central focus, claim, or goal of each? What conclusions do they offer?
  • How are they applied to situations/people/things/etc.?
  • Which seems more plausible to you, and why? How broad is their scope?
  • What kind of evidence is usually offered for them?

Two pieces of writing or art

  • What are their titles? What do they describe or depict?
  • What is their tone or mood? What is their form?
  • Who created them? When were they created? Why do you think they were created as they were? What themes do they address?
  • Do you think one is of higher quality or greater merit than the other(s)—and if so, why?
  • For writing: what plot, characterization, setting, theme, tone, and type of narration are used?
Two people

  • Where are they from? How old are they? What is the gender, race, class, etc. of each?
  • What, if anything, are they known for? Do they have any relationship to each other?
  • What are they like? What did/do they do? What do they believe? Why are they interesting?
  • What stands out most about each of them?

Deciding what to focus on

By now you have probably generated a huge list of similarities and differences—congratulations! Next you must decide which of them are interesting, important, and relevant enough to be included in your paper. Ask yourself these questions:

  • What’s relevant to the assignment?
  • What’s relevant to the course?
  • What’s interesting and informative?
  • What matters to the argument you are going to make?
  • What’s basic or central (and needs to be mentioned even if obvious)?
  • Overall, what’s more important—the similarities or the differences?

Suppose that you are writing a paper comparing two novels. For most literature classes, the fact that they both use Caslon type (a kind of typeface, like the fonts you may use in your writing) is not going to be relevant, nor is the fact that one of them has a few illustrations and the other has none; literature classes are more likely to focus on subjects like characterization, plot, setting, the writer’s style and intentions, language, central themes, and so forth. However, if you were writing a paper for a class on typesetting or on how illustrations are used to enhance novels, the typeface and presence or absence of illustrations might be absolutely critical to include in your final paper.

Sometimes a particular point of comparison or contrast might be relevant but not terribly revealing or interesting. For example, if you are writing a paper about Wordsworth’s “Tintern Abbey” and Coleridge’s “Frost at Midnight,” pointing out that they both have nature as a central theme is relevant (comparisons of poetry often talk about themes) but not terribly interesting; your class has probably already had many discussions about the Romantic poets’ fondness for nature. Talking about the different ways nature is depicted or the different aspects of nature that are emphasized might be more interesting and show a more sophisticated understanding of the poems.

Your thesis

The thesis of your comparison/contrast paper is very important: it can help you create a focused argument and give your reader a road map so they don’t get lost in the sea of points you are about to make. As in any paper, you will want to replace vague reports of your general topic (for example, “This paper will compare and contrast two pizza places,” or “Pepper’s and Amante are similar in some ways and different in others,” or “Pepper’s and Amante are similar in many ways, but they have one major difference”) with something more detailed and specific. For example, you might say, “Pepper’s and Amante have similar prices and ingredients, but their atmospheres and willingness to deliver set them apart.”

Be careful, though—although this thesis is fairly specific and does propose a simple argument (that atmosphere and delivery make the two pizza places different), your instructor will often be looking for a bit more analysis. In this case, the obvious question is “So what? Why should anyone care that Pepper’s and Amante are different in this way?” One might also wonder why the writer chose those two particular pizza places to compare—why not Papa John’s, Dominos, or Pizza Hut? Again, thinking about the context the class provides may help you answer such questions and make a stronger argument. Here’s a revision of the thesis mentioned earlier:

Pepper’s and Amante both offer a greater variety of ingredients than other Chapel Hill/Carrboro pizza places (and than any of the national chains), but the funky, lively atmosphere at Pepper’s makes it a better place to give visiting friends and family a taste of local culture.

You may find our handout on constructing thesis statements useful at this stage.

Organizing your paper

There are many different ways to organize a comparison/contrast essay. Here are two:

Subject-by-subject

Begin by saying everything you have to say about the first subject you are discussing, then move on and make all the points you want to make about the second subject (and after that, the third, and so on, if you’re comparing/contrasting more than two things). If the paper is short, you might be able to fit all of your points about each item into a single paragraph, but it’s more likely that you’d have several paragraphs per item. Using our pizza place comparison/contrast as an example, after the introduction, you might have a paragraph about the ingredients available at Pepper’s, a paragraph about its location, and a paragraph about its ambience. Then you’d have three similar paragraphs about Amante, followed by your conclusion.

The danger of this subject-by-subject organization is that your paper will simply be a list of points: a certain number of points (in my example, three) about one subject, then a certain number of points about another. This is usually not what college instructors are looking for in a paper—generally they want you to compare or contrast two or more things very directly, rather than just listing the traits the things have and leaving it up to the reader to reflect on how those traits are similar or different and why those similarities or differences matter. Thus, if you use the subject-by-subject form, you will probably want to have a very strong, analytical thesis and at least one body paragraph that ties all of your different points together.

A subject-by-subject structure can be a logical choice if you are writing what is sometimes called a “lens” comparison, in which you use one subject or item (which isn’t really your main topic) to better understand another item (which is). For example, you might be asked to compare a poem you’ve already covered thoroughly in class with one you are reading on your own. It might make sense to give a brief summary of your main ideas about the first poem (this would be your first subject, the “lens”), and then spend most of your paper discussing how those points are similar to or different from your ideas about the second.

Point-by-point

Rather than addressing things one subject at a time, you may wish to talk about one point of comparison at a time. There are two main ways this might play out, depending on how much you have to say about each of the things you are comparing. If you have just a little, you might, in a single paragraph, discuss how a certain point of comparison/contrast relates to all the items you are discussing. For example, I might describe, in one paragraph, what the prices are like at both Pepper’s and Amante; in the next paragraph, I might compare the ingredients available; in a third, I might contrast the atmospheres of the two restaurants.

If I had a bit more to say about the items I was comparing/contrasting, I might devote a whole paragraph to how each point relates to each item. For example, I might have a whole paragraph about the clientele at Pepper’s, followed by a whole paragraph about the clientele at Amante; then I would move on and do two more paragraphs discussing my next point of comparison/contrast—like the ingredients available at each restaurant.

There are no hard and fast rules about organizing a comparison/contrast paper, of course. Just be sure that your reader can easily tell what’s going on! Be aware, too, of the placement of your different points. If you are writing a comparison/contrast in service of an argument, keep in mind that the last point you make is the one you are leaving your reader with. For example, if I am trying to argue that Amante is better than Pepper’s, I should end with a contrast that leaves Amante sounding good, rather than with a point of comparison that I have to admit makes Pepper’s look better. If you’ve decided that the differences between the items you’re comparing/contrasting are most important, you’ll want to end with the differences—and vice versa, if the similarities seem most important to you.

Our handout on organization can help you write good topic sentences and transitions and make sure that you have a good overall structure in place for your paper.

Cue words and other tips

To help your reader keep track of where you are in the comparison/contrast, you’ll want to be sure that your transitions and topic sentences are especially strong. Your thesis should already have given the reader an idea of the points you’ll be making and the organization you’ll be using, but you can help them out with some extra cues. The following words may be helpful to you in signaling your intentions:

  • like, similar to, also, unlike, similarly, in the same way, likewise, again, compared to, in contrast, in like manner, contrasted with, on the contrary, however, although, yet, even though, still, but, nevertheless, conversely, at the same time, regardless, despite, while, on the one hand … on the other hand.

For example, you might have a topic sentence like one of these:

  • Compared to Pepper’s, Amante is quiet.
  • Like Amante, Pepper’s offers fresh garlic as a topping.
  • Despite their different locations (downtown Chapel Hill and downtown Carrboro), Pepper’s and Amante are both fairly easy to get to.

You may reproduce it for non-commercial use if you use the entire handout and attribute the source: The Writing Center, University of North Carolina at Chapel Hill



Gen Ed Writes: Writing Across the Disciplines at Harvard College

Comparative Analysis

What It Is and Why It's Useful

Comparative analysis asks writers to make an argument about the relationship between two or more texts. Beyond that, there's a lot of variation, but three overarching kinds of comparative analysis stand out:

  • Coordinate (A ↔ B): In this kind of analysis, two (or more) texts are being read against each other in terms of a shared element, e.g., a memoir and a novel, both by Jesmyn Ward; two sets of data for the same experiment; a few op-ed responses to the same event; two YA books written in Chicago in the 2000s; a film adaption of a play; etc. 
  • Subordinate (A → B) or (B → A): Using a theoretical text (as a "lens") to explain a case study or work of art (e.g., how Anthony Jack's The Privileged Poor can help explain divergent experiences among students at elite four-year private colleges who are coming from similar socio-economic backgrounds), or using a work of art or case study as a "test" of a theory's usefulness or limitations (e.g., using coverage of recent incidents of gun violence or legislation in the U.S. to confirm or question the currency of Carol Anderson's The Second).
  • Hybrid [A → (B ↔ C)] or [(B ↔ C) → A], i.e., using coordinate and subordinate analysis together. For example, using Jack to compare or contrast the experiences of students at elite four-year institutions with students at state universities and/or community colleges; or looking at gun culture in other countries and/or other timeframes to contextualize or generalize Anderson's main points about the role of the Second Amendment in U.S. history.

"In the wild," these three kinds of comparative analysis represent increasingly complex—and scholarly—modes of comparison. Students can of course compare two poems in terms of imagery or two data sets in terms of methods, but in each case the analysis will eventually be richer if the students have had a chance to encounter other people's ideas about how imagery or methods work. At that point, we're getting into a hybrid kind of reading (or even into research essays), especially if we start introducing different approaches to imagery or methods that are themselves being compared along with a couple (or few) poems or data sets.

Why It's Useful

In the context of a particular course, each kind of comparative analysis has its place and can be a useful step up from single-source analysis. Intellectually, comparative analysis helps overcome the "n of 1" problem that can face single-source analysis. That is, a writer drawing broad conclusions about the influence of the Iranian New Wave based on one film is relying entirely—and almost certainly too much—on that film to support those findings. In the context of even just one more film, though, the analysis is suddenly more likely to arrive at one of the best features of any comparative approach: both films will be more richly experienced than they would have been in isolation, and the themes or questions in terms of which they're being explored (here the general question of the influence of the Iranian New Wave) will arrive at conclusions that are less at-risk of oversimplification.

For scholars working in comparative fields or through comparative approaches, these features of comparative analysis animate their work. To borrow from a stock example in Western epistemology, our concept of "green" isn't based on a single encounter with something we intuit or are told is "green." Not at all. Our concept of "green" is derived from a complex set of experiences of what others say is green or what's labeled green or what seems to be something that's neither blue nor yellow but kind of both, etc. Comparative analysis essays offer us the chance to engage with that process—even if only enough to help us see where a more in-depth exploration with a higher and/or more diverse "n" might lead—and in that sense, from the standpoint of the subject matter students are exploring through writing as well as the complexity of the genre of writing they're using to explore it, comparative analysis forms a bridge of sorts between single-source analysis and research essays.

Typical learning objectives for single-source essays: formulate analytical questions and an arguable thesis, establish stakes of an argument, summarize sources accurately, choose evidence effectively, analyze evidence effectively, define key terms, organize argument logically, acknowledge and respond to counterargument, cite sources properly, and present ideas in clear prose.

Common types of comparative analysis essays and related types: two works in the same genre, two works from the same period (but in different places or in different cultures), a work adapted into a different genre or medium, two theories treating the same topic; a theory and a case study or other object, etc.

How to Teach It: Framing + Practice

Framing multi-source writing assignments (comparative analysis, research essays, multi-modal projects) is likely to overlap a great deal with "Why It's Useful" (see above), because the range of reasons why we might use these kinds of writing in academic or non-academic settings is itself the reason why they so often appear later in courses. In many courses, they're the best vehicles for exploring the complex questions that arise once we've been introduced to the course's main themes, core content, leading protagonists, and central debates.

For comparative analysis in particular, it's helpful to frame the assignment's process and how it will help students successfully navigate the challenges and pitfalls presented by the genre. Ideally, this will mean students have time to identify what each text seems to be doing, take note of apparent points of connection between different texts, and start to imagine how those points of connection (or the absence thereof)

  • complicates or upends their own expectations or assumptions about the texts
  • complicates or refutes the expectations or assumptions about the texts presented by a scholar
  • confirms and/or nuances expectations and assumptions they themselves hold or scholars have presented
  • presents entirely unforeseen ways of understanding the texts

—and all with implications for the texts themselves or for the axes along which the comparative analysis took place. If students know that this is where their ideas will be heading, they'll be ready to develop those ideas and engage with the challenges that comparative analysis presents in terms of structure (See "Tips" and "Common Pitfalls" below for more on these elements of framing).

Like single-source analyses, comparative essays have several moving parts, and giving students practice here means adapting the sample sequence laid out at the "Formative Writing Assignments" page. Three areas that have already been mentioned above are worth noting:

  • Gathering evidence: Depending on what your assignment is asking students to compare (or in terms of what), students will benefit greatly from structured opportunities to create inventories or data sets of the motifs, examples, trajectories, etc., shared (or not shared) by the texts they'll be comparing. See the sketch after this list for a basic example of what this might look like.
  • Why it matters: Moving beyond "x is like y but also different," or even "x is more like y than we might think at first," is what moves an essay from being "compare/contrast" to being a comparative analysis. It's also a move that can be hard to make and that will often evolve over the course of an assignment. A great way to get feedback from students about where they're at on this front? Ask them to start considering early on why their argument "matters" to different kinds of imagined audiences: while they're just gathering evidence, again as they develop their thesis, and again as they're drafting their essays. (Cover letters, for example, are a great place to ask writers to imagine how a reader might be affected by reading their argument.)
  • Structure: Having two texts on stage at the same time can suddenly feel a lot more complicated for any writer who's used to having just one at a time. Giving students a sense of the most common patterns (AAA/BBB, ABABAB, etc.) can help them imagine, even if provisionally, how their argument might unfold over a series of pages. See "Tips" and "Common Pitfalls" below for more information on this front.
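As a concrete, if schematic, illustration of the gathering-evidence step above, here is a minimal Python sketch of a touch-point inventory. The texts, motifs, and locations in it are hypothetical placeholders rather than material from any particular assignment.

    # Minimal sketch of a "touch point" inventory for two texts, kept as
    # a simple list of records. All motifs and locations below are
    # invented placeholders.
    from dataclasses import dataclass

    @dataclass
    class TouchPoint:
        motif: str      # shared element being tracked
        in_text_a: str  # where/how it appears in text A ("" if absent)
        in_text_b: str  # where/how it appears in text B ("" if absent)

    inventory = [
        TouchPoint("water imagery", "ch. 2, storm scene", "stanza 3"),
        TouchPoint("first-person narration", "throughout", ""),
        TouchPoint("urban setting", "Chicago, 2005", "Chicago, 2008"),
    ]

    # Shared motifs suggest grounds for comparison; one-sided motifs
    # often mark the contrasts an essay will need to explain.
    shared = [t.motif for t in inventory if t.in_text_a and t.in_text_b]
    one_sided = [t.motif for t in inventory
                 if not (t.in_text_a and t.in_text_b)]
    print("shared:", shared)
    print("one-sided:", one_sided)

Building the inventory before formulating a thesis lets the thesis emerge from the data set rather than the other way around, which is exactly the pitfall discussed under "Common Pitfalls" below.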

Tips
  • Try to keep students from thinking of a proposed thesis as a commitment. Instead, help them see it as more of a hypothesis that has emerged out of readings and discussion and analytical questions and that they'll now test through an experiment, namely, writing their essay. When students see writing as part of the process of inquiry—rather than just the result—and when that process is committed to acknowledging and adapting itself to evidence, it makes writing assignments more scientific, more ethical, and more authentic. 
  • Have students create an inventory of touch points between the two texts early in the process.
  • Ask students to make the case—early on and at points throughout the process—for the significance of the claim they're making about the relationship between the texts they're comparing.

Common Pitfalls

  • For coordinate kinds of comparative analysis, a common pitfall is tied to thesis and evidence: a thesis that tells the reader there are "similarities and differences" between two texts without telling the reader why it matters that these two texts have or don't have these particular features in common. This kind of thesis is stuck at the level of description or positivism, and it's not uncommon when a writer is grappling with the complexity that can in fact accompany the "taking inventory" stage of comparative analysis. The solution is to make the "taking inventory" stage part of the process of the assignment. When this stage comes before students have formulated a thesis, the thesis can then emerge out of a comparative data set, rather than the data set being assembled to fit the thesis (which can lead to confirmation bias, frequency illusion, or, just for the sake of streamlining the gathering of evidence, cherry-picking).
  • For subordinate kinds of comparative analysis, a common pitfall is tied to how much weight is given to each source. Having students apply a theory (in a "lens" essay) or weigh the pros and cons of a theory against case studies (in a "test a theory" essay) can be a great way to help them explore the assumptions, implications, and real-world usefulness of theoretical approaches. The pitfall of these approaches is that they can quickly lead to the same biases noted above. Making sure students know they should engage with counterevidence and counterargument, and that "lens" and "test a theory" approaches often balance each other out in any real-world application of theory, is a good way to get out in front of this pitfall.
  • For any kind of comparative analysis, a common pitfall is structure. Every comparative analysis asks writers to move back and forth between texts, and that can pose a number of challenges, including: what pattern the back and forth should follow and how to use transitions and other signposting to make sure readers can follow the overarching argument as the back and forth is taking place. Here's some advice from an experienced writing instructor to students about how to think about these considerations:

a quick note on STRUCTURE

     Most of us have encountered the question of whether to adopt what we might term the “A→A→A→B→B→B” structure or the “A→B→A→B→A→B” structure.  Do we make all of our points about text A before moving on to text B?  Or do we go back and forth between A and B as the essay proceeds?  As always, the answers to our questions about structure depend on our goals in the essay as a whole.  In a “similarities in spite of differences” essay, for instance, readers will need to encounter the differences between A and B before we offer them the similarities (A-diff→B-diff→A-sim→B-sim).  If, rather than subordinating differences to similarities, you are subordinating text A to text B (using A as a point of comparison that reveals B’s originality, say), you may be well served by the “A→A→A→B→B→B” structure.

     Ultimately, you need to ask yourself how many “A→B” moves you have in you.  Is each one identical?  If so, you may wish to make the transition from A to B only once (“A→A→A→B→B→B”), because if each “A→B” move is identical, the “A→B→A→B→A→B” structure will appear to involve nothing more than directionless oscillation and repetition.  If each is increasingly complex, however—if each AB pair yields a new and progressively more complex idea about your subject—you may be well served by the “A→B→A→B→A→B” structure, because in this case it will be visible to readers as a progressively developing argument.

Advice on Timing

As discussed in "Advice on Timing" on the single-source analysis page, that timeline roughly follows the "Sample Sequence of Formative Assignments for a 'Typical' Essay" outlined under "Formative Writing Assignments," and it spans about 5–6 steps or 2–4 weeks.

Comparative analysis assignments have a lot of the same DNA as single-source essays, but they potentially bring more reading into play and ask students to engage in more complicated acts of analysis and synthesis during the drafting stages. With that in mind, closer to 4 weeks is probably a good baseline for many comparative analysis assignments. For sections that meet once per week, the timeline will probably need to expand a little past the 4-week mark, or some of the steps will need to be combined or done asynchronously.

What It Can Build Up To

Comparative analyses can build up to other kinds of writing in a number of ways. For example:

  • They can build toward other kinds of comparative analysis: for example, students can be asked to choose an additional source to complicate their conclusions from a previous analysis, or to revisit an analysis using a different axis of comparison, such as race instead of class. (These approaches are akin to moving from a coordinate or subordinate analysis to more of a hybrid approach.)
  • They can scaffold up to research essays, which in many instances are an extension of a "hybrid comparative analysis."
  • Like single-source analyses, in a course where students will take a "deep dive" into a source or topic for their capstone, comparative analyses can let students "try on" a theoretical approach, genre, or time period to see if it's indeed something they want to research more fully.

Q: How do I write a comparison between the results of my research and those of other models?


Asked by Hamid Heydari Gholanlo on 25 Jan, 2019

While writing a scientific paper, you compare your results with those of other models in either the Introduction or the Discussion section, depending on how you are presenting your study. Ideally, you will start by providing the key results from each model and then compare and contrast them. You can then offer interpretations or future directions to sum up the comparison. You may also choose to tabulate the comparison if doing so makes the information easier for readers to take in.

Related reading:

  • The secret to writing the introduction and methods section of a manuscript
  • The secret to writing the results and discussion section of a manuscript


Answered by Editage Insights on 04 Feb, 2019


National Academies Press: OpenBook

On Evaluating Curricular Effectiveness: Judging the Quality of K-12 Mathematics Evaluations (2004)

Chapter 5: Comparative Studies

It is deceptively simple to imagine that a curriculum’s effectiveness could be easily determined by a single well-designed study. Such a study would randomly assign students to two treatment groups, one using the experimental materials and the other using a widely established comparative program. The students would be taught the entire curriculum, and a test administered at the end of instruction would provide unequivocal results that would permit one to identify the more effective treatment.

The truth is that conducting definitive comparative studies is not simple, and many factors make such an approach difficult. Student placement and curricular choice are decisions that involve multiple groups of decision makers, accrue over time, and are subject to day-to-day conditions of instability, including student mobility, parent preference, teacher assignment, administrator and school board decisions, and the impact of standardized testing. This complex set of institutional policies, school contexts, and individual personalities makes comparative studies, even quasi-experimental approaches, challenging, and thus demands an honest and feasible assessment of what can be expected of evaluation studies (Usiskin, 1997; Kilpatrick, 2002; Schoenfeld, 2002; Shafer, in press).

Comparative evaluation study is an evolving methodology, and our purpose in conducting this review was to evaluate and learn from the efforts undertaken so far and advise on future efforts. We stipulated the use of comparative studies as follows:

A comparative study was defined as a study in which two (or more) curricular treatments were investigated over a substantial period of time (at least one semester, and more typically an entire school year) and a comparison of various curricular outcomes was examined using statistical tests. A statistical test was required to ensure the robustness of the results relative to the study’s design.

We read and reviewed a set of 95 comparative studies. In this report we describe that database, analyze its results, and draw conclusions about the quality of the evaluation database both as a whole and separated into evaluations supported by the National Science Foundation and commercially generated evaluations. In addition to describing and analyzing this database, we also provide advice to those who might wish to fund or conduct future comparative evaluations of mathematics curricular effectiveness. We have concluded that the process of conducting such evaluations is in its adolescence and could benefit from careful synthesis and advice in order to increase its rigor, feasibility, and credibility. In addition, we took an interdisciplinary approach to the task, noting that various committee members brought different expertise and priorities to the consideration of what constitutes the most essential qualities of rigorous and valid experimental or quasi-experimental design in evaluation. This interdisciplinary approach has led to some interesting observations and innovations in our methodology of evaluation study review.

This chapter is organized as follows:

  • Study counts disaggregated by program and program type.
  • Seven critical decision points and identification of at least minimally methodologically adequate studies.
  • Definition and illustration of each decision point.
  • A summary of results by student achievement in relation to program types (NSF-supported, University of Chicago School Mathematics Project [UCSMP], and commercially generated) in relation to their reported outcome measures.
  • A list of alternative hypotheses on effectiveness.
  • Filters based on the critical decision points.
  • An analysis of results by subpopulations.
  • An analysis of results by content strand.
  • An analysis of interactions among content, equity, and grade levels.
  • Discussion and summary statements.

In this report, we describe our methodology for review and synthesis so that others might scrutinize our approach and offer criticism on the basis of our methodology and its connection to the results stated and conclusions drawn. In the spirit of scientific, fair, and open investigation, we welcome others to undertake similar or contrasting approaches and compare and discuss the results. Our work was limited by the short timeline set by the funding agencies resulting from the urgency of the task. Although we made multiple efforts to collect comparative studies, we apologize to any curriculum evaluators if comparative studies were unintentionally omitted from our database.

Of these 95 comparative studies, 65 were studies of NSF-supported curricula, 27 were studies of commercially generated materials, and 3 included two curricula each, drawn from these two categories. To avoid the problem of double coding, two of these, White et al. (1995) and Zahrt (2001), were coded within the studies of NSF-supported curricula because more of the classes studied used the NSF-supported curriculum. These studies were not used in later analyses because they did not meet the requirements for at least minimally methodologically adequate studies, as described below. The third, Peters (1992), compared two commercially generated curricula and was coded in that category under the primary program of focus. Therefore, of the 95 comparative studies, 67 were coded as studies of NSF-supported curricula and 28 as studies of commercially generated materials.

The 11 evaluation studies of the UCSMP secondary program that we reviewed, not including White et al. and Zahrt as previously mentioned, benefit from the maturity of the program, while demonstrating an orientation to both establishing effectiveness and improving a product line. For these reasons, at times we will present the summary of UCSMP’s data separately.

The Saxon materials also present a somewhat different profile from the other commercially generated materials because many of the evaluations of these materials were conducted in the 1980s and the materials were originally developed with a rather atypical program theory: Saxon (1981) designed the algebra materials to combine distributed practice with incremental development. We selected the Saxon materials as a middle grades commercially generated program and limited the review to middle school studies from 1989 onward, when the first National Council of Teachers of Mathematics (NCTM) Standards (NCTM, 1989) were released. This eliminated concerns that the materials or the conditions of educational practice had been altered during the intervening time period. The Saxon materials explicitly do not draw from the NCTM Standards, nor did they receive support from the NSF; thus they truly represent a commercial venture. As a result, we categorized the Saxon studies within the group of studies of commercial materials.

At times in this report, we describe characteristics of the database by particular curricular program evaluations, in which case all 19 programs are listed separately. At other times, when we seek to inform ourselves on policy-related issues of funding and evaluating curricular materials, we use the NSF-supported, commercially generated, and UCSMP distinctions. We remind the reader of the artificial aspects of this distinction because at the present time, 18 of the 19 curricula are published commercially. In order to track the question of historical inception and policy implications, a distinction is drawn between the three categories. Figure 5-1 shows the distribution of comparative studies across the 14 programs.

FIGURE 5-1 The distribution of comparative studies across programs. Programs are coded by grade band: black bars = elementary, white bars = middle grades, and gray bars = secondary. In this figure, there are six studies that involved two programs and one study that involved three programs.

NOTE: Five programs (MathScape, MMAP, MMOW/ARISE, Addison-Wesley, and Harcourt) are not shown since no comparative studies of them were reviewed.

The first result the committee wishes to report is the uneven distribution of studies across the curricular programs. There were 67 coded studies of the NSF curricula, 11 studies of UCSMP, and 17 studies of the commercial publishers. The 14 evaluation studies conducted on the Saxon materials compose the bulk of these 17 non-UCSMP, non-NSF-supported curricular evaluation studies. As these results suggest, we know more about the evaluations of the NSF-supported curricula and UCSMP than about the evaluations of the commercial programs. We suggest that three factors account for this uneven distribution of studies. First, evaluations have been funded by the NSF both as a part of the original call and as follow-up to the work, in the case of three supplemental awards to two of the curricula programs. Second, most NSF-supported programs and UCSMP were developed at university sites where there is access to the resources of graduate students and research staff. Finally, there was some reported reluctance on the part of commercial companies to release studies that could affect perceptions of competitive advantage. As Figure 5-1 shows, there were quite a few comparative studies of Everyday Mathematics (EM), Connected Mathematics Project (CMP), Contemporary Mathematics in Context (Core-Plus Mathematics Project [CPMP]), Interactive Mathematics Program (IMP), UCSMP, and Saxon.

In the programs with many studies, we note that a significant number of studies were generated by a core set of authors. In some cases, the evaluation reports follow a relatively uniform structure applied to single schools, generating multiple studies or following cohorts over years. Others use a standardized evaluation approach to evaluate sequential courses. Any reports duplicating exactly the same sample, outcome measures, or forms of analysis were eliminated. For example, one study of Math Trailblazers (Carter et al., 2002) reanalyzed the data from the larger ARC Implementation Center study (Sconiers et al., 2002), so it was not included separately. Synthesis studies referencing a variety of evaluation reports are summarized in Chapter 6, but relevant individual studies that were referenced in them were sought out and included in this comparative review.

Other less formal comparative studies are conducted regularly at the school or district level, but such studies were not included in this review unless we could obtain formal reports of their results, and the studies met the criteria outlined for inclusion in our database. In our conclusions, we address the issue of how to collect such data more systematically at the district or state level in order to subject the data to the standards of scholarly peer review and make it more systematically and fairly a part of the national database on curricular effectiveness.

A standard for the evaluation of any social program holds that an impact assessment is warranted only if two conditions are met: (1) the curricular program is clearly specified, and (2) the intervention is well implemented. Absent this assurance, one must have a means of ensuring or measuring treatment integrity in order to make causal inferences. Rossi et al. (1999, p. 238) warned that:

two prerequisites [must exist] for assessing the impact of an intervention. First, the program’s objectives must be sufficiently well articulated to make it possible to specify credible measures of the expected outcomes, or the evaluator must be able to establish such a set of measurable outcomes. Second, the intervention should be sufficiently well implemented that there is no question that its critical elements have been delivered to appropriate targets. It would be a waste of time, effort, and resources to attempt to estimate the impact of a program that lacks measurable outcomes or that has not been properly implemented. An important implication of this last consideration is that interventions should be evaluated for impact only when they have been in place long enough to have ironed out implementation problems.

These same conditions apply to evaluation of mathematics curricula. The comparative studies in this report varied in the quality of documentation of these two conditions; however, all addressed them to some degree. By reviewing the studies, we initially identified one general design template consisting of seven critical decision points, and we determined that it could be used to develop a framework for conducting our meta-analysis. The seven critical decision points we identified were:

  • Choice of type of design: experimental or quasi-experimental;
  • For those studies that do not use random assignment: what methods of establishing comparability of groups were built into the design—this includes student characteristics, teacher characteristics, and the extent to which professional development was involved as part of the definition of a curriculum;
  • Definition of the appropriate unit of analysis (students, classes, teachers, schools, or districts);
  • Inclusion of an examination of implementation components;
  • Definition of the outcome measures and disaggregated results by program;
  • The choice of statistical tests, including statistical significance levels and effect size; and
  • Recognition of limitations to generalizability resulting from design choices.

These are critical decisions that affect the quality of an evaluation. We further identified a subset of these evaluation studies that met a set of minimum conditions that we termed at least minimally methodologically adequate studies. Such studies are those with the greatest likelihood of shedding light on the effectiveness of these programs. To be classified as at least minimally methodologically adequate, and therefore to be considered for further analysis, each evaluation study was required to:

  • Include quantifiably measurable outcomes such as test scores, responses to specified cognitive tasks of mathematical reasoning, performance evaluations, grades, and subsequent course taking; and
  • Provide adequate information to judge the comparability of samples.

In addition, a study must have included at least one of the following additional design elements:

  • A report of implementation fidelity or professional development activity;
  • Results disaggregated by content strands or by performance by student subgroups; and/or
  • Multiple outcome measures or precise theoretical analysis of a measured construct, such as number sense, proof, or proportional reasoning.
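
Expressed programmatically, this rubric is a simple conjunction-plus-disjunction filter. Here is a minimal sketch in Python; the field names are our own illustrative labels, not the committee's actual coding scheme:

```python
# A minimal sketch of the inclusion rubric described above.
# Field names are illustrative assumptions, not the committee's codes.
from dataclasses import dataclass

@dataclass
class Study:
    has_measurable_outcomes: bool        # test scores, grades, course taking
    comparability_info_reported: bool    # enough to judge sample comparability
    reports_fidelity_or_pd: bool         # implementation fidelity / prof. dev.
    disaggregates_results: bool          # by content strand or student subgroup
    multiple_or_precise_measures: bool   # multiple outcomes or precise construct

def minimally_adequate(s: Study) -> bool:
    """Both required elements plus at least one additional design element."""
    required = s.has_measurable_outcomes and s.comparability_info_reported
    additional = (s.reports_fidelity_or_pd
                  or s.disaggregates_results
                  or s.multiple_or_precise_measures)
    return required and additional
```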

Using this rubric, the committee identified a subset of 63 comparative studies to classify as at least minimally methodologically adequate and to analyze in depth to inform the conduct of future evaluations. There are those who would argue that any threat to the validity of a study discredits the findings, thus claiming that until we know everything, we know nothing. Others would claim that from the myriad of studies, examining patterns of effects and patterns of variation, one can learn a great deal, perhaps tentatively, about programs and their possible effects. More importantly, we can learn about methodologies and how to concentrate and focus to increase the likelihood of learning more quickly. As Lipsey (1997, p. 22) wrote:

In the long run, our most useful and informative contribution to program managers and policy makers and even to the evaluation profession itself may be the consolidation of our piecemeal knowledge into broader pictures of the program and policy spaces at issue, rather than individual studies of particular programs.

We do not wish to imply that we devalue studies of student affect or conceptions of mathematics, but we decided that unless these indicators were connected to direct indicators of student learning, we would eliminate them from further study. As a result of this sorting, we eliminated 19 studies of NSF-supported curricula and 13 studies of commercially generated curricula. Of these, 4 were eliminated for their sole focus on affect or conceptions, 3 were eliminated for their comparative focus on outcomes other than achievement, such as teacher-related variables, and 19 were eliminated for their failure to meet the minimum additional characteristics specified in the criteria above. In addition, six others were excluded from the studies of commercial materials because they were not conducted within the grade-level band specified by the committee for the selection of that program. From this point onward, all references can be assumed to refer to the at least minimally methodologically adequate studies unless a study is referenced for illustration, in which case we label it with “EX” to indicate that it is excluded from the summary analyses. Studies labeled “EX” are occasionally referenced because they can provide useful information on certain aspects of curricular evaluation, but not on overall effectiveness.

The at least minimally methodologically adequate studies reported on a variety of grade levels. Figure 5-2 shows the different grade levels of the studies. At times, the choice of grade levels was dictated by the years in which high-stakes tests were given. Most of the studies reported on multiple grade levels, as shown in Figure 5-2.

FIGURE 5-2 Single-grade studies by grade and multigrade studies by grade band.

Using the seven critical design elements of at least minimally methodologically adequate studies as a design template, we describe the overall database and discuss the array of choices on critical decision points with examples. Following that, we report the results of the at least minimally methodologically adequate studies by program type. To do so, the results of each study were coded as either statistically significant or not. Those studies that contained statistically significant results were assigned a percentage of outcomes that were positive (in favor of the treatment curriculum), based on the number of statistically significant comparisons reported relative to the total number of comparisons reported, and a percentage of outcomes that were negative (in favor of the comparison curriculum). The remaining comparisons were coded as the percentage of outcomes that were nonsignificant. Then, using the seven critical decision points as filters, we identified and examined more closely sets of studies that exhibited the strongest designs and would therefore be most likely to increase our confidence in the validity of the evaluation. In the last section, we consider alternative hypotheses that could explain the results.
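
The coding scheme just described reduces each study to three percentages that sum to 100. A minimal sketch of that tally in Python follows; the data structure and names are our own illustration, not the committee's actual instrument:

```python
# Sketch of the outcome-coding scheme described above: each study's
# reported comparisons are tallied into percentages of positive,
# negative, and nonsignificant outcomes. Names are illustrative.
def code_study_outcomes(comparisons):
    """comparisons: list of (significant: bool, favors_treatment: bool)."""
    n = len(comparisons)
    pos = sum(1 for sig, fav in comparisons if sig and fav)
    neg = sum(1 for sig, fav in comparisons if sig and not fav)
    nonsig = n - pos - neg
    return {"positive": 100 * pos / n,
            "negative": 100 * neg / n,
            "nonsignificant": 100 * nonsig / n}

# Example: of 6 reported comparisons, 3 significant favoring the treatment,
# 1 favoring the comparison curriculum, 2 nonsignificant -> 50% / ~17% / ~33%.
print(code_study_outcomes([(True, True)] * 3 + [(True, False)] +
                          [(False, False)] * 2))
```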

The committee emphasizes that we did not directly evaluate the materials. We present no analysis of results aggregated across studies by naming individual curricular programs, because we did not consider the magnitude or rigor of the database for individual programs substantial enough to do so. Nevertheless, there are studies that provide compelling data concerning the effectiveness of a program in a particular context. Furthermore, while we report on individual studies and their results to highlight issues of approach and methodology, to remain within our primary charge, which was to evaluate the evaluations, we do not summarize results of the individual programs.

DESCRIPTION OF COMPARATIVE STUDIES DATABASE ON CRITICAL DECISION POINTS

An experimental or quasi-experimental design.

We separated the studies into experimental and quasi-experimental designs and found that 100 percent of the studies were quasi-experimental (Campbell and Stanley, 1966; Cook and Campbell, 1979; Rossi et al., 1999). Within the quasi-experimental studies, we identified three subcategories of comparative study. In the first case, we identified a study as cross-curricular comparative if it compared the results of curriculum A with curriculum B. A few studies in this category also compared two samples within the same curriculum to each other under different specified conditions, such as high and low implementation quality.

A second category of quasi-experimental study involved comparisons that could shed light on effectiveness through time series studies. These studies compared the performance of a sample of students in a curriculum under investigation across time, such as in a longitudinal study of the same students over time. A third category of comparative study involved a comparison to some form of externally normed results, such as populations taking state, national, or international tests, or prior research assessments from a published study or studies. We categorized these studies, divided them into NSF, UCSMP, and commercial, and labeled them by the categories above (Figure 5-3).

FIGURE 5-3 The number of comparative studies in each category.

In nearly all studies in the comparative group, the titles of experimental curricula were explicitly identified. The only exception was the ARC Implementation Center study (Sconiers et al., 2002), in which three NSF-supported elementary curricula were examined but their effects were pooled in the results. In contrast, in the majority of cases, the comparison curriculum was referred to simply as “traditional.” In only 22 cases were comparisons made between two identified curricula. Many others surveyed the array of curricula at comparison schools and reported on the most frequently used, but did not identify a single curriculum. This design strategy is often used because other factors were used in selecting comparison groups, and the additional requirement of a single identified curriculum at these sites would often make it difficult to match. Studies were categorized into specified (including a single or multiple identified curricula) and nonspecified curricula. In the 63 studies, the central group was compared to an NSF-supported curriculum (1), an unnamed traditional curriculum (41), a named traditional curriculum (19), or one of the six commercial curricula (2). To our knowledge, any systematic impact of such a decision on results has not been studied, but we express concern that when a specified curriculum is compared to an unspecified comparison group that is in fact a mix of many curricula, the comparison may favor the coherence and consistency of the single specified curriculum; we consider this possibility subsequently under alternative hypotheses. We believe that a quality study should at least report the array of curricula that constitute the comparison group and include a measure of the frequency of use of each, though a well-defined alternative is more desirable.

If a study was both longitudinal and comparative, it was coded as comparative. When studies only examined the performance of a group over time, as in some longitudinal studies, they were coded as quasi-experimental normed. In longitudinal studies, the problems created by student mobility were evident. In one study, Carroll (2001), a five-year longitudinal study of Everyday Mathematics, the sample began with 500 students, 24 classrooms, and 11 schools. By 2nd grade, the longitudinal sample was 343. By 3rd grade, the number of classes increased to 29 while the number of original students decreased to 236. At the completion of the study, approximately 170 of the original students were still in the sample. This high rate of attrition suggests that mobility is a major challenge in curricular evaluation and that the effects of curricular change on mobile students need to be studied as a potential threat to the validity of the comparison. Mobility is also a challenge for curriculum implementation, because students coming into a program do not experience its cumulative, developmental effect.

Longitudinal studies also have unique challenges associated with outcome measures; a study by Romberg et al. (in press) (EX) discussed one approach to this problem. In this study, an external assessment system and a problem-solving assessment system were used. In the external assessment system, items from the National Assessment of Educational Progress (NAEP) and the Third International Mathematics and Science Study (TIMSS) were balanced across four strands (number, geometry, algebra, probability and statistics), and 20 items of moderate difficulty, called anchor items, were repeated on each grade-specific assessment (p. 8). Because the analyses of the results were still under way, the evaluators could not provide us with final results, so the study is coded as EX.

However, such longitudinal studies can provide substantial evidence of the effects of a curricular program because they may be more sensitive to an accumulation of modest effects and/or can reveal whether rates of learning change over time within curricular change.

TABLE 5-1 Scores in Percentage Correct by Everyday Mathematics Students and Various Comparison Groups Over a Five-Year Longitudinal Study

The longitudinal study by Carroll (2001) showed that the effects of curricula may often accrue over time, but measurements of achievement present challenges to drawing such conclusions as the content and grade level change. A variety of measures were used over time to demonstrate growth in relation to comparison groups. The author chose a set of measures used previously in studies involving two Asian samples and an American sample to provide a contrast to the students in EM over time. For 3rd and 4th grades, where the data from the comparison group were not available, the authors selected items from the NAEP to bridge the gap. Table 5-1 summarizes the scores of the different comparative groups over five years. Scores are reported as the mean percentage correct for a series of tests on number computation, number concepts and applications, geometry, measurement, and data analysis.

It is difficult to compare performances on different tests across different groups over time against a single longitudinal group from EM, and it is not possible to determine whether the students’ performance was increasing or whether the changes in the tests at each grade level produced the results; thus the results of longitudinal studies lacking a control group or sophisticated methodological analysis may be suspect and should be interpreted with caution.

In the Hirsch and Schoen (2002) study, based on a sample of 1,457 students and scores on the Ability to Do Quantitative Thinking (ITED-Q) subtest of the Iowa Tests of Educational Development, students in Core-Plus showed increasing performance over national norms over the three-year time period. The authors describe the content of the ITED-Q test and point out that “although very little symbolic algebra is required, the ITED-Q is quite demanding for the full range of high school students” (p. 3). They further point out that “[t]his 3-year pattern is consistent, on average, in rural, urban, and suburban schools, for males and females, for various minority groups, and for students for whom English was not their first language” (p. 4). In this case, one sees that studies over time are important, as results over shorter periods may mask cumulative effects of consistent and coherent treatments, and such studies could also show increases that do not persist over longer trajectories. One approach to longitudinal studies was used by Webb and Dowling in their studies of the Interactive Mathematics Program (Webb and Dowling, 1995a, 1995b, 1995c). These researchers conducted transcript analyses as a means to examine student persistence and success in subsequent course taking.

The third category of quasi-experimental comparative studies measured student outcomes on a particular curricular program and simply compared them to performance on national or international tests. When these tests were of good quality and were representative of a genuine sample of a relevant population, such as NAEP reports or TIMSS results, the reports often provided a reasonable indicator of the effects of the program if combined with a careful description of the sample. Sometimes the national or state tests used were norm-referenced tests producing national percentiles or grade-level equivalents. The normed studies were considered weaker in establishing effectiveness, but were still considered valid as examples of comparing samples to populations.

For Studies That Do Not Use Random Assignment: What Methods of Establishing Comparability Across Groups Were Built into the Design

The most fundamental question in an evaluation study is whether the treatment has had an effect on the chosen criterion variable. In our context, the treatment is the curriculum materials and, in some cases, related professional development, and the outcome of interest is academic learning. To establish whether there is a treatment effect, one must logically rule out as many other explanations as possible for the differences in the outcome variable. There is a long tradition of how this is best done, and the principle from a design point of view is to assure that there are no differences between the treatment conditions (in these evaluations, often simply the new curriculum materials versus a control group) either at the outset of the study or during its conduct.

To ensure the first condition, the ideal procedure is the random assignment of the appropriate units to the treatment conditions. The second condition requires that the treatment is administered reliably during the length of the study, and is assured through careful observation and control of the situation. Without randomization, there is a host of possible confounding variables that could differ among the treatment conditions and that are themselves related to the outcome variables. Put another way, the treatment effect is a parameter that the study is set up to estimate, and an unbiased estimate is desired: one whose expected value over repeated samplings equals the true value of the parameter. Without randomization at the onset of a study, there is no way to assure this property of unbiasedness. The variables that differ across treatment conditions and are related to the outcomes are confounding variables, which bias the estimation process.
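
This condition can be stated compactly. In our notation (not the report's), writing $\hat{\tau}$ for the estimated treatment effect and $\tau$ for the true effect, unbiasedness requires

$$\mathbb{E}[\hat{\tau}] = \tau .$$

Without randomization, and assuming for simplicity a constant treatment effect, a raw difference in group means picks up a selection term: with potential outcome $Y_0$ (the outcome a unit would have under the comparison curriculum) and treatment indicator $T$,

$$\mathbb{E}\big[\bar{Y}_{T=1} - \bar{Y}_{T=0}\big] = \tau + \underbrace{\mathbb{E}[Y_0 \mid T=1] - \mathbb{E}[Y_0 \mid T=0]}_{\text{bias from confounding}} ,$$

where the bracketed term vanishes under random assignment and is otherwise unknown unless the confounders are measured and adjusted for.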

Only one study we reviewed, Peters (1992), used randomization in the assignment of students to treatments, but because the study was limited to one teacher teaching two sections and included substantial qualitative methods, we coded it as quasi-experimental. Others report partially assigning teachers randomly to treatment conditions (Thompson et al., 2001; Thompson et al., 2003). Two primary reasons seem to account for the lack of pure experimental designs. First, to justify the conduct and expense of a randomized field trial, the program must be described adequately and there must be relative assurance that its implementation has occurred over the duration of the experiment (Peterson et al., 1999). Additionally, one must be sure that the outcome measures are appropriate for the range of performances in the groups and valid relative to the curricula under investigation. Seldom can such conditions be assured for all students and teachers over the duration of a year or more.

A second reason is that random assignment of classrooms to curricular treatment groups typically is not permitted or encouraged under normal school conditions. As one evaluator wrote, “Building or district administrators typically identified teachers who would be in the study and in only a few cases was random assignment of teachers to UCSMP Algebra or comparison classes possible. School scheduling and teacher preference were more important factors to administrators and at the risk of losing potential sites, we did not insist on randomization” (Mathison et al., 1989, p. 11).

The Joint Committee on Standards for Educational Evaluation (1994, p. 165) recognized the likelihood of limitations on randomization, writing:

The groups being compared are seldom formed by random assignment. Rather, they tend to be natural groupings that are likely to differ in various ways. Analytical methods may be used to adjust for these initial differences, but these methods are based upon a number of assumptions. As it is often difficult to check such assumptions, it is advisable, when time and resources permit, to use several different methods of analysis to determine whether a replicable pattern of results is obtained.

Does the dearth of pure experimentation render the results of the studies reviewed worthless? Bias is not an either-or proposition; it is a quantity of varying degree. Through careful measurement of the most salient potential confounding variables, precise theoretical description of constructs, and the use of appropriate methods of statistical analysis, it is possible to reduce the amount of bias in the estimated treatment effect. Identifying the most likely confounding variables, measuring them, and making subsequent adjustments can greatly reduce bias and help estimate an effect that is likely to be more reflective of the true value. A theoretically fully specified model is an alternative to randomization: by including all relevant variables, it allows unbiased estimation of the parameter. The only problem is knowing when the model is fully specified.

We recognized that we can never have enough knowledge to assure a fully specified model, especially in the complex and unstable conditions of schools. However, a key issue in determining the degree of confidence we have in these evaluations is to examine how they have identified, measured, or controlled for such confounding variables. In the next sections, we report on the methods of the evaluators in identifying and adjusting for such potential confounding variables.

One method to eliminate confounding variables is to examine the extent to which the samples investigated are equated, either by sample selection or by methods of statistical adjustment. For individual students, there is a large literature suggesting the importance of social class to achievement. In addition, the prior achievement of students must be considered. In the comparative studies, investigators first identified districts, schools, or classes that could provide sufficient duration of use of curricular materials (typically two years or more), availability of target classes, and adequate levels of use of program materials; establishing comparability was a secondary concern.

These two major factors were generally used in establishing the comparability of the sample:

  • Student population characteristics, such as demographic characteristics of students in terms of race/ethnicity, economic levels, or location type (urban, suburban, or rural).
  • Performance-level characteristics, such as performance on prior tests, pretest performance, percentage passing standardized tests, or related measures (e.g., problem solving, reading).

In general, four methods of comparing groups were used in the studies we examined, and they permit different degrees of confidence in their results. In the first type, a matching class, school, or district was identified. Studies were coded as this type if specified characteristics were used to select the schools systematically. In some of these studies, the methodology was relatively complex, as correlates of performance on the outcome measures were found empirically and matches were created on that basis (Schneider, 2000; Riordan and Noyce, 2001; Sconiers et al., 2002). For example, in the Sconiers et al. study, where the total sample of more than 100,000 students was drawn from five states and three elementary curricula were reviewed (Everyday Mathematics, Math Trailblazers [MT], and Investigations [IN]), a highly systematic method was developed. After defining eligibility as a “reform school,” evaluators conducted separate regression analyses for the five states at each tested grade level to identify the strongest predictors of average school mathematics score. They reported, “reading score and low-income variables … consistently accounted for the greatest percentage of total variance. These variables were given the greatest weight in the matching process. Other variables—such as percent white, school mobility rate, and percent with limited English proficiency (LEP)—accounted for little of the total variance but were typically significant. These variables were given less weight in the matching process” (Sconiers et al., 2002, p. 10). To further provide a fair and complete comparison, adjustments were made based on regression analysis of the scores to minimize bias prior to calculating the difference in scores and reporting effect sizes. In their results the evaluators report, “The combined state-grade effect sizes for math and total are virtually identical and correspond to a percentile change of about 4 percent favoring the reform students” (p. 12).
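
To make the logic of such regression-weighted matching concrete, here is a minimal sketch in Python. It is our illustration of the general technique the study describes, not the evaluators' actual procedure; the column names, the use of squared standardized coefficients as weights, and matching with replacement are all simplifying assumptions:

```python
# Sketch of regression-weighted school matching in the spirit of the
# procedure described for Sconiers et al. (2002). Column names, the
# weighting scheme, and matching with replacement are our assumptions.
import pandas as pd
import statsmodels.api as sm

def match_comparison_schools(schools, covariates):
    """schools: DataFrame with 'reform' (0/1), 'mean_math_score',
    and one column per covariate (e.g., reading score, pct low income)."""
    reform = schools[schools["reform"] == 1]
    comparison = schools[schools["reform"] == 0]

    # Regress school mean math score on candidate covariates to find
    # the strongest predictors, as the evaluators describe doing.
    fit = sm.OLS(schools["mean_math_score"],
                 sm.add_constant(schools[covariates])).fit()

    # Weight each covariate by its (approximate) explanatory power,
    # here the squared standardized regression coefficient.
    std = schools[covariates].std()
    betas = fit.params[covariates] * std / schools["mean_math_score"].std()
    weights = betas**2 / (betas**2).sum()

    # Pair each reform school with the nearest comparison school in
    # weighted, standardized covariate space (with replacement).
    z = (schools[covariates] - schools[covariates].mean()) / std
    pairs = {}
    for i in reform.index:
        dist = (((z.loc[comparison.index] - z.loc[i]) ** 2) * weights).sum(axis=1)
        pairs[i] = dist.idxmin()
    return pairs, weights
```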

A second type of matching procedure was used in the UCSMP evaluations. For example, in an evaluation centered on geometry learning, evaluators advertised in NCTM and UCSMP publications and set conditions for participation from schools using their program in terms of length of use and grade level. After selecting schools with heterogeneous grouping and no tracking, the researchers used a matched-pair design in which they selected classes from the same school on the basis of mathematics ability. They used a pretest to determine this, and because the pretest consisted of two parts, they adjusted their significance level using the Bonferroni method. Pairs were discarded if the differences in means and variance were significant for all students or for those students completing all measures, or if class sizes became too variable. In the algebra study, the matching produced 20 pairs; in the comparison study relevant to this review, because they were comparing three experimental conditions (first edition, second edition, and comparison classes), their matching procedure identified 8 pairs. When possible, teachers were assigned randomly to treatment conditions. Most results are presented with the eight identified pairs and an accumulated set of means. The outcomes of this particular study are described below in a discussion of outcome measures (Thompson et al., 2003).
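
The Bonferroni adjustment mentioned above is simple: with $m$ simultaneous tests, each test is run at a stricter level so the family-wise error rate stays at the nominal $\alpha$. For the two-part pretest (so $m = 2$, by our reading), this gives

$$\alpha_{\text{per test}} = \frac{\alpha}{m} = \frac{0.05}{2} = 0.025 .$$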

A third method was to measure factors such as prior performance or socioeconomic status (SES) based on pretesting, and then to use analysis of covariance or multiple regression in the subsequent analysis to factor in the variance associated with these factors. These studies were coded as “control.” A number of studies of the Saxon curricula used this method. For example, Rentschler (1995) conducted a study of Saxon 76 compared to Silver Burdett with 7th graders in West Virginia. He reported that the groups differed significantly in that the control classes had 65 percent of students on free and reduced-price lunch programs compared to 55 percent in the experimental conditions. He used scores on the California Test of Basic Skills mathematics computation and mathematics concepts and applications subtests as pretest scores and found significant differences in favor of the experimental group. His posttest scores showed the Saxon experimental group outperforming the control group on both computation and concepts and applications. Using analysis of covariance, the computation difference in favor of the experimental group was statistically significant; however, the difference in concepts and applications was adjusted to show no significant difference at the p < .05 level.
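
As a concrete illustration of this third method, here is a minimal ANCOVA sketch in Python using statsmodels; the data frame and column names are our own assumptions, not Rentschler's data:

```python
# Sketch: analysis of covariance (ANCOVA) of the kind used by the
# "control"-coded studies: posttest group differences adjusted for
# pretest performance. Column names are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

def ancova_treatment_effect(df: pd.DataFrame):
    """df columns: posttest, pretest, group ('experimental' or 'control')."""
    model = smf.ols("posttest ~ pretest + C(group)", data=df).fit()
    # The C(group) coefficient estimates the group difference after
    # factoring out variance associated with prior performance.
    return model
```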

A fourth method was noted in studies that used less rigorous methods of sample selection and comparison of prior achievement or similar demographics. These studies were coded as “compare.” Typically, there was no explicit procedure to decide whether the comparison was good enough. In some of the studies, it appeared that the comparison was used not as a means of selection but as a more informal device to convince the reader of the plausibility of the equivalence of the groups. Clearly, studies that used a more precise method of selection were more likely to produce results in which one can place greater confidence.

Definition of Unit of Analysis

A major decision in forming an evaluation design is the unit of analysis. The unit of selection or randomization used to assign elements to treatment and control groups is closely linked to the unit of analysis. As noted in the National Research Council (NRC) report (1992, p. 21):

If one carries out the assignment of treatments at the level of schools, then that is the level that can be justified for causal analysis. To analyze the results at the student level is to introduce a new, nonrandomized level into the study, and it raises the same issues as does the nonrandomized observational study…. The implications … are twofold. First, it is advisable to use randomization at the level at which units are most naturally manipulated. Second, when the unit of observation is at a “lower” level of aggregation than the unit of randomization, then for many purposes the data need to be aggregated in some appropriate fashion to provide a measure that can be analyzed at the level of assignment. Such aggregation may be as simple as a summary statistic or as complex as a context-specific model for association among lower-level observations.

In many studies, inadequate attention was paid to the fact that the unit of selection would later become the unit of analysis. For most curriculum evaluations, the unit of analysis needs to be at least the classroom, if not the school or even the district. The units must be independently responding units because instruction is a group process: students are not independent, and even the classroom is not entirely independent when teachers work together on instruction within a school, in which case the school becomes the unit. Care must be taken to ensure that an adequate number of units is available to provide sufficient statistical power to detect important differences.

A curriculum is experienced by students in a group, which implies that individual student responses and what they learn are correlated. As a result, the appropriate unit of assignment and analysis must be defined at least at the classroom or teacher level. Other researchers (Bryk et al., 1993) suggest that the unit might be better selected at an even higher level of aggregation. The school itself provides a culture in which the curriculum is enacted, as it is influenced by the policies and assignments of the principal, by the professional interactions and governance exhibited by the teachers as a group, and by the community in which the school resides. This would imply that the school might be the appropriate unit of analysis. Even further, to the extent that decisions about curriculum are made at the district level and supported through resources and professional development at that level, the appropriate unit could arguably be the district. On a more practical level, we found that arguments can be made for a variety of decisions on the selection of units; what is most essential is to make a clear argument for one's choice, to use the same unit in the analysis as in the sample selection process, and to recognize the potential limits to generalization that result from one's decisions.
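
In code, honoring the unit of analysis often amounts to collapsing student records to the level at which the curriculum was assigned before running any test. A minimal sketch follows, with column names of our own choosing:

```python
# Sketch: collapse student records to the unit of assignment (here the
# classroom) before testing, so the analysis respects the unit of
# selection. Column names are illustrative assumptions.
import pandas as pd
from scipy import stats

def classroom_level_test(students: pd.DataFrame):
    """students: DataFrame with 'score', 'classroom', 'treatment' (0/1)."""
    classrooms = students.groupby("classroom").agg(
        mean_score=("score", "mean"),
        treatment=("treatment", "first"),  # constant within a classroom
    )
    treated = classrooms.loc[classrooms["treatment"] == 1, "mean_score"]
    control = classrooms.loc[classrooms["treatment"] == 0, "mean_score"]
    # Degrees of freedom now reflect classrooms, not students.
    return stats.ttest_ind(treated, control)
```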

We would argue that in all cases, reports of how sites were selected must be explicit in the evaluation report. For example, one set of evaluation studies selected sites through advertisements in a journal distributed by the program and in NCTM journals (UCSMP) (Thompson et al., 2001; Thompson et al., 2003). The samples in these studies tended to be affluent suburban populations and predominantly white populations. Other conditions of inclusion, such as frequency of use, also might have influenced this outcome, but it is important that, over a set of studies on effectiveness, all populations of students be adequately sampled. When a study is not randomized, adjustments for these confounding variables should be included. In our analysis of equity, we report on concerns about the representativeness of the overall samples and their impact on the generalizability of the results.

Implementation Components

The complexity of doing research on curricular materials introduces a number of possible confounding variables. Due to the documented complexity of curricular implementation, most comparative study evaluators attempt to monitor implementation in some fashion. A valuable outcome of a well-conducted evaluation is to determine not only whether the experimental curriculum could ideally have a positive impact on learning, but whether it can survive or thrive in the conditions of schooling that are so variable across sites. It is essential to know what the treatment was, whether it occurred, and if so, to what degree of intensity, fidelity, duration, and quality. In our model in Chapter 3, these factors were referred to as “implementation components.” Measuring implementation can be costly for large-scale comparative studies; however, many researchers have shown that variation in implementation is a key factor in determining effectiveness. In coding the comparative studies, we identified three types of components that help to document the character of the treatment: implementation fidelity, professional development treatments, and attention to teacher effects.

Implementation Fidelity

Implementation fidelity is a measure of the basic extent of use of the curricular materials. It does not address issues of instructional quality. In some studies, implementation fidelity is synonymous with “opportunity to learn.” In examining implementation fidelity, a variety of data were reported, including, most frequently, the extent of coverage of the curricular material, the consistency of the instructional approach to content in relation to the program’s theory, reports of pedagogical techniques, and the length of use of the curricula at the sample sites. Other less frequently used approaches documented the calendar of curricular coverage, requested teacher feedback by textbook chapter, conducted student surveys, and gauged homework policies, use of technology, and other particular program elements. Interviews with teachers and students, classroom surveys, and observations were the most frequently used data-gathering techniques. Classroom observations were conducted infrequently in these studies, except in cases when comparative studies were combined with case studies, typically with small numbers of schools and classes, where observations were conducted for long or frequent time periods. In our analysis, we coded only the presence or absence of one or more of these methods.

If the extent of implementation was used in interpreting the results, we classified the study as having adjusted for implementation differences. Across all 63 at least minimally methodologically adequate studies, 44 percent reported some type of implementation fidelity measure, 3 percent reported and adjusted for it in interpreting their outcome measures, and 53 percent recorded no information on this issue. Studies differed by type (NSF, UCSMP, and commercially generated) on this issue, with 46 percent of NSF studies reporting or adjusting for implementation, 75 percent of UCSMP studies, and only 11 percent of the other studies of commercial materials doing so. Of the commercial, non-UCSMP studies included, only one reported on implementation. Possibly, the evaluators for the NSF and UCSMP secondary programs recognized more clearly that their programs demanded significant changes in practice that could affect their outcomes and could pose challenges to the teachers assigned to them.

A study by Abrams (1989) (EX) on the use of Saxon algebra by ninth graders showed that concerns for implementation fidelity extend to all curricula, even those like Saxon whose methods may seem more consistent with common practice. Abrams wrote, “It was not the intent of this study to determine the effectiveness of the Saxon text when used as Saxon suggests, but rather to determine the effect of the text as it is being used in the classroom situations. However, one aspect of the research was to identify how the text is being taught, and how closely teachers adhere to its content and the recommended presentation” (p. 7). Her findings showed that for the 9 teachers and 300 students, treatment effects favoring the traditional group (using Dolciani’s Algebra I textbook, Houghton Mifflin, 1980) were found on the algebra test, the algebra knowledge/skills subtest, and the problem-solving test for this population of teachers (fixed effect). No differences were found between the groups on an algebra understanding/applications subtest, overall attitude toward mathematics, mathematical self-confidence, anxiety about mathematics, or enjoyment of mathematics. She suggests that the lack of differences might be due to the ways in which teachers supplement materials, change test conditions, emphasize and deemphasize topics, use their own tests, vary the proportion of time spent on development and practice, use calculators and group work, and generally adapt the materials to their own interpretation and method. Many of these practices conflict directly with the recommendations of the authors of the materials.

A study by Briars and Resnick (2000) (EX) in Pittsburgh schools directly confronted issues relevant to professional development and implementation. Evaluators contrasted the performance of students taught by teachers of high and low implementation quality, and showed the results on two contrasting outcome measures, the Iowa Test of Basic Skills (ITBS) and the Balanced Assessment. Strong implementers were defined as those who used all of the Everyday Mathematics (EM) components and provided student-centered instruction by giving students opportunities to explore mathematical ideas, solve problems, and explain their reasoning. Weak implementers were either not using EM or using it so little that the overall instruction in the classrooms was “hardly distinguishable from traditional mathematics instruction” (p. 8). Assignment was based on observations of student behavior in classes, the presence or absence of manipulatives, teacher questionnaires about the programs, and students’ knowledge of classroom routines associated with the program.

From the identification of strong- and weak-implementing teachers, strong- and weak-implementation schools were identified as those with strong- or weak-implementing teachers in 3rd and 4th grades over two consecutive years. The performance of students with 2 years of EM experience in these settings composed the comparative samples. Three pairs of strong- and weak-implementation schools with similar demographics in terms of free and reduced-price lunch (range 76 to 93 percent), students living with only one parent (range 57 to 82 percent), mobility (range 8 to 16 percent), and ethnicity (range 43 to 98 percent African American) were identified. These students’ 1st-grade ITBS scores indicated similarity in prior performance levels. Finally, evaluators predicted that if the effects were due to the curricular implementation and accompanying professional development, the effects on scores should be seen in 1998, after full implementation. Figure 5-4 shows that on the 1998 New Standards exams, placement in strong- or weak-implementation schools strongly affected students’ scores. Over three years, performance in the district on skills, concepts, and problem solving rose, confirming the evaluators’ predictions.

An article by McCaffrey et al. (2001) examining the interactions among instructional practices, curriculum, and student achievement illustrates the point that the terms traditional and reform teaching are often inadequately linked to measurement tools. In this study, researchers conducted an exploratory factor analysis that led them to create two scales for instructional practice: Reform Practices and Traditional Practices.


FIGURE 5-4 Percentage of students who met or exceeded the standard on the districtwide grade 4 New Standards Mathematics Reference Examination (NSMRE) for 1996, 1997, and 1998, by level of Everyday Mathematics implementation. Error bars denote the 99 percent confidence interval for each data point.

SOURCE: Re-created from Briars and Resnick (2000, pp. 19-20).

The reform scale measured the frequency, by means of teacher report, of teacher and student behaviors associated with reform instruction and assessment practices, such as using small-group work, explaining reasoning, representing and using data, writing reflections, or performing tasks in groups. The traditional scale focused on explanations to whole classes, the use of worksheets, practice, and short-answer assessments. There was a –0.32 correlation between the two scale scores for teachers using integrated curricula and a 0.27 correlation for teachers using traditional curricula. These results show that it is overly simplistic to treat reform and traditional practices as oppositional; the relationship among a variety of instructional practices is rather more complex as they interact with curriculum and various student populations.

Professional Development

Professional development and teacher effects were separated in our analysis from implementation fidelity. We recognized that professional development could be viewed by the readers of this report in two ways. As indicated in our model, professional development can be considered a program element or component, or it can be viewed as part of the implementation process. When viewed as a program element, professional development resources are considered mandatory along with program materials. In relation to evaluation, proponents of considering professional development as a mandatory program element argue that curricular innovations, which involve the introduction of new topics, new types of assessment, or new ways of teaching, must make provision for adequate training, just as with the introduction of any new technology.

For others, the inclusion of professional development among the program elements without a concomitant inclusion of equal amounts of professional development for the comparative treatment interjects a priori disproportionate treatments and biases the results. We hoped for an array of evaluation studies that might shed some empirical light on this dispute, and hence separated professional development from treatment fidelity, coding whether or not studies reported on the amount of professional development provided for the treatment and/or comparison groups. A study was coded as positive if it either reported on the professional development provided to the experimental group or reported these data for both treatments. Across all 63 at least minimally methodologically adequate studies, 27 percent reported some type of professional development measure, 1.5 percent reported and adjusted for it in interpreting their outcome measures, and 71.5 percent recorded no information on the issue.

A study by Collins (2002) (EX) illustrates the critical and controversial role of professional development in evaluation. Collins studied the use of Connected Math over three years in three middle schools under threat of being classified as low performing in the Massachusetts accountability system. A comparison was made between one school (School A) that engaged substantively in the professional development opportunities accompanying the program and two that did not (Schools B and C). In School A, between 100 and 136 hours of professional development were recorded for all seven teachers in grades 6 through 8. In School B, 66 hours were reported for two teachers, and in School C, 150 hours were reported for eight teachers over three years. Results showed significant differences in subsequent student performance: the school with higher participation in professional development (School A) became a districtwide top performer, while the other two schools remained at risk for low performance. No controls for teacher effects were possible, but the results do suggest the centrality of professional development for successful implementation, or possibly that the results were due to professional development rather than to the curriculum materials. The fact that these two interpretations cannot be separated is a problem when professional development is given to one group and not the other: the effect could be due to the textbook, to the professional development, or to an interaction between the two. Research designs should be adjusted to consider these issues when different conditions of professional development are provided.

Teacher Effects

These studies make it obvious that teacher effects are a potential confounding factor. Many evaluation studies devoted inadequate attention to the variable of teacher quality. A few studies (Goodrow, 1998; Riordan and Noyce, 2001; Thompson et al., 2001; Thompson et al., 2003) reported on teacher characteristics such as certification, length of service, experience with curricula, or degrees completed. Those studies that matched classrooms and reported matched rather than aggregated results sought ways to acknowledge the large variations among teacher performance and its impact on student outcomes. We coded any effort to report on possible teacher effects as one indicator of quality. Across all 63 at least minimally methodologically adequate studies, 16 percent reported some type of teacher effect measure, 3 percent reported and adjusted for it in interpreting their outcome measures, and 81 percent recorded no information on this issue.

One can see that the potential confounding factors of teacher effects, whether in terms of the provision of professional development or the measurement of teacher effects, are not adequately considered in most evaluation designs. Some studies mention the problem and offer a subjective judgment as to its nature, but this is descriptive at most. Hardly any of the studies do anything analytical about it, and because these are such important potential confounding variables, this presents a serious challenge to the efficacy of these studies. Figure 5-5 shows how attention to these factors varies across the program categories of NSF-supported, UCSMP, and commercially generated studies.


FIGURE 5-5 Treatment of implementation components by program type.

NOTE: PD = professional development.

In general, evaluations of NSF-supported studies were the most likely to measure these variables; UCSMP had the most standardized use of methods to do so across studies; and evaluators of commercial materials seldom reported on issues of implementation fidelity.

Identification of a Set of Outcome Measures and Forms of Disaggregation

Using the selected student outcomes identified in the program theory, one must conduct an impact assessment that addresses the design and measurement of student outcomes. In addition to selecting what outcomes should be measured within one’s program theory, one must determine how these outcomes are measured, when those measures are collected, and what purpose they serve from the perspective of the participants. In the case of curricular evaluation, there are significant issues involved in how these measures are reported. To provide insight into the level of curricular validity, many evaluators prefer to report results by topic, content strand, or item cluster. These reports often present the level of specificity of outcome needed to inform curriculum designers, especially when efforts are made to document patterns of errors, distribution of results across multiple choices, or analyses of student methods. In such cases, whole-test scores may mask essential differences in impact among curricula at the level of content topics, reporting only average performance.

On the other hand, many large-scale assessments depend on methods of test equating that rely on whole-test scores, making comparative interpretations of content strands across different test administrations of questionable reliability. Furthermore, there are questions such as whether to present gain scores or effect sizes, how to link pretests and posttests, and how to determine the relative curricular sensitivity of various outcome measures.

The findings of comparative studies are reported in terms of the outcome measure(s) collected. To describe the nature of the database with regard to outcome measures and to facilitate our analyses of the studies, we classified each of the included studies on four outcome measure dimensions (a coding sketch follows the list):

Total score reported;

Disaggregation by content strand, subtest, performance level, SES, or gender;

Outcome measure that was specific to curriculum; and

Use of multiple outcome measures.
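To make the coding concrete, the following minimal sketch (ours, not the committee’s actual instrument; the field names and the example record are illustrative assumptions) represents the four dimensions as boolean codes on a study record:

```python
# A minimal sketch of the four outcome-measure dimensions as boolean codes.
# The field names and the example record are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class OutcomeCoding:
    total_score: bool           # a total test score was reported
    disaggregated: bool         # results broken out by strand, subtest,
                                # performance level, SES, or gender
    curriculum_specific: bool   # a measure tailored to the curriculum studied
    multiple_measures: bool     # more than one outcome measure was used

# A hypothetical study that reported only curriculum-specific, item-level
# results, disaggregated by content strand:
example = OutcomeCoding(total_score=False, disaggregated=True,
                        curriculum_specific=True, multiple_measures=False)
```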

Most studies reported a total score, but we did find studies that reported only subtest scores or only scores on an item-by-item basis. For example, in the Ben-Chaim et al. (1998) evaluation study of Connected Math, the authors were interested in students’ proportional reasoning proficiency as a result of use of this curriculum. They asked students from eight seventh-grade classes using CMP and six seventh-grade classes from the control group to solve a variety of tasks categorized as rate and density problems. The authors provide precise descriptions of the cognitive challenges in the items; however, they do not explain whether the problems written up were representative of performance on a larger set of items. A special rating form was developed to code responses in three major categories (correct answer, incorrect answer, and no response), with subcategories indicating the quality of the work that accompanied the response. No reports on reliability of coding were given. Performance on standardized tests indicated that control students’ scores were slightly higher than CMP students’ at the beginning of the year and lower at the end. Twenty-five percent of the experimental group members were interviewed about their approaches to the problems. The CMP students outperformed the control students (53 percent versus 28 percent) overall in providing correct answers and supporting work, and 27 percent of the control group gave an incorrect answer or showed incorrect thinking compared to 13 percent of the CMP group. An item-level analysis permitted the researchers to evaluate the actual strategies used by the students. They reported, for example, that 82 percent of CMP students used a “strategy focused on package price, unit price, or a combination of the two; those effective strategies were used by only 56 of 91 control students (62 percent)” (p. 264).

The use of item- or content strand-level comparative reports had the advantage of permitting the evaluators to assess student learning strategies specific to a curriculum’s program theory. For example, at times, evaluators wanted to gauge the effectiveness of using problems different from those on typical standardized tests. In this case, problems were drawn from familiar circumstances but carefully designed to create significant cognitive challenges, in order to assess how well the informal strategies approach in CMP works in comparison to traditional instruction. The disadvantages of such an approach include the use of only a small number of items and concerns about the reliability of scoring. These studies seem to represent a method of creating hybrid research models that build on the detailed analyses possible in case studies while still reporting on samples that provide comparative data. This possibly reflects the concerns of some mathematicians and mathematics educators that the effectiveness of materials needs to be evaluated relative to very specific, research-based issues on learning and that these are often inadequately measured by multiple-choice tests. However, a decision not to report total scores led to a trade-off in the reliability and representativeness of the reported data, which must be addressed to increase the objectivity of the reports.

Second, we coded whether outcome data were disaggregated in some way. Disaggregation involved reporting data on dimensions such as content strand, subtest, test item, ethnic group, performance level, SES, and gender. We found disaggregated results particularly helpful in understanding the findings of studies that found main effects, and also in examining patterns across studies. We report the results of the studies’ disaggregation by content strand in our reports of effects. We report the results of the studies’ disaggregation by subgroup in our discussions of generalizability.

Third, we coded whether a study used an outcome measure that the evaluator reported as being sensitive to a particular treatment—this is a subcategory of what was defined in our framework as “curricular validity of measures.” In such studies, the rationale was that readily available measures such as state-mandated tests, norm-referenced standardized tests, and college entrance examinations do not measure some of the aims of the program under study. A frequently cited instance of this was that “off the shelf” instruments do not measure well students’ ability to apply their mathematical knowledge to problems embedded in complex settings. Thus, some studies constructed a collection of tasks that assessed this ability and collected data on it (Ben-Chaim et al., 1998; Huntley et al., 2000).

Finally, we recorded whether a study used multiple outcome measures. Some studies used a variety of achievement measures and other studies reported on achievement accompanied by measures such as subsequent course taking or various types of affective measures. For example, Carroll (2001, p. 47) reported results on a norm-referenced standardized achievement test as well as a collection of tasks developed in other studies.

A study by Huntley et al. (2000) illustrates how a variety of these techniques were combined in their outcome measures. They developed three assessments: the first emphasized contextualized problem solving based on items from the American Mathematical Association of Two-Year Colleges and others; the second assessed context-free symbolic manipulation; and the third required collaborative problem solving. To link these measures to the overall evaluation, they articulated an explicit model of cognition based on how one links an applied situation to mathematical activity through processes of formulation and interpretation. Their assessment strategy permitted them to investigate algebraic reasoning as an ability to use algebraic ideas and techniques to (1) mathematize quantitative problem situations, (2) use algebraic principles and procedures to solve equations, and (3) interpret the results of reasoning and calculations.

In presenting their data comparing performance on the Core-Plus and traditional curricula, they presented both main effects and comparisons on subscales. Their design of outcome measures permitted them to examine differences in performance with and without context and to conclude with statements such as “This result illustrates that CPMP students perform better than control students when setting up models and solving algebraic problems presented in meaningful contexts while having access to calculators, but CPMP students do not perform as well on formal symbol-manipulation tasks without access to context cues or calculators” (p. 349). The authors go on to present data on the relationship between knowing how to plan or interpret solutions and knowing how to carry them out. The correlations between these variables were weak but significantly different (0.26 for control groups and 0.35 for Core-Plus). The advantage of using multiple measures carefully tied to program theory is that they permit one to test fine content distinctions that are likely to be at the level of adjustment necessary to fine-tune and improve curricular programs.

Another interesting approach to the use of outcome measures is found in the UCSMP studies.

TABLE 5-2 Mean Percentage Correct on the Subject Tests

In many of these studies, evaluators collected information from teachers’ reports and chapter reviews as to whether topics for items on the posttests were taught, calling this an “opportunity to learn” measure. The authors reported results from three types of analyses: (1) total test scores, (2) fair test scores (scores reported by program, but only on items whose topics were taught), and (3) conservative test scores (scores on common items taught in both programs). Table 5-2 reports on the variations across the multiple-choice test scores for the Geometry study (Thompson et al., 2003) on a standardized test, High School Subject Tests-Geometry Form B, and on the UCSMP-constructed Geometry test, and for the Advanced Algebra study on the UCSMP-constructed Advanced Algebra test (Thompson et al., 2001). The table shows the mean scores for UCSMP classes and comparison classes. In each cell, mean percentage correct is reported first for the whole test, then for the fair test, and then for the conservative test.

The authors explicitly compare the items from the standard Geometry test with the items from the UCSMP test and indicate overlap and difference. They constructed their own test because, in their view, the standard test was not adequately balanced among skills, properties, and real-world uses. The UCSMP test included items on transformations, representations, and applications that were lacking in the national test. Only five items were taught by all teachers; hence in the case of the UCSMP geometry test, there is no report on a conservative test. In the Advanced Algebra evaluation, only a UCSMP-constructed test was viewed as appropriate to cover the treatment of the prior material and alignment to the goals of the new course. These data sets demonstrate the challenge of selecting appropriate outcome measures, the sensitivity of the results to those decisions, and the importance of full disclosure of decision-making processes in order to permit readers to assess the implications of the choices. The methodology utilized sought to ensure that the material in the course was covered adequately by treatment teachers while finding ways to make comparisons that reflected content coverage.

Only one study reported on its outcomes using embedded assessment items employed over the course of the year. In a study of Saxon and UCSMP, Peters (1992) (EX) studied the use of these materials with two classrooms taught by the same teacher. In this small study, he randomly assigned students to treatment groups and then measured their performance on four unit tests composed of items common to both curricula and their progress on the Orleans-Hanna Algebraic Prognosis Test.

Peters’ study showed no significant difference in placement scores between Saxon and UCSMP on the posttest, but did show differences on the embedded assessment. Figure 5-6 (Peters, 1992, p. 75) shows an interesting display of the differences on a “continuum” that shows both the direction and magnitude of the differences and provides a level of concept specificity missing in many reports. This figure and a display (Figure 5-7) in a study by Senk (1991, p. 18) of students’ mean scores on Curriculum A versus Curriculum B, with a 10 percent range of differences marked, represent two excellent means to communicate the kinds of detailed content outcome information that promises to be informative to curriculum writers, publishers, and school decision makers. In Figure 5-7, 16 items listed by number were taken from the Second International Mathematics Study. The Functions, Statistics, and Trigonometry sample averaged 41 percent correct on these items, whereas the U.S. precalculus sample averaged 38 percent. As shown in the figure, differences of 10 percent or less fall inside the banded area and differences greater than 10 percent fall outside, producing a display that makes it easy for readers and designers to identify the relative curricular strengths and weaknesses of topics.

While we value detailed outcome measure information, we also recognize the importance of examining curricular impact on students’ standardized test performance. Many developers, but not all, are explicit in rejecting standardized tests as adequate measures of the outcomes of their programs, claiming that these tests focus on skills and manipulations, that they are overly reliant on multiple-choice questions, and that they are often poorly aligned to new content emphases such as probability and statistics, transformations, use of contextual problems and functions, and process skills such as problem solving, representation, or use of calculators. However, national and state tests are being revised to include more content on these topics and to draw on more advanced reasoning. Furthermore, these high-stakes tests are of major importance in school systems, determining graduation, passing standards, school ratings, and so forth. For this reason, if a curricular program demonstrated positive impact on such measures, we referred to that in Chapter 3 as establishing “curricular alignment with systemic factors.” Adequate performance on these measures is of paramount importance to the survival of reform (to large groups of parents and school administrators).


FIGURE 5-6 Continuum of criterion score averages for studied programs.

SOURCE: Peters (1992, p. 75).

These examples demonstrate how careful attention to outcome measures is an essential element of valid evaluation.

In Table 5-3, we document the number of studies using each of the types of outcome measures that we used to code the data, and we also report on the types of tests used across the studies.


FIGURE 5-7 Achievement (percentage correct) on Second International Mathematics Study (SIMS) items by U.S. precalculus students and functions, statistics, and trigonometry (FST) students.

SOURCE: Re-created from Senk (1991, p. 18).

TABLE 5-3 Number of Studies Using a Variety of Outcome Measures by Program Type

A Choice of Statistical Tests, Including Statistical Significance and Effect Size

In our first review of the studies, we coded which methods of statistical evaluation were used by different evaluators. Most common were t-tests; less frequently one found analysis of variance (ANOVA), analysis of covariance (ANCOVA), and chi-square tests.


FIGURE 5-8 Statistical tests most frequently used.

In a few cases, results were reported using multiple regression or hierarchical linear modeling. Some studies used multiple tests; hence the total exceeds 63 (Figure 5-8).

One of the difficult aspects of conducting curriculum evaluations concerns choosing the appropriate unit, both the unit to be randomly assigned in an experimental study and the unit to be used in statistical analysis in either an experimental or quasi-experimental study.

For our purposes, we decided that unless the study concerned an intact student population, such as the freshmen at a single university, where a student-level comparison is the correct unit, the unit for statistical tests should be at least the classroom. Judgments were made for each study as to whether the appropriate unit was utilized. This question is an important one because statistical significance is related to sample size; as a result, studies that inappropriately use the student as the unit of analysis may conclude that significant differences exist where they do not. For example, if achievement differences between two curricula are tested in 16 classrooms with 400 students, it will always be easier to show significant differences using scores from those 400 students than using the 16 classroom means.
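The following minimal sketch (ours, with simulated data and assumed parameter values) illustrates the point: the same classroom-level effect is tested once with students as the unit and once with classroom means, and the student-level test will typically yield a far smaller p-value because it treats correlated students as independent observations. For matched-pairs designs, `scipy.stats.ttest_rel` could be substituted for the second test.

```python
# Illustration (simulated data): why the unit of analysis matters.
# 16 classrooms, 25 students each; classrooms vary around their own means.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def classrooms(n_classes=8, n_students=25, shift=0.0):
    # Each classroom draws its own mean (SD 5); students vary around it (SD 10).
    means = rng.normal(70.0 + shift, 5.0, n_classes)
    return [rng.normal(m, 10.0, n_students) for m in means]

treated = classrooms(shift=3.0)    # hypothetical 3-point curricular effect
control = classrooms(shift=0.0)

# Student as unit: 400 scores treated as independent observations.
t_stu, p_stu = stats.ttest_ind(np.concatenate(treated), np.concatenate(control))

# Classroom as unit: 16 class means.
t_cls, p_cls = stats.ttest_ind([c.mean() for c in treated],
                               [c.mean() for c in control])

print(f"student level:   t = {t_stu:.2f}, p = {p_stu:.4f}")
print(f"classroom level: t = {t_cls:.2f}, p = {p_cls:.4f}")
```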

Fifty-seven studies used students as the unit of analysis in at least one test of significance. Three of these were coded as correct because they involved whole populations. In all, 10 studies were coded as using the correct unit of analysis; hence, 7 of them used teachers, classes, or schools.

TABLE 5-4 Performance on Applied Algebra Problems with Use of Calculators, Part 1

TABLE 5-5 Reanalysis of Algebra Performance Data

For some studies in which multiple tests were conducted, a judgment was made as to whether the primary conclusions drawn treated the unit of analysis adequately. For example, Huntley et al. (2000) compared the performance of CPMP students with that of students in a traditional course on a measure of the ability to formulate and use algebraic models to answer various questions about relationships among variables. The analysis used students as the unit of analysis and showed a significant difference, as shown in Table 5-4.

To examine the robustness of this result, we reanalyzed the data using an independent-samples t-test and a matched-pairs t-test, with class means as the unit of analysis in both tests (Table 5-5). As can be seen from the analyses, in neither test was the difference between groups significant (p < .05), emphasizing the importance of using the correct unit in analyzing the data.

Reanalysis of student-level data using class means will not always result in a change in finding.

TABLE 5-6 Mean Percentage Correct on Entire Multiple-Choice Posttest: Second Edition and Non-UCSMP

Furthermore, using class means as the unit of analysis does not mean that significant differences will not be found. For example, a study by Thompson et al. (2001) compared the performance of UCSMP students with the performance of students in a more traditional program across several measures of achievement. They found significant differences between UCSMP students and non-UCSMP students on several measures. Table 5-6 shows results of an analysis of a multiple-choice algebraic posttest using class means as the unit of analysis. Significant differences were found in five of eight separate classroom comparisons, as shown in the table. They also found a significant difference using a matched-pairs t-test on class means.

The lesson to be learned from these reanalyses is that the choice of unit of analysis and the way the data are aggregated can affect study findings in important ways, including the extent to which those findings can be generalized. It is therefore imperative that evaluators pay close attention to the unit of analysis and to the way data are aggregated in the design, implementation, and analysis of their studies.

Second, effect size has become a relatively common and standard way of gauging the practical significance of findings. Statistical significance indicates only whether the main-level differences between two curricula are large enough not to be attributable to chance, assuming they come from the same population. When statistical differences are found, the question remains whether such differences are large enough to matter. Because any innovation has its costs, the question becomes one of cost-effectiveness: Are the differences in student achievement large enough to warrant the costs of change? Quantifying the practical effect once statistical significance is established is one way to address this issue. There is a statistical literature for doing this, and for the purposes of this review, the committee simply noted whether these studies estimated such an effect. However, the committee further noted that in conducting meta-analyses across these studies, effect size was likely to be of little value. These studies used an enormous variety of outcome measures, and even using effect size as a means to standardize units across studies is not sensible when the measures in each study address such a variety of topics, forms of reasoning, content levels, and assessment strategies.
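As one illustration of what such an estimate involves, the sketch below (ours; the class means are invented) computes Cohen’s d, one common standardized effect size: the mean difference divided by the pooled standard deviation.

```python
# Cohen's d: standardized mean difference between two groups.
import numpy as np

def cohens_d(treatment, comparison):
    t, c = np.asarray(treatment, float), np.asarray(comparison, float)
    nt, nc = len(t), len(c)
    pooled_sd = np.sqrt(((nt - 1) * t.var(ddof=1) + (nc - 1) * c.var(ddof=1))
                        / (nt + nc - 2))
    return (t.mean() - c.mean()) / pooled_sd

# Invented class means: a 2.6-point raw difference comes out "large"
# (d is roughly 1.2) only because the spread across classrooms is small.
print(cohens_d([74, 71, 77, 70, 73, 76, 72, 75],
               [70, 69, 73, 68, 71, 74, 70, 72]))
```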

We note that very few studies drew upon advances in modeling methodologies, including causal modeling, hierarchical linear modeling (Bryk and Raudenbush, 1992; Bryk et al., 1993), and selection bias modeling (Heckman and Hotz, 1989). Although developing detailed specifications for these approaches is beyond the scope of this review, we wish to emphasize that these methodological advances should be considered within future evaluation designs.
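A minimal sketch of the hierarchical approach (ours, using simulated data; statsmodels’ MixedLM stands in for the HLM software cited above) fits a two-level model with students nested in classrooms, so the curriculum effect is tested against between-classroom variation rather than against 400 nominally independent students:

```python
# Two-level model: random intercept per classroom, curriculum fixed effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for cls in range(16):
    curriculum = int(cls < 8)              # first 8 classrooms get the treatment
    class_effect = rng.normal(0.0, 5.0)    # classroom-level deviation
    for _ in range(25):
        rows.append({
            "classroom": cls,
            "curriculum": curriculum,
            "score": 70.0 + 3.0 * curriculum + class_effect + rng.normal(0.0, 10.0),
        })
df = pd.DataFrame(rows)

# The random intercept absorbs within-classroom dependence.
result = smf.mixedlm("score ~ curriculum", df, groups=df["classroom"]).fit()
print(result.summary())
```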

Results and Limitations to Generalizability Resulting from Design Constraints

One also must consider what generalizations can be drawn from the results (Campbell and Stanley, 1966; Caporaso and Roos, 1973; Boruch, 1997). Generalization is a matter of external validity in that it determines to what populations the study results are likely to apply. In designing an evaluation study, one must carefully consider, in the selection of units of analysis, how various characteristics of those units will affect the generalizability of the study. It is common for evaluators to conflate issues of representativeness for the purpose of generalizability (external validity) with comparativeness (the selection of or adjustment for comparison groups, a matter of internal validity). Not all studies must be representative of the population served by mathematics curricula to be internally valid. But to be generalizable beyond restricted communities, representativeness must be obtained by random selection of the basic units. Clearly specifying such limitations to generalizability is critical. Furthermore, on the basis of equity considerations, one must be sure that if overall effectiveness is claimed, the studies have been conducted and analyzed with reference to all relevant subgroups.

Thus, depending on the design of a study, its results may be limited in generalizability to other populations and circumstances. We identified four typical kinds of limitations on the generalizability of studies and coded them to determine, on the whole, how generalizable the results across studies might be.

First, there were studies whose designs were limited by the ability or performance level of the students in the samples. It was not unusual to find that when new curricula were implemented at the secondary level, schools kept in place systems of tracking that assigned the top students to traditional college-bound curriculum sequences. As a result, studies either used comparison groups that were matched demographically but less skilled in terms of prior learning than the population as a whole, or compared samples of less well-prepared students to samples of students with stronger preparation. Alternatively, some studies reported on the effects of curricular reform on gifted and talented students or on college-attending students. In these cases, the study results would likewise be limited in generalizability to similar populations. Reports using samples limited in students’ ability and prior performance levels were coded as having a limitation to the generalizability of the study.

For example, Wasman (2000) conducted a study of one school (six teachers) and examined students’ development of algebraic reasoning after one year (n=100) and two years (n=73) in CMP. In this school, the top 25 percent of the students are counseled to take a more traditional algebra course, so her experimental sample, which was 61 percent white, 35 percent African American, 3 percent Asian, and 1 percent Hispanic, consisted of the lower 75 percent of the students. She reported on student performance on the Iowa Algebraic Aptitude Test (IAAT) (1992) in the subcategories of interpreting information, translating symbols, finding relationships, and using symbols. Results for Forms 1 and 2 of the test, for the experimental and norm groups, are shown in Table 5-7 for 8th graders.

In our coding of outcomes, this study was coded as showing no significant differences, although arguably its results demonstrate a positive set of outcomes, as the treatment group was weaker than the control group.

TABLE 5-7 Comparing Iowa Algebraic Aptitude Test (IAAT) Mean Scores of the Connected Mathematics Project Forms 1 and 2 to the Normative Group (8th Graders)

Had the researcher used a prior achievement measure and a different statistical technique, significance might have been demonstrated, although potential teacher effects confound interpretation of the results.

A second limitation to generalizability was when comparative studies resided entirely at curriculum pilot site locations, where such sites were developed as a means to conduct formative evaluations of the materials with close contact and advice from teachers. Typically, pilot sites have unusual levels of teacher support, whether it is in the form of daily technical support in the use of materials or technology or increased quantities of professional development. These sites are often selected for study because they have established cooperative agreements with the program developers and other sources of data, such as classroom observations, are already available. We coded whether the study was conducted at a pilot site to signal potential limitations in generalizability of the findings.

Third, studies were also coded as being of limited generalizability if they failed to disaggregate their data by socioeconomic class, race, gender, or some other potentially significant source of restriction on the claims. We recorded the categories in which disaggregation occurred and compiled their frequency across the studies. Because of the need to open the pipeline to advanced study in mathematics to members of underrepresented groups, we were particularly concerned with gauging the extent to which evaluators factored such variables into their analysis of results, and not just into the selection of the sample.

Of the 46 included studies of NSF-supported curricula, 19 disaggregated their data by student subgroup. Nine of 17 studies of commercial materials disaggregated their data. Figure 5-9 shows the number of studies that disaggregated outcomes by race or ethnicity, SES, gender, LEP, special education status, or prior achievement. Studies using multiple categories of disaggregation were counted multiple times by program category.

The last category of restricted generalization occurred in studies of limited sample size. Although such studies may have provided more in-depth observations of implementation and reports on professional development factors, the smaller numbers of classrooms and students would limit the extent of generalization that could be drawn from them. Figure 5-10 shows the distribution of sample sizes, in terms of numbers of students, by study type.

Summary of Results by Student Achievement Among Program Types

We present the results of the studies as a means to further investigate their methodological implications. To this end, for each study, we counted across outcome measures the number of findings that were positive, negative, or indeterminate (no significant difference) and then calculated the proportion of each.


FIGURE 5-9 Disaggregation of subpopulations.


FIGURE 5-10 Proportion of studies by sample size and program.

We represented the calculation for each study as a triplet (a, b, c), where a indicates the proportion of results that were positive and statistically significantly stronger than the comparison program, b indicates the proportion that were negative and statistically significantly weaker than the comparison program, and c indicates the proportion that showed no significant difference between the treatment and the comparison group. For studies with a single outcome measure, without disaggregation by content strand, the triplet is always composed of two zeros and a single one. For studies with multiple measures or disaggregation by content strand, the triplet is typically a set of three decimal values that sum to one. For example, a study with one outcome measure in favor of the experimental treatment would be coded (1, 0, 0), while one with multiple measures and mixed results more strongly in favor of the comparative curriculum might be listed as (.20, .50, .30). This triplet would mean that for 20 percent of the comparisons examined, the evaluators reported statistically significant positive results; for 50 percent of the comparisons, the results were statistically significant in favor of the comparison group; and for 30 percent of the comparisons, no significant difference was found. Overall, the mean score on these distributions was (.54, .07, .40), indicating that across all the studies, 54 percent of the comparisons favored the treatment, 7 percent favored the comparison group, and 40 percent showed no significant difference. Table 5-8 shows the comparison by curricular program types. We present the results by individual program types because each program type relies on a similar program theory and hence could lead to patterns of results that would be lost in combining the data. If the studies of commercial materials are all grouped together to include UCSMP, their pattern of results is (.38, .11, .51). Again we emphasize that, given our call for increased methodological rigor and the use of multiple methods, this result is not sufficient to establish the curricular effectiveness of these programs as a whole with adequate certainty.
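The triplet itself is just a vector of proportions; a minimal sketch of the calculation (ours, with an invented study) follows:

```python
# Each comparison in a study is coded "+" (significantly favors treatment),
# "-" (significantly favors comparison), or "0" (no significant difference);
# the study's triplet is the proportion of each.
def triplet(outcomes):
    n = len(outcomes)
    return (outcomes.count("+") / n,
            outcomes.count("-") / n,
            outcomes.count("0") / n)

# An invented study with ten comparisons and mixed results:
print(triplet(["+", "+", "0", "-", "0", "+", "0", "+", "+", "-"]))
# -> (0.5, 0.2, 0.3)
```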

We caution readers that these results are summaries of the results presented across a set of evaluations that meet only the standard of at least minimally methodologically adequate.

TABLE 5-8 Comparison by Curricular Program Types

Calculations of statistical significance of each program’s results were reported by the evaluators; we have made no adjustments for weaknesses in the evaluations, such as inappropriate use of units of analysis in calculating statistical significance. Evaluations that consistently used the correct unit of analysis, such as UCSMP, could have fewer reports of significant results as a consequence. Furthermore, these results are not weighted by study size. Within any study, the results pay no attention to comparative effect size or to the established credibility of an outcome measure. Similarly, these results do not take into account differences in the populations sampled, an important consideration in generalizing the results. For example, UCSMP studies used volunteer samples who responded to advertisements in their newsletters, resulting in samples with disproportionately Caucasian subjects from wealthier schools compared to national samples. As a result, we would suggest that these results are useful only as baseline data for future evaluation efforts. Our purpose in calculating these results is to permit us to create filters from the critical decision points and test how the results change as one applies more rigorous standards.

Given that none of the studies adequately addressed all of the critical criteria, we do not offer these results as definitive, only suggestive—a hypothesis for further study. In effect, given the limitations of time and support, and the urgency of providing advice related to policy, we offer this filtering approach as an informal meta-analytic technique sufficient to permit us to address our primary task, namely, evaluating the quality of the evaluation studies.

This approach reflects the committee’s view that to deeply understand and improve methodology, it is necessary to scrutinize the results and to determine what inferences they provide about the conduct of future evaluations. Analogous to debates on consequential validity in testing, we argue that to strengthen methodology, one must consider what current methodologies are able (or not able) to produce across an entire series of studies. The remainder of the chapter considers in detail what claims are made by these studies and how robust those claims are when subjected to challenge by alternative hypotheses, filtering by tests of increasing rigor, and examination of results and patterns across the studies.

Alternative Hypotheses on Effectiveness

In the spirit of scientific rigor, the committee sought to consider rival hypotheses that could explain the data. Given the weaknesses in the designs generally, these alternative hypotheses often cannot be dismissed. However, we believed that only after examining the configuration of results and alternative hypotheses can the next generation of evaluations be better informed and better designed. We began by generating alternative hypotheses to explain the positive directionality of the results in favor of experimental groups. Alternative hypotheses included the following:

The teachers in the experimental groups tended to be self-selecting early adopters, and thus able to achieve effects not likely in regular populations.

Changes in student outcomes reflect the effects of professional development instruction, or level of classroom support (in pilot sites), and thus inflate the predictions of effectiveness of curricular programs.

A Hawthorne effect (Franke and Kaul, 1978) occurs when treatments are compared to everyday practices, because motivational factors influence experimental participants.

The consistent difference is due to the coherence and consistency of a single curricular program when compared to multiple programs.

The significance level is only achieved by the use of the wrong unit of analysis to test for significance.

Supplemental materials or new teaching techniques produce the results and not the experimental curricula.

Significant results reflect inadequate outcome measures that focus on a restricted set of activities.

The results are due to evaluator bias because too few evaluators are independent of the program developers.

At the same time, one could argue that the results actually underestimate the performance of these materials and are conservative measures; these alternative hypotheses also deserve consideration:

Many standardized tests are not sensitive to these curricular approaches, and by eliminating studies focusing on affect, we eliminated a key indicator of the appeal of these curricula to students.

Poor implementation or increased demands on teachers’ knowledge dampens the effects.

Often in the experimental treatment, top-performing students are missing as they are advised to take traditional sequences, rendering the samples unequal.

Materials are not well aligned with universities and colleges because tests for placement and success in early courses focus extensively on algebraic manipulation.

Program implementation has been undercut by negative publicity and the fears of parents concerning change.

There are also a number of possible hypotheses that may be affecting the results in either direction, and we list a few of these:

Examining the role of the teacher in curricular decision making is an important element in effective implementation, yet the mandates of evaluation design can make this impossible (consider also the positives and negatives of single- versus dual-track curricula, as in Lundin, 2001).

Local tests that are sensitive to the curricular effects typically are not mandatory and hence may lead to unpredictable performance by students.

Different types and extent of professional development may affect outcomes differentially.

Persistence or attrition may affect the mean scores and are often not considered in the comparative analyses.

One could also generate reasons why the curricular programs produced results showing no significance when one program or the other is actually more effective. These could include high degrees of variability in the results, samples that used the correct unit of analysis but did not obtain consistent participation across enough cases, implementation that showed too little fidelity for effects to register on the measures, or outcome measures insensitive to the treatment. Again, subsequent designs should be better informed by these findings to improve the likelihood that they will produce less ambiguous results, and replication of studies could also give more confidence in the findings.

It is beyond the scope of this report to consider each of these alternative hypotheses separately and to seek confirmation or refutation of them. However, in the next section, we describe a set of analyses carried out by the committee that permits us to examine and consider the impact of various critical evaluation design decisions on the patterns of outcomes across sets of studies. A number of analyses shed some light on various alternative hypotheses and may inform the conduct of future evaluations.

Filtering Studies by Critical Decision Points to Increase Rigor

In examining the comparative studies, we identified seven critical decision points that we believed would directly affect the rigor and efficacy of the study design. These decision points were used to create a set of 16 filters. These are listed as the following questions:

Was there a report on comparability relative to SES?

Was there a report on comparability of samples relative to prior knowledge?

Was there a report on treatment fidelity?

Was professional development reported on?

Was the comparative curriculum specified?

Was there any attempt to report on teacher effects?

Was a total test score reported?

Was total test score(s) disaggregated by content strand?

Did the outcome measures match the curriculum?

Were multiple tests used?

Was the appropriate unit of analysis used in their statistical tests?

Did they estimate effect size for the study?

Was the generalizability of their findings limited by use of a restricted range of ability levels?

Was the generalizability of their findings limited by use of pilot sites for their study?

Was the generalizability of their findings limited by not disaggregating their results by subgroup?

Was the generalizability of their findings limited by use of small sample size?

The studies were coded to indicate whether they reported having addressed these considerations. In some cases, the decision points were coded dichotomously as present or absent in the studies; in other cases, the decision points were coded trichotomously, as description presented, absent, or statistically adjusted for in the results. For example, a study may or may not report on the comparability of the samples in terms of race, ethnicity, or socioeconomic status. If a report on SES was given, the study was coded as “present” on this decision; if a report was missing, it was coded as “absent”; and if SES status or ethnicity was used in the analysis to actually adjust outcomes, it was coded as “adjusted for.” For each coding, the table that follows reports the number of studies that met that condition and then reports the mean percentage of statistically significant results and of results showing no significant difference for that set of studies. A significance test is run to see if the application of the filter produces changes in the probabilities that are significantly different.

In the cases in which studies are coded into three distinct categories—present, absent, and adjusted for—a second set of filters is applied. First, the studies coded as present or adjusted for are combined and compared to those coded as absent; this is what we refer to as a weak test of the rigor of the study. Second, the studies coded as present or absent are combined and compared to those coded as adjusted for; this is what we refer to as a strong test. For dichotomous codings, there can be as few as three comparisons, and for trichotomous codings, there can be nine comparisons with accompanying tests of significance. Trichotomous codes were used for adjustments for SES and prior knowledge, examination of treatment fidelity, professional development, teacher effects, and reports on effect sizes. All others were dichotomous.
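To make the mechanics concrete, the sketch below (ours; the coded studies are invented, and the two-proportion z-test merely stands in for whatever significance test the committee used) applies the strong test for a trichotomously coded decision point by pooling comparisons and testing the difference in the proportion of positive outcomes:

```python
# Strong test for a trichotomous code: "adjusted for" versus everything else.
from statsmodels.stats.proportion import proportions_ztest

# Invented records: (code on the decision point, n comparisons, n positive).
studies = [
    ("present", 5, 2), ("absent", 4, 3), ("adjusted", 6, 5),
    ("absent", 10, 6), ("present", 8, 4), ("adjusted", 3, 2),
]

def pooled(rows):
    # Total positive comparisons and total comparisons across the studies.
    return sum(p for _, _, p in rows), sum(n for _, n, _ in rows)

adj_pos, adj_n = pooled([s for s in studies if s[0] == "adjusted"])
oth_pos, oth_n = pooled([s for s in studies if s[0] != "adjusted"])

z, p = proportions_ztest([adj_pos, oth_pos], [adj_n, oth_n])
print(f"z = {z:.2f}, p = {p:.3f}")
```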

NSF Studies and the Filters

For example, there were 11 studies of NSF-supported curricula that simply reported on issues of SES in creating equivalent samples for comparison, and for this subset the mean probabilities of obtaining positive, negative, or no significant difference results were (.47, .10, .43). If no report of SES was supplied (n=21), those probabilities become (.57, .07, .37), indicating an increase in positive results and a decrease in results showing no significant difference. When an adjustment is made in outcomes based on differences in SES (n=14), the probabilities change to (.72, .00, .28), showing a still higher likelihood of positive outcomes. The probabilities that result from filtering should always be compared back to the overall results of (.59, .06, .35) (see Table 5-8) so as to permit one to judge the effects of more rigorous methodological constraints. This comparison suggests that a simple report on SES without adjustment is least likely to produce positive outcomes, that no report produces outcomes next most likely to be positive, and that studies adjusting for SES tend to have the highest proportion of comparisons producing positive results.

The second method of applying the filter (the weak test of rigor) for the treatment of SES compares the probabilities when a report is either given or adjusted for against those when no report is offered. The combined probabilities for studies in which SES is reported or adjusted for are (.61, .05, .34), while those for no report remain, as reported previously, (.57, .07, .37). A final filter compares the probabilities of the studies in which SES is adjusted for with those that either merely report it or do not report it at all; here we compare (.72, .00, .28) to (.53, .08, .37) in what we call a strong test. In each case we compared the probabilities produced by the whole group to those of the filtered studies and conducted a test of whether the differences were significant. These differences were not significant. These findings indicate that to date, with this set of studies, there is no statistically significant difference in results when one reports or adjusts for SES. It appears that adjusting for SES increases the proportion of positive results, and this result deserves closer examination for its implications should it hold up over larger sets of studies.

We ran tests that report the impact of the filters on the number of studies, the percentage of studies, and the effects described as probabilities for each of the study categories: NSF-supported and commercially generated with UCSMP included. We claim that when a pattern of probabilities of results does not change after filtering, one can have more confidence in that pattern. When the pattern of results changes, there is a need for an explanatory hypothesis, and that hypothesis can shed light on experimental design. We propose that this “filtering process” constitutes a test of the robustness of the outcome measures when subjected to increasing degrees of rigor.

Results of Filtering on Evaluations of NSF-Supported Curricula

For the NSF-supported curricular programs, 5 out of 15 filters produced a probability that differed significantly at the p < .1 level. The five filters were treatment fidelity, specification of the control group, choice of the appropriate statistical unit, generalizability restricted by ability, and generalizability restricted by lack of disaggregation by subgroup. For each filter, there were from three to nine comparisons, as we examined how the probabilities of outcomes changed as tests became more stringent, across the categories of positive results, negative results, and results with no significant differences. Out of a total of 72 possible tests, only 11 produced a probability that differed significantly at the p < .1 level. With 85 percent of the comparisons showing no significant difference after filtering, we suggest the results of the studies were relatively robust in relation to these tests. At the same time, when rigor is increased for the five filters just listed, the results become generally more ambiguous and signal the need for further research with more careful designs.

Studies of Commercial Materials and the Filters

To ensure enough studies to conduct the analysis (n=17), our filtering analysis of the commercially generated studies included UCSMP (n=8). In this case, six filters produced a probability that differed significantly at the p < .1 level: treatment fidelity, disaggregation by content, use of multiple tests, use of effect size, generalizability by ability, and generalizability by sample size. Because there were no studies in some possible categories, there were a total of 57 comparisons, and 9 displayed significant differences in the probabilities after filtering at the p < .1 level. With 84 percent of the comparisons showing no significant difference after filtering, we suggest the results of the studies were relatively robust in relation to these tests. Table 5-9 shows the cases in which significant differences were recorded.

Impact of Treatment Fidelity on Probabilities

A few of these differences are worthy of comment. In the cases of both the NSF-supported and commercially generated curricula evaluation studies, studies that reported treatment fidelity differed significantly from those that did not. In the case of the studies of NSF-supported curricula, it appeared that a report or adjustment on treatment fidelity led to proportions with less positive effects and more results showing no significant differences. We hypothesize that this is partly because larger studies often do not examine actual classroom practices, but can obtain significance more easily due to large sample sizes.

In the studies of commercial materials, the presence or absence of measures of treatment fidelity worked differently. Studies reporting on or adjusting for treatment fidelity tended to have significantly higher probabilities in favor of the experimental treatment, fewer positive effects for the comparative treatments, and a greater likelihood of results with no significant differences. We hypothesize, and confirm with a separate analysis, that this is because UCSMP frequently reported on treatment fidelity in their designs while studies of Saxon typically did not, and the change reflects the preponderance of these different curricular treatments among the studies of commercially generated materials.

Impact of Identification of Curricular Program on Probabilities

The significant differences reported under specificity of curricular comparison also merit discussion for studies of NSF-supported curricula. When the comparison group is not specified, a higher percentage of mean scores in favor of the experimental curricula is reported. In the studies of commercial materials, a failure to name specific curricular comparisons also produced a higher percentage of positive outcomes for the treatment, but the difference was not statistically significant. This suggests the possibility that when a specified curriculum is compared to an unspecified curriculum, reports of impact may be inflated. This finding may suggest that in studies of effectiveness, specifying comparative treatments would provide more rigorous tests of experimental approaches.

When studies of commercial materials disaggregate their results by content strand or use multiple measures, their reports of positive outcomes increase, negative outcomes decrease, and, in one case, the results show no significant differences. A significant difference was recorded in only one comparison within each of these filters.

TABLE 5-9 Cases of Significant Differences

Impact of Units of Analysis on Probabilities

For the evaluations of the NSF-supported materials, a significant difference was reported in the outcomes for the studies that used the correct unit of analysis compared with those that did not. The probabilities for those with the correct unit were (.30, .40, .30), compared with (.63, .01, .36) for those that used the incorrect unit. These results support our prediction that using the correct unit of analysis would decrease the percentage of positive outcomes. They also suggest that the most serious threat to the apparent conclusions of these studies comes from selecting an incorrect unit of analysis. Correcting the unit of analysis decreases favorable results, making the results more ambiguous, but never reverses the direction of the effect. This concern merits major attention in the conduct of further studies.
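To see why the unit of analysis matters, consider the following minimal simulation. The data are invented; the point is only that pooling students inflates the effective sample size, so a student-level test reaches significance more easily than a classroom-level test on the same scores.

```python
# Sketch: the same data tested at the student level vs. the classroom level.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(seed=1)
# Six treatment and six control classrooms of 25 students each (invented)
treatment_classes = [rng.normal(53, 10, 25) for _ in range(6)]
control_classes   = [rng.normal(50, 10, 25) for _ in range(6)]

# Incorrect unit: pool all students, treating each as independent
_, p_student = ttest_ind(np.concatenate(treatment_classes),
                         np.concatenate(control_classes))

# Correct unit: one mean per classroom (n = number of classrooms)
_, p_class = ttest_ind([c.mean() for c in treatment_classes],
                       [c.mean() for c in control_classes])
print(f"student-level p = {p_student:.3f}, classroom-level p = {p_class:.3f}")
```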

For the commercially generated studies, most of the ones coded with the correct unit of analysis were UCSMP studies. Because of the small number of studies involved, we could not break these out from the overall filtering of studies of commercial materials, but we report this issue to assist readers in interpreting the relative patterns of results.

Impact of Generalizability on Probabilities

Both types of studies yielded significant differences for some of the comparisons coded as restrictions to generalizability. Investigating these is important in order to understand the effects of these curricular programs on different subpopulations of students. In the case of the studies of commercially generated materials, significantly different results occurred in the categories of ability and sample size. In the studies of NSF-supported materials, the significant differences occurred in ability and disaggregation by subgroups.

In relation to generalizability, the studies of NSF-supported curricula reported significantly more positive results in favor of the treatment when they included all students. Because studies coded as “limited by ability” were restricted either to higher achieving or to lower achieving students, we sorted these two groups. For higher performing students (n=3), the probabilities of effects were (.11, .67, .22). For lower performing students (n=2), the probabilities were (.39, .025, .59). The first two comparisons are significantly different at p < .05. These findings are based on a total of only five studies, but they suggest that these programs may be serving weaker ability students more effectively than stronger ability students, while serving both less well than they serve whole heterogeneous groups. For the studies of commercial materials, only three studies were restricted to limited populations. The results for those three studies were (.23, .41, .32), and for all students (n=14) they were (.42, .53, .09). These studies were significantly different at p = .004. All three studies included UCSMP; one also included Saxon and was limited by serving primarily high-performing students. This suggests that both categories of programs show weaker results when used with high-ability students.

Finally, 28 of the studies of NSF-supported materials disaggregated their results by subgroup. A complete analysis of this set follows, but the studies that did not report results disaggregated by subgroup generated probabilities of results of (.48, .09, .43), whereas those that did disaggregate reported (.76, .00, .24). These gains in positive effects came from significant losses in reporting no significant differences. Studies of commercial materials also reported a small decrease in the likelihood of negative effects for the comparison program when disaggregation by subgroup was reported, offset by increases in positive results and results with no significant differences, although these comparisons were not significantly different. A further analysis of this topic follows.

Overall, these results suggest that increased rigor generally leads to weaker outcomes, but never to reports of completely contrary results. They also suggest that, in recommending design considerations to evaluators, careful attention should be paid to having evaluators include measures of treatment fidelity; consider the impact on all students as well as on particular subgroups; use the correct unit of analysis; and use multiple tests that are also disaggregated by content strand.

Further Analyses

We conducted four further analyses: (1) an analysis of the outcome probabilities by test type; (2) content strands analysis; (3) equity analysis; and (4) an analysis of the interactions of content and equity by grade band. Careful attention to the issues of content strand, equity, and interaction is essential for the advancement of curricular evaluation. Content strand analysis provides the detail that is often lost by reporting overall scores; equity analysis can provide essential information on what subgroups are adequately served by the innovations, and analysis by content and grade level can shed light on the controversies that evolve over time.

Analysis by Test Type

Different studies used varied combinations of outcome measures. Because of the influence of outcome measures on test results, we chose to examine whether the probabilities for the studies changed significantly across different types of outcome measures (national test, local test). The most frequent test uses across all studies were a combination of national and local tests (n=18 studies), national tests (n=17), and local tests (n=16). Other test combinations were used by three or fewer studies. The percentages of various outcomes by test type, in comparison to all studies, are shown in Table 5-10.

These data (Table 5-11) suggest that national tests tend to produce less positive results, with the difference falling into results showing no significant differences; this pattern suggests that national tests have less curricular sensitivity and specificity.
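A tabulation like Tables 5-10 and 5-11 can be produced directly from coded study records. The sketch below uses pandas with invented records; the coding labels are assumptions for illustration, not the committee's scheme.

```python
# Sketch: share of each outcome within each test type (row-normalized).
import pandas as pd

studies = pd.DataFrame({
    "test_type": ["national", "local", "both", "national",
                  "local", "both", "national", "local"],
    "outcome":   ["none", "positive", "positive", "none",
                  "positive", "none", "positive", "negative"],
})
print(pd.crosstab(studies["test_type"], studies["outcome"],
                  normalize="index").round(2))
```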

TABLE 5-10 Percentage of Outcomes by Test Type

TABLE 5-11 Percentage of Outcomes by Test Type and Program Type

TABLE 5-12 Number of Studies That Disaggregated by Content Strand

Content Strand

Curricular effectiveness is not an all-or-nothing proposition. A curriculum may be effective in some topics and less effective in others. For this reason, it is useful for evaluators to include an analysis of curricular strands and to report on the performance of students on those strands. To examine this issue, we conducted an analysis of the studies that reported their results by content strand. Thirty-eight studies did this; the breakdown is shown in Table 5-12 by type of curricular program and grade band.

To examine the evaluations of these content strands, we began by listing all of the content strands reported across studies, as well as the frequency of report by the number of studies at each grade band. These results are shown in Figure 5-11, which is broken down by content strand, grade level, and program type.

Although there are numerous content strands, some of them were reported on infrequently. To focus the analysis on the key results from these studies, we separated out the most frequently reported strands, which we call the “major content strands.” We defined these as strands that were examined in at least 10 percent of the studies. The major content strands are marked with an asterisk in Figure 5-11. When we conduct analyses across curricular program types or grade levels, we use these to facilitate comparisons.

A second phase of our analysis was to examine the performance of students by content strand in the treatment group in comparison to the control groups. Our analysis was conducted across the major content strands, comparing NSF-supported and commercially generated programs, initially for all studies and then by grade band. Such analysis permitted some patterns to emerge that might prove helpful to future evaluators in considering the overall effectiveness of each approach. To do this, we first coded the number of times any particular strand was measured across all studies that disaggregated by content strand. Then we coded the proportion of times that this strand was reported as favoring the experimental treatment, favoring the comparative curricula, or showing no significant difference. These data are presented across the major content strands for the NSF-supported curricula (Figure 5-12) and the commercially generated curricula (Figure 5-13), except in the case of the elementary curricula where no data were available, in the form of percentages, with the frequencies listed in the bars.

FIGURE 5-11 Study counts for all content strands.
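The strand-level coding just described reduces to a simple tally: count how often each strand was measured, then compute the share of results favoring each condition. A minimal sketch with invented (strand, result) records:

```python
# Sketch of strand-level coding; records are invented for illustration.
from collections import Counter

records = [
    ("computation", "treatment"), ("computation", "none"),
    ("computation", "comparison"), ("geometry", "treatment"),
    ("geometry", "none"), ("algebra concepts", "treatment"),
]

measured = Counter(strand for strand, _ in records)
for strand, n in measured.items():
    outcomes = [result for s, result in records if s == strand]
    shares = {o: round(outcomes.count(o) / n, 2) for o in set(outcomes)}
    print(f"{strand}: measured {n} times, outcome shares {shares}")
```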

The presentation of results by strands must be accompanied by the same restrictions as stated previously. These results are based on studies identified as at least minimally methodologically adequate. The quality of the outcome measures in measuring the content strands has not been examined. Their results are coded in relation to the comparison group in the study and are indicated as statistically in favor of the program, as in favor of the comparative program, or as showing no significant differences. The results are combined across studies with no weighting by study size. Their results should be viewed as a means for the identification of topics for potential future study. It is completely possible that a refinement of methodologies may affect the future patterns of results, so the results are to be viewed as tentative and suggestive.


FIGURE 5-12 Major content strand result: All NSF (n=27).

According to these tentative results, future evaluations should examine whether the NSF-supported programs produce sufficient competency among students in the areas of algebraic manipulation and computation. In computation, approximately 40 percent of the results were in favor of the treatment group, approximately 50 percent showed no significant differences, and results in favor of the comparison appeared 10 percent of the time. Interpreting the large proportion of results showing no significant difference is essential. Some would argue that because computation has not been emphasized, findings of no significant differences are acceptable. Others would suggest that such findings indicate weakness, because the development of the materials and accompanying professional development yielded no significant difference in key areas.


FIGURE 5-13 Major content strand result: All commercial (n=8).

Figure 5-13, which shows findings from studies of commercially generated curricula, indicates that mixed results are commonly reported. Thus, in evaluations of commercial materials, the lack of significant differences in computations/operations, word problems, and probability and statistics suggests that careful attention should be given to measuring these outcomes in future evaluations.

Overall, the grade band results for the NSF-supported programs—while consistent with the aggregated results—provide more detail. At the elementary level, evaluations of NSF-supported curricula (n=12) report better performance in mathematics concepts, geometry, and reasoning and problem solving, and some weaknesses in computation. No content strand analysis for commercially generated materials was possible. Evaluations (n=6) at middle grades of NSF-supported curricula showed strength in measurement, geometry, and probability and statistics and some weaknesses in computation. In the studies of commercial materials, evaluations (n=4) reported favorable results in reasoning and problem solving and some unfavorable results in algebraic procedures, contextual problems, and mathematics concepts. Finally, at the high school level, the evaluations (n=9) by content strand for the NSF-supported curricula showed strong favorable results in algebra concepts, reasoning/problem solving, word problems, probability and statistics, and measurement. Results in favor of the control were reported in 25 percent of the algebra procedures and 33 percent of computation measures.

For the studies of commercial materials (n=4), only the geometry results favor the control group 25 percent of the time, with 50 percent having favorable results. Algebra concepts, reasoning, and probability and statistics also produced favorable results.

Equity Analysis of Comparative Studies

When the goal of providing a standards-based curriculum to all students was proposed, most people could recognize its merits: the replacement of dull, repetitive, largely dead-end courses with courses that would lead all students to be able, if desired and earned, to pursue careers in mathematics-reliant fields. It was clear that the NSF-supported projects, a stated goal of which was to provide standards-based courses to all students, called for curricula that would address the problem of too few students persisting in the study of mathematics. For example, as stated in the NSF Request for Proposals (RFP):

Rather than prematurely tracking students by curricular objectives, secondary school mathematics should provide for all students a common core of mainstream mathematics differentiated instructionally by level of abstraction and formalism, depth of treatment and pace (National Science Foundation, 1991, p. 1). In the elementary level solicitation, a similar statement on courses for all students was made (National Science Foundation, 1988, pp. 4-5).

Some, but not enough attention has been paid to the education of students who fall below the average of the class. On the other hand, because the above average students sometimes do not receive a demanding education, it may be incorrectly assumed they are easy to teach (National Science Foundation, 1989, p. 2).

Likewise, with increasing numbers of students in urban schools, and increased demographic diversity, the challenges of equity are equally significant for commercial publishers, who feel increasing pressures to demonstrate the effectiveness of their products in various contexts.

The problem was clearly identified: poorer performance by certain subgroups of students (minorities—non-Asian, LEP students, sometimes females) and a resulting lack of representation of such groups in mathematics-reliant fields. In addition, a secondary problem was acknowledged: Highly talented American students were not being provided adequate challenge and stimulation in comparison with their international counterparts. We relied on the concept of equity in examining the evaluation. Equity was contrasted to equality, where one assumed all students should be treated exactly the same (Secada et al., 1995). Equity was defined as providing opportunities and eliminating barriers so that the membership in a subgroup does not subject one to undue and systematically diminished possibility of success in pursuing mathematical study. Appropriate treatment therefore varies according to the needs of and obstacles facing any subgroup.

Applying the principles of equity to evaluate the progress of curricular programs is a conceptually thorny challenge. What is challenging is how to evaluate curricular programs on their progress toward equity in meeting the needs of a diverse student body. Consider how the following questions provide one with a variety of perspectives on the effectiveness of curricular reform regarding equity:

Does one expect all students to improve performance, thus raising the bar, but possibly not to decrease the gap between traditionally well-served and under-served students?

Does one focus on reducing the gap and devote less attention to overall gains, thus closing the gap but possibly not raising the bar?

Or, does one seek evidence that progress is made on both challenges—seeking progress for all students and arguably faster progress for those most at risk?

Evaluating each of the first two questions independently seems relatively straightforward. When one opts for a combination of the two, the potential for tension between them becomes more evident. For example, how can one differentiate the case in which the gap is closed because talented students are being underchallenged from the case in which the gap is closed because low-performing students improved at an increased rate? Many believe that nearly all mathematics curricula in this country are insufficiently challenging and rigorous. Therefore, achieving modest gains across all ability levels with evidence of accelerated progress by at-risk students may still be criticized for failing to stimulate the top-performing student group adequately. Evaluating curricula with regard to this aspect therefore requires judgment and careful methodological attention.
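The two criteria, raising the bar and closing the gap, can be separated numerically, which is what makes their combination testable at all. A minimal sketch with invented subgroup means:

```python
# Sketch: overall gain ("raising the bar") vs. change in the subgroup
# difference ("closing the gap"). All scores are invented.
import numpy as np

pre  = {"advantaged": np.array([58.0, 60, 62]), "underserved": np.array([42.0, 44, 46])}
post = {"advantaged": np.array([63.0, 64, 66]), "underserved": np.array([52.0, 54, 55])}

bar_raised = (np.concatenate(list(post.values())).mean()
              - np.concatenate(list(pre.values())).mean())
gap_before = pre["advantaged"].mean() - pre["underserved"].mean()
gap_after  = post["advantaged"].mean() - post["underserved"].mean()
print(f"bar raised by {bar_raised:.1f} points; gap {gap_before:.1f} -> {gap_after:.1f}")
```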

Depending on one’s view of equity, different implications for the collection of data follow. These considerations made examination of the quality of the evaluations as they treated questions of equity challenging for the committee members. Hence we spell out our assumptions as precisely as possible:

Evaluation studies should include representative samples of student demographics, which may require particular attention to the inclusion of underrepresented minority students from lower socioeconomic groups, females, and special needs populations (LEP, learning disabled, gifted and talented students) in the samples. This may require one to solicit participation by particular schools or districts, rather than to follow the patterns of commercial implementation, which may lead to an unrepresentative sample in aggregate.

Analysis of results should always consider the impact of the program on the entire spectrum of the sample to determine whether the overall gains are distributed fairly among differing student groups, and not achieved as improvements in the mean(s) of an identifiable subpopulation(s) alone.

Analysis should examine whether any group of students is systematically less well served by curricular implementation, causing losses or weakening the rate of gains. For example, this could occur if one neglected the continued development of programs for gifted and talented students in mathematics in order to implement programs focused on improving access for underserved youth, or if one improved programs solely for one group of language learners while ignoring the needs of others, or if one's study systematically failed to report high attrition affecting rates of participation, success, or failure.

Analyses should examine whether gaps in scores between significantly disadvantaged or underperforming subgroups and advantaged subgroups are decreasing both in relation to eliminating the development of gaps in the first place and in relation to accelerating improvement for underserved youth relative to their advantaged peers at the upper grades.

In reviewing the outcomes of the studies, the committee reports first on what kinds of attention to these issues were apparent in the database, and second on what kinds of results were produced. Some of the studies used multiple methods to provide readers with information on these issues. In our report on the evaluations, we both provide descriptive information on the approaches used and summarize the results of those studies. Developing more effective methods to monitor the achievement of these objectives may need to go beyond what is reported in this study.

Among the 63 at least minimally methodologically adequate studies, 26 reported on the effects of their programs on subgroups of students. The other 37 reported on the effects of the curricular intervention on the means of whole groups and their standard deviations, but did not report on their data in terms of the impact on subpopulations. Of those 26 evaluations, 19 were of NSF-supported programs and 7 were of commercially generated materials. Table 5-13 reports the most common subgroups used in the analyses and the number of studies that reported on each variable. Because many studies used multiple categories for disaggregation (ethnicity, SES, and gender), the number of reports is more than double the number of studies. For this reason, we report the study results in terms of the “frequency of reports on a particular subgroup” and distinguish this from what we refer to as “study counts.” The advantage of this approach is that it permits reporting on studies that investigated multiple ways to disaggregate their data. The disadvantage is that studies undertaking multiple disaggregations become, in a sense, overrepresented in the data set. A similar distinction and approach were used in our treatment of disaggregation by content strands.

TABLE 5-13 Most Common Subgroups Used in the Analyses and the Number of Studies That Reported on That Variable

It is apparent from these data that the evaluators of NSF-supported curricula documented more equity-based outcomes, reporting 43 of the 56 comparisons. However, the same percentage of NSF-supported evaluations disaggregated their results by subgroup as did commercially generated evaluations (41 percent in both cases). This is an area where evaluations of curricula could benefit greatly from standardization of expectation and methodology. Given the importance of the topic of equity, it should be standard practice to include such analyses in evaluation studies.

In summarizing these 26 studies, the first consideration was whether representative samples of students were evaluated. As we have learned from medical studies, if conclusions on effectiveness are drawn without careful attention to the representativeness of the sample relative to the whole population, the generalizations drawn from the results can be seriously flawed. In Chapter 2 we reported that across the studies, approximately 81 percent of the comparative studies and 73 percent of the case studies reported data on school location (urban, suburban, rural, or state/region), with suburban students being the largest percentage in both study types. The proportions of students studied indicated a tendency to undersample urban and rural populations and oversample suburban schools. Given the high concentration of minority and lower SES students in the undersampled urban and rural areas, there are some concerns about the representativeness of the work.

A second consideration was to see whether the achievement effects of curricular interventions were achieved evenly among the various subgroups. Studies answered this question in different ways. Most commonly, evaluators reported on the performance of various subgroups in the treatment conditions as compared to those same subgroups in the comparative condition. They reported outcome scores or gains from pretest to posttest. We refer to these as “between” comparisons.

Other studies reported on the differences among subgroups within an experimental treatment, describing how well one group does in comparison with another group. Again, these reports were done in relation either to outcome measures or to gains from pretest to posttest. Often these reports contained a time element, reporting on how the internal achievement patterns changed over time as a curricular program was used. We refer to these as “within” comparisons.
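In code, the distinction between the two comparison types amounts to which variable is held fixed. A minimal sketch with invented gain scores; the condition and subgroup labels are placeholders:

```python
# Sketch: "between" vs. "within" subgroup comparisons on invented gains.
import pandas as pd

df = pd.DataFrame({
    "condition": ["treatment"] * 4 + ["control"] * 4,
    "subgroup":  ["female", "male", "female", "male"] * 2,
    "gain":      [9, 7, 8, 6, 5, 6, 4, 5],
})

# "Between": each subgroup in the treatment vs. the same subgroup in control
print(df.groupby(["subgroup", "condition"])["gain"].mean().unstack())

# "Within": subgroups compared with one another inside the treatment
print(df[df["condition"] == "treatment"].groupby("subgroup")["gain"].mean())
```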

Some studies reported both between and within comparisons. Others did not report findings by comparing mean scores or gains, but rather created regression equations that predicted outcomes and examined whether demographic characteristics were related to performance. Six studies (all of NSF-supported curricula) used this approach with variables related to subpopulations. Twelve studies used ANCOVA or multivariate analysis of variance (MANOVA) to study disaggregation by subgroup, and two reported comparative effect sizes. Of the studies using statistical tests other than t-tests or chi-squares, two were evaluations of commercially generated materials and the rest were of NSF-supported materials.
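For the regression approach, the sketch below estimates the kind of model such studies describe, asking whether a demographic indicator predicts outcomes once treatment status and prior achievement are controlled for. The data and variable names are invented; the actual model specifications are not given in the report.

```python
# Sketch of a regression with demographic predictors (invented data).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "score":   [62, 58, 71, 55, 66, 49, 73, 60, 68, 52],
    "treated": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
    "pretest": [50, 48, 60, 47, 55, 42, 61, 50, 57, 45],
    "low_ses": [0, 1, 0, 1, 1, 1, 0, 0, 1, 1],
})
model = smf.ols("score ~ treated + pretest + low_ses", data=df).fit()
print(model.params)  # a nonzero low_ses coefficient would flag an equity concern
```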

Of the studies that reported on gender (n=19), the NSF-supported ones (n=13) reported five cases in which the females outperformed their counterparts in the controls and one case in which the female-male gap decreased within the experimental treatments across grades. In most cases, the studies present a mixed picture with some bright spots, with the majority showing no significant difference. One study reported significant improvements for African-American females.

In relation to race, 15 of 16 reports on African Americans showed positive effects in favor of the treatment group for NSF-supported curricula. Two studies reported decreases in the gaps between African Americans and whites or Asians. One of the two evaluations of African Americans' performance reported for the commercially generated materials showed significant positive results, as mentioned previously.

For Hispanic students, 12 of 15 reports of the NSF-supported materials were significantly positive, with the other 3 showing no significant difference. One study reported a decrease in the gaps in favor of the experimental group. No evaluations of commercially generated materials were reported on Hispanic populations. Other reports on ethnic groups occurred too seldom to generalize.

Students from lower socioeconomic groups fared well, according to reported evaluations of NSF-supported materials (n=8), in that experimental groups outperformed control groups in all but one case. The one study of commercially generated materials that included SES as a variable reported no significant difference. For students with limited English proficiency, of the two evaluations of NSF-supported materials, one reported significantly more positive results for the experimental treatment. Likewise, one study of commercially generated materials yielded a positive result at the elementary level.

We also examined the data for ability differences and found reports by quartiles for a few evaluation studies. In these cases, the evaluations showed results across quartiles in favor of the NSF-supported materials. In one case using the same program, the lower quartiles showed the most improvement, and in the other, the gains were in the middle and upper groups for the Iowa Test of Basic Skills and evenly distributed for the informal assessment.

Summary Statements

After reviewing these studies, the committee observed that differences by gender, race, SES, and performance level should be examined as a regular part of any review of effectiveness. We recommend that all comparative studies report on both “between” and “within” comparisons so that the audience of an evaluation can simply and easily consider the level of improvement, its distribution across subgroups, and the impact of curricular implementation on any gaps in performance. Each of the major categories—gender, race/ethnicity, SES, and achievement level—contributes a significant and contrasting view of curricular impact. Furthermore, more sophisticated accounts would begin to permit finer distinctions to emerge across studies, such as the effect of a program on young African-American women or on first-generation Asian students.

In addition, the committee encourages further study and deliberation on the use of more complex approaches to the examination of equity issues. This is particularly important because of the overlaps among these categories; poverty, for example, can appear as its own variable but may also be highly correlated with prior performance. Hence, the use of one variable can mask differences that should be more directly attributed to another. The committee recommends that a group of measurement and equity specialists confer on the most effective designs to advance these questions.

Finally, it is imperative that evaluation studies systematically include demographically representative student populations and distinguish evaluations that follow the commercial patterns of use from those that seek to establish effectiveness with a diverse student population. Along these lines, it is also important that studies report impact data on all substantial ethnic groups, including whites. Many studies, perhaps because whites were the majority population, failed to report on this group in their analyses. As we saw in one study in which Asian students were from poor homes and first generation, any subgroup can be an at-risk population in some setting, and gains in means cannot be assumed to translate into gains for all subgroups, or even for the majority subgroup. More complete and thorough descriptions of the characteristics of the subgroups being served at any location, with careful attention to interactions, are needed in evaluations.

Interactions Among Content and Equity, by Grade Band

By examining disaggregation by content strand by grade levels, along with disaggregation by diverse subpopulations, the committee began to discover grade band patterns of performance that should be useful in the conduct of future evaluations. Examining each of these issues in isolation can mask some of the overall effects of curricular use. Two examples of such analysis are provided. The first example examines all the evaluations of NSF-supported curricula from the elementary level. The second examines the set of evaluations of NSF-supported curricula at the high school level, and cannot be carried out on evaluations of commercially generated programs because they lack disaggregation by student subgroup.

Example One

At the elementary level, the findings of the review of evaluations of data on effectiveness of NSF-supported curricula report consistent patterns of benefits to students. Across the studies, it appears that positive results are enhanced when accompanied by adequate professional development and the use of pedagogical methods consistent with those indicated by the curricula. The benefits are most consistently evidenced in the broadening topics of geometry, measurement, probability, and statistics, and in applied problem solving and reasoning. It is important to consider whether the outcome measures in these areas demonstrate a depth of understanding. In early understanding of fractions and algebra, there is some evidence of improvement. Weaknesses are sometimes reported in the areas of computational skills, especially in the routinization of multiplication and division. These assertions are tentative due to the possible flaws in designs but quite consistent across studies, and future evaluations should seek to replicate, modify, or discredit these results.

The way to most efficiently and effectively link informal reasoning and formal algorithms and procedures is an open question. Further research is needed to determine how to most effectively link the gains and flexibility associated with student-generated reasoning to the automaticity and generalizability often associated with mastery of standard algorithms.

The data from these evaluations at the elementary level generally present credible evidence of increased success in engaging minority students and students in poverty based on reported gains that are modestly higher for these students than for the comparative groups. What is less well documented in the studies is the extent to which the curricula counteract the tendencies to see gaps emerge and result in long-term persistence in performance by gender and minority group membership as they move up the grades. However, the evaluations do indicate that these curricula can help, and almost never do harm. Finally, on the question of adequate challenge for advanced and talented students, the data are equivocal. More attention to this issue is needed.

Example Two

The data at the high school level produced the most conflicting results, and in conducting future evaluations, evaluators will need to examine this level more closely. We identify the high school as the crucible for curricular change for three reasons: (1) the transition to postsecondary education puts considerable pressure on these curricula; (2) the criteria outlined in the NSF RFP specify significant changes from traditional practice; and (3) high school freshmen arrive from a myriad of middle school curricular experiences. For the NSF-supported curricula, the RFP required that the programs provide a core curriculum “drawn from statistics/probability, algebra/functions, geometry/trigonometry, and discrete mathematics” (NSF, 1991, p. 2) and use “a full range of tools, including graphing calculators and computers” (NSF, 1991, p. 2). The NSF RFP also specified the inclusion of “situations from the natural and social sciences and from other parts of the school curriculum as contexts for developing and using mathematics” (NSF, 1991, p. 1). It was during the fourth year that “course options should focus on special mathematical needs of individual students, accommodating not only the curricular demands of the college-bound but also specialized applications supportive of the workplace aspirations of employment-bound students” (NSF, 1991, p. 2). Because this set of requirements comprises a significant departure from conventional practice, the implementation of the high school curricula should be studied in particular detail.

We report on a Systemic Initiative for Montana Mathematics and Science (SIMMS) study by Souhrada (2001) and Brown et al. (1990), in which students were permitted to select traditional, reform, and mixed tracks. It became apparent that the students were quite aware of the choices they faced, as illustrated in the following quote:

The advantage of the traditional courses is that you learn—just math. It’s not applied. You get a lot of math. You may not know where to use it, but you learn a lot…. An advantage in SIMMS is that the kids in SIMMS tell me that they really understand the math. They understand where it comes from and where it is used.

This quote succinctly captures the tensions reported as experienced by students. It suggests that student perceptions are an important source of evidence in conducting evaluations. As we examined these curricular evaluations across the grades, we paid particular attention to the specificity of the outcome measures in relation to curricular objectives. Overall, a review of these studies would lead one to draw the following tentative summary conclusions:

There is some evidence of discontinuity in the articulation between high school and college, resulting from the organization and emphasis of the new curricula. This discontinuity can emerge in scores on college admission tests, placement tests, and first semester grades where nonreform students have shown some advantage on typical college achievement measures.

The most significant areas of disadvantage seem to be in students’ facility with algebraic manipulation, and with formalization, mathematical structure, and proof when isolated from context and denied technological supports. There is some evidence of weakness in computation and numeration, perhaps due to reliance on calculators and varied policies regarding their use at colleges (Kahan, 1999; Huntley et al., 2000).

There is also consistent evidence that the new curricula present strengths in areas of solving applied problems, the use of technology, new areas of content development such as probability and statistics, and functions-based reasoning in the use of graphs, using data in tables, and producing equations to describe situations (Huntley et al., 2000; Hirsch and Schoen, 2002).

Early performance on standard outcome measures at the high school level has shown equivalent or better performance by reform students (Austin et al., 1997; Merlino and Wolff, 2001). However, the common standardized outcome measures (Preliminary Scholastic Assessment Test [PSAT] scores or national tests) are too imprecise to permit more specific comparisons between the NSF-supported and comparison approaches, while program-generated measures lack evidence of external validity and objectivity. There is an urgent need for a set of measures that would provide detailed information on specific concepts and conceptual development over time; such measures may need to be used as embedded as well as summative assessment tools to provide sufficiently precise data on curricular effectiveness.

The data also report some progress in strengthening the performance of underrepresented groups in mathematics relative to their counterparts in the comparative programs (Schoen et al., 1998; Hirsch and Schoen, 2002).

This reported pattern of results should be viewed as very tentative, as there are only a few studies in each of these areas, and most do not adequately control for competing factors, such as the nature of the course received in college. Difficulties in the transition may also be the result of a lack of alignment of measures, especially as placement exams often emphasize algebraic proficiencies. These results are presented only for the purpose of stimulating further evaluation efforts. They further emphasize the need to be certain that such designs examine the level of mathematical reasoning of students, particularly in relation to their understanding of the role of proofs and definitions and their facility with algebraic manipulation, as well as carefully document the competencies taught in the curricular materials. In our framework, gauging the ease of transition to college study is an issue of examining curricular alignment with systemic factors, and it needs to be considered along with tests that demonstrate the curricular validity of measures. Furthermore, the results raising concerns about college success need replication before secure conclusions are drawn.

It is also important that subsequent evaluations examine curricular effects on students' interest in mathematics and willingness to persist in its study. Walker (1999) reported that there may be some systematic differences in these behaviors among different curricula and that interest and persistence may help students across a variety of subgroups to survive entry-level hurdles, especially if technical facility with symbol manipulation can be improved. In the context of declines in advanced study in mathematics by American students (Hawkins, 2003), evaluations of curricular impact on students' interest, beliefs, persistence, and success are needed.

The committee takes the position that ultimately the question of the impact of different curricula on performance at the collegiate level should be resolved by whether students are adequately prepared to pursue careers in mathematical sciences, broadly defined, and to reason quantitatively about societal and technological issues. It would be a mistake to focus evaluation efforts solely or primarily on performance on entry-level courses, which can clearly function as filters and may overly emphasize procedural competence, but do not necessarily represent what concepts and skills lead to excellence and success in the field.

These tentative patterns of findings indicate that at the high school level, it is necessary to conduct individual evaluations that examine the transition to college carefully in order to gauge the level of success in preparing students for college entry and the successful negotiation of majors. Equally, it is imperative to examine the impact of high school curricula on other possible student trajectories, such as obtaining high school diplomas, moving into worlds of work or through transitional programs leading to technical training, two-year colleges, and so on.

These two analyses of programs by grade-level band, content strand, and equity represent a methodological innovation that could strengthen the empirical database on curricula significantly and provide the level of detail really needed by curriculum designers to improve their programs. In addition, it appears that one could characterize the NSF programs (and not the commercial programs as a group) as representing a particular approach to curriculum, as discussed in Chapter 3 . It is an approach that integrates content strands; relies heavily on the use of situations, applications, and modeling; encourages the use of technology; and has a significant dose of mathematical inquiry. One could ask the question of whether this approach as a whole is “effective.” It is beyond the charge and scope of this report, but is a worthy target of investigation if one uses proper care in design, execution, and analysis. Likewise other approaches to curricular change should be investigated at the aggregate level, using careful and rigorous design.

The committee believes that a diversity of curricular approaches is a strength in an educational system that maintains local and state control of curricular decision making. While “scientifically established as effective” should be an increasingly important consideration in curricular choice, local cultural differences, needs, values, and goals will also properly influence curricular choice. A diverse set of effective curricula would be ideal. Finally, the committee emphasizes once again the importance of basing the studies on measures with established curricular validity and avoiding corruption of indicators as a result of inappropriate amounts of teaching to the test, so as to be certain that the outcomes are the product of genuine student learning.

CONCLUSIONS FROM THE COMPARATIVE STUDIES

In summary, the committee reviewed a total of 95 comparative studies. There were more NSF-supported program evaluations than commercial ones, and the commercial ones were primarily of Saxon or UCSMP materials. Of the 19 curricular programs reviewed, 23 percent of the NSF-supported and 33 percent of the commercially generated materials had no comparative reviews. This finding is particularly disturbing in light of the legislative mandate in No Child Left Behind (U.S. Department of Education, 2001) for scientifically based curricular programs and materials to be used in the schools. It suggests that more explicit protocols for the conduct of evaluations that include comparative studies need to be required and used.

Sixty-nine percent of NSF-supported and 61 percent of commercially generated program evaluations met basic conditions to be classified as at least minimally methodologically adequate studies for the evaluation of effectiveness. These studies were ones that met the criteria of including measures of student outcomes on mathematical achievement, reporting a method of establishing comparability among samples and reporting on implementation elements, disaggregating by content strand, or using precise, theoretical analyses of the construct or multiple measures.

Most of these studies had both strengths and weaknesses in their quasi-experimental designs. The committee reviewed the studies and found that evaluators had developed a number of features that merit inclusion in future work. At the same time, many had internal threats to validity that suggest a need for clearer guidelines for the conduct of comparative evaluations.

Many of the strengths and innovations came from the evaluators’ understanding of the program theories behind the curricula, their knowledge of the complexity of practice, and their commitment to measuring valid and significant mathematical ideas. Many of the weaknesses came from inadequate attention to experimental design, insufficient evidence of the independence of evaluators in some studies, and instability and lack of cooperation in interfacing with the conditions of everyday practice.

The committee identified 10 elements of comparative studies needed to establish a basis for determining the effectiveness of a curriculum. We recognize that not all studies will be able to implement all elements successfully, and that experimental design variations will be based largely on study size and location. The list of elements begins with the seven elements corresponding to the seven critical decisions and adds three additional elements that emerged as a result of our review:

A better balance needs to be achieved between experimental and quasi-experimental studies. The virtual absence of large-scale experimental studies does not provide a way to determine whether the use of quasi-experimental approaches is being systematically biased in unseen ways.

If a quasi-experimental design is selected, it is necessary to establish comparability. When quasi-experimentation is used, it “pertains to studies in which the model to describe effects of secondary variables is not known but assumed” (NRC, 1992, p. 18). This will lead to weaker and potentially suspect causal claims, which should be acknowledged in the evaluation report, but may be necessary in relation to feasibility (Joint Committee on Standards for Educational Evaluation, 1994). In general, to date, studies have assumed that prior achievement measures, ethnicity, gender, and SES are acceptable variables on which to match samples or make statistical adjustments. But such evaluations often need to control for other variables as well, including opportunity to learn, teacher effectiveness, and implementation (see #4 below).

The selection of a unit of analysis is of critical importance to the design. To the extent possible, it is useful to randomly assign the unit for the different curricula. The number of units of analysis necessary for the study to establish statistical significance depends not on the number of students, but on this unit of analysis. It appears that classrooms and schools are the most likely units of analysis. In addition, increasingly sophisticated means of conducting studies are needed that recognize that the level of the educational system at which experimentation occurs affects research designs.

It is essential to examine the implementation components through a set of variables that include the extent to which the materials are implemented, teaching methods, the use of supplemental materials, professional development resources, teacher background variables, and teacher effects. Gathering these data to gauge the level of implementation fidelity is essential for evaluators to ensure adequate implementation. Studies could also include nested designs to support analysis of variation by implementation components.

Outcome data should include a variety of measures of the highest quality. These measures should vary by question type (open ended, multiple choice), by type of test (international, national, local), and by the relation of testing to everyday practice (formative, summative, high stakes); they should ensure the curricular validity of measures and assess curricular alignment with systemic factors. The use of comparisons among total tests, fair tests, and conservative tests, as done in the evaluations of UCSMP, permits one to gain insight into teacher effects and to contrast test results by the items included. Tests should also include content strands to aid disaggregation, at the level of major content strands (see Figure 5-11) and of content-specific items relevant to the experimental curricula.

Statistical analysis should be conducted on the appropriate unit of analysis and should include more sophisticated methods such as ANOVA, ANCOVA, MANOVA, linear regression, and multiple regression, as appropriate.

Reports should include clear statements of the limitations to generalization of the study. These should include indications of limitations in populations sampled, sample size, unique population inclusions or exclusions, and levels of use or attrition. Data should also be disaggregated by gender, race/ethnicity, SES, and performance levels to permit readers to see comparative gains across subgroups both between and within studies.

It is useful to report effect sizes (a short sketch follows this list). It is also useful to present item-level data across treatment programs and show when performances between the two groups are within the 10 percent confidence interval of each other. These two extremes document how crucial it is for curricula developers to garner both precise and generalizable information to inform their revisions.

Careful attention should also be given to the selection of samples of populations for participation. These samples should be representative of the populations to whom one wants to generalize the results. Studies should be clear if they are generalizing to groups who have already selected the materials (prior users) or to populations who might be interested in using the materials (demographically representative).

The control group should use an identified comparative curriculum or curricula to avoid comparisons to unstructured instruction.
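For the effect-size element above, the following is a minimal sketch using the pooled-standard-deviation form of Cohen's d; the scores are invented, and the committee does not prescribe a particular effect-size formula.

```python
# Sketch: reporting an effect size alongside significance tests.
import numpy as np

def cohens_d(a, b):
    """Cohen's d using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * np.var(a, ddof=1)
                  + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
    return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var)

treatment = np.array([62.0, 65, 70, 58, 64, 67])
control   = np.array([57.0, 60, 63, 55, 59, 61])
print(f"d = {cohens_d(treatment, control):.2f}")
```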

In addition to these prototypical decisions to be made in the conduct of comparative studies, the committee suggests that it would be ideal for future studies to consider some of the overall effects of these curricula and to test more directly and rigorously some of the findings and alternative hypotheses. Toward this end, the committee reported the tentative findings of these studies by program type. Although these results are subject to revision, based on the potential weaknesses in design of many of the studies summarized, the form of analysis demonstrated in this chapter provides clear guidance about the kinds of knowledge claims and the level of detail that we need to be able to judge effectiveness. Until we are able to achieve an array of comparative studies that provide valid and reliable information on these issues, we will be vulnerable to decision making based excessively on opinion, limited experience, and preconceptions.

This book reviews the evaluation research literature that has accumulated around 19 K-12 mathematics curricula and breaks new ground in framing an ambitious and rigorous approach to curriculum evaluation that has relevance beyond mathematics. The committee that produced this book consisted of mathematicians, mathematics educators, and methodologists who began with the following charge:

  • Evaluate the quality of the evaluations of the thirteen National Science Foundation (NSF)-supported and six commercially generated mathematics curriculum materials;
  • Determine whether the available data are sufficient for evaluating the efficacy of these materials, and if not;
  • Develop recommendations about the design of a project that could result in the generation of more reliable and valid data for evaluating such materials.

The committee collected, reviewed, and classified almost 700 studies, solicited expert testimony during two workshops, developed an evaluation framework, established dimensions/criteria for three methodologies (content analyses, comparative studies, and case studies), drew conclusions on the corpus of studies, and made recommendations for future research.


Compare and Contrast Essay: Full Writing Guide and 150+ Topics


Compare and contrast essays are academic papers in which a student analyzes two or more subjects in relation to each other. To compare means to explore similarities between subjects, while to contrast means to look at their differences. Both subjects of the comparison are usually in the same category, although they have their differences. For example, they can be two movies, two universities, two cars, etc.

Good compare and contrast papers focus on a central point, explaining the importance and implications of the analysis. A compare and contrast essay thesis must make a meaningful comparison. Find the central theme of your essay and do some brainstorming for your thesis.

This type of essay is very common among college and university students. Professors challenge their students to use their analytical and comparative skills and pay close attention to the subjects of their comparisons. This type of essay exercises observation and analysis, helps to establish a frame of reference, and makes meaningful arguments about a subject. Let's dig deeper into how to write a compare and contrast essay.

How to Start a Compare and Contrast Essay: Brainstorm Similarities and Differences

Now that you know what a compare and contrast essay is and have settled on your topic, the first thing you should do is grab a piece of paper and make a list with two columns: similarities and differences. Jot down the key things first, the most striking ones. Then try to look at the subjects from a different angle, incorporating your imagination.

If you are more of a visual learner, creating a Venn diagram might be a good idea. To create one, draw two circles that overlap. In the section where they overlap, note similarities. Differences should be written in the parts of the circles that do not overlap.

Let’s look at a simple compare and contrast example. Let one of the subjects be oranges and the other apples. Oranges have a thick peel, originated in India, and are a tropical fruit. These characteristics pertain only to oranges and should go in the part of the circle that does not overlap. In the same section for apples, we put thin peel, originated in Turkey or Kazakhstan, and a moderate to subtropical climate. In the overlapping section, we note that both are fruit, can be juiced, and grow on trees. This simple yet clear example illustrates how the same approach can be applied to many more complicated topics with additional points of comparison and contrast.

Example of compare and contrast

This format of visual aid helps to organize similarities and differences and make them easier to perceive. Your diagram will give you a clear idea of the things you can write about.

Another good idea for brainstorming in preparation for your comparison and contrast essay is to create a list with two columns, one for each subject, and compare the same characteristics for each of them simultaneously. This compare and contrast format will make writing your comparison paper argument a breeze, as you will have your ideas ready and organized.

One mistake you should avoid is simply listing all of the differences or similarities for each subject. Sometimes students get so caught up in looking for similarities and differences that their compare and contrast essays end up sounding like grocery lists. Your essay should be based on analyzing the similarities and differences, analyzing your conclusions about the two subjects, and finding connections between them, all while following a specific format.

Compare and Contrast Essay Structure and Outline

So, how do you structure this compare and contrast paper? Well, since compare and contrast essay examples rely heavily on factual analysis, there are two outline methods that can help you organize your facts. You can use the block method, or point-by-point method, to write a compare and contrast essay outline.

With the block structure of a compare and contrast essay, all the information is presented for the first subject, and its characteristics and specific details are explained. This concludes one block. The second block takes the same approach as the first for the second subject.

The point-by-point structure lists each similarity and difference simultaneously—making notes of both subjects. For example, you can list a characteristic specific to one subject, followed by its similarity or difference to the other subject.

Both formats have their pros and cons. The block method is clearly easier for a compare and contrast essay writer, as you simply point out all of the information about the two subjects and basically leave it to the reader to do the comparison. The point-by-point format requires you to analyze the points yourself, making the similarities and differences more explicit to the reader and easier to understand. A detailed structure of each type is presented below.

Point-by-Point Method

  • Introduce the topic;
  • Specify your theme;
  • Present your thesis - cover all areas of the essay in one sentence.
Example thesis: Cars and motorcycles make for excellent means of transportation, but a good choice depends on the person’s lifestyle, finances, and the city they live in.

Body Paragraph 1 - LIFESTYLE

  • Topic Sentence: Motorcycles impact the owner’s lifestyle less than cars.
  • Topic 1 - Motorcycles
  • ~ Argument: Motorcycles are smaller and easier to store.
  • ~ Argument: Motorcycles are easy to learn and use.
  • Topic 2 - Cars
  • ~ Argument: Cars are a big deal - they are like a second home.
  • ~ Argument: It takes time to learn to become a good driver.

Body Paragraph 2 - FINANCES

  • Topic sentence: Cars are much more expensive than motorcycles.
  • ~ Argument: You can buy a good motorcycle for under $300.
  • ~ Argument: Motorcycles have fewer parts, which are more accessible to fix.
  • ~ Argument: Car parts and service are expensive if something breaks.
  • ~ Argument: Cars need more gas than motorcycles.

Body Paragraph 3 - CITY

  • Topic sentence: Cars are a better option for bigger cities with wider roads.
  • ~ Argument: Riding a motorcycle in a big city is more dangerous than driving a car.
  • ~ Argument: Motorcycles work great in a city like Rome, where all the streets are narrow.
  • ~ Argument: Big cities are easier and more comfortable to navigate by car.
  • ~ Argument: With a car, traveling outside of the city is much easier.
  • Sum up everything you wrote in the essay.

Block Method

  • Thesis — cover all areas of the essay in one sentence

Body Paragraph 1

  • Topic Sentence: Motorcycles are cheaper and easier to take care of than cars.
  • Aspect 1 - Lifestyle
  • Aspect 2 - Finances
  • ~ Argument: Fewer parts, easier to fix.
  • Aspect 3 - City
  • ~ Argument: Riding a motorcycle in a big city is more dangerous than driving a car.

Body Paragraph 2

  • Topic sentence: Cars are more expensive but more comfortable for a big city and for travelling.
  • ~ Argument: Cars are a big deal—like a second home.
  • ~ Argument: With a car, traveling outside the city is much more comfortable.

Body Paragraph 3

Use the last paragraph to evaluate the comparisons and explain why they’re essential. Giving a lot of facts can be intense. To water them down, try to give the reader real-life applications of these facts.

Depending on the structure selected, you can begin to create an outline for your essay. The typical comparison essay follows the format of having an introduction, three body paragraphs, and a conclusion, though if you need to focus on each subject in more detail, feel free to include an extra paragraph to cover all of the most important points.

To make your compare and contrast essay flow better, we recommend using special transition words and phrases. They will add variety and improve your paper overall.

For the section where you compare two subjects, you can include any of the following words: similarly, likewise, also, both, just like, similar to, the same as, alike, or to compare to. When contrasting two subjects, use: in contrast, in comparison, by comparison, on the other hand, while, whereas, but, to differ from, dissimilar to, or unlike.

Show Your Evidence

Arguments for any essay, including compare and contrast essays, need to be supported by sufficient evidence. Make good use of your personal experiences, books, scholarly articles, magazine and newspaper articles, movies, or anything that will make your argument sound credible. For example, if you were to compare attending college on campus vs. distance-based learning, you could include your personal experiences of being a student and note how often students show up to class. You could also talk about your experience taking online classes, which makes your argument about online classes credible as well.

Helpful Final Tips

The biggest tip dissertation writing services can give you is to have the right attitude when writing a compare and contrast essay, and to actively engage the reader in the discussion. If you find it interesting, so will your reader! Here are some more compare and contrast essay tips that will help you polish yours up:


  • Compare and contrast essays need powerful transitions. Try learning more about writing transition sentences using the words we provided for you in the 'Compare and Contrast Structure and Outline' section.
  • Always clarify the concepts you introduce in your essay. Always explain lesser-known information; don’t assume the reader must already know it.
  • Do not forget to proofread. Small mistakes, in large quantities, can result in a low grade. Pay attention to your grammar and punctuation.
  • Have a friend or family member take a look at your essay; they may notice things you have missed.

Compare and Contrast Essay Examples

Now that you know everything there is to know about compare and contrast essays, let’s take a look at some compare and contrast examples to get you started on your paper, or get a hand from our essay helper.

Different countries across the world have diverse cultural practices, and this has an effect on work relationships and development. Geert Hofstede came up with a structured way of comparing cultural dimensions of different countries. The theory explains the impacts of a community’s culture on the values of the community members, and the way these values relate to their behaviors. He gives scores as a way to help distinguish people from different nations using the following dimensions: long-term orientation, individualism, power distance, indulgence, uncertainty avoidance, and masculinity. Let us examine comparisons between two countries, the United Kingdom and China, based on Hofstede’s six dimensions of culture.
Over the last two decades, the demand from consumers for organic foods has increased tremendously. In fact, the popularity of organic foods has exploded significantly, with consumers spending considerably more money on them than on inorganic foods. The US market noted an increase in sales of more than 10% between 2014 and 2015 (Brown, n.p). The increase is in line with the views of many consumers that organic foods are safer, tastier, and healthier compared to inorganic foods. Furthermore, considering the environmental effects of foods, organic foods present less risk of environmental pollution compared to inorganic foods. By definition, organic foods are those that are grown without any artificial chemical treatment, or treatment by use of other substances that have been modified genetically, such as hormones and/or antibiotics (Brown, n.p).

Still feeling confused about the complexities of the compare and contrast essay? Feel free to contact our paper writing service to get professional writing help.

Finding the Best Compare and Contrast Essay Topics For You

When choosing a topic for your comparison essay, remember that the subjects cannot be drastically different, because there would be few or no points of comparison (similarities). The same goes for too many similarities, which will result in poor contrasts. For example, it is better to write about two composers, rather than a composer and a singer.

It is extremely important to choose a topic you are passionate about. You never want to come across something that seems dull and uninspiring to you. Here are some excellent ways to brainstorm for a topic from our essay writer:

  • Find categories: Choose a type (like animals, films or economics), and compare subjects within that category – wild animals to farm animals, Star Wars to Star Trek, private companies to public companies, etc.
  • Random Surprising Fact: Dig for fun facts which could make great topics. Did you know that chickens can be traced back to dinosaurs?
  • Movie vs. Book: Most of the time, the book is better than the movie — unless it’s Blade Runner or Lord of the Rings. If you’re a pop culture lover, compare movies vs. books, video games, comics, etc.

Use our rewrite essay service when you need help from professionals.

How to Choose a Great Compare and Contrast Topic

College students should give themselves the chance to work through a range of topic examples. With enough revision, you gain an advantage, as it becomes possible to compare arguments, contrast their aspects, and discuss numerous situations on the way to a conclusion.

For example:

  • Choose a topic from a field that interests you. Otherwise, you risk failing your paper.
  • It is a good idea to choose a topic based upon the class subject or your specialist subject (unless the requirements say otherwise).
  • Analyze each argument carefully. Include every detail for each opposing idea. Skipping this can definitely lower your grade.
  • Write a conclusion that summarizes both arguments. It should allow readers to find the answer they’re looking for.
  • It is up to you to determine which arguments are right and wrong in the final conclusion.
  • Before approaching the final conclusion, it’s important to discuss each argument equally. It is a bad idea to be biased, as that can also lower your grade.

Need a Great Essay From Us?

Our professionals are ready to help you ASAP! Contact us 24/7.

150 Compare and Contrast Essay Topics to Consider

Choosing a topic can be a challenging task, but there are plenty of options to consider. In the following sections, we have compiled a list of 150 compare and contrast essay topics to help you get started. These topics cover a wide range of subjects, from education and technology to history and politics. Whether you are a high school student or a college student, you are sure to find a topic that interests you. So, read on to discover some great compare and contrast essay ideas.

Compare and Contrast Essay Topics For College Students

When you attend college, your professor can assign you the task of writing this form of essay at any time. Consider these topics for college students from our team to get the grades you deserve.

  • Attending a College Course Vs. Distance-Based Learning.
  • Writing a Research Paper Vs. Writing a Creative Writing Paper. What are the differences and similarities?
  • The differences between a Bachelor’s Degree and a Master’s Degree.
  • The key aspects of the differences between the US and the UK education systems.
  • Completing assignments at a library compared with doing so at home. Which is the most efficient?
  • The similarities and differences in the behavior among married and unmarried couples.
  • The similarities and differences between the EU (European Union) and ASEAN (the Association of Southeast Asian Nations).
  • The similarities and significant differences between American and Canadian English.
  • Writing an Internship Report Vs. Writing a Research Paper
  • The differences between US colleges and colleges in the EU.

Interesting Compare and Contrast Essay Topics

Some topics for the compare and contrast essay format can be boring. To keep up your motivation while doing research, have a look at these topics. Maybe they can serve you as research paper help.

  • Public Transport Vs. Driving A Car. Which is more efficient?
  • Mandarin Vs. Cantonese: What are the differences between these Chinese languages?
  • Sports Cars Vs. Luxurious Family Cars
  • Wireless Technology Vs. Wired Devices
  • Thai Food Vs. Filipino Cuisine
  • What are the differences and similarities between a register office marriage and a traditional marriage?
  • The 2000s Vs. The 2010s. What are the differences and what makes them similar?
  • Abu Dhabi Vs. Dubai. What are the main factors involved in the differences?
  • What are the differences between American and British culture?
  • What does the New York Metro do differently to the London Underground?

Compare and Contrast Essay Topics for High School Students

When writing essays for high school, it is good to keep them informative. Have a look at these compare and contrast sample topics.

  • High School Life Vs. College Life
  • Paying College Fees Vs. Being Awarded a Scholarship
  • All Night Study Sessions Vs. Late Night Parties
  • Teenager Vs. Young Adult Relationships
  • Being in a Relationship Vs. Being Single
  • Male Vs. Female Behavior
  • The similarities and differences between a high school diploma and a college degree
  • The similarities and differences between Economics and Business Studies
  • The benefits of having a part-time job, instead of a freelance job, in college
  • High School Extracurricular Activities Vs. Voluntary Community Service

Compare and Contrast Essay Topics for Science

At some point, every science student will be assigned this type of essay. To keep things flowing, have a look at the best compare and contrast essay example topics on science:

  • Undiscovered Species on Earth Vs. Potential Life on Mars: What will we discover in the future?
  • The benefits of Gasoline Powered Cars Vs. Electric Powered Cars
  • The differences between the Milky Way and Centaurus A (galaxies).
  • Earthquakes Vs. Hurricanes: Which should we prepare for the most?
  • The differences between our moon and Mars’ moons.
  • SpaceX Vs. NASA. What is done differently within these organizations?
  • The differences and similarities between Stephen Hawking and Brian Cox’s theories on the cosmos. Do they agree or correspond with each other?
  • Pregnancy Vs. Motherhood
  • Jupiter Vs. Saturn
  • Greenhouse Farming Vs. Polytunnel Farming

Sports & Leisure Topics

Studying Physical Education? Or a gym fanatic? Have a look at our compare and contrast essay topics for sports and leisure.

  • The English Premier League Compared With The Bundesliga
  • Real Madrid Vs. Barcelona
  • Football Vs. Basketball
  • Walking Vs. Eating Outside with Your Partner
  • Jamaica Team Vs. United States Team: Main Factors and Differences
  • Formula One Vs. Off-Road Racing
  • Germany Team Vs. Brazil Team
  • Morning Exercise Vs. Evening Exercise.
  • Manning Team Vs. Brazil Team
  • Swimming Vs. Cycling

Topics About Culture

Culture can have several meanings. If you’re a Religious Studies or Culture student, take a look at these good compare and contrast essay topics about culture.

  • The fundamental similarities and differences between Pope Francis and Tawadros II of Alexandria
  • Canadian Vs. Australian Religion
  • The differences between Islamic and Christian Holidays
  • The cultural similarities and differences between the Native Aboriginals and Caucasian Australians
  • Native American Culture Vs. New England Culture
  • The cultural differences and similarities between Italians and Sicilians
  • In-depth: The origins of Buddhism and Hinduism
  • In-depth: The origins of Christianity and Islam
  • Greek Gods Vs. Hindu Gods
  • The Bible: Old Testament Vs. New Testament

Unique Compare and Contrast Essay Topics

What about writing an essay which is out of the ordinary? Consider these unique topics to write a compare and contrast essay on.

  • The reasons why some wealthy people pay extortionate amounts of money for gold-plated cell phones rather than buying a normal phone.
  • The differences between Lipton Tea and Ahmad Tea
  • American Football Vs. British Football: What are their differences?
  • The differences and similarities between France and Britain
  • Fanta Vs. 7Up
  • Traditional Helicopters Vs. Lifesize Drones
  • The differences and similarities between Boston Dynamics and the fictional equivalent Skynet (From Terminator Movies).
  • Socialism Vs. Capitalism: Which is better?
  • Curved Screen TVs Vs. Regular Flat Screen TVs: Are they really worth big bucks?
  • Is it better to wear black or white at funerals?

Good Compare and Contrast Essay Topics

Sometimes, it may be a requirement to take it back a notch, especially if you’re new to this style of writing. Consider having a look at these good compare and contrast essay topics that are pretty easy to start with.

  • Is it a good idea to work on weekdays or weekends?
  • Black or White Coffee
  • Becoming a teacher or a doctor? Which career choice has more of an impact on society?
  • Air Travel Vs. Sea Travel: Which is better?
  • Rail Travel Vs. Road Travel: Which is more convenient?
  • What makes Europe far greater than Africa in terms of financial growth, regulations, public funds, policies, etc.?
  • Eating fruit for breakfast Vs. cereals
  • Staying Home to Read Vs. Traveling the World During Holidays. Which is more beneficial for personal growth?
  • Japanese Vs. Brazilian Cuisine
  • What makes ASEAN Nations more efficient than African Nations?

Compare and Contrast Essay Topics About TV Shows, Music and Movies

We all enjoy at least one of these things. If not, all of them. Why not have a go at writing a compare and contrast essay about what you have been recently watching or listening to?

  • Breaking Bad Vs. Better Call Saul: Which is more commonly binge watched?
  • The differences between Dance Music and Heavy Metal
  • James Bond Vs. Johnny English
  • Iron Man Vs. The Incredible Hulk: Who would win?
  • What is done differently in modern movies, compared to old black and white movies?
  • Dumb and Dumber 2 Vs. Ted: Which movie is funnier?
  • Are Horror Movies or Action Movies better suited to you?
  • The differences and similarities between Mozart’s and Beethoven’s compositions.
  • Hip Hop Vs. Traditional Music
  • Classical Music Vs. Pop Music. Which genre helps people concentrate?

Topics About Art

Sometimes, art students are required to write this style of essay. Have a look at these compare and contrast essay topics about the arts across the centuries.

  • The fundamental differences and similarities between paintings and sculptures
  • The different styles of Vincent Van Gogh and Leonardo Da Vinci.
  • Viewing Original Art Compared With Digital Copies. How are these experiences different?
  • 18th Century Paintings Vs. 21st Century Digitally Illustrated Images
  • German Art Vs. American Art
  • Modern Painting Vs. Modern Photography
  • How can we compare modern graphic designers to 18th-century painters?
  • Ancient Greek Art Vs. Ancient Egyptian Art
  • Ancient Japanese Art Vs. Ancient Persian Art
  • What 16th-century painting materials were used, compared with those of the modern day?

Best Compare and Contrast Essay Topics

Almost every student at any stage of academics is assigned this style of writing. If you’re lacking inspiration, consider looking at some of the best compare and contrast essay topics to get you on track with your writing.

  • The United States and North Korea Governmental Conflict: What is the reason behind this phenomenon?
  • In the early hours, drinking water is far healthier than consuming soda.
  • The United States Vs. The People’s Republic of China: Which economy is the most efficient?
  • Studying in Foreign Countries Vs. Studying In Your Hometown: Which is more of an advantage?
  • Toast Vs. Cereal: Which is the most consumed in the morning?
  • Sleeping Vs. Daydreaming: Which is the most commonly preferred? And amongst whom?
  • Learning French Vs. Chinese: Which is the most straightforward?
  • Android Phones Vs. iPhones
  • The Liberation of Slaves Vs. The Liberation of Women: Which is more remembered?
  • The differences between the US Dollar and British Pound. What are their advantages? And How do they correspond with each other?

Easy Compare and Contrast Essay Topics

In all types of academics, these essays occur. If you’re new to this style of writing, check our easy compare and contrast essay topics.

  • The Third Reich Vs. North Korea
  • Tea Vs. Coffee
  • iPhone Vs. Samsung
  • KFC Vs. Wendy’s
  • Laurel or Yanny?
  • Healthy Lifestyle Vs. Obese Lifestyle
  • Forks Vs. Sporks
  • Rice Vs. Porridge
  • Roast Dinner Vs. Chicken & Mushroom Pie
  • What’s the difference between apples and oranges?

Psychology Compare and Contrast Essay Topics

Deciding upon good compare and contrast essay topics for psychology assignments can be difficult. Consider referring to our list of 10 psychology compare and contrast essay topics to help you get the grades you deserve.

  • What is a more severe eating disorder: Bulimia or Anorexia?
  • Modern Medicine Vs. Traditional Medicine for Treating Depression?
  • Soft Drugs Vs. Hard Drugs. Which is more dangerous for people’s psychological well-being?
  • How do the differences between Lust and Love have an effect on people’s mindsets?
  • Ego Vs. Superego
  • Parents’ Advice Vs. Peers’ Advice amongst children and teens.
  • Strict Parenting Vs. Relaxed Parenting
  • Mental Institutions Vs. Stress Clinics
  • Bipolar Disorder Vs. Epilepsy
  • How does child abuse affect victims in later life?

Compare and Contrast Essay Topics for Sixth Graders

From time to time, your teacher will assign the task of writing a compare and contrast essay. It can be hard to choose a topic, especially for beginners. Check out our easy compare and contrast essay topics for sixth graders.

  • Exam Preparation Vs. Homework Assignments
  • Homeschooling Vs. Public Education
  • High School Vs. Elementary School
  • 5th Grade Vs. 6th Grade: What makes them different or the same?
  • Are moms or dads stricter with children?
  • Is it better to have strict parents or more open parents?
  • Sandy Beaches Vs. Pebble Beaches: Which beaches are more popular?
  • Is it a good idea to learn guitar or piano?
  • Is it better to eat vegetable salads or pieces of fruit for lunch?
  • 1st Grade Vs. 6th Grade

Funny Compare and Contrast Essay Topics

Sometimes, it is good to have a laugh. As they always say: 'laughter is the best medicine.' Check out these funny compare and contrast essay topics for a little giggle when writing.

  • What is the best way to waste your time? Watching Funny Animal Videos or Mr. Bean Clips?
  • Are Pug Dogs or Maltese Dogs crazier?
  • Pot Noodles Vs. McDonald’s Meals.
  • What is the difference between Peter Griffin and Homer Simpson?
  • Mrs. Doubtfire Vs. Mrs. Brown. How are they similar?
  • Which game is more addictive? Flappy Bird or Angry Birds?
  • Big Shaq Vs. PSY
  • Stewie Griffin Vs. Maggie Simpson
  • Quarter Pounders Vs. Big Macs
  • Mr. Bean Vs. Alan Harper

Feeling Overwhelmed While Writing a COMPARE AND CONTRAST ESSAY?

Give us your paper requirements, set the deadline, choose a writer and chill while we write an original paper for you.


Compare/Contrast Papers

  • Point by Point Outline
  • Subject Outline
  • Lens Outline
  • Compare then Contrast Outline


Outline Formats and Examples

  • Point-by-Point Outline
  • Compare Then Contrast Outline

Thesis Statement

A thesis is a one- or two-sentence statement that directly states what your paper is about. Your thesis is generally one of the harder sentences to write, especially for those new to writing research papers. You want the reader to know from this one statement what your stance on the topic is and what you intend to prove with your research. Some good examples of a thesis statement would be:

  • While cake and pie are both desserts, the structure, ingredients, and ease of transportation sets pie apart from its main competitor.
  • An analysis of video game profit margins reveals one challenge facing game developers: the success of AAA games and the popularity of independent titles.
  • In the movies  Shaun of the Dead  and  28 Days Later , the opening sequence establishes the tone and theme of the film through non-diegetic sound, methodical pacing, and striking visuals.

Transitions

Transitions are visual cues that inform your reader where you are in your paper. Your thesis has already established the topics you will cover in your compare/contrast paper, so these cues help your reader know when you are moving on to a new topic.

Transition Words

  • Compare/Agreement
  • Contrast/Opposition
  • Cause/Condition
  • Examples/Support
  • Effect/Consequence
  • Time/Sequence
  • Space/Location
  • Qualification
  • Intensification
  • Concession



Good Compare and Contrast Essay Examples For Your Help

By: Barbara P.

Reviewed By: Jacklyn H.


Are you ready to challenge your critical thinking skills and take your writing to the next level? Look no further than the exciting world of compare and contrast essays! 

As a college student, you'll have the unique opportunity to delve into the details and differences of a variety of subjects. But don't let the pressure of writing the perfect compare-and-contrast essay weigh you down. 

To help guide you on this journey, we've got some great compare-and-contrast essay examples. It will make the writing process not only manageable but also enjoyable. So grab a pen and paper, and let's get started on this exciting adventure!


Good Compare and Contrast Essay Examples

A compare and contrast essay is all about comparing two subjects. Writing essays is not always easy, but it can be made easier with help from examples before you write your own. The examples will give you an idea of the perfect compare-and-contrast essay.

We have compiled a selection of free compare-and-contrast essay examples that can help you structure this type of essay. 

SAMPLE COMPARE AND CONTRAST ESSAY EXAMPLE

COMPARE AND CONTRAST ESSAY INTRODUCTION EXAMPLE

BOOK COMPARE AND CONTRAST ESSAY

CITY COMPARE AND CONTRAST ESSAY

CATS & DOGS COMPARE AND CONTRAST ESSAY

SCIENCE & ART COMPARE AND CONTRAST ESSAY

E-BOOKS & HARDBACK BOOKS COMPARE AND CONTRAST ESSAY

HOMESCHOOLING BOOKS COMPARE AND CONTRAST ESSAY

PARENTING STYLES COMPARE AND CONTRAST ESSAY

CONVENTIONAL AND ALTERNATIVE MEDICINE COMPARE AND CONTRAST ESSAY

Don't know how to map out your compare and contrast essay? Visit this link to learn how to perfectly outline your essay!

Compare and Contrast Essay Examples University

A compare and contrast paper is a common assignment for university students. This type of essay tells the reader how two subjects are the same or different from each other, and it shows the points of comparison between the two subjects.

Look at the example that is mentioned below and create a well-written essay.

COMPARE AND CONTRAST ESSAY EXAMPLE UNIVERSITY

Compare and Contrast Essay Examples College

COMPARE AND CONTRAST ESSAY EXAMPLE COLLEGE

Compare and Contrast Essay Examples High School

Compare and contrast essays are often assigned to high school students because they are a great way for students to improve their analytical and writing skills.

COMPARE AND CONTRAST ESSAY EXAMPLE HIGH SCHOOL

COMPARE AND CONTRAST ESSAY EXAMPLE 9TH GRADE

Check out the video below to gain a quick and visual comprehension of what a compare and contrast essay entails.

Compare and Contrast Essay Examples Middle School

In middle school, students have the opportunity to write a compare-and-contrast essay. It does not require an expert level of skill, but it is still a way to improve writing.

Middle school students can easily write a compare-and-contrast essay with a little help from examples. We have gathered excellent examples of this essay that you can use to get started.

COMPARE AND CONTRAST ESSAY EXAMPLE MIDDLE SCHOOL

COMPARE AND CONTRAST ESSAY EXAMPLES 5TH GRADE

Literary Analysis Compare and Contrast Essay Examples

The perfect way to inform readers about the pros and cons of two subjects is with a comparison and contrast essay.

It starts by stating the thesis statement, and then you explain why these two subjects are being compared in this essay.

The following is an example that you can use for your help.

LITERARY ANALYSIS COMPARE AND CONTRAST ESSAY EXAMPLE


Compare and Contrast Essay Conclusion Example

The conclusion of an essay is the last part, in which you wrap up everything. It should not include a story but rather summarize the whole document so readers have something meaningful to take away from it.

COMPARE AND CONTRAST ESSAY CONCLUSION EXAMPLE

Struggling to think of the perfect compare-and-contrast essay topic ? Visit this link for a multitude of inspiring ideas.

Compare and Contrast Essay Writing Tips

A compare and contrast essay presents the facts point by point, and argumentative essays often use this compare-and-contrast technique for their subjects.

If you are looking for some easy and simple tips to craft a perfectly researched and structured compare and contrast essay, we will not disappoint you.

Following are some quick tips that you can keep in mind while writing your essay:

  • Choose the essay topic carefully.
  • Research and brainstorm the points that make them similar and different.
  • Create and add your main statement and claim.
  • Create a Venn diagram and show the similarities and differences.
  • Choose the design through which you will present your arguments and claims.
  • Create a compare and contrast essay outline. Use either the block method or the point-by-point structure.
  • Research and add credible supporting evidence.
  • Transitioning is also important. Use transitional words and phrases to engage your readers.
  • Edit, proofread, and revise the essay before submission.


In conclusion, writing a compare and contrast essay can be an effective way to explore the similarities and differences between two topics. By using examples, it is possible to see the different approaches that can be taken when writing this type of essay. 

Whether you are a student or a professional writer, these examples can provide valuable insight to enhance your writing skills. You can also use our AI-powered essay typer to generate sample essays for your specific topic and subject.

However, if you don’t feel confident in your writing skills, you can always hire our professional essay writer.

5StarEssays.com offers a comprehensive essay writing service for students across the globe. Our experts are highly trained and qualified, making sure all of your essays will meet academic requirements while receiving top grades.

Don't wait - take advantage of our 50% introductory discount today and get ahead of the game with us! 

Frequently Asked Questions

How do I write a compare and contrast essay?

Here are the steps that you should follow to write a great essay:

  • Begin by brainstorming with a Venn diagram.
  • Create a thesis statement.
  • Develop an outline.
  • Write the introduction.
  • Write the body paragraphs.
  • Write the conclusion.
  • Proofread your essay.

How do you start a compare and contrast essay introduction?

When writing a compare and contrast essay, it is important to have an engaging introduction that will grab the reader's attention. A good way to do this would be by starting with a question or fact related to the topic to catch their interest.

What are some good compare and contrast essay topics?

Here are some good topics for compare and contrast essay:

  • E-books or textbooks.
  • Anxiety vs. Depression.
  • Vegetables and fruits.
  • Cinnamon vs. sugar.
  • Similarities between cultural and traditional fashion trends.

How long is a compare and contrast essay?

Usually, a compare and contrast essay would consist of five paragraphs but there are no hard and fast rules regarding it. Some essays could be longer than five paragraphs, based on the scope of the topic of the essay.

What are the two methods for arranging a comparison and contrast essay?

There are two ways to organize and arrange your compare and contrast essay: the first is the point-by-point method, and the second is the block method.

Barbara P.

Dr. Barbara is a highly experienced writer and author who holds a Ph.D. degree in public health from an Ivy League school. She has worked in the medical field for many years, conducting extensive research on various health topics. Her writing has been featured in several top-tier publications.



10.7 Comparison and Contrast

Learning Objectives

  • Determine the purpose and structure of comparison and contrast in writing.
  • Explain organizational methods used when comparing and contrasting.
  • Understand how to write a compare-and-contrast essay.

The Purpose of Comparison and Contrast in Writing

Comparison in writing discusses elements that are similar, while contrast in writing discusses elements that are different. A compare-and-contrast essay , then, analyzes two subjects by comparing them, contrasting them, or both.

The key to a good compare-and-contrast essay is to choose two or more subjects that connect in a meaningful way. The purpose of conducting the comparison or contrast is not to state the obvious but rather to illuminate subtle differences or unexpected similarities. For example, if you wanted to focus on contrasting two subjects, you would not pick apples and oranges; rather, you might choose to compare and contrast two types of oranges or two types of apples to highlight subtle differences. For instance, Red Delicious apples are sweet, while Granny Smiths are tart and acidic. Drawing distinctions between elements in a similar category will increase the audience’s understanding of that category, which is the purpose of the compare-and-contrast essay.

Similarly, to focus on comparison, choose two subjects that seem at first to be unrelated. For a comparison essay, you likely would not choose two apples or two oranges because they share so many of the same properties already. Rather, you might try to compare how apples and oranges are quite similar. The more divergent the two subjects initially seem, the more interesting a comparison essay will be.

Writing at Work

Comparing and contrasting is also an evaluative tool. In order to make accurate evaluations about a given topic, you must first know the critical points of similarity and difference. Comparing and contrasting is a primary tool for many workplace assessments. You have likely compared and contrasted yourself to other colleagues. Employee advancements, pay raises, hiring, and firing are typically conducted using comparison and contrast. Comparison and contrast could be used to evaluate companies, departments, or individuals.

Brainstorm an essay that leans toward contrast. Choose one of the following three categories. Pick two examples from each. Then come up with one similarity and three differences between the examples.

  • Romantic comedies
  • Internet search engines
  • Cell phones

Brainstorm an essay that leans toward comparison. Choose one of the following three items. Then come up with one difference and three similarities.

  • Department stores and discount retail stores
  • Fast food chains and fine dining restaurants
  • Dogs and cats

The Structure of a Comparison and Contrast Essay

The compare-and-contrast essay starts with a thesis that clearly states the two subjects that are to be compared, contrasted, or both and the reason for doing so. The thesis could lean more toward comparing, contrasting, or both. Remember, the point of comparing and contrasting is to provide useful knowledge to the reader. Take the following thesis as an example that leans more toward contrasting.

Thesis statement: Organic vegetables may cost more than those that are conventionally grown, but when put to the test, they are definitely worth every extra penny.

Here the thesis sets up the two subjects to be compared and contrasted (organic versus conventional vegetables), and it makes a claim about the results that might prove useful to the reader.

You may organize compare-and-contrast essays in one of the following two ways:

  • According to the subjects themselves, discussing one then the other
  • According to individual points, discussing each subject in relation to each point

See Figure 10.1 “Comparison and Contrast Diagram” , which diagrams the ways to organize our organic versus conventional vegetables thesis.

Figure 10.1 Comparison and Contrast Diagram


The organizational structure you choose depends on the nature of the topic, your purpose, and your audience.

Given that compare-and-contrast essays analyze the relationship between two subjects, it is helpful to have some phrases on hand that will cue the reader to such analysis. See Table 10.3 “Phrases of Comparison and Contrast” for examples.

Table 10.3 Phrases of Comparison and Contrast

Create an outline for each of the items you chose in Note 10.72 “Exercise 1” and Note 10.73 “Exercise 2” . Use the point-by-point organizing strategy for one of them, and use the subject organizing strategy for the other.

Writing a Comparison and Contrast Essay

First choose whether you want to compare seemingly disparate subjects, contrast seemingly similar subjects, or compare and contrast subjects. Once you have decided on a topic, introduce it with an engaging opening paragraph. Your thesis should come at the end of the introduction, and it should establish the subjects you will compare, contrast, or both as well as state what can be learned from doing so.

The body of the essay can be organized in one of two ways: by subject or by individual points. The organizing strategy that you choose will depend on, as always, your audience and your purpose. You may also consider your particular approach to the subjects as well as the nature of the subjects themselves; some subjects might better lend themselves to one structure or the other. Make sure to use comparison and contrast phrases to cue the reader to the ways in which you are analyzing the relationship between the subjects.

After you finish analyzing the subjects, write a conclusion that summarizes the main points of the essay and reinforces your thesis. See Chapter 15 “Readings: Examples of Essays” to read a sample compare-and-contrast essay.

Many business presentations are conducted using comparison and contrast. The organizing strategies—by subject or individual points—could also be used for organizing a presentation. Keep this in mind as a way of organizing your content the next time you or a colleague have to present something at work.

Choose one of the outlines you created in Note 10.75 “Exercise 3” , and write a full compare-and-contrast essay. Be sure to include an engaging introduction, a clear thesis, well-defined and detailed paragraphs, and a fitting conclusion that ties everything together.

Key Takeaways

  • A compare-and-contrast essay analyzes two subjects by either comparing them, contrasting them, or both.
  • The purpose of writing a comparison or contrast essay is not to state the obvious but rather to illuminate subtle differences or unexpected similarities between two subjects.
  • The thesis should clearly state the subjects that are to be compared, contrasted, or both, and it should state what is to be learned from doing so.

There are two main organizing strategies for compare-and-contrast essays.

  • Organize by the subjects themselves, one then the other.
  • Organize by individual points, in which you discuss each subject in relation to each point.
  • Use phrases of comparison or phrases of contrast to signal to readers how exactly the two subjects are being analyzed.

Writing for Success Copyright © 2015 by University of Minnesota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


5 Compare and Contrast Essay Examples (Full Text)

A compare and contrast essay selects two or more items that are critically analyzed to demonstrate their differences and similarities. Here is a template for you that provides the general structure:

compare and contrast essay format

A range of example essays is presented below.

Compare and Contrast Essay Examples

#1 Jean Piaget vs Lev Vygotsky Essay

1480 Words | 5 Pages | 10 References

(Level: University Undergraduate)


Thesis Statement: “This essay will critically examine and compare the developmental theories of Jean Piaget and Lev Vygotsky, focusing on their differing views on cognitive development in children and their influence on educational psychology, through an exploration of key concepts such as the role of culture and environment, scaffolding, equilibration, and their overall implications for educational practices.”

#2 Democracy vs Authoritarianism Essay


Thesis Statement: “The thesis of this analysis is that, despite the efficiency and control offered by authoritarian regimes, democratic systems, with their emphasis on individual freedoms, participatory governance, and social welfare, present a more balanced and ethically sound approach to governance, better aligned with the ideals of a just and progressive society.”

#3 Apples vs Oranges Essay

1190 Words | 5 Pages | 0 References

(Level: 4th Grade, 5th Grade, 6th Grade)


Thesis Statement: “While apples and oranges are both popular and nutritious fruits, they differ significantly in their taste profiles, nutritional benefits, cultural symbolism, and culinary applications.”

#4 Nature vs Nurture Essay

1525 Words | 5 Pages | 11 References

(Level: High School and College)


Thesis Statement: “The purpose of this essay is to examine and elucidate the complex and interconnected roles of genetic inheritance (nature) and environmental influences (nurture) in shaping human development across various domains such as physical traits, personality, behavior, intelligence, and abilities.”

#5 Dogs vs Cats Essay

1095 Words | 5 Pages | 7 Bibliographic Sources

(Level: 6th Grade, 7th Grade, 8th Grade)

Thesis Statement: “This essay explores the distinctive characteristics, emotional connections, and lifestyle considerations associated with owning dogs and cats, aiming to illuminate the unique joys and benefits each pet brings to their human companions.”

How to Write a Compare and Contrast Essay

I’ve recorded a full video for you on how to write a compare and contrast essay:

Get the Compare and Contrast Templates with AI Prompts Here

In the video, I outline the steps to writing your essay. Here they are explained below:

1. Essay Planning

First, I recommend using my compare and contrast worksheet, which acts like a Venn Diagram, walking you through the steps of comparing the similarities and differences of the concepts or items you’re comparing.

I recommend selecting 3-5 features that can be compared, as shown in the worksheet:

compare and contrast worksheet

Grab the Worksheet as Part of the Compare and Contrast Essay Writing Pack

2. Writing the Essay

Once you’ve completed the worksheet, you’re ready to start writing. Go systematically through each feature you are comparing and discuss the similarities and differences, then make an evaluative statement after showing your depth of knowledge:

compare and contrast essay template

Get the Rest of the Premium Compare and Contrast Essay Writing Pack (With AI Prompts) Here

How to Write a Compare and Contrast Thesis Statement

Compare and contrast thesis statements can either:

  • Remain neutral in an expository tone.
  • Prosecute an argument about which of the items you’re comparing is overall best.

To write an argumentative thesis statement for a compare and contrast essay, try this AI prompt:

💡 AI Prompt to Generate Ideas: I am writing a compare and contrast essay that compares [Concept 1] and [Concept 2]. Give me 5 potential single-sentence thesis statements that pass a reasonable judgement.

Ready to Write your Essay?


Take action! Choose one of the following options to start writing your compare and contrast essay now:

Read Next: Process Essay Examples


Chris Drew (PhD)

Dr. Chris Drew is the founder of the Helpful Professor. He holds a PhD in education and has published over 20 articles in scholarly journals. He is the former editor of the Journal of Learning Development in Higher Education.



34 Compelling Compare and Contrast Essay Examples

Topics cover education, technology, pop culture, sports, animals, and more.


Do your writers need some inspiration? If you’re teaching students to write a compare and contrast essay, a strong example is an invaluable tool. This round-up of our favorite compare and contrast essays covers a range of topics and grade levels, so no matter your students’ interests or ages, you’ll always have a helpful example to share. You’ll find links to full essays about education, technology, pop culture, sports, animals, and more. (Need compare-and-contrast essay topic ideas? Check out our big list of compare and contrast essay topics!)

What is a compare and contrast essay?


When choosing a compare and contrast essay example to include on this list, we considered the structure. A strong compare and contrast essay begins with an introductory paragraph that includes background context and a strong thesis. Next, the body includes paragraphs that explore the similarities and differences. Finally, a concluding paragraph restates the thesis, draws any necessary inferences, and asks any remaining questions.

A compare and contrast essay example can be an opinion piece comparing two things and making a conclusion about which is better. For example, “Is Tom Brady really the GOAT?” It can also help consumers decide which product is better suited to them. Should you keep your subscription to Hulu or Netflix? Should you stick with Apple or explore Android? Here’s our list of compare and contrast essay samples categorized by subject.

Education and Parenting Compare and Contrast Essay Examples

Private School vs. Public School

Sample lines: “Deciding whether to send a child to public or private school can be a tough choice for parents. … Data on whether public or private education is better can be challenging to find and difficult to understand, and the cost of private school can be daunting. … According to the most recent data from the National Center for Education Statistics, public schools still attract far more students than private schools, with 50.7 million students attending public school as of 2018. Private school enrollment in the fall of 2017 was 5.7 million students, a number that is down from 6 million in 1999.”

Read the full essay: Private School vs. Public School at U.S. News and World Report

Homeschool vs. Public School: How Home Schooling Will Change Public Education

Sample lines: “Home schooling, not a present threat to public education, is nonetheless one of the forces that will change it. If the high estimates of the number of children in home schools (1.2 million) is correct, then the home-schooling universe is larger than the New York City public school system and roughly the size of the Los Angeles and Chicago public school systems combined. … Critics charge that three things are wrong with home schooling: harm to students academically; harm to society by producing students who are ill-prepared to function as democratic citizens and participants in a modern economy; and harm to public education, making it more difficult for other parents to educate their children. … It is time to ask whether home schooling, charters, and vouchers should be considered parts of a broad repertoire of methods that we as a society use to educate our children.”

Read the full essay: Homeschool vs. Public School: How Home Schooling Will Change Public Education at Brookings

Which parenting style is right for you?

Sample lines: “The three main types of parenting are on a type of ‘sliding scale’ of parenting, with permissive parenting as the least strict type of parenting. Permissive parenting typically has very few rules, while authoritarian parenting is thought of as a very strict, rule-driven type of parenting.”

Read the full essay: What Is Authoritative Parenting? at Healthline

Masked Education? The Benefits and Burdens of Wearing Face Masks in Schools During the Pandemic

Sample lines: “Face masks can prevent the spread of the virus SARS-CoV-2. … However, covering the lower half of the face reduces the ability to communicate. Positive emotions become less recognizable, and negative emotions are amplified. Emotional mimicry, contagion, and emotionality in general are reduced and (thereby) bonding between teachers and learners, group cohesion, and learning—of which emotions are a major driver. The benefits and burdens of face masks in schools should be seriously considered and made obvious and clear to teachers and students.”

Read the full essay: Masked Education? The Benefits and Burdens of Wearing Face Masks in Schools During the Pandemic at National Library of Medicine

To Ban or Not: What Should We Really Make of Book Bans?

Sample lines: “In recent years, book bans have soared in schools, reaching an all-time high in fall 2022. … The challenge of balancing parent concerns about ‘age appropriateness’ against the imperative of preparing students to be informed citizens is still on the minds of many educators today. … Such curricular decision-making  should  be left to the professionals, argues English/language arts instructional specialist Miriam Plotinsky. ‘Examining texts for their appropriateness is not a job that noneducators are trained to do,’ she wrote last year, as the national debate over censorship resurged with the news that a Tennessee district banned the graphic novel  Maus  just days before Holocaust Remembrance Day.”

Read the full essay: To Ban or Not: What Should We Really Make of Book Bans? at Education Week

Technology Compare and Contrast Essay Examples

Netflix vs. Hulu 2023: Which is the best streaming service?

Sample lines: “Netflix fans will point to its high-quality originals, including  The Witcher ,  Stranger Things ,  Emily in Paris ,  Ozark , and more, as well as a wide variety of documentaries like  Cheer ,  The Last Dance ,  My Octopus Teacher , and many others. It also boasts a much larger subscription base, with more than 222 million subscribers compared to Hulu’s 44 million. Hulu, on the other hand, offers a variety of extras such as HBO and Showtime—content that’s unavailable on Netflix. Its price tag is also cheaper than the competition, with its $7/mo. starting price, which is a bit more palatable than Netflix’s $10/mo. starting price.”

Read the full essay: Netflix vs. Hulu 2023: Which is the best streaming service? at TV Guide

Kindle vs. Hardcover: Which is easier on the eyes?

Sample lines: “In the past, we would have to drag around heavy books if we were really into reading. Now, we can have all of those books, and many more, stored in one handy little device that can easily be stuffed into a backpack, purse, etc. … Many of us still prefer to hold an actual book in our hands. … But, whether you use a Kindle or prefer hardcover books or paperbacks, the main thing is that you enjoy reading. A story in a book or on a Kindle device can open up new worlds, take you to fantasy worlds, educate you, entertain you, and so much more.”

Read the full essay: Kindle vs. Hardcover: Which is easier on the eyes? at Books in a Flash

iPhone vs. Android: Which is better for you?

Sample lines: “The iPhone vs. Android comparison is a never-ending debate on which one is best. It will likely never have a real winner, but we’re going to try and help you to find your personal pick all the same. iOS 17 and Android 14—the latest versions of the two operating systems—both offer smooth and user-friendly experiences, and several similar or identical features. But there are still important differences to be aware of. … Owning an iPhone is a simpler, more convenient experience. There’s less to think about. … Android-device ownership is a bit harder. … Yet it’s simultaneously more freeing, because it offers more choice.”

Read the full essay: iPhone vs. Android: Which is better for you? at Tom’s Guide

Cutting the cord: Is streaming or cable better for you?

Sample lines: “Cord-cutting has become a popular trend in recent years, thanks to the rise of streaming services. For those unfamiliar, cord cutting is the process of canceling your cable subscription and instead, relying on streaming platforms such as Netflix and Hulu to watch your favorite shows and movies. The primary difference is that you can select your streaming services à la carte while cable locks you in on a set number of channels through bundles. So, the big question is: should you cut the cord?”

Read the full essay: Cutting the cord: Is streaming or cable better for you? at BroadbandNow

PS5 vs. Nintendo Switch

Sample lines: “The crux of the comparison comes down to portability versus power. Being able to migrate fully fledged Nintendo games from a big screen to a portable device is a huge asset—and one that consumers have taken to, especially given the Nintendo Switch’s meteoric sales figures. … It is worth noting that many of the biggest franchises like Call of Duty, Madden, modern Resident Evil titles, newer Final Fantasy games, Grand Theft Auto, and open-world Ubisoft adventures like Assassin’s Creed will usually skip Nintendo Switch due to its lack of power. The inability to play these popular games practically guarantees that a consumer will pick up a modern system, while using the Switch as a secondary device.”

Read the full essay: PS5 vs. Nintendo Switch at Digital Trends

What is the difference between Facebook and Instagram?

Sample lines: “Have you ever wondered what is the difference between Facebook and Instagram? Instagram and Facebook are by far the most popular social media channels used by digital marketers. Not to mention that they’re also the biggest platforms used by internet users worldwide. So, today we’ll look into the differences and similarities between these two platforms to help you figure out which one is the best fit for your business.”

Read the full essay: What is the difference between Facebook and Instagram? at SocialBee

Digital vs. Analog Watches—What’s the Difference?

Sample lines: “In short, digital watches use an LCD or LED screen to display the time. Whereas, an analog watch features three hands to denote the hour, minutes, and seconds. With the advancement in watch technology and research, both analog and digital watches have received significant improvements over the years. Especially in terms of design, endurance, and accompanying features. … At the end of the day, whether you go analog or digital, it’s a personal preference to make based on your style, needs, functions, and budget.”

Read the full essay: Digital vs. Analog Watches—What’s the Difference? at Watch Ranker

AI Art vs. Human Art: A Side-by-Side Analysis

Sample lines: “Art has always been a reflection of human creativity, emotion, and cultural expression. However, with the rise of artificial intelligence (AI), a new form of artistic creation has emerged, blurring the lines between what is created by human hands and what is generated by algorithms. … Despite the excitement surrounding AI Art, it also raises complex ethical, legal, and artistic questions that have sparked debates about the definition of art, the role of the artist, and the future of art production. … Regardless of whether AI Art is considered ‘true’ art, it is crucial to embrace and explore the vast possibilities and potential it brings to the table. The transformative influence of AI art on the art world is still unfolding, and only time will reveal its true extent.”

Read the full essay: AI Art vs. Human Art: A Side-by-Side Analysis at Raul Lara

Pop Culture Compare and Contrast Essay Examples

Christina Aguilera vs. Britney Spears

Sample lines: “Britney Spears vs. Christina Aguilera was the Coke vs. Pepsi of 1999 — no, really, Christina repped Coke and Britney shilled for Pepsi. The two teen idols released debut albums seven months apart before the turn of the century, with Britney’s becoming a standard-bearer for bubblegum pop and Aguilera’s taking an R&B bent to show off her range. … It’s clear that Spears and Aguilera took extremely divergent paths following their simultaneous breakout successes.”

Read the full essay: Christina Aguilera vs. Britney Spears at The Ringer

Harry Styles vs. Ed Sheeran

Sample lines: “The world heard our fantasies and delivered us two titans simultaneously—we have been blessed with Ed Sheeran and Harry Styles. Our cup runneth over; our bounty is immeasurable. More remarkable still is the fact that both have released albums almost at the same time: Ed’s third, Divide, was released in March and broke the record for one-day Spotify streams, while Harry’s frenziedly anticipated debut solo, called Harry Styles, was released yesterday.”

Read the full essay: Harry Styles vs. Ed Sheeran at Belfast Telegraph

The Grinch: Three Versions Compared

Sample lines: “Based on the original story of the same name, this movie takes a completely different direction by choosing to break away from the cartoony form that Seuss had established by filming the movie in a live-action form. Whoville is preparing for Christmas while the Grinch looks down upon their celebrations in disgust. Like the previous film, The Grinch hatches a plan to ruin Christmas for the Who’s. … Like in the original Grinch, he disguises himself as Santa Claus, and makes his dog, Max, into a reindeer. He then takes all of the presents from the children and households. … Cole’s favorite is the 2000 edition, while Alex has only seen the original. Tell us which one is your favorite.”

Read the full essay: The Grinch: Three Versions Compared at Wooster School

Historical and Political Compare and Contrast Essay Examples

Malcolm X vs. Martin Luther King Jr.: Comparison Between Two Great Leaders’ Ideologies

Sample lines: “Although they were fighting for civil rights at the same time, their ideology and way of fighting were completely distinctive. This can be for a plethora of reasons: background, upbringing, the system of thought, and vision. But keep in mind, they devoted their whole life to the same prospect. … Through boycotts and marches, [King] hoped to end racial segregation. He felt that the abolition of segregation would improve the likelihood of integration. Malcolm X, on the other hand, spearheaded a movement for black empowerment.”

Read the full essay: Malcolm X vs. Martin Luther King Jr.: Comparison Between Two Great Leaders’ Ideologies  at Melaninful

Contrast Between Obama and Trump Has Become Clear

Sample lines: “The contrast is even clearer when we look to the future. Trump promises more tax cuts, more military spending, more deficits and deeper cuts in programs for the vulnerable. He plans to nominate a coal lobbyist to head the Environmental Protection Agency. … Obama says America must move forward, and he praises progressive Democrats for advocating Medicare for all. … With Obama and then Trump, Americans have elected two diametrically opposed leaders leading into two very different directions.”

Read the full essay: Contrast Between Obama and Trump Has Become Clear at Chicago Sun-Times

Sports Compare and Contrast Essay Examples

LeBron James vs. Kobe Bryant: A Complete Comparison

Sample lines: “LeBron James has achieved so much in his career that he is seen by many as the greatest of all time, or at least the only player worthy of being mentioned in the GOAT conversation next to Michael Jordan. Bridging the gap between Jordan and LeBron though was Kobe Bryant, who often gets left out of comparisons and GOAT conversations. … Should his name be mentioned more though? Can he compare to LeBron or is The King too far past The Black Mamba in historical rankings already?”

Read the full essay: LeBron James vs. Kobe Bryant: A Complete Comparison at Sportskeeda

NFL: Tom Brady vs. Peyton Manning Rivalry Comparison

Sample lines: “Tom Brady and Peyton Manning were largely considered the best quarterbacks in the NFL for the majority of the time they spent in the league together, with the icons having many head-to-head clashes in the regular season and on the AFC side of the NFL Playoffs. Manning was the leader of the Indianapolis Colts of the AFC South. … Brady spent his career as the QB of the AFC East’s New England Patriots, before taking his talents to Tampa Bay. … The reality is that winning is the most important aspect of any career, and Brady won more head-to-head matchups than Manning did.”

Read the full essay: NFL: Tom Brady vs. Peyton Manning Rivalry Comparison at Sportskeeda

The Greatest NBA Franchise Ever: Boston Celtics or Los Angeles Lakers?

Sample lines: “The Celtics are universally considered as the greatest franchise in NBA history. But if you take a close look at the numbers, there isn’t really too much separation between them and their arch-rival Los Angeles Lakers. In fact, you can even make a good argument for the Lakers. … In 72 seasons played, the Boston Celtics have won a total of 3,314 games and lost 2,305 or a .590 winning mark. On the other hand, the Los Angeles Lakers have won 3,284 of 5,507 total games played or a slightly better winning record of .596. … But while the Lakers have the better winning percentage, the Celtics have the advantage over them in head-to-head competition.”

Read the full essay: The Greatest NBA Franchise Ever: Boston Celtics or Los Angeles Lakers? at Sport One

Is Soccer Better Than Football?

Sample lines: “Is soccer better than football? Soccer and football lovers have numerous reasons to support their sport of choice. Both keep the players physically fit and help to bring people together for an exciting cause. However, soccer has drawn more numbers globally due to its popularity in more countries.”

Read the full essay: Is Soccer Better Than Football? at Sports Brief

Lifestyle Choices Compare and Contrast Essay Examples

Mobile Home vs. Tiny House: Similarities, Differences, Pros & Cons

Sample lines: “Choosing the tiny home lifestyle enables you to spend more time with those you love. The small living space ensures quality bonding time rather than hiding away in a room or behind a computer screen. … You’ll be able to connect closer to nature and find yourself able to travel the country at any given moment. On the other hand, we have the mobile home. … They are built on a chassis with transportation in mind. … They are not built to be moved on a constant basis. … While moving the home again *is* possible, it may cost you several thousand dollars.”

Read the full essay: Mobile Home vs. Tiny House: Similarities, Differences, Pros & Cons at US Mobile Home Pros

Whole Foods vs. Walmart: The Story of Two Grocery Stores

Sample lines: “It is clear that both stores have very different stories and aims when it comes to their customers. Whole Foods looks to provide organic, healthy, exotic, and niche products for an audience with a very particular taste. … Walmart, on the other hand, looks to provide the best deals, every possible product, and every big brand for a broader audience. … Moreover, they look to make buying affordable and accessible, and focus on the capitalist nature of buying.”

Read the full essay: Whole Foods vs. Walmart: The Story of Two Grocery Stores at The Archaeology of Us

Artificial Grass vs. Turf: The Real Differences Revealed

Sample lines: “The key difference between artificial grass and turf is their intended use. Artificial turf is largely intended to be used for sports, so it is shorter and tougher. On the other hand, artificial grass is generally longer, softer and more suited to landscaping purposes. Most homeowners would opt for artificial grass as a replacement for a lawn, for example. Some people actually prefer playing sports on artificial grass, too … artificial grass is often softer and more bouncy, giving it a feel similar to playing on a grassy lawn. … At the end of the day, which one you will choose will depend on your specific household and needs.”

Read the full essay: Artificial Grass vs. Turf: The Real Differences Revealed at Almost Grass

Minimalism vs. Maximalism: Differences, Similarities, and Use Cases

Sample lines: “Maximalists love shopping, especially finding unique pieces. They see it as a hobby—even a skill—and a way to express their personality. Minimalists don’t like shopping and see it as a waste of time and money. They’d instead use those resources to create memorable experiences. Maximalists desire one-of-a-kind possessions. Minimalists are happy with duplicates—for example, personal uniforms. … Minimalism and maximalism are about being intentional with your life and belongings. It’s about making choices based on what’s important to you.”

Read the full essay: Minimalism vs. Maximalism: Differences, Similarities, and Use Cases at Minimalist Vegan

Vegetarian vs. Meat Eating: Is It Better To Be a Vegetarian?

Sample lines: “You’ve heard buzz over the years that following a vegetarian diet is better for your health, and you’ve probably read a few magazine articles featuring a celeb or two who swore off meat and animal products and ‘magically’ lost weight. So does ditching meat automatically equal weight loss? Will it really help you live longer and be healthier overall? … Vegetarians appear to have lower low-density lipoprotein cholesterol levels, lower blood pressure and lower rates of hypertension and type 2 diabetes than meat eaters. Vegetarians also tend to have a lower body mass index, lower overall cancer rates and lower risk of chronic disease. But if your vegetarian co-worker is noshing greasy veggie burgers and fries every day for lunch, is he likely to be healthier than you, who always orders the grilled salmon? Definitely not!”

Read the full essay: Vegetarian vs. Meat Eating: Is It Better To Be a Vegetarian? at WebMD

Healthcare Compare and Contrast Essay Examples

Similarities and Differences Between the Health Systems in Australia & USA

Sample lines: “Australia and the United States are two very different countries. They are far away from each other, have contrasting fauna and flora, differ immensely by population, and have vastly different healthcare systems. The United States has a population of 331 million people, compared to Australia’s population of 25.5 million people.”

Read the full essay: Similarities and Differences Between the Health Systems in Australia & USA at Georgia State University

Universal Healthcare in the United States of America: A Healthy Debate

Sample lines: “Disadvantages of universal healthcare include significant upfront costs and logistical challenges. On the other hand, universal healthcare may lead to a healthier populace, and thus, in the long-term, help to mitigate the economic costs of an unhealthy nation. In particular, substantial health disparities exist in the United States, with low socio-economic status segments of the population subject to decreased access to quality healthcare and increased risk of non-communicable chronic conditions such as obesity and type II diabetes, among other determinants of poor health.”

Read the full essay: Universal Healthcare in the United States of America: A Healthy Debate at National Library of Medicine

Pros and Cons of Physician Aid in Dying

Sample lines: “Physician aid in dying is a controversial subject raising issues central to the role of physicians. … The two most common arguments in favor of legalizing AID are respect for patient autonomy and relief of suffering. A third, related, argument is that AID is a safe medical practice, requiring a health care professional. … Although opponents of AID offer many arguments ranging from pragmatic to philosophical, we focus here on concerns that the expansion of AID might cause additional, unintended harm through suicide contagion, slippery slope, and the deaths of patients suffering from depression.”

Read the full essay: Pros and Cons of Physician Aid in Dying at National Library of Medicine

Animals Compare and Contrast Essay Examples

Compare and Contrast Paragraph—Dogs and Cats

Sample lines: “Researchers have found that dogs have about twice the number of neurons in their cerebral cortexes than what cats have. Specifically, dogs had around 530 million neurons, whereas the domestic cat only had 250 million neurons. Moreover, dogs can be trained to learn and respond to our commands, but although your cat understands your name, and anticipates your every move, he/she may choose to ignore you.”

Read the full essay: Compare and Contrast Paragraph—Dogs and Cats at Proofwriting Guru via YouTube

Giddyup! The Differences Between Horses and Dogs

Sample lines: “Horses are prey animals with a deep herding instinct. They are highly sensitive to their environment, hyper aware, and ready to take flight if needed. Just like dogs, some horses are more confident than others, but just like dogs, all need a confident handler to teach them what to do. Some horses are highly reactive and can be spooked by the smallest things, as are dogs. … Another distinction between horses and dogs … was that while dogs have been domesticated, horses have been tamed. … Both species have influenced our culture more than any other species on the planet.”

Read the full essay: Giddyup! The Differences Between Horses and Dogs at Positively Victoria Stilwell

Exotic, Domesticated, and Wild Pets

Sample lines: “Although the words ‘exotic’ and ‘wild’ are frequently used interchangeably, many people do not fully understand how these categories differ when it comes to pets. ‘A wild animal is an indigenous, non-domesticated animal, meaning that it is native to the country where you are located,’ Blue-McLendon explained. ‘For Texans, white-tailed deer, pronghorn sheep, raccoons, skunks, and bighorn sheep are wild animals … an exotic animal is one that is wild but is from a different continent than where you live.’ For example, a hedgehog in Texas would be considered an exotic animal, but in the hedgehog’s native country, it would be considered wildlife.”

Read the full essay: Exotic, Domesticated, and Wild Pets at Texas A&M University

Should Zoos Be Banned? Pros & Cons of Zoos

Sample lines: “The pros and cons of zoos often come from two very different points of view. From a legal standard, animals are often treated as property. That means they have less rights than humans, so a zoo seems like a positive place to maintain a high quality of life. For others, the forced enclosure of any animal feels like an unethical decision. … Zoos provide a protected environment for endangered animals, and also help in raising awareness and funding for wildlife initiatives and research projects. … Zoos are key for research. Being able to observe and study animals is crucial if we want to contribute to help them and repair the ecosystems. … Zoos are a typical form of family entertainment, but associating leisure and fun with the contemplation of animals in captivity can send the wrong signals to our children.”

Read the full essay: Should Zoos Be Banned? Pros & Cons of Zoos at EcoCation

Do you have a favorite compare and contrast essay example? Come share in the We Are Teachers HELPLINE group on Facebook.

Plus, if you liked these compare and contrast essay examples, check out these 80 intriguing compare and contrast essay topics for kids and teens.

A good compare and contrast essay example, like the ones here, explores the similarities and differences between two or more subjects.

You Might Also Like

First Day of School vs. the Last Day of School

80 Intriguing Compare and Contrast Essay Topics for Kids and Teens

Android vs. iPhone? Capitalism vs. communism? Hot dog vs. taco?
