The 9 Parts of Speech: Definitions and Examples


A part of speech is a term used in traditional grammar for one of the nine main categories into which words are classified according to their functions in sentences, such as nouns or verbs. Also known as word classes, these are the building blocks of grammar.

Parts of Speech

  • Word types can be divided into nine parts of speech: nouns, pronouns, verbs, adjectives, adverbs, prepositions, conjunctions, articles/determiners, and interjections.
  • Some words can be considered more than one part of speech, depending on context and usage.
  • Interjections can form complete sentences on their own.

Every sentence you write or speak in English includes words that fall into some of the nine parts of speech. These include nouns, pronouns, verbs, adjectives, adverbs, prepositions, conjunctions, articles/determiners, and interjections. (Some sources count only eight parts of speech, treating interjections as a separate category.)

Learning the names of the parts of speech probably won't make you witty, healthy, wealthy, or wise. In fact, learning just the names of the parts of speech won't even make you a better writer. However, you will gain a basic understanding of sentence structure and the English language by familiarizing yourself with these labels.

Open and Closed Word Classes

The parts of speech are commonly divided into open classes (nouns, verbs, adjectives, and adverbs) and closed classes (pronouns, prepositions, conjunctions, articles/determiners, and interjections). The idea is that open classes can be altered and added to as language develops, while closed classes are pretty much set in stone. For example, new nouns are created every day, but conjunctions rarely, if ever, change.

In contemporary linguistics, the label part of speech has generally been discarded in favor of the term word class or syntactic category. These terms make it easier to classify words objectively based on word construction rather than context. Within word classes, there is the lexical or open class and the function or closed class.

The 9 Parts of Speech

Read about each part of speech below and get started practicing identifying each one.

Noun

Nouns name a person, place, thing, or idea. They can take on a myriad of roles in a sentence, from the subject of it all to the object of an action. They are capitalized when they're the official name of something or someone; these are called proper nouns. Examples: pirate, Caribbean, ship, freedom, Captain Jack Sparrow.

Pronoun

Pronouns stand in for nouns in a sentence. They are more generic versions of nouns that can refer to people, places, things, or ideas. Examples: I, you, he, she, it, ours, them, who, which, anybody, ourselves.

Verb

Verbs are action words that tell what happens in a sentence. They can also show a sentence subject's state of being (is, was). Verbs change form based on tense (present, past) and number (singular or plural). Examples: sing, dance, believes, seemed, finish, eat, drink, be, became.

Adjective

Adjectives describe nouns and pronouns. They specify which one, how much, what kind, and more. Adjectives allow readers and listeners to use their senses to imagine something more clearly. Examples: hot, lazy, funny, unique, bright, beautiful, poor, smooth.

Adverb

Adverbs describe verbs, adjectives, and even other adverbs. They specify when, where, how, and why something happened and to what extent or how often. Examples: softly, lazily, often, only, hopefully, sometimes.

Preposition

Prepositions show spatial, temporal, and role relations between a noun or pronoun and the other words in a sentence. They come at the start of a prepositional phrase, which contains a preposition and its object. Examples: up, over, against, by, for, into, close to, out of, apart from.

Conjunction

Conjunctions join words, phrases, and clauses in a sentence. There are coordinating, subordinating, and correlative conjunctions. Examples: and, but, or, so, yet, for, nor.

Articles and Determiners

Articles and determiners function like adjectives by modifying nouns, but they differ from adjectives in that they are necessary for a sentence to have proper syntax. Articles and determiners specify and identify nouns, and there are indefinite and definite articles. Examples: articles: a, an, the; determiners: these, that, those, enough, much, few, which, what.

Some traditional grammars have treated articles  as a distinct part of speech. Modern grammars, however, more often include articles in the category of determiners , which identify or quantify a noun. Even though they modify nouns like adjectives, articles are different in that they are essential to the proper syntax of a sentence, just as determiners are necessary to convey the meaning of a sentence, while adjectives are optional.

Interjection

Interjections are expressions that can stand on their own or be contained within sentences. These words and phrases often carry strong emotions and convey reactions. Examples:  ah, whoops, ouch, yabba dabba do!

How to Determine the Part of Speech

Only interjections (Hooray!) have a habit of standing alone; every other part of speech must be contained within a sentence, and some are even required in sentences (nouns and verbs). Other parts of speech come in many varieties and may appear just about anywhere in a sentence.

To know for sure what part of speech a word falls into, look not only at the word itself but also at its meaning, position, and use in a sentence.

For example, the word work can function as a noun, a verb, or an adjective, depending on how it's used in a sentence:

  • The noun work is the thing Bosco shows up for.
  • The verb work is the action he must perform.
  • The attributive noun [or converted adjective] work modifies the noun permit.
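For readers who like to see the idea in code, the context test above can be sketched as a toy Python heuristic. It is purely illustrative, not a real part-of-speech tagger, and the word lists are invented for this example only:

```python
# Toy illustration of "look at meaning, position, and use" (NOT a real tagger).
# The word lists below are made up just for this sketch.
def guess_class(sentence: str, target: str = "work") -> str:
    words = sentence.lower().rstrip(".").split()
    i = words.index(target)
    prev = words[i - 1] if i > 0 else ""
    nxt = words[i + 1] if i + 1 < len(words) else ""
    if nxt in {"permit", "schedule", "ethic"}:
        return "adjective"   # attributive use: "work permit"
    if prev == "to":
        return "verb"        # infinitive: "have to work"
    if prev in {"for", "at", "the", "his", "her", "my"}:
        return "noun"        # object of a preposition or determiner: "for work"
    return "unknown"

print(guess_class("Bosco showed up for work"))      # noun
print(guess_class("He will have to work"))          # verb
print(guess_class("His work permit expires soon"))  # adjective
```

A real tagger (such as those in NLTK or spaCy) learns context cues like these statistically instead of hard-coding them.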

Learning the names and uses of the basic parts of speech is just one way to understand how sentences are constructed.

Dissecting Basic Sentences

To form a basic complete sentence, you only need two elements: a noun (or pronoun standing in for a noun) and a verb. The noun acts as the subject, and the verb, by telling what action the subject is taking, acts as the predicate.

  • Birds fly.

In this short sentence, birds is the noun and fly is the verb. The sentence makes sense and gets the point across.

You can have a sentence with just one word without breaking any sentence formation rules. The short sentence below is complete because it's a command to an understood "you."

  • Go!

Here, the pronoun standing in for a noun is implied and acts as the subject. The sentence is really saying, "(You) go!"

Constructing More Complex Sentences

Use more parts of speech to add additional information about what's happening in a sentence to make it more complex. Take the first sentence from above, for example, and incorporate more information about how and why birds fly.

  • Birds fly when migrating before winter.

Birds and fly remain the noun and the verb, but now there is more description. 

When is an adverb that modifies the verb fly. The word before is a little tricky because it can be a conjunction, a preposition, or an adverb depending on the context. In this case, it's a preposition because it's followed by a noun. This preposition begins an adverbial phrase of time (before winter) that answers the question of when the birds migrate. Before is not a conjunction here because it does not connect two clauses.


PrepScholar

Understanding the 8 Parts of Speech: Definitions and Examples

If you’re trying to learn the grammatical rules of English, you’ve probably been asked to learn the parts of speech. But what are parts of speech and how many are there? How do you know which words are classified in each part of speech?

The answers to these questions can be a bit complicated—English is a difficult language to learn and understand. Don’t fret, though! We’re going to answer each of these questions for you with a full guide to the parts of speech that explains the following:

  • What the parts of speech are, including a comprehensive parts of speech list
  • Parts of speech definitions for the individual parts of speech. (If you’re looking for information on a specific part of speech, you can search for it by pressing Command + F, then typing in the part of speech you’re interested in.) 
  • Parts of speech examples
  • A ten question quiz covering parts of speech definitions and parts of speech examples

We’ve got a lot to cover, so let’s begin!


What Are Parts of Speech? 

The parts of speech definitions in English can vary, but here’s a widely accepted one: a part of speech is a category of words that serve a similar grammatical purpose in sentences.  

To make that definition even simpler, a part of speech is just a category for similar types of words . All of the types of words included under a single part of speech function in similar ways when they’re used properly in sentences.

In the English language, it’s commonly accepted that there are 8 parts of speech: nouns, verbs, adjectives, adverbs, pronouns, conjunctions, interjections, and prepositions. Each of these categories plays a different role in communicating meaning in the English language. Each of the eight parts of speech—which we might also call the “main classes” of speech—also has subclasses. In other words, we can think of each of the eight parts of speech as being general categories for different types within their part of speech. There are different types of nouns, different types of verbs, different types of adjectives, adverbs, pronouns...you get the idea.

And that’s an overview of what a part of speech is! Next, we’ll explain each of the 8 parts of speech—definitions and examples included for each category. 

#1: Nouns

Nouns are a class of words that refer, generally, to people and living creatures, objects, events, ideas, states of being, places, and actions. You’ve probably heard English nouns referred to as “persons, places, or things.” That definition is a little simplistic, though—while nouns do include people, places, and things, “things” is kind of a vague term. It’s important to recognize that “things” can include physical things—like objects or belongings—and nonphysical, abstract things—like ideas, states of existence, and actions.

Since there are many different types of nouns, we’ll include several examples of nouns used in a sentence while we break down the subclasses of nouns next!

Subclasses of Nouns, Including Examples

As an open class of words, the category of “nouns” has a lot of subclasses. The most common and important subclasses of nouns are common nouns, proper nouns, concrete nouns, abstract nouns, collective nouns, and count and mass nouns. Let’s break down each of these subclasses!

Common Nouns and Proper Nouns

Common nouns are generic nouns—they don’t name specific items. They refer to people (the man, the woman), living creatures (cat, bird), objects (pen, computer, car), events (party, work), ideas (culture, freedom), states of being (beauty, integrity), and places (home, neighborhood, country) in a general way. 

Proper nouns are sort of the counterpart to common nouns. Proper nouns refer to specific people, places, events, or ideas. Names are the most obvious example of proper nouns, like in these two examples: 

Common noun: What state are you from?

Proper noun: I’m from Arizona .

Whereas “state” is a common noun, Arizona is a proper noun since it refers to a specific state. Whereas “the election” is a common noun, “Election Day” is a proper noun. Another way to pick out proper nouns: the first letter is often capitalized. If you’d capitalize the word in a sentence, it’s almost always a proper noun. 

Concrete Nouns and Abstract Nouns

Concrete nouns are nouns that can be identified through the five senses. Concrete nouns include people, living creatures, objects, and places, since these things can be sensed in the physical world. In contrast to concrete nouns, abstract nouns are nouns that identify ideas, qualities, concepts, experiences, or states of being. Abstract nouns cannot be detected by the five senses. Here’s an example of concrete and abstract nouns used in a sentence: 

Concrete noun: Could you please fix the weedeater and mow the lawn ?

Abstract noun: Aliyah was delighted to have the freedom to enjoy the art show in peace .

See the difference? A weedeater and the lawn are physical objects or things, and freedom and peace are not physical objects, though they’re “things” people experience! Despite those differences, they all count as nouns. 

Collective Nouns, Count Nouns, and Mass Nouns

Nouns are often categorized based on number and amount. Collective nouns are nouns that refer to a group of something—often groups of people or a type of animal. Team , crowd , and herd are all examples of collective nouns. 

Count nouns are nouns that can appear in the singular or plural form, can be modified by numbers, and can be described by quantifying determiners (e.g. many, most, more, several). For example, “bug” is a count noun. It can occur in singular form if you say, “There is a bug in the kitchen,” but it can also occur in the plural form if you say, “There are many bugs in the kitchen.” (In the case of the latter, you’d call an exterminator...which is an example of a common noun!) Any noun that can accurately occur in one of these singular or plural forms is a count noun. 

Mass nouns are another type of noun that involve numbers and amount. Mass nouns are nouns that usually can’t be pluralized, counted, or quantified and still make sense grammatically. “Charisma” is an example of a mass noun (and an abstract noun!). For example, you could say, “They’ve got charisma, ” which doesn’t imply a specific amount. You couldn’t say, “They’ve got six charismas, ” or, “They’ve got several charismas .” It just doesn’t make sense! 

#2: Verbs

A verb is a part of speech that, when used in a sentence, communicates an action, an occurrence, or a state of being . In sentences, verbs are the most important part of the predicate, which explains or describes what the subject of the sentence is doing or how they are being. And, guess what? All sentences contain verbs!

There are many words in the English language that are classified as verbs. A few common verbs include the words run, sing, cook, talk, and clean. These words are all verbs because they communicate an action performed by a living being. We’ll look at more specific examples of verbs as we discuss the subclasses of verbs next!

Subclasses of Verbs, Including Examples

Like nouns, verbs have several subclasses. The subclasses of verbs include copular or linking verbs, intransitive verbs, transitive verbs, and ditransitive or double transitive verbs. Let’s dive into these subclasses of verbs!

Copular or Linking Verbs

Copular verbs, or linking verbs, are verbs that link a subject with its complement in a sentence. The most familiar linking verb is probably be. Here’s a list of other common copular verbs in English: act, become, feel, grow, seem, smell, and taste.

So how do copular verbs work? Well, in a sentence, if we said, “Michi is ,” and left it at that, it wouldn’t make any sense. “Michi,” the subject, needs to be connected to a complement by the copular verb “is.” Instead, we could say, “Michi is leaving.” In that instance, is links the subject of the sentence to its complement. 

Transitive Verbs, Intransitive Verbs, and Ditransitive Verbs

Transitive verbs are verbs that affect or act upon an object. When unattached to an object in a sentence, a transitive verb does not make sense. Here’s an example of a transitive verb attached to (and appearing before) an object in a sentence: 

Please take the clothes to the dry cleaners.

In this example, “take” is a transitive verb because it requires an object—“the clothes”—to make sense. “The clothes” are the objects being taken. “Please take” wouldn’t make sense by itself, would it? That’s because the transitive verb “take,” like all transitive verbs, transfers its action onto another being or object.

Conversely, intransitive verbs don’t require an object to act upon in order to make sense in a sentence. These verbs make sense all on their own! For instance, “They ran ,” “We arrived ,” and, “The car stopped ” are all examples of sentences that contain intransitive verbs. 

Finally, ditransitive verbs, or double transitive verbs, are a bit more complicated. Ditransitive verbs are verbs that are followed by two objects in a sentence . One of the objects has the action of the ditransitive verb done to it, and the other object has the action of the ditransitive verb directed towards it. Here’s an example of what that means in a sentence: 

I cooked Nathan a meal.

In this example, “cooked” is a ditransitive verb because it takes two objects: Nathan and meal. The meal has the action of “cooked” done to it, and “Nathan” has the action of the verb directed towards him.


#3: Adjectives

Here’s the simplest definition of adjectives: adjectives are words that describe other words. Specifically, adjectives modify nouns and noun phrases. In sentences, adjectives usually appear directly before the nouns and pronouns they describe.

Adjectives give more detail to nouns and pronouns by describing how a noun looks, smells, tastes, sounds, or feels, or its state of being or existence. For example, you could say, “The girl rode her bike.” That sentence doesn’t have any adjectives in it, but you could add an adjective before both of the nouns in the sentence—”girl” and “bike”—to give more detail to the sentence. It might read like this: “The young girl rode her red bike.” You can pick out adjectives in a sentence by asking the following questions:

  • Which one? 
  • What kind? 
  • How many? 
  • Whose? 

We’ll look at more examples of adjectives as we explore the subclasses of adjectives next!

Subclasses of Adjectives, Including Examples

Subclasses of adjectives include adjective phrases, comparative adjectives, superlative adjectives, and determiners (which include articles, possessive adjectives, and demonstratives). 

Adjective Phrases

An adjective phrase is a group of words that describe a noun or noun phrase in a sentence. Adjective phrases can appear before the noun or noun phrase in a sentence, like in this example: 

The extremely fragile vase somehow did not break during the move.

In this case, extremely fragile describes the vase. On the other hand, adjective phrases can appear after the noun or noun phrase in a sentence as well: 

The museum was somewhat boring. 

Again, the phrase somewhat boring describes the museum. The takeaway is this: adjective phrases describe the subject of a sentence with greater detail than an individual adjective. 

Comparative Adjectives and Superlative Adjectives

Comparative adjectives are used in sentences where two nouns are compared. They function to compare the differences between the two nouns that they modify. In sentences, comparative adjectives typically end with -er. If we were to describe how comparative adjectives function as a formula, it might look something like this: 

Noun (subject) + verb + comparative adjective + than + noun (object).

Here’s an example of how a comparative adjective would work in that type of sentence: 

The horse was faster than the dog.

The adjective faster compares the speed of the horse to the speed of the dog. Other common comparative adjectives include words that compare distance ( higher, lower, farther ), age ( younger, older ), size and dimensions ( bigger, smaller, wider, taller, shorter ), and quality or feeling ( better, cleaner, happier, angrier ). 

Superlative adjectives are adjectives that describe the extremes of a quality that applies to a subject being compared to a group of objects . Put more simply, superlative adjectives help show how extreme something is. In sentences, superlative adjectives usually appear in this structure and end in -est : 

Noun (subject) + verb + the + superlative adjective + noun (object).

Here’s an example of a superlative adjective that appears in that type of sentence: 

Their story was the funniest story. 

In this example, the subject— story —is being compared to a group of objects—other stories. The superlative adjective “funniest” implies that this particular story is the funniest out of all the stories ever, period. Other common superlative adjectives are best, worst, craziest, and happiest... though there are many more than that! 

It’s also important to know that you can often omit the object from the end of the sentence when using superlative adjectives, like this: “Their story was the funniest.” We still know that “their story” is being compared to other stories without the object at the end of the sentence.

Determiners

The last subclass of adjectives we want to look at are determiners. Determiners are words that determine what kind of reference a noun or noun phrase makes. These words are placed in front of nouns to make it clear what the noun is referring to. Determiners are an example of a part of speech subclass that contains a lot of subclasses of its own. Here is a list of the different types of determiners: 

  • Definite article: the
  • Indefinite articles : a, an 
  • Demonstratives: this, that, these, those
  • Possessive determiners: my, your, his, her, its, our, their
  • Quantifiers : a little, a few, many, much, most, some, any, enough
  • Numbers: one, twenty, fifty
  • Distributives: all, both, half, either, neither, each, every
  • Difference words : other, another
  • Pre-determiners: such, what, rather, quite

Here are some examples of how determiners can be used in sentences: 

Definite article: Get in the car.  

Demonstrative: Could you hand me that magazine?  

Possessive determiner: Please put away your clothes. 

Distributive: He ate all of the pie. 

Though some of the words above might not seem descriptive, they actually do describe the specificity and definiteness, relationship, and quantity or amount of a noun or noun phrase. For example, the definite article “the” (a type of determiner) indicates that a noun refers to a specific thing or entity. The indefinite article “an,” on the other hand, indicates that a noun refers to a nonspecific entity. 

One quick note, since English is always more complicated than it seems: while articles are most commonly classified as adjectives, they can also function as adverbs in specific situations, too. Not only that, some people are taught that determiners are their own part of speech...which means that some people are taught there are 9 parts of speech instead of 8! 

It can be a little confusing, which is why we have a whole article explaining how articles function as a part of speech to help clear things up . 

#4: Adverbs

Adverbs are words that modify verbs, adjectives (including determiners), clauses, prepositions, and sentences. Adverbs typically answer the questions how?, in what way?, when?, where?, and to what extent? In answering these questions, adverbs function to express frequency, degree, manner, time, place, and level of certainty . Adverbs can answer these questions in the form of single words, or in the form of adverbial phrases or adverbial clauses. 

Adverbs are commonly known for being words that end in -ly, but there’s actually a bit more to adverbs than that, which we’ll dive into while we look at the subclasses of adverbs!

Subclasses Of Adverbs, Including Examples

There are many types of adverbs, but the main subclasses we’ll look at are conjunctive adverbs, and adverbs of place, time, manner, degree, and frequency. 

Conjunctive Adverbs

Conjunctive adverbs look like coordinating conjunctions (which we’ll talk about later!), but they are actually their own category: conjunctive adverbs are words that connect independent clauses into a single sentence . These adverbs appear after a semicolon and before a comma in sentences, like in these two examples: 

She was exhausted; nevertheless , she went for a five mile run. 

They didn’t call; instead , they texted.  

Though conjunctive adverbs are frequently used to create shorter sentences using a semicolon and comma, they can also appear at the beginning of sentences, like this: 

He chopped the vegetables. Meanwhile, I boiled the pasta.  

One thing to keep in mind is that conjunctive adverbs come with a comma. When you use them, be sure to include a comma afterward! 

There are a lot of conjunctive adverbs, but some common ones include also, anyway, besides, finally, further, however, indeed, instead, meanwhile, nevertheless, next, nonetheless, now, otherwise, similarly, then, therefore, and thus.  

Adverbs of Place, Time, Manner, Degree, and Frequency

There are also adverbs of place, time, manner, degree, and frequency. Each of these types of adverbs express a different kind of meaning. 

Adverbs of place express where an action is done or where an event occurs. These are used after the verb, direct object, or at the end of a sentence. A sentence like “She walked outside to watch the sunset” uses outside as an adverb of place. 

Adverbs of time explain when something happens. These adverbs are used at the beginning or at the end of sentences. In a sentence like “The game should be over soon,” soon functions as an adverb of time. 

Adverbs of manner describe the way in which something is done or how something happens. These are the adverbs that usually end in the familiar -ly.  If we were to write “She quickly finished her homework,” quickly is an adverb of manner. 

Adverbs of degree tell us the extent to which something happens or occurs. If we were to say “The play was quite interesting,” quite tells us the extent of how interesting the play was. Thus, quite is an adverb of degree.  

Finally, adverbs of frequency express how often something happens . In a sentence like “They never know what to do with themselves,” never is an adverb of frequency. 

Five subclasses of adverbs is a lot, so here’s a quick summary of common adverbs that fall under each category: 

  • Adverbs of place: here, there, everywhere, outside, nearby
  • Adverbs of time: now, soon, later, yesterday, today
  • Adverbs of manner: quickly, slowly, happily, carefully, softly
  • Adverbs of degree: very, quite, almost, too, barely
  • Adverbs of frequency: always, usually, often, sometimes, never

It’s important to know about these subclasses of adverbs because many of them don’t follow the old adage that adverbs end in -ly. 


#5: Pronouns

Pronouns are words that can be substituted for a noun or noun phrase in a sentence . Pronouns function to make sentences less clunky by allowing people to avoid repeating nouns over and over. For example, if you were telling someone a story about your friend Destiny, you wouldn’t keep repeating their name over and over again every time you referred to them. Instead, you’d use a pronoun—like they or them—to refer to Destiny throughout the story. 

Pronouns are typically short words, often only two or three letters long. The most familiar pronouns in the English language are they, she, and he. But these aren’t the only pronouns. There are many more pronouns in English that fall under different subclasses!

Subclasses of Pronouns, Including Examples

There are many subclasses of pronouns, but the most commonly used subclasses are personal pronouns, possessive pronouns, demonstrative pronouns, indefinite pronouns, and interrogative pronouns. 

Personal Pronouns

Personal pronouns are probably the most familiar type of pronoun. Personal pronouns include I, me, you, she, her, him, he, we, us, they, and them. These are called personal pronouns because they refer to a person! Personal pronouns can replace specific nouns in sentences, like a person’s name, or refer to specific groups of people, like in these examples: 

Did you see Gia pole vault at the track meet? Her form was incredible!

The Cycling Club is meeting up at six. They said they would be at the park. 

In both of the examples above, a pronoun stands in for a proper noun to avoid repetitiveness. Her replaces Gia in the first example, and they replaces the Cycling Club in the second example. 

(It’s also worth noting that personal pronouns are one of the easiest ways to determine what point of view a writer is using.) 

Possessive Pronouns

Possessive pronouns are used to indicate that something belongs to or is the possession of someone. The possessive pronouns fall into two categories: limiting and absolute. In a sentence, absolute possessive pronouns can be substituted for the thing that belongs to a person, and limiting pronouns cannot. 

The limiting pronouns are my, your, its, his, her, our, their, and whose, and the absolute pronouns are mine, yours, his, hers, ours, and theirs . Here are examples of a limiting possessive pronoun and absolute possessive pronoun used in a sentence: 

Limiting possessive pronoun: Juan is fixing his car. 

In the example above, the car belongs to Juan, and his is the limiting possessive pronoun that shows the car belongs to Juan. Now, here’s an example of an absolute pronoun in a sentence: 

Absolute possessive pronoun: Did you buy your tickets ? We already bought ours . 

In this example, the tickets belong to whoever we is, and in the second sentence, ours is the absolute possessive pronoun standing in for the thing that “we” possess—the tickets. 

Demonstrative Pronouns, Interrogative Pronouns, and Indefinite Pronouns

Demonstrative pronouns include the words that, this, these, and those. These pronouns stand in for a noun or noun phrase that has already been mentioned in a sentence or conversation. This and these are typically used to refer to objects or entities that are nearby distance-wise, and that and those usually refer to objects or entities that are farther away. Here’s an example of a demonstrative pronoun used in a sentence: 

The books are stacked up in the garage. Can you put those away? 

The books have already been mentioned, and those is the demonstrative pronoun that stands in to refer to them in the second sentence above. The use of those indicates that the books aren’t nearby—they’re out in the garage. Here’s another example: 

Do you need shoes? Here...you can borrow these. 

In this sentence, these refers to the noun shoes. Using the word these tells readers that the shoes are nearby...maybe even on the speaker’s feet! 

Indefinite pronouns are used when it isn’t necessary to identify a specific person or thing. Indefinite pronouns include one, other, none, some, anyone, anybody, everybody, and no one. Here’s one example of an indefinite pronoun used in a sentence: 

Promise you can keep a secret? 

Of course. I won’t tell anyone. 

In this example, the person speaking in the second two sentences isn’t referring to any particular people who they won’t tell the secret to. They’re saying that, in general, they won’t tell anyone . That doesn’t specify a specific number, type, or category of people who they won’t tell the secret to, which is what makes the pronoun indefinite. 

Finally, interrogative pronouns are used in questions, and these pronouns include who, what, which, and whose. These pronouns are simply used to gather information about specific nouns—persons, places, and ideas. Let’s look at two examples of interrogative pronouns used in sentences: 

Do you remember which glass was mine? 

What time are they arriving? 

In the first sentence, the speaker wants to know which glass belongs to whom. In the second sentence, the speaker is asking for more clarity about a specific time. 


Conjunctions hook phrases and clauses together so they fit like pieces of a puzzle.

#6: Conjunctions

Conjunctions are words that are used to connect words, phrases, clauses, and sentences in the English language. This function allows conjunctions to connect actions, ideas, and thoughts as well. Conjunctions are also used to make lists within sentences. (Conjunctions are also probably the most famous part of speech, since they were immortalized in the famous “Conjunction Junction” song from Schoolhouse Rock .) 

You’re probably familiar with and, but, and or as conjunctions, but let’s look into some subclasses of conjunctions so you can learn about the array of conjunctions that are out there!

Subclasses of Conjunctions, Including Examples

Coordinating conjunctions, subordinating conjunctions, and correlative conjunctions are three subclasses of conjunctions. Each of these types of conjunctions functions in a different way in sentences!

Coordinating Conjunctions

Coordinating conjunctions are probably the most familiar type of conjunction. These conjunctions include the words for, and, nor, but, or, yet, so (people often recommend using the acronym FANBOYS to remember the seven coordinating conjunctions!). 

Coordinating conjunctions are responsible for connecting two independent clauses in sentences, but can also be used to connect two words in a sentence. Here are two examples of coordinating conjunctions that connect two independent clauses in a sentence: 

He wanted to go to the movies, but he couldn’t find his car keys. 

They put on sunscreen, and they went to the beach. 

Next, here are two examples of coordinating conjunctions that connect two words: 

Would you like to cook or order in for dinner? 

The storm was loud yet refreshing. 

The two examples above show that coordinating conjunctions can connect different types of words as well. In the first example, the coordinating conjunction “or” connects two verbs; in the second example, the coordinating conjunction “yet” connects two adjectives. 

But wait! Why does the first set of sentences have commas while the second set of sentences doesn’t? When using a coordinating conjunction, put a comma before the conjunction when it’s connecting two complete sentences . Otherwise, there’s no comma necessary. 

Subordinating Conjunctions

Subordinating conjunctions are used to link an independent clause to a dependent clause in a sentence. This type of conjunction always appears at the beginning of a dependent clause, which means that subordinating conjunctions can appear at the beginning of a sentence or in the middle of a sentence following an independent clause. (If you’re unsure about what independent and dependent clauses are, be sure to check out our guide to compound sentences.) 

Here is an example of a subordinating conjunction that appears at the beginning of a sentence: 

Because we were hungry, we ordered way too much food. 

Now, here’s an example of a subordinating conjunction that appears in the middle of a sentence, following an independent clause: 

Rakim was scared after the power went out. 

See? In the example above, the subordinating conjunction after connects the independent clause Rakim was scared to the dependent clause after the power went out. Subordinating conjunctions include (but are not limited to!) the following words: after, as, because, before, even though, once, since, unless, until, whenever, and while. 

Correlative Conjunctions

Finally, correlative conjunctions are conjunctions that come in pairs, like both/and, either/or, and neither/nor. The two correlative conjunctions that come in a pair must appear in different parts of a sentence to make sense— they correlate the meaning in one part of the sentence with the meaning in another part of the sentence . Makes sense, right? 

Here are two examples of correlative conjunctions used in a sentence: 

We’re either going to the Farmer’s Market or the Natural Grocer’s for our shopping today. 

They’re going to have to get dog treats for both Piper and Fudge. 

Other pairs of correlative conjunctions include as many/as, not/but, not only/but also, rather/than, such/that, and whether/or. 


Interjections are single words that express emotion and often end in an exclamation point. Cool!

#7: Interjections 

Interjections are words that often appear at the beginning of sentences or between sentences to express emotions or sentiments such as excitement, surprise, joy, disgust, anger, or even pain. Commonly used interjections include wow!, yikes!, ouch!, or ugh! One clue that an interjection is being used is when an exclamation point appears after a single word (but interjections don’t have to be followed by an exclamation point). And, since interjections usually express emotion or feeling, they’re often referred to as being exclamatory. Wow! 

Interjections don’t come together with other parts of speech to form bigger grammatical units, like phrases or clauses. There also aren’t strict rules about where interjections should appear in relation to other sentences . While it’s common for interjections to appear before sentences that describe an action or event that the interjection helps explain, interjections can appear after sentences that contain the action they’re describing as well. 

Subclasses of Interjections, Including Examples

There are two main subclasses of interjections: primary interjections and secondary interjections. Let’s take a look at these two types of interjections!

Primary Interjections  

Primary interjections are single words, like oh!, wow!, or ouch! that don’t enter into the actual structure of a sentence but add to the meaning of a sentence. Here’s an example of how a primary interjection can be used before a sentence to add to the meaning of the sentence that follows it: 

Ouch ! I just burned myself on that pan!

While someone who hears, I just burned myself on that pan might assume that the person who said that is now in pain, the interjection Ouch! makes it clear that burning oneself on the pan definitely was painful. 

Secondary Interjections

Secondary interjections are words that have other meanings but have evolved to be used like interjections in the English language and are often exclamatory. Secondary interjections can be mixed with greetings, oaths, or swear words. In many cases, the use of secondary interjections negates the original meaning of the word that is being used as an interjection. Let’s look at a couple of examples of secondary interjections here: 

Well , look what the cat dragged in!

Heck, I’d help if I could, but I’ve got to get to work. 

You probably know that the words well and heck weren’t originally used as interjections in the English language. Well originally meant that something was done in a good or satisfactory way, or that a person was in good health. Over time and through repeated usage, it’s come to be used as a way to express emotion, such as surprise, anger, relief, or resignation, like in the example above. 


This is a handy list of common prepositional phrases. (attanatta / Flickr) 

#8: Prepositions

The last part of speech we’re going to define is the preposition. Prepositions are words that are used to connect other words in a sentence—typically nouns and verbs—and show the relationship between those words. Prepositions convey concepts such as comparison, position, place, direction, movement, time, possession, and how an action is completed. 

Subclasses of Prepositions, Including Examples

The subclasses of prepositions are simple prepositions, double prepositions, and prepositional phrases. 

Simple Prepositions

Simple prepositions appear before and between nouns, adjectives, or adverbs in sentences to convey relationships between people, living creatures, things, or places . Here are a couple of examples of simple prepositions used in sentences: 

I’ll order more ink before we run out. 

Your phone was beside your wallet. 

In the first example, the preposition before appears between the noun ink and the personal pronoun we to convey a relationship. In the second example, the preposition beside appears between the verb was and the possessive pronoun your.

In both examples, though, the prepositions help us understand how elements in the sentence are related to one another. In the first sentence, we know that the speaker currently has ink but needs more before it’s gone. In the second sentence, the preposition beside helps us understand how the wallet and the phone are positioned relative to one another! 

Double Prepositions

Double prepositions are exactly what they sound like: two prepositions joined together into one unit to connect phrases, nouns, and pronouns with other words in a sentence. Common examples of double prepositions include outside of, because of, according to, next to, across from, and on top of. Here is an example of a double preposition in a sentence: 

I thought you were sitting across from me. 

You see? Across and from both function as prepositions individually. When combined in a sentence, they create a double preposition. (Also note that the prepositions help us understand how two people— you and I— are positioned relative to one another through a spatial relationship.)  

Prepositional Phrases

Finally, prepositional phrases are groups of words that include a preposition and a noun or pronoun. Typically, the noun or pronoun that appears after the preposition in a prepositional phrase is called the object of the preposition. The object always appears at the end of the prepositional phrase. Additionally, prepositional phrases never include a verb or a subject. Here are two examples of prepositional phrases: 

The cat sat under the chair . 

In the example above, “under” is the preposition, and “the chair” is the noun, which functions as the object of the preposition. Here’s one more example: 

We walked through the overgrown field . 

Now, this example demonstrates one more thing you need to know about prepositional phrases: they can include an adjective before the object. In this example, “through” is the preposition, and “field” is the object. “Overgrown” is an adjective that modifies “the field,” and it’s quite common for adjectives to appear in prepositional phrases like the one above. 

While that might sound confusing, don’t worry: the key is identifying the preposition in the first place! Once you can find the preposition, you can start looking at the words around it to see if it forms a double preposition or a prepositional phrase. 


10 Question Quiz: Test Your Knowledge of Parts of Speech Definitions and Examples

Since we’ve covered a lot of material about the 8 parts of speech with examples (a lot of them!), we want to give you an opportunity to review and see what you’ve learned! While it might seem easier to just use a parts of speech finder instead of learning all this stuff, our parts of speech quiz can help you continue building your knowledge of the 8 parts of speech and master each one. 

Are you ready? Here we go:  

1) What are the 8 parts of speech? 

a) Noun, article, adverb, antecedent, verb, adjective, conjunction, interjection
b) Noun, pronoun, verb, adverb, determiner, clause, adjective, preposition
c) Noun, verb, adjective, adverb, pronoun, conjunction, interjection, preposition

2) Which parts of speech have subclasses?

a) Nouns, verbs, adjectives, and adverbs
b) Nouns, verbs, adjectives, adverbs, conjunctions, and prepositions
c) All of them! There are many types of words within each part of speech.

3) What is the difference between common nouns and proper nouns?

a) Common nouns don’t refer to specific people, places, or entities, but proper nouns do refer to specific people, places, or entities.
b) Common nouns refer to regular, everyday people, places, or entities, but proper nouns refer to famous people, places, or entities.
c) Common nouns refer to physical entities, like people, places, and objects, but proper nouns refer to nonphysical entities, like feelings, ideas, and experiences.

4) In which of the following sentences is the emboldened word a verb?

a) He was frightened by the horror film.
b) He adjusted his expectations after the first plan fell through.
c) She walked briskly to get there on time.

5) Which of the following is a correct definition of adjectives, and what other part of speech do adjectives modify?

a) Adjectives are describing words, and they modify nouns and noun phrases.
b) Adjectives are describing words, and they modify verbs and adverbs.
c) Adjectives are describing words, and they modify nouns, verbs, and adverbs.

6) Which of the following describes the function of adverbs in sentences?

a) Adverbs express frequency, degree, manner, time, place, and level of certainty.
b) Adverbs express an action performed by a subject.
c) Adverbs describe nouns and noun phrases.

7) Which of the following answers contains a list of personal pronouns?

a) This, that, these, those
b) I, you, me, we, he, she, him, her, they, them
c) Who, what, which, whose

8) Where do interjections typically appear in a sentence?

a) Interjections can appear at the beginning of or in between sentences.
b) Interjections appear at the end of sentences.
c) Interjections appear in prepositional phrases.

9) Which of the following sentences contains a prepositional phrase?

a) The dog happily wagged his tail.
b) The cow jumped over the moon.
c) She glared, angry that he forgot the flowers.

10) Which of the following is an accurate definition of a “part of speech”?

a) A category of words that serve a similar grammatical purpose in sentences.
b) A category of words that are of similar length and spelling.
c) A category of words that mean the same thing.

So, how did you do? If you got 1C, 2C, 3A, 4B, 5A, 6A, 7B, 8A, 9B, and 10A, you came out on top! There’s a lot to remember where the parts of speech are concerned, and if you’re looking for more practice like our quiz, try looking around for parts of speech games or parts of speech worksheets online!


What’s Next?

You might be brushing up on your grammar so you can ace the verbal portions of the SAT or ACT. Be sure you check out our guides to the grammar you need to know before you tackle those tests! Here’s our expert guide to the grammar rules you need to know for the SAT , and this article teaches you the 14 grammar rules you’ll definitely see on the ACT.

When you have a good handle on parts of speech, it can make writing essays tons easier. Learn how knowing parts of speech can help you get a perfect 12 on the ACT Essay (or an 8/8/8 on the SAT Essay ).

While we’re on the topic of grammar: keep in mind that knowing grammar rules is only part of the battle when it comes to the verbal and written portions of the SAT and ACT. Having a good vocabulary is also important to making the perfect score ! Here are 262 vocabulary words you need to know before you tackle your standardized tests.


Ashley Sufflé Robinson has a Ph.D. in 19th Century English Literature. As a content writer for PrepScholar, Ashley is passionate about giving college-bound students the in-depth information they need to get into the school of their dreams.




Parts of Speech - Definition, 8 Types and Examples

In the English language, every word belongs to a part of speech. The role a word plays in a sentence determines which part of speech it belongs to. Explore the definition of parts of speech, the different parts of speech, and examples in this article.

What Is a Part of Speech?

Parts of speech are among the first grammar topics we learn when we are in school or when we start our English language learning process. Parts of speech can be defined as words that perform different roles in a sentence. Some parts of speech can perform the functions of other parts of speech too.

  • The Oxford Learner’s Dictionary defines parts of speech as “one of the classes into which words are divided according to their grammar, such as noun, verb, adjective, etc.”
  • The Cambridge Dictionary also gives a similar definition – “One of the grammatical groups into which words are divided, such as noun, verb, and adjective”.

Parts of speech include nouns, pronouns, verbs, adverbs, adjectives, prepositions, conjunctions and interjections.

8 Parts of Speech Definitions and Examples:

1. Nouns are words that are used to name people, places, animals, ideas and things. Nouns can be classified into two main categories: Common nouns and Proper nouns . Common nouns are generic like ball, car, stick, etc., and proper nouns are more specific like Charles, The White House, The Sun, etc.

Examples of nouns used in sentences:

  • She bought a pair of shoes . (thing)
  • I have a pet. (animal)
  • Is this your book ? (object)
  • Many people have a fear of darkness . (ideas/abstract nouns)
  • He is my brother . (person)
  • This is my school . (place)


2. Pronouns are words that are used to substitute a noun in a sentence. There are different types of pronouns. Some of them are reflexive pronouns, possessive pronouns , relative pronouns and indefinite pronouns . I, he, she, it, them, his, yours, anyone, nobody, who, etc., are some of the pronouns.

Examples of pronouns used in sentences:

  • I reached home at six in the evening. (1st person singular pronoun)
  • Did someone see a red bag on the counter? (Indefinite pronoun)
  • Is this the boy who won the first prize? (Relative pronoun)
  • That is my mom. (Possessive pronoun)
  • I hurt myself yesterday when we were playing cricket. (Reflexive pronoun)

3. Verbs are words that denote an action that is being performed by the noun or the subject in a sentence. They are also called action words. Some examples of verbs are read, sit, run, pick, garnish, come, pitch, etc.

Examples of verbs used in sentences:

  • She plays cricket every day.
  • Darshana and Arul are going to the movies.
  • My friends visited me last week.
  • Did you have your breakfast?
  • My name is Meenakshi Kishore.

4. Adverbs are words that are used to provide more information about verbs, adjectives and other adverbs used in a sentence. There are five main types of adverbs namely, adverbs of manner , adverbs of degree , adverbs of frequency , adverbs of time and adverbs of place . Some examples of adverbs are today, quickly, randomly, early, 10 a.m. etc.

Examples of adverbs used in sentences:

  • Did you come here to buy an umbrella? (Adverb of place)
  • I did not go to school yesterday as I was sick. (Adverb of time)
  • Savio reads the newspaper everyday . (Adverb of frequency)
  • Can you please come quickly ? (Adverb of manner)
  • Tony was so sleepy that he could hardly keep his eyes open during the meeting. (Adverb of degree)

5. Adjectives are words that are used to describe or provide more information about the noun or the subject in a sentence. Some examples of adjectives include good, ugly, quick, beautiful, late, etc.

Examples of adjectives used in sentences:

  • The place we visited yesterday was serene .
  • Did you see how big that dog was?
  • The weather is pleasant today.
  • The red dress you wore on your birthday was lovely.
  • My brother had only one chapati for breakfast.

6. Prepositions are words that are used to link one part of the sentence to another. Prepositions show the position of the object or subject in a sentence. Some examples of prepositions are in, out, besides, in front of, below, opposite, etc.

Examples of prepositions used in sentences:

  • The teacher asked the students to draw lines on the paper so that they could write in straight lines.
  • The child hid his birthday presents under his bed.
  • Mom asked me to go to the store near my school.
  • The thieves jumped over the wall and escaped before we could reach home.

7. Conjunctions are words that are used to connect two different parts of a sentence, such as phrases and clauses . Some examples of conjunctions are and, or, for, yet, although, because, not only, etc.

Examples of conjunctions used in sentences:

  • Meera and Jasmine had come to my birthday party.
  • Jane did not go to work as she was sick.
  • Unless you work hard, you cannot score good marks.
  • I have not finished my project,  yet I went out with my friends.

8. Interjections are words that are used to convey strong emotions or feelings. Some examples of interjections are oh, wow, alas, yippee, etc. An interjection is usually followed by an exclamation mark.

Examples of interjections used in sentences:

  • Wow! What a wonderful work of art.
  • Alas! That is really sad.
  • Yippee! We won the match.

Sentence Examples for the 8 Parts of Speech

  • Noun – Tom lives in New York .
  • Pronoun – Did she find the book she was looking for?
  • Verb – I reached home.
  • Adverb – The tea is too hot.
  • Adjective – The movie was amazing .
  • Preposition – The candle was kept under the table.
  • Conjunction – I was at home all day, but I am feeling very tired.
  • Interjection – Oh! I forgot to turn off the stove.

Let us find out if you have understood the different parts of speech and their functions. Try identifying which part of speech the highlighted words belong to.

  • My brother came home  late .
  • I am a good girl.
  • This is the book I  was looking for.
  • Whoa ! This is amazing .
  • The climate  in  Kodaikanal is very pleasant.
  • Can you please pick up Dan and me on  your way home?

Now, let us see if you got it right. Check your answers.

  • My – Pronoun, Home – Noun, Late – Adverb
  • Am – Verb, Good – Adjective
  • I – Pronoun, Was looking – Verb
  • Whoa – Interjection, Amazing – Adjective
  • Climate – Noun, In – Preposition, Kodaikanal – Noun, Very – Adverb
  • And – Conjunction, On – Preposition, Your – Pronoun

What are parts of speech?

The term ‘parts of speech’ refers to words that perform different functions in a sentence  in order to give the sentence a proper meaning and structure.

How many parts of speech are there?

There are 8 parts of speech in total.

What are the 8 parts of speech?

Nouns, pronouns, verbs, adverbs, adjectives, prepositions, conjunctions and interjections are the 8 parts of speech.


Social Sci LibreTexts

3.1: Language and Meaning

  • Page ID 18449

Learning Objectives

  • Explain how the triangle of meaning describes the symbolic nature of language.
  • Distinguish between denotation and connotation.
  • Discuss the function of the rules of language.
  • Describe the process of language acquisition.

The relationship between language and meaning is not a straightforward one. One reason for this complicated relationship is the limitlessness of modern language systems like English (Crystal, 2005). Language is productive in the sense that there are an infinite number of utterances we can make by connecting existing words in new ways. In addition, there is no limit to a language’s vocabulary, as new words are coined daily. Of course, words aren’t the only things we need to communicate, and although verbal and nonverbal communication are closely related in terms of how we make meaning, nonverbal communication is not productive and limitless. Although we can only make a few hundred physical signs, we have about a million words in the English language. So with all this possibility, how does communication generate meaning?

You’ll recall that “generating meaning” was a central part of the definition of communication we learned earlier. We arrive at meaning through the interaction between our nervous and sensory systems and some stimulus outside of them. It is here, between what the communication models we discussed earlier labeled as encoding and decoding, that meaning is generated as sensory information is interpreted. The indirect and sometimes complicated relationship between language and meaning can lead to confusion, frustration, or even humor. We may even experience a little of all three when we stop to think about how there are some twenty-five definitions available to tell us the meaning of the word meaning! (Crystal, 2005) Since language and symbols are the primary vehicle for our communication, it is important that we not take the components of our verbal communication for granted.

Language Is Symbolic

Our language system is primarily made up of symbols. A symbol is something that stands in for or represents something else. Symbols can be communicated verbally (speaking the word hello ), in writing (putting the letters H-E-L-L-O together), or nonverbally (waving your hand back and forth). In any case, the symbols we use stand in for something else, like a physical object or an idea; they do not actually correspond to the thing being referenced in any direct way. Unlike hieroglyphics in ancient Egypt, which often did have a literal relationship between the written symbol and the object being referenced, the symbols used in modern languages look nothing like the object or idea to which they refer.

The symbols we use combine to form language systems or codes. Codes are culturally agreed on and ever-changing systems of symbols that help us organize, understand, and generate meaning (Leeds-Hurwitz, 1993). There are about 6,000 language codes used in the world, and around 40 percent of those (2,400) are only spoken and do not have a written version (Crystal, 2005). Remember that for most of human history the spoken word and nonverbal communication were the primary means of communication. Even languages with a written component didn’t see widespread literacy, or the ability to read and write, until a little over one hundred years ago.

The symbolic nature of our communication is a quality unique to humans. Since the words we use do not have to correspond directly to a “thing” in our “reality,” we can communicate in abstractions. This property of language is called displacement and specifically refers to our ability to talk about events that are removed in space or time from a speaker and situation (Crystal, 2005). Animals do communicate, but in a much simpler way that is only a reaction to stimulus. Further, animal communication is very limited and lacks the productive quality of language that we discussed earlier.

Dog looking at camera with letters on carpet next to it spelling BARK.

As I noted in the chapter titled “Introduction to Communication Studies,” the earliest human verbal communication was not very symbolic or abstract, as it likely mimicked sounds of animals and nature. Such a simple form of communication persisted for thousands of years, but as later humans turned to settled agriculture and populations grew, finer distinctions needed to be communicated. More terms (symbols) were needed to accommodate the increasing number of things, like tools, and ideas, like crop rotation, that emerged as a result of new knowledge about and experience with farming and animal domestication. There weren’t written symbols during this time, but objects were often used to represent other objects; for example, a farmer might have kept a pebble in a box to represent each chicken he owned. As further advancements made keeping track of objects-representing-objects more difficult, more abstract symbols and later written words were able to stand in for an idea or object. Even though these transitions occurred many thousands of years ago, we can trace some words that we still use today back to their much more direct and much less abstract origins.

For example, the word calculate comes from the Latin word calculus, which means “pebble.” But what does a pebble have to do with calculations? Pebbles were used, very long ago, to calculate things before we developed verbal or written numbering systems (Hayakawa & Hayakawa, 1990). As I noted earlier, a farmer may have kept, in a box, one pebble for each of his chickens. Each pebble represented one chicken, meaning that each symbol (the pebble) had a direct correlation to another thing out in the world (its chicken). This system allowed the farmer to keep track of his livestock. He could periodically verify that each pebble had a corresponding chicken. If there was a discrepancy, he would know that a chicken was lost, stolen, or killed. Later, symbols were developed that made accounting a little easier. Instead of keeping track of boxes of pebbles, the farmer could record a symbol like the word five or the numeral 15 that could stand in for five or fifteen pebbles. This demonstrates how our symbols have evolved and how some still carry that ancient history with them, even though we are unaware of it. While this evolution made communication easier in some ways, it also opened up room for misunderstanding, since the relationship between symbols and the objects or ideas they represented became less straightforward. Although the root of calculate means “pebble,” the word calculate today has at least six common definitions.

The Triangle of Meaning

The triangle of meaning is a model of communication that indicates the relationship among a thought, symbol, and referent and highlights the indirect relationship between the symbol and referent (Richards & Ogden, 1923). As you can see in Figure 3.1, the thought is the concept or idea a person references. The symbol is the word that represents the thought, and the referent is the object or idea to which the symbol refers. This model is useful for us as communicators because when we are aware of the indirect relationship between symbols and referents, we are aware of how common misunderstandings occur, as the following example illustrates: Jasper and Abby have been thinking about getting a new dog. So each of them is having a similar thought. They are each using the same symbol, the word dog, to communicate about their thought. Their referents, however, are different. Jasper is thinking about a small dog like a dachshund, and Abby is thinking about an Australian shepherd. Since the word dog doesn’t refer to one specific object in our reality, it is possible for them to have the same thought, and use the same symbol, but end up in an awkward moment when they get to the shelter and fall in love with their respective referents only to find out the other person didn’t have the same thing in mind.
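The model’s moving parts can be made concrete with a small sketch. This is purely illustrative; the class name, field names, and example values below are our own choices, not part of Richards and Ogden’s model.

```python
from dataclasses import dataclass

@dataclass
class Sign:
    thought: str   # the concept or idea a person has in mind
    symbol: str    # the word that stands in for the thought
    referent: str  # the specific object or idea the symbol points to

# Jasper and Abby have similar thoughts and use the same symbol...
jasper = Sign(thought="a new pet dog", symbol="dog", referent="a dachshund")
abby = Sign(thought="a new pet dog", symbol="dog", referent="an Australian shepherd")

# ...but their referents differ, which is exactly where misunderstanding arises.
print(jasper.symbol == abby.symbol)      # True
print(jasper.referent == abby.referent)  # False
```

The point of the sketch is simply that matching symbols do not guarantee matching referents; only clarifying questions can reveal the mismatch.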

Figure 3.1: The Triangle of Meaning

Being aware of this indirect relationship between symbol and referent, we can try to compensate for it by getting clarification. Some of what we learned in the chapter titled “Communication and Perception”, about perception checking, can be useful here. Abby might ask Jasper, “What kind of dog do you have in mind?” This question would allow Jasper to describe his referent, which would allow for more shared understanding. If Jasper responds, “Well, I like short-haired dogs. And we need a dog that will work well in an apartment,” then there’s still quite a range of referents. Abby could ask questions for clarification, like “Sounds like you’re saying that a smaller dog might be better. Is that right?” Getting to a place of shared understanding can be difficult, even when we define our symbols and describe our referents.

Definitions

Definitions help us narrow the meaning of particular symbols, which also narrows a symbol’s possible referents. They also provide more words (symbols) for which we must determine a referent. If a concept is abstract and the words used to define it are also abstract, then a definition may be useless. Have you ever been caught in a verbal maze as you look up an unfamiliar word, only to find that the definition contains more unfamiliar words? Although this can be frustrating, definitions do serve a purpose.

Words have denotative and connotative meanings. Denotation refers to definitions that are accepted by the language group as a whole, or the dictionary definition of a word. For example, one denotation of the word cowboy is a man who takes care of cattle. Another denotation is a reckless and/or independent person. A more abstract word, like change, would be more difficult to understand due to its multiple denotations. Since both cowboy and change have multiple meanings, they are considered polysemic words. Monosemic words have only one use in a language, which makes their denotation more straightforward. Specialized academic or scientific words, like monosemic itself, are often monosemic, but relatively few commonly used words are; handkerchief is one example. As you might guess based on our discussion of the complexity of language so far, monosemic words are far outnumbered by polysemic words.
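The difference between polysemic and monosemic words can be sketched as a simple lookup from symbols to their accepted denotations. The word lists below are illustrative examples of our own, not entries from an actual dictionary.

```python
# Map each word (symbol) to its accepted denotations.
denotations = {
    "cowboy": [
        "a man who takes care of cattle",
        "a reckless and/or independent person",
    ],
    "change": [
        "to make something different",
        "to replace one thing with another",
        "money returned after a purchase",
    ],
    "handkerchief": [
        "a small square of cloth used for wiping the face or nose",
    ],
}

def is_polysemic(word: str) -> bool:
    """A word is polysemic if the language group accepts more than one denotation."""
    return len(denotations[word]) > 1

print(is_polysemic("cowboy"))        # True
print(is_polysemic("handkerchief"))  # False
```

Note that this captures only denotation; connotations vary from person to person and experience to experience, and so would not fit in a shared table at all.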

Connotation refers to definitions that are based on emotion- or experience-based associations people have with a word. To go back to our previous words, change can have positive or negative connotations depending on a person’s experiences. A person who just ended a long-term relationship may think of change as good or bad depending on what he or she thought about his or her former partner. Even monosemic words like handkerchief that only have one denotation can have multiple connotations. A handkerchief can conjure up thoughts of dainty Southern belles or disgusting snot-rags. A polysemic word like cowboy has many connotations, and philosophers of language have explored how connotations extend beyond one or two experiential or emotional meanings of a word to constitute cultural myths (Barthes, 1972). Cowboy, for example, connects to the frontier and the western history of the United States, which has mythologies associated with it that help shape the narrative of the nation. The Marlboro Man is an enduring advertising icon that draws on connotations of the cowboy to attract customers. While people who grew up with cattle or have family that ranch may have a very specific connotation of the word cowboy based on personal experience, other people’s connotations may be more influenced by popular cultural symbolism like that seen in westerns.

Language Is Learned

As we just learned, the relationship between the symbols that make up our language and their referents is arbitrary, which means they have no meaning until we assign it to them. In order to effectively use a language system, we have to learn, over time, which symbols go with which referents, since we can’t just tell by looking at the symbol. Like me, you probably learned what the word apple meant by looking at the letters A-P-P-L-E and a picture of an apple and having a teacher or caregiver help you sound out the letters until you said the whole word. Over time, we associated that combination of letters with the picture of the red delicious apple and no longer had to sound each letter out. This is a deliberate process that may seem slow in the moment, but as we will see next, our ability to acquire language is actually quite astounding. We didn’t just learn individual words and their meanings, though; we also learned rules of grammar that help us put those words into meaningful sentences.

Child sitting outside, reading.

The Rules of Language

Any language system has to have rules to make it learnable and usable. Grammar refers to the rules that govern how words are used to make phrases and sentences. Someone would likely know what you mean by the question “Where’s the remote control?” But “The control remote where’s?” is likely to be unintelligible or at least confusing (Crystal, 2005). Knowing the rules of grammar is important in order to be able to write and speak to be understood, but knowing these rules isn’t enough to make you an effective communicator. As we will learn later, creativity and play also have a role in effective verbal communication. Even though teachers have long enforced the idea that there are right and wrong ways to write and say words, there really isn’t anything inherently right or wrong about the individual choices we make in our language use. Rather, it is our collective agreement that gives power to the rules that govern language.

Some linguists have viewed the rules of language as fairly rigid and limiting in terms of the possible meanings that we can derive from words and sentences created from within that system (de Saussure, 1974). Others have viewed these rules as more open and flexible, allowing a person to make choices to determine meaning (Eco, 1976). Still others have claimed that there is no real meaning and that possibilities for meaning are limitless (Derrida, 1978). For our purposes in this chapter, we will take the middle perspective, which allows for the possibility of individual choice but still acknowledges that there is a system of rules and logic that guides our decision making.

Looking back to our discussion of connotation, we can see how individuals play a role in how meaning and language are related, since we each bring our own emotional and experiential associations with a word that are often more meaningful than a dictionary definition. In addition, we have quite a bit of room for creativity, play, and resistance with the symbols we use. Have you ever had a secret code with a friend that only you knew? This can allow you to use a code word in a public place to get meaning across to the other person who is “in the know” without anyone else understanding the message. The fact that you can take a word, give it another meaning, have someone else agree on that meaning, and then use the word in your own fashion clearly shows that meaning is in people rather than words. As we will learn later, many slang words developed because people wanted a covert way to talk about certain topics like drugs or sex without outsiders catching on.

Language Acquisition

Language acquisition refers to the process by which we learn to understand, produce, and use words to communicate within a given language group. The way we acquire language is affected by many factors. We know that learning a language is not just about learning words. We have to learn how to correctly connect the words to what they mean in a given context and be able to order the words in such a way, within the rules of grammar for the language code we are using, that other people will be able to understand us (Hayakawa & Hayakawa, 1990). As if that didn’t seem like enough to learn, we also have to learn various conversational patterns that we regularly but often unconsciously follow to make our interactions smooth and successful. A brief overview of language acquisition from birth to adulthood offers us a look at the amazing and still somewhat mysterious relationships between our brain, eyes, ears, voice, and other physiological elements (Crystal, 2005). In terms of language acquisition, there is actually a great deal of variation between individuals due to physical and contextual differences, but this overview presumes “typical development.”

Much is being taken in during the first year of life as brain development accelerates and senses are focused and tuned. Primary caregivers are driven, almost instinctively, to begin instilling conversational abilities in babies from birth. As just about anyone who has spent time around a baby during this phase of rapid development can attest, there is a compulsion to interact with the child, which is usually entertaining for adult and baby. This compulsion isn’t random or accidental, and we would be wrong to assume that our communication is useless or just for fun. We would also be wrong to assume that language acquisition doesn’t begin until a baby says his or her first words. By the time this happens, babies have learned much, through observation and practice, about our verbal communication and interaction patterns. These key developments include the following:

  • 2–4 months. Babies can respond to different tones of voice (angry, soothing, or playful).
  • 6 months. Babies can associate some words, like bye-bye, with a corresponding behavior, and they begin “babbling,” which is actually practice for more intelligible speech to come.
  • 8–10 months. Babies learn that pointing can attract or direct attention, and they begin to follow adult conversations, shifting eye contact from one speaker to the next.
  • 1 year. Babies recognize some individual words (people’s names, no) and basic rituals of verbal interaction such as question-pause-answer and various greetings. Shortly before or after this time, babies begin to use “melodic utterances” echoing the variety in pitch and tone in various verbal interactions such as questioning, greeting, or wanting.

Mother kissing baby on cheek.

Language acquisition after the age of two seems sluggish compared to the pace of development during the first year or so. By the end of the first year, babies have learned most of the basic phonetic components necessary for speech. The second year represents a time of intense practice, of verbal trial and error. From three to five, we continue to develop our pronunciation ability, which develops enough by our teens to allow us to engage in everyday communication. Of course, our expressive repertoire, including ways of speaking and the vocabulary we use, continues to develop. A person’s life and career choices determine to a large degree how much further development occurs. But the language abilities we have acquired can decrease or disappear as a result of disease or trauma. Additionally, if such things occur early in life, or before birth, the process of language acquisition can be quite different. Barriers to speech and language acquisition are common and are the domain of a related but distinct field of study often housed in departments of communication sciences and disorders. The “Getting Real” box discusses this field of study and related careers.

“Getting Real”: Communication Sciences and Disorders

The field of communication sciences and disorders includes career paths in audiology and speech-language pathology—we will focus on the latter here. Individuals working in this field can work in schools, hospitals, private practice, or in academia as researchers and professors. Speech and language disorders affect millions of people. Between six and eight million people in the United States have some kind of language impairment, ranging from stuttering to lack of language comprehension to lack of language expression. [1] Speech-language pathologists may work with children who have exhibited a marked slowness or gap in language acquisition or with adults who have recently lost language abilities due to stroke or some other trauma or disease. Speech-language pathologists often diagnose and treat language disorders as part of a team that may include teachers, physicians, social workers, and others. The career outlook is predicted to be very strong for the next eight years as the baby boomers reach an age where age-related hearing and language impairments develop, as medical advances increase survival rates for premature babies and stroke and trauma victims, and as schools continue to grow. Speech-language pathologists often obtain graduate degrees, complete clinical experiences, and take tests for various certifications and licenses. To be successful in this field, individuals must have good interpersonal communication skills to work with a variety of clients and other service providers, above-average intellectual aptitude (particularly in science), and excellent oral and written communication skills. Typical salaries range from $58,000 a year for individuals working in elementary schools to $70,000 for those in health care settings.

  • What specific communication skills do you think would be important for a speech-language pathologist and why?
  • The motto for the American Speech-Language-Hearing Association is “Making effective communication a human right, accessible and achievable for all.” How does this motto relate to our discussion of communication ethics so far? What kinds of things do speech-language pathologists do that fulfill that motto?

Key Takeaways

  • The triangle of meaning is a model of communication that indicates the relationship among a thought, symbol, and referent, and highlights the indirect relationship between the symbol and the referent. The model explains how for any given symbol there can be many different referents, which can lead to misunderstanding.
  • Denotation refers to the agreed on or dictionary definition of a word. Connotation refers to definitions that are based on emotion- or experience-based associations people have with a word.
  • The rules of language help make it learnable and usable. Although the rules limit some of the uses of language, they still allow for the possibility of creativity and play.
  • Language acquisition refers to the process by which we learn to understand, produce, and use words to communicate within a given language group. This process happens at an amazing speed during the first two years of life, and we attain all the linguistic information we need to participate in everyday conversations, assuming normal development, by our early teens.
Exercises

  • Trace the history of a word (its etymology) like we did with calculate earlier in the chapter. Discuss how the meaning of the word (the symbol) has changed as it has gotten further from its original meaning. Two interesting words to trace are hazard and phony.
  • Apply the triangle of meaning to a recent message exchange you had in which differing referents led to misunderstanding. What could you have done to help prevent or correct the misunderstanding?
  • Think of some words that have strong connotations for you. How does your connotation differ from the denotation? How might your connotation differ from another person’s?

Barthes, R., Mythologies (New York, NY: Hill and Wang, 1972).

Crystal, D., How Language Works: How Babies Babble, Words Change Meaning, and Languages Live or Die (Woodstock, NY: Overlook Press, 2005), 8–9.

de Saussure, F., Course in General Linguistics, trans. Wade Baskin (London: Fontana/Collins, 1974).

Derrida, J., Writing and Difference, trans. Alan Bass (London: Routledge, 1978).

Eco, U., A Theory of Semiotics (Bloomington, IN: Indiana University Press, 1976).

Hayakawa, S. I. and Alan R. Hayakawa, Language in Thought and Action, 5th ed. (San Diego, CA: Harcourt Brace, 1990), 87.

Leeds-Hurwitz, W., Semiotics and Communication: Signs, Codes, Cultures (Hillsdale, NJ: Lawrence Erlbaum Associates, 1993), 53.

Richards, I. A. and Charles K. Ogden, The Meaning of Meaning (London: Kegan Paul, Trench, Trubner, 1923).

  • American Speech-Language-Hearing Association, accessed June 7, 2012, http://www.asha.org/careers/professions/default-overview.htm.


What Is an Interjection? | Examples, Definition & Types

Published on September 29, 2022 by Eoghan Ryan . Revised on November 16, 2022.

An interjection is a word or phrase used to express a feeling or to request or demand something. While interjections are a part of speech , they are not grammatically connected to other parts of a sentence.

Interjections are common in everyday speech and informal writing. While some interjections such as “well” and “indeed” are acceptable in formal conversation, it’s best to avoid interjections in formal or academic writing .

Uh-oh . I forgot to get gas.

We’re not lost. We just need to go, um , this way.

How are interjections used in sentences?

Interjections add meaning to a sentence or context by expressing a feeling, making a demand, or emphasizing a thought.

Interjections can be either a single word or a phrase, and they can be used on their own or as part of a sentence.

Shoot, I’ve broken a nail.

As interjections are a grammatically independent part of speech, they can often be excluded from a sentence without impacting its meaning.

  • Oh boy, I’m tired.
  • Ouch! That hurts!
  • That hurts!


Primary interjections

A primary interjection is a word or sound that can only be used as an interjection. Primary interjections do not have alternative meanings and can’t function as another part of speech (i.e., noun, verb, or adjective).

Primary interjections are typically just sounds without a clear etymology. As such, while they sometimes have standard spellings, a single interjection may be written in different ways (e.g., “um-hum” or “mm-hmm”).

Um-hum. I think that could work.

Secondary interjections

A secondary interjection is a word that is typically used as another part of speech (such as a noun, verb, or adjective) that can also be used as an interjection.

Shoot! My flight has been canceled.

Volitive interjections

A volitive interjection is used to give a command or make a request. For example, the volitive interjection “shh” or “shush” is used to command someone to be quiet.

Psst. Pass me an eraser.

Emotive interjections

An emotive interjection is used to express an emotion or to indicate a reaction to something. For example, the emotive interjection “ew” is used to express disgust.

Curse words, also called expletives, are commonly used (in informal contexts) as emotive interjections to express frustration or anger.

Yay! I’m so excited to see you.

Cognitive interjections

A cognitive interjection is used to express a thought or indicate a thought process. For example, the cognitive interjection “um” can express confusion or indicate that the speaker is thinking.

Wow! I wasn’t expecting that.

Greetings and parting words

Greetings and parting words/phrases are interjections used to acknowledge or welcome someone or to express good wishes at the end of a conversation.

Hello! It’s good to see you.

Interjections and punctuation

How an interjection is punctuated depends on the context and the intensity of the emotion or thought being expressed.

Exclamation points are most commonly used along with interjections to emphasize the intensity of an emotion, thought, or demand.

When the emotion or thought being expressed is less extreme, an interjection can also be followed by a period. If an interjection is used to express uncertainty or to ask a question, it should be followed by a question mark .

We’ve just won the lottery. Hurray!

When an interjection is used as part of a sentence, it should be set off from the rest of the sentence using commas.

It was an interesting lecture, indeed.




Conversation Analysis

  • Jack Sidnell, University of Toronto
  • https://doi.org/10.1093/acrefore/9780199384655.013.40
  • Published online: 03 March 2016

Conversation analysis is an approach to the study of social interaction and talk-in-interaction that, although rooted in the sociological study of everyday life, has exerted significant influence across the humanities and social sciences, including linguistics. Drawing on recordings (both audio and video) of naturalistic interaction (unscripted, non-elicited, etc.), conversation analysts attempt to describe the stable practices and underlying normative organizations of interaction by moving back and forth between the close study of singular instances and the analysis of patterns exhibited across collections of cases. Four important domains of research within conversation analysis are turn-taking, repair, action formation and ascription, and action sequencing.

Keywords: conversation, interaction, turn-taking, speech acts

Introduction

Conversation analysis (CA) is an approach to the study of social interaction that emerged through the collaborative research of Harvey Sacks, Emanuel Schegloff, Gail Jefferson, and their students in the 1960s and early 1970s. In 1974, Sacks, Schegloff, and Jefferson published a landmark paper in Language titled “A Simplest Systematics for the Organization of Turn-Taking for Conversation.” Not only did this paper lay out an account of turn-taking in conversation and provide a detailed exemplification of the conversation analytic method, it also articulated with concerns in linguistics and brought CA to the attention of linguists and others engaged in the scientific study of language. The paper remains the most cited and most downloaded paper in the history of the journal (Joseph, 2003). Since the publication of the turn-taking paper, researchers in this area have continued to identify ways in which the study of conversation and social interaction relates to the concerns of linguistic science.

Interaction as the Home of Language

An underlying, guiding assumption of research in conversation analysis is that the home environment of language is co-present interaction and that its structure is in some basic ways adapted to that environment. This distinguishes CA from much of linguistic science, which generally understands language to have its home in the human mind and to reflect in its structure the organization of mind. For the most part these can be seen as complementary rather than opposed perspectives (depending, perhaps, on the model of mind involved). Language is both a cognitive and an interactional phenomenon, and its organization must certainly reflect this fact.

What do we mean by interaction or co-present interaction? Goffman (who supervised the PhD studies of both Sacks and Schegloff) described interaction as a normatively organized structure of attention (see, inter alia, 1957, 1964): when people interact they are, however fleetingly, attending to one another’s attention. While drawing on these and other ideas from Goffman, conversation analysts tend to emphasize the fact that interaction is the arena for human action. In order to accomplish the business of everyday life—for instance, checking to see that a neighbor received the newspaper, updating a friend about a recent event, asking for a ride to work—we interact with one another. Conversation analysis seeks to discover and describe (formally and in a rigorous, generalizable way) the underlying norms and practices that make interaction the orderly thing that it is. For instance, one fundamental aspect of the orderliness of interaction has to do with the distribution of opportunities to participate in it. How, that is, does a participant determine when it is her turn to speak, or her turn to listen? Another aspect of orderliness concerns the apparatus for addressing problems of hearing, speaking, or understanding. How, that is, do participants in conversation remedy problems that inevitably arise in the course of interaction, and how do they do this in an effective yet efficient way, such that they are able to resume whatever activity they were engaged in before the trouble arose? A third aspect of orderliness has to do with the way in which speakers produce, and recipients understand, stretches of talk so as to constitute them as actions by which they can achieve their interactional goals. A final aspect of the orderliness of interaction has to do with the way these actions are organized into sequences in such a way as to construct an architecture of intersubjectivity—a basis for mutual understanding in conversation. Each of these four domains of conversational organization will be briefly sketched out, and ways in which research in each area connects with the concerns of linguists and other scholars of language will be highlighted.

Turn-Taking

We can begin by noting, as the authors of Sacks et al. ( 1974 ) do, that there are various ways in which turn-taking for conversation (and indeed the distribution of opportunities to participate in interaction more generally) could be organized. For instance, turns could be pre-allocated so that every potential participant was entitled to talk for two minutes and the order of speakers was decided in advance (by their age, gender, status, first initial, height, weight, etc.). There are speech exchange systems (as Sacks et al., 1974 calls them) that operate more or less in this way, such as debate. But there are reasons that such a system would not work for conversation. If, for instance, we imagine that in such a system participants A, B, C, D each get an opportunity to talk and in that order, what will happen if B asks A a question? B now has to wait for C and D to speak before A can answer. But what if C and D also ask A a question? Or what if D does not hear the question that B has asked and so on? Of course, although this kind of pre-allocated system obviously won’t work for conversation, there are many other ways in which turn-taking might be organized (and, indeed, is organized for activities other than conversation). We need not review all the possibilities here. We can already see, in light of these considerations and common sense, that turn-taking for conversation must be organized locally , by the participants themselves. As Sacks et al. ( 1974 ) puts it, turn-taking in conversation is “locally managed, party-administered, interactionally controlled.”

The model these authors describe has two components and a set of “rules” that coordinate their operation. The “turn constructional component” determines the shape and extent of possible turns by specifying a sharply delimited set of units from which turns can be composed. Specifically, in English, turn constructional units (TCUs) can be lexical items, phrases, clauses, and sentences. In the following case, Shelley’s declaratively formatted question at line 01, “you were at the Halloween thing.” is a sentential TCU while her “the Halloween party” at line 03 is a phrasal TCU. Debbie’s turns at lines 02 and 04 are lexical TCUs.

(1) Debbie & Shelley

Instances of these TCUs “allow a projection of the unit-type under way, and what, roughly, it will take for an instance of that unit-type to be completed” (Sacks et al., 1974 , p. 702). This feature of projectability allows a recipient to anticipate possible completion of the current TCU and to target this “point of possible completion” as a place to begin his or her own talk. We can see how this works in example (1). Debbie is able to position her talk at line 02 so that it begins just as Shelley reaches possible completion and, in the case of line 04, just before Shelley reaches possible completion. As Sacks et al. write “we find sequentially appropriate starts by next speakers after turns composed of single-word, single-phrase, or single-clause constructions, with no gap—i.e., with no waiting for possible sentence completion” (Sacks et al., 1974 , p. 702). The precise timing of these starts thus provides evidence for the projectability of possible completion of a TCU. At the same time the fact that participants target these points as appropriate places to begin their own talk indicates that such points are treated as transition-relevant. Points of possible completion constitute transition relevance places (TRPs), which are, as Schegloff ( 1992 , p. 116) puts it, “discrete places in the developing course of a speaker’s talk ( . . . ) at which ending the turn or continuing it, transfer of the turn or its retention become relevant.”

The “turn allocation component” specifies techniques by which turns are allocated among parties to a conversation. For current purposes the most important of these techniques are those by which a current speaker selects a next speaker. A basic technique in this respect involves combining an address term (or other method of address such as directed gaze) with a sequence-initiating action such as a question, request, invitation, complaint, and so on. Consider (2), in which Michael and Nancy are guests for dinner at the home of Shane and Vivian. In the fragment below, Michael addresses his talk to Nancy by using her name (or a short form of it) and produces a question that is also a request. In this way he selects her to speak next, which she does at line 03.

(2) Chicken Dinner p. 3 (Address term)

According to Sacks et al. ( 1974 ), a set of rules coordinates the use of the turn constructional and turn allocation component. These rules apply at the first transition relevance place of any turn.

Rule 1 (C = current speaker, N = next speaker):
(a) If C selects N in current turn, then C must stop speaking, and N must speak next, transition occurring at the first possible completion after N-selection.
(b) If C does not select N, then any party (other than C) may self-select at a first point of possible completion, first speaker gaining rights to the next turn.
(c) If C has not selected N, and no other party self-selects under option (b), then C may (but need not) continue (i.e., claim rights to a further TCU).
Rule 2 applies at all subsequent TRPs: When Rule 1(c) has been applied by C, then at the next TRP Rules 1 (a)–(c) apply, and recursively at the next TRP, until speaker change is effected.
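Since the rule set operates like a small decision procedure, it can be sketched in code. The following is a purely illustrative model (the function and variable names are my own, not part of the CA literature) of how the rules allocate the next turn at a single transition relevance place:

```python
def next_speaker_at_trp(current, selected_next, self_selectors):
    """Apply the Sacks et al. (1974) rules at one transition relevance place.

    current        -- the current speaker (C)
    selected_next  -- the party C selected in the current turn, or None
    self_selectors -- parties who start up at the TRP, in order of starting
    Returns the party with rights to the next turn.
    """
    # Rule 1(a): if C selected a next speaker, that party must speak next.
    if selected_next is not None:
        return selected_next
    # Rule 1(b): otherwise the first self-selector (other than C)
    # gains rights to the next turn.
    for party in self_selectors:
        if party != current:
            return party
    # Rule 1(c): failing both, C may (but need not) continue with a further TCU.
    return current
```

Rule 2 then amounts to applying this procedure again at each subsequent TRP, recursively, until speaker change is effected.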

This “simplest systematics” allows us to see how turn-taking in ordinary conversation is accomplished in such a way as to minimize both gap and overlap. It also allows us to see why (and to predict where) many cases of overlap occur. Consider (3).

Here Tourist’s turn at line 01, being formatted as a polar interrogative, selects some other party who is knowledgeable about the park. Parky is the first to respond and his answer is precision timed to begin at just the point where Tourist’s turn reaches completion. Parky’s turn does not select a next speaker and, after a delay of one second, Old man self-selects, elaborating the answer that Parky has provided. Parky apparently means to agree with this elaboration and produces a turn (line 06) that is, again, precisely timed to begin at just the point that Old man reaches possible completion with no gap and no overlap. However, we can see that the talk at line 06 is in fact Parky’s third attempt to articulate the agreement. What is important to see for present purposes is that the first two attempts to self-select actually target points of possible, though not actual, completion within the emerging course of Old man’s turn. That is to say, “Th’ Fun fair changed it” is in fact a possibly complete turn in this context, as is “Th’ Fun fair changed it’n ahful lot.” This example, which is in no way unusual, provides clear evidence that Parky is able to parse the talk as it emerges so as to project points of possible completion within it and thus be prepared to begin his own turn at just these places. Overlap of the kind produced here provides further evidence of the projectability of possible completion and, moreover, of the fact that participants orient to such possible completion as transition relevant.

Two implications of what has so far been said are, first, that the turn-taking system for conversation operates over only two turn constructional units at a time: current and next. Second, a current speaker is initially entitled to produce only one TCU, and at the first point of possible completion transition to a next speaker becomes a relevant possibility. Thus, if a current speaker is to talk for more than one TCU, some effort to secure additional opportunity will have to be made. One set of practices involves foreclosing the possibility of another self-selecting at possible completion by, for instance, reducing the extent and recognizability of that point of possible completion. Another practice involves issuing a bid to produce a longer stretch of talk. If the other participants buy in and provide a go-ahead response to such a bid, the result is to effectively suspend the association between possible completion and transition relevance for the duration of the telling. So, for example, when speakers produce stories they often begin with a short sequence in which a bid is made with “Guess what happened to me today?” and a recipient responds with “What.” etc. (see Sidnell, 2010 ). Another implication of the foregoing discussion is that the turn-taking mechanism for ordinary conversation is, as Goodwin ( 1979 ) writes, “coercive” rather than “permissive.” A number of other models of turn-taking propose that speakers employ “turn-ending signals” or “completion cues,” and that a listener must wait to hear one of these cues before beginning his or her own talk. Such a system would be “permissive” in that it would allow a current speaker to continue talking as long as he or she wished. But the system described by Sacks et al. ( 1974 ) is not like this. Rather it is “spring-loaded,” with a number of pressures encouraging shorter turns, most importantly the fact that a current speaker is initially entitled to produce only a single TCU.

This analysis of turn-taking draws upon basic ideas about language structure. For instance, in their description of the turn-constructional component, the authors of Sacks et al. ( 1974 ) suggest that grammar plays a key role in determining what can count as a possible TCU—these are lexical items, phrases, clauses, sentences. Subsequent researchers have developed these ideas and have sought to determine the relative role of intonation, prosody, grammar, and pragmatics in shaping possible completion (see Ford, Fox, & Thompson, 1996 ). Other research has addressed the question of whether the turn-taking system described by Sacks et al. ( 1974 ) applies to English only or, rather, applies generally to all languages (see, e.g., Sidnell, 2001 ). Stivers et al. ( 2009 ) draws on a sample of 10 languages, showing that there was clear evidence in all of them for a general avoidance of overlapping talk and a minimization of silence between conversational turns. Focusing on transitions between Yes-No (or polar) questions and their responses, Stivers et al. provides evidence that in all the languages they compared the same factors account for the variation in speed of response. Answers were produced with significantly less delay than non-answer responses. Within the set of answers, those that were confirmations were delivered with less delay than those that were disconfirmations. When a response included a visible (nonverbal) component this was produced with less delay than those responses without. Finally, in 9 of the 10 languages studied, responses were delivered faster if the speaker was looking at the recipient while asking the question. This study then also provides strong evidence that turn-taking for conversation is organized in ways that are independent of the language being spoken.

A second important area of research within conversation analysis concerns the systematically organized set of practices of “repair” that participants use to address troubles of speaking, hearing, and understanding. Episodes of repair are composed of parts (Schegloff, 1997 ; Schegloff, Jefferson, & Sacks, 1977 ). A repair initiation marks a “possible disjunction with the immediately preceding talk,” while a repair outcome results either in a “solution or abandonment of the problem” (Schegloff, 2000 , p. 207). That problem, the particular segment of talk to which the repair is addressed, is termed the “trouble source” or “repairable.”

Repair can be initiated either by the speaker of the repairable item or by some other participant (e.g., the recipient). Likewise the repair itself can be done either by the speaker of the trouble source or someone else. In describing the organization of repair it is usual to use the term “self” for the speaker of the trouble source and “other” for any other participant. Thus we can identify cases of self-initiated, self-repair (see [4]), other-initiated, self-repair (see [5]) and self-initiated, other-repair, etc. In these examples, the arrow labeled (a) indicates the position of the repairable item or “trouble source,” the arrow labeled (b) indicates the position of the repair initiator, and the arrow labeled (c) indicates the position of the repair or correction.
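The two binary distinctions just described (who initiates repair, and who performs it) yield the four-way typology. A minimal sketch, with hypothetical function names of my own, that derives the label for a given episode:

```python
def classify_repair(trouble_speaker, initiator, repairer):
    """Label a repair episode using the self/other terminology:
    'self' is the speaker of the trouble source; 'other' is any other party."""
    def role(party):
        return "self" if party == trouble_speaker else "other"
    return f"{role(initiator)}-initiated, {role(repairer)}-repair"
```

On this scheme, a speaker who cuts off her own talk and corrects it produces self-initiated, self-repair, while a recipient's “Huh?” that prompts the original speaker to redo the turn yields other-initiated, self-repair.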

(4) XTR (1.2)

We can immediately see that the components of the repair episode (a, b, c) cluster in one turn in (4), whereas in (5) they are distributed across a sequence of three turns. In (4) we see that B initiates repair with a cut-off on “Fri:-” and then subsequently provides the repair by replacing what was presumably going to be “on Friday” with “on Sunday.” Several other observations are that the word to be replaced is framed by repeated material (“on”), and that the problem is pre-monitored by delay (“ah” in line 02). In (5) when Guy asks for the first time “Is Cliff dow:n by any chance?=do you know?” Jon responds not with an answer to the question (which it can be observed he knows) but rather with “↑Ha:h?” thereby indicating trouble with some aspect of what Guy has said and initiating repair of the prior turn. In response, Guy re-asks the question, hesitating slightly before substituting “Brown” for “Cliff” (a surname for a first name). At line 06 Jon answers the question, affirmatively, saying “Yeah he’s down.” (“down” here refers to being at the beach rather than in town).

When repair is initiated by a participant other than the speaker of the trouble source, this is typically done in the turn subsequent to that which contains the trouble source, by one of the available next-turn repair initiators (NTRIs). The various NTRIs “have a natural ordering, based on their relative strength or power on such parameters as their capacity to locate a repairable” (Schegloff et al., 1977 , p. 369). At one end of the scale, NTRIs such as what? and huh? indicate only that a recipient has detected some trouble in the previous turn; they do not locate any particular repairable component within that turn. Question words such as who, where, and when are more specific in that they indicate what part of speech is repairable (e.g., who —a person-referring noun phrase, etc.). The power of such question words to locate trouble in a previous turn is increased when appended to a partial repeat. Repair may also be initiated by a partial repeat without any question word.
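The “natural ordering” of NTRIs by their power to locate the repairable can be pictured as a simple ranking. The scale below is a toy illustration only: the labels and numeric scores are my own, and the relative placement of the bare partial repeat is an assumption, not something stated in Schegloff et al. (1977).

```python
# Rank NTRI formats by capacity to locate the trouble source (higher = stronger).
NTRI_STRENGTH = {
    "open class ('huh?', 'what?')": 1,      # signals trouble, locates nothing
    "question word ('who?', 'where?')": 2,  # locates a class of repairable
    "partial repeat": 3,                    # locates a segment (placement assumed)
    "question word + partial repeat": 4,    # locates the trouble most precisely
}

def stronger_ntri(a, b):
    """Return whichever NTRI format has greater locating power."""
    return a if NTRI_STRENGTH[a] >= NTRI_STRENGTH[b] else b
```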

Recent research has sought to describe the linguistic practices and resources used in initiating repair from a cross-linguistic, comparative perspective. Fox, Hayashi, and Jasperson ( 1996 ) notes differences between self-repair in English and Japanese and links these to the different “syntactic practices” of the two languages. The authors of Hayashi and Hayano ( 2013 ) describe a particular format used in Japanese conversation, which they term “proferring an insertable element” (PIE), in which a next speaker articulates a candidate understanding of the prior utterance, but does so with an item that is understood to be inserted into rather than appended onto the preceding turn. In a comparison of a diverse set of languages, Dingemanse, Blythe, and Dirksmeyer ( 2014 ) describes various formats for other to initiate repair, suggesting that, “different languages make available a wide but remarkably similar range of linguistic resources for this function,” noting that repair initiation formats are adapted to deal with different contingencies of trouble in interaction. Specifically, repair initiation formats respond to the problems of characterizing the trouble encountered, managing responsibility for the trouble and displaying their speaker’s understanding of the distribution of knowledge. Thus a form such as “huh?” indicates trouble but does not characterize it, includes no on-record position with respect to responsibility for the trouble, and also claims no knowledge of what has been said. In contrast, a repair initiation format such as “you mean the one around the corner?” locates (e.g., the expression “the coffee stand”) and characterizes (as a problem of reference or understanding) the trouble. Although such a format again includes no explicit indication of which participant is responsible for the trouble, it nevertheless suggests that the one initiating repair takes responsibility for finding a solution. 
And, finally, by displaying a candidate understanding of what has been said, it thereby shows that its speaker is knowledgeable in this respect (and has heard what was said).

Action in Interaction

A basic question addressed by research within linguistic pragmatics concerns how saying something can count as doing something. Much of the work in this area has drawn on the ideas of John Searle and others who have argued for a solution to the problem based on a theory of speech acts. While there are different versions of the theory, some common assumptions seem to be that actions are relatively discrete and can therefore be classified or categorized. Applied to interaction, the theory suggests that recipients listen for cues (or clues) that allow for the identification of whatever act the talk is meant to be doing (e.g., greeting, complaining, requesting, inviting). Moreover, the theory seems to presume a closed set or inventory of actions that are cued by a delimited range of linguistic devices. On this formulation, the basic problem to be accounted for by scholars of interaction is how participants are able to recognize so quickly what action is being done (see Levinson, 2012 ). As we have already seen, participants in interaction are able to respond to prior turns with no waiting, no gap, and so on (indeed they routinely respond in overlap). Operating with the standard assumptions of psycholinguistics (i.e., that speech recognition and language comprehension require “processing time,” that speech production requires “planning time,” and so on), this creates something of a mystery—how are participants able not only to parse the turn at talk into TCUs (and thereby anticipate points of possible completion), but also to recognize what action is being done in and through those TCUs, and somehow be prepared to respond to that action with little or no latency (indeed, in cases of overlapped response, with less than zero latency).

Sidnell and Enfield ( 2014 ) offers a critique of the underlying assumptions of speech act theory applied to action in interaction, describing it as a “binning” approach,

in which the central problem is taken to involve recipients of talk (or other participants) sorting the stream of interactional conduct into the appropriate categories or bins. . . . These accounts appear to involve a presumption about the psychological reality of action types that is somewhat akin to the psychological reality of phonemes. . . . That is, for the binning account to be correct, there must be an inventory of actions just as there is a set of phonemes in a language. Each token bit of conduct would be put into an appropriate pre-existing action-type category. The binning approach thus also suggests that it would be reasonable to ask how many actions there are. But we think that to ask how many actions there are is more like asking how many sentences there are.

An alternative account treats “action” as, always, a formulation or a construal of some configuration of practices in interaction. For the most part, formulations are not required to ensure the orderly flow of interaction. Participants respond on the fly and infer what a speaker is doing from a broad range of evidence. However, on occasion (such as in some cases of reported speech and in some cases of third position repair), a speaker formulates, using the vernacular metalinguistic terms available to her, the action that she or another participant is understood to have accomplished (e.g., “I requested that he get off the table!,” “I’m not asking you to come down, I’m just saying you’re welcome if you want,” etc.). And, of course, in various kinds of post hoc reporting contexts and in scholarly analysis, persons outside of an interaction routinely formulate the actions that were done within it. So an alternative to the binning or speech act account is one in which producing an “action” (in quotation marks to indicate that this is merely a heuristic use of the word) involves putting together, configuring, or orchestrating a range of distinct practices of conduct to allow for the inference that the speaker is doing “x” or “y” where “x” or “y” are possible formulations or descriptions.

It is often suggested by conversation analysts that there is no necessary “one-to-one mapping” between a given practice of speaking (e.g., “do you want me to come over and get her?”) and some specific action (such as “an offer”), and this is usually taken to imply a many-to-one relation running in both directions; that is, there are multiple practices available to accomplish any given action, and any given practice can, in context, be understood to accomplish a range of different actions (see, e.g., Schegloff, 1997 ; Sidnell, 2010 ). But, while this is no doubt true (insofar as the terms in which it formulates the problem are adequate, e.g., “context,” “an action,” etc.), matters are a good deal more complicated than this, because any determination of “what a speaker is doing” is an inference from a complex putting together of distinct practices of composition and positioning.

Levinson ( 2012 ), puzzled as to how recipients are seemingly able to determine what action is being done so early on in the production of a turn and somehow able to respond without delay, distinguishes two major types of information that can be gleaned from a turn-at-talk. On the one hand there is the “front-loaded” information of prosody (e.g., pitch reset), gaze, and turn-initial tokens (such as “oh,” “look,” “well,” and so on) that can potentially tip off the recipient as to what is being done. On the other hand there is the detailed linguistic information that is revealed only as the turn-at-talk unfolds. This includes much of the information available through grammatical formatting (e.g., morphological cues, syntactic inversion, imperative forms, etc.), as well as through richly informative linguistic formulations (e.g., “the deal,” “my boss,” “stupid trial thing,” etc.). While Levinson thus recognizes that the passage from a turn-at-talk to “action” involves a recipient putting together various strands of evidence, he argues that the solution must involve a delimited inventory of actions, recognition of which these practices, singly or in combination, are able to trigger. Alternatively, Sidnell and Enfield ( 2014 ) argue that a model involving inference from a complex set of features implies an inevitable degree of indeterminacy in action ascription, which is always merely an inference from evidence. For the most part, participants in interaction get along just fine: such inference-based action ascriptions are good enough for all practical purposes and, because no formulation is typically required, problems typically do not arise.

It is well established in CA that one can look to subsequent turns in order to ground an analysis of previous ones—this is called the “next turn proof procedure” (Sacks et al., 1974 ). In the analysis of single cases we can ground our analysis of some turn as, for instance, an “accusation” by looking to see how the recipient responds to it (e.g., with an excuse or justification). Sacks et al. ( 1974 ) proposes along these lines that:

while understandings of other turns’ talk are displayed to co-participants, they are available as well to professional analysts, who are thereby afforded a proof criterion . . . for the analysis of what a turn’s talk is occupied with. Since it is the parties’ understandings of prior turns’ talk that is relevant to their construction of next turns, it is their understandings that are wanted for analysis. The display of those understandings in the talk of subsequent turns affords both a resource for the analysis of prior turns and a proof procedure for professional analyses of prior turns—resources intrinsic to the data themselves.

This “data-internal evidence” is used, for instance, to ground the claim that when Debbie says “what is the deal” in line 15 of example (6), which comes from the opening of a telephone call, she is not simply asking a question but is, in doing so, accusing Shelley of wrong-doing:

(6) Debbie and Shelley

“What is the deal” is hearable as an accusation, as conveying that Shelley has done or is otherwise responsible for something that Debbie is unhappy about. What aspects of the talk convey that? First, the positioning of the question, pre-empting “how are you” type inquiries, provides for a hearing of this as “abrupt” and in some sense interruptive of the usual niceties with which a call’s opening is typically occupied (e.g., “how are you?”). Second, by posing a question that requires Shelley to figure out what is meant by “the deal,” Debbie thereby suggests that Shelley should already know what she is talking about and thus that there is something in the “common ground,” something to which both Debbie and Shelley are already attending (have “on their minds”). Third, by selecting the idiom “the deal” Debbie reveals her stance toward what she is talking about as “a problem” or as something that she is not happy about. Fourth, through the prosody, including the stress on “is” so that it is not contracted, the emphasis on “dea::l.”, and the apparent pitch reset with which the turn begins, Debbie conveys heightened emotional involvement. Putting all this together we can hear in what Debbie says here something other than a simple request for information—this is an accusation. It seems clear that Debbie is upset and the implication is that Shelley is responsible for this. But how can we ground the analysis of the turn in question in the displayed orientations of the participants themselves? To do this we look to Shelley’s response.

That Shelley hears in this more than a simple question is evidenced first by her plea of innocence with “whadayou ↑mean.” and secondly by her excuse. All other-initiations of repair indicate that the speaker has encountered a trouble of hearing or understanding in the previous turn. Among these “What do you mean” appears specifically adapted to indicate a problem of understanding based on presuppositions about common ground (Hayashi, Raymond, & Sidnell, 2013 ). Here “what do you mean?,” which is produced with a noticeably higher pitch, suggests Shelley does not understand what Debbie means by the clearly allusive, in-the-know expression, “the deal.” More narrowly, it conveys that the expression “what is the deal” has asked Shelley to search for a possible problem that she is perhaps responsible for, and that no such problem can be identified. It is thus hearable as claiming “innocence.”

When Debbie redoes the question, in response to the initiation of repair by “What do you mean,” she does it with a yes-no (polar) question that strongly suggests she already knows the answer. “You’re not going to go” is what Pomerantz ( 1988 ) calls a candidate answer question that presents, in a declarative format, to Shelley what Debbie suspects is the answer, and requests confirmation of this. This then reveals the problem that Debbie had in mind and meant to refer to by “the deal.” And when Shelley responds to the repaired question she does so with what is recognizable as an excuse. This is a “type non-conforming” response (i.e., one that contains no “yes” or “no” token; see Raymond, 2003 ), in which Shelley pushes the responsibility for not going (which is implied, not stated) onto “her boss” (invoking the undeniable obligations of work in the district attorney’s office), and suggesting that the obstacle here is an inconvenience for her (as well as for Debbie) by characterizing the impediment to her participation as a “stupid trial-thing.”

As the quote from Sacks et al. ( 1974 ) makes clear, and as the foregoing discussion is meant to explicate, the most important data-internal evidence we have comes in subsequent talk. In the case we have considered, subsequent talk reveals how Shelley herself understood the talk that had been addressed to her, because this understanding is embodied in the way she responds.

It is important to clarify what exactly is being claimed. Subsequent talk, and data-internal evidence, allow us to ground the analysis of this question—“What is the deal”—as projecting an accusation of Shelley by Debbie. It does not, however, tell us what specific features of the talk cue, convey, or carry that complaint/accusation. As the pioneers of conversation analysis demonstrated, in order to address that question, the question as to which specific features or practices provide for an understanding of what a given turn is doing, we need to look across different cases. We need to isolate these practices in order to discern their generic, context-free, cohort-independent character. So case-by-case analysis (single case analysis using data-internal evidence) inevitably leaves us with a question—specifically, what particular aspects of a turn convey (allow for an inference as to) what the speaker is doing (i.e., what action is being done)? What are the particular practices of speaking that result in that consequence? What are the generic features of the practice that are independent of this particular context, situation, group of participants, etc.?

In order to attempt an answer to these questions we have to move beyond the analysis of a single case to look at multiple instances. However, and this is the key point in the context of the present discussion, when we do this we inevitably find that each practice that is put together with others in some particular instance (to effect some particular action outcome) can be used in other ways, combined with other practices, to result in other outcomes . We can take any particular “practice” from the Debbie and Shelley case and work out from there. We can look for questions that, like Debbie’s “What is the deal?,” occur in this position, pre-empting what normatively happens in the opening turns of a telephone call. If we do this we find that some are like this one and seem to deliver or imply an accusation, but others do not. We can look at other cases in which a speaker refers to something as “the deal” or asks “what is the deal” and again find some cases in which an accusation is inferred but others in which it is not. And we can find other instances in which similar prosody is used in the formation of a question or instances in which a question is delivered with an initial pitch reset. The result is always the same: no single feature is associated with some particular outcome. The conclusion we must then draw is that “action” is an inference from a diverse set of pieces of evidence that a speaker puts together or orchestrates within a single TCU or utterance (see also Robinson, 2007 ).

Action Sequencing

As we have already seen, in conversation, actions are organized into sequences. The most basic form such sequences can take is a set of two paired actions, a first and a second, known as an adjacency pair . For instance, production of a question establishes a next position within which an answer is relevant and expected next. In order to capture this aspect of organization, Schegloff ( 1968 , p. 1083) introduced the concept of conditional relevance :

By the conditional relevance of one item on another we mean: given the first, the second is expectable; upon its occurrence it can be seen to be a second item to the first; upon its nonoccurrence it can be seen to be officially absent—all this provided by the occurrence of the first item.

Although questions are not always followed by answers, the conditional relevance that a question activates ensures that participants will inspect any talk that responds to a question to see if and how it might be an answer, or might account for why an answer is not being produced. In response to questions the most common account for not answering is “I don’t know.” So, in the following, when Guy asks Jon if a mutual acquaintance might like to go golfing with them, Jon replies with “I don’t know,” and follows up by suggesting that he “go by and see,” thereby indicating a willingness to obtain the information that has been requested.

(7) NB 1.1 1:05

So even where second speakers do not (for whatever reason) actually produce the second pair part that is called for, they typically exhibit some orientation to its relevance and often account for its non-occurrence and even, in some cases, apologize for an inability to deliver it. The same example also provides evidence that questioners orient to the conditional relevance exerted by a sequence-initiating action such as Guy’s “Think he’d like to go?” in line 07. Thus, when Jon does not answer the question posed, Guy reissues it at line 12, thereby pursuing a response.

Schegloff and Sacks (1973) identify four defining characteristics of the adjacency pair. It is composed of two utterances that are:

Produced by different speakers.

Ordered as a first pair part (FPP) and second pair part (SPP).

“Typed,” so that a particular first pair part provides for the relevance of a particular second pair part (or some delimited range of seconds, e.g., a complaint can be relevantly responded to by a remedy, an excuse, a justification, a denial, and so on).

Adjacency pairs are sequences composed of only two turns—a first and second pair part. But talk-in-interaction and conversation in particular is not composed solely of paired actions, produced one after the other. Rather, an adjacency pair may be expanded so as to result in a much more complex sequence. An adjacency pair can be expanded prior to the occurrence of its first part, after the occurrence of its first part but before the occurrence of its second, or after its second pair part. These expansions are themselves often built out of paired actions and can themselves serve as the bases upon which further expansion takes place.

Pre-expansions involve an expansion of a base adjacency pair prior to the occurrence of the first pair part and are preparatory to the action the base first pair part is meant to accomplish. So, for instance, a pre-invitation “hey, are you busy tonight?” checks on the availability of the recipient. A pre-request such as “You wouldn’t happen to be going my way would you?” checks on the degree of inconvenience a projected request is likely to impose, and so on. Such pre-expansions check on a condition for the successful accomplishment of the base first pair part. Consider the following phone call excerpt:

(8) HS:STI,1

Judy’s “why” at line 07 displays an orientation to the preceding turn as something more than an information-seeking question and John’s answer at lines 8–11 confirms this inference.

As just noted, an adjacency pair consists of two adjacent utterances, with the second selected from some range of possibilities defined by the first. However, on some occasions, the two utterances of an adjacency pair are not, in fact, adjacent. In some cases this is because another sequence has been inserted between the first and second pair part of an adjacency pair. Such insert expansions can be divided into post-firsts and pre-seconds (Schegloff, 2007 ) according to the kind of interactional relevancy they address.

Post-expansions are highly variable with respect to their complexity. Schegloff ( 2007 ) suggests that they can be divided into minimal and non-minimal types. Minimal post-expansions consist of one turn. “Oh,” for instance, can occur after the response to a question, thereby registering that the questioner has been informed by that response and minimally expanding the sequence with a single turn of post-expansion. Other forms of post-expansion are more elaborate and addressed to a range of interactional contingencies.

This brief overview of conversation analysis has discussed four domains of organization: turn-taking, repair, action formation, and action sequencing. Research in each of these four domains has consequences for our understanding of language and language structure (see Couper-Kuhlen & Selting, in press ; Thompson, Fox, & Couper-Kuhlen, 2015 ). While work to date has drawn connections primarily between linguistics and turn-taking and repair, there are obvious ways in which the work on action and action sequencing bears on the concerns of linguistics. For instance, work on action formation intersects with research within linguistics on mood and with the analysis of speech acts. Work on action sequencing bears on problems of anaphora resolution and inter-sentential grammatical relations. In order to fully explore these and other themes, we will likely require a robustly cross-linguistic, comparative, and interdisciplinary program of research.

Further Reading

  • Couper-Kuhlen, E. , & Selting, M. (in press). Interactional linguistics: Studying language in social interaction . Cambridge, U.K.: Cambridge University Press.
  • Ford, C. E. , Fox, B. A. , & Thompson, S. A. (1996). Practices in the construction of turns: The TCU revisited. Pragmatics , 6 (3), 427–454.
  • Hayashi, M. , Raymond, G. , & Sidnell, J. (2013). Conversational repair and human understanding: An introduction. In M. Hayashi , G. Raymond , & J. Sidnell (Eds.), Conversational repair and human understanding (pp. 1–40). Cambridge, U.K.: Cambridge University Press.
  • Levinson, S. C. (2012). Action formation and ascription . In J. Sidnell & T. Stivers (Eds.), The handbook of conversation analysis (pp. 103–130). Malden, MA: Wiley-Blackwell.
  • Raymond, G. (2003). Grammar and social organization: Yes/no interrogatives and the structure of responding. American Sociological Review, 68 , 939–967.
  • Sacks, H. , Schegloff, E. A. , & Jefferson, G. (1974). A simplest systematics for the organization of turn-taking for conversation. Language , 50 (4), 696–735.
  • Schegloff, E. A. (1968). Sequencing in conversational openings. American Anthropologist , 70 (6), 1075–1095.
  • Schegloff, E. A. (1997). Practices and actions: Boundary cases of other-initiated repair. Discourse Processes , 23 (3), 499–545.
  • Schegloff, E. A. (2007). Sequence organization in interaction: A primer in conversation analysis . Cambridge, U.K.: Cambridge University Press.
  • Schegloff, E. A. , Jefferson, G. , & Sacks, H. (1977). The preference for self-correction in the organization of repair in conversation. Language, 53 (2), 361–382.
  • Schegloff, E. A. , & Sacks, H. (1973). Opening up closings. Semiotica , 8 , 289–327.
  • Sidnell, J. (2010). Conversation analysis: An introduction . Oxford: Wiley-Blackwell.
  • Sidnell, J. , & Enfield, N. J. (2014). The ontology of action in interaction. In N. J. Enfield , P. Kockelman , & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 423–446). Cambridge, U.K.: Cambridge University Press.
  • Stivers, T. , Enfield, N. J. , Brown, P. , Englert, C. , Hayashi, M. , Heinemann, T. , et al. (2009). Universals and cultural variation in turn-taking in conversation. Proceedings of the National Academy of Sciences, 106 (26), 10587–10592.
  • Thompson, S. , Fox, B. , & Couper-Kuhlen, E. (2015). Grammar in everyday talk: Building responsive actions . Cambridge, U.K.: Cambridge University Press.

References

  • Couper-Kuhlen, E. , & Selting, M. (in press). Interactional linguistics: Studying language in social interaction . Cambridge, U.K.: Cambridge University Press.
  • Dingemanse, M. , Blythe, J. , & Dirksmeyer, T. (2014). Formats for other-initiation of repair across languages: An exercise in pragmatic typology. Studies in Language , 38 , 5–43.
  • Fox, B. A. , Hayashi, M. , & Jasperson, R. (1996). Resources and repair: A cross-linguistic study of syntax and repair. In E. Ochs , E. A. Schegloff & S. A. Thompson (Eds.), Interaction and grammar (pp. 185–237). Cambridge, U.K.: Cambridge University Press.
  • Goffman, E. (1957). Alienation from interaction. Human Relations , 10 , 47–60.
  • Goffman, E. (1964). The neglected situation. American Anthropologist , 66 (6, Pt. 2), 133–136.
  • Goodwin, C. (1979). Review of Starkey Duncan Jr. and Donald W. Fiske, Face-to-face interaction: Research methods and theories . Language in Society , 8 (3), 439–444.
  • Hayashi, M. , & Hayano, K. (2013). Proffering insertable elements: A study of other-initiated repair in Japanese. In M. Hayashi , G. Raymond , & J. Sidnell (Eds.), Conversational repair and human understanding (pp. 293–321). Cambridge, U.K.: Cambridge University Press.
  • Hayashi, M. , Raymond, G. , & Sidnell, J. (2013). Conversational repair and human understanding: An introduction. In M. Hayashi , G. Raymond , & J. Sidnell (Eds.), Conversational repair and human understanding (pp. 1–40). Cambridge, U.K.: Cambridge University Press.
  • Heritage, J. (1984). A change of state token and aspects of its sequential placement. In J. M. Atkinson & J. Heritage (Eds.), Structures of social action: Studies in conversation analysis (pp. 299–345). Cambridge, U.K.: Cambridge University Press.
  • Joseph, B. D. (2003). The Editor’s Department: Reviewing our contents. Language , 79 (3), 461–463.
  • Levinson, S. C. (2013). Action formation and ascription . In J. Sidnell & T. Stivers (Eds.), The handbook of conversation analysis (pp. 103–130). Malden, MA: Wiley-Blackwell.
  • Pomerantz, A. M. (1988). Offering a candidate answer: An information seeking strategy. Communication Monographs, 55 (4), 360–373.
  • Raymond, G. (2003). Grammar and social organization: Yes/no interrogatives and the structure of responding. American Sociological Review , 68 , 939–967.
  • Robinson, J. (2007). The role of numbers and statistics within conversation analysis. Communication Methods and Measures , 1 (1), 65–75.
  • Schegloff, E. A. (1992). To Searle on conversation: A note in return. In H. Parret & J. Verschueren (Eds.), (On) Searle on conversation (pp. 113–128). Amsterdam: John Benjamins.
  • Schegloff, E. A. (2000). When “others” initiate repair. Applied Linguistics , 21 (2), 205–243.
  • Schegloff, E. A. , Jefferson, G. , & Sacks, H. (1977). The preference for self-correction in the organization of repair in conversation. Language , 53 (2), 361–382.
  • Sidnell, J. (2001). Conversational turn-taking in a Caribbean English Creole. Journal of Pragmatics , 33 (8), 1263–1290.
  • Sidnell, J. (2010). Conversation analysis: An introduction . Oxford: Wiley-Blackwell.
  • Sidnell, J. , & Enfield, N. J. (2014). The ontology of action in interaction. In N. J. Enfield , P. Kockelman , & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 423–446). Cambridge, U.K.: Cambridge University Press.
  • Stivers, T. , Enfield, N. J. , Brown, P. , Englert, C. , Hayashi, M. , Heinemann, T. , et al. (2009). Universals and cultural variation in turn-taking in conversation. Proceedings of the National Academy of Sciences , 106 (26), 10587–10592.
  • Thompson, S. , Fox, B. , & Couper-Kuhlen, E. (2015). Grammar in everyday talk: Building responsive actions . Cambridge, U.K.: Cambridge University Press.

Related Articles

  • Acquisition of Pragmatics
  • Conversational Implicature
  • The Language of the Economy and Business in the Romance Languages

Printed from Oxford Research Encyclopedias, Linguistics. Under the terms of the licence agreement, an individual user may print out a single article for personal use (for details see Privacy Policy and Legal Notice).

date: 17 April 2024

Part of Speech

Interjection: Definition and Examples

The interjection is a part of speech that is used more commonly in informal language than in formal writing or speech. Basically, the function of interjections is to express emotions or sudden bursts of feeling. They can express a wide variety of emotions, such as excitement, joy, surprise, or disgust.

What are the Structures and Importance of Interjections?

Interjections can come in the form of a single word, a phrase, or even a short clause. Aside from that, they are usually (but not always) placed at the beginning of a sentence. The importance of interjections lies in the fact that they can convey feelings that may sometimes be neglected in the sentence.

Take, for example, the sentence “That book is about vampires.”

One person can write it as:

  • That book is about vampires.

But another person might use an interjection to convey a feeling of disgust:

  • Eww! That book is about vampires.

As you can see from the sentence above, the word “eww” conveys the emotional response to what is said in the sentence. Interjections can act as a replacement for emoticons and are more appropriate to use in writing, especially in character dialogue.

What are the Different Kinds of Interjections?

Below are the different kinds of interjections:

  • Adjectives that are used as interjections.
  • Nice! You got a Monster Kill in your first game!
  • Sweet! I got a PS4 for my birthday!
  • Good! Now we can move on to the next lesson.

The italicized words in the sample sentences above are just some of the adjectives that can be used as interjections.

  • Nouns or noun phrases that are used as interjections.
  • Congratulations, you won the match.
  • Hello! How are you?
  • Holy cow! I forgot my keys!

The italicized parts of the sentences above are just some of the nouns that can be used as interjections.

  • Short clauses that are used as interjections.
  • Shawie is our chemistry teacher. Oh, the horror!

The short clause that is italicized in the example above functions as an interjection.

  • Some interjections are sounds.
  • Ugh! I’m never doing that again!
  • Whew! That was really close!
  • Uh-oh! Dude, I think we’re in serious trouble.

How do You Punctuate Interjections?

Since interjections convey different kinds of emotions, there are also different ways to punctuate them.

  • Exclamation point

The exclamation point is the most commonly used punctuation mark for interjections. It is used to communicate strong emotions such as surprise, excitement, or anger.

  • I just replaced your sugar with salt. Bazinga!
  • Hooray! I got the job!
  • Hey! Stop messing with me!
  • Ouch! That must’ve hurt really bad!
  • Oh! They’re here!
  • Boo-yah! This is the bomb!
  • Are you still going to eat that? Yuck!
  • Yahoo! I got my Christmas bonus!
  • Eek! There’s a flying cockroach!
  • Period or comma

For weaker emotions, a period or a comma will suffice.

  • What’s the answer to number 24?
  • Meh, who cares?
  • Ah, that feels great!
  • Oh well, what’s done is done.
  • Well, what did your mom say?
  • Um… I don’t think so.
  • Hmm, your house always smells like freshly brewed coffee.
  • Question mark

If you intend to use interjections to express uncertainty or disbelief, it is more appropriate to use a question mark.

  • Huh? What did you just say?
  • What? You still haven’t submitted your project?
  • Oh, really? I never thought he was that kind of guy.

Final Thoughts

Although interjections may seem trivial, this part of speech is important because emotions can be difficult to express in written language. Emoticons may not be appropriate or possible in certain contexts, so interjections are often a more viable option. Keep in mind the information provided in this article, especially the use of punctuation marks to convey intensity, and you will be able to use this part of speech effectively in your own writing.

Gesture’s role in speaking, learning, and creating language

When speakers talk, they gesture. The goal of this chapter is to understand the contribution that these gestures make to how we communicate and think. Gesture can play a role in communication and thought at many timespans. We explore, in turn, gesture’s contribution to how language is produced and understood in the moment; its contribution to how we learn language and other cognitive skills; and its contribution to how language is created over generations, over childhood, and on-the-spot. We find that the gestures speakers produce when they talk are integral to communication and can be harnessed in a number of ways. (1) Gesture reflects speakers’ thoughts, often their unspoken thoughts, and thus can serve as a window onto cognition. Encouraging speakers to gesture can thus provide another route for teachers, clinicians, interviewers, etc., to better understand their communication partners. (2) Gesture can change speakers’ thoughts. Encouraging gesture thus has the potential to change how students, patients, witnesses, etc., think about a problem and, as a result, alter the course of learning, therapy, or an interchange. (3) Gesture provides building blocks that can be used to construct a language. By watching how children and adults who do not already have a language put those blocks together, we can observe the process of language creation first hand. Our hands are with us at all times and thus provide researchers and learners with an ever-present tool for understanding how we talk and think.

1. Why study gesture?

The goal of this chapter is to explore the role that our hands play in communication and cognition. We focus on the hands for a number of reasons. First, hand movements during talk––better known as gestures ––are ubiquitous. Speakers in all cultures gesture when they talk, and the topics that elicit gesture can be as simple as a child’s board game ( Evans & Rubin 1979 ) or as complex as kinship relations ( Enfield 2005 ). Even congenitally blind individuals, who have never seen anyone gesture, move their hands when they talk ( Iverson & Goldin-Meadow 1998 ), highlighting the robustness of gesture in communication.

Equally important, the gestures speakers produce when they talk do not go unnoticed by their listeners. For example, an interviewee is just as likely to be led astray by the interviewer’s misleading gestures as by his misleading words. Asking the listener an open-ended question, “what else was the man wearing?” accompanied by a hat gesture (moving the hand as though donning a hat), elicits just as many hat responses as the pointed question, “what color was the hat that the man was wearing?”––in both cases, the man was not wearing a hat ( Broaders & Goldin-Meadow 2010 ). Gesture is part of our conversations and, as such, requires our research attention.

Gesture plays a role in communication at a variety of timespans––in speaking at the moment, in learning language over developmental time, and in creating language over shorter and longer periods of time. We use this structure in organizing our chapter. We begin by exploring gesture’s role in how language is processed in the moment––how it is produced and how it is understood. We then explore the role that gesture plays over development, initially in learning language and later, once language has been mastered, in learning other concepts and skills. Finally, we explore the role that gesture plays in creating language over generations (in deaf individuals who share a communication system and transmit that system to the next generation), over developmental time (in deaf children who do not have access to a usable model for language, spoken or signed), and on-the-spot (in adults who are asked to communicate without using speech).

Having shown that gesture is an integral part of communication, we end with a discussion of how gesture can be put to good use––how it can be harnessed for diagnosis and intervention in the clinic and for assessment and instruction in the classroom.

2. Gesture’s role in language processing

2.1. gesture production and its role in producing language.

The gestures that speakers produce along with their speech may actually help them to produce that speech. In this section, we consider a number of accounts of this process.

Speakers’ gestures convey meaning but, importantly, they do so using a different representational format from speech. Gesture conveys meaning globally, relying on visual and mimetic imagery, whereas speech conveys meaning discretely, relying on codified words and grammatical devices ( McNeill 1992 ). According to McNeill’s (1992 , 2005 , McNeill & Duncan 2000 ) Growth Point theory, the internal “core” or growth point of an utterance contains both the global-synthetic image carried by gesture and the linear-segmented hierarchical linguistic structure carried by speech. Moreover, the visuo-spatial and linguistic aspects of an utterance cannot be separated—gesture and speech form a single integrated system.

Building on these ideas, the Information Packaging Hypothesis ( Kita 2000 ) holds that producing gestures helps speakers organize and package visuo-spatial information into units that are compatible with the linear, sequential format of speech. The visuo-spatial representations that underlie gestures offer possibilities for organizing information that differ from the more analytic representations that underlie speech. When describing complex spatial information (such as a set of actions or an array of objects), there are many possible ways in which the information can be broken down into units and sequenced. According to the Information Packaging Hypothesis, gestures, which are individual actions in space, help speakers to select and organize the visuo-spatial information into units that are appropriate for verbalization. For example, in describing the layout of furniture in a room, a speaker might produce a gesture in which her two hands represent a couch and a chair as they are positioned in the room, and this might help in formulating the utterance, “The couch and the chair are facing one another”.

The most straightforward way to test the Information Packaging Hypothesis would be to manipulate gesture and observe the impact of that manipulation on how speech is packaged. At the moment, the evidence for the theory is more indirect––studies have manipulated the demands of packaging visuo-spatial information and shown that this manipulation has an effect on gesture production. In tasks where it is more challenging to package information into linguistic form, speakers produce more gestures, even when other factors are controlled. For example, Hostetter, Alibali and Kita (2007) asked participants to describe arrays of dots in terms of the geometric shapes that connected those dots (e.g., “The top 3 dots form a triangle, and the base of that triangle is the top of a square with dots at each corner”). For some participants, the shapes were drawn in the dot arrays, so that packaging the information into units was easy; for other participants, the shapes were not provided, so participants had to decide on their own how to group the dots into shapes. In the second case, packaging the information into units for speaking was more challenging. As predicted by the Information Packaging hypothesis, participants in this latter group produced more gestures when describing the arrays.

Whether or not we gesture is also influenced by the ease with which we can access words, as proposed in Krauss’ (1998 , Krauss et al 2000 ) Lexical Gesture Process Model . According to this theory, gestures cross-modally prime lexical items, increasing their activation and making them easier to access. For example, if a speaker produces a circular gesture as he starts to say, “The ball rolled down the hill”, the gesture will increase activation of the lexical item “roll”, making it easier for the speaker to access that word. As evidence, when lexical access is made more difficult, speakers gesture at higher rates ( Chawla & Krauss 1994 , Morsella & Krauss 2004 ). Conversely, when gesture is prohibited, speakers become more dysfluent ( Rauscher et al 1996 ).

The Interface Model proposed by Kita and Özyürek (2003) extends these theories, arguing that gestures are planned by an action generator, verbal utterances by a message generator. According to this view, although speech and gesture are generated by separate systems, those systems communicate bi-directionally and interact as utterances are conceptualized and formulated. Gestures are thus shaped by the linguistic possibilities and constraints provided by the language they accompany. Evidence for this view comes from cross-linguistic findings showing that the gestures speakers produce are shaped by the syntactic structures that underlie their language. For example, in English, the manner and path of a motion event are expressed in the same clause ( run down ), with manner in the verb and path in a satellite to the verb, as in “The child runs ( manner ) down ( path ) the street.” In contrast, in Turkish, manner and path are expressed in separate clauses ( run and descend ), with path in one verb and manner in another, as in “Cocuk kosarak tepeden asagi indi” = child as running ( manner ) descended ( path ) the hill. When English speakers produce gestures for manner and path, they conflate the two into a single gesture (an inverted-V with wiggling fingers produced while moving the hand in a downward trajectory = run+down), paralleling the single-clause structure of their speech. Turkish speakers, in contrast, produce separate gestures for manner and path (a palm moved downward = down, followed by an inverted-V with wiggling fingers in place = run), paralleling the two-clause structure of their speech ( Özyürek et al 2008 ). The particular gestures we produce are shaped by the words we speak.

An alternative view of the mechanism underlying gesture production is the Gesture as Simulated Action framework ( Hostetter & Alibali 2008 , 2010 ), which holds that speakers naturally activate simulations of actions and perceptual states when they produce speech. These simulations activate areas of motor and premotor cortex responsible for producing movements. If the level of motor activation exceeds a pre-set threshold (which is influenced by individual, social, and contextual factors), then the speaker produces overt motor movements, which we recognize as gestures. For example, according to this view, in speaking about a child running down a hill, a speaker forms a mental simulation of the scene that includes action and perceptual components. This simulation will activate corresponding motor and premotor areas, and if activation in those areas exceeds the speaker’s gesture threshold, the speaker will produce a gesture. In support of this view, a number of studies have found that gesture rates increase when action and perceptual simulations are activated ( Hostetter & Alibali 2010 , Sassenberg & Van Der Meer 2010 ). Within this framework, linguistic factors may also influence the form of the gestures, as long as they influence the nature of speakers’ simulations. For example, if linguistic factors affect the way the speaker simulates a child running down a hill, they will also shape the form of the gestures that the speaker uses to describe that event because gesture and speech are expressions of the same simulation. Thus, according to the Gesture as Simulated Action framework, speaking involves simulations of perception and action, and gestures arise as a natural consequence of these simulations.

2.2. Gesture comprehension and its role in understanding language

Although some argue that gesture plays little role in language comprehension ( Krauss et al 1996 , Krauss et al 1995 ), there is a great deal of evidence that gesture can have an impact on language comprehension. Consider a speaker who says, “The man was wearing a hat,” while moving her hand as though grasping the bill of a baseball cap. This gesture could help listeners understand that the man was wearing a hat, and it might even encourage them to infer that the hat was a baseball cap. Both observational and experimental studies support these claims.

A recent quantitative meta-analysis that included 63 separate samples found that gestures foster comprehension in listeners ( Hostetter 2011 ). The overall effect size was moderate, and the size of the beneficial effect depended on several factors, including the topic of the gestures, their semantic overlap with speech, and the age of the listeners. Across studies, gestures about topics involving movement (e.g., how to make pottery, Sueyoshi & Hardison 2005 ) yielded greater benefits for listeners’ comprehension than gestures about abstract topics (e.g., the taste of tea, Krauss et al 1995 ). In addition, gestures that conveyed task-relevant information not expressed in speech (e.g., a gesture depicting width while saying “this cup is bigger ”) played a greater role in comprehension than gestures that conveyed information that was also expressed in speech (e.g., a gesture depicting width while saying “this cup is wider ”). Finally, children showed greater benefits from gesture than older listeners.

In this section, we review two types of evidence arguing that gesture has an effect on language comprehension: (1) evidence that speakers’ gestures affect listeners’ comprehension of speech, and (2) evidence that speakers’ gestures communicate information that is not expressed in speech. We conclude by considering whether there is evidence that speakers intend their gestures to be communicative.

2.2.1. Do speakers’ gestures affect listeners’ comprehension of speech?

Under ordinary circumstances, listeners comprehend speech with ease. However, if speech is difficult to comprehend because it is unclear, ambiguous, or demanding relative to the listener’s skills, gesture can provide a second channel that makes successful comprehension more likely.

Many studies have investigated whether gestures influence listeners’ comprehension of speech. These include studies using video clips as stimuli (e.g., Kelly & Church 1997) and studies in which listeners view or participate in “live” interactions (e.g., Goldin-Meadow et al 1999, Goldin-Meadow & Sandhofer 1999, Holler et al 2009). Across studies, researchers have used a variety of outcome measures to evaluate comprehension. In some studies, participants are asked to answer questions about the speech they heard (e.g., Kelly & Church 1998); in others, they are asked to restate or reiterate that speech (e.g., Alibali et al 1997). In still other studies, participants’ spontaneous “uptake” of information from others’ speech is assessed, either in their next speaking turn (Goldin-Meadow et al 1999) or in their behavioral responses (McNeil et al 2000).

Across studies, there is strong evidence that gestures affect listeners’ comprehension of speech. When gestures express information that is redundant with speech, they contribute to successful comprehension ( Goldin-Meadow et al 1999 , McNeil et al 2000 ). When gestures express information that is not expressed in speech, they can detract from listeners’ direct uptake of the information in speech (e.g., Goldin-Meadow & Sandhofer 1999 ), but they often communicate important information in their own right, an issue we address in the next section.

2.2.2 Does gesture communicate information on its own?

When gesture conveys the same information as speech, it appears to help listeners pick up that information. But what happens when gesture conveys different information from speech? In the earlier hypothetical example in which the speaker said, “The man was wearing a hat,” while moving her hand as if grasping the bill of a baseball cap, the speaker expressed information about the type of hat (a baseball cap—not a cowboy hat, a stocking cap, or a sombrero) uniquely in gesture. Do listeners detect information that speakers express uniquely in gesture? They do. For example, Kelly and Church (1998) presented video clips of children explaining their judgments of Piagetian conservation tasks, and asked participants to respond to yes/no questions about the reasoning that the children expressed. A child in one video clip mentioned the height of a container in speech, but indicated the width of the container in gesture. When probed, observers often credited this child with reasoning about both the height and the width of the container. Other studies have also shown that listeners often incorporate the information conveyed uniquely in gesture into their own speech (Goldin-Meadow et al 1992, McNeill et al 1994). Thus, observers “credit” speakers with saying things that they express uniquely in gesture.

2.2.3 Are gestures intended to be communicative?

It is clear that gestures contribute to listeners’ comprehension. But do speakers intend for their gestures to communicate, or are gestures’ communicative effects merely an epiphenomenon of the gestures speakers produce in the course of speech production?

Several lines of evidence suggest that speakers do intend at least some of their gestures to be communicative. First, speakers gesture more when their listeners can see those gestures than when visibility between speaker and listener is blocked (Alibali et al 2001, Mol et al 2011). Second, when speakers repeat a message to different listeners, their gesture rates do not decline as they might if gestures were produced solely to aid with speech production (Jacobs & Garnham 2007). Third, when speakers are explicitly asked to communicate specific information to their listeners, they sometimes express some of that information uniquely in gesture, and not in speech. For example, Melinger and Levelt (2004) explicitly directed speakers to communicate specific spatial information about a task to their addressees. Speakers frequently expressed this requested information in gesture and not in speech, suggesting that at least these gestures were intended to be communicative.

To summarize thus far, gesture plays a role in both language production and comprehension. One area that has received very little attention is individual differences (Bergmann & Kopp 2010, Hostetter & Alibali 2007)––are there differences in the rate at which people gesture when they speak, or in the extent to which people rely on gesture when they listen to the speech of others? We know little about what accounts for individual differences in gesture, or even how consistent those differences are across tasks and conversational partners. This area of gesture studies is ripe for future research.

3. Gesture’s role in language learning and beyond

Mature speakers of a language routinely use gesture when they talk, but so do young children just learning to talk. In fact, most children use gesture prior to speaking, and these gestures not only precede linguistic progress, but they also play a role in bringing that progress about.

3.1. Gesture’s role in the early stages of language learning

3.1.1. Gesture precedes and predicts changes in language

Children typically begin to gesture between 8 and 12 months ( Bates 1976 , Bates et al 1979 ). They first use deictic gestures, whose meaning is given entirely by context and not by their form. For example, a child can hold up or point at an object to draw an adult’s attention to it months before the child produces her first word ( Iverson & Goldin-Meadow 2005 ). Pointing gestures function like context-sensitive pronouns (“this” or “that”) in that an adult has to follow the gesture’s trajectory to its target in order to figure out which object the child is indicating. In addition to deictic gestures, children produce conventional gestures common to their cultures ( Guidetti 2002 ). For example, in the United States, children may produce a side-to-side headshake to mean “no” or a finger held over the lips to mean “shush”. Children also produce iconic gestures, although initially the number tends to be quite small and varies across children ( Acredolo & Goodwyn 1988 ). For example, a child might open and close her mouth to represent a fish, or flap her hands at her sides to represent a bird ( Iverson et al 1994 ). Unlike pointing gestures, the form of an iconic gesture captures aspects of its intended referent––its meaning is consequently less dependent on context. These gestures therefore have the potential to function like words and, according to Goodwyn and Acredolo (1998 , p. 70), they do just that and can be used to express an idea that the child cannot yet express in speech. 1

Even though they treat their early gestures like words in some respects, children rarely combine gestures with other gestures and, if they do, the phase is short-lived ( Goldin-Meadow & Morford 1985 ). But children do frequently combine their gestures with words, and they produce these combinations well before they combine words with words. Because gesture and speech convey meaning differently, it is rare for the two modalities to contribute identical information to a message. Even simple pointing gestures are not completely redundant with speech. For example, when a child says “bottle” while pointing at the bottle, the word labels and thus classifies, but does not locate, the object. The point, in contrast, indicates where the object is, but not what it is. When produced together, point and word work together to more richly specify the same object. Children’s earliest gesture-speech combinations are of this type––gesture conveys information that further specifies the information conveyed in speech; for example, pointing at a box while saying “box” ( Capirci et al 1996 , de Laguna 1927 , Greenfield & Smith 1976 , Guillaume 1927 , Leopold 1949 ).

But gesture can also convey information that overlaps very little, if at all, with the information conveyed in the word it accompanies. A point, for example, can indicate an object that is not referred to in speech––the child says “bottle” while pointing at the baby. In this case, word and gesture together convey a simple proposition––“the bottle is the baby’s”––that neither modality conveys on its own ( Goldin-Meadow & Morford 1985 , Greenfield & Smith 1976 , Masur 1982 , Masur 1983 , Morford & Goldin-Meadow 1992 , Zinober & Martlew 1985 ). The types of semantic relations conveyed in these gesture-speech combinations change over time and presage changes in children’s speech ( Özçaliskan & Goldin-Meadow 2005 ). For example, children produce constructions containing an argument and a predicate in gesture+speech (“you”+ HIT gesture) at 18 months, but do not produce these constructions in speech alone (“me touch”) until 22 months.

Children thus use gesture to communicate before they use words. But do these gestures merely precede language development, or are they fundamentally tied to it? If gesture is integral to language learning, changes in gesture should not only predate, but also predict, changes in language. And they do. With respect to words, we can predict which lexical items will enter a child’s verbal vocabulary by looking at the objects that child indicated in gesture several months earlier ( Iverson & Goldin-Meadow 2005 ). With respect to sentences, we can predict when a child will produce her first two-word utterance by looking at the age at which she first produced combinations in which gesture conveys one idea and speech another (e.g., point at bird+“nap”, Goldin-Meadow & Butcher 2003 , Iverson et al 2008 , Iverson & Goldin-Meadow 2005 ).

3.1.2. Gesture can cause linguistic change

There are (at least) two ways in which children’s own gestures can change what they know about language. First, as we have just seen, gesture gives young children the opportunity to express ideas that they are not yet able to express in speech. Parents and other listeners may attend to those gestures and “translate” them into speech, thus providing children with timely input about how to express particular ideas in their language. Under this scenario, gesture plays a role in the process of change by shaping children’s learning environments. Mothers do, in fact, respond to the gestures their children produce (Golinkoff 1986, Masur 1982), often translating gestures that children produce without speech into words (Goldin-Meadow et al 2007a). These maternal translations have been found to affect language learning. With respect to word learning, when mothers translate the gestures that their children produce into words, those words are more likely to quickly become part of the child’s vocabulary than words for gestures that mothers do not translate. With respect to sentence learning, children whose mothers frequently translate their gestures into speech tend to be the first to produce two-word utterances (Goldin-Meadow et al 2007a).

Second, gesture could play a causal role in language learning by providing children with the opportunity to practice ideas and communicative devices that underlie the words and constructions that they are not yet able to express in speech. Repeated practice could then pave the way for later acquisition. Under this scenario, gesture plays a role in the process of change by affecting the learners themselves. Evidence for this hypothesis comes from the fact that child gesture at 14 months is an excellent predictor of child vocabulary at 42 months, often better than other predictors (e.g., family income, parent speech, and even child speech at 14 months, Rowe et al 2008 ). However, to convincingly demonstrate that child gesture plays a causal role in word learning, we would need to randomly select children and manipulate their gestures, encouraging some to gesture and discouraging others. If the act of gesturing itself contributes to progress in language development (as it does in other domains, see section 3.2.3), children who are encouraged to gesture should have larger vocabularies than children who are discouraged from gesturing.

The gestures that others produce may also play a causal role in language learning. By 12 months, children can understand the gestures that other people produce. For example, they can follow an adult’s pointing gesture to a target object (Butterworth & Grover 1988, Carpenter et al 1998, Murphy & Messer 1977). Moreover, parents gesture frequently when they interact with their children, and the majority of these gestures co-occur with speech (Acredolo & Goodwyn 1988, Greenfield & Smith 1976, Shatz 1982). Parent gesture could facilitate the child’s comprehension, and eventual acquisition, of new words simply by providing nonverbal support for understanding speech (see Zukow-Goldring 1996).

However, it is often hard to tell whether parent gesture has an impact on child language learning above and beyond parent speech. For example, Iverson et al. (1999) and Pan et al. (2005) both found a relation between parent gesture and later child language, but the relation disappeared when parent speech was taken into account. The best way to convincingly test this hypothesis is to manipulate parent gesture and observe the effects on child language. Acredolo and Goodwyn (1988) instructed parents to use symbolic gestures (now called baby signs, Acredolo & Goodwyn 2002) in addition to words when talking to their children. They found that these children showed greater gains in vocabulary than children whose parents were encouraged to use only words or were not trained at all. But the children whose parents used gesture also used more of their own gestures. The vocabulary gains may thus have been mediated by child gesture.

Previous work has, in fact, found a link between parent gesture and child gesture––parents who gesture a great deal have children who gesture a great deal ( Iverson et al 1999 , Namy et al 2000 , Rowe 2000 ). Moreover, parent gesture at 14 months predicts child gesture at 14 months, which, in turn, predicts child receptive vocabulary at 42 months. Importantly, parent gesture at 14 months does not directly predict child vocabulary at 42 months ( Rowe et al 2008 ), suggesting that parent gesture affects later child vocabulary through child gesture––parents who gesture more have children who gesture more who, in turn, go on to develop relatively large receptive vocabularies in speech.

To summarize thus far, gesture appears to play a role in learning when the task to be learned is language itself. When gesture is produced at this age, it often substitutes for a word that the child has not yet acquired. As we will see in the next section, gesture continues throughout development to convey ideas that are not expressed in speech, but often those ideas cannot easily be translated into a single word ( McNeill 1992 ). Thus, once children have become proficient language users, we should see a change in the kinds of ideas that gesture conveys. Future studies are needed to determine when this transition takes place.

3.2. Once language has been mastered: Gesture’s role in learning other domains

Gesture thus seems to offer children a “helping hand” as they learn language. Does gesture play a comparable role in other domains? We turn next to this question.

3.2.1. Gesture reveals understanding not found in speech

When children explain their understanding of concepts and problem-solving procedures, they often express some aspects of their knowledge in gestures and not in speech. Consider a six-year-old child explaining a Piagetian conservation task, in which two rows of checkers contain the same number; the checkers in one row are spread out and the child is asked whether the two rows continue to have the same number of checkers. Children who do not yet understand number conservation believe that the number of checkers in the transformed row has changed. Figure 1A displays a non-conserving child who says the number is different “because you spreaded them out” and conveys the same information in her gestures (she produces a spreading-out motion over the transformed row). In contrast, Figure 1B displays another non-conserving child who also focuses on the movements of the experimenter in his gestures––he says the number is different “because you moved them.” However, in his gestures, he indicates that the checkers in one row can be paired with the checkers in the second row, that is, he has focused on the one-to-one correspondence between the rows. This child has expressed information about the task in gestures that he did not express at all in his speech. Responses of this sort have been called “gesture-speech mismatches” ( Church & Goldin-Meadow 1986 ).

Figure 1. Examples of children gesturing while giving explanations for their non-conserving judgments on a number conservation task. In the top picture (A), the child says, “you spreaded it out,” while producing a spreading motion with her hands, thus producing a gesture-speech match. In the bottom pictures (B), the child says, “you moved them,” again focusing on the experimenter’s movements in speech but, in gesture, he produces pointing gestures that align the checkers in one row with the checkers in the other row (one-to-one correspondence), thus producing a gesture-speech mismatch.

People express aspects of their knowledge in gesture on a wide range of cognitive tasks, including mathematical equations (e.g., Perry et al 1988 ), balance tasks (e.g., Pine et al 2004 ), logical puzzles (e.g., the Tower of Hanoi, Garber & Goldin-Meadow 2002 ), science explanations ( Roth 2002 ), and even moral reasoning ( Church et al 1995 ). In all of these domains, people sometimes express information in gesture that they do not express in the accompanying speech. Thus, across a wide range of cognitive domains, gesture reveals information about people’s reasoning and problem solving that is not found in their speech.

From this perspective, gesture-speech mismatches occur when children explore aspects of the task stimuli in gesture, but do not ultimately express all of those aspects in speech. In the example presented in Figure 1B , the child uses gesture to explore the one-to-one-correspondence between the checkers in the two rows, but he does not ultimately express this aspect of the task in his speech.

3.2.2. The mismatch between gesture and speech presages knowledge change

Gesture-speech mismatches are of interest because they provide insight into aspects of learners’ knowledge that they do not express in speech. But even more important, mismatches are a good index of the stability of a learner’s knowledge. Several studies across a variety of domains have shown that children who produce gesture-speech mismatches when explaining a concept are in a state of transitional knowledge with respect to that concept. For example, in the domain of Piagetian conservation, Church and Goldin-Meadow (1986) found that, among partial conservers (i.e., children who conserved on some tasks and not on others), those who produced a majority of mismatches in their conservation explanations prior to instruction were more likely to profit from instruction about conservation than were those who produced few mismatches. Thus, frequent mismatches between speech and gesture in children’s task explanations at pretest indexed their readiness to benefit from instruction. Similar findings have been documented in children learning about mathematical equations such as 3 + 4 + 5 = 3 + __ (Perry et al 1988), in children solving balance problems (Pine et al 2004), and in adults learning about stereoisomers in organic chemistry (Ping et al. 2012).

Gesture-speech mismatch thus reflects readiness to learn––and does so better than other possible indices of learning that rely on the verbal channel alone. Church (1999) compared three indices that can be used to predict children’s readiness to learn from a conservation lesson: number of pretest responses containing a gesture-speech mismatch (i.e., two different strategies, one in speech, one in gesture), number of pretest responses containing more than one strategy in speech (i.e., two different strategies, both in speech), and total number of different strategies conveyed in speech across the entire pretest. Each of these indices individually predicted learning from the lesson, but when all three were included in the same model, the only significant predictor was gesture-speech mismatch.

Gesture-speech mismatches also index knowledge transition in another sense: the state in which children frequently produce mismatches is both preceded and followed by a state in which they seldom produce mismatches. In a micro-longitudinal study, Alibali and Goldin-Meadow (1993) tracked the relationship between gesture and speech in children’s explanations over a series of problems as the children learned to solve mathematical equations, such as 3 + 4 + 5 = 3 + __. Among children who produced gestures on the task, the large majority of children traversed all or part of the following path: (1) Children began in a state in which they predominantly produced gesture-speech match responses, expressing a single, incorrect strategy for solving the problems conveyed in both gesture and speech. (2) They then progressed to a state in which they produced gesture-speech mismatches, expressing more than one strategy, one in gesture and the other in speech. (3) Finally, they reached a state in which they produced gesture-speech match responses, now expressing a single, correct strategy conveyed in both gesture and speech. Thus, the state in which children frequently produce gesture-speech mismatches is also transitional in the sense that it is both preceded and followed by a more stable state.

3.2.3. Gesture can cause knowledge change

Gesture can provide information about the content and stability of children’s knowledge. But can gesture do more? As in language learning, gesture might play a causal role in the process of knowledge change. There are (at least) two classes of mechanisms by which gestures could play a causal role in bringing about knowledge change: social mechanisms, by which learners’ gestures convey information about their knowledge states to listeners who, in turn, alter the input they provide to the learners, and cognitive mechanisms, by which learners’ own gestures alter the state of their knowledge. We consider each class of mechanisms in turn.

3.2.3.1 Social mechanisms by which gesture can cause change

Gesture is implicated in social mechanisms of knowledge change. According to these mechanisms, learners’ gestures convey information about their cognitive states to listeners (teachers, parents, or peers), and those listeners then use this information to guide their ongoing interactions with the learners. Learners’ gestures can provide information about the leading edge of their knowledge, information that could be used to scaffold their developing understanding. Learners thus have the potential to influence the input they receive just by moving their hands. For the social construction of knowledge to occur in this way, listeners must grasp the information that learners express in their gestures, and they must also change their responses to those learners as a function of the information. Evidence supports both of these steps.

As reviewed earlier in the section on gesture’s role in language comprehension, there is evidence that listeners detect and interpret the information that speakers express solely in their gestures on a variety of tasks, for example, on Piagetian conservation problems ( Goldin-Meadow et al 1992 , Kelly & Church 1997 , 1998 ) and mathematical equations ( Alibali et al 1997 ). Moreover, there is evidence that listeners can detect gestured information not only when viewing speakers on video, but also when interacting with “live” speakers in real time ( Goldin-Meadow & Sandhofer 1999 ).

As one example, Alibali, Flevares, and Goldin-Meadow (1997) presented clips of children explaining mathematics problems to two groups of adults—teachers and college students—and asked the adults to describe each child’s reasoning about the problems. Both teachers and college students detected the information that children expressed in their gestures. In some of the clips, the child expressed a strategy for solving the problems solely in gesture. For example, one boy explained his incorrect solution (he put 18 in the blank) to the problem 5 + 6 + 7 = __ + 7 by saying that he added the numbers on the left side of the equation. In gesture, however, he pointed to the 5 and the 6––the two numbers that should be added to yield the correct solution of 11. In reacting to this clip, one teacher said, “What I’m picking up now is [the child’s] inability to realize that these (5 and 6) are meant to represent the same number…. There isn’t a connection being made by the fact that the 7 on this (left) side of the equal sign is supposed to also be the same as this 7 on this (right) side of the equal sign, which would, you know, once you made that connection it should be fairly clear that the 5 and 6 belong in the box.” It seems likely that the teacher’s reaction was prompted by the child’s gestures. In general, the teachers were more likely to mention a strategy when the target child expressed that strategy solely in gesture than when the target child did not express the strategy in either gesture or speech.

Communication partners can thus glean information from a learner’s gestures. But do they use this information to guide their interactions with the learner? If the teacher in the preceding example were asked to instruct the child she viewed in the video, she might point out the two 7’s and suggest that the child cancel the like addends and then group and add the remaining numbers. In this way, the teacher would be tailoring her instruction to the child’s knowledge state, and instruction that is targeted to a child’s knowledge state might be particularly helpful in promoting learning in the child.

Teachers have been found to alter their input to children on the basis of the children’s gestures. Goldin-Meadow and Singer (2003) asked teachers to instruct children in one-on-one tutorials on mathematical equations; they asked whether the teachers’ instruction varied as a function of their pupils’ gestures. They found that the teachers offered more different types of problem-solving strategies to children who produced gesture-speech mismatches, and also produced more mismatches of their own (i.e., typically a correct strategy in speech and a different correct strategy in gesture) when instructing children who produced mismatches than when instructing children who produced matches. Importantly, including mismatches of this sort in instruction greatly increases the likelihood that children will profit from that instruction ( Singer & Goldin-Meadow 2005 ). Children can thus have an active hand in shaping their own instruction.

3.2.3.2 Cognitive mechanisms by which gesture can cause change

There is growing evidence that producing gestures can alter the gesturer’s cognitive state. If this is the case, then a learner’s gestures will not only reflect the process of cognitive change, but also cause that change. A number of specific claims regarding how gesturing might cause cognitive change have been made.

First, gestures may manifest implicit knowledge that a learner has about a concept or problem. When learners express this implicit knowledge and express other more explicit knowledge at the same time, the simultaneous activation of these ideas may destabilize their knowledge, making them more receptive to instructional input and more likely to alter their problem-solving strategies. In support of this view, Broaders, Cook, Mitchell, and Goldin-Meadow (2007) told some children to gesture and others not to gesture as they solved a series of mathematical equations. When required to gesture, many children expressed problem-solving strategies in gesture that they had not previously expressed in either speech or gesture. When later given instruction in the problems, it was the children who had been told to gesture and expressed novel information in those gestures who were particularly likely to learn mathematical equivalence.

Second, gesturing could help learners manage how much cognitive effort they expend. Goldin-Meadow, Nusbaum, Kelly and Wagner (2001; see also Ping & Goldin-Meadow 2010, Wagner et al 2004) found that speakers who gestured when explaining how they solved a series of math problems while, at the same time, trying to remember an unrelated list of items had better recall than speakers who did not gesture. This effect holds even when speakers are told when to gesture and when not to gesture (Cook et al 2012). If gesturing does serve to reduce a learner’s effort, that saved effort could be put toward other facets of the problem and thus facilitate learning.

Third, gesturing could serve to highlight perceptual or motor information in a learner’s representations of a problem, making that information more likely to be engaged when solving the problem. In line with this view, Alibali and Kita (2010) found that children asked to solve a series of Piagetian conservation tasks were more likely to express information about the perceptual state of the task objects when they were allowed to gesture than when they were not allowed to gesture. Similarly, in a study of adult learners asked to predict how a gear in an array of gears would move if the first gear were rotated in a particular direction, Alibali, Spencer, Knox and Kita (2011) found that learners who were allowed to gesture were more likely to persist in using a perceptual-motor strategy to solve the problems (i.e., modeling the movements of each individual gear), and less likely to shift to a more abstract strategy (i.e., predicting the movement of the gear based on whether the total number of gears was even or odd).

As another example, Beilock and Goldin-Meadow (2010) demonstrated that gesturing can introduce motor information into a speaker’s mental representations of a problem. They used two versions of the Tower of Hanoi task, a puzzle in which 4 disks must be moved from one of three pegs to another peg; only one disk can be moved at a time and a bigger disk can never be placed on top of a smaller disk. In one version, the heaviest disk was also the largest disk; in the other, the heaviest disk was the smallest disk. Importantly, the heaviest disk could not be lifted with one hand. Participants solved the problem twice. Some participants used the “largest=heaviest” version for both trials (the No Switch group); others used the largest=heaviest version on the first trial and the “smallest=heaviest” version on the second trial (the Switch group). In between the two trials, participants were asked to explain how they solved the problem and to gesture during their explanation. Participants who used one-handed gestures when describing the smallest disk during their explanation of the first trial performed worse on the second trial than participants who used two-handed gestures to describe the smallest disk––but only in the Switch group (recall that the smallest disk could no longer be lifted with one hand after the disks were switched). Participants in the No Switch group improved on the task no matter which gestures they produced, as did participants who were not asked to explain their reasoning and thus produced no gestures at all. The participants never mentioned weight in their talk. But weight information is an inherent part of gesturing on this task––one has to use either one hand (=light disk) or two (=heavy disk) when gesturing. When the participants’ gestures highlighted weight information that did not align with the actual movement needed to solve the problem, subsequent performance suffered. 
Gesturing thus introduced action information into the participants’ problem representations, and this information affected their later problem solving.

It is likely that both cognitive and social mechanisms operate when gesture is involved in bringing about change ( Goldin-Meadow 2003a ). For example, Streeck (2009) argues that gesturing does not just reflect thought but is part of the cognitive process that accomplishes a task and, in this sense, is itself thought. Moreover, because gesture is an observable and external aspect of the cognitive process, it puts thought in the public domain and thus opens the learner to social mechanisms (see also Alac & Hutchins 2004 , Goodwin 2007 ).

4. Gesture’s role in creating language

We have seen that when gesture is produced along with speech, it provides a second window onto the speaker’s thoughts, offering insight into those thoughts that cannot be found in speech and predicting (perhaps even contributing to) cognitive change. The form that gesture assumes when it accompanies speech is imagistic and continuous, complementing the segmented and combinatorial form that characterizes speech. But what happens when the manual modality is called upon to fulfill, on its own, all of the functions of language? Interestingly, when the manual modality takes over the functions of language, as in sign languages of the deaf, it also takes over its segmented and combinatorial form.

4.1. Sign language: Codified manual language systems transmitted across generations

Sign languages of the deaf are autonomous languages that do not depend on the spoken language of the surrounding hearing community. For example, American Sign Language (ASL) is structured very differently from British Sign Language (BSL), despite the fact that English is the spoken language that surrounds both sign communities.

Even though sign languages are processed by the hand and eye rather than the mouth and ear, they have the defining properties of segmentation and combination that characterize all spoken language systems ( Klima & Bellugi 1979 , Sandler & Lillo-Martin 2006 ). Sign languages are structured at the sentence level (syntactic structure), at the sign level (morphological structure), and at the sub-sign level and thus have meaningless elements akin to phonemes (phonological structure). Just like words in spoken languages (but unlike the gestures that accompany speech, Goldin-Meadow et al 1996 ), signs combine to create larger wholes (sentences) that are typically characterized by a basic order, for example, SVO (Subject-Verb-Object) in ASL ( Chen Pichler 2008 ); SOV in Sign Language of the Netherlands ( Coerts 2000 ). Moreover, the signs that comprise the sentences are themselves composed of meaningful components (morphemes; Klima & Bellugi 1979 ).

Although many of the signs in a language like ASL are iconic (i.e., the form of the sign is transparently related to its referent), iconicity characterizes only a small portion of the signs and structures in any conventional sign language. Moreover, sign languages do not always take advantage of the iconic potential that the manual modality offers. For example, although it would be physically easy to indicate the manner by which a skateboarder moves in a circle within the sign that conveys the path, to be grammatically correct the ASL signer must produce separate, serially linked signs, one for the manner and a separate one for the path ( Supalla 1990 ). As another example, the sign for slow in ASL is made by moving one hand across the back of the other hand. When the sign is modified to be very slow , it is made more rapidly since this is the particular modification of movement associated with an intensification meaning in ASL ( Klima & Bellugi 1979 ). Thus, modifying the meaning of a sign can reduce its iconicity.

Moreover, the iconicity found in a sign language does not appear to play a significant role in the way the language is processed or learned. For example, young children are just as likely to learn a sign whose form does not resemble its referent as a sign whose form is an iconic depiction of the referent ( Bonvillian et al 1983 ). Similarly, young sign learners find morphologically complex constructions difficult to learn even if they are iconic. Moving the sign give from the chest toward the listener would seem to be an iconically transparent way of expressing I give to you , and thus ought to be an early acquisition if children are paying attention to iconicity. However, the sign turns out to be a relatively late acquisition, presumably because the sign is marked for both the agent ( I ) and the recipient ( you ) and is thus morphologically complex ( Meier 1987 ).

Interestingly, the segmentation and combination that characterize established languages, signed or spoken, are also found in newly emerging sign languages, as we will see in the next section.

4.2. Emerging sign systems

Deaf children born to deaf parents who are exposed to a conventional sign language learn that language as naturally, and following the same major milestones, as hearing children learning a spoken language from their hearing parents ( Lillo-Martin 1999 , Newport & Meier 1985 ). But 90% of deaf children are born to hearing parents who are not likely to know a conventional sign language ( Hoffmeister & Wilbur 1980 ). These hearing parents very often prefer that their deaf child learn a spoken rather than a signed language. They thus choose to educate the child using an oral method of instruction, one that focuses on lip-reading and discourages the use of sign language and gesture. Unfortunately, it is extremely difficult for a profoundly deaf child to learn a spoken language, even when that child is given intensive oral education ( Mayberry 1992 ). Under these circumstances, one might expect that a child would not communicate at all. But that is not what happens––deaf children who are unable to use the spoken language input that surrounds them and have not been exposed to sign language do communicate with the hearing individuals in their households, and they use gesture to do so.

The gestures that deaf children in these circumstances develop are called homesigns . Interestingly, homesigns are characterized by segmentation and combination, as well as many other properties found in natural languages ( Goldin-Meadow 2003b ). For example, homesigners’ gestures form a lexicon, and these lexical items are composed of morphemes and thus form a system at the word level ( Goldin-Meadow et al 2007b ). Moreover, the lexical items combine to form syntactically structured strings and thus form a system at the sentence level ( Feldman et al 1978 , Goldin-Meadow & Mylander 1998 ), with negative and question sentence modulators ( Franklin et al 2011 ), grammatical categories ( Goldin-Meadow et al 1994 ), and hierarchical structure built around the noun ( Hunsicker & Goldin-Meadow 2012 ). Importantly, homesigners use their gestures not only to make requests of others, but also to comment on the present and non-present ( Morford & Goldin-Meadow 1997 ); to make generic statements about classes of objects ( Goldin-Meadow et al 2005 ); to tell stories about real and imagined events ( Morford 1995 , Phillips et al 2001 ); to talk to themselves; and to talk about language ( Goldin-Meadow 2003b )––that is, to serve typical functions that all languages serve, signed or spoken.

But homesign does not exhibit all of the properties found in natural language. We can explore the conditions under which homesign takes on more and more linguistic properties to get a handle on factors that may have shaped human language. For example, deaf children rarely remain homesigners in countries like the United States; they either learn a conventional sign language or receive cochlear implants and focus on spoken language. However, in Nicaragua, not only do some homesigners continue to use their gesture systems into adulthood, but in the late 1970s and early 1980s, rapidly expanding programs in special education brought together in great numbers deaf children and adolescents who were, at the time, homesigners ( Kegl et al 1999 , Senghas 1995 ). As these children interacted on school buses and in the schoolyard, they converged on a common vocabulary of signs and ways to combine those signs into sentences, and a new language––Nicaraguan Sign Language (NSL)––was born.

NSL has continued to develop as new waves of children enter the community and learn to sign from older peers. NSL is not unique––other sign languages have originated in communal contexts and been passed from generation to generation. The Nicaraguan case is special because the originators of the language are still alive. We thus have in this first generation, taken together with subsequent generations and current-day homesigners (child and adult), a living historical record of a language as it develops through its earliest stages.

Analyses of adult homesign in Nicaragua have, in fact, uncovered linguistic structures that may turn out to go beyond the structures found in child homesign: the grammatical category subject ( Coppola & Newport 2005 ); pointing devices representing locations vs. nominals ( Coppola & Senghas 2010 ); morpho-phonological finger complexity patterns ( Brentari et al 2012 ); and morphological devices that mark number ( Coppola et al 2012 ). By contrasting the linguistic systems constructed by child and adult homesigners, we can see the impact that growing older has on language creation.

In addition, by contrasting the linguistic systems constructed by adult homesigners in Nicaragua with the structures used by the first cohort of NSL signers, we can see the impact that a community of users has on language. Having a group with whom they could communicate meant that the first cohort of signers were both producers and receivers of their linguistic system, a circumstance that could lead to a system with greater systematicity, but perhaps less complexity, as the group may need to adjust to the lowest common denominator (i.e., to the homesigner with the least complex system).

Finally, by contrasting the linguistic systems developed by the first and second cohorts of NSL signers (e.g., Senghas 2003 ), we can see the impact that passing a language through a new generation of learners has on language. Once learners are exposed to a system that has linguistic structure, the processes of language change may be identical to the processes studied in historical linguistics. One interesting question is whether the changes seen in NSL in its earliest stages are of the same type and magnitude as the changes that occur in mature languages over historical time.

4.3. Gestures used by hearing adults when they are not permitted to speak

A defining feature of homesign is that it is not shared in the way that conventional communication systems are. Deaf homesigners produce gestures to communicate with the hearing individuals in their homes. But the hearing individuals, particularly hearing parents who are committed to teaching their children to talk and thus to oral education, use speech back. Although this speech is often accompanied by gesture ( Flaherty & Goldin-Meadow 2010 ), as we have seen earlier, the gestures that co-occur with speech form an integrated system with that speech and, in this sense, are not free to take on the properties of the deaf child’s gestures. As a result, although hearing parents respond to their deaf child’s gestures, they do not adopt the gestures themselves (nor do they typically acknowledge that the child even uses gesture to communicate). The parents produce co-speech gestures, not homesigns.

Not surprisingly, then, the structures found in child homesign cannot be traced back to the spontaneous gestures that hearing parents produce while talking to their children ( Goldin-Meadow et al 1994 , Goldin-Meadow & Mylander 1983 ). Homesigners see the global and unsegmented gestures that their parents produce. But when gesturing themselves, they use gestures that are characterized by segmentation and combination. The gestures that hearing individuals produce when they talk therefore do not provide a model for the linguistic structures found in homesign.

Nevertheless, co-speech gestures could provide the raw materials (e.g., handshapes, motions) for the linguistic constructions that homesigners build (see, for example, Goldin-Meadow et al 2007b ) and, as such, could contribute to the initial stages of an emerging sign language (see Senghas et al 2004 ). Moreover, the disparity between co-speech gesture and homesign has important implications for language learning. To the extent that the properties of homesign differ from the properties of co-speech gesture, the deaf children themselves are likely to be imposing these particular structural properties on their communication systems. It is an intriguing, but as yet unanswered, question as to where the tendency to impose structure on homesign comes from.

We have seen that co-speech gestures do not assume the linguistic properties found in homesign. But what would happen if we were to ask hearing speakers to abandon speech and create a manual communication system on the spot? Would that system contain the linguistic properties found in homesign? Examining the gestures that hearing speakers produce when requested to communicate without speech allows us to explore the robustness of linguistic constructions created on-line in the manual modality.

Hearing gesturers asked to gesture without speaking are able to construct some properties of language with their hands. For example, the order of the gestures they construct on the spot indicates who does what to whom ( Gershkoff-Stowe & Goldin-Meadow 2002 , Goldin-Meadow et al 1996 ). However, hearing gesturers do not display other linguistic properties found in established sign languages and even in homesign. For example, they do not use consistent form-meaning pairings akin to morphemes ( Singleton et al 1993 ), nor do they use the same finger complexity patterns that established sign languages and homesign display ( Brentari et al 2012 ).

Interestingly, the gestures that hearing speakers construct on the spot without speech do not appear to be derived from their spoken language. When hearing speakers of different languages (English, Spanish, Chinese, Turkish) are asked to describe animated events using their hands and no speech, they abandon the order typical of their respective spoken languages and produce gestures that all conform to the same order––SOV (e.g., captain-pail-swings, Goldin-Meadow et al 2008 ). This order has been found in some emerging sign languages (e.g., Al-Sayyid Bedouin Sign Language, Sandler et al 2005 ). Moreover, the SOV order is also found when hearing speakers of the same four languages perform a non-communicative, non-gestural task ( Goldin-Meadow et al 2008 ). Recent work on speakers of English, Turkish, and Italian has replicated the SOV order in hearing gesturers, but finds that gesturers move away from this order when given a lexicon (either spoken or manual, Hall et al 2010 ); when asked to describe reversible events involving two animates (girl pulled man, Meir et al 2010 ); and when asked to describe more complex events (man tells child that girl catches fish, Langus & Nespor 2010 ). Studies of hearing gesturers give us the opportunity to manipulate conditions that have the potential to affect communication, and to then observe the effect of those conditions on the structure of the emerging language.

4.4. Do signers gesture?

We have seen that hearing speakers produce analog, imagistic signals in the manual modality (i.e., gesture) along with the segmented, discrete signals they produce in the oral modality (i.e., speech), and that these gestures serve a number of communicative and cognitive functions. The question we now ask is whether signers also produce gestures and, if so, whether those gestures serve the same functions as co-speech gesture.

Deaf signers have been found to gesture when they sign ( Emmorey 1999 ). But do they produce mismatches and do those mismatches predict learning? ASL-signing deaf children were asked to explain their solutions to the same math problems studied in hearing children ( Perry et al 1988 ), and were then given instruction in those problems in ASL. The deaf children produced gestures as often as the hearing children. Moreover, the deaf children who produced many gestures conveying different information from their signs (i.e., gesture-sign mismatches) were more likely to succeed after instruction than the deaf children who produced few ( Goldin-Meadow et al in press ).

These findings suggest not only that mismatch can occur within a single modality (hand alone), but that within-modality mismatch can predict learning just as well as cross-modality mismatch (hand and mouth). Juxtaposing different ideas across two modalities is thus not essential for mismatch to predict learning. Rather, it appears to be the juxtaposition of different ideas across two distinct representational formats––an analog format underlying gesture vs. a discrete segmented format underlying words or signs––that is responsible for mismatch predicting learning.

5. Gesture’s role in the clinic and the classroom

The gestures learners spontaneously produce when they talk provide insight into their thoughts––often their cutting-edge thoughts. This fact opens up the possibility that gesture can be used to assess children’s knowledge in the clinic and the classroom. Moreover, the fact that encouraging learners to gesture on a task can lead to better understanding of the task opens up the possibility that gesture can also be used to change what children know in the clinic or the classroom.

5.1. Clinical situations

Gesture can provide unique information about the nature and extent of underlying deficits in children and adults with a variety of language and communication disorders ( Capone & McGregor 2004 , Goldin-Meadow & Iverson 2010 ). Studies of a range of disordered populations across the lifespan have identified subgroups on the basis of gesture use and then examined future language in relation to subgroup membership. For example, spontaneous gesture production at 18 months in children with early focal brain injury can be used to distinguish children who are likely to recover from initial language delay from children who are not likely to recover ( Sauer et al 2008 ).

As another example, infants subsequently diagnosed with autism produce fewer gestures overall and almost no instances of pointing at 12 months, compared to typically developing infants at the same age ( Osterling & Dawson 1994 , see also Bernabei et al 1998 ). This finding has been replicated in prospective studies of younger infant siblings of older children already diagnosed with autism. Infant siblings who later turn out to be diagnosed with autism have significantly smaller gesture repertoires at 12 and 18 months than infant siblings who do not receive such a diagnosis, and than a comparison group of infants with no family history of autism. Importantly, at early ages, gesture seems to be more informative about future diagnostic status than word comprehension or production––differences between infant siblings later diagnosed with autism and the two comparison groups do not emerge in speech until 18 months of age ( Mitchell et al 2006 ). Future work is needed to determine whether gesture use (or its lack) is a specific marker of autism or a general marker of language and communication delay independent of etiology.

Early gesture thus appears to be a sign of resilience in children with language difficulties, and an indicator that they may not be delayed in the future. In contrast, adults with aphasia who gesture within the first months after the onset of their illness appear to do less well in terms of recovery than aphasic adults who do not gesture ( Braddock 2007 ). An initial pattern of “compensation” via gesture thus appears to be a positive prognostic indicator for language recovery in children, but not in adults. These findings suggest that encouraging gesture might be more helpful to children with language disabilities than to adults.

5.2. Educational situations

Because children’s gestures often display information about their thinking that they do not express in speech, gesture can provide teachers with important information about their pupils’ knowledge. As reviewed earlier, there is evidence that teachers not only detect information that children express in gesture (e.g., Alibali et al 1997 ) but also alter their input to children as a function of those gestures ( Goldin-Meadow & Singer 2003 ).

It is also becoming increasingly clear that the gestures teachers produce during their lessons matter for students’ learning. Many studies have shown that lessons with gestures promote deeper learning (i.e., new forms of reasoning, generalization to new problem types, retention of knowledge) better than lessons without gestures. For example, Church, Ayman-Nolley, and Mahootian (2004) examined first-grade students learning about Piagetian conservation from videotaped lessons and found that, for native English speakers, 91% showed deep learning (i.e., added new “same” judgments) from a speech-plus-gesture lesson, compared to 53% from a speech-only lesson. For Spanish speakers with little English proficiency, 50% learned from the speech-plus-gesture lesson, compared to 20% from the speech-only lesson. As a second example, Valenzeno, Alibali, and Klatzky (2003) studied preschoolers learning about symmetry from a videotaped lesson and found that children who viewed a speech-plus-gesture lesson succeeded on more than twice as many posttest problems as children who viewed a speech-only lesson (2.08 vs. 0.85 out of 6). Clearly, teachers’ gestures can have a substantial impact on student learning. A teacher’s inclination to support difficult material with gesture may be precisely what students need in order to grasp it.

Building on growing evidence that teachers’ gestures matter for student learning, recent studies have sought to characterize how teachers use gesture in naturalistic instructional settings (e.g., Alibali & Nathan 2011 , Richland et al 2007 ). Other research has sought to instruct teachers about how to effectively use gesture ( Hostetter et al 2006 ). Given that teachers’ gestures affect the information that students take up from a lesson, and given that teachers can alter their gestures if they wish to do so, it may be worthwhile for teachers to use gesture intentionally, in a planned and purposeful fashion, to reinforce the message they intend to convey.

In light of evidence that the act of gesturing can itself promote learning, teachers and clinicians may also wish to encourage children and patients to produce gestures themselves. Encouraging children to gesture may serve to activate their implicit knowledge, making them particularly receptive to instruction ( Broaders et al 2007 ). Teachers may also encourage their students to gesture by producing gestures of their own. Cook and Goldin-Meadow (2006) found that children imitated their instructor’s gestures in a lesson about a mathematics task and, in turn, children’s gestures predicted their success on the math problems after instruction. Thus, teacher gesture promoted student gesture, which in turn fostered cognitive change.

6. Conclusions

We have seen that gesture is a robust part of human communication and can be harnessed in a variety of ways. First, gesture reflects what speakers know and can therefore serve as a window onto their thoughts. Importantly, this window often reveals thoughts that speakers do not even know they have. Encouraging speakers (e.g., students, patients, witnesses) to gesture thus has the potential to uncover thoughts that would be useful for individuals who interact with these speakers (teachers, clinicians, interviewers) to know. Second, gesture can change what speakers know. The act of producing gesture can bring out previously unexpressed thoughts and may even introduce new thoughts into a speaker’s repertoire, altering the course of a conversation or developmental trajectory as a result. Encouraging gesture thus also has the potential to change cognition. Finally, gesture provides building blocks that can be used to construct a language. By watching how children and adults who do not already have a language put those blocks together, we can observe the process of language creation first hand. Our hands are with us at all times and we routinely use them for communication. They thus provide both researchers and learners with an ever-present tool for understanding how we talk and think.

1 Two other types of gestures found in adult repertoires––the simple rhythmic beat gesture that patterns with discourse and does not convey semantic content, and the metaphoric gesture that represents abstract ideas rather than concrete ones––are not produced by children until much later in development ( McNeill 1992 ).

  • Acredolo LP, Goodwyn SW. Symbolic gesturing in normal infants. Child Development. 1988;59:450–56.
  • Acredolo LP, Goodwyn SW. Baby signs: How to talk with your baby before your baby can talk. McGraw-Hill; New York: 2002.
  • Alac M, Hutchins E. I see what you are saying: Action as cognition in fMRI brain mapping practice. Journal of Cognition and Culture. 2004;4:629–61.
  • Alibali MW, Flevares L, Goldin-Meadow S. Assessing knowledge conveyed in gesture: Do teachers have the upper hand? Journal of Educational Psychology. 1997;89:183–93.
  • Alibali MW, Goldin-Meadow S. Transitions in learning: What the hands reveal about a child’s state of mind. Cognitive Psychology. 1993;25:468–523.
  • Alibali MW, Heath DC, Myers HJ. Effects of visibility between speaker and listener on gesture production: Some gestures are meant to be seen. Journal of Memory and Language. 2001;44:169–88.
  • Alibali MW, Kita S. Gesture highlights perceptually present information for speakers. Gesture. 2010;10:3–28.
  • Alibali MW, Nathan MJ. Embodiment in mathematics teaching and learning: Evidence from students’ and teachers’ gestures. Journal of the Learning Sciences. 2011; in press.
  • Alibali MW, Spencer RC, Knox L, Kita S. Spontaneous gestures influence strategy choices in problem solving. Psychological Science. 2011;22:1138–44.
  • Bates E. Language and context. Academic Press; New York: 1976.
  • Bates E, Benigni L, Bretherton I, Camaioni L, Volterra V. The emergence of symbols: Cognition and communication in infancy. Academic Press; New York: 1979.
  • Beilock SL, Goldin-Meadow S. Gesture changes thought by grounding it in action. Psychological Science. 2010;21:1605–10.
  • Bergmann K, Kopp S. Systematicity and idiosyncrasy in iconic gesture use: Empirical analysis and computational modeling. In: Kopp S, Wachsmuth I, editors. Gesture in Embodied Communication and Human-Computer Interaction. Springer; Berlin/Heidelberg, Germany: 2010. pp. 182–94.
  • Bernabei P, Camaioni L, Levi G. An evaluation of early development in children with autism and pervasive developmental disorders from home movies: Preliminary findings. Autism. 1998;2:243–58.
  • Bonvillian JD, Orlansky MO, Novack LL. Developmental milestones: Sign language acquisition and motor development. Child Development. 1983;54:1435–45.
  • Braddock BA. Links between language, gesture, and motor skill: A longitudinal study of communication recovery in Broca’s aphasia. Unpublished doctoral dissertation. University of Missouri-Columbia; 2007.
  • Brentari D, Coppola M, Mazzoni L, Goldin-Meadow S. When does a system become phonological? Handshape production in gesturers, signers, and homesigners. Natural Language and Linguistic Theory. 2012;30.
  • Broaders S, Goldin-Meadow S. Truth is at hand: How gesture adds information during investigative interviews. Psychological Science. 2010;21:623–28.
  • Broaders SC, Cook SW, Mitchell Z, Goldin-Meadow S. Making children gesture brings out implicit knowledge and leads to learning. Journal of Experimental Psychology: General. 2007;136:539–50.
  • Butterworth G, Grover L. The origins of referential communication in human infancy. In: Weiskrantz L, editor. Thought without language. Clarendon; Oxford: 1988. pp. 5–24.
  • Capirci O, Iverson JM, Pizzuto E, Volterra V. Communicative gestures during the transition to two-word speech. Journal of Child Language. 1996;23:645–73.
  • Capone N, McGregor K. Gesture development: A review for clinical and research practices. Journal of Speech, Language, and Hearing Research. 2004;47:173–86.
  • Carpenter M, Nagell K, Tomasello M. Social cognition, joint attention, and communicative competence from 9 to 15 months of age. Monographs of the Society for Research in Child Development. 1998;63 (no. 255).
  • Chawla P, Krauss RM. Gesture and speech in spontaneous and rehearsed narratives. Journal of Experimental Social Psychology. 1994;30:580–601.
  • Chen Pichler D. Acquisition of word order: Then and now. In: Quer J, editor. Selected papers from the 8th Congress on Theoretical Issues in Sign Language Research. Signum-Verlag; Germany: 2008.
  • Church RB. Using gesture and speech to capture transitions in learning. Cognitive Development. 1999;14:313–42.
  • Church RB, Ayman-Nolley S, Mahootian S. The role of gesture in bilingual education: Does gesture enhance learning? International Journal of Bilingual Education and Bilingualism. 2004;7:303–19.
  • Church RB, Goldin-Meadow S. The mismatch between gesture and speech as an index of transitional knowledge. Cognition. 1986;23:43–71.
  • Church RB, Schonert-Reichl K, Goodman N, Kelly S, Ayman-Nolley S. The role of gesture and speech communication as a reflection of cognitive understanding. Journal of Contemporary Legal Issues. 1995;6:237–80.
  • Coerts JA. Early sign combinations in the acquisition of Sign Language of the Netherlands: Evidence for language-specific features. In: Chamberlain C, Morford JP, Mayberry R, editors. Language acquisition by eye. Erlbaum; Mahwah, NJ: 2000. pp. 91–109.
  • Cook SW, Goldin-Meadow S. The role of gesture in learning: Do children use their hands to change their minds? Journal of Cognition and Development. 2006;7:211–32.
  • Cook SW, Yip T, Goldin-Meadow S. Gestures, but not meaningless movements, lighten working memory load when explaining math. Language and Cognitive Processes. 2012; in press.
  • Coppola M, Newport EL. Grammatical subjects in homesign: Abstract linguistic structure in adult primary gesture systems without linguistic input. Proceedings of the National Academy of Sciences. 2005;102:19249–53.
  • Coppola M, Senghas A. The emergence of deixis in Nicaraguan signing. In: Brentari D, editor. Sign languages: A Cambridge language survey. Cambridge University Press; Cambridge: 2010. pp. 543–69.
  • Coppola M, Spaepen E, Goldin-Meadow S. Communicating about number without a language model: Linguistic devices for number are robust. 2012. Manuscript under review.
  • de Laguna G. Speech: Its function and development. Indiana University Press; Bloomington, IN: 1927.
  • Emmorey K. Do signers gesture? In: Messing LS, Campbell R, editors. Gesture, speech, and sign. Oxford University Press; Oxford: 1999. pp. 133–59.
  • Enfield NJ. The body as a cognitive artifact in kinship representations: Hand gesture diagrams by speakers of Lao. Current Anthropology. 2005;46:51–81.
  • Evans MA, Rubin KH. Hand gestures as a communicative mode in school-aged children. The Journal of Genetic Psychology. 1979;135:189–96.
  • Feldman H, Goldin-Meadow S, Gleitman L. Beyond Herodotus: The creation of language by linguistically deprived deaf children. In: Lock A, editor. Action, symbol, and gesture: The emergence of language. Academic Press; New York: 1978. pp. 351–414.
  • Flaherty M, Goldin-Meadow S. Does input matter? Gesture and homesign in Nicaragua, China, Turkey, and the USA. In: Smith ADM, Schouwstra M, de Boer B, Smith K, editors. Proceedings of the Eighth Evolution of Language Conference. World Scientific Publishing; Singapore: 2010. pp. 403–04.
  • Franklin A, Giannakidou A, Goldin-Meadow S. Negation, questions, and structure building in a homesign system. Cognition. 2011;118:398–416.
  • Garber P, Goldin-Meadow S. Gesture offers insight into problem solving in adults and children. Cognitive Science. 2002;26:817–31.
  • Gershkoff-Stowe L, Goldin-Meadow S. Is there a natural order for expressing semantic relations? Cognitive Psychology. 2002;45:375–412.
  • Goldin-Meadow S. Hearing gesture: How our hands help us think. Harvard University Press; Cambridge, MA: 2003a.
  • Goldin-Meadow S. The resilience of language: What gesture creation in deaf children can tell us about how all children learn language. Psychology Press; New York: 2003b. NY. [ Google Scholar ]
  • Goldin-Meadow S, Butcher C. Pointing: Where language, culture, and cognition meet. Erlbaum; NJ: 2003. Pointing toward two-word speech in young children. [ Google Scholar ]
  • Goldin-Meadow S, Butcher C, Mylander C, Dodge M. Nouns and verbs in a self-styled gesture system: What’s in a name? Cognitive Psychology. 1994; 27 :259–319. [ PubMed ] [ Google Scholar ]
  • Goldin-Meadow S, Gelman S, Mylander C. Expressing generic concepts with and without a language model. Cognition. 2005; 96 :109–26. [ PubMed ] [ Google Scholar ]
  • Goldin-Meadow S, Goodrich W, Sauer E, Iverson JM. Young children use their hands to tell their mothers what to say. Developmental Science. 2007a; 10 :778–85. [ PubMed ] [ Google Scholar ]
  • Goldin-Meadow S, Iverson JM. Gesturing across the lifespan. In: Overton WF, editor. Cognition, biology, and methods across the lifespan. Wiley; Hoboken, NJ: 2010. pp. 36–55. [ Google Scholar ]
  • Goldin-Meadow S, Kim S, Singer M. What the teachers’ hands tell the students’ minds about math. Journal of Educational Psychology. 1999; 91 :720–30. [ Google Scholar ]
  • Goldin-Meadow S, McNeill D, Singleton J. Silence is liberating: Removing the handcuffs on grammatical expression in the manual modality. Psychological Review. 1996; 103 :34–55. [ PubMed ] [ Google Scholar ]
  • Goldin-Meadow S, Morford M. Gesture in early child language: Studies of deaf and hearing children. Merrill-Palmer Quarterly. 1985; 31 :145–76. [ Google Scholar ]
  • Goldin-Meadow S, Mylander C. Gestural communication in deaf children: The non-effects of parental input on language development. Science. 1983; 221 :372–74. [ PubMed ] [ Google Scholar ]
  • Goldin-Meadow S, Mylander C. Spontaneous sign systems created by deaf children in two cultures. Nature. 1998; 91 :279–81. [ PubMed ] [ Google Scholar ]
  • Goldin-Meadow S, Mylander C, Franklin A. How children make language out of gesture: Morphological structure in gesture systems developed by American and Chinese deaf children. Cognitive Psychology. 2007b; 55 :87–135. [ PubMed ] [ Google Scholar ]
  • Goldin-Meadow S, Nusbaum H, Kelly SD, Wagner SM. Explaining math: Gesturing lightens the load. Psychological Science. 2001; 12 :516–22. [ PubMed ] [ Google Scholar ]
  • Goldin-Meadow S, Sandhofer CM. Gesture conveys substantive information to ordinary listeners. Developmental Science. 1999; 2 :67–74. [ Google Scholar ]
  • Goldin-Meadow S, Shield A, Lenzen D, Herzig M, Padden C. The gestures ASL signers use tell us when they are ready to learn math. Cognition. in press. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Goldin-Meadow S, Singer MA. From children’s hands to adults’ ears: Gesture’s role in the learning process. Developmental Psychology. 2003; 39 :509–20. [ PubMed ] [ Google Scholar ]
  • Goldin-Meadow S, So W-C, Ozyurek A, Mylander C. The natural order of events: How speakers of different languages represent events nonverbally. Proceedings of the National Academy of Sciences. 2008; 105 :9163–68. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Goldin-Meadow S, Wein D, Chang C. Assessing knowledge through gesture: Using children’s hands to read their minds. Cognition and Instruction. 1992; 9 :201–19. [ Google Scholar ]
  • Golinkoff RM. ‘I beg your pardon?’: the preverbal negotiation of failed messages. Journal of Child Language. 1986; 13 :455–76. [ PubMed ] [ Google Scholar ]
  • Goodwin C. Environmentally coupled gestures. In: S Duncan, J Cassell, E Levy., editors. Gesture and the dynamic dimensions of language. John Benjamins; Amsterdam/Philadelphia: 2007. pp. 195–212. [ Google Scholar ]
  • Goodwyn S, Acredolo L. Encouraging symbolic gestures: A new perspective on the relationship between gesture and speech. In: Iverson JM, Goldin-Meadow S, editors. The nature and functions of gesture in children’s communication. Jossey-Bass; San Francisco: 1998. pp. 61–73. [ PubMed ] [ Google Scholar ]
  • Greenfield P, Smith J. The structure of communication in early language development. Academic Press; New York: 1976. [ Google Scholar ]
  • Guidetti M. The emergence of pragmatics: Forms and functions of conventional gestures in young French children. First Language. 2002; 22 :265–85. [ Google Scholar ]
  • Guillaume P. Les debuts de la phrase dans le langage de l’enfant. Journal de Psychologie. 1927; 24 :1–25. [ Google Scholar ]
  • Hall M, Mayberry R, Ferreira V. Communication systems shape the natural order of events: Competing biases from grammar and pantomime. Abstracts of the 4th conference of the International Society for Gesture Studies, Frankfurt an der Oder; Germany. 2010. [ Google Scholar ]
  • Hoffmeister R, Wilbur R. Developmental: The acquisition of sign language. In: Lane H, Grosjean F, editors. Recent perspectives on American Sign Language. Erlbaum; Hillsdale, NJ: 1980. [ Google Scholar ]
  • Holler J, Shovelton H, Beattie G. Do iconic hand gestures really contribute to the communication of semantic information in a face-to-face context? Journal of Nonverbal Behavior. 2009; 33 [ Google Scholar ]
  • Hostetter AB. When do gestures communicate? A meta-analysis. Psychological Bulletin. 2011; 137 :297–315. [ PubMed ] [ Google Scholar ]
  • Hostetter AB, Alibali MW. Raise your hand if you’re spatial: Relations between verbal and spatial skills and gesture production. Gesture. 2007; 7 :73–95. [ Google Scholar ]
  • Hostetter AB, Alibali MW. Visible embodiment: Gestures as simulated action. Psychonomic Bulletin and Review. 2008; 15 :495–514. [ PubMed ] [ Google Scholar ]
  • Hostetter AB, Alibali MW. Language, gesture, action A test of the Gesture as Simulated Action framework. Journal of Memory and Language. 2010; 63 :245–57. [ Google Scholar ]
  • Hostetter AB, Alibali MW, Kita S. I see it in my hands’ eye: Representational gestures reflect conceptual demands. Language and Cognitive Processes. 2007; 22 :313–36. [ Google Scholar ]
  • Hostetter AB, Bieda K, Alibali MW, Nathan MJ, Knuth EJ. Don’t just tell them, show them Teachers can intentionally alter their instructional gestures. In: Sun R, editor. Proceedings of the Twenty-Eighth Annual Conference of the Cognitive Science Society; Mahwah, NJ: Erlbaum; 2006. pp. 1523–28. [ Google Scholar ]
  • Hunsicker D, Goldin-Meadow S. Hierarchical structure in a self-created communication system: Building nominal constituents in homesign. 2012. Manuscript under review. [ PMC free article ] [ PubMed ]
  • Iverson JM, Capirci O, Caselli MS. From communication to language in two modalities. Cognitive Development. 1994; 9 :23–43. [ Google Scholar ]
  • Iverson JM, Capirci O, Longobardi E, Caselli MC. Gesturing in mother--child interactions. Cognitive Development. 1999; 14 :57–75. [ Google Scholar ]
  • Iverson JM, Capirci O, Volterra V, Goldin-Meadow S. Learning to talk in a gesture-rich world: Early communication of Italian vs. American children. First Language. 2008; 28 :164–81. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Iverson JM, Goldin-Meadow S. Why people gesture when they speak. Nature. 1998; 396 :228. [ PubMed ] [ Google Scholar ]
  • Iverson JM, Goldin-Meadow S. Gesture paves the way for language development. Psychological Science. 2005; 16 :368–71. [ PubMed ] [ Google Scholar ]
  • Jacobs N, Garnham A. The role of conversational hand gestures in a narrative task. Journal of Memory and Language. 2007; 56 :291–303. [ Google Scholar ]
  • Kegl J, Senghas A, Coppola M. Creation through contact: Sign language emergence and sign language change in Nicaragua. In: DeGraff M, editor. Language creation and language change: Creolization, diachrony, and development. MIT Press; Cambridge, MA: 1999. pp. 179–237. [ Google Scholar ]
  • Kelly SD, Church RB. Can children detect conceptual information conveyed through other children’s nonverbal behaviors? Cognition and Instruction. 1997; 15 :107–34. [ Google Scholar ]
  • Kelly SD, Church RB. A comparison between children’s and adults’ ability to detect conceptual information conveyed through representational gestures. Child Development. 1998; 69 :85–93. [ PubMed ] [ Google Scholar ]
  • Kita S. How representational gestures help speaking. In: McNeill D, editor. Language and gesture. Cambridge University Press; Cambridge, UK: 2000. pp. 162–85. [ Google Scholar ]
  • Kita S, Özyürek A. What does cross-linguistic variation in semantic coordination of speech and gesture reveal? Evidence for an interface representation of spatial thinking and speaking. Journal of Memory and Language. 2003; 48 :16–32. [ Google Scholar ]
  • Klima E, Bellugi U. The signs of language. Harvard University Press; Cambridge, MA: 1979. [ Google Scholar ]
  • Krauss RM. Why do we gesture when we speak? Current Directions in Psychological Science. 1998; 7 :54–60. [ Google Scholar ]
  • Krauss RM, Chen Y, Chawla P. Nonverbal behavior and nonverbal communication: What do conversational hand gestures tell us? Advances in Experimental Social Psychology. 1996; 28 :389–450. [ Google Scholar ]
  • Krauss RM, Chen Y, Gottesman R. Lexical gestures and lexical access: A process model. In: McNeill D, editor. Language and gesture. Cambridge University Press; Cambridge, UK: 2000. pp. 261–83. [ Google Scholar ]
  • Krauss RM, Dushay R, Chen Y, Rauscher F. The communicative value of conversational hand gestures. Journal of Experimental Social Psychology. 1995; 31 :533–52. [ Google Scholar ]
  • Langus A, Nespor M. Cognitive systems struggling for word order. Cognitive Psychology. 2010; 60 [ PubMed ] [ Google Scholar ]
  • Leopold W. Speech development of a bilingual child: A linguist’s record. Volume 3. Northwestern University Press; Evanston, IL: 1949. [ Google Scholar ]
  • Lillo-Martin D. Modality effects and modularity in language acquisition: The acquisition of American Sign Language. In: Ritchie WC, Bhatia TK, editors. Handbook of Child Language Acquisition. Academic Press; New York, NY: 1999. pp. 531–67. [ Google Scholar ]
  • Masur EF. Mothers’ responses to infants’ object-related gestures: Influences on lexical development. Journal of Child Language. 1982; 9 :23–30. [ PubMed ] [ Google Scholar ]
  • Masur EF. Gestural development, dual-directional signaling, and the transition to words. Journal of Psycholinguistic Research. 1983; 12 :93–109. [ Google Scholar ]
  • Mayberry RI. The cognitive development of deaf children: Recent insights. In: Segalowitz S, Rapin I, editors. Child Neuropsychology. Elsevier; Amsterdam: 1992. pp. 51–68. [ Google Scholar ]
  • McNeil NM, Alibali MW, Evans JL. The role of gesture in children’s comprehension of spoken language: Now they need it, now they don’t. Journal of Nonverbal Behavior. 2000; 24 :131–50. [ Google Scholar ]
  • McNeill D. Hand and mind: What gestures reveal about thought. University of Chicago Press; Chicago: 1992. [ Google Scholar ]
  • McNeill D. Gesture and thought. University of Chicago Press; Chicago: 2005. [ Google Scholar ]
  • McNeill D, Cassell J, McCullough K-E. Communicative effects of speech-mismatched gestures. Research on Language in Social Interaction. 1994; 27 :223–37. [ Google Scholar ]
  • McNeill D, Duncan S. Growth points in thinking-for-speaking. In: McNeill D, editor. Language and gesture. Cambridge University Press; Cambridge, UK: 2000. pp. 141–61. [ Google Scholar ]
  • Meier RP. Elicited imitation of verb agreement in American Sign Language: Iconically or morphologically determined? Journal of Memory and Language. 1987; 26 :362–76. [ Google Scholar ]
  • Meir I, Lifshitz A, Ilkbasaran D, Padden C. The interaction of animacy and word order in human languages: A study of strategies in a novel communication task. In: Smith ADM, Schouwstra M, Boer Bd, Smith K, editors. Proceedings of the Eighth Evolution of Language Conference; Singapore: World Scientific Publishing Co; 2010. pp. 455–56. [ Google Scholar ]
  • Melinger A, Levelt WJM. Gesture and the communicative intention of the speaker. Gesture. 2004; 4 :119–41. [ Google Scholar ]
  • Mitchell S, Brian J, Zwaigenbaum L, Roberts W, Szatmari P, et al. Early language and communication development of infants later diagnosed with autism spectrum disorder. Developmental and Behavioral Pediatrics. 2006; 27 :S69–S78. [ PubMed ] [ Google Scholar ]
  • Mol L, Krahmer E, Maes A, Swerts M. Seeing and being seen: The effects on gesture production. Journal of Computer-Mediated Communication. 2011; 17 :77–100. [ Google Scholar ]
  • Morford JP. How to hunt an iguana: The gestured narratives of non-signing deaf children. In: Bos H, Schermer T, editors. Sign language research 1994: Proceedings of the Fourth European Congress on Sign Language Research; Hamburg: Signum Press; 1995. pp. 99–115. [ Google Scholar ]
  • Morford JP, Goldin-Meadow S. From here and now to there and then: The development of displaced reference in homesign and English. Child Development. 1997; 68 :420–35. [ PubMed ] [ Google Scholar ]
  • Morford M, Goldin-Meadow S. Comprehension and production of gesture in combination with speech in one-word speakers. Journal of Child Language. 1992; 19 :559–80. [ PubMed ] [ Google Scholar ]
  • Morsella E, Krauss RM. The role of gestures in spatial working memory and speech. American Journal of Psychology. 2004; 117 :411–24. [ PubMed ] [ Google Scholar ]
  • Murphy CM, Messer DJ. Mothers, infants and pointing: A study of a gesture. In: Schaffer HR, editor. Studies in mother-infant interaction. Academic Press; London: 1977. pp. 325–54. [ Google Scholar ]
  • Namy LL, Acredolo LP, Goodwyn SW. Verbal labels and gestural routines in parental communication with young children. Journal of Nonverbal Behavior. 2000; 24 :63–79. [ Google Scholar ]
  • Newport EL, Meier RP. The acquisition of American Sign Language. In: Slobin DI, editor. The cross-linguistic study of language acquisition. Vol. 1: The data. Erlbaum; Hillsdale, NJ: 1985. [ Google Scholar ]
  • Osterling J, Dawson G. Early recognition of children with autism: A study of first birthday home videotapes. Journal of Autism and Developmental Disorders. 1994; 24 :247–57. [ PubMed ] [ Google Scholar ]
  • Özçaliskan S, Goldin-Meadow S. Gesture is at the cutting edge of early language development. Cognition. 2005; 96 :B101–B13. [ PubMed ] [ Google Scholar ]
  • Özyürek A, Kita S, Allen S, Brown A, Furman R, Ishizuka T. Development of cross-linguistic variation in speech and gesture: Motion events in English and Turkish. Developmental Psychology. 2008; 44 :1040–54. [ PubMed ] [ Google Scholar ]
  • Pan BA, Rowe ML, Singer JD, Snow CE. Maternal correlates of growth in toddler vocabulary production in low-income families. Child Development. 2005; 76 :763–82. [ PubMed ] [ Google Scholar ]
  • Perry M, Church RB, Goldin-Meadow S. Transitional knowledge in the acquisition of concepts. Cognitive Development. 1988; 3 :359–400. [ Google Scholar ]
  • Phillips SBVD, Goldin-Meadow S, Miller PJ. Enacting stories, seeing worlds: Similarities and differences in the cross-cultural narrative development of linguistically isolated deaf children. Human Development. 2001; 44 :311–36. [ Google Scholar ]
  • Pine KJ, Lufkin N, Messer D. More gestures than answers: Children learning about balance. Developmental Psychology. 2004; 40 :1059–67. [ PubMed ] [ Google Scholar ]
  • Ping R, Goldin-Meadow S. Gesturing saves cognitive resources when talking about nonpresent objects. Cognitive Science. 2010; 34 :602–19. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Ping R, Decatur MA, Larson SW, Zinchenko E, Goldin-Meadow S. Gesture-speech mismatch predicts who will learn to solve an organic chemistry problem. Presented at the annual meeting of AERA; New Orleans. Apr, 2012. [ Google Scholar ]
  • Rauscher FH, Krauss RM, Chen Y. Gesture, speech, and lexical access: The role of lexical movements in speech production. Psychological Science. 1996; 7 :226–31. [ Google Scholar ]
  • Richland LE, Zur O, Holyoak KJ. Cognitive supports for analogies in the mathematics classroom. Science. 2007; 316 :1128–29. [ PubMed ] [ Google Scholar ]
  • Roth W-M. Gestures: Their role in teaching and learning. Review of Educational Research. 2002; 71 :365–92. [ Google Scholar ]
  • Rowe ML. Pointing and talk by low-income mothers and their 14-month-old children. First Language. 2000; 20 :305–30. [ Google Scholar ]
  • Rowe ML, Özçalıskan S, Goldin-Meadow S. Learning words by hand: Gesture’s role in predicting vocabulary development. First Language. 2008; 28 :185–203. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Sandler W, Lillo-Martin D. Sign Language and Linguistic Universals. Cambridge University Press; Cambridge, UK: 2006. [ Google Scholar ]
  • Sandler W, Meir I, Padden C, Aronoff M. The emergence of grammar: Systematic structure in a new language. Proceedings of the National Academy of Sciences of America. 2005; 102 :261–2665. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Sassenberg U, Van Der Meer E. Do we really gesture more when it is more difficult? Cognitive Science. 2010; 34 :643–64. [ PubMed ] [ Google Scholar ]
  • Sauer E, Levine SC, Goldin-Meadow S. Early gesture predicts language delay in children with pre- and perinatal brain lesions. Under review. Child Development. 2008; 81 :528–39. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Senghas A. The development of Nicaraguan Sign Language via the language acquisition process. Proceedings of Boston University Child Language Development. 1995; 19 :543–52. [ Google Scholar ]
  • Senghas A. Intergenerational influence and ontogenetic development in the emergence of spatial grammar in Nicaraguan Sign Language. Cognitive Development. 2003; 18 :511–31. [ Google Scholar ]
  • Senghas A, Kita S, Ozyurek A. Children creating core properties of language: Evidence from an emerging Sign Language in Nicaragua. Science. 2004; 305 :1779–82. [ PubMed ] [ Google Scholar ]
  • Shatz M. On mechanisms of language acquisition: Can features of the communicative environment account for development? In: Wanner E, Gleitman L, editors. Language acquisition: The state of the art. Cambridge University Press; New York: 1982. pp. 102–27. [ Google Scholar ]
  • Singer MA, Goldin-Meadow S. Children learn when their teacher’s gestures and speech differ. Psychological Science. 2005; 16 :85–89. [ PubMed ] [ Google Scholar ]
  • Singleton JL, Morford JP, Goldin-Meadow S. Once is not enough: Standards of well-formedness in manual communication created over three different timespans. Language. 1993; 69 :683–715. [ Google Scholar ]
  • Streeck J. Gesturecraft: The manu-facture of meaning. John Benjamins; Amsterdam: 2009. [ Google Scholar ]
  • Sueyoshi A, Hardison DM. The role of gestures and facial cues in second language listening comprehension. Language Learning. 2005; 55 :661–99. [ Google Scholar ]
  • Supalla T. Serial verbs of motion in American Sign Language. In: Fischer S, editor. Issues in Sign Language Research. University of Chicago Press; Chicago, IL: 1990. [ Google Scholar ]
  • Valenzeno L, Alibali MW, Klatzky RL. Teachers’ gestures facilitate students’ learning: A lesson in symmetry. Contemporary Educational Psychology. 2003; 28 :187–204. [ Google Scholar ]
  • Wagner SM, Nusbaum H, Goldin-Meadow S. Probing the mental representation of gesture: Is handwaving spatial? Journal of Memory and Language. 2004; 50 :395–407. [ Google Scholar ]
  • Zinober B, Martlew M. Developmental changes in four types of gesture in relation to acts and vocalizations from 10 to 21 months. British Journal of Developmental Psychology. 1985; 3 :293–306. [ Google Scholar ]
  • Zukow-Goldring P. Sensitive caregivers foster the comprehension of speech: When gestures speak louder than words. Early Development and Parenting. 1996; 5 :195–211. [ Google Scholar ]
