Cognitive theory development as we know it: specificity, explanatory power, and the brain

In an effort to define more precisely what we currently know about early steps in the visual identification of complex words, we recently published a review of morphological effects in lexical decision, unmasked priming, and masked priming studies (Amenta and Crepaldi, 2012). The review aimed at identifying a set of well-established experimental effects that any theory in the field should be able to explain, so as to allow a more rigorous adjudication process to take place and the field to progress incrementally toward ever greater explanatory power (Grainger and Jacobs, 1996; Coltheart et al., 2001). We called this set of experimental effects “the target list” (Amenta and Crepaldi, 2012, p. 9). Shortly afterwards, Koester (2012) published a commentary that highlights a set of open issues concerning our paper, which we try to address here. The questions raised by Koester (2012) are all well motivated, and their answers strongly influence how the target list is going to be used in future research; for this reason, it is important to address Koester's questions in a timely manner and, in doing so, to specify more clearly why we believe that the target list is important for the field and how it should be used. Importantly, although we strongly advocated in our original paper for cognitive theories to become computational models, Koester's points apply more generally to any kind of cognitive theory, whether computational or descriptive; our replies will thus try to stand at that general level, which underscores the generality and importance of the issues highlighted by Koester (2012).

The first issue raised by Koester (2012) reads as follows, in his own words:

“Amenta and Crepaldi's review points toward relevant linguistic … and psycholinguistics variables … and their relations regarding visual word identification. …. The authors suggest that these findings provide a basis for the evaluation of competing theories and, in doing so, to contribute to future theory development; in their own words, to construct an “all-inclusive model of visual identification of complex words”. In light of the specificity of the insights, these broad suggestions leave the reader with the impression of a gap between insights and suggestions.” (p. 1, 2nd column, 2nd paragraph)

Koester (2012) seems to question whether focusing on very specific experimental effects can enhance the generality of our theories. We suspect that the exact definition of “generality” is the key point here. If a theory is general when it surpasses the boundaries of the field where it was developed (e.g., it is potentially insightful for spoken word identification although it was developed to explain reading data), then Koester is right that focusing on small-scale, specific effects would not help. But if generality is conceived as explanatory power, i.e., a theory is more general than another when it explains more data, then assessing theories on how many relevant experimental effects they are able to explain clearly encourages the development of general models. We acknowledge that it would be desirable to have general theories in the former sense. However, “cross-field” generality normally comes at some cost in terms of model under-specification, and current morphological theories lack detail in so many respects that this is probably a cost we cannot afford at the moment. Theories are useful in the first place because they generate testable predictions; the less specified they are, the less likely they will be to generate predictions of this kind.

The second point highlighted by Koester (2012) concerns the role of some variables that are well-established factors in reading, which we left unaddressed in our review

“such as surface frequency, word length, word class, abstractness, or cues to morpheme boundaries.” (p. 1, 3rd column, 1st paragraph)

Of course, Koester is right that these variables are very relevant in the existing literature on visual word identification; however, it is difficult to envisage a specific role for them in theories that focus on morphology. Apart from cues to morphemic boundaries—whose effect, however, has never been demonstrated unambiguously, i.e., we know of no study where these variables were manipulated independently of any other factor—these variables are not morphological in nature, and so whether or not any model of visual complex word identification will be able to account for them depends on aspects of the theory that have nothing to do with morphemes. Possibly, surface frequency might be relevant for the morphological aspects of a theory of visual word identification by virtue of its relationship with stem frequency. Indeed, stem and surface frequency have been shown to interact in complex word processing (e.g., Baayen et al., 2007). This issue was covered in Amenta and Crepaldi (2012, p. 2).

A third major issue raised by Koester (2012) concerns

“how the neural evidence is to be incorporated into a strictly cognitive model.” (p. 1, 3rd column, 2nd paragraph)

There are two levels, we believe, at which this issue needs to be addressed. In terms of assessing the explanatory adequacy of cognitive theories, i.e., deciding which experimental data any model should be tested against, there seems to be little role for brain data (e.g., fMRI and ERP). Of course, cognitive neuropsychology has proven decisive in informing the structure of cognitive models (and reading models in particular), and it has often provided evidence for theoretical claims in a way that is unrivalled by other disciplines for elegance and simplicity (e.g., Coltheart, 1982; Coslett and Saffran, 1989; Luzzatti et al., 2001). However, this evidence has always been behavioral in nature—essentially, response time and accuracy—because this is what maps onto the predictions that purely cognitive models can make. In fact, existing cognitive theories of how we identify printed complex words make no explicit statement about the brain structures that underlie the system or about how these structures work (e.g., Taft and Ardasinski, 2006; Gonnerman et al., 2007; Crepaldi et al., 2010; Baayen et al., 2011; Grainger and Ziegler, 2011). Thus, they make no predictions about brain data. This is true more generally of all existing computational models of reading (e.g., Coltheart et al., 2001; Norris, 2006; Davis, 2010): they have no way to model neural responses such as ERPs or the BOLD signal, and consequently they should not be evaluated on these grounds.

This brings us to the second, more general level at which the issue should be addressed: why is this the case? There seems to be no principled reason behind this fact. Indeed, one would just need some function linking mental computations (of whatever kind) to the activity of brain units (of whatever size, from single neurons to cortical areas) in order to produce quantitative predictions about neural responses on the basis of a cognitive model. The problem is precisely that this link function has proven extremely difficult to find. Typically, this difficulty has been attributed to the idea that the brain uses distributed representations, i.e., that even simple concepts/mental objects such as individual words or individual letters are represented through a pattern of activation over an indefinite number of brain units, i.e., neurons or small clusters of neurons (e.g., McClelland et al., 1986; Young and Yamane, 1993; O'Reilly, 1998; McClelland, 2001). Because we do not know the exact dynamics that govern these units, where they would be localized in the brain with respect to each other, and so on, it is virtually impossible to draw any straight and well-defined connection between mental units and neural units, such as would be required to generate clear predictions. Some studies have challenged the idea of distributed representations and have stood in defense of so-called grandmother cells (e.g., Quiroga et al., 2005; Bowers, 2009). This would point to an easy, one-to-one link function between mental and brain units; but then one needs to consider that (1) most grandmother-cell studies have also highlighted massive redundancy in single-cell coding, i.e., there might be many cells coding for, e.g., the word sofa (e.g., Waydo et al., 2006); (2) we have no idea where exactly to expect each relevant cell to be localized in the brain; and (3) widespread imaging techniques are currently far from recording the activity of single neurons or small clusters of neurons. Although there are signs that this latter problem might be overcome in the reasonably near future (e.g., Sahin et al., 2009), at least the former two points make clear that even hypothesizing a one-to-one mapping between mind and brain units would hardly be of any help in deriving testable predictions about neural data from (computational) models of cognition. Of course, a one-to-one mapping between cognitive and brain units could logically emerge at higher levels of complexity, i.e., between mental processes and cortical areas rather than between individual representations and single cells. However, experimental data indicate that this is not the case: there seems to be no single brain area that could be held responsible for one single mental operation, and even considering smaller sets of neurons, such as those tracked by cortical stimulation in awake patients, most brain units take part in different cognitive processes (e.g., Roux et al., 2012). These considerations all suggest not only that existing cognitive models of visual word identification take no stance as to their neural substrates, but also that taking such a stance would be far beyond our grasp, given what we currently know. It is important to stress that this is not even close to suggesting that brain data bear no relevance for cognitive theories. What we are saying is, more modestly, that neural effects should not be included in a list of to-be-explained facts because we do not know exactly how cognitive and brain units map onto each other, and thus we cannot derive exact predictions about brain data on the basis of purely cognitive models.
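The difficulty of specifying a link function can be made concrete with a toy formalization; the notation below is purely illustrative and ours, not drawn from any of the models cited above. If a cognitive model supplies activations a_i(t) over its mental units, predicting the response r_j(t) of a given brain unit requires something like

```latex
r_j(t) = f\Big(\sum_i w_{ij}\, a_i(t)\Big) + \varepsilon_j(t)
```

where the weights w_ij encode how mental units map onto brain units and f captures the units' (unknown) dynamics. A grandmother-cell scenario would make the mapping a (possibly redundant) one-to-one assignment; distributed coding leaves it many-to-many. In either case, for the reasons listed under points (1)–(3) above, both w_ij and f are essentially unconstrained by current data, so no exact prediction on r_j(t) follows.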

A final important point raised by Koester (2012) concerns the fact that our target list

“comprises aspects of experimental techniques (masking)” (p. 2, 1st column, 2nd paragraph)

which is questionable because

“Masking does not pertain to the phenomenon in question.” (p. 2, 1st column, 2nd paragraph)

Of course we agree with Koester (2012) that task-related aspects do not belong to the domain of morphology. However, they do make a difference for morphological effects. For example, corner is an effective prime for corn only when it is presented in a masked form (e.g., Rastle et al., 2000). Similarly, brother—when compared to brothel—makes it easier to identify broth in lexical decision (Rastle et al., 2004), but not in a same-different task (Duñabeitia et al., 2011) or in a semantic task (Marelli et al., in press). If a theory refuses to take a position on how readers carry out these different tasks, what should it account for in these cases? Should it explain why corner facilitates corn, or rather why corner does not facilitate corn? Clearly, experimental effects only make sense in context, i.e., in specific tasks, because they always emerge in specific tasks. This is why we included aspects of experimental techniques in our target list, limiting ourselves, of course, to those aspects that modulate morphological effects.

In conclusion, we thank Koester (2012) for raising these issues, thus giving us the opportunity to clarify our position where perhaps we were not clear enough in our original paper. We hope that the points illustrated in this article will help readers to better understand the sense of our proposal of a target list, and to use this list properly so as to advance our knowledge of how human readers identify printed complex words in a more cooperative and incremental fashion.

This research was supported by funding from the Italian Ministry of Education, University, and Research to Davide Crepaldi.

  • Amenta S., Crepaldi D. (2012). Morphological processing as we know it: an analytical review of morphological effects in visual word identification. Front. Psychol. 3:232. doi: 10.3389/fpsyg.2012.00232
  • Baayen R., Milin P., Filipović Djurdjević D., Hendrix P., Marelli M. (2011). An amorphous model for morphological processing in visual comprehension based on naive discriminative learning. Psychol. Rev. 118, 438–481. doi: 10.1037/a0023851
  • Baayen R. H., Wurm L. H., Aycock J. (2007). Lexical dynamics for low-frequency complex words: a regression study across tasks and modalities. Ment. Lexicon 2, 419–463.
  • Bowers J. (2009). On the biological plausibility of grandmother cells: implications for neural network theories in psychology and neuroscience. Psychol. Rev. 116, 220–251. doi: 10.1037/a0014462
  • Coltheart M. (1982). The psycholinguistic analysis of acquired dyslexias: some illustrations. Philos. Trans. R. Soc. Lond. B Biol. Sci. 298, 151–164. doi: 10.1098/rstb.1982.0078
  • Coltheart M., Rastle K., Perry C., Langdon R., Ziegler J. (2001). DRC: a dual route cascaded model of visual word recognition and reading aloud. Psychol. Rev. 108, 204–256.
  • Coslett B. H., Saffran E. (1989). Preserved object recognition and reading comprehension in optic aphasia. Brain 112, 1091–1110. doi: 10.1093/brain/112.4.1091
  • Crepaldi D., Rastle K., Coltheart M., Nickels L. (2010). ‘Fell’ primes ‘fall’, but does ‘bell’ prime ‘ball’? Masked priming with irregularly-inflected primes. J. Mem. Lang. 63, 83–99.
  • Davis C. (2010). The spatial coding model of visual word identification. Psychol. Rev. 117, 713–758. doi: 10.1037/a0019738
  • Duñabeitia J., Kinoshita S., Carreiras M., Norris D. (2011). Is morpho-orthographic decomposition purely orthographic? Evidence from masked priming in the same-different task. Lang. Cogn. Process. 26, 509–529.
  • Gonnerman L. M., Seidenberg M. S., Andersen E. S. (2007). Graded semantic and phonological similarity effects in priming: evidence for a distributed connectionist approach to morphology. J. Exp. Psychol. Gen. 136, 323–345. doi: 10.1037/0096-3445.136.2.323
  • Grainger J., Jacobs A. (1996). Orthographic processing in visual word recognition: a multiple read-out model. Psychol. Rev. 103, 518–565.
  • Grainger J., Ziegler J. (2011). A dual-route approach to orthographic processing. Front. Psychol. 2:54. doi: 10.3389/fpsyg.2011.00054
  • Koester D. (2012). Future morphology? Summary of visual word identification effects draws attention to necessary efforts in understanding morphological processing. Front. Psychol. 3:395. doi: 10.3389/fpsyg.2012.00395
  • Luzzatti C., Mondini S., Semenza C. (2001). Lexical representation and processing of morphologically complex words: evidence from the reading performance of an Italian agrammatic patient. Brain Lang. 79, 345–359. doi: 10.1006/brln.2001.2475
  • Marelli M., Amenta S., Morone E. A., Crepaldi D. (in press). Meaning is in the beholder's eye: morpho-semantic effects in masked priming. Psychon. Bull. Rev. [Epub ahead of print]. doi: 10.3758/s13423-012-0363-2
  • McClelland J. L. (2001). Cognitive neuroscience, in International Encyclopedia of the Social and Behavioral Sciences, eds Smelser N. J., Baltes P. B. (Oxford: Pergamon), 2133–2139.
  • McClelland J. L., Rumelhart D. E., and the PDP Research Group (1986). Parallel Distributed Processing: Psychological and Biological Models, Vol. 2. Cambridge, MA: MIT Press.
  • Norris D. (2006). The Bayesian reader: explaining word recognition as an optimal Bayesian decision process. Psychol. Rev. 113, 327–357. doi: 10.1037/0033-295X.113.2.327
  • O'Reilly R. C. (1998). Six principles for biologically based computational models of cortical cognition. Trends Cogn. Sci. 2, 455–462. doi: 10.1016/S1364-6613(98)01241-8
  • Quiroga R. Q., Reddy L., Kreiman G., Koch C., Fried I. (2005). Invariant visual representation by single neurons in the human brain. Nature 435, 1103. doi: 10.1038/nature03687
  • Rastle K., Davis M., Marslen-Wilson W., Tyler L. (2000). Morphological and semantic effects in visual word recognition: a time-course study. Lang. Cogn. Process. 15, 507–537.
  • Rastle K., Davis M. H., New B. (2004). The broth in my brother's brothel: morpho-orthographic segmentation in visual word recognition. Psychon. Bull. Rev. 11, 1090–1098.
  • Roux F.-E., Durand J.-B., Jucla M., Réhault E., Reddy M., Démonet J.-F. (2012). Segregation of lexical and sub-lexical reading processes in the left perisylvian cortex. PLoS ONE 7:e50665. doi: 10.1371/journal.pone.0050665
  • Sahin N., Pinker S., Cash S., Schomer D., Halgren E. (2009). Sequential processing of lexical, grammatical, and phonological information within Broca's area. Science 326, 445–449. doi: 10.1126/science.1174481
  • Taft M., Ardasinski S. (2006). Obligatory decomposition in reading prefixed words. Ment. Lexicon 1, 183–199.
  • Waydo S., Kraskov A., Quiroga R. Q., Fried I., Koch C. (2006). Sparse representation in the human medial temporal lobe. J. Neurosci. 26, 10232–10234. doi: 10.1523/JNEUROSCI.2101-06.2006
  • Young M. P., Yamane S. (1993). An analysis at the population level of the processing of faces in the inferotemporal cortex, in Brain Mechanisms of Perception and Memory: From Neuron to Behavior, eds Ono T., Squire L. R., Raichle M. E., Perrett D. I., Fukuda M. (Oxford: Oxford University Press), 47–70.
Tutorial Review | Open access | Published: 24 January 2018

Teaching the science of learning

Yana Weinstein, Christopher R. Madan & Megan A. Sumeracki

Cognitive Research: Principles and Implications, volume 3, Article number: 2 (2018)


The science of learning has made a considerable contribution to our understanding of effective teaching and learning strategies. However, few instructors outside of the field are privy to this research. In this tutorial review, we focus on six specific cognitive strategies that have received robust support from decades of research: spaced practice, interleaving, retrieval practice, elaboration, concrete examples, and dual coding. We describe the basic research behind each strategy and relevant applied research, present examples of existing and suggested implementation, and make recommendations for further research that would broaden the reach of these strategies.

Significance

Education does not currently adhere to the medical model of evidence-based practice (Roediger, 2013 ). However, over the past few decades, our field has made significant advances in applying cognitive processes to education. From this work, specific recommendations can be made for students to maximize their learning efficiency (Dunlosky, Rawson, Marsh, Nathan, & Willingham, 2013 ; Roediger, Finn, & Weinstein, 2012 ). In particular, a review published 10 years ago identified a limited number of study techniques that have received solid evidence from multiple replications testing their effectiveness in and out of the classroom (Pashler et al., 2007 ). A recent textbook analysis (Pomerance, Greenberg, & Walsh, 2016 ) took the six key learning strategies from this report by Pashler and colleagues, and found that very few teacher-training textbooks cover any of these six principles – and none cover them all, suggesting that these strategies are not systematically making their way into the classroom. This is the case in spite of multiple recent academic (e.g., Dunlosky et al., 2013 ) and general audience (e.g., Dunlosky, 2013 ) publications about these strategies. In this tutorial review, we present the basic science behind each of these six key principles, along with more recent research on their effectiveness in live classrooms, and suggest ideas for pedagogical implementation. The target audience of this review is (a) educators who might be interested in integrating the strategies into their teaching practice, (b) science of learning researchers who are looking for open questions to help determine future research priorities, and (c) researchers in other subfields who are interested in the ways that principles from cognitive psychology have been applied to education.

While the typical teacher may not be exposed to this research during teacher training, a small cohort of teachers intensely interested in cognitive psychology has recently emerged. These teachers are mainly based in the UK, and, anecdotally (e.g., Dennis (2016), personal communication), appear to have taken an interest in the science of learning after reading Make it Stick (Brown, Roediger, & McDaniel, 2014 ; see Clark ( 2016 ) for an enthusiastic review of this book on a teacher’s blog, and “Learning Scientists” ( 2016c ) for a collection). In addition, a grassroots teacher movement has led to the creation of “researchED” – a series of conferences on evidence-based education (researchED, 2013 ). The teachers who form part of this network frequently discuss cognitive psychology techniques and their applications to education on social media (mainly Twitter; e.g., Fordham, 2016 ; Penfound, 2016 ) and on their blogs, such as Evidence Into Practice ( https://evidenceintopractice.wordpress.com/ ), My Learning Journey ( http://reflectionsofmyteaching.blogspot.com/ ), and The Effortful Educator ( https://theeffortfuleducator.com/ ). In general, the teachers who write about these issues pay careful attention to the relevant literature, often citing some of the work described in this review.

These informal writings, while allowing teachers to explore their approach to teaching practice (Luehmann, 2008 ), give us a unique window into the application of the science of learning to the classroom. By examining these blogs, we can not only observe how basic cognitive research is being applied in the classroom by teachers who are reading it, but also how it is being misapplied, and what questions teachers may be posing that have gone unaddressed in the scientific literature. Throughout this review, we illustrate each strategy with examples of how it can be implemented (see Table  1 and Figs.  1 , 2 , 3 , 4 , 5 , 6 and 7 ), as well as with relevant teacher blog posts that reflect on its application, and draw upon this work to pin-point fruitful avenues for further basic and applied research.

Fig. 1. Spaced practice schedule for one week. This schedule is designed to represent a typical timetable of a high-school student. The schedule includes four one-hour study sessions, one longer study session on the weekend, and one rest day. Notice that each subject is studied one day after it is covered in school, to create spacing between classes and study sessions. Copyright note: this image was produced by the authors.

Fig. 2. (a) Blocked practice and interleaved practice with fraction problems. In the blocked version, students answer four multiplication problems consecutively. In the interleaved version, students answer a multiplication problem followed by a division problem and then an addition problem, before returning to multiplication. For an experiment with a similar setup, see Patel et al. (2016). Copyright note: this image was produced by the authors. (b) Illustration of interleaving and spacing. Each color represents a different homework topic. Interleaving involves alternating between topics, rather than blocking. Spacing involves distributing practice over time, rather than massing. Interleaving inherently involves spacing, as other tasks naturally “fill” the spaces between interleaved sessions. Copyright note: this image was produced by the authors, adapted from Rohrer (2012).

Fig. 3. Concept map illustrating the process and resulting benefits of retrieval practice. Retrieval practice involves the process of withdrawing learned information from long-term memory into working memory, which requires effort. This produces direct benefits via the consolidation of learned information, making it easier to remember later and causing improvements in memory, transfer, and inferences. Retrieval practice also produces indirect benefits of feedback to students and teachers, which in turn can lead to more effective study and teaching practices, with a focus on information that was not accurately retrieved. Copyright note: this figure originally appeared in a blog post by the first and third authors (http://www.learningscientists.org/blog/2016/4/1-1).

Fig. 4. Illustration of “how” and “why” questions (i.e., elaborative interrogation questions) students might ask while studying the physics of flight. To help figure out how physics explains flight, students might ask themselves the following questions: “How does a plane take off?”; “Why does a plane need an engine?”; “How does the upward force (lift) work?”; “Why do the wings have a curved upper surface and a flat lower surface?”; and “Why is there a downwash behind the wings?”. Copyright note: the image of the plane was downloaded from Pixabay.com and is free to use, modify, and share.

Fig. 5. Three examples of physics problems that would be categorized differently by novices and experts. The problems in (a) and (c) look similar on the surface, so novices would group them together into one category. Experts, however, will recognize that the problems in (b) and (c) both relate to the principle of energy conservation, and so will group those two problems into one category instead. Copyright note: the figure was produced by the authors, based on figures in Chi et al. (1981).

Fig. 6. Example of how to enhance learning through use of a visual example. Students might view this visual representation of neural communications with the words provided, or they could draw a similar visual representation themselves. Copyright note: this figure was produced by the authors.

Fig. 7. Example of word properties associated with visual, verbal, and motor coding for the word “SPOON”. A word can evoke multiple types of representation (“codes” in dual coding theory). Viewing a word will automatically evoke verbal representations related to its component letters and phonemes. Words representing objects (i.e., concrete nouns) will also evoke visual representations, including information about similar objects, component parts of the object, and information about where the object is typically found. In some cases, additional codes can also be evoked, such as motor-related properties of the represented object, where contextual information related to the object's functional intention and manipulation action may also be processed automatically when reading the word. Copyright note: this figure was produced by the authors and is based on Aylwin (1990, Fig. 2) and Madan and Singhal (2012a, Fig. 3).

Spaced practice

The benefits of spaced (or distributed) practice to learning are arguably one of the strongest contributions that cognitive psychology has made to education (Kang, 2016). The effect is simple: the same amount of repeated studying of the same information spaced out over time will lead to greater retention of that information in the long run, compared with repeated studying of the same information for the same amount of time in one study session. The benefits of distributed practice were first empirically demonstrated in the 19th century. As part of his extensive investigation into his own memory, Ebbinghaus (1885/1913) found that when he spaced out repetitions across 3 days, he could almost halve the number of repetitions necessary to relearn a series of 12 syllables in one day (Chapter 8). He thus concluded that “a suitable distribution of [repetitions] over a space of time is decidedly more advantageous than the massing of them at a single time” (Section 34). For those who want to read more about Ebbinghaus's contribution to memory research, Roediger (1985) provides an excellent summary.

Since then, hundreds of studies have examined spacing effects both in the laboratory and in the classroom (Kang, 2016 ). Spaced practice appears to be particularly useful at large retention intervals: in the meta-analysis by Cepeda, Pashler, Vul, Wixted, and Rohrer ( 2006 ), all studies with a retention interval longer than a month showed a clear benefit of distributed practice. The “new theory of disuse” (Bjork & Bjork, 1992 ) provides a helpful mechanistic explanation for the benefits of spacing to learning. This theory posits that memories have both retrieval strength and storage strength. Whereas retrieval strength is thought to measure the ease with which a memory can be recalled at a given moment, storage strength (which cannot be measured directly) represents the extent to which a memory is truly embedded in the mind. When studying is taking place, both retrieval strength and storage strength receive a boost. However, the extent to which storage strength is boosted depends upon retrieval strength, and the relationship is negative: the greater the current retrieval strength, the smaller the gains in storage strength. Thus, the information learned through “cramming” will be rapidly forgotten due to high retrieval strength and low storage strength (Bjork & Bjork, 2011 ), whereas spacing out learning increases storage strength by allowing retrieval strength to wane before restudy.
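The negative relationship at the heart of this account lends itself to a toy simulation. A minimal sketch follows; the theory itself is verbal, so the exponential decay and the gain rule below are our own illustrative assumptions rather than equations from Bjork and Bjork (1992), and the function name and parameters are hypothetical.

```python
import math

def final_storage_strength(study_days, alpha=1.0):
    """Toy model: storage strength after studying on the given days.

    Assumptions (ours, for illustration only):
    - retrieval strength decays exponentially with time since the last
      session, more slowly when storage strength is high;
    - each session boosts storage strength in proportion to
      (1 - retrieval strength), so gains shrink as retrieval strength grows.
    """
    storage, last = 0.0, None
    for day in study_days:
        if last is None:
            retrieval = 0.0  # material never studied before
        else:
            retrieval = math.exp(-(day - last) / (1.0 + storage))
        storage += alpha * (1.0 - retrieval)  # the negative relationship
        last = day
    return storage

print(round(final_storage_strength([0, 1, 2]), 2))   # massed ("cramming"): ~1.74
print(round(final_storage_strength([0, 7, 14]), 2))  # spaced:              ~2.88
```

Under these assumptions, restudying while retrieval strength is still high (cramming) buys little storage strength, whereas letting retrieval strength wane before restudy (spacing) yields larger gains — the qualitative pattern the theory predicts.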

Teachers can introduce spacing to their students in two broad ways. One involves creating opportunities to revisit information throughout the semester, or even in future semesters. This does involve some up-front planning, and can be difficult to achieve given time constraints and the need to cover a set curriculum. However, spacing can be achieved at no great cost if teachers set aside a few minutes per class to review information from previous lessons. The second method involves putting the onus to space on the students themselves. Of course, this would work best with older students – high school and above. Because spacing requires advance planning, it is crucial that the teacher helps students plan their studying. For example, teachers could suggest that students schedule study sessions on days that alternate with the days on which a particular class meets (e.g., schedule review sessions for Tuesday and Thursday when the class meets Monday and Wednesday; see Fig. 1 for a more complete weekly spaced practice schedule, and the sketch below for a minimal implementation of this suggestion). It is important to note that the spacing effect refers to information that is repeated multiple times, rather than to the idea of studying different material in one long session versus spaced out in small study sessions over time. However, for teachers and particularly for students planning a study schedule, the subtle difference between the two situations (spacing out restudy opportunities, versus spacing out studying of different information over time) may be lost. Future research should address the effects of spacing out studying of different information over time, whether the same considerations apply in this situation as compared to spacing out restudy opportunities, and how important it is for teachers and students to understand the difference between these two types of spaced practice.
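As a minimal sketch of the alternating-days suggestion above (the day labels, the rest-day rule, and the function name are our own illustrative assumptions, mirroring the logic of Fig. 1):

```python
DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

def review_days(class_days, rest_day="Sun"):
    """Suggest a review session on the day after each class meeting,
    skipping the rest day and days on which the class itself meets."""
    suggestions = []
    for day in class_days:
        next_day = DAYS[(DAYS.index(day) + 1) % 7]
        if next_day != rest_day and next_day not in class_days:
            suggestions.append(next_day)
    return suggestions

print(review_days(["Mon", "Wed"]))  # -> ['Tue', 'Thu']
```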

It is important to note that students may feel less confident when they space their learning (Bjork, 1999 ) than when they cram. This is because spaced learning is harder – but it is this “desirable difficulty” that helps learning in the long term (Bjork, 1994 ). Students tend to cram for exams rather than space out their learning. One explanation for this is that cramming does “work”, if the goal is only to pass an exam. In order to change students’ minds about how they schedule their studying, it might be important to emphasize the value of retaining information beyond a final exam in one course.

Ideas for how to apply spaced practice in teaching have appeared in numerous teacher blogs (e.g., Fawcett, 2013 ; Kraft, 2015 ; Picciotto, 2009 ). In England in particular, as of 2013, high-school students need to be able to remember content from up to 3 years back on cumulative exams (General Certificate of Secondary Education (GCSE) and A-level exams; see CIFE, 2012 ). A-levels in particular determine what subject students study in university and which programs they are accepted into, and thus shape the path of their academic career. A common approach for dealing with these exams has been to include a “revision” (i.e., studying or cramming) period of a few weeks leading up to the high-stakes cumulative exams. Now, teachers who follow cognitive psychology are advocating a shift of priorities to spacing learning over time across the 3 years, rather than teaching a topic once and then intensely reviewing it weeks before the exam (Cox, 2016a ; Wood, 2017 ). For example, some teachers have suggested using homework assignments as an opportunity for spaced practice by giving students homework on previous topics (Rose, 2014 ). However, questions remain, such as whether spaced practice can ever be effective enough to completely alleviate the need or utility of a cramming period (Cox, 2016b ), and how one can possibly figure out the optimal lag for spacing (Benney, 2016 ; Firth, 2016 ).

There has been considerable research on the question of optimal lag, and much of it is quite complex: the ideal schedule involves sessions that are neither too close together (i.e., cramming) nor too far apart. In a large-scale study, Cepeda, Vul, Rohrer, Wixted, and Pashler (2008) examined the effects of the gap between study sessions and the interval between study and test across long periods, and found that the optimal gap between study sessions was contingent on the retention interval. Thus, it is not clear how teachers can apply the complex findings on lag to their own classrooms.

A useful avenue of research would be to simplify the research paradigms that are used to study optimal lag, with the goal of creating a flexible, spaced-practice framework that teachers could apply and tailor to their own teaching needs. For example, an Excel macro spreadsheet was recently produced to help teachers plan for lagged lessons (Weinstein-Jones & Weinstein, 2017 ; see Weinstein & Weinstein-Jones ( 2017 ) for a description of the algorithm used in the spreadsheet), and has been used by teachers to plan their lessons (Penfound, 2017 ). However, one teacher who found this tool helpful also wondered whether the more sophisticated plan was any better than his own method of manually selecting poorly understood material from previous classes for later review (Lovell, 2017 ). This direction is being actively explored within personalized online learning environments (Kornell & Finn, 2016 ; Lindsey, Shroyer, Pashler, & Mozer, 2014 ), but teachers in physical classrooms might need less technologically-driven solutions to teach cohorts of students.
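To make the idea concrete, here is a hedged sketch of the kind of computation such a planning tool might perform. This is a generic expanding-interval schedule; the gap parameters are arbitrary assumptions of ours, and it is not the algorithm used in the Weinstein-Jones and Weinstein (2017) spreadsheet.

```python
from datetime import date, timedelta

def lagged_reviews(first_taught, n_reviews=4, first_gap=2, factor=2):
    """Plan review dates with expanding gaps (2, 4, 8, 16 days by default)."""
    review_date, gap, schedule = first_taught, first_gap, []
    for _ in range(n_reviews):
        review_date += timedelta(days=gap)  # next review after the current gap
        schedule.append(review_date)
        gap *= factor                       # widen the gap for later reviews
    return schedule

for d in lagged_reviews(date(2018, 1, 8)):
    print(d)  # 2018-01-10, 2018-01-14, 2018-01-22, 2018-02-07
```

A teacher could tailor first_gap and factor to the retention interval that matters (e.g., the date of a cumulative exam), which is exactly the kind of flexibility the research above suggests is needed.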

It seems teachers would greatly appreciate a set of guidelines for how to implement spacing in the curriculum in the most effective, but also the most efficient manner. While the cognitive field has made great advances in terms of understanding the mechanisms behind spacing, what teachers need more of are concrete evidence-based tools and guidelines for direct implementation in the classroom. These could include more sophisticated and experimentally tested versions of the software described above (Weinstein-Jones & Weinstein, 2017 ), or adaptable templates of spaced curricula. Moreover, researchers need to evaluate the effectiveness of these tools in a real classroom environment, over a semester or academic year, in order to give pedagogically relevant evidence-based recommendations to teachers.

Interleaving

Another scheduling technique that has been shown to increase learning is interleaving. Interleaving occurs when different ideas or problem types are tackled in a sequence, as opposed to the more common method of attempting multiple versions of the same problem in a given study session (known as blocking). Interleaving as a principle can be applied in many different ways. One such way involves interleaving different types of problems during learning, which is particularly applicable to subjects such as math and physics (see Fig. 2a for an example with fractions, based on a study by Patel, Liu, & Koedinger, 2016). For example, in a study with college students, Rohrer and Taylor (2007) found that shuffling math problems that involved calculating the volume of different shapes resulted in better test performance 1 week later than when students answered multiple problems about the same type of shape in a row. This pattern of results has also been replicated with younger students, for example 7th-grade students learning to solve graph and slope problems (Rohrer, Dedrick, & Stershic, 2015). The proposed explanation for the benefit of interleaving is that switching between different problem types allows students to acquire the ability to choose the right method for solving different types of problems, rather than learning only the method itself without learning when to apply it.

Do the benefits of interleaving extend beyond problem solving? The answer appears to be yes. Interleaving can be helpful in other situations that require discrimination, such as inductive learning. Kornell and Bjork ( 2008 ) examined the effects of interleaving in a task that might be pertinent to a student of the history of art: the ability to match paintings to their respective painters. Students who studied different painters’ paintings interleaved at study were more successful on a later identification test than were participants who studied the paintings blocked by painter. Birnbaum, Kornell, Bjork, and Bjork ( 2013 ) proposed the discriminative-contrast hypothesis to explain that interleaving enhances learning by allowing the comparison between exemplars of different categories. They found support for this hypothesis in a set of experiments with bird categorization: participants benefited from interleaving and also from spacing, but not when the spacing interrupted side-by-side comparisons of birds from different categories.

Another type of interleaving involves the interleaving of study and test opportunities. This type of interleaving has been applied, once again, to problem solving, whereby students alternate between attempting a problem and viewing a worked example (Trafton & Reiser, 1993 ); this pattern appears to be superior to answering a string of problems in a row, at least with respect to the amount of time it takes to achieve mastery of a procedure (Corbett, Reed, Hoffmann, MacLaren, & Wagner, 2010 ). The benefits of interleaving study and test opportunities – rather than blocking study followed by attempting to answer problems or questions – might arise due to a process known as “test-potentiated learning”. That is, a study opportunity that immediately follows a retrieval attempt may be more fruitful than when that same studying was not preceded by retrieval (Arnold & McDermott, 2013 ).

For problem-based subjects, the interleaving technique is straightforward: simply mix questions on homework and quizzes with previous materials (which takes care of spacing as well); for languages, mix vocabulary themes rather than blocking by theme (Thomson & Mehring, 2016 ). But interleaving as an educational strategy ought to be presented to teachers with some caveats. Research has focused on interleaving material that is somewhat related (e.g., solving different mathematical equations, Rohrer et al., 2015 ), whereas students sometimes ask whether they should interleave material from different subjects – a practice that has not received empirical support (Hausman & Kornell, 2014 ). When advising students how to study independently, teachers should thus proceed with caution. Since it is easy for younger students to confuse this type of unhelpful interleaving with the more helpful interleaving of related information, it may be best for teachers of younger grades to create opportunities for interleaving in homework and quiz assignments rather than putting the onus on the students themselves to make use of the technique. Technology can be very helpful here, with apps such as Quizlet, Memrise, Anki, Synap, Quiz Champ, and many others (see also “Learning Scientists”, 2017 ) that not only allow instructor-created quizzes to be taken by students, but also provide built-in interleaving algorithms so that the burden does not fall on the teacher or the student to carefully plan which items are interleaved when.
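For illustration, here is a greedy sketch of the kind of interleaving algorithm such an app might use (our own heuristic, not the scheduler of any app named above): order practice items so that consecutive problems come from different topics whenever possible.

```python
import random

def interleave(problems_by_topic, seed=0):
    """Greedily order problems, avoiding back-to-back items from one topic."""
    rng = random.Random(seed)
    pools = {topic: list(items) for topic, items in problems_by_topic.items()}
    sequence, last_topic = [], None
    while any(pools.values()):
        # Prefer topics other than the one just used; fall back only if forced.
        candidates = [t for t in pools if pools[t] and t != last_topic]
        if not candidates:
            candidates = [t for t in pools if pools[t]]
        # Pick the topic with the most items left (random tie-breaking),
        # which keeps the mix even across the whole sequence.
        topic = max(candidates, key=lambda t: (len(pools[t]), rng.random()))
        sequence.append(pools[topic].pop())
        last_topic = topic
    return sequence

print(interleave({"multiplication": ["m1", "m2"],
                  "division": ["d1", "d2"],
                  "addition": ["a1", "a2"]}))
# e.g. -> ['m2', 'd2', 'a2', 'm1', 'd1', 'a1'] (no topic repeats back-to-back)
```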

An important point to consider is that in educational practice, the distinction between spacing and interleaving can be difficult to delineate. The gap between the scientific and classroom definitions of interleaving is demonstrated by teachers’ own writings about this technique. When they write about interleaving, teachers often extend the term to connote a curriculum that involves returning to topics multiple times throughout the year (e.g., Kirby, 2014 ; see “Learning Scientists” ( 2016a ) for a collection of similar blog posts by several other teachers). The “interleaving” of topics throughout the curriculum produces an effect that is more akin to what cognitive psychologists call “spacing” (see Fig.  2 b for a visual representation of the difference between interleaving and spacing). However, cognitive psychologists have not examined the effects of structuring the curriculum in this way, and open questions remain: does repeatedly circling back to previous topics throughout the semester interrupt the learning of new information? What are some effective techniques for interleaving old and new information within one class? And how does one determine the balance between old and new information?

Retrieval practice

While tests are most often used in educational settings for assessment, a lesser-known benefit of tests is that they actually improve memory of the tested information. If we think of our memories as libraries of information, then it may seem surprising that retrieval (which happens when we take a test) improves memory; however, we know from a century of research that retrieving knowledge actually strengthens it (see Karpicke, Lehman, & Aue, 2014 ). Testing was shown to strengthen memory as early as 100 years ago (Gates, 1917 ), and there has been a surge of research in the last decade on the mnemonic benefits of testing, or retrieval practice . Most of the research on the effectiveness of retrieval practice has been done with college students (see Roediger & Karpicke, 2006 ; Roediger, Putnam, & Smith, 2011 ), but retrieval-based learning has been shown to be effective at producing learning for a wide range of ages, including preschoolers (Fritz, Morris, Nolan, & Singleton, 2007 ), elementary-aged children (e.g., Karpicke, Blunt, & Smith, 2016 ; Karpicke, Blunt, Smith, & Karpicke, 2014 ; Lipko-Speed, Dunlosky, & Rawson, 2014 ; Marsh, Fazio, & Goswick, 2012 ; Ritchie, Della Sala, & McIntosh, 2013 ), middle-school students (e.g., McDaniel, Thomas, Agarwal, McDermott, & Roediger, 2013 ; McDermott, Agarwal, D’Antonio, Roediger, & McDaniel, 2014 ), and high-school students (e.g., McDermott et al., 2014 ). In addition, the effectiveness of retrieval-based learning has been extended beyond simple testing to other activities in which retrieval practice can be integrated, such as concept mapping (Blunt & Karpicke, 2014 ; Karpicke, Blunt, et al., 2014 ; Ritchie et al., 2013 ).

A debate is currently ongoing as to the effectiveness of retrieval practice for more complex materials (Karpicke & Aue, 2015; Roelle & Berthold, 2017; Van Gog & Sweller, 2015). Practicing retrieval has been shown to improve the application of knowledge to new situations (e.g., Butler, 2010; Dirkx, Kester, & Kirschner, 2014; McDaniel et al., 2013; Smith, Blunt, Whiffen, & Karpicke, 2016; but see Tran, Rohrer, and Pashler (2015) and Wooldridge, Bugg, McDaniel, and Liu (2014) for retrieval practice studies that showed limited or no increased transfer compared to restudy). Retrieval practice effects on higher-order learning may be more sensitive than fact learning to encoding factors, such as the way material is presented during study (Eglington & Kang, 2016). In addition, retrieval practice may be more beneficial for higher-order learning if it includes more scaffolding (Fiechter & Benjamin, 2017; but see Smith, Blunt, et al., 2016) and targeted practice with application questions (Son & Rivas, 2016).

How does retrieval practice help memory? Figure  3 illustrates both the direct and indirect benefits of retrieval practice identified by the literature. The act of retrieval itself is thought to strengthen memory (Karpicke, Blunt, et al., 2014 ; Roediger & Karpicke, 2006 ; Smith, Roediger, & Karpicke, 2013 ). For example, Smith et al. ( 2013 ) showed that if students brought information to mind without actually producing it (covert retrieval), they remembered the information just as well as if they overtly produced the retrieved information (overt retrieval). Importantly, both overt and covert retrieval practice improved memory over control groups without retrieval practice, even when feedback was not provided. The fact that bringing information to mind in the absence of feedback or restudy opportunities improves memory leads researchers to conclude that it is the act of retrieval – thinking back to bring information to mind – that improves memory of that information.

The benefit of retrieval practice depends to a certain extent on successful retrieval (see Karpicke, Lehman, et al., 2014 ). For example, in Experiment 4 of Smith et al. ( 2013 ), students successfully retrieved 72% of the information during retrieval practice. Of course, retrieving 72% of the information was compared to a restudy control group, during which students were re-exposed to 100% of the information, creating a bias in favor of the restudy condition. Yet retrieval led to superior memory later compared to the restudy control. However, if retrieval success is extremely low, then it is unlikely to improve memory (e.g., Karpicke, Blunt, et al., 2014 ), particularly in the absence of feedback. On the other hand, if retrieval-based learning situations are constructed in such a way that ensures high levels of success, the act of bringing the information to mind may be undermined, thus making it less beneficial. For example, if a student reads a sentence and then immediately covers the sentence and recites it out loud, they are likely not retrieving the information but rather just keeping the information in their working memory long enough to recite it again (see Smith, Blunt, et al., 2016 for a discussion of this point). Thus, it is important to balance success of retrieval with overall difficulty in retrieving the information (Smith & Karpicke, 2014 ; Weinstein, Nunes, & Karpicke, 2016 ). If initial retrieval success is low, then feedback can help improve the overall benefit of practicing retrieval (Kang, McDermott, & Roediger, 2007 ; Smith & Karpicke, 2014 ). Kornell, Klein, and Rawson ( 2015 ), however, found that it was the retrieval attempt and not the correct production of information that produced the retrieval practice benefit – as long as the correct answer was provided after an unsuccessful attempt, the benefit was the same as for a successful retrieval attempt in this set of studies. From a practical perspective, it would be helpful for teachers to know when retrieval attempts in the absence of success are helpful, and when they are not. There may also be additional reasons beyond retrieval benefits that would push teachers towards retrieval practice activities that produce some success amongst students; for example, teachers may hesitate to give students retrieval practice exercises that are too difficult, as this may negatively affect self-efficacy and confidence.

In addition to the fact that bringing information to mind directly improves memory for that information, engaging in retrieval practice can produce indirect benefits as well (see Roediger et al., 2011 ). For example, research by Weinstein, Gilmore, Szpunar, and McDermott ( 2014 ) demonstrated that when students expected to be tested, the increased test expectancy led to better-quality encoding of new information. Frequent testing can also serve to decrease mind-wandering – that is, thoughts that are unrelated to the material that students are supposed to be studying (Szpunar, Khan, & Schacter, 2013 ).

Practicing retrieval is a powerful way to improve meaningful learning of information, and it is relatively easy to implement in the classroom. For example, requiring students to practice retrieval can be as simple as asking students to put their class materials away and try to write out everything they know about a topic. Retrieval-based learning strategies are also flexible. Instructors can give students practice tests (e.g., short-answer or multiple-choice, see Smith & Karpicke, 2014 ), provide open-ended prompts for the students to recall information (e.g., Smith, Blunt, et al., 2016 ) or ask their students to create concept maps from memory (e.g., Blunt & Karpicke, 2014 ). In one study, Weinstein et al. ( 2016 ) looked at the effectiveness of inserting simple short-answer questions into online learning modules to see whether they improved student performance. Weinstein and colleagues also manipulated the placement of the questions. For some students, the questions were interspersed throughout the module, and for other students the questions were all presented at the end of the module. Initial success on the short-answer questions was higher when the questions were interspersed throughout the module. However, on a later test of learning from that module, the original placement of the questions in the module did not matter for performance. As with spaced practice, where the optimal gap between study sessions is contingent on the retention interval, the optimum difficulty and level of success during retrieval practice may also depend on the retention interval. Both groups of students who answered questions performed better on the delayed test compared to a control group without question opportunities during the module. Thus, the important thing is for instructors to provide opportunities for retrieval practice during learning. Based on previous research, any activity that promotes the successful retrieval of information should improve learning.

Retrieval practice has received a lot of attention in teacher blogs (see “Learning Scientists” ( 2016b ) for a collection). A common theme seems to be an emphasis on low-stakes (Young, 2016 ) and even no-stakes (Cox, 2015 ) testing, the goal of which is to increase learning rather than assess performance. In fact, one well-known charter school in the UK has an official homework policy grounded in retrieval practice: students are to test themselves on subject knowledge for 30 minutes every day in lieu of standard homework (Michaela Community School, 2014 ). The utility of homework, particularly for younger children, is often a hotly debated topic outside of academia (e.g., Shumaker, 2016 ; but see Jones ( 2016 ) for an opposing viewpoint and Cooper ( 1989 ) for the original research the blog posts were based on). Whereas some research shows clear links between homework and academic achievement (Valle et al., 2016 ), other researchers have questioned the effectiveness of homework (Dettmers, Trautwein, & Lüdtke, 2009 ). Perhaps amending homework to involve retrieval practice might make it more effective; this remains an open empirical question.

One final consideration is that of test anxiety. While retrieval practice can be very powerful at improving memory, some research shows that pressure during retrieval can undermine some of the learning benefit. For example, Hinze and Rapp ( 2014 ) manipulated pressure during quizzing to create high-pressure and low-pressure conditions. On the quizzes themselves, students performed equally well. However, those in the high-pressure condition did not perform as well on a criterion test later compared to the low-pressure group. Thus, test anxiety may reduce the learning benefit of retrieval practice. Eliminating all high-pressure tests is probably not possible, but instructors can provide a number of low-stakes retrieval opportunities for students to help increase learning. The use of low-stakes testing can serve to decrease test anxiety (Khanna, 2015 ), and has recently been shown to negate the detrimental impact of stress on learning (Smith, Floerke, & Thomas, 2016 ). This is a particularly important line of inquiry to pursue for future research, because many teachers who are not familiar with the effectiveness of retrieval practice may be put off by the implied pressure of “testing”, which evokes the much maligned high-stakes standardized tests (e.g., McHugh, 2013 ).

Elaboration

Elaboration involves connecting new information to pre-existing knowledge. Anderson ( 1983 , p.285) made the following claim about elaboration: “One of the most potent manipulations that can be performed in terms of increasing a subject’s memory for material is to have the subject elaborate on the to-be-remembered material.” Postman ( 1976 , p. 28) defined elaboration most parsimoniously as “additions to nominal input”, and Hirshman ( 2001 , p. 4369) provided an elaboration on this definition (pun intended!), defining elaboration as “A conscious, intentional process that associates to-be-remembered information with other information in memory.” However, in practice, elaboration could mean many different things. The common thread in all the definitions is that elaboration involves adding features to an existing memory.

One possible instantiation of elaboration is thinking about information on a deeper level. The levels (or “depth”) of processing framework, proposed by Craik and Lockhart (1972), predicts that information will be remembered better if it is processed more deeply in terms of meaning, rather than shallowly in terms of form. The levels of processing framework has, however, received a number of criticisms (Craik, 2002). One major problem with this framework is that it is difficult to measure “depth”. And if we are not able to actually measure depth, then the argument becomes circular: is it that something was remembered better because it was studied more deeply, or do we conclude that it must have been studied more deeply because it is remembered better? (See Lockhart & Craik, 1990, for further discussion of this issue.)

Another mechanism by which elaboration can confer a benefit to learning is via improvement in organization (Bellezza, Cheesman, & Reddy, 1977 ; Mandler, 1979 ). By this view, elaboration involves making information more integrated and organized with existing knowledge structures. By connecting and integrating the to-be-learned information with other concepts in memory, students can increase the extent to which the ideas are organized in their minds, and this increased organization presumably facilitates the reconstruction of the past at the time of retrieval.

Elaboration is such a broad term and can include so many different techniques that it is hard to claim that elaboration will always help learning. There is, however, a specific technique under the umbrella of elaboration for which there is relatively strong evidence in terms of effectiveness (Dunlosky et al., 2013 ; Pashler et al., 2007 ). This technique is called elaborative interrogation, and involves students questioning the materials that they are studying (Pressley, McDaniel, Turnure, Wood, & Ahmad, 1987 ). More specifically, students using this technique would ask “how” and “why” questions about the concepts they are studying (see Fig.  4 for an example on the physics of flight). Then, crucially, students would try to answer these questions – either from their materials or, eventually, from memory (McDaniel & Donnelly, 1996 ). The process of figuring out the answer to the questions – with some amount of uncertainty (Overoye & Storm, 2015 ) – can help learning. When using this technique, however, it is important that students check their answers with their materials or with the teacher; when the content generated through elaborative interrogation is poor, it can actually hurt learning (Clinton, Alibali, & Nathan, 2016 ).
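
For readers who want to prototype elaborative interrogation in a simple study tool, the core loop described above is easy to sketch in code. The following Python snippet is a minimal illustration under our own assumptions – the example facts, prompt wording, and function name are hypothetical, and it is not a procedure taken from any of the studies cited above.

```python
# Minimal sketch of an elaborative-interrogation drill (illustrative only).
# The facts and the prompt wording below are hypothetical examples.

facts = [
    "Planes can fly because the shape of their wings generates lift.",
    "Spaced study produces better long-term retention than massed study.",
]

def interrogation_prompts(fact):
    """Turn a studied fact into 'how' and 'why' questions for the learner."""
    return [
        f"Why is it true that: {fact}",
        f"How does this work: {fact}",
    ]

for fact in facts:
    for prompt in interrogation_prompts(fact):
        input(prompt + "\nYour answer: ")
        # Crucially, learners should check each answer against their
        # materials or a teacher, because poorly generated content can
        # actually hurt learning (Clinton et al., 2016).
        print("Now compare your answer with your notes before moving on.\n")
```

Note that the sketch deliberately follows each answer with an instruction to verify it, mirroring the caveat above that unchecked, low-quality self-generated answers can backfire.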

Students can also be encouraged to self-explain concepts to themselves while learning (Chi, De Leeuw, Chiu, & LaVancher, 1994 ). This might involve students simply saying out loud what steps they need to perform to solve an equation. Aleven and Koedinger ( 2002 ) conducted two classroom studies in which students were either prompted by a “cognitive tutor” to provide self-explanations during a problem-solving task or not, and found that the self-explanations led to improved performance. According to the authors, this approach could scale well to real classrooms. If possible and relevant, students could even perform actions alongside their self-explanations (Cohen, 1981 ; see also the enactment effect, Hainselin, Picard, Manolli, Vankerkore-Candas, & Bourdin, 2017 ). Instructors can scaffold students in these types of activities by providing self-explanation prompts throughout to-be-learned material (O’Neil et al., 2014 ). Ultimately, the greatest potential benefit of accurate self-explanation or elaboration is that the student will be able to transfer their knowledge to a new situation (Rittle-Johnson, 2006 ).

The technical term "elaborative interrogation" has not made it into the vernacular of educational bloggers (a search on https://educationechochamberuncut.wordpress.com, which consolidates over 3,000 UK-based teacher blogs, yielded zero results for that term). However, a few teachers have blogged about elaboration more generally (e.g., Hobbiss, 2016) and deep questioning specifically (e.g., Class Teaching, 2013), just without using the specific terminology. This strategy in particular may benefit from a more open dialog between researchers and teachers to facilitate the use of elaborative interrogation in the classroom and to address possible barriers to implementation. In terms of advancing the scientific understanding of elaborative interrogation in a classroom setting, it would be informative to conduct a larger-scale intervention to see whether having students elaborate during reading actually helps their understanding. It would also be useful to know whether students really need to generate their own elaborative interrogation ("how" and "why") questions, or whether answering questions provided by others is just as effective. How long should students persist in trying to find the answers? When is the right time to have students engage in this task, given the level of expertise required to do it well (Clinton et al., 2016)? Without knowing the answers to these questions, it may be too early to instruct teachers to use this technique in their classes. Finally, elaborative interrogation takes a long time. Is this time efficiently spent? Or would it be better to have students try to answer a few questions, pool their information as a class, and then move on to practicing retrieval of the information?

Concrete examples

Providing supporting information can improve the learning of key ideas and concepts. Specifically, using concrete examples to supplement content that is more conceptual in nature can make the ideas easier to understand and remember. Concrete examples can provide several advantages to the learning process: (a) they can concisely convey information, (b) they can provide students with more concrete information that is easier to remember, and (c) they can take advantage of the superior memorability of pictures relative to words (see “Dual Coding”).

Concrete words are both recognized and recalled better than abstract words (e.g., "button" vs. "bound"; Gorman, 1961). Furthermore, it has been demonstrated that information that is more concrete and imageable enhances the learning of associations, even with abstract content (Caplan & Madan, 2016; Madan, Glaholt, & Caplan, 2010; Paivio, 1971). Following from this, providing concrete examples during instruction should improve retention of related abstract concepts, rather than the concrete examples alone being remembered better. Concrete examples can be useful both during instruction and during practice problems. Having students actively explain how two examples are similar and encouraging them to extract the underlying structure on their own can also help with transfer. In a laboratory study, Berry (1983) demonstrated that students performed well when given concrete practice problems, regardless of the use of verbalization (akin to elaborative interrogation), but that verbalization helped students transfer understanding from concrete to abstract problems. One particularly important area of future research is determining how students can best make the link between concrete examples and abstract ideas.

Since abstract concepts are harder to grasp than concrete information (Paivio, Walsh, & Bons, 1994), it follows that teachers ought to illustrate abstract ideas with concrete examples. However, care must be taken when selecting the examples. LeFevre and Dixon (1986) provided students with both concrete examples and abstract instructions and found that when these were inconsistent, students followed the concrete examples rather than the abstract instructions, potentially constraining the application of the abstract concept being taught. Lew, Fukawa-Connelly, Mejía-Ramos, and Weber (2016) used an interview approach to examine why students may have difficulty understanding a lecture. Responses indicated that some issues were related to understanding the overarching topic rather than the component parts, and to the use of informal colloquialisms that did not clearly follow from the material being taught. Both of these issues could potentially have been addressed through the inclusion of a greater number of relevant concrete examples.

One concern with using concrete examples is that students might only remember the examples – especially if they are particularly memorable, such as fun or gimmicky examples – and will not be able to transfer their understanding from one example to another, or more broadly to the abstract concept. However, there does not seem to be any evidence that fun, relevant examples actually hurt learning by harming memory for important information. Instead, fun examples and jokes tend to be more memorable, but this boost in memory for the joke does not seem to come at a cost to memory for the underlying concept (Baldassari & Kelley, 2012). That said, two important caveats need to be highlighted. First, to the extent that the more memorable content is not relevant to the concepts of interest, learning of the target information can be compromised (Harp & Mayer, 1998). Thus, care must be taken to ensure that all examples and gimmicks are, in fact, related to the core concepts that the students need to acquire, and do not contain irrelevant perceptual features (Kaminski & Sloutsky, 2013).

The second issue is that novices often notice and remember the surface details of an example rather than the underlying structure. Experts, on the other hand, can extract the underlying structure from examples that have divergent surface features (Chi, Feltovich, & Glaser, 1981 ; see Fig.  5 for an example from physics). Gick and Holyoak ( 1983 ) tried to get students to apply a rule from one problem to another problem that appeared different on the surface, but was structurally similar. They found that providing multiple examples helped with this transfer process compared to only using one example – especially when the examples provided had different surface details. More work is also needed to determine how many examples are sufficient for generalization to occur (and this, of course, will vary with contextual factors and individual differences). Further research on the continuum between concrete/specific examples and more abstract concepts would also be informative. That is, if an example is not concrete enough, it may be too difficult to understand. On the other hand, if the example is too concrete, that could be detrimental to generalization to the more abstract concept (although a diverse set of very concrete examples may be able to help with this). In fact, in a controversial article, Kaminski, Sloutsky, and Heckler ( 2008 ) claimed that abstract examples were more effective than concrete examples. Later rebuttals of this paper contested whether the abstract versus concrete distinction was clearly defined in the original study (see Reed, 2008 , for a collection of letters on the subject). This ideal point along the concrete-abstract continuum might also interact with development.

Finding teacher blog posts on concrete examples proved to be more difficult than for the other strategies in this review. One optimistic possibility is that teachers frequently use concrete examples in their teaching, and thus do not think of this as a specific contribution from cognitive psychology; the one blog post we were able to find that discussed concrete examples suggests that this might be the case (Boulton, 2016 ). The idea of “linking abstract concepts with concrete examples” is also covered in 25% of teacher-training textbooks used in the US, according to the report by Pomerance et al. ( 2016 ); this is the second most frequently covered of the six strategies, after “posing probing questions” (i.e., elaborative interrogation). A useful direction for future research would be to establish how teachers are using concrete examples in their practice, and whether we can make any suggestions for improvement based on research into the science of learning. For example, if two examples are better than one (Bauernschmidt, 2017 ), are additional examples also needed, or are there diminishing returns from providing more examples? And, how can teachers best ensure that concrete examples are consistent with prior knowledge (Reed, 2008 )?

Dual coding

Both the memory literature and folk psychology support the notion that visual examples are beneficial, as captured by the adage "a picture is worth a thousand words" (traced back to an advertising slogan from the 1920s; Meider, 1990). Indeed, it is well understood that more information can be conveyed through a simple illustration than through several paragraphs of text (e.g., Barker & Manji, 1989; Mayer & Gallini, 1990). Illustrations can be particularly helpful when the described concept involves several parts or steps and is intended for individuals with low prior knowledge (Eitel & Scheiter, 2015; Mayer & Gallini, 1990). Figure 6 provides a concrete example of this, illustrating how information can flow through neurons and synapses.

In addition to being able to convey information more succinctly, pictures are also more memorable than words (Paivio & Csapo, 1969 , 1973 ). In the memory literature, this is referred to as the picture superiority effect , and dual coding theory was developed in part to explain this effect. Dual coding follows from the notion of text being accompanied by complementary visual information to enhance learning. Paivio ( 1971 , 1986 ) proposed dual coding theory as a mechanistic account for the integration of multiple information “codes” to process information. In this theory, a code corresponds to a modal or otherwise distinct representation of a concept—e.g., “mental images for ‘book’ have visual, tactual, and other perceptual qualities similar to those evoked by the referent objects on which the images are based” (Clark & Paivio, 1991 , p. 152). Aylwin ( 1990 ) provides a clear example of how the word “dog” can evoke verbal, visual, and enactive representations (see Fig.  7 for a similar example for the word “SPOON”, based on Aylwin, 1990 (Fig.  2 ) and Madan & Singhal, 2012a (Fig.  3 )). Codes can also correspond to emotional properties (Clark & Paivio, 1991 ; Paivio, 2013 ). Clark and Paivio ( 1991 ) provide a thorough review of dual coding theory and its relation to education, while Paivio ( 2007 ) provides a comprehensive treatise on dual coding theory. Broadly, dual coding theory suggests that providing multiple representations of the same information enhances learning and memory, and that information that more readily evokes additional representations (through automatic imagery processes) receives a similar benefit.

Paivio and Csapo ( 1973 ) suggest that verbal and imaginal codes have independent and additive effects on memory recall. Using visuals to improve learning and memory has been particularly applied to vocabulary learning (Danan, 1992 ; Sadoski, 2005 ), but has also shown success in other domains such as in health care (Hartland, Biddle, & Fallacaro, 2008 ). To take advantage of dual coding, verbal information should be accompanied by a visual representation when possible. However, while the studies discussed all indicate that the use of multiple representations of information is favorable, it is important to acknowledge that each representation also increases cognitive load and can lead to over-saturation (Mayer & Moreno, 2003 ).

Given that pictures are generally remembered better than words, it is important to ensure that the pictures students are provided with are helpful and relevant to the content they are expected to learn. McNeill, Uttal, Jarvin, and Sternberg (2009) found that providing visual examples decreased conceptual errors. However, McNeill et al. also found that when students were given visually rich examples, they performed more poorly than students who were not given any visual example, suggesting that visual details can at times become a distraction and hinder performance. Thus, it is important to ensure that images used in teaching are clear and unambiguous in their meaning (Schwartz, 2007).

Further broadening the scope of dual coding theory, Engelkamp and Zimmer (1984) suggest that motor movements, such as "turning the handle," can provide an additional motor code that can improve memory, linking studies of motor actions (enactment) with dual coding theory (Clark & Paivio, 1991; Engelkamp & Cohen, 1991; Madan & Singhal, 2012c). Indeed, enactment effects appear to occur primarily during learning, rather than during retrieval (Peterson & Mulligan, 2010). Along similar lines, Wammes, Meade, and Fernandes (2016) demonstrated that generating drawings can provide memory benefits beyond what could otherwise be explained by visual imagery, picture superiority, and other memory-enhancing effects. Providing convergent evidence, even when overt motor actions are not critical in themselves, words representing functional objects have been shown to enhance later memory (Madan & Singhal, 2012b; Montefinese, Ambrosini, Fairfield, & Mammarella, 2013). This indicates that motoric processes can improve memory in much the same way as visual imagery does, paralleling the memory advantage for concrete over abstract words. Further research suggests that automatic motor simulation for functional objects is likely responsible for this memory benefit (Madan, Chen, & Singhal, 2016).

When teachers combine visuals and words in their educational practice, however, they may not always be taking advantage of dual coding – at least, not in the optimal manner. For example, a recent discussion on Twitter centered around one teacher's decision to have 7th-grade students replace certain words in their science laboratory report with a picture of that word (e.g., the instructions read "using a syringe …" and a picture of a syringe replaced the word; Turner, 2016a). Other teachers argued that this was not dual coding (Beaven, 2016; Williams, 2016), because there were no longer two different representations of the information. The first teacher maintained that dual coding was preserved, because this laboratory report with pictures was to be used alongside the original, fully verbal report (Turner, 2016b). This particular implementation – having students replace individual words with pictures – has not been examined in the cognitive literature, presumably because no benefit would be expected. In any case, we need to be clearer about implementations of dual coding, and more research is needed to clarify how teachers can make use of the benefits conferred by multiple representations and picture superiority.

Critically, dual coding theory is distinct from the notion of “learning styles,” which describe the idea that individuals benefit from instruction that matches their modality preference. While this idea is pervasive and individuals often subjectively feel that they have a preference, evidence indicates that the learning styles theory is not supported by empirical findings (e.g., Kavale, Hirshoren, & Forness, 1998 ; Pashler, McDaniel, Rohrer, & Bjork, 2008 ; Rohrer & Pashler, 2012 ). That is, there is no evidence that instructing students in their preferred learning style leads to an overall improvement in learning (the “meshing” hypothesis). Moreover, learning styles have come to be described as a myth or urban legend within psychology (Coffield, Moseley, Hall, & Ecclestone, 2004 ; Hattie & Yates, 2014 ; Kirschner & van Merriënboer, 2013 ; Kirschner, 2017 ); skepticism about learning styles is a common stance amongst evidence-informed teachers (e.g., Saunders, 2016 ). Providing evidence against the notion of learning styles, Kraemer, Rosenberg, and Thompson-Schill ( 2009 ) found that individuals who scored as “verbalizers” and “visualizers” did not perform any better on experimental trials matching their preference. Instead, it has recently been shown that learning through one’s preferred learning style is associated with elevated subjective judgements of learning, but not objective performance (Knoll, Otani, Skeel, & Van Horn, 2017 ). In contrast to learning styles, dual coding is based on providing additional, complementary forms of information to enhance learning, rather than tailoring instruction to individuals’ preferences.

Genuine educational environments present many opportunities for combining the strategies outlined above. Spacing can be particularly potent for learning if it is combined with retrieval practice. The additive benefits of retrieval practice and spacing can be gained by engaging in retrieval practice multiple times (also known as distributed practice; see Cepeda et al., 2006 ). Interleaving naturally entails spacing if students interleave old and new material. Concrete examples can be both verbal and visual, making use of dual coding. In addition, the strategies of elaboration, concrete examples, and dual coding all work best when used as part of retrieval practice. For example, in the concept-mapping studies mentioned above (Blunt & Karpicke, 2014 ; Karpicke, Blunt, et al., 2014 ), creating concept maps while looking at course materials (e.g., a textbook) was not as effective for later memory as creating concept maps from memory. When practicing elaborative interrogation, students can start off answering the “how” and “why” questions they pose for themselves using class materials, and work their way up to answering them from memory. And when interleaving different problem types, students should be practicing answering them rather than just looking over worked examples.
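
To make these combinations concrete, the following Python sketch generates a term plan in which every topic gets several spaced, low-stakes retrieval slots; because older topics resurface on days when newer ones are introduced, the resulting schedule is interleaved as well. The 1-, 7-, and 21-day review gaps, the topics, and the dates are our own illustrative assumptions, not values prescribed by the studies cited above.

```python
from datetime import date, timedelta

# Expanding review gaps (in days) after each topic is introduced.
# These values are illustrative assumptions, not empirically optimized.
REVIEW_GAPS_DAYS = [1, 7, 21]

def quiz_schedule(topics_with_intro_dates):
    """Return (date, topic) retrieval-practice slots in chronological order.

    Because each topic reappears days or weeks after it is introduced,
    later sessions naturally mix (interleave) old and new material.
    """
    slots = []
    for topic, intro in topics_with_intro_dates:
        for gap in REVIEW_GAPS_DAYS:
            slots.append((intro + timedelta(days=gap), topic))
    return sorted(slots)

term = [
    ("fractions", date(2018, 1, 8)),    # hypothetical syllabus
    ("decimals", date(2018, 1, 15)),
    ("percentages", date(2018, 1, 22)),
]

for when, topic in quiz_schedule(term):
    print(f"{when}: low-stakes retrieval quiz on {topic}")
```

On this toy syllabus, for example, the session on 29 January combines a 21-day review of fractions with a 7-day review of percentages – spacing and interleaving falling out of the same simple rule.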

But while these ideas for strategy combinations have empirical bases, it has not yet been established whether the benefits of the strategies to learning are additive, super-additive, or, in some cases, incompatible. Thus, future research needs to (a) better formalize the definition of each strategy (particularly critical for elaboration and dual coding), (b) identify best practices for implementation in the classroom, (c) delineate the boundary conditions of each strategy, and (d) strategically investigate interactions between the six strategies we outlined in this manuscript.

Aleven, V. A., & Koedinger, K. R. (2002). An effective metacognitive strategy: learning by doing and explaining with a computer-based cognitive tutor. Cognitive Science, 26 , 147–179.

Anderson, J. R. (1983). A spreading activation theory of memory. Journal of Verbal Learning and Verbal Behavior, 22 , 261–295.

Arnold, K. M., & McDermott, K. B. (2013). Test-potentiated learning: distinguishing between direct and indirect effects of tests. Journal of Experimental Psychology: Learning, Memory, and Cognition, 39 , 940–945.

Aylwin, S. (1990). Imagery and affect: big questions, little answers. In P. J. Thompson, D. E. Marks, & J. T. E. Richardson (Eds.), Imagery: Current developments . New York: International Library of Psychology.

Baldassari, M. J., & Kelley, M. (2012). Make’em laugh? The mnemonic effect of humor in a speech. Psi Chi Journal of Psychological Research, 17 , 2–9.

Barker, P. G., & Manji, K. A. (1989). Pictorial dialogue methods. International Journal of Man-Machine Studies, 31 , 323–347.

Bauernschmidt, A. (2017). GUEST POST: two examples are better than one. [Blog post]. The Learning Scientists Blog . Retrieved from http://www.learningscientists.org/blog/2017/5/30-1 . Accessed 25 Dec 2017.

Beaven, T. (2016). @doctorwhy @FurtherEdagogy @doc_kristy Right, I thought the whole point of dual coding was to use TWO codes: pics + words of the SAME info? [Tweet]. Retrieved from https://twitter.com/TitaBeaven/status/807504041341308929 . Accessed 25 Dec 2017.

Bellezza, F. S., Cheesman, F. L., & Reddy, B. G. (1977). Organization and semantic elaboration in free recall. Journal of Experimental Psychology: Human Learning and Memory, 3 , 539–550.

Benney, D. (2016). (Trying to apply) spacing in a content heavy subject [Blog post]. Retrieved from https://mrbenney.wordpress.com/2016/10/16/trying-to-apply-spacing-in-science/ . Accessed 25 Dec 2017.

Berry, D. C. (1983). Metacognitive experience and transfer of logical reasoning. Quarterly Journal of Experimental Psychology, 35A , 39–49.

Birnbaum, M. S., Kornell, N., Bjork, E. L., & Bjork, R. A. (2013). Why interleaving enhances inductive learning: the roles of discrimination and retrieval. Memory & Cognition, 41 , 392–402.

Bjork, R. A. (1999). Assessing our own competence: heuristics and illusions. In D. Gopher & A. Koriat (Eds.), Attention and performance XVII. Cognitive regulation of performance: Interaction of theory and application (pp. 435–459). Cambridge, MA: MIT Press.

Bjork, R. A. (1994). Memory and metamemory considerations in the training of human beings. In J. Metcalfe & A. Shimamura (Eds.), Metacognition: Knowing about knowing (pp. 185–205). Cambridge, MA: MIT Press.

Bjork, R. A., & Bjork, E. L. (1992). A new theory of disuse and an old theory of stimulus fluctuation. From learning processes to cognitive processes: Essays in honor of William K. Estes, 2 , 35–67.

Bjork, E. L., & Bjork, R. A. (2011). Making things hard on yourself, but in a good way: creating desirable difficulties to enhance learning. Psychology and the real world: Essays illustrating fundamental contributions to society , 56–64.

Blunt, J. R., & Karpicke, J. D. (2014). Learning with retrieval-based concept mapping. Journal of Educational Psychology, 106 , 849–858.

Boulton, K. (2016). What does cognitive overload look like in the humanities? [Blog post]. Retrieved from https://educationechochamberuncut.wordpress.com/2016/03/05/what-does-cognitive-overload-look-like-in-the-humanities-kris-boulton-2/ . Accessed 25 Dec 2017.

Brown, P. C., Roediger, H. L., & McDaniel, M. A. (2014). Make it stick . Cambridge, MA: Harvard University Press.

Butler, A. C. (2010). Repeated testing produces superior transfer of learning relative to repeated studying. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36 , 1118–1133.

Caplan, J. B., & Madan, C. R. (2016). Word-imageability enhances association-memory by recruiting hippocampal activity. Journal of Cognitive Neuroscience, 28 , 1522–1538.

Cepeda, N. J., Pashler, H., Vul, E., Wixted, J. T., & Rohrer, D. (2006). Distributed practice in verbal recall tasks: a review and quantitative synthesis. Psychological Bulletin, 132 , 354–380.

Cepeda, N. J., Vul, E., Rohrer, D., Wixted, J. T., & Pashler, H. (2008). Spacing effects in learning: a temporal ridgeline of optimal retention. Psychological Science, 19 , 1095–1102.

Chi, M. T., De Leeuw, N., Chiu, M. H., & LaVancher, C. (1994). Eliciting self-explanations improves understanding. Cognitive Science, 18 , 439–477.

Chi, M. T., Feltovich, P. J., & Glaser, R. (1981). Categorization and representation of physics problems by experts and novices. Cognitive Science, 5 , 121–152.

CIFE. (2012). No January A level and other changes. Retrieved from http://www.cife.org.uk/cife-general-news/no-january-a-level-and-other-changes/ . Accessed 25 Dec 2017.

Clark, D. (2016). One book on learning that every teacher, lecturer & trainer should read (7 reasons) [Blog post]. Retrieved from http://donaldclarkplanb.blogspot.com/2016/03/one-book-on-learning-that-every-teacher.html . Accessed 25 Dec 2017.

Clark, J. M., & Paivio, A. (1991). Dual coding theory and education. Educational Psychology Review, 3 , 149–210.

Class Teaching. (2013). Deep questioning [Blog post]. Retrieved from https://classteaching.wordpress.com/2013/07/12/deep-questioning/ . Accessed 25 Dec 2017.

Clinton, V., Alibali, M. W., & Nathan, M. J. (2016). Learning about posterior probability: do diagrams and elaborative interrogation help? The Journal of Experimental Education, 84 , 579–599.

Coffield, F., Moseley, D., Hall, E., & Ecclestone, K. (2004). Learning styles and pedagogy in post-16 learning: a systematic and critical review . London: Learning & Skills Research Centre.

Cohen, R. L. (1981). On the generality of some memory laws. Scandinavian Journal of Psychology, 22 , 267–281.

Cooper, H. (1989). Synthesis of research on homework. Educational Leadership, 47 , 85–91.

Corbett, A. T., Reed, S. K., Hoffmann, R., MacLaren, B., & Wagner, A. (2010). Interleaving worked examples and cognitive tutor support for algebraic modeling of problem situations. In Proceedings of the Thirty-Second Annual Meeting of the Cognitive Science Society (pp. 2882–2887).

Cox, D. (2015). No stakes testing – not telling students their results [Blog post]. Retrieved from https://missdcoxblog.wordpress.com/2015/06/06/no-stakes-testing-not-telling-students-their-results/ . Accessed 25 Dec 2017.

Cox, D. (2016a). Ditch revision. Teach it well [Blog post]. Retrieved from https://missdcoxblog.wordpress.com/2016/01/09/ditch-revision-teach-it-well/ . Accessed 25 Dec 2017.

Cox, D. (2016b). ‘They need to remember this in three years time’: spacing & interleaving for the new GCSEs [Blog post]. Retrieved from https://missdcoxblog.wordpress.com/2016/03/25/they-need-to-remember-this-in-three-years-time-spacing-interleaving-for-the-new-gcses/ . Accessed 25 Dec 2017.

Craik, F. I. (2002). Levels of processing: past, present… future? Memory, 10 , 305–318.

Craik, F. I., & Lockhart, R. S. (1972). Levels of processing: a framework for memory research. Journal of Verbal Learning and Verbal Behavior, 11 , 671–684.

Danan, M. (1992). Reversed subtitling and dual coding theory: new directions for foreign language instruction. Language Learning, 42 , 497–527.

Dettmers, S., Trautwein, U., & Lüdtke, O. (2009). The relationship between homework time and achievement is not universal: evidence from multilevel analyses in 40 countries. School Effectiveness and School Improvement, 20 , 375–405.

Dirkx, K. J., Kester, L., & Kirschner, P. A. (2014). The testing effect for learning principles and procedures from texts. The Journal of Educational Research, 107 , 357–364.

Dunlosky, J. (2013). Strengthening the student toolbox: study strategies to boost learning. American Educator, 37 (3), 12–21.

Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students’ learning with effective learning techniques: promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14 , 4–58.

Ebbinghaus, H. (1913). Memory (HA Ruger & CE Bussenius, Trans.). New York: Columbia University, Teachers College. (Original work published 1885) . Retrieved from http://psychclassics.yorku.ca/Ebbinghaus/memory8.htm . Accessed 25 Dec 2017.

Eglington, L. G., & Kang, S. H. (2016). Retrieval practice benefits deductive inference. Educational Psychology Review , 1–14.

Eitel, A., & Scheiter, K. (2015). Picture or text first? Explaining sequential effects when learning with pictures and text. Educational Psychology Review, 27 , 153–180.

Engelkamp, J., & Cohen, R. L. (1991). Current issues in memory of action events. Psychological Research, 53 , 175–182.

Engelkamp, J., & Zimmer, H. D. (1984). Motor programme information as a separable memory unit. Psychological Research, 46 , 283–299.

Fawcett, D. (2013). Can I be that little better at……using cognitive science/psychology/neurology to plan learning? [Blog post]. Retrieved from http://reflectionsofmyteaching.blogspot.com/2013/09/can-i-be-that-little-better-atusing.html . Accessed 25 Dec 2017.

Fiechter, J. L., & Benjamin, A. S. (2017). Diminishing-cues retrieval practice: a memory-enhancing technique that works when regular testing doesn’t. Psychonomic Bulletin & Review , 1–9.

Firth, J. (2016). Spacing in teaching practice [Blog post]. Retrieved from http://www.learningscientists.org/blog/2016/4/12-1 . Accessed 25 Dec 2017.

Fordham, M. [mfordhamhistory]. (2016). Is there a meaningful distinction in psychology between ‘thinking’ & ‘critical thinking’? [Tweet]. Retrieved from https://twitter.com/mfordhamhistory/status/809525713623781377 . Accessed 25 Dec 2017.

Fritz, C. O., Morris, P. E., Nolan, D., & Singleton, J. (2007). Expanding retrieval practice: an effective aid to preschool children’s learning. The Quarterly Journal of Experimental Psychology, 60 , 991–1004.

Gates, A. I. (1917). Recitation as a factor in memorizing. Archives of Psychology, 6.

Gick, M. L., & Holyoak, K. J. (1983). Schema induction and analogical transfer. Cognitive Psychology, 15 , 1–38.

Gorman, A. M. (1961). Recognition memory for nouns as a function of abstractedness and frequency. Journal of Experimental Psychology, 61 , 23–39.

Hainselin, M., Picard, L., Manolli, P., Vankerkore-Candas, S., & Bourdin, B. (2017). Hey teacher, don’t leave them kids alone: action is better for memory than reading. Frontiers in Psychology , 8 .

Harp, S. F., & Mayer, R. E. (1998). How seductive details do their damage. Journal of Educational Psychology, 90 , 414–434.

Hartland, W., Biddle, C., & Fallacaro, M. (2008). Audiovisual facilitation of clinical knowledge: A paradigm for dispersed student education based on Paivio’s dual coding theory. AANA Journal, 76 , 194–198.

Hattie, J., & Yates, G. (2014). Visible learning and the science of how we learn . New York: Routledge.

Hausman, H., & Kornell, N. (2014). Mixing topics while studying does not enhance learning. Journal of Applied Research in Memory and Cognition, 3 , 153–160.

Hinze, S. R., & Rapp, D. N. (2014). Retrieval (sometimes) enhances learning: performance pressure reduces the benefits of retrieval practice. Applied Cognitive Psychology, 28 , 597–606.

Hirshman, E. (2001). Elaboration in memory. In N. J. Smelser & P. B. Baltes (Eds.), International encyclopedia of the social & behavioral sciences (pp. 4369–4374). Oxford: Pergamon.

Hobbiss, M. (2016). Make it meaningful! Elaboration [Blog post]. Retrieved from https://hobbolog.wordpress.com/2016/06/09/make-it-meaningful-elaboration/ . Accessed 25 Dec 2017.

Jones, F. (2016). Homework – is it really that useless? [Blog post]. Retrieved from http://www.learningscientists.org/blog/2016/4/5-1 . Accessed 25 Dec 2017.

Kaminski, J. A., & Sloutsky, V. M. (2013). Extraneous perceptual information interferes with children’s acquisition of mathematical knowledge. Journal of Educational Psychology, 105 (2), 351–363.

Kaminski, J. A., Sloutsky, V. M., & Heckler, A. F. (2008). The advantage of abstract examples in learning math. Science, 320 , 454–455.

Kang, S. H. (2016). Spaced repetition promotes efficient and effective learning: policy implications for instruction. Policy Insights from the Behavioral and Brain Sciences, 3 , 12–19.

Kang, S. H. K., McDermott, K. B., & Roediger, H. L. (2007). Test format and corrective feedback modify the effects of testing on long-term retention. European Journal of Cognitive Psychology, 19 , 528–558.

Karpicke, J. D., & Aue, W. R. (2015). The testing effect is alive and well with complex materials. Educational Psychology Review, 27 , 317–326.

Karpicke, J. D., Blunt, J. R., Smith, M. A., & Karpicke, S. S. (2014). Retrieval-based learning: The need for guided retrieval in elementary school children. Journal of Applied Research in Memory and Cognition, 3 , 198–206.

Karpicke, J. D., Lehman, M., & Aue, W. R. (2014). Retrieval-based learning: an episodic context account. In B. H. Ross (Ed.), Psychology of Learning and Motivation (Vol. 61, pp. 237–284). San Diego, CA: Elsevier Academic Press.

Karpicke, J. D., Blunt, J. R., & Smith, M. A. (2016). Retrieval-based learning: positive effects of retrieval practice in elementary school children. Frontiers in Psychology, 7 .

Kavale, K. A., Hirshoren, A., & Forness, S. R. (1998). Meta-analytic validation of the Dunn and Dunn model of learning-style preferences: a critique of what was Dunn. Learning Disabilities Research & Practice, 13 , 75–80.

Khanna, M. M. (2015). Ungraded pop quizzes: test-enhanced learning without all the anxiety. Teaching of Psychology, 42 , 174–178.

Kirby, J. (2014). One scientific insight for curriculum design [Blog post]. Retrieved from https://pragmaticreform.wordpress.com/2014/05/05/scientificcurriculumdesign/ . Accessed 25 Dec 2017.

Kirschner, P. A. (2017). Stop propagating the learning styles myth. Computers & Education, 106 , 166–171.

Kirschner, P. A., & van Merriënboer, J. J. G. (2013). Do learners really know best? Urban legends in education. Educational Psychologist, 48 , 169–183.

Knoll, A. R., Otani, H., Skeel, R. L., & Van Horn, K. R. (2017). Learning style, judgments of learning, and learning of verbal and visual information. British Journal of Psychology, 108 , 544-563.

Kornell, N., & Bjork, R. A. (2008). Learning concepts and categories: is spacing the "enemy of induction"? Psychological Science, 19 , 585–592.

Kornell, N., & Finn, B. (2016). Self-regulated learning: an overview of theory and data. In J. Dunlosky & S. Tauber (Eds.), The Oxford Handbook of Metamemory (pp. 325–340). New York: Oxford University Press.

Kornell, N., Klein, P. J., & Rawson, K. A. (2015). Retrieval attempts enhance learning, but retrieval success (versus failure) does not matter. Journal of Experimental Psychology: Learning, Memory, and Cognition, 41 , 283–294.

Kraemer, D. J. M., Rosenberg, L. M., & Thompson-Schill, S. L. (2009). The neural correlates of visual and verbal cognitive styles. Journal of Neuroscience, 29 , 3792–3798.

Kraft, N. (2015). Spaced practice and repercussions for teaching. Retrieved from http://nathankraft.blogspot.com/2015/08/spaced-practice-and-repercussions-for.html . Accessed 25 Dec 2017.

Learning Scientists. (2016a). Weekly Digest #3: How teachers implement interleaving in their curriculum [Blog post]. Retrieved from http://www.learningscientists.org/blog/2016/3/28/weekly-digest-3 . Accessed 25 Dec 2017.

Learning Scientists. (2016b). Weekly Digest #13: how teachers implement retrieval in their classrooms [Blog post]. Retrieved from http://www.learningscientists.org/blog/2016/6/5/weekly-digest-13 . Accessed 25 Dec 2017.

Learning Scientists. (2016c). Weekly Digest #40: teachers’ implementation of principles from “Make It Stick” [Blog post]. Retrieved from http://www.learningscientists.org/blog/2016/12/18-1 . Accessed 25 Dec 2017.

Learning Scientists. (2017). Weekly Digest #54: is there an app for that? Studying 2.0 [Blog post]. Retrieved from http://www.learningscientists.org/blog/2017/4/9/weekly-digest-54 . Accessed 25 Dec 2017.

LeFevre, J.-A., & Dixon, P. (1986). Do written instructions need examples? Cognition and Instruction, 3 , 1–30.

Lew, K., Fukawa-Connelly, T., Mejía-Ramos, J. P., & Weber, K. (2016). Lectures in advanced mathematics: why students might not understand what the mathematics professor is trying to convey. Journal for Research in Mathematics Education, 47 , 162–198.

Lindsey, R. V., Shroyer, J. D., Pashler, H., & Mozer, M. C. (2014). Improving students’ long-term knowledge retention through personalized review. Psychological Science, 25 , 639–647.

Lipko-Speed, A., Dunlosky, J., & Rawson, K. A. (2014). Does testing with feedback help grade-school children learn key concepts in science? Journal of Applied Research in Memory and Cognition, 3 , 171–176.

Lockhart, R. S., & Craik, F. I. (1990). Levels of processing: a retrospective commentary on a framework for memory research. Canadian Journal of Psychology, 44 , 87–112.

Lovell, O. (2017). How do we know what to put on the quiz? [Blog Post]. Retrieved from http://www.ollielovell.com/olliesclassroom/know-put-quiz/ . Accessed 25 Dec 2017.

Luehmann, A. L. (2008). Using blogging in support of teacher professional identity development: a case study. The Journal of the Learning Sciences, 17 , 287–337.

Madan, C. R., Glaholt, M. G., & Caplan, J. B. (2010). The influence of item properties on association-memory. Journal of Memory and Language, 63 , 46–63.

Madan, C. R., & Singhal, A. (2012a). Motor imagery and higher-level cognition: four hurdles before research can sprint forward. Cognitive Processing, 13 , 211–229.

Madan, C. R., & Singhal, A. (2012b). Encoding the world around us: motor-related processing influences verbal memory. Consciousness and Cognition, 21 , 1563–1570.

Madan, C. R., & Singhal, A. (2012c). Using actions to enhance memory: effects of enactment, gestures, and exercise on human memory. Frontiers in Psychology, 3 .

Madan, C. R., Chen, Y. Y., & Singhal, A. (2016). ERPs differentially reflect automatic and deliberate processing of the functional manipulability of objects. Frontiers in Human Neuroscience, 10 .

Mandler, G. (1979). Organization and repetition: organizational principles with special reference to rote learning. In L. G. Nilsson (Ed.), Perspectives on Memory Research (pp. 293–327). New York: Academic Press.

Marsh, E. J., Fazio, L. K., & Goswick, A. E. (2012). Memorial consequences of testing school-aged children. Memory, 20 , 899–906.

Mayer, R. E., & Gallini, J. K. (1990). When is an illustration worth ten thousand words? Journal of Educational Psychology, 82 , 715–726.

Mayer, R. E., & Moreno, R. (2003). Nine ways to reduce cognitive load in multimedia learning. Educational Psychologist, 38 , 43–52.

McDaniel, M. A., & Donnelly, C. M. (1996). Learning with analogy and elaborative interrogation. Journal of Educational Psychology, 88 , 508–519.

McDaniel, M. A., Thomas, R. C., Agarwal, P. K., McDermott, K. B., & Roediger, H. L. (2013). Quizzing in middle-school science: successful transfer performance on classroom exams. Applied Cognitive Psychology, 27 , 360–372.

McDermott, K. B., Agarwal, P. K., D’Antonio, L., Roediger, H. L., & McDaniel, M. A. (2014). Both multiple-choice and short-answer quizzes enhance later exam performance in middle and high school classes. Journal of Experimental Psychology: Applied, 20 , 3–21.

McHugh, A. (2013). High-stakes tests: bad for students, teachers, and education in general [Blog post]. Retrieved from https://teacherbiz.wordpress.com/2013/07/01/high-stakes-tests-bad-for-students-teachers-and-education-in-general/ . Accessed 25 Dec 2017.

McNeill, N. M., Uttal, D. H., Jarvin, L., & Sternberg, R. J. (2009). Should you show me the money? Concrete objects both hurt and help performance on mathematics problems. Learning and Instruction, 19 , 171–184.

Meider, W. (1990). “A picture is worth a thousand words”: from advertising slogan to American proverb. Southern Folklore, 47 , 207–225.

Michaela Community School. (2014). Homework. Retrieved from http://mcsbrent.co.uk/homework-2/ . Accessed 25 Dec 2017.

Montefinese, M., Ambrosini, E., Fairfield, B., & Mammarella, N. (2013). The “subjective” pupil old/new effect: is the truth plain to see? International Journal of Psychophysiology, 89 , 48–56.

O’Neil, H. F., Chung, G. K., Kerr, D., Vendlinski, T. P., Buschang, R. E., & Mayer, R. E. (2014). Adding self-explanation prompts to an educational computer game. Computers In Human Behavior, 30 , 23–28.

Overoye, A. L., & Storm, B. C. (2015). Harnessing the power of uncertainty to enhance learning. Translational Issues in Psychological Science, 1 , 140–148.

Paivio, A. (1971). Imagery and verbal processes . New York: Holt, Rinehart and Winston.

Paivio, A. (1986). Mental representations: a dual coding approach . New York: Oxford University Press.

Paivio, A. (2007). Mind and its evolution: a dual coding theoretical approach . Mahwah: Erlbaum.

Paivio, A. (2013). Dual coding theory, word abstractness, and emotion: a critical review of Kousta et al. (2011). Journal of Experimental Psychology: General, 142 , 282–287.

Paivio, A., & Csapo, K. (1969). Concrete image and verbal memory codes. Journal of Experimental Psychology, 80 , 279–285.

Paivio, A., & Csapo, K. (1973). Picture superiority in free recall: imagery or dual coding? Cognitive Psychology, 5 , 176–206.

Paivio, A., Walsh, M., & Bons, T. (1994). Concreteness effects on memory: when and why? Journal of Experimental Psychology: Learning, Memory, and Cognition, 20 , 1196–1204.

Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R. (2008). Learning styles: concepts and evidence. Psychological Science in the Public Interest, 9 , 105–119.

Pashler, H., Bain, P. M., Bottge, B. A., Graesser, A., Koedinger, K., McDaniel, M., & Metcalfe, J. (2007). Organizing instruction and study to improve student learning. IES practice guide. NCER 2007–2004. National Center for Education Research .

Patel, R., Liu, R., & Koedinger, K. (2016). When to block versus interleave practice? Evidence against teaching fraction addition before fraction multiplication. In Proceedings of the 38th Annual Meeting of the Cognitive Science Society, Philadelphia, PA .

Penfound, B. (2017). Journey to interleaved practice #2 [Blog Post]. Retrieved from https://fullstackcalculus.com/2017/02/03/journey-to-interleaved-practice-2/ . Accessed 25 Dec 2017.

Penfound, B. [BryanPenfound]. (2016). Does blocked practice/learning lessen cognitive load? Does interleaved practice/learning provide productive struggle? [Tweet]. Retrieved from https://twitter.com/BryanPenfound/status/808759362244087808 . Accessed 25 Dec 2017.

Peterson, D. J., & Mulligan, N. W. (2010). Enactment and retrieval. Memory & Cognition, 38 , 233–243.

Picciotto, H. (2009). Lagging homework [Blog post]. Retrieved from http://blog.mathedpage.org/2013/06/lagging-homework.html . Accessed 25 Dec 2017.

Pomerance, L., Greenberg, J., & Walsh, K. (2016). Learning about learning: what every teacher needs to know. Retrieved from http://www.nctq.org/dmsView/Learning_About_Learning_Report . Accessed 25 Dec 2017.

Postman, L. (1976). Methodology of human learning. In W. K. Estes (Ed.), Handbook of learning and cognitive processes (Vol. 3). Hillsdale: Erlbaum.

Pressley, M., McDaniel, M. A., Turnure, J. E., Wood, E., & Ahmad, M. (1987). Generation and precision of elaboration: effects on intentional and incidental learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 13 , 291–300.

Reed, S. K. (2008). Concrete examples must jibe with experience. Science, 322 , 1632–1633.

researchED. (2013). How it all began. Retrieved from http://www.researched.org.uk/about/our-story/ . Accessed 25 Dec 2017.

Ritchie, S. J., Della Sala, S., & McIntosh, R. D. (2013). Retrieval practice, with or without mind mapping, boosts fact learning in primary school children. PLoS One, 8 (11), e78976.

Rittle-Johnson, B. (2006). Promoting transfer: effects of self-explanation and direct instruction. Child Development, 77 , 1–15.

Roediger, H. L. (1985). Remembering Ebbinghaus. [Retrospective review of the book On Memory , by H. Ebbinghaus]. Contemporary Psychology, 30 , 519–523.

Roediger, H. L. (2013). Applying cognitive psychology to education: translational educational science. Psychological Science in the Public Interest, 14 , 1–3.

Roediger, H. L., & Karpicke, J. D. (2006). The power of testing memory: basic research and implications for educational practice. Perspectives on Psychological Science, 1 , 181–210.

Roediger, H. L., Putnam, A. L., & Smith, M. A. (2011). Ten benefits of testing and their applications to educational practice. In J. Mester & B. Ross (Eds.), The psychology of learning and motivation: cognition in education (pp. 1–36). Oxford: Elsevier.

Roediger, H. L., Finn, B., & Weinstein, Y. (2012). Applications of cognitive science to education. In Della Sala, S., & Anderson, M. (Eds.), Neuroscience in education: the good, the bad, and the ugly . Oxford, UK: Oxford University Press.

Roelle, J., & Berthold, K. (2017). Effects of incorporating retrieval into learning tasks: the complexity of the tasks matters. Learning and Instruction, 49 , 142–156.

Rohrer, D. (2012). Interleaving helps students distinguish among similar concepts. Educational Psychology Review, 24(3), 355–367.

Rohrer, D., Dedrick, R. F., & Stershic, S. (2015). Interleaved practice improves mathematics learning. Journal of Educational Psychology, 107 , 900–908.

Rohrer, D., & Pashler, H. (2012). Learning styles: Where’s the evidence? Medical Education, 46 , 34–35.

Rohrer, D., & Taylor, K. (2007). The shuffling of mathematics problems improves learning. Instructional Science, 35 , 481–498.

Rose, N. (2014). Improving the effectiveness of homework [Blog post]. Retrieved from https://evidenceintopractice.wordpress.com/2014/03/20/improving-the-effectiveness-of-homework/ . Accessed 25 Dec 2017.

Sadoski, M. (2005). A dual coding view of vocabulary learning. Reading & Writing Quarterly, 21 , 221–238.

Saunders, K. (2016). It really is time we stopped talking about learning styles [Blog post]. Retrieved from http://martingsaunders.com/2016/10/it-really-is-time-we-stopped-talking-about-learning-styles/ . Accessed 25 Dec 2017.

Schwartz, D. (2007). If a picture is worth a thousand words, why are you reading this essay? Social Psychology Quarterly, 70 , 319–321.

Shumaker, H. (2016). Homework is wrecking our kids: the research is clear, let’s ban elementary homework. Salon. Retrieved from http://www.salon.com/2016/03/05/homework_is_wrecking_our_kids_the_research_is_clear_lets_ban_elementary_homework . Accessed 25 Dec 2017.

Smith, A. M., Floerke, V. A., & Thomas, A. K. (2016). Retrieval practice protects memory against acute stress. Science, 354 , 1046–1048.

Smith, M. A., Blunt, J. R., Whiffen, J. W., & Karpicke, J. D. (2016). Does providing prompts during retrieval practice improve learning? Applied Cognitive Psychology, 30 , 784–802.

Smith, M. A., & Karpicke, J. D. (2014). Retrieval practice with short-answer, multiple-choice, and hybrid formats. Memory, 22 , 784–802.

Smith, M. A., Roediger, H. L., & Karpicke, J. D. (2013). Covert retrieval practice benefits retention as much as overt retrieval practice. Journal of Experimental Psychology: Learning, Memory, and Cognition, 39 , 1712–1725.

Son, J. Y., & Rivas, M. J. (2016). Designing clicker questions to stimulate transfer. Scholarship of Teaching and Learning in Psychology, 2 , 193–207.

Szpunar, K. K., Khan, N. Y., & Schacter, D. L. (2013). Interpolated memory tests reduce mind wandering and improve learning of online lectures. Proceedings of the National Academy of Sciences, 110 , 6313–6317.

Thomson, R., & Mehring, J. (2016). Better vocabulary study strategies for long-term learning. Kwansei Gakuin University Humanities Review, 20 , 133–141.

Trafton, J. G., & Reiser, B. J. (1993). Studying examples and solving problems: contributions to skill acquisition . Technical report, Naval HCI Research Lab, Washington, DC, USA.

Tran, R., Rohrer, D., & Pashler, H. (2015). Retrieval practice: the lack of transfer to deductive inferences. Psychonomic Bulletin & Review, 22 , 135–140.

Turner, K. [doc_kristy]. (2016a). My dual coding (in red) and some y8 work @AceThatTest they really enjoyed practising the technique [Tweet]. Retrieved from https://twitter.com/doc_kristy/status/807220355395977216 . Accessed 25 Dec 2017.

Turner, K. [doc_kristy]. (2016b). @FurtherEdagogy @doctorwhy their work is revision work, they already have the words on a different page, to compliment not replace [Tweet]. Retrieved from https://twitter.com/doc_kristy/status/807360265100599301 . Accessed 25 Dec 2017.

Valle, A., Regueiro, B., Núñez, J. C., Rodríguez, S., Piñeiro, I., & Rosário, P. (2016). Academic goals, student homework engagement, and academic achievement in elementary school. Frontiers in Psychology, 7 .

Van Gog, T., & Sweller, J. (2015). Not new, but nearly forgotten: the testing effect decreases or even disappears as the complexity of learning materials increases. Educational Psychology Review, 27 , 247–264.

Wammes, J. D., Meade, M. E., & Fernandes, M. A. (2016). The drawing effect: evidence for reliable and robust memory benefits in free recall. Quarterly Journal of Experimental Psychology, 69 , 1752–1776.

Weinstein, Y., Gilmore, A. W., Szpunar, K. K., & McDermott, K. B. (2014). The role of test expectancy in the build-up of proactive interference in long-term memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40 , 1039–1048.

Weinstein, Y., Nunes, L. D., & Karpicke, J. D. (2016). On the placement of practice questions during study. Journal of Experimental Psychology: Applied, 22 , 72–84.

Weinstein, Y., & Weinstein-Jones, F. (2017). Topic and quiz spacing spreadsheet: a planning tool for teachers [Blog Post]. Retrieved from http://www.learningscientists.org/blog/2017/5/11-1 . Accessed 25 Dec 2017.

Weinstein-Jones, F., & Weinstein, Y. (2017). Topic spacing spreadsheet for teachers [Excel macro]. Zenodo. http://doi.org/10.5281/zenodo.573764 . Accessed 25 Dec 2017.

Williams, D. [FurtherEdagogy]. (2016). @doctorwhy @doc_kristy word accompanying the visual? I’m unclear how removing words benefit? Would a flow chart better suit a scientific exp? [Tweet]. Retrieved from https://twitter.com/FurtherEdagogy/status/807356800509104128 . Accessed 25 Dec 2017.

Wood, B. (2017). And now for something a little bit different….[Blog post]. Retrieved from https://justateacherstandinginfrontofaclass.wordpress.com/2017/04/20/and-now-for-something-a-little-bit-different/ . Accessed 25 Dec 2017.

Wooldridge, C. L., Bugg, J. M., McDaniel, M. A., & Liu, Y. (2014). The testing effect with authentic educational materials: a cautionary note. Journal of Applied Research in Memory and Cognition, 3 , 214–221.

Young, C. (2016). Mini-tests. Retrieved from https://colleenyoung.wordpress.com/revision-activities/mini-tests/ . Accessed 25 Dec 2017.

Acknowledgements

Not applicable.

Funding

YW and MAS were partially supported by a grant from The IDEA Center.

Availability of data and materials

Author information

Authors and Affiliations

Department of Psychology, University of Massachusetts Lowell, Lowell, MA, USA

Yana Weinstein

Department of Psychology, Boston College, Chestnut Hill, MA, USA

Christopher R. Madan

School of Psychology, University of Nottingham, Nottingham, UK

Christopher R. Madan

Department of Psychology, Rhode Island College, Providence, RI, USA

Megan A. Sumeracki

Contributions

YW took the lead on writing the “Spaced practice”, “Interleaving”, and “Elaboration” sections. CRM took the lead on writing the “Concrete examples” and “Dual coding” sections. MAS took the lead on writing the “Retrieval practice” section. All authors edited each others’ sections. All authors were involved in the conception and writing of the manuscript. All authors gave approval of the final version.

Corresponding author

Correspondence to Yana Weinstein .

Ethics declarations

Competing interests

YW and MAS run a blog, “The Learning Scientists Blog”, which is cited in the tutorial review. The blog does not make money. Free resources on the strategies described in this tutorial review are provided on the blog. Occasionally, YW and MAS are invited by schools/school districts to present research findings from cognitive psychology applied to education.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Weinstein, Y., Madan, C.R. & Sumeracki, M.A. Teaching the science of learning. Cogn. Research 3, 2 (2018). https://doi.org/10.1186/s41235-017-0087-y

Received: 20 December 2016

Accepted: 02 December 2017

Published: 24 January 2018

DOI: https://doi.org/10.1186/s41235-017-0087-y

Perspective | Published: 15 April 2021

Two views on the cognitive brain

David L. Barack & John W. Krakauer

Nature Reviews Neuroscience volume 22, pages 359–371 (2021)

Cognition can be defined as computation over meaningful representations in the brain to produce adaptive behaviour. There are two views on the relationship between cognition and the brain that are largely implicit in the literature. The Sherringtonian view seeks to explain cognition as the result of operations on signals performed at nodes in a network and passed between them that are implemented by specific neurons and their connections in circuits in the brain. The contrasting Hopfieldian view explains cognition as the result of transformations between or movement within representational spaces that are implemented by neural populations. Thus, the Hopfieldian view relegates details regarding the identity of and connections between specific neurons to the status of secondary explainers. Only the Hopfieldian approach has the representational and computational resources needed to develop novel neurofunctional objects that can serve as primary explainers of cognition.



Author information

Authors and affiliations

David L. Barack: Department of Philosophy and Department of Neuroscience, University of Pennsylvania, Philadelphia, PA, USA

John W. Krakauer: Department of Neurology, Department of Neuroscience, and Department of Physical Medicine and Rehabilitation, Johns Hopkins University School of Medicine, Baltimore, MD, USA; The Santa Fe Institute, Santa Fe, NM, USA

Contributions

D.L.B. and J.W.K. contributed equally to this work.

Corresponding authors

Correspondence to David L. Barack or John W. Krakauer .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Peer review information

Nature Reviews Neuroscience thanks T. Behrens, who co-reviewed with A. Baram; R. Krauzlis; and E. Miller for their contribution to the peer review of this work.


Glossary

  • Content: the referent of a state; what the state is about.
  • Dimensions: the set of basis elements whose combinations can describe any point in that space.
  • Exclusive disjunction: either A or B, but not both A and B.
  • Neural spaces: conceptualizations of brain regions as N-dimensional spaces, where each of the N dimensions corresponds to a neuron and the value along that dimension is the firing rate of that neuron.
  • Perceptrons: early artificial neural network models.
  • Reticular theory: an early idea about the brain’s biological organization that maintained the brain is a continuous network not divisible into cells.
  • Representations that have semantic content and can be mapped on to the content given some context of use.
  • A point or a region of neural space.
  • Tonotopy: an orderly arrangement of the representation of auditory tones in the brain, from lowest to highest.


About this article

Cite this article

Barack, D.L. & Krakauer, J.W. Two views on the cognitive brain. Nat Rev Neurosci 22, 359–371 (2021). https://doi.org/10.1038/s41583-021-00448-6

Accepted: 25 February 2021; Published: 15 April 2021; Issue date: June 2021



Piaget’s Cognitive Developmental Theory: Critical Review

Education Quarterly Reviews, Vol. 2, No. 3 (2019)

9 Pages, posted 19 August 2019

Zana Babakr (Soran University), Pakstan Mohamedamin, and Karwan Kakamad

Date Written: August 15, 2019

In the last century, Jean Piaget proposed one of the most famous theories of cognitive development in children. Piaget proposed four cognitive developmental stages for children: sensorimotor, preoperational, concrete operational, and formal operational. Although Piaget’s theories have had a great impact on developmental psychology, his notions have not been accepted without critique. Piaget’s theory has some shortcomings, including overestimating the abilities of adolescents and underestimating the capacities of infants. Piaget also neglected cultural and social-interaction factors in the development of children’s cognition and thinking ability. Moreover, in terms of its methodological approach, Piaget’s work had some ethical and bias problems, as he studied his own children. However, Piaget’s contributions, particularly with regard to the process of education among children and to transferring cognition into psychology, have had a significant effect on the science of child development.

Keywords: Cognitive Development, Sensorimotor, Preoperational, Concrete Operational, Formal Operational, Child Development


Cognitive load theory, educational research, and instructional design: some food for thought

  • Open access
  • Published: 27 August 2009
  • Volume 38, pages 105–134 (2010)

Ton de Jong

Cognitive load is a theoretical notion with an increasingly central role in the educational research literature. The basic idea of cognitive load theory is that cognitive capacity in working memory is limited, so that if a learning task requires too much capacity, learning will be hampered. The recommended remedy is to design instructional systems that optimize the use of working memory capacity and avoid cognitive overload. Cognitive load theory has advanced educational research considerably and has been used to explain a large set of experimental findings. This article sets out to explore the open questions and the boundaries of cognitive load theory by identifying a number of problematic conceptual, methodological and application-related issues. It concludes by presenting a research agenda for future studies of cognitive load.


Introduction

This article discusses cognitive load theory, a theory relating working memory characteristics to the design of instructional systems. Theories of the architecture of human memory make a distinction between long term memory and short term memory. Long term memory is that part of memory where large amounts of information are stored (semi-)permanently, whereas short term memory is the memory system where small amounts of information are stored (Cowan 2001; Miller 1956) for a very short duration (Dosher 2003). The original term “short term memory” has since been replaced by “working memory” to emphasize that this component of memory is responsible for the processing of information. More recent and advanced theories distinguish two subsystems within working memory: one for holding visuospatial information (e.g., written text, diagrams) and one for phonological information (e.g., narration; Baddeley and Hitch 1974). The significance of working memory capacity for cognitive functioning is evident: a series of studies has found that individual working memory performance correlates with cognitive abilities and academic achievement (for an overview see Yuan et al. 2006).

The characteristics of working memory have informed the design of artifacts for human functioning such as car operation (Forlines et al. 2005) and the processing of financial data (Rose et al. 2004). Another area in which the implications of working memory characteristics are studied is instructional design. This research field finds its roots in work by Sweller and colleagues in the late 1980s and early 1990s (Chandler and Sweller 1991; Sweller 1988, 1989; Sweller et al. 1990). Their cognitive load theory has subsequently had a great impact on researchers and designers in the field of education. Basically, cognitive load theory asserts that learning is hampered when working memory capacity is exceeded in a learning task. Cognitive load theory distinguishes three different types of contributions to total cognitive load. Intrinsic cognitive load relates to inherent characteristics of the content to be learned, extraneous cognitive load is the load caused by the instructional material used to present the content, and germane cognitive load refers to the load imposed by learning processes.

Cognitive load theory has recently been the subject of criticism regarding its conceptual clarity (Schnotz and Kürschner 2007) and methodological approaches (Gerjets et al. 2009a). The current article follows this line of thinking. It explores the boundaries of cognitive load theory by presenting a number of questions concerning its foundations, by discussing a number of methodological issues, and by examining the consequences for instructional design. The core of the literature reviewed consists of the 35 most frequently cited articles, with “cognitive load” as a descriptor, taken from the Web of Science areas “educational psychology” and “education and educational research”. (The virtual H-index for “cognitive load” in these categories was 35.) These have been supplemented with a selection of recent articles on cognitive load from major journals. The current article concludes by suggesting a role for cognitive load theory in educational theory, research, and design.

The three types of cognitive load revisited

Intrinsic cognitive load

Intrinsic cognitive load relates to the difficulty of the subject matter (Cooper 1998; Sweller and Chandler 1994). More specifically, material that contains a large number of interactive elements is regarded as more difficult than material with a smaller number of elements and/or with a low interactivity. Low interactivity material consists of single, simple elements that can be learned in isolation, whereas in high interactivity material individual elements can only be well understood in relation to other elements (Sweller 1994; Sweller et al. 1998). Pollock et al. (2002) give the example of a vocabulary where individual words can be learned independently of each other as an instance of low interactivity material, and grammatical syntax or the functioning of an electrical circuit as examples of high interactivity material. This implies that some (high interactivity) content by its nature consumes more of the available cognitive resources than other (low interactivity) material. However, intrinsic load is not merely a function of qualities of the subject matter but also of the prior knowledge a learner brings to the task (Bannert 2002; Sweller et al. 1998). An important premise regarding intrinsic load is that it cannot be changed by instructional treatments.
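Element interactivity is defined verbally above; a crude, hypothetical operationalization can help fix the idea. In the sketch below, material is treated as a set of elements plus the dependencies among them, and interactivity is the mean number of co-dependent elements per element. The vocabulary and electrical-circuit examples mirror Pollock et al. (2002); the function and its name are illustrative assumptions, not a measure proposed in the cognitive load literature.

```python
# Hypothetical, illustrative reading of "element interactivity": material is
# a set of elements plus dependency links, and an element is harder to learn
# the more other elements it must be understood against.

vocabulary = {          # low interactivity: each word can be learned in isolation
    "casa": [],
    "perro": [],
    "libro": [],
}

circuit = {             # high interactivity: Ohm's law ties every element together
    "voltage": ["current", "resistance"],
    "current": ["voltage", "resistance"],
    "resistance": ["voltage", "current"],
}

def interactivity(material):
    """Mean number of co-dependent elements per element (0 = isolable items)."""
    return sum(len(deps) for deps in material.values()) / len(material)

print(interactivity(vocabulary))  # 0.0 -> low intrinsic load
print(interactivity(circuit))     # 2.0 -> high intrinsic load
```

On this reading, intrinsic load scales with how many elements must be held in working memory simultaneously, which is why the same learner can find a long vocabulary list easy and a three-variable law hard.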

The first premise is that intrinsic load (which can be expressed as experienced difficulty of the subject matter) depends on the number of domain elements and their interactivity. This, however, is not the complete story, and intrinsic load also seems to depend on other characteristics of the material. First, some types of content seem to be intrinsically more difficult than others, despite having the same number of elements and the same interactivity. Klahr and Robinson (1981), for example, found that children had considerably more difficulty solving Tower of Hanoi problems in which they had to end up with all pegs occupied compared to the traditional Tower of Hanoi problem where all disks must end up on one peg. This effect occurred despite the number of moves being the same in both problems. Second, learning difficulty increases independently of element interactivity when learners must change ontological categories. Chi (1992) gives a number of examples of learners who attribute the wrong ontological category to a concept (e.g., seeing “force” as a material substance) and find it very difficult to change to the correct ontological category. Chi (2005) adds to this that certain concepts (e.g., equilibrium) have emergent ontological characteristics, making them even more difficult for students to understand. Training students in ontological categories helps to improve their learning (Slotta and Chi 2006). Third, specific characteristics of relations are also seen as being related to difficulty. Campbell (1988) mentions, in an overview study, aspects of the material that contribute to difficulty, such as “negative” relations between elements (that is, conflicting information) and ambiguity or uncertainty of relations.

The second premise concerning intrinsic cognitive load is that it cannot be changed by instructional treatments. Ayres (2006b), for example, describes intrinsic load as “fixed and innate to the task…” (p. 489). As a consequence, intrinsic load is unaffected by external influences. “It [intrinsic load] cannot be directly influenced by instructional designers although… it certainly needs to be considered” (Sweller et al. 1998, p. 262). Or as Paas et al. (2003a, p. 1) state: “Different materials differ in their levels of element interactivity and thus intrinsic cognitive load, and they cannot be altered by instructional manipulations…”. More recent work also explicitly states that intrinsic load cannot be changed (e.g., Hasler et al. 2007; Wouters et al. in press). A somewhat different stance is taken in other work. Sequencing the material in a simple-to-complex order so that learners do not experience its full complexity at the outset is a way to control intrinsic load (van Merriënboer et al. 2003). Pollock et al. (2002) introduced a similar approach in which isolated elements were introduced before the integrated task (see also Ayres 2006a). Gerjets et al. (2004) also argue that instructional approaches can change intrinsic load. Along with the simple-to-complex sequencing, they suggest part-whole sequencing (partial tasks are separately trained) and “modular presentation of solution procedures”. In the latter approach, learners are confronted with the complete task but without references to ‘molar’ concepts such as problem categories. Van Merriënboer et al. (2006) also suggest reducing intrinsic load by a whole-part approach in which the material is presented in its full complexity from the start, but learners’ attention is focused on subsets of interacting elements. Van Merriënboer and Sweller (2005) see these approaches as all fully in line with cognitive load theory, because both simple-to-complex and whole-part approaches start with few elements and gradually build up complexity.

Intrinsic load as defined within cognitive load theory is an interesting concept that helps explain why some types of material are more difficult than others and how this may influence the load on memory. However, the analysis above also shows that difficulty (and thus memory load) is not determined solely by number and interaction of elements and that there are techniques that may help to control intrinsic load.

Extraneous cognitive load

Extraneous cognitive load is cognitive load that is evoked by the instructional material and that does not directly contribute to learning (schema construction). As van Merriënboer and Sweller (2005, p. 150) write: “Extraneous cognitive load, in contrast, is load that is not necessary for learning (i.e., schema construction and automation) and that can be altered by instructional interventions”. Following this definition, extraneous load is imposed by the material but could have been avoided with a different design. A number of general sources of extraneous cognitive load are mentioned in the literature. The “split-attention” effect, for instance, refers to the separate presentation of domain elements that require simultaneous processing. In this case, learners must keep one domain element in memory while searching for another element in order to relate it to the first. Split attention may refer to spatially separated elements (as in visual presentations) or temporal separation of two elements, as in multi-media presentations (Ayres and Sweller 2005; Lowe 1999). This can be remedied by presenting material in an integrated way (e.g., Cerpa et al. 1996; Chandler and Sweller 1992; Sweller and Chandler 1991). A second identified source of extraneous cognitive load is when students must solve problems for which they have no schema-based knowledge; in general, this refers to conventional practice problems (Sweller 1993). In this situation students may use means-ends analysis as a solution procedure (Paas and van Merriënboer 1994a). Though this is an effective way of solving problems, it also requires keeping many elements (start goal, end goal, intermediate goals, operators) in working memory. To remedy this, students can be offered “goal free problems”, “worked out problems” or “completion problems” instead of traditional problems (Atkinson et al. 2000; Ayres 1993; Renkl et al. 2009, 1998; Rourke and Sweller 2009; Sweller et al. 1998; Wirth et al. 2009). A third source of extraneous load may arise when the instructional design uses only one of the subsystems of working memory. More capacity can be used when both the visual and auditory parts of working memory are addressed. This “modality principle” implies that material is more efficiently presented as a combination of visual and auditory material (see amongst others Low and Sweller 2005; Sweller et al. 1998; Tindall-Ford et al. 1997). A fourth source of unnecessary load occurs when learners must coordinate materials having the same information. Cognitive resources can be freed by including just one of the two (or more) sources of information. This is called the “redundancy principle” (Craig et al. 2002; Diao and Sweller 2007; Sweller 2005; Sweller et al. 1998).

The majority of studies in the traditional cognitive load approach have focused on extraneous cognitive load, with the aim of reducing this type of load (see for example Mayer and Chandler 2001 ; Mayer and Moreno 2002 , 2003 ). This all seems quite logical; learners should not spend time and resources on processes that are not relevant for learning. However, there are still a few issues that merit attention. First, designs that seem to elicit extraneous processes may, at the same time, stimulate germane processes. For example, when there are two representations with essentially the same information, such as a graph and a formula, these can be considered as providing redundant information leading to higher extraneous load (Mayer et al. 2001 ); but relating the two representations and making translations from one representation to the other (including processes of abstraction) can equally well be regarded as a process of acquiring deep knowledge (Ainsworth 2006 ). Reducing extraneous load in these cases may therefore also remove the affordances for germane processes. Paas et al. ( 2004 , pp. 3–4) provide an example of this when they write: “In some learning environments, extraneous load can be inextricably bound with germane load. Consequently, the goal to reduce extraneous load and increase germane load may pose problems for instructional designers. For instance, in nonlinear hypertext-based learning environments, efforts to reduce high extraneous load by using linear formats may at the same time reduce germane cognitive load by disrupting the example comparison and elaboration processes.” Second, there seem to be limits to the reduction of extraneous load; an interesting question is whether extraneous load can be zero. Can environments be designed that have no extraneous cognitive load? In practice there are limits to what can be designed. Representations should be integrated to reduce the effects of split attention, but in many cases this would lead to “unreadable” representations. Having information in different places is then preferable to cluttering one place with all the information. Third, like intrinsic load, extraneous load is not independent of the prior experience of the learner. A learner’s awareness of specific conventions governing the construction of learning material assists with processing and thus reduces extraneous cognitive load. Some researchers even use a practice phase to “… avoid any extraneous load caused by an unfamiliar interface” (Schnotz and Rasch 2005 , p. 50). Fourth, as recent research results indicate, some characteristics of instructional material that have always been regarded as extraneous may not hinder learning, if the material is well designed. Mayer and Johnson ( 2008 ), for example, found that redundant information is advantageous as long as it is short and placed near the information to which it refers.

Eliminating characteristics of learning material that are not necessary for learning will help students to focus on the learning processes that matter. This has been one of the important lessons for instructional design from cognitive load theory. However, the above analysis also shows that it is not always evident which characteristics of material can be regarded as being extraneous.

Germane cognitive load

Cognitive load theory sees the construction and subsequent automation of schemas as the main goal of learning (see e.g., Sweller et al. 1998 ). The construction of schemas involves processes such as interpreting, exemplifying, classifying, inferring, differentiating, and organizing (Mayer 2002 ). The load that is imposed by these processes is denominated germane cognitive load. Instructional designs should, of course, try to stimulate and guide students to engage in schema construction and automation and in this way increase germane cognitive load. Sweller et al. ( 1998 ) give the example of presenting practice problems under a high variability schedule (different surface stories) as compared to a low variability schedule (same surface stories). Sweller et al. ( 1998 ) describe the finding that students under a high variability schedule report higher cognitive load and also achieve better scores on transfer tests. They explain this by stating that the higher cognitive load must have been germane in this case. Apart from the fact that this is a “post-hoc” explanation there seem to be no grounds for asserting that processes that lead to (correct) schema acquisition will impose a higher cognitive load than learning processes that do not lead to (correct) schemas. It could even be argued that poorly performed germane processes lead to a higher cognitive load than smoothly performed germane processes. The fragility of Sweller et al.’s conclusion is further illustrated by work in which the effect mentioned by Sweller et al. did not occur. In a study in which learners had to acquire troubleshooting skills in a distillery system, participants in a condition in which practice problems were offered under a high variability schedule showed better performance on transfer problems than a low variability group, but, as measured with a subjective rating scale, did not show higher cognitive load during learning (de Croock et al. 1998 ).

An interesting question concerning germane load is whether germane load can be too high. Cognitive load theory focuses on the “bad” effects of intrinsic and extraneous load, but since memory capacity is limited, even “good” processes may overload working memory. What would happen, for example, if an inexperienced learner were asked to abstract over a number of concrete cases? Could this task cause a memory overload for this learner? Some of the cognitive load articles suggest that this could indeed be the case. Van Merriënboer et al. ( 2002 ), for example, write that they intend to “increase, within the limits of total available cognitive capacity, germane cognitive load which is directly relevant to learning” (p. 12). This suggests both that too much germane load can be invoked, and also that the location of the borderline might be known. In another study, Stull and Mayer ( 2007 ) found that students who used author-generated graphic organizers outperformed students who created these organizers themselves. Stull and Mayer concluded that students in the self-generated organizer condition obviously had experienced more extraneous load (again as a post-hoc explanation), but it also might have been the case that these students had to perform germane processes that were too demanding. Kalyuga ( 2007 , p. 527) tried to solve this conceptual issue in the following way: “If specific techniques for engaging learners into additional cognitive activities designed to enhance germane cognitive load (e.g., explicitly self-explaining or imagining content of worked examples) cause overall cognitive load to exceed learner working memory limitations, the germane load could effectively become a form of extraneous load and inhibit learning.” This quote suggests that germane load can only be “good” and that when it taxes working memory capacity too much, it should be regarded as extraneous load. This, however, seems to be at odds with the accepted definition of extraneous load that was presented earlier.

Conceptual issues

Can the different types of cognitive load be distinguished?

The general assertion of cognitive load theory that the capacity of working memory is limited is not disputed. What may be questioned is another pillar upholding the theory, namely, that a clear distinction can be made between intrinsic, extraneous, and germane cognitive load. This section discusses differences between the different types of load at a conceptual level; a later section discusses whether the different forms of load can be distinguished empirically.

The first distinction to be discussed is that between intrinsic and germane load. The essential problem here is that these concepts are defined as being of a different (ontological) character. Intrinsic load refers to “objects” (the material itself) and germane load refers to “processes” (what goes on in learning). The first question is how intrinsic load, defined as a characteristic of the material, can contribute to the cognitive load of the learner. A contribution to cognitive load cannot come from the material as such, but can only take place when the learner starts processing the material. Without any action on the side of the learner, there cannot be cognitive load. This means that cognitive load only starts to exist when the learner relates elements, makes abstractions, creates short cuts, etc. Mayer and Moreno ( 2003 ), for example, use the term “essential processing” to denote processes of selecting words, selecting images, organizing words, organizing images, and integrating. Although Mayer and Moreno ( 2003 ) associate “essential processing” with germane load, the processes involved seem to be close to making sense of the material itself without relating it to prior knowledge and without yet engaging in schema construction, and as such to be characteristic of intrinsic load. A slightly alternative way to look upon the contribution of material characteristics to cognitive load would be to see these characteristics as mediating towards germane processes, meaning that some germane processes are harder to realize for some material than for others, or that different germane processes may be necessary for different types of material. But that means that intrinsic load is then defined in terms of opportunities for germane processing. If intrinsic load and germane load are defined in terms of relatively similar learning processes, the difference between the two seems to be very much a matter of degree, and possibly non-existent. In this respect it is interesting to note that the very early publications on cognitive load do not distinguish between intrinsic and germane load but only between extraneous load and load exerted by learning processes (Chandler and Sweller 1991 ; Sweller 1988 ).

The second distinction to be addressed is that between extraneous and germane load. Paas et al. ( 2004 , p. 3) present the following distinction: “If the load is imposed by mental activities that interfere with the construction or automation of schemas, that is, ineffective or extraneous load, then it will have negative effects on learning. If the load is imposed by relevant mental activities, i.e., effective or germane load, then it will have positive effects on learning.” The first issue apparent in this definition is its quite tautological character (what is good is good, what is bad is bad). Second, this definition seems to extend the definition of extraneous load as originating from unnecessary processes that are a result of poorly designed instructional formats (which was the original definition; see above) to all processes that do not lead to schema construction (for a critical comment see also Schnotz and Kürschner 2007 ). Does this extension, therefore, broaden the idea of extraneous load related to mental activities that could be avoided but occupy mental space to include wrongly performed “germane” activities as well? Or, stated differently, do instructional designs that lead to “mistakes” in students’ schemas (but still aim at schema construction processes) contribute to extraneous load? The difference between extraneous and germane load is additionally problematic because, as was the case with intrinsic load, this distinction seems to depend on the expertise level of the learner. Paas et al. ( 2004 , pp. 2–3) write: “A cognitive load that is germane for a novice may be extraneous for an expert. In other words, information that is relevant to the process of schema construction for a beginning learner may hinder this process for a more advanced learner”. This notion is related to the so-called “expertise-reversal effect”, which means that what is good for a novice might be detrimental for an expert. In a similar vein, Kalyuga ( 2007 , p. 515) states: “Similarly, instructional methods for enhancing levels of germane load may produce cognitive overload for less experienced learners, thus effectively converting germane load for experts into extraneous load for novice learners.” Taken together, this implies that it is not the nature of the processes that counts but rather the way they function. A further problem is that germane processes can be considered to be extraneous depending on the learning goal (see also, Schnotz and Kürschner 2007 ). Gerjets and Scheiter ( 2003 ), for example, report on a study in which students in a surface-emphasizing approach condition outperformed students who had followed a structure-emphasizing approach in solving isomorphic problems. The group with the structure-emphasizing approach showed longer learning times. This is interpreted by the authors as time devoted to schema construction by use of abstraction processes. This, however, was not very useful for solving isomorphic problems and should thus be judged as extraneous in this case, according to Gerjets and Scheiter. In a similar vein, Scott and Schwartz ( 2007 ) studied the function of navigational maps for learning from hypertexts and found that depending on the use of the maps for understanding or navigation, the cognitive load could be regarded as either germane or extraneous.

The distinction between intrinsic and extraneous load may also be troublesome. Intrinsic load is defined as the load stemming from the material itself; therefore intrinsic load in principle cannot be changed if the underlying material stays the same. But papers by influential authors present different views. For example, in a recent study, DeLeeuw and Mayer ( 2008 ) manipulated intrinsic load by changing complexity of sentences in a text while leaving the content the same. This manipulation seems to affect extraneous load more than intrinsic load.

Can the different types of cognitive load simply be added?

An important premise of cognitive load theory is that the different types of cognitive load can be added. Sweller et al. (1998, p. 263) are very clear on the additivity of intrinsic and extraneous load: “Intrinsic cognitive load due to element interactivity and extraneous cognitive load due to instructional design are additive”. Also in later work, it is very clearly stated that the different types of load are additive (Paas et al. 2003a). There are, however, indications that the total load experienced cannot simply be regarded as the sum of the three different types of load. This is, for example, illustrated in the relation between intrinsic and germane load, where there are two different possible interpretations. If intrinsic and germane load are seen as members of two distinct ontological categories (“material” and “cognitive processes”, respectively), there are principled objections to adding the two together. However, if they are regarded as members of the same ontological category (namely “cognitive processes”) the two may interact. For instance, if learners engage in processes to understand the material as such (related to intrinsic load) this will help to perform germane processes as well.
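Stated formally, the additivity premise under discussion is total load = intrinsic + extraneous + germane, with learning hampered once the sum exceeds working memory capacity. The toy sketch below encodes that premise literally, which also makes the objection visible: the model contains no interaction terms between the three components, and the load units and capacity value are arbitrary assumptions.

```python
# The additivity premise of cognitive load theory, taken literally:
# total = intrinsic + extraneous + germane, compared against a fixed
# working memory capacity. Units and the capacity value are arbitrary;
# the theory itself specifies neither.

CAPACITY = 10.0

def total_load(intrinsic, extraneous, germane):
    # A plain sum: no interaction terms, which is exactly what the
    # critique above calls into question.
    return intrinsic + extraneous + germane

def overloaded(intrinsic, extraneous, germane):
    return total_load(intrinsic, extraneous, germane) > CAPACITY

# Reducing extraneous load frees capacity for germane processing:
print(overloaded(intrinsic=6, extraneous=3, germane=2))  # True  (11 > 10)
print(overloaded(intrinsic=6, extraneous=1, germane=2))  # False (9 <= 10)
```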

Is there a difference between load and effort?

Merriam-Webster’s on-line dictionary defines effort as “the total work done to achieve a particular end” and load as “the demand on the operating resources of a system”. Clearly, effort and load are related concepts; one important difference is that effort is something a system exerts, whereas load is something a system experiences. The terms effort and load are often used as synonyms in cognitive load theory, but sometimes a distinction is made.

Paas ( 1992 , p. 429) presents an early attempt to differentiate the two concepts taking the above distinction into account: “Cognitive load is a multidimensional concept in which two components—mental load and mental effort—can be distinguished. Mental load is imposed by instructional parameters (e.g., task structure, sequence of information), and mental effort refers to the amount of capacity that is allocated to the instructional demands.” Paas introduces here a distinction between cognitive load, mental load, and mental effort. In discussing the concept of cognitive load, Paas and van Merriënboer ( 1994a ) distinguish causal factors, those that influence cognitive load (including the task, learner characteristics, and interactions between these two), and assessment factors. For the assessment factors, mental load is again seen as a component of cognitive load. Along with this, they distinguish mental effort and task performance as other components. According to Paas and van Merriënboer, mental load is related to the task, and mental effort and performance reflect all three causal factors. This means that mental load is determined by the task only, and that mental effort and performance are determined by the task, subject characteristics and their interactions. Paas and van Merriënboer are very clear with respect to the (non-existent) role of subject characteristics with regard to mental load: “With regard to the assessment factors, mental load is imposed by the task or environmental demands. This task-centered dimension, considered independent of subject characteristics, is constant for a given task in a given environment” (Paas and van Merriënboer 1994a , p. 354). However, Paas et al. ( 2003b , p. 64) write: “ Mental load is the aspect of cognitive load that originates from the interaction between task and subject characteristics. According to Paas and van Merriënboer’s ( 1994a ) model, mental load can be determined on the basis of our current knowledge about task and subject characteristics.” So, they present a different view of the role of participant characteristics in relation to mental load even when quoting the original paper.

The role of performance is often interpreted differently as well. In the paper by Paas and van Merriënboer, performance is regarded as one of the three assessments (along with mental load and mental effort) that is a reflection of (among other things) cognitive load. However, Kirschner ( 2002 ) gives a somewhat different place to performance when he writes: “Mental load is the portion of CL that is imposed exclusively by the task and environmental demands. Mental effort refers to the cognitive capacity actually allocated to the task. The subject’s performance, finally, is a reflection of mental load, mental effort, and the aforementioned causal factors.” (p. 4). According to Kirschner, these causal factors can be “characteristics of the subject (e.g., cognitive abilities), the task (e.g., task complexity), the environment (e.g., noise), and their mutual relations” (p. 3). In Kirschner’s definition, performance is not one of the three measures reflecting (an aspect of) cognitive load, but it is determined by the other two assessment factors.

The implications of the definitions of load and effort from Merriam-Webster’s on-line dictionary are very straightforward: load is something experienced, whereas effort is something exerted. Following this approach, one might say that intrinsic and extraneous cognitive load concern cognitive activities that must unavoidably be performed, so they fall under cognitive load; germane cognitive load occupies the space that is left over, which the learner can decide how to use, so this can be labelled as cognitive effort. This seems to be the tenor of the message of Sweller et al. (1998, p. 266) when they write: “Mental load refers to the load that is imposed by task (environmental) demands. These demands may pertain to task-intrinsic aspects, such as element interactivity, which are relatively immune to instructional manipulations and to task-extraneous aspects associated with instructional design. Mental effort refers to the amount of cognitive capacity or resources that is actually allocated to accommodate the task demands.” Kirschner’s (2002) definition given above also suggests that the term “load” can be reserved for intrinsic and extraneous load, whereas “effort” is more associated with germane load. However, in this same paper, Kirschner (2002, p. 4) uses the terms again as synonyms when load is defined in terms of effort: “… extraneous CL is the effort required to process poorly designed instruction, whereas germane CL is the effort that contributes, as stated, to the construction of schemas”. Another complication is that some authors state that processing extraneous characteristics is under learners’ control. Gerjets and Scheiter (2003) report on a study in which a strong reduction of study time did not impair learning, whereas cognitive load theory would predict a drop in performance. They explain this by assuming that learners make strategic decisions under time pressure and that they may decide to increase germane load and ignore distracting information, that is, to decrease extraneous load.

A related conceptual problem is that it is not clear whether cognitive load is defined relative to capacity in cognitive load theory. Working memory is assumed to consist of two distinct parts (visual and phonological). If both parts are used, the capacity of working memory is larger than when only one is used. As Sweller et al. (1998, pp. 281–282) write: “Although less than purely additive, there seems to be an appreciable increase in capacity available by the use of both, rather than a single, processor.” It is not clear, however, whether cognitive load is seen as relative to this total capacity. If the material is presented in two modalities instead of one, will load decrease (if taken as relative to capacity) or will it stay the same (in that basically the same material is offered and only the capacity has changed)?
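
To make the two readings concrete, they can be written schematically (our notation, not taken from any of the cited papers), with D the processing demand imposed by the material and C the working-memory capacity available:

```latex
L_{\mathrm{absolute}} = D
\qquad \text{versus} \qquad
L_{\mathrm{relative}} = \frac{D}{C},
\qquad \text{with } C_{\mathrm{dual}} > C_{\mathrm{single}}.
```

Presenting the same material in two modalities leaves D, and hence the absolute reading, unchanged, whereas the relative reading decreases because C grows.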

The above discussion shows that cognitive load is not only a complex concept but also one that is often not well defined. Although some indications are given, none of the studies makes it very clear how mental load and mental effort relate to the central concepts of intrinsic, extraneous, and germane cognitive load. This terminological uncertainty has direct consequences both for how studies on influencing cognitive load are designed and for how cognitive load is measured.

Research issues

The measurement of cognitive load

In many studies there is no direct measurement of cognitive load; the level of cognitive load is instead inferred from results on knowledge post-tests. It is then argued that when results on a knowledge post-test are low(er), cognitive load has (obviously) been (too) high(er) (see for example, Mayer et al. 2005). Recently, DeLeeuw and Mayer (2008, p. 225) made this more explicit by stating that “one way to examine differences in germane processing… is to compare students who score high on a subsequent test of problem-solving transfer with those who score low”. Similarly, Stull and Mayer (2007, p. 808) stated: “Although we do not have direct measures of generative and extraneous processing during learning in these studies, we use transfer test performance as an indirect measure. In short, higher transfer test performance is an indication of less extraneous processing and more generative processing during learning”. Of course, there is much uncertainty in this type of reasoning, and therefore many authors have expressed the need for a direct measurement of cognitive load. Mayer et al. (2002, p. 180), for example, state: “Admittedly, our argument for cognitive load would have been more compelling if we had included direct measures of cognitive load…”. Three different groups of techniques are used to measure cognitive load: self-ratings through questionnaires, physiological measures (e.g., heart rate variability, fMRI), and secondary tasks (for an overview see, Paas et al. 2003b).

Measuring cognitive load through self-reporting

One of the most frequently used methods for measuring cognitive load is self-reporting, as becomes clear from the overview by Paas et al. (2003b). The most frequently used self-report scale in educational science was introduced by Paas (1992). This questionnaire consists of one item in which learners indicate their “perceived amount of mental effort” on a 9-point rating scale (Paas 1992, p. 430). In research that uses this measure, reported effort is seen as an index of cognitive load (see also, Paas et al. 1994, p. 420). Though used very frequently, questionnaires based on the work by Paas (1992) have no standard format. Differences are seen in the number of units used for the scale(s), the labels used as scale ends, and the timing (during the learning process or after).

Following Paas’ original work, a scale with nine points is used most frequently (example studies are Kester et al. 2006a; Paas 1992; Paas et al. 2007; Paas and van Merriënboer 1993; van Gerven et al. 2002), but a 7-point scale is also used quite often (see e.g., Kablan and Erden 2008; Kalyuga et al. 1999; Moreno and Valdez 2005; Ngu et al. 2009; Pollock et al. 2002). Still others use a 5-point scale (Camp et al. 2001; Huk 2006; Salden et al. 2004), a 10-point scale (Moreno 2004), a 100-point scale (Gerjets et al. 2006), or a continuous (electronic) scale with or without numerical values (de Jong et al. 1999; Swaak and de Jong 2001; van Gerven et al. 2002).

Questionnaires also differ in the anchor terms used at the extremes of the scale(s). In the original questionnaire by Paas, participants were asked for their mental effort, with the scale extremes being “very, very low mental effort” and “very, very high mental effort”. Moreno and Valdez (2005, p. 37) asked learners “How difficult was it for you to learn about the process of lightning”. Ayres (2006a) also used a scale measuring difficulty, with the extremes “extremely easy” and “extremely difficult”, but still called this mental effort. The same holds for a study by Yeung et al. (1997), who used the extremes “very, very easy” and “very, very difficult”. Pollock et al. (2002) combined difficulty and understanding in one question by asking students “How easy or difficult was it to learn and understand the electrical tests from the instructions you were given?” (p. 68). In some studies the two issues (effort and difficulty) are queried separately and then combined into one metric (Moreno 2007; Zheng et al. 2009). Still others (e.g., Moreno 2004) combine different aspects by asking participants “how helpful and difficult (mental effort)” the program was (p. 104). Clearly, questions differ in asking for “effort”, “difficulty”, and related concepts, and in whether they relate these concepts to the material, the learning process, or the resulting knowledge (understanding). An empirical distinction between these questions is reported by Kester et al. (2006a), who used a 9-point scale to gauge the mental effort experienced during learning, but asked separately for the mental effort required to understand the subject matter. While their results showed an effect of interventions on experienced mental effort during learning, no effect was found on mental effort for understanding. In their conclusion on the use of efficiency measures in cognitive load research, van Gog and Paas (2008, p. 23) even state that “… the outcomes of the effort and difficulty questions in the efficiency formula are completely opposite”. This means that the outcomes of studies may differ significantly depending on the specifics of the question asked.

The timing of the questionnaire also differs across studies. Most studies present the questionnaire only after learning has taken place (for example, Ayres 2006a; Hasler et al. 2007; Kalyuga et al. 1999; Pociask and Morrison 2008; Tindall-Ford et al. 1997), whereas other studies repeat the same questionnaire several times during (and sometimes also after) the learning process (for example, Kester et al. 2006a; Paas et al. 2007; Stark et al. 2002; Tabbers et al. 2004; van Gog et al. 2008; van Merriënboer et al. 2002). The more often cognitive load is measured, the more accurate the resulting picture of actual cognitive load, certainly if it is assumed that cognitive load may vary during the learning process (Paas et al. 2003b). It is also not clear whether learners are able to estimate an average themselves; measuring cognitive load several times and computing an average may therefore give the researcher a different value than asking the learner for a single overall judgement at the end. Moreover, it is questionable whether an average load over the whole process is the type of measure that does justice to the principles of cognitive load theory, because it does not capture (instantaneous) cognitive overload (see also below).

Along with, and probably related to, this diversity in the use of the one-item questionnaire comes inconsistency in the outcomes of studies that use it. First, the obtained values have no agreed-upon meaning; second, findings based on the questionnaire are not well connected to theoretical predictions. A problematic issue with the one-item questionnaire is that values on the scales can be interpreted differently between studies. Levels of effort or difficulty reported as beneficial in one study are associated with the poorest scoring condition in other work. Pollock et al. (2002, Experiment 1), for example, report cognitive load scores of around 3 (43% on a scale of 7) for their “best” condition, whereas Kablan and Erden (2008) found a value for cognitive load of 2.5 (36% on a scale of 7) for their “poorest” (separated information) condition. The highest performing group for Pollock et al. thus reports a higher cognitive load than the poorest performing group for Kablan and Erden (and in both cases a low cognitive load score was seen as profitable for learning). The literature shows a wide variety of scores on the one-item questionnaire, with a specific score sometimes associated with “good” and sometimes with “poor” performance. It therefore seems that there is no consensus on what counts as a high (let alone a too high) or a low cognitive load score. Nonetheless, some authors treat the scores as a kind of absolute measure. For example, Rikers (2006, p. 361), in a critical overview of a number of cognitive load studies, writes when discussing a study by Seufert and Brünken (2006): “As a result of this complexity, intrinsic cognitive load was increased to such an extent that hyperlinks could only be used superficially. This explanation is very plausible and substantiated by the students’ subjective cognitive load that ranged between M = 4.38 (63%) and 5.63 (80%)”. Apart from the fact that the different types of load were not measured separately in this study, so that there is no reason to assume that the scores were determined mainly by intrinsic load, it is also not clear why some scores are seen as high and others as low.
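
The comparison above rests on putting ratings from different scales on a common footing by expressing each rating as a percentage of its own scale maximum. A minimal sketch of this normalization (the numbers are the ones quoted in the text):

```python
# Express a rating as a percentage of its scale maximum, as done informally
# above when comparing studies that used different rating scales.
def as_percentage(rating: float, scale_max: int) -> float:
    return 100 * rating / scale_max

print(round(as_percentage(3.0, 7)))  # 43: "best" condition in Pollock et al.
print(round(as_percentage(2.5, 7)))  # 36: "poorest" condition in Kablan and Erden
```

Even after such normalization, however, there is no agreed-upon anchor: 43% counts as low in one study while 36% counts as high in another.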

Taking the variations above into account, it may not be surprising to find that the relation of cognitive load measures to instructional formats is not very consistent. In their overview of cognitive load measures, Paas et al. (2003b) claim that measures of cognitive load are related to differences in instructional formats in the majority of studies. This conclusion does not seem completely valid; not all of the studies they mention have shown such results. For example, in their overview Paas et al. (2003b, p. 69) state that the original study by Paas (1992) found worked-out examples and completion problems to be superior in terms of extraneous load, but extraneous load during learning was not measured in that study; beyond that, no differences between conditions were found on invested mental effort during the learning process. As Paas (1992) himself writes: “Apparently the processes required to work during specific instruction demanded the same amount of mental effort in all conditions,…” (p. 433). And although many studies have indeed found differences in cognitive load, it is also instructive to consider studies in which differences in resulting performance were found without any associated differences in cognitive load, such as the original study by Paas (1992), Stark et al. (2002), Tabbers et al. (2004), Lusk and Atkinson (2007), Hummel et al. (2006), de Westelinck et al. (2005), Clarebout and Elen (2007), Seufert et al. (2007, study 2), Fischer et al. (2008), Beers et al. (2008), Wouters et al. (2009), Amadieu et al. (2009), Kester and Kirschner (2009), and de Koning et al. (in press). Conversely, there are also studies in which differences in experienced cognitive load were found but no differences on performance tests (Paas et al. 2007; Seufert et al. 2007). Often these unexpected outcomes are explained by pointing to external factors (e.g., motivation), but the validity of the measure itself is never questioned. Seufert et al. (2007) found different effects on cognitive load, as measured with a one-item questionnaire, in different studies with the same set-up. Their speculative interpretation is that in one study participants interpreted the question in terms of extraneous cognitive load, whereas in the other study participants seemed to have used an overall impression of the load.

The overview above shows that there are many variations in how the technique of self-reporting is applied, that there are questions about what is really measured, and that results cannot always be interpreted unequivocally. Paas et al. (2003b, p. 66), however, claim that the uni-dimensional scale is reliable, sensitive to small differences in cognitive load, valid, and non-intrusive. These claims are based on work by Paas et al. (1994), who presented an analysis of the reliability and sensitivity of the subjective rating method. Their data, in turn, come from two other studies, namely Paas (1992) and Paas and van Merriënboer (1994b). Students in these two studies had to solve problems, both in a training and in a test phase. The subjective rating of perceived mental effort was given after solving each problem. The studies had several conditions, but only the problems that were the same in all conditions (separately for both studies) were used in the analysis. Concerning reliability, Paas et al. report Cronbach’s αs of .90 and .82 for the two studies. This tells us that there is a high correlation between the effort reported for the different problems. However, that would characterize cognitive load, measured as mental effort, as a student-bound characteristic, while according to the theory, mental effort is influenced by the task, participants’ characteristics, and the interaction between those two factors (Paas et al. 1994). The interaction component in particular could mean that some problems elicit more effort from some students and other problems elicit more effort from other students. In that case the theory would not predict a high reliability. Concerning sensitivity, Paas et al. (1994) claim that the one-item questionnaire is sensitive to experimental interventions. Their supporting evidence comes from the Paas (1992) study, in which several expected differences were found in the subjective ratings of test items based on the experimental conditions. However, they fail to mention that, in that same study, no differences between conditions were found on the practice problems, even though such differences were expected to occur. Quite a number of other studies were also reported above in which expected differences on the subjective scale failed to appear. Of course, those differences could be non-existent, but it could also mean that the measure is not as sensitive as is claimed. In addition, in the Paas (1992) study, the questionnaire was given after each problem, while most later studies used only one question after the entire learning session; the latter procedure may further reduce the sensitivity of the measure. Paas et al. (2003b) also claim that the Paas et al. (1994) study established the validity of the construct as measured by the subjective questionnaire, but no data on this can be found in the quoted work. No comparisons were made with other measures to establish any of the central aspects of validity (concurrent, discriminant, predictive, convergent, and criterion validity). Some articles (e.g., Kalyuga et al. 2001a, b; Lusk and Atkinson 2007) claim that older work by Moray (1982) validated the subjective measure of cognitive load with objective measures. However, Moray only looked at the concept of “perceived difficulty” within the domain of cognitive tasks and, in fact, warned that there had been no work relating (subjective) difficulty to mental workload.
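
The reliability argument can be made concrete with a small sketch (hypothetical ratings, not data from the cited studies): when some students consistently rate their effort low and others consistently rate it high across all problems, Cronbach’s α over the per-problem ratings approaches 1, which is precisely the pattern that treats effort as a person-bound trait rather than as a task-by-person interaction.

```python
import numpy as np

# rows = students, columns = mental-effort ratings (1-9) for six problems
ratings = np.array([
    [2, 3, 2, 3, 2, 3],   # consistently low-effort reporter
    [5, 5, 6, 5, 6, 5],
    [8, 7, 8, 8, 7, 8],   # consistently high-effort reporter
])

def cronbach_alpha(x: np.ndarray) -> float:
    k = x.shape[1]                            # number of problems (items)
    item_vars = x.var(axis=0, ddof=1).sum()   # variance per problem, summed
    total_var = x.sum(axis=1).var(ddof=1)     # variance of per-student totals
    return k / (k - 1) * (1 - item_vars / total_var)

print(round(cronbach_alpha(ratings), 2))  # ~0.99: between-student differences dominate
```

A strong task-by-person interaction would, by contrast, push α down, which is why a high α sits uneasily with the theory’s own causal model.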

Physiological measures as indications for cognitive load

A second set of measures of cognitive load consists of physiological measures. One of these is heart rate variability. Paas and van Merriënboer (1994b) classified this measure as invalid and insensitive (and quite intrusive); in addition, Nickel and Nachreiner (2003) concluded that heart rate variability could be used as an indicator of time pressure or emotional strain, but not of mental workload. Recent work, however, gives more support to this measure (Lahtinen et al. 2007) or to related measures that combine heart rate and blood pressure (Fredericks et al. 2005). Pupillary reactions are also used and are regarded as sensitive to cognitive load variations (Paas et al. 2003b; van Gog et al. 2009), but have been employed in only a limited number of studies (e.g., Schultheis and Jameson 2004). There are, however, indications that the sensitivity of this measure to workload changes diminishes with the age of the participants (van Gerven et al. 2004). Cognitive load can also be assessed using neuro-imaging techniques. Though this is a promising field (see e.g., Gerlic and Jausovec 1999; Murata 2005; Smith and Jonides 1997; Volke et al. 1999), there are still questions about how to relate brain activations precisely to cognitive load. Overall, physiological measures show promise for assessing cognitive load but (still) have the disadvantage of being quite intrusive.

Dual tasks for estimating cognitive load

A third way of measuring cognitive load is through the dual-task or secondary-task approach (Brünken et al. 2003). In this approach, borrowed from psychological research on tasks such as car driving (Verwey and Veltman 1996), a secondary task is introduced alongside the main task (in this case learning). A good secondary task could be a simple monitoring task, such as reacting as quickly as possible to a color change (Brünken et al. 2003) or a change of the screen background color (DeLeeuw and Mayer 2008), or detecting a simple auditory stimulus (Brünken et al. 2004) or a visual stimulus (a colored letter on the screen) (Cierniak et al. 2009). A more complicated secondary task could be remembering and encoding single letters when cued (Chandler and Sweller 1996) or seven letters (Ayres 2001). The principle of this approach is that slower or more inaccurate performance on the secondary task indicates greater consumption of cognitive resources by the primary task, as sketched below. A dual-task approach has an advantage over, for example, a questionnaire, in that it measures cognitive load concurrently, as it occurs. Though Brünken et al. (2003) have empirically shown the suitability of the dual-task approach for measuring cognitive load, this technique is used relatively rarely (for exceptions see, Ayres 2001; Brünken et al. 2002; Chandler and Sweller 1996; Ignacio Madrid et al. 2009; Marcus et al. 1996; Renkl et al. 2003; Sweller 1988).
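
A minimal sketch of how such a secondary-task measure is typically scored (the reaction times are hypothetical, not taken from any of the cited studies):

```python
# Secondary-task logic: the reaction-time cost relative to a single-task
# baseline serves as the index of load imposed by the primary (learning) task.
def mean(xs):
    return sum(xs) / len(xs)

baseline_rt = [420, 435, 410]        # ms, secondary task performed alone
dual_rt_lesson_a = [510, 540, 495]   # ms, during one instructional format
dual_rt_lesson_b = [640, 700, 655]   # ms, during another format

cost_a = mean(dual_rt_lesson_a) - mean(baseline_rt)  # ~93 ms
cost_b = mean(dual_rt_lesson_b) - mean(baseline_rt)  # ~243 ms
# The larger slowdown under format B would be read as higher cognitive load;
# unlike a retrospective rating, this cost can also be tracked over time.
```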

Overall problems with cognitive load measures

Three main problems remain with cognitive load measures. The first is that cognitive load measures are always interpreted in relative terms, whereas overload is an absolute notion. The second is that an overall rating of cognitive load, as is often utilized, does not help much in interpreting results in terms of cognitive load theory, because the different kinds of cognitive load contribute differently to learning. And the third is that the most frequently used measures are not sensitive to variations over time.

The basic principle of cognitive load theory is that learning is hindered when cognitive overload occurs, in other words, when working memory capacity is exceeded (see e.g., Khalil et al. 2005). However, the measures discussed here cannot be used to measure overload; the critical level indicating overload is unknown. Instead, many studies compare the level of cognitive load across learning conditions and conclude that the condition with the lower reported load is the one best suited for learning (e.g., van Merriënboer et al. 2002). What should count in cognitive load theory, however, is not the relative level of effort or difficulty reported, but the absolute level. Overload occurring in one condition is independent of what happens in another condition; in fact, all conditions in a study could show overload.
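
A trivial sketch makes the point explicit (the capacity threshold and load values are hypothetical; no such threshold is known in practice):

```python
# Overload is a comparison against capacity, not against another condition.
capacity = 7          # unknown in real studies; assumed here for illustration
load_a, load_b = 8, 9

print(load_a < load_b)     # True: condition A looks "better" in relative terms ...
print(load_a > capacity)   # True: ... yet both conditions exceed capacity,
print(load_b > capacity)   # True: that is, both show overload.
```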

Studies that measure only one overall concept of cognitive load do not do justice to its multidimensional character (Ayres 2006b). Though more popular in other scientific domains, the use of multi-dimensional rating scales for workload, such as the NASA-TLX, is rare within educational science (for examples see, Fischer et al. 2008; Gerjets et al. 2006; Kester et al. 2006b). When cognitive load is measured as one concept, there is no distinction between intrinsic load, extraneous load, and germane load. This means that if one instructional intervention shows a lower level of cognitive load than another, this may be due to a lower level of extraneous load, a lower level of germane load, or both (assuming that intrinsic load is constant over interventions). Attributing better performance on a post-test to a reduction of extraneous load, as most of these studies do, is a post-hoc explanation without direct evidence, despite the measurement of cognitive load. This issue is clearly phrased by Wouters et al. (in press), who found differences between conditions in performance but no differences in mental effort, and who explain this in the following way:

The mental effort measure that was used did not differentiate between mental effort due to the perceived difficulty of the subject matter, the presentation of the instructional material or engaging in relevant learning activities. It is possible that the effects in the conditions with or without illusion of control on mental effort have neutralized each other. In other words, the illusion of control conditions may have imposed rather high extraneous cognitive load and low germane cognitive load, whereas the no illusion of control may have imposed rather low extraneous cognitive load and high germane cognitive load.

And in a related study these same authors (Wouters et al. 2009, p. 7) write:

The mental effort measure used did not differentiate between mental effort due to perceived difficulty of the subject matter, presentation of the instructional material, or being engaged in relevant learning activities. It is possible that the effect on the perceived mental effort of the varying design guidelines, that is, modality and reflection prompts, have neutralized each other.

There are developments, mostly recent, that aim to measure the different types of cognitive load separately. In de Jong et al. (1999), participants were asked, through an on-line pop-up questionnaire, to use sliders to answer three questions about (1) the perceived difficulty of the subject matter, (2) the perceived difficulty of interacting with the environment itself, and (3) the perceived helpfulness of the instructional measures (the functionality of the tools) that they had used. Gerjets et al. (2006, see pp. 110–111) asked participants three questions concerning cognitive load: ‘task demands’ (how much mental and physical activity was required to accomplish the learning task, e.g., thinking, deciding, calculating, remembering, looking, searching, etc.), ‘effort’ (how hard the participant had to work to understand the contents of the learning environment), and ‘navigational demands’ (how much effort the participant had to invest to navigate the learning environment). These represented, according to Gerjets et al., intrinsic, germane, and extraneous load, respectively. In another recent study, Corbalan et al. (2008) used two measures for assessing cognitive load. Participants had to rate their “effort to perform the task” (p. 742) on a 7-point scale and, in addition, after each task, participants were asked to indicate, on a one-item 7-point scale, their “effort invested in gaining understanding of the relationships dealt with in the simulator and the task” (p. 744). Corbalan et al. call the first a measure of task load and the second a measure of germane load. Cierniak et al. (2009) asked students three different questions. The intrinsic load question was “How difficult was the learning content for you?”. The extraneous load question was “How difficult was it for you to learn with the material?”. The germane load question was “How much did you concentrate during learning?”. Gerjets et al. (2009b) used five questions to measure cognitive load: one item for intrinsic load, three for extraneous load, and one for germane load. Overall, these studies give inconsistent results, raising doubts about whether students can themselves distinguish between the different types of load.

A third issue is that the measurement of cognitive load is rarely time-related. Paas et al. (2003b), citing Xie and Salvendy (2000), distinguish between different types of load in relation to time: instantaneous load, peak load, accumulated load, average load, and overall load. Instantaneous load is the load at a specific point in time, and peak load is the maximum instantaneous load experienced. Average load is the mean instantaneous load experienced during a specified part of the task, and overall load refers to the average load over the whole period of learning. In addition, there is accumulated load, which is the total load experienced. Paas et al. (2003b) state that both instantaneous load and overall load are important for cognitive load theory, but since the theory only refers to the addition of types of load and to overload, only instantaneous load should be considered for these purposes. According to Paas et al. (2003b), physiological measures are suited for capturing instantaneous load because they can monitor the load constantly, whereas overall load is typically measured with questionnaire-based measures. However, physiological measures are very rarely used in cognitive load research, whereas questionnaire measures of overall load prevail. A related problem is that it is not clear whether learners who fill in a questionnaire have overall load or accumulated load in mind when answering the question. In this respect, Paas et al. (2003b, p. 69) state: “It is not clear whether participants took the time spent on the task into account when they rated cognitive load. That means it is unknown whether a rating of 5 on a 9-point scale for a participant who worked for 5 min on a task is the same as a rating of 5 for a participant who worked for 10 min on a task.” These considerations show that the time aspect is not well considered when measuring cognitive load.
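
The relations between these time-related load concepts are simple enough to state directly; a minimal sketch with hypothetical per-minute load samples:

```python
# Time-related load concepts following Xie and Salvendy (2000), computed over
# a hypothetical series of instantaneous load samples (one per minute).
samples = [3, 5, 8, 9, 4, 2]

peak_load = max(samples)                     # 9: maximum instantaneous load
accumulated_load = sum(samples)              # 31: total load experienced
overall_load = sum(samples) / len(samples)   # ~5.2: average over the whole task
first_half_average = sum(samples[:3]) / 3    # ~5.3: average load for a task part

# A single retrospective rating at best approximates overall_load, yet
# overload is a property of peak_load: this learner may have been overloaded
# in minute 4 even though the overall average looks moderate.
```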

Cognitive load and learning efficiency measures

Many articles on cognitive load present an instructional efficiency measure that expresses in a single metric the (relative) efficiency of an instructional approach. This measure was introduced by Paas and van Merriënboer (1993), who used the formula E = |R − P|/√2, where P is the standardized performance score (on a post-test) and R is the standardized mental effort score (as measured in the test phase). Paas and van Merriënboer explain the formula by stating that if R − P < 0, then E is positive, and if R − P > 0, then E is negative. (Many later authors have used the simpler expression E = (P − R)/√2.) In this formula the mental effort is the effort reported by participants in relation to the post-test. The metric therefore basically expresses the level of automatization of the students’ resulting knowledge. For this reason it expresses the (relative) effectiveness of the instructional intervention rather than, as Paas and van Merriënboer state, its efficiency. In part for similar reasons, Tuovinen and Paas (2004) state that this measure is more related to transfer. The measure, as introduced by Paas and van Merriënboer, has led to some confusion in the literature, at both a technical and a conceptual level. As an example of the first type of confusion, Tindall-Ford et al. (1997) write the formula as E = (M − P)/√2 (M = mental effort; P = performance), thus reversing the original formula in their presentation. These authors write “E can be either positive or negative. The greater the value of E, the more efficient the instructional condition” (p. 272), which would mean that higher mental effort and lower performance implies better efficiency. A second example involves Marcus et al. (1996), who use the original formula by Paas and van Merriënboer, in which the absolute value of R − P is taken, and who nonetheless state that this value can be negative or positive. A third example can be found in the article by Kalyuga et al. (1998, p. 7), who display the formula as E = (P − R) ÷ |√2|, and thus erroneously take the absolute value of √2.
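
A small numeric sketch (hypothetical z-scores, not data from the cited studies) of the now-common form E = (P − R)/√2 illustrates both the sign convention and the ambiguity discussed below:

```python
import math

def efficiency(p_z: float, r_z: float) -> float:
    # E = (P - R) / sqrt(2), with P and R z-standardized performance and
    # mental-effort scores; positive when performance is high relative to effort.
    return (p_z - r_z) / math.sqrt(2)

print(efficiency(1.0, -1.0))   # +1.41: high performance at low effort
print(efficiency(-1.0, 1.0))   # -1.41: low performance at high effort
print(efficiency(1.0, 1.0))    #  0.00: high performance bought with high effort ...
print(efficiency(-1.0, -1.0))  #  0.00: ... is indistinguishable from low
                               #        performance at low effort
# The reversed form E = (M - P)/sqrt(2) simply flips these signs, so a
# "greater E" would then mean more effort for less performance.
```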

Along with these technical issues, there have also been conceptual misunderstandings regarding the efficiency measure. As indicated above, the original measure is actually a measure of the quality of the resulting knowledge, not of the instructional condition. This has led many authors to use mental effort related to learning as the mental effort factor in the equation, in order to express a quality of the learning process. A recent overview by van Gog and Paas (2008) indeed shows that the majority of studies have used a cognitive load rating associated with the learning phase; very few studies have used a rating associated with the testing phase. Van Gog and Paas conclude that the original measure (using cognitive load during the test phase) is more suitable when instructional measures aim to increase germane load, and that the measure using cognitive load in the learning phase is better suited for situations in which the intention is to reduce extraneous load. This does not seem justified. Cognitive load theory focuses on what happens during knowledge acquisition, so measuring cognitive load during learning is essential. Measuring cognitive load during performance after learning may add interesting information on the quality of the participants’ knowledge, and in this respect it is valuable; but that would be the case in any study that concerns learning and performance, not just studies within the cognitive load tradition. There are, moreover, further conceptual difficulties with the measure. A low performance associated with a low cognitive load results in a similar efficiency value as a high performance with a high cognitive load, which leads to unclear experimental situations. Moreno and Valdez (2005, p. 37), for example, studying the difference between students who were presented with a set of frames in the right order (the NI group) and students who had to order those frames themselves (the I group), write:

However, predictions about differences in the relative instructional efficiency of I and NI conditions are unclear. The predicted combination of relatively higher performance and higher cognitive load for Group I and relatively lower performance and lower cognitive load for Group NI may lead to equivalent intermediate efficiency conditions.

Studies from cognitive load theory sometimes use plain cognitive load measures to compare instructional conditions (e.g., de Koning et al. in press) and sometimes use efficiency measures (e.g., Paas et al. 2007), even when, as in these two examples, the same phenomenon is studied (animations) on the same topic (the cardiovascular system). Since it is known that the two measures may lead to different results, shifting between measures does not assist in drawing overall interpretations of experimental outcomes. Cognitive load theory research would certainly benefit from standardization in this respect.

Individual differences

Individual differences may influence the outcomes of studies conducted within the cognitive load tradition. One aspect that has been taken up by cognitive load research, and is frequently reported nowadays, is that instructional treatments that should reduce extraneous load work differently for individuals with low versus high expertise. This is called the expertise reversal effect (Kalyuga 2007; Kalyuga et al. 2003). This phenomenon implies that instructional designs that follow cognitive load recommendations are only beneficial for “… learners with very limited experience” (Kalyuga et al. 2003, p. 23). Examples of studies in which this effect was found are Yeung et al. (1997), Kalyuga et al. (1998), Tuovinen and Sweller (1999), Cooper et al. (2001), Clarke et al. (2005), and Ayres (2006a). The idea behind the expertise reversal effect is that intrinsic load decreases with increasing expertise. This means that treatments that reduce extraneous load are more effective for less knowledgeable students than for more experienced students and experts. It also means that instruction can gradually be adapted to a learner’s developing expertise. Renkl and Atkinson (2003), for example, state that intrinsic load can be lower in later stages of knowledge acquisition, so that “fading” can take place; for instance, a transition in later stages from worked-out problems to conventional problems might be superior for learning. An extensive discussion of the expertise reversal effect and an overview of related studies is given in Kalyuga (2007).

Another individual characteristic that has been found to interact with the effects of cognitive load related instructional treatments is spatial ability. Several studies report differential effects for high and low spatial ability students. For example, Mayer and Sims (1994) performed a study in which participants had to learn about the functioning of a bicycle tire pump. One group saw an animation with concurrent narration explaining the working of the pump; another group saw the animation before the narration. The concurrent group outperformed the successive group overall, as could be expected on the basis of the contiguity (or split attention) effect, but this effect, although strong for high spatial ability students, was not prominent for those with low spatial ability. Other studies that report differential cognitive load effects in relation to spatial ability are Plass et al. (2003), Moreno and Mayer (1999b), and Huk (2006).

An individual characteristic that is by nature closely related to cognitive load theory is working memory capacity. It is clear from many studies that in addition to intra-individual differences in working memory capacity (see e.g., Sandberg et al. 1996) there are also inter-individual differences (see e.g., Miyake 2001). Different tests for measuring working memory capacity exist (for an overview see, Yuan et al. 2006). The best known of these is the operation span task (OSPAN) introduced by Turner and Engle (1989); different operationalizations of this test exist (see e.g., Pardo-Vazquez and Fernandez-Rey 2008). Several research fields use working memory capacity tests and have found relations between scores on these tests and performance. Examples include studies of reading ability (de Jonge and de Jong 1996) and mathematical reasoning (Passolunghi and Siegel 2004; Wilson and Swanson 2001). The influence of working memory capacity on learning has also repeatedly been found (see e.g., Dutke and Rinck 2006; Perlow et al. 1997; Reber and Kotovsky 1997). In cognitive load research, however, working memory capacity is hardly ever measured. Exceptions can be found in work by van Gerven et al. (2002, 2004), who used a computation span test to control for working memory capacity between experimental groups. Another recent example is a study by Lusk et al. (2009), who measured participants’ individual working memory capacity with the OSPAN test and assessed the effects of segmentation of multimedia material in relation to working memory capacity. They found that students with high working memory capacity recalled more than students with low working memory capacity and generated more valid interpretations of the material presented. They also found a positive effect of segmentation on post-test scores for both recall and interpretation, and an interaction in which lack of segmentation was especially detrimental for students with a low working memory capacity. A further very recent example is a study by Seufert et al. (2009), who measured working memory capacity with a “numerical updating test” in which participants had to memorize and perform additions and subtractions in an evolving matrix of numbers. This study addressed the modality effect, which appeared to hold only for learners with low working memory capacity; for learners with a high working memory capacity, combining audio and visual information was less beneficial than presenting just visual information. Finally, Berends and van Lieshout (2009) measured participants’ memory capacity with a digit span test and found that adding illustrations to algebra word problems decreased students’ performance overall, but this effect was less pronounced for students with a higher working memory capacity.

Studies in the cognitive load field predominantly use a between-subjects design. This means, however, that cognitive load differences between conditions are also influenced by “… individual differences, such as abilities, interest, or prior knowledge…” (Brünken et al. 2003, p. 57). A good recommendation, based also on the results above, is to include these individual characteristics as control variables in experimental set-ups, so that differences between groups cannot be attributed to these relevant student characteristics. Working memory capacity in particular should be included as a measure, even in studies at an individual level. If we accept the definition of load presented above as “the demand on the operating resources of a system”, it is not enough to know the demand (which could mean the effort applied); the system’s capacity must also be known in order to determine (over)load.

The external validity of research results

Cognitive load theory is used in presenting guidelines for instructional design. Overviews of instructional designs based on cognitive load theory are given, for example, by Sweller et al. (1998), Mayer (2001), and Mayer and Moreno (2003). This presupposes that results from cognitive load research are applicable in realistic teaching and learning situations. The next few sections explore some questions concerning the external validity of cognitive load research.

When does “overload” occur in realistic situations?

Two generally accepted premises of cognitive load theory are that overload may occur and that overload is harmful for learning. Overload means that at some point in time the required memory capacity (the instantaneous load) is higher than what is available. In realistic learning situations, however, learners have means to avoid instantaneous overload. First, when there is no time pressure, persons who perform a task can perform different parts of it sequentially, thus avoiding overload at any particular moment. Second, in realistic situations learners will use devices (e.g., a notepad) to offload memory. People rarely perform a complex calculation mentally without noting down intermediate results; this, however, is hardly ever allowed in cognitive load research. A study by Bodemer and Faust (2006) found that when students were asked to integrate separated pieces of information, learning was better when learners could do this physically than when they had to do it mentally. And this is indeed what a normal learner in a normal learning situation would do. Cognitive load theory does apply to learning highly demanding, complex, time-critical tasks, such as flying a fighter plane, in which learners must use all available resources to make the right decisions in a very short time and cannot swap, replay, or make annotations. Nearly all studies in the cognitive load tradition, however, use tasks for which the limited capacity of working memory is not actually at stake; these studies are often designed in such a way as to prevent swapping or off-loading, thus creating a situation that is artificially time-critical.

Participants and study time

A substantial portion of the research on cognitive load theory includes participants who have no specific interest in learning the domain involved and who are also given a very short study time. Mayer and Johnson (2008), for example, gave students a PowerPoint slideshow of about 2.5 min. In Moreno and Valdez (2005), learners had 3 min to study a sequence of multimedia frames. In Hasler et al. (2007), participants saw a system-paced video of almost 4 min. In Stull and Mayer (2007), psychology students studied a text about a biology topic for 5 min. DeLeeuw and Mayer (2008) presented participants with a 6-min multimedia lesson. In many other studies participants worked a little longer, but the time period was still too short for a realistic learning task. Also, in this field of research it is common for the research subjects to be psychology students who are given the task of learning material chosen by the researcher (see, e.g., Ignacio Madrid et al. 2009; Moreno 2004). Research with short study times and with students who have no direct engagement with the domain may very well be used to test the basic cognitive mechanisms of cognitive load theory, but raises problems when the results are translated into practical recommendations. This has been acknowledged within cognitive load research itself by van Merriënboer and Ayres (2005), who recommend studying students who are working on realistic tasks with realistic study times. A recent example of this type of work can be found in Kissane et al. (2008).

Study conditions

There are other aspects of the experimental conditions set up by researchers that undermine the relevance of study outcomes to educational practice. We can take as an example one of the central phenomena in cognitive load theory, the modality principle (Mayer 1997, 2001). This principle says that if information (graphs, diagrams, or animations) is presented over different modalities (visual and auditory), more effective learning will result than when the information is presented only visually. The modality principle has been observed in a large set of studies, as can be gathered from several overviews (Ginns 2005; Mayer 2001; Moreno 2006). A few recent studies, however, could not find support for the modality principle (Clarebout and Elen 2007; Dunsworth and Atkinson 2007; Elen et al. 2008; Opfermann et al. 2005; Tabbers et al. 2004; Wouters et al. 2009). The main difference between the studies in which the modality effect could be demonstrated and the ones that failed to find evidence for it seems to be the amount of learner control. In studies in which the modality effect was found (e.g., Kalyuga et al. 1999, 2000; Mayer and Moreno 1998; Moreno and Mayer 1999a; Mousavi et al. 1995), information presentation was system-paced and generally very rapid, so that even the information presented in the visual-only condition could hardly be processed. A relevant quote in this respect comes from a paper by Moreno and Mayer (2007), who discuss the modality principle and write: “By the time that the learner selects relevant words and images from one segment of the presentation, the next segment begins, thereby cutting short the time needed for deeper processing” (p. 319). The primary explanation Tabbers et al. (2004) give for what they call the “reverse modality effect” is that students in their study could pace their own instruction and move forward and backward through the material. Opfermann et al. (2005) explain their results by the fact that students in their study had sufficient time to study the material, comparable to a realistic study situation; students also had the opportunity to replay material. Elen et al. (2008) likewise point to the importance of learner control versus program control for achieving learning gains. Wouters et al. (2009) conclude: “Apparently, there are conditions (self-pacing, prompting attention) under which the modality effect does not hold true anymore”. Mayer et al. (2003) found support for the modality effect in a condition where there was learner control, but in addition they found that interactivity, the possibility of pacing and controlling the order of the presentation, enhanced learning outcomes. Or, as the authors write, “… interactivity reduces cognitive load by allowing learners to digest and integrate one segment of the explanation before moving on to the next.” (Mayer et al. 2003, p. 810). Harskamp et al. (2007) studied the modality effect in a realistic situation (although still with a relatively short study time of 6–11 min) and no possibility of taking notes. The domain was biology (animal behavior) and the material consisted of pictures with paper-based or narrated text. The data confirmed the modality effect, showing that there are cases in which the effect is also found under learner control. However, a further experiment by Harskamp et al. (2007) showed that this was only the case for the fast learners, who more or less acted as if there was system control. For learners who decided on their own to take more time to learn, the modality effect was not found.

These results could indicate that the modality effect holds mainly in situations with (very) short learning times and system control. It could also mean that in the system-paced conditions of these studies, students who received all the information on screen simply did not have time to process everything in the short time the system allowed (see Bannert 2002). This explanation also implies that the limitations of short-term memory (which indeed become evident under the conditions mentioned) can be overcome by allowing students to use more time and to switch back and forth between the information, as is the case in normal learning situations.

Conclusions

Cognitive load theory has without doubt inspired many researchers and has given great momentum to educational research. In this paper, however, some critical questions were posed concerning the conceptual clarity, the methodological rigor, and the external generalizability of this work. No attempt was made to give an exhaustive and structured meta-analysis; instead, a number of issues were raised and illustrated with considerations and results from this field of research, including the 35 most influential ISI publications, in order to bring out some fundamental problems in the area. Here a short synthesis of these issues is presented, together with a number of topics for a research agenda.

An important point to note is that cognitive load theory is constructed in such a way that it is hard or even impossible to falsify (see also, Gerjets et al. 2009a). In particular, the fact that cognitive load is composed of three different elements that are “good” (germane), “bad” (extraneous), or just there (intrinsic) means that every outcome fits within the theory post-hoc. If learners perform better, a higher cognitive load must have been composed of germane processes; if they perform poorly, a higher cognitive load must have been extraneous. In addition, the fact that processes can be regarded as germane in one case and the same processes as extraneous in another means that the theory can account for nearly every situation. Many studies in cognitive load theory make rather speculative interpretations, on the basis of learning performances, of what happened with cognitive load during learning. Of course, what would help to make these interpretations valid is a suitable measure of cognitive load.

So far, cognitive load research has lacked such a measure. The measure most often used is the one-item questionnaire introduced by Paas (1992). Apart from the fact that this measure is used in many different forms and operationalizations throughout cognitive load research, it has two essential drawbacks: it does not give a concurrent measure of cognitive load, and it does not measure an essential concept in cognitive load theory, namely cognitive overload. New ways to measure cognitive load based on neuroscientific techniques might be able to overcome these two problems. Recently, Jaeggi et al. (2007) observed specific patterns of brain activity in cases of high cognitive load. It would be another step forward if techniques were available to distinguish between the three different types of cognitive load: intrinsic, extraneous, and germane. This could also help to clear up a number of the conceptual difficulties in cognitive load theory identified in this paper. It would be especially interesting to see whether the distinction between intrinsic and germane load stands firm under more precise assessment. This distinction entered cognitive load theory only at a later stage. Intrinsic and germane load belong to different ontological categories, and if the learning processes in both categories are examined, quite a few similarities turn up. A return to distinguishing simply between processes that contribute to knowledge generation and processes that do not may help to avoid a number of problems with the current version of the theory.

What has cognitive load theory brought to the field of educational design? The three main recommendations that come from cognitive load theory are: present material that aligns with the prior knowledge of the learner (intrinsic load), avoid non-essential and confusing information (extraneous load), and stimulate processes that lead to conceptually rich and deep knowledge (germane load). These design principles have been around in educational design for a long time (see e.g., Dick and Carey 1990; Gagné et al. 1988; Reigeluth 1983). Work in cognitive load theory often ignores this earlier research, as illustrated in the following quote from Ayres (2006a, p. 288): “Whereas strategies to lower extraneous load are well documented… methods to lower intrinsic load have only more recently been investigated”. In his study, Ayres introduces part-tasks as one of the initial approaches to lowering cognitive load; describing this as a “recent” approach denies much of the history of instructional design. The value of cognitive load theory until now has certainly been in directing extra and detailed attention to characteristics of instructional designs that may not contribute to learning. If intrinsic and extraneous load are high, it is clear that learning may be hampered, but setting these two forms of load low does not guarantee that learning will take place (see also, Schnotz and Kürschner 2007). This has also been recognized within cognitive load research itself, and a number of recent publications point out that cognitive load research has shifted its attention to stimulating germane processes (see e.g., Ayres and van Gog 2009; Kirschner 2002; van Merriënboer and Sweller 2005). What cognitive load research should do now is examine germane processes and estimate which germane processes are best suited for which learners, so that experienced cognitive load is optimized and cognitive overload is avoided. The great challenge will be to find load-reducing approaches for intensive knowledge-producing mechanisms such as learning from multiple representations (e.g., Ainsworth 2006; Brenner et al. 1997), self-explanations (Chi et al. 1994), inquiry learning (Linn et al. 2006), collaborative learning (Lou et al. 2001), and game-based learning (Nelson and Erlandson 2008). Combining these approaches, which strongly stimulate germane processes, with enough structure to avoid cognitive overload will most probably be one of the leading research themes in the near future (de Jong 2006; Kirschner et al. 2006; Mayer 2004).

The great achievement of cognitive load theory is that it has created unity in a diverse set of instructional design principles and that it has described a cognitive basis underlying these principles. It should not, however, remain at the level of confirming these general principles. Rather, its role is now to move forward and try to determine (1) which instructional treatments lead to which cognitive processes (and how), (2) what the corresponding effects are on memory workload and potential overload, (3) what characteristics of the learning material and the student mediate these effects and (4) how best to measure effects on working memory load in a theory-related manner. This will give a firmer foundation to principles of instructional design theory so that they can be applied in a more dedicated, flexible, and adaptive way.

Ainsworth, S. (2006). DeFT: A conceptual framework for considering learning with multiple representations. Learning and Instruction, 16 , 183–198.

Amadieu, F., Tricot, A., & Mariné, C. (2009). Prior knowledge in learning from a non-linear electronic document: Disorientation and coherence of the reading sequences. Computers in Human Behavior, 25 , 381–388.

Atkinson, R. K., Derry, S. J., Renkl, A., & Wortham, D. (2000). Learning from examples: Instructional principles from the worked examples research. Review of Educational Research, 70 , 181–214.

Ayres, P. (1993). Why goal free problems can facilitate learning. Contemporary Educational Psychology, 18 , 376–381.

Ayres, P. (2001). Systematic mathematical errors and cognitive load. Contemporary Educational Psychology, 26 , 227–248.

Ayres, P. (2006a). Impact of reducing intrinsic cognitive load on learning in a mathematical domain. Applied Cognitive Psychology, 20 , 287–298.

Ayres, P. (2006b). Using subjective measures to detect variations of intrinsic cognitive load within problems. Learning and Instruction, 16 , 389–400.

Ayres, P., & Sweller, J. (2005). The split-attention principle in multimedia learning. In R. E. Mayer (Ed.), Cambridge handbook of multimedia learning (pp. 135–147). Cambridge, UK: Cambridge University Press.

Ayres, P., & van Gog, T. (2009). State of the art research into cognitive load theory. Computers in Human Behavior, 25 , 253–257.

Baddeley, A. D., & Hitch, G. (1974). Working memory. In G. H. Bower (Ed.), The psychology of learning and motivation: Advances in research and theory (pp. 47–89). New York: Academic Press.

Bannert, M. (2002). Managing cognitive load—recent trends in cognitive load theory. Learning and Instruction, 12 , 139–146.

Beers, P., Boshuizen, H., Kirschner, P. A., Gijselaers, W., & Westendorp, J. (2008). Cognitive load measurements and stimulated recall interviews for studying the effects of information and communications technology. Educational Technology Research and Development, 56 , 309–328.

Berends, I. E., & van Lieshout, E. C. D. M. (2009). The effect of illustrations in arithmetic problem-solving: Effects of increased cognitive load. Learning and Instruction, 19 , 345–353.

Bodemer, D., & Faust, U. (2006). External and mental referencing of multiple representations. Computers in Human Behavior, 22 , 27–42.

Brenner, M. E., Mayer, R. E., Moseley, B., Brar, T., Duran, R., Reed, B., et al. (1997). Learning by understanding: The role of multiple representations in learning algebra. American Educational Research Journal, 34 , 663–689.

Brünken, R., Plass, J. L., & Leutner, D. (2003). Direct measurement of cognitive load in multimedia learning. Educational Psychologist, 38 , 53–62.

Brünken, R., Plass, J. L., & Leutner, D. (2004). Assessment of cognitive load in multimedia learning with dual-task methodology: Auditory load and modality effects. Instructional Science, 32 , 115–132.

Brünken, R., Steinbacher, S., Plass, J. L., & Leutner, D. (2002). Assessment of cognitive load in multimedia learning using dual-task methodology. Experimental Psychology, 49 , 1–12.

Camp, G., Paas, F., Rikers, R. M. J. P., & van Merriënboer, J. J. G. (2001). Dynamic problem selection in air traffic control training: A comparison between performance, mental effort and mental efficiency. Computers in Human Behavior, 17 , 575–595.

Campbell, D. J. (1988). Task complexity: A review and analysis. Academy of Management Review, 13 , 40–52.

Cerpa, N., Chandler, P., & Sweller, J. (1996). Some conditions under which integrated computer-based training software can facilitate learning. Journal of Educational Computing Research, 15 , 345–367.

Chandler, P., & Sweller, J. (1991). Cognitive load theory and the format of instruction. Cognition and Instruction, 8 , 293–332.

Chandler, P., & Sweller, J. (1992). The split attention effect as a factor in the design of instruction. British Journal of Educational Psychology, 62 , 233–246.

Chandler, P., & Sweller, J. (1996). Cognitive load while learning to use a computer program. Applied Cognitive Psychology, 10 , 151–170.

Chi, M. T. H. (1992). Conceptual change within and across ontological categories: Examples from learning and discovery in science. In R. N. Giere (Ed.), Cognitive models of science (Vol. 15, pp. 129–186). Minneapolis, MN: University of Minnesota Press.

Chi, M. T. H. (2005). Common sense conceptions of emergent processes: Why some misconceptions are robust. Journal of the Learning Sciences, 14 , 161–199.

Chi, M. T. H., Deleeuw, N., Chiu, M. H., & Lavancher, C. (1994). Eliciting self-explanations improves understanding. Cognitive Science, 18 , 439–477.

Cierniak, G., Scheiter, K., & Gerjets, P. (2009). Explaining the split-attention effect: Is the reduction of extraneous cognitive load accompanied by an increase in germane cognitive load? Computers in Human Behavior, 25 , 315–324.

Clarebout, G., & Elen, J. (2007). In search of pedagogical agents’ modality and dialogue effects in open learning environments. e-Journal of Instructional Science and Technology , 10 [Electronic Version].

Clarke, T., Ayres, P., & Sweller, J. (2005). The impact of sequencing and prior knowledge on learning mathematics through spreadsheet applications. Educational Technology Research & Development, 53 , 15–24.

Cooper, G. (1998). Research into cognitive load theory and instructional design at UNSW. From http://education.arts.unsw.edu.au/CLT_NET_Aug_97.HTML .

Cooper, G., Tindall-Ford, S., Chandler, P., & Sweller, J. (2001). Learning by imagining. Journal of Experimental Psychology: Applied, 7 , 68–82.

Corbalan, G., Kester, L., & van Merriënboer, J. J. G. (2008). Selecting learning tasks: Effects of adaptation and shared control on learning efficiency and task involvement. Contemporary Educational Psychology, 33 , 733–756.

Cowan, N. (2001). The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral & Brain Sciences, 24 , 87–114.

Craig, S. D., Gholson, B., & Driscoll, D. M. (2002). Animated pedagogical agents in multimedia educational environments: Effects of agent properties, picture features and redundancy. Journal of Educational Psychology, 94 , 428–434.

de Croock, M. B. M., van Merriënboer, J. J. G., & Paas, F. (1998). High vs. low contextual interference in simulation-based training of troubleshooting skills: Effects on transfer performance and invested mental effort. Computers in Human Behavior, 14 , 249–267.

de Jong, T. (2006). Computer simulations—technological advances in inquiry learning. Science, 312 , 532–533.

de Jong, T., Martin, E., Zamarro, J.-M., Esquembre, F., Swaak, J., & van Joolingen, W. R. (1999). The integration of computer simulation and learning support; an example from the physics domain of collisions. Journal of Research in Science Teaching, 36 , 597–615.

de Jonge, P., & de Jong, P. F. (1996). Working memory, intelligence and reading ability in children. Personality and Individual Differences, 21 , 1007–1020.

de Koning, B. B., Tabbers, H., Rikers, R. M. J. P., & Paas, F. (in press). Attention guidance in learning from a complex animation: Seeing is understanding. Learning and Instruction .

de Westelinck, K., Valcke, M., de Craene, B., & Kirschner, P. A. (2005). Multimedia learning in social sciences: Limitations of external graphical representations. Computers in Human Behavior, 21 , 555–573.

DeLeeuw, K. E., & Mayer, R. E. (2008). A comparison of three measures of cognitive load: Evidence for separable measures of intrinsic, extraneous, and germane load. Journal of Educational Psychology, 100 , 223–234.

Diao, Y., & Sweller, J. (2007). Redundancy in foreign language reading comprehension instruction: Concurrent written and spoken presentations. Learning and Instruction, 17 , 78–88.

Dick, W., & Carey, L. (1990). The systematic design of instruction (3rd ed.). New York: Harper Collins College Publishers.

Dosher, B. (2003). Working memory. In Encyclopedia of cognitive science (Vol. 4, pp. 569–577). New York: Wiley.

Dunsworth, Q., & Atkinson, R. K. (2007). Fostering multimedia learning of science: Exploring the role of an animated agent’s image. Computers & Education, 49 , 677–690.

Dutke, S., & Rinck, M. (2006). Multimedia learning: Working memory and the learning of word and picture diagrams. Learning and Instruction, 16 , 526–537.

Elen, J., van Gorp, E., & Kempen, K. H. (2008). The effects of multimedia design features on primary school learning materials. International Journal of Instructional Media, 35 , 7–15.

Fischer, S., Lowe, R. K., & Schwan, S. (2008). Effects of presentation speed of a dynamic visualization on the understanding of a mechanical system. Applied Cognitive Psychology, 22 , 1126–1141.

Forlines, C., Schmidt-Nielsen, B., Raj, B., Wittenburg, K., & Wolf, P. (2005). A comparison between spoken queries and menu-based interfaces for in-car digital music selection. Lecture Notes in Computer Science, 3585 , 536–549.

Fredericks, T. K., Choi, S. D., Hart, J., Butt, S. E., & Mital, A. (2005). An investigation of myocardial aerobic capacity as a measure of both physical and cognitive workloads. International Journal of Industrial Ergonomics, 35 , 1097–1107.

Gagné, R. M., Briggs, L. J., & Wager, W. W. (1988). Principles of instructional design (3rd ed.). Fort Worth, TX: Harcourt Brace Jovanovich.

Gerjets, P., & Scheiter, K. (2003). Goal configurations and processing strategies as moderators between instructional design and cognitive load: Evidence from hypertext-based instruction. Educational Psychologist, 38 , 33–42.

Gerjets, P., Scheiter, K., & Catrambone, R. (2004). Designing instructional examples to reduce intrinsic cognitive load: Molar versus modular presentation of solution procedures. Instructional Science, 32 , 33–58.

Gerjets, P., Scheiter, K., & Catrambone, R. (2006). Can learning from molar and modular worked examples be enhanced by providing instructional explanations and prompting self-explanations? Learning and Instruction, 16 , 104–121.

Gerjets, P., Scheiter, K., & Cierniak, G. (2009a). The scientific value of cognitive load theory: A research agenda based on the structuralist view of theories. Educational Psychology Review, 21 , 43–54.

Gerjets, P., Scheiter, K., Opfermann, M., Hesse, F. W., & Eysink, T. H. S. (2009b). Learning with hypermedia: The influence of representational formats and different levels of learner control on performance and learning behavior. Computers in Human Behavior, 25 , 360–367.

Gerlic, I., & Jausovec, N. (1999). Multimedia: Differences in cognitive processes observed with EEG. Educational Technology Research and Development, 47 , 5–14.

Ginns, P. (2005). Meta-analysis of the modality effect. Learning and Instruction, 15 , 313–331.

Harskamp, E. G., Mayer, R. E., & Suhre, C. (2007). Does the modality principle for multimedia learning apply to science classrooms? Learning and Instruction, 17 , 465–477.

Hasler, B. S., Kersten, B., & Sweller, J. (2007). Learner control, cognitive load and instructional animation. Applied Cognitive Psychology, 21 , 713–729.

Huk, T. (2006). Who benefits from learning with 3D models? The case of spatial ability. Journal of Computer Assisted Learning, 22 , 392–404.

Hummel, H. G. K., Paas, F., & Koper, E. J. R. (2006). Timing of cueing in complex problem-solving tasks: Learner versus system control. Computers in Human Behavior, 22 , 191–205.

Ignacio Madrid, R., van Oostendorp, H., & Puerta Melguizo, M. C. (2009). The effects of the number of links and navigation support on cognitive load and learning with hypertext: The mediating role of reading order. Computers in Human Behavior, 25 , 66–75.

Jaeggi, S. M., Buschkuehl, M., Etienne, A., Ozdoba, C., Perrig, W. J., & Nirkko, A. C. (2007). On how high performers keep cool brains in situations of cognitive overload. Cognitive Affective & Behavioral Neuroscience, 7 , 75–89.

Kablan, Z., & Erden, M. (2008). Instructional efficiency of integrated and separated text with animated presentations in computer-based science instruction. Computers & Education, 51 , 660–668.

Kalyuga, S. (2007). Expertise reversal effect and its implications for learner-tailored instruction. Educational Psychology Review, 19 , 509–539.

Kalyuga, S., Ayres, P., Chandler, P., & Sweller, J. (2003). The expertise reversal effect. Educational Psychologist, 38 , 23–32.

Kalyuga, S., Chandler, P., & Sweller, J. (1998). Levels of expertise and instructional design. Human Factors, 40 , 1–17.

Kalyuga, S., Chandler, P., & Sweller, J. (1999). Managing split-attention and redundancy in multimedia instruction. Applied Cognitive Psychology, 13 , 351–371.

Kalyuga, S., Chandler, P., & Sweller, J. (2000). Incorporating learner experience into the design of multimedia instruction. Journal of Educational Psychology, 92 , 126–136.

Kalyuga, S., Chandler, P., & Sweller, J. (2001a). Learner experience and efficiency of instructional guidance. Educational Psychology Review, 21 , 5–23.

Kalyuga, S., Chandler, P., Tuovinen, J., & Sweller, J. (2001b). When problem solving is superior to studying worked examples. Journal of Educational Psychology, 93 , 579–588.

Kester, L., & Kirschner, P. A. (2009). Effects of fading support on hypertext navigation and performance in student-centered e-learning environments. Interactive Learning Environments, 17 , 165–179.

Kester, L., Kirschner, P. A., & van Merriënboer, J. J. G. (2006a). Just-in-time information presentation: Improving learning a troubleshooting skill. Contemporary Educational Psychology, 31 , 167–185.

Kester, L., Lehnen, C., van Gerven, P. W. M., & Kirschner, P. A. (2006b). Just-in-time, schematic supportive information presentation during cognitive skill acquisition. Computers in Human Behavior, 22 , 93–112.

Khalil, M. K., Paas, F., Johnson, T. E., & Payer, A. F. (2005). Design of interactive and dynamic anatomical visualizations: The implication of cognitive load theory. The Anatomical Record (New Anat.), 286B , 15–20.

Kirschner, P. A. (2002). Cognitive load theory: Implications of cognitive load theory on the design of learning. Learning and Instruction, 12 , 1–10.

Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimally guided instruction does not work. Educational Psychologist, 41 , 75–86.

Kissane, M., Kalyuga, S., Chandler, P., & Sweller, J. (2008). The consequences of fading instructional guidance on delayed performance: The case of financial services training. Educational Psychology, 28 , 809–822.

Klahr, D., & Robinson, M. (1981). Formal assessment of problem-solving and planning processes in preschool children. Cognitive Psychology, 13 , 113–148.

Lahtinen, T. M. M., Koskelo, J. P., Laitnen, T., & Leino, T. K. (2007). Heart rate and performance during combat missions in a flight simulator. Aviation Space and Environmental Medicine, 78 , 387–391.

Linn, M. C., Lee, H.-S., Tinker, R., Husic, F., & Chiu, J. L. (2006). Teaching and assessing knowledge integration in science. Science, 313 , 1049–1050.

Lou, Y., Abrami, P. C., & d’Apollonia, S. (2001). Small group and individual learning with technology: A meta-analysis. Review of Educational Research, 71 , 449–521.

Low, R., & Sweller, J. (2005). The modality principle in multimedia learning. In R. E. Mayer (Ed.), Cambridge handbook of multimedia learning (pp. 147–158). Cambridge, UK: Cambridge University Press.

Lowe, R. K. (1999). Extracting information from an animation during complex visual learning. European Journal of Psychology of Education, 14 , 225–244.

Lusk, M. M., & Atkinson, R. K. (2007). Animated pedagogical agents: Does their degree of embodiment impact learning from static or animated worked examples? Applied Cognitive Psychology, 21 , 747–764.

Lusk, D. L., Evans, A. D., Jeffrey, T. R., Palmer, K. R., Wikstrom, C. S., & Doolittle, O. E. (2009). Multimedia learning and individual differences: Mediating the effects of working memory capacity with segmentation. British Journal of Educational Technology, 40 , 636–651.

Marcus, N., Cooper, M., & Sweller, J. (1996). Understanding instructions. Journal of Educational Psychology, 88 , 49–63.

Mayer, R. E. (1997). Multimedia learning: Are we asking the right questions? Educational Psychologist, 32 , 1–19.

Mayer, R. E. (2001). Multimedia learning . New York: Cambridge University Press.

Mayer, R. E. (2002). Rote versus meaningful learning. Theory into Practice, 41 , 226–232.

Mayer, R. E. (2004). Should there be a three-strikes rule against pure discovery learning? American Psychologist, 59 , 14–19.

Mayer, R. E., & Chandler, P. (2001). When learning is just a click away: Does simple user interaction foster deeper understanding of multimedia messages? Journal of Educational Psychology, 93 , 390–397.

Mayer, R. E., Dow, G. T., & Mayer, S. (2003). Multimedia learning in an interactive self-explaining environment: What works in the design of agent-based microworlds. Journal of Educational Psychology, 95 , 806–813.

Mayer, R. E., Hegarty, M., Mayer, S., & Campbell, J. (2005). When static media promote active learning: Annotated illustrations versus narrated animations in multimedia instruction. Journal of Experimental Psychology: Applied, 11 , 256–265.

Mayer, R. E., Heiser, J., & Lonn, S. (2001). Cognitive constraints on multimedia learning: When presenting more material results in less understanding. Journal of Educational Psychology, 93 , 187–198.

Mayer, R. E., & Johnson, C. I. (2008). Revising the redundancy principle in multimedia learning. Journal of Educational Psychology, 100 , 380–386.

Mayer, R. E., Mautone, P., & Prothero, W. (2002). Pictorial aids for learning by doing in a multimedia geology simulation game. Journal of Educational Psychology, 94 , 171–185.

Mayer, R. E., & Moreno, R. (1998). A split-attention effect in multimedia learning: Evidence for dual processing systems in working memory. Journal of Educational Psychology, 90 , 312–320.

Mayer, R. E., & Moreno, R. (2002). Aids to computer-based multimedia learning. Learning and Instruction, 12 , 107–119.

Mayer, R. E., & Moreno, R. (2003). Nine ways to reduce cognitive load in multimedia learning. Educational Psychologist, 38 , 43–52.

Mayer, R. E., & Sims, V. K. (1994). For whom is a picture worth a thousand words? Extensions of a dual-coding theory of multimedia learning. Journal of Educational Psychology, 86 , 389–401.

Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. The Psychological Review, 63 , 81–97.

Miyake, A. (2001). Individual differences in working memory: Introduction to the special section. Journal of Experimental Psychology-General, 130 , 163–168.

Moray, N. (1982). Subjective mental workload. Human Factors, 24 , 25–40.

Moreno, R. (2004). Decreasing cognitive load for novice students: Effects of explanatory versus corrective feedback in discovery-based multimedia. Instructional Science, 32 , 99–113.

Moreno, R. (2006). Does the modality principle hold for different media? A test of the method-affects-learning hypothesis. Journal of Computer Assisted Learning, 22 , 149–158.

Moreno, R. (2007). Optimising learning from animations by minimising cognitive load: Cognitive and affective consequences of signalling and segmentation methods. Applied Cognitive Psychology, 21 , 765–781.

Moreno, R., & Mayer, R. E. (1999a). Cognitive principles of multimedia learning: The role of modality and contiguity. Journal of Educational Psychology, 91 , 358–368.

Moreno, R., & Mayer, R. E. (1999b). Multimedia supported metaphors for meaning making in mathematics. Cognition and Instruction, 17 , 215–248.

Moreno, R., & Mayer, R. (2007). Interactive multimodal learning environments. Educational Psychology Review, 19 , 309–326.

Moreno, R., & Valdez, A. (2005). Cognitive load and learning effects of having students organize pictures and words in multimedia environments: The role of student interactivity and feedback. Educational Technology Research and Development, 53 , 35–45.

Mousavi, S., Low, R., & Sweller, J. (1995). Reducing cognitive load by mixing auditory and visual presentation modes. Journal of Educational Psychology, 87 , 319–334.

Murata, A. (2005). An attempt to evaluate mental workload using wavelet transform of EEG. Human Factors, 47 , 498–508.

Nelson, B., & Erlandson, B. (2008). Managing cognitive load in educational multi-user virtual environments: Reflection on design practice. Educational Technology Research and Development, 56 , 619–641.

Ngu, B., Mit, E., Shahbodin, F., & Tuovinen, J. (2009). Chemistry problem solving instruction: A comparison of three computer-based formats for learning from hierarchical network problem representations. Instructional Science, 37 , 21–42.

Nickel, P., & Nachreiner, F. (2003). Sensitivity and diagnosticity of the 0.1-Hz component of heart rate variability as an indicator of mental workload. Human Factors, 45 (57), 5–590.

Opfermann, M., Gerjets, P., & Scheiter, K. (2005). Exploration of hypermedia environments: Learner control and adaptive user strategies. Paper presented at the European Association for Research in Learning and Instruction (EARLI), Nicosia, Cyprus. August 23–27.

Paas, F. (1992). Training strategies for attaining transfer of problem-solving skill in statistics: A cognitive load approach. Journal of Educational Psychology, 84 , 429–434.

Paas, F., Renkl, A., & Sweller, J. (2003a). Cognitive load theory and instructional design: Recent developments. Educational Psychologist, 38 , 1–4.

Paas, F., Renkl, A., & Sweller, J. (2004). Cognitive load theory: Instructional implications of the interaction between information structures and cognitive architecture. Instructional Science, 32 , 1–8.

Paas, F., Tuovinen, J. E., Tabbers, H., & van Gerven, P. W. M. (2003b). Cognitive load measurement as a means to advance cognitive load theory. Educational Psychologist, 38 , 63–72.

Paas, F., van Gerven, P. W. M., & Wouters, P. (2007). Instructional efficiency of animation: Effects of interactivity through mental reconstruction of static key frames. Applied Cognitive Psychology, 21 , 783–793.

Paas, F., & van Merriënboer, J. (1993). The efficiency of instructional conditions: An approach to combine mental-effort and performance measures. Human Factors, 35 , 737–743.

Paas, F., & van Merriënboer, J. J. G. (1994a). Instructional control of cognitive load in the training of complex cognitive tasks. Educational Psychology Review, 6 , 351–372.

Paas, F., & van Merriënboer, J. J. G. (1994b). Variability of worked examples and transfer of geometric problem-solving skills: A cognitive load approach. Journal of Educational Psychology, 86 , 122–133.

Paas, F., van Merriënboer, J. J. G., & Adam, J. J. (1994). Measurement of cognitive load in instructional research. Perceptual and Motor Skills, 79 , 419–430.

Pardo-Vazquez, J. L., & Fernandez-Rey, J. (2008). External validation of the computerized, group administrable adaptation of the “operation span task”. Behavior Research Methods, 40 , 46–54.

Passolunghi, M. C., & Siegel, L. S. (2004). Working memory and access to numerical information in children with disability in mathematics. Journal of Experimental Child Psychology, 88 , 348–367.

Perlow, R., Jattuso, M., & Moore, D. D. W. (1997). Role of verbal working memory in complex skill acquisition. Human Performance, 10 , 283–302.

Plass, J. L., Chun, D. M., Mayer, R. E., & Leutner, D. (2003). Cognitive load in reading a foreign language text with multimedia aids and the influence of verbal and spatial abilities. Computers in Human Behavior, 19 , 221–243.

Pociask, F. D., & Morrison, G. R. (2008). Controlling split attention and redundancy in physical therapy instruction. Educational Technology Research and Development, 56 , 379–399.

Pollock, E., Chandler, P., & Sweller, J. (2002). Assimilating complex information. Learning and Instruction, 12 , 61–86.

Reber, P. J., & Kotovsky, K. (1997). Implicit learning in problem solving: The role of working memory capacity. Journal of Experimental Psychology-General, 126 , 178–203.

Reigeluth, C. M. (Ed.). (1983). Instructional-design theories and models: An overview of their current status . Hillsdale, NJ: Lawrence Erlbaum Associates.

Renkl, A., & Atkinson, R. K. (2003). Structuring the transition from example study to problem solving in cognitive skill acquisition: A cognitive load perspective. Educational Psychologist, 38 , 15–22.

Renkl, A., Gruber, H., Weber, S., Lerche, T., & Schweizer, K. (2003). Cognitive load during learning from worked-out examples. Zeitschrift Fur Pädagogische Psychologie, 17 , 93–101.

Renkl, A., Hilbert, T., & Schworm, S. (2009). Example-based learning in heuristic domains: A cognitive load theory account. Educational Psychology Review, 21 , 67–78.

Renkl, A., Stark, R., Gruber, H., & Mandl, H. (1998). Learning from worked-out examples: The effects of example variability and elicited self-explanations. Contemporary Educational Psychology, 23 , 90–108.

Rikers, R. M. J. P. (2006). A critical reflection on emerging topics in cognitive load research. Applied Cognitive Psychology, 20 , 359–364.

Rose, J. M., Roberts, F. D., & Rose, A. M. (2004). Affective responses to financial data and multimedia: The effects of information load and cognitive load. International Journal of Accounting Information Systems, 5 , 5–24.

Rourke, A., & Sweller, J. (2009). The worked-example effect using ill-defined problems: Learning to recognise designers’ styles. Learning and Instruction, 19 , 185–199.

Salden, R. J. C. M., Paas, F., Broers, N. J., & van Merriënboer, J. J. G. (2004). Mental effort and performance as determinants for the dynamic selection of learning tasks in air traffic control training. Instructional Science, 32 , 153–172.

Sandberg, E. H., Huttenlocher, J., & Newcombe, N. (1996). The development of hierarchical representation of two-dimensional space. Child Development, 67 , 721–739.

Schnotz, W., & Kürschner, C. (2007). A reconsideration of cognitive load theory. Educational Psychology Review, 19 , 469–508.

Schnotz, W., & Rasch, T. (2005). Enabling, facilitating, and inhibiting effects of animations in multimedia learning: Why reduction of cognitive load can have negative results on learning. Educational Technology Research & Development, 53 , 47–58.

Schultheis, H., & Jameson, A. (2004). Assessing cognitive load in adaptive hypermedia systems: Physiological and behavioral methods. In P. DeBra & W. Nejdl (Eds.), Adaptive hypermedia and adaptive web-based systems (Vol. 3137, pp. 225–234). Berlin: Springer.

Scott, B. M., & Schwartz, N. H. (2007). Navigational spatial displays: The role of metacognition as cognitive load. Learning and Instruction, 17 , 89–105.

Seufert, T., & Brünken, R. (2006). Cognitive load and the format of instructional aids for coherence formation. Applied Cognitive Psychology, 20 , 321–331.

Seufert, T., Jänen, I., & Brünken, R. (2007). The impact of intrinsic cognitive load on the effectiveness of graphical help for coherence formation. Computers in Human Behavior, 23 , 1055–1071.

Seufert, T., Schütze, M., & Brünken, R. (2009). Memory characteristics and modality in multimedia learning: An aptitude-treatment-interaction study. Learning and Instruction, 19 , 28–42.

Slotta, J. D., & Chi, M. T. H. (2006). Helping students understand challenging topics in science through ontology training. Cognition and Instruction, 24 , 261–289.

Smith, E. E., & Jonides, J. (1997). Working memory: A view from neuroimaging. Cognitive Psychology, 33 , 5–42.

Stark, R., Mandl, H., Gruber, H., & Renkl, A. (2002). Conditions and effects of example elaboration. Learning and Instruction, 12 , 39–60.

Stull, A. T., & Mayer, R. E. (2007). Learning by doing versus learning by viewing: Three experimental comparisons of learner-generated versus author-provided graphic organizers. Journal of Educational Psychology, 99 , 808–820.

Swaak, J., & de Jong, T. (2001). System vs. learner control in using on-line support for simulation-based discovery learning; effects on knowledge tests, interaction measures, and a subjective measure of cognitive load. Learning Environments Research, 4 , 217–241.

Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12 , 257–285.

Article   Google Scholar  

Sweller, J. (1989). Cognitive technology: Some procedures for facilitating learning and problem-solving in mathematics and science. Journal of Educational Psychology, 81 , 457–466.

Sweller, J. (1993). Some cognitive-processes and their consequences for the organization and presentation of information. Australian Journal of Psychology, 45 , 1–8.

Sweller, J. (1994). Cognitive load theory, learning difficulty, and instructional design. Learning and Instruction, 4 , 295–312.

Sweller, J. (2005). The redundancy principle in multimedia learning. In R. E. Mayer (Ed.), Cambridge handbook of multimedia learning (pp. 159–167). Cambridge, UK: Cambridge University Press.

Sweller, J., & Chandler, P. (1991). Evidence for cognitive load theory. Cognition and Instruction, 8 , 351–362.

Sweller, J., & Chandler, P. (1994). Why some material is difficult to learn. Cognition and Instruction, 12 , 185–233.

Sweller, J., Chandler, P., Tierney, P., & Cooper, M. (1990). Cognitive load as a factor in the structuring of technical material. Journal of Experimental Psychology, 119 , 176–192.

Sweller, J., van Merriënboer, J. J. G., & Paas, F. G. W. C. (1998). Cognitive architecture and instructional design. Educational Psychology Review, 10 , 251–296.

Tabbers, H. K., Martens, R. L., & van Merriënboer, J. J. G. (2004). Multimedia instructions and cognitive load theory: Effects of modality and cueing. British Journal of Educational Psychology, 74 , 71–81.

Tindall-Ford, S., Chandler, P., & Sweller, J. (1997). When two sensory modes are better than one. Journal of Experimental Psychology: Applied, 3 , 257–287.

Tuovinen, J. E., & Paas, F. (2004). Exploring multidimensional approaches to the efficiency of instructional conditions. Instructional Science, 32 , 133–152.

Tuovinen, J. E., & Sweller, J. (1999). A comparison of cognitive load associated with discovery learning and worked examples. Journal of Educational Psychology, 91 , 334–341.

Turner, M. L., & Engle, R. W. (1989). Is working memory task dependent. Journal of Memory and Language, 28 , 127–154.

van Gerven, P. W. M., Paas, F., van Merriënboer, J. J. G., & Schmidt, H. G. (2002). Cognitive load theory and aging: Effects of worked examples on training efficiency. Learning and Instruction, 12 , 87–105.

van Gerven, P. W. M., Paas, F., van Merriënboer, J. J. G., & Schmidt, H. G. (2004). Memory load and the cognitive pupillary response in aging. Psychophysiology, 41 , 167–174.

van Gog, T., Kester, L., Nievelstein, F., Giesbers, B., & Paas, F. (2009). Uncovering cognitive processes: Different techniques that can contribute to cognitive load research and instruction. Computers in Human Behavior, 25 , 325–331.

van Gog, T., & Paas, F. (2008). Instructional efficiency: Revisiting the original construct in educational research. Educational Psychologist, 43 , 16–26.

van Gog, T., Paas, F., & van Merriënboer, J. J. G. (2008). Effects of studying sequences of process-oriented and product-oriented worked examples on troubleshooting transfer efficiency. Learning and Instruction, 18 , 211–222.

van Merriënboer, J. J. G., & Ayres, P. (2005). Research on cognitive load theory and its design implications for e-learning. Educational Technology Research & Development, 53 , 5–13.

van Merriënboer, J. J. G., Kester, L., & Paas, F. (2006). Teaching complex rather than simple tasks: Balancing intrinsic and germane load to enhance transfer of learning. Applied Cognitive Psychology, 20 , 343–352.

van Merriënboer, J. J. G., Kirschner, P. A., & Kester, L. (2003). Taking the load off a learner’s mind: Instructional design for complex learning. Educational Psychologist, 38 , 5–14.

van Merriënboer, J. J. G., Schuurman, J. G., de Croock, M. B. M., & Paas, F. (2002). Redirecting learners’ attention during training: Effects on cognitive load, transfer test performance and training efficiency. Learning and Instruction, 12 , 11–37.

van Merriënboer, J. J. G., & Sweller, J. (2005). Cognitive load theory and complex learning: Recent developments and future directions. Educational Psychology Review, 17 , 147–177.

Verwey, W. B., & Veltman, H. A. (1996). Detecting short periods of elevated workload: A comparison of nine workload assessment techniques. Journal of Experimental Psychology: Applied, 2 , 270–285.

Volke, H. J., Dettmar, P., Richter, P., Rudolf, M., & Buhss, U. (1999). Evoked coherences of EEG in mental load: An investigation in chess players. Zeitschrift Fur Psychologie, 207 , 233–262.

Wilson, K. M., & Swanson, H. L. (2001). Are mathematics disabilities due to a domain-general or a domain-specific working memory deficit? Journal of Learning Disabilities, 34 , 237.

Wirth, J., Künsting, J., & Leutner, D. (2009). The impact of goal specificity and goal type on learning outcome and cognitive load. Computers in Human Behavior, 25 , 299–305.

Wouters, P., Paas, F., & van Merriënboer, J. J. G. (2009). Observational learning from animated models: Effects of modality and reflection on transfer. Contemporary Educational Psychology, 34 , 1–8.

Wouters, P., Paas, F., & van Merriënboer, J. J. G. (in press). Observational learning from animated models: Effects of studying-practicing alternation and illusion of control on transfer. Instructional Science .

Xie, B., & Salvendy, G. (2000). Prediction of mental workload in single and multiple task environments. International Journal of Cognitive Ergonomics, 4 , 213–242.

Yeung, A. S., Jin, P., & Sweller, J. (1997). Cognitive load and learner expertise: Split-attention and redundancy effects in reading with explanatory notes. Contemporary Educational Psychology, 23 , 1–21.

Yuan, K., Steedle, J., Shavelson, R., Alonzo, A., & Oppezo, M. (2006). Working memory, fluid intelligence, and science learning. Educational Research Review, 1 , 83–98.

Zheng, R., McAlack, M., Wilmes, B., Kohler-Evans, P., & Williamson, J. (2009). Effects of multimedia on cognitive load, self-efficacy, and multiple rule-based problem solving. British Journal of Educational Technology, 40 , 790–803.

Download references

Acknowledgments

I greatly appreciate the feedback I received from the following colleagues while preparing this article: Shaaron Ainsworth (University of Nottingham), Patricia Alexander (University of Maryland), Geraldine Clarebout, Jan Elen and Lieven Verschaffel (Catholic University of Leuven), Tessa Eysink, Bas Kollöffel, Ard Lazonder, Theo van Leeuwen, Robert de Hoog, Wouter van Joolingen, Hans van der Meij, Jan van der Meij, Armin Weinberger and Pascal Wilhelm (all University of Twente).

Open Access

This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

Author information

Authors and Affiliations

Faculty of Behavioral Sciences, University of Twente, P.O. Box 217, 7500 AE, Enschede, The Netherlands

Ton de Jong


Corresponding author

Correspondence to Ton de Jong .


About this article

de Jong, T. Cognitive load theory, educational research, and instructional design: some food for thought. Instr Sci 38, 105–134 (2010). https://doi.org/10.1007/s11251-009-9110-0


Received: 23 March 2009

Accepted: 04 August 2009

Published: 27 August 2009

Issue Date: March 2010




Self-Perceived Cognitive Functioning and Item Response Theory

Neuropsychology: Meet the Author

Today, Dr. Scott Sperling and student leader, Ms. Sara Pishdadian, will be discussing the paper,  Linking Self-Perceived Cognitive Functioning Questionnaires Using Item Response Theory: The Subjective Cognitive Decline Initiatives , with three of the study authors, Dr. Rabin, Dr. Elbulok-Charcape and Dr. Jones.  In their study, the authors harmonized secondary data from 24 studies and 40 different questionnaires with item response theory (IRT) to identify the items that made the greatest contribution to measurement precision. Data from over 53,000 neuropsychologically intact older adults were included, from 13 English language and 11 non-English (or mixed) language studies. The results have significant implications for the development and use of new self-perceived cognitive functioning questionnaires with high predictive validity for cognitive and clinical outcomes.

Listen to Episode 2: Self-Perceived Cognitive Functioning and Item Response Theory

About the contributors

Milu Elbulok-Charcape is a recent graduate of the PhD program in Educational Psychology at The Graduate Center of The City University of New York. Her dissertation investigated the statistical and research literacy of undergraduate students and factors that underlie this crucial competency. Current research focuses on improving statistics education for at-risk undergraduate student populations; identifying challenges in the neuropsychological assessment of racial/ethnic minorities; and enhancing the mental health literacy of diverse undergraduate students.

Sara Pishdadian is a PhD candidate in clinical neuropsychology at York University and a predoctoral intern at The Ottawa Hospital. Her research interests include (1) subjective and objective memory and spatial navigation abilities in older adults and individuals with memory disorders and (2) clinical correlates and cognitive predictors of daily functioning and quality of life in individuals with serious mental illness.

Laura Rabin, PhD, is a professor in the department of psychology at Brooklyn College and is The Graduate Center of The City University of New York visiting professor in the department of neurology at Albert Einstein College of Medicine/Einstein Aging Study. Dr. Rabin investigates cognitive and neurophysiological changes associated with symptomatic prodromal stages of dementia, toward identification of the earliest markers of dementia risk. In addition, she studies changes in judgment and problem-solving along the Alzheimer’s disease continuum with implications for preventing exploitation, unsafe behaviors, and functional dependence among those most vulnerable. Dr. Rabin also conducts educational research that focuses on understanding factors that impact undergraduate student performance and improving academic and mental health outcomes of diverse students.


Dr. Scott Sperling:  Welcome to Meet the Authors, a podcast brought to you by a collaboration of the Society for Clinical Neuropsychology and the journal Neuropsychology. My name is Dr. Scott Sperling, and I'm grateful to be your host. In this podcast, Student Leaders in Neuropsychology will discuss prominent recently published studies with the authors who undertook the research, thereby allowing for a behind-the-scenes look into the development, implementation, analysis, and future implications of cutting-edge neuropsychology research.

Today, our student leader, Ms. Sara Pishdadian, will be discussing an exciting paper entitled “Linking Self-Perceived Cognitive Functioning Questionnaires Using Item Response Theory: The Subjective Cognitive Decline Initiatives” with three of the paper's authors, Dr. Rabin, Dr. Elbulok-Charcape and Dr. Jones. I'd like to now introduce our student leader and esteemed guests. Ms. Sara Pishdadian is a Ph.D. candidate in Clinical Neuropsychology at York University and a pre-doctoral intern at The Ottawa Hospital.

Her research interests include the study of subjective and objective memory and spatial navigation abilities in older adults and individuals with memory disorders, and an examination of the clinical correlates and cognitive predictors of daily functioning and quality of life in individuals with serious mental illness. Welcome, Sara. And for our esteemed interviewees here, Dr. Rabin is a professor of psychology at Brooklyn College and The Graduate Center of the City University of New York and visiting professor in the Department of Neurology at Albert Einstein College of Medicine/Einstein Aging Study. Her work investigates the cognitive and neurophysiological changes associated with symptomatic prodromal stages of dementia to identify the earliest markers of dementia risk. In addition, she studies changes in judgment and problem solving along the Alzheimer's disease continuum, with implications for preventing exploitation, unsafe behaviors, and functional dependence among those most vulnerable.

Dr. Elbulok-Charcape is a recent graduate of the doctoral program in Educational Psychology at the Graduate Center of the City University of New York. Her research interests include examination of the factors that underlie statistical and research literacy in undergraduate students. Her current studies focus on improving statistics education for at-risk undergraduate student populations, identifying challenges in the neuropsychological assessment of racial and ethnic minorities, and enhancing the mental health literacy of diverse undergraduate students.

And last but certainly not least, Dr. Jones is a professor of neurology at the Warren Alpert Medical School of Brown University and an epidemiologist and methodologist with expertise in the use of modern psychometric methods, including item response theory and structural equation modeling, to better understand and quantify constructs in health research settings.

Welcome, Drs. Rabin, Elbulok-Charcape and Jones.

Dr. Rich Jones:  Thank you for having us. Thank you.

Ms. Sara Pishdadian:  Wonderful. So thank you all for being here. In this paper, the authors used item response theory to calibrate data collected from 24 studies and 40 different self-perceived cognitive functioning questionnaires in over 53,000 cognitively intact older adults. For those who may not have read your paper, can you briefly summarize the main findings of your study?

Dr. Laura Rabin:  Sure. Let me begin by saying that this was a huge undertaking that would not have been possible without the collaborative efforts of the 43 coauthors and everyone affiliated with the contributing studies. Special thanks to Doug Tommet, who ran the analysis, and to the other primary coauthors, Rich Jones, Paul Crane, and Sietske Sikkes. So I'll start with a brief overview.

Self-perceived cognitive functioning is assessed using numerous different approaches with little consistency or agreed-upon standards, and those in the field agree that there is a need to establish a core set of items with predictive validity for cognitive and clinical outcomes. This item analysis project was intended as the first step in that effort. We published a descriptive study a few years ago that took an in-depth look at the key properties of questionnaires and items used in studies on subjective cognitive decline, or SCD.

We found little overlap, with approximately 75% of questionnaires used by only one study. There was also wide variation in response options and item content, and most instruments had limited or no psychometric support. Our next step was to use item response theory methods to combine item-level data across studies to identify items making the greatest contribution to measurement precision, so-called top items.

So the current study was the first to calibrate self-perceived cognitive functioning data of geographically diverse older adults, with the resulting item scores being on the same metric. And this was not an easy task, as no identical item was represented across studies. In some cases, a single item provided the only linkage between studies, and this link is essential for harmonization.

After many attempts, we were able to pool data from 40 questionnaires with over 600 items, including over 50,000 neuropsychologically intact older adults and representing multiple languages and countries. For the analysis, we identified psychometrically robust items from two main regions on the latent trait. The first region captures individuals with better self-perceived cognitive functioning and few concerns, and the second captures individuals with worse self-perceived cognitive functioning, or a high level of cognitive concerns.

And as a reminder, all participants were neuropsychologically intact. Some of our main findings were that there were differences in the top items from the two ranges of interest. For example, in the high concern range, almost all of the top items related to memory, while in the low concern range, top items related to various domains: not just memory, but also executive functioning (for example, decision-making, shifting between activities, and managing a medication schedule) and language (for example, word finding and self-expression).

Also, in the high concern range, a lot of the items tapped into the consequences of the difficulties, for example, memory problems making it harder to complete tasks that used to be easy. The top items for the high concern range also tended to have dichotomous response options as opposed to Likert-type scales.

And another interesting finding was that, while the temporal reference for top items varied, comparisons to five years ago and to the present or current functioning were most common. So what does this all mean? Well, different items were likely to be endorsed by neuropsychologically intact older adults with different levels of cognitive concern. Going forward, it may be possible to select items based on their ability to capture concerns most associated with specific levels or subtypes of SCD.

This is an intriguing possibility and one that would have a big impact on questionnaire development and our understanding of SCD. Another important take home message is that considerations about time frames, response options and other key features of items in questionnaires are not trivial. They are really important and hopefully our results can be used to guide such decisions.
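To make the idea of top items more concrete: under a two-parameter logistic (2PL) IRT model, an item's contribution to measurement precision at trait level theta is its Fisher information, I(theta) = a^2 * p(theta) * (1 - p(theta)), where a is the item's discrimination and p(theta) its endorsement probability. The short Python sketch below ranks items separately in a low concern and a high concern region of the trait; the item names and parameters are invented for illustration and are not the items or estimates from the study.

```python
# A minimal sketch of ranking items by Fisher information in two trait
# regions, assuming a 2PL model. All item parameters are hypothetical.
import numpy as np

def p_2pl(theta, a, b):
    """Endorsement probability of a 2PL item at trait level theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item: a^2 * p * (1 - p)."""
    p = p_2pl(theta, a, b)
    return a**2 * p * (1.0 - p)

# Hypothetical (discrimination a, severity b) pairs.
items = {
    "worry_about_memory":  (1.8,  1.2),  # most informative at high concern
    "trouble_remembering": (0.6,  0.5),  # weakly discriminating everywhere
    "word_finding":        (1.5, -0.8),  # most informative at low concern
}

low_concern = np.linspace(-2.0, -0.5, 50)   # few cognitive concerns
high_concern = np.linspace(0.5, 2.0, 50)    # many cognitive concerns

for name, (a, b) in items.items():
    print(f"{name:20s}  low: {item_information(low_concern, a, b).mean():.3f}"
          f"  high: {item_information(high_concern, a, b).mean():.3f}")
```

An item dominates the information in a region when its severity parameter sits inside that region and its discrimination is high, which is the mechanism behind different top items emerging in the low and high concern ranges.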

Ms. Sara Pishdadian:  Thank you for that wonderful summary of your study findings. The study was part of a special issue of the journal Neuropsychology on pitfalls and possibilities of harmonization of neuropsychological and other clinical endpoints. Can you speak to the benefits and challenges of using item response theory to examine large datasets and how this methodology can advance clinical research?

Dr. Rich Jones:  Yeah, this is Rich. I'll take that one. Item response theory provides a tool to link measurements across studies. The end goal is to make the data collected more comparable across studies. With this tool, we can generate large datasets from multiple smaller datasets, from smaller studies, and in so doing, we can better understand how to compare and contrast the findings of different studies.

We could also consider pooling the data from multiple studies to gain greater statistical power and a greater range of measurement, to advance clinically interesting research questions. IRT is a very powerful tool with a specific use case laid out by the present research project. Challenges of this type of research, which I would call linking studies, are in finding the items that we can consider to be linking items across different studies.

This can often be challenging; it requires a good bit of clinical judgment and good collaboration between substantive researchers and data analysts in achieving those links. Also key to achieving that is that the data providers share their data along with high-quality, detailed documentation, so that the judgments about linkages can be made.
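A minimal sketch of the linking logic Dr. Jones describes, assuming a 2PL model and expected a posteriori (EAP) scoring: once the shared linking items have been calibrated, every study can be scored against the same anchored item parameters, with items a study never administered simply treated as missing. The parameters and response patterns below are made up for illustration; this is not the project's actual pipeline.

```python
# Scoring respondents from different studies on one metric via anchored
# 2PL item parameters and EAP estimation. Hypothetical data throughout.
import numpy as np

def eap_scores(responses, a, b, grid=np.linspace(-4.0, 4.0, 81)):
    """EAP trait estimates; np.nan marks items a study did not administer."""
    # (n_grid, n_items) endorsement probabilities under the anchored 2PL.
    p = 1.0 / (1.0 + np.exp(-a[None, :] * (grid[:, None] - b[None, :])))
    prior = np.exp(-0.5 * grid**2)          # standard normal prior
    prior /= prior.sum()
    scores = np.empty(len(responses))
    for i, resp in enumerate(responses):
        seen = ~np.isnan(resp)              # only administered items count
        like = np.prod(np.where(resp[seen] == 1, p[:, seen], 1.0 - p[:, seen]),
                       axis=1)
        post = like * prior
        post /= post.sum()
        scores[i] = (grid * post).sum()     # posterior mean = EAP score
    return scores

a = np.array([1.2, 0.9, 1.5, 1.1, 0.8])    # anchored discriminations
b = np.array([-1.0, 0.0, 0.5, 1.0, 1.5])   # anchored severities
nan = np.nan
study_a = np.array([[1, 1, 0, nan, nan]])  # study A used items 0-2 only
study_b = np.array([[nan, nan, 1, 0, 0]])  # study B used items 2-4 only
print(eap_scores(study_a, a, b), eap_scores(study_b, a, b))  # same metric
```

Here item 2 is the single linking item shared by both hypothetical studies; because its parameters are held fixed across them, the two EAP scores are directly comparable even though the questionnaires barely overlap.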

Ms. Sara Pishdadian:  Thank you for sharing that. In your study, the specificity of item phrasing was also shown to be important. So for example, the item ‘Compared to one year ago, do you have trouble remembering things?’ was identified as psychometrically poor, whereas the item ‘Compared to one year ago, do you feel your memory has declined substantially?’ was identified as psychometrically stronger. What are the critical clinical or research takeaways from these results?

Dr. Laura Rabin:  That's a great question, because our findings suggest that phrasing can affect how people interpret and subsequently answer a question. Even if the content is similar, nuances in wording and response options can impact reporting of self-perceived cognitive functioning. So when we are creating items and designing questionnaires, we should consider not only general content and psychometric issues, but also nuances and key features of items.

For this specific set of items, it's possible that ‘trouble remembering things’ was too vague relative to asking about substantial decline in memory. In fact, the general trends we saw for psychometrically poor items were that they had ambiguous time comparisons, such as ‘compared to how you used to be’ or ‘how you were ever before,’ or had very broad content, such as asking about memory for things that happened in childhood.

Some item stems were very lengthy and would probably have to be read over a few times by older adult respondents. Others used Likert scales with a lot of response options, for example, nine, and in some cases these options also ranged from negative to positive values. It was therefore not surprising to us that certain items didn't fare well. Taken together, our results encourage the use of items with simple, straightforward language and clear time frames.

Ms. Sara Pishdadian:  Thank you for that. And I think the clinicians and researchers listening to this podcast will resonate with that finding. One might question whether there are significant variations in self-perceived cognitive functioning and/or the endorsement of specific questionnaire items based on an individual's ethnicity or other cultural factors. Do you see item response theory as a useful methodology in identifying such differences, or would you recommend other methods to address potential cross-cultural variability?

Dr. Rich Jones:  That's a great question. And you identified one of the other major uses of item response theory in applied research settings, which is what we call identifying differential item functioning or DIF. And differential item functioning is just as you described. It's a difference in the performance on a particular item due to group membership, whether it's an ethnicity or gender or anything like that.

And there are pretty well-established procedures for using methods of item response theory for finding those differences. It is a limited method in that there are certain cases in which you will not be able to use differential item functioning to find biased items. And that's when you're giving a test where all the items are biased against one group or one ethnicity.

So just like we have to have linking items in our linking study to bring data together across multiple studies, when you're doing DIF detection, you have to have what we call anchor items, which are items that we assume are not affected by differential item functioning or potential bias when trying to test for differential item functioning. We call the situation in which all the items are biased constant DIF.

So it's a useful method, but it does have its own challenges.
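One concrete way to run such a DIF screen is the logistic-regression approach of Swaminathan and Rogers (1990): regress each item response on the matching trait score, group membership, and their interaction, where the group term flags uniform DIF and the interaction flags non-uniform DIF. The sketch below uses this standard method purely for illustration; it is not necessarily the procedure used in the SCD project, and the simulated data and effect sizes are invented.

```python
# Logistic-regression DIF screen on one simulated item (uniform DIF).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n).astype(float)   # e.g., two language groups
theta = rng.normal(size=n)                    # matching/trait score

# Simulate an item that is harder for group 1 at equal theta (uniform DIF).
prob = 1.0 / (1.0 + np.exp(-(1.2 * (theta - 0.2) - 0.5 * group)))
y = (rng.random(n) < prob).astype(float)

# Base model: trait only. Augmented model: + group + trait*group interaction.
X_base = sm.add_constant(theta)
X_aug = sm.add_constant(np.column_stack([theta, group, theta * group]))
m_base = sm.Logit(y, X_base).fit(disp=0)
m_aug = sm.Logit(y, X_aug).fit(disp=0)

lr = 2 * (m_aug.llf - m_base.llf)  # ~ chi-square(2 df) if the item has no DIF
print(f"Likelihood-ratio statistic: {lr:.1f}")
```

As Dr. Jones notes, a screen like this only behaves well when the matching score is built from anchor items that are themselves DIF-free; if every item is biased in the same direction (constant DIF), the group effect is absorbed into the matching score and nothing is flagged.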

Dr. Milu Elbulok-Charcape:  As Rich touched upon, there are certainly issues of culture, language, and education, and we do have to consider them because they're important in SCD measurement. In fact, we know that recent research shows the prevalence of SCD may vary by demographic characteristics as well as race and ethnicity. Specifically, among racial and ethnic groups, SCD has been lowest among Asian adults and highest among non-Hispanic American Indian or Alaska Native adults.

In a very recent large-scale, US-based study, the prevalence of SCD was higher among individuals with less formal education across all racial and ethnic groups, and this is consistent with the higher risk for dementia among individuals with fewer years of education. We also see that Black and Latino individuals with SCD are disproportionately affected by social inequities and comorbid chronic conditions compared to white individuals with SCD.

So based on these findings, I would say that additional research is needed to better understand how race, education, and related cultural and systemic factors impact reporting about cognition, and to develop questionnaires that are appropriate for diverse groups. It's also equally important to generate normative data that accounts for the impact of sociocultural factors on self-reported cognition. And this will require a lot of work, both locally and globally, but hopefully it will enhance the accuracy of measurement of self-reported cognition in diverse groups.

Ms. Sara Pishdadian:  Thank you both for that. And the research definitely continues on. Given past research on the role of worry and memory satisfaction, can you speak to the decision to focus your analysis on the items you did, in terms of the cognitive domains, and specifically to memory ability and change, which is how most of your top items came out?

Dr. Laura Rabin:  Yeah. So there are a few interesting points that you just raised. The first is that one of the top items that emerged in the study relates to worrying about memory. And this is consistent with much research out of Frank Jessen’s lab and others showing that older adults who express concern about their perceived cognitive difficulties have an increased risk of future dementia.

In fact, concern or worry associated with SCD is one of the so-called SCD-plus features, or characteristics of SCD for which there is a particular risk of future cognitive decline. For this reason, we feel it makes sense to include an item that taps into worry about cognitive changes on questionnaires. Something else to consider is that we used a single-factor model and did not have separate models for each cognitive domain.

So the analytic model included items from various cognitive domains, and top items did not relate exclusively to memory. Also, when older adults report about their cognitive abilities, they may use the term memory, but may in fact be referring to problems with language, executive function, or cognition more broadly. So another take home message is that questionnaires should include items that assess multiple cognitive domains.

With respect to the other part of your question about ability and change, it turns out that across questionnaires, roughly 60% of items asked about current ability or impairment, while 40% captured change or decline. The question is, which is preferable? Our results showed that roughly two-thirds of top items assessed intra-individual change, and this is broadly consistent with how SCD was originally conceptualized.

Ultimately, it might not need to be one or the other. For example, the Cognitive Change Index, a questionnaire from Andy Saykin's lab, asks respondents to report on how they are functioning now (current ability) compared to five years ago. Assuming respondents are able to hold both current ability and change in mind when responding to CCI items, this might be a viable way forward.

Ms. Sara Pishdadian:  Thank you for highlighting some of the nuances of the overlap in the cognitive domains. Getting to designing questionnaires and item selection: If you were to devise a new SCD questionnaire based on your findings, what would you recommend?

Dr. Laura Rabin:  So we’ve given a lot of thought to this critical next step of devising a new questionnaire. And what we know is that self-perceived cognitive functioning is a complex construct, with no single cognitive domain, time referent, or even type of scale ideal for its measurement in neuropsychologically intact older adults. Also, even though we were able to identify psychometrically strong items, this does not mean that, if taken together, these items would magically form a reliable and valid questionnaire.

Items with psychometric support in our study require independent validation. With these caveats in mind, these are our next steps. Right now, we're working to validate the self-perceived cognitive functioning factor scores derived from the IRT analysis against relevant outcomes such as clinical progression. This is being overseen by Sietske Sikkes and is one of the great things about harmonization. That is, we can use our results to create a new scale and also to combine existing data from different studies to learn more about the phenomenon of SCD and how SCD relates to important outcomes.

Next, we want to use the current item bank for the construction of targeted questionnaires. Before questionnaire development, we plan to carry out focus groups with older adults to learn how they understand items in relation to key features of those items, such as the content and complexity of the wording, the time frames being referenced, and the number and types of response options.

Also, we recently surveyed experts on their thoughts about the optimal way to assess self-perceived cognitive functioning, and ultimately we hope to combine the information garnered from the focus groups, expert opinion, and this psychometric analysis to construct a new questionnaire that we would then validate and disseminate broadly.

Dr. Milu Elbulok-Charcape:  Yeah, and I'll also jump in to note that an important consideration in this process is adapting the questionnaire across languages and cultures. There should be an understanding of the subtleties in target languages, as cultural differences most definitely lead participants to interpret and respond differently to items. This may account for recent findings showing greater SCD reporting among Latinos relative to non-Hispanic whites and, notably, significant heterogeneity in rates of SCD among Latino subgroups in the United States.

So based on what we've learned from the item analysis study, in our future work, we plan to carry out focus groups and cognitive interviewing with diverse community dwelling older adults. This will help us make informed decisions about item content and questionnaire structure in light of how SCD is expressed in these specific cultural subgroups.

Ms. Sara Pishdadian:  Well, thank you very much for discussing your study and sharing your expertise in this important area of research. I'll now turn it over to our host, Dr. Sperling.

Dr. Scott Sperling:  And I'd also like to thank you for this really fantastic line of research. It's really exciting, both as a clinician and a researcher, to see what you're doing here, using these modern statistical methods and really moving the science forward. We talk a lot about innovation, big data, and cultural neuropsychology, and finding ways to do all of that in one fell swoop is really what you've been able to accomplish here.

And I think there are a lot of very exciting ramifications, again, both on the clinical and the clinical research side, for the work that you're doing. So on behalf of the Society for Clinical Neuropsychology and the Journal of Neuropsychology, I just want to extend my appreciation for all the work that you've done and for your time here today, and to you as well, Sara, for your leadership as a student who I know also has her heart in this kind of research. So thank you all again for your time and expertise. It's much appreciated, and take care.

Ms. Sara Pishdadian:  Thank you for having us.

Dr. Rich Jones:  Thank you very much.

APA and the editors of the participating journals assume no responsibility for statements and opinions advanced by the guests of APA Journals podcasts.
