
Online Assessment in Higher Education: A Systematic Review

  • Joana Heil, University of Mannheim
  • Dirk Ifenthaler, University of Mannheim & Curtin University, https://orcid.org/0000-0002-2446-6548

Online assessment is defined as a systematic method of gathering information about a learner and learning processes to draw inferences about the learner’s dispositions. Online assessments provide opportunities for meaningful feedback and interactive support for learners, and may influence learner engagement and learning outcomes. The purpose of this systematic literature review is to identify and synthesize original research studies focusing on online assessments in higher education. Out of an initial set of 4,290 publications, a final sample of 114 key publications was identified according to predefined inclusion criteria. The synthesis yielded four main categories of online assessment modes: peer, teacher, automated, and self-assessment. The synthesis of findings supports the assumption that online assessments have promising potential for supporting and improving online learning processes and outcomes. Success factors for implementing online assessments include instructional support as well as clearly defined assessment criteria. Future research may focus on online assessments harnessing formative and summative data from stakeholders and learning environments to facilitate learning processes in real time and help decision-makers improve learning environments, i.e., analytics-enhanced online assessments.

(*) indicates publications included in the systematic review.

*Abbakumov, D., Desmet, P., & Van den Noortgate, W. (2020). Rasch model extensions for enhanced formative assessments in MOOCs. Applied Measurement in Education, 33(2), 113–123.

*Acosta-Gonzaga, E., & Walet, N. R. (2018). The role of attitudinal factors in mathematical online assessments: A study of undergraduate STEM students. Assessment & Evaluation in Higher Education, 43(5), 710–726.

Admiraal, W., Huisman, B., & van de Ven, M. (2014). Self- and peer assessment in Massive Open Online Courses. International Journal of Higher Education, 3(3), 119–128. https://doi.org/10.5430/ijhe.v3n3p119

*Admiraal, W., Huisman, B., & Pilli, O. (2015). Assessment in Massive Open Online Courses. Electronic Journal of E-Learning, 13(4), 207–216.

Ahmed, A., & Pollitt, A. (2010). The support model for interactive assessment. Assessment in Education: Principles, Policy & Practice, 17(2), 133–167.

*Amhag, L. (2020). Student reflections and self-assessments in vocational training supported by a mobile learning hub. International Journal of Mobile and Blended Learning, 12(1), 1–16.

*ArchMiller, A., Fieberg, J., Walker, J. D., & Holm, N. (2017). Group peer assessment for summative evaluation in a graduate-level statistics course for ecologists. Assessment & Evaluation in Higher Education, 42(8), 1208–1220. http://dx.doi.org/10.1080/02602938.2016.1243219

*Ashton, S., & Davies, R. S. (2015). Using scaffolded rubrics to improve peer assessment in a MOOC writing course. Distance Education, 36(3), 312–334. http://dx.doi.org/10.1080/01587919.2015.1081733

*Azevedo, B. F., Pereira, A. I., Fernandes, F. P., & Pacheco, M. F. (2022). Mathematics learning and assessment using MathE platform: A case study. Education and Information Technologies, 27(2), 1747–1769. https://doi.org/10.1007/s10639-021-10669-y

*Babo, R., Babo, L., Suhonen, J., & Tukiainen, M. (2020). E-Assessment with multiple-choice questions: A 5-year study of students’ opinions and experience. Journal of Information Technology Education: Innovations in Practice, 19, 1–29. https://doi.org/10.28945/4491

*Bacca-Acosta, J., & Avila-Garzon, C. (2021). Student engagement with mobile-based assessment systems: A survival analysis. Journal of Computer Assisted Learning, 37(1), 158–171. https://doi.org/10.1111/jcal.12475

Baker, E., Chung, G., & Cai, L. (2016). Assessment, gaze, refraction, and blur: The course of achievement testing in the past 100 years. Review of Research in Education, 40, 94–142. https://doi.org/10.3102/0091732X16679806

Baleni, Z. (2015). Online formative assessment in higher education: Its pros and cons. Electronic Journal of e-Learning, 13(4), 228–236.

*Bekmanova, G., Ongarbayev, Y., Somzhurek, B., & Mukatayev, N. (2021). Personalized training model for organizing blended and lifelong distance learning courses and its effectiveness in higher education. Journal of Computing in Higher Education, 33(3), 668–683. https://doi.org/10.1007/s12528-021-09282-2

Bektik, D. (2019). Issues and challenges for implementing writing analytics at higher education. In D. Ifenthaler, J. Y.-K. Yau, & D.-K. Mah (Eds.), Utilizing learning analytics to support study success (pp. 143–155). Springer.

Bellotti, F., Kapralos, B., Lee, K., Moreno-Ger, P., & Berta, R. (2013). Assessment in and of serious games: An overview. Advances in Human-Computer Interaction, 2013, Article 136864. https://doi.org/10.1155/2013/136864

Bennett, R. E. (2015). The changing nature of educational assessment. Review of Research in Education, 39(1), 370–407. https://doi.org/10.3102/0091732x14554179

*Birks, M., Hartin, P., Woods, C., Emmanuel, E., & Hitchins, M. (2016). Students’ perceptions of the use of eportfolios in nursing and midwifery education. Nurse Education in Practice, 18, 46–51. https://doi.org/10.1016/j.nepr.2016.03.003

Black, P. J. (1998). Testing: friend or foe? The theory and practice of assessment and testing. Falmer Press.

Black, P., & Wiliam, D. (2009). Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability, 21, 5–31. https://doi.org/10.1007/s11092-008-9068-5

*Bohndick, C., Menne, C. M., Kohlmeyer, S., & Buhl, H. M. (2020). Feedback in Internet-based self-assessments and its effects on acceptance and motivation. Journal of Further and Higher Education, 44(6), 717–728. https://doi.org/10.1080/0309877X.2019.1596233

Bonk, C. J., Lee, M. M., Reeves, T. C., & Reynolds, T. H. (Eds.). (2015). MOOCs and open education around the world. Routledge. https://doi.org/10.4324/9781315751108

Boud, D. (2000). Sustainable assessment: rethinking assessment for the learning society. Studies in Continuing Education, 22(2), 151–167. https://doi.org/10.1080/713695728

Carless, D. (2007). Learning-oriented assessment: conceptual bases and practical implications. Innovations in Education and Teaching International, 44(1), 57–66. https://doi.org/10.1080/14703290601081332

*Carnegie, J. (2015). Use of feedback-oriented online exercises to help physiology students construct well-organized answers to short-answer questions. CBE—Life Sciences Education, 14(3), ar25. https://doi.org/10.1187/cbe.14-08-0132

*Carpenter, S. K., Rahman, S., Lund, T. J. S., Armstrong, P. I., Lamm, M. H., Reason, R. D., & Coffman, C. R. (2017). Students’ use of optional online reviews and its relationship to summative assessment outcomes in introductory biology. CBE—Life Sciences Education, 16(2), ar23. https://doi.org/10.1187/cbe.16-06-0205

*Caspari-Sadeghi, S., Forster-Heinlein, B., Maegdefrau, J., & Bachl, L. (2021). Student-generated questions: developing mathematical competence through online assessment. International Journal for the Scholarship of Teaching and Learning, 15(1), 8. https://doi.org/10.20429/ijsotl.2021.150108

*Chaudy, Y., & Connolly, T. (2018). Specification and evaluation of an assessment engine for educational games: Empowering educators with an assessment editor and a learning analytics dashboard. Entertainment Computing, 27, 209–224. https://doi.org/10.1016/j.entcom.2018.07.003

*Chen, X., Breslow, L., & DeBoer, J. (2018). Analyzing productive learning behaviors for students using immediate corrective feedback in a blended learning environment. Computers & Education, 117, 59–74. https://doi.org/10.1016/j.compedu.2017.09.013

*Chen, Z., Jiao, J., & Hu, K. (2021). Formative assessment as an online instruction intervention: Student engagement, outcomes, and perceptions. International Journal of Distance Education Technologies, 19(1), 50–65. https://doi.org/10.4018/IJDET.20210101.oa1

*Chew, E., Snee, H., & Price, T. (2016). Enhancing international postgraduates’ learning experience with online peer assessment and feedback innovation. Innovations in Education and Teaching International, 53(3), 247–259. https://doi.org/10.1080/14703297.2014.937729

Conrad, D., & Openo, J. (2018). Assessment strategies for online learning: engagement and authenticity. Athabasca University Press. https://doi.org/10.15215/aupress/9781771992329.01

*Davis, M. C., Duryee, L. A., Schilling, A. H., Loar, E. A., & Hammond, H. G. (2020). Examining the impact of multiple practice quiz attempts on student exam performance. Journal of Educators Online, 17(2).

*Dermo, J., & Boyne, J. (2014). Assessing understanding of complex learning outcomes and real-world skills using an authentic software tool: A study from biomedical sciences. Practitioner Research in Higher Education, 8(1), 101–112.

Dochy, F. J. R. C., Segers, M., & Sluijsmans, D. (1999). The use of self-, peer and co-assessment in higher education: A review. Studies in Higher Education, 24(3), 331–350.

*Elizondo-Garcia, J., Schunn, C., & Gallardo, K. (2019). Quality of peer feedback in relation to instructional design: a comparative study in energy and sustainability MOOCs. International Journal of Instruction, 12(1), 1025–1040.

Ellis, C. (2013). Broadening the scope and increasing usefulness of learning analytics: the case for assessment analytics. British Journal of Educational Technology, 44(4), 662–664. https://doi.org/10.1111/bjet.12028

*Ellis, S., & Barber, J. (2016). Expanding and personalising feedback in online assessment: a case study in a school of pharmacy. Practitioner Research in Higher Education, 10(1), 121–129.

*Farrelly, D., & Kaplin, D. (2019). Using student feedback to inform change within a community college teacher education program’s ePortfolio initiative. Community College Enterprise, 25(2), 9–38.

*Faulkner, M., Mahfuzul Aziz, S., Waye, V., & Smith, E. (2013). Exploring ways that eportfolios can support the progressive development of graduate qualities and professional competencies. Higher Education Research and Development, 32(6), 871–887.

*Filius, R. M., de Kleijn, R. A. M., Uijl, S. G., Prins, F. J., van Rijen, H. V. M., & Grobbee, D. E. (2019). Audio peer feedback to promote deep learning in online education. Journal of Computer Assisted Learning, 35(5), 607–619. https://doi.org/10.1111/jcal.12363

*Filius, R. M., de Kleijn, R. A. M., Uijl, S. G., Prins, F. J., van Rijen, H. V. M., & Grobbee, D. E. (2018). Strengthening dialogic peer feedback aiming for deep learning in SPOCs. Computers & Education, 125, 86–100. https://doi.org/10.1016/j.compedu.2018.06.004

*Formanek, M., Wenger, M. C., Buxner, S. R., Impey, C. D., & Sonam, T. (2017). Insights about large-scale online peer assessment from an analysis of an astronomy MOOC. Computers & Education, 113, 243–262. https://doi.org/10.1016/j.compedu.2017.05.019

*Förster, M., Weiser, C., & Maur, A. (2018). How feedback provided by voluntary electronic quizzes affects learning outcomes of university students in large classes. Computers & Education, 121, 100–114. https://doi.org/10.1016/j.compedu.2018.02.012

*Fratter, I., & Marigo, L. (2018). Integrated forms of self-assessment and placement testing for Italian L2 aimed at incoming foreign university exchange students at the University of Padua. Language Learning in Higher Education, 8(1), 91–114. https://doi.org/10.1515/cercles-2018-0005

*Gamage, S. H. P. W., Ayres, J. R., Behrend, M. B., & Smith, E. J. (2019). Optimising Moodle quizzes for online assessments. International Journal of STEM Education, 6(1), 1–14. https://doi.org/10.1186/s40594-019-0181-4

*Gámiz Sánchez, V., Montes Soldado, R., & Pérez López, M. C. (2014). Self-assessment via a blended-learning strategy to improve performance in an accounting subject. International Journal of Educational Technology in Higher Education, 11(2), 43–54. https://doi.org/10.7238/rusc.v11i2.2055

*Garcia-Peñalvo, F. J., Garcia-Holgado, A., Vazquez-Ingelmo, A., & Carlos Sanchez-Prieto, J. (2021). Planning, communication and active methodologies: online assessment of the software engineering subject during the COVID-19 crisis. RIED-Revista Iberoamericana de Educación a Distancia, 24(2), 41–66. https://doi.org/10.5944/ried.24.2.27689

Gašević, D., Greiff, S., & Shaffer, D. (2022). Towards strengthening links between learning analytics and assessment: Challenges and potentials of a promising new bond. Computers in Human Behavior, 134, 107304. https://doi.org/10.1016/j.chb.2022.107304

Gašević, D., Joksimović, S., Eagan, B. R., & Shaffer, D. W. (2019). SENS: Network analytics to combine social and cognitive perspectives of collaborative learning. Computers in Human Behavior, 92, 562–577. https://doi.org/10.1016/j.chb.2018.07.003

Gašević, D., Jovanović, J., Pardo, A., & Dawson, S. (2017). Detecting learning strategies with analytics: Links with self-reported measures and academic performance. Journal of Learning Analytics, 4(2), 113–128. https://doi.org/10.18608/jla.2017.42.10

Gikandi, J. W., Morrow, D., & Davis, N. E. (2011). Online formative assessment in higher education: A review of the literature. Computers & Education, 57(4), 2333–2351. https://doi.org/10.1016/j.compedu.2011.06.004

*Gleason, J. (2012). Using technology-assisted instruction and assessment to reduce the effect of class size on student outcomes in undergraduate mathematics courses. College Teaching, 60(3), 87–94. https://doi.org/10.1080/87567555.2011.637249

*González-Gómez, D., Jeong, J. S., & Canada-Canada, F. (2020). Examining the effect of an online formative assessment tool (O Fat) of students’ motivation and achievement for a university science education. Journal of Baltic Science Education, 19(3), 401–414. https://doi.org/10.33225/jbse/20.19.401

Gottipati, S., Shankararaman, V., & Lin, J. R. (2018). Text analytics approach to extract course improvement suggestions from students’ feedback. Research and Practice in Technology Enhanced Learning, 13(6). https://doi.org/10.1186/s41039-018-0073-0

*Guerrero-Roldán, A.-E., & Noguera, I. (2018). A model for aligning assessment with competences and learning activities in online courses. Internet and Higher Education, 38, 36–46. https://doi.org/10.1016/j.iheduc.2018.04.005

*Hains-Wesson, R., Wakeling, L., & Aldred, P. (2014). A university-wide ePortfolio initiative at Federation University Australia: Software analysis, test-to-production, and evaluation phases. International Journal of EPortfolio, 4(2), 143–156.

*Hashim, H., Salam, S., Mohamad, S. N. M., & Sazali, N. S. S. (2018). The designing of adaptive self-assessment activities in second language learning using massive open online courses (MOOCs). International Journal of Advanced Computer Science and Applications, 9(9), 276–282.

*Hay, P. J., Engstrom, C., Green, A., Friis, P., Dickens, S., & Macdonald, D. (2013). Promoting assessment efficacy through an integrated system for online clinical assessment of practical skills. Assessment & Evaluation in Higher Education, 38(5), 520–535. https://doi.org/10.1080/02602938.2012.658019

*Herzog, M. A., & Katzlinger, E. (2017). The multiple faces of peer review in higher education. Five learning scenarios developed for digital business. EURASIA Journal of Mathematics, Science & Technology Education, 13(4), 1121–1143. https://doi.org/10.12973/eurasia.2017.00662a

*Hickey, D., & Rehak, A. (2013). Wikifolios and participatory assessment for engagement, understanding, and achievement in online courses. Journal of Educational Multimedia and Hypermedia, 22(4), 407–441.

*Holmes, N. (2018). Engaging with assessment: increasing student engagement through continuous assessment. Active Learning in Higher Education, 19(1), 23–34. https://doi.org/10.1177/1469787417723230

*Hughes, M., Salamonson, Y., & Metcalfe, L. (2020). Student engagement using multiple-attempt "Weekly Participation Task" quizzes with undergraduate nursing students. Nurse Education in Practice, 46, 102803. https://doi.org/10.1016/j.nepr.2020.102803

*Huisman, B., Admiraal, W., Pilli, O., van de Ven, M., & Saab, N. (2018). Peer assessment in MOOCs: the relationship between peer reviewers’ ability and authors’ essay performance. British Journal of Educational Technology, 49(1), 101–110. https://doi.org/10.1111/bjet.12520

*Hwang, W.-Y., Hsu, J.-L., Shadiev, R., Chang, C.-L., & Huang, Y.-M. (2015). Employing self-assessment, journaling, and peer sharing to enhance learning from an online course. Journal of Computing in Higher Education, 27(2), 114–133.

Ifenthaler, D. (2012). Determining the effectiveness of prompts for self-regulated learning in problem-solving scenarios. Journal of Educational Technology & Society, 15(1), 38–52.

Ifenthaler, D. (2023). Automated essay grading systems. In O. Zawacki-Richter & I. Jung (Eds.), Handbook of open, distance and digital education (pp. 1057–1071). Springer. https://doi.org/10.1007/978-981-19-2080-6_59

Ifenthaler, D., & Greiff, S. (2021). Leveraging learning analytics for assessment and feedback. In J. Liebowitz (Ed.), Online learning analytics (pp. 1–18). Auerbach Publications. https://doi.org/10.1201/9781003194620

Ifenthaler, D., Greiff, S., & Gibson, D. C. (2018). Making use of data for assessments: harnessing analytics and data science. In J. Voogt, G. Knezek, R. Christensen, & K.-W. Lai (Eds.), International Handbook of IT in Primary and Secondary Education (2nd ed., pp. 649–663). Springer. https://doi.org/10.1007/978-3-319-71054-9_41

Ifenthaler, D., Schumacher, C., & Kuzilek, J. (2023). Investigating students’ use of self-assessments in higher education using learning analytics. Journal of Computer Assisted Learning, 39(1), 255–268. https://doi.org/10.1111/jcal.12744

*James, R. (2016). Tertiary student attitudes to invigilated, online summative examinations. International Journal of Educational Technology in Higher Education, 13(1), 19. https://doi.org/10.1186/s41239-016-0015-0

*Jarrott, S., & Gambrel, L. E. (2011). The bottomless file box: electronic portfolios for learning and evaluation purposes. International Journal of EPortfolio, 1(1), 85–94.

Johnson, W. L., & Lester, J. C. (2016). Face-to-Face interaction with pedagogical agents, twenty years later. International Journal of Artificial Intelligence in Education, 26(1), 25–36. https://doi.org/10.1007/s40593-015-0065-9

*Kim, Y. A., Rezende, L., Eadie, E., Maximillian, J., Southard, K., Elfring, L., Blowers, P., & Talanquer, V. (2021). Responsive teaching in online learning environments: using an instructional team to promote formative assessment and sense of community. Journal of College Science Teaching, 50(4), 17–24.

Kim, Y. J., & Ifenthaler, D. (2019). Game-based assessment: The past ten years and moving forward. In D. Ifenthaler & Y. J. Kim (Eds.), Game-based assessment revisited (pp. 3–12). Springer. https://doi.org/10.1007/978-3-030-15569-8_1

*Kristanto, Y. D. (2018). Technology-enhanced pre-instructional peer assessment: Exploring students’ perceptions in a statistical methods course. Online Submission, 4(2), 105–116.

*Küchemann, S., Malone, S., Edelsbrunner, P., Lichtenberger, A., Stern, E., Schumacher, R., Brünken, R., Vaterlaus, A., & Kuhn, J. (2021). Inventory for the assessment of representational competence of vector fields. Physical Review Physics Education Research, 17(2), 020126. https://doi.org/10.1103/PhysRevPhysEducRes.17.020126

*Kühbeck, F., Berberat, P. O., Engelhardt, S., & Sarikas, A. (2019). Correlation of online assessment parameters with summative exam performance in undergraduate medical education of pharmacology: A prospective cohort study. BMC Medical Education, 19(1), 412. https://doi.org/10.1186/s12909-019-1814-5

*Law, S. (2019). Using digital tools to assess and improve college student writing. Higher Education Studies, 9(2), 117–123.

Lee, H.-S., Gweon, G.-H., Lord, T., Paessel, N., Pallant, A., & Pryputniewicz, S. (2021). Machine learning-enabled automated feedback: Supporting students’ revision of scientific arguments based on data drawn from simulation. Journal of Science Education and Technology, 30(2), 168–192. https://doi.org/10.1007/s10956-020-09889-7

Lenhard, W., Baier, H., Hoffmann, J., & Schneider, W. (2007). Automatische Bewertung offener Antworten mittels Latenter Semantischer Analyse [Automatic scoring of constructed-response items with latent semantic analysis]. Diagnostica, 53(3), 155–165. https://doi.org/10.1026/0012-1924.53.3.155

*Li, L., & Gao, F. (2016). The effect of peer assessment on project performance of students at different learning levels. Assessment & Evaluation in Higher Education, 41(6), 885–900.

*Li, L., Liu, X., & Steckelberg, A. L. (2010). Assessor or assessee: How student learning improves by giving and receiving peer feedback. British Journal of Educational Technology, 41(3), 525–536. https://doi.org/10.1111/j.1467-8535.2009.00968.x

*Liu, E. Z.-F., & Lee, C.-Y. (2013). Using peer feedback to improve learning via online peer assessment. Turkish Online Journal of Educational Technology—TOJET, 12(1), 187–199.

*Liu, X., Li, L., & Zhang, Z. (2018). Small group discussion as a key component in online assessment training for enhanced student learning in web-based peer assessment. Assessment & Evaluation in Higher Education, 43(2), 207–222. https://doi.org/10.1080/02602938.2017.1324018

Lockyer, L., Heathcote, E., & Dawson, S. (2013). Informing pedagogical action: Aligning learning analytics with learning design. American Behavioral Scientist, 57(10), 1439–1459. https://doi.org/10.1177/0002764213479367

*López-Tocón, I. (2021). Moodle quizzes as a continuous assessment in higher education: An exploratory approach in physical chemistry. Education Sciences, 11(9), 500. https://doi.org/10.3390/educsci11090500

*Luaces, O., Díez, J., Alonso-Betanzos, A., Troncoso, A., & Bahamonde, A. (2017). Content-based methods in peer assessment of open-response questions to grade students as authors and as graders. Knowledge-Based Systems, 117, 79–87. https://doi.org/10.1016/j.knosys.2016.06.024

*MacKenzie, L. M. (2019). Improving learning outcomes: Unlimited vs. limited attempts and time for supplemental interactive online learning activities. Journal of Curriculum and Teaching, 8(4), 36–45. https://doi.org/10.5430/jct.v8n4p36

*Mao, J., & Peck, K. (2013). Assessment strategies, self-regulated learning skills, and perceptions of assessment in online learning. Quarterly Review of Distance Education, 14(2), 75–95.

*Martin, F., Ritzhaupt, A., Kumar, S., & Budhrani, K. (2019). Award-winning faculty online teaching practices: Course design, assessment and evaluation, and facilitation. The Internet and Higher Education, 42, 34–43. https://doi.org/10.1016/j.iheduc.2019.04.001

Martin, F., & Whitmer, J. C. (2016). Applying learning analytics to investigate timed release in online learning. Technology, Knowledge and Learning, 21(1), 59–74. https://doi.org/10.1007/s10758-015-9261-9

*Mason, R., & Williams, B. (2016). Using ePortfolios to assess undergraduate paramedic students: a proof of concept evaluation. International Journal of Higher Education, 5(3), 146–154. https://doi.org/10.5430/ijhe.v5n3p146

*McCarthy, J. (2017). Enhancing feedback in higher education: Students’ attitudes towards online and in-class formative assessment feedback models. Active Learning in Higher Education, 18(2), 127–141. https://doi.org/10.1177/146978741770761

*McCracken, J., Cho, S., Sharif, A., Wilson, B., & Miller, J. (2012). Principled assessment strategy design for online courses and programs. Electronic Journal of E-Learning, 10(1), 107–119.

*McNeill, M., Gosper, M., & Xu, J. (2012). Assessment choices to target higher order learning outcomes: the power of academic empowerment. Research in Learning Technology, 20(3), 283–296.

*McWhorter, R. R., Delello, J. A., Roberts, P. B., Raisor, C. M., & Fowler, D. A. (2013). A cross-case analysis of the use of web-based eportfolios in higher education. Journal of Information Technology Education: Innovations in Practice, 12, 253–286.

*Meek, S. E. M., Blakemore, L., & Marks, L. (2017). Is peer review an appropriate form of assessment in a MOOC? Student participation and performance in formative peer review. Assessment & Evaluation in Higher Education, 42(6), 1000–1013.

*Milne, L., McCann, J., Bolton, K., Savage, J., & Spence, A. (2020). Student satisfaction with feedback in a third year Nutrition unit: A strategic approach. Journal of University Teaching and Learning Practice, 17(5), 67–83. https://doi.org/10.53761/1.17.5.5

Montenegro-Rueda, M., Luque-de la Rosa, A., Sarasola Sánchez-Serrano, J. L., & Fernández-Cerero, J. (2021). Assessment in higher education during the COVID-19 pandemic: A systematic review. Sustainability, 13(19), 10509.

Moore, M. G., & Kearsley, G. (2011). Distance education: a systems view of online learning. Wadsworth Cengage Learning.

*Mora, M. C., Sancho-Bru, J. L., Iserte, J. L., & Sanchez, F. T. (2012). An e-assessment approach for evaluation in engineering overcrowded groups. Computers & Education, 59(2), 732–740. https://doi.org/10.1016/j.compedu.2012.03.011

Newton, P. E. (2007). Clarifying the purposes of educational assessment. Assessment in Education: Principles, Policy & Practice, 14(2), 149–170. https://doi.org/10.1080/09695940701478321

*Nguyen, Q., Rienties, B., Toetenel, L., Ferguson, R., & Whitelock, D. (2017). Examining the designs of computer-based assessment and its impact on student engagement, satisfaction, and pass rates. Computers in Human Behavior, 76, 703–714. https://doi.org/10.1016/j.chb.2017.03.028

*Nicholson, D. T. (2018). Enhancing student engagement through online portfolio assessment. Practitioner Research in Higher Education, 11(1), 15–31.

*Ogange, B. O., Agak, J. O., Okelo, K. O., & Kiprotich, P. (2018). Student perceptions of the effectiveness of formative assessment in an online learning environment. Open Praxis, 10(1), 29–39.

*Ortega-Arranz, A., Bote-Lorenzo, M. L., Asensio-Pérez, J. I., Martínez-Monés, A., Gómez-Sánchez, E., & Dimitriadis, Y. (2019). To reward and beyond: Analyzing the effect of reward-based strategies in a MOOC. Computers & Education, 142, 103639. https://doi.org/10.1016/j.compedu.2019.103639

Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., . . . Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ, 372, n71. https://doi.org/10.1136/bmj.n71

Pellegrino, J. W., Chudowsky, N., & Glaser, R. (Eds.). (2001). Knowing what students know: The science and design of educational assessment. National Academy Press.

*Pinargote-Ortega, M., Bowen-Mendoza, L., Meza, J., & Ventura, S. (2021). Peer assessment using soft computing techniques. Journal of Computing in Higher Education, 33(3), 684–726. https://doi.org/10.1007/s12528-021-09296-w

*Polito, G., & Temperini, M. (2021). A gamified web based system for computer programming learning. Computers and Education: Artificial Intelligence, 2, 100029. https://doi.org/10.1016/j.caeai.2021.100029

*Reilly, E. D., Williams, K. M., Stafford, R. E., Corliss, S. B., Walkow, J. C., & Kidwell, D. K. (2016). Global times call for global measures: investigating automated essay scoring in linguistically-diverse MOOCs. Online Learning, 20(2), 217–229.

*Rogerson-Revell, P. (2015). Constructively aligning technologies with learning and assessment in a distance education master’s programme. Distance Education, 36(1), 129–147.

*Ross, B., Chase, A.-M., Robbie, D., Oates, G., & Absalom, Y. (2018). Adaptive quizzes to increase motivation, engagement and learning outcomes in a first year accounting unit. International Journal of Educational Technology in Higher Education, 15(1), 1–14. https://doi.org/10.1186/s41239-018-0113-2

*Sampaio-Maia, B., Maia, J. S., Leitao, S., Amaral, M., & Vieira-Marques, P. (2014). Wiki as a tool for Microbiology teaching, learning and assessment. European Journal of Dental Education, 18(2), 91–97. https://doi.org/10.1111/eje.12061

*Sancho-Vinuesa, T., Masià, R., Fuertes-Alpiste, M., & Molas-Castells, N. (2018). Exploring the effectiveness of continuous activity with automatic feedback in online calculus. Computer Applications in Engineering Education, 26(1), 62–74. https://doi.org/10.1002/cae.21861

*Santamaría Lancho, M., Hernández, M., Sánchez-Elvira Paniagua, Á., Luzón Encabo, J. M., & de Jorge-Botana, G. (2018). Using semantic technologies for formative assessment and scoring in large courses and MOOCs. Journal of Interactive Media in Education, 2018(1), 1–10. https://doi.org/10.5334/jime.468

*Sarcona, A., Dirhan, D., & Davidson, P. (2020). An overview of audio and written feedback from students’ and instructors’ perspective. Educational Media International, 57(1), 47–60. https://doi.org/10.1080/09523987.2020.1744853

*Scalise, K., Douskey, M., & Stacy, A. (2018). Measuring learning gains and examining implications for student success in STEM. Higher Education Pedagogies, 3(1), 183–195. https://doi.org/10.1080/23752696.2018.1425096

*Schaffer, H. E., Young, K. R., Ligon, E. W., & Chapman, D. D. (2017). Automating individualized formative feedback in large classes based on a directed concept graph. Frontiers in Psychology, 8, 260. https://doi.org/10.3389/fpsyg.2017.00260

Schumacher, C., & Ifenthaler, D. (2021). Investigating prompts for supporting students' self-regulation—A remaining challenge for learning analytics approaches? The Internet and Higher Education, 49, 100791. https://doi.org/10.1016/j.iheduc.2020.100791

*Schultz, M., Young, K., Gunning, T. K., & Harvey, M. L. (2022). Defining and measuring authentic assessment: a case study in the context of tertiary science. Assessment & Evaluation in Higher Education, 47(1), 77–94. https://doi.org/10.1080/02602938.2021.1887811

*Sekendiz, B. (2018). Utilisation of formative peer-assessment in distance online education: A case study of a multi-model sport management unit. Interactive Learning Environments, 26(5), 682–694. https://doi.org/10.1080/10494820.2017.1396229

*Senel, S., & Senel, H. C. (2021). Remote assessment in higher education during COVID-19 pandemic. International Journal of Assessment Tools in Education, 8(2), 181–199.

*Shaw, L., MacIsaac, J., & Singleton-Jackson, J. (2019). The efficacy of an online cognitive assessment tool for enhancing and improving student academic outcomes. Online Learning Journal, 23(2), 124–144. https://doi.org/10.24059/olj.v23i2.1490

Shute, V. J., Wang, L., Greiff, S., Zhao, W., & Moore, G. (2016). Measuring problem solving skills via stealth assessment in an engaging video game. Computers in Human Behavior, 63, 106–117. https://doi.org/10.1016/j.chb.2016.05.047

Stödberg, U. (2012). A research review of e-assessment. Assessment & Evaluation in Higher Education, 37(5), 591–604. https://doi.org/10.1080/02602938.2011.557496

*Stratling, R. (2017). The complementary use of audience response systems and online tests to implement repeat testing: a case study. British Journal of Educational Technology, 48(2), 370–384. https://doi.org/10.1111/bjet.12362

*Sullivan, D., & Watson, S. (2015). Peer assessment within hybrid and online courses: Students’ view of its potential and performance. Journal of Educational Issues, 1(1), 1–18. https://doi.org/10.5296/jei.v1i1.7255

*Taghizadeh, M., Alavi, S. M., & Rezaee, A. A. (2014). Diagnosing L2 learners’ language skills based on the use of a web-based assessment tool called DIALANG. International Journal of E-Learning & Distance Education, 29(2), n2.

*Tawafak, R. M., Romli, A. M., & Alsinani, M. J. (2019). Student assessment feedback effectiveness model for enhancing teaching method and developing academic performance. International Journal of Information and Communication Technology Education, 15(3), 75–88. https://doi.org/10.4018/IJICTE.2019070106

*Tempelaar, D. (2020). Supporting the less-adaptive student: The role of learning analytics, formative assessment and blended learning. Assessment & Evaluation in Higher Education, 45(4), 579–593.

Tempelaar, D. T., Rienties, B., Mittelmeier, J., & Nguyen, Q. (2018). Student profiling in a dispositional learning analytics application using formative assessment. Computers in Human Behavior, 78, 408–420. https://doi.org/10.1016/j.chb.2017.08.010

*Tenório, T., Bittencourt, I. I., Isotani, S., Pedro, A., & Ospina, P. (2016). A gamified peer assessment model for on-line learning environments in a competitive context. Computers in Human Behavior, 64, 247–263. https://doi.org/10.1016/j.chb.2016.06.049

*Thille, C., Schneider, E., Kizilcec, R. F., Piech, C., Halawa, S. A., & Greene, D. K. (2014). The future of data-enriched assessment. Research & Practice in Assessment, 9, 5–16.

*Tsai, N. W. (2016). Assessment of students’ learning behavior and academic misconduct in a student-pulled online learning and student-governed testing environment: A case study. Journal of Education for Business, 91(7), 387–392. https://dx.doi.org/10.1080/08832323.2016.1238808

*Tucker, C., Pursel, B. K., & Divinsky, A. (2014). Mining student-generated textual data in MOOCs and quantifying their effects on student performance and learning outcomes. Computers in Education Journal, 5(4), 84–95.

*Tucker, R. (2014). Sex does not matter: Gender bias and gender differences in peer assessments of contributions to group work. Assessment & Evaluation in Higher Education, 39(3), 293–309. http://dx.doi.org/10.1080/02602938.2013.830282

Turkay, S., & Tirthali, D. (2010). Youth leadership development in virtual worlds: A case study. Procedia - Social and Behavioral Sciences, 2(2), 3175–3179. https://doi.org/10.1016/j.sbspro.2010.03.485

*Turner, J., & Briggs, G. (2018). To see or not to see? Comparing the effectiveness of examinations and end of module assessments in online distance learning. Assessment & Evaluation in Higher Education, 43(7), 1048–1060. https://doi.org/10.1080/02602938.2018.1428730

*Vaughan, N. (2014). Student engagement and blended learning: Making the assessment connection. Education Sciences, 4(4), 247–264. https://doi.org/10.3390/educsci4040247

*Wadmany, R., & Melamed, O. (2018). "New Media in Education" MOOC: Improving peer assessments of students’ plans and their innovativeness. Journal of Education and E-Learning Research, 5(2), 122–130. https://doi.org/10.20448/journal.509.2018.52.122.130

*Wang, S., & Wang, H. (2012). Organizational schemata of e-portfolios for fostering higher-order thinking. Information Systems Frontiers, 14(2), 395–407. https://doi.org/10.1007/s10796-010-9262-0

*Wang, Y.-M. (2019). Enhancing the quality of online discussion—assessment matters. Journal of Educational Technology Systems, 48(1), 112–129. https://doi.org/10.1177/0047239519861

*Watson, S. L., Watson, W. R., & Kim, W. (2017). Primary assessment activity and learner perceptions of attitude change in four MOOCs. Educational Media International, 54(3), 245–260. https://doi.org/10.1080/09523987.2017.1384165

Webb, M., Gibson, D. C., & Forkosh-Baruch, A. (2013). Challenges for information technology supporting educational assessment. Journal of Computer Assisted Learning, 29(5), 451–462. https://doi.org/10.1111/jcal.12033

Webb, M., & Ifenthaler, D. (2018). Assessment as, for and of 21st century learning using information technology: An overview. In J. Voogt, G. Knezek, R. Christensen, & K.-W. Lai (Eds.), International Handbook of IT in Primary and Secondary Education (2nd ed., pp. 1–20). Springer.

Wei, X., Saab, N., & Admiraal, W. (2021). Assessment of cognitive, behavioral, and affective learning outcomes in massive open online courses: A systematic literature review. Computers & Education, 163, 104097.

*Wells, J., Spence, A., & McKenzie, S. (2021). Student participation in computing studies to understand engagement and grade outcome. Journal of Information Technology Education, 20, 385–403. https://doi.org/10.28945/4817

*West, J., & Turner, W. (2016). Enhancing the assessment experience: Improving student perceptions, engagement and understanding using online video feedback. Innovations in Education and Teaching International, 53(4), 400–410. http://dx.doi.org/10.1080/14703297.2014.1003954

Whitelock, D., & Bektik, D. (2018). Progress and challenges for automated scoring and feedback systems for large-scale assessments. In J. Voogt, G. Knezek, R. Christensen, & K.-W. Lai (Eds.), International Handbook of IT in Primary and Secondary Education (2nd ed., pp. 617–634). Springer.

*Wilkinson, K., Dafoulas, G., Garelick, H., & Huyck, C. (2020). Are quiz-games an effective revision tool in anatomical sciences for higher education and what do students think of them? British Journal of Educational Technology, 51(3), 761–777. https://doi.org/10.1111/bjet.12883

*Wu, C., Chanda, E., & Willison, J. (2014). Implementation and outcomes of online self and peer assessment on group based honours research projects. Assessment & Evaluation in Higher Education, 39(1), 21–37. http://dx.doi.org/10.1080/02602938.2013.779634

*Xian, L. (2020). The effectiveness of dynamic assessment in linguistic accuracy in EFL writing: an investigation assisted by online scoring systems. Language Teaching Research Quarterly, 18, 98–114.

*Xiao, Y., & Hao, G. (2018). Teaching business English course: Incorporating portfolio assessment-based blended learning and MOOC. Journal of Literature and Art Studies, 8(9), 1364–1369. https://doi.org/10.17265/2159-5836/2018.09.008

*Yang, T. C., Chen, S. Y., & Chen, M. C. (2016). An investigation of a two-tier test strategy in a university calculus course: Causes versus consequences. IEEE Transactions on Learning Technologies, 9(2), 146–156.

*Yeh, H.-C., & Lai, P.-Y. (2012). Implementing online question generation to foster reading comprehension. Australasian Journal of Educational Technology, 28(7), 1152–1175.

*Zhan, Y. (2021). What matters in design? Cultivating undergraduates’ critical thinking through online peer assessment in a Confucian heritage context. Assessment & Evaluation in Higher Education, 46(4), 615–630. https://doi.org/10.1080/02602938.2020.1804826

*Zong, Z., Schunn, C. D., & Wang, Y. (2021). What aspects of online peer feedback robustly predict growth in students’ task performance? Computers in Human Behavior, 124, 106924. https://doi.org/10.1016/j.chb.2021.106924


Open access | Published: 07 August 2017

Envisioning the use of online tests in assessing twenty-first century learning: a literature review

Bopelo Boitshwarelo, Alison Kay Reedy & Trevor Billany

Research and Practice in Technology Enhanced Learning, volume 12, Article number: 16 (2017)


The digital world brings with it more and more opportunities to be innovative around assessment. With a variety of digital tools and the pervasive availability of information anywhere anytime, there is a tremendous capacity to creatively employ a diversity of assessment approaches to support and evaluate student learning in higher education. The challenge in a digital world is to harness the possibilities afforded by technology to drive and assess deep learning that prepares graduates for a changing and uncertain future. One widespread method of online assessment used in higher education is online tests. The increase in the use of online tests necessitates an investigation into their role in evaluating twenty-first century learning. This paper draws on the literature to explore the role of online tests in higher education, particularly their relationship to student learning in a digital and changing world, and the issues and challenges they present. We conclude that online tests, when used effectively, can be valuable in the assessment of twenty-first century learning and we synthesise the literature to extract principles for the optimisation of online tests in a digital age.

Introduction

In recent times, there has been widespread interest from governments, industry, and educators in identifying a model of learning and assessment in higher education that meets the challenges of learning in the digital present and prepares students for an uncertain future (Kinash, 2015). The term twenty-first century learning is widely used to encapsulate the idea that fundamental changes in the nature of learning and education have occurred in the twenty-first century as a consequence of rapidly changing technologies and globalisation (Kereluik, Mishra, Fahnoe, & Terry, 2013). Hence, different forms of assessment that are commensurate with the twenty-first century are needed.

Multiple and disparate interpretations exist as to what kind of knowledge and skills are needed to live and work in the twenty-first century, and hence, there is little clarity as to the forms of assessment that can be used most effectively to assess the knowledge and skills required for a digital age. For the purposes of this paper, our understanding of twenty-first century learning is based on the overarching framework of twenty-first century learning developed by Kereluik et al. (2013). Their synthesis of 15 widely used frameworks of twenty-first century knowledge produced “a coherent integrative framework” (Kereluik et al., 2013, p. 128) for conceptualising twenty-first century knowledge. This “framework of frameworks” (Kereluik et al., 2013, p. 129) contains three knowledge domains inherent in twenty-first century learning: foundational knowledge, meta-knowledge, and humanistic knowledge (see Fig. 1), with each domain containing three subcategories.

Fig. 1 A framework of 21st century learning (Kereluik et al., 2013, p. 130)

In addition, there are other approaches that contribute to our understanding of assessment in a digital age. Scott (2016) introduced the idea of “right” assessment within the context of flipping the curriculum or “FlipCurric”, where the focus is on considering assessment forms and practices that evaluate competencies and capabilities for the twenty-first century. Scott describes assessment for the digital age as being powerful, fit for purpose, valid, and focused on preparing graduates to be “work ready plus” (Scott, 2016), that is, ready to meet the challenges of current and future job markets. Assessment types such as problem-based learning, authentic learning tasks, and case studies feature highly as types of powerful assessment in that they evaluate students’ ability to consolidate learning across knowledge domains and to apply knowledge, skills, and capabilities that are relevant to living and working in a “volatile and rapidly transforming world” (Scott, 2016). These powerful forms of assessment align strongly with a constructivist approach to learning, which is the learning perspective most widely accepted in higher education as enhancing student learning (Anderson, 2016).

There has been a rapid growth in the use of online tests, especially since the widespread implementation of learning management systems (LMS) in higher education in the early part of the twenty-first century (Stone, 2014). This trend, which is also evident at the authors’ institution, raises questions as to why online tests are being used so extensively, and whether their use aligns with the conceptualisation of twenty-first century learning and commensurate assessment practices. To respond to these questions, in this paper we review the literature to explore the current use of online tests in higher education, and particularly their relationship to student learning in a digital and changing world. We also identify issues associated with their use and propose principles to guide the use of online tests in the assessment of twenty-first century learning.

We use the term “online tests” to specify a particular type of ICT-based assessment, or e-assessment, that can be used for diagnostic, formative, and summative purposes. While e-assessment can refer broadly to any practice where technology is used to enhance or support assessment and feedback activities, online tests refer specifically to computer-assisted assessment in which deployment and marking are automated (Davies, 2010; Gipps, 2005). Online tests (also known as online quizzes) are used extensively within the LMS in online and mixed-mode delivery. For the purpose of this paper, online tests are distinguished from “online exams”, which are typically invigilated and conducted on computers in a controlled exam centre. The investigation of literature presented in this paper focuses on identifying the ways online tests are used and the extent to which their use supports the assessment of twenty-first century knowledge and skills.

Design of the literature review

We conducted a scoping literature review to identify academic papers and research reports that discuss the use of online tests in higher education. The review focused on academic papers in English language journals, using key search terms including online assessment, online tests, online quizzes, multiple choice questions, and other related terms, particularly in a higher education context. The search was conducted through Google Scholar and through our institution’s library databases. The review focused on literature published since the year 2000, which aligns with the widespread take-up of the LMS by higher education institutions in the first few years of the twenty-first century and reinforces the pivotal role the LMS has played in the digitisation of learning and assessment in higher education.

The literature search revealed over 50 relevant papers from publications (primarily scholarly journals) focused on general teaching and learning, educational technology, or discipline-specific education (e.g. Advances in Physiology Education). The review was not discipline specific, and the online tests identified crossed a range of disciplines, with the natural sciences and social sciences (including business and economics) highly represented and the arts and humanities less so. A significant number of the empirical studies reviewed were in specific discipline areas, including earth sciences, physiology, nursing, medical sciences/biology, psychology, and business (for example, see Angus & Watson, 2009; Brady, 2005; Buckles & Siegfried, 2006; Smith, 2007; Yonker, 2011). The number of articles identified provided a large enough pool to gain insight into the use of online tests in higher education.

In the scan of literature, we identified only a few review studies related to online tests: three in the broad field of e-assessment (Gipps, 2005; Stödberg, 2012; Sweeney et al., 2017) and one with a specific focus on feedback practices in online tests (Nicol & Macfarlane-Dick, 2006). The most recent of the three broad reviews (Sweeney et al., 2017) involved a systematic literature review of scholarly articles about technology-enhanced assessment published in the three years from 2014 to 2016. This study looked at what technologies are being used in e-assessment and whether they are enhancing or transforming assessment practice; however, while it referred to e-assessment within the LMS, it did not refer specifically to online tests. The wide scope of the Sweeney et al. study contrasts with the targeted focus on case studies about feedback in online tests in the Nicol and Macfarlane-Dick paper. The identification of so few review studies indicates the need for a synthesis of the disparate body of scholarship in relation to online tests such as presented in this paper.

Exploring online tests

In this section, we review the literature around online tests to identify some of the research on their use in higher education contexts, including the rationale for their use, their relationship to student learning, and trends in practice.

The rationale for using online tests

The use of e-assessment generally, and online tests in particular, has increased in higher education over the last two decades. This is a corollary of reduced resources for teaching and increased student numbers, meaning academics are required to do more with less while adapting to the increasing use of technology in teaching (Donnelly, 2014; Nicol, 2007). The potential of technology has been harnessed to ameliorate the challenge of heavy academic workloads in teaching and assessment, with the use of e-assessment providing a way to “avoid disjunction between teaching and assessment modes” (Gipps, 2005, p. 173). In other words, the growth in the use of ICTs as a mode of teaching necessitates their growth as a mode of assessment, which, Gipps claims, increases the mode validity of the assessment practices.

Gipps (2005) also points to efficiency and pedagogic reasons for using online tests. Because marking and feedback are automated, online tests are viewed as highly efficient, fast, and reliable, making them especially useful where large numbers of students are concerned. Consequently, online tests are very common in large first-year classes. Their efficiency also stems from the ability to test a wide range of topics in one short test, compared with assessment where responses need to be constructed (Brady, 2005). The capability to create, manage, and deploy online tests within an LMS means that much previously manual work is automated (Piña, 2013). Adding to the efficiency, most major textbook publishers, such as Cengage Learning, Pearson Education, and McGraw-Hill Education, have linked online question banks to their textbooks, at least in disciplines where the use of online tests is common; these question banks integrate easily with the more popular LMSs such as Blackboard™. Instead of creating questions from scratch, academics can select or import them wholesale from these test banks.

While the mode validity and efficiency reasons for using online tests are easily observable, it is the pedagogic reasons that are undoubtedly the most critical. It is imperative to unpack whether online tests do in fact support and assess student learning in higher education, and if so, what kind of learning they facilitate and in what circumstances. The next few sections explore the literature around these questions.

Cognitive levels of questions

Typically, online tests involve the use of multiple choice questions (MCQs), true/false questions, matching questions, and predetermined short answer questions. LMSs allow these and other question types to be included in the creation and deployment of online tests. Of these, MCQs are the most commonly used question type (Davies, 2010; Nicol, 2007; Simkin & Kuechler, 2005); hence, the discussion will primarily, but not exclusively, focus on them.

The focus of MCQs can vary from recall-type questions to questions that demand higher cognitive levels of engagement (Douglas et al., 2012). MCQs can therefore be used to assess different types of learning outcomes. For example, a two-factor study of study approach and performance (Yonker, 2011) distinguished between factual MCQs and application MCQs. The distinction reflected the level of difficulty or cognitive demand of the questions: in that study, students who employed surface learning approaches performed relatively poorly on application MCQs compared with those who used deep learning approaches. Using Bloom’s taxonomy, some authors have asserted that MCQs are most suitable for the first three cognitive levels of remember, comprehend, and apply (Simkin & Kuechler, 2005; Douglas et al., 2012) and, to some extent, the level of analysis (Buckles & Siegfried, 2006; Brady, 2005).

Online tests in context

The effectiveness of online tests is best realised when they are implemented in the context of the whole learning experience, including the use of other assessment types. Indeed, Douglas et al. (2012) recommend that online tests be used in conjunction with other forms of assessment to achieve their full effect. Smith (2005), in a study that investigated how performance on formative assessment (including online tests) related to learning as assessed by final examination, concluded that frequent and diverse assessment can enhance student engagement and performance. Essentially, each assessment type in a suite of assessment approaches in a particular unit of study, including online tests, should target appropriate learning outcomes and complement the other types of assessment. Furthermore, a study by Donnelly (2014) found that case-study-based MCQs led to a higher level of learning and deeper processing of information than MCQs not based on case studies. This led the author to conclude that blending assessment methods (in this case MCQs and case studies) can enhance student learning while also addressing the challenges of large class sizes. This is consistent with Nicol (2007), who concludes that if MCQs are designed creatively and the context of their implementation is managed accordingly, they can be used to achieve the Seven Principles of Good Feedback Practice (Nicol & Macfarlane-Dick, 2006), including clarifying good performance, self-assessment and reflection, and dialogue.

Online tests and formative learning

The literature points to online tests being best suited for formative purposes, that is, as assessment for learning. In particular, the studies reveal nuances in the relationship between online formative tests and student learning. Pedagogic reasons for using online tests include the opportunities they provide for automated, rich, descriptive, formative feedback, which potentially scaffolds the learning process and allows learners to self-evaluate and improve their performance in preparation for summative assessment (Gipps, 2005; Nicol, 2007). Formative online tests can also be used for diagnostic purposes and to assist staff in identifying where they should focus their teaching efforts.

Formative online tests contribute to student learning, as measured by summative assessment and particularly examinations (Angus & Watson, 2009; Kibble, 2007; Smith, 2007). This is particularly the case if the same types of outcomes or cognitive abilities are assessed by both the formative tests and the summative assessment (Simkin & Kuechler, 2005). The positive correlation between online formative tests and student learning (as indicated by summative achievement) is enhanced by task design that includes the following specific features.

Firstly, in a study using statistical analysis to compare formative scores to summative scores, Smith (2007) found that increased student learning takes place where there is engaged participation in online tests, which can be encouraged by assigning credit/marks to them.

Secondly, studies using statistical analysis (Angus & Watson, 2009; Smith, 2007) and those combining statistical analysis with mixed-method surveys of student and staff perceptions of online formative tests (Kibble, 2007) show that student learning is enhanced where the online tests are regular, carry low-stakes credit, and are not too overwhelming.

Thirdly, surveys of staff and/or student perceptions of online tests (Baleni, 2015; Kibble, 2007) identified that student learning is enhanced where multiple attempts (at least two) at a test are available, with qualitative feedback given after each attempt. The multiple attempts not only provide an opportunity for feedback and revision of material but also play a role in building confidence for taking the online tests and subsequent exams.

The nature of feedback

The nature of feedback practice is critical in facilitating the learning process. While online tests are commonly set up to give feedback about the correctness of the student’s response to the question, some of the scholarly articles reviewed reported on improved student performance when immediate corrective feedback, or feedback about how to improve performance, was built into the test (Epstein, Lazarus, Calvano, & Matthews, 2002; Gipps, 2005; Kibble, 2007). This corrective feedback could include referring students to a particular module or page in the text for further study.

Feedback can take the form of quantitative feedback, which informs students about their grades, and qualitative feedback, which allows students to review their understanding of the content. For example, Voelkel (2013) describes a two-stage approach to online tests in which the first stage is a formative test with prompt qualitative feedback provided to students. During this stage, students have multiple attempts to achieve at least 80%, and once this is achieved they are given access to the second stage of the test. The second stage is summative in nature and contains variants of the questions used in the first stage. This staged use of online tests reportedly improves the performance not only of good students but also of weaker ones.
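To make the staging concrete, the following is a minimal sketch of the gating logic in such a two-stage design. It assumes scores are recorded as fractions between 0 and 1 and that the 80% threshold applies to a student's best formative attempt; these details, and the function names, are illustrative assumptions rather than features of Voelkel's implementation.

```python
PASS_MARK = 0.80  # threshold for the formative first stage, as reported in the study

def best_formative_score(attempt_scores: list[float]) -> float:
    """Best score across the student's formative attempts (scores as fractions 0-1)."""
    return max(attempt_scores, default=0.0)

def summative_stage_unlocked(attempt_scores: list[float]) -> bool:
    """The summative second stage opens once the formative threshold has been reached."""
    return best_formative_score(attempt_scores) >= PASS_MARK

# Example: a student reaches 85% on their third formative attempt,
# so the second-stage (summative) variant of the test is released.
print(summative_stage_unlocked([0.55, 0.70, 0.85]))  # True
```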

Beyond automated immediate feedback and multiple attempts, Nicol (2007) presents case studies of good feedback practices in online tests (Nicol & Macfarlane-Dick, 2006). The practices identified include the staging of online tests across the semester to facilitate provision of feedback to students as well as diagnostic feedback to the lecturer to inform teaching (Bull & Danson, 2004); facilitating reflection/self-assessment and enhancing motivation to engage students more deeply with test questions (and answers) through confidence-based marking (Gardner-Medwin, 2006); and using online MCQs and electronic voting systems/polling and subsequent peer interactions to facilitate reflection/self-assessment and dialogue (Boyle & Nicol, 2003; Nicol & Boyle, 2003). The major emphasis of Nicol’s (2007) analysis of these cases was to highlight how MCQs can be used to good effect to enhance learner self-regulation.
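Confidence-based marking rewards students for calibrating how sure they are of an answer. The sketch below illustrates the general idea only; the specific mark and penalty values are assumptions chosen for illustration and are not necessarily those used in Gardner-Medwin's scheme.

```python
# Confidence levels run from 1 (low) to 3 (high). Marks rise with confidence when the
# answer is correct, and penalties rise with confidence when it is wrong, which
# discourages confident guessing. The values here are illustrative assumptions.
MARK_IF_CORRECT = {1: 1, 2: 2, 3: 3}
PENALTY_IF_WRONG = {1: 0, 2: -2, 3: -6}

def confidence_mark(correct: bool, confidence: int) -> int:
    """Return the mark awarded for one item under confidence-based marking."""
    if confidence not in (1, 2, 3):
        raise ValueError("confidence must be 1, 2 or 3")
    return MARK_IF_CORRECT[confidence] if correct else PENALTY_IF_WRONG[confidence]

# A confidently wrong answer costs more than a cautiously wrong one.
print(confidence_mark(True, 3), confidence_mark(False, 3), confidence_mark(False, 1))  # 3 -6 0
```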

Student attitudes to online tests

While a positive correlation generally exists between formative tasks and summative performance, student learning also depends on factors that are not directly related to assessment but which nonetheless have a direct bearing on students’ participation and engagement with formative assessment. For example, formative assessment will only benefit students who are motivated to achieve high performance (Kibble, 2007; Smith, 2007) and make an effort to engage and learn (Baleni, 2015). Students who do not engage, either because of constraining circumstances such as being time-poor or because of a lack of interest in excelling, are unlikely to benefit significantly from formative assessment tasks (Smith, 2007).

While most of the studies reviewed focused on comparative analysis between engagement with formative online tests and summative performance, a few studies also investigated student and staff perceptions of online tests. These studies generally revealed a positive attitude towards MCQs by both staff and students (Baleni, 2015; Donnelly, 2014; Kibble, 2007). The reasons for students’ positive attitudes towards online tests are varied but were mostly ascribed to the perceived ease of MCQs (Donnelly, 2014; Gipps, 2005). In addition, some students liked the idea of multiple attempts and feedback (Baleni, 2015; Kibble, 2007). Other students thought that having a choice of answers in MCQs assisted their memory and thinking processes (Donnelly, 2014). The convenience of being able to take online tests anywhere was also a favourable factor for students (Baleni, 2015). On the negative side, however, a reason students gave for not liking MCQs was that this form of assessment did not allow them to demonstrate their level of knowledge (Donnelly, 2014).

Online tests and twenty-first century learning

Overall, the literature reviewed reveals that online tests should be fit for purpose, assess appropriate learning outcomes and be used in conjunction with other forms of formative and summative assessment targeting different types of outcomes if they are to effectively lead to student learning (Brady, 2005; Smith, 2007; Yonker, 2011). Online tests should be used strategically to facilitate learner engagement and self-regulation. Online tests are used predominantly to assess the foundational knowledge domain; however, with some creative thought, effort, time, and appropriate tools within and outside the LMS, online tests could be applied to the other twenty-first century learning domains of humanistic knowledge and meta-knowledge as well as to achieve the concept of powerful assessment (Scott, 2016).

Examples from the literature show ways in which online tests can be used in the assessment of twenty-first century learning. When designed and used effectively, they can assist academic staff in teaching large student cohorts and provide students with immediate and corrective feedback that can enhance subsequent performance. Yet while online tests clearly have a specific and important role in the assessment of student learning in higher education, they are not without challenges. The following section reviews some of the challenges and issues associated with using online tests.

Challenges and issues around using online tests

A number of challenges and issues are commonly raised in the literature concerning online tests. These include cheating by students (Arnold, 2016; Fontaine, 2012); concern that online tests largely test only the lower levels of comprehension (McAllister & Guidice, 2012); an increased dependency on data banks of MCQs developed and provided by textbook publishers (Masters et al., 2001); over- or under-testing based on the frequency of online tests; and the inflexibility of online tests to cater for diverse groups of students (Stupans, 2006).

The literature indicates numerous practices that students engage in while doing online tests that are often considered to be cheating. These include students treating them as open-book tests, which may involve using multiple computers to search quickly for answers (Fontaine, 2012). If not explicitly told otherwise, students may consider the practice of online searching during an online test acceptable, indeed resourceful. On a more serious level, in online tests there is an increased possibility of students using a proxy to complete the test or of colluding to do it in small groups. Another issue is multiple people logging in under the same username at the same time on different computers to help each other take the test. To counter some of these practices, e-proctoring systems that monitor students visually and digitally while they are doing an online test are available and increasingly used by higher education institutions. However, regardless of their use, online tests remain susceptible to some of these practices.

Studies are inconclusive as to whether there is an increase in cheating in online courses as opposed to face-to-face (f2f) courses (Harmon, Lambrinos, & Buffolino, 2010), but they do show that cheating risk is higher in unproctored online assessments. Cheating in low- or zero-value unproctored online tests raises different levels of concern for different lecturers, but it has been shown (Arnold, 2016) that cheating in formative online unproctored tests does not pay off in the long run, as those students are likely to perform worse in final summative assessments than students who have not cheated.

Cheating is often deterred by utilising control features built into the online test delivery software in an LMS, for example, randomisation of questions and responses, single-question delivery on each screen, no backtracking to previous questions, and/or setting very tight time frames in which to answer the questions. One survey of students (Harmon et al., 2010) concerning tactics to deter cheating rated the following as the top four, in order of effectiveness: using multiple versions of a test so that students do not all receive the same questions, randomising question order and response order, not using identical questions from previous semesters, and proctor vigilance.
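As an illustration of the randomisation tactics described above, the sketch below draws a per-student selection of questions from a bank and shuffles the answer options. It is a minimal, hypothetical example: the question format (dicts with 'stem' and 'options' keys) and the choice to seed the random number generator with a student identifier are assumptions for illustration, not features of any particular LMS.

```python
import random

def personalised_paper(question_bank: list[dict], student_id: str, n_questions: int) -> list[dict]:
    """Draw a per-student selection of questions and shuffle the answer options.

    Seeding the random number generator with the student identifier makes each
    student's version reproducible, which helps when reviewing or re-marking.
    """
    rng = random.Random(student_id)
    selected = rng.sample(question_bank, k=min(n_questions, len(question_bank)))
    paper = []
    for question in selected:
        options = list(question["options"])  # copy so the shared bank is not mutated
        rng.shuffle(options)
        paper.append({"stem": question["stem"], "options": options})
    return paper

bank = [
    {"stem": "2 + 2 = ?", "options": ["3", "4", "5", "22"]},
    {"stem": "The capital of Australia is ...", "options": ["Sydney", "Canberra", "Melbourne", "Perth"]},
    {"stem": "H2O is commonly known as ...", "options": ["water", "hydrogen", "oxygen", "helium"]},
]
print(personalised_paper(bank, student_id="s1234567", n_questions=2))
```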

Harmon et al. (2010) provide a warning to higher education institutions that do not address issues of cheating in online tests. They point out that higher education institutions that are “tone deaf to the issue of proctoring online multiple choice assessments may understandably find other institutions reluctant to accept these courses for transfer credit” (Harmon et al., 2010, Summary para. 2).

An additional threat to online tests, as well as to other forms of e-assessment, is their susceptibility to emerging cyber-security threats such as hacking. Dawson (2016) identifies this as a particular concern in the context of invigilated online exams conducted on students’ own devices. While this threat cannot be disregarded for online tests, hacking and other cyber-security threats are more likely to affect high-stakes examinations and to be less of a concern for the low-stakes formative assessment that is more usual in online tests.

Feedback and learning

Although the provision of immediate feedback is a positive feature that can be enabled in online tests, this feature is often disabled to reduce the opportunity for cheating when online tests are used for summative purposes. Lack of feedback can have negative memorial consequences for student learning, particularly when MCQs are used. MCQs expose students to answers that are incorrect, and this can reinforce incorrect understandings and lead students to learn false facts if feedback is not given (Fazio, Agarwal, Marsh, & Roediger, 2010; Roediger & Marsh, 2005). This negative impact of MCQs is reduced when immediate or delayed feedback is provided (Butler & Roediger, 2008).

Targeting low cognitive levels

While online tests can be used to assess learning at a range of cognitive levels (McAllister & Guidice, 2012), they are generally used only to assess low-level cognition. There is evidence indicating that “multiple-choice testing all too frequently does not incorporate, encourage, or evaluate higher-level cognitive processes and skills” (McAllister & Guidice, 2012, p. 194). The impact of this on student learning will depend on the level and learning outcomes of the unit, as well as the weighting of the assessment and the mix of assessment types being used.

Before the advent of widespread use of the online medium, it was not uncommon for textbook questions to be shallow. For example, Hampton (1993) found that 85% of the MCQs and true/false questions provided by a textbook publisher were aimed at remembering and recalling facts. This is no different from current online test banks. However, as previously mentioned, depending on the underpinning pedagogical principles and their context, online tests can be used to facilitate higher-order learning, for instance through the use of case-study-based MCQs (Donnelly, 2014; Hemming, 2010).

Multiple-choice questions are also known to encourage guessing by students (Douglas, Wilson, & Ennis, 2012), and the scoring systems in LMSs may not adequately support negative-scoring techniques, which provide a means of statistically counteracting guessing. Furthermore, in formative assessment, students are often not inclined to find the correct answers, and the reasoning behind them, for questions that they answered incorrectly. In summative assessment, students often take a pragmatic approach to gaining the score that they need, adopting a strategic rather than a deep approach to learning through assessment (Douglas et al., 2012). These factors contribute to the concerns of academic staff about whether the use of online tests represents good practice in assessment design (Bennett et al., 2017).
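For readers unfamiliar with negative scoring, the classical "correction for guessing" deducts a fraction of the wrong answers so that blind guessing has an expected gain of zero. The sketch below implements that standard formula; the function name and the example numbers are our own illustrative choices.

```python
def corrected_score(num_right: int, num_wrong: int, options_per_item: int) -> float:
    """Classical correction for guessing ('formula scoring'): the expected gain from
    blind guessing is cancelled by deducting wrong answers / (options - 1).
    Unanswered items are simply not counted."""
    if options_per_item < 2:
        raise ValueError("a multiple-choice item needs at least two options")
    return num_right - num_wrong / (options_per_item - 1)

# Example: 12 right and 8 wrong on four-option items gives 12 - 8/3, roughly 9.33
print(round(corrected_score(12, 8, 4), 2))
```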

Publishers’ test banks

Many textbook publishers provide banks of test questions that can be deployed as online tests through institutional LMSs. The influence that profit-driven publishers have in dictating assessment practices in higher education raises concerns for many, particularly about the quality of the assessment questions and the suitability of MCQ-format assessment for testing twenty-first century skills and knowledge (Vista & Care, 2017). The cognitive level of the questions in publisher test banks is often at the recall level, as discussed above. Additionally, there is often little security around the storage of test bank questions, and students usually have access to the same publisher test bank questions as academic staff when they purchase the textbook. While this provides an opportunity for students to access and practise test bank questions at their leisure, it can raise concerns for academic staff if they do not want students to see questions that may form part of graded assessment.

A highly publicised example of students having access to a publisher’s test bank questions occurred in 2010 at the University of Central Florida (Good, 2010), when approximately 200 students admitted to having had access to the test bank prior to a mid-term online test. Much of the discussion around this case centred on whether the students were using justifiable resources for revision purposes and whether the publisher’s questions should only be used for formative purposes, implying that the lecturer should have written the summative test questions himself but did not and was therefore, in some way, negligent.

An additional problem with publishers’ test banks is that evaluations of the questions in them have revealed that they may not always be as rigorously assessed for reliability and validity as would be necessary for summative assessment (Masters et al., 2001). Ibbett and Wheldon (2016) identify a number of flaws associated with the construction of MCQs, particularly those sourced from publishers’ test banks. They contend that one of the serious flaws in these test banks relates to clueing signals in questions, which increase the possibility of students guessing the correct answers. In a review of a sample of six well-established accounting textbooks, they found that at least two-thirds of the questions had some clueing signals. Their findings point to a need for greater scrutiny of questions from publishers’ test banks. Indeed, while lecturers expect that questions contained in publishers’ test banks will function correctly, be error-free and be well written, this is not always the case.

There are also issues around the cost and accessibility of publishers’ test banks. As a security measure, some online publisher tests can only be accessed through versions of the textbook that include a specific code. That is a challenge for the lecturer if the cohort includes students who do not purchase that version (Bennett et al., 2017). There is strong evidence of high levels of financial stress on higher education students, particularly those from low socioeconomic backgrounds (Karimshah et al., 2013). The use of highly protected publisher test banks can disproportionately impact these students.

Running regular, low-stakes (for example, weekly) online tests is a popular approach (Bennett et al., 2017) designed to prevent students from falling behind. The provision of ungraded practice online tests is a common method of familiarising students with the functionality and requirements of summative tests. The number of students attempting the practice tests generally decreases as the unit progresses (Lowe, 2015), but those who do complete the practice tests tend to perform better in the summative tests.

While regular online tests and multiple attempts are recommended, there is a risk that a high frequency of tests can become overwhelming for both staff and students. A balance therefore needs to be struck between optimising student learning and managing staff and student workload.

Student diversity

Twenty-first century learning takes place in a globalised world, where university classes are characterised by cultural as well as demographic diversity (Arkoudis & Baik, 2014). Students’ views of assessment and their understanding of its purpose are related to their culturally linked experiences of education (Wong, 2004). Students’ differing knowledge, skills, and confidence in using digital technologies may also impact on assessment outcomes, and where online tests are used, consideration needs to be given to preparing students so they are not disadvantaged by the technology or procedures used (Stödberg, 2012). This is particularly the case for older students, Indigenous students and international students from certain countries, who may not have current or strong knowledge of the digital technologies used in higher education. There is, however, little in the literature reviewed that addresses the use of online tests with diverse student cohorts.

The tight, defined time frames set for online tests, often used to deter cheating, can cause problems for students with a slow reading speed, which may include students for whom the language of instruction is not their first language. As a consequence, questions used in online tests need to be reviewed to determine whether they assess topic knowledge alone or whether they also test high levels of language proficiency (Stupans, 2006).

An increasing number of students from non-traditional backgrounds are entering higher education, not as direct school leavers but as people entering or returning to tertiary study later in life. Yonker (2011) shows that older students tend to perform better in online tests irrespective of whether the tests are based on factual or applied knowledge. The implication here has less to do with the design of online tests and more to do with analysing student cohort demographics and adapting teaching strategies accordingly. This includes diagnosing problem areas for a specific group and giving targeted feedback.

A vision for online tests in a digital future

Online tests have immense potential to play an important educative role in student learning in higher education despite the challenges and issues associated with their use. However, given the range of assessment options available, and particularly given the emphasis on authentic forms of assessment in Scott’s vision of “right” assessment for twenty-first century learning, the use of online tests needs to be considered carefully to ensure that they are fit for purpose. Decisions about when and how to use online tests in higher education need to take into account a range of factors, including the benefits for both students and academics.

In this section, we utilise the Assessment Design Decisions Framework (Bearman et al., 2014, 2016) to guide assessment decision making around the use of online tests. The framework can assist curriculum developers, learning designers and academics to work systematically through key questions in determining the type of assessment to be applied. The Assessment Design Decisions Framework breaks the decision-making process around assessment into six parts: purposes of assessment, context of assessment, learner outcomes, tasks, feedback processes, and interactions. The first three parts of the framework emphasise the considerations that take place in determining the assessment type. These include clarifying the purpose of the assessment and the context in which it takes place, as well as determining how the assessment task aligns with unit and course level outcomes. The fourth and fifth parts of the framework relate to aspects of the task design itself and the feedback processes built into the task design. Finally, the sixth part relates to interactions between all stakeholders about the assessment, which are integral to its review and continual improvement.
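To show how the six parts of the framework might be operationalised as a design-time checklist, the sketch below pairs each part with a prompt question. The prompt questions and the helper function are our own illustrative assumptions; they are not wording taken from Bearman et al.'s framework.

```python
# The six parts are those named in the text above; the attached prompt questions
# are illustrative assumptions, not quotations from Bearman et al. (2014, 2016).
ADDF_CHECKLIST: dict[str, list[str]] = {
    "purposes of assessment": ["Is the online test formative, summative, or both?"],
    "context of assessment": ["What are the class size, discipline and delivery mode?"],
    "learner outcomes": ["Which unit and course outcomes does the test target?"],
    "tasks": ["Are the question formats and cognitive levels fit for purpose?"],
    "feedback processes": ["When and how is corrective feedback released to students?"],
    "interactions": ["How will staff and students review and improve the test?"],
}

def undecided_parts(decisions: dict[str, str]) -> list[str]:
    """Return the framework parts for which no design decision has been recorded yet."""
    return [part for part in ADDF_CHECKLIST if part not in decisions]

# Example: only the purpose has been decided so far.
print(undecided_parts({"purposes of assessment": "weekly formative quiz carrying small credit"}))
```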

In Table 1, we have synthesised the literature around online tests and present principles to maximise the use of online tests in evaluating twenty-first century learning. Framing the principles against each stage of the Assessment Design Decisions Framework provides a systematic approach to considering the use of online tests at each stage of the assessment decision-making process.

The principles articulated in Table 1 provide a systematic approach for academic staff and educational designers to make decisions about whether to use online tests as part of their assessment strategy in a particular unit. They also provide a guide for the design, development and deployment of online tests that enhance student learning as well as evaluate it. As a word of caution, these principles have not yet been applied to practical, real-world decision making about online tests. They have been developed as the first stage of a study exploring the widespread use of online tests at the authors’ institution. We anticipate that the principles will be refined as they are applied and evaluated as part of the study.

Our review of the literature indicates that while online tests are often poorly designed and are predominantly used in the assessment of low-level thinking, they can be used effectively to assess twenty-first century learning, particularly but not exclusively in the foundational knowledge domain. Online tests can be designed to align with the concept of powerful assessment by selecting question formats, cognitive levels, and a philosophical approach to task design that are fit for purpose and focused on authentic contexts. For example, this may be achieved by using a case study approach, targeting cognitive engagement beyond the level of recall, and providing opportunities for group or peer learning around the online test to align with a constructivist philosophy.

The literature points to the ways in which online tests can enhance and even transform assessment through effective, innovative and creative use of their inherent features, such as immediate feedback and scalability. There is clear evidence that online tests can be used to counteract the high workloads of academics, particularly in the assessment and marking of large student groups, while providing students with immediate, quality feedback that contributes to their learning. The literature also indicates some of the challenges involved in the use of online tests, such as the widespread use of publisher test banks focused on Euro-centric knowledge contexts and the overrepresentation of online test questions that assess low levels of cognition. These challenges lead us not to dismiss the value of online tests in the assessment of twenty-first century learning but to identify how these concerns can be addressed.

In conducting the review, we also found significant gaps in the literature around online tests that point to the need for further investigation. While online tests are mainly used in the assessment of foundational knowledge, there is some evidence in the literature of their use in assessing the meta-knowledge domains of twenty-first century learning but almost none in assessing humanistic knowledge. This may be a result of the limited use of online tests in the humanities. There are also gaps in understanding the experiences of students from diverse linguistic and cultural backgrounds in online tests. This area of research is particularly needed given the internationalisation of higher education in terms of curricula and student mobility, and yet the extensive reliance on Anglo-centric publishers’ test banks in the development of online tests.

In conclusion, we have drawn from the literature on online tests to distil a set of initial principles for decision making in relation to the selection and design of online tests in the context of twenty-first century learning. This study concludes that the limitations evident in the use of online tests in the digital present are not inherent features of online tests but are a product of poorly conceived design, development, and deployment. Using the principles to guide strategic decision making about online tests, we envision a digital future in which online tests are used when they are fit for purpose and are optimised for the assessment of and for twenty-first century learning.

Note: The term “online tests” is preferred because it is more encompassing.

Abbreviations

ICT: Information and communications technology

LMS: Learning management system

MCQ: Multiple choice question

Anderson, T. (2016). Theories for learning with emerging technologies. In G. Veletsianos (Ed.), Emergence and innovation in digital learning: Foundations and applications (pp. 35–50). Edmonton: AU.

Angus, S. D., & Watson, J. (2009). Does regular online testing enhance student learning in the numerical sciences? Robust evidence from a large data set. British Journal of Educational Technology, 40 (2), 255–272.

Arkoudis, S., & Baik, C. (2014). Crossing the interaction divide between international and domestic students in higher education. HERDSA Review of Higher Education, 1 , 47–62.

Arnold, I. J. M. (2016). Cheating at online formative tests: does it pay off? The Internet and Higher Education, 29 , 98–106.

Baleni, Z. G. (2015). Online formative assessment in higher education: its pros and cons. Electronic Journal of e-Learning, 13 (4), 228–236.

Bearman, M., Dawson, P., Boud, D., Hall, M., Bennett, S., Molloy, E., Joughin, G. (2014). Guide to the Assessment Design Decisions Framework. http://www.assessmentdecisions.org/guide .

Bearman, M., Dawson, P., Boud, D., Bennett, S., Hall, M., & Molloy, E. (2016). Support for assessment practice: developing the Assessment Design Decisions Framework. Teaching in Higher Education, 21 (5), 545–556.

Bennett, S., Dawson, P., Bearman, M., Molloy, E., & Boud, D. (2017). How technology shapes assessment design: findings from a study of university teachers. British Journal of Educational Technology, 48 , 672–682.

Boyle, J., & Nicol, D. (2003). Using classroom communication systems to support interaction and discussion in large class settings. Association for Learning Technology Journal, 11 (3), 43–57.

Brady, A. M. (2005). Assessment of learning with multiple-choice questions. Nurse Education in Practice, 5 (4), 238–242.

Buckles, S., & Siegfried, J. J. (2006). Using multiple-choice questions to evaluate in-depth learning of economics. The Journal of Economic Education, 37 (1), 48–57.

Bull, J., & Danson, M. (2004). Computer-aided assessment (CAA) . York: LTSN Generic Centre.

Butler, A. C., & Roediger, H. L. (2008). Feedback enhances the positive effects and reduces the negative effects of multiple-choice testing. Memory & Cognition, 36 (3), 604–616.

Davies, S (2010). Effective assessment in a digital age. Bristol: JISC Innovation Group. https://www.webarchive.org.uk/wayback/archive/20140614115719/http://www.jisc.ac.uk/media/documents/programmes/elearning/digiassass_eada.pdf . Accessed 21 July 2017.

Dawson, P. (2016). Five ways to hack and cheat with bring‐your‐own‐device electronic examinations. British Journal of Educational Technology, 47 (4), 592–600.

Donnelly, C. (2014). The use of case based multiple choice questions for assessing large group teaching: Implications on student’s learning. Irish Journal of Academic Practice, 3 (1), 12.

Douglas, M., Wilson, J., & Ennis, S. (2012). Multiple-choice question tests: a convenient, flexible and effective learning tool? A case study. Innovations in Education and Teaching International, 49 (2), 111–121.

Epstein, M. L., Lazarus, A. D., Calvano, T. B., & Matthews, K. A. (2002). Immediate feedback assessment technique promotes learning and corrects inaccurate first responses. The Psychological Record, 52 (2), 187.

Fazio, L. K., Agarwal, P. K., Marsh, E. J., & Roediger, H. L. (2010). Memorial consequences of multiple-choice testing on immediate and delayed tests. Memory & Cognition, 38 (4), 407–418.

Fontaine, J. (2012). Online classes see cheating go high-tech. Chronicle of Higher Education, 58 (38), A1–2.

Gardner-Medwin, A. (2006). Confidence-based marking. In C. Bryan & K. Clegg (Eds.), Innovative assessment in higher education . London: Routledge, Taylor and Francis Group Ltd.

Gipps, C. V. (2005). What is the role for ICT‐based assessment in universities? Studies in Higher Education, 30 (2), 171–180.

Good, A (2010). 200 students admit cheating after professor's online rant. The telegraph . Retrieved from http://www.telegraph.co.uk/news/newsvideo/weirdnewsvideo/8140456/200-students-admit-cheating-after-professors-online-rant.html.

Hampton, D. (1993). Textbook test file multiple-choice questions can measure (a) knowledge, (b) intellectual ability, (c) neither, (d) both. Journal of Management Education, 17 (4), 454–471.

Harmon, O.R., Lambrinos, J., Buffolino, J. (2010). Assessment design and cheating risk in online instruction. Online Journal of Distance Learning Administration, 13 (3). http://www.westga.edu/~distance/ojdla/Fall133/harmon_lambrinos_buffolino133.htm . Accessed 21 July 2017.

Hemming, A. (2010). Online tests and exams: lower standards or improved learning? The Law Teacher, 44 (3), 283–308.

Ibbett, N. L., & Wheldon, B. J. (2016). The incidence of clueing in multiple choice testbank questions in accounting: some evidence from Australia. The E-Journal of Business Education & Scholarship of Teaching, 10 (1), 20.

Karimshah, A., Wyder, M., Henman, P., Tay, D., Capelin, E., & Short, P. (2013). Overcoming adversity among low SES students: a study of strategies for retention. The Australian Universities' Review, 55 (2), 5–14.

Kereluik, K., Mishra, P., Fahnoe, C., & Terry, L. (2013). What knowledge is of most worth: teacher knowledge for 21st century learning. Journal of Digital Learning in Teacher Education, 29 (4), 127–140.

Kibble, J. (2007). Use of unsupervised online quizzes as formative assessment in a medical physiology course: effects of incentives on student participation and performance. Advances in Physiology Education, 31 (3), 253–260.

Kinash, S., Crane, L., Judd, M. M., Mitchell, K., McLean, M., Knight, C., Dowling, D., & Schulz, M. (2015). Supporting graduate employability from generalist disciplines through employer and private institution collaboration . Sydney: Australian Government, Office for Learning and Teaching.

Lowe, T. W. (2015). Online quizzes for distance learning of mathematics. Teaching Mathematics and Its Applications : An International Journal of the IMA, 34 (3), 138–148.

Masters, J., Hulsmeyer, B., Pike, M., Leichty, K., Miller, M., & Verst, A. (2001). Assessment of multiple-choice questions in selected test banks accompanying text books used in nursing education. Journal of Nursing Education, 40 (1), 25–32.

McAllister, D., & Guidice, R. M. (2012). This is only a test: a machine-graded improvement to the multiple-choice and true-false examination. Teaching in Higher Education, 17 (2), 193–207.

Nicol, D. (2007). E‐assessment by design: using multiple‐choice tests to good effect. Journal of Further and Higher Education, 31 (1), 53–64.

Nicol, D., & Boyle, J. T. (2003). Peer instruction versus class-wide discussion in large classes: a comparison of two interaction methods in the wired classroom. Studies in Higher Education, 28 (4), 457–473.

Nicol, D., & Macfarlane‐Dick, D. (2006). Formative assessment and self‐regulated learning: a model and seven principles of good feedback practice. Studies in Higher Education, 31 (2), 199–218.

Piña, A. A. (2013). Learning management systems: A look at the big picture. In Y. Kats (Ed.), Learning management systems and instructional design: Best practices in online education (pp. 1–19). Hershey: Idea Group Inc (IGI).

Roediger, H. L., & Marsh, E. J. (2005). The positive and negative consequences of multiple-choice testing. Journal of Experimental Psychology: Learning, Memory & Cognition, 31 (5), 1155–1159.

Scott, G (2016). FLIPCurric. http://flipcurric.edu.au/

Simkin, M. G., & Kuechler, W. L. (2005). Multiple‐choice tests and student understanding: what is the connection? Decision Sciences Journal of Innovative Education, 3 (1), 73–98.

Smith, G. (2007). How does student performance on formative assessments relate to learning assessed by exams? Journal of College Science Teaching, 36 (7), 28.

Stödberg, U. (2012). A research review of e-assessment. Assessment & Evaluation in Higher Education, 37 (5), 591–604.

Stone, D. E., & Zheng, G. (2014). Learning management systems in a changing environment. In V. C. X. Wang (Ed.), Handbook of research on education and technology in a changing society (pp. 756–767). Hershey: IGI Global.

Stupans, I. (2006). Multiple choice questions: can they examine application of knowledge? Pharmacy Education, 6 (1), 59–63.

Sweeney, T., West, D., Groessler, A., Haynie, A., Higgs, B. M., Macaulay, J., & Yeo, M. (2017). Where’s the Transformation? Unlocking the Potential of Technology-Enhanced Assessment. Teaching & Learning Inquiry, 5 (1), 1–13.

Vista, A, & Care, E (2017). It’s time to mobilize around a new approach to educational assessment. Stanford social innovation review . Retrieved from https://ssir.org/articles/entry/its_time_to_mobilize_around_a_new_approach_to_educational_assessment1 . Accessed 21 July 2017.

Voelkel, S (2013). Combining the formative with the summative: the development of a two-stage online test to encourage engagement and provide personal feedback in large classes. Research in Learning Technology, 21 (1).

Wong, J.K.K. (2004). Are the Learning Styles of Asian International Students Culturally or Contextually Based? International Education Journal, 4 (4), 154-166.

Yonker, J. E. (2011). The relationship of deep and surface study approaches on factual and applied test‐bank multiple‐choice question performance. Assessment & Evaluation in Higher Education, 36 (6), 673–686.

Author information

Authors and affiliations

Charles Darwin University, Ellengowan Dr, Casuarina, NT, 0810, Australia

Bopelo Boitshwarelo, Alison Kay Reedy & Trevor Billany

Contributions

All three authors, BB, AR, and TB, have approved the manuscript for submission.

Corresponding author

Correspondence to Bopelo Boitshwarelo.

Ethics declarations

Authors’ information

The authors are part of a central team of educational developers working in the higher education sector at Charles Darwin University, a regional university in the north of Australia.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Boitshwarelo, B., Reedy, A.K. & Billany, T. Envisioning the use of online tests in assessing twenty-first century learning: a literature review. RPTEL 12 , 16 (2017). https://doi.org/10.1186/s41039-017-0055-7

Received : 03 February 2017

Accepted : 11 May 2017

Published : 07 August 2017

DOI : https://doi.org/10.1186/s41039-017-0055-7

Keywords

  • Online tests
  • Online quizzes
  • Multiple choice questions
  • Higher education
  • Twenty-first century learning
  • Formative assessment
  • E-assessment
