Assessment Question Types in Essential Assessment

How our question formats support validity, reliability, and evidence-informed learning

Drag and Drop

What it is:

Interactive tasks where students move numbers, labels, or statements into designated positions for activities like ordering, matching, grouping, or classifying.

Why it suits an online environment:

  • It mimics hands-on manipulatives, effectively supporting dual coding and reducing cognitive load.
  • It provides instant validation, which supports responsive teaching and immediate feedback.
  • It allows teachers to observe partial understanding by seeing both correct placements and "near misses."

Best-practice design features:

  • Limited number of elements to reduce working-memory load.
  • Clear drop zones, sufficient spacing, accessible labels.
  • Distractors based on real misconceptions or near-examples.
  • Each task measures a single clear construct (e.g., ordering decimals).

Short Answer

What it is:

Students type a brief response – numbers, key vocabulary, or a short phrase.

Why it suits an online environment:

  • Requires knowledge generation, not recognition.
  • Great for capturing procedural accuracy and precise conceptual understanding.
  • Auto-marking can accept multiple equivalent correct forms (e.g., 3/4, 0.75, ¾).

Best-practice design features:

  • Very clear expectations (e.g., “Write your answer as a simplified fraction”).
  • Marking rules account for expected variations.
  • Focus on the intended construct, not spelling or formatting quirks.
  • Avoid overly long responses that belong in extended writing tasks.

Multiple Choice (MCQ)

What it is:

Students select one correct option from several choices, with carefully designed distractors reflecting real misconceptions.

Why well-designed MCQs work online:

  • Fast, reliable, automatic marking supports frequent formative checks.
  • Online delivery allows randomisation, adaptive sequencing, and large-scale consistency.
  • MCQs can sample broad curriculum coverage, increasing reliability of overall judgements.
  • Research shows MCQs support retrieval practice (“testing effect”) when followed by feedback, improving long-term retention.

Best-practice design features:

To ensure the validity and reliability of assessment items, adhere to the following design features:

  • Each item targets one clear idea that is directly aligned with a specific content descriptor or learning indicator.
  • Item stems are written in plain English to avoid construct-irrelevant complexity.
  • Distractors (incorrect options) must be plausible, based on common misconceptions, and parallel in grammatical structure to the correct answer.
  • Avoid using options like "all/none of the above," trick clues, or stylistic giveaways that might cue the correct answer.
  • Typically use four options, balancing student thinking time against measurement accuracy.


Why These Three Question Types Work Together

Using drag-and-drop, short answer, and MCQs in combination enables Essential Assessment to deliver:

  • A balanced measure of understanding across recognition, recall, classification, ordering, and conceptual reasoning.
  • Strong validity through construct-focused items aligned to state and national curriculum descriptors.
  • High reliability through consistent automatic marking and broad domain sampling.
  • Powerful, data-informed insights that support explicit instruction, daily review, and targeted teaching.
  • A student experience that is interactive, accessible, and appropriate for online learning—without unnecessary cognitive load.

Why a Balance of Number-Sentence Items and Wording-Based Items Matters

High-quality assessment requires a deliberate balance between number-sentence questions (e.g., 43 + 28, 7 × 6, ¼ of 32) and worded questions that embed mathematics within language and context. Each serves a distinct purpose, aligned directly to curriculum expectations across AC v9.0, Victorian Curriculum 2.0 and the NSW K–10 Syllabus. Number-sentence items assess a student’s procedural fluency, automaticity, and understanding of core operations—key proficiencies emphasised across all Australian mathematics curricula. However, curriculum/syllabus documents also require students to apply this knowledge in real-world and representational contexts, interpret mathematical vocabulary, and select appropriate strategies based on a situation. Worded questions allow students to demonstrate these proficiencies by revealing whether they can translate language into mathematical structure, identify relevant information, and reason using mathematical concepts.

Research in cognitive load and language processing (Sweller et al., 2021; AERO, 2023) emphasises that mathematics performance depends not only on procedural fluency but also on a learner’s capacity to interpret linguistic cues without excessive cognitive strain. A balanced assessment ensures that students are not disadvantaged by over-reliance on either pure symbol manipulation or purely verbal reasoning. It meets curriculum requirements by assessing both “understanding and fluency” and “problem-solving and reasoning” (ACARA, 2022). In digital environments like Essential Assessment, this balance provides a more complete picture of mathematical understanding, allowing teachers to differentiate between conceptual gaps, language barriers, and procedural misconceptions. A mixed item set therefore strengthens validity, equity, and instructional usefulness.
