Designing Assessment Tools: Selected-Response Tests

Considerations. Best Practices. Implementation.

Selected-response tests are composed of questions for which there is typically one best answer. They are sometimes referred to as objective assessments (Suskie, 2018). Some of the most commonly used selected-response item types include multiple-choice, fill-in-the-blank, true-false, and matching questions.

Purpose for Using Selected-Response Tests

Nilson (2016) notes that these tests are good for assessing students’ ability to remember and understand course concepts and materials, but cannot “measure students’ abilities to create, organize, communicate, define problems, or conduct research” (p. 291). 

The following are some of the advantages and disadvantages of selected-response tests; these lists can help you determine whether this type of assessment is the most appropriate for what you want to assess.

Advantages

  • Efficiently assesses a broad range of learning goals
  • Can measure abilities to make judgments about next steps, draw inferences, interpret data, and apply information
  • Can reliably assess lower-order thinking skills 
  • Can be efficient and objective to score
  • Provides diagnostic information    

Disadvantages

  • Good and clear test items are hard to write
  • Cannot assess higher-order thinking skills (e.g., synthesis, organization, original thinking)
  • Cannot provide direct evidence of real-world skills, particularly productive or creative skills 
  • Students can guess correct answers

Best Design Practices

One of the most commonly used selected-response formats is the multiple-choice test, so this resource focuses on suggestions for designing multiple-choice questions.

  • Tie questions to specific learning goals for the course or unit.
  • Avoid asking about trivial information or unimportant facts.
    Zimmaro (2016) notes that testing trivia can heighten test anxiety because students cannot tell what information is actually important.
  • Make sure the statement or question is clear and concise.
    Lengthy, unclear multiple-choice questions can steer even well-prepared students to the incorrect answer and produce considerable anxiety and frustration (Suskie, 2018).
  • Use consistent and clear language.
    Ambiguous terms and statements can lead students to answer incorrectly even when they know the correct answer (Brown, 2005).
  • Only include distractors that are truly “distracting.”
    If a distractor is obviously implausible, it increases students’ chances of guessing the correct answer.
  • Avoid overlapping answers by using mutually exclusive response options, and include only one correct, clearly best answer.
  • Randomize the order of correct responses.
    Avoid patterns in the order of correct responses (for example, making the correct answer to the first question A, to the next question B, then C, and so on). If there is a pattern, students may recognize it and guess the correct answers.
  • Avoid negative phrasing, or clearly signal the negative word to students.
    Negative phrasing can confuse students who know the material, especially if they are short on time (Clegg & Cashin, 1986; Haladyna, 2004; Suskie, 2018). If you do use negative phrasing, one way to signal it clearly is to put the negative word (e.g., NOT, EXCEPT) in all caps and in bold.
  • Avoid assigning “all of the above” and/or “none of the above” options as the correct response.
    Choices such as “all of the above,” “none of the above,” “A & C,” or “B & D” make it harder to distinguish between students who know the material and those who don’t.
  • Avoid giving clues that could be used in answering other items.
    If one item provides clues to another, it becomes difficult to discern whether students really knew the concept or skill assessed in the later item.
  • If possible, have one or more colleagues look over the items.
    Another set of eyes can help you identify items that are ambiguous or that are not assessing what you intend to assess.
  • If you are re-using a previously administered multiple-choice test, use the results from earlier administrations to identify items or distractors that did not work well (see the sketch after this list).
    For example, if you notice that a given distractor was rarely chosen by students, revise that distractor.
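
One simple way to spot distractors that are not working is to tally how often each option was chosen. The Python sketch below illustrates the idea; it assumes you can export each student's chosen option per item from your testing platform, and the item names, options, and response data are all hypothetical.

```python
from collections import Counter

# Hypothetical export: item ID -> the option each student chose.
responses = {
    "Q1": ["A", "A", "C", "A", "B", "A", "A", "C", "A", "A"],
    "Q2": ["D", "D", "D", "B", "D", "D", "D", "D", "D", "D"],
}
options = ["A", "B", "C", "D"]

for item, chosen in responses.items():
    counts = Counter(chosen)
    total = len(chosen)
    # Percentage of students selecting each option; options that almost no one
    # picks are candidates for revision or replacement.
    summary = ", ".join(f"{opt}: {counts[opt] / total:.0%}" for opt in options)
    print(f"{item}: {summary}")
```

In this made-up data, options A and C for Q2 are never chosen, so they do nothing to separate students who know the material from students who are guessing and would be good candidates for revision.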

Scoring

Selected-response questions are easy to score using learning management systems such as Canvas. Instructors can choose from a variety of quiz options, including automatic scoring with immediate or delayed feedback. Machines such as Scantron readers can also score these tests automatically. Such scoring mechanisms often generate statistics for the entire test as well as for each item, and these statistics can be used to evaluate and improve the test items and to address student errors or misconceptions.
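
If your platform does not report item statistics directly, two commonly used ones are straightforward to compute from a table of item scores: the difficulty index (the proportion of students who answered the item correctly) and a simple discrimination index (the difference in proportion correct between the highest- and lowest-scoring students). The Python sketch below is a minimal illustration; the 0/1 score matrix is hypothetical, and real item-analysis reports often use more refined statistics such as point-biserial correlations.

```python
# Hypothetical 0/1 score matrix: rows are students, columns are items
# (1 = answered correctly, 0 = answered incorrectly).
scores = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
]

n_students = len(scores)
n_items = len(scores[0])
totals = [sum(row) for row in scores]

# Split students into an upper and a lower group by total test score.
ranked = sorted(range(n_students), key=lambda i: totals[i], reverse=True)
half = n_students // 2
upper, lower = ranked[:half], ranked[-half:]

for j in range(n_items):
    # Difficulty index: proportion of all students answering the item correctly.
    difficulty = sum(row[j] for row in scores) / n_students
    # Discrimination index: proportion correct in the upper group minus
    # proportion correct in the lower group.
    discrimination = (sum(scores[i][j] for i in upper) / half
                      - sum(scores[i][j] for i in lower) / half)
    print(f"Item {j + 1}: difficulty = {difficulty:.2f}, "
          f"discrimination = {discrimination:.2f}")
```

Items with a discrimination value near zero or negative, or with a difficulty value near 1.0 or 0.0, are usually the first ones worth reviewing.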

References

  • Brown, J. D. (2005). Testing in language programs: A comprehensive guide to English language assessment (New ed.). New York, NY: McGraw-Hill.
  • Clegg, V. L., & Cashin, W. E. (1986). Improving multiple-choice tests (IDEA Paper No. 16). Retrieved from www.theideacenter.org
  • Haladyna, T. (2004). Developing and validating multiple-choice test items [Ebook]. Ebrary, Inc.
  • Nilson, L. B. (2016). Teaching at its best: A research-based resource for college instructors (4th ed.). San Francisco, CA: Jossey-Bass.
  • Suskie, L. (2018). Assessing student learning: A common sense guide (3rd ed.). San Francisco, CA: Jossey-Bass.
  • Zimmaro, D. M. (2016). Writing good multiple-choice exams. Measurement and Evaluation Center, University of Texas at Austin. Retrieved from https://facultyinnovate.utexas.edu/sites/default/files/writing-good-multiple-choice-exams-fic-120116.pdf

Barbara Mills, Learning & Teaching Support, Center for Educational Effectiveness 

Young-A Son, Academic Assessment, Center for Educational Effectiveness