Step 1: Articulate research question(s)

"Good assessments are not once-and-done affairs. They are part of an ongoing, organized, and systematized effort to understand and improve teaching and learning" (Suskie, 2009, p.50).

Most faculty already know at an intuitive level when and whether their students are learning. In the INQUIRE stage, we bring those questions to an explicit level. Often, inquiry into student learning begins with the question: What do you want to know about student learning in your program? Your approach to inquiring about student learning should help you answer the questions that most interest you and your colleagues.

Here are a few example questions that might inform the planning of your assessment initiative:

  • What percentage of our graduating students meet (or exceed) our expectations, as expressed in the program learning outcomes statements?
  • What are our students’ areas of strengths and weaknesses?
  • How can we confirm or disprove general impressions that our students aren’t achieving our expectations, as expressed in the program learning outcomes statements?
  • How well can students demonstrate particular skills at different points in the course of study?

Step 2: Develop a plan to collect evidence of student learning

As with your other scholarly work, your approach to gathering evidence of student learning derives from the questions you want to answer. Your conclusions about student learning will also be more valid and actionable when you include multiple lines of evidence and/or multiple points of analysis. To support sustainable program assessment, focus on data that:

  • provide sufficient information to accurately measure students’ mastery of each learning outcome;
  • do not create excessive additional work for departments; and
  • pertain to locally- or externally-defined standards, which will inform analyses.

If you have not already collected evidence of student learning, review the program’s curriculum matrix to identify courses in which the PLO(s) are addressed. The sources from which you draw evidence will depend on your purpose. If you are interested in tracking growth over time, gather and analyze samples of student work at different points in the program. If you are primarily interested in knowing whether students are meeting program expectations at graduation, analyze work from senior courses. In order to make sound, evidence-based decisions that promote student learning, best practice recommends the use of direct and indirect evidence (Allen, 2004; Baker, Jankowski, Provezis, & Kinzie, 2012; Bresciani, 2004, 2006; Hutchings, Ewell, & Banta, 2012; Kuh & Ikenberry, 2009; Suskie, 2009; Walvoord, 2010).

Types of evidence

Direct Evidence

Direct evidence provides concrete examples of students’ ability to perform a particular task or exhibit a particular skill. 

Course-embedded sources of direct evidence include:

  • Pre-/post-tests of students’ knowledge or skills
  • Exams and quizzes aligned to program learning outcome(s)
  • Research projects, presentations, performances and/or exhibitions, and written work
  • Capstone projects / portfolios
  • Standardized and certification exams
  • Internship supervisor evaluations

Indirect Evidence

Indirect evidence consists of data from which it is possible to make inferences about student learning. Sources include:

  • Graduation, time-to-degree, retention data
  • University of California Undergraduate Experience Survey
  • Recent Baccalaureate Recipients (includes post-graduate employment and post-graduate degree program enrollment)
  • Focus groups and interviews

Sources of evidence

When selecting evidence for the current assessment cycle, consider your research question. Are you interested in identifying students’ strengths and weaknesses at a particular point in time? Or are you interested in assessing how students’ learning progresses over time?

Sources of evidence aligned to research questions (based on Walvoord, 2010):

  • Gather evidence from lower division required course(s) to identify strengths and weaknesses as students enter the program.
  • Gather evidence from course(s) at or toward the end of the program to identify strengths and weaknesses as students exit the program.
  • Gather evidence from both a lower division required course and a course at or toward the end of the program to consider the development of student learning over time.

Also consider the varied purposes of assessment, each of which can point you to useful evidence.

  • Assessment-to-plan-for-learning

    Diagnostic assessment can play a role beyond remediation. For example, a diagnostic assessment conducted at the beginning of a course or program can yield actionable information about students’ prior knowledge. Diagnostic assessment data also provide useful information for students about what they will be expected to know and do by the end of the course or program.
  • Assessment-for-learning

    Formative assessment is an integral part of excellent instruction, because it provides actionable evidence related to students’ progress toward mastery of the learning outcomes during the quarter (or class period). When conducted often, formative assessment provides valuable information to faculty regarding instructional strategies that are producing student learning (or not); formative assessment also provides students with information about their progress in a course. Data collected through formative assessment can provide actionable information at the program level.
  • Assessment-of-learning

    Summative assessment provides a snapshot of student learning at a particular point in time (usually at the end of a course or program). Data from summative assessment can inform an individual faculty member planning for the next quarter; these data are also useful for program faculty interested in assessing students’ mastery of program learning outcomes at a particular time.

Amount of evidence (sampling)

Sample sizes needed for a 5% margin of error (adapted from Suskie, 2009):

  Number of students from which to draw the sample    Random sample size
  1000                                                278
  500                                                 217
  350                                                 184
  200                                                 132
  100                                                  80
  50                                                   44
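
These figures follow the standard sample-size calculation for estimating a proportion, with a finite-population correction. If you need sizes for populations not listed above, the sketch below is one way to compute them; it assumes a 95% confidence level and maximum variability (p = 0.5), and may differ from the table by a student or two depending on rounding conventions.

    import math

    def required_sample_size(population, margin=0.05, z=1.96, p=0.5):
        """Sample size for a given margin of error at 95% confidence,
        with a finite-population correction."""
        # infinite-population estimate (~384 at 95% confidence, 5% margin)
        n0 = (z ** 2) * p * (1 - p) / margin ** 2
        # finite-population correction shrinks the requirement for small programs
        return math.ceil(n0 / (1 + (n0 - 1) / population))

    for n in (1000, 500, 350, 200, 100, 50):
        print(n, required_sample_size(n))
    # agrees with the table above to within one student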

When deciding how much evidence to collect, consider the overall question that guides the inquiry and how the results will be applied. If you are planning a programmatic overhaul, then you’ll probably want a larger sample with a lower error margin. However, the larger the sample, the greater the commitment of time required from faculty. A smaller sample is acceptable if the results will be used to inform minor curricular changes.

Your goal should be to gather a sample large enough that you feel confident using the results to inform program decisions. Suskie (2009) suggests: “collect enough evidence to feel reasonably confident that you have a representative sample of what your students have learned and can do” (p. 47).

  • “Simple” random samples are a straightforward way to obtain a representative sample as they give every student an equal chance of being selected.
  • Cluster random samples can be used with larger groups in the same fashion, by choosing a random sample of subgroups of students and collecting information from everyone in each subgroup.
  • Purposeful or judgment samples are “carefully, but not randomly chosen so that, in your judgment, they are representative of the students you are assessing” (Suskie, 2009, pp. 49-50).
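
To make the first two approaches concrete, the sketch below draws a simple random sample and a cluster random sample from a hypothetical roster; the student IDs, section names, and sample sizes are placeholders, not real data.

    import random

    random.seed(42)  # fix the seed so the sample can be reproduced and audited

    # hypothetical roster of 200 student IDs (placeholder data)
    students = ["S%04d" % i for i in range(1, 201)]

    # simple random sample: every student has an equal chance of selection
    simple_sample = random.sample(students, k=132)

    # cluster random sample: randomly choose whole course sections, then
    # collect work from every student in each chosen section
    sections = {"SEC%02d" % j: students[j * 20:(j + 1) * 20] for j in range(10)}
    chosen = random.sample(sorted(sections), k=3)
    cluster_sample = [s for sec in chosen for s in sections[sec]]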

Step 3: Develop or adapt assessment instruments

Because faculty are already regularly engaged in the evaluation of student performance, most have already thought about the criteria with which they will assess student work. In this step, you intentionally articulate your expectations, with descriptions of expected performance levels for the program or course learning outcomes being considered in this assessment cycle.

Designing a course to provide students with opportunities to demonstrate multiple levels of learning encourages students to take responsibility for self-assessment and to approach coursework metacognitively. The tools with which we assess student learning should therefore be transparent, provided in advance, and clearly linked to learning outcomes. The instruments should also facilitate analysis so that you can answer the question(s) guiding the current cycle of inquiry. The following questions may be useful in developing and articulating assessment criteria:

  • What is acceptable evidence of understanding?
  • What is acceptable evidence of demonstration?
  • What specific characteristics of mastery do you expect? What does mastery look like?
  • Does the evidence you plan to gather provide valid documentation of student learning (rather than information from which you might infer that students have learned)?
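
One way to keep criteria transparent and to facilitate later analysis is to record the rubric, and the scores assigned against it, in a structured form. A minimal sketch, with hypothetical performance-level descriptors and placeholder scores:

    # hypothetical analytic rubric for one program learning outcome (PLO 1);
    # keys are performance levels, values are the observable criteria
    # a reviewer should see at that level
    plo1_rubric = {
        4: "Integrates multiple sources; analysis is accurate and original",
        3: "Uses appropriate sources; analysis is accurate",
        2: "Uses limited sources; analysis contains minor errors",
        1: "Sources absent or inappropriate; analysis is inaccurate",
    }

    # scores recorded against the rubric aggregate directly into the kind
    # of question posed in Step 1, e.g. the share of students meeting expectations
    scores = [4, 3, 3, 2, 4, 1, 3]  # placeholder scores from a scoring session
    meeting = sum(1 for s in scores if s >= 3) / len(scores)
    print("%.0f%% of sampled students meet expectations (level 3+)" % (meeting * 100))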

Step 4: Collect evidence

Once you have determined the types, sources, and amount of evidence you need to answer the guiding research question, proceed with gathering the evidence, keeping in mind the following:

  • How and where will student work samples be organized and stored?
  • How will identifying information be removed from student work?
  • How and where will the aggregated data be organized and stored?
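
For the second question, one lightweight approach is to replace each student’s ID with a stable, non-reversible code before work samples and scores are stored or shared. A minimal sketch; the salt, file name, and scores are illustrative assumptions:

    import csv
    import hashlib

    def pseudonymize(student_id, salt="cycle-2024-salt"):
        """Map a student ID to a stable code; without the salt, the
        mapping cannot feasibly be reversed. Keep the salt private."""
        return hashlib.sha256((salt + student_id).encode()).hexdigest()[:10]

    # placeholder (student ID, rubric score) pairs
    rows = [("S0042", 3), ("S0117", 4), ("S0163", 2)]

    # write de-identified scores to the agreed storage location
    with open("plo1_scores_deidentified.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["code", "plo1_score"])
        for sid, score in rows:
            writer.writerow([pseudonymize(sid), score])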