"Collecting data is one thing, but making sense of them is something else. We want to use analytic techniques that are simple, direct, and effective" (Allen, 2004, p. 131).
As with your own scholarship, the research question drives the inquiry. In the analysis stage of outcomes assessment, your questions inform your data analysis. The methods you choose should reflect alignment among: purpose of the inquiry; research question(s); types of evidence; and audience with whom you will share the results.
| Purposes for outcomes assessment | Research questions |
| --- | --- |
| Inform planning for the next time a particular course will be taught | Do these data provide acceptable evidence of students’ ability to: |
| Identify students’ areas of strengths and weaknesses within and across a sequence of courses | What can we learn—from the data we have—about students’ achievement of course, program, and/or institutional goals? |
| Confirm or disprove general impressions we have about our students’ learning in relationship to learning outcomes statements | What do these data suggest about what students know and can demonstrate? |
| Inform discussions about curricular and/or program change | What does the evidence indicate about alignment between courses? |
Data and methods
You are already familiar with methods of analysis in your discipline, and the techniques for analyzing outcomes assessment data will also be familiar. To draw on that expertise and strengthen the conclusions you reach, combine measures that assess knowledge efficiently and effectively; for example, follow a written exam with a competence interview. When you use a mixed approach, applying both quantitative and qualitative analytic strategies, you can learn not only what students can do but also why and how. See this Assessment Field Note for examples of aligned data sources and methods.
The results of assessment activities are most meaningful when they provide insight into student learning in comparison to something else, e.g., an analysis of aggregated student performance in relation to program learning outcomes. Establishing target percentages allows faculty to determine whether the program has successfully produced student learning. The most important step in establishing targets is deciding on the level of performance that will allow program faculty to conclude confidently that students are achieving the intended outcomes. For example, consider the following sample targets.
| Data source | Evidence analyzed | Target |
| --- | --- | --- |
| Final exam | Responses to exam questions which pertain directly to PLO #3 | 80% of all responses correct |
| Written course work | Samples of students’ written work, using 4-point analytic rubric aligned to PLO #1 | 80% of scores on criteria related to specific PLOs are 3 or higher |
| Current UCUES data | Student satisfaction rates related to PLO #2 and PLO #5 | 75% or greater |
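Checking results against a target like those above is a simple aggregation. The following is a minimal sketch, not a prescribed tool: the function name, rubric scores, and 80% threshold are all illustrative assumptions for a target such as "80% of scores on criteria related to specific PLOs are 3 or higher."

```python
# Illustrative sketch: checking aggregated rubric scores against a target.
# The scores and threshold below are made-up examples, not real program data.

def meets_target(scores, passing_score, target_fraction):
    """Return True if the fraction of scores at or above passing_score
    meets or exceeds target_fraction."""
    if not scores:
        return False
    passing = sum(1 for s in scores if s >= passing_score)
    return passing / len(scores) >= target_fraction

# Rubric scores on a 4-point scale for criteria aligned to one PLO (hypothetical).
rubric_scores = [4, 3, 2, 3, 4, 4, 3, 2, 3, 4]

# Sample target: 80% of scores are 3 or higher.
print(meets_target(rubric_scores, passing_score=3, target_fraction=0.80))
```

In practice the same check would be run per criterion and per outcome, and the raw fraction (not just a pass/fail flag) is usually worth reporting so faculty can see how close the program is to its target.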
For UC Davis faculty, it is especially important that the standards used are locally defined and meaningful. Although in the past many faculty felt pressure to accept externally defined standards, the WASC 2013 Handbook of Accreditation emphasizes the importance of defining contextually relevant targets for student success: “Standards of performance are best set through internal discussion among faculty and other campus educators” (2013, p. 31).
When identifying performance standards, it is important to keep in mind the ultimate purpose, and the following caveat: “Comparison of universities or departments serves curiosity but serves little the managerial or public good. The more valuable assessments are those that are formative, developmental, aimed at avoidance of dysfunction, repairing weakness and shaping new teaching, research, and public service” (Stake, Contreras, & Arbesú, 2012).
Gather the faculty assessment working group or program subcommittee to analyze the aggregated data from across the courses in the program. Remember that outcomes assessment is intended to provide information that faculty can use to improve opportunities for learning, as well as to make evidence-based strategic plans and budget requests.