A

Accreditation: An outward-focused activity in which an institution reports on its financial health; physical and technological infrastructure; staff and faculty capacities; and educational effectiveness. The purpose of accreditation is to provide public accountability to external audiences.

Alignment: The process of intentionally connecting course, program, general education, and institutional learning outcomes. At the program level, alignment represents the ideal cohesive relationship between curriculum and outcomes. Checking alignment allows program faculty to determine whether the curriculum provides sufficient and appropriately sequenced opportunities for students to develop the knowledge, skills, and dispositions identified in Program Learning Outcomes (PLOs).

Analytical Scoring: An approach to assessing student work in which instructors assign scores on multiple criteria. For example, when assessing a written assignment, an instructor might assign a score for each of the following: accuracy of content; appropriate use of evidence to support argument; organization; and adherence to the conventions of academic English. (See also Criterion-referenced; opposite of Holistic Scoring.)

Analytics (also known as learner, learning, or academic analytics): The application of analyses of large data sets about students to predict optimal learning contexts and other factors likely to produce student success.

Assessment: A research and teaching tool to intentionally and systematically inquire about aggregate student success at the institutional, program, and course level. At its core, assessment is about collecting, analyzing, and applying interpretations of data. The results of assessment activities support evidence-based decision making.

Assessment Plan: A collaboratively developed document that establishes a multi-year plan for outcomes assessment. Assessment plans articulate when each learning outcome (LO) will be assessed; the types of direct and indirect evidence (aligned to each learning outcome) that will be collected and analyzed; plans for analyzing the data; procedures to guide discussion and application of results; and timelines and responsibilities.

Authentic Assessment: Provides direct evidence of learners’ knowledge or skill by engaging them in a “real world” task. Authentic assessment provides opportunities for learners to demonstrate what they know and can do within the context of a likely scenario. For example, an authentic assessment of a speaker’s language proficiency might take the form of a conversation with a more advanced speaker.

B

Benchmark: A point of reference to which aggregated evidence of student learning can be compared. The most useful benchmarks are established internally by faculty and/or programs.

C

Capstone: A course or experience toward the end of a program in which students have the opportunity to demonstrate their cumulative knowledge, skills, and dispositions related to some or all of the PLOs. In capstone courses/experiences, students produce direct evidence of their learning. Examples of capstone assignments include: theses, oral defenses, exhibitions, presentations, performances, and/or research papers.

Course Learning Outcomes (CLOs): Statements which articulate, in measurable terms, what students should know and be able to demonstrate as a result of and at the conclusion of a course. CLOs connect course and program learning outcomes; communicate course goals explicitly; and foster transfer of responsibility for learning from faculty to students.

Course Map: See Curriculum Mapping.

Course-level Assessment:  The intentional collection of evidence of student learning with which the instructor can assess mastery of one or more Course Learning Outcomes (CLOs). Through course-level assessment, faculty provide timely and useful feedback to students, use data to assign grades, and record data related to students’ achievement of the CLOs in question. Course-embedded assessment that occurs towards the end of a program can also yield data for program outcomes assessment efforts. (See also Embedded Evidence).

Criteria: The discrete domains of a subject against which a learning performance is rated. For example, criteria included in an assessment of student writing might include accuracy of content, appropriate use of evidence to support argument, organization, and adherence to the conventions of academic English.

Criterion-referenced: Assessment of learning in which evidence of student learning is compared to defined (and articulated) criteria, rather than to other students’ performances. Criterion-referenced assessment yields data that inform discussions about whether a program is providing appropriately sequenced opportunities to learn. (See Analytical Scoring; opposite of Norm-referenced).

Curriculum Alignment: See Alignment.

Curriculum Mapping: The analytic process in which faculty examine the alignment between program learning outcomes and curricula. The primary purpose of curriculum mapping is to identify courses in which PLOs are introduced (I), practiced (P), or should be demonstrated (D). Ideally, this analytic process results in a publicly available visual representation; in addition to promoting transparency, curriculum mapping helps faculty identify courses from which to gather student work for the assessment of a particular PLO.
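
Curriculum maps lend themselves to simple programmatic checks. The sketch below (in Python, using hypothetical course names and PLO labels, not drawn from any real program) represents a curriculum map as a data structure and flags PLOs for which no course offers a "demonstrated" (D) opportunity:

```python
# A minimal sketch of a curriculum map. Course names and PLO labels
# are hypothetical.
curriculum_map = {
    "COURSE 101": {"PLO 1": "I", "PLO 2": "I"},
    "COURSE 120": {"PLO 1": "P", "PLO 3": "I"},
    "COURSE 190": {"PLO 1": "D", "PLO 2": "P"},
}

plos = ["PLO 1", "PLO 2", "PLO 3"]

# For each PLO, collect the levels (I/P/D) at which it appears across courses.
coverage = {
    plo: {levels[plo] for levels in curriculum_map.values() if plo in levels}
    for plo in plos
}

# Alignment gaps: PLOs with no course in which mastery is demonstrated (D).
gaps = [plo for plo, levels in coverage.items() if "D" not in levels]
print(gaps)  # → ['PLO 2', 'PLO 3']
```

A map in this form also answers the companion question from the definition: which courses to sample student work from when assessing a particular PLO (the courses marked D for that outcome).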

D

Descriptive: Outcomes assessment is, at its core, descriptive: the overriding goal of assessment efforts is to provide an accurate and actionable description of what students know and can do in order to inform discussions about educational effectiveness.

Diagnostic Assessment: Information gathering at the beginning of a course or program. Diagnostic assessment can yield actionable information about students’ prior knowledge; additionally, diagnostic assessment activities provide information for students about what they will be expected to know and do at the conclusion of a course or program.

Direct Evidence: Concrete examples of students’ ability to perform a particular task or exhibit a particular skill. Course-embedded sources of direct evidence include: Pre-/post-tests of students’ knowledge or skills, exams and quizzes aligned to program learning outcome(s), research projects, presentations, performances and/or exhibitions, written work, and competency interviews. Other sources of direct evidence include: capstone projects/portfolios, standardized and certification exams, and internship supervisor evaluations.

E

Embedded Evidence: Evidence of student learning already collected and assessed by faculty in courses. (See Course-level Assessment).

Evaluation: A judgment about whether course/program/institutional goals were achieved.

F

Formative Assessment: Information gathering strategies that provide actionable evidence related to students’ progress toward mastery of the learning outcomes during the term (or class period). An integral part of excellent instruction, regular formative assessment provides valuable information to faculty regarding instructional strategies that are/aren’t producing student learning. Formative assessment also provides students with information about their progress in a course. Data collected can provide actionable information at the program level.

G

General Education Core Literacies (GECLs): The revised General Education requirements (2011) include four core literacies, the mastery of which UC Davis faculty “consider crucial for success in one’s profession … [and] to thoughtful, engaged participation in the community, nation, and world.”

Goals: General statements of what the faculty/program/institution expect of students related to learning. (See Learning Objectives).

Grades/Grading: The process of evaluating students’ work in a particular course. Grades often incorporate criteria (e.g., attendance, effort, participation) that provide information about students’ behaviors but not about the degree to which students are able to demonstrate the knowledge, skills, and/or dispositions associated with Course or Program Learning Outcomes.

H

Holistic Scoring: An approach to assessing student work in which instructors assign a single overall score that reflects the quality of the work as a whole, rather than separate scores for its interdependent parts. (See Analytical Scoring).

I

Indirect Evidence: Data from which it is possible to make inferences about student learning. Sources of indirect evidence include students’ perceptions of their own learning gathered through self-report surveys; focus groups; exit interviews; alumni and current student surveys (e.g., UCUES); and graduation and retention data and reports. Indirect evidence alone is insufficient to make meaningful decisions about program or institutional effectiveness.

Inter-rater Reliability: The degree to which different raters/readers agree in their ratings using a common rubric or scoring guide. (See Norming).
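
A simple illustrative index of inter-rater reliability is percent agreement: the proportion of work samples on which two raters assign the same rubric score. The scores below are invented, and percent agreement is only a basic index; programs may also use statistics such as Cohen’s kappa, which corrects for chance agreement.

```python
# Hypothetical rubric scores from two raters on the same eight work samples.
rater_a = [3, 4, 2, 3, 1, 4, 3, 2]
rater_b = [3, 4, 3, 3, 1, 4, 2, 2]

# Percent agreement: share of samples on which the raters' scores match.
agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = agreements / len(rater_a)
print(f"{percent_agreement:.0%}")  # → 75%
```

If agreement falls below the level the program considers acceptable, the raters re-norm on the rubric and re-score.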

J

K

L

Learning Objectives: The goals identified by the faculty/program/institution which shape instruction, programs, curricula, and/or activities. The UC Davis Undergraduate Educational Objectives describe the aspirations the institution has for its undergraduate students. To avoid confusion, it’s OK to refer to “goals” rather than “objectives” when they refer to expectations defined by the faculty/program/institution.

Learning Outcomes: Statements that describe the knowledge, skills, and/or dispositions students are expected to demonstrate as the result of instruction, programs, curricula, and/or activities. They focus on what students should be able to demonstrate/produce/represent as a result of successfully completing a course or academic program (rather than what the course covers). Effective learning outcomes statements are measurable and reflect intentional alignment with campus goals for student learning. At UC Davis, we distinguish between Program Learning Outcomes and Course Learning Outcomes in order to highlight the context in which students will demonstrate their learning.

M

Measurement: According to Secolsky and Denison (2012): “Measurement is the harnessing of responses to test items or other stimuli and/or the collection and analysis of expert or examinee judgments for the purposes of making inferences and ultimately to arrive at decisions based on those inferences” (p. xviii).

Metrics: The end product of measurement; also known as results. Must be contextualized to carry meaning.

N

Norming: A process of conversation and analysis through which assessors reach consistent agreement about the meaning and applicability of assessment criteria, such as a rubric. When such agreement is reached, the readers are said to be “normed” to the particular instrument. It is important to check for Inter-rater Agreement and re-norm as needed. Also known as calibration, this process promotes consistent application of assessment standards.

Norm-referenced Assessment: Measurement of relative performance; e.g., a measure of how well one student performs in comparison to other students completing the same evaluative activity. The usefulness of norm-referencing is limited for program assessment because it does not provide information about students’ performances on criteria related to Program or Course Learning Outcomes.

O

Objectives: See Learning Objectives.

Outcomes: See Learning Outcomes.

Outcomes assessment: See Assessment.

P

Performance Indicator: A sign that something has happened (i.e., an indicator of learning). A performance indicator provides examples or concrete descriptions of what is expected at varying levels of mastery.

Portfolio: A collection of student work over time used to show student development. Working portfolios contain all work related to a class, project, or assignment. Growth portfolios contain samples of students’ work over time. Best-work (or showcase) portfolios include student-selected best work, along with self-assessment documentation. Electronic portfolios can be any of the above.

Program Learning Outcomes (PLOs): Statements which articulate, in measurable terms, what students should know and be able to demonstrate as a result of and at the conclusion of an academic or co-curricular program. Effective PLOs represent the program’s goals and values, reveal alignment to professional organizations’ standards (as appropriate) and institutional goals, and foster students’ metacognitive awareness about their own learning within and across the program.

Program Review: A cyclical process through which program faculty engage in inquiry to support evidence-based decision making. The purpose of program review is to generate actionable and meaningful results to inform discussions about program effectiveness, sustainability, budget, and strategic planning. Best practices for program review call for the inclusion of multiple sources of indirect and direct evidence, gathered and analyzed over time, rather than all at once in advance of a self-study. At UC Davis, the Academic Senate oversees program review. For more information, see the guidelines for undergraduate program review or the guidelines for graduate program review.

Program-level Assessment: The systematic and intentional collection and analysis of aggregated evidence (direct and indirect) of student learning to inform conversations about program effectiveness. The results of program-level assessment can be used in self-studies prepared as part of regular Program Review. Program-level assessment is inquiry-driven. Following are a few example questions that program-level assessment can help answer: What percentage of our graduating students meet or exceed our expectations as expressed in the program learning outcomes statements? What are our students’ areas of strengths and weaknesses? How can we confirm or disprove general impressions that our students aren’t achieving our expectations as expressed in the program learning outcomes statements? How well can students demonstrate particular skills at different points in the course of study?

Q

Qualitative vs. Quantitative: These terms describe many different things: research traditions, methods of data collection and analysis, types of data, etc. In the most general sense, quantitatively-oriented approaches to research seek to explain phenomena through mathematical (often statistical) means. Important concerns in quantitative research traditions include generalizability and causality. On the other hand, qualitatively-oriented inquiry results in descriptions of people, cases, and/or phenomena, which are interpreted according to disciplinary and/or methodological principles. A common focus of qualitative research is the description and interpretation of processes, contexts, and meaning. Responsible outcomes assessment efforts draw from both quantitative and qualitative approaches to making sense of phenomena related to student learning. Using a mixed approach can both reveal what students have learned and inform analyses of why and how they’ve learned it.

R

Reliability: An indicator of the extent to which analyses, interpretations, and/or results will remain consistent over time. Instruments used to assess student learning can be checked for reliability through repeated testing (see Norming) and refinement. The more reliable an instrument is, the more useful the results it produces. However, it is important to keep in mind that reliability is inextricably related to Validity. The goal should be to create reliable measures that also yield useful information about student learning. For example, lower-level thinking skills (such as recall) can be measured reliably, but such measures provide limited information about students’ mastery of Program or Course Learning Outcomes, which generally call for the development and demonstration of higher-level thinking skills.

Rubric: An instrument that describes the knowledge and skills required to demonstrate mastery in an assignment. Rubrics include the specific Criteria linked to Program or Course Learning Outcomes that students are expected to master. Rubrics often use scales that include four or more categories. Unlike checklists, rubrics are designed as scoring guides, which clearly articulate what mastery looks like at each performance level. Rubrics communicate the expectations of a given assignment or task, if shared beforehand, and structure how student work is evaluated.

S

Summative Assessment: A snapshot of student learning at a particular point in time, usually at the end of a course or program. Data from summative assessment can inform an individual faculty member’s planning for the next quarter, or help program faculty assess students’ mastery of program learning outcomes at a particular time.

T

Target Percentage: A standard identified by faculty, programs, or the institution that expresses the quantifiable expectations for students’ collective performance related to each Program Learning Outcome. Targets, which should always be expressed as percentages rather than means, identify the minimum threshold for aggregated student work in relation to specific criteria. For example, program faculty may decide that for a particular PLO, effectiveness will have been achieved if 80% of the student work sampled received a score of 3 or above on a 4-point analytic rubric. Similar targets can be set for indirect evidence, such as student satisfaction surveys.
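
The worked example in the definition above can be sketched in a few lines of Python (the rubric scores are hypothetical):

```python
# Hypothetical scores for sampled student work on a 4-point analytic rubric.
scores = [4, 3, 2, 3, 4, 3, 1, 3, 4, 2]

target = 0.80   # target: 80% of sampled work scores at or above the threshold
threshold = 3   # minimum acceptable rubric score

# Proportion of sampled work meeting the threshold, compared to the target.
proportion_at_or_above = sum(s >= threshold for s in scores) / len(scores)
met_target = proportion_at_or_above >= target
print(proportion_at_or_above, met_target)  # → 0.7 False
```

Here 7 of the 10 sampled pieces score 3 or above (70%), so the 80% target has not been met, a result that would prompt discussion of where in the curriculum students practice that PLO.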

U

V

Validity: Describes how well an instrument measures what it is intended to measure. Also refers to the trustworthiness of conclusions drawn from analyses. It is important to consider validity when making claims about the effectiveness of a particular program or instructional approach. (See Authentic Assessment).

W

X

Y

Z