Glossary

A 

Accreditation: An outward-focused activity in which an institution reports on its financial health; physical and technological infrastructure; staff and faculty capacities; and educational effectiveness. The purpose of accreditation is to provide public accountability to external audiences.

Aligned Curriculum: The product of curriculum alignment. An aligned curriculum clarifies the relationship between outcomes and opportunities to learn by demonstrating that opportunities both to learn the skills/knowledge related to each PLO and to demonstrate learning of that PLO are sufficient and appropriately scaffolded throughout the curriculum.

Aligned Evidence of Student Learning: Evidence that results from a task that is likely to elicit a performance of the skill/knowledge necessary to demonstrate the level of mastery of the intended learning outcome.

Alignment: The process of intentionally connecting course, program, general education, and institutional learning outcomes. At the program level, alignment represents the ideal cohesive relationship between curriculum and outcomes. Checking alignment allows program faculty to determine whether the curriculum provides sufficient and appropriately sequenced opportunities for students to develop the knowledge, skills, and dispositions identified in Program Learning Outcomes (PLOs).

Analysis: Making sense of collected data by cleaning, inspecting, aggregating/disaggregating, and describing it in order to develop interpretations, inform conclusions, and support decision-making.

Analytical Scoring: An approach to assessing student work in which instructors assign scores on multiple criteria. For example, when assessing a written assignment, an instructor might assign a score for each of the following: accuracy of content; appropriate use of evidence to support the argument; organization; and adherence to the conventions of academic English. (See also Criterion-Referenced; compare Holistic Scoring.)
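
The sketch below illustrates, in Python, the difference in data shape between analytical and holistic scores for a single piece of student work; the criterion names and the 4-point scale are hypothetical, not drawn from any particular rubric.

```python
# Hypothetical example: analytic vs. holistic scores for one written assignment.
# Criterion names and the 4-point scale are illustrative assumptions.
analytic_scores = {
    "accuracy_of_content": 3,
    "use_of_evidence": 2,
    "organization": 4,
    "conventions_of_academic_english": 3,
}
holistic_score = 3  # a single overall judgment of the same work

# Analytic scoring preserves criterion-level detail, which supports
# criterion-referenced reporting (e.g., spotting the weakest criterion).
weakest = min(analytic_scores, key=analytic_scores.get)
print(weakest, analytic_scores[weakest])  # use_of_evidence 2
```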

Analytics (also known as learner, learning, or academic analytics): Applying analyses of large data sets about students to make predictions about optimal learning contexts and other factors likely to produce student success in relation to institutional outcomes, such as persistence, time-to-degree, and graduation rates.

Assessment: An intentional, systematic, and faculty-led process that answers a core question: What proportion of students demonstrate attainment of program learning outcomes? Assessment of Student Learning Outcomes (SLOs) includes four stages: articulate goals for student learning; collect evidence from required courses; analyze and interrogate results; and develop concrete plans for acting to improve student learning and increase program effectiveness.

Assessment Plan: A collaboratively developed planning document that establishes a multi-year plan for outcomes assessment. Assessment plans articulate when each LO will be assessed; the types of direct and indirect evidence (aligned to each learning outcome) that will be collected and analyzed; plans for analyzing the data; procedures to guide discussion and application of results; and timelines and responsibilities.

Attainment Targets: The faculty-determined percentage of students who must demonstrate mastery of all of the PLOs in order for the program to be considered successful.

Authentic Assessment: Provides direct evidence of learners’ knowledge or skill by engaging them in a “real world” task. Authentic assessment provides opportunities for learners to demonstrate what they know and can do within the context of a likely scenario. For example, an authentic assessment of a speaker’s language proficiency might take the form of a conversation with a more advanced speaker.

B

Benchmark: A point of reference to which aggregated evidence of student learning can be compared. The most useful benchmarks are established internally by faculty and/or programs.

C

Capstone: A course or experience toward the end of a program in which students have the opportunity to demonstrate their cumulative knowledge, skills, and dispositions related to some or all of the PLOs. In capstone courses/experiences, students produce direct evidence of their learning. Examples of capstone assignments include: theses, oral defenses, exhibitions, presentations, performances, and/or research papers.

Course Map: see Curriculum Map

Course Learning Outcomes (CLOs): see Learning Outcomes

Course-level Assessment:  The intentional collection of evidence of student learning with which the instructor can assess mastery of one or more Course Learning Outcomes (CLOs). Through course-level assessment, faculty provide timely and useful feedback to students, use data to assign grades, and record data related to students’ achievement of the CLOs in question. Course-embedded assessment that occurs towards the end of a program can also yield data for program outcomes assessment efforts. (See also Embedded Evidence).

Criteria for Evaluation (CE): Describe the requisite components or aspects of desired performance that will be assessed. For example, in a scientific paper writing assignment, the CE are likely to include methods, results, discussion, etc. Criteria for evaluation can be found in the left-hand column of an analytic rubric. (Also called "traits.")

Criterion-Referenced: Assessment of learning in which evidence of student learning is compared to defined (and articulated) criteria, rather than to other students’ performances. Criterion-referenced assessment yields data that inform discussions about whether a program is providing appropriately sequenced opportunities to learn. (See Analytical Scoring; opposite of Norm-referenced).

Curriculum Alignment: An iterative process of examining the curriculum to determine the degree to which it is intentionally and explicitly designed to provide sufficient opportunities for students to develop and demonstrate their learning (with respect to the PLOs).

Curriculum Map: The product of curriculum mapping; a visual depiction of the relationship between the required courses in a program's curriculum and the PLOs, often presented as a table.

D

Data Disaggregation: The process of separating aggregate data into smaller sub-categories, typically defined by demographic characteristics.
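
As a minimal illustration, the following Python sketch disaggregates a set of rubric scores by one demographic category; the group labels, records, and the use of a mean are hypothetical.

```python
# Hypothetical sketch: disaggregating rubric scores by a demographic category.
from collections import defaultdict
from statistics import mean

records = [
    {"group": "first_gen", "score": 3},
    {"group": "first_gen", "score": 2},
    {"group": "continuing_gen", "score": 4},
    {"group": "continuing_gen", "score": 3},
]

by_group = defaultdict(list)
for record in records:
    by_group[record["group"]].append(record["score"])

# The aggregate figure can mask differences that the sub-categories reveal.
print("overall mean:", mean(r["score"] for r in records))
for group, scores in by_group.items():
    print(group, "mean:", mean(scores))
```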

Data Interrogation: The systematic and intentional examination of data to answer research questions.

Diagnostic Assessment: Information gathering at the beginning of a course or program. Diagnostic assessment can yield actionable information about students’ prior knowledge; additionally, diagnostic assessment activities provide information for students about what they will be expected to know and do at the conclusion of a course or program.

Direct Evidence: Concrete examples of students’ ability to perform a particular task or exhibit a particular skill. Course-embedded sources of direct evidence include: Pre-/post-tests of students’ knowledge or skills, exams and quizzes aligned to program learning outcome(s), research projects, presentations, performances and/or exhibitions, written work, and competency interviews. Other sources of direct evidence include: capstone projects/portfolios, standardized and certification exams, and internship supervisor evaluations.

E

Embedded Evidence (Embedded Assessment): Any course assignment (or part of an assignment) that has been purposefully designed to elicit direct evidence aligned to one or more PLOs. Embedded assessments may be work from activities or assignments that students do in various courses, or may be designed overtly for assessment purposes and then incorporated into the courses. NOTE: "Embedded" describes the source of evidence (rather than a particular assignment).

Equity-mindedness: An intentional and critical examination undertaken in order to identify, reckon with, and dismantle inherited (or "normalized") practices that privilege some students' lived experiences and opportunities over others'; individuals' unexamined biases and assumptions; and structural and policy barriers that perpetuate or exacerbate opportunity gaps. Equity-mindedness views responsibility for student success as intrinsically tied to the power of the institution and its practitioners to effect change. (Adapted from The Center for Urban Education.) 

F

Formative Assessment: Information gathering strategies that provide actionable evidence related to students’ progress toward mastery of the learning outcomes during the term (or class period). An integral part of excellent instruction, regular formative assessment provides valuable information to faculty regarding instructional strategies that are/aren’t producing student learning. Formative assessment also provides students with information about their progress in a course. Data collected can provide actionable information at the program level.

G

General Education Core Literacies: The revised General Education requirements (2011) include four core literacies, the mastery of which UC Davis faculty “consider crucial for success in one’s profession … [and] to thoughtful, engaged participation in the community, nation, and world.”

Goals: General statements of what the faculty/program/institution expect of students related to learning. (See Learning Goals.)

Grades/Grading: The process of evaluating students’ work in a particular course. Grades include criteria (e.g., attendance, effort, participation) that provide information about students’ behaviors, but not necessarily the degree to which students are able to demonstrate the knowledge, skills, and/or dispositions associated with Course or Program Learning Outcomes.

H

Holistic Scoring: An approach to assessing student work in which instructors assign a single score that reflects the quality of the whole and its interdependent parts, rather than separate scores for each criterion. (See Analytical Scoring.)

I

Indirect Evidence: Data from which it is possible to make inferences about student learning. Sources of indirect evidence include: Self-reports of students’ perceptions of their own learning; results of Recent Baccalaureate and/or University of California Undergraduate Experience Surveys; and graduation, time-to-degree, and persistence data and reports. NOTE: Indirect evidence alone is insufficient to make meaningful decisions about program or institutional effectiveness.

Inter-rater Reliability: The degree to which different raters/readers agree in their ratings using a common rubric or scoring guide. (See Norming).
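
One simple indicator of inter-rater reliability is exact percent agreement, sketched below in Python with hypothetical scores; chance-corrected statistics such as Cohen's kappa are often reported alongside it.

```python
# Hypothetical percent-agreement check for two raters scoring the same set
# of student work with a common rubric.
rater_a = [3, 2, 4, 3, 1, 4]
rater_b = [3, 2, 3, 3, 1, 4]

matches = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = matches / len(rater_a)
print(f"Exact agreement: {percent_agreement:.0%}")  # 83% for these scores

# Low agreement suggests the raters should re-norm on the rubric
# before scoring continues. (See Norming.)
```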

J

K

L

Learning Goals: General statements of what the faculty/program/institution expect of students related to learning. Goals are broadly stated, meaningful, and achievable, but are not directly measured.

Learning Outcomes Statements: Statements that describe, in measurable terms, what students should be able to demonstrate as a result of and at the conclusion of instruction, programs, curricula, and/or activities. Learning Outcomes Statements focus on what learners should be able to demonstrate, not what a course / program covers. The context in which the statements exist is indicated by the addition of specific language. For example:

  • Course Learning Outcomes (CLOs)
  • Program Learning Outcomes (PLOs) 
  • Workshop Learning Outcomes (WLOs)

M

Mastery: The knowledge or ability to perform a task (at a designated performance level) without assistance; sometimes described as unconscious competence (Ambrose et al., 2010).

Measurement: According to Secolsky and Denison (2012): “Measurement is the harnessing of responses to test items or other stimuli and/or the collection and analysis of expert or examinee judgments for the purposes of making inferences and ultimately to arrive at decisions based on those inferences” (p. xviii).

Metrics: The end product of measurement; also known as results. Must be contextualized to carry meaning.

N

Norming: A process of conversation and analysis through which assessors reach consistent agreement about the meaning and applicability of assessment criteria, such as a rubric. When such agreement is reached, the readers are said to be “normed” to the particular instrument. It is important to check for Inter-rater Agreement and re-norm as needed. Also known as calibration, this process promotes consistent application of assessment standards.

Norm-Referenced Assessment: Measurement of relative performance; e.g., a measure of how well one student performs in comparison to other students completing the same evaluative activity. The usefulness of norm-referencing is limited for program assessment because it does not provide information about students’ performances on criteria related to Program or Course Learning Outcomes.

O

Observable Action Verb: A verb that describes a specific action that a student will be able to perform and that can be observed.

Outcomes: See Learning Outcomes Statements.

P

Performance Descriptors: The characteristics associated with each performance level within the criteria for evaluation of student learning. 

Performance Indicator (PI): Specific and measurable descriptions of what students are expected to know and/or be able to do at varying levels of mastery. The expected behavior must be specified by name, using an observable action verb such as describe, distinguish, or define.

Performance Levels: A rating scale that identifies students’ level of mastery within each criterion for evaluation of student learning.

Portfolio: A collection of student work over time used to show student development. Working portfolios contain all work related to a class, project, or assignment. Growth portfolios contain samples of students’ work over time. Best-work (or showcase) portfolios include student-selected best work, along with self-assessment documentation. Electronic portfolios can be any of the above.

Program Learning Outcomes (PLOs): Statements which articulate, in measurable terms, what students should know and be able to demonstrate as a result of and at the conclusion of an academic or co-curricular program. Effective PLOs represent the program’s goals and values, reveal alignment to professional organizations’ standards (as appropriate) and institutional goals, and foster students’ metacognitive awareness about their own learning within and across the program.

Program Learning Outcomes Scores: A summary of PI scores (from multiple assignments) relevant to a given PLO. 
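
The following Python sketch shows one way such a summary might be computed for a single student; the PI names, the PI-to-PLO mapping, and the use of a mean are hypothetical assumptions rather than a prescribed method.

```python
# Hypothetical roll-up of performance-indicator (PI) scores into PLO scores.
from statistics import mean

pi_scores = {            # one student's PI scores across several assignments
    "PI_1a": [3, 4, 2],
    "PI_1b": [3, 3, 3],
    "PI_2a": [2, 2, 3],
}
plo_map = {"PLO_1": ["PI_1a", "PI_1b"], "PLO_2": ["PI_2a"]}

plo_scores = {
    plo: mean(score for pi in pis for score in pi_scores[pi])
    for plo, pis in plo_map.items()
}
print(plo_scores)  # PLO_1 averages 3.0; PLO_2 averages about 2.33
```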

Program Review: A cyclical process through which program faculty engage in inquiry to support evidence-based decision making. The purpose of program review is to generate actionable and meaningful results to inform discussions about program effectiveness, sustainability, budget, and strategic planning. Best practices for program review call for the inclusion of multiple sources of indirect and direct evidence, gathered and analyzed over time, rather than all at once in advance of a self-study. At UC Davis, the Academic Senate oversees program review. For more information, see the guidelines for undergraduate program review or the guidelines for graduate program review.

Program-level Assessment: The systematic and intentional collection and analysis of aggregated evidence (direct and indirect) of student learning to inform conversations about program effectiveness. The results of program-level assessment can be used in self-studies prepared as part of regular Program Review. Program-level assessment is inquiry-driven. Following are a few example questions that program-level assessment can help answer:

  • What percentage of our graduating students meet or exceed our expectations as expressed in the program learning outcomes statements?
  • What are our students’ areas of strength and weakness?
  • How can we confirm or disprove general impressions that our students aren’t achieving our expectations as expressed in the program learning outcomes statements?
  • How well can students demonstrate particular skills at different points in the course of study?

Q

Qualitative vs. Quantitative: These terms describe many different things: research traditions, methods of data collection and analysis, types of data, etc. In the most general sense, quantitatively-oriented approaches to research seek to explain phenomena through mathematical (often statistical) means. Important concerns in quantitative research traditions include generalizability and causality. On the other hand, qualitatively-oriented inquiry results in descriptions of people, cases, and/or phenomena, which are interpreted according to disciplinary and/or methodological principles. A common focus of qualitative research is the description and interpretation of processes, contexts, and meaning. Responsible outcomes assessment efforts draw from both quantitative and qualitative approaches to making sense of phenomena related to student learning. Using a mixed approach can both reveal what students have learned and inform analyses of why and how they’ve learned.

R

Recursive: Describes the intentional return to the assessment process and/or inquiry on a regular basis. 

Reliability: An indicator of the extent to which analyses, interpretations, and/or results will remain consistent over time. Instruments used to assess student learning can be checked for reliability through repeated testing (see Norming) and refinement. The more reliable an instrument is, the more useful the results it produces. However, it is important to keep in mind that reliability is inextricably related to Validity. The goal should be to create reliable measures that also yield useful information about student learning. For example, lower-level thinking skills (such as recall) can be assessed reliably, but such measures provide limited information about students’ mastery of Program or Course Learning Outcomes, which generally call for the development and demonstration of higher-level thinking skills.

Rubric: An instrument that describes the knowledge and skills required to demonstrate mastery in an assignment. Rubrics include the specific Criteria linked to Program or Course Learning Outcomes that students are expected to master. Rubrics often use scales that include four or more categories. Unlike checklists, rubrics are designed as scoring guides, which clearly articulate what mastery looks like at each performance level. Rubrics communicate the expectations of a given assignment or task, if shared beforehand, and structure how student work is evaluated.

S

Scaffolding: An approach to designing courses and curricula that provides students with appropriate levels of support to learn, and opportunities to practice demonstrating their learning, as they move progressively towards mastery of the PLOs.

Signature Assignment: A generic assignment, task, activity, project, and/or exam that is purposefully and collaboratively designed (or revised) by faculty to elicit direct evidence aligned to one or more Program Learning Outcomes (PLOs). Signature assignments enable the program to collect common data across course sections for program-level assessment and review.

Single-barreled: Referring to a single construct, expectation, or outcome.

SMART Performance Indicators: Performance indicators that are Specific, Measurable, Achievable, Relevant, and Time-bound.

Summative Assessment: A snapshot of student learning at a particular point in time, usually at the end of a course or program. Data from summative assessment can inform an individual faculty member’s planning for the next quarter or program faculty interested in assessing students’ mastery of program learning outcomes at a particular time.

T

Target Percentage: A standard identified by faculty, programs, or the institution that expresses the quantifiable expectations for students’ collective performance related to each Program Learning Outcome. Targets, which should always be expressed as percentages rather than means, identify the minimum threshold for aggregated student work in relation to a specific criterion. For example, program faculty may decide that for a particular PLO, effectiveness will have been achieved if 80% of the student work sampled received a score of 3 or above on a 4-point analytic rubric. Similar targets can be set for indirect evidence, such as student satisfaction surveys.
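
As a minimal sketch of the example above, the following Python snippet checks whether at least 80% of sampled work scored 3 or above on a 4-point rubric; the list of scores is hypothetical.

```python
# Hypothetical check of a target percentage: 80% of sampled student work
# scoring 3 or above on a 4-point analytic rubric.
scores = [4, 3, 3, 2, 4, 3, 3, 4, 2, 3]  # sampled scores for one criterion/PLO

target = 0.80
threshold = 3
proportion = sum(s >= threshold for s in scores) / len(scores)

print(f"{proportion:.0%} of sampled work scored {threshold} or above")
print("Target met" if proportion >= target else "Target not met")
```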

U

V

Validity: Describes how well an instrument measures what it is intended to measure. Also refers to the trustworthiness of conclusions drawn from analyses. It is important to consider validity when making claims about the effectiveness of a particular program or instructional approach. (See Authentic Assessment).

W

X

Y

Z
