Assessment in Support and Service Programs
For each program outcome currently being assessed, describe the sources of evidence that will be used to measure it. Determining an assessment tool that matches the outcome is sometimes very easy; other times it takes effort. Are there current practices that will provide the information, or will the process require adopting new practices?
For program outcomes that rely on quantities (membership, number of students being served, amount of grant funding obtained), you can generally rely on mechanisms that are already part of your administrative processes.
For program outcomes that are based on meeting client needs or on customer satisfaction, it is imperative to have mechanisms for getting feedback from a broad representation of users. Typical tools are surveys and focus groups. It is also appropriate to include feedback from one-on-one interactions such as conversations or emails; however, these tend to be uni-dimensional and may not represent the full breadth of feedback. Often, you will want to gather evidence from multiple measures so that you can triangulate the results and be sure the information is valid and reliable.
Program outcomes that are based on impacting users can be assessed through direct or indirect measures, although direct measures are more reliable. A direct measure of participants' increased understanding of gender roles in society would come from a pretest/posttest assessment. An indirect measure would be a survey asking participants whether they better understand gender roles in society after a presentation.
Many outcomes should also have a target or benchmark associated with them. If your outcome is to increase attendance, for example, you should state the target for the current year.
Sample Assessment Tools:
Program Outcome: Additional funds necessary for quality programming will be raised through grants.
Assessment Tools: Dollar amount of grant requests; dollar amount of grants awarded; target for 2013-2014 is $75,000 based on current estimated budget for 2014-2015.
Program Outcome: Groups requesting conference space will be satisfied with all aspects of their experience.
Assessment Tools: Results of post-conference user surveys; feedback received through correspondence; review of notes from day-of-conference interactions.
Program Outcome: Membership in the institute will increase.
Assessment Tools: Membership records from consecutive years; target for 2013-2014 is a 5% increase.
Program Outcome: Maintain financial systems in accordance with generally accepted accounting principles.
Assessment Tools: Results of annual audit. Target: the university will receive an unqualified audit report each year.
Program Outcome: Students will be aware of employment opportunities.
Assessment Tools: Job fair attendance numbers as a percent of graduating seniors; career center appointments as a percent of students; graduating senior survey questions; one-year alumni survey questions. Target for 2013-2014 is that 75% of seniors will participate in some type of career event or counseling.
NOTE: This is a good place to note that the ways you go about increasing membership or satisfaction are different from the assessment tools. For example, you might advertise to increase membership or conduct a customer appreciation week. These means might be actions adopted because of previous assessment results, in which case they were listed in Actions Taken in the previous report. They might, however, simply be typical functions of the department. In either case, they are not assessment tools.
Tools for Assessing Student Learning Outcomes
For some units, it is appropriate to set student learning outcomes as well as program outcomes. Many student affairs units, for instance, set expectations for student learning through participation. The library is another example of a unit that sets student learning outcomes. More information on student learning outcomes and their assessment can be found at:
The College of Arts and Sciences Assessment Planning and Reporting.