General Education Assessment
- Philosophy and rationale
- Work product collection
- Work product review
- Dissemination of results
- Program improvement
Each academic year, the following questions are examined:
- What are the overall abilities of students taking University Studies courses with regard to the UNCW Learning Goals?
- What are the relative strengths and weaknesses within the subskills of those goals?
- Are there any differences in performance based on course delivery method or demographic and preparedness variables, such as gender, race or ethnicity, transfer students vs. freshman admits, honors vs. non-honors students, total hours completed, or entrance test scores?
- What are the strengths and weaknesses of the assessment process itself?
UNCW has adopted an approach to assessing its Learning Goals at the University Studies level that uses assignments that are a regular part of the course content. A strength of this approach is that the student work products are an authentic part of the curriculum, and hence there is a natural alignment often missing in standardized assessments. Students are motivated to perform at their best because the assignments are part of the course content and course grade. The assessment activities require little additional effort on the part of course faculty because the assignments used are a regular part of the coursework. An additional strength of this method is faculty collaboration and full participation in both the selection of the assignments and the scoring of the student work products.
The student work products collected are scored independently on a common rubric by trained scorers. The results of this scoring provide quantitative estimates of students’ performance and qualitative descriptions of what each performance level looks like, which provides valuable information for the process of improvement. The normal disadvantage to this type of approach when compared to standardized tests is that results cannot be compared to other institutions. This disadvantage is mitigated in part by the use of the AAC&U VALUE rubrics for many of the learning goals. This concern is also addressed by the regular administration of standardized assessments, giving the university the opportunity to make such comparisons.
The eight Learning Goals are assessed on a rotating basis. The schedule has the flexibility to increase the frequency of assessment for any learning goal, for example, one with inconclusive or confusing results.
The sampling method lays the foundation for the generalizability of the results. No single part of the University Studies curriculum, nor of the university experience as a whole, is solely responsible for helping students write well, think critically, or conduct responsible inquiry and analysis. These skills are practiced in many courses. Therefore, a matrix approach to sampling is taken, so that, over time, work products are selected from all general education components aligned to each UNCW Learning Goal. The University Studies Curriculum Map illustrates this alignment.
Once courses are selected for sampling, sections are chosen to ensure a representative mix of course offerings (for example, by delivery method, in-class or distance, and by instructor type: tenure-line, lecturer, or part-time). For General Education Assessment purposes, the courses selected not only meet the learning goals but are also among those taken by large numbers of students, in order to represent, as much as possible, the work of “typical” UNCW students.
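The section-selection step described above amounts to stratified sampling: group sections by the characteristics to balance, then draw from each group. The sketch below illustrates the idea under stated assumptions; the field names and strata are hypothetical, and the source does not specify how many sections are drawn per group.

```python
import random

def select_sections(sections, per_stratum=1, seed=0):
    """Pick a representative mix of course sections by grouping
    (stratifying) on delivery method and instructor type, then
    sampling from each group."""
    rng = random.Random(seed)
    strata = {}
    for s in sections:
        key = (s["delivery"], s["instructor_type"])
        strata.setdefault(key, []).append(s)
    chosen = []
    for group in strata.values():
        k = min(per_stratum, len(group))
        chosen.extend(rng.sample(group, k))
    return chosen

# Hypothetical section list; one stratum has two candidate sections.
sections = [
    {"id": "ENG101-001", "delivery": "in-class", "instructor_type": "tenure-line"},
    {"id": "ENG101-002", "delivery": "in-class", "instructor_type": "lecturer"},
    {"id": "ENG101-003", "delivery": "in-class", "instructor_type": "lecturer"},
    {"id": "ENG101-004", "delivery": "distance", "instructor_type": "part-time"},
    {"id": "ENG101-005", "delivery": "distance", "instructor_type": "lecturer"},
]
picked = select_sections(sections)
print(len(picked))  # one section per (delivery, instructor type) stratum: 4
```

Drawing at least one section from every stratum is what keeps rare combinations (for example, distance sections taught by part-time faculty) from being missed by a simple random draw.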
Prior to the start of each semester, the General Education Assessment Director meets with the selected course instructors to familiarize them with the VALUE rubrics. Instructors are asked to review their course content and assignments, and to select one assignment that they believe fits the dimensions of the learning goal being assessed and the corresponding rubric. Instructors should include in the course syllabus the General Education Assessment Statement for Students, which discloses the use of student work for the purpose of General Education Assessment.
The General Education Assessment office retrieves a copy of the course roster from Banner in order to compile the student demographic information held in university records, enabling analysis based on demographic and preparedness variables.
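The compilation step above is, in essence, a join of the course roster against demographic records keyed by student ID. A minimal sketch, with entirely hypothetical field names and values (the actual Banner export format is not described in the source):

```python
# Hypothetical roster export: one row per enrolled student.
roster = [
    {"student_id": "850001", "section": "ENG101-001"},
    {"student_id": "850002", "section": "ENG101-001"},
]

# Hypothetical demographic/preparedness records keyed by student ID.
demographics = {
    "850001": {"admit_type": "freshman", "honors": True},
    "850002": {"admit_type": "transfer", "honors": False},
}

# Join: attach each student's demographic record to their roster row,
# leaving the row unchanged if no record is found.
merged = [{**row, **demographics.get(row["student_id"], {})} for row in roster]
print(merged[0]["admit_type"])  # freshman
```

Keeping the join tolerant of missing records matters in practice, since not every preparedness variable (such as entrance test scores) is available for every student.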
Scorers are recruited from UNCW faculty and, in some cases, teaching assistants. A recruitment email is sent to department chairs, sometimes to all chairs in the university and sometimes only to chairs in selected departments (based on the Learning Goals and course content being assessed), asking them to forward the email to all full- and part-time faculty in their departments.
The desire is to include reviewers from a broad spectrum of departments. The intent is to give all faculty an opportunity to participate, to learn about the process and rubrics, and to see the learning students experience as they begin their programs. However, in some cases, the scoring is best done by discipline experts. It is also important to try to have at least one faculty member from each of the departments whose student work products are being reviewed.
Metarubrics, such as the VALUE rubrics, are constructed so that they can be used to score a variety of student artifacts across disciplines, across universities, and across preparation levels. But their strength is also a weakness: the generality of the rubric makes it more difficult to use than a rubric that is created for one specific assignment. To address this issue, a process must be created that not only introduces the rubric to the scorers, but also makes its use more manageable.
Volunteer scorers attend a workshop on the rubric they will be using. During the workshop, scorers review the rubric in detail and are introduced to the general assumptions adopted for applying the rubrics. After reviewing the rubric and initial assumptions, the volunteers read and score sample student work products. Scoring is followed by a detailed discussion, so that scorers can better see the nuances of the rubric and learn what fellow scorers saw in the work products. From these discussions, scorer norming begins and assumptions are developed for applying the rubric to each specific assignment.
Some initial assumptions hold true across all assignments:
- When scoring, we are comparing each separate work product to the characteristics we want the work of UNCW graduates to demonstrate (considered to be Level 4).
- Goals can be scored independently from each other.
- Relative strengths and weaknesses within each goal emerge through seeking evidence for each dimension separately.
- Common practice and the instructor’s directions guide the scorer’s interpretation of the rubric dimensions in relation to each assignment.
- Additional assumptions will need to be made when each rubric is applied to individual assignments.
Scoring of the student work products is done during a scoring event. Pairs of scorers are assigned a packet of student work on the same assignment. Scorers read the first paper, which is the same in each packet, score it, and then discuss their scores, coming to consensus on that score and on a specific interpretation of the rubric as it relates to the assignment. The packets also contain additional common papers, which the scorers score individually; these are used to measure interrater reliability.
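The interrater reliability check on the common papers can be illustrated with Cohen's kappa, one common statistic for agreement between two scorers; the source does not specify which statistic is actually used, so this is a sketch of the general idea:

```python
from collections import Counter

def cohens_kappa(scores_a, scores_b):
    """Cohen's kappa: observed agreement between two scorers on the
    same papers, corrected for agreement expected by chance."""
    assert len(scores_a) == len(scores_b) and scores_a
    n = len(scores_a)
    observed = sum(a == b for a, b in zip(scores_a, scores_b)) / n
    counts_a, counts_b = Counter(scores_a), Counter(scores_b)
    # Chance agreement: probability both scorers assign the same level.
    expected = sum(
        (counts_a[c] / n) * (counts_b[c] / n)
        for c in set(counts_a) | set(counts_b)
    )
    if expected == 1.0:  # degenerate case: both scorers used one level only
        return 1.0
    return (observed - expected) / (1 - expected)

# Hypothetical rubric levels (0-4) from two scorers on ten common papers:
a = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]
b = [3, 2, 3, 3, 1, 2, 3, 4, 2, 2]
print(round(cohens_kappa(a, b), 2))  # 0.71
```

Kappa of 1.0 means perfect agreement and 0 means agreement no better than chance, which makes it easier to interpret than raw percent agreement when scores cluster on a few rubric levels.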
A general education assessment report is written annually by the General Education Assessment Director. The report is first presented to the Learning Assessment Council, which may make recommendations to the Provost's Office based on the results. The report is also presented to the University Studies Advisory Committee, which may make recommendations to the Faculty Senate. The reports, with recommendations, are published, and findings categorized by Learning Goal are also published on the web.
Bodies responsible for implementing program improvements based on general education assessment results include the Provost's Office, the Faculty Senate, the University Studies Advisory Committee, and the faculty in departments that offer University Studies courses.