Resources


Assessment Process

Goals: What do you want to do or achieve overall? Goals should be tied to your mission, vision, and strategic plan. Example:

  • To provide useful training and support for employees regarding strategic planning and assessment.

Outcomes: What needs attention or what could be better? You must determine what the desired result/improvement is for your department. Examples:

  • Staff will indicate that the trainings have increased their knowledge of assessment methods.
  • Departments will submit quality assessment plans reflecting the desired elements.

Methods: What will success look like, and how can it be measured? What will be different if the outcome is reached?

Criteria of Excellence: What are your criteria for success for each outcome? You must set a threshold in order to measure the success of your outcomes. Example:

  • 85% of training participants will accurately identify direct versus indirect assessment methods.
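A quick worked check of this criterion, as a minimal Python sketch: the 85% threshold comes from the example above, but the participant results are hypothetical illustration data.

    # Check the 85% criterion for one training session. Each entry records
    # whether a participant accurately identified direct versus indirect
    # assessment methods (hypothetical data: 17 of 20 succeeded).
    results = [True] * 17 + [False] * 3

    proportion = sum(results) / len(results)
    print(f"{proportion:.0%} of participants met the outcome")   # 85%
    print("Criterion met" if proportion >= 0.85 else "Criterion not met")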

Results: What was the output once the assessment plan was implemented? This data should then be used to enhance your assessment plan.

Action Plan/Use of Results: What do your results mean? What are the next steps to take to improve these results? This information will allow you to create an action plan. Examples:

  • Reformat the content of training based on knowledge gaps.
  • Conduct a survey to identify other training needs.



Simple Assessment Example

Goal: Improve my health.

Outcome: I will lose five pounds in the next month.

Method: Use a scale to determine whether weight was lost.

Results: The outcome was not met.

Action Plan: I will now exercise three times a week.



Strategic Planning

Institutions create a streamlined planning process that strategically connects academic departments and units to achieve the overall goals set by the University. Strategic planning is necessary to set direction and to ensure goals are reached in line with the University's mission and vision. The Office of Strategic Planning and Assessment provides the resources and tools needed to create and implement a strategic plan.



Common Assessment Myths

Assessment is often viewed as an overwhelming process, and its purpose can be misconstrued. The University of Central Florida (2008) compiled the most common assessment misconceptions and detailed how assessment should be viewed by dispelling each myth.

Myth 1: The results of assessment will be used to evaluate faculty performance.

Nothing could be further from the truth. Faculty awareness, participation, and ownership are essential for successful program assessment, but assessment results should never be used to evaluate or judge individual faculty performance. The results of program assessment are used to improve programs.

Myth 2: Our program is working well, our students are learning; we don’t need to bother with assessment.

The primary purpose of program assessment is to improve the quality of educational programs by improving student learning. Even if you feel that the quality of your program is good, there is always room for improvement. In addition, various accrediting bodies mandate student outcomes assessment. For example, the Southern Association of Colleges and Schools (SACS) requires that every program assess its student outcomes and use the results to improve the program. Not conducting assessment is not an option.

Myth 3: We will assign a single faculty member to conduct the assessment. Too many opinions would only delay and hinder the process.

While it is a good idea to have one or two faculty members head the assessment process for the department, it is really important and beneficial to have all faculty members involved. Each person brings different perspectives and ideas for improving the academic program. Also, it is important that all faculty members understand and agree on the mission (i.e., purpose) and goals of the academic program.

Myth 4: The administration might use the results to eliminate some of the department’s programs.

There are two types of evaluation processes: summative and formative. The purpose of summative program evaluation is to judge the quality and worth of a program. On the other hand, the purpose of formative program evaluation is to provide feedback to help improve and modify a program. Program assessment is intended as a formative evaluation and not a summative evaluation. The results of program assessment will not be used to eliminate programs.

Myth 5: Assessment is a waste of time and does not benefit the students.

The primary purpose of assessment is to identify the important objectives and learning outcomes for your program with the purpose of improving student learning. Anything that enhances and improves the learning, knowledge and growth of your students cannot be considered a waste of time.

Myth 6: We will come up with an assessment plan for this year and use it every year thereafter.

For program assessment to be successful, it must be an ongoing and continuous process. Just as your program should be improving, so should your assessment plan and measurement methods. Each academic department must look at its programs and its learning outcomes on a continual basis and determine if there are better ways to measure student learning and other program outcomes. Your assessment plan should be continuously reviewed and improved.

Myth 7: Program assessment sounds like a good idea, but it is time-consuming and complex.

It is impossible to “get something for nothing.” Effective program assessment will take some of your time and effort, but there are steps that you can follow that can help you to develop an assessment plan that will lead to improving student learning.

Source: University of Central Florida. (2008). Academic Program Assessment Handbook. Retrieved from http://oeas.ucf.edu/doc/acad_assess_handbook.pdf



Common Assessment Terms

Academic Program
An academic program is a program of study over a period of time that leads to a degree.
Examples: BBA–Marketing, BA–History, BS–Biology, MBA–Finance, MS–Statistics, PhD–Environmental Science and Engineering

Academic Success Measures
Typically associated with measures of retention, grades, and transfer rates; these success measures do not give us information about what students have learned and therefore should not be confused with student learning outcomes.

Assessment Plan
An assessment plan describes the student learning outcomes to be achieved, the direct and indirect assessment methods used to evaluate attainment of each outcome, the intervals at which evidence is collected and reviewed, and the individual(s) responsible for collecting and reviewing that evidence.

Assessment Instruments
Assessment instruments are used to gather data about student learning. These instruments can be quantitative or qualitative and may include traditional tests (multiple choice, essay, or other formats) as well as alternative forms of assessment such as oral examinations, group problem solving, performances and demonstrations, portfolios, and peer observations. The most important factors in choosing an instrument are ensuring that it (a) gathers information about the desired outcome, not something else, and (b) yields results that can be used to make improvements.

Keep in mind that when using any test that assesses more than one concept, it is necessary to align individual test items with specific outcomes rather than using the entire test score as a single measure.
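One lightweight way to keep that alignment is to map each item to the outcome it measures and report per-outcome subscores. A minimal Python sketch follows; the item numbers, outcome names, and answers are hypothetical.

    # Map each exam item to the specific outcome it measures, then compute
    # a subscore per outcome instead of one total test score.
    item_to_outcome = {
        1: "identify direct vs. indirect methods",
        2: "identify direct vs. indirect methods",
        3: "write measurable outcomes",
        4: "write measurable outcomes",
        5: "write measurable outcomes",
    }
    answers = {1: True, 2: True, 3: False, 4: True, 5: True}  # True = correct

    subscores = {}
    for item, outcome in item_to_outcome.items():
        correct, total = subscores.get(outcome, (0, 0))
        subscores[outcome] = (correct + answers[item], total + 1)

    for outcome, (correct, total) in subscores.items():
        print(f"{outcome}: {correct}/{total} items correct")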

Benchmark
A description or example of student or institutional performance that serves as a standard of comparison for evaluation or judging quality. Benchmarks can be “internal” (i.e. comparing performance against past performance) as well as “external” (i.e. comparing performance against the performance of another institution/department/program).

Direct Measures of Learning
Direct measures require students to demonstrate their knowledge, skills and abilities. Examples:

  • standardized exams and/or exam items
  • locally developed exams and/or exam items
  • essays scored through a common rubric or scoring matrix
  • capstone experiences
  • portfolios
  • state and national licensure exams and/or exam items
  • review of performances in the arts

Indirect Measures of Learning
Indirect measures ask students to reflect on their learning rather than demonstrate it. Examples:

  • alumni, employer and student surveys
  • exit interviews of graduates
  • graduate follow-up studies
  • focus groups

Formative Assessment
Formative assessment methods involve gathering information about student learning during a course or program in order to improve that learning. Examples:

  • teacher observations
  • analysis of student work
  • feedback on assignments
  • group discussions
  • portfolios
  • oral presentations
  • peer assessment
  • student journals

Summative Assessment
Summative assessment methods involve gathering information about student learning at the conclusion of a course or program in order to improve that learning. Examples:

  • standardized senior exit exams
  • juried review of essays
  • senior exit interviews
  • performance on state and national licensure exams

Learning Goal
Statements that describe broad learning concepts, for example, clear communication, problem solving, and ethical awareness. A description of what our students will be or will have upon successfully completing the program.
Example: Students in the Chemistry Program will gain a strong foundation in chemistry concepts and research principles and techniques and develop the skills needed to become successful chemists and researchers.

Learning Outcome
A statement of what a student (or other stakeholder) is expected to do as a result of the program or service.

Program Outcome (Operational)
A statement of what a program or process is expected to do, achieve, or accomplish for its own improvement; generally needs- or satisfaction-driven.

Methods of Assessment
Techniques or instruments used in assessment.

Qualitative Assessment Methods
Methods which rely on descriptions rather than numerical analyses. Examples are ethnographic field studies, logs, journals, participant observation, interviews and focus groups, and open-ended questions on interviews and surveys. The analysis of qualitative results requires non-statistical skills and tools.

Quantitative Assessment Methods
Methods which rely on numerical scores or ratings such as surveys, inventories, institutional/departmental data, departmental/course-level exams (locally constructed, standardized, etc.). In order to analyze quantitative results, either descriptive or inferential statistics are needed.
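For instance, a descriptive summary of quantitative results might be computed as follows. This is a minimal Python sketch; the 1-5 survey scores are hypothetical.

    # Descriptive statistics for a set of quantitative survey results.
    import statistics

    scores = [4, 5, 3, 4, 4, 2, 5, 4, 3, 4]   # ratings on a 1-5 scale

    print(f"n       = {len(scores)}")
    print(f"mean    = {statistics.mean(scores):.2f}")
    print(f"median  = {statistics.median(scores)}")
    print(f"std dev = {statistics.stdev(scores):.2f}")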

Portfolio
An accumulation of evidence of individual proficiencies, especially in relation to learning outcomes. Examples include, but are not limited to, samples of student work such as projects, journals, exams, papers, presentations, and videos of speeches and performances. Evaluating portfolios requires scoring tools applied with the judgment of experts (faculty members).

Reliability
Reliable measures are measures that produce consistent responses over time.
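One common way to gauge this consistency, offered here only as an illustration, is a test-retest correlation between two administrations of the same instrument. A minimal Python sketch with hypothetical paired scores:

    # Test-retest reliability: correlate scores from two administrations of
    # the same instrument. statistics.correlation requires Python 3.10+.
    import statistics

    first_administration  = [78, 85, 62, 90, 71, 88]
    second_administration = [75, 87, 60, 92, 70, 85]

    r = statistics.correlation(first_administration, second_administration)
    print(f"test-retest correlation: r = {r:.2f}")  # near 1.0 suggests consistency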

Rubrics (Scoring Guidelines)
A set of categories, based on learning outcomes, that defines and describes the important components of the work being evaluated. Each category contains varying levels of completion or competence, with a score assigned to each level and a clear description of the criteria needed to attain that score. Rubrics are written and shared to differentiate levels of performance and to anchor judgments about the degree of achievement.
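As a concrete illustration, one rubric category can be represented as levels paired with scores and criteria descriptions. This is a hypothetical Python sketch, not a prescribed format.

    # One rubric category: each level pairs a score with the criteria a
    # piece of work must meet to earn that score (hypothetical wording).
    rubric = {
        "Exemplary":  (4, "Thesis is clear, focused, and fully supported"),
        "Proficient": (3, "Thesis is clear and mostly supported"),
        "Developing": (2, "Thesis is present but weakly supported"),
        "Beginning":  (1, "Thesis is unclear or unsupported"),
    }

    level = "Proficient"            # the level a rater judged the work to meet
    score, criteria = rubric[level]
    print(f"{level} ({score}): {criteria}")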

Student Learning Outcome Assessment Cycle
An institutional pattern of identifying outcomes, assessment, and improvement plans based on the assessment.

Teaching-Improvement Loop
Teaching, learning, outcomes assessment, and improvement may be defined as elements of a feedback loop in which teaching influences learning, and the assessment of learning outcomes is used to improve teaching and learning.

Triangulation
An approach in which multiple assessment methods are used to provide a more complete description of student learning. An example of triangulation would be using a survey, an interview, and observation of student behavior to assess a single learning outcome.

Validity
As applied to a test, validity refers to a judgment of how well the test or instrument actually measures what it purports to measure.
