
Showing posts with label Evaluation. Show all posts

Monday, September 13, 2010

Likert-Type Scales: Examples, Samples and Information

If you are doing a formative or summative evaluation of instruction, a needs assessment, or a program evaluation, sooner or later you are going to need a Likert-type scale.

You may also wonder whether to use an odd or an even number of points in a Likert-type scale; here is some insight on that.

Here are several types of scales that you might find handy.

Level of Frequency – 7 point
1 – Never
2 – Rarely, in less than 10% of the chances when I could have
3 – Occasionally, in about 30% of the chances when I could have
4 – Sometimes, in about 50% of the chances when I could have
5 – Frequently, in about 70% of the chances when I could have
6 – Usually, in about 90% of the chances when I could have
7 – Every time

Level of Quality – 5 point
1 – Poor
2 – Fair
3 – Good
4 – Very good
5 – Excellent

Level of Satisfaction – 5 point
1 – Not at all satisfied
2 – Slightly satisfied
3 – Moderately satisfied
4 – Very satisfied
5 – Extremely satisfied

Level of Satisfaction – 7 point
1 – Completely dissatisfied
2 – Mostly dissatisfied
3 – Somewhat dissatisfied
4 – Neither satisfied nor dissatisfied
5 – Somewhat satisfied
6 – Mostly satisfied
7 – Completely satisfied

Here is a list of other Likert-type scales you may find helpful.

Also, here is an article about preparing a Likert Scale.

Here is a link to a page that has some templates that might be helpful.

Finally, are you really pronouncing "Likert" correctly? I'll bet not. Here is the proper pronunciation of "Likert".
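
If you need to tabulate responses gathered with scales like the ones above, they can be kept as simple data structures. Here is a minimal Python sketch; the scale constants and helper functions are illustrative, not taken from any particular survey tool:

```python
# Two of the scales from this post, encoded as ordered label lists.
# Numeric codes are the 1-based positions, matching the post's numbering.
QUALITY_5 = ["Poor", "Fair", "Good", "Very good", "Excellent"]
SATISFACTION_7 = [
    "Completely dissatisfied", "Mostly dissatisfied", "Somewhat dissatisfied",
    "Neither satisfied nor dissatisfied", "Somewhat satisfied",
    "Mostly satisfied", "Completely satisfied",
]

def code_for(scale, label):
    """Return the 1-based numeric code for a response label."""
    return scale.index(label) + 1

def mean_response(scale, labels):
    """Average numeric code for a list of response labels."""
    codes = [code_for(scale, lbl) for lbl in labels]
    return sum(codes) / len(codes)

responses = ["Good", "Excellent", "Very good", "Good"]
print(mean_response(QUALITY_5, responses))  # 3.75
```

Whether averaging ordinal codes like this is statistically defensible is itself a long-running debate in the Likert literature, so treat the mean as a rough summary.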



Thursday, July 15, 2010

Learn About The Role Evaluation Plays in Grants

I do work as an evaluator on several National Science Foundation (NSF) Advanced Technology Education (ATE) grants. (learn a little more about the ATE programs)

I have been asked to share some of my experiences in an upcoming webinar about NSF ATE Evaluations. Here are the details in case you are interested in attending.

Webinar: Making Evaluation Integral to Your ATE Proposal

WEDNESDAY, JULY 21 | 1-2:30 PM ET

In this free, 90-minute webinar, participants will learn how to make evaluation a strong component of their ATE proposals. Staff from the ATE Evaluation Resource Center will provide guidance about how to focus an ATE evaluation, develop a plan for data collection and analysis, describe the evaluation in a proposal, and work with an evaluator.

The webinar will feature NSF-ATE program officer Linnea Fletcher, who will provide NSF’s perspective on these topics. Gordon Snyder and Karl Kapp, a veteran ATE PI-evaluator team, will also join the webinar, talking about their successful experiences working together on funded ATE proposals.

Register here.

Participants will leave the webinar with the knowledge and tools they need to
  • Define the purpose of the evaluation and how the results will be
    used
  • Write clear and useful evaluation questions tied to the
    project’s intended outcomes
  • Be responsive to NSF’s expectations for ATE evaluations
  • Systematically identify question- and context-appropriate data
    collection methods and information sources
  • Locate data collection instruments
  • Create a feedback loop so project staff can use evaluation
    results to improve their efforts
  • Find competent evaluators and budget for their work

Thursday, March 25, 2010

Creating Assessment Questions that Measure Performance

While knowledge assessments have become more and more popular in both business and academia, there are still issues with the creation of valid and reliable test questions. It is imperative that an assessment item actually measure knowledge and potential performance. The goal is to create a test question that is linked to the objective of the training and that measures the right level of learning.

The most effective method of ensuring a valid test item is to ensure that the questions and the desired responses link back to the course objectives. A good method of doing this is the three-column chart shown below:

Task: Inspect a vial and accept or reject based on three critical criteria.

Objective: When given a vial, the employee will be able to correctly accept or reject the vial based on three evaluation criteria within 4 minutes with 100% accuracy.

Performance Criteria: Given an image of a vial that can be rotated on the computer screen, the learner will be required to indicate if the vial should be accepted or rejected with 100% accuracy within 4 minutes, which are counted down on the screen.

Using this type of chart, you can distinguish between different types of learning when you create your assessment items. You can tell if you are testing at one level but the objective is at another.

For example, you may want to distinguish between a learner’s ability to memorize a concept and the learner’s ability to apply the concept.

An inappropriate question for the above task would be “Identify the three criteria that lead to the rejection of a vial.” This question asks for identification, not application, of a skill. Just because a learner knows the three criteria doesn’t mean he or she can apply them.

The test question must ask the learner to apply the concept, not merely repeat the memorized criteria. Do not ask the learner to identify the three success criteria when you really want them to apply the criteria. The level at which you are testing must match the level at which you expect the learner to perform. When this happens, the questions are valid.
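
The alignment check described above can even be made mechanical once you tag each objective and test item with a level of learning. This is a hypothetical sketch; the level names and class structure are illustrative, not from the post:

```python
# Represent objectives and assessment items with an explicit level of
# learning, then flag items whose level does not match the objective's.
from dataclasses import dataclass

@dataclass
class Objective:
    statement: str
    level: str  # e.g. "recall" or "application" (illustrative labels)

@dataclass
class AssessmentItem:
    question: str
    level: str

def is_valid(item: AssessmentItem, objective: Objective) -> bool:
    """For this check, an item is valid only if it tests at the same
    level of learning the objective specifies."""
    return item.level == objective.level

obj = Objective("Accept or reject a vial based on three criteria.", "application")
bad = AssessmentItem("Identify the three rejection criteria.", "recall")
good = AssessmentItem("Rotate the on-screen vial and accept or reject it.", "application")
print(is_valid(bad, obj), is_valid(good, obj))  # False True
```

Real items need human judgment about what level they actually test, but recording the intended level alongside each item makes mismatches easy to audit.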

Other Assessment Resources

Test Creation Tips

Thought Unit

Job Aid for Writing Thought Provoking Questions

**ADDITION: Steven Just's Testing Best Practices contains a wealth of information on creating effective assessments; a must-read.

Friday, January 30, 2009

Cool Level One Evaluation

As learning professionals, we are familiar with Donald Kirkpatrick and his four Levels of Evaluation. If you need to get up to speed, you can read an interview with him here.

However, Level 1 evaluations, which measure initial learner reaction to instruction, are typically boring checklists or Likert-type scales that are painful to complete and don't provide much more information than how the room temperature or the donuts were. Many of them could be better designed to ask more work-relevant questions (but that's a different post).

A few months ago I stayed at a hotel in San Francisco (Hotel Diva), and like every other hotel I've ever stayed in, they wanted my opinion of the hotel and my stay. But, unlike the hundreds of other times, I actually completed the evaluation this time. Why?

Because they had a really awesome Level One evaluation that was fun to complete. It was creative, inspiring, and certainly not your "run of the mill" evaluation sheet, even though many of the questions were familiar ("Would you recommend this hotel to others?").


So if you are having trouble getting learners to complete your Level One evaluation sheets, perhaps you need to apply some creativity to your evaluations. And when you have a creative evaluation sheet, it encourages the learners to be creative as well. The hotel actually framed and placed some of the more creative evaluations on a wall near the lobby.



So maybe one of your 2009 resolutions should be to develop a more creative Level One evaluation sheet.

Thursday, February 14, 2008

Evaluation Introduction

I have been doing a lot of evaluation work lately, so here is part of the introduction for an evaluation plan I have developed. Does it make sense to you? Would you use this technique?

Introduction

The primary area of focus for this evaluation plan is to measure the impact of this learning intervention on employee and organizational performance. The impact will be measured using a combination of qualitative and quantitative data gathering techniques.

The qualitative approach will be undertaken using an Appreciative Inquiry perspective. The Appreciative Inquiry perspective on organizational development was first articulated in 1987 by two professors at the Case Western Reserve University's Weatherhead School of Management. The two professors, David Cooperrider and Suresh Srivastva, advocated an approach to evaluating and improving organizations that emphasized identification of what is working effectively as opposed to identifying problems. Appreciative Inquiry “involves, in a central way, the art and practice of asking questions that strengthen a system’s capacity to apprehend, anticipate, and heighten positive potential.” (See the Appreciative Inquiry Commons.)

The idea is to build on the positive aspects of a learning intervention and to develop ways in which the organization can leverage the potential learning success and expand its impact through examination of how its stakeholders view and value the learning intervention.

Measurement

Gathering data for Appreciative Inquiry will include the following qualitative techniques: one-on-one structured interviews, focus groups, surveys and observation. The data gathering for the evaluation will be focused on answering impact questions along three dimensions.

These three dimensions are listed below:
  • Adoption – Is the organization using the learning intervention that was developed? Are they adapting it to their needs?
  • Performance – Is the organization/individual performing at a higher level because of the learning intervention?
  • Satisfaction – Are the employees satisfied with their learning experience, and do they see it as having value?
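
When planning data collection, it can help to tag each impact question with the dimension it addresses. A small illustrative Python sketch (the questions and bucketing helper are made up for the example):

```python
# Bucket evaluation questions by the three dimensions above so coverage
# gaps (a dimension with no questions) are easy to spot.
DIMENSIONS = ("adoption", "performance", "satisfaction")

questions = [
    ("adoption", "Is the organization using the intervention as developed?"),
    ("adoption", "Has the intervention been adapted to local needs?"),
    ("performance", "Are individuals performing at a higher level?"),
    ("satisfaction", "Do employees see the learning experience as valuable?"),
]

def by_dimension(items):
    """Group (dimension, question) pairs into a dict keyed by dimension."""
    buckets = {d: [] for d in DIMENSIONS}
    for dim, q in items:
        buckets[dim].append(q)
    return buckets

grouped = by_dimension(questions)
for dim in DIMENSIONS:
    print(dim, len(grouped[dim]))
```

Each question then maps naturally to one or more of the qualitative techniques listed above (interviews, focus groups, surveys, observation).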


What other dimensions do you use to measure learning outcomes? Do we need to measure both organizational and employee results or just organizational results?

I like thinking about evaluation in a positive manner instead of simply trying to find out what is wrong.