Re: Assessment of Research Office Bill Kirby 14 Feb 1996 09:05 EST

I've been following the discussion about faculty surveys and
assessments with interest. At the Federal Quality Institute I worked
with many organizations in assisting them with organizational
assessments, developing organizational performance measures, and
developing "customer" related surveys. In my experience, the
development of valid, reliable, and meaningful customer surveys is an
extremely tricky and difficult undertaking. More often than not they
result in misleading or less than useful information about
organizational performance. (Ever try to figure out exactly what
scoring 3.6 on a 5 point scale means? And remember that survey
respondents will generally give "high" marks.) My advice is generally
NOT to do surveys unless you are prepared (e.g. have the expertise and
resources) to do them well.

If the main interest is in "customer perception" about how you're
doing, you'll get far more valuable information (and at least as valid
and reliable) by simply talking to a lot of them, perhaps in an
informal, but structured setting. If you have expertise in focus group
methodology, and can find faculty willing to participate, it would be
a very effective way of getting some rich data. (I'd be leery of a
"committee" to "advise" on performance.) If you decide you want to
proceed with a survey, I strongly advise spending a lot of time
talking with faculty in depth BEFORE constructing the survey
instrument. This is extremely useful in helping to find out exactly
what questions to ask, and to probe for the operational significance
of people's comments. Many times in a survey we end up asking the
wrong questions, only to find out we're doing wonderfully (or
pitifully) -- on things that are of limited value to the customer.
Framing the questions is one of the most critical aspects of survey
methodology.  You should also try to pin down customer perceptions and
translate them into operational terms and measurable performance
indicators. For example, knowing that you're performing 3.7 out of 5
in "timeliness" is far less useful than knowing that your customers
feel that X days is a reasonable standard for timeliness -- while your
process meets that standard Y% of the time.
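The X-days / Y% indicator above is simple to compute once you have the
raw turnaround data. A minimal sketch (the 5-day standard and the
sample turnaround times are hypothetical, purely for illustration):

```python
# Hypothetical: the turnaround customers said is reasonable, in days.
STANDARD_DAYS = 5

# Hypothetical sample: days taken to complete each request.
turnaround_days = [3, 7, 4, 5, 10, 2, 6, 4, 5, 8]

# Count how many requests met the standard, then express it as a percentage.
met = sum(1 for d in turnaround_days if d <= STANDARD_DAYS)
pct_met = 100.0 * met / len(turnaround_days)

print(f"Standard: {STANDARD_DAYS} days; met {pct_met:.0f}% of the time")
```

The point is that both numbers are stated in the customer's own terms:
the standard comes from talking to them, and the percentage comes from
your actual process data, not from a rating scale.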

I would be happy to talk to any of you further about surveys,
appropriate organizational assessment tools and methodology. I wish
you luck.

===========================================================================

Bill Kirby
NSF

xxxxxx@nsf.gov
703-306-1102