ERIC ED137417: Evaluating Evaluation
ED 137 417    TM 006 226
Matuszek, Paula; Lee, Ann
Austin Independent School District, Tex. Office of Research and Evaluation.
23p.; Paper presented at the Annual Meeting of the American Educational Research Association (61st, New York, New York, April 4-8, 1977)
MF-$0.83 HC-$1.67 Plus Postage.
Descriptors: Decision Making; *Educational Researchers; Evaluation; Evaluation Criteria; *Evaluation Methods; Evaluation Needs; *Personnel Evaluation; Questionnaires; *Self Evaluation; Surveys
Identifiers: *Evaluators; *Meta Evaluation
The various needs for evaluating evaluators and their efforts are discussed in this paper. The argument is presented that evaluators should not themselves carry out summative evaluation on their own efforts. Several possible purposes of evaluation of evaluation staffs and products are pursued, and the methods and persons most appropriate to each purpose are described. Planning an evaluation of evaluation to best meet the needs of evaluators is also discussed. (Author/MV)
A paper presented at the annual meeting of the
American Educational Research Association

Authors: Paula Matuszek, Ph.D.
Ann Lee, Ph.D.

Office of Research and Evaluation
Austin Independent School District
Evaluators are people who question. They question whether some programs in the schools are worthwhile. They question whether other programs might be carried out more effectively. They question because they are the kind of people who want to know what is going on and believe that such questions are best answered by examining data relevant to the issue.

One of the issues which evaluators inevitably question is evaluation itself. They question whether they are doing the best job they can with the resources at hand. They question the accuracy and technical acceptability of their work. More basically, they question the worth of evaluation efforts. They seek answers to questions about evaluation as much and as eagerly as to questions about programs. Thus is born meta-evaluation.
Evaluation is still a relatively new field. Only recently has it had the time to pause in its attempts to get its evaluations accomplished to deal in depth with the question of its own evaluation. There are many signs to show that this concern is now growing: AERA workshops in meta-evaluation, 65 advance requests for this paper, increasing discussions among evaluators gathering together any time and any place. This search for improvement is a healthy process, which should contribute in the long run to a general improvement in the whole field of evaluation. However, it is not necessarily appropriate for evaluators themselves to carry out a meta-evaluation. Just as evaluators distrust program personnel who feel they can complete an unbiased evaluation of their own worth, the credibility of an evaluator trying to answer questions on which his job will depend is necessarily in question. No one should be asked to decide on his own existence.
Not all evaluations are carried out, however, to decide whether a program should be continued. It is less clear whether an evaluator can do an effective job of formative evaluation of his own activities. Perhaps for some aspects of evaluation a meta-evaluation carried out by evaluators themselves would be most informative. In any case, the question of who should evaluate evaluators is not one which is going to go away if ignored. Besides the evaluators themselves, the clients of evaluation (school board, program officers, administrators, the public) will inevitably ask summative questions about evaluation. The task must be done, and formally or informally it will be done. The only real question is who will do it.
The answer to this question must inevitably depend on another: What is the purpose of the meta-evaluation? It may be to decide whether evaluation is a wise expenditure of resources, and should therefore be continued. It may be to provide information to evaluators to improve their own functioning. It may be to validate the quality of the evaluator's work. It may be an attempt to ensure survival, or placate angry school personnel, or even to meet some mandated guideline. In short, the reasons for meta-evaluation cover as wide a scope as the reasons for program evaluation. Each of these reasons carries different information needs and implies different persons or groups to do the meta-evaluation.
Although, as noted earlier, the topic of meta-evaluation is beginning to gain in interest and attention, it can hardly be considered a well-developed topic. Stufflebeam¹ noted in 1974 that meta-evaluation was limited in scope, with minimal research and little examination of the problems involved in actual evaluation work. Since that time awareness of the importance of the topic has been increasing, but this increase has not been accompanied by a corresponding increase in research and examination in the literature.

¹Stufflebeam, D. Toward a Technology for Evaluating Evaluation. ERIC Accession Number ED 090 319.
WE ASKED RESEARCH DIRECTORS
In order to explore some of these questions, we conducted a survey of 58 large city research directors in the United States. These directors are members of a group who meet annually at AERA. Two questionnaires were sent to each person on the list of this group. One questionnaire asked for research directors' attitudes toward meta-evaluation and also elicited their perceived needs for meta-evaluation of their own office's activities. The second questionnaire, to be filled out only by those who had some kind of evaluation conducted of their office, asked for information concerning this evaluation: who carried it out, whether it was worth it, etc. The general questionnaire was returned by 39 of the 58 people to whom it was mailed. Responses to this questionnaire are shown in Figure 1.
"WE SHOULD PRACTICE WHAT WE PREACH"
In general the results of this questionnaire support the conclusion that meta-evaluation is a concern: 36 of the respondents said they were in favor of evaluation of public school activities. The responses included comments such as "We should practice what we preach" and "Evaluation is essential to monitoring effective services." One of the two dissenters indicated that evaluation criteria would be very difficult to establish; some persons in favor of meta-evaluation expressed the same concern. In spite of this overwhelming support for the idea, however, only 10 (26%) of the respondents reported having had some kind of formal evaluation of their department's activities.
1. Have your department's activities ever been formally evaluated?
   28 never
   1 no response
   [remaining tallies illegible]
2. On how many occasions have your evaluation activities been formally evaluated?
   3 once
   2 twice
   [tally illegible] four or more times
3. Are you in favor of "evaluation of public school evaluation activities"?
   [tallies illegible]
4. Why would you want to have your department's activities evaluated?
   general self improvement
   self improvement in specific areas
   establish or clarify our role
   establish credibility for us
   for "good public relations" and/or to gain staff
   set priorities and allocate resources
   improve professional standards
   general positive comments
5. Please rank the following evaluation activities in order of their greatest need for evaluation in your department (1 = greatest need).
   (No. of people ranking item 1st, 2nd, or 3rd; tallies illegible)
   Communication of results
   External communication with school personnel
   Data quality
   Management skills of evaluators

Figure 1: QUESTIONNAIRE RESPONSES. Responses to questionnaire sent to large city research directors in spring, 1977. Based on 39 respondents, of 58 mailed. (Page 1 of 3)
6. What kind of information would you want to see collected in such an evaluation?
   The responses to this question were so varied that it seemed likely that the question communicated different things to different people; we could not make a meaningful tally of responses.
7. Do you believe that your department could do an adequate job of evaluating its own activities?
   21 no
   1 no response
   [yes tally illegible]
8. If money were no object, whom would you ask to do an external evaluation of your department's activities? (You may name a specific person or describe the qualifications, affiliation, etc., of the "ideal evaluator".)
   The data were first categorized by types of persons desired:
   18 public school evaluator
   12 university professor
   7 specifically named individuals or groups
   3 public school administrators
   2 management experts
   1 external firm
   [tally illegible] don't know
   The data were then categorized by whether a team was specifically indicated:
   21 team specified
   11 individual
   2 not clear from response whether team or individual was preferred
9. How would you locate this person?
   12 send out an RFP
   6 select someone from my own personal knowledge or contacts
   6 through AERA or the Large City Research Directors
   3 ask a colleague's advice
   3 involve others within the district in the search

Figure 1 Continued. (Page 2 of 3)
10. If your department contracted an external evaluation of its activities, do you predict that it would be worth the money it cost?
    19 yes
    15 don't know
    2 no response
    [no tally illegible]
11. What political payoffs would be created by an evaluation of your department's activities?
    9 I'm not sure
    9 it would have positive effects (prestige, public relations, respect)
    4 no effect
    4 improve our credibility
    3 improve our activities/skills
    2 client would understand our activities better
    1 improve our funding situation
12. Do you have any other convictions or attitudes about "evaluating public school evaluation"?
    The responses we received to this question indicate that yes, people do have other convictions and attitudes, voluminously. The essays we received defied coding and tallying; we have referenced some in the text and integrated them into our thinking, but we can't reduce them to numbers.

Figure 1 Continued. (Page 3 of 3)
The reasons for evaluation given by the respondents focused heavily on formative information: 26 respondents listed some general or specific self-improvement reasons for wanting an evaluation. Other reasons, focused on persons external to the evaluator (such as credibility), formed a much smaller category. Of the kinds of activities seen as needing improvement, communication of results clearly stands out in the minds of evaluators. Evaluation designs and communication with school personnel were also ranked high in concern. Data analysis and management skills of evaluators were not ranked at the top of their concerns by many respondents, but they were ranked third by many people; these are evidently relatively small, but prevalent, concerns.
Responses to the question about what kind of information should be collected in such an evaluation support the concern of those who feel that criteria would be difficult to establish. Most answers were very general: usefulness of results, quality of data, cost effectiveness. Most respondents in fact indicated the area they would like to see addressed, rather than the specific information which would elucidate this area. Indeed, the answers sound very much like those which program people give initially when their input into evaluation designs is sought!
"WE TELL IT LIKE IT IS"
Surprisingly (to us, at least), nearly half the evaluators felt that they could do a good job of evaluating themselves. These persons felt that they had competent staff with a good understanding of the framework in which they work. The primary concern of those who did not feel that they could adequately evaluate themselves was bias or lack of objectivity.

Besides the evaluator, who is perceived as a desirable evaluator? The majority of responses to a question about whom the evaluator would choose to carry out a meta-evaluation fell into two groups: public school evaluators and persons associated with a university. They would be located by RFPs, by personal contacts, through AERA. Some people had individuals already in mind; the majority did not. Of the 39 respondents, 21 specifically indicated that they wanted a team or group of individuals.
"AND I'M STILL NOT SURE"
Although there is a strong interest in evaluation and strong belief that it should be done, there was definitely concern about whether it would be worthwhile. Nineteen of the respondents felt it would be worthwhile to contract a meta-evaluation, and only a few said it definitely would not be worth the cost, but 15 respondents expressed doubt about the value. Reasons for this doubt included lack of good criteria, bad experiences from past contracted evaluations by external firms, and cost of the evaluation. The general state of uncertainty concerning the whole area was graphically illustrated by one respondent, who said "We did contract an evaluation, and I'm still not sure."
Evaluators' general convictions regarding meta-evaluation confirmed our own feelings in this area: there is strong interest in, and strong perceived need for, meta-evaluation. There is also strong concern about the criteria to be used and the background of the meta-evaluators, as well as about the cost. Concerns such as these are certainly familiar to any evaluator; they are identical to those expressed by most of the people we evaluate! It was a bit startling to discover that we have the same doubts about and fears of our evaluators as our "evaluatees" have of us!
WHO EVALUATES? WHY ARE YOU EVALUATING?
In addition to the general results given above, we wanted to address the question of whether the perceptions of who makes the best evaluator differ depending on the reasons for the evaluation and the greatest needs for evaluation. To study this issue we carried out a series of two-way cross-tabulations between "who" (as indicated by questions 7 and 8) and "what" (as indicated by questions 4 and 5). It was immediately clear that, as expected, different needs tended to be associated with different meta-evaluator preferences. Results are summarized in Figure 2 below.
Figure 2: PERSONS CHOSEN FOR VARIOUS EVALUATION PURPOSES. Cross-tabulation of reason given for an evaluation (question 4) with person named to carry out an evaluation (question 8). [Table cells, including a "priority and resource" row, are not legible in this reproduction.]
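A two-way cross-tabulation of this kind is straightforward to reproduce. The sketch below is a minimal illustration: the category labels echo survey questions 4 and 8, but these particular records are invented, not the paper's data.

```python
from collections import Counter

# Hypothetical (reason for evaluation, preferred meta-evaluator) pairs;
# the labels echo questions 4 and 8, but the records are invented.
responses = [
    ("self improvement", "public school evaluator"),
    ("self improvement", "public school evaluator"),
    ("self improvement", "university professor"),
    ("credibility", "university professor"),
    ("credibility", "external firm"),
]

# Two-way cross-tabulation: count occurrences of each (reason, evaluator) pair.
crosstab = Counter(responses)

for (reason, evaluator), n in sorted(crosstab.items()):
    print(f"{reason:16} x {evaluator:24} {n}")
```

Each cell of a table like Figure 2 is simply one such pair count.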
There was considerable variety in when the various meta-evaluators named were chosen. For reasons which are essentially internal to an evaluation unit (self-improvement, for instance) the most frequent choice for evaluator was another public school evaluator. Evaluators evidently have most trust in one of their own kind understanding the context and being able to provide useful feedback. When the purpose was externally motivated, on the other hand (credibility, for instance), a university professor or external firm became more desirable. This probably reflects a belief that another public school evaluator might be perceived by the "outside" as too close to the evaluator to be unbiased. It may also reflect a feeling that external concerns need other viewpoints reflected in the meta-evaluation.
Examining the ranking of specific activities by need for evaluation and comparing this ranking to the evaluator recommended refines this general observation somewhat. These results are summarized in Figure 3. Public school evaluators were in general recommended more often than any other category of meta-evaluator for evaluating internal matters, but persons who ranked their need for internal management evaluation high named university professors to carry out the evaluation 50% of the time and public school evaluators only 28% of the time. Conversely, while public school evaluators were not in general the most frequent choice when the reason for meta-evaluation expressed was externally oriented, for the specific activity of external communication with school personnel, those rating it high in need for evaluation named public school evaluators 41% of the time, and university professors only 19% of the time. It seems as though there are some categories, such as management, which are not perceived as primarily in the realm of the public school evaluator. There are others, such as external communication with the schools, where in spite of the existence of experts in communication the general knowledge of school contexts which another public school evaluator would have tends to outweigh other considerations.
Figure 3: PERSONS CHOSEN FOR EVALUATION FOR DIFFERENT AREAS RANKING HIGH IN NEED OF EVALUATION. Cross-tabulation of person named to carry out evaluation (question 8) with activities ranked first or second in importance (question 5). Rows: Evaluation Designs, Data Analyses, Instrument Design, Management Skills, Communication of Results, Internal Management, Internal Communications, Data Quality. [Column headings and most cell counts are not legible in this reproduction.]
Figure 3 also illustrates another facet of the choice of evaluator: teams versus individuals. For most categories of activities, teams and individuals were named with approximately equal frequency. However, respondents ranking data quality and communication of results high in need for evaluation specified a team 75% of the time, and persons ranking evaluation designs high in need specified a team 83% of the time. This may indicate a perception of these areas as particularly complex and multi-faceted, thus requiring a team approach.
Another interesting insight into choice of meta-evaluators is given by looking at the relationship between activities ranked high in need for evaluation and response to the question about carrying out an adequate self-evaluation. These results are summarized in Figure 4 below.
Figure 4: SELF-EVALUATION RELATED TO ACTIVITIES IN NEED OF EVALUATION. Cross-tabulation of response to whether the department could adequately evaluate its own activities (question 7) by activities ranked first or second in need of evaluation (question 5). [Cell counts are not legible in this reproduction.]
Evaluators ranking internal management high in need for evaluation seldom felt that they could do an adequate job of self-evaluation, whereas those concerned about communication of results and data quality were more likely to feel that they could do an adequate job. This pattern reflects that shown in Figure 3: public school evaluators feel they could do an adequate job of self-evaluation in roughly the same areas as they feel most strongly that a public school evaluator would make a suitable meta-evaluator. Categories where they feel least likely to be able to evaluate themselves are also those where they tended to list university professors and external firms with greater frequency.
"A TEAM CONSISTING OF"
Another interesting insight into choice of evaluators is illustrated in Figure 5: the relationship between kinds of evaluators named and whether a team was specifically indicated. Most of the categories of meta-evaluator were most likely to be mentioned as part of a team. However, one third of those mentioning another public school evaluator did not specify a team; only 18% of those specifying a university professor did not do so as part of a team. A team was not generally specified when the choice was a firm; this may be because the firm is perceived as a team already. These data suggest that evaluators see their field as many-faceted, requiring experts from several different backgrounds to evaluate their efforts. However, another public school evaluator is evidently more likely to be considered capable of carrying out an entire evaluation alone, presumably because he must also have some background in all the facets of evaluation.
Figure 5: CHOICE OF EVALUATOR RELATED TO CHOICE OF TEAM OR INDIVIDUAL. Cross-tabulation of what kind of person was named and whether a team was specifically requested on item 8. Columns: Team, Individual, % Team. Surviving row labels include Public School Evaluator, Management or Consultant Firm, and Administrator of District; the Public School Evaluator row reads 12 team, 6 individual, .67 team. [The remaining row labels and counts are not legible in this reproduction.]
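The "% Team" column is simple share arithmetic, team mentions over all mentions of that evaluator type. As a quick check: the Public School Evaluator counts (12, 6) are legible in Figure 5, while the University Professor pairing below is an assumption chosen to match the "only 18% ... individual" figure quoted in the discussion.

```python
# (team, individual) mention counts per evaluator type. The first row comes
# from Figure 5; the second is an assumed pairing consistent with the text's
# "only 18% of those specifying a university professor did not do so as part
# of a team".
counts = {
    "Public School Evaluator": (12, 6),
    "University Professor": (9, 2),
}

shares = {}
for who, (team, individual) in counts.items():
    # Fraction of all mentions of this type that specified a team.
    shares[who] = round(team / (team + individual), 2)
    print(f"{who}: {shares[who]:.2f} team")
```

With these counts the shares come out to .67 and .82, consistent with "one third did not specify a team" and "18% individual" above.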
"ALWAYS WORTHWHILE TO BE SIZED UP"
Clearly who should carry out a meta-evaluation depends on the purpose for the evaluation and the greatest needs for evaluation. Does the expected worth of the evaluation also depend on what is being evaluated? Figure 6 sheds some light on this question.
Figure 6: NEED FOR EVALUATION RELATED TO WHETHER EVALUATION WOULD BE WORTHWHILE. Cross-tabulation of areas ranked first or second on need for evaluation (question 5) with whether an evaluation would be worth the money (question 10). [Cell counts are not legible in this reproduction.]
Clearly there are perceived differences in the worth of evaluation, depending on the category to be evaluated. Persons concerned about management skills of evaluators, data analyses, and evaluation designs were most likely to feel that an evaluation would be worth it. Persons who rated internal management high in need for evaluation were much less likely to feel that it would be worth it.

This survey confirms two beliefs with which we started an examination of the question of who evaluates the evaluators:

The importance of meta-evaluation is increasing.

The person who is perceived as best to carry out a meta-evaluation depends on the purpose of the evaluation.
". . . INFORMATION FOR DECISION MAKING"
Meta-evaluation shares many characteristics with any other evaluation. One way of examining the data just reported and bringing some resolution to the persistent question of "who" is to examine the evaluation of evaluation in a framework which has proved fruitful for many program evaluations: that of providing information for decision making. There are many decisions which a meta-evaluation might address:

* Should the evaluation office be expanded, cut back, refunded, etc.?
* Should the evaluation office's findings be used for making decisions about program continuation, refunding, etc.?
* Should the evaluation office's findings be used for making decisions about program focus, organization, etc.?
* What should the evaluation office evaluate?
* Should the evaluation office be reorganized?
* Should the evaluation office change the kinds of questions it asks?
* Should the evaluation office make changes in the technical aspects of its work?
* Should the evaluation office change the way it presents its results?
* Should the evaluation office make changes in the way it interacts with the rest of the district?

Let us consider some of these decisions.
"EVALUATION GOES ON ANNUALLY AS WE SUBMIT OUR BUDGETS FOR THE COMING YEAR TO OUR ADMINISTRATIVE SUPERIORS."
Ultimately the decision about continuing, cutting back, etc. will be made, clearly, by whoever approves the evaluation's budget: district school board, federal program officer, etc. Kinds of criteria likely to be used include: What proportion of the total budget goes into evaluation? Did the unit provide information useful in making important decisions about programs? If evaluation findings were acted on by programs, did the programs improve? This decision will clearly be a politically oriented one, with relatively little hard data to answer it. This suggests that the meta-evaluator should be first of all one familiar with the politics of a school district. The level of technical skills required is relatively low, but the level of credibility required is very high, since the data are likely to be "soft". From the evaluation office's viewpoint, a meta-evaluation addressing this point will probably not have much internal benefit; however, the external PR value of having had an evaluation carried out can be enormous.
"TO PROVIDE AN INDEPENDENT JUDGMENT THAT THE OFFICE IS FUNCTIONING ON A PROFESSIONALLY SOUND BASIS."
The next three decisions listed above are all related: basically they deal with the worth and credibility of the findings of the evaluation office. Some of the questions which would be addressed here are technical ones: Are the analyses appropriate? Are the conclusions drawn warranted by the data? Are proper testing and scoring procedures used to minimize error? In addition, there are some less technical questions which will certainly influence the decision: Are the criteria used for the evaluation appropriate in the opinion of the decision maker? How much influence on the findings presented did the staff of the program being evaluated have?

The decision maker probably has formed an opinion on most of these questions. Thus, an evaluation office may want to initiate an evaluation in this area, simply to ensure that some accurate data is available to the decision maker. The evaluator here will clearly need a good technical background, possibly a university professor. This is probably an area where a public school evaluator is not a good choice for evaluation; his credibility as meta-evaluator might not be sufficiently high. On the other hand, most public schools are familiar with the university researcher who insists that all studies have to be carried out following a strict experimental-control model; a review by someone with that orientation would probably rule out as invalid 90% of the work performed by all evaluation offices. Therefore the person responsible for this meta-evaluation should ideally also have an understanding of the context in which public school evaluation takes place. This is one area in which a team approach could be of value.
The evaluator cannot himself effectively carry out an evaluation addressing either the question of continuation of his office or the questions relating to the use of his data to make program changes. Some of the data can indeed be gathered internally by evaluation offices, but the need for objectivity (or rather, for perceived objectivity) is too high to allow an effective self-evaluation to take place. However, by initiating the process of such a meta-evaluation, the evaluator often has some choice over the evaluator chosen. Using this choice wisely can avoid the problem of an evaluation being carried out without an understanding of context, while making sure that the expertise to do a good job is present.
"TO ESTABLISH PRIORITIES"
What topics/programs should the evaluation office evaluate? This is a decision for which an office can gather information. Questions to be addressed might include: What programs receive a large proportion of the budget? What areas are perceived as important by district administrators? In what areas are major decisions going to be made during the coming year? In what areas will information in fact be used? Ideally the evaluation office will not itself set its evaluation areas, but will work with administrative or policy-setting personnel above the office to establish these areas. The evaluation office can very well provide much of the information which goes into this decision. This would not be the case only if the evaluation office were perceived as having a bias, being "out to get" a program, for instance.
"TO IMPROVE EVALUATION DESIGNS, TECHNIQUES, AND STRATEGIES"
The remaining questions deal with decisions which will probably be made by the evaluation office itself. Thus the choice of meta-evaluator is probably largely the evaluator's. The objectivity of the meta-evaluator needs to be answered only to the satisfaction of the evaluator. The best choice for the evaluation depends on the strongest needs for improvement in the office, as noted in the survey responses. For internal management, management skills of evaluators, and communication, outside experts may well be aware of techniques and ideas not normally in an evaluator's background. For technical areas such as data quality and analyses, strong technical skills are needed. If he is to benefit the evaluation office, the meta-evaluator must be capable of addressing complex statistical issues which have no obvious or simple answers. This evaluator has less need for an understanding of the context of evaluation in the schools, since the findings will be filtered through the understanding that the office has before they are implemented. In a department with many resources and a variety of backgrounds, staff members may be able to serve this function for each other; more typically in this situation a university person has the most appropriate combination of skills. This is also likely to be a time-consuming area of decisions to address, since it involves familiarity with an office's activities and feedback about the office which cannot be given in an hour. Thus, working with local university personnel over a period of time may be a desirable option.
There is obviously no single answer to the question of who can best evaluate the evaluator. External persons provide relatively higher objectivity and relatively less understanding of the local context compared to self-evaluation. Other public school evaluators may have a good awareness of the context and of specific problems, but they will also have some problems with credibility. In general, there tends to be a trade-off between knowledge of context and credibility.

The most important thing for evaluators to realize at present is not that some particular category of evaluator is best, but that there are both costs and benefits from any category of evaluation. Meta-evaluation is coming; by addressing the issue now, evaluators maximize their likelihood of having the benefits to them of an evaluation outweigh the costs.
"POSSIBILITIES FOR RESEARCH ALONG THESE LINES. KEEP UP THE GOOD WORK!"
We intend to! We are particularly interested in reaching other persons to gather their opinions and feelings regarding meta-evaluation. If you are interested in being included in a survey, just get your name and address to one of us (there is a form attached you can use if you wish). We are especially interested in contacting persons who have had their evaluation activities formally evaluated and who have themselves carried out meta-evaluations.
YES, I WOULD LIKE TO BE INCLUDED IN A SURVEY.

___ I have been involved in meta-evaluations of my work.
___ I have carried out meta-evaluations of other people's work.

Paula Matuszek or Ann Lee
6100 Guadalupe, Box 79
Austin, TX 78752