Compiled by Patricia Rogers
This resource file supports the regular public program "areol" (action research and evaluation on line), offered twice a year beginning in mid-February and mid-July. For details, email Bob Dick at email@example.com or firstname.lastname@example.org
- Conceptual frameworks for meta-evaluation
- Emphasis on technical competence
- Critical analyses of evaluation
- Managerial meta-evaluations
- Some examples of meta-evaluation
Conceptual frameworks for meta-evaluation
Analyses of evaluation and meta-evaluation using different perspectives or paradigms
These articles show how different conceptualizations of the purpose of evaluation and its role in programs lead to different understandings of meta-evaluation. I recommend you start with these; they reveal the assumptions and limitations of the other literature.
Banks, J., & Clark, R. F. (1983). Administrative Structure Theory and Evaluation: An Example from the Community Services Administration. Evaluation and Program Planning, 6, 39-47.
Beyer, J. M., & Trice, H. M. (1982). The Utilization Process: A Conceptual Framework and Synthesis of Empirical Findings. Administrative Science Quarterly, 27, 591-622.
Codd, J. A. (1988). Knowledge and control in the evaluation of educational organisations. Geelong, Vic: Deakin University.
Floden, R. E., & Weiner, S. S. (1978). Rationality to Ritual: The Multiple Roles of Evaluation in Government Processes. Policy Sciences, 9(9), 9-18.
O'Reilly III, C. (1981). Evaluation Information and Decision Making in Organizations: Some Constraints on the Utilization of Evaluation Research. In A. Bank & R. C. Williams (Eds.), Evaluation in School Districts: Organizational Perspectives (pp. 25-64). Los Angeles, CA: Center for the Study of Evaluation, UCLA Graduate School of Education, University of California.
Palumbo, D. J., & Nachmias, D. (1983). The Preconditions for Successful Evaluation: Is There An Ideal Paradigm? Policy Sciences, 16, 67-79.
Rogers, P. J. (1994). Evaluating approaches to program evaluation: the development of a new theoretical framework and its application to the utilization-focused approach to evaluation. Research Seminar Series. Department of Policy, Context and Evaluation Studies, University of Melbourne.
Rogers, P. J., & Hough, G. (1994). Improving the effectiveness of evaluations: making the link to organizational theory. Unpublished manuscript
Ryan, A. G. (1988). Program Evaluation Within The Paradigms: Mapping The Territory. Knowledge: Creation, Diffusion, Utilization, 10(1), 25-47.
Shapiro, J. Z. (1984). Conceptualizing Evaluation Use: Implications of Alternative Models of Organizational Decision Making. In ? (Ed.), Evaluation Studies Review Annual 1984
Smith, N. L. (1987). Toward the Justification of Claims in Evaluation Research. Evaluation and Program Planning, 10, 309-314.
These articles use more sophisticated perspectives on program evaluation and implementation than most research in the area:
Huberman, M. (1987). Steps toward an integrated model of research utilization. Knowledge: Creation, Diffusion, Utilization, 8(4), 586-611.
Huberman, M., & Cox, P. (1990). Evaluation utilization: building links between action and reflection. Studies in Educational Evaluation, 16, 157-179.
Huberman, M., & Levinson, N. (1984). Knowledge Transfer and the University: Facilitators and Barriers. Review of Higher Education, 8(1), 55-77.
Emphasis on technical competence
Mainstream literature on evaluation and meta-evaluation focuses primarily on technical competence, treating it as both a necessary and a sufficient condition of "good" evaluation.
Berk, R. A., & Rossi, P. H. (1990). Thinking About Program Evaluation. Newbury Park, CA: Sage Publications.
Caron, D. J. (1993). Knowledge Required To Perform The Duties of an Evaluator. The Canadian Journal of Program Evaluation, 8(1), 59-78.
Chelimsky, E. (1987). The politics of program evaluation. In D. S. Cordray, H. S. Bloom, & R. J. Light (Eds.), Evaluation Practice in Review (pp. 5-21). San Francisco: Jossey-Bass.
Cook, T. (1974). The potential and limitations of secondary evaluation. In M. W. Apple, M. J. Subkoviak, & H. S. Lufler, Jr. (Eds.), Educational evaluation: Analysis and responsibility. Berkeley, CA: McCutchan Publishing.
Cook, T. D., & Gruder, C. L. (1978). Metaevaluation research. Evaluation Quarterly, 2(1), 5-51.
Scriven, M. (1969). An introduction to meta-evaluation. Educational Product Report, 2, 36-38.
Scriven, M. (1975). Evaluation Bias and Its Control (Occasional Paper No. 4). Kalamazoo, MI: College of Education, Western Michigan University.
Critical analyses of evaluation
These analyses emphasise the impact of evaluations on the disempowered, often the intended clients of the program.
House, E. (1992). Multicultural Evaluation in Canada and the United States. Canadian Journal of Program Evaluation, 7(1), 133-156.
Kelly, R. M. (1987). The Politics of Meaning and Policy Inquiry. In D. J. Palumbo (Ed.), The Politics of Program Evaluation (pp. 270-296). Newbury Park, CA: Sage Publications.
Lakomski, G. (1983). Ways of knowing and ways of evaluating: Or, how democratic is "democratic evaluation"? Journal of Curriculum Studies, 15(3), 265-276. Reprinted in Codd, J. A. (Ed.) (1988). Knowledge and control in the evaluation of educational organisations. Geelong, Vic: Deakin University, pp. 55-67.
McTaggart, R. (1991). When Democratic Evaluation Doesn't Seem Democratic. Evaluation Practice, 12(1), 9-21.
O'Sullivan, E., Burleson, G. W., & Lamb, W. E. (1985). Avoiding Evaluation Co-Optation. Evaluation and Program Planning, 8, 255-259.
Managerial meta-evaluations
These references are based on the premise that program evaluation ought to serve program managers, and that meta-evaluation should therefore focus on the satisfaction of program managers. Most of the extensive literature on evaluation utilization rests on this premise (there are literally hundreds of references, so I've included just a selection).
This position raises equity issues (can program managers' interests realistically be seen as identical to the interests of program clients?) and implementation issues: there is much more to implementing a program than a manager making a decision, and managers' judgements of an evaluation's usefulness may ignore how other stakeholders use the evaluation, or are affected by it, when implementing the program.
Alkin, M. C., Daillak, R., & White, P. Using evaluations: Does evaluation make a difference? (2nd ed.). Beverly Hills, CA: Sage Publications.
Barkdoll, G. L. (1980). Type III evaluations: consultation and consensus. Public Administration Review, 40(2), 174-179.
Bedell, J. R., Ward, J. C., Jr., Archer, R. P., & Stokes, M. K. (1985). An Empirical Evaluation of a Model of Knowledge Utilization. Evaluation Review, 9(2), 109-126.
Braskamp, L. A., Brown, R. D., & Newman, D. L. (1982). Studying Evaluation Utilization Through Simulations. Evaluation Review, 6(1), 114-126.
Burry, J., et al. (1985). Organizing Evaluations for Use as a Management Tool. Studies in Educational Evaluation, 11(1), 131-157.
Connolly, T., & Porter, A. L. (1981). A user-focused model for the utilization of evaluation. Evaluation and Program Planning, 4, 131-140.
Cousins, J. B., & Leithwood, K. A. (1986). Current Empirical Research on Evaluation Utilization. Review of Educational Research, 56(3), 331-364.
Cousins, J. B., & Leithwood, K. A. (1993). Enhancing Knowledge Utilization as a Strategy for School Improvement. Knowledge: Creation, Diffusion, Utilization, 14(3), 305-333.
Crossfield, L., & Macinsh, R. (1992). The Influence of Evaluation on Decision-Making in the 1991-92 Commonwealth Government Budget. International Conference of the Australasian Evaluation Society. Melbourne.
Dawson, J. A., & D'Amico, J. J. (1985). Involving Program Staff in Evaluation Studies: A Strategy for Increasing Use and Enriching the Data Base. Evaluation Review, 9(2), 173-188.
Funnell, S., & Harrison, C. (1993). Utility is in the Eye of the User: Evaluating the Usefulness of Program Evaluations. 1993 International Conference of the Australasian Evaluation Society. Sydney.
Hegarty, T. W., & Sporn, D. L. (1988). Effective Engagement of Decisionmakers in Program Evaluation. Evaluation and Program Planning, 11, 335-339.
Lawrence, J. E. S., & Cook, T. J. (1982). Designing Useful Evaluations: The Stakeholder Survey. Evaluation and Program Planning, 5(4), 327-336.
Leviton, L. C., & Hughes, E. F. X. (1981). Research on the Utilization of Evaluations: A Review and Synthesis. Evaluation Review, 5(4), 525-548.
Mackay, K. (1992). The use of evaluation in the budget process. Australian Journal of Public Administration, 51(4), 436-439.
Newman, D. L., Brown, R. D., & Rivers, L. S. (1983). Locus of control and evaluation use: Does sense of control affect information needs and decision-making? Studies in Educational Evaluation, 9, 77-89.
Newman, D. L., Brown, R. D., & Rivers, L. S. (1987). Factors Influencing The Decision-Making Process: An Examination of the Effect of Contextual Variables. Studies in Educational Evaluation, 13, 199-209.
Newman, D. L., Brown, R. D., Rivers, L. S., & Glock, R. F. (1983). School Boards' and Administrators' Use of Evaluative Information: Influencing Factors. Evaluation Review, 7(1), 110-125.
Owen, J. M. (1993). Towards a Meta-model of Evaluation Utilization. International Conference of the Australasian Evaluation Society. Brisbane.
Siegel, K., & Tuckel, P. (1985). The Utilization of Evaluation Research: A Case Analysis. Evaluation Review, 9(3), 307-328.
Torres, R. T. (1991). Improving the Quality of Internal Evaluation: The Evaluator as Consultant-Mediator. Evaluation and Program Planning, 14, 189-198.
Some examples of meta-evaluation
Meta-evaluation of program evaluations
Greene, J. C. (1992). A case study of evaluation auditing as metaevaluation. Evaluation and Program Planning, 15, 71-74.
House, E. R., Glass, G. V., McLean, L. D., & Walker, D. F. (1978). No simple answer: Critique of the Follow-Through evaluation. Harvard Educational Review, 48, 128-160.
McCorcle, M. D. (1984). The Operation Was A Success But The Patient Died: A Critique of "The Implementation and Evaluation of a Problem-Solving Training Program for Adolescents". Evaluation and Program Planning, 7, 193-198.
Meta-evaluation of program evaluation approaches
Calsyn, R. J., & Davidson, W. S. (1978). Do We Really Want a Program Evaluation Strategy Based Solely on Individualized Goals? A Critique of Goal Attainment Scaling. Community Mental Health Journal. Reprinted in Evaluation Studies Review Annual.
Gallegos, A. (1994). Meta-evaluation of School Evaluation Models. Studies in Educational Evaluation, 20, 41-54.
Mertens, D. M. (1990). Practical Evidence of the Feasibility of the Utilization-Focused Approach to Evaluation. Studies in Educational Evaluation, 16, 181-194.
Rogers, P. J. (1992). The Utilization-Focused Approach to Evaluation: Does It Ensure Utilization? International Conference of the Australasian Evaluation Society. Melbourne.
Smith, N. L., & Hauer, W. J. (1990). The Applicability of Selected Evaluation Models to Evolving Investigative Designs. Studies in Educational Evaluation, 16, 489-500.
Weiss, C. H. (1983). Toward the Future of Stakeholder Approaches in Evaluation. In A. S. Bryk (Ed.), Stakeholder-Based Evaluation (pp. 83-96). San Francisco: Jossey-Bass.
Winberg, A. (1991). Maximizing the Contribution of Internal Evaluation Units. Evaluation and Program Planning, 14, 167-172.
Approaches to program evaluation which could be used for meta-evaluation
My suggestion is to try using program evaluation techniques to evaluate program evaluations and program evaluation approaches themselves. The following references cover different types of program theory, in which intended outcomes are linked to activities and intermediate outcomes; the same could be done for evaluations as well as for programs.
Bennett, C. F. (1979). Analyzing Impacts of Extension Programs. Washington, DC: U.S. Department of Agriculture. Cited in Patton, M. Q. (1986). Utilization-Focused Evaluation. Beverly Hills, CA: Sage Publications.
Bennett, C. F. (1982). Reflective Appraisal of Programs. Ithaca, NY: Cornell University. Cited in Patton, M. Q. (1986). Utilization-Focused Evaluation. Beverly Hills, CA: Sage Publications.
Chen, H.-T. (1990). Theory-driven evaluations . Newbury Park, CA: Sage Publications.
Chen, H.-T. (1994). Theory-Driven Evaluations: Need, Difficulties, and Options. Evaluation Practice, 15(1), 79-82.
Funnell, S., & Lenne, B. (1990). Clarifying Program Objectives for Program Evaluation (Program Evaluation Bulletin 1/90). New South Wales Public Service Board.
Lenne, B., & Cleland, H. (1987). Describing Program Logic (Program Evaluation Bulletin 2/87). New South Wales Public Service Board.
Professor of Public Sector Evaluation
CIRCLE (Collaboration for Interdisciplinary Research, Consulting and Learning in Evaluation)
Centre for Applied Social Research
Royal Melbourne Institute of Technology
Building 15 Room 4.09 124 Latrobe Street Melbourne VIC 3000
ph (03) 9925 2854 m 04 09 386 499
(from overseas ph 613 9925 2854 m 614 09 386 499)
Site maintained by Bob Dick; this version 1.05w;
this page last revised 20081103