Delta Partners Management Consultants
Your trusted advisors.

What is Program Evaluation and Does it Really Matter?

Greg Tricklebank

A great deal is written about the self-identity of ‘evaluation’ as a discipline and/or a profession.  Viewpoints abound concerning the role and expectations of evaluators, ranging in scope from mere collector and presenter of factual data to virtual philosopher-king, arbitrating complex issues concerning what is of value to society. And, of course, there is much in between...

State of Evaluation Practice

The Journal of the Canadian Evaluation Society has recently published an interesting overview of evaluation practice in Canada that, among other things, suggests the following:

  • Although Canada leads the world in developing a professional designation system for evaluators, there is no commonly accepted core body of knowledge.
  • One of the core notions that distinguish evaluation from other disciplines – attribution (assessing cause and effect relationships) – is not clear to everyone, even within the profession, and there is no agreement about what is entailed in addressing attribution questions.
  • Partly as a result of these professional limitations, combined with limited resources and a lack of independence, it is difficult for evaluators to articulate convincing arguments that address difficult issues.  Rather, evaluation tends to be overly centered on performance measurement and accountability.

Quoting from Foundations of Program Evaluation: Theories of Practice, “Without its unique theories, program evaluation would be just a set of loosely conglomerated researchers with principal allegiances to diverse disciplines, seeking to apply social science methods to studying social programs. Program evaluation is more than this, more than applied methodology.”

This perspective should be of no small interest to evaluation practitioners, especially those in the federal public service, since the vast majority of them are social scientists by training. 

Is evaluation anything more than applied social science methodology?

Rethinking Evaluation

Although I suspect that most practicing evaluators pretty much know what they are doing, it seems that ‘evaluation’ itself - as a discipline, profession or whatever - is in danger of sharing the fate of much analytic philosophy – that of disappearing up its own backside.

I recently came across a blog post by evaluation guru Michael Scriven suggesting that it’s time to re-conceptualize evaluation from the ground up. Using the metaphor of a Copernican revolution (the discovery that the earth revolves around the sun and not vice versa), Scriven suggests that evaluation is in fact the logical backbone of every discipline, implying that evaluation is the sun around which all other disciplines revolve.

To quote Scriven’s reply to one of the comments:

“It is a logical fact that no discipline can lay claim to that title unless it has standards of quality for its data, hypotheses, theories, and methods. Applying those standards, or showing that they have not been met, is by definition evaluation. Hence it’s a logical fact that the key component of establishing and maintaining a discipline’s credentials is evaluation. QED”

It’s the ‘QED’ (the conventional way of signaling the completion of a proof in philosophy) that gives away the game.   There are logical facts and this may be one of them, given the meaning ascribed to the terms ‘discipline’ and ‘evaluation’.   However, the truth of the proposition is trivial with respect to the question at hand.  It does not address the issue of whether evaluation itself is a discipline or, alternatively, an activity that is legitimately distributed across numerous disciplines.  This form of argumentation could go on – and on and on – hence my reference to the backside.

Adding Some Perspective

The broad context for such a rethinking is nicely expressed in the three branches of the ‘Evaluation Theory Tree’:

  • Social Science and Statistics (questions of method)
  • Analytic Philosophy (questions of value)
  • Public Administration (questions of utility)

Historically, the social science perspective established the randomized experiment, along with the more practical quasi-experimental method, as the gold standard. However, it was also recognized that the experimental paradigm might limit the evaluation perspective to the official goals of the program. Subsequent developments emphasized the need to draw on social science knowledge and theory about the subject matter at hand to build plausible, defensible models of how programs can be expected to work, including their possible unintended consequences.

The major contribution of analytic philosophy, as represented by Scriven, is the way in which he defined the role of the evaluator in making value judgments.  According to Scriven,

“Bad is bad and good is good and it is the job of evaluators to decide which is which.”

This absolutist position is based on theories of natural justice which bridge the fact/value gap by arguing that there are natural human needs which everyone has a right to have satisfied and that these universal human needs form the basis of what ought to be valued.  The only alternative position is that all values are equally valid, in which case ‘might makes right’.

In the interests of balance, it should be pointed out that there are other approaches to questions of value, including those based on a constructivist paradigm. Rather than assuming a single objective reality, constructivism holds that individuals ‘construct’ their own perceptions of reality. The role of the evaluator is then to orchestrate a negotiation process aimed at reaching consensus on better-informed and more sophisticated constructions.

Turning to questions of utility, the principal concern is producing useful evaluations and taking steps to ensure that they actually get used. This generally entails some form of stakeholder engagement in the evaluation process and tends to place the evaluator in a consultant role. Approaches that emphasize utility range from simply consulting decision-makers to ensure that the evaluation meets their needs, to more inclusive forms that are sensitive to the needs of program and/or organization managers. Still more intensive forms of stakeholder engagement include ‘practical participatory evaluation’, in which primary users and evaluators are explicitly recognized as collaborators in the evaluation process, and ‘empowerment evaluation’, in which evaluators allow participants to shape the direction of the evaluation, suggest ideal solutions to their problems, and then take an active role in making social change.

How are evaluation and social science research different?

Admitting that I am biased, I like definitions suggesting that evaluation, as a discipline, is the application of social science methodology (and theory) to social problems and issues – broadly defined as any issue with a behavioral dimension.

As I understand social science, this is certainly broad enough to encompass the relevant philosophical issues of valuation and the practical issues of decision-making and action.  Moreover, it encompasses a variety of approaches ranging from quantitative performance measurement to varying degrees of participative evaluation and constructivist assessment.

However, the practice of evaluation, as a profession, is indeed more than the practice of social science research.  This is well expressed in the following quote from William Trochim:

 “Evaluation is a methodological area that is closely related to, but distinguishable from more traditional social research. Evaluation utilizes many of the same methodologies used in traditional social research, but because evaluation takes place within a political and organizational context, it requires group skills, management ability, political dexterity, sensitivity to multiple stakeholders and other skills that social research in general does not rely on as much.”

I do believe that questions of value belong to the realm of philosophy. The question is: should such judgments be left to the professional philosopher/evaluator, as Scriven and others suggest?

I think not.

Concluding Remarks

Evaluation is not a science, and I’m not even sure it’s a discipline.  At best, it might be considered a profession.  However, it seems this type of internal speculation has tied evaluation practitioners in a knot concerning their self-identity.


What do you think?

Is evaluation more than applied social science?

Is there an ‘identity crisis’ amongst evaluation practitioners?

Do practicing evaluators even care about this?


I look forward to hearing your thoughts in the comments below.

Comments

There is so much to say about this topic! The problem has its roots in modernism and its politically supporting ontology (ideology of what is considered real and legitimate). Positivism rules the game, but is dwindling in the face of complexity theory. Positivism is about decontextualizing, atomizing and universalizing understanding while complexity is about situating, looking at the causal scale of things and localizing what we know.

There are also two related ideas relative to your post if you’re interested: http://aparr62.blogspot.com/2011/03/it-is-mistake-to-confuse-methodology.html and page 18-19 of http://books.google.com/books?id=tcOQ0uIaLQEC&q=positivism#v=snippet&q=positivist&f=false

By Zaphod on 2011/07/23

Thanks for your comment, Zaphod, and for the links.  I guess my bottom line opinion on all this stuff is that there is an objective reality, but that it is not directly accessible by any methodology.  I will be blogging on this more extensively in the future.

By Greg Tricklebank on 2011/07/25


About this Article

Posted by Greg Tricklebank
Posted on July 22, 2011
2 Comments

Categories: lessons learned, management, performance measurement, planning & policy, program evaluation