Miri Levin Rozalis

Program evaluation is a methodical process for answering questions about specific programs, projects, policies, initiatives, and organizations. Evaluators use systematic methods, drawing on almost every known methodology for collecting, processing, organizing, and analyzing data from all available sources of information, in order to create useful knowledge for stakeholders and other interested parties.

Usually, the aim of evaluation is to improve a process, but sometimes there are hidden agendas, such as control or the assertion of power.

There are many approaches—and even ideologies—among evaluators and evaluation customers as to what is the best way to conduct an evaluation. Two major epistemological or even ontological approaches exist: the Human Agency and the System Agency, as shown below.[1]

Criteria | System Agency | Human Agency
Function | Supervision, control, command | Learning
Goal | Standardization | Recognition and examination of otherness, diversity, difference
Frame | Systemic, structural | Diagnostic: diagnosing differences within and between
Target | Products | Processes
Benefit | Classification | Reinforcement, development, enhancement, advancement
Outcome (educational) | Knowledge | Skills, learning processes
Research approach | Analytical | Mainly holistic
Methodology | Experimental | Responsive
Focus | External to the evaluee | Internal to the evaluee


These are two completely different perceptions of the nature of the world and of the role of program evaluation. The first, the System Agency, sees permanence in the world and distributions that can be predicted, and tries to understand them in order to gain better control of the system. The second sees the world as a mosaic of otherness that is relative and context-dependent, where it is difficult to infer anything about one case from what is observed in another; it deals with complexities that are ever-changing and impossible to predict. Some evaluators will swear by one approach, and some by the other. I myself believe that the choice depends on the question we need to evaluate, though the orthodox of either camp might claim that the other camp's questions are not the right questions at all.

We can add to this the conflict among the RCT (randomized controlled trials) camp, the qualitative camp, and the mixed-methods camp, which at times can be fierce.

My claim is that all these paradigmatic and methodological disagreements among the practitioners of program evaluation are futile and lead nowhere. I believe that the role of evaluation as a profession is to convert information into knowledge, and that this can be done using any methodology that does not misrepresent reality.

The emphasis in most evaluation courses and training is on research methods, which we cannot do without. But all the research methods, all the ontological and epistemological approaches, all the ideological approaches are part of a whole that gives them meaning: the mission of the profession, which is to create new knowledge. What is happening at present is that parts of that whole are fighting among themselves for supremacy. The moment it becomes clear that they are part of something larger, that an analytical method is a means to an end, that an evaluation approach serves a process greater than itself, they will no longer be the be-all and end-all, but schools of thought within a unifying professional entity.

This does not change anything about what evaluators need to know; it changes only the emphasis. It means that, apart from the basics, evaluators must be fully conversant with what is happening in their professional world (evaluation theories, fields of knowledge, the various schools of thought), so that they know when to ask for help: they can recognize that a given question calls for an evaluator who knows A, B, or C, just as a physician knows when to consult a cardiologist or an orthopedist. In addition, they need to be educated in multidimensional critical thinking. They must be able to constantly ask questions about everything, even about their own work, and to take nothing for granted. Then their need for viewpoints other than their own, and for collaboration with schools of thought and work methods different from their own, will grow.


Having said that, it is important for me to emphasize an integral aspect of evaluation: it is a purpose-driven act that has a direct influence on people's lives. The objects of research in evaluation are people. Always. Even when the evaluation's objective is to examine whether a work approach is successful, the research objects will be people and the program in question will be for people. The inevitable conclusion is that evaluators must take into account where their evaluation is leading. We, as evaluators, have to address the implications of the evaluation and the effect it might have in the broadest possible sense, far beyond the question of the nature and quality of the program or its expansion or termination.

It is no less important to remember that the most significant criterion for examining the implications and effects of an evaluation is not our personal worldview (what we believe or do not believe), nor that of the people involved in the intervention. The significant criterion is the effect of the evaluation on the lives and worldviews of those who will be affected by those implications and effects: the evaluees and their environment.

And this is a very great responsibility. As evaluators, we bear this social responsibility: to look broadly, to look forward, to be responsible for our actions, and as far as possible, to be sure that the purpose is indeed for the benefit of those who will ultimately be affected by our work.

This is the compass that should lead all the professional decisions we make while conducting an evaluation.

To conclude, our aim is to create knowledge: knowledge that is meant to guide actions in a specific program, project, organization, or intervention, actions that may make a difference in people's lives. And this we must not forget. It is our professional responsibility to make sure that this is ultimately for the benefit of those who will be affected by it.


[1] Levin-Rozalis, M. (2014). Let's Talk Program Evaluation in Theory and Practice. Monterey, CA: Samuel Wactman's Sons Inc.