PPA 670 POLICY ANALYSIS

IMPLEMENTATION, MONITORING, & EVALUATION

Implementation Analysis
Policy Monitoring
Policy Evaluation
Formative Evaluation
Summative Evaluation
Evaluation Design
 

IMPLEMENTATION ANALYSIS

The full policy process is often described by the following steps:
1) problem definition
2) alternative generation
3) analysis of alternatives
4) policy adoption
5) policy implementation
6) policy evaluation
While this course has focused on the first three steps, the last three steps are equally important. A thorough policy analysis will include some consideration of policy implementation, monitoring, and evaluation.

The policy analyst can sketch out an implementation plan for the most highly ranked alternative(s) that considers:
1) relevant actors and their interests
2) required resources and who might provide them
3) facilitators and barriers likely to be encountered
4) reasonable time frame

Implementation analysis might involve writing a "best-case" scenario and a "worst-case" scenario for each policy alternative, as well as the "most likely" outcome. The idea is to think systematically through the implementation process, identify potential problems, and develop actions that can be taken to either avert catastrophes or reduce losses.
 

POLICY MONITORING

Policy maintenance refers to keeping a policy or program going after it is adopted. Policy monitoring refers to the process of tracking how the policy is performing.

To monitor a policy, some data about the policy must be obtained. A good implementation plan will suggest ways in which ongoing data can be generated in the regular course of policy maintenance, for example: records, documents, feedback from program clients, staff diaries, peer ratings, tests, observation, and physical evidence.
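To make the idea concrete, the sketch below (in Python, with invented field names and numbers; nothing in it is prescribed by these notes) shows how routinely collected monitoring data might be structured and scanned for early warning signs:

from dataclasses import dataclass
from datetime import date
from statistics import mean

# A hypothetical weekly monitoring record assembled from routine sources
# (program records, client feedback, staff logs); the fields are illustrative.
@dataclass
class MonitoringRecord:
    week: date
    clients_served: int   # from program records
    complaints: int       # from client feedback

def weeks_needing_attention(records, max_complaints=3):
    # Flag weeks in which complaints exceeded an (assumed) tolerance level.
    return [r.week for r in records if r.complaints > max_complaints]

records = [
    MonitoringRecord(date(2024, 9, 2), clients_served=120, complaints=1),
    MonitoringRecord(date(2024, 9, 9), clients_served=95, complaints=5),
]
print(weeks_needing_attention(records))          # [datetime.date(2024, 9, 9)]
print(mean(r.clients_served for r in records))   # 107.5

Even a simple running log like this gives the evaluator something concrete to work with later.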
 

POLICY EVALUATION

Policy evaluation is the last step in the policy process. It may ask deep and wide-ranging questions, such as:
1) was the problem correctly defined? was the right problem identified?
2) were any important aspects overlooked?
3) were any important data left out of the analysis? did this influence the analysis?
4) were recommendations properly implemented?
5) is the policy having the desired effect?
6) are there any needs for modification, change, or re-design? what should be done differently next time?
When policies fail to have the intended effect, the cause is usually one of two types of failure: theory failure or implementation failure.

A theory failure occurs when the policy was implemented as intended but failed to have the desired effect. For example, a school adopts uniforms to curb violence, and students actually wear them, but violence remains at the same level. The policy was implemented as designed, yet the expected change did not occur: the theory that violence stems from style of dress is wrong. There must be some other cause of school violence, which would require a different policy to address.

An implementation failure occurs when the policy is not implemented as intended. For example, the school may adopt a uniform policy, but the majority of students ignore it, and the level of violence does not change. We still do not know whether school uniforms would lower the level of violence; we only know that the uniform policy was never actually carried out.
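The distinction between the two failure types can be expressed as a short decision rule. The sketch below is illustrative Python; the function name and inputs are invented for exposition, and a real evaluation would rest on measured data rather than two yes/no judgments:

def diagnose_failure(implemented_as_intended, desired_effect_observed):
    # Classify a policy outcome using the distinction drawn above.
    if desired_effect_observed:
        return "no failure: the desired effect occurred"
    if implemented_as_intended:
        # Faithfully implemented, yet no effect: the causal theory is wrong.
        return "theory failure"
    # Never carried out as designed, so the theory remains untested.
    return "implementation failure"

# The school uniform example: the policy was adopted but widely ignored.
print(diagnose_failure(implemented_as_intended=False,
                       desired_effect_observed=False))   # implementation failure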
 

FORMATIVE EVALUATION

If adequate monitoring processes are in effect, it should be fairly easy to detect whether a policy has been implemented as intended. This type of policy monitoring has been referred to as formative evaluation. Formative evaluation documents and analyzes how a policy is implemented, with the objective of making improvements as the implementation process unfolds.
 

SUMMATIVE EVALUATION

Summative evaluation is conducted after a program has been fully implemented. It looks at whether the program is meeting its objectives, and why or why not.
 
Evaluations may be unpopular or difficult for many reasons:
1) the program is controversial;
2) there are strong political interests in seeing it succeed or fail;
3) there are difficulties in measuring program accomplishments;
4) those involved may be uncooperative;
5) program effects may be influenced by outside developments.
An evaluation is most likely to be helpful when the answer to each of the following questions is "yes":
1) will the evaluation be accepted by politicians, administrators, and/or participants?
2) has an evaluator been involved from the beginning?
3) are there measurable objectives?
4) are data available?
5) are multiple evaluation methods plausible?
6) has the program remained stable over time?
7) can program staff become involved in the evaluation?
8) will the findings be made widely available?

EVALUATION DESIGN

Policy evaluation applies accepted social science research methods to public programs. The same research designs used in laboratory experiments are not always practicable in the field, but the same principles can guide the planning and execution of policy evaluation.

Before-and-After Evaluation: a policy is evaluated for the changes it has produced since its implementation; the situation is controlled to exclude other possible influences on the outcome.

With-and-Without Evaluation: a policy is evaluated for the changes it has produced in the target population, compared to a similar population without the policy.

After-Only Evaluation: a policy is evaluated for the extent to which its goals were achieved, compared to the state of affairs before it was implemented; the situation is not controlled to exclude other possible influences on the outcome.

Time-Series Evaluation: the changes produced by the policy are tracked over a long time period.
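A small numerical sketch may help fix the differences among these designs. The figures below are invented, and the computations are deliberately naive (simple means rather than formal statistical models), so this is an illustration of the logic, not a recipe:

from statistics import mean

# Hypothetical monthly outcome counts; all numbers are invented.
target_before    = [50, 52, 48, 51]   # target population, before the policy
target_after     = [40, 42, 39, 41]   # target population, after the policy
comparison_after = [49, 50, 51, 48]   # similar population without the policy

# Before-and-after: change in the target population since implementation.
print(mean(target_after) - mean(target_before))      # -9.75

# With-and-without: target population versus a population without the policy.
print(mean(target_after) - mean(comparison_after))   # -9.0

# After-only: the post-implementation level judged against the earlier state
# of affairs, with no control for other influences on the outcome.
print(mean(target_after))                            # 40.5

# Time-series: the outcome tracked over a long period spanning implementation,
# so the post-policy path can be compared with the pre-policy trend.
series = target_before + target_after
pre_trend = mean(target_before)
print([round(y - pre_trend, 2) for y in target_after])   # deviations from trend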