Research report: Experiences of the Member States performing evaluations in projects and activities aimed at crime prevention


The evaluation of crime prevention interventions involves the systematic collection and analysis of information about the changes that the intervention's activities produce in the different components of a crime problem. The principal objective of analyzing this information is to determine to what extent the goals were achieved, and at what cost. Different groups benefit from the results of evaluations, including those who design and implement the intervention, managers, stakeholders, sponsors, policy advisors, and target groups. The information produced by the evaluation is useful for guiding decisions about how to redesign the intervention, how to orient the future allocation of resources, and how to advise on policy directions. Whether or not to use the results of the evaluation is ultimately a management decision, but professionals and evaluators are reinforced when they see that the effort they put into evaluating interventions is useful for introducing improvements.
Evaluation entails important methodological considerations, and it must be planned at the same time as the intervention in order to ensure the intervention's evaluability (i.e., its capacity to be evaluated in a reliable and credible manner). Misalignments between the crime problem, the objectives, and the activities that the intervention comprises result in low evaluability and might seriously compromise the quality of the evaluation. In this sense, evaluation is a tool that contributes to the design of the intervention.

Key findings

The results showed that there is still considerable work to do in order to achieve fully evidence-based crime prevention practice. In many cases evaluability might have been compromised because the Needs Assessment was generally unstructured and was carried out by professionals working in the area without the methodological support of crime prevention experts. More concerning, a portion of the participants reported that a Needs Assessment did not occur at all and that the decision to implement the intervention followed managerial and political pressures. On the basis of these findings, we asked: To what extent are the crime problems that the interventions are supposed to prevent known to those responsible for designing and implementing them? What objectives can be proposed to prevent a crime problem that has not been properly studied?
A second finding of this study indicates that the great majority of interventions were either tailored to the specific crime problem and circumstances or used available interventions with major adaptations. This suggests that crime prevention practice in the EU might not be taking advantage of validated, scientifically demonstrated work. Furthermore, more than 50% of the participants reported that the interventions they implemented were not grounded in theoretical or empirical knowledge, and more than 40% reported that the crime prevention mechanisms underlying the intervention had not been identified a priori. Under these circumstances, the Logic Models might run the risk of not being logical at all, and once again evaluability might have been compromised.
The intervention outcomes were formally evaluated in only 44% of the cases, while 36% had been evaluated informally (i.e., by staff members or other persons, but not systematically measured or registered in an official report) and 10% had not been evaluated at all. This is bad news for crime prevention managers. Why would managers and policy makers want to spend resources on applying an intervention for which there is no evidence of efficacy? Are the crime rates in the EU countries the result of our inefficient interventions and strategies?
The good news is that, in general, the experience of doing evaluation was seen as positive and necessary both by participants whose interventions had been evaluated and by those whose interventions had not. They pointed out three reasons why evaluation should be done: (1) it provides feedback that can be used to improve the interventions and avoid pitfalls, (2) it is a driving force for further developing the interventions, and (3) it motivates the persons who implement them. However, it was also suggested that evaluations might be considered a bureaucratic burden, and when resources are scarce they are not seen as a priority. In those cases in which the results of the evaluations are not used to improve the interventions, the persons on the teams will likely develop negative attitudes towards doing evaluations.
Among the interventions that had been formally evaluated, almost 30% of the cases indicated that the outcome evaluation involved external evaluators, but these were engaged at late stages of the implementation period. This raises the concern that evaluators who had not been involved from the beginning might have found shortcomings in the planning of the intervention that hindered a proper evaluation.
The participants indicated advantages of doing evaluations internally. In their opinion, the persons in charge of an internal evaluation know the intervention better, and improvements can take place faster because the results become available sooner than when the evaluation is done externally. The lack of expertise within the organizations was the strongest motive for commissioning the evaluation to external experts.
Regarding the scientific design employed in the evaluations of those projects that had been formally evaluated, less than 20% used experimental or quasi-experimental designs. The largest share used pre-post designs without a control group. Taking into account that the majority of the interventions had been tailored to address the assessed needs or had introduced major changes to previously developed interventions, we would expect extensive testing and validation involving experimental designs before applying them to a target population. This does not seem to be the case. Furthermore, only 50% of the participants indicated that the formal evaluations included the measurement of possible unintentional effects. In sum, in many cases we do not know whether the interventions are useful, whether they are harmless, or whether they have unintentional effects that can produce more problems than the ones they try to solve.
Several factors were highlighted as having a negative impact on the outcome evaluation. Among them were the lack of involvement of all the parties (e.g., stakeholders, persons in the target group), the large amount of time required to plan and carry out the evaluation, difficulties in getting access to necessary data, problems related to data protection, and the lack of expertise of the people responsible for the evaluation. In addition, the participants pointed out the difficulty of identifying which data are necessary for doing the evaluation properly and how the different indicators should be measured, which reflects a basic lack of methodological knowledge.
Informal evaluations were carried out by persons involved in the design and implementation of the interventions. However, the competence of these professionals to properly plan an evaluation is not beyond question. If we insist on not using expert evaluators, it is necessary to make an effort to educate these professionals in the methodology of evaluation and to create a culture of evaluation inside the organizations, so that we can increase the number of interventions being formally evaluated.
The only indicator associated with an increased likelihood of evaluation was whether the intervention had a budget or allocated funds, which is most likely when evaluation is a requirement for receiving funding for the intervention. Factors such as the type of institution responsible for implementing the intervention or the type of intervention itself did not have an impact on the practice of evaluation. This suggests that any potential solution for encouraging the evaluation of interventions must be applied across all the institutions and organizations responsible for crime prevention practice in the EU, without exception.

Recommendations

A small percentage of our respondents reported following good practices when doing evaluations. In addition, the majority showed a positive attitude towards doing evaluations. However, we identified many shortcomings that need to be addressed in order to move crime prevention towards best practice. These shortcomings are directly related to gaps in four major areas.
First, there is a lack of knowledge of evaluation methodology among those responsible for doing it, especially when evaluations are produced internally. Managers should decide between appointing external experts who can produce high-quality evaluations each time an intervention needs to be evaluated and educating their own professionals. Either way, it is necessary to guarantee the competence of those involved in the process of evaluating crime prevention interventions. The education should combine academic literacy with practice in crime prevention and in evaluation. It is also necessary that managers, stakeholders, and policy-makers have sufficient knowledge to understand what evaluators do, in order to be able to communicate with them and to interpret the evaluation results. Encouraging a culture of evaluation among institutions and organizations would help to increase evidence-based crime prevention practice, increase the effectiveness and efficiency of our work, and make our service more valuable for individuals, communities, and governments.
Second, evaluation requires human resources that are generally scarce. Although planning the evaluation can be done by one person or a small group of experts, implementing the evaluation procedures, especially the data collection, requires manpower. The time for carrying out the evaluation tasks should be calculated separately from the time spent implementing the intervention. The evaluation plan should justify the personnel required to execute each evaluation task in each of the follow-ups or evaluation periods, and managers should ensure the availability of sufficient resources to accomplish the plan. The quality of the evaluation depends on it.
Third, the participants pointed out a general lack of financial resources to perform evaluations. The budget for the evaluation should be calculated separately from the budget for the intervention. Managers should ensure that the evaluation can be financed before they decide to go ahead with the implementation of the intervention. When interventions and evaluations receive financial support from different funding sources, it is important to secure the evaluation funding as soon as possible, preferably before the intervention starts. Funds allocated to the evaluation should not be diverted to the intervention.
Fourth, the difficulty of gaining access to necessary data was an obstacle highlighted by many of the participants. The evaluation plan should provide logical arguments regarding the data required to perform the evaluation properly. Unnecessary data should not be requested or collected. However, it might be necessary, for example, to have access to detailed crime statistics, the social profiles of young offenders, or financial information about groups of people. Whenever it is justified, evaluators should have guaranteed access to such information. Moreover, the evaluation plan should include an ethical strategy for enrolling and keeping track of persons in the target group, if and for as long as this is necessary.


Research commissioned by the EUCPN secretariat
Mid Sweden University, 2020