Improving disaster response evaluations: Supporting advances in disaster risk management through the enhancement of response evaluation usefulness
Summary, in English
The first part of this research characterises current evaluation practice, both in the scientific literature and in Dutch practice, based on a scoping study, document and content analyses, and expert judgements. The findings highlight that despite a recent increase in research attention, few studies focus on disaster management exercise evaluation. It is unclear whether current evaluations achieve their purpose, or how they contribute to disaster preparedness. Both theory and practice tend to view and present evaluations in isolation. This limited focus creates a fragmented field that lacks coherence and depth. Furthermore, most evaluation documentation fails to justify or discuss the rationale underlying the selected methods, or their link to the overall purpose or context of the exercise. The process of collecting and analysing contextual, evidence-based data, and using it to reach conclusions and make recommendations, lacks methodological transparency and rigour. Consequently, professionals lack reliable guidance when designing evaluations.
Therefore, the second part of this research aimed to gain insight into what makes evaluations useful, and to suggest improvements. In particular, it highlights the value of the methodology used to record and present evaluation outcomes to end users. The notion of an ‘evaluation description’ is introduced to support the identification of four components that are assumed to influence the usefulness of an evaluation: its purpose, object description, analysis and conclusion. Survey experiments identified that how these elements – notably, the analysis and/or conclusions – are documented significantly influences the usefulness of the product. Furthermore, different components are more useful depending on the purpose of the report (for learning or accountability). Crisis management professionals expect the analysis to go beyond the object of the evaluation and to address the broader context. They expect a rigorous evaluation to provide them with evidence-based judgements that deliver actionable conclusions and support future learning.
Overall, this research shows that the design and execution of evaluations should provide systematic, rigorous, evidence-based and actionable outcomes. It suggests some ways to manage both the process and the products of an evaluation to improve its usefulness. Finally, it underlines that it is not the evaluation itself that leads to improvement, but its use. Evaluation should, therefore, be seen as a means to an end.
Division of Risk Management and Societal Safety, Faculty of Engineering, Lund University
- Other Civil Engineering
- Other Social Sciences not elsewhere specified
- disaster risk management (DRM)
- The Netherlands
- Henrik Tehler
- Nils Rosmuller
- ISBN: 978-91-7895-922-8
- ISBN: 978-91-7895-923-5
3 September 2021
Lecture hall V:B, building V, John Ericssons väg 1, Faculty of Engineering LTH, Lund University, Lund. Zoom: https://lu-se.zoom.us/j/65999762322?pwd=dnJ0Q1pOdlVWdVk2MndEZjg1akpyUT09
- Björn Ivar Kruke (Assoc. Prof.)