Evidence based impact evaluations

HR&S works with evidence-based impact and evidence-based impact evaluations. Evidence-based evaluations involve developing and testing hypotheses. The purpose of an impact evaluation is to determine whether a programme has an impact and to quantify how large that impact is.
TestE presents the HR&S strategy for seeking and testing evidence on the activities that are expected to lead to the outcomes identified as preconditions for achieving the expected impact. We measure outcomes and impact against a baseline and benefit from randomized groups. TestE data is collected to measure whether the expected outcomes were or were not achieved. Monitoring targets the collection of information for testing a claim, by investigating whether evidence exists to support or reject it. A claim is simply a statement about the role a project may have played in bringing about change.
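Measuring outcomes against a baseline with a randomized comparison group can be sketched as a difference-in-differences calculation. The figures below are invented for illustration; the source does not specify this exact estimator, so treat it as one common way to combine baseline and endline measurements.

```python
# Illustrative difference-in-differences calculation: outcome measured at
# baseline and endline for a programme group and a randomized comparison group.

def diff_in_diff(treat_base, treat_end, comp_base, comp_end):
    """Estimate impact as the change in the programme group minus the
    change in the comparison group."""
    return (treat_end - treat_base) - (comp_end - comp_base)

# Hypothetical mean outcome scores; all numbers are invented for illustration.
impact = diff_in_diff(treat_base=100, treat_end=130,
                      comp_base=100, comp_end=110)
print(impact)  # 20: the change beyond what the comparison group experienced
```

Subtracting the comparison group's change removes trends that would have happened anyway, which is why the baseline measurement matters.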
Our strategy is participatory and involves all stakeholders at every step. The full team is involved in formulating the case (the contribution claim), determining what evidence to look for, testing the evidence and drawing conclusions. Each programme develops its own evaluation.
We choose a method for the evaluation of the monitoring data. The evaluation method deals with collecting, organizing and analyzing data, and with drawing inferences from the sample to the whole population. The results and inferences are precise only if proper evaluation tests are used. This requires a proper study design, an appropriate selection of the study sample and the choice of a suitable evaluation test. We identify the study sample and the control group prior to implementing the programme. We reflect on sample size and external validity to meet the requirements of the selected statistical method, and benefit from a simple randomized evaluation whenever possible. Please find more details in the appendices.
Because data is captured continuously, evaluation can be used as a management tool as well as for analysis, and data quality improves (questionable data can be followed up promptly).

ROPE data

Monitoring data is uploaded in real time to a designated IT platform accessible through computers, tablets and cell phones. We offer ROPEdata to our partners, thus enabling efficient communication about tools, strategies and results.

The ambition is to learn

The main objective of TestE is to learn: to capture lessons and to support the related programme redesign and improvement. Our evaluations are real-time. If those responsible for the day-to-day operations of a programme have critical questions, evaluations can help find answers. By doing so, we also gather data for evaluation sessions. Our programmes are built on equal partnership, and our evaluations include all stakeholders. Reports are translated into the local language, or shared orally in the local language, to ensure knowledge sharing and review by all.
Programme evaluation can be associated with positive or negative sentiments, depending on whether it is motivated by a desire to learn or a demand for accountability. We also report on our programmes’ status to external stakeholders, but the motivation for our evaluation planning is the desire to learn.

When to conduct an evaluation

The value added by rigorously evaluating a programme or policy changes depending on when in the programme or policy lifecycle the evaluation is conducted. The evaluation should not come too soon, when the programme is still taking shape and challenges are being ironed out. Nor should it come too late, after the money has been allocated and the programme rolled out, so that there is no longer space for a control group.

HR&S evaluations are real-time. The planning is done as the programme is designed, certain data can be collected prior to implementation, and we then benefit from all opportunities while implementing.

Time and costs

Collecting original survey data is often the most expensive part of an evaluation. The length of time required to measure the impact of an intervention largely depends on the outcomes of interest. Note also that the time and expense of conducting, for example, a randomized evaluation should be balanced against the value of the evidence produced and the long-term cost of continuing to implement an intervention without understanding its impact. The strongest cases use multiple forms of evidence, some addressing the weaknesses of others.
HR&S does real-time evaluation planning and benefits from the opportunity to collect data in parallel with implementing programmes. Thus, when we are on-site, we can invite focus-group discussions and conduct interviews. For this reason, the evaluation design and the monitoring tools shall be developed at the time of designing the programme. It can also be cost-efficient to conduct evaluations that measure outcomes using existing administrative data instead of collecting new survey data.
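One way to see how data-collection cost scales is a standard sample-size calculation: the smaller the effect to be detected, the more survey respondents are needed per group. The sketch below uses the common normal-approximation formula; the effect size and significance/power levels are conventional defaults, not figures from this document.

```python
import math
from statistics import NormalDist

def sample_size_per_arm(effect, sd, alpha=0.05, power=0.8):
    """Approximate number of units needed per group to detect a mean
    difference of `effect` with a two-sided test, using the standard
    normal approximation."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for significance
    z_beta = z.inv_cdf(power)           # critical value for power
    n = 2 * ((z_alpha + z_beta) * sd / effect) ** 2
    return math.ceil(n)

# Hypothetical: detecting a 0.5 standard-deviation effect at the
# conventional 5% significance level with 80% power.
print(sample_size_per_arm(effect=0.5, sd=1.0))  # 63 per group
```

Halving the detectable effect roughly quadruples the required sample, which is why the value of the evidence must be weighed against survey costs before committing to a design.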
It is generally agreed that the nonprofit world rewards good stories, charismatic leaders, and strong performance on raising money (all of which are relatively easy to assess) rather than rewarding positive impact on the world (which is much harder to assess).