PROGRESS & IMPACT REPORTS

Reports are continuously updated as achievements are made and lessons are learned.

ABOUT OUR IMPACT REPORTS

Evidence-based impact evaluations

HR&S works with evidence-based impact and evidence-based impact evaluations. Evidence-based evaluations involve developing and testing hypotheses. The purpose of impact evaluations is to determine whether a programme has an impact, and also to quantify how large that impact is.
TestE presents the HR&S strategy for seeking and testing evidence related to the activities that are expected to lead to the outcomes identified as preconditions for achieving the expected impact. We measure outcome and impact in relation to a baseline and benefit from randomized groups. The TestE data is collected in order to measure whether expected outcomes were or were not achieved. The monitoring targets collecting information for testing a claim, by investigating whether evidence exists to support or reject it. A claim is simply a statement about the role a project may have played in bringing about change.
Our strategy is participatory and involves all stakeholders at every step. The full team is involved with formulating the case (the contribution claim), determining what evidence to look for, testing the evidence and drawing conclusions. Each programme develops its own evaluation.
We choose a method for the evaluation of monitoring data. The evaluation method deals with the collection, organisation and analysis of data, and with drawing inferences from the samples to the whole population. The results and inferences are precise only if proper evaluation tests are used. This requires a proper design of the study, an appropriate selection of the study sample and the choice of a suitable evaluation test. We identify the study sample and the control prior to implementing the programme, we consider sample size and external validity to meet the requirements of the selected statistical method, and we benefit from simple randomized evaluation whenever possible. Please find more details in the appendices.
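The core of a simple randomized evaluation as described above can be sketched in a few lines of code. The sketch below is purely illustrative and uses made-up participant IDs and simulated scores, not HR&S data; it shows random assignment to treatment and control groups and a naive difference-in-means estimate of the programme effect.

```python
import random
import statistics

# Illustrative only: simulate a randomized evaluation with 100
# hypothetical participants. All numbers are invented assumptions.
random.seed(42)  # fixed seed so the assignment is reproducible

participants = list(range(100))
random.shuffle(participants)
treatment = set(participants[:50])  # half are randomly assigned the programme

def observed_outcome(pid):
    """Simulated outcome: a baseline score plus a boost if treated."""
    baseline = random.gauss(50, 10)          # hypothetical baseline score
    return baseline + (5 if pid in treatment else 0)

treated_scores = [observed_outcome(p) for p in sorted(treatment)]
control_scores = [observed_outcome(p)
                  for p in sorted(set(participants) - treatment)]

# Difference in group means estimates the average programme effect.
effect = statistics.mean(treated_scores) - statistics.mean(control_scores)
print(f"Estimated effect: {effect:.2f}")
```

In practice a statistical test (for example a two-sample t-test) and a power calculation for the sample size would accompany such an estimate; this sketch only shows why random assignment lets the control group stand in for what would have happened without the programme.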
By continuously capturing data, evaluation can be used as a management tool as well as for analysis, and data quality improves (questionable data can be followed up promptly).

ROPE data

Monitoring data is uploaded in real time to ROPEdata.se, a designated IT platform accessible through computers, tablets and cell phones. We offer ROPEdata to our partners, thus enabling efficient communication around tools, strategies and results.

The ambition is to learn

The main objective of TestE is to learn: to capture lessons learned and support the related programme redesign and improvement. Our evaluations are real-time. If those responsible for the day-to-day operations of a programme have critical questions, evaluations can help find answers. By doing so we are also provided with data for evaluation sessions. Our programmes are built on equal partnership and our evaluations include all stakeholders. Reports are translated into the local language, or shared orally in the local language, to ensure knowledge sharing and reviews by all.
Programme evaluation can be associated with positive or negative sentiments, depending on whether it is motivated by a desire to learn or a demand for accountability. We obviously also report on the status of our programmes to external stakeholders, but the motivation for evaluation planning is the desire to learn.

When to conduct an evaluation

The value added by rigorously evaluating a programme or policy changes depending on when in the programme or policy life cycle the evaluation is conducted. The evaluation should not come too soon, when the programme is still taking shape and challenges are being ironed out. And the evaluation should not come too late, after money has been allocated and the programme rolled out, so that there is no longer space for a control group.

The HR&S evaluations are real-time: planning is done as the programme is designed, certain data can be collected prior to implementation, and we then benefit from all opportunities while implementing.

Time and costs

Collecting original survey data is often the most expensive part of an evaluation. The length of time required to measure the impact of an intervention largely depends on the outcomes of interest. It should also be noted that the time and expense of conducting, for example, a randomized evaluation should be balanced against the value of the evidence produced and the long-term costs of continuing to implement an intervention without understanding its impact. The strongest cases use multiple forms of evidence, some addressing the weaknesses of others.
HR&S does real-time evaluation planning and benefits from the opportunity to collect data in parallel with implementing programmes. Thus, when we are on site anyway, we can invite focus-group discussions and conduct interviews. For this reason, the evaluation design and the monitoring tools should also be developed at the time of designing the programme. It can likewise be cost-efficient to conduct evaluations that measure outcomes using existing administrative data, instead of collecting survey data.
It is generally agreed that the nonprofit world rewards good stories, charismatic leaders, and strong performance on raising money (all of which are relatively easy to assess) rather than rewarding positive impact on the world (which is much harder to assess).
