HR&S PROGRAMME MANAGEMENT
SUSTAINABILITY, LOCAL OWNERSHIP & EXIT STRATEGY
A supporter may want an explanation of, for example, how HR&S manages aspects such as sustainability, local ownership and exit strategy.
Sustainability requires a capable and motivated team, institutional capacity and a sustainable economy. Local ownership is crucial for the success of an intervention, and HR&S benefits from its user- and needs-driven practical strategy. The exit strategy benefits from the HR&S definition of expected impact.
Capable team with motivation – Agency for change
The programme introduces change, thus the key partners and staff members must be persons who drive change. HR&S claims that agency for change has to be based on a volunteer approach; it cannot be forced, and that is why finding the right agents is key.
Stakeholder analysis is the process of assessing a decision’s impact on relevant parties. The stakeholders are organised into a grid according to their interest and their power. Power mapping provides a theoretical framework and a set of tools to tap the power needed to make things happen. Power mapping is helpful in coalition building: with whom should we develop a relationship?
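As one illustration, the interest/power grid can be sketched as a small classification, here in Python. The stakeholder names, the 0–10 scores and the quadrant labels are hypothetical examples, not HR&S data; the four quadrants follow the common power/interest matrix convention:

```python
# A minimal sketch of a stakeholder power/interest grid.
# Names, scores and thresholds are hypothetical examples.

def classify(power, interest):
    """Place a stakeholder in one of the four classic grid quadrants."""
    if power >= 5 and interest >= 5:
        return "manage closely"
    if power >= 5:
        return "keep satisfied"
    if interest >= 5:
        return "keep informed"
    return "monitor"

stakeholders = {
    "Local authority": (8, 3),   # (power, interest), each on a 0-10 scale
    "Target partner": (4, 9),
    "General public": (2, 2),
}

grid = {name: classify(p, i) for name, (p, i) in stakeholders.items()}
for name, quadrant in grid.items():
    print(f"{name}: {quadrant}")
```

The quadrant then suggests with whom to build a coalition relationship first: the "manage closely" group.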
Public relations (PR) is the practice of deliberately managing the release and spread of information between our programme and the public. We aim to convince an audience, inside and outside our usual sphere of influence, to promote our idea, purchase our product, support our position, or recognize our accomplishments. Social media can augment PR efforts and serve as an amplifier.
Team-building & Motivation
- Team member ability
- Team composition
- Positive team atmosphere
- Team coordination
- Team leadership
- Reward teams as teams (not as individuals)
- Team training
- Ensure that teams feel accountable for the success of the whole company
- Ensure that our teams have the necessary authority to succeed
- Ensure a process for problem-solving.
The institutional capacity concerns the capacity of the partner institutions to manage the programme: governance, management and operations; transparency and accountability in ethics and governance; as well as cross-cultural understanding.
The programme management is responsible for the local running of an “Accountability management programme”.
Ensure annual financial and programme reports. Share management meeting minutes. Arrange seminars, workshops and other events for members as well as for external stakeholders. All stakeholders are active on social media platforms. Financial reports based on bank account statements are developed quarterly by the programme management and shared.
Ensure cross-cultural understanding awareness-raising exercises.
Identify milestones for the Programme activities and outputs. Our milestones are scheduling tools and define certain points in our programme schedules. These points note the start and finish of a sequence of activities, and mark the completion of a major phase of work.
In an equal partnership set-up, the partners share input, responsibility and benefits equally.
If social problems are to be tackled successfully, institutions seeking to solve them need sustainable revenues in order to innovate and grow. The programme itself must therefore generate funds to cover its running costs, which is why each programme must have a sustainable economy. A programme may require start-up capital, but it shall never depend on external funding for sustainability.
Social Enterprise – Business model
Our business model presents our plan for generating income to cover the costs of running the programme and thus making it sustainable. It identifies the products and services that we will sell, the target market we have identified, and the expenses we anticipate. We think through a set of overarching questions, our business model, and outline them before we dive into the details of our business plan research. Our Business Model Canvas is a strategic visual chart with elements describing value proposition, infrastructure, customers, and finances. It covers: 1. Customer segments, 2. Value propositions, 3. Distribution channels, 4. Customer relationships, 5. Revenue model, 6. Key activities, 7. Key resources, 8. Key partnerships, 9. Cost structure.
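The nine canvas blocks can be treated as a simple completeness checklist before the detailed business plan work starts. A minimal sketch in Python; every entry below is a hypothetical placeholder, not HR&S data:

```python
# A minimal sketch of the nine Business Model Canvas blocks as a checklist.
# All entries are hypothetical placeholders.

canvas = {
    "Customer segments": ["local enterprises"],
    "Value propositions": ["locally adapted training"],
    "Distribution channels": ["workshops"],
    "Customer relationships": ["equal partnership"],
    "Revenue model": ["service fees"],
    "Key activities": ["training", "evaluation"],
    "Key resources": ["local staff"],
    "Key partnerships": ["strategic partners"],
    "Cost structure": ["staff salaries", "venue rental"],
}

# Flag any block still left empty before detailing the business plan.
missing = [block for block, entries in canvas.items() if not entries]
print("Blocks still to fill in:", missing or "none")
```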
Although researchers, innovators, and entrepreneurs in lower-income countries present amazing ideas, their solutions are often unrecognised and unsupported. Consequently, locally developed and locally adapted solutions are not implemented, and local enterprises that would address the needs of the local people are not started. As a consequence, these societies lack access to locally relevant scientific findings and innovations, products, services, and employment opportunities that would otherwise have improved people’s lives. HR&S claims that people in lower-income countries will work their way out of poverty if they are given the opportunities.
According to HR&S, a needs driven programme is a programme that meets a need defined by the person or persons who are expected to benefit from the change, who according to the HR&S approach also will be the implementers of the activities in actual practice and who will make the programme sustainable long-term. Thus, a needs driven programme builds on the ambitions of the implementer and is defined as a set of activities identified, designed, implemented and maintained by one or more Target partners. If the Programme is genuinely needs driven, then the Target partner Customer will be willing to pay for the products and services delivered, and the Programme will eventually have a sustainable economy.
We define Expected Impact as a programme that has become sustainable over time and no longer requires backup from an external stakeholder. The Expected Impact is measured at the time of closing the programme. We may in addition aim to measure whether our impact is still sustainable some period after we have closed the programme, maybe one, two, five and even ten years after. In order to be able to expect impact, we benefit from the HR&S evaluation planning practical strategy ROPE.
Real-time Outcome Planning & Evaluation (ROPE)
Real-time Outcome Planning & Evaluation (ROPE) is a practical strategy that enables local developers to implement their solutions in collaboration with international partners. We compile and address the necessary conditions required to bring about a given impact. A new ROPE programme starts with setting a goal and developing indicators to measure results. Then we develop an implementation plan; we secure finances, staff and infrastructure; and we ensure knowledge sharing, accounting procedures and cross-cultural understanding. Thereafter we make an activity plan and assign people and institutions: who will do what, how and when. We then implement, and afterwards measure the results and analyse them. Thereafter we address what did not go well, until we reach the goal we set up in the beginning.
The programme idea shall take into account what has already been implemented in relation to the programme idea, and by whom; what can be strengthened, and how; and who the potential Strategic partners are. There shall also be a justification for taking an initiative in the context: Do we have the institutional capacity? Do we see an opportunity for a sustainable economy? What would be the honest motivation for the Programme management partners to take this initiative on? We present the context and the challenge addressed. The programme idea is often a narrative of the replies to the questions: i) “What do you want to do?”, ii) “How do you want to do it?”, iii) “Why did you not do it already?”, iv) “Which are the country and local authority regulations?”, v) “Which are the surrounding policies?”, vi) “How do you plan to reach a sustainable economy / what is the business idea?”.
This section explains what the Target partner has identified as the solution to her situation: what she wants to do and achieve right now in her life. What are the goals of the Target partners? It is the answer to the question “What do you want (to do)?”
Outcome challenges: Here we discuss in general the challenges that the Target partner faces. This is a compilation of the reasons why the Target partners are not doing what they want to do to implement their ambitions. It is the answer to the question “Why did you not do it already?”.
Output & Outcome: The implementation plan of the Target partner presents what she wants to do in actual practice. What are the actual activities and steps to achieve her ambitions? What needs to be done in actual practice to make it happen? It is the answer to the question “How do you want to do it?”
Progress markers & Sources of Evidence
Then we identify a Progress marker for each outcome. Progress markers are measurable indicators of progress or non-progress. We compile the baseline, thus the situation prior to implementing our programme. Thereafter we identify sources of evidence for each progress marker and outcome and for the expected impact. Then we identify the statistical method chosen to measure progress together with the objects for collecting evidence and controls.
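A progress marker is judged against the compiled baseline. A minimal sketch of that comparison in Python; the marker names and values are hypothetical examples:

```python
# A minimal sketch of judging progress markers against a baseline.
# Marker names and values are hypothetical.

baseline = {"members trained": 0, "monthly revenue": 100}
current = {"members trained": 25, "monthly revenue": 90}

def progress(marker):
    """Positive change over the baseline counts as progress."""
    change = current[marker] - baseline[marker]
    return marker, change, "progress" if change > 0 else "non-progress"

for marker in baseline:
    print(progress(marker))
```

Each row would then be backed by its source of evidence, e.g. training attendance lists or bank statements.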
Activity plan & Input required
It is now time to develop a concrete activity plan which defines who is going to do what, when and how. The activity plan will identify the needs for input, including staff, skills, training, work hours, network, and funds.
Strategy for Change
The last step in the ROPE Design is to develop a Strategy for Change (SfC). A Strategy for Change is essentially a comprehensive description and illustration of how and why a desired change is expected to happen in a particular context.
Implement the Activity plan
The implementation of the activity plan includes, for example, evaluation planning workshops, seeking start-up funding, and collecting evidence.
Expected outcome and impact: compile evidence for each expected outcome and each expected impact. Unexpected outcome and impact: identify and compile unexpected outcomes; they can be positive or negative. Unexpected output and outcome challenges: identify and compile unexpected challenges.
Testing the strength of Evidence for Impact
The evaluations are made in real time, and their purpose is learning lessons.
Evaluation planning & Conclusions
The lessons learned are compiled and constitute the platform for the evaluation planning. The programme strategy is adjusted in relation to the lessons learned. When the expected impact is reached, the programme can be concluded and the previous partners become Strategic partners. A new collaboration may be initiated later. We also take it as an important responsibility to share our lessons learned.
Evidence based impact (TestE)
We design surveys and measure progress in relation to outcome and income according to the HR&S impact assessment practical strategy Testing the strength of scientific evidence for impact (TestE).
Micro data survey design
Quantitative analysis – Statistical method
Basic statistics: The basic assumption to be made is that a set of data, obtained under the same conditions, has a normal or Gaussian distribution. The primary parameters used are the mean (or average) and the standard deviation, and the main tools are the F-test for precision, t-tests for bias, linear correlation and regression, and analysis of variance (ANOVA).
Simple comparison: With randomized evaluations, the simplest method is to measure the average outcome of the targeted group and compare it to the average outcome of the control group. The difference represents the programme’s impact. To determine whether this impact is statistically significant, one can test the equality of means, using a simple t-test. One of the many benefits of randomized evaluations is that the impact can be measured without advanced statistical techniques.
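The simple comparison above can be sketched with only the standard library, computing the pooled-variance two-sample t statistic by hand (the t statistic is the estimated impact divided by its standard error). The outcome values are hypothetical examples:

```python
# A minimal sketch of comparing a targeted group with a control group
# and testing equality of means with a pooled two-sample t statistic.
# The outcome values are hypothetical.
import math
import statistics

treated = [12, 15, 14, 13, 16, 15]  # outcomes in the targeted group
control = [10, 11, 12, 10, 13, 11]  # outcomes in the control group

impact = statistics.mean(treated) - statistics.mean(control)

# Pooled-variance t statistic for testing equality of means:
n1, n2 = len(treated), len(control)
s1, s2 = statistics.variance(treated), statistics.variance(control)
pooled = ((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)
t = impact / math.sqrt(pooled * (1 / n1 + 1 / n2))
print(f"estimated impact = {impact:.2f}, t = {t:.2f}")
```

The t value would then be compared against the critical value for n1 + n2 - 2 degrees of freedom to judge statistical significance.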
Propagation of errors: The final result of a Programme is calculated from several activities (outputs) performed during the implementation, and the total error of a programme is a combination of the sub-errors made in the various steps. The bias and precision of the whole Programme are usually the relevant parameters.
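Assuming the step errors are independent, the standard propagation rule is that they combine in quadrature: the total standard error is the square root of the sum of squared step errors. A minimal sketch with hypothetical values:

```python
# A minimal sketch of propagation of independent errors: standard errors
# of individual steps combine in quadrature. Values are hypothetical.
import math

step_errors = [0.5, 0.3, 0.4]  # standard error of each programme step
total_error = math.sqrt(sum(e ** 2 for e in step_errors))
print(f"total error = {total_error:.3f}")
```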
Qualitative assessment – Probability methods
With qualitative assessments, and contrary to statistical methods, the quality of the evidence is not judged by the sample size (the number of observations) but rather by the probability of observing certain pieces of evidence. Qualitative impact evaluation includes assessing the contribution made by a particular intervention in achieving one or more outcomes, commonly referred to as a ‘contribution claim’. TestE benefits from process tracing to assess our Strategy for Change and from contribution tracing to examine the contribution by external stakeholders. We also address team operations, cost-benefit, needs-driven design, equal partnership and unexpected effects.
i) Ensure that the person asked is representative of the group, and that we would expect the same answer if we asked someone else;
ii) reflect on whether the results of our randomized evaluations are generalizable to other contexts.
Simple randomized evaluations: impact evaluations that are scientifically sound usually compare the outcomes of those (individuals, communities, etc.) who participated in the programme against those who did not participate.