Impact Assessment
Thursday, 9 May 2013

Panelists:

Jeremy Nicholls (Moderator), Chief Executive Officer, The SROI Network
Lance Potter, Director of Evaluation, New Profit
Lisa Hehenberger, Research Director, European Venture Philanthropy Association
Kevin Robbie, Executive Director, Social Ventures Australia

Summary:

In this session, Lance Potter described trends in impact assessment in the US as a pendulum. "It swings from being the silver bullet to the albatross," he said. "Right now, we're trending towards the silver bullet." The sources of the trend are multiple: after years of resisting impact assessment, people realize things aren't working; the federal government is taking an interest in funding based on program outcomes rather than program management, which is having a ripple effect throughout the nonprofit sector and will, in his opinion, continue for some time; and the US is experiencing a change in education curricula, which matters because much of the world of social program measurement is driven by education. The current wave of change in education will shape the next decade.

He pointed out a few trends:

Collective Impact: the idea that we can assess the impact of multiple programs together. It could lead to scale, but it would require high levels of integration. It is very trendy, but it is also uncertain.

Social Impact Bonds: "pay for success" models are new in the US and are just now taking hold. These models require experimental designs and randomization to prove that outcomes are happening.

Big Data: the desire to integrate data sources in the US. "It's big, it's here, and it's coming."

Lisa spoke about the European Venture Philanthropy Association's (EVPA) focus on impact measurement, which its members requested. The EVPA held workshops on methods and tools: you can use scorecards, SROI, or IRIS indicators, and people said, "That's all fine, but how do I get started?" So the EVPA brought together an expert group to identify best practice and to ask whether there are common threads that could be applied. It has just published a manual on how to measure impact, built around five steps that are common to most approaches:

1) Setting objectives
2) Analyzing stakeholders
3) Measuring results
4) Verifying and valuing impact
5) Monitoring and reporting

Impact measurement is a means to an end; by measuring impact, one can learn how to manage impact better. The EVPA is looking at the funder level, she said. If you're a foundation, you have to manage your impact. It's about setting priorities: you have to know what your investees are trying to achieve and consider how you can help them achieve that impact. A lot of this is about the process of learning how you achieve impact, which is useful for both the funder and the organization you're investing in. It's about taking time to set your objectives, engaging key stakeholders, and involving groups in developing indicators. The key challenges, she said, are reporting standards, aggregating measures across a portfolio, and designating appropriate resources to support measurement.

Speaking about Social Ventures Australia (SVA), Kevin said it is a ten-year-old venture philanthropy organization focused on education and employment that has been moving into collaborative projects. He asked: how do you bring nonprofits and government agencies into collaborative projects? SVA set up a consulting business modeled on Bridgespan, as well as an impact investing business. There are a lot of challenges around impact investing in this space, he said; SVA focuses on how to improve capital flow, attract experienced talent, and build an evidence base. In the last three years, SVA has done 60 SROI analyses, including forecasts and baselines of organizations, and has been encouraging organizations to do their own. Some considerations are: involving stakeholders, Theory of Change development, transparency, not over-claiming, and verification. SVA also does a lot of work around program logic. There is a lot of hype around Collective Impact that one needs to be aware of, he said: the complications are compounded when program logics need clarification across groups, and many collective impact efforts have yet to establish their measurement systems.

There is a lot of confusion, Jeremy said, but there is also convergence: convergence towards consistency in answering whether we are making the most difference that we can. We need to reach the same level of consistency that exists in the worldwide way of doing accounting. One of the big questions you have to ask is: what question are you asking, and who is it for? For government policy the required level of rigor is high, versus some other needs. Organizations are summarizing basic data to make better decisions; there is also a potential conflict between investors and investees, because beneficiaries are missing from that conversation. We risk rubber-stamping claims and, eventually, not fighting levels of social inequality.

In response to a question about whether secondary effects are assessed while measuring impact, Kevin said that they are. SVA would do a year-on-year SROI and get a better understanding of the ripple effects: "We see, for example, reduced pregnancy, drug use, and criminal behavior, and we find more effects over time the more analysis we do."
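The session did not walk through the mechanics of an SROI analysis, but the basic arithmetic may help readers unfamiliar with the method. The sketch below is a minimal illustration, not SVA's actual model: the outcome value, adjustment rates, discount rate, and investment figure are all invented for the example. It follows the standard adjustments described in The SROI Network's published guidance (deadweight, attribution, drop-off, and discounting).

```python
# Illustrative SROI ratio sketch. All figures are hypothetical; a real
# analysis derives them from stakeholder engagement and evidence.

def present_value(yearly_values, discount_rate):
    """Discount a series of yearly values (year 1 onwards) to today."""
    return sum(v / (1 + discount_rate) ** t
               for t, v in enumerate(yearly_values, start=1))

gross_outcome_per_year = 100_000.0  # valued outcome before adjustments
deadweight = 0.30   # share of the outcome that would have happened anyway
attribution = 0.20  # share of the outcome attributable to other actors
drop_off = 0.10     # yearly decay of the outcome in later years

net_yearly = gross_outcome_per_year * (1 - deadweight) * (1 - attribution)
impact_by_year = [net_yearly * (1 - drop_off) ** year for year in range(3)]

investment = 120_000.0
sroi_ratio = present_value(impact_by_year, discount_rate=0.035) / investment
print(f"SROI ratio: {sroi_ratio:.2f} : 1")  # value created per dollar invested
```

With these made-up figures the ratio comes out around 1.2 : 1; the point of the adjustments is precisely the "not over-claiming" consideration Kevin raised.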

Lance agreed. "It goes to what the goal of your theory of change is," he said. You may start by wanting to have effects on individuals but then move to also impacting the family and the community. If it is in your theory of change, then you can begin to measure it; actual measurement then becomes a matter of resources and complexity, as the number of variables keeps going up.

An audience member observed that there seems to be a disconnect between the time needed to measure outcomes and the frequency with which social enterprises change their business models. For example, findings might be reported after two years when you really need three or four, and at the same time we support organizations that reinvent themselves every three months and push them to be innovative. How do you get to a steady state in which a model can be measured? Lance said the tension is whether this is a problem for the evaluator or for the program people. You may change your program often, or else you may be inflexible, he said. It does not make sense to do a big, expensive evaluation of a model that will evolve. If the core of the model is staying the same, you can start evaluating it and treat the changes as noise. But if the program is continuously changing, then you are not ready for an outcome evaluation; an implementation or design evaluation will be more valuable to the program than trying to measure a moving target.

Lisa added that evaluation is also about learning, and one has to be flexible: from the impact that you measure, you might find that the organization is not implementing the right model. It is hard to move from measuring outputs to outcomes, which are more about the long-term effects; going a step further, to impact, can be quite challenging because of attribution. You can do an impact evaluation, but usually it is not about coming up with an exact value. You may try to get to outcomes and not just outputs. What is important is using evaluation as a learning process.

Another question asked how the ripple effect of outputs can be attributed accurately. In response, Lance said he sometimes wished more organizations would acknowledge that some programs operate in a contribution framework and should not expect to claim attribution. Some changes among individuals may be attributable to a program, but at the family, neighborhood, and community levels you are moving into a contribution framework. What you need is to understand that, and to indicate what change you are contributing to, he said.

One member of the audience asked: given that social impact bonds rely on multi-party collaborations and are designed to pay based on outcomes, what is the appropriate level of complexity? Who should design and measure them? What trends in time, complexity, and costs do organizations need to build in? Lance said social impact bonds will not go forward without randomized studies. How much certainty do you need? If a difference between 13.5 and 13.7 is going to change government funding, then you may need a randomized controlled trial; if it is enough to know that the impact lies somewhere between 3 and, say, 29, you do not. The government is not willing to pay for uncertainty, so it wants rigor. These evaluations are designed by external firms, usually hired by those organizing the social impact investment.
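Lance's 13.5-versus-13.7 example turns on statistical power: the smaller the difference that matters, the larger and more tightly controlled the study must be. The sketch below was not presented in the session; it is a standard two-sample size approximation with hypothetical numbers (the standard deviation and the two effect sizes are invented) to make the point concrete.

```python
from math import ceil

# Approximate per-arm sample size for a two-sided test of a difference
# in means: n ~= 2 * (z_alpha + z_beta)^2 * sigma^2 / delta^2.
Z_ALPHA = 1.96   # two-sided 5% significance level
Z_BETA = 0.8416  # 80% power

def per_arm_sample_size(delta, sigma):
    """Per-group n needed to detect a mean difference delta given spread sigma."""
    return ceil(2 * (Z_ALPHA + Z_BETA) ** 2 * sigma ** 2 / delta ** 2)

sigma = 5.0  # hypothetical standard deviation of the outcome measure
# A tiny difference (13.7 vs 13.5) demands a large randomized trial ...
print(per_arm_sample_size(delta=0.2, sigma=sigma))   # thousands per arm
# ... while a coarse question (is the impact 3 or 29?) needs almost no sample.
print(per_arm_sample_size(delta=26.0, sigma=sigma))  # a handful per arm
```

The asymmetry in the two printed numbers is the substance of Lance's answer: the precision the payer demands, not the program itself, drives the cost and design of the evaluation.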

He also added that resources should be appropriate to size and scale. You may not need more resources so much as more time, but as you grow, measurement needs become complex. In the US the rule of thumb is 7 to 10 percent of program costs devoted to evaluation; as an evaluator, he said it should be more like 10 to 15 percent.

Lisa said it is important that someone in the fund or the foundation is responsible for measurement. It could be a full-time job or a part-time job. The EVPA also looks at how to improve impact along the chain of investing: from selecting deals through portfolio management, it is important that the person responsible for impact is part of those decisions.

"There's a dance between funders wanting measurement but not wanting to fund it," Kevin said. SVA will ask organizations why they haven't done it before getting into the fund. He noted that SVA has one project without an RCT, but it has a control group. Social impact bonds, in his view, have been designed quite badly: if you go into a sector, you must first analyze the problems and the solutions, and only then choose someone for a bond. "If I were an investor, I wouldn't put my money in it. Those making money on bonds are 1) academics and 2) consultants. They're the most over-hyped thing."

In response to another question, Kevin stated that by asking the questions of the right stakeholders, you can address other dimensions. Lisa added that qualitative forms of assessment are important for validating data.

Takeaways:

Current trends in impact assessment include collective impact, social impact bonds, and big data. However, there is a lot of hype around these ideas, and experts advise a certain level of caution.

The complexity of impact assessment can be daunting. Stakeholders need to consider the purpose of the assessment (policymaking, learning), the readiness of the program (undergoing change or steady), the data collection methods (quantitative, qualitative, low-cost approaches), and the audience (government policymakers, program managers).

Impact assessment can be very expensive; suggested guidelines range from 7 to 15 percent of program costs.

Funders should dedicate a staff person to impact measurement and ensure that processes are in place for managing impact. Specific responsibilities include setting objectives, engaging key stakeholders, and involving groups in developing indicators. Key challenges are establishing reporting standards, aggregating measures across a portfolio, and designating appropriate resources to support measurement.
