Do evaluations actually serve a useful purpose? | India Development Review


Our sector routinely commissions evaluations, but do individual reports feed substantively into key decisions? It appears that they serve more as an institutional accountability requirement than as a guide to decision making. Fortunately, there are steps we can take to change this.


Development programmes are difficult to run, and can fail for a number of reasons, including flawed policy or programme design, poor implementation, under-investment, and want of political will. Sometimes, the problem itself is too hard to solve.

A lot of money is at stake in development programmes, especially as India’s GDP and tax collections have grown considerably over the years. In the 2018-19 budget, the central government alone allotted INR 2.98 lakh crore to social sector programmes. But how do we know whether a programme was successful, or how to improve it?

Enter evaluations

Turns out, this is not easy either. Evaluations need two things: first, a programme should clearly state the outcomes it expects to improve by a certain time; and second, those outcomes must be measured accurately through specialised evaluation methods.


There are three types of evaluations:

  • Concurrent evaluations that start in the early or middle stages of the project and provide feedback on the quality of implementation and intermediate outcomes.
  • Ex-post programme evaluations, which are done at closure and measure effectiveness in achieving target outcomes, using large-sample surveys or qualitative methods.
    They measure the combined impact of the efficacy of the intervention concept, the correctness of the design, and the quality of implementation. They also try to gain insights into the reasons for success or failure, and make recommendations. Some include impact assessments, which measure the portion of the change in outcomes that can be attributed to participating in the programme, drawing from research methods such as randomised controlled trials.
  • Social audits of public schemes, for which the government sometimes provides the opportunity.


Given the many demands on scarce budgets, evaluations are intended to provide evidence for decision makers to invest more in impactful projects and less in poor performers, or to design and implement better projects.

But do they actually serve a useful purpose?

Donors routinely commission evaluations, as, increasingly, do central and state governments. There are also research institutions, such as the Abdul Latif Jameel Poverty Action Lab (J-PAL), that conduct them of their own accord.

But when I asked a number of funders, implementers, and evaluation consulting agencies, none were able to say that their evaluation reports contributed substantively to important decisions.

So, although evaluations have many advocates, the evaluations themselves do not seem to be helping much. And they are not cheap.


Some funders say that they closed projects because the evaluation confirmed poor performance; but projects largely get yanked when they are an obvious disaster, and get extended based on broad perceptions of success. Neither of these needs the precision of rigorous evaluations.

Implementing ministries have funds earmarked for self-learning. However, “hundreds of evaluation studies are being carried out in the country each year, but not much is known about the follow up actions taken on their findings”. In an illustrative anecdote, I participated in a review meeting in which the department secretary asked each of eight states whether they had completed impact assessments, but was not curious enough to ask any of them whether they had found anything interesting.

It appears that evaluations are an institutional accountability requirement and certainly seem like the responsible thing to do especially when budget outlay is high. However, while a body of evidence may collectively shape long-term institutional strategy, individual evaluation reports do not seem to guide near-term decisions for the project stakeholders.

What are the reasons for this?

Lack of clarity of purpose

If the intention is to assess a new or unproven concept, then one should test options in a pilot, evaluate which ones performed well, and then scale those up.

However, there is little appetite within government to launch schemes at a small scale. There is political pressure to reach, and report, large numbers, even at the cost of doing a middling job, rather than to move slower but do it better.


If it is a proven concept (such as completing an immunisation course), then one only needs a concurrent evaluation to ensure it is being implemented properly. This is easier said than done, since few implementers are open-minded enough to take advice from an external agency. The agency, in turn, must have the competence to provide useful, actionable advice diplomatically.

It is less helpful, however, to wait five years for a project to end and then do an ex-post evaluation, after the money has already been spent and nothing can be done to salvage it. If the evaluation can find ways in which the programme could have been run better, this information is more useful during implementation than after its conclusion.

There is resistance to being evaluated

Elected representatives, whom one would expect to want to know whether voters are satisfied enough with their services to re-elect them, may worry that a poor evaluation report will provide ammunition to critics. Decisions may hence be influenced by voters’ perceptions rather than by impact.

Second, unless it is made mandatory, no bureaucrat would want to be held accountable after investing considerable time and money in an unsuccessful programme. Though the government recommends that two percent of programme funds be set aside for evaluation, “a large part of them are controlled by line ministries which resist critical evaluations”.

Poor domestic evaluation capabilities, plus methods that are not informative enough

For a country of our size, there are very few evaluators who can carry out quantitative socio-economic evaluations correctly, write well, and convince decision makers. Qualitative studies do not carry the same credibility, since they use small samples and are perceived as reflecting the subjective judgement of the researcher.

Quantitative impact evaluations are not easy to master, as they use the same advanced and ever-changing research techniques devised by the most skilled empirical development economists in the world.

Though improving, government procurement procedures (lowest-cost bid) often lead to poorly qualified bidders being selected, which makes things worse. It is therefore not surprising that evaluators do not seem to have contributed much to decision making.


Second, impact evaluations can tell us whether there was impact. But the teams and methods used are not equipped to identify which part of a complex suite of interventions drove that impact and hence should be scaled up. Nor can they always identify why interventions worked or did not, how to redesign them for better impact, or how to customise them in future projects.

They can answer small, specific questions well, when there is enough time: for instance, will changing the repayment frequency of microfinance borrowers from weekly to monthly reduce the lender’s costs without worsening default rates after one year?

They are less useful in helping a project director who must work within the confines of our bureaucracy and find quick answers to a million design decisions. Lacking good and timely evidence, project designers instead rely on intuition. Finally, similar studies asking the same questions do not always arrive at the same conclusions. Decision makers will hence want a body of evidence, rather than a single study, to be convinced.

What is the way forward?

So, can research help at all? A systematic, evidence-based approach to design will certainly help make some better decisions, and mitigate common threats to achieving outcomes.

Here are three suggestions that donors, academics, research institutions, and government could focus on.

Invest in research to make the best possible design

Projects should front-load their efforts: invest more in the theory of change, conduct formative research, and draw from available knowledge and implementation science to fashion the best possible design and implementation arrangements. A parallel is practised in some OECD countries, where the government is required to analyse the potential consequences of a new policy and consider alternatives before zeroing in on one.

This approach may delay projects but can improve effectiveness and will likely prove cost-beneficial.

Ex-post evaluations as incentives for success

Ex-post evaluations are still required to provide incentives for programmes to succeed, and to justify spending, especially since studies find no correlation between government expenditure and development outcomes in some sectors.


We need policies that mandate the government to define and measure outcomes rigorously. This should be done as stringently as expenditure is audited, and attainment of outcomes should be the main criterion for projects to seek budgets for expansion. In cases where the right approach is not clear, projects should first prove themselves in demonstration sites before scale-up funds are allocated.

The time may be right for this. The former finance minister’s budget speech in 2008 referred to this: “I think we do not pay enough attention to outcomes as we do to outlays; or to physical targets as we do to financial targets; or to quality as we do to quantity”.

However, commissioning evaluations should be in the hands of a separate entity, say, the finance ministry, rather than the implementing body, to retain independence and objectivity.

Strengthen domestic research and evaluation capabilities

We need more qualified institutions that specialise in quick, credible data collection and research; evaluate interventions; and build a body of evidence to support implementers. These could serve as platforms for intelligent debate for improving policy and design. Dedicated trade journals with peer review mechanisms may be promoted to share lessons and to assure the correctness of the findings.

The cost of high-quality research may be high, but one is sure to find that the social cost of investing in ineffectual programmes is higher.

Views expressed are personal.

Karuna Krishnaswamy

Karuna is an evaluation consultant with more than 22 years of experience. He specialises in quantitative impact assessments as a principal investigator, and advises on programme design through research and theory-of-change development. He has worked with leading funding agencies, governments, and nonprofits across financial inclusion, health, farm livelihoods, and governance. His interest is in helping improve the effectiveness of development projects. Karuna has a bachelor’s degree in technology from IIT Madras, and master’s degrees in economics and computing sciences. He can be reached at