December 16, 2020

Evaluate schemes for better outcomes

Programme evaluations could improve the effectiveness of public policies and drive more efficient use of public funds.

7 min read

Evaluations would perhaps be the last thing on the agenda of a government in firefighting mode. India’s gross domestic product (GDP) at constant prices contracted by 23.9 percent in the first quarter of financial year (FY) 2020-21 and by 7.5 percent in the second quarter, compared to the corresponding quarters of the previous fiscal year. Revenues are down by one-third in the first quarter, and expenditures, especially on food subsidies, are expected to balloon.

Anticipating increased pressure on government finances, the Ministry of Finance has already moved to curtail expenditure across ministries and departments.

As a result, we are likely to see either uniform budget cuts across all programmes or specific reductions based on political priorities and perceived importance. Programmes that do not show quick results are likely to feel the resource crunch more acutely than others. This traditional approach of ad hoc budget rationalisation, though well-intentioned, may also hamper a broad-based economic recovery, especially in a post-COVID-19 world. An example commonly used by experts to illustrate this point is investment in human resource development (HRD), where a government incurs the cost in one year but the benefits accrue over future years. Consequently, HRD may be the first to undergo rationalisation when budgets are tight, even though a reduced HRD budget can depress education- and employment-related outcomes in the long run.



This issue is not limited to a specific ministry or department; it is a systemic challenge of public expenditure planning in India. About three years ago, the Lal Bahadur Shastri National Academy of Administration (LBSNAA), Mussoorie, hosted a workshop on health systems. Among the attendees was a senior Indian Administrative Service (IAS) officer who lamented that budgetary allocations for health were highly inadequate and blamed the finance department for it. Ironically, when the same officer later became the Principal Secretary, Finance, with Health as an additional charge, the officer still could not allocate more funds to the health department.

The reason the officer cited was that while departments such as the public works department (PWD) could show results in terms of physical outputs and some outcomes, health had little to show. Had the system institutionalised evaluations, the officer’s dilemma would have been easier to resolve: budget allocations for both the PWD and the health department could have been based on evaluation findings rather than on visible outputs alone.

Evaluations enable better decision-making

Globally, governance has evolved from solving simple problems, such as building hospitals or schools, to tackling complex ones, such as delivering development outcomes sustainably and with quality. This shift makes evaluations necessary: their role is to continually investigate whether a particular programme works.

Thanks to Nobel Prize-winning economists Abhijit Banerjee and Esther Duflo, randomised controlled trials (RCTs) have become a well-known and popular approach to impact evaluation. Despite this, the economic rationale for undertaking evaluations has not struck a lasting chord with policymakers and implementers. This could be because RCTs are not easy to implement, are time-consuming, and can be costly. Policymakers often need quick inputs on a programme and therefore choose a less rigorous study that can be delivered in months. After all, a democratic government facing continuous election cycles requires quick results and is unlikely to have the appetite for long-drawn-out RCTs for every national programme.


However, evaluations are not limited to RCTs; newer, cost-effective techniques can be tailored to diverse policy contexts and fast-moving programme cycles. Interim design evaluations and rapid assessments are two such techniques. While lower on technical rigour, design evaluations can be used to check the soundness of a programme by mapping its objectives, implementation architecture, and expected results. Similarly, rapid assessments are shorter exercises that check the quality of service delivery, end-user uptake, and satisfaction with services. Governments in countries such as Mexico and Chile have adopted cost-effective evaluation products with shorter timelines to assess programme-level performance.

In fact, Mexico, Indonesia, and South Africa have made excellent use of evaluations to improve their policy effectiveness. Mexico’s national evaluation system, anchored by CONEVAL and established in 2005, is regarded as one of the world’s best and contributes to the country’s national development efforts. According to the World Bank, the national evaluation of Prospera, a conditional cash transfer (CCT) programme in Mexico, was so successful that not only did the programme pass muster with subsequent opposition-led governments, it was also replicated in 52 countries.


Several ministries in India continue to treat evaluation as an exercise in accountability rather than learning. | Picture courtesy: ©Gates Archive/Prashant Panjiar

In India, a good example comes from the recent evaluation of the Bhagyalakshmi CCT scheme in Karnataka, under which a girl child belonging to a below poverty line (BPL) family is eligible for financial assistance after completing 18 years of age. While the evaluation found that the scheme reduced gender disparity in school attendance rates, it also noted that the scheme was somewhat marriage-oriented: parents said they intended to use the money from the scheme for their daughters’ marriages. Based on these findings, the minimum age for withdrawing the full amount under the scheme was raised from 18 to 21 years.


We need to strengthen the evaluation system for government programmes

If evaluation is a logical exercise with obvious benefits, why hasn’t it been used proactively in India? We have observed three major reasons for this:

  • Several ministries and departments continue to view evaluations conservatively, treating them as an exercise in accountability rather than learning. They prefer creating ad hoc evaluation units at the scheme level or commissioning evaluations from a limited set of research institutions. One plausible reason is that evaluation expertise, or evaluation thinking, is seen as an ‘external’ requirement rather than a core part of programme management skills. Often, scheme-level evaluation units consist of members with limited experience of evaluations. As a result, scheme performance reports typically focus on financial reporting, with little attention to outcomes.
  • India is yet to establish minimum standards of evaluation for public programmes. In their absence, the quality and frequency of evaluations vary widely, and comparing the same programme across states and over time becomes challenging. This also makes it easier to question findings, especially when they do not meet expectations.
  • Evaluations in India are predominantly used to validate the successes of programmes rather than to inform policy and budgetary planning, and this needs to change. In many other countries, the usual practice is for annual budgetary allocations to take evaluation results into account, and for ministries and departments to propose an annual evaluation plan for this purpose. Government programmes there undergo regular evaluation in sync with the policy calendar, and those that show poor results are required to justify why they should be continued. India is yet to formally adopt similar systems or use tools such as annual evaluation plans, so evaluations continue to play a limited role in planning and budget allocations.


Currently, at the national level, the Development Monitoring and Evaluation Office (DMEO), an office attached to NITI Aayog, is responsible for driving evidence-based policymaking by monitoring and evaluating government policies and programmes. Among the states, Karnataka is the only one to have an evaluation policy and a specialised evaluation authority. While we have seen an increase in Management Information Systems (MIS) for monitoring programmes, evaluations have remained a need-based exercise, which may not be adequate to measure whether a programme has achieved its intended goals.

So, what needs to be done to bring evaluation centre stage in the Indian development discourse? Given the challenges involved, the DMEO would need a three-pronged approach:

Improve collaboration with the Department of Expenditure (DoE)

The DoE, which is responsible for the efficient use of public resources, has already made the evaluation of programmes mandatory for their continuation as well as for budget rationalisation. As part of this, the DMEO was asked to undertake evaluations of centrally sponsored schemes that could provide inputs to the 15th Finance Commission.[1] Line ministries[2] have also been nudged to undertake third-party evaluations of their flagship schemes, which are typically the schemes with the highest outreach and budget allocations. In addition, creating an evaluation division within the DoE would go a long way in ensuring that these studies are completed on time and to a high standard.

In this arrangement, the DMEO provides the technical expertise, while the DoE adds the administrative muscle and can make budgetary allocations contingent on evaluation findings. The DoE can mandate that all schemes set aside one to two percent of their programme outlays for evaluations, and financial advisors in the line ministries can ensure that evaluations are institutionalised.


Provide technical assistance to central ministries and states

Ministries with high allocations could also be asked to create monitoring and evaluation (M&E) cells that undertake regular evaluations of their schemes. A similar approach can be adapted for the states by helping them set up specialised M&E offices within their departments of planning, programme monitoring, and statistics. The DMEO and its partner organisations could be roped in for technical support to strengthen evaluation design and planning, standardise procurement processes for engaging evaluation or survey agencies, and create standard operating protocols for ministries and states. This can help create a cadre of champion ministries and states that demonstrate evidence-based policymaking for improved outcomes. A national evaluation policy is the need of the hour to sustain these efforts.

Build the capacity of key government personnel

Finally, the DMEO will have to build the M&E capacities of government personnel at the central and state levels, in collaboration with its partners. This can include introducing M&E curricula as part of pre- and in-service training for government officers, training faculty at central and state administrative training institutes, and running continued-learning initiatives for central ministries, departments, and states through the new Integrated Government Online Training platform.

While a well-functioning administrative system has to institutionally guarantee checks and balances, it is ultimately the larger ecosystem that will create a non-coercive belief in the value of evaluation.

 

  1. The 15th Finance Commission is mandated to make recommendations on the distribution of resources between the union and the states, grants-in-aid for the states, and measures to augment the consolidated fund of a state.
  2. Line ministries are ministries that have a mandate for a specific thematic sector, such as agriculture, rural development, education, or health.

Know more

  • Understand the basics of monitoring and evaluation.
  • Explore this lecture series by 3ie, which provides an overview of key concepts related to evaluation for those who are new to the space.
  • Explore this guide to building evaluation capacity.

ABOUT THE AUTHORS
Alok Mishra

Alok Mishra is the Deputy Director-General, Development Monitoring and Evaluation Office, NITI Aayog where he oversees evaluation studies, external engagement, and capacity building initiatives. Prior to this, he served as the senior deputy director at Lal Bahadur Shastri National Academy of Administration, as director (IITs and IIITs) and director (international cooperation) at the Ministry of Education, and in the media units of the Ministry of Information and Broadcasting. Alok holds a master’s in public policy and management from IIM-Bangalore and Syracuse University, USA.

Vijay Avinandan

Vijay Avinandan is the Monitoring and Evaluation Officer, United Nations World Food Programme (WFP), NITI Aayog. He has more than seven years of experience in monitoring and evaluation (M&E), across sectors such as public health and nutrition, education, water, sanitation, hygiene, and livelihoods in India and East Africa. Vijay is also a member of Evaluation Community of India (ECOI) and its EvalYouth chapter, a community of evaluators that seeks to promote knowledge sharing in the area of M&E.
