Data- and evidence-based policy and programming have emerged as a norm in global development. In India, as in many other developing countries, the effort has been led by a plethora of development actors, including aid agencies, donors, and research institutes. This thinking is now also taking hold among government and policymakers, a prominent example being the setting up of the Development Monitoring and Evaluation Office at NITI Aayog, the Government of India's policy think tank.
Improving India’s evidence ecosystem requires every stakeholder to make routine administrative data more reliable, high-quality, and self-sufficient. In fact, administrative data—for instance, registration of a pregnancy at a health facility, or a child’s height and weight data that is measured at an anganwadi centre—is a crucial missing piece in the narrative on evidence-based policy. We need to put it firmly back into the list of priority action areas, because it has the potential to be the go-to source of on-the-ground information for decision-makers and policymakers. It can further enable near real-time course corrections of programmes. We need the data system to work well at all levels—recording, checking, and aggregation.
Current issues with administrative data
We tend to inject a lot of energy and resources into the top end of the data value chain: analytics and dashboards that visually analyse and display programme performance. While these are useful, they are an end product whose chief ingredient is data, and the quality issues in that data are well known and long-standing.
The problems with administrative data come down to two factors. The first is contradictions arising out of unstandardised data formats or indicator definitions, coupled with data-entry mistakes and incomplete data. These are technical data recording and management problems, which are at least theoretically more amenable to solutions such as imparting technical skills or digitisation.
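Many of these technical problems, such as incomplete entries and implausible values, can be caught at the point of recording with simple automated checks. The sketch below illustrates the idea in Python; the record layout, field names, and plausibility bounds are hypothetical, chosen only to mirror the anganwadi growth-monitoring example above.

```python
# Hypothetical record layout for illustration: one child growth-monitoring
# entry as it might be digitised from an anganwadi register.
REQUIRED_FIELDS = {"child_id", "visit_date", "height_cm", "weight_kg"}

# Illustrative plausibility bounds for under-six growth monitoring.
BOUNDS = {"height_cm": (40.0, 130.0), "weight_kg": (2.0, 35.0)}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems found in one register entry."""
    problems = []
    # Incomplete data: flag missing or blank required fields.
    for field in sorted(REQUIRED_FIELDS):
        if record.get(field) in (None, ""):
            problems.append(f"missing field: {field}")
    # Data-entry mistakes: flag values outside plausible ranges.
    for field, (low, high) in BOUNDS.items():
        value = record.get(field)
        if isinstance(value, (int, float)) and not (low <= value <= high):
            problems.append(f"implausible {field}: {value}")
    return problems

entry = {"child_id": "C102", "visit_date": "2023-04-11",
         "height_cm": 8.7, "weight_kg": 12.4}  # height likely mistyped
print(validate_record(entry))  # flags the implausible height
```

Even a basic check like this, run as data is entered into an app or during digitisation, would surface a large share of recording errors before they propagate upwards.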
But the second problem, which is bigger and more difficult to solve, pertains to the behaviour and capacity of administrative staff across levels. While most government schemes, programmes, and missions have funds and staff for monitoring and MIS (Management Information Systems), there are gaps in the recording of data, in checks and validation of the recorded data, and in its aggregation. This happens both at the village level, where frontline workers operate, and at supervisory levels such as blocks and districts. The inability to produce reliable data from within the system can lead to over-dependence on external actors for even rudimentary evidence on programme implementation.
In addition to these two issues, functionaries working on data reportage at the ground level also face skewed incentives. This happens when the recorded data influences rewards or attracts a penalty. In other words, functionaries are prone to misrepresenting data from the ground to attract rewards (eg, incentive-based pay) or avoid punishment (eg, a rebuke by managers or delayed pay release). When the data is faulty, aggregation and analytics are unproductive.
Towards fixing the problems
Our efforts to fix administrative data should rest on two planks: capacity building and sensitisation. First, there is a need to train community functionaries (eg, ASHA workers, anganwadi workers, and so on) on proper data recording (manual or app-based), and supervisors on quality assurance and simple validation techniques (eg, keeping data formats consistent when combining data from several blocks or regions). Building capacity here also means making administrative functionaries at all levels aware of the significance and inherent value of these individual data points which, when aggregated, become indicators for monitoring and analysis. Often, implementation is not tracked closely enough by the system for gaps to surface early, and recurring implementation gaps compound as a result. Therefore, where necessary, one could seek external technical assistance to improve administrative data itself. This might be a better approach than always directly attempting service improvements.
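The supervisory task of keeping formats consistent when combining data from several blocks can be made concrete with a small sketch. The field names and figures below are hypothetical; the point is that each block's local naming is mapped to one shared indicator name before aggregation, so that totals are comparable.

```python
# Illustrative sketch: two blocks report the same indicator under
# different (hypothetical) field names; harmonise before aggregating.

block_a = [{"village": "V1", "pregnancies_registered": 14},
           {"village": "V2", "pregnancies_registered": 9}]

# Block B's registers use a different field name for the same indicator.
block_b = [{"village": "V7", "preg_reg_count": 11}]

# One shared name per indicator, mapped from each block's local name.
FIELD_MAP = {"pregnancies_registered": "pregnancies_registered",
             "preg_reg_count": "pregnancies_registered"}

def harmonise(rows: list[dict]) -> list[dict]:
    """Rename each record's fields to the shared indicator names."""
    out = []
    for row in rows:
        out.append({FIELD_MAP.get(key, key): value
                    for key, value in row.items()})
    return out

combined = harmonise(block_a) + harmonise(block_b)
total = sum(r["pregnancies_registered"] for r in combined)
print(total)  # 34
```

In practice this mapping would live in a shared codebook maintained at the district level, so that every block reports against the same definitions rather than reconciling formats after the fact.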
Second, the importance of sensitising officials on the potential utility of administrative data to improve services cannot be overstated. The true value of quality data is in its capacity to operate as a friendly critic, but it is often seen by actors as an inspectorial tool. Therefore, we need a culture and attitude shift, one that should be enforced top-down across administration units. Meanwhile, a potential way to address the problem of skewed incentives would be to link rewards to outputs, paired with stronger penalties where misreporting of data is clearly established.
Making better use of external technical support
Once administrative data reaches a baseline of reliability, routine data analysis can dependably reveal gaps in programmes and services across sectors, in near real-time. This will also allow us to downsize the effort we currently invest in external sample surveys, which are costly, repetitive (inducing respondent fatigue), done in a piecemeal manner, and lack local ownership and buy-in. Outside surveys also often tend to be perceived as audits.
We can then pivot the role of external surveys to periodic authentication exercises on random samples of administrative data. Over time, through training, capacity building, and the sensitisation of communities and functionaries within the system, inaccuracies in administrative data should become the exception rather than the norm. Additional primary data collection can then focus mainly on parameters that validated administrative data cannot inform. These should seldom be at the level of outputs or service coverage, and should focus more on the pathways to outcomes and impact. In other words, external expertise can then move one step up to produce deeper insights into the why and how of development outcomes.
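A random-sample authentication exercise of the kind described above can be sketched in a few lines. Everything here is hypothetical: the administrative records, the independently re-measured values, and the tolerance for what counts as a discrepancy. The sketch simply draws a random sample of records and reports the share that disagree with the field check.

```python
import random

# Illustrative data: administrative records and independently
# re-measured values for the same (hypothetical) children.
admin_records = {f"C{i}": 100 + i for i in range(50)}  # reported heights, cm
field_check   = {f"C{i}": 100 + i for i in range(50)}  # re-measured heights
field_check["C3"] += 6  # one deliberate discrepancy for illustration

def discrepancy_rate(admin: dict, verified: dict,
                     sample_size: int, tolerance: float = 2.0,
                     seed: int = 0) -> float:
    """Share of a random sample of records whose re-measured value
    differs from the administrative record by more than `tolerance`."""
    rng = random.Random(seed)  # seeded for a reproducible sample
    sample = rng.sample(sorted(admin), sample_size)
    mismatches = sum(abs(admin[k] - verified[k]) > tolerance
                     for k in sample)
    return mismatches / sample_size

print(discrepancy_rate(admin_records, field_check, sample_size=10))
```

If the discrepancy rate stays consistently low across such exercises, external surveys can safely retreat to this light-touch verification role rather than re-collecting coverage data wholesale.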
To sum up, improving India’s evidence ecosystem requires us, first, to make routine administrative data more reliable, high-quality, and self-sufficient; and second, to focus external efforts on generating higher-value evidence that explains development pathways, rather than simply tracking or measuring indicators.