March 26, 2025

Measure twice, pay once

Outcomes-based financing demands more than just tracking data—it requires a shift in how nonprofits use monitoring and evaluation to drive real impact.


Whenever someone mentions outcomes-based financing (OBF) and what nonprofits need in order to deliver under an OBF initiative, the first thing that comes to mind is monitoring, evaluation, and learning (MEL) systems and processes. This instinct is correct—OBF does require high standards of MEL, and the maturity of a nonprofit’s MEL systems can be a good signal of when it is ready to adopt OBF.

However, MEL is not a new concept. Many nonprofits already have structured MEL systems in place to generate data reports, track key indicators, and conduct evaluations. So what is different about MEL under OBF, and what does it add to the business-as-usual approach?

This article draws on learnings from in-depth interviews with nonprofits, funders, and ecosystem stakeholders engaged in OBF in India as part of the development of the Outcomes Readiness Self-Assessment. The assessment is a public good created specifically for nonprofits in India to define, measure, and enhance their preparedness for OBF. Through a comprehensive assessment consisting of multiple-choice questions and real-world scenarios, it offers clarity on a range of organisation and programme indicators, including MEL. It then provides expert insights and curated resources to strengthen capacity for OBF.

More than metrics: Making MEL matter

In an OBF initiative, a nonprofit agrees to achieve pre-defined outcome targets within a specific period—for example, a 10 percent reduction in open defecation in three years, a 5 percent increase in income over two years, or a 20 percent fall in anaemia over five years. Target setting is typically a collaborative, consultative negotiation in which all parties aim to arrive at outcome targets that are reasonable yet aspirational.

If the nonprofit does not achieve the targets, it could face financial loss, reputational loss, or both. As such, the nonprofit’s MEL system and the data and insights it generates need to enable and empower the teams to achieve these targets. Making this shift requires the following key transformations:

1. From reporting to real-time learning

MEL should not be limited to documenting past activities. It must actively inform strategic decision-making for ongoing programmatic interventions. This means establishing monitoring systems that allow teams to continuously collect, interpret, and apply data to refine interventions. For example, an organisation might create a dropout tracking system based on live data from the field to gauge where, when, and why students are dropping out of school, and use this information to adapt programming in real time rather than retroactively.

2. A balance between measuring inputs and outcomes

There is a common misconception that in an OBF project, only the outcomes are measured, without the need to track or measure inputs and outputs. However, input and process indicators show whether interventions are of sufficient quality and delivered consistently enough to yield the desired outcomes. Hence, they must be measured alongside outcomes data, which shows how far one is from the target.

Learning is proactive, retrospective, and driven by data-backed decisions and adaptability. | Picture courtesy: Rajika Seth

3. From a stand-alone function to integrated decision-making

To truly support an outcomes focus, MEL must be embedded into decision-making at all levels. Leadership, programme, finance, and technology teams as well as frontline staff should actively engage with MEL insights to align strategies, effectively allocate resources, and continuously refine their approaches based on evidence. MEL data should eventually also start informing programme budgeting, where one starts to think of how to become more productive and effective in delivery.

4. From internal to external verification

Independent verification of outcomes is a core premise of any OBF initiative, as donor payments are based on verified outcomes. More often than not, however, the nonprofit’s MEL teams will have to deeply engage with a third-party evaluator to ensure that the verification has accounted for on-ground realities; the data collection methods are feasible; the communities being surveyed to study impact can understand the way in which questions are asked; and so on. This calls for an internal MEL team that is not only technically sound but also well versed with the programme, confident about its approach, and has its own robust evidence to supplement the third party’s findings.

5. A culture of performance, knowledge, and continuous learning

An outcomes-focused MEL system thrives in an organisational culture that prioritises improvements in performance year on year, continuous learning, and iterative decision-making. Learning is proactive, retrospective, and driven by data-backed decisions and adaptability. Underperformance is seen as an opportunity to course-correct and re-engineer programme delivery until the desired outcomes are achieved.  

From theory to practice: A real-world example of MEL under OBF

To better understand the points above, let us examine how a skilling nonprofit might conduct MEL under two funding models:

  1. Input-based programme, where funding is provided based on activities and outputs such as skill training certification. This is the business-as-usual approach.
  2. Pay-for-results (PFR) programme, where funding is tied to outcomes such as job placement and retention. This approach focuses on outcomes.

1. Selection of indicators

A key distinction between an input-based programme and a PFR programme lies in how they measure success.

In a business-as-usual input-based programme, funders and nonprofits often focus on measuring inputs and outputs, such as the number of people enrolled, number of hours of training delivered, and number of people certified.

A PFR programme, on the other hand, prioritises outcomes, such as the number of people placed in jobs, the number of people still in employment three months after placement, and salary value. While inputs and outputs are still tracked, they are seen as means to achieving agreed-upon outcomes. The targets themselves are centred on results.

This shift allows greater flexibility for nonprofits, reducing the pressure to follow rigid processes. For example, if a candidate achieves placement and retention with fewer training hours than initially planned, it’s still considered a success. This approach encourages a broader focus on programme achievement, rather than merely tracking programme execution.

2. Data collection, analysis, and verification

Data collection and verification serve as the backbone of a PFR programme and introduce a higher level of rigour.

In an input-based programme, a nonprofit might rely on an in-house data collection team to gather information from sources such as attendance logs, hours of training completed, and self-reported data on placements.

In a PFR programme, third-party verification and validation of data sources are paramount. Beyond nonprofit-reported data, a skilling programme in this model collects independent evidence that triangulates outcomes from different sources. For example, the trainee might say they have a job, but the employer might refute this, or vice versa.

For a nonprofit new to the PFR model, the need for third-party verification may initially seem to be a challenge. However, as they work with verifiers, they not only meet these requirements but also build internal systems that enhance overall operational efficiency.

Collecting previously hard-to-get data—such as salary slips or placement offers from blue-collar companies—becomes a regular part of their process, raising the standard for data management and transparency. This shift strengthens the credibility of their reporting while also improving programme design, allowing for better monitoring of beneficiary outcomes, and enhancing decision-making.

3. Identification of areas of improvement

In PFR programmes, the goal is not only to prove high-quality outcomes but also to identify areas for improvement.

For instance, if data reveals that 15 percent of women drop out between placement and retention phases, this isn’t just a statistic—it directly affects outcomes and, in turn, payments for nonprofits.

Naturally, partners dig deeper to understand the reasons behind these dropouts. They may review candidate feedback, look at past information such as candidates’ intake forms and records, and correlate motivations with dropout reasons. Perhaps some candidates intended to continue their education and dropped out because they could not balance it with skill training, or maybe they were placed far from home despite preferring a job closer to where they live. Perhaps some women faced health challenges and could not cope with the demands of their work.

The nonprofit can then explore whether it is worth introducing other interventions—one on health and nutrition, for instance, if health challenges emerged as a prominent reason for dropouts. It would also need to evaluate what such an intervention would cost, how it would be implemented, and its likely influence on the final outcomes.

Consider another example. A nonprofit diligently tracks retention across different employers in a PFR contract. If they observe that many women drop out within the first month at a particular employer, it signals a challenge that they cannot ignore. Since retention is a key payment-linked outcome, they proactively address the issue, either by collaborating with the employer to improve working conditions—by providing safer transport, better hygiene, or childcare support, for instance—or by discontinuing future placements with that employer if no solution is found.

Making continuous learning an embedded practice

The individual trends that data throws up provide actionable insights that help inform strategies as the programme progresses. Nonprofits now begin to think beyond just responding to reporting requirements, and instead start actively using data to enhance programme effectiveness. This makes continuous learning and programme improvement an embedded practice rather than an afterthought.

Ideally, all the nuances and uses of data—as seen in the above women-in-the-workforce example—can, and should, be built into business-as-usual grant programmes. In reality, however, a PFR programme makes them a must-have rather than a good-to-have, as both the donor and the implementing nonprofit are financially incentivised to accord the highest priority to achieving outcomes.

At its core, MEL is meant to help shape learning, drive decisions, and support the continuous improvement of programmes, all of which are also hallmarks of outcomes readiness. The question goes beyond ‘what’ you are doing in MEL to ‘how’ you are doing it, and, more importantly, how you are using it.

But this shift doesn’t happen on its own—it requires strong leadership, a culture that enables learning, and a genuine commitment to using data for driving action. To quote American management theorist W Edwards Deming, “In God we trust; all others must bring data.”

The Outcomes Readiness Framework and Self-Assessment have been developed by British Asian Trust, Indian School of Development Management (ISDM), and Atma, with support from 360 ONE Foundation.

Know more

  • Learn more about outcomes-based funding.
  • Browse this compendium of resources that includes articles, frameworks, workshop guides, workbooks, and more for MEL under OBF.
  • Learn about the four types of data necessary for OBF.

Do more

ABOUT THE AUTHORS
Anushree Parekh

Anushree Parekh leads British Asian Trust's social finance work in India. She is a social impact consultant and has more than 15 years of experience in corporate social responsibility, public policy, and nonprofit sectors in India and the UK. Before joining BAT, Anushree led impact advisory at Waterfield Advisors and set up the research and strategy practice at Samhita Social Ventures. She has an MSc in Development Studies from SOAS, University of London and a BA in Economics from St. Xavier's College, Mumbai.

Priyanshi Chauhan

Dr Priyanshi Chauhan is a senior research associate at the Centre for Innovative Finance & Social Impact, Indian School of Development Management (ISDM). She drives research on innovative and outcomes-based financing models for the social sector.

Saumya Lashkari

After a decade in tech at General Electric USA, Saumya led global Corporate Social Responsibility platforms at conglomerates like Genpact and Godrej Industries, and advised UHNI donors. Under her leadership, 360 ONE Foundation has reimagined traditional philanthropy and pioneered a more catalytic approach powered by blended finance and outcomes-based financing. Throughout her career, Saumya has been a change agent to optimise giving: making philanthropic capital more outcome-oriented, efficient, and effective.
