September 27, 2017

M&E: Five things you are doing wrong

It is easy to overlook the 'process' of implementing your M&E; but paying closer attention to it can improve the quality of your programme's results, enable better decision-making, and strengthen funder reporting.

4 min read

Effective monitoring and evaluation (M&E) is often called the holy grail of development. While certain aspects of M&E have been standardised and codified over time, many others need to be considered and tweaked case by case.

For instance, these core elements of a standard M&E process rarely vary, regardless of the programme, context, or geography:

  • Formulating outcomes and goals
  • Selecting outcome indicators
  • Gathering baseline information
  • Setting specific targets and dates for reaching them
  • Regularly collecting data to assess whether targets are being met
  • Analysing and reporting results

Similarly, there are questions many of us typically seek to answer through M&E, such as whether the programme was effective in achieving its intended goals, or what change, and how much of it, occurred at the programme or beneficiary level that can be attributed to the programme.

These are important questions to consider because of their sharp focus on the programme’s results.

However, equally important are questions that focus on the operational aspects of M&E, which organisations sometimes neglect to ask. Keeping these in mind helps you design and implement the M&E process well, improving overall results.

1. Are your indicators culturally contextual?

Performance assessment indicators should reflect the region's culture, because culturally grounded indicators capture nuances that otherwise get lost and cause the evaluation to lose its edge. Without cultural understanding, the purpose of the endeavour is defeated.

For example, many indigenous cultures in India do not accept the way that Western medicine is delivered. Men may not allow their wives or daughters to visit health centres. Thus, an indicator that measures the 'number of pregnant women who do not consume iron and folic acid tablets' might not represent reality, because it assumes that women face no barriers to visiting health centres. It is important to understand why pregnant women are not consuming the tablets.

In contexts like this one, which is far from unique, healthcare delivery programmes must construct indicators that measure programme outcomes in a culturally contextual way.

2. Do target groups have ownership of the programme? Were they consulted on its design, and involved in its management, monitoring, and evaluation?

Your beneficiaries should be aware of how the programme will benefit them. In most cases, sending an external field surveyor into an urban slum to collect details about income levels will not yield an accurate picture.

Families might be tempted to lie: they might be under the impression that stating a higher monthly income will make them eligible for a loan, or that stating a lower monthly income will make them eligible for a subsidy.

Building trust with the community is therefore incredibly important to ensure that your field staff collects accurate data.

3. Do you have buffer capacity in terms of resources to accommodate shocks from the field?

Your Gantt chart might do a good job of detailing the step-by-step implementation plan, but it often collapses in the heat of 'the field'. For this reason, being agile about the project's implementation is often the best way to allow for course correction and maintain your team's morale.

For instance, data collected from phase one of the project might reveal that the intervention is not leading to better outcomes, necessitating a restructuring of indicators.

You might also want to consider leveraging mobile technology for data collection and monitoring, which allows for dynamic changes to the data collection questionnaires and metrics.

4. Are you using the right mix of methods to gather data? Is your data reliable?

Mixed-method designs combine the attributes of quantitative and qualitative methods to describe, in depth, the lived experiences of individual participants, groups, or communities.

There are three main kinds of mixed-method design:

Sequential
The evaluation begins with quantitative data collection and analysis followed by qualitative data collection and analysis, or vice versa. Designs can also be classified according to whether the quantitative or qualitative components of the overall design are dominant.

Take, for example, a sequential mixed-method evaluation of the adoption of new seed varieties by different types of farmers. The evaluation begins with a quantitative survey to identify the various types of farmers.

This is followed by qualitative data collection (observation and in-depth interviews) and the preparation of case studies, and the analysis is conducted qualitatively. This would be classified as a sequential mixed-method design because the quantitative and qualitative components follow one another, and as qualitatively dominant because more qualitative methods are employed in the overall process.

Parallel
The quantitative and qualitative components are conducted at the same time.

For example, quantitative observation checklists of student behaviour in classrooms might be applied at the same time as qualitative in-depth interviews are being conducted with teachers.

Multi-level
The evaluation is conducted on various levels at the same time.

For example, think of a multi-level evaluation of the effects of a school-feeding programme on school enrolment and attendance. The evaluation is conducted at the level of the school district, the school, classrooms, teachers, students, and families. At each level, both quantitative and qualitative methods of data collection are used.

Multi-level designs are particularly useful for studying the delivery of public services such as education, health, and agricultural extension, where it is necessary to study both how the programme operates at each level and also the interactions between different levels.

5. Is your M&E programme designed to help you track progress and learn lessons from the past?

Keep in mind the Data Triangle: reliability, validity, and timeliness. Any M&E programme has to inspire trust and credibility among its beneficiaries, funders, programme officers, and field staff.

Whether the programme uses process monitoring (checking whether process milestones are on track) or beneficiary monitoring (measuring impact on the recipient), the data shouldn't just meet funder requirements; it should also help the nonprofit make better decisions that improve outcomes.

 

This is a shorter version of an article that first appeared on the SocialCops blog.

ABOUT THE AUTHORS
SocialCops

SocialCops is a data intelligence company on a mission to confront the world’s most critical problems with data. We work with 150 organisations across 7 countries, including the Tata Trusts, Ministry of Rural Development (India), Gates Foundation, Unilever, BASF, Niti Aayog, and the United Nations.
