November 28, 2017

How to use data to improve decision-making

Everyone knows that collecting data at a programme and organisation level is important. But once you collect it, how often do you actually refer to it or use it? Here are some practical steps to help ensure that your data improves decision-making.

4 min read

It is a truth universally acknowledged that an organisation seeking to scale (either in terms of reach or depth) can do so only on the strength of existing evidence, both primary and secondary.

Evaluations are perhaps the more glamorous side of using data to develop and deliver programmes. Monitoring, that slow, regular, plodding filling up of forms, is less discussed, but just as important.

At Magic Bus, the past year has been a significant one for the way in which we have looked at and used monitoring data, and the change it has brought about in organisational culture.

For the past six years, the central monitoring and evaluation (M&E) team compiled a National Outreach report that calculated outputs across the organisation. Each month, the report would appear in inboxes and files would be dutifully saved or archived, but it was rarely opened. The central M&E resource even prepared dashboards, both for the national team and for regional heads, yet we failed to build a conversation around the outputs.

Photo Courtesy: Magic Bus

Today, we are in a situation where the data is extensively discussed, used and referred to by our field teams, regional offices and the national team. It is our go-to report for figures sent to funders, for stakeholder communications, and for internal reviews from the block level upwards.

How did this happen?

Here are some simple practices that helped.

1. Driving from the top: Our new CEO first insisted that the central M&E team improve timeliness; if the outreach report is late even by a day, we are asked why. In addition, on the weekly call for the senior management team, leaders are asked to explain the numbers reported for their region. Based on these explanations, corrective action is discussed.

2. Making outputs and outcomes the basis for all performance measurements: Field staff, managerial staff, regional leaders and head office staff all have KPIs related to organisational outputs and outcomes. As a result, there is investment in the outreach report at every level.

3. Creating an index that rates districts and programmes on the basis of key indicators in the report: Every month, the top districts are listed and patterns in performance are highlighted, encouraging the ‘high performers’ to continue to strive for outputs and the remaining teams to endeavour to reach the top ten (a minimal illustrative sketch of such an index appears after this list).

4. Keeping it simple: We have identified four key areas for monitoring. Questions and follow-ups focus particularly on these four areas, even at the risk of leaving out several activities that may be significant in volume but not directly linked to programme outcomes.

5. Ensuring validity of data: Our monitoring process now includes ‘surprise visits’ that help ensure reporting matches activities on the ground. Additionally, central teams carry out detailed validation exercises every month, tracing a path from the outreach report back to a set of programme participants. This is done for a representative sample of our participants, to ensure the validity of the data that is reported.
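To make the district index in point 3 concrete, here is a minimal illustrative sketch in Python. The indicator names, weights and figures are hypothetical, not the actual metrics in the Magic Bus outreach report; the point is simply to show how a composite score can be computed from a handful of key indicators and used to rank districts each month.

```python
# Illustrative only: indicator names, weights and data are hypothetical,
# not the actual metrics used in the outreach report.

# Monthly figures per district, e.g. drawn from the central outreach report.
districts = [
    {"district": "District A", "sessions_delivered": 420, "attendance_rate": 0.81, "mentor_retention": 0.92},
    {"district": "District B", "sessions_delivered": 510, "attendance_rate": 0.74, "mentor_retention": 0.88},
    {"district": "District C", "sessions_delivered": 305, "attendance_rate": 0.90, "mentor_retention": 0.95},
]

# Hypothetical weights reflecting which indicators matter most to outcomes.
weights = {"sessions_delivered": 0.5, "attendance_rate": 0.3, "mentor_retention": 0.2}

def composite_score(row):
    """Normalise each indicator against the best-performing district, then take a weighted sum."""
    score = 0.0
    for indicator, weight in weights.items():
        best = max(d[indicator] for d in districts)
        score += weight * (row[indicator] / best if best else 0.0)
    return score

# Rank districts for the month and list them, top performers first.
ranked = sorted(districts, key=composite_score, reverse=True)
for rank, row in enumerate(ranked, start=1):
    print(f"{rank}. {row['district']} (score: {composite_score(row):.2f})")
```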


Challenges we have faced

1. Having accurate and reliable data: One of our chicken-and-egg questions was whether we should begin to analyse data or first ensure its accuracy. We thought we would focus on validity before using the data to score districts, or for any other number crunching. However, we soon realised that this was becoming a loop: teams wouldn’t take the data collation exercise seriously if the data wasn’t being used anyway. So, in a sense, beginning to use the data helped us get cleaner data.

2. It takes time and effort to get buy-in internally: Teams can feel uncomfortable with this much focus on quantitative data, especially when it does not capture the many tasks they do on a daily basis. This can be counter-productive unless the reasons for prioritising certain indicators over others are clarified. Intermittent collection of case stories and narratives from the field complements the quantitative monitoring process.

Photo Courtesy: Magic Bus

What we have learned

1. Have one central report to focus on: Prioritise what data are needed and at what frequency. Compile one report that captures this data, determine how often it will be produced, and then execute without exception. Over time, this report will need changing: new data will be needed, and some will become redundant. But its fundamental purpose and structure will probably remain.

2. Identify the outputs that need attention: Once the report is compiled, prioritise the outputs that are critical to the programme. Limit these to five indicators at most. Ask questions about only those outputs regularly. This is perhaps the most difficult part of the exercise because field teams do multiple tasks in order to keep a programme running. Yet, it is crucial to build an understanding of which tasks are mandatory and critical, and which are ‘good to have’.

3. Create a sense of competition: Not only does this communicate the importance of the data to internal stakeholders, it also helps field teams and their leaders get recognised for results, and builds a data-focused culture. It is crucial to balance this with encouragement of honest reporting of challenges: some geographies are simply much harder to run programmes in, and that has to be factored in.

4. Ensure all external reporting follows through from this one report: This allows for consistent communication and reduces time spent by field teams reporting to different funders, thus improving overall efficiency.

5. Don’t wait for technology: This past year has shown us that while technology can be a wonderful enabler, its absence is no reason to be anything but rigorous about programme monitoring, and about subsequently using that data to enable better results for stakeholders.

ABOUT THE AUTHORS
Havovi Wadia

Havovi Wadia is CEO at STCI. She has over two decades of experience across diverse sectors, having started her career as a teacher, moved into banking, and then to the development sector. She is committed to the rights of children and to an understanding of childhoods. Her current focus is on visibilising the particular vulnerabilities of survivors of trafficking and children with disabilities, and finding ways to mobilise support and long-term change for them from a range of stakeholders.
