In the second blog of his two-part series, 3ie Senior Research Fellow Johannes Linn builds on the discussion in Part I around the factors that support and hinder the scaling process and pathway. In this piece, he writes about both quantitative and qualitative evidence-based evaluation of scaling efforts and the practical application of these approaches.
So how do we evaluate scaling efforts or, in other words, how do we use evidence to inform the scaling process? There are as many answers to this as there are projects, programs, sectors, thematic areas, etc., but some general ideas may be helpful in addressing this question.
First, consider evidence on whether the intervention “works” as intended at a given (usually small) scale and under given circumstances. Randomized controlled trials (RCTs) are the preferred source of such evidence, although qualitative evidence may also be needed. Having multiple RCTs from different contexts helps, since it allows an evidence-based assessment of contextual factors.
Second, look for evidence to inform the vision of scale – it helps to know the potential market, who the expected adopters or beneficiaries are, and so on (e.g., smallholder farmers, where they live, and what their characteristics are); here one can rely predominantly on quantitative data (surveys).
Third, consider evidence on the enabling factors – this will generally involve a combination of quantitative and qualitative data.
At the simplest level, the evaluator will ask five questions:
Question 1: Is the project design based on a clear conception of the overall scaling pathway, i.e., is the project addressing a well-specified problem, and is there a vision of scale if the project is successful?
Question 2: Are the interventions under the project clearly identified, and is there evidence that they are appropriate, i.e., likely to have the expected impact at the particular stage in the scaling process?
Question 3: Have the critical potential enabling factors been appropriately considered and, to the extent possible, put in place? If certain constraints cannot be altered (e.g., policy constraints, lack of financing, institutional weaknesses, political opposition), has the project design been adjusted to reflect them?
Question 4: Is program sequencing appropriate, in terms of continuity beyond project end, in terms of vertical and horizontal sequencing (i.e., regional or nationwide program development versus local replication across different areas or population groups), in terms of building systematically on the experience of pilots or prototypes, and in terms of a systematic assessment of scalability?
Question 5: Does the monitoring, evaluation, and learning (ME&L) approach include an explicit focus on scaling?
The author has used this simple set of questions in working with various development institutions (including IFAD, UNDP, and AfDB) and their project and program teams to assess whether their project design and implementation adequately reflected scaling considerations and what needed to be done to improve the scaling focus.
By way of example, take Figure 2, which shows a summary analysis of three projects/programs in Moldova supported by an international development agency, including a bio-energy program: a carefully sequenced, multi-year, multi-project program in support of biomass energy development for rural communities. As the color-coded assessment shows, the biomass program was particularly strong (blue) in the overall design areas at the top of Figure 2, but had a mixed record on the enabling conditions, since it only partially addressed policy, fiscal, and partnership aspects. And while excellent on continuity of project engagement, it made only partial use of scalability assessment and gave limited consideration to scaling aspects in its ME&L.
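For readers who want to apply the rubric systematically across a portfolio, the five questions can be recorded as a simple color-coded checklist. The sketch below (in Python) is one hypothetical way to do so; the rating levels mirror the strong/partial/limited coding described for Figure 2, and all names, labels, and example scores are illustrative rather than part of the author's framework.

```python
# Minimal sketch: recording the five-question scaling rubric as a
# color-coded checklist, in the spirit of the Figure 2 summary.
# All names (Rating, CRITERIA, the example scores) are illustrative.
from enum import Enum

class Rating(Enum):
    STRONG = "strong"    # criterion fully addressed (blue in Figure 2)
    PARTIAL = "partial"  # criterion only partially addressed
    LIMITED = "limited"  # criterion largely unaddressed

# The five questions, abbreviated as rubric criteria.
CRITERIA = [
    "Q1: clear scaling pathway and vision of scale",
    "Q2: interventions identified and evidence-backed",
    "Q3: enabling factors considered / constraints reflected",
    "Q4: sequencing appropriate (continuity, replication, pilots)",
    "Q5: ME&L includes an explicit focus on scaling",
]

def summarize(project: str, ratings: dict[str, Rating]) -> None:
    """Print a one-line-per-criterion summary for a project."""
    print(project)
    for criterion in CRITERIA:
        rating = ratings.get(criterion, Rating.LIMITED)
        print(f"  [{rating.value:>7}] {criterion}")

# Hypothetical scores echoing the biomass program discussed above:
# strong on design, mixed on enabling factors, limited on ME&L.
summarize("Biomass energy program (illustrative)", {
    CRITERIA[0]: Rating.STRONG,
    CRITERIA[1]: Rating.STRONG,
    CRITERIA[2]: Rating.PARTIAL,
    CRITERIA[3]: Rating.PARTIAL,
    CRITERIA[4]: Rating.LIMITED,
})
```

Laid out this way, a portfolio review reduces to one such summary per project, which makes cross-project comparisons like those in Figure 2 straightforward.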
The practical approach presented here is only one of many available, some more elaborate than others. The best advice to the evaluation practitioner is this: choose an approach that fits your program’s needs, preferably one that keeps things simple – but in any case, do not forget the scaling dimension.
There are five lessons for scaling design and evaluation:
The Scaling Community of Practice and its evaluation working group are where you will encounter many peers who are searching for answers to these challenges and are willing to share their experiences.
This article was originally published on the International Initiative for Impact Evaluation (3ie) website.