Does better evidence lead to better policies and programs? In recent decades, massive amounts of reliable evidence have been generated and disseminated, drawing on scientifically strong methods that include randomized controlled trials and mixed-methods approaches. A Nobel Prize has been awarded for that pathbreaking work. Yet the impact of that evidence on what policymakers and program implementers think and do has fallen far short of expectations, and by some accounts has been pitifully small. That is especially true in international development work, which focuses on the countries in Africa, Asia, and Latin America that are striving to rise faster out of poverty and are home to over 80 percent of the planet’s eight billion people. The evidence on the power of evidence to change the world has been disappointing.
Most commentary on that awkward reality has come, perhaps inevitably, from the generators and purveyors of evidence. More attention needs to be given now to what others say about this conundrum, especially the intended main recipients of the evidence: the decision-makers in the world of action, including government officials and program managers. What is their take on the problem of weak policy uptake of evidence? If both sides of the market for evidence, the suppliers and the users, understood each other’s perspectives better, both could benefit, getting more of what they want with less wasted effort.
Getting that to happen will not be easy. The suppliers are mostly researchers. They have views on what users think and want but rarely know firsthand what it is really like to be in the hot seat of policymaking and program delivery day after day. The users, similarly, often have only hazy notions of the finer points of the evidence available: how it was derived, how reliable it is, and what its limitations are. The two camps speak different languages and reside in different universes.
One impediment to bridging that gap is that there is no simple way to pin down users’ perspectives. Surveys and other scientific tools can try to parse precisely what policymakers and program implementers think about evidence, and why, when, and how they do or do not use it, but they will never be sufficient. The only dependable way to assess evidence users’ thinking and behavior is to spend considerable time being a policymaker or program implementer, or working closely with them, experiencing the rough and tumble of advising, supporting, or negotiating with them. Not everyone has the time, opportunity, or inclination to do that. But a lot can be learned from talking with, and listening carefully to, people who have.
As someone who has worked as a policymaker and at 3ie, a supplier of evidence and an advocate for its use in decision-making, I care deeply about these challenges and understand their complexities firsthand. In my current role as a 3ie senior fellow, I am focusing on how to improve the interface between evidence suppliers and evidence users at all levels, working closely with the organization’s Evidence for Policy and Learning Team.
Drawing on my own experience and networks, I had the privilege of completing an in-depth examination of five particularly interesting policymakers (see Reformers in International Development: Five Remarkable Lives, published by Routledge).
Conversations with these individuals have helped highlight some fundamental principles for facilitating and enhancing evidence uptake in policymaking. Seemingly obvious at first glance, these principles reveal challenging complexities on closer inspection, along with practical steps that can help.
First, if the creators, providers, and advocates of evidence truly want to promote more and better uptake of it that results in improved policies and programs, they need to approach that task by putting themselves more in the shoes of the people who decide policy and oversee programs. Data people need to learn to think the way doer people do. This means learning their language and meeting them on their turf, not just figuratively but also literally, by spending time with doer people whenever, and as much as, possible. Evidence producers need to own the fact that the constraints that policymakers face, the barriers they must overcome, and the gauntlet they have to traverse in order to get anything adopted are fully a part of what a good researcher must take into account. Presuming that those practical policymaking and implementational realities are ‘someone else’s business’ that evidence producers can stay apart from is a sure ticket to irrelevance.

As examples of doers, the five decision-makers in my Reformers book were hungry for evidence that settled key pragmatic questions, not distant general propositions. Ela Bhatt, when helping millions of impoverished working women in India to build better lives for themselves, needed to know what would work for them and what would not. When the women needed to create their own bank, she needed to know how it should be designed to be sustainably viable. When another of the five, Dzingai Mutumbuka (now a 3ie Board Member), was a cabinet minister charged with creating a new education system in a newly independent African country where 97 percent of the population had never had the chance to go to school, he needed to know what his initial top priority should be. When donors pursued him with proposals for what they thought he should do, but failed to provide convincing evidentiary support for them, he had to work hard to find better answers on his own, answers better tailored to the context he had to deal with.
Second, researchers need to recognize that an essential aspect of putting oneself in the shoes of policymakers is helping them explain evidence compellingly to their many and diverse stakeholders. If decision-makers are going to stick their necks out to act on some crucial piece of evidence, they will need to present and defend it well across the whole trajectory of the decision-making journey: from floating a new policy initially among close colleagues, to sharing it widely with parliamentarians and voters, to coping with attacks from critics, to commenting on how it has turned out once implemented. To be good at all that, decision-makers need to understand the evidence thoroughly themselves and be comfortable walking others through it. Researchers need to help with that.
Everything about a piece of evidence, including where it came from, how it was developed, what it means, and how reliable it is, must be fully transparent in the sense of being understandable to anyone who might want to know. When Ngozi Okonjo-Iweala, another of the five main characters in Reformers, was the Nigerian cabinet minister responsible for bringing government spending back from the chaos left by the military regime that preceded the democratically elected government she came in with, she needed evidence that was incontrovertible. Shrewdly disarming critics, she had all the details of her proposed new budget published in a book that immediately became a bestseller across the country. When Adolfo Figueroa, still another of the five, was working out his proposals for tackling the extreme poverty among the large indigenous populations of the Andean highlands of his native Peru, he insisted on finding arguments that could be understood even by an ordinary “shoeshine boy.”
Third, putting oneself in the shoes of policymakers may require researchers to take on tasks that go beyond what would be necessary from a research perspective alone. For example, in the real world, first-best solutions are often not feasible, whether because of political impasses, administrative limitations, or other reasons. So policymakers need evidence showing not only the best course of action but also second- and third-best alternatives that may be more attainable in their specific context. If politics is the art of compromise, policymaking is the science of choosing better when best is out of reach. Evidence generators and disseminators can do themselves, and policymakers, a favor by providing guidance on what to do, in various circumstances, when optimal solutions cannot be achieved.

In addition, evidence producers should have a sensitive ear for the exact nature, including the degree of precision, of the information that decision-makers require. Sometimes what policymakers most need to know is whether a certain value exceeds a given threshold: for instance, that the rate of return of some program will be greater than, say, 10 percent. In that case, a precise point estimate (say, a rate of return of 16 percent with a confidence interval of plus or minus 4 percentage points) is of secondary interest to the policymaker. Simply knowing that the answer is almost assuredly above the critical threshold (10 percent in this example) is enough. When Domingo Cavallo, the fifth of the five in Reformers, was deciding how best to ratchet down the hyperinflation that was ravaging his country, Argentina, in the early 1990s, he could not wait for finely calibrated point estimates of the effects of the reforms he was considering; he just needed to know whether their impact would, grosso modo (roughly speaking), be large or small.
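To make that threshold logic concrete, here is a minimal worked illustration using the hypothetical figures above (an estimated 16 percent return, a margin of plus or minus 4 percentage points, and a 10 percent decision threshold); the numbers are purely illustrative, not drawn from any actual evaluation:

\[
\hat{r} = 16\%, \quad \text{margin} = \pm 4\ \text{pp} \;\Longrightarrow\; r \in [12\%,\ 20\%], \quad \text{and } 12\% > 10\%.
\]

Because even the lower bound of the plausible range clears the threshold, the decision question is settled; whether the true return later proves to be 13 percent or 19 percent does not change what the policymaker should do.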
Drawing lessons from the evidence on how to make evidence most useful will continue to be a key factor in driving that change.
This article was originally published on 3ie.