July 19, 2019

Single studies cannot inform policy-making

While evidence is important for forming policies and programmes, using it does not always lead to the right decision, particularly when the evidence base is thin.


For a development evaluator, the holy grail is to have evidence from one’s study be taken up and used in policy or programming decisions that improve people’s lives. It’s not easy. Decisions are based on many factors. The availability of evidence is just one of them. And of course, even when evidence is taken up, it does not mean that it will lead to the right decision.

A dramatic example of the complexity of uptake is the political and judicial controversy that has recently swirled in the Philippines. The events could lead to a prominent researcher and senior officials (including a former health minister) being prosecuted precisely for acting on evidence of impact. The problem is that they are alleged to have done so prematurely.

Dengue is a mosquito-borne disease that threatens the health, and sometimes the lives, of millions of people in many countries, including here in India. In 2014, an article published in The Lancet reported the efficacy of a new vaccine, Dengvaxia, in dramatically lowering the incidence of the disease. The vaccine was approved in 19 countries, including the Philippines. In late 2016, the Philippines procured millions of doses and, over the span of 18-24 months, proceeded to inoculate some 800,000 school children.


Picture courtesy: Wikimedia Commons


Arguably, the original study ticked many of the boxes for success. Its findings were taken up and turned into policy that could improve lives. An added bonus was that the lead author of the article, which had passed the most rigorous of peer reviews, was from a low- and middle-income country, the Philippines.

But life is seldom so simple. Recent articles in Science and Scientific American document why this study has stirred up such controversy in the country. Criminal charges are now being considered against the researcher and the health officials who sanctioned the programme. While the clinical trials were apparently well conducted, there is doubt about how policies were drawn from them. The vaccine, while very effective as a prophylactic for those who have already been exposed to dengue, may actually be dangerous for those who have never been infected. Prosecutors may use subsequent studies to argue that policy influence can even be deadly.

It would be a shame if this example disincentivises decision makers from using evidence. Especially these days, when the standards for evidence seem under attack, rigorous research cannot be neglected. It wasn’t long ago that another vaccine study, now discredited, suggested a link between the measles, mumps, and rubella vaccine and autism. Although the study has been debunked, it continues to fuel vaccine hesitancy across Europe. This is why it’s so important to assess when and how evidence from rigorous studies should be taken up. At 3ie, our team has been using such experiences to draw lessons.


One lesson is the danger of relying on single studies to inform policy. Multiple contextual factors can influence effectiveness. To the extent possible, evidence should be synthesised through rigorous, theory-based systematic reviews that include meta-analysis. 3ie not only supports this work but also facilitates access to evidence through our systematic review repository.

When there is not enough evidence to synthesise, it is important to generate more. For example, 3ie has an evidence programme that supports studies testing innovative approaches to engaging communities to improve low and stagnating immunisation rates across different contexts. Study findings show that in low- and middle-income countries, there are important practical barriers to immunisation: a lack of information on when, where, and why to get children immunised, travel costs, long waits at health centres, and a dearth of health staff and services.

A final lesson is the importance of promoting research transparency and replication. The public inquiry into the Dengvaxia episode was prompted by a healthy scholarly debate that included policymakers. A critical aspect is the replication of scientific work. The 'crisis' caused by the inability to replicate some highly influential studies (including in the social sciences) is beginning to change how research is done. 3ie has been supporting this work through its replication programme and through our strong commitment to improving transparency.

Hopefully, lessons like these on the use of rigorous evidence will be helpful, especially in a world where facts are so easily distorted.

With inputs from Radhika Menon. This article was originally published on 3ie.

ABOUT THE AUTHORS
Emmanuel Jimenez

Emmanuel Jimenez is Executive Director of the International Initiative for Impact Evaluation (3ie). He came to 3ie after 30 years at the World Bank Group, where he provided technical expertise and strategic leadership in a number of research and operational positions. He was lead author of the World Bank's 2007 World Development Report, Development and the Next Generation. Before joining the Bank, Dr Jimenez was a member of the economics faculty at the University of Western Ontario in London, Canada. He received his PhD from Brown University.
