What India will look like in the next two decades will depend on what we do today–how we invest in our young and the extent to which we close the gap between our girls and boys.
Given that one in five Indians–a total of 253 million–are adolescents aged 10-19 years, policy makers are recognising the importance of developing a healthier, better educated, and skilled young population. And now, more players–foundations, donor agencies, corporates and philanthropists–are joining this effort, committed in particular to empowering India’s girls.
Despite this growing commitment, a question that too few of us are asking is: are we reaching the population of girls most in need of our intervention?
We assume that it’s enough to locate our programme in a poor village and invite any girl interested in participating to join. After all, we are working in a poor village, within a poor state. Surely that in itself establishes that we are reaching the most vulnerable girls, and girls most in need of our intervention?
Well, yes. And no. It is true that a programme intending to reach girls in poor districts or villages will benefit the girls who are participating. But expecting that the programme will also automatically benefit the most vulnerable among these girls is a false assumption.
To really make a difference, programmes must be designed with an eye on including the most vulnerable.
Some years ago, a project I was involved with aimed to develop the life and livelihood skills of girls in rural Uttar Pradesh. We aimed to raise their awareness of good health practices, change traditional attitudes and build their skills and aspirations for future livelihoods.
Any and every girl between the ages of 13 and 19 from that village was eligible to participate in the weekly sessions held at the local anganwadi. Before the project began, we went house to house, identifying all the girls in this age group.
We talked to the girls and their families about the project, why it mattered, where it would be held and at what time, making sure that we were addressing all their concerns. We then invited all these girls to attend, and many did enroll in the 12-month programme and even attended its sessions regularly.
We measured the effect of our intervention through what is called a quasi-experimental design. Prior to the start of the programme, a survey assessing girls’ knowledge, attitudes and practices was carried out among girls in our intervention villages as well as in similar villages where the intervention was not being implemented. When the programme concluded we repeated the survey, making special efforts to include those who may have moved away in the intervening period.
This design allowed us, upon programme completion, to compare two things:
- The change among girls in the intervention villages, both those who had attended the sessions and those who had not, before and after their exposure to the intervention
- Those changes against the corresponding changes among girls in comparison villages with similar background characteristics where the programme wasn’t conducted
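The two comparisons above amount to what evaluators call a difference-in-differences estimate: the change in the comparison villages is subtracted out, so that secular trends affecting all girls are not credited to the programme. A minimal sketch, using hypothetical survey figures (not the actual evaluation's numbers):

```python
# Difference-in-differences sketch. All figures are illustrative,
# not drawn from the actual evaluation.

def did_estimate(treat_pre, treat_post, control_pre, control_post):
    """Programme effect = change in intervention villages
    minus change in comparison villages."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical: share of girls who know the minimum legal age at marriage.
effect = did_estimate(treat_pre=0.40, treat_post=0.70,
                      control_pre=0.42, control_post=0.50)
print(f"Estimated programme effect: {effect:+.2f}")  # +0.22, not +0.30
```

Note that the naive before-after change in the intervention villages (0.30 here) overstates the effect, because knowledge also rose in the comparison villages (0.08) without any intervention.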
Our evaluation found that girls in the ‘project’ villages exhibited significantly higher levels of knowledge about the minimum legal age at marriage, contraception and pregnancy. They also displayed more positive gender role attitudes–for example, that boys are no better than girls in studies, that girls should be allowed to decide when and whom they marry, and that girls can take up occupations traditionally reserved for men.
While these results validated our intervention, like many others they also exposed a critical exclusion on our part.
The girls who participated in our programme were a self-selected group. They came from better-off, landed, general-caste households, and were better educated than the girls who had not joined the programme.
Many of these girls were already displaying greater self-confidence and could communicate confidently with peers in group settings. We had failed to reach the most vulnerable.
What we had been optimistically attributing to the programme was nothing more than a comparison between better-off girls in our programme and a representative sample of all girls in the comparison sites. And so the behavioural changes that we were associating with our intervention were skewed. We had unwittingly excluded the most vulnerable–the Dalit, the Muslim, the out-of-school and the married.
So what lessons can a programme implementer and an evaluator draw from this example?
Those who are most vulnerable will continue to be excluded unless special efforts are made to include them. This could happen for many reasons. Parents may not appreciate the value of such exposure for their girls, girls may have other responsibilities such as wage or housework, or they may be more socially isolated and not permitted freedom of movement. For some, caste dynamics may make them feel unwelcome at best, and actively excluded from joining the programme at worst. As programme implementers we must be aware of this likelihood and recruit accordingly.
A quick mapping of each household’s caste and poverty status, and of each girl’s marital and school-going status, provides good indicators of vulnerability.
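Such a mapping can be kept very simple. The sketch below flags girls for deliberate outreach from a house-to-house listing; the field names, categories and criteria are hypothetical illustrations, not the definitions used by any actual programme:

```python
# Hypothetical vulnerability flagging from a house-to-house listing.
# Field names and criteria are illustrative only.

def is_vulnerable(girl):
    """Flag a girl for deliberate outreach if any indicator applies."""
    return (
        girl["household_below_poverty_line"]
        or girl["caste_group"] in {"SC", "ST"}  # socially disadvantaged groups
        or girl["married"]
        or not girl["in_school"]
    )

listing = [
    {"name": "A", "household_below_poverty_line": False,
     "caste_group": "general", "married": False, "in_school": True},
    {"name": "B", "household_below_poverty_line": True,
     "caste_group": "SC", "married": False, "in_school": False},
]
priority = [g["name"] for g in listing if is_vulnerable(g)]
print(priority)  # ['B']
```

The point of a flag like this is not scoring precision; it is to give recruiters a checklist so that no flagged household is skipped when sessions are announced.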
And contrary to common belief, this doesn’t impose a significant additional cost; it can be done over the course of 1-2 days in each project village, as demonstrated in the Meri Life Meri Choice (MLMC) intervention undertaken by MAMTA and evaluated by the Population Council.
The MLMC project team made a conscious decision to focus on vulnerable adolescent girls, defined as those belonging to economically poor households or socially disadvantaged religious or caste groups. Teams went house to house collecting information on each household’s status, and made deliberate efforts to ensure that girls from vulnerable households were not left out of the intervention.
Evaluators too need to be alert to this potential selectivity of participants.
Analysis must explore the characteristics of those eligible girls who opt to participate in the programme and those who are excluded. If the most vulnerable are under-represented, the programme will fail to reach its intended audience.
Evaluations must likewise ensure that when programme effect is measured, statistical methods are used that isolate the ‘pure’ effect of the programme from any differences that may exist between the intervention and comparison groups in terms of social and economic indicators.
For example, if, for some reason, the intervention group is better educated or contains fewer girls from socially isolated castes than the comparison group, the analysis must take this into consideration. It must demonstrate that the greater empowerment observed among girls in the intervention sites is attributable to the intervention and not to the fact that these girls are better off than those in the comparison sites.
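One standard way to take such differences into consideration is to include them as controls in a regression. The sketch below uses synthetic data in which girls in the intervention group happen to be better educated, so a naive group comparison mixes the programme effect with the education gap; all variable names and effect sizes are illustrative assumptions:

```python
# Sketch: adjusting for group differences with a regression, on
# synthetic data. Variables and effect sizes are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
treated = rng.integers(0, 2, n)           # 1 = intervention village
# Assumed confound: treated girls are also better educated on average,
# and education itself raises the outcome.
education = rng.normal(0, 1, n) + 0.5 * treated
outcome = 0.3 * treated + 0.4 * education + rng.normal(0, 1, n)

# Naive comparison mixes the programme effect with the education gap.
naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# Regression with education as a control isolates the programme effect.
X = np.column_stack([np.ones(n), treated, education])
coefs, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print(f"naive difference:  {naive:.2f}")
print(f"adjusted estimate: {coefs[1]:.2f}")  # closer to the true 0.3
```

Here the naive difference absorbs the 0.4 × 0.5 education gap on top of the true 0.3 effect, while the adjusted coefficient on `treated` recovers something close to 0.3. Matching and propensity-score methods serve the same purpose when a regression is not appropriate.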
Ideally, effects should be measured at population level, rather than at beneficiary level. We need to show that our programme has improved the life of all girls in the village and not just those who were exposed to our intervention. This means we need to reach enough girls to make a difference at village level.
The reflections presented here may seem obvious in hindsight, but too often, in our enthusiasm to empower girls, we gloss over such issues. Real progress happens only when programmes are truly inclusive.