Both of us started our journeys in education at the grassroots around the year 2020. We immersed ourselves in classrooms and communities, listening to children, working with teachers, problem-solving with administrative staff, and navigating state machinery to see change happen (or stall) at scale. We joined the State Teacher Professional Development team at Simple Education Foundation (SEF), closely working with the State Council of Educational Research and Training (SCERT) in Delhi, with a shared conviction—that change from within was possible, and that teachers and children needed solutions designed with them and grounded in their realities.
Since then, our paths have diverged. One of us designs large-scale learning experiences for teachers, attempting to make state-level professional development meaningful and relevant; the other builds systems to analyse data from lakhs of teachers and students, track implementations, and uncover what’s really happening on the ground. Yet, we continue to wrestle with the same question: How do you scale a solution without losing its local contextual uniqueness?
The universal dilemma: Standardisation versus contextualisation
For nonprofits working with public systems, there is a constant trade-off between standardising their work and contextualising it. Standardisation ensures scale, quality, and quick growth in reach, but risks overlooking the particularities of each case. Contextualisation, on the other hand, makes programmes relevant, owned, and effective on the ground, yet can lead to fragmentation and inconsistency.
In our work with state governments, this tension is not theoretical; it shows up in real choices. Till 2021, our core focus was working with principals, teachers, and parents in seven to eight schools. But when we got an opportunity to expand to 1,000 schools, we had to decide: Whom should the programme centre on—teachers, principals, or parents? Even while designing teacher training modules for the state, we had to ask: Should every teacher receive the same training, or should we tailor modules by subject, grade level, and geography? When a teaching strategy works in our partner schools, we ask ourselves: Do we scale it exactly as it is, or design only those solutions that can be successfully scaled? Even simple tools give rise to these questions. Do we create a single observation tool for all classrooms, or customise it for different grades, subjects, and school contexts?
Constraints of time, resources, and reach often force trade-offs. Deeply adapted models risk becoming so rooted in one context that they cannot be replicated elsewhere, while broad solutions risk losing relevance altogether. The challenge is not choosing one over the other but designing solutions that hold both in balance, a tension that continues to shape our work every day.
This ongoing search for balance has shaped our approach and yielded the following insights.

Two challenges and a learning
Case for standardisation: Forming a common ‘how’
Designing effective professional development for teachers at scale is both urgent and complex. The education landscape—and the needs of children—is evolving faster than ever. Teachers need continuous, high-quality learning spaces to adapt their practice meaningfully. But reaching lakhs of teachers across any state with relevant and engaging training is no small feat.
One of our biggest challenges was fragmentation. We were operating in a landscape where several central actors—government departments, SCERT faculty, DIET (District Institute of Education and Training) professors, nonprofits—were all designing training modules for the same set of teachers. Each brought valuable perspectives, but also their own priorities and methods. Some designs were deeply insightful, while others overlooked the teacher’s voice completely. The result was a patchwork of solutions: sometimes effective, often inconsistent, and rarely connected by a shared vision.
To address this, we co-created the Teacher Competency Framework (TCF)—a common anchor for all training design. It articulated the knowledge, skills, and mindsets teachers need to help children learn deeply and joyfully. The framework was built over nine months of deliberation with expert educators, drawing on national and international research and on the voices of thousands of teachers.
Alongside the TCF, we worked with the state to develop a standardised training design process. Every training would begin with a needs analysis and be co-created with system actors—teachers, teacher coaches, and DIET professors—ensuring the teacher’s voice remained present throughout. In other words, we standardised how training was designed, without prescribing what it had to contain.
We saw this approach come alive when training was decentralised in Delhi. Across 10 districts, we followed the same steps: Conduct a needs analysis, align with the TCF, co-design the training module, and make it engaging. The result? Each district developed its own module—aligned with shared standards of pedagogy and self-development, but customised to its local context. For instance, some districts focused on managing stress, while others trained teachers on using technology.
Teacher feedback was overwhelmingly positive. The standardised process became the foundation for better contextualisation. In public systems, standardisation isn’t about making everything identical; it’s about creating a shared starting point. It allows everyone to speak the same language, measure the same outcomes, and focus adaptation where it truly matters.
For us, the TCF and standard operating procedure (SOP) for training have become that starting point—a compass that keeps us heading in the same direction, no matter the terrain.
Case for contextualisation: Forming an adaptable ‘what’
One of the toughest challenges in public systems is that what works well in one state often fails the moment you cross a border, especially when it comes to technology. State governments have different administrative structures, data practices, and training ecosystems. A system that is too closely tailored to one context becomes unusable in another.
We experienced this with our first attempt at a training data system. When the TCF began shaping Delhi’s professional development efforts, we needed a way to track training delivery and outcomes for more than 70,000 teachers across 13 programmes. Our solution was the Compass management information system (MIS)—a digital platform that enables monitoring and evaluation (M&E) of large-scale teacher training programmes at the state level.
In Delhi, it worked well. The system was built around a closed, master database of every employee—teachers, mentor teachers, and principals, each mapped to their zone, district, and training programme. It worked well in Delhi because the state had well-maintained records and a clear cadre of facilitators with fixed designations. This also made it easy to use and scale.
The problems started when we tried to take it beyond Delhi.
- Geographical mismatches: Delhi was organised into zones. Other states are organised as districts, blocks, and clusters. Our system did not recognise these categories. Drop-down menus built for ‘zones’ could not accommodate ‘blocks’ or ‘clusters’.
- Incomplete databases: Unlike Delhi, many states did not maintain a centralised, updated list of employees. Some had partial data; others none at all. Without this data backbone, the validations and autofills that Compass relied on stopped working.
- Different facilitator structures: Delhi had a relatively small and clearly defined cadre of facilitators. Other states had larger and more varied groups with different designations. Our system could not validate these roles because it was coded only for Delhi’s hierarchy.

The result was a system that worked in Delhi but collapsed elsewhere. We had not kept future expansion in mind. By tying the MIS so closely to one state’s database, geography, and hierarchy, we had designed ourselves into a corner.
This forced us to rethink the challenge. It was not only about building a strong system for Delhi. It was about building a system that could travel. The question shifted from “How do we capture every detail of one state?” to “What parts of this system must remain universal and what must stay open for local adaptation?”
The answer lay in returning to the intervention design. We rebuilt Compass with the design of the intervention at its core. The elements that defined the purpose of the training system—how data was collected, how quality was assured, how outcomes were tracked—remained steady everywhere. Around this core, we created flexible modules that states could adapt to their own realities.
- Consistent intervention core: Data protocols, assessment structures, and quality assurance processes stayed the same across states, ensuring comparability and coherence.
- Flexible geography modules: States could map their own administrative units, whether these were zones in Delhi or districts, blocks, and clusters in Punjab.
- Customisable facilitator roles: Local designations could be added without disrupting the system, unlike the rigid roles in the earlier version.
- Adaptable language and workflows: States could run the system in their preferred languages and tailor workflows to their administrative pace and structures.
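To make the core-plus-modules idea concrete, here is a minimal sketch of how such a system might separate a fixed intervention core from state-configurable modules. This is purely illustrative—Compass’s actual implementation is not public—and all names (`StateConfig`, `CORE_FIELDS`, `validate_record`, and the sample states and roles) are our own hypothetical constructions.

```python
from dataclasses import dataclass

@dataclass
class StateConfig:
    """Per-state module: each state declares its own geography and roles."""
    name: str
    geography_levels: list[str]   # e.g. ["zone"] or ["district", "block", "cluster"]
    facilitator_roles: list[str]  # local designations, freely extendable
    language: str = "en"

# The intervention core stays identical across states,
# preserving comparability of data and outcomes.
CORE_FIELDS = ["teacher_id", "training_id", "attendance", "feedback_score"]

def validate_record(record: dict, state: StateConfig) -> bool:
    """Core fields are mandatory everywhere; geography keys come from the
    state's own module instead of a hard-coded 'zone' column."""
    required = CORE_FIELDS + state.geography_levels
    return all(record.get(key) not in (None, "") for key in required)

# A zone-based state and a district/block/cluster-based state
# run on the same core without any rebuild.
delhi = StateConfig("Delhi", ["zone"], ["mentor teacher"])
punjab = StateConfig("Punjab", ["district", "block", "cluster"], ["block facilitator"])

record = {"teacher_id": "T1", "training_id": "TR9", "attendance": True,
          "feedback_score": 4, "district": "Ludhiana", "block": "B2", "cluster": "C7"}
print(validate_record(record, punjab))  # valid under Punjab's hierarchy
print(validate_record(record, delhi))   # invalid: no 'zone' key
```

The design choice this sketch illustrates is that validation logic never names any state’s administrative units directly; it only reads them from the state’s module, so adding a new state means adding configuration, not code.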
This approach gave states both grounding in the programme and flexibility to explore their specific interests. The core kept the intervention design intact. The flexible modules made the system feel like it belonged to the state. For example, Punjab could map block-level facilitators directly into the MIS, while states with weaker databases could get started with partial data and strengthen it over time.
The lesson was clear: Over-contextualisation locks you in. True contextualisation means keeping the intervention design steady, while building flexibility into everything else.
Lessons from the balancing act
The big takeaway? Balancing contextualisation and standardisation isn’t a one-time design choice; it’s an ongoing negotiation as your work grows. Over the years, through many cycles of trial, error, and redesign, we’ve learned three lessons that now shape how we approach this balance:
1. We standardised the “how,” not the “what”
In the beginning, we tried to keep the content uniform across states. It seemed efficient, but it ignored local nuances. A training example that worked in Delhi often made little sense in Uttarakhand’s multi-grade classrooms.
So we shifted our approach. The process—how registration happened, how attendance was tracked, how feedback was collected—remained the same everywhere. This gave us clean, comparable data and smoother coordination. But the content could be flexible. States could swap case studies, examples, and role-plays to match their teachers’ realities.
2. We stopped looking only at formal hierarchies
Scaling within government systems meant navigating increasingly layered bureaucracies and a deeply entrenched top-down approach. Early on, we realised that formal organisational charts told only part of the story; they rarely captured how decisions were actually made or how programmes moved on the ground.
To work effectively, we had to look beyond titles and reporting lines. We began identifying multiple champions across levels—state coordinators, district officials, senior mentors, and others—and focused on building authentic relationships with each. These informal networks, grounded in trust and shared purpose, often proved more influential than formal structures. They helped us unlock bottlenecks, adapt faster, and deliver programmes with greater ownership and relevance.
3. We built modular systems, not rigid ones
Our first tech tools were all-or-nothing. If one state needed an extra feature, it meant rebuilding the whole system—it was exhausting.
We now design platforms like Compass with a strong shared core (attendance, feedback, automated dashboards) and optional modules that states can choose from. This way, every state feels the tool is ‘theirs’, while still running one coherent system.
Looking back, we realise that the ‘balance’ we were searching for isn’t a fixed formula; it’s a practice. It’s about listening as much as leading, keeping the structure tight but the content loose, and building systems that can bend without breaking.
When done well, contextualisation makes training relevant and engaging, while standardisation makes it scalable and measurable. Both are essential—and possible—if we design with, and not just for, the people we serve.
—
Know more
- Read this article to understand why contextualising pedagogy is important.
- Learn how peer-led teacher training is reshaping classrooms.
- Read this case study on scaling early childhood education.