Learning Tree Global LMS

A multi-country learning platform delivering localized, culturally relevant education to 1,200+ children across Guatemala, Colombia, Ukraine, Poland, and Pakistan. Built on a single codebase with offline-first modules and per-country content layers.

Role: Business Analyst / PM
Company: Save the Children International
Timeline: 2022 — 2023
Reach: 5 Countries · 1,200+ Children
Domain: EdTech · Localization · Humanitarian
Type: 0-to-1: Greenfield multi-country platform
The Problem
Five countries needed one platform, but language barriers, infrastructure gaps, and different educational standards made a single design seem impossible to deliver.
The Solution
A single codebase with a localization layer, offline-first modules, and per-country content packs so each region could adapt the platform without forking the code.
My Role
BA/PM designing the multi-country architecture, managing localization decisions, and running feedback sessions directly with educators across all five regions.
The Impact
1,200+ children reached across five countries, 30% engagement lift, and the architecture became the blueprint for Save the Children's subsequent LMS deployments worldwide.

One platform, five countries, infinite complexity

Teachers and caregivers in Guatemala, Colombia, Ukraine, Poland, and Pakistan needed a unified yet locally adaptable learning platform. Language barriers, infrastructure differences, and varying educational standards made this a deeply complex product challenge.

01

Existing tools were built for one context. No platform was designed to flex across cultural, linguistic, and technological divides simultaneously. Any solution had to feel local to each country while being manageable from a single admin layer.

02

Five country offices meant five sets of constraints. Each had its own curriculum teams, technical infrastructure, and working hours. Coordinating delivery was as complex as building the product itself.

03

Connectivity could not be assumed. Ukraine faced power cuts, Pakistan had unreliable internet in target schools, and Guatemala operated in areas with limited bandwidth. A platform requiring active internet was unusable half the time.

Localization-first, async-ready delivery

The solution required a single codebase with a modular localization layer, letting each country have its own language, curriculum, and content without branching into separate systems. Delivery had to work across wildly different time zones and working styles.
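The content-pack approach described above can be sketched in miniature. This is an illustrative example only: the field names, country keys, and merge logic are assumptions, not the platform's actual schema.

```python
# Hypothetical per-country content packs layered over platform defaults.
# Field names are illustrative, not the actual Learning Tree schema.
CONTENT_PACKS = {
    "ukraine": {
        "language": "uk",
        "curriculum": "ua-national",
        "offline_required": True,   # power cuts: modules must pre-download
    },
    "pakistan": {
        "language": "ur",
        "curriculum": "pk-provincial",
        "offline_required": True,
        "parental_controls": True,  # required in this market
    },
}

DEFAULTS = {"language": "en", "offline_required": False, "parental_controls": False}

def resolve_config(country: str) -> dict:
    """Merge a country's pack over shared defaults -- no code fork needed."""
    return {**DEFAULTS, **CONTENT_PACKS.get(country, {})}
```

The point of the pattern is that a country with no pack (or a partial one) still gets a complete, working configuration, so adding a sixth country is a data change, not a code branch.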

Five core decisions shaped what got built in Phase 1 and what got pushed to Phase 2, each grounded in user research, not stakeholder preference.

5 countries served from a single codebase with no duplication

How the platform evolved across six dimensions

These six areas represent the full scope of what needed to change. Scores are estimates out of 10, reflecting capability before launch and the level the platform reached by the end of 2023.

At launch (2022) Single language, connectivity-dependent, fragmented by country, minimal teacher input, manual reporting.
End of project (2023) Offline-first, localized per country, unified analytics, continuous field feedback, 30% engagement lift.
Localization Coverage: Before 1 → After 9
Offline Capability: Before 1 → After 8
Content Accessibility: Before 3 → After 8
Field Feedback Integration: Before 2 → After 8
Stakeholder Alignment: Before 4 → After 8
Reporting Clarity: Before 2 → After 7

Five decisions that made the platform work

Each feature below is paired with the problem it solved, the decision made, and the measurable result. Each one came from country coordinator interviews, not from a stakeholder wishlist.

Shipped

Multi-Country Localization — one platform, five local contexts

Before
A single English-language platform forced teachers to adapt content to local contexts. What worked in Guatemala's curriculum made no sense in Pakistan's. Five countries, one rigid experience.
After
One platform with a localization layer. Each country has its own language, curriculum, and content pack. No separate codebases, no duplicated maintenance burden.
5 countries, one codebase, zero duplication
Guatemala
Colombia
Ukraine
Poland
Pakistan
Shipped

Offline-First Modules — learning continues without connectivity

Before
Platform required active internet. In Ukraine during power cuts and Pakistan with limited bandwidth, students saw loading spinners more than content. Learning stopped the moment connectivity dropped.
After
Full offline capability. Text and audio modules download for local access. Progress syncs when connectivity returns. Learning never stops because infrastructure is unreliable.
100% of core modules function without internet
Online Mode
Offline Mode
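The "progress syncs when connectivity returns" behavior is essentially a local queue that is flushed on reconnect. A minimal sketch of that idea follows; the class and method names are hypothetical, not the platform's actual API.

```python
# Illustrative offline-first progress sync: events are recorded locally
# regardless of connectivity and flushed to the server when online.
class ProgressQueue:
    def __init__(self):
        self._pending = []  # local store; accumulates during offline periods

    def record(self, event: dict) -> None:
        """Always succeeds: progress is captured even with no connection."""
        self._pending.append(event)

    def sync(self, upload) -> int:
        """Call when connectivity returns; pushes and clears the queue."""
        for event in self._pending:
            upload(event)
        count = len(self._pending)
        self._pending.clear()
        return count
```

In a real deployment the queue would persist to device storage (e.g. SQLite) so progress survives restarts and power cuts, but the record-then-flush shape is the same.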
Shipped

Audio & Interactive Content — learning that works for every literacy level

Before
Content was entirely text-based. Students with interrupted schooling or lower literacy levels were immediately excluded. Teachers requested audio and visual formats repeatedly.
After
Mixed content types: text, audio lessons, and interactive exercises, all available offline. Students engage at their actual level without needing strong reading ability to start.
30% engagement increase during early adoption across cohorts
Text Modules
Audio Lessons
Interactive Exercises
Offline Quizzes
Shipped

Continuous Field Feedback — built with country teams, not for them

Before
Requirements came from central stakeholders and donors. Country coordinators and teachers were consulted only at UAT. Problems surfaced late and required full sprint rewrites to fix.
After
Monthly review sessions with coordinators in each country throughout the project. Issues caught at wireframe stage. Changes driven by actual classroom usage, not assumptions from headquarters.
Monthly feedback cycles across all 5 country teams
Deploy → Collect → Diagnose → Fix
Shipped

Per-Region Analytics — diagnose by country, not global averages

Before
No per-country breakdown. Global engagement averages masked issues. Ukraine's low completion rate was invisible until it dragged down the overall number significantly.
After
Module completion rate and 30-day return rate tracked per country cohort. Ukraine's issue (power cuts interrupting sessions) was diagnosed independently and fixed with auto-save.
Country-level diagnostics, not misleading global averages
Before: Global average only · Issues masked · Manual tracking
After: Per-country cohorts · Issues isolated · Auto-tracked
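Why per-cohort breakdowns matter can be shown with a toy calculation. The data shape and numbers below are invented for illustration: a strong cohort can hide a struggling one inside a single global average.

```python
# Sketch of per-country cohort metrics vs. a single global average.
# Session records are hypothetical; only "country" and "completed" matter here.
def completion_rate(sessions: list) -> float:
    """Share of started modules that were completed."""
    started = len(sessions)
    done = sum(1 for s in sessions if s["completed"])
    return done / started if started else 0.0

def by_country(sessions: list) -> dict:
    """Compute completion rate separately for each country cohort."""
    cohorts = {}
    for s in sessions:
        cohorts.setdefault(s["country"], []).append(s)
    return {country: completion_rate(rows) for country, rows in cohorts.items()}
```

With made-up data where one cohort completes everything and another completes one module in four, the global average sits in the middle and flags nothing, while the per-country view isolates exactly where to look.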

How I decided what to build and what to skip

Five countries meant five different opinions on what the platform should be. The work was deciding which differences were real requirements and which would fracture the architecture.

Finding the real constraints

  • Interviewed country coordinators across all 5 regions before any requirements were written
  • Every team wanted something different: Guatemala wanted gamification, Ukraine needed offline-first due to power cuts, Pakistan needed content controls, Colombia wanted bilingual support
  • The question shifted from "what features to build" to "how much customization can one platform support before it splits into 5 separate codebases"

How I cut the feature list

  • Built a feature matrix scoring each request on 3 dimensions: how many countries need it, how easily it localizes without branching the code, and expected impact on learning outcomes
  • Offline module support scored highest: every country needed it, it was architecturally clean, and coordinators ranked access reliability first
  • Gamification scored high on engagement but was hard to localize across cultures. Moved to Phase 2
  • Parental controls required in two markets, neutral in others. Added as a configurable option in Phase 1
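The three-dimension matrix described above reduces to a simple additive score. The features, dimension values, and equal weighting below are illustrative assumptions, not the actual numbers used on the project.

```python
# Illustrative feature matrix: breadth (countries needing it),
# localizability (how cleanly it adapts without code branches),
# and expected learning impact. All scores here are made up.
def score(feature: dict) -> int:
    return feature["countries_needing"] + feature["localizability"] + feature["impact"]

features = [
    {"name": "offline modules",   "countries_needing": 5, "localizability": 5, "impact": 5},
    {"name": "gamification",      "countries_needing": 3, "localizability": 1, "impact": 4},
    {"name": "parental controls", "countries_needing": 2, "localizability": 4, "impact": 2},
]

ranked = sorted(features, key=score, reverse=True)
```

Even with rough scores, the ordering reproduces the decisions above: offline support leads, gamification's low localizability drags it down despite strong engagement, and parental controls land as a configurable middle option.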

One platform vs. five separate systems

  • The choice was a single platform with a localization layer versus separate per-country deployments
  • Per-country would have been faster for each team and allowed more customization
  • But it would have created 5 separate maintenance burdens, diverged over time, and made consolidated reporting impossible
  • Chose the single platform, accepting higher upfront complexity for long-term sustainability

How engagement was measured

  • Login frequency was the wrong metric. Students used shared devices in scheduled lab sessions, not on demand
  • Tracked module completion rate and 30-day return rate per country cohort
  • Ukraine scored lower early on. Not content quality, but power cuts interrupting mid-module sessions
  • Fix: auto-save every 2 minutes, resume from the exact point. Ukraine completion rate came in line with other cohorts after that
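The auto-save fix in the last bullet amounts to periodic checkpointing with resume-on-open. A minimal sketch follows; the 2-minute interval comes from the case study, but the class, store, and method names are hypothetical.

```python
import time

# Sketch of checkpoint-and-resume: progress is saved periodically so an
# interrupted session (e.g. a power cut) resumes from the last save point.
class ModuleSession:
    SAVE_INTERVAL = 120  # seconds; the "every 2 minutes" from the case study

    def __init__(self, store: dict, module_id: str):
        self.store = store                            # stands in for local storage
        self.module_id = module_id
        self.position = store.get(module_id, 0)       # resume from last checkpoint
        self._last_save = time.monotonic()

    def advance(self, new_position: int) -> None:
        """Move forward in the module; auto-save if the interval has elapsed."""
        self.position = new_position
        if time.monotonic() - self._last_save >= self.SAVE_INTERVAL:
            self.save()

    def save(self) -> None:
        self.store[self.module_id] = self.position
        self._last_save = time.monotonic()
```

Opening a new session against the same store picks up at the saved position, which is exactly what brought the Ukraine cohort's completion rate in line with the others.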

What I would do differently

  • Would invest more time upfront setting communication rhythms with the country teams
  • Each team had a different review cadence. We lost weeks waiting on feedback that clearer agreements would have surfaced faster
  • Defining turnaround windows, one channel per decision type, and a clear escalation path in week 1 would have been worth more than most scope decisions made later

See it live

Learning Tree is a live, multi-country platform. The web dashboard is publicly accessible and gives a real sense of the localization, content architecture, and user experience across regions.

The admin content management system, per-region analytics, and full localization workflow can be walked through in detail upon request.

A global blueprint, locally delivered

The platform's modular architecture became the blueprint for subsequent Save the Children LMS projects worldwide. It proved that a single product, thoughtfully designed, can serve radically different contexts without splitting into separate systems.

1,200+
Children reached with localized, accessible learning content across 5 countries
30%
Increase in engagement during early adoption across country cohorts
Blueprint
Architecture became the reference model for subsequent Save the Children LMS projects worldwide