Time flies! I am in the last week of my internship now – I have spent 8 weeks in Uganda and 2 weeks in Ethiopia, and I officially don’t know where the time went.

It’s been a great 10 weeks – since my last update, I got to see the source of the Nile, attended the BEST music festival of all time (Nyege Nyege), bid farewell to the wonderful Geneva Global team in Uganda, and flew to Addis Ababa and met the wonderful Geneva Global team in Ethiopia.

The team in Ethiopia went above and beyond to help me land on my feet – at work and outside. Since coming to Addis Ababa, I traveled to the Tigray region for work and to Gondar for pleasure (with a friend who visited from India!). I also got to spend time with my wonderful IEDP cohort-mate Raquel (you can find her blog about her internship at UNESCO here). I highly recommend each of these experiences – Raquel might be the hardest to find, but I encourage you to try.


This time has also been very exciting in terms of the work I had the opportunity to do. I supported a training-of-trainers workshop in Uganda, the use of ICT in classrooms in Ethiopia, and the re-design of the monitoring and evaluation (M&E) systems in both countries. I want to focus on the last project in this blog – because it’s the project I have been working on throughout my internship, and I have been reflecting on the role of M&E in education programs.

Let’s talk Monitoring & Evaluation (M&E)

Over the last two months, I have been working with the Geneva Global team on their M&E system. In Ethiopia, this meant tweaking an existing system to make it more usable (I wish I had more space to describe the existing system – it’s great! The team here developed a technology-based data collection and reporting system in-house). In Uganda, this meant developing a new M&E system based on Ethiopia’s system (because Uganda is the newer program).

Here was our challenge: in today’s world of cloud backup, smartphones and massive storage, it’s far easier to collect data than to know what to do with it. Building a good monitoring system usually runs into three challenges:

  • Data accuracy: How do we know if the data collected is reliable and valid? This is where technology needs to be supported by people on the ground, and vice versa.
  • Usability of data: What are the right indicators to show? Who do we show them to? How do we make them simple to use? It’s far easier to show an Excel file full of numbers than to find the one metric that can drive decision making.
  • Incentives: How do we incentivize stakeholders so they are invested in collecting accurate data and ensuring it is usable? For good program managers, who know their work very well, watching a data dashboard isn’t immediately intuitive. They know their program – e.g. they know which teachers are high performing and which need more support. Why do they need data? However, there are problems with this approach:
    • The definition of a “good teacher” can vary
    • The knowledge of who is a good teacher might be lost if the program manager leaves
    • How do you scale? The program manager currently manages 40 schools – how can they manage 100 and still know whether a teacher is good or not?
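The accuracy challenge above is partly a process problem, but even simple automated checks can catch obvious data-quality issues before they reach a dashboard. Here is a minimal sketch – the record format, field names, and the specific checks (duplicates and missing values) are my own illustrative assumptions, not Geneva Global’s actual system:

```python
from collections import Counter

# Hypothetical daily attendance records: (student_id, date, present)
records = [
    ("S001", "2018-09-03", True),
    ("S002", "2018-09-03", False),
    ("S001", "2018-09-03", True),   # duplicate entry - a red flag
    ("S003", "2018-09-03", None),   # missing value - another red flag
]

def validate(records):
    """Flag basic accuracy problems before the data reaches a dashboard."""
    issues = []
    # Check 1: the same student should not be recorded twice on the same day
    seen = Counter((sid, date) for sid, date, _ in records)
    for key, count in seen.items():
        if count > 1:
            issues.append(f"duplicate entry for {key}")
    # Check 2: every record should actually say present or absent
    for sid, date, present in records:
        if present is None:
            issues.append(f"missing attendance value for {sid} on {date}")
    return issues

print(validate(records))
```

Checks like these are no substitute for people on the ground verifying the data, but they make it cheap to spot where verification effort should go.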

Flashbacks to lessons from old bosses

You might remember from my last blog – I used to have a boss (let’s call him SM) who had the habit of dampening a data analyst’s joy by asking what the point of their analysis was. Oh, you did this wonderfully complicated analysis worthy of a PhD? That’s great – “so what action can I take based on this data?” Words to chill a data analyst’s heart.

The trick, according to SM, was to keep asking “so what?” till you got to a question you really wanted to be answered. Here’s an example:

Analyst: Here, the attendance register from all the classes we have

SM: Ok, whoa… so what?

Analyst: Here, the same data in a pretty graph

SM (stops himself from holding his head in his hands): So what? Why do we care?

Analyst: Students with high absenteeism will lag behind. They might be less likely to come back and might drop out.

SM: Ok, we are getting somewhere. We want to see absenteeism data because we want to identify students at high risk of dropping out. Let’s look at the data.

Analyst (pulls out the pretty absenteeism graph again): Voila!

SM (holds his head in his hands this time): Is this the best chart/indicator for us to be looking at if we want to identify students at risk of dropout? What’s the best possible indicator?
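SM’s point is that a graph of raw attendance is not the same as an indicator for the question at hand. A simple candidate indicator would be each student’s absenteeism rate against a cutoff – this sketch is purely illustrative (the names, register data, and 25% threshold are all invented for the example, not taken from any real program):

```python
# Hypothetical attendance register: student -> daily present/absent flags
register = {
    "Amina": [True, True, False, True, True, True, True, True, True, True],
    "Okello": [False, False, True, False, True, False, False, True, False, False],
    "Tigist": [True, True, True, True, True, True, True, True, False, True],
}

RISK_THRESHOLD = 0.25  # assumed cutoff: absent more than 25% of days

def absenteeism_rate(days):
    """Fraction of recorded days the student was absent."""
    return sum(1 for present in days if not present) / len(days)

def at_risk(register, threshold=RISK_THRESHOLD):
    """Return (student, rate) pairs for students above the threshold."""
    return sorted(
        (name, round(absenteeism_rate(days), 2))
        for name, days in register.items()
        if absenteeism_rate(days) > threshold
    )

print(at_risk(register))  # [('Okello', 0.7)]
```

Instead of a chart of all attendance, the program manager sees a short list of specific students to follow up with – which is the action SM was asking for.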

So. Usable and accurate M&E with appropriate incentives – how do we do this?

I definitely don’t have the answers to these questions, but based on earlier lessons from SM and input from the teams in Ethiopia and Uganda, we came up with the following design principles:

  • Identify the users of the M&E system, and understand what data is useful for them: this is an iterative process, because users may not always know what is useful. A good place to start is supporting them in fulfilling their program reporting requirements.
  • Design and support conversations around the M&E system: A monitoring system is only as good as the people who collect and use the data. If the data is used to penalize rather than support (as happens with a lot of standardized assessments), there is a high risk that the data reported will be inaccurate. Hence, we designed a classroom observation tool, but we also focused on how the data would be interpreted and used.
  • Minimize data collection: Don’t collect data we don’t know how to use, or data no one would miss if we stopped collecting it.

The great team at GG will continue to develop and improve the M&E system even after this internship, but I sure am glad I had the opportunity to learn from them and contribute to their work. More soon!