Monitoring for delivery, evaluation for causality

Monitoring is playing the game, evaluation is the post-match analysis — don't mix them up!

Mark Winters, Co-founder and CEO

Monitoring tracks progress in real time - the signals teams use to steer delivery. Evaluation substantiates what changed and examines causality. That's the conventional split, and it's a useful one. Still, in practice the distinction is not always internalised: questions like 'How do we measure contribution in our monitoring plan?' or advice to monitor indicators that both 'prove' and 'improve' crop up often. That blurring makes monitoring heavier than it needs to be; leaning into the split makes both tools more useful.

To understand impact, two questions matter most:

  1. What's changed?
  2. What caused those changes to happen?

Answering these questions requires two distinct processes:

  1. Monitoring to track what's changed (or perhaps more accurately, what's changing)
  2. Evaluation to substantiate what's changed and explore what caused those changes

Evaluations can, of course, do more, from judging value to drawing wider lessons. However, those questions cannot be addressed until you understand change and its causes - these are the core.

This framing makes a difference. It gives implementation teams confidence to be selective in what they monitor, knowing they don't have to capture everything. They can focus their limited attention on data that supports implementation. Evaluation can come in later to answer complex questions on causality. This is about choosing the right tool for the right job.

Monitoring: A first pass at "what's changed?"

Monitoring should be led by implementing teams. It's the process of tracking change as it happens – so we can understand what’s changing (or not), and what that might mean for our work.

Evaluation: A second look at change and what caused it

Evaluations can be led internally or externally. They can range from large cross-portfolio studies to smaller case studies. Either way, they give us a chance to return to key interventions and investments with more time, distance, and rigour.

They revisit the question “what has changed?” but can go further to ask: what factors caused those changes? And what role did the intervention or investment play?

Why this framing matters

In my experience, this way of thinking is hugely freeing for implementing teams:

It gives them the confidence to monitor selectively, focusing on indicators and data that: (a) test the central hypotheses; and (b) provide fast feedback for adaptation.

Monitoring is just the first pass - if an intervention or investment makes real progress, you can always return to it later with an evaluation.

An example

This is drawn from a real and current programme. Picture a team working with regional water companies. The team identified a problem: companies were haemorrhaging water through leaks, undermining commercial viability and household access. They put this down to outdated infrastructure and a lack of know-how.

The hypothesis was simple: if they could subsidise strategic infrastructure and close the knowledge gap, water companies would cut their leakage, restoring commercial viability and household access.

They partnered with several water companies.

Monitoring: what’s changing?

They developed a theory of change (ToC) and a measurement plan. They were selective, choosing indicators to confirm or refute the expected changes. Whenever data deviated from expectations (as it did for some water companies), they dug deeper.

Evaluation: did it really change and what caused this?

Jump forward a few years. The team felt their hypotheses broadly held. But before talking too extensively about "their impact", they commissioned an evaluation.

The evaluation has clarified their claims and is now assembling evidence to check what really changed and whether the team played a role. It is also weighing other explanations – regulatory incentives had shifted, and other donors were active. Perhaps these factors explain more than the team's efforts; or perhaps they complement them, showing how multiple forces combined to drive change.


This way of thinking about monitoring and evaluation can be powerful. Knowing they have an evaluation at their back gives implementing teams the confidence to be more discerning in what they monitor. The result is monitoring that's practical - and directly useful to implementation.

I can't resist closing with a football analogy. Monitoring is watching the game as it happens and adjusting tactics in response. Evaluation is the post-match analysis, when you can take the time to go deeper, to understand what happened and why. The message: don't try to do the post-match analysis while the game is being played - or you might not do either very well.