Predictions and alarm clocks
How to link monitoring to decision-making
Mark Winters, Co-founder and CEO
Most monitoring systems are broken in the same way. They sit parallel to implementation: evidence gets collected, but it doesn't inform decisions.
The problem isn't a lack of effort. It's a confusion about how monitoring actually works. You'll hear a lot about what monitoring is for — it's to track progress, generate learning, inform adaptation — but much less about how all this is achieved.
What are the mechanics exactly?
The mechanics: predictions and alarm clocks
Consider how a scientist approaches an experiment. Before running it, she commits to a prediction. Whether she's right or wrong, something has been learned. The prediction is what makes learning possible.
We do the same thing. A theory of change is a set of predictions. If those predictions are clear and explicit, reality will either confirm or contradict them — and contradiction is precisely what you're looking for. It functions like an alarm that demands your attention.
Absent or vague predictions mean the alarm never sounds — because you can never be proven wrong.
Take a concrete example. Suppose you're supporting a company to introduce a new agricultural input to smallholder farmers. If you commit — clearly, explicitly — to the prediction that most farmers who try the input will purchase it the following season, it follows that you should monitor repurchase rates. If repeat purchases turn out to be low, you've found a problem that demands explanation. Similarly, if you've predicted — clearly, explicitly — that the company will scale up its distribution on the back of promising sales, it follows that you should monitor company investment. If in fact its investment stays flat despite growing sales, once again, you've found a problem that demands explanation.
The process:
- Design a ToC with clear and explicit predictions. Not vague aspirations but testable hypotheses — specific enough that reality can contradict them. "Farmers will adopt the input" is too vague to test and falsify. "Most farmers who try the input will repurchase it the following season" is not.
- Collect data regularly. Frequently enough that problems surface while there is still time to act. This means both formal data — surveys, sales figures, indicator tracking — and informal data — conversations with partners, site visits, a phone call to ask how things are going.
- Look for gaps. Examine your data for predictions that haven't held. In addition, look with purpose for any changes you didn't expect.
- Respond. A gap between prediction and reality demands a response. Before deciding how to respond, find out why the gap exists. Two questions will usually get you there: Is the rationale still there — is it genuinely in this actor's interest to change? If yes, what's blocking them?
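The loop above can be sketched in code: commit to explicit, quantified predictions, feed in observations as monitoring data arrives, and let any contradiction sound the alarm. This is only an illustrative sketch; the prediction names, thresholds, and figures are hypothetical, not from a real programme.

```python
# Predictions as testable thresholds; a gap between prediction and
# observation is the "alarm" that demands explanation.
# All names and numbers are hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Prediction:
    description: str
    threshold: float                    # the level the theory of change predicts
    observed: Optional[float] = None    # filled in as monitoring data arrives

    def gap(self) -> bool:
        """The alarm sounds when reality falls short of the prediction."""
        return self.observed is not None and self.observed < self.threshold

predictions = [
    Prediction("Most trial farmers repurchase next season", threshold=0.5, observed=0.2),
    Prediction("Company investment grows with sales", threshold=1.0, observed=1.4),
]

for p in predictions:
    if p.gap():
        print(f"ALARM: '{p.description}' contradicted "
              f"(predicted >= {p.threshold}, observed {p.observed}): investigate why.")
```

Note that the sketch only detects the gap; as the text says, deciding how to respond starts with finding out *why* the gap exists.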
As your intervention progresses, your monitoring will evolve. Sometimes you'll adjust your approach to the intervention without needing to update your theory of change or the things you're monitoring. At other times, what you learn reveals that your theory was incomplete: a step you hadn't anticipated turns out to be important. When that happens, update your theory accordingly so it continues to function as your alarm.