Predictions and alarm clocks
How to link monitoring to decision-making
Mark Winters, Co-founder and CEO
Most monitoring systems are broken in the same way. They sit parallel to implementation. Evidence gets collected but it doesn't inform decisions.
The problem isn't a lack of effort. It's a confusion about how monitoring actually works. You'll hear a lot about what monitoring is for (track progress, generate learning, inform adaptation) but much less about how all this is achieved.
So, how does all this work?
The mechanics: predictions and alarm clocks
Consider how a scientist approaches an experiment. She doesn't proceed with some vague aim; she commits to a prediction. Whether she's right or wrong, something has been learned. The prediction is what makes learning possible.
We do the same thing. A theory of change is a set of predictions. If those predictions are clear and explicit, reality will confirm or contradict them. Contradiction is what you're looking for; it functions like an alarm that demands your attention.
Take a concrete example. Suppose you're supporting a company to introduce an agricultural input to smallholder farmers. If you commit (clearly, explicitly) to the prediction that most farmers who try the input will purchase it the following season, it follows that you should monitor repurchase rates. If repeat purchases turn out to be low, in a sense this is good news - you’ve found a problem that demands explanation.
Similarly, if you've predicted (clearly, explicitly) that the company will scale up their distribution on the back of promising sales, it follows that you should monitor company investment. If in fact their investment stays flat despite growing sales, again, you've found a problem that demands explanation.
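The mechanic in these two examples can be sketched in code. This is a hypothetical illustration, not a real monitoring tool: the thresholds, field names, and figures below are invented purely to show the shape of the idea, that each prediction pairs an explicit claim with the observation that would contradict it.

```python
# Hypothetical sketch: predictions as alarms. All thresholds and
# data values are invented for illustration.

predictions = [
    {
        "claim": "Most farmers who try the input repurchase it next season",
        # Contradicted if fewer than half of triers repurchase.
        "holds": lambda obs: obs["repurchase_rate"] > 0.5,
    },
    {
        "claim": "The company scales up distribution as sales grow",
        # Contradicted if sales grow while investment stays flat.
        "holds": lambda obs: not (obs["sales_growing"] and obs["investment_flat"]),
    },
]

# Invented monitoring data for one season.
observed = {"repurchase_rate": 0.3, "sales_growing": True, "investment_flat": True}

# Any failed prediction is an "alarm": a gap that demands investigation.
alarms = [p["claim"] for p in predictions if not p["holds"](observed)]
for claim in alarms:
    print(f"ALARM - prediction contradicted: {claim}")
```

With these invented numbers, both predictions fail, and both surface as alarms. The point is not the code but the discipline it encodes: a prediction vague enough that no observation could trip the alarm is not doing any work.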
The process:
- Design your theories of change with clear predictions. No vague aspirations but testable hypotheses. Be specific enough so that reality can contradict them. "Farmers will adopt the input" is too vague to test and falsify. "Most farmers who try the input will repurchase it the following season" is not.
- Collect formal and informal 'data' regularly. Collect both formal data (think indicator tracking, designed surveys) and informal data (think conversations, observations, a phone call to ask how things are going). Do so frequently, so that issues surface while there's still time to act.
- Look for gaps between your theory and reality. Examine all your data for predictions that haven't held. Look too for things that are happening that you didn't expect. Look with purpose for anything that surprises, disappoints, or otherwise feels "off". Sometimes, you don't have to look hard; you already know something's not working. That's good; discuss with your colleagues and move quickly to the next step.
- Investigate gaps. A gap between prediction and reality demands an explanation. Speak to people to understand what's going on. Start with the most obvious actors, and go from there. Two lines of inquiry will usually get you there: Is it genuinely in their interest to do what you predicted? If yes, what's blocking them? More on this here.
- Respond. Once you understand the gap, you need to decide what to do. You have three broad options.
- You can decide to wait and watch: "Something's in motion but it would be premature to change anything".
- You can adjust your approach: "Given what we're hearing, there's something else/different we should be doing here".
- You can exit: "We need to face the fact that there's nothing more we can do here". Whatever the choice, make a proactive decision; don't drift.
- Refresh your theory. If you adjust your approach, significant changes will need reflecting in your theory of change (or results chains). If your theory no longer reflects what you believe, it can no longer function as your alarm.
So look at your theories of change and ask: Are my predictions clear? Are they the most important ones? Am I really testing them? Am I monitoring?