Getting to the root

A recent evaluation has called into question the usefulness of "lean data". This argument is right, but it overlooks something important.


Mark Winters Co-founder and CEO


Root Capital evaluated a digital extension service for macadamia farmers in Kenya: SMS-based agronomic advice. It was a replication of a model that had shown promise with coffee farmers in Rwanda.

The evaluation concluded that whilst most farmers said they'd recommend the service to a friend, a rigorous randomised controlled trial found no evidence that they had gained knowledge.

The evaluation and an accompanying article are now doing the rounds. Both make an important point: in a field increasingly reliant on "lean data" (simple perception surveys that ask beneficiaries how they feel about a product or service), it's easy to mistake satisfaction for impact. A positive "Net Promoter Score" - a classic lean data tool that subtracts the share of detractors from the share of promoters - tells you something, but it doesn't tell you whether the intervention or investment actually worked.

I agree with this argument. But I think there is another lesson to be drawn from this evaluation.

Root Capital deserves tremendous credit for conducting a rigorous assessment and publishing a failed result. They've set an example for the industry. The problem is that the evaluation confirmed the intervention had failed, but it couldn't tell us why.

The intended behaviour change here was straightforward: macadamia farmers adopting better agronomic practices. The intervention's design rested on several assumptions: that adoption was genuinely in farmers' interests; that a lack of agronomic information was what was stopping them; and that SMS was a plausible way to fix that.

But the evaluation stopped short of exploring whether these assumptions held.

Even if the rationale was sound and agronomic information really was the blocker, the RCT was never going to tell us which part of the SMS approach failed: the content, the channel, the frequency, or something else. The authors suggest two possible explanations: that, unlike in Rwanda, the SMS advice wasn't backed up by active support from processors; and that Kenyan farmers may be oversaturated with mobile-based interventions generally. Both are plausible but untested.

What was missing?

The RCT did its job: it told us that farmers didn't gain knowledge. But it was never going to tell us why. That required research into the assumptions themselves, into the mechanism by which change was supposed to happen. Coupling the RCT with that kind of research would have given Root Capital both confirmation that the programme wasn't working and an understanding of why.

Root Capital is not an outlier. Interventions and investments are often evaluated for impact without exploring the assumptions that underpin them. We learn that things didn't work, but we rarely understand why.