By Stephen Pratt, President, Impact Catalysts
and Francesca Moree, Analyst, Impact Catalysts

We spent the better part of last year talking with nonprofit and foundation leaders about performance measurement—what’s working, what’s standing in the way, and how the obstacles could be overcome. We also brought together 15 nonprofit leaders in Washington, DC, drawn from direct services, policy, and advocacy. Here are a few highlights from the conversation:

What’s Challenging:

    • The “RCT Fever Dream.” While everyone favors working from the best available evidence and building the standards of evidence in their field, government and some private funders have rushed to embrace the Randomized Controlled Trial (RCT). When done right, RCTs can dramatically raise the game by standardizing interventions, delivering consistent impact, and improving organizational efficiency. The Chronic Disease Self-Management model, supported by findings from an RCT, is a widely recognized example. But, as the participants around the table agreed, RCTs apply to only a narrow range of interventions and organizational settings, and most nonprofits are not candidates for one. Nonprofits need to be able to offer alternatives to the RCT in conversations with funders, not all of whom have a deep understanding of its limitations.
    • Unfunded mandates. Organizations are managing a range of idiosyncratic data requests from public and private funders. Bespoke metrics, conflicting data formats, and competing data platforms were all cited. Navigating these demands takes staff time, and as the old saying goes, time is money; funders rarely want to pay for that time, even when it is in service of their own data demands.
    • Contribution vs. Attribution. Nonprofit organizations don’t pursue a theory of change in a vacuum. Their interventions happen in concert—sometimes in conflict—with a host of other interventions. Measuring their impact on a complex issue over a long period of time and identifying and isolating the contribution that their program uniquely makes is a particular challenge without simple solutions.

What’s Working:

    • Use data for learning vs. compliance. In a compliance culture, people see measurement as a means of checking off boxes, proving that they are doing their work. In a learning culture, staff can see how routine data collection connects to efforts to improve programs, and they become more invested in the process. Organizations with strong learning cultures hold regular conversations about how data is being used.
    • Acting locally and thinking globally. Organizations focus their measurement efforts at their point of intervention, grounded in a realistic assessment of their capacity. In other words, they measure what matters and avoid the temptation to drown in an ocean of data. At the same time, they stay current on the new research in their field, particularly emerging evidence that can be applied to their own program models and approach to measurement.
    • Collaboration. While the complex environment in which organizations intervene poses a challenge for attribution, it also offers opportunities for more ambitious approaches to measurement. Partners can share measurement costs and personnel, gaining insight into their programmatic approaches that would not be feasible for a single organization to pursue alone.

*This article was originally posted on the Root Cause blog when the authors were, respectively, Director of Advisory Services and an Advisory Services Analyst there.

Image © Jens Johnsson via Unsplash