If you’re working to bring about big systems change, how will you know you’re getting anywhere? For organizations working to engage citizens in democratic life, can success really be measured? Is it worth the time and cost of trying to determine an organization’s impact when the work is structured around a series of issue campaigns?

The past decade has witnessed both notable advances and retreats in the development of the social sector: restructuring and consolidation, microfinance and crowdfunding, and tax scandals, just to name a few. No trend will have a more profound or longer-lasting impact, however, than the adoption of results-based grantmaking by major institutional donors in the U.S.

Lord Kelvin said more than a century ago that “what cannot be measured cannot be improved,” and Bill Gates echoed the sentiment in his annual letter at the end of 2012:

I have been struck again and again by how important measurement is to improving the human condition. You can achieve amazing progress if you set a clear goal and find a measure that will drive progress toward that goal… This may seem pretty basic, but it is amazing to me how often it is not done and how hard it is to get right.

In health care and public health, measurement is nothing new. One cannot get NIH grants based on vague notions of impact. As the scale of grantmaking in education has grown over the past 20 years, the pressure to apply evidence-based standards has grown along with the investments.

Civic engagement and advocacy have not traditionally been affected by these trends due to a widespread belief on the part of donors and grantees alike that the nature of the work does not lend itself to prevailing performance measurement approaches—the impact is too long-term and too diffuse to be easily captured or attributed.

This prevailing sentiment has had real economic consequences for the sector. The types of programs and interventions that can be more easily measured—health care, education, and economic development—are the ones that receive the most support for measurement. As a result, donor attention and dollars flow to more concrete, less risky endeavors and away from more amorphous “systems change” efforts.

Some grantmakers with a deep commitment to systems change work have not been willing to accept this prevailing sentiment without a fight. The Omidyar Network and The David and Lucile Packard Foundation have supported efforts by a number of grantees to develop performance measurement systems, with promising results.

Another grantmaker, The Rita Allen Foundation, recruited a cohort of six emerging organizations in the U.S. democracy field to spend four months exploring practical ways to apply performance measurement approaches, structured around three day-long sessions. The curriculum for these sessions, developed in partnership with us, combined group work during the sessions with individual organizational work in between.

The curriculum was built around an overarching thesis: every social change organization has a theory of change—explicit or implicit—and every theory of change embeds a set of testable assumptions about the connection between activities and outcomes. Too often, evaluation work is reduced to a focus on the final outcome, but for most organizations, what emerges from testing the assumptions within a theory of change is far more actionable: program designs can be refined based on those tests. If one waits until the end to measure the effect and the results fall short of expectations, the time for action has passed.

Four key lessons emerged from the process:

  1. Get the intended impact right. Many organizations conflate mission-statement language about the change they want to see in the world with a statement of intended impact. Without a clear statement of the objective toward which they are driving, the theory of change falls apart, and the performance measurement system is unlikely to produce actionable findings.
  2. You and what army? There is an inherent tension in social change work between the scale of the problem to be solved and the resources available to solve it. Organizations that strictly limit themselves to solutions within the scope of their current capabilities lack the sort of “big, hairy, audacious” vision that is necessary to effect hard-won change. On the other hand, organizations that aspire to outcomes light years beyond their current capacity are unlikely to take a disciplined approach to delivering and scaling impact. Getting this balance right seems to be particularly challenging for organizations in the U.S. democracy field, largely due to the mismatch between the scale of the problems to be solved (e.g., government fiscal practices or global warming) and the size of the organizations involved.
  3. Distinguish between the direct and the indirect. A potential solution to the “you and what army” challenge is for organizations to develop more detailed maps of the broader fields in which they work. Too often, organizations think about their theories of change in a vacuum, rather than as part of a network of interventions that collectively build the sort of lasting change most are seeking. Social change organizations must distinguish the outcomes they pursue directly from those they advance indirectly by leveraging the work of others.
  4. Measurement doesn’t have to break the bank. All six organizations developed comprehensive performance measurement systems and plans that allowed them to regularly test their assumptions and assess their program designs. These were not six-figure evaluation plans. The plans averaged less than $25,000 per year to implement. The work these organizations pursue to reinvigorate our democracy is vital, challenging, and complex. The best organizations in the field are experimental and audacious—the approach to measurement cannot be timid or defeatist. The effort should be equal to these organizations’ visions, allowing them to question themselves with the same discipline with which they question our democracy.

*Note: This article originally appeared on the Root Cause blog when Stephen M. Pratt was Director of Consulting there.

Image © Tamarcus Brown via Unsplash