Mission Possible: Evaluating Advocacy Grants
In the late 1990s, The George Gund Foundation began funding a modest but very capable statewide homelessness coalition in Ohio to pursue a lofty goal: get the state legislature to create a $100 million permanent trust fund to support low-income needs across the state and identify an ongoing, dedicated source of public revenue to continue funding it.
The coalition tried unsuccessfully for three successive biennial budget cycles to get this done. The fourth time was the charm. After years of banging on the statehouse door, the coalition succeeded in securing a dedicated revenue source capped at $100 million per biennium (in the most recent budget negotiations, the coalition actually got the cap raised another $16 million).
Gund's total contribution to the effort: a little less than $150,000 for education, communication and other advocacy efforts. By any measure, that is quite a return on investment. Yet, had any traditional type of evaluation been used to gauge the success of the first three grants, they would have been considered failures. The stated goal of those grants was not met. By standing back, however, and looking at what had been gained at each step, a very different picture emerged. Over time, the coalition had: built strong relationships with key legislative leaders who eventually became champions; increased the size and diversity of its membership with each successive effort; developed growing credibility with the statehouse press corps; and convinced policymakers to provide interim funding for the trust fund from state general revenues while building support for a permanent revenue source. Each of these achievements proved crucial to the eventual victory and more accurately reflected what each grant had accomplished, even though the ultimate goal remained out of reach.
This grantee's effort took time to come together in a successful configuration. Typical, by-the-numbers methods of evaluation (how many people were served or how many houses were rehabbed) would never have captured the messy process that eventually led to a grand outcome.
To accurately gauge what came out of those grants, Gund staff knew they had to do something different, but they didn't know quite what. Informally, the grantmaker could tell that standard operating procedures for evaluating grants like these wouldn't be a true gauge of the grantee's challenges and accomplishments. Since a practical tool to capture what all involved had learned from these grants didn't seem to exist, they set out to create one.
Alliance for Justice (AFJ) entered the advocacy evaluation business for other reasons. While providing training and technical assistance to funders on legal rules for supporting nonprofit advocacy, AFJ staff found that grantmakers were openly increasing their support for such work. But new questions were popping up at advocacy workshops and other meetings. Grantmakers wanted to know how to show the effectiveness of their grantees' advocacy work and, ultimately, to demonstrate the effectiveness of their funding choices.
Some funders evaluated advocacy work using the same criteria they used for service delivery: whether or not grantees successfully accomplished what they said they would in their grant proposals. But it just didn't seem to work well. Most disturbing were those funders who said they chose not to support advocacy simply to avoid the evaluation quandary. Knowing that the value of advocacy work could be demonstrated, AFJ developed materials to help grantmakers and grantees do so. A chapter on evaluation of advocacy was included in the December 2004 AFJ publication Investing in Change: A Funder's Guide to Supporting Advocacy (available online at www.allianceforjustice.org).
A New Way of Doing Business
Like many foundations, The George Gund Foundation believes engaging in social change grantmaking is an essential component of effective private philanthropy. While private philanthropy annually contributes an extraordinary amount of money to causes promoting the common good, its combined resources pale in comparison with those of the public sector. Therefore, targeted investments in advocacy can leverage public dollars far beyond any funds for direct service a foundation might be able to provide.
Perhaps one of the most potent rationales for funding advocacy is promoting the nonprofit sector's role as the voice for the powerless. When foundations underwrite such work, they bring the views represented by millions of Americans with little or no direct access to the halls of power into the marketplace of ideas.
This work takes on increasing importance in the current environment of growing needs among vulnerable populations and increasing pressures on state budgets, in particular. Yet, as the rationale for engaging in advocacy grantmaking takes on greater urgency, so does the need for understanding the effectiveness of such investments. Clearly, we are in an era of increasing accountability and transparency for private grantmaking. Formal methods of assessing grant outcomes are needed to maintain public trust in the sector. Such methods are false guides, however, if they do not accurately account for grant activity.
So, how can foundations more formally measure grants in a way that supports the flexibility needed to maneuver through the policy process, without alienating the public or skittish trustees looking for near-term, demonstrable gains? The answer is to create a new framework for evaluating advocacy grants.
In an effort to more accurately reflect the outcomes of policy-related grants and promote more support for nonprofit advocacy, the Alliance for Justice teamed with The George Gund Foundation and Mosaica: The Center for Nonprofit Development and Pluralism to fashion tools for grantmakers that show the effectiveness of nonprofit advocacy work. These new resources, the Advocacy Capacity Assessment Tool and Advocacy Evaluation Tool, are now available through the Alliance for Justice. (Web-based versions will debut in 2006.) Designed to be used in conjunction with Investing in Change, the tools are hands-on guides for funders working with applicants and grantees to establish a grant applicant's capacity for advocacy prior to awarding a grant. They can also be used to evaluate grantees' progress in achieving their advocacy goals and for funders' own advocacy work.
What's the Difference?
Advocacy activities differ from those conducted in typical direct-service grants. With advocacy grants, we are not interested in measuring units of service to a client population. Instead, we care about big-picture gains. When the local food bank gets funded, we are interested in how many hot meals were provided. When the state association of food banks gets a grant, we are interested in how much new public funding was secured for emergency food programs statewide, enabling thousands more to be served.
In advocacy funding, long-term investment is, or should be, the norm. Very seldom does the situation call for a three-years-and-out strategy. For example, the ten-year, successful campaign the Rosenberg Foundation supported to reform the California child-support system was not unusual in its length. A different level of patience is called for when trying to promote social change that can move frustratingly slowly, so tools need to clearly identify what progress is being made along the path toward the desired objective. In the advocacy arena, making incremental gains can be nearly as important as meeting the ultimate goal.
In evaluating advocacy grantmaking, defense can often be the best offense. Often it is more important to see whether something didn't happen (e.g., were proposed cuts to that children's health program averted?) than whether it did.
In addition, because so much in the public policy arena lies outside a grantee's control, flexibility is often the key to successful advocacy efforts. Therefore, funders should be less interested in whether the grantee stuck with the original strategies outlined in a proposal than in whether the organization was nimble enough to adjust its strategies in response to policy changes.
Finally, funders and grantees must find a way to capture the importance of relationship-building to successful advocacy grant outcomes. In the food bank example, the number of hot meals served is unlikely to rise or fall based on whether the food bank director gets along with the local city councilwoman. In advocacy grants, on the other hand, the lead grantee's relationship with a legislative committee chairman or a key statehouse reporter may signal the difference between success and failure.
These factors are undoubtedly less tangible than the data normally used to evaluate grants. While they may be more abstract than counting widgets, they can be effectively measured. You just have to know what to look for. The new tools from Alliance for Justice and The George Gund Foundation help funders do precisely that.
This new approach to evaluating advocacy focuses on what is different about advocacy work, such as the need for organizational flexibility to shift strategies when circumstances dictate. It is also a pragmatic approach. A simple evaluation framework based on advocacy experience is more manageable for most nonprofits than complex evaluation requirements that unduly tax already sparse resources, particularly staff time. Finally, the approach also recognizes the value of developing organizational advocacy capacity, and builds that into the evaluation process.
We started out thinking one tool was needed: a template for evaluating a grant after the grant period ended. We quickly realized that it was equally important to gauge an applicant's readiness for doing what was outlined in its proposal on the front end of the grant review process. Thus, the second tool was born. By identifying which areas of the organization's advocacy capacity need to be strengthened, a foundation can decide if the applicant's proposed advocacy program matches its resources. For instance, is the applicant's board familiar enough with the advocacy process and the local political environment to realize that the organization might clash with local public officials on a particular project? If not, the funder might direct support toward board training on advocacy and the current policy arena, or even decide the organization is ill-equipped for advocacy but that another organization can be recruited to do the work for or with it.
The statewide homelessness coalition in Ohio failed to accomplish its goal of leveraging a permanent source of public funds after each of the first three grants. However, the capacity it built during that process led to its ultimate success. With each successive effort, the coalition increased the size and diversity of its membership, gaining a stronger base from which to approach public policymakers. Since building advocacy capacity is vital to success and occurs whenever advocacy work is done, it can and should be included as an objective to be evaluated.
Evaluation Tips for Grantmakers
During the last 30 years, many conservative foundations have operated with the understanding that evaluation of advocacy work should be consistent with the nature of advocacy and should not be burdensome to nonprofits. Instead, it should be aimed at learning how to work more effectively, as well as how to fund more effectively.
According to Bill Schambra, director of Hudson Institute's Bradley Center for Philanthropy and Civic Renewal, having an expansive view of what constitutes evaluation was one strategy conservative foundations used to successfully support advocacy. In a recent interview in AFJ's Foundation Advocacy Bulletin, he discussed his previous work with The Lynde and Harry Bradley Foundation supporting efforts to promote public school vouchers: "The work is very difficult to measure. Had we sat down at any point over the years with specific benchmarks and said that we would walk away from the project if the benchmarks were not reached in a certain amount of time, it would have meant that school choice would not have happened, or at least that we wouldn't have been along for the ride."
Here are tips for evaluating grantees' work in ways that are consistent with the nature of advocacy.
Recently, the Public Broadcasting System aired the television program The Sixties. In it, the narrator said that the April 1971 March on Washington, with some 500,000 demonstrators, turned the tide on the Nixon Administration's Vietnam War policy. It was not the biggest anti-Vietnam War march, but it built upon years of protest marches, lobbying, publicity and other advocacy efforts by hundreds of organizations and millions of individuals. No one would know for years to come how important that march was, or how effective the organizers were. The same could be said of many other actions related to that antiwar effort, and it would be difficult to credit any one group with changing our government's policy in Vietnam. But each advocacy group could discuss the role it played throughout the anti-Vietnam War campaign, the steps it took toward meeting its final goal and how those steps strengthened its ability to act effectively over time.
While uncertainty is the norm in the policy arena today, nonprofits often have a greater impact than they realize. Advocates now have tools that can help them inform grantmakers of the value of the work being supported and of their organizations' capacity for influencing policy debates.
Developing these tools reinforced our theory that evaluating advocacy grants takes a different mindset than what we were used to. If typical methods of grant evaluation are basically used to help funders avoid unnecessary risks, a new framework for evaluating advocacy grants should help them embrace calculated risk for greater gain. Going back to our housing coalition example, grants could be made for 50 years and never approach the $100 million that will now be available every two years in Ohio for an issue and constituency that The George Gund Foundation cares deeply about.
As more funders tiptoe, walk, run or gallop headlong into the world of funding public policy and advocacy, we hope these new tools help alleviate a common worry that such work is impossible to measure. And we hope to encourage funders who may have avoided advocacy funding altogether to enter the fray, confident that there are ways to gauge the impact of money spent.
Benchmark: Grantee gained support from water authority director for reducing lead levels in city water supply by 10 percent in order to address childhood health problems.
Telling the story: Although we hoped the water authority would draft new regulations this year requiring lower levels of lead in the water, the city has temporarily frozen enactment of any new regulations that require money to enforce. We will build upon the director's support and continue to work with her for future policy changes. In the meantime, we will develop a legal complaint urging the court to issue a temporary injunction on illegal commercial dumping.
Build Your Advocacy Grantmaking Capacity Assessment Tool: Indicators of Nonprofit Advocacy Capacity
Knowledge and Skill Indicators
Build Your Advocacy Grantmaking Evaluation Tool
The Evaluation Tool breaks down the advocacy process in the following way:
Goals: Long-term accomplishments that will advance the organization's mission
Strategies: Administrative, legislative, nonpartisan election-related and legal approaches to accomplishing a goal
Outcome benchmarks: Specific activities or accomplishments that demonstrate success in reaching objectives for each strategy undertaken
Progress benchmarks: Specific activities or accomplishments that demonstrate significant progress towards reaching desired outcomes
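For funders who track these elements in a database or spreadsheet, the four-level breakdown above can be sketched as a simple nested data structure. This is purely illustrative: the class and field names below are our own shorthand, not part of the AFJ tools, and the sample entry is modeled loosely on the lead-reduction benchmark in the sidebar.

```python
from dataclasses import dataclass, field

@dataclass
class Strategy:
    """One approach (administrative, legislative, election-related or legal) toward a goal."""
    name: str
    # Accomplishments that demonstrate the objective for this strategy was reached
    outcome_benchmarks: list = field(default_factory=list)
    # Interim activities or accomplishments that show significant progress toward those outcomes
    progress_benchmarks: list = field(default_factory=list)

@dataclass
class Goal:
    """A long-term accomplishment that would advance the organization's mission."""
    description: str
    strategies: list = field(default_factory=list)

# Hypothetical entry modeled on the lead-level example in the sidebar
goal = Goal("Reduce childhood lead exposure via the city water supply")
strategy = Strategy("Administrative advocacy with the water authority")
strategy.progress_benchmarks.append(
    "Gained water authority director's support for a 10 percent reduction in lead levels"
)
goal.strategies.append(strategy)

print(len(goal.strategies))  # 1
```

Structuring records this way keeps progress benchmarks visible alongside ultimate outcomes, which is the point of the framework: a grant can show real movement on progress benchmarks even in a cycle when no outcome benchmark is met.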
Advocacy Evaluation Resources
Some of the resources available include:
Build Your Advocacy Grantmaking: Advocacy Evaluation and Advocacy Capacity Assessment Tools, Alliance for Justice for The George Gund Foundation Evaluation of Advocacy Project, with assistance from Mosaica: The Center for Nonprofit Development and Pluralism, 2005 (www.afj.org/foundation/research_publications/index.html).
Media Evaluation Project established by the Communications Consortium Media Center (a multi-year project that aims to provide foundations and nonprofits with methods of gauging the effects of strategic communications campaigns), with initial support from The David and Lucile Packard Foundation, the Carnegie Corporation of New York and the W.K. Kellogg Foundation, (www.mediaevaluationproject.org/overview.htm).
The Challenge of Assessing Advocacy: Strategies for a Prospective Approach to Evaluating Policy Change and Advocacy (a report commissioned by The California Endowment and produced by Blueprint Research and Design that provides an evaluation framework based on a scan of the field's policy and advocacy evaluations practices). E-mail firstname.lastname@example.org.
Voter Engagement Evaluation ProjectThe Funders' Committee for Civic Participation and the Proteus Fund undertook a research and analysis effort that assessed the effectiveness of nonpartisan strategies to increase voter engagement in the 2004 election cycle, and to inform grantmaking by the funder community. A summary report will be available in January 2006. Now available: Top Ten Lessons for Funders regarding 501(c)(3) voter engagement work conducted during the 2004 election cycle, by Heather Booth and Stephanie Firestone, June 2005. E-mail email@example.com.
Evaluating Philanthropic Support of Public Policy Advocacy: A Resource for Funders is a report included in a section of the Northern California Grantmakers websiteAssessing Public Policy Grantmakingthat includes case studies of six foundations' experiences with evaluating public policy grants, (www.ncg.org/toolkit/html/diggingdeeper/assessing2.htm).
Innovation Network is working to develop a framework for evaluating advocacy, with support from the Atlantic Philanthropies and the JEHT Foundation. It will be based on the Media Evaluation Project model. (www.innonet.org).
Measuring Social Change Investments, a report by Deborah L. Puntenney, Ph.D., on the 2002 research project of the Women's Funding Network (www.wfnet.org).
The Keystone Model is one of a number of models for evaluating advocacy that have been developed for international nonprofit work (www.keystone.reporting.org). A website with related information is MandE News, "a news service focusing on developments in monitoring and evaluation methods relevant to development projects and programmes with social development objectives" (www.mande.co.uk).
Marcia Egbert is a senior program officer at The George Gund Foundation in Cleveland.
Susan Hoechstetter is foundation advocacy director at Alliance for Justice.