Realizing the Potential of Program Evaluation
Foundation boards and program staff have a long history of evaluating their grantmaking. Their jobs require them to assess proposals and regularly make decisions about grant renewal and expansion, dissemination of results, and replication of programs. Evaluation units have evolved in many foundations as specialized extensions of the program staff role because of the increased complexity of grant initiatives, the overall expansion in grantmaking, and, amid calls for greater accountability, the growing need for objective evidence.
Because the practice of evaluation within foundations has grown substantially in recent years, our study focused on the functional questions of evaluation: why this tool has come to the fore, how well it's working, what conditions govern its success, and what effect this trend is having on the philanthropic community. Based on the experiences of 21 foundations with diverse philanthropic agendas but a shared commitment to the use of evaluation, our study discusses the role of evaluation as part of the day-to-day activities of today's foundations.
The importance of evaluation within foundation activities has increased. One indication is that eight of 16 foundations responding to this question say it has increased significantly in the past five years, and another five say it has increased slightly. This increase, in most cases, reflects rising concern among foundation board members about accountability and outcomes.
Another sign of the rising importance of evaluation within foundations is the recent creation of evaluation units in several foundations, including the Edna McConnell Clark and James Irvine foundations and the California Endowment. At the David and Lucile Packard Foundation, a cross-program internal committee has been at work planning the evaluation function; a decision about creating an evaluation unit at Packard will be made in 1999. Additionally, at least five foundation staff told us that internal memos or white papers examining the role of evaluation had been prepared during the past year.
Foundation evaluation staffing varies substantially among the 21 funders we studied. Fourteen of them have staff specifically assigned to evaluation. In six of the 12 foundations with directors of evaluation, the director is the sole professional assigned to evaluation, while six foundations with evaluation units have three or more professionals. Evaluation staff among our 21 funders do, however, share characteristics in three areas.
What It's For
Evaluation is an emerging tool because funders increasingly see many purposes for it, not just one.
Perhaps just as interesting are the purposes that most survey respondents did not consider "very important": identifying new areas of foundation giving, and helping to decide on grant renewals and project replication. (See "What About the Future?")
The wide range of purposes funders consider very important raises significant questions about the ability of evaluations to meet the broad array of demands placed on them: different purposes often imply different questions and foci, and often require different evaluation strategies and approaches.
We also asked foundation staff to rank the relative priorities that foundations give to the purposes they identified as "very important." We found that public policy planning and development rarely is identified as a top priority, even among foundations that call it very important. Instead, high priority is given to using evaluation to strengthen grantee, foundation, and field practice. This implies a much more limited approach to evaluation, and to the use of evaluation findings, than foundations typically present.
Who It's For?
Among virtually all the funders we studied, the primary target audiences for evaluation results and findings are the board, management, and staff. Grantees are given somewhat lower priority; policymakers, practitioners, and others are given even less.
But problems emerge within these target audiences (and, concomitantly, with the purposes for evaluation), with direct implications for foundation satisfaction with the finished product.
For example, boards are increasingly outcomes-oriented. They are asking general questions regarding the bottom line and efficacy of foundation strategies. Although the board is seen as a primary target of evaluation work, foundation staff report that their boards want to be able to attribute the effect of a program directly to the foundation's contribution, even when that is difficult.
Similarly, while most respondents report that management is a primary audience for evaluation results, many also say that their management is minimally involved in evaluation. In 11 foundations, staff report that management demonstrated little interest in evaluations of individual programs. This was especially true in foundations with no specific evaluation unit or strategic plan for conducting evaluations.
Staff increasingly need information to help them better define problems, find plausible solutions, and monitor their grant investments over time. One issue with this audience is that evaluation tends to be out of sync with the calendar of grantmaking: program staff are keenly and resolutely focused on the future and the grants to be made, while evaluation, by its nature, looks backward to what has happened.
A look at grantees as both a client of and an audience for evaluation reveals discrepancies. One foundation evaluation director describes this, saying, "We say we want to improve grantee practice and learning, but we're unwilling to pay for some of the things that might help, like data systems or research, since we're not going to use them ourselves." This dilemma reflects how foundations and grantees may be working at cross purposes. Many foundation-initiated evaluations are not particularly helpful to grantees. At the same time, grantee-initiated evaluations are either viewed with skepticism or are not pertinent to the information needs of foundations.
The competing informational needs of evaluation's main constituencies place a substantial burden on evaluation units trying to meet all needs. As one evaluation director pointed out, it is often unfeasible to plan and fund distinct evaluation approaches and products tailored to each constituency. No single evaluation strategy or report can effectively respond to each of those constituencies.
Because foundation staff have a growing need for new kinds of information, several foundations have begun employing new strategies to gather it. These are, for the most part, nascent, although promising, efforts. Foundations experimenting in this regard include the Robert Wood Johnson, W.K. Kellogg, Rockefeller, Ewing Marion Kauffman, and Charles Stewart Mott foundations, and the Pew Charitable Trusts.
Who Controls It?
We were not surprised to learn that the role and activities of evaluation are defined through a continuing process of negotiation within foundations. Few evaluation directors believe that their roles are clearly demarcated within the foundation. Evaluations produce information, an important type of currency within any organization. Consequently, the role, strategic importance and use of evaluation are often critical junctures of compromise among foundation staff.
We heard from numerous respondents that high-stakes evaluation questions can set program and evaluation staff at odds and complicate the agenda for evaluation.
In all foundations with separate evaluation units, the director of that unit reports to the foundation's top management; yet in almost all foundations we studied, evaluation funding is drawn from program grant budgets. The result is considerable tension between evaluation's staff role and the way its activities are funded. Although evaluation's authority in theory derives from management, each evaluation budget must be negotiated with program staff or program directors.
In few of the foundations studied has management's role in the evaluation process been solidly secured. Several evaluation directors report that they receive little guidance or direction about management's expectations for evaluation, often leaving their responsibilities open for ongoing interpretation.
Complaints and Solutions
When foundation staff describe evaluations unfavorably, much of their irritation focuses on the finished product: the report or briefing presented by the evaluation team. Their dissatisfaction centers on its lack of relevance or timeliness, or on their inability to use it to improve either grantee practice or foundation grantmaking. Foundations are trying several solutions to create a better finished product.
Dissemination of Results
Internal distribution of evaluation findings to those most deeply involved in a particular project or initiative (grantees and related program staff) is common across all foundations. But broader dissemination of evaluation results is not a common practice among the foundations surveyed, with a few notable exceptions.
For example, the Robert Wood Johnson Foundation publishes an annual anthology of evaluation-based findings from its grant programs. The Edna McConnell Clark Foundation and the DeWitt and Lila Wallace-Reader's Digest funds issue syntheses of lessons learned from evaluations of their major programs. The Ewing Marion Kauffman Foundation places results of impact studies on its Web site and publishes them as monographs. The W.K. Kellogg Foundation also includes lessons learned in its annual report and highlights them internally as resources to improve grantmaking.
Several foundations have used practitioner and policymaker conferences as a forum for sharing evaluation findings. Overall, however, it appears that dissemination of evaluation findings remains a relatively low priority.
A broader topic, organizational learning or knowledge management, is emerging as a core concern for many, if not most, of the foundations we sampled. Our respondents report that a range of learning initiatives are successful to the extent that foundation management supports them. A challenge to sustaining these efforts is a widespread misunderstanding about why learning is important: who should be learning, and for what purpose. One director of evaluation stated that "learning needs to be connected to a purpose, not just a philosophy," reflecting frustration with what appeared to be "just another trend in philanthropy." Several respondents questioned whether learning from the past was valued when every indication from management and board was that only the future mattered.
Learning needs to be considered within the context of staff tenure. When tenure is long, history is more likely to be relevant and interesting to those involved. When the typical tenure of foundation staff is short, it becomes more important to get staff quickly up to speed on making grants and focused on the problems they will tackle during their stay; organizational history and previous grantmaking experiences or strategies are seen as less relevant.
We found that funders tend to hold three beliefs that undermine the potential effectiveness of evaluation.
From our survey, we conclude that evaluation has been useful to foundations when certain conditions are met.
Survey respondents emphasize that no evaluation can improve a poorly designed program, but evaluation can be used to move an organization toward good programming. As a tool, evaluation will be most effective when it helps program staff and management strengthen standards for grantmaking, resulting in better programs.
Patricia Patrizi and Bernard McMullan are principals of Patrizi-McMullan Consulting in Wyncote, Pennsylvania. This article is adapted from their 1998 report to the W.K. Kellogg Foundation's Evaluation Department, which commissioned them to study the evaluation efforts of similar-sized foundations. The full text of the report is available on the W.K. Kellogg Foundation's Web site at www.wkkf.org.