Shortly after I finished working as a White House speechwriter for Bill Clinton, I decided to join the philanthropic sector. I was thoroughly burned out on politics. I felt that 80 percent of my 80-hour weeks were consumed by partisan food fights. I wanted a job that would give me a chance to create more light—that is, more helping, healing, and improving.
Twenty years later, I’m grateful I made the switch. In a country with enormous and growing social and environmental challenges, we are very lucky to have such a robust philanthropic sector to help drive change—whether it’s in the form of better schools, improved health outcomes, a more informed civic discourse, or faster energy innovation. And yet I’ve come to see that there’s plenty of waste in the philanthropic sector, too. It usually doesn’t come from ideological battles. Instead, it comes in the form of underperformance.
Governments receive constant feedback from constituents in the form of calls, emails, visits, polling, and votes. For-profit companies solicit feedback from their customers in many ways—from focus groups to surveys to sales. Philanthropy, in contrast, has “no built-in systemic forces to motivate continuous improvement,” in the words of philanthropy giants Joel Fleishman and Tom Tierney.
Consider a recent unpublished regression analysis produced by the nonprofit Center for Effective Philanthropy. Looking across tens of thousands of data points from 15 years of grantee surveys, CEP discovered that, on the whole, foundations are not improving in the eyes of their grantees.
CEP is the pioneer of a survey called the Grantee Perception Report. Because the survey is anonymous, grantees get a rare opportunity to speak truth to power. They rate their funders on a wide range of measures, including how well the funder understands its grantees’ work, how much value the funder adds beyond its checks, and how much influence the funder has on public policy. When you look at foundations that have commissioned GPRs for the first time recently, the results look almost identical to those from first-time GPR users 15 years ago. There’s no upward trend.
But CEP’s data shows that one group of funders is making consistent improvements over time: those who survey their grantees repeatedly. According to Ellie Buteau, CEP’s vice president of research, “We see a strong and clear association between more positive grantee experiences and funders who receive regular feedback from their grantees.” In other words, when donors listen, they become more effective.
Over time, these funders tend to receive higher ratings from their grantees on a variety of important relationship measures, including the clarity of their communications, their responsiveness, and their approachability when problems arise. Many sustain those gains for years. Best of all, in grantees’ eyes, these funders are having more impact on organizations, communities, and fields.
When I met Blandin Foundation Vice President Wade Fauth this past May at the Center for Effective Philanthropy conference, it was clear to me that he was a rare bird. He was a rural funder at a conference dominated by big-city foundations. He was a lifelong Republican in a field full of progressives. Rarest of all, he was willing—eager, really—to talk about his foundation’s missteps.
The Blandin Foundation has provided sorely needed resources to nonprofits serving the educational, health, and social-service needs of its rural community since the early 1940s. However, many of those living in the foundation’s hometown of Grand Rapids, Minnesota, came to see it as an arrogant power broker. “When I joined the foundation in the early 2000s, there was such animosity against the foundation,” Fauth explained. At a community meeting to discuss plans for a new power-generation facility, one resident got so angry with the way the foundation was pushing for coal that he got in Fauth’s face and shouted, “You can have your damn money back!”
In 2005, Blandin commissioned a GPR for the first time. It scored near the bottom of its cohort on dozens of different measures. “I remember sitting with the board and reading the report,” Fauth told me. “A board member . . . got up out of his seat and was pointing right at me. ‘I want to know what you’re going to do about this!’ ”
Fauth started an annual gathering of grantees to improve communications, and he pulled together a kitchen cabinet made up of some of the region’s strongest nonprofit leaders. He dramatically simplified grant processes. He convinced the board to allow staff to approve medium-sized grants to speed up grant responses. The foundation retained a capacity-building entity to help any grantee work on strategic planning, fund-raising, managing volunteers, or any other organizational need—without having to air any of its dirty laundry with Blandin. And the foundation literally and figuratively opened its front door to the public, accepting applications from a wider variety of organizations and turning its office into a meeting space for any grantee or community member.
By the fourth GPR, the foundation rated above 75 percent of funders on the strength of its relationship with grantees. In open-ended comments, many grantees shared examples of Blandin’s positive influence on their localities. One reported that Blandin “is one of the most important organizations [of any kind] in the community.”
The Conrad N. Hilton Foundation, a multibillion-dollar institution fighting poverty and homelessness, experienced a similar trend. Its first GPR revealed that grantees viewed the foundation as near the bottom of its cohort on influence and impact. In response, the foundation recruited top leaders in its chosen fields, then added staff in other capacities, such as communications. This made a significant difference. Hilton’s GPR results soared.
For example, Hilton’s ratings for impact on its grantees’ fields went from the bottom quarter in 2007, to just above the 50th percentile in 2014, and then to the 92nd percentile in 2017. One grantee commented in 2017, “The Hilton Foundation is . . . a leader in the field of ending homelessness. They have had a huge impact on the field and ensuring that evidence-based practices are utilized and embraced.” Another reported, “The foundation has been instrumental in shining a light on how systems of care and communities need to behave differently in order to achieve the better outcomes we all desire.”
The Maine Health Access Foundation (MeHAF) had a different challenge on its hands. The foundation, which focuses on expanding access to high-quality health care and improving the overall health of Mainers, was staffed from the start with people who had advanced degrees in public health, management, business, and law. The problem was not a lack of rigor but the opposite.
When the foundation received its first survey data from CEP, it learned that its grant forms were way too complicated (it took 80 hours on average to complete them), its response times were way too long, its support came with too many strings, and its grantees were in the dark on many aspects of the foundation’s approach. Figuring out the right changes to make took years and several additional cycles of surveying and other forms of learning.
But between MeHAF’s second and third GPRs, the foundation reaped significant returns from doubling the size of its program staff. “We had more staff that could be out in Maine learning about what was happening and where there were gaps,” said the foundation’s CEO, Barbara Leonard. It also simplified the application process and began funding good ideas that didn’t fit into the foundation’s relatively narrow grant criteria. These changes went over incredibly well with grantees. “I love that MeHAF keeps growing as a thought leader and innovator in the foundation world,” one grantee reported. “MeHAF cares deeply about healthcare in Maine.”
CEP’s analysis did not—and could not—prove that there’s a causal link between surveying grantees repeatedly and making improvements. Perhaps the repeaters would have made big leaps even if they had not commissioned any surveys. But whether the link is causal or merely associative, there’s no question that the foundations that invest in listening to those who are closest to the problems they seek to solve—their grantees, their constituents, and other key stakeholders—are the ones most likely to learn, adapt, and improve over time.
Conducting one survey is a start. Conducting two surveys is better. But, as with the best institutions from any sector, real learning seems to take place only when it’s part of a culture of continuous improvement.