#119 Sep/Oct 2001 — Evaluation

Roundtable on Evaluation in Community Groups

Shelterforce recently brought together a small group of practitioners, evaluators, and funders to discuss their experiences with evaluation, the strengths and weaknesses of current practice, and ideas about how it might be improved:

Norman J. Glickman is University Professor and former director of the Center for Policy Research at Rutgers University in New Jersey.

Karl Hilgert is the director of community organizing for the Sacramento Mutual Housing Association, a California nonprofit housing development corporation that develops and manages resident-driven affordable housing.

Martin Johnson is the president of Isles, Inc., a community-based organization in Trenton, NJ, that does community planning, real estate development, job training, environmental programs, brownfield cleanup and more.

Lynette Lee is the executive director of the East Bay Asian Local Development Corporation, a 26-year-old CDC in Oakland, CA, that develops affordable housing, community facilities and mixed use projects, and assists local groups with neighborhood planning.

Abdul Rasheed and Ebonie Alexander are the president/chief operating officer and senior vice president, respectively, of the North Carolina Community Development Initiative, an intermediary that provides grants, loans, and “strategic issue management support” to mature CDCs across North Carolina.

Harold Simon is the former editor of Shelterforce and former executive director of NHI.

Sharon Yates is the executive director of Stop Abusive Family Environments, a domestic violence advocacy program in Welch, WV, that provides both transitional and permanent housing to its clients.

The roundtable discussion was moderated by Mary Jo Mullan, vice president of program at The Heron Foundation, a national funder of wealth creation strategies in low-income urban and rural areas.

Mary Jo Mullan: We work in a field in which people are very passionate about improving conditions in low-income communities and helping people to take control over their lives, and they work hard to do so. We can see tangible changes occurring in many communities. Yet, evaluation seems to have developed a bad reputation. What’s wrong with it?
Sharon Yates: A group of funders met with me about a month ago. One of their questions was: What is your success rate? Now, we have a database, and our statistics are great. But we’re dealing with domestic violence victims living in abusive situations. It’s really hard to evaluate how much you increased someone’s self-esteem or to measure how much you changed people’s lives by bringing them out of an abusive situation, giving their children a nonviolent way of life, and maybe breaking the cycle of abuse. It’s hard for funders to understand that. They wanted a cost per person. We might serve a person for a day or for two years in our transitional housing facility. I haven’t found a way to put a cost per person on that. One of the funders did speak up and say, “I think it’s a success every time a domestic violence victim leaves the situation.” I thought that was such a good answer.

Martin Johnson: Sharon’s experience raises an important question about starting points. We have to think about how one might measure how hard it is to work in certain communities, or those communities are always going to get penalized in a competitive funding world.

Yates: I appreciate that point. In my housing projects I have to put in sewage treatment systems, which most urban groups wouldn’t have to do. Or, I’m often asked how many people got their GEDs, or jobs. They should be asking how many people learned to write their names! We’re continually looking for ways to evaluate our success. We have toiled to find a measurement tool that would give valid results for self-esteem. We’re also looking at how we increased the quality of people’s lives by putting them in safe, decent, affordable housing. We take pictures of where they lived before and then of the houses we moved them into.

Karl Hilgert: When I wanted to measure self-esteem, I did the old thing. I went to the university folks. They immediately handed me the Rosenberg self-esteem scale [a standard psychological test]. I took it to our resident committee, they read through the psycho-gobbledy-gook, and their response was, if I answer these questions, no matter how I answer, they’re going to think I’m crazy. So we sat around the table and said, what do you think measures self-esteem for people? They said things like: Can I talk comfortably to a person one-on-one or with somebody in a small group? Do I manage reasonably well? Can I do it without being too intimidated? They used all of these words that I’ve never seen on a scaled measurement before.

I think part of the question is how to frame the thing in ways that make sense to people so that they’re not intimidated by it and it really measures what they think they’re doing to succeed in life.

The traditional models of evaluation are so cold and so statistical and seem to always be imposed from the outside. As both an undergrad and graduate student in social work I abhorred research. The Success Measures Project model is exactly what I think should happen: addressing who’s accountable to whom, who develops the tools and the measurements, who determines what is successful. This is probably the only time in my life that I’ve been interested in evaluation and research – because of how it’s being done. This is the kind of thing that should be built into all of the planning that we do from the start, so that it doesn’t get tagged on at the end just to satisfy funders.

Mullan: How are you putting the Success Measures Project into practice?

Hilgert: We develop our indicators with the residents’ total input. It was the residents that began it. They created a benefit picture that described what they thought mutual housing should be for them. Then we took the Success Measures Guidebook and looked for indicators that we thought would measure the different things articulated in the benefit picture. We’ve done a survey of people before they moved into our mutual housing, and we’re going to compare those results with the results people report when they move in, after a year, and then a second year after that.

Lynette Lee: I think timing is really important when getting participants or folks who benefit from the work that we do to be part of the evaluation process. We’ve helped complete 71 single-family homes for first-time homebuyers. If they had been interviewed immediately after purchase they probably would have been ecstatic. Six months to a year later the complaints start coming in about warranty issues or maintenance. Two years later, after we’ve worked through those problems, they may be happy again.

Johnson: Absolutely. We’re in this for the long haul and we have to have outcome measures that are sensitive enough to pick up what’s gone on over time.

Lee: Our evaluation has been spotty. I think sometimes groups are so focused on programs that we don’t take the time to do evaluations right. In the past when we’ve had planners we were able to build in evaluation more, but as our planners got into running programs or administrative work, that piece got lost. We have done some evaluation with consultants. The results have been spotty, depending on how closely we worked with the consultant, and whether it was funder-imposed.

Abdul Rasheed: This suggests putting more investment on the thinking end, giving organizations money to do more planning and test ideas before they make commitments. Most nonprofits don’t have money to do research and development, to experiment. We are always doing a project that has a beginning and an end.

Norman J. Glickman: That’s related to the way foundations fund community development. There’s a reluctance to fund operating support – although there are cases when this is done to great effect. The Ford Foundation helped community development corporations develop through operating support programs. With more access to operating support, organizations wouldn’t be required to chase nickels all the time in areas where they’re not necessarily adept in order to make payroll next month.

Hilgert: Usually funders give you just enough money to do the evaluations they want you to do.

Mullan: Were you funded to do any evaluation before you came upon the Success Measures Project?

Hilgert: Never really funded to do it. Always expected to do it as sort of a tack-on.

Glickman: From an academic’s perspective, one problem with current evaluation is that it’s so funder-driven, with funder goals and funder misunderstandings. That makes it very hard for evaluators to do a good job. The rules are set out in advance, so it’s sometimes hard to get a handle on the real situation.

Johnson: After doing this work for 20 years and talking to folks who’ve been around for a while, I think that the relationship between funders and grantees is a pretty sick one in many ways. A lot of organizations that have been around for a while would like to see a field where there can be a connection between performance and rewards, where good organizations get supported and not-so-good organizations either change or go away.

But right now, there are a lot of good projects that don’t get funded and lousy projects that continue to get funded. Funders always expect grantees to be doing whatever it takes to get their money, and then grantees play the game. There are few sectors in society where you can ever find such a disconnect between accountability for performance and actual rewards.

Glickman: There’s a related problem, which is that we need to give huge doses of Ritalin to funders because their attention spans are so short. They will give you money for something they think is innovative, but before you’ve even finished testing your program, they’re moving on to something else without even seeing if you did a good job.

Johnson: What is behind that? Is it the notion of innovation being paramount?

Glickman: It’s partly that. It’s partly that they follow trends like everyone else. What is hip today may not be hip tomorrow. So they move on.

Rasheed: If funders are principally interested in short-term results, then I think their short-term investments are legitimate. But if you’re trying to turn around communities that have been broken, it can’t be a short-term commitment. If you build a house on a block where everything else is blighted, the question becomes: What have you accomplished, other than having something to point at and say we had some involvement in that house right there?

We try to get funders to understand our mission and goal, and our commitment to the organizations we work with. We say we need you to be with us for some extended amount of time. Otherwise we won’t have any legitimate results to show you except the success of some limited project.

Mullan: Are you seeing any positive response from the funders?

Rasheed: We’ve been on this journey for 15-plus years and a majority of the funders that started with me are still in the funding mix. I think they’re pleased with what they see in terms of the growth of the organizations we invest in and the change in the communities. They’re most happy with the leveraging of their dollars. We can say that we’ve raised $30 million from the private and public sectors, and can show some $170 million of leveraged development and results. They like that a whole lot once they understand it.

Harold Simon: Is there a way to address the accountability of the funder to the community? I know funders are accountable to their boards of trustees, but is there some opportunity for your grantees to assess the work that you do?

Rasheed: Not that I’m aware of. They certainly have the opportunity to give feedback to the legislature, which votes on our state appropriations. They have access to the foundations and banks in their areas who are investors in what we do. In that regard, yes. But in terms of some formal instrument, I’m not aware of one. If I don’t like what a funder has done, do I have any power to affect that funder’s decision about what to do next?

Mullan: Well no, frankly, not the way the roles and power balance are currently organized.

Johnson: It’s a great question. One of the things we did with the Success Measures Project is ask about 400 folks from around the country: Who do you compare your work to? Then I asked that question of a group of funders. They all looked around the table in silence. It was amazing. They don’t compare themselves to anybody and don’t want to be compared.

Mullan: Program staff in foundations often go to their boards and say: We have this initiative we’ve developed that’s going to be in these three cities and these two rural areas. We’ll fund it for three years, see the following results, declare a victory, and head home.

But funders may also seem fickle because of the lack of results that practitioners are able to demonstrate. Frankly, from what we’ve seen at Heron, I still think the community development field is in a very early stage of assessing impact. We’ve come across two general kinds of groups. The first group will say: We really want to know whether we’re fulfilling our mission, but we’re so busy running our programs, fundraising, meeting reporting requirements for funders, we just don’t have time to evaluate the impact of our work. Those are all valid concerns, but there’s a second group who say: That’s all true but we still want to know and be accountable to our constituents, so we’re going to take charge of this. They recognize that if they don’t do evaluation, it will be done to them. That second set are the groups that we’re interested in identifying and supporting.

Glickman: Another problem has always been community organizations’ abject fear of evaluators. Most of the evaluations I’ve been involved in have not been “gotcha” evaluations. They have been more of the friendly kind. Even so, we’ve had very little cooperation from the community-based organizations because of their fear of somehow being “found out.” It doesn’t matter whether it’s an empirical/quantitative evaluation or a less quantitative, more touchy-feely one. It makes it hard to do an evaluation when folks are not willing to level with you. The field will advance when there is more information about what didn’t work well and how programs can be improved with mid-course corrections.

Mullan: “Found out,” meaning that it will be demonstrated that the programs are not successful?

Glickman: That’s the fear. In many cases we say going into the evaluation, “We’re not here to beat up on you and report you to ‘the authorities.’ We’re here to try to help you do the work you’re doing.” We don’t even call them evaluations anymore. We call them assessments.

Johnson: From a practitioner’s perspective, the biggest concern I’ve always had working with outside researchers is less a fear of getting “turned in” and more knowing that the work we see succeeding is relatively complicated to describe in the ways researchers would like to see it described. There is this whole structure of connected benefits. You do a good job in housing and people tend to do better in civic life, there tends to be better security, kids tend to do better in school, and people tend to have better performance at work. All these things you see going on all the time and you can talk about them, but you’re worried about being held up against some kind of cost/benefit analysis where you can’t really demonstrate the actual value of the benefits in a way that aligns with the costs. So you tell stories, but you know, if you’re out there long enough in the research world and elsewhere, that telling stories has inherent limitations.

Glickman: I thought you were going to say something very different. I thought you were going to say that researchers don’t tell enough stories, that they try to do the numbers and don’t get a good feel for what’s really going on. I think that’s often the case in evaluation.

Johnson: What I said and what you said are very related. I think you’re right, the researchers don’t tell enough stories. But organizations don’t manage data well enough either.

Simon: Abdul, is there a possibility that you would in fact use evaluation results in deciding whether to continue funding an organization?

Rasheed: Absolutely. That is why we are involved in the process, because we hope that we can have better and more objective means to make good decisions about organizational performance. The whole idea is that we have limited resources and we’ve got to make hard decisions.

Mullan: Given that, how do you address the question that Norm raised earlier about candor from your CDCs?

Rasheed: We’ve tried to address it by making them a part of the process on the front end. Basically, we enter a contract with our grantees that first talks about the things they’re going to work on during the year in terms of governance, management and projects. Where do they hope to strengthen their governance? What should the board be doing better a year from today? How should the group be improving its organizational management, fiscal management, internal systems, technology, etc.? And then the contract talks about what they will have in terms of output.

In other words, what we will measure and how we will measure it is negotiated beforehand. It’s not imposed on them by us. At the end of the year they get a chance to tell us how well they did before we make any kind of judgment.

One question we’re still trying to get our hands more firmly around is the “so what” question. So you’ve in fact made these improvements, but what does that mean in terms of the long-term stability of communities or increased assets in families, or performance of children, or reduction in dropout and teenage pregnancy rates?

Mullan: What process are you going through to try to get at the “so what” question, to try to get at the harder impact measures?

Ebonie Alexander: What we’re looking at right now are three basic sets of impact indicators. The first would be economic, looking to see things like: Has there been new business development since the project has been completed? Is there a merchants’ association now? Are there more bank branches in the area? Fewer money exchange stores?

The second would be individual, looking at lifestyle changes among participants. Do they now have a checking or savings account? Are they saving for retirement? Have they changed jobs as a result of their homeownership status? Are their children doing better in school? Are they volunteering more in their community outside of their churches? Are they participating in the political process? Are they registered to vote and are they actually voting?

The third would be social, the involvement of the community and community pride. Is the surrounding community beginning to see this community as a model? Have there been new traffic lights put in? Are there discussions with the local municipality about changes in transportation? Is the community college or library system making educational opportunities available in the area? Are there new after-school programs or continuing education?

What we want to do after we’ve put it all together is test this throughout North Carolina to see if in fact we can track these indicators, if there is some positive impact of our work that we can quantify.

Mullan: Is this a system that you’re helping the CDCs to put into place that they will manage themselves, are you doing it in collaboration, or is NCCDI actually managing this and collecting the data?

Rasheed: We’re collecting it centrally, and we’re actually the energy behind the process. It’s not necessarily something that’s being welcomed by CDCs, because, based on experience, they always have a fear of assessment or measurement tools. We hope that at some point they will embrace it as a good learning tool, something that will allow them to make better management and organizational decisions.

Mullan: How do organizational cultures need to change to advance impact evaluation?

Glickman: I think the question is backwards. Impact assessment has to change to take account of the varying cultures of organizations. That’s more important. I hear behind that question a goal of making organizations into a bunch of green eye-shade types who are interested only in doing the assessment and not dealing with the neighborhood they work in. That worries me.

Mullan: As I understand from the SMP folks, it’s not necessarily the high-resource organizations that are having the easiest time with this. Sometimes it’s smaller organizations that are a little more flexible and don’t have bureaucracies.

Lee: Part of it is consciously building it in at a lot of levels, from making sure you have community folks on the board who can really talk about the kinds of impacts they see in the neighborhood, to making sure program staff build in time for sessions where folks who receive the benefits can give feedback through focus groups, surveys, etc.

Johnson: I still think Mary Jo’s question about organizational cultures is a good one. If we’re going to align interests of funders, managers of organizations, and those they serve, a couple of things have to be in place. There has to be greater investment in the thinking process, whether it’s focus groups or survey instruments or other kinds of data management. Most organizations don’t invest much in that or do a lot of analysis. And we have to figure out a way to make managers of organizations understand how much funders create mission creep. The SMP showed us time and again how people understand what good work looks like and what it would take, but they are doing something else, inevitably what they could get funded, or what they thought would get funded.

Simon: What can organizations do to take the first step in challenging funders to work more collaboratively? What experiences have you had in doing that, where you’ve helped the funder not be driven by their current theory of change?

Hilgert: I’m not there yet. We’re trying to get our own house in order enough to feel like we’re doing a good job measuring our success according to the people we’re serving. That’s the first step. Then we need to be able to show people that we’ve done something that’s of value. We think you need to pay attention to that before you even talk about addressing power and control and all those things.

Johnson: The Success Measures Guidebook is online. Organizations could go there and start to get a sense of how other organizations around the country are setting up indicators for the things they think are most important. They can begin to get ideas about how to have the discussion internally about what it is they care most about happening.

Folks need to get on their back legs and say to the funders, “Your theory of change doesn’t work in the face of what we’re trying to do here, do you understand that?” Often our analysis is just as good as that of those who have historically been in decision-making or gate-keeping roles. However, the challenge for us with SMP throughout has been to get managers of organizations to internalize evaluation as their issue. We need to know whether we’re having the impact we set out to have. So many people want to blame everything on funders.

Simon: You say organizations ought to internalize it, but there are a lot of organizations that aren’t performing all that well. They’re just going from funder to funder and project to project and hoping no one notices. They’re not going to internalize it.

Johnson: The only way to fix that is to have clear measures at hand that are respected by all the players, including the consumers of the programs, the organization’s management, and funders. That’s why the SMP sought to develop a common language and set of indicators that are understood in all three sectors. To keep the funding world from coming in with a new set of indicators and totally discounting previous work, there need to be enough organizations rooted in an understanding of what is most important to them and what performance really looks like. They need to be willing to fight for how they want to be assessed.
