The Nonprofit Fix

The Road to Thriving Nonprofits: Exploring Outcome Measures and Data Systems

September 21, 2023 Pete York & Ken Berger Episode 3

Ever wondered why some nonprofits thrive while others struggle to make a real difference? Join us, your co-hosts Ken Berger and Peter York, in cracking the code of impact measurement in nonprofits. With Peter's expertise in the field, we unlock the complexities of this vital tool that goes beyond mere data collection, homing in on the metrics of success and exploring who gets to define them. Buckle in as we challenge the notion of success and highlight the need to consult those being served about their desired outcomes.

Our journey doesn't stop there. We delve into the murky waters of outcome measurement and the stark divide between funders who push for certain results and those who simply monitor outputs. We tackle the tension between desired outcomes and resource constraints and highlight the role of nonprofits in designing programs that yield realistic and beneficial results for those they serve. Listen closely as we navigate the difficult terrain where market transactions don't always prioritize those in need.

Lastly, we draw back the curtains on data collection from beneficiaries, discussing its role in shaping impactful programs. We question the limitations of satisfaction metrics and advocate for investing in data systems that yield more precise outcomes. While the road to data-driven decision-making in nonprofits may be steep, we tie up our discussion with collaborative solutions that encourage shared ownership of impact measurement systems. Don't miss out on this enlightening conversation that puts nonprofits, their beneficiaries, and their funders under the microscope.

Speaker 2:

Welcome to The Nonprofit Fix, a podcast about the nonprofit sector, where we talk openly and honestly about the many challenges that face the sector, where we will discuss current and future solutions to those challenges, where we explore how the nonprofit sector can have much more positive impact in the world. A podcast where we believe that once we fix the nonprofit sector, we can much more dramatically help to fix our broken world.

Speaker 1:

Hi everybody, this is Ken Berger here with Peter York for episode number three of our podcast. You may recall that in our last episode, episode number two, we began to highlight the scale and scope of the nonprofit sector, as well as some of the challenges associated with it, and we also began to discuss possible solutions to at least some of the challenges related to the use of data analytics. In today's episode we want to provide another level-setting, first-order-of-business conversation about what is called impact measurement. Our hope is to shed some light on how to do it and how important it is. But first some definitions and, of course, our problem identification part of the podcast. So, kicking it off, I'm going to be asking Peter a whole bunch of questions, because Peter is the expert here on all of this. So first off, Peter, what exactly is impact measurement?

Speaker 2:

Good question and welcome everybody.

Speaker 2:

I'm happy to be joining my co-host here, Ken, for another episode. From the standpoint of impact measurement, the way to think about it is this: we have a nonprofit program or strategy, and we want to be able to evaluate its impact. Usually funders, board members, and leaders of organizations and programs want to know, is their program making a difference? So when we talk about impact measurement, broadly speaking, we're talking about the act of gathering data and information about how a program is being implemented and how many are served, which is called outputs, and, even more importantly, at least in my opinion, what results or outcomes are happening. Impact measurement is really the act of deliberately developing and conducting an evaluation of a program. Now, these are things that organizations and programs sometimes do independently or on their own, and there are other times that funders are funding the evaluations because they've provided grants or made significant donations and want to learn about the impact or the outcomes of their investments or their grants.

Speaker 1:

Isn't that difficult to do, to measure your impact?

Speaker 2:

It is. It's very challenging, because there's a big challenge around what it is we measure. What's our outcome of success? And there's a lot to talk about with respect to what we mean by outcomes for those we are serving. It's difficult sometimes because, as you can imagine, there are many programs that only get so much time with those they serve or those that are participating, and so the question often is, what is the result? What's the outcome? What kind of change is going to happen? What could we reasonably expect? So it is difficult and challenging to figure out exactly what our metric of success is, or what outcome we're striving to achieve. It sounds like it would be easy, but it is not an easy task.

Speaker 2:

Secondly, methodologically, there's a lot of work that has to be done to collect data and do it in a way that we know follows good research practices and is objective, because, as we all know, if we're the ones implementing a program and we're also the ones evaluating it, there's a bias there. So oftentimes you have to have a way to evaluate it more objectively rather than subjectively. And so there are many times, and I've spent the past 27 years of my career as an evaluator, where someone comes in as an external evaluator and conducts it more objectively, helping to plan, design and implement that evaluation. That is oftentimes costly and makes it very difficult for organizations to do. That's not to say that they don't also sometimes do their own evaluation work internally; that is the case. But they're always having to be thoughtful about how they make sure they're getting objective information and data on how their programs are doing and the results they're getting.

Speaker 1:

So is it the nonprofit itself that decides or sets the outcomes that they want to achieve, or is it the funder, or is it some third party who typically decides what the outcomes should be?

Speaker 2:

That's part of the challenge of our sector, because the answer to that question is, it depends. There are some nonprofits, if they receive significant funding from a government agency, that may be asked to track and monitor, for example, how many people they serve, which we call the outputs, not the outcomes, but the outputs, and oftentimes it comes with the expectation that they're doing that in spite of the fact that they may not have the resources to do it. If you're a private funder, a philanthropy, a corporation, and you're making a significant grant, let's say tens of thousands, if not hundreds of thousands of dollars, to an organization to implement a program or to enhance an existing program, sometimes you will also fund an evaluation. And if you're funding an evaluation, you would also have your own requirements as to what that program is required to do, or what it proposed to do, that meets your agenda as the funder. And then the nonprofit has its own agenda, since it's its own program, and it often has multiple funders.

Speaker 2:

Herein lies one of the big problems: when it comes to who defines the metric of success, that is a very sticky, complicated question. In an ideal world, in my opinion, the definition of the metric of success would be driven by the people that are getting the services and what they would say they want as a result, and I don't think it would just be "serve me"; I think there are certain results that they would want. However, they often are not consulted when it comes to choosing the outcome measure of success. The programs themselves get a chance to do it until they start to get some significant external funding, in which case those external funders will have their say, and oftentimes organizations and programs end up with multiple outcomes that they have to measure in order to address what's going on.

Speaker 1:

So let me just take a step back and tell you a quick story, and it leads to another question for you.

Speaker 1:

During the time that I was at Charity Navigator, we were criticized by a lot of nonprofits over this very question of who defines what is a metric of success, and they felt that Charity Navigator's metrics were not the right metrics for success. So we tried to be open to that feedback and we said, great, what should be the right metrics? And the answer that came back was, well, it should be the results of our work, our outcomes. And we said, great, okay, send us your outcomes, send them on down and we'll use that as the metric of success. And, as you can imagine, Peter, you could hear a pin drop. It wasn't that the handful of charities that had the metrics weren't willing to send them; it was that only a handful of charities had those metrics to share. But the other part of this quick story I want to tell you is that while we were exploring this issue of trying to measure outcomes and speaking with thought leaders around the country and around the world, and I think I've told you this story before privately, I had a conversation with a colleague in the United Kingdom, and she said, you Americans, you better watch out, because if you don't get ahead of this thing, and if you as individual nonprofits, and, as you say, with input from your beneficiaries, don't develop these metrics, what eventually is going to happen is the government, because government was their largest funder, and I guess they also meant foundations, will impose upon you outcomes that they deem to be the metrics of success, even if you on the ground don't feel that way.

Speaker 1:

But in the absence of your getting ahead of the curve and having input into this process, that's one thing you better watch out for, because given the amount of money that's going into your nonprofits, if the tools are there and you're not using those tools, then the government at some point is going to expect these metrics whether you want to do them or not. So what are your thoughts on the issue of having a third party like a funder defining the metrics? I'll leave it there and wait for your answer to that one.

Speaker 2:

Yeah, no, that's okay. So, first of all, in the, like I said, 27, let's call it close to 30, years of work that I've been doing in the field, I haven't really yet seen what your colleague was concerned about at scale. That doesn't mean that government and private philanthropy, corporate philanthropy, significant funders are not asking for, and even in some cases demanding, certain outcomes and, by the way, determining those for you. Meaning, if you're going to accept our funding, these are the outcomes we want that funding to strive towards achieving, and we're going to request that you evaluate those outcomes, and maybe we're even going to request that you use some of the money to do that. That is not a widespread practice. It's happening with a lot of the larger organizations, yes, but a lot of what government is doing right now, and continuing to do, is fund programs and really track outputs. They're basically saying, did you deliver what you said you would deliver, in the quantity and quality that you said you would deliver it? So it's much more about monitoring quality, implementation, outputs, and people served. That's where they tend to really put in requirements and stipulations. On the actual outcome question, they're not forcing measures en masse. Don't get me wrong, some of them do, but as a whole, big picture, it's still not there.

Speaker 2:

Private philanthropy and a lot of the funders, foundations and others, they also, in their requests for proposals and the projects that they fund, speak about outcomes and wanna know that you're proposing to achieve outcomes. Most of them actually, in fact, leave it open enough for the grantee, the proposing organization, to put those in. But they will sometimes set parameters. And again, there are organizations, especially ones that write really big checks, who are definitely driving, and in some cases pushing, specific metrics of success.

Speaker 2:

And I have plenty of examples where corporate funders and private philanthropies definitely imposed outcome metrics as part of a major funding initiative or program. So it definitely happens. Again, it tends to be the top 1%, the top 5%, that are getting that kind of demand. And the problem, which you may be alluding to, most importantly, is that by doing that we again are not really being realistic. I spoke on a previous episode about the delusions of impact grandeur. When you're removed from the beneficiaries, from the programs, you don't really see what's going on on the ground. If and when you do impose those outcome metrics, you're often doing so in a way that is theoretically very challenging, meaning there are a lot of steps to get to your impact, and it's unfair to expect one program to be able to accomplish all of them. It's just too complex.

Speaker 1:

I would almost call it the mythology of outcome expectations on the part of a lot of these funders. They'll give you a small amount of money and they want outcomes. And I think I've also told you, one of the last presentations I did when I was at Charity Navigator was a play with a couple of my colleagues, and the main part of the play was, you had this nonprofit, and for years and years it's providing outputs to its funders: outputs, outputs, outputs.

Speaker 1:

And now the funders want outcomes. So there's this machine, and the nonprofit goes through this machine to get to outcomes, and on the other side it says, oh, it's now outcomes. And when you looked at what were outputs before and are now being called outcomes, it's the exact same thing, but it's repackaged to be called outcomes, because the funders, who are not funding you to measure, are asking for that. So everybody's doing a kabuki dance, those that are not in that 1%, those that may not use what you might even call the evaluation industrial complex. They just have to morph and look like they're doing it, otherwise they won't get funding at all, because now it is very much in vogue for all funders, at all levels, to be asking, what are your outcomes, what are your results?

Speaker 2:

Yeah, when it comes to outcomes, the bottom line is that good outcome measurement, anchored on what is realistically possible with a given program for the beneficiaries as a direct result of your intervention, is still not widespread. We're not really getting there. Oftentimes we do have the large-scale program funding or initiative funding that gets the external evaluation dollars to come in with somewhat imposed, sometimes self-derived, outcome metrics. It's a multi-year kind of effort; the funding and the resources are there. But that's a rare exception and, by the way, it goes away when the funding goes away. What we really aren't seeing, which is my biggest concern, is organizations themselves owning a set of direct outcomes resulting from their programs that they can measure repeatedly and regularly as a way to derive feedback about what works for whom, and leverage that as well for outcome evaluation. So right now, if they're doing it, it's because somebody's asked them to do it as part of a grant. What I don't see enough of is the adoption of an orientation of choosing the kind of metrics anchored in what your beneficiaries would say they're trying to achieve. And I have a whole theory of change, if you will, with respect to what kind of results you can expect from certain programs. But you need a realistic set of outcomes, not outputs, that you could imagine your beneficiaries wanting. Or, even better, talk to them and find out what they want, categorize what they want into outcome metrics, and let that be your lens. If you're gathering that data longitudinally, baseline and then over time, use that as your lens to figure out what's working for whom. And herein lies the pathway to get to where we need to go if we want a data-driven world.

Speaker 2:

Right now, the problem is all these proxy buyers, funders, government check writers; they're not the beneficiaries, but they're paying the bill, so to speak, and so the beneficiaries are silenced. I'm saying they're silenced in a way because the market transaction is not encouraging anybody to measure the direct results for the beneficiary. It's more about telling big stories to funders and their stakeholders, and/or getting really rigorous evaluations for the elite few with the large grants, where the project can afford to have some of the top evaluators out there in the field doing their work. And that, by the way, is more for the purposes of, yes, some of it is formative and learning-based, but some of it is also accountability, to make sure they're getting the results for those big dollars. But at the end of the day, none of it is really, in my opinion, and I shouldn't say none of it.

Speaker 2:

But not enough is going towards using the tools of outcome measures and gathering data about your program experience in a way that lets us figure out what works for whom. Evaluation should be a secondary benefit. The primary benefit is that we need to measure outcomes so that we have the feedback, the more real-time feedback, as to how our programs are doing, because that's the kind of R&D, that's the program design. It's anchored in the beneficiary. If this were a for-profit model, we'd have to listen to our customers. In this case, we're not. We're listening to our proxy buyers, and it's just this really messy, noisy market, and the best we're getting is stories and numbers.

Speaker 1:

Can I just tell you a few things? When you talk about beneficiaries and how important it is to get their input, I completely agree with you, but I just wanna share some of my observations over the years as to how that can get distorted. One place that I worked, we were required by our federal funders to have over half of our board members be beneficiaries of the services. However, in that instance, my observation is that the organization largely cherry-picked the beneficiaries who joined the board, people who were largely what you might call rubber-stampers, who were sort of enamored of the leadership and just went along. I had a similar situation when I was running a homeless shelter and I formed a group of the beneficiaries to get their input and their involvement and their engagement, and I think part of the dynamic that, once again, distorted what came out of that was the power dynamics: here I am, I'm the CEO, and I'm sitting down with them, and I don't know that the level of candor, the honest truth about their needs and how they saw the services, was there. So it seems to me like it would be very, very important.

Speaker 1:

Who is the one that's talking with the beneficiaries? That seems like a really important question. And then I'm gonna throw in, for you to also respond to, the notion that one way to do this anonymously is to have, what do you call it, feedback surveys, constituent voice kind of things. So your thoughts on that as well: how do you ask, and who asks, the beneficiaries?

Speaker 2:

Yeah. So the underlying reason why that happens so often is because, again, I really believe that the nonprofit sector, in the market exchange, is about telling stories, and so what we tend to do is tell the stories that are going to help us acquire the resources we need to sustain and grow what it is we're doing, because stories move people. What really moves the check writing, in fact, is not so much the numbers, it's the stories. And as a result, there's a certain bias, even with respect to bringing, for example, beneficiaries onto the board. What's gonna happen, what you call cherry-picking, is it's gonna be those that sing the story of the programs, because they are the anecdotal success stories. Right, and everybody wants those success stories. It takes a very different mindset and, quite frankly, to be fair to nonprofits, the market exchange has to be changed, because it's a play. So you can sit here and look at it and go, well, they shouldn't tell the stories and sell the stories, they shouldn't bias their engagement of beneficiaries with just those that sing the praises. At the same time, that's the way the game is played in terms of how you get donations and money and funding and all that kind of stuff. And, by the way, you check the boxes; a lot of people are in the box-checking business when it comes to these kinds of things. In terms of doing it the right way, what we really need to do, if we're really gonna shift this, is, again, it's gonna be data-driven. So imagine instead if you had data that actually captured information through surveys or whatever the case may be. It can be anonymous, but hopefully some of it would be identifiable in the sense that you could study it. So, instead of just cherry-picking stories, imagine being able to see in your data, and have your data analyzed and presented, all your program data and all of your customers, so to speak, your beneficiaries. Imagine being able to find those that are your success stories, but also the areas where we didn't succeed, and some of the cases where we're still trying to figure out what to do. There are now data tools, methodologies and techniques that allow us to find all the different types of cases that represent the experience of our programs and the results we can help accomplish, and once you have that data, you can be more thoughtful about who you talk to.

Speaker 2:

The problem is, right now we're leading with fundraising in a lot of ways.

Speaker 2:

I like to say it this way: we're doing a lot of this work through the lens of the fundraiser role in a nonprofit, or fundraising. And if you always go through the framework of fundraising, rather than outcome evaluation or learning what works or program design, if you're going through a fundraising lens, you're obviously going to change what it is you measure.

Speaker 2:

You're gonna want your stories and anecdotes and highlight those, because you move people emotionally, you move people to give you money. Your whole orientation is very different. But in fact, if we really want to learn what works and do right by the beneficiaries, we need to actually be gathering data from every beneficiary about their direct results. We need to be tracking and monitoring what we're providing them, and we need to study that data and ask people and engage different groups, not just those that did well, but those that are not being served well. We need to ask questions about the diversity and the equity of how we're treating people. All of that requires data, and it requires an orientation that we're not at, which is actually getting into and studying and getting data on the programs. What we're doing is satisfying fundraising and funding accountability needs. It's just a very different orientation.

Speaker 1:

So let me tease out what I think I'm hearing, reflecting back on the question that I had posed. One part of the question I had posed relates to what's been called constituent voice and that sort of tool. For the audience, for those that aren't familiar with it, my understanding is that it largely has to do with making certain that your beneficiaries are satisfied with the services that are being delivered. There's a tool, a metric, called the net promoter score, where typically you would ask somebody, would you recommend this service to a family member or friend, and it's a 10-point scale, blah, blah, blah. But the difference between that sort of thing and what you're talking about is that it's about much more than satisfaction. It's much more rigorous data that you're asking people for, about success and failure and their experiences, and that's part of the difference here in getting the feedback from the beneficiaries. Is that correct?
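As a side note for anyone who wants to see the arithmetic behind the net promoter score Ken mentions, here is a minimal sketch in Python. It assumes the standard 0-to-10 scoring convention (promoters answer 9 or 10, detractors 0 through 6); the sample responses are made up for illustration.

```python
def net_promoter_score(ratings):
    """Compute NPS from 0-10 'would you recommend?' ratings.

    Promoters score 9-10, detractors 0-6; NPS is the percentage of
    promoters minus the percentage of detractors (range -100 to +100).
    """
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Hypothetical survey responses from program participants
responses = [10, 9, 9, 8, 7, 6, 10, 5, 9, 8]
print(f"NPS: {net_promoter_score(responses):.0f}")  # 5 promoters, 2 detractors -> 30
```

Note that, as discussed below, a score like this only describes the program experience; it says nothing on its own about whether anyone's outcomes changed.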

Speaker 2:

Yeah, yeah. So to unpack this a little more, there are certain kinds of information we have to gather and certain data points that we need, and it fits within what is called a theory of change or logic model. But let's just talk about the program elements. Any program has a number of elements. One element that often gets ignored is what people come to the program with: their background, the community they come from, their context. There are data points that we need to gather about what people come with, especially the factors that will affect how they engage in our services. That's one area; we call that context. The second type of information we need is information about what is being provided, the program, the program experience. That's the what did we provide: did we provide some training and coaching, how many hours of that, and what's the transactional data we have on it, often captured in program administrative data systems. We need to be able to capture all of the elements of the program experience. Herein lies what you were talking about: oftentimes satisfaction and net promoter aren't necessarily directly measuring outcomes, by which I mean changes for people, and I'll talk about that in a moment. Oftentimes they measure the perceived quality of the program experience. Sometimes it's satisfaction, whatever the case may be. But those are measures of the program experience, just like attendance. Did you attend? How often did you come to therapy? What did we deliver to you? Oh, we had seven hours of cognitive behavioral therapy, and that all gets tracked. And then, as I just said, sometimes we'll use other survey instruments to ask what you felt about the program experience. Okay, so those are the program experience data points. A third kind of data, and that's where a lot of constituent voice comes in, are the outcomes.

Speaker 2:

These are the questions we ask, directly or indirectly, about things like changes in attitude before and after, or changes in knowledge. We could test people's knowledge, if you were providing training: let's do a before and after in terms of how much knowledge you had about what we trained you on and how much you have now. We might also use scales or tools that assess and measure your attitude, your beliefs, how much you value what it is we were trying to help you with. We might ask you questions about what new opportunities you partook in in your life outside the program. What we're really trying to get at, oftentimes, is whether your behavior has changed, or maybe your skill set. Skills are different than knowledge; a skill is the ability to do something, your capacity to do something. And there are a number of different ways you can directly or indirectly gather that information in the context of a program. But those outcomes are what we need, and they should be realistically tied to what's possible given your program.

Speaker 2:

Sometimes programs can't do more than give knowledge and skills. They're not gonna move behavior permanently, okay, but they provide one part of the change pie, and that is knowledge and skills. Those are the outcome metrics that are going to help us understand what works. So if we have the context, your background and intake information, if we have your program experience, and we have your outcomes, those separate data points are being gathered, and we can now analyze that data to ask and answer the question, what works for whom? "What works" is the program, "for whom" is the context, and "works" ultimately implies what gets results, which is the outcomes. But a lot of times what we're gathering is outputs, how many people get served, and satisfaction, net promoter, that kind of program experience feedback, which, by the way, is very important and super valuable, but I don't think it does a good enough job; it can't measure outcomes.
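To make the three kinds of data points Peter describes concrete, here is a minimal sketch of what a single beneficiary record might look like in an administrative data system, written in Python. All field names are hypothetical, not drawn from any particular system; the point is only that context, program experience, and outcomes live as separate, analyzable fields.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BeneficiaryRecord:
    # Context: what the person comes to the program with
    case_id: str
    age: int
    referral_source: str
    baseline_score: float                 # intake assessment

    # Program experience: what was delivered and how it was received
    sessions_attended: int = 0
    service_hours: float = 0.0
    satisfaction: Optional[int] = None    # quality of experience, not an outcome

    # Outcome: change for the person, measured against baseline
    followup_score: Optional[float] = None

    def outcome_change(self) -> Optional[float]:
        """Before/after change; only computable once a follow-up exists."""
        if self.followup_score is None:
            return None
        return self.followup_score - self.baseline_score

# Example row with fabricated values
rec = BeneficiaryRecord("case-001", 17, "school referral", 2.4,
                        sessions_attended=8, service_hours=12.0,
                        satisfaction=4, followup_score=3.9)
print(rec.outcome_change())  # 1.5: change relative to the intake baseline
```

Analyzing "what works for whom" then amounts to relating the experience fields to the outcome field within context subgroups, which later parts of the conversation return to.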

Speaker 2:

Now, some may say that something like a net promoter question, would you refer somebody, might by proxy also be about outcomes, because why would you promote a service to a friend of yours if you didn't achieve some results?

Speaker 2:

The problem is not everybody answers that question the same way. A lot of people answer it the way they do because when they went to the training they got a wonderful meal for free and got to talk to their colleagues and friends, and so they would refer their friends to it, but they didn't get a result from it, they just had a positive experience. So it's not precise enough. Anytime we're using net promoter, satisfaction, other types of scales, again, really important and really valuable, but if we don't have the data that shows the connection, the correlation, between those satisfaction and promoter scores and an outcome, then we really aren't doing what we need to do to get to that kind of data-driven learning and the kind of potential we're gonna be talking about on this show.

Speaker 1:

So, on self-report, like in some of the typical constituent voice kind of surveys, my observation is that when you compare somebody who got a service and somebody who didn't, more often than not the person who got a service is, off the bat, appreciative and positively inclined, because they got something that they needed, which is understandable. But I think there are limitations to anybody's objectivity when they're doing a self-report, like how much did I learn, a tremendous amount, or how much did I accomplish here. But it sounds like the ways to gather information include, you said, testing beneficiaries and surveying beneficiaries. You could interview the beneficiaries. And then also you could survey people in their lives.

Speaker 2:

You could talk to their families.

Speaker 1:

Call others in their community.

Speaker 2:

You could, basically. Like, we did a project in an inpatient psychiatric setting for kids, and one of the ideas that got talked about, and some version of it got implemented, was gathering data from the families, because the families offer an interesting perspective on how the child is doing. If the child is in treatment, then the child goes home or spends visits at home, so family feedback is very important.

Speaker 1:

And I'm not actually down on self-report either.

Speaker 2:

No, I think you can even have self-report in terms of outcomes. Sure, self-report sometimes gets associated with satisfaction, but you can have a self-report where you ask somebody after an intervention, before this, did you value this particular tool or skill or behavior, and what do you feel about it now? Now you may say, well, that's self-report. It is, but it's fine as long as there's enough variance in the responses across who you serve. Any survey is not worth a grain of salt if there's no variance in how people answer it. So you have to craft tools that give you a distribution of answers. Once you get a distribution of answers, you can analyze it to understand what works and, of course, the meaning you make from it.
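A quick illustration of Peter's point about variance: before leaning on a survey item, check whether the answers actually spread out rather than piling up on one value. A minimal sketch with fabricated responses and an arbitrary cutoff:

```python
from statistics import pstdev, mean

def item_has_variance(responses, min_sd=0.5):
    """Flag survey items whose answers barely vary.

    A standard deviation near zero means everyone answered the same way,
    so the item can't help distinguish what works for whom.
    """
    sd = pstdev(responses)
    return sd >= min_sd, sd

# Hypothetical 1-5 scale items
ceiling_item = [5, 5, 5, 5, 4, 5, 5]   # almost no spread
useful_item = [2, 4, 5, 3, 1, 4, 2]    # a real distribution of answers

for name, answers in [("ceiling_item", ceiling_item), ("useful_item", useful_item)]:
    ok, sd = item_has_variance(answers)
    print(f"{name}: mean={mean(answers):.1f}, sd={sd:.2f}, usable={ok}")
```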

Speaker 2:

You have to be thoughtful about what is called in the research field face validity. Whatever metrics you choose, everybody should have a common-sense agreement that the metric feels like a good, fair metric of an outcome, not an output, that we all can agree has value. And that's something that, by the way, no scientist or social scientist or academic should be dictating, because our ultimate outcomes are a reflection of our values, right? The measures of success, we all know, are culturally defined. So the key is to make sure that whatever your metrics of success are, and however you measure your outcomes, no matter what format or structure they come in, you're doing it in a way that all of your stakeholders, internally, especially beneficiaries, but also other stakeholders including funders, hold what you're measuring as valuable measures, directly or indirectly, of what everybody's striving to achieve.

Speaker 1:

Yeah, and, by the way, I was not meaning to be dismissive of self-reporting as one of the tools, but I guess my impression, my assumption, is that the more angles, if you will, or audiences that you get input from, and the more tools you have available to gather information, the better. So what I was in the middle of saying was, one source of the information, a very important source, is the beneficiaries. You can also have information that's being gathered by the people who are providing the services; they can be reporting that information. And it certainly sounds like third-party researchers can be involved in gathering some of the information as well.

Speaker 2:

Yeah, yeah, that is the case, and I think they can be helpful. The only thing I would emphasize, if we're going to change the way we learn from this kind of measurement, these kinds of tools and data-driven learning, is that we have to have methods that can be replicated on a longitudinal, ongoing basis. So one of the things I always wanna caution people about is, if it requires an outside evaluator to do the data collection, it's important to figure out, can you afford that on a longitudinal basis?

Speaker 2:

And if you can't, just know there are a lot of other tools now, including technological tools, that can help. As the world of AI keeps advancing, there are even more ways to do this work. The point being, you have to be very thoughtful about the resources you have, how much you can bring on external resources, all that kind of stuff. I argue you should always try to sort of bake it into the way you work. And if the metric is going to be too hard to collect without some objective, expensive outsider, then you need to find some other metrics that you can all agree to, because the learning is really the priority here. And as for perfection in the evaluation space with respect to methodology, I'll talk later about the idea of the perfect randomized control trial as the platinum standard, and also about making sure that our metrics of success are formulated from a thorough review of the literature and research knowledge and everything else. All of that I get, and I value it highly.

Speaker 2:

But at some point we also have to realize that if we go too far in that direction, it's so costly and expensive that perfection becomes the enemy of progress. What we want to do is, listen, we want to inform it as much as we can.

Speaker 2:

I really wish funders and everybody else were actually putting 10 to 15 percent of program resources into this kind of work, so we could get to that better level. But with lower costs and the tools of technology, AI and other things, we can now do things we couldn't before, and still have it be a part of the system, a part of the process, a part of our everyday work, and even automated. So we're now at another place, and I think we can move into where we have been unable to go, which is to truly develop data-driven systems that measure context, program and outcomes, do it in an administrative data system with some tools, training, AI, other things, and do this in a way that doesn't cost an arm and a leg and can bring these kinds of resources to the community.

Speaker 1:

Well, that's where I wanted to go next with you. I wanted to talk a little bit about the cost of getting to outcomes. You were talking about the platinum version, randomized controlled trials, and I think most of our nonprofits, and I wanted to make sure I got this right, are typically using cubic zirconia, that's fake diamonds, by the way, for those of you who don't know. So for those of us that are using cubic zirconia and need a much lower cost, I mean, I think, you know, when you talked about 10% or something of that nature.

Speaker 1:

I even think it depends on the scale of the organization. For example, an organization that is a million-dollar operation versus an organization that's a 10 or 50 million dollar operation, which is the top 2% of charities, by the way, getting over the 10 million, the capacity varies. And then, of course, there's that platinum level, which is, like, the top one-tenth of one percent of charities. So, of those tools, it sounds to me like the most affordable ones, if you're very small, often circle around the survey level, or maybe some rudimentary data collection. So I'm just curious: if you think of this honestly, like trying to go up a ladder from small to large, what would you say to a relatively small organization about where they should start in trying to get to outcomes?

Speaker 2:

So I will answer that, but let me start by saying something about the 10%. Part of the problem is, first of all, I think the conversation needs to change. I don't think we should be treating evaluation or outcome measurement or creating data systems as overhead, as an indirect cost.

Speaker 1:

That's good. I'm with you, actually.

Speaker 2:

I actually believe it needs to be funded, invested in and supported by the organization, and expensed out as a part of program. I think we need to stop putting it in as sort of an indirect, operational kind of element of what we do.

Speaker 1:

Can I just jump in?

Speaker 2:

It needs to be woven in. Hold on one second, just to finish the point: I really wish the funding community, government especially, but also private philanthropy and others, would start ensuring that when they provide program grants, when they only provide program funding, they include that the dollars can be used to evaluate, or to design data systems, and use that. So that's my first answer. Go ahead.

Speaker 1:

So, you know, virtually every nonprofit that I've worked for that does direct service over the years, as is typical of most nonprofits, the lion's share of the money comes from government sources for human services, direct services, and then foundations follow that. And certainly when it comes to government, like many of these funders, there's no way, unfortunately, that they would define the tools for impact measurement as anything other than not allowable, or overhead, or administrative costs that are not a primary part of their funding.

Speaker 1:

Unfortunately. And I completely agree with you that it shouldn't be that way, and I too wish that, and I too will do everything I can to advocate for a change in their thinking about this. Because the whole point is that until we know whether the outcomes are truly there, until we learn how to do better, what the hell is the point of restricting that incredibly valuable tool that lets us do better at what we're doing? But unfortunately, I think, in defense of most nonprofits, that's not the way it is, and given the amount of money they have available in the scarcity environment they're typically in, it's a big lift, I think, to get to that. Making sure that money is earmarked, even though it should be mission critical, is tough.

Speaker 2:

Yeah, it is. It's tough, and I'm sympathetic to why it is tough, even for nonprofits, to stomach it and why they think about it that way. Again, I come back to the market transaction. But let me come back to the small-organization question: if you're a small organization, how do you even get started? The bottom line is this: nowadays, what you need to start with is a program administrative data system.

Speaker 2:

You need a way to do that, and there are light-touch, inexpensive options all the way up to the Cadillacs and Mercedes-Benzes of data systems, but at a minimum you should start in a digital way, with computers. Okay, not the old days, like when I was a caseworker and we had a file system because it was 1989 or 1990, whatever. You need to have some type of data system where you're incorporating all the transactional measures, attendance and, like, dosage of whatever you're providing. So you're tracking what you're doing: if you're meeting with somebody, if they're attending a program or an activity, you're gathering that data and entering it into this administrative data system. You're adding on whatever field-based surveys and tools may already be out there that you can incorporate into that data system, and that data system is able to notify your frontline workers, the people doing the work on the front lines, when they should be administering those assessments, and hopefully they're longitudinal. So between assessments and transactional data in a program administrative data system, you are giving yourself the capability of getting to where we're talking about.
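As a rough illustration of what starting "in a digital way" can mean, here is a minimal sketch using Python's built-in sqlite3 module to record the transactional data Peter mentions (attendance, dosage) alongside longitudinal assessment scores. The table and column names are assumptions for illustration only; any lightweight case-management tool would serve the same purpose.

```python
import sqlite3
from datetime import date

conn = sqlite3.connect("program_data.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS service_events (
    case_id TEXT, event_date TEXT, service_type TEXT, hours REAL
);
CREATE TABLE IF NOT EXISTS assessments (
    case_id TEXT, assess_date TEXT, instrument TEXT, score REAL
);
""")

# Transactional data: what was delivered, to whom, and how much (dosage)
conn.execute("INSERT INTO service_events VALUES (?, ?, ?, ?)",
             ("case-001", date.today().isoformat(), "CBT session", 1.0))

# Longitudinal assessment data: repeated measures of the same instrument
conn.execute("INSERT INTO assessments VALUES (?, ?, ?, ?)",
             ("case-001", date.today().isoformat(), "wellbeing_scale", 3.2))
conn.commit()

# Dosage so far for one case, ready for later analysis
total = conn.execute("SELECT SUM(hours) FROM service_events WHERE case_id = ?",
                     ("case-001",)).fetchone()[0]
print(f"case-001 total service hours: {total}")
```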

Speaker 1:

Isn't there good news in what you're saying, in the sense that virtually every nonprofit that's getting funding from government, foundations and whatnot, especially from government, by the nature of those funders, is required to be reporting on a variety of these metrics from the get-go? And given the cost of computers and whatnot, if you're not digitizing that information, well, you should be. In fact, that should help you to streamline to some degree, so that part of what you're talking about should be more achievable now than ever before.

Speaker 2:

Yes, it really is, and I would argue that you also have to develop a value for having that. So part of it is, yes, it does create a little bit more work, but eventually it streamlines things in ways you don't realize. While there's a learning curve and an adoption period for these tools, the administrative data system, at some point it's going to save you so much time and energy for other types of things that you're doing. And, most importantly, what you're highlighting, Ken, is that it is also your ears and eyes on what's really going on. It is the data that's going to make you data-driven, so that you can begin to really leverage the tools of data science and other things, which we're gonna spend time talking about on this podcast.

Speaker 2:

So I think it's really important to get started, and it's important to get started on that digital data collection. Now, you may not be in, you know, the health world. For all health care providers and others, there are a lot more mandates when they're getting third-party payers and all that stuff. They're already there.

Speaker 1:

They're already there with electronic health records?

Speaker 2:

Yes, exactly. But there are other sectors that have not had to; nobody's asked them to.

Speaker 1:

If you're in most other sectors, right.

Speaker 2:

So it's time to start to own that, because otherwise you can't learn, you can't get the feedback. Those systems can also hold, you know, net promoter surveys or constituent feedback and constituent voice tools. So it's a container and a technology that will take any organization to the next level. However, what I would say is this: that alone is not the key. Another thing I'll share as an observation is that organizations, even when they have that data and have those systems, in my experience, when I approach and talk to them about what's possible with that, what they could do with data science and being able to automate rigorous evaluation and recommender engines and all that, they'll say that's nice to have, when we can get the money, when we can get the financing and funding to do it. First of all, on the funding: it is decreasing in terms of what it costs, but it is an investment, there's no doubt about it.

Speaker 2:

However, I think the reason there's still a lack of impact measurement, even though many more organizations have these administrative data systems, is that they've been using them for the purposes of communicating with their funders and saying how much they're doing.

Speaker 2:

They're just tracking and monitoring outputs, outputs, outputs.

Speaker 2:

However, we now have experience building algorithms that sit on top of those systems and deliver real-time predictions, recommendations, tools and resources to frontline workers on a case-by-case basis.

Speaker 2:

That is helping them to plan the work with their case, to see how they're progressing, to get real-time feedback, all of it anchored in traditional evaluation methods, but happening algorithmically. And what's happening is, once these individuals start using it to help them plan and engage better with the client sitting in front of them, the customer sitting in front of them, it's amazing how, all of a sudden, what felt like a system that was for accountability and a nuisance, for monitoring and tracking, to tell everybody else what they're doing and the value of what they're doing, all of a sudden they're now using it as a part of their program and as a part of what they do for people on a one-on-one basis. And what we're finding is that cements a whole different motivation. I always say, if you really want to get impact measurement, outcome measurement and impact learning happening in an organization, make the information and insights useful in real time to the front lines, and you will be able to achieve great things.
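The case-level feedback Peter describes can be loosely illustrated as scoring one open case against a model fitted on past closed cases and comparing candidate service plans. This sketch uses scikit-learn and entirely fabricated features and data; it is not BCT Partners' method, just a toy version of the idea.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data from closed cases:
# columns = [baseline_score, sessions_attended, family_involved (0/1)]
X_closed = np.array([[2.1, 10, 1], [3.0, 4, 0], [1.5, 12, 1],
                     [2.8, 6, 1], [1.9, 3, 0], [3.2, 9, 0]])
y_outcome = np.array([1, 0, 1, 1, 0, 0])   # 1 = outcome achieved

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_closed, y_outcome)

# Open case: compare candidate plans for the same person in real time
current_plan = np.array([[2.0, 5, 0]])     # fewer sessions, no family involvement
enhanced_plan = np.array([[2.0, 10, 1]])   # more dosage, family engaged

for name, plan in [("current plan", current_plan), ("enhanced plan", enhanced_plan)]:
    p = model.predict_proba(plan)[0, 1]
    print(f"{name}: predicted chance of achieving outcome = {p:.0%}")
```

In practice a usable model would need far more closed cases than shown here, which is exactly the threshold discussed next.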

Speaker 1:

How long do you need to be gathering data before you can start learning from it?

Speaker 2:

If you initiated your data gathering, then depending on how many people you serve, once you get to 200 to 250 closed cases, which may take you two or three years, you can start building tools and algorithms and automating some evaluation off of that beginning data, and you can get started. If you're already an organization that has that, you have the data. And, by the way, I'm speaking here, and we'll get more into this later, especially on other episodes, about how we're combining data science and evaluation science, or social science techniques, to be able to do this work. But the technology, the machine learning, all of it is there. The point is that you just need to have enough closed cases to be able to learn, and you need about 250 to 300 closed cases.
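As a trivial illustration of that threshold, here is a sketch that checks whether enough closed cases have accumulated to start model-building. The 250-case cutoff follows Peter's rule of thumb above; the case log structure is hypothetical.

```python
from datetime import date

# Hypothetical case log: (case_id, closed_date or None if still open)
cases = [
    ("case-001", date(2022, 6, 30)),
    ("case-002", None),                 # still open, not usable for learning yet
    ("case-003", date(2023, 1, 15)),
    # ... many more rows in a real system
]

MIN_CLOSED_CASES = 250   # rough rule of thumb from the discussion above

closed = [c for c, closed_on in cases if closed_on is not None]
if len(closed) >= MIN_CLOSED_CASES:
    print(f"{len(closed)} closed cases: enough to start fitting models.")
else:
    print(f"Only {len(closed)} closed cases so far; "
          f"keep collecting until roughly {MIN_CLOSED_CASES}.")
```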

Speaker 1:

And when you say closed, I mean, I think in our case, where we have a school, typically a student could be there from the time they're three till the time they're 21, I assume, so it might be a couple of years, or two or three years. The other thing about what you're saying that strikes me is, when you talk about having that many cases, let's assume they're digitizing and starting from scratch, as many organizations unfortunately are. I would think that two- or three-year period would be the hardest period of time for this, because it's only after that that you're gonna see the results. So convincing people in that interim period may be a bit of a challenge. And it seems to me like one of the ways you could get past that challenge quicker is if you were able to collaborate with colleagues at other agencies, and then you'd have more, as you refer to them, closed cases sooner. So you'd get to that number sooner if you collaborated. Is that true?

Speaker 2:

That is true. So we're doing work right now, and again, when I say we, as I was saying on episode two, a lot of the work we're doing here at BCT Partners, we are working with organizations that provide foster care services in multiple states, that provide extended foster care for youth aging out of the foster care system, and inpatient and outpatient psychiatric services for kids. And, to your point, some of these organizations are more data-rich, if you will, and capable of doing this kind of work. But what they're also beginning to do is bring on other partners who adopt their system, including their tools and their algorithms, even though they're tuned to their setting.

Speaker 2:

There's a lot about that that they can leverage to get started, to be able to do what you're talking about, which I believe is the biggest motivating factor: real-time feedback to the frontline practitioner on every case they serve. Right, it's individualized, it's tailored. It's what we call precision modeling or precision analytics; it's an allusion to precision medicine. So once you get that kind of work started and practitioners use it, we have one inpatient setting where they're using it hundreds of times a month across different venues and floors for all the services that they provide. That allows for the kind of feedback I know I would have wanted when I was a social worker, because it could tell me, oh, we've made progress by trying x, y and z. Oh, we still haven't made progress on this, what else can I try? Oh, this is a case like so-and-so; I could go talk to another social worker who had a similar case they did well with, and learn from them. That kind of real-time feedback and learning, where you're helping somebody figure out how to do their job better and, most importantly, helping the beneficiary, your customer, your constituent, those kinds of feedback mechanisms, yes, it takes a little while to get to.

Speaker 2:

However, if we could collaborate and somebody in the field already has a beginning algorithm, if you will, or a beginning model that works, you then can start or seed a system that's new for you but automatically gets that feedback mechanism going, which is so critically important, because it really is that feedback loop.

Speaker 2:

That is where you see the actual benefit, and that's the key. And I wanna emphasize something here: we are talking again about data systems that measure context, program experiences, transactions and everything, and assess outcomes on a regular basis, and it doesn't have to be complicated or involve big-time follow-up and all that. It could be direct results. But with those key data points in your system, we can do a much more precise job of figuring out what works for whom through the use of some of the very current, contemporary data science algorithms and machine learning; there are some really great techniques. We do what's called precision causal modeling. We've been building ways to do that with significant funders out there, including government agencies, so the tools and technologies and techniques really are there. Again, coming back to what the small organizations can do: start by getting a system going, but also think about how you could collaborate with some of your peer organizations out there. Some of your peers might have more data ready. Join together, ask some funders to sponsor and support some preliminary algorithms, and everybody could start to benefit from it, and we could scale this.
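At its simplest, "what works for whom" can be illustrated by comparing outcome rates for a program element within context subgroups. The sketch below uses fabricated numbers and is only a toy version of the idea; real precision causal modeling, as Peter notes, uses far more careful methods to handle confounding.

```python
from collections import defaultdict

# Hypothetical closed-case rows: (context_subgroup, got_extra_coaching, outcome_achieved)
rows = [
    ("youth", True, 1), ("youth", True, 1), ("youth", False, 0), ("youth", False, 1),
    ("adult", True, 0), ("adult", True, 1), ("adult", False, 1), ("adult", False, 1),
]

# Tally outcomes by subgroup and by whether the program element was delivered
totals = defaultdict(lambda: [0, 0])   # (subgroup, element) -> [achieved, n]
for subgroup, element, outcome in rows:
    totals[(subgroup, element)][0] += outcome
    totals[(subgroup, element)][1] += 1

for subgroup in ("youth", "adult"):
    with_elem = totals[(subgroup, True)]
    without = totals[(subgroup, False)]
    diff = with_elem[0] / with_elem[1] - without[0] / without[1]
    print(f"{subgroup}: extra coaching associated with {diff:+.0%} change in outcome rate")
```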

Speaker 1:

And so, in terms of one of the solutions there, when you talk about foundations, from my experience foundations typically like it when they see collaboration between a variety of agencies, and they also like it when they see themselves in a role to kickstart something that could be innovative and transformative, like this.

Speaker 1:

And we talked earlier about the outcomes best being driven from the ground up, by beneficiaries and so forth. But there's also this notion of a hybrid collaboration, where the foundation can be a change agent for good to help the organizations get to where they need to go. I'm just somewhat mindful of the time, Peter, and I wanna make sure we cover anything further you wanted to go over. But I have to say, from this episode, and from when we started this thing, for goodness' sake, that I deeply believe, as I know you do, that impact measurement and the tools to help organizations become learning organizations and outcome-driven organizations are so potentially transformative, and could make the nonprofit sector so much more effective at changing lives for the better and communities for the better and serving people for the better. I really think this is a centerpiece of what needs to happen, for goodness' sake.

Speaker 2:

I agree. And I think, if I were to summarize some key points that we've been talking about, I would say that currently there's really not enough happening with measuring the kinds of outcomes that can give us feedback, especially for those that are the beneficiaries, customers, clients. And in order to do that, what I would say is this: we have to get started with organizations that provide direct services putting in these administrative data systems. To your point, let's start collaborating, and look to the funders who are willing, and encourage other funders, to fund collaboration to build systems and algorithms and tools that can really be put in place, which could take things to a whole new level. It will require some humility, because we have to make sure that the outcome measures that get decided upon are actually grounded in what's possible for the beneficiaries and for those providing the services. So we're gonna have to gain a little bit of humility about what those metrics are.

Speaker 2:

If you're funding a teacher professional development program, don't try to measure whether the kids' test scores improve a year later; you're not even intervening with the kids. Measure the changes in attitudes and knowledge, and some early implementation in the classroom of the lessons the teachers learned through the professional development experience, and stop there. Make sure you're measuring those outcomes. Make sure you're measuring the program experience: what did the teachers experience during the program? Ask how satisfied they were, yes, those net promoter questions, that's fine. But also measure how much they were there, what their participation rates were, what their perception of other key elements of the program was, and make sure you gather some information about who they are and where they come from before they enter your program.

Speaker 2:

If you do some of these basic things and get a system together, you will have the component parts. And then the key motivator to really get this going is that we have to design it for the practitioner and the beneficiary, the constituent. If they value the insights coming off of a good system like this, I guarantee you it will be cemented. It will become a part of program, as opposed to something that's imposed from outside. It actually has its benefits, and what will happen is there will be collateral benefits, meaning we'll actually have the data we need to more formally and rigorously evaluate. We can even use some of the current, modern techniques.

Speaker 2:

I'll talk in another episode about precision modeling and how causal modeling can be done so that it actually has some rigor to it, but at the same time, we're getting the motivation right. One of the biggest reasons right now that a lot of impact measurement isn't happening is because the financial transaction, the incentives and the motivation are just too externally driven. It's too costly, and we're not building, we're not funding, the capacity to make this a part of program. Really, if we spent more time there, and almost human-centered-designed this, and focused on the constituent, the beneficiary and the practitioner, it'd probably be amazing what we'd be able to accomplish. And if we did it collaboratively, I think we'd be in a very different place. Yeah.

Speaker 1:

I'm with you, and I think there still is a ways to go in getting a lot of people past the, I don't know, the not-full understanding, let's put it that way. So one of the things I think we're both hoping for from episodes like this one is more early adopters who really come to understand the transformative power of tools like this for our ability to serve those we care about. That's the real hope here: that we can help spark some of that motivation you're talking about, and more organizations will either come together, or act on their own behalf, depending on where they're at, to really explore this so that they can do so much more to help so many more people and, as we said from the get-go when we created this thing, so we can better fix a broken world. So I'll leave it there.

Speaker 2:

I like it. Well, this has been a really rich discussion. This wraps up our episode three, for goodness' sake. This is great, and these are the kinds of conversations that we're gonna continue to have, and so we look forward to seeing everybody on the upcoming episode four. So thanks, Ken, this was fun, this was good.

Speaker 1:

Well, thank you for all those pearls of wisdom, and I just wanna conclude by being a nerd and a geek and just remind those of you who are at all afraid of going down this road that, in the words of a very wise sage, fear is the path to the dark side.

Speaker 2:

That's it. All right, thanks. See you later, love it. All right, take care, bye-bye.

Chapter Markers
Impact Measurement in Nonprofits
Measuring Outcomes in Nonprofit Organizations
Gathering Data on Program Outcomes
Impact Measurement and Data Systems Importance
Measuring Outcomes and Collaborative Solutions