The Nonprofit Fix
The Ratings Game: Exploring the Future of Data-Driven Equitable Charitable Giving
Episode 4 | November 17, 2023 | Pete York & Ken Berger

What if the agencies you trust to guide your charitable giving aren't as reliable as you thought? This is the provocative question that we, Ken Berger and Peter York, explore in our latest podcast episode. We examine the world of nonprofit rating agencies, scrutinizing their strengths, weaknesses, and the methodologies they use to evaluate nonprofits. From Charity Navigator to BBB Wise Giving Alliance, we discuss how these vitally important and well-intentioned agencies struggle to measure the true impact of the nonprofits they rate, and why.

We dive into the cloudy waters of financial analysis, transparency, and accountability. We discuss the misconceptions about the charitable sector, particularly surrounding the controversial topic of overhead costs. But it's not all doom and gloom; we also explore the potential for change. We envision a future where big data is harnessed to create more equitable and accurate benchmarks for nonprofits, transforming how donors make informed decisions about where their money goes.

Lastly, we examine the application of fairness in rating systems and funding challenges. We discuss the importance of accessibility and how new tools, such as BCT Partners' Equitable Impact Platform (EquIP), could revolutionize how donors access information about nonprofits. We delve into the significance of overhead costs in charitable giving, emphasizing the need for a new perspective that appreciates the value of investing in operational costs. Tune in to discover a new perspective on nonprofit rating agencies and join us in envisioning a future of informed and equitable charitable giving.

Transcript

Speaker 2:

Welcome to The Nonprofit Fix, a podcast about the nonprofit sector, where we talk openly and honestly about the many challenges that face the sector, where we will discuss current and future solutions to those challenges, where we explore how the nonprofit sector can have much more positive impact in the world; a podcast where we believe that once we fix the nonprofit sector, we can much more dramatically help to fix our broken world.

Speaker 1:

Welcome back. I'm Ken Berger and I'm joined by my wise co-host, Peter York. Today we have a thought-provoking topic near and dear to my heart to discuss. We're calling it the ratings game: how third-party rating agencies may be missing the mark. I'm sure many of you have heard of these rating agencies, but have you thought through whether they truly capture the essence of what's most important in the work of nonprofits? Today we will explore that question and much more.

Speaker 2:

Okay all right.

Speaker 1:

So first, I just want to name some names here of some of the larger agencies. We're not intending to critique any agency in particular; our discussion is probably going to be more general, although sometimes we might go a bit into it. But here are some of the more well-known and more frequently used agencies. First, of course, there's Charity Navigator. As you know, that's where I used to work for about seven years. They are known to be the largest of the rating agencies in the United States, and thereby probably in the world, and I think they currently rate upwards of 160,000 charities, which covers the vast majority of charities that are of a reasonable size. You can refer back to our earlier episode on size. Another one is the BBB Wise Giving Alliance. A third is Candid. And then there are a number of other smaller agencies doing this work: Giving Compass, CharityWatch, DonorsChoose, GreatNonprofits, GiveWell. But again, the largest three by far are Charity Navigator, the BBB and Candid.

Speaker 1:

So what is their purpose? What are they purporting to do? Well, fundamentally, most of them state as part of their purpose to assist donors in making their decisions about where they're going to donate their money. Some of them also want to encourage nonprofits to improve in the areas measured; the implication is that if you want to get more money from donors, you should do better on these measures. They try to drive more money to what they see as good nonprofits, and I think another purpose they strive for is to try to make funding of nonprofits more objective and rational. That's what they aspire to. How do they go about doing it?

Speaker 1:

Well, the vast majority of the emphasis in most of these rating agencies has to do with financial analysis, including measures like overhead, fundraising, salaries and program expenses. There's another area called transparency and accountability: things like what information you share on your website, how accessible it is to the public, and a host of other measures. Those are the most typical ones. In terms of where they typically get their information, I think there are basically two major sources, and then a few lesser ones. One is the 990. That's the informational report that every charity is required to file with the IRS on an annual basis. Some people call it the tax return, but for nonprofits it's an informational report because they don't pay taxes. And then the other, increasingly, is self-report by the charities themselves. Those are the two most typical places that they pull from. Some have other things, like surveys and getting audit documents and the like.

Speaker 2:

And I do think some of them do some research on the charities' websites.

Speaker 1:

Yeah, that's true, and culling from other outlets as well. That is certainly true, but in terms of the bulk of what typically is done, it's those areas. And how are they funded? Advertising, for example. Donations, too: if you go to Charity Navigator, very quickly (I'm partly responsible for this) there'll be a pop-up that says, did you know that only about 1% of you donate to us? So the rating agencies are funded by donations, grants from foundations and, in some cases, such as the BBB Wise Giving Alliance, fees that are paid by the charities themselves to be reviewed. So that's a quick overview of the universe, who they are and what they typically do.

Speaker 1:

So now that we've told you about that, let's talk about some of the problems that are out there when it comes to these organizations and some of the arguments that have been made regarding their work. A basic argument that has been made by a number of experts as well as a number of nonprofits is that these rating agencies do more harm than good; that, for a variety of reasons, they so miss the mark in terms of what matters most, and because they're measuring, let's say, secondary or tertiary measures, some make the argument that they do more harm than good. When I was at Charity Navigator I heard that a number of times. Another argument came from a fellow by the name of William Chandra, whom I debated a couple of times, who referred to Charity Navigator as the Death Star, which was a real insult to me as a Star Wars lover. But at any rate, he basically said that the way that you...

Speaker 2:

Well, wait a minute. You can be a Star Wars lover and still love the Death Star.

Speaker 1:

Yes, but I'm a Jedi, so it's hard for me, it's hard for me. But he basically made the argument that the way you can judge a charity's effectiveness, and he gave us an example, is the number of people in a waiting room waiting for services, and that that would be a better measure of success than all these rating agencies. But the more typical, standard critique is this basic notion, and I think there's a general consensus, I think this is just basic logic: what matters most is the impact or the outcome of the work of charities, and most of the measures out there don't really capture that information. As a result, we're, at best, working on secondary or tertiary measures, not what matters most. And so you can have a high rating because you're financially efficient, because you're transparent with information, but that doesn't mean that you're having a meaningful impact.

Speaker 2:

It just means that you are thrifty and that you are communicating well. So let me just ask, then, because a lot of this is based on the IRS 990. The metrics are very financial, although it's important to note that the IRS 990 does capture other factors, like governance; it's got questions about that.

Speaker 1:

And that's in the accountability and transparency measures, for example, at Charity Navigator.

Speaker 2:

Yes, yeah. So there's all of that data, but one of the things, and you may be getting there, is the question of what it is the ratings are emphasizing, given that they're so dependent upon the IRS 990 as their core data source. What's at the centerpiece of it? As you were saying, it's not outcomes.

Speaker 1:

It's financial performance, based on a number of financial metrics, plus whatever other information about how the organization is managed is provided. So basically, it's relaying whatever the IRS is asking for.

Speaker 2:

It's pulling that information, and so you need to be transparent about your finances. We need to know what's going on financially. We need to know who's on your board. We need to know that you're structured correctly, that you do have a board, that it's a functioning board. We need to know what your finances are to make sure that you're financially okay. But it doesn't say anything about the problem and how effective they are at addressing the problem. It doesn't say anything about their programmatic best practices. It doesn't say anything about their outcomes. It doesn't say anything about the community that they're actually serving.

Speaker 1:

Indeed, yes, indeed, you've nailed it. I mean, there's so much that's not there that really matters and is so important. So that's a top-level critique, and I think we can go further at some point with the numerous articles that we culled to take a look at this. Actually, I'm thinking out loud with you here, Peter. Maybe now's the point to do that. But before we go on, I'd also like to argue the other side, at least for a few minutes, in defense of the rating agencies. Or maybe we need to just take a look at some of the research that's out there.

Speaker 2:

Well, before your defense, I do think we should lead off with this, because we're about to go into some of the challenges and we're already beginning to hint at them. I do think it's really important for listeners to understand a little bit about the pre-charity-ratings era, the time period that existed before this happened, and the true benefit of these agencies, because on this podcast we talk a lot about being data driven and valuing being data driven.

Speaker 2:

Now, that unto itself is potentially going to be a point of contention with some of the folks in the nonprofit sector, because there have been arguments even against being so data driven.

Speaker 2:

But if we take data-drivenness, both the impetus behind it and the way it's unfolded, there's a lot of benefit and there are a lot of really important things that these charity rating agencies, and just the act of rating charities by using data, have brought to the field in a way that I do think is important to acknowledge. It definitely pivoted the world of funding and donating to nonprofits in a way that was moving the ball forward. It was progress, and I think it still provides value that is really important to the donor community. We're going to raise some stuff that needs to go to the next level, but would you at least acknowledge that there was a lot of advancement, that it continues to be something of value, and that there are certain things it's providing that are definitely beneficial and needed in the sector?

Speaker 1:

The short answer to your question is yes. Yes, I do think that there's value. I may have told this story in one of our earlier episodes, but I'll be brief, and I just want to start off my defense with the following situation. When I joined Charity Navigator and I started hearing these criticisms of the system, I was new to this whole rating business and I was quite open to the critiques and said, yeah, okay, well, we are perfectly willing to reevaluate and look at outcomes. It makes a lot of sense. I remember meeting with the founder of Charity Navigator and he basically said the same thing: yeah, okay, well, sure, if they've got the information, let's have at it. And so then I began to ask for that information, and that's when there was the following: it's called crickets.

Speaker 2:

Or radio silence in our terms.

Speaker 1:

So the thing was, it's like, okay, everybody, you're telling us to do it this way, but there's no "this way." So what are we supposed to do? Are we supposed to sit on our hands and measure nothing, because you're saying we're doing more harm than good? And I guess it's sort of my belief, and I think a lot of us, even a lot of the nonprofits, when we really had frank discussions with them, agreed that at least we know these organizations are managing their finances in a responsible fashion. At least we know that they're being open about how they're operating in terms of their governance and salaries and the like. At least it's the beginning of something. And so, yeah, we've got a long way to go. And I'll tell you this:

Speaker 1:

I've been away from Charity Navigator for about eight years or so, and I took a look at the current state of their rating methodology in preparation for this podcast, and I really have to say, if you Google it, there's an 80-page report that Charity Navigator has put out explaining their methodology. In the process of developing that more expanded set of metrics they're using, they created a panel of experts; they really consulted a lot of people on what other measures they could consider, and it's much more robust, much more multi-dimensional. Also, when it comes to the criticism of overhead, I still think there's a problem there, but whereas before they had this flat 10%, they have varied it to as much as 30%. Again, I'm not excusing that or saying that it's good enough, but they are trying, and they are even trying to begin to have measures of impact. But I will say this. One of the things that you can find on the Charity Navigator website, and this is Charity Navigator saying this, and I quote: currently few charities are calculating their cost per outcome.

Speaker 1:

Most organizations, especially those producing more nebulous outcomes (by that I would say outcomes that are more complicated to measure), are getting to the step before calculating the cost per outcome. In other words, an increasing number of charities are getting to the point where they're beginning to do monitoring and program evaluation. But in my experience in the trenches here, even that, the number of charities that are really doing program evaluation, is still in its early phases. The bottom line is this: we're still, I believe, years away from most of these rating agencies getting to that point, and partly it is because we are still probably years away from the point at which many charities are truly measuring their impact.

Speaker 2:

Yeah, and I'll go as far as to say that I truly believe that it's a very small percentage that are in fact measuring their outcomes and, by the way, it's even smaller if they're doing so on a regular, consistent basis.

Speaker 2:

Often only the top nonprofits can afford even a one-time evaluation of their outcomes, or to do this work at all, and very few of them have systematized it. There are studies that have been put out there. There was a survey conducted by the Urban Institute that found that 70% of nonprofits reported measuring their outcomes at least occasionally. The thing I challenge about even that finding is, first of all, those studies tend to be skewed toward organizations of a size and scale that would participate in these types of studies, which is not most nonprofits. And I think the key there is that they're doing it occasionally, and when you actually find out how they're doing it, you realize how non-scientific and non-rigorous even their outcome measurement is. When organizations self-report that they're measuring outcomes, oftentimes they would consider it an outcome that a person was served. So they don't even always make the distinction between an output and an outcome.

Speaker 1:

So I think that's really important. Along those lines, when I looked a little further at those nonprofits that have been identified by, let's say, Charity Navigator as measuring their outcomes or their impact, typically, first of all, it's more outputs than outcomes, though there are some organizations where that may be the right metric. If you're giving out a particular product, the number of water purifiers for example, that's an easy thing to measure, and you could make an argument that in some cases outputs and outcomes are somewhat synonymous. But when it comes to really having an impact on a child's life or an adult's life and making meaningful, lasting changes in their lives, which is the real bottom-line measure for so many of the direct service charities, I think we're still far, far away from that.

Speaker 2:

Yeah, I really think so too. And even some of these studies: I know there was a Center for Effective Philanthropy study that was talking about 40% of nonprofits conducting some kind of outcome measurement. Again, I think the devil's in the details in terms of what that actually means, how they define their outcomes, how consistent it is. I also think that a lot of the organizations being surveyed and studied skew large, and that's just because smaller organizations are hard to access; this is no fault of the folks doing these studies, but the reality is you're just not getting small and medium-sized organizations well represented in those samples. So, at the end of the day, the bottom line is we don't have outcome data. And when you actually do the work, and I myself am doing a lot of work with some of the larger organizations to help them do a better job with measurement of outcomes, even those that are rich in program data, it turns out that their outcome measurement work falls short, even for the big organizations.

Speaker 2:

Even when you're running a very large organization, I think it's very hard to measure these outcomes, even in your own administrative data systems and everything else. The organizations we're working with, and I know them very closely, are some of the largest organizations, well funded by philanthropy, government, et cetera. And when you look at true outcome measurement through the eyes of what an outcome really is, and I always wear the hat of an evaluator, I can honestly say, once you open the hood on the car, you begin to realize: oh no, there's a lot of exhaust, but this car's not moving very fast, if outcomes were the gasoline.

Speaker 1:

And I think where I had left it at Charity Navigator, and it looks like it's still pretty much there eight years later, was this: what we're looking for are organizations that are working toward measuring their outcomes, and even if their outcomes, once they begin to measure, are not great, the fact that they're beginning, and that they want to learn and improve at this stage, is probably what's most important. So, again, to expect gangbuster outcomes right out of the gate, whenever you start measuring, I don't think that's where we are as a sector. We need to at least start keeping track and trying to make those improvements. We need the data to tell us where we are. Where is our starting point?

Speaker 2:

And I'll make it very simple. When it comes down to it, and we're talking about outcomes and impact, the question is: who's gathering data on the actual clients, beneficiaries, recipients, and tracking them to find out if they actually are better off as a result of the services? The minute I say that, common sense should kick in to say that's not easy to do, because what is implied by outcomes is results that occur as a result of a program, which means we have to let the program end, or at least get enough of the program in, and then ask some very specific questions and try to gather the data as accurately as possible to find out if, in fact, each and every client, case or beneficiary served, however we frame it, actually changed from pre to post. And then, of course, eventually we need to figure out whether that change would have happened on its own or was actually caused by the program. So when we talk about outcomes, those are just some simple ingredients. We need to do it at an individual case level. We need to do it at a point where enough programming has occurred that we can see a result. And we need to be able to know how that result compares to others like them who didn't get the program, or to their own selves before they began the program or early on in it. If you just put those basic criteria in place, common sense but research-based, we begin to realize that no matter what the studies and surveys of self-reported information say about the outcomes happening out there, they're not out there.
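
To make that concrete, here is a minimal sketch, in Python, of the kind of case-level pre/post tracking with a comparison group that Peter describes. The field names and scores are hypothetical, not drawn from any actual program data.

```python
# Minimal sketch (not a production evaluation tool): case-level pre/post outcome
# tracking with a simple comparison group. All records below are hypothetical.
from statistics import mean

def avg_change(cases):
    """Average pre-to-post change across individual client records."""
    return mean(c["post_score"] - c["pre_score"] for c in cases)

served = [  # clients who received the program
    {"client_id": 1, "pre_score": 42, "post_score": 58},
    {"client_id": 2, "pre_score": 35, "post_score": 51},
]
comparison = [  # similar clients who did not receive the program
    {"client_id": 7, "pre_score": 40, "post_score": 44},
    {"client_id": 8, "pre_score": 37, "post_score": 39},
]

program_effect = avg_change(served) - avg_change(comparison)
print(f"Estimated program effect (pre/post change vs. comparison): {program_effect:.1f}")
```

The point is simply that outcome measurement requires individual-level before-and-after data plus some comparison, which is exactly the data collection most organizations are not resourced to do.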

Speaker 2:

And so these rating agencies, when they're looking at this stuff, are really inferring, using proxies, or guesstimating based on the literature, research and what I would call expert opinions as to what organizations are likely accomplishing if they implement programs using best practices. So it's not where we would hope to be at this point in the system. And it is not the rating agencies' fault; the data's just not there. I would argue it's not their fault because the onus of measuring outcomes is really on the organizations and their program administrative processes, implementation and systems; that's where the data has to happen. And nobody is funding nonprofits to do that work, which the evaluation community, philanthropy and even the federal government have estimated really would require an investment of 5% to 10% of what we spend on programs. So it's not there.

Speaker 2:

That's a long-winded way, I know this is about the rating agencies, but it's a long-winded way of also saying, and you and I might have an interesting discussion about this, that I don't think it's the rating agencies' fault, nor do I think they hold the key to solving this.

Speaker 2:

It's actually funders and donors, and this gets to something else baked into the rating agencies, who won't tolerate investments in anything but direct programming, and they don't include things like measuring outcomes as part of programming; they call it overhead. And now we get into this discussion of, well, part of the problem is that if we don't fund overhead, if we don't give organizations more money for overhead or indirect costs, if we don't give them more unrestricted funding, how are we ever going to expect them to invest? And yes, it costs money to measure and to do these kinds of things, particularly outcomes, if we're ever going to get where we need to go. So I just unleashed a lot, but in thinking about the rating agencies, I just think it's really important to humbly put them in a place that's important, but also to appreciate the limits of their role in all of this. If everybody collected outcome data and resourced themselves to do so, I guarantee the rating agencies would leverage it. It's just not there, because we don't fund it.

Speaker 1:

So I want to jump off your point about it being no fault of the rating agencies, and I was defending them as well, but I do want to circle back and dig a little deeper on the places where I do think they're at fault and where they do, at times, really miss the mark, in the sense that they either go deep in a one-dimensional way, unlike some of the others, or they have other conflicts. For example, and I'm not gonna name names of agencies, but I'll give you some examples of things that I've seen out there: there's at least one, maybe two, of these agencies where their ratings are entirely based upon customer satisfaction surveys. It's like Google, where anybody can give the organization a rating, and the metric is entirely driven by that. And since it's based on self-report, it could be somebody who was served, it could be somebody who's a donor, it could be somebody who is a disgruntled employee, whatever it might be. That is the one metric that they use.

Speaker 1:

In another instance, when you talk about data, there are some of these agencies that will give you a rating based on what I would call a data dump. In other words, the more information the nonprofit feeds into their system, the higher the rating goes. So it's self-reporting, and you're being rewarded for self-reporting whatever you like. And by the way, speaking of self-report, I can tell you just from my observations at Charity Navigator, when it comes to the IRS 990: because Charity Navigator is looking at overhead, and at the way your expenses are broken out into different buckets, there's a gray area. If you've got a CEO that's spending time in the programs, how do you allocate salaries between the management line and the program line and the fundraising line?

Speaker 2:

Let me pause you for a second, because I think we need to explain something about the form that maybe others don't understand. In the IRS tax form, for every expenditure that's made, every expense line in the form, there are three categories. You can take an expense, like the expense of a CEO, and allocate it, apportion that expense, across program, fundraising and, I think the term is, management and general.

Speaker 2:

So essentially you're taking the whole pie of that expense, say a CEO's cost, and you're basically saying 80% is for program. Now, for a CEO, that might not be the case; it might be more like 20% is program. So whatever their salary is, you would take their salary and benefits and fringe and all of that, take 20% and call it program, maybe 60% and call it management, and then take the remaining 20% and call it fundraising, if they're doing a lot of fundraising as part of their role and function. So every expense line in the IRS tax form gets broken out. When we talk about overhead, or the indirect rate, we're talking about the non-programmatic portion, as in management and fundraising. That's what we're referencing. I just wanna make sure everybody knows what we're talking about.

Speaker 1:

Yeah, so the three buckets are program costs, fundraising costs, and then management and general costs, and overhead is generally defined as the fundraising plus the management and general costs. Point being, you can game that system, there are gray lines between the buckets, and so agencies will play that up, which can give you a false picture.
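
To illustrate the allocation Ken and Peter are describing, here is a minimal sketch of how a Form 990-style functional expense allocation and an overhead rate could be computed. The expense figures and percentage splits are hypothetical, and the point is simply that shifting the splits changes the overhead rate.

```python
# Hypothetical functional expense allocation across the three Form 990 buckets.
expenses = [
    # (description, total cost, program share, management share, fundraising share)
    ("CEO salary and benefits", 150_000, 0.20, 0.60, 0.20),
    ("Direct program staff",    400_000, 1.00, 0.00, 0.00),
    ("Rent and utilities",       60_000, 0.70, 0.20, 0.10),
]

totals = {"program": 0.0, "management": 0.0, "fundraising": 0.0}
for _, cost, prog, mgmt, fund in expenses:
    assert abs(prog + mgmt + fund - 1.0) < 1e-9  # shares must sum to 100%
    totals["program"] += cost * prog
    totals["management"] += cost * mgmt
    totals["fundraising"] += cost * fund

# Overhead is the non-program share: management plus fundraising.
overhead_rate = (totals["management"] + totals["fundraising"]) / sum(totals.values())
print(totals)
print(f"overhead rate = {overhead_rate:.1%}")  # reallocating the CEO split moves this number
```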

Speaker 2:

Well, everyone knows, when you're reporting this data, that this is an area where you're less likely to get an IRS audit or a request for an audit. If you don't split that up perfectly, that's not where you're gonna get audited; you're gonna get audited on bottom-line numbers. So, at the end of the day, this is an area that's a little fuzzy, and, by the way, most nonprofits know that it looks better to make sure your program percentage is significant, 80%, 85%. So we know that pie slicing is definitely more discretionary, and it's not like that motivation came from nowhere, because, by the way, it's not just the rating agencies that set that bar. The problem goes back to an ethos from the origins of the charitable sector, the nonprofit sector. If you talk to people, just the general public, they'll tell you: I write a check to you, I wanna know that my dollars all go towards the programs. They have no concept of what that means, but they go: no, I don't want you paying for CEO salaries; no, I don't want you paying for the building; no, I don't want you paying for office supplies; no, you can't pay for your phones. None of that. Well then, how are we supposed to deliver the services?

Speaker 2:

But the truth of the matter is, this is still very much a part of what the general public perceives the nonprofit sector, quote, should be: it should be a true charity, it should be volunteer-run, everybody should just be giving. If I give you money, it should go directly to buying food for, you know, your food pantry. None of it should be going to staff or overhead or office supplies or technology or the wifi that you need to communicate with the world. Oh, you need a website? No, no, no, no, I don't want my money going towards that. So I just wanna note, and I really believe we can debate this, that the rating agencies did not cause the overhead problem, the overhead emphasis. They magnified it, but it was always there. It's always been part of the ethos, especially in the United States, and I do believe some element of this has to do with its origins, understandable and good origins, of, like, we do good just because it's good to do good.

Speaker 1:

Yeah, and when it comes to overhead and what an appropriate percentage is, I think you indicated that you've looked at some research showing that on average it should be much higher than, let's say, the 10% that at one time was the metric at Charity Navigator. Now I see, looking at their methodology, that there's a range depending on the type of organization, and even then that's caused some real frustration for some.

Speaker 2:

Well, I'm also gonna come back to something you were just talking about, which is some of the challenges being perpetuated by the rating agencies and how they rate organizations, and now I can go, if desired, a little bit into a critique of how this happens. Part of the problem, and it's a really important part of this, is: who decides which measures of finance matter? There are many, many measures, many data points, in the IRS 990 form: all kinds of expenses, revenues, debts, assets, debt-to-asset ratios, cash, all kinds of stuff. You look at all these different financial metrics, and you can ask the question: who decides which of those measures, which of those data points, are important at all? And then how important, meaning, how do you weight each of them?

Speaker 1:

Can I?

Speaker 1:

Let me just, yeah, let me tell you what I think has typically occurred in answer to that, and then I'd like to hear from you, because I think you can help tease out what I think is a better way.

Speaker 1:

So let me just say that, when it comes to these rating agencies, it ranges. At one end of the spectrum, it's the founder of the organization or the CEO of that rating agency making up their own mind, so it's one person, or a couple of people, you know, who are hired to come up with it.

Speaker 1:

I think the best-case scenarios that you typically see out there are the impaneling of a group of experts who make a judgment, based upon their years of experience, about what they think are the measures that matter most and how they should be rated. So at least it's a bit more participatory; it has not a little but a lot more input than the one- or two-person founder scenario. But even then it's a very subjective judgment by a group of people who may have a particular slant, an angle and an attitude. And if it's a bunch of nonprofit experts, once again there's an inclination, I think, to want to make the metrics as easy as possible for them to do well on, and that's a natural inclination. Back to you.

Speaker 2:

Yeah, and while I agree, I think that if you have a group of experts, that's better. It's like a scientific panel, right? You're going to be able to look at stuff, and from that standpoint, I think that's very helpful and a step in the right direction. However, what I would argue, and I think there's plenty of research showing this, is that even experts have their biases, even a group of experts. And this is the world that I've now entered, and I'll be talking about it directly through some of the work that I'm doing here at BCT with the Equitable Impact Platform, where we are leveraging IRS 990 data and American Community Survey data in a big data platform to look at this. What I would tell you is this: it is now time, and we have the data and the technologies and techniques, including machine learning and AI, to take this to the next level. And there's one thing I've always suspected but we couldn't answer before this big data world rapidly awoke.

Speaker 2:

I had always wondered, when we were talking about a lot of these ratings: when somebody says, what's the right overhead, let's take overhead as an example, what's the right indirect rate, what proportion should be going to program? I always had a feeling, just common-sense-wise, as a former nonprofit practitioner working in nonprofit organizations and somebody who has actually researched and studied investments in nonprofit capacity building, that the answer to the question "what's the right overhead?" is: it depends. It depends on what it is you do. I guarantee you that a large university, as a nonprofit, is not going to have the same overhead needs as a small grassroots arts organization putting on local shows, or an organization doing youth development after-school programming. A hospital or healthcare system is going to have a very different need for how they balance their management, fundraising and programmatic dollars. It's ludicrous to say you can come up with one size fits all; it doesn't matter whether you have a group of experts or not. At one point that didn't matter, it was the best we could do, but now we're in a different world. We're in a big data world. This is the world I've been in for over five years, working with IRS data as well as other data, and we're taking a different approach.

Speaker 2:

We were involved in a study that was basically seeking to understand what the overhead should be. Part of the problem is, oftentimes we base it on the idea that there's some standard, an average standard, and there isn't. And if we really think about it, we have an opportunity: what if we could first take all these hundreds of thousands of nonprofits and segment them into different groups? We put together similar organizations doing the same types of work, meaning similar programming, who also exist in a similar community socioeconomically, and a similar community in terms of how much philanthropic and government resources are available to them. Say we find 250 to 300 organizations that are closely matched on what they do, their size and scale, and the communities they serve. Now imagine we could match them all and then look longitudinally at their history and ask another question that nobody's asked. Oftentimes we're trying to determine what's the right rate, and we haven't asked: for what? If we're supposed to have a certain percentage going to program, then for what? What are we predicting? So there was one project where we actually started to look at this, to help a philanthropy figure out the right policy for their overhead spending. What we did was first match organizations, and then anchor this in an analysis that asked: what's the right indirect rate that predicts financial health? And we used liquid unrestricted net assets as the financial health measure.

Speaker 2:

Okay, and this is a study that was actually referenced in the Chronicle of Philanthropy. It was for the MacArthur Foundation; it's in the Chronicle of Philanthropy, an article that's available. And then there was another paper produced by BDO. We were actually doing this work for what was at the time FMA, Fiscal Management Associates, which is now part of BDO. We'll put those in the show notes so you can see the methodology and the references.

Speaker 2:

But essentially what we did was say: let's match organizations on who they are and who they serve, and then see what levels, what percentage, what's the ideal range of indirect spending that predicted good liquid unrestricted net assets. There's a more precise formula that you can look at the article for, but in very simple terms, think of it as having enough in the bank, two or three months of liquid cash assets, that you could survive on if something happened, if a major funder went away, anything else.

Speaker 2:

So it's really about that, and by the way, that's a very central metric of financial health; there are a number of variables that go into it, but essentially it's just knowing how financially sustainable or stable you are. With that metric to predict to, what we started to learn, and it's kind of like precision medicine, is that once you start matching, it becomes more fair. Part of the problem with a lot of the rating agencies that use a standard, and there's a great book by Todd Rose called The End of Average, is that when you use averages you're not respecting all these differences we're talking about. But now we have hundreds of thousands of cases of data, and it's exhaustive because everybody has to file electronically, and we have the capability of studying it rapidly. So instead: let's find matched comparison groups, similar organizations like you, in similar communities like yours, and we can tell you what's the range that gets you to a healthy LUNA, a LUNA that predicts future healthy finances.

Speaker 2:

And when we did that, and I think we mentioned this on another show, we came to realize that the answer to the overhead question is: it depends. So what we should be doing first is developing and delivering tools not based on a standard, but that say: okay, who are you, what type of community do you serve, how large are you? If we know these things, we can say, okay, for your type of organization in your type of setting, you're really gonna need 29%, 30%, 35% for what you do. Some places, like large healthcare systems, might need as much as 50% or 55% overhead in order to be able to survive. The point is, we found a range that was actually quite wide and would surprise people as to how much they really need.
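
As a rough illustration of the "it depends" benchmarking Peter describes, here is a simplified sketch: restrict to a matched peer group, then look at which overhead rates were followed, a few years later, by healthy liquid unrestricted net assets (LUNA). The field names, thresholds and figures are illustrative assumptions, not the actual EquIP or MacArthur-study implementation.

```python
# Simplified, hypothetical sketch of peer-based overhead benchmarking against LUNA.
def healthy_luna(org, months_required=3):
    """Healthy if future liquid unrestricted net assets cover N months of expenses."""
    monthly_spend = org["annual_expenses"] / 12
    return org["luna_future"] / monthly_spend >= months_required

def overhead_range_for_peers(peers):
    """Range of overhead rates observed among peers that later stayed financially healthy."""
    healthy_rates = sorted(o["overhead_rate"] for o in peers if healthy_luna(o))
    return (healthy_rates[0], healthy_rates[-1]) if healthy_rates else None

peers = [  # already matched on subsector, size, and community characteristics (made-up numbers)
    {"overhead_rate": 0.22, "annual_expenses": 900_000, "luna_future": 260_000},
    {"overhead_rate": 0.31, "annual_expenses": 850_000, "luna_future": 310_000},
    {"overhead_rate": 0.12, "annual_expenses": 880_000, "luna_future": 90_000},
]
print(overhead_range_for_peers(peers))  # e.g. (0.22, 0.31): the healthy range for this peer group
```

The design point is that the benchmark falls out of the peer group's own data rather than being set as one fixed percentage for everyone.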

Speaker 1:

Just a quick comment, which is that when I was looking at the current state of ratings at Charity Navigator, there definitely is an attempt to break out and vary the overhead benchmarks based upon the type of charity. But there are two or three things that you've mentioned that are different here. One is that you're also looking at the community and the socioeconomics of that community. You're also looking at the size of the organizations. You're looking at a bunch of other variables. And, most importantly of all, I think from what you're saying, you're looking at what the data tells you: not what an expert's opinion is, but what the objective data is telling you about what matters and where the benchmark should be.

Speaker 2:

Yeah, and we're not imposing; we're not doing a lot of manipulating of the data so that we can make it comport with what experts say, with some standard.

Speaker 2:

We're literally letting the patterns in the longitudinal data show us. If we look today at your data, and because we have the historical data we know what organizations like you looked like four years later, we can basically project. We can say: okay, what's the overhead that, three or four years later, predicts that you're gonna be financially healthy? Then we can find that range, and we can do it for your group specifically. And when I said 250 to 350 organizations, think about it this way: we're talking about, especially for mid-sized and large organizations, almost half a million organizations in total, and from those you narrow down to the best 250 matches. So let me say a little bit more about the matching.

Speaker 2:

With the big data that we have, we can now match on numerous variables. We can match your organization based on what you do: youth development programming, arts and humanities, arts and culture, whatever. You can also do it based on the size of your organization. But we also match on other factors. When I was talking about community factors, think about them like this. We can match on what we call community wellbeing, a measure called the area deprivation index, a social determinant of health, which has 17 indicators: socioeconomics, poverty, overcrowding, the proportion of the population with less than a ninth-grade education, and a number of other factors. So we're matching your nonprofit to similar nonprofits that also serve very similar communities. And because we have all this data together, we also match on funder data: the private foundation data, the community foundation data, where the government funding flows. We know, for the community you're in, how much your organization comparatively has access to in the way of government funding, private philanthropy, community foundation funding, donor contributions and other types of funding. We know that from the IRS data.
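
As a hypothetical sketch of the matching step itself, nearest-neighbor matching on a few standardized features (budget size, area deprivation, local giving context) might look something like the code below. The feature list and numbers are made up; a real system would use many more variables and more rigorous matching methods.

```python
import numpy as np

def nearest_peers(target, candidates, k=250):
    """Return indices of the k candidates closest to target in standardized feature space."""
    X = np.asarray(candidates, dtype=float)
    mu, sigma = X.mean(axis=0), X.std(axis=0) + 1e-9  # z-score so budget dollars don't dominate
    Xz = (X - mu) / sigma
    tz = (np.asarray(target, dtype=float) - mu) / sigma
    dist = np.linalg.norm(Xz - tz, axis=1)
    return np.argsort(dist)[:k]

# hypothetical features per organization:
# [annual budget ($), area deprivation index, local philanthropy $ per capita]
candidates = [
    [500_000, 82, 14.0],
    [520_000, 79, 12.5],
    [9_000_000, 20, 60.0],
]
target = [510_000, 80, 13.0]
print(nearest_peers(target, candidates, k=2))  # picks the two small, high-need, low-philanthropy peers
```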

Speaker 2:

And so now we can also match you on that, because if I were a nonprofit, I'd say: if you're gonna compare me, compare me against other youth development organizations; I'm a youth development organization with half a million dollars in budget, in a community of a certain size. And, by the way, if you're gonna compare me and benchmark me and rate me, I would want you to rate me against organizations that are in the same type of what I would call giving context. Like, we don't have a lot of private philanthropy where my community is in Mississippi; we don't have a lot of community foundations, maybe we don't have any that we have access to. So don't compare me, in terms of overhead or anything else, against others that do.

Speaker 2:

And my point is, what's so exciting about the world we're in right now is that we now know we can get that precise, and I'm telling you, at BCT Partners, with the Equitable Impact Platform, we are. That's a matter of being equitable and fair, because we can now stop the problem of basically standardizing everything. And what does standardization do? People don't realize this: standardization fits everything into a bell curve. What that means is the tails of the bell curve get ignored. We always pay attention to the average, what's the average indirect rate, but the problem is there's so much struggle to survive in the tails, especially the tail that gets left behind, that if the standard is not appropriate for those organizations, not only might it not help them, it will actually be detrimental.

Speaker 2:

So we are now in a place, with this data and everything else, where context matters and we can bring context into this. If we've got half a million organizations, I can match you with the best 250 and then study what the right balance is; I can look at all the variables and use machine learning to really train and learn what the right mix of revenue and expenditures is that's gonna predict some outcome, or some result, I should say. Now we're in a whole different world, and by the way, this is the world we've been in with the work I've been doing for the past five years, and we're just getting ready to go full on in terms of presenting it. The point being, we now know we can surpass this idea of rating by standards that may not be fair to those that are basically being left behind, who don't show up in the averages because they're being left behind, or being misunderstood, misclassified, benchmarked unfairly against the wrong types of organizations. We'll talk later about how this plays out even from the standpoint of race, culture and ethnicity, with respect to communities as well, and how that plays into all the inequities here.

Speaker 2:

But at the end of the day, the beauty of the big data world we now exist in is that everything is electronic. We now have a census of Form 990 and 990-EZ filers: any organization with greater than $50,000 in receipts has to report electronically, so the data's all there. It takes some work to figure out how to use it and make it work, but this is where we're capable of going, and in my mind, through an equity lens, a fairness lens, not only can we go here, we should be mandated to, because it's unfair any other way. It was fine, it worked in the past, but it's time to recognize, because we now can do better, that we should make this shift.

Speaker 1:

In preparation for this, I was looking up an effort that was undertaken by Charity Navigator, the BBB Wise Giving Alliance and what was then GuideStar and is now subsumed within Candid (the Foundation Center also became Candid): the letter that we put together called the Overhead Myth. It turns out that letter was just 10 years ago, in June of 2013. I was looking it over and thinking to myself, it's almost time for a new letter, and maybe I might even call it the Standardization Myth or something like that, if we could get to a point where the sector is sophisticated enough to realize that we have tools out there and we're shooting ourselves in the foot. We're not just missing the mark by measuring the things that don't matter most; even for those things that do matter, the way that we go about making and calculating those measures is so important. So I'm completely with you.

Speaker 2:

It is. And the other thing I dropped in here along those lines is our ability to disaggregate, to respectfully understand and meet organizations and communities where they are, and that goes for people individually: meet them where they are. We're gonna get better results because we can contextualize what's gonna be best for you. We all know one size doesn't fit all, and so one element I just described was this disaggregation. The other thing that's really important, which we don't do, is causal modeling, predictive causal modeling. What do I mean by that? Well, there are natural experiments. The first thing I do when I match communities on similar characteristics and organizations on similar characteristics is what the research community calls controlling for selection bias, because there is a bias, the standardization bias. Part of the problem is that if we set the standard at the average, we assume we'll move the needle, but that's assuming everything basically looks like the average. The weird thing about averages, and please do read Todd Rose's book, The End of Average, is that the average doesn't exist. It's a figment of our imagination; you'll never find anything that is average on all the dimensions we're measuring. But as you start to look at this stuff, the other element that is really important, and I'm trying to figure out how to talk about it in a way that doesn't sound too data geeky, is this causal predictive modeling.

Speaker 2:

Right now, we are literally rating just based on whatever that factor is, and we're saying, here's the threshold, you shouldn't have more than this percentage in overhead, and we're not basing it on anything. Based on what? Is it predictive of something? A good way of thinking about this is that we really aren't using data to answer the question: why is this threshold important? It's important to accomplish some end, some future state, so we need to be able to predict it. And we need to do so with a causal modeling approach, since correlational prediction is a big, huge problem; that's for a session later. But from the standpoint of being able to predict some future, none of the rating agencies, as far as I can tell, are currently doing predictive modeling. I don't know; they might be, behind the scenes, and perhaps somebody will respond accordingly, but go ahead.

Speaker 2:

But really we should be using this longitudinally and predictively, because if we're going to be rating organizations, it should be based on this disaggregation and matching, which is research-based and important for causal, or at least quasi-experimental, approaches. We should do that matching, and then we should also predict against some outcome, like we did with LUNA. Now, in the work we're doing here at BCT with the Equitable Impact Platform, or EquIP, we're actually modeling against certain factors, certain outcomes that pertain to contributing to community improvement over time, and I'll talk about that later. At the end of the day, if we can anchor it in predictive modeling, we are no longer just saying: why? Well, because the experts said so, or because our values tell us we want 90% to go to the programs. That's all great, but there's no predictive value. One of the interesting tests I would love to see is just how predictive these rating agencies' ratings actually are.

Speaker 2:

And if they're not predictive of some future state, if the accuracy of their ratings is not such that we can be better than 75% to 80% accurate at predicting whether an organization is going to be financially healthy, or hit other key indicators of success, three or four years later, then we have to ask ourselves: what's the point? Again, with machine learning and AI and the big data tools we have now, there's no reason we cannot anchor on predictive value. So, in addition to matching and disaggregating out of fairness, the second component that's missing in a lot of the rating agencies' work is anchoring this in predictive modeling. That's what we did in the overhead study we contributed to, referenced in the MacArthur Foundation and Chronicle of Philanthropy piece, and it's baked into all that we do with the big data platform we built that combines IRS 990 and American Community Survey data down to the census level.
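
As a hedged sketch of the predictive test Peter proposes, one could train a model on year-t metrics and check whether it predicts financial health three or four years later with better than roughly 75% to 80% out-of-sample accuracy. The data below is randomly generated and the columns are made up; it only shows the shape of such a test, not EquIP's actual modeling, which involves causal matching and longitudinal data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000
# hypothetical year-t features: overhead rate, rating score (0-4), log10 annual budget
X = np.column_stack([
    rng.uniform(0.05, 0.60, n),
    rng.uniform(0.0, 4.0, n),
    rng.normal(6.0, 1.0, n),
])
# hypothetical label: financially healthy (e.g., 3+ months of LUNA) at year t+3;
# here it loosely depends on the overhead rate plus noise, purely for illustration
y = ((X[:, 0] > 0.15) & (rng.uniform(0, 1, n) > 0.2)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
print(f"Out-of-sample accuracy: {acc:.0%} (the bar suggested above: roughly 75-80%)")
```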

Speaker 1:

I think the only data I'm aware of in terms of the impact of the rating agencies is how they might affect donor behavior: if a charity gets a higher rating, it drives a bit more money in that direction, and if it gets a low rating, it can reduce it.

Speaker 1:

But even then, I don't know that it's significant in amount.

Speaker 1:

I know that when I was at Charity Navigator, we referred to most of the people that came to the site as what I called drive-by donors, where they would spend maybe 60 seconds because they had a charity in mind.

Speaker 1:

They looked for the rating to validate what they were intending to do, and the only time they might change their behavior is if it was a lower score than they had expected, and even then the evidence as to how much of a change it would be was not necessarily that significant. So the very impact of these rating organizations is also in question, how much of an impact they have. And maybe that's not such a bad thing at this point, because until we get this more nuanced approach, I think that when it comes to the extreme outliers, some fraudulent organizations or some whose metrics are way off at the far end of the spectrum, the ratings can be helpful, but for the garden-variety, typical nonprofit, they may have only a nominal amount of impact, especially given this more nuanced realization of how we should be going about doing things.

Speaker 2:

This brings me to a third point, the other thing you were talking about: the donors, the drive-by donors who only take 60 seconds to make a decision. We just have to accept a certain reality: people are busy, things happen, there's certain timing for when we donate, and thank goodness there is a tax benefit, or where would the sector be? We understand; we wish they took a fuller look.

Speaker 1:

There is some mixed data on that whole question of taxes and how they impact charitable giving, but we won't go there. Yeah, that's a bigger discussion.

Speaker 2:

But the third piece I want to mention: I said we're not disaggregating, we're going with standards. That's problem one with respect to the rating agencies. Problem two is that they're not really using data analytics, AI, and machine learning to do causal, predictive modeling, so that whatever factors we're judging on, we establish what works, or what the thresholds are, based on the patterns in the data and what they predict. Instead, we're just inserting our standards there. So those are two elements. But there's a third element that I think is a really big challenge, and it's the one we at BCT are really focused on: the issue of equity. The third challenge is that we aren't asking an important question. We often say, what's your cause area? Let us show you the organizations that are big and awesome in your cause area. So my cause area is homelessness, up comes the top 10 homeless organizations in the country, and then I write my check. But what if those top 10 organizations aren't accessible to the communities that have the greatest needs? Services, direct services especially, for vulnerable populations are accessed locally. So there's a fundamental question that is completely missing: what is the need in the community for this organization? If you have a big organization that's grown a lot, we assume it's addressing a problem, but if it's actually located in a place that isn't predominantly a community of need, is it as wise to give to that organization as to an organization half its size that is actually accessible, and accessible precisely to communities of need? And add to that, when there are communities of need, we also know from our data that there are inequities within them: communities of color, in particular Black, Latino, and Hispanic communities that are proportionately larger than in the counties they sit in, often have nonprofits that actually receive the least funding.

Speaker 2:

There's a study we did with the Program to Aid Citizen Enterprise in Pittsburgh, where we studied the entire region and really looked at these issues: do the communities that are most disadvantaged, and that, by the way, are also disproportionately people of color compared to the counties they sit in, have access to nonprofit service providers that are well-funded, strong and healthy? It turns out they do not. There's an inequity. Disadvantaged communities are often being served by organizations that don't have the resources they need to be financially healthy. We found that across a 10-county region, and you can look that study up; it's from the Program to Aid Citizen Enterprise and we'll include it in the show notes.

Speaker 2:

But what that means is this: the third component that is often not looked at is what we should be asking as donors. If I'm going to donate to a nonprofit, first, what's the cause I care about? Where is the need for that cause, where is there a real homelessness problem? Find those communities, and then ask which nonprofits serve them. Now we're anchored in the place and the community first, and then we can also examine, with the data we have, whether there are communities, for example communities of color, that are being neglected even more because of systemic issues.
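The community-first ordering described here could, in principle, be sketched as a simple two-step query over the kind of ACS and IRS 990 data mentioned earlier: find the high-need places first, then the nonprofits that serve them. The files and column names below are hypothetical stand-ins, not EquIP's actual schema.

```python
# Minimal sketch of a community-first search: start from where the need is,
# then find the nonprofits that serve those places, rather than starting
# from the biggest organizations. Data files and columns are hypothetical.
import pandas as pd

need = pd.read_csv("tract_need.csv")               # tract_id, county, homelessness_need_index
orgs = pd.read_csv("nonprofit_service_areas.csv")  # ein, name, cause_area, tract_id

# Step 1: find the communities with the greatest need for the chosen cause.
top_tracts = need.nlargest(50, "homelessness_need_index")["tract_id"]

# Step 2: only then ask which nonprofits actually serve those communities.
serving = orgs[(orgs["cause_area"] == "homelessness") &
               (orgs["tract_id"].isin(top_tracts))]

# Step 3: rank candidates by how many high-need tracts they reach.
candidates = (serving.groupby(["ein", "name"])["tract_id"]
              .nunique()
              .sort_values(ascending=False))
print(candidates.head(10))
```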

Speaker 1:

Yes, but as it stands today, the average donor (I'm not talking about a large foundation) typically doesn't have readily available tools to get that information, or it would take a tremendous amount of research for them to get to that point. And that's why we need these rating agencies, or perhaps some other new tools like EquIP. We need a different kind of tool to make it possible for donors to get that information.

Speaker 2:

Yeah, I think we do, and that's where the work we're doing here at BCT comes in. We're not trying to turn this podcast into a commercial, but I do want to highlight, because I think it's amazing, what our team here at BCT Partners, whose focus really is centered on equity, has built over the past five years. We built the Equitable Impact Platform with the purpose of doing exactly what we're describing, which is, because you point out those 60 seconds, a simple way to say: what's the cause area, where do you want to fund? We have all the geospatial pieces there. Then you can ask, OK, which nonprofits serve those communities? And then we can rate the organizations that serve and are accessible to those communities, and give you a fair understanding of their capacity, if you will, or their financial well-being, predictively: which of these organizations are pretty strong. And if none of them are...

Speaker 2:

Here's the thing: one way we're going to eventually solve a lot of the inequities we have is for nonprofits that are serving communities with the greatest need and the greatest disparities to be able to get the resources. But let's say you find these communities and go, wow, most of the organizations here are not doing that well. Your answer shouldn't be "don't fund them," which is kind of the mantra of the more standardized rating approach: oh no, they've got a low Charity Navigator rating, or they've got a low rating from the BBB, or whatever the case may be. The problem is, if they're the only game in town, so to speak, if theirs are the only accessible services, then shame on all of us for not funding them to get them stronger. And part of the reason they're challenged or weaker is that they're being neglected, unfairly and inequitably.

Speaker 2:

And so we now have the tools, the capabilities, the analytics, all of that, and yes, we do have a tool at BCT Partners where you can also see how this works. But just in principle, we can do this now with the data. My point being that with the three things I just described, the need for disaggregation, causal predictive modeling, and a community-first orientation to where to give, we should be able to change the trajectory of what we're talking about and, I think, address a lot of the pain points and complaints that many in the nonprofit and philanthropic sector have made about the rating agencies. And again, we so appreciate what they've been able to do, and we always value and applaud being data-driven. It's just that now we can take things to another level, and in this world where we're trying to make things more equitable, we have proof that this is not just a theoretical solution. We have real solutions that show, no, you really can do this.

Speaker 1:

I just have two closing thoughts, because I'm mindful of the time. One is, in that scenario of an organization that may not have as high a rating on whatever the metric is, but is really serving the community of greatest need, I think it makes a lot of sense to support that organization. At the same time, along the lines of something you've indicated at a number of points, I think we as donors need to increasingly indicate, and I'm throwing this out to you, that at least 5 or 10% of the funds we're donating should be dedicated to data analytics for organizations in that state. And then the second point I wanted to make, when you said you didn't mean to make a commercial for EquIP...

Speaker 1:

No, actually, what I was going to say was: although you have that tool, until it's readily available to the public at large, it's really not a commercial. It's like the trailer for Ahsoka. You're showing us a trailer for your future Star Wars episode, but until it launches, or you find a way to launch it so that it's available to the general public, it's not really a commercial.

Speaker 2:

I mean, you're definitely whetting people's appetites, but we're not there yet. We are getting ready to launch it, but we're doing so in a way that we believe is responsible and respectful of where the field is and how to introduce it. So we're taking some time. But the reality is we've come a long way, so our anticipation is that we will be launching this over the next few weeks.

Speaker 1:

Well, that would be amazing, because the working title we had for this episode was The Rating Game: Missing the Mark by Third-Party Rating Agencies. And I think we've shown pretty well here, in a wide variety of ways, and come to appreciate more than ever before, that we are missing a lot of those marks. And when we get to the point where we not only have nonprofits that are measuring their outcomes, but rating agencies that are using much more sophisticated, non-standardized metrics, customized to communities, then we will have really made things much more equitable, for goodness' sake.

Speaker 2:

Yeah, I agree.

Speaker 2:

I think that where we're going with this, and the capabilities of it, is being able to really help donors, and I think everybody's heart wants to find the right organization in the right community, the one that's going to make the biggest difference. That's very important. I also just want to put a plug in here for a moment and say: listen, the research we did do found that most of the standards for overhead are way too low for organizations to be healthy, and what this should also do is help us understand the importance of unrestricted funding, the importance of not restricting your donations to programs alone. Please know that, truly, overhead may need to be anywhere between 30% and 50%, depending on the type of organization and the community you're in, to basically be able to pay the bills, keep the lights on, and run the organization strategically, intentionally and effectively. So the whole overhead piece is really important. And the other reason for that, Ken, is what you just said: the sector also needs to invest in itself, that 5% to 10%, and actually stop calling it overhead and call it part of the program, so it can evaluate and measure its programs and outcomes.
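A back-of-the-envelope illustration of why a fixed "90% to programs" standard collides with the 30% to 50% figure mentioned here; the budget number below is made up purely for the arithmetic.

```python
# Hypothetical example: an organization whose true cost of operating well
# sits in the 30-50% range will always "fail" a 90%-to-programs test.
budget = 1_000_000  # hypothetical annual budget

for overhead_rate in (0.10, 0.30, 0.50):
    program_share = 1 - overhead_rate
    print(f"overhead {overhead_rate:.0%} -> programs {program_share:.0%} "
          f"({program_share * budget:,.0f} of {budget:,})")
    # Only the 10% row clears a 90%-to-programs standard, even though
    # 30-50% may be what it actually costs to run the organization well.
```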

Speaker 2:

And again, the current data tools and all the things I was telling you about with respect to EquIP and what it's capable of, those tools are also available to do the program evaluation and outcome work in a much more efficient way. I was channeling you, because you always say to me, oh, you've basically figured out how to do it cheaper, faster, better. And that's the point. We are in a new world, so we can take things to the next level.

Speaker 1:

So let me show you how much we've changed in 10 years. I want to read you a couple of lines, and maybe this will be my closing thought. Here's what it says, and this is from the Overhead Myth letter: many charities should spend more on overhead costs. Overhead costs include important investments charities make to improve their work, investments in training, planning, evaluation and internal systems, as well as their efforts to raise money so they can operate their programs. These expenses allow a charity to sustain itself, the way a family has to pay the electric bill, or to improve itself, the way a family might invest in college tuition. And here's the blockbuster conclusion: so when you are making your charitable giving decisions, please consider the whole picture. The people and communities served by charities don't need low overhead, they need high performance.

Speaker 2:

Boom. That's a mic drop. That's a mic drop, I love it. And it still holds; it's still apropos, and I think we're in a place where we can actually show that it can be acted upon. And that's a wrap, brother. So that is a wrap, Ken. This was another great episode. I'll pass it back to you to put your final comment on it.

Speaker 1:

Rock on. Until next time, all right, be well.

An Introduction to Nonprofit Rating Agencies
Challenges in Measuring Nonprofit Outcomes
Rating Agencies and Nonprofit Overhead
Determining Ideal Overhead for Nonprofit Organizations
Equitable and Predictive Rating of Nonprofits
Nonprofit Rating, Equitable Funding Challenges
Overhead Costs in Charitable Giving