The Nonprofit Fix

How AI Can Solve the Nonprofit Overhead Problem

March 10, 2024 Pete York & Ken Berger Season 1 Episode 9

Discover the hidden truth behind the stigma of overhead costs in nonprofit organizations as we tackle the misconceptions that stifle their growth and effectiveness. We lay bare the conflict between the need for robust operational support and the hesitation of donors to finance what's often mislabeled as frivolous spending. This conversation isn't just about numbers; it's a candid look at how outdated funding practices are undercutting the very missions they aim to support. We bring to light the real-world consequences of financial constraints on staff performance and program success, urging a radical rethinking of how we value and fund nonprofit operations, including the use of AI to better study and understand what’s needed. 

The debate heats up as we challenge the conventional wisdom of gauging a nonprofit's worth by its overhead ratio. We dissect the complex relationship between spending and impact, arguing that our sector's future hinges on embracing a results-oriented framework. As we peel back the layers of data, tools, analysis and now AI, we underscore the need for a fundamental shift from penny-pinching to impact assessment. The episode draws a bold line in the sand, calling for donors to evolve, prioritizing the palpable effects of their generosity over the deceptive comfort of low overhead percentages.

Rounding off our deep dive, we scrutinize the financial health and surplus strategies of nonprofits, navigating the nuances of sustainability and growth. We highlight how advancements in AI and data science, including machine learning and predictive analytics, are poised to revolutionize our approach to financial health in the sector. Ken and I argue for a more sophisticated understanding of financial metrics, moving beyond averages to a tailored AI-driven analysis that reflects the distinct needs and circumstances of each organization. By leveraging AI and fostering a culture of transparency, we envision a nonprofit landscape where leaders are equipped to make strategic decisions and funders are inspired to invest in a more impactful and sustainable future. Join us on this journey to reframe the narrative and champion the true potential of nonprofits.

Speaker 1:

Welcome to the Nonprofit Fix, a podcast about the nonprofit sector, where we talk openly and honestly about the many challenges that face the sector, where we discuss current and future solutions to those challenges, and where we explore how the nonprofit sector can have a much more positive impact in the world.

Speaker 2:

A podcast where we believe that once we fix the nonprofit sector, we can much more dramatically help to fix our broken world. Welcome to the Nonprofit Fix, the podcast that, as the intro just rang through, dives deep into the challenges and opportunities facing the nonprofit sector. I'm here with Ken Berger. Say hi, Ken. Hi, Ken. Today we're tackling a topic that's critical to every nonprofit organization out there, yet is really often misunderstood. This is a perennial topic, by the way, that comes up all the time in the literature and gray literature in the field, everything from the Chronicle of Philanthropy on down. Articles come out about it probably every year, or multiple times a year, and that is the overhead problem. The first thing that's important when talking about the overhead problem is what we mean by overhead and what the problem is. By overhead, we're referring to the proportion of an organization's total expenses that are allocated to anything that's not program: administrative and operational costs, fundraising, technology, infrastructure, all the things it actually takes to run an organization. Yes, Ken.
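
To make that definition concrete, here is a minimal sketch in Python of the overhead ratio as just described: the share of total expenses that is not program spending. The figures are invented for illustration and do not come from any real 990 filing.

```python
# Illustrative only: these expense figures are invented, not from a real filing.
def overhead_ratio(program, management, fundraising):
    """Share of total expenses that is NOT program spending."""
    total = program + management + fundraising
    return (management + fundraising) / total

# A hypothetical organization: $800k program, $150k management, $50k fundraising.
ratio = overhead_ratio(800_000, 150_000, 50_000)
print(f"Overhead ratio: {ratio:.0%}")  # 20%
```

The same arithmetic underlies the ratios discussed throughout the episode; only the inputs (and how honestly they are reported) change.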

Speaker 1:

It's a good way to distinguish it.

Speaker 2:

That's what we're talking about when we say overhead. Part of the challenge is that for a very long time, and Ken and I have been in the sector for a long time and have talked about this, even in previous podcast episodes, we've been acknowledging the challenge, and almost the frustration, that the conversation really hasn't changed and the results haven't changed much. And that is that the donor community, both individual donors and grantmakers, still wants its dollars to go to program operations. They don't like their dollars going to overhead, and they will stipulate that they want as little of those dollars as possible going toward it.

Speaker 1:

Now we're making a gross generalization, and we'll talk in a moment about how things are changing, but that's kind of the way it is. Yes, I think the unfortunate reality is that the truth of what you're saying especially applies to the places where the most money comes from, other than fees. Government funding typically puts caps on overhead, foundations typically put caps on it, whereas individual donors, who may not give quite as much, often aren't as concerned with those stipulations. They're not in that universe. So the bigger the bucket of money, the more likely it is you're going to have these kinds of restrictions, and therefore it has the greatest impact on the organization.

Speaker 2:

That's true. The individual donor who gives directly to a nonprofit, especially if they're not a big check-writer, usually doesn't attach those strings.

Speaker 1:

Board members too, I might add. They don't do it either, because they know where the money is.

Speaker 2:

So there really is a difference there. However, when you do talk to individual donors, they are inquiring about this. And places like Charity Navigator, GuideStar and others, while doing a much better job of de-emphasizing this one metric, overhead, compared to how much it used to be emphasized, still include it as a factor in their rating systems. As a result, it's still front and center. I did some research in advance of this episode, and the challenge is that all the data continues to show, when you look at donor behavior data, that it is a big factor. If asked, donors say they don't want their dollars to go to overhead. So, again, this problem has been playing out for a very long time.

Speaker 2:

Fifteen years ago, in 2009, the Stanford Social Innovation Review came out with an article, "The Nonprofit Starvation Cycle." It's probably one of the most cited articles on this, and there's actually a study that came out around the same time that was similarly named. It talks about the impact of this emphasis on overhead not being something donors want to fund. What happens as a consequence is that we're underfunding overhead, which means nonprofits aren't spending enough on it, because of these unrealistic expectations. And what that really leads to is non-functioning computers and inadequately trained staff, which hinders quality service delivery, indirectly, but it does, because they don't have the tools and the resources they need to do what they do.

Speaker 1:

Yeah, I can tell you that I live that reality every day. I actually had meetings this morning with staff talking about how we don't have the infrastructure, we don't have the supports we'd like to have, and we can't get things done the way we want because of that scarcity and those challenges. So it's never-ending and a constant reality for those of us doing this work, that's for sure. And I know we've talked before about scale, so I can't speak to the largest agencies, but I've been on the boards of $200 million operations, and even there I see the same kinds of problems I see in smaller agencies, just on a bigger scale. When you have 1,000 staff you need a much stronger, bigger fundraising arm, and typically a bigger HR department. It seems to be one of those things that cuts across almost all nonprofits.

Speaker 2:

Yeah, and it really does create this sense of starvation too, because even if you're fortunate enough to have a relationship with a funder or two that actually understands and will write a check, and sometimes in the philanthropic world there is more energy going in the direction of general operating support grants, which are more flexible and not allocated, these things are happening out there, but the problem is they're few and far between. And part of the other issue is that when you do get those dollars, you can't rely on them year in and year out. Overhead is one of those things that doesn't go away just because your fiscal year has ended, so oftentimes it's also about sustaining those particular costs. Yeah, and, as you point out, being able to deal with that as you're scaling.

Speaker 2:

So this SSIR article, the Stanford Social Innovation Review article from 2009, I should have said, describes a vicious cycle: the low overhead expectations of funders lead nonprofits to underreport expenses and underinvest in infrastructure, and the cycle just continues. They argue in that paper that funders need to change their expectations and provide sufficient overhead allocation to break the cycle, allowing nonprofits to invest in the infrastructure needed to improve their effectiveness. Fast forward four years, and Charity Navigator, when Ken Berger was the CEO there, along with GuideStar and the BBB Wise Giving Alliance, issued a letter to the donor community. Let me pause there and ask: Ken, do you want to say something about that letter?

Speaker 1:

Well, you know, the interesting thing that comes to mind about that letter, and I mentioned it in another podcast, but one thing I didn't mention, was that it struck me, when I was at Charity Navigator, that the thought leaders were often divorced from the reality in the trenches. When that letter came out, one of the premier thought leaders, I won't name names, put out a tweet saying "now overhead is dead." And here we are years later, and yet again another thought leader's prediction is completely off the mark as to the reality. So I think it's a perfect example of how this is a long, slow road, and things often take much longer than we think they will. It's not that the letter wasn't important. I think it was important as a kick-starter and it had some influence, but the fundamental problem remains.

Speaker 2:

Yeah, it really does. And if you take that from 2013 and flash forward another decade to now, we can ask: what's the situation today? The research I was doing before this episode was really trying to see what we're finding. Is anything changing out there? And it's still the same frustrations, so much so that there are even books and films being made about it.

Speaker 2:

There's a gentleman by the name of Dan Pallotta who's done a lot of critique of this exact problem. Ken and I can have a separate conversation at some point about him. No, no, we'll have one as soon as you're done. He's an interesting person in the field who creates a lot of interesting, controversial conversations, but I just think it's notable that the resources were there to make a film from a book he wrote called Uncharitable.

Speaker 2:

I do agree with his basic tenet, which is that we're still starving nonprofits. They don't have the ability to invest in growth and infrastructure. Now, I think we'll talk a little bit about Dan's perspective on growth, but I want to zero in on the point we can agree on, which is that nonprofits really are not able to invest in their infrastructure, and as a result, it's starving them. But I also don't want to make this just about contemporary books and films; there are all kinds of other articles and research that continue to say the same thing. This has been a real problem for a long time, decades. And the thing I want to emphasize is that it's not just that nonprofits are being starved. That's a key element, but we have to remember that nonprofits are also just a vehicle, a means to actually effect change in their communities and in people's lives.

Speaker 2:

It's really about that. It's about the programs and their ability to get outcomes, which is being hindered by the lack of infrastructure. And for me, I'm an evaluator, I find it so frustrating. We're evaluating programs, and that's overhead, something that can't be funded, at least not on a regular basis. I do a lot of work trying to help organizations learn from their program data. That's called overhead, and yet this is where we're trying to figure out whether beneficiaries are better off. So we really do struggle continuously with the starvation problem, and ultimately we're starving those who benefit from the programs. That's really who we're starving. Yes, yeah.

Speaker 1:

And so I agree with everything you said. I just want to scroll back for one minute to Dan Pallotta. When his book Uncharitable came out, I remember to this day reading it, and if you were in the other room you'd have heard shouting and yelling. It was extremely disturbing to me. Whereas I agree with you, Peter, on the basic message, that the way we're looking at overhead is wrong, at the 50,000-foot level, the high level, I'm in alignment with Dan Pallotta, but I don't know that he's in alignment with us.

Speaker 1:

Yeah, and it's the way he characterized it. First of all, he was incredibly insulting to those of us in the nonprofit sector, suggesting that because we don't pay the same, we don't get the best and the brightest, that we're a bunch of dummies. That's just one example. And then also that if it costs you 99 cents to raise a dollar, that's okay. Some of his arguments are so off the mark it's unbelievable. So I just want to make the point that, in my opinion, Dan Pallotta is in a way actually hurting our ability to make inroads, because some of his arguments are so absurd and so far beyond the pale that they can be more damaging than helpful. That's why I say: at a certain level, sure, but the way he's going about it is divorced from reality. On the other hand, and I suppose if I were still at Charity Navigator this would make headlines, whereas now it won't, I actually think that, ultimately, where we need to get to is that donors shouldn't care at all about overhead.

Speaker 1:

Not at all. For nonprofits, with the kind of work you're doing, where you can identify the range of overhead that evidence shows you need in order to have the greatest impact, that will be incredibly important information. But for donors, I'm hoping we get to a point where overhead won't matter to them at all. So when a charity navigator provides information to donors, the question of overhead becomes almost irrelevant as a tool for a donor. The key tool for a donor in the future needs to be: what is the impact of this organization in this community compared to some other organization spending a comparable amount of money? How the organization allocates that money, as long as it gets to the result, shouldn't be the donor's concern. It's just like the analogy used by some who criticize the focus on nonprofit overhead: if Apple spends 90% of the money you pay for an iPhone on overhead, do you care? If it's the best phone you can buy, does that matter to you?

Speaker 2:

Well, to your point, the reason it shouldn't matter is because we understand that it achieves the results. Yes, if you're achieving results, that's it. I will tell you, I won't disagree, but I'll tweak it. I'm going to tweak it in this sense. What I know is that once we understand how to maximize outcomes at cost, the cost per outcome, for whatever your service type is, whatever you do as programming, and I do believe that what you do programmatically affects our understanding of what the cost per outcome is, then that becomes the lens, and this is a good segue to the next part of our conversation, the lens to figure out what the right level of overhead is. So I do think at some point we can answer the question for accountability purposes. I'm somebody who loves the nonprofit sector, I love nonprofits, but one thing I always put first is that nonprofits, again, are not an end in themselves. A nonprofit is not a for-profit company that is basically putting shareholder value in people's pockets through profit. That's not what we're doing. So from that standpoint, outcomes, and eventually cost per outcome, is the metric that should ultimately drive this. We haven't gotten there yet; we have some proxies, which I'll talk about. So the only difference I have with what you're saying, or the tweak, is that I do think how much you put toward overhead matters.

Speaker 2:

The issue is that right now we don't have the knowledge, the data and the research to be able to prove what that is. We haven't even scratched the surface enough, and I think some of the work we're doing... Well, you have scratched the surface. Well, we have, but again, it's about what's published out there. We've got to get our stuff out. Oh, yeah, yeah. But at the end of the day, we have the methodology now, and I'll talk about an article on the work we did for MacArthur in partnership with what is now BDO, though at the time it was another organization, FMA.

Speaker 2:

I will share what we're learning. We now have the opportunity to learn what that range is, and the point is that once we understand it, and we do it fairly, anchored on the outcomes, or at least on program output, on the lives being served, once we start there, we have a better lens to look through to give a fair answer to the question: what's the right level of overhead? But we also have to accept that it's going to change over time. It's messy, context matters, but that's where we've got to be.

Speaker 2:

And the thing I would hope to push the envelope on in this conversation, and push the field a little bit on, is that we actually have the data to do this now. We have the tools and the methodologies. The opportunity is here, so we can answer this question, and I will tell you right now, the answer is definitely "it depends." That's a good way to put it; it really does. One size does not fit all. There is no such thing as a magic overhead range, and that's the truth. And so we can switch gears to that.

Speaker 1:

But wait, wait, wait. I understand that we have the tools, and I know from conversations we've had before, and I believe on the podcast, that we have evidence of the positive impact on a community when it has a certain amount of needed nonprofit services relative to a community that doesn't. But until we have an order of magnitude more nonprofits measuring their impact, we have the tools without enough organizations using them adequately.

Speaker 2:

Absolutely. So, to anchor on outcomes: ultimately I think the judge is cost per outcome, because you're bringing in the finance, the cost of that outcome. Yes, but until we get there, you're absolutely correct, the best we have right now is outputs. At least we've got something to anchor on. For example, if you want to know one of the top predictors of school performance, people won't believe what it is. It's attendance. It doesn't sound great, and it's definitely not academic; we know there are a whole lot of other indicators that really advance the conversation, but the first thing is attendance. Right, that's an output. That's an output.

Speaker 2:

Fannies in the seats, yes. So at a minimum, we do know, and we'll talk about this a little bit, from the research we're doing here at BCT Partners with the Equitable Impact Platform, combining IRS 990 data with community-level data from the Census Bureau's American Community Survey, bringing all that data together, and we've talked about this on other podcast episodes, so we have that big data platform. We know the relationship between programmatic output, having access to programming, youth development, education, and what that does over a four-year period to a community's well-being. So we know that output matters, that nonprofit output matters. Now, we know that's not outcomes, and we don't pretend it is, but what it does is give us a lens to then ask the question.

Speaker 2:

So if we really want to learn what the right overhead is, you take the IRS financial data on any one nonprofit, you look at how much its program expenses are compared to its administrative costs, and you get a sense of what its overhead is. Then you look at that in relation to what level of overhead predicts, four years out, that the organization is sustaining and/or growing its programs, with program output measured by dollars of program expenditures. And, by the way, you look at how that interacts with other financial indicators, like cash on hand and debt-to-asset ratios and things like that. You look at all those variables together and ask: okay, what is uniquely the right level of overhead?

Speaker 2:

We can now answer that, and the answer is: it depends. It depends on what you do. Are you a youth development organization? Arts and culture? An education or special education school? A policy advocate? It depends on the size you've gotten to; the right overhead is not the same across developmental stages and sizes. And here's another factor we don't talk enough about: we now know it also depends on the funding community that surrounds you.
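
The four-year check Pete describes can be sketched roughly as follows: reduce an organization's filings to an overhead ratio and a program-growth multiple, then ask whether the former predicts the latter across many peers. This is a toy illustration, not the actual EIP methodology, and every filing figure below is invented.

```python
# Toy sketch (not the actual EIP methodology): relate an org's overhead
# ratio to whether its program spending grew over a four-year window.
# All filing figures are invented for illustration.

def overhead_ratio(filing):
    """Share of total expenses that is not program spending."""
    total = filing["program"] + filing["admin"] + filing["fundraising"]
    return (filing["admin"] + filing["fundraising"]) / total

def program_growth(filings):
    """Last-year program expenses over first-year (>1.0 means growth)."""
    return filings[-1]["program"] / filings[0]["program"]

filings = [
    {"year": 2019, "program": 700_000, "admin": 180_000, "fundraising": 60_000},
    {"year": 2023, "program": 840_000, "admin": 210_000, "fundraising": 70_000},
]
print(f"overhead in {filings[0]['year']}: {overhead_ratio(filings[0]):.0%}")
print(f"four-year program growth: {program_growth(filings):.2f}x")
```

Run across hundreds of matched organizations, pairs like these are what let an algorithm look for the overhead band associated with sustained or growing program spending, rather than assuming one fixed "good" percentage.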

Speaker 1:

I actually have a question on that, and it goes back to what you were talking about earlier in terms of scarcity. I think a very reasonable working hypothesis is that the vast majority of nonprofits are living in a state of scarcity with inadequate overhead. So if that's a given, and we then look at the existing data on overhead, couldn't an argument be made that the current observed range is not great, because that range is based largely on organizations experiencing scarcity? It's a snapshot of a range that invariably is still a problem, because the overarching environment is one of scarcity.

Speaker 2:

Yeah, and this comes back to something fascinating about what we're talking about, because part of it is how nonprofits report this. As you know, on the IRS form there's no exact science; it's an art. How do you take your expenses and divide them between program expenses and administrative expenses, management and fundraising? That's what the IRS form is asking you to do.

Speaker 2:

So many of you are probably listening to this and thinking: well, how can you trust this data? Indeed. What I'm going to tell you is what we have found. One of the first things we do with the data is match organizations based on these factors: your community factors, what you do, size, that kind of thing. We find 250 to 350 organizations that match you on the type of community you're in, the level of giving with which that community supports the nonprofit sector, how many nonprofits there are, what you do and what your size is. Now, imagine we match you and then look at all your financial data. You may say: wait a minute, you just told us it's not in a nonprofit's best interest to report high overhead or low program expenditures. Right, yes. But the really cool thing about all this data is that you get natural experiments, because within those matched groups of 350, some organizations might overinflate, some might underinflate, and some might be reporting accurately; there's a whole range. What the algorithms are doing is evaluating all those ranges in relation to data points that are less manipulated, that can't be manipulated as easily, and those are some of your basic numbers, like your revenue and your output. We look at those from a raw standpoint and predict them. The advantage is that across all those natural experiments, we know some of the variation is reporting error and some of it is real, and there's a wide range. What we can begin to see is what that range looks like, and we begin to realize certain things.
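
The matching step described here, finding a peer group of similar organizations before comparing financials, can be sketched as a simple nearest-neighbor search. The feature names, data and choice of distance below are all invented for illustration; the real platform's matching variables and algorithm aren't specified in this conversation.

```python
# Toy sketch of peer matching: find the organizations closest to a target
# in a small feature space (size, community giving level, nonprofit density).
# Feature names and all data are hypothetical.
import math

def similarity_match(target, candidates, features, k=3):
    """Return the k candidates closest to target by Euclidean distance."""
    def distance(org):
        return math.sqrt(sum((org[f] - target[f]) ** 2 for f in features))
    return sorted(candidates, key=distance)[:k]

target = {"name": "Org A", "log_revenue": 6.0, "giving_index": 0.5, "density": 0.4}
candidates = [
    {"name": "Org B", "log_revenue": 6.1, "giving_index": 0.5, "density": 0.4},
    {"name": "Org C", "log_revenue": 8.0, "giving_index": 0.9, "density": 0.1},
    {"name": "Org D", "log_revenue": 5.9, "giving_index": 0.4, "density": 0.5},
]
peers = similarity_match(target, candidates, ["log_revenue", "giving_index", "density"], k=2)
print([p["name"] for p in peers])  # closest peers first
```

In practice the candidate pool is thousands of 990 filers and the feature set is much richer, but the idea is the same: compare an organization's financials only against genuinely comparable peers, which is what makes the "natural experiments" in reporting visible.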

Speaker 2:

So, for example, we can see this with arts organizations, or take large hospitals and healthcare clinics located in major metropolitan areas, where there are a lot of funders, a lot of money around, and a large nonprofit sector around you. Those organizations need a pretty high overhead in order to sustain and grow over a four-year period. It may shock people that for some of those organizations it's 35%, 40%. If you're in the business, you probably know that. Right, and we find this kind of honest zone where it's pretty clear. But then I'll give you another example. Many people don't realize this.

Speaker 2:

There are hundreds, if not thousands, of organizations across the United States that manage and run all the sports leagues kids play in: baseball, basketball, football, all of them.

Speaker 2:

There are nonprofits that manage those particular leagues and all those systems.

Speaker 2:

And it turns out that for those organizations, when it comes down to it, most, if not all, of their dollars need to be put into infrastructure, because the program side is mostly just funding uniforms and the other equipment.

Speaker 2:

That's because of, by the way, the other thing the nonprofit sector doesn't talk about: the role of volunteers. Take a completely volunteer-run program like recreational sports leagues for kids. If you held them to the same standard of the old 30% or whatever, you'd be way off. The healthy program funding of those recreational sports leagues is something like 20% program, 80% overhead, because they have a pure-volunteer, or at least very low-cost, programmatic model. Maybe they're paying some of their coaches, but it's much more of a fundraising model, a different model altogether. The point is, we have to get nuanced and leverage our data better to figure out what the right zones are. And by the way, in some cases the overhead rate, to your point, Ken, from before, turns out not to be a predictor of health for some types of organizations. It's not a key variable, it's not very important, in which case you just say: don't worry about it.

Speaker 1:

Well, I guess I still circle back to another hypothesis, which is this: if we assume that the vast majority of nonprofits are functioning in a scarcity environment where they're starved for overhead, and if we provide them with a range based on the aggregate data of organizations similar in size, purpose, community and so forth, then my hypothesis is it's better to be on the higher end of that range than in the middle or the lower end, because that will reduce the likelihood of being in as much of an environment of scarcity.

Speaker 2:

I wouldn't argue against that. If I stick to the data, I'll tell you: just be within the range. And to me, what's more important about the range is that, at any given moment in time, if every nonprofit leader understood what that range was, it would give them another way to adjust for other realities. Part of it is that we shouldn't be looking at overhead by itself. The questions should be: what's the right range of overhead? What's the right range of cash on hand, how many months should we have? What's the right range of debt to take on in order to be able to grow? I'll give you another variable.

Speaker 1:

But within that context, and again, another reason why I'm honing in on this issue of the range of overhead: if and when funders finally embrace this, understand this, and come to accept this, there is always the danger, I think, even in cases where some of this data has been used before, that they'll end up going for the average or the middle of the range, as opposed to saying, okay, you have the latitude to go up to the maximum of the range. Instead, in their never-ending zeal for thrift, they'll just go for the middle of that distribution, or whatever it might be. And I guess the other part of this is, as you've mentioned, these other metrics that I agree are equally, if not more, important. How much of those will funders be willing to consider in making these decisions?

Speaker 2:

So we're talking about overhead, but let me give you an example of another financial health indicator that we don't talk about enough in the sector, and that is how much surplus you have at the end of the year. In a for-profit, surplus goes to the shareholders, but in a nonprofit it carries over, and I like to think of surplus in the nonprofit sector as a way to invest in other things next year. It's a future-oriented indicator. And it turns out, when we look at these matched comparison groups I was telling you about, and then we look at all these variables, cash surplus, debt-to-asset ratio, cash measured by liquid unrestricted net assets, we compare all these different financial indicators together and look at the interaction effects. For some groups of matched nonprofits, the number one predictor of programmatic output is, in fact, carrying a surplus.

Speaker 2:

This is another one that I would imagine the donor community reacting to with: oh my God, you're a nonprofit, you shouldn't be keeping a surplus. But the point isn't that it's profit; it's surplus to invest in the future. Let me make this final point about that surplus: there are some types of nonprofits, not all, but some, based on where they are and what they do, for which carrying a 10%, 20%, even 30% surplus into the next year is a better guarantee, four years out, that they're going to be investing in their programs to grow them, sustain them, and scale them. We don't talk about that. And, by the way, there are some organizations for which that would be an inappropriate level; it would hinder them. The point is, it's fascinating: we hang so much on overhead, but we don't ask, well, what about surplus?

Speaker 1:

Well, what I was going to say, and this is true for those of us in New Jersey, but it can be anywhere: with government funders, I know with our major government funder, a 2.5% surplus is the maximum you are allowed, and 95% of our budget comes from that one source. Therefore, our ability to have a surplus is vastly impeded. On top of that, the surplus can only be used for those programs. It can't be used for innovation or something else on the mission; it has to be for that. So those kinds of restrictions are a problem.

Speaker 1:

The other thing that is often debated when it comes to surplus is what to do with it, because there are some donors, some board members, and some leaders who hold the philosophy that we shouldn't have any surplus, that we should spend every dime on the mission and not hold on to any money. And then, at the other end of the spectrum, you have places like some universities that literally hold on to billions and billions of dollars, to the point of absurdity. And so the whole point of endowments.

Speaker 1:

Yeah, yeah. And so it's like yeah, yeah, we're right over the parking lot with this.

Speaker 2:

Yeah, and it's one of those things that I think we need more conversation about. Just as we're anchoring this episode on the issue of overhead, the surplus question, as I pointed out, goes for debt too. There's debt that could help you and debt that could hurt you, and with the debt-to-asset ratio there's a range there as well. If you can imagine where data is going, and where we at BCT, with EQUIP, are really focusing: imagine we first figure out who you are, so we can make a fair comparison, who you are, where you sit, and what you do. Once we know that, it's just like going to the doctor and getting a lab test. It tells us what the ranges are for all of our different blood work items and says, okay, we need to be within these ranges, and then the doctor can say, okay, we need to get you a little more of this and a little less of that. In some cases, those different elements are things you need to adjust together. That's what our sector really needs, something much more tailored. That would be amazing, and that's really where we're going with this data.
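The lab-report analogy can be made concrete. Here is a minimal, hypothetical sketch, not BCT's actual EQUIP implementation: the indicator names and peer values are made up, and the "healthy zone" is stood in for by the interquartile range of a matched peer group.

```python
# Sketch: flag which of an organization's financial indicators fall outside
# the "healthy zone" observed in a matched peer group, like ranges on a lab
# report. All numbers and indicator names below are illustrative.
from statistics import quantiles

def peer_range(values):
    """Return the interquartile 'healthy zone' (Q1, Q3) for one indicator."""
    q1, _, q3 = quantiles(values, n=4)
    return (q1, q3)

def lab_report(org, peers):
    """Compare an org's indicators to matched-peer ranges.

    org:   {indicator: value for this organization}
    peers: {indicator: list of values from matched peer organizations}
    Returns {indicator: (value, low, high, status)}.
    """
    report = {}
    for name, value in org.items():
        low, high = peer_range(peers[name])
        if value < low:
            status = "low"
        elif value > high:
            status = "high"
        else:
            status = "in range"
        report[name] = (round(value, 2), round(low, 2), round(high, 2), status)
    return report

# Hypothetical peer group and organization:
peers = {
    "overhead_pct": [18, 22, 25, 27, 30, 33, 35, 38],
    "months_cash":  [1.0, 2.0, 2.5, 3.0, 3.5, 4.0, 5.0, 6.0],
    "surplus_pct":  [0, 2, 4, 5, 6, 8, 10, 12],
}
org = {"overhead_pct": 12, "months_cash": 3.2, "surplus_pct": 9}

for name, row in lab_report(org, peers).items():
    print(name, row)
```

With these made-up numbers, the organization's overhead would be flagged as below its peer zone, the kind of starvation signal discussed above, while its cash and surplus sit inside their zones.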

Speaker 2:

It's precise, but it's not perfect. Obviously, if we're 85% accurate at predicting, we're wrong 15% of the time. But the point is, we're definitely seeing things here that even the research literature is starting to unearth, like the U-shaped overhead curve for arts organizations as they grow. We all kind of know this intuitively, but imagine being able to do this continuously and refine it and, as a result, change the conversation in the field around what overhead means. And not just overhead in isolation, as if it's the one number. Like I said, I don't think the field even has enough conversation around surplus. But how are we supposed to have nonprofits that invest in their future in order to grow their programs? By the way, I've got a recommendation for every nonprofit that wants to build a surplus: evaluate your outcomes. That will tell you enough, because it'll tell you what's working and what's not, and that's the core of your business; that's your mission vehicle.

Speaker 1:

Yes, yes. So there really is a solution there, and I would say to my colleagues out there: step one, even before that, is to make sure you have your data digitized, and compile a couple of years of it so it can then be used to analyze your impact.

Speaker 2:

And along these lines, as we start to work with more funders on this, I want to come back to something you just mentioned. In a lot of the work we're doing, there's a tendency, when we have conversations with funders, for them to say: well, that's just too nuanced to understand, it's too complicated, it's complex. We need one number. Even if we admit we're not providing enough, even if we believe and support the idea that the starvation cycle has to end and we're going to provide general operating support, we still need one number. A lot of the time we're trying to roll everything up into one magic number, and the problem is those ranges are wide. What I would argue is that this is because we still think like researchers and policymakers: we want to roll everything up into one simple solution, one simple number. Give me the number. What's the number? If it's 30%, it's 30%.

Speaker 1:

Isn't it 42?

Speaker 2:

Yeah, I don't know Right.

Speaker 1:

The Hitchhiker's Guide to the Galaxy.

Speaker 2:

I think it's 42. Oh yeah, I didn't get the reference at first; it's been a while since I've read that. But at the end of the day, part of the problem is that we're also not thinking about how to communicate these findings. If you've got algorithms that can figure this out, and we are using AI and machine learning to do this work with transparency and a focus on equity and reducing bias, then you have a way to really understand what those levels are for each organization. The point is, you should be able to see your data, not in a report that rolls everything up. That's fine for funders and policymakers, but if you're an organization, you should be able to look your organization up, know that behind the scenes the technology is matching you with other organizations, and have it share with you, like the doctor's lab results, what's there. It gives you counterfactual proof: if you're not in these zones, this is the consequence. Your program is going to decrease by 12% over the next four years. And it gives you a sense of what to do about it. All that information can come out, instead of trying to roll everything up so we all have one number we can magically tune our policies to, whether that's grant-making policies or government policies. As policymakers and government, with technology the way it is, we need to adapt so we can get those insights about each case and understand each case.

Speaker 2:

It's a classic case if you've read The End of Average by Todd Rose, and it's a fantastic book if you haven't. He gives the example of the early jet pilots. They made the planes, and there were all kinds of crashes all the time. Finally, they got a data nerd to come in and study the problem, and what he found was that they had built the cockpit seat based on the average of the pilots, in relation to the controls, the column, and all the instrumentation, so the seat was fixed to the average. This data geek then went and looked at all the pilots and couldn't find one pilot who actually was the average, because that's the nature of the average. He came back and said: your problem is that your seat is built for the average, and there's no such thing as an average pilot. You need to make a seat that can adjust to the pilot, and you will reduce your fatalities.

Speaker 2:

And at first the Air Force kind of scratched its head and said: well, if we do that, it's going to be awfully expensive.

Speaker 2:

We can't do that. And this is the analogy I'm trying to draw with the one-size-fits-all idea. Actually, if you think about it, they just had to engineer a seat that could move back and forth, right? So what we need to do is stop trying to roll these numbers up, for all of you who want that seat fixed in one place, and realize there's a lot of starvation, our analog of fatalities, because we're trying to do that. Instead, embrace the idea that the tools and technology are here, but what you've got to get comfortable with is "it depends" as the answer, and you need to understand context to figure out what to do. I know it's hard to tune your policy, because policy by nature is about moving the average, but you need to start thinking about decision support tools and technologies that break you away from that and get you into a different space.

Speaker 1:

You know, it's funny, when you talk about that, the analogy that comes to my mind is that there should be a profile of the qualities of leadership that are most likely to lead a nonprofit to crash, and the qualities of leadership that are most likely to make it thrive. It's like: do you have basic ethics? Are you a control freak? Are you... you know, it's like.

Speaker 2:

Well, the point on those things, Ken, is that each one of those is a dimension of leadership, and all those dimensions together are what make up leadership. There is no average picture; it's jagged, as Todd Rose talks about, and I really do recommend the book. The whole point of jaggedness is that the reason we can't find averages is that no construct we come up with, like leadership, is made up of one dimension, one simple variable. You're not just stepping on a scale and getting one metric. There are all these dimensions to it. Some of us are strong on some, weak on others, and in the right zone on still others, and we have to put the whole package together. It sounds messy, but at the end of the day, it creates pilots who can be effective, because you've tuned the seat to them, right?

Speaker 1:

Yeah, we need better nonprofit pilot training too.

Speaker 2:

Yeah, that's right. So that's kind of where we are. With respect to this topic, to circle back around: we wanted to revisit this overhead question and dig into it a little, and we really did want to say something about the fact that we're entering a new era where it's possible to do great work. We've got the data, and, by the way, now, with everything being electronic and open, we are so fortunate.

Speaker 2:

In the United States, I know people don't always trust the IRS data. They're like: oh, the data's a mess. Compared to other countries, it's amazing how many variables we have in this data set. And I would also tell you: if it's such a mess, why am I getting 85% to 90% accuracy at predicting what you're going to look like four years from now? So, at the end of the day, sure, it's a mess. Not everybody reports things faithfully. But guess what? All that variance gives our model something to chew on, to find the right zones. If everybody was over-inflating, if everybody said 90%, overhead would never be a predictor of anything.

Speaker 1:

That's mind-blowing, because all the presentations I'd heard in the past on this sort of thing, from GuideStar and others, characterized the data as such a mess. But you have dug deeper and looked more closely than anybody I know of, so I'm much more inclined to believe what you're saying, and it's kind of mind-blowing to hear.

Speaker 2:

Well, there's an element to this. When people say the data are a mess, I always challenge them, because I do a lot of work with program administrative data, with survey data that's been collected haphazardly and loosely, and with government data sets that are a mess. When I say a mess, what I mean is that there are multiple tables and not all the fields are there, and that's all true.

Speaker 2:

But in this era of machine learning and predictive analytics, we can actually evaluate how bad the data are, because part of the validity of the data is whether they can predict something in the future. And we do causal modeling, so: can they causally predict something while we're controlling for variables? The truth of the matter is, if the data were so bad, our models would be at 50% or 60%. It would be chance; they really wouldn't have the accuracy. But what we're seeing is: no, that's not true.

Speaker 2:

And actually there are some really good articles out there on the IRS data from years ago. The dean of the social work school I attended at Case Western, Darlyne Bailey, I believe, published an article on the validity of using IRS 990 data for research. Even then, when you really dug into it, there was good evidence that it was effective. The way you know something is valid is, as with any good scale development in research, to ask: is it correlated with something we can empirically observe? Does it predict anything in the future? These are elements of validity. It also involves construct validity, causal modeling, all that kind of stuff. I can get very data-geeky on this.

Speaker 2:

But at the end of the day, the key thing is this: if the data truly were as messy as everybody thinks, they would predict nothing, because what you're really saying is that you can't trust them. But if I can tell you that I can take the IRS data now and, eight out of ten times, know what you're going to look like four years from now, I would say: sure, the data has some mess, maybe 20% mess, but I've got 80% signal. For those of you who are data geeks, we do all the analysis, things like area under the curve, and it's amazing what we're seeing. All of which is a long way of saying the data is good enough for us to understand what these ranges are, what's in there, and what the differences are. And that circles us back around to overhead.
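For listeners who want the data-geek detail: "area under the curve" (AUC) can be computed directly from its rank interpretation, the probability that a randomly chosen positive case gets a higher model score than a randomly chosen negative one, so 0.5 is chance and 1.0 is perfect. A minimal sketch, with entirely made-up labels and scores rather than the actual IRS-data model:

```python
# AUC via the Mann-Whitney rank interpretation: the fraction of
# (positive, negative) pairs where the positive case is scored higher
# (ties count as half a win). Labels and scores below are hypothetical.

def auc(labels, scores):
    """Probability that a positive case outranks a negative case."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical validation set: 1 = program output grew four years later,
# 0 = it didn't; scores are a model's predicted probability of growth.
labels = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.2, 0.85, 0.75, 0.4, 0.65, 0.5]
print(f"AUC = {auc(labels, scores):.2f}")  # prints AUC = 0.92
```

The point mirrors the argument above: genuinely noise-only data would hover near AUC 0.5, so an AUC in the 0.85-0.90 range is evidence the messy data still carry real signal.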

Speaker 2:

When it comes down to overhead, what we are seeing is this: if you take anything away from this particular episode, it's that when anybody asks the question, what's the right level of overhead, the answer is:

Speaker 2:

It depends. And we can actually show you what that range is, as well as many other factors that bear on your long-term programmatic output, growth, and sustainability, and one day everybody will start to pick up on how we can use these tools to evaluate your outcomes and programs.

Speaker 2:

I think we'll start to really understand what it means to run with the right level of overhead. And all that to say: funders, donors, rating agencies, start to really embrace the idea that it depends, and that we have the tools, technology, AI, ML, and data to get this done. And I'll put in a plug for BCT, since it's our podcast: give us a call, talk to us, because we've got our Equitable Impact platform that's already doing some of this work, and we'd love to partner with researchers and others on it. So overhead doesn't have to be a challenge, a burden, or a myth, as long as we learn what it is, what it depends upon, and embrace things like Todd Rose's end-of-averages philosophy. The overhead truth. So, Ken, any closing remarks or thoughts on your side?

Speaker 1:

Well, I just want to circle back.

Speaker 1:

I just want to say one more time that, as we talk about solutions to these problems, I think you've laid out where we need to get to. When I look down the road, I see a place where, using the analogy of a blood test, a nonprofit leader can look at these various data points, the ranges of where an organization should be, see where they fall outside those ranges, and work toward them. That could be an incredibly powerful tool. And at the same time, for donors, less and less emphasis on overhead and more and more emphasis on cost per outcome is the future. We've been talking about this; I was talking about it back at Charity Navigator. And, as you and I have discussed many times, one of my universal truths is that things take three times as long as you expect them to. So I still believe we're going to get there, but we're still on a long road.

Speaker 1:

So yes, onward. Onward we go. But your work is critical to that; I say it again and again.

Speaker 2:

Well, along those lines, I'm going to put in one final plug and say: go to bctpartners.com/equip, that's E-Q-U-I-P, if you want to learn more. We did a webinar there that unpacks everything that's in it. And thank you, Ken, it's always fun to do these episodes. Same, same. Be well, till next time. Bye, everybody.
