Each of the 100&Change proposals was evaluated by a group of our expert judges using four criteria: meaningful, verifiable, feasible, and durable.
We quickly settled on meaningful as the first criterion. It is the goal of the competition: tackle a significant problem that would really matter. We knew going in that there were many problems $100 million could not solve, and we were comfortable with applicants addressing a slice of a problem, but it needed to be a compelling slice.
Our intent was to define meaningful broadly; however, we probably should have been clearer. A solution did not need global reach or have to affect a large number of people to meet our standard of meaningful. It could also address a serious, even devastating, problem facing a well-defined population or a single geography.
The second and third criteria, verifiable and feasible, emerged from a desire to answer two questions: Will the solution work, and can the applicant do it? We wanted to mitigate the risk of picking a proposal that was completely untested or untried. There is a space for competitions like the XPrize, which is focused on breakthrough innovations. MacArthur was not seeking to occupy that space.
In philanthropy, there is a tendency to want to be the first to fund an idea or project. But we perceived a gap in the philanthropic field: a need for funding to take tested ideas to scale. We saw 100&Change as a way to address that gap. By requiring evidence, we recognized that the proposals submitted were likely to already have significant funding from other sources. Having evidence that a proposal worked, at least once, somewhere, and on some scale, was important to us.
For feasibility, the kinds of questions we wanted judges to consider were: Does the team have the right expertise, capacity, and skills to deliver the proposed solution? Do the budget and project plan line up with realistic costs and tasks? MacArthur explicitly asked applicants to address potential risks and how they planned to mitigate them.
The last criterion, durable, is the one that sets 100&Change apart. Because we were focused on solving a problem, we did not want the solution to be temporary or transitory. We wanted whatever we chose to have a long-term impact. We thought about durability in a few ways.
The first is that $100 million can fix a problem forever: once it is fixed, there is no need to address it again. The second is that $100 million may set up the infrastructure required so that the ongoing marginal cost is low and there is an identifiable revenue stream to cover it. The third is that $100 million may unlock resources and identify others who will commit to funding the work over the long haul.
We asked applicants a few questions: If this is going to cost more than $100 million, how much more, and how do you plan to fund it? What are the long-term ongoing costs, and what is your plan to cover them? Many applicants either ignored the sustainability question or gave vague answers, making it a challenge for the judges to assess durability. Of all the criteria the judges scored, durable had the lowest median score.
While 100&Change was open to problems from any domain or field, the four criteria – meaningful, verifiable, feasible, and durable – implicitly restricted the types of problems and solutions that would be competitive.
For example, a project to deliver meals to homebound seniors addresses a serious need and might have strong evidence to support its efficacy. But sustaining the project would likely require a continuous flow of philanthropic dollars, so it would not score high on durability. A project to develop a mobile phone app to reduce youth violence would address a meaningful problem but likely would not have a body of evidence to prove it would work. Therefore, it would score low on verifiability.
In both examples, the proposed projects are likely to yield significant social benefits and deserve philanthropic support. Yet they would not fit well within the 100&Change parameters. There may be several cases like this among submitted proposals, where applicants addressed a significant problem, yet their scores reflected the parameters of the competition, not the quality of the idea.
We will analyze the 100&Change database to examine which fields or types of problems were at the greatest disadvantage, and we will use what we learn to reconsider the criteria for future rounds of the competition.