
What we learned doing Garden Grants

Consolidating and sharing our learnings from the launch of Homeworld's community-centric grantmaking program

Published on Mar 11, 2024

Introduction

Don’t confuse a clear view for a short distance. This is age-old wisdom often heard in entrepreneurial mentorship conversations. When we started building our grantmaking program for climate biotech, we certainly had a very clear view. We were going to get substantial support to ambitious teams quickly (weeks, not years). We were going to be radically transparent in the process. And we were going to make the whole process of grantmaking a positive addition to the community. It was a clear view, but like all meaningful visions, it was not a short distance. When we launched the program in August 2023, with the radical ask that applicants make half of their big idea public, we didn’t know if anybody would respond.

We set out to work on grants because we have been the practitioners stuck without funding. Before Homeworld, we struggled as individuals to fundraise for work we wanted to do: first in Biotech+CDR while we were at MIT, then Dan in pollution. Then we watched others struggle just the same to find financial support for the sustainability areas they were most excited about.

We did the analysis and talked with the community, and it all pointed to the same conclusion: there is very little funding for the most ambitious efforts. Deploying biotech against climate challenges means operating biology at industrial scale, and that is hard in both science and business. Very few funders have the knowledge or appetite to bet on the first experiment. If we can’t take the first swing at big ideas, how can we possibly expect to make the 1000x technological improvements the IPCC, CAP, NASEM, and others all say we need?

So we just made the funding platform ourselves. But we knew from the outset that we were going to do it our way, which is rooted in the community. We wanted the process of grantmaking to build conversation and surface collaborators. Even though we knew we were going to say “no” to a lot of people, we wanted even the rejections to be positive experiences that help the scientist. All in, this became four big experiments in Garden Grants: pitching in public, non-anonymous reviews, in-line commenting, and radical transparency in returning all reviewer comments to applicants.

If you build it, will they care?

We announced Garden Grants in August 2023, closed the application window in October, and sent decisions out in December. For a brand new program, in a brand new organization, that’s quite a clip! Here is the data:

  • 65 quality submissions with a $5.5M total ask, all with public problem statements. This is a difference-in-kind transparency: every applicant announced to the world the problem they are trying to solve and the dollar amount they are requesting.

  • 220 technical reviews and 130 meta reviews on the confidential solution statements. 

  • 750+ inline comments from technical reviewers, who marked up the grant applications just like a Google Doc, with applicants able to respond, encouraging the community to grow with warmth and rigor.

  • 62% of applicants said they probably wouldn't be able to secure funding for this project otherwise.

  • We're getting talent out of medical biotech to work on climate: several of these applicants are established in biotech, and this is their first effort in climate.

The four experiments inside Garden Grants

Homeworld is working to increase transparency in science philanthropy and change the way grantmaking is done. Below we list the four main experiments and what their best and worst outcomes could have been. In the following section we’ll go through the 10 holistic learnings.

Experiment 1: Pitching in public

The Garden Grant application is based on a public problem statement and private solution. Our hypothesis is that public problems create discourse around important areas while still protecting a practitioner’s right to hold their solutions confidential until publication or patents. 

At its best, this openness would facilitate new collaborations, stimulate community knowledge and discourse, and increase philanthropic funding into climate biotech. 

At its worst, nobody would have applied out of some fear of sharing their problem. Because we saw so many excellent applications, we conclude that this public/private split is a viable approach moving forward.

Experiment 2: Non-anonymous reviews 

One of the subtle parts of the Garden Grants design, on the advice of our Advisor David Lang, was to clearly separate technical reviewers (who assessed the proposals) from core reviewers (who made funding decisions). Our experiment was to keep core reviewers anonymous, but to make technical reviews non-anonymous by default.

At its best, we would see increased warmth and supportiveness between applicants and reviewers. “Reviewer 3” is always a jerk, and Homeworld’s experience with community building shows us that people can be both rigorous AND supportive when going by their real identities.

At its worst, we would have seen power dynamics or other forms of outright bias.

We gave reviewers the ability to remove their names from a review. In the end, only about 15% of reviewers opted to be anonymous, and we qualitatively observed more warmth and supportiveness throughout the reviews. We saw lines like the following: “I’m giving you a 3/10, but please don’t be discouraged! If you take my feedback and improve parts X and Y, I think it could be a competitive proposal next year.” We conclude that defaulting to non-anonymous reviews is the way to continue.

Experiment 3: All reviewer feedback provided to applicants

We gave each applicant a complete summary of the reviewer scores and comments. Our vision was that such feedback would help applicants strengthen the framing of their projects in the future.

At its best, this would have been a gift to the applicants, honoring their effort with complete feedback. Maybe it would even lead to collaborations with reviewers who gave favorable assessments.

At its worst, this would have led to lingering dissatisfaction among applicants or even direct messages to the reviewers.

We saw an overwhelmingly positive response from the applicants. That said, we did receive some pushback, including 3 requests for rebuttals, which we simply did not have the bandwidth to address. We know of at least one case of a collaboration forming between an applicant and a reviewer after the fact (note: the proposal was not funded), and we are still watching for any signs of negative discourse between applicants and reviewers. Overall, we conclude that committing to complete feedback is the right way to do it.

Experiment 4: In-line commenting on proposals

By using the Experiment.com platform, reviewers had the opportunity to provide in-line comments on the confidential solution statement. If a reviewer chose to engage and be non-anonymous, a discourse could open up between reviewer and applicant.

At its best, this would have been an opportunity for applicants to clarify their thinking and impress the review teams with their knowledge.

At its worst, this would have either been a feature ignored by reviewers or a bullying, pile-on experience.

In the end, we saw 750 inline comments. That said, they followed a power-law distribution: some projects had significant comments and discussion, while others had none. We conclude that this is a good feature, but we can do a better job of communicating expectations for the review process.

10 things we learned during Garden Grants

While the conclusions of each experiment are useful in isolation, we think it’s best to consider the holistic takeaways from all of these experiments working together. We write up our top 10 takeaways in the hopes that they spark useful conversations for others.

1: Everybody benefits from the grantor giving their best to applicants throughout the process.

By always doing our best to respond rapidly to emails, hosting office hours, and hosting two Q&As, we got to connect personally with many applicants. This taught us how to make the process more streamlined, including the discovery and reporting of important bugs. Several applicants engaged deeply with the grant process, which enabled the Homeworld team to make several improvements. The value went both ways: teams that engaged with the Homeworld team got active feedback on their problem and solution statements, which is educational for everybody involved.

2: We can all write better problem statements, and it starts with pulling away the solution. 

People have to care about your problem before they care how you solve it. A core design goal of Garden Grants was to build a discourse around important problems that need solving. We discovered that the default in science pitches is to interweave problems and solutions (see our blog post Important Problems Lead to Important Work). Overall, this leaves much room for improvement in our next round of Garden Grants.

A great problem statement is dispassionate towards any particular solution. A common failure we saw looks like this fictional example: “Despite high rates of carbon fixation, microalgae, specifically cyanobacteria, are not widely used as a tool for capture of industrial CO2 emissions. …Our project’s goal is to hyper-optimize substrain PX1234 for highly alkaline conditions.” This is a bad problem statement because:

  1. It misses the core constraint of current practice: WHY are cyanobacteria not widely used?

  2. It offers no milestone for overcoming that constraint. Can you frame a milestone independent of the solution you have in mind?

  3. It reads to us like a lab that works with PX1234 trying to pigeonhole its current work into our grant program, which is uninspiring.

Overall, we did see better, more concise pitches than we’ve seen in other grantmaking programs. But there is still a lot of room for everyone in the field, including us, to grow in writing excellent problem statements! We’re getting our own practice in as we build the Problem Statement Repository.

3: Let’s build a culture of de-risking: what is your target milestone, and how will you know if you reach it?

A good milestone is directly stapled to the de-risking process of a big idea. It’s a skill to plan a good experiment, and when that experiment is necessarily part of a larger road to impact, the space of possible experiments shrinks greatly. This skill is typically more present in startups, but it needs to be present in science research as well. Overall, pitching the impact of an early science project is very hard, and there is room for the whole space to improve at this art. In future versions of Garden Grants, we are going to push harder on the importance of clearly saying why the funded experiment has a discrete, de-risking milestone.

4: Science can be done outside academia, but it’s hard and needs support.

On a per-application basis, the ~6 applications we got from community biolabs received several times more hours of attention from the Homeworld team than university applications did. On one hand, that’s a credit to the community biolab folks: they were generally scrappy and simply asked for more of our time. Along the way, we got a decent sense of the strengths and weaknesses of the community biolabs.

Speaking very generally, these applicants were better at coming up with magnificent big ideas and writing them up as if in a blog, but weaker at describing a scientific journey. For example, if you’re proposing to go straight to inserting a 3-gene pathway into a non-model organism, what will you do if you observe no pathway activity? The gold standard for learning biology is to apprentice under somebody stronger than you, and we saw a shortage of senior scientist mentors in these applications.

5: Non-anonymous reviews can be interactive and beneficial to applicants.

We generally saw very positive discourse between applicants and reviewers through the non-anonymous reviewer option. A non-anonymized review process always leaves a chance for future touchpoints and opportunities for collaboration, and transparent feedback is beneficial to applicants. We also now know why most funding teams just give black-box “yes/no” decisions: it’s a ton of work to do this right!

6: A new process without precedent can be hard to communicate.

In the next round of Garden Grants, we can point to a previous round to show how the system works. But in the inaugural Garden Grants, it was quite hard to succinctly describe how the process would work without descending into word soup. To make it even harder on applicants, there was redundancy in our application instructions, and many parts were not clear. This is a place for improvement on Homeworld’s part: we plan to spend a lot of time developing new instructions into a single, succinct source of truth.


Beyond the application itself, our single biggest area for improvement toward applicants is being clearer about funding decisions. We were very transparent with technical reviews, but quite opaque about what got funded and why.

7: The Problem Statement protocol is good and needs to keep going.

Homeworld’s Problem Statement Repository directly led to over $500k in funding from just 9 vetted problem statements. This shows that the repository can serve as a resource researchers look to for identifying interesting problems to work on.

However, we saw lower diversity than expected in the pitched projects, especially among first-time protein engineers getting into climate. From applicant feedback, this was attributed to the fact that we had only a specific subset of problems written up in the Problem Statement Repository. This was our lack of clarity: we did not specify that applications did not have to be based exclusively on the problems we listed.

8: Biotech practitioners want to work on climate problems.

Through our discussions with practitioners, we have noticed a shift in the biotech community toward wanting to work on climate problems. This shift was clearly exemplified by our survey results: for 47% of applicants, this was their first proposal in climate, and 75% of applicants have been working in the space for less than 5 years.

9: Even a short application took people longer than we expected.

We modeled our grants program after Fast Grants, with the hope of a low-lift application (3-5 hours of work) and a short turnaround time (1 month) to fund projects. We asked applicants to complete a ~1800-word problem statement complemented by a solution statement with distinct categories. From our survey results, we saw that most applicants took much longer than we expected, with 75% of applicants spending 11+ hours on their application.

10: This is a repeatable model.

Spark Climate Solutions credited Homeworld Collective’s approach to open research problem statements in the launch of their methane research funding opportunity, Exploratory Grants for Atmospheric Methane Research. Spark launched this $300k funding opportunity to support research on atmospheric methane removal. They just concluded round 1 of their application process, with round 2 coming later in the year. We see this as early evidence that this model can be adapted and scaled to other funding opportunities.

Conclusion 

Well, did it work? We think so. 

Our main takeaway is that Garden Grants was a success, and we plan on doing it again. As a V1, we did well, but there’s a lot we can improve on. We deeply thank all the participants in the process for their feedback and collaboration.

Through the lens of 100+ applications, we've seen the eagerness of the biotech community to tackle climate challenges. The enthusiasm and quality of these submissions have demonstrated the depth of talent and the breadth of ideas waiting for the right support. This response reinforces our belief in the power of targeted funding to catalyze significant advancements in climate biotech.
