The Challenge
Catalyst Fund 14 has begun. As with the previous fund, we decided to set aside some of our activities and allocate time to review proposals. Because our area of expertise is software development, my team has been focusing primarily on the ‘Cardano Open: Developers’ category. Let’s start by putting some numbers in perspective. In this category alone, there are 324 proposals, with an average budget ask of 79K ADA. Meanwhile, 3.1M ADA is available in this category, which means that only about 40 proposals can be funded. Or, to put it differently, 88% of proposals will be rejected. In a certain way, choosing is refusing, and the category likely contains more than 40 relevant and deserving proposals. That is the reality of Catalyst we have to cope with.
We also regret that our Fund 13 evaluation came too late and surprised many builders. So this time, we’re doing our best to publish our results as quickly as reasonably possible.
Additionally, we want to ensure that proposals receive a fair review. Yet time is limited. Let’s assume we’re aiming for a whole week of review. Someone working full-time for that week and spending an equal amount of time on each proposal would manage around 7 minutes per proposal (40 hours is roughly 2,400 minutes, spread over 324 proposals). Allocating more than a whole week is difficult because life continues in the meantime: any time spent on Catalyst reviews is time not spent elsewhere (e.g., improving tools for the ecosystem, developing Amaru, or helping builders on Discord). Hence, if we want to spend a proper amount of time reviewing proposals, we must proceed more quickly on some to allow more time on others.
The Method
To best tackle this, we followed the same base methodology we used for Fund 13 and tried to improve on it, thanks to our own learnings and to the feedback we received from the broader Cardano community. To recap briefly, we divided the team into two sub-groups: champions and contrarians.
- The role of champions is to reduce the initial set of proposals to a manageable subset that can be more thoroughly reviewed. Reaching the final shortlist involves several key steps, which we’ll outline shortly. Champions are subject matter experts in the field (we are software engineers who could be writing such proposals).
- The role of contrarians is to conduct deeper reviews of the proposals shortlisted by champions. They are also subject matter experts, yet they approach their review with a more critical mindset, challenging the champions’ selection. They are also slightly involved in the shortlisting, as we’ll explain shortly.
Note that, to reduce bias and errors, we assigned 2 champions and 2 contrarians, so that the work isn’t distributed amongst them but replicated. These dual reviews let us consistently balance one reviewer’s opinion against another’s. In the end, this means that a proposal which ends up shortlisted has gone through 4 pairs of eyes. It isn’t infallible, yet we believe it produces good overall results within the given timeframe.
Step 1: Champions Screening
This is probably one of the toughest and least rewarding moments: champions go over ALL proposals within the category and sort each of them into one of three buckets:
- promising: looks reasonable, fits the category, well-structured, intriguing enough;
- rejected: unfit for the category, too many red flags, poorly constructed;
- potential: hard to assess, unclear value proposition.
This screening has to be “quick” because there are many proposals at this point. This step isn’t about reviewing proposals per se, but about quickly scanning proposals to identify those that fit the category, seem reasonable/plausible, are open-source, and have the potential to advance the ecosystem further.
That means champions are only given about 1 to 2 minutes per proposal. With 324 proposals, that’s at least 8 hours each to go through everything. Once both champions are done, we can establish the initial screening shortlist by combining their “votes” as follows:
- promising × promising = promising
- rejected × rejected = rejected
- potential × promising = promising
- potential × rejected = rejected
- potential × potential = potential
- promising × rejected = ad hoc discussion
In cases where there’s a substantial divergence of opinions, we bring the proposals to the table and discuss them on a case-by-case basis. In our screening, we had three such cases. We ultimately decided to promote all three to promising for the simple reason that if they looked interesting enough to one of the two champions, then they probably deserved a more thorough review.
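Expressed in code, the combination rule boils down to “promising beats potential, rejected beats potential, and a head-on disagreement triggers a discussion”. A minimal sketch (the function name and label representation are ours, for illustration):

```python
def combine(vote_a: str, vote_b: str) -> str:
    """Merge two champions' screening labels per the table above."""
    votes = {vote_a, vote_b}
    if votes == {"promising", "rejected"}:
        return "ad hoc discussion"  # head-on disagreement
    if "promising" in votes:
        return "promising"  # promising wins over potential
    if "rejected" in votes:
        return "rejected"   # rejected wins over potential
    return "potential"      # both champions are unsure
```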
At the end of this process, 90 proposals were shortlisted as promising, 199 were rejected straight away, and 35 ended up staying as potential.
The latter are mostly proposals that we were unable to assess, either because we lacked expertise in that particular sub-area or because we were unsure whether they were a good fit. In an ideal world, we would revisit them more carefully. Unfortunately, some good proposals will inevitably be dropped. Again, that’s the harsh reality of Catalyst.
Step 2: Contrarians Recovery
Due to the intensity of the screening and its merciless nature, we considered it necessary to include a recovery step, where contrarians may give a second chance to proposals that were rejected or remained stuck in potential. So, we took two (non-overlapping) random samples – one per contrarian – each made of 10 potential and 15 rejected proposals. That’s 50 proposals in total that ended up being re-evaluated by contrarians.
Yet, none of the randomly selected proposals were promoted back into the queue.
Parenthesis: quick maths
How large should those random samples be? The exercise of picking proposals that may have been wrongly rejected follows a hypergeometric distribution. Suppose we assume that champions made about 5% mistakes (1 every 20 proposals): among the $N = 234$ proposals left out after screening (199 rejected plus 35 potential), roughly $K = 12$ would be mislabeled. The probability of selecting $0$ wrongly rejected proposals from $n$ random picks is then:

$$P(X = 0) = \frac{\binom{N - K}{n}}{\binom{N}{n}}$$

Hence, the probability of picking at least one wrongly rejected proposal is given by:

$$P(X \geq 1) = 1 - \frac{\binom{N - K}{n}}{\binom{N}{n}}$$

For $n = 50$, we obtain a probability of ~95% of catching at least one mistake. Thus, having each contrarian review 25 proposals, each selected at random, provides a good safety net to salvage potentially mislabeled proposals during the initial screening. We distributed this among potential and rejected with an obvious skew towards the potential ones, as we reckon this is where the highest chance of mislabelling occurs.
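For readers who want to double-check the number, here is a minimal sketch of the computation (the values of N and K are derived from the screening counts above, under the assumed 5% error rate):

```python
from math import comb

N = 199 + 35          # rejected + potential proposals after screening
K = round(0.05 * N)   # assumed ~5% screening mistakes, i.e. 12 proposals
n = 50                # total proposals re-sampled by the contrarians

# Hypergeometric probability of drawing zero mislabeled proposals.
p_zero = comb(N - K, n) / comb(N, n)
print(f"P(catch at least one mistake) = {1 - p_zero:.2%}")  # ≈ 95%
```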
Step 3: Champions Star-Rating
At the end of the screening step, the total budget asked by the remaining proposals summed up to more than 8M ADA, whereas the Developers category has room for 3.1M. Although we aren’t aiming specifically for this exact number (more on that later), we are still way above the threshold.
We need to further reduce that list if we want to ensure that the proposals we select have the highest chance of being funded. However, proposals are still too numerous at this point to do full in-depth reviews.
In that regard, the problem appeared very similar to evaluating speaker sessions at a conference. At least we thought so. And we already knew of a fantastic tool for this problem: Sessionize. We used this platform in the past when organizing the first Buidler Fest, and it comes with great evaluation methods we could leverage. One of them, the star rating, allows us to rate proposals from 0 to 5 across different criteria, which we defined as follows:
★★★★★ Strategic fit
- ☆ aligns with the ‘Cardano Open: Developers’ Catalyst category;
- ☆ supports mature, mainnet-deployed products or enterprise collaborations;
- ☆ directly supports one of CF’s strategic pillars for this fund (Technology, Governance, or Adoption);
- ☆ aligns with the specific goals and values of “Our Cardano”;
- ☆ the requested budget is reasonable and adequately justified.
★★★★★ Feasibility
- ☆ relevant problem statement and effective solution to the problem statement;
- ☆ solution can be maintained and scales well over time;
- ☆ goals, milestones, and KPIs are clearly defined, measurable, and time-bound;
- ☆ evidence of relevant skills, expertise, or successful delivery available;
- ☆ team previously received grants and successfully delivered.
★★★★★ Impact
- ☆ offers a novel approach, or significantly improves upon existing solutions;
- ☆ clear path to measurable adoption;
- ☆ strong potential to generate significant on-chain activity or transactions;
- ☆ likely to attract new developers to the ecosystem;
- ☆ significantly or visibly reduces development efforts/costs.
Essentially, each proposal was evaluated according to this grid, with each bullet point receiving zero, half, or a full star. The final score is obtained by computing a weighted average of the three criteria, across both champions.
The motivation for the weights stems from Catalyst being an innovation fund: we deem feasibility slightly less important than the other two. It’s okay for teams to sometimes fail, and newcomers reaching out to Catalyst for the first time shouldn’t be overly penalized.
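As a minimal sketch of this scoring, assuming illustrative weights (the exact values aren’t reproduced here; the 0.35/0.30/0.35 split below is hypothetical and merely reflects feasibility weighing slightly less than the other two criteria):

```python
# Hypothetical weights: only the relative ordering (feasibility slightly
# lower) comes from the text above; the exact values are illustrative.
WEIGHTS = {"strategic_fit": 0.35, "feasibility": 0.30, "impact": 0.35}

def proposal_score(ratings_by_champion: list[dict[str, float]]) -> float:
    """Weighted average of star ratings (0-5), averaged across champions."""
    per_champion = [
        sum(WEIGHTS[criterion] * stars for criterion, stars in ratings.items())
        for ratings in ratings_by_champion
    ]
    return sum(per_champion) / len(per_champion)

# Example: the same proposal rated independently by two champions.
print(proposal_score([
    {"strategic_fit": 4.0, "feasibility": 3.5, "impact": 4.5},
    {"strategic_fit": 3.5, "feasibility": 3.0, "impact": 4.0},
]))  # ≈ 3.8
```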
Regarding time allocation, we initially planned between 5 and 6 minutes per proposal (although in practice, we often spent slightly more than 10), again with two champions. The focus this time was on the proposed solution, the team, and the milestones. After this step, we would have spent about 15 minutes per proposal.
Besides, the overall ranking doesn’t matter much for this step (ratings are always quite subjective to each reviewer’s appreciation, so they aren’t the best basis for a final ranking). But it is helpful for trimming another chunk of proposals from the list by dropping the tail after the 3rd quartile. In our case, that meant removing 30 proposals rated below 2.5.
Step 4: Contrarians 2nd Recovery
Here again, mistakes are possible. We want to prevent this as much as possible by allowing contrarians to rescue some of the discarded proposals. The list of newly rejected proposals is passed to the contrarians without the champions’ rating to avoid influencing their decision.
They are allowed to quickly skim through those proposals and bring some forward for discussion with the champions. This can be useful in cases where champions missed something crucial, or where contrarians have additional insights on a team or a project.
Proposals flagged by contrarians are discussed with champions, who can explain why they’ve been discarded, and a final conclusion is reached.
We had 4 such cases that were brought back to discussion by contrarians:
- Cardano Passkey by Eric Le
- Datum Explorer: Advanced CBOR tools & Interactive Features by WingRiders
- EIP-712 Typed structured data signing for Cardano by Anastasia Labs
- UTxO Timeline Graph: Visual Time-Travel Debugging for Devs by Manh Nguyen
In the end, none were rescued and made it to the next phase: the contrarians agreed with the points raised by the champions.
Step 5: Champions Condorcet Comparison
The star rating only removed ~30% of the initial shortlist. The remaining proposals still summed up to 4.87M ADA, i.e., 1.77M above the category budget. So we still needed a way to select amongst those proposals, even though choices were getting harder and harder to make.
That is where the comparison ranking comes into play. The idea is simple: champions compare proposals three at a time. Each round, they must order the triple by preference, with as many rounds as necessary to compare any two proposals with one another.
This is done by both champions independently, and the final ranking is obtained by following a Condorcet method. This type of preferential ranking tends to be more effective at revealing actual preferences in a voting system with many candidates (here, proposals) than any ordering derived from a per-criteria rating.
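The exact Condorcet completion isn’t spelled out above, so the sketch below tallies the pairwise preferences implied by each triple and ranks proposals by Copeland score (pairwise wins minus losses), one common way to turn such ballots into a total order:

```python
from collections import defaultdict
from itertools import combinations

# Each ballot is one champion's preference order over a triple of proposals,
# best first. The data below is purely illustrative.
ballots = [
    ("A", "C", "B"), ("A", "B", "C"),  # champion 1
    ("C", "A", "B"), ("B", "A", "C"),  # champion 2
]

# Tally the pairwise preferences implied by each triple.
prefers = defaultdict(int)
for ballot in ballots:
    for i, j in combinations(range(len(ballot)), 2):
        prefers[(ballot[i], ballot[j])] += 1  # ballot[i] beats ballot[j]

# Copeland score: +1 per pairwise majority win, -1 per loss, 0 on a tie.
proposals = sorted({p for ballot in ballots for p in ballot})
copeland = {p: 0 for p in proposals}
for a, b in combinations(proposals, 2):
    if prefers[(a, b)] != prefers[(b, a)]:
        winner, loser = (a, b) if prefers[(a, b)] > prefers[(b, a)] else (b, a)
        copeland[winner] += 1
        copeland[loser] -= 1

ranking = sorted(proposals, key=copeland.get, reverse=True)
print(ranking)  # 'A' ranks first; 'B' and 'C' tie on this toy data
```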
Thus, while the star rating helped trim down a tail of not-so-great proposals (according to the selected criteria), it isn’t ideal for establishing a final ranking. The head-to-head comparison, however, is quite effective in that matter. With that final ordered list, we could proceed by taking the head of the list with a few extra rules (sketched in code after the list):
- we selected proposals in order, up to the category’s budget;
- we skipped proposals from an author once we had already selected 2 proposals from them, to give everyone a better chance.
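A minimal sketch of that selection (the names and input shape are ours; note that the last pick may slightly overshoot the budget, consistent with the final tally below):

```python
from collections import Counter

BUDGET = 3_100_000   # category budget, in ADA
MAX_PER_AUTHOR = 2

def select(ranked):
    """Greedy pick from a Condorcet-ordered list of (title, author, ask)."""
    chosen, spent, per_author = [], 0, Counter()
    for title, author, ask in ranked:
        if spent >= BUDGET:
            break  # budget reached; the last pick may overshoot slightly
        if per_author[author] >= MAX_PER_AUTHOR:
            continue  # already selected 2 proposals from this author
        chosen.append(title)
        spent += ask
        per_author[author] += 1
    return chosen, spent
```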
After this step, we therefore let go of 15 additional proposals, which represent our least favorite proposals from proposers with multiple submissions:
- CSL‑Lite: Modular Cardano Serialization Toolkit by Zachary Soesbee
- Chrysalis.CBOR – .NET’s CBOR (De)serialization Engine by SAIB Inc.
- Chrysalis.Plutus.Builtins - Core Runtime Operations by SAIB Inc.
- Chrysalis.Plutus.Core - Parser & CEK Engine for .NET by SAIB Inc.
- Chrysalis.Tx – Advanced Transaction Building for .NET by SAIB Inc.
- Cometa.cpp: Cardano SDK for C++ with Full Conway Era Support by Angel Castillo
- Cometa.php: Cardano SDK for PHP with Full Conway Era Support by Angel Castillo
- Cometa.rb: Cardano SDK for Ruby with Full Conway Era Support by Angel Castillo
- Common Vulnerabilities Patterns by No Witness Labs
- Early Aiken vulnerability detection using AI by Tx Pipe
- Hydrozoa L2: New R&D and Cool Features by George Flerovsky
- MACS coin selection algorithm for Evolution SDK by No Witness Labs
- Orcfax Live Feed Extension – Latest Price Smart Contract by 3rd Eye Labs
- Shipping Oracle to unlock Cardano e-Commerce by Tx Pipe
- White-label e-Commerce Platform on Cardano by Tx Pipe
Step 6: Contrarians In-Depth Review
Now comes the final in-depth review by contrarians, who each examine all the selected proposals resulting from Step 5. The target is to spend between 10 and 15 minutes per proposal. At this point, that still represents about 8 hours of work for each contrarian. Yet, at the end of this process, we would have collectively spent about 35 minutes per proposal. This is still not a lot, but far more than what we could have achieved by simply splitting the proposals among four people and processing them sequentially.
As stated at the beginning, the role of contrarians in this final round is to challenge the champions’ decisions and examine the milestones and teams behind each proposal in more depth. Should too many proposals be dropped at this step, leaving us under the category’s budget, we would consider, in order, those we had just set aside in the previous step.
Only 2 proposals failed to convince contrarians and were dropped after this step:
- Agentic DApp Testing SDK by Dquadrant by Kuber Team
- Exura Transaction Intelligence API + GameChanger Integration by Exura Labs
The Final List
Ultimately, we reached a final list that we are confident in supporting and presenting as a recommendation for voting. Note that this list is the (somewhat educated) recommendation of our team in our capacity as Cardano developers and community members. It does not constitute a CF-endorsed recommendation.
This final list sums up to 3.353M ADA, which is slightly above the category budget. However:
- We do not know how to reduce this list further at this point.
- In Catalyst F13, our selection did not correlate 100% with the rest of the community’s votes, which means that voting for slightly more than the budget is preferable. The best strategy to adopt is an interesting game-theory problem that could likely be solved by statistical modelling, but we don’t really have the time for it.
So, we’ll leave the recommendation (and the process described above) as is, with the hope that it will be useful to others. We’re also looking forward to hearing more from other builders (proposers and voters alike) regarding the approach we took and the final list that results from it – keeping in mind the usual caveats that come with Catalyst.
Finally, we wish the best of luck to all proposers genuinely engaging with Catalyst in good faith – even those we didn’t end up shortlisting.
Appendix: Recommendations for the Open Source Committee
Catalyst is framed as an innovation fund, yet it has historically been one of the few funding vehicles in the ecosystem. So naturally, many open-source projects have turned to Catalyst to fund their development and maintenance costs. In our selection, we tried to show preference for proposals leaning towards innovation. Yet, we believe that some discarded proposals deserve a second chance outside of Catalyst. Maintenance of useful open-source projects is critical, and Catalyst is simply not tailored to supporting these kinds of efforts.
However, it has become apparent that the Open Source Committee (abbrev. OSC) at IntersectMBO is interested in the matter. Hence, we took the opportunity during our review process to flag proposals we saw (and rejected) that would otherwise have been good candidates for grants and funding within the OSC budget. To conclude, we share this list of proposals we would recommend for a second chance (and we encourage their authors to engage with the OSC in that regard!):
- CSL‑Lite: Modular Cardano Serialization Toolkit by Anvil
- CardanoWeb3js — Enhancing the TypeScript SDK for Cardano by XRay
- Chrysalis.CBOR – .NET’s CBOR (De)serialization Engine by SAIB Inc.
- Chrysalis.Network — Ouroboros Protocols for .NET by SAIB Inc.
- Chrysalis.Plutus.Core - Parser & CEK Engine for .NET by SAIB Inc.
- Chrysalis.Tx – Advanced Transaction Building for .NET by SAIB Inc.
- Common Vulnerabilities Patterns by No Witness Labs
- Exura Transaction Intelligence API + GameChanger Integration by Exura Labs
- MACS coin selection algorithm for Evolution SDK by No Witness Labs
- Smart Contract Design Patterns Conway+ by Anastasia Labs
- Weld — Massive Updates and Seamless Cardano for dApps by Anvil
- Whisky V2 — Cardano Rust SDK with Pallas by SIDAN Lab