Since there has been a lot of talk about the results of the latest Catalyst Fund9, I’m going to summarise what in my view is wrong with the Catalyst system as a whole.
EDIT: Post #33 in this thread has a poll with two (meta) questions on how to go on. Please click it! Thank you!
We don’t know who the assessors are. A proposer could plant favourable assessors and, if they are just smart enough, nobody, not even a vPA, could really detect it. There are a lot of very high and very low assessments that made it through the vPA vetting that could very well have come from such bought assessors.
But even for honest assessors, we do not know how qualified they really are to answer the assessment questions, what diligence they exercise when checking if a team is really able to execute what they promise, or whether they know enough about the technological basis to assess a proposal in depth (some even honestly say in their texts that they do not, but their stars are worth exactly the same nevertheless).
And even for honest and qualified assessors, we do not know if they employ comparable schemes for their scoring. In the end, nobody has the time to read all these walls of text and only the average score in comparison to other proposals counts. And since assessors seem to use only four or five stars for anything not totally unacceptable, it is more or less random whether a proposal gets assessors tending to give four stars if not perfect or assessors tending to give five stars if not really broken.
This is all made worse by the fact that the payment for the assessments incentivises assessing as many proposals as possible. You do not have to give a very good assessment, just one good enough to pass the vPAs.
We do not know which assessments have been filtered out for what reasons. There are a lot of projects complaining that their good assessments have been disregarded. On the other hand, the much discussed “Daedalus Turbo” proposal only had five-star assessments left after filtering. Was there really not one assessment that saw how hard it would be to implement?
At the very least all assessments including the filtered ones have to be public after the vote, so that the community can see if there is something suspicious or just going wrong in the process.
Even if the vPAs do the best they can and don’t make obviously wrong decisions, they can only work with the assessments that are there. If they look at a proposal and see that it should have received much more critical assessments, but there are only five-star ones, there is nothing they can do to express that.
A comparatively small UX issue is that nearly all assessments are “good” (the others being “excellent”), which a naïve user would think means something like “above average”. You have to click through a bit to learn that there is no vPA meta assessment grade between “good” and “to be filtered out”, so “good” really means “just enough to not be filtered out” and not “above average”.
Even if the process had none of the previous problems, it would not be fit for purpose.
The scores finally decide in which order voters see the proposals. Even if voters do not simply think “high score, I’ll vote for it”, proposals with a mediocre score are buried in the middle of more proposals than anyone can take in. They will not even be seen by voters.
But the assessment does not answer the question: “Should this be funded over all the competing proposals?” It answers three questions – “addresses the challenge”, “experience and plan”, and “sufficient to audit” – weighted equally as far as I can see.
An honest assessor has no possibility to express: “Yes, this proposal fulfils all of the three points, but this other proposal is much more worthwhile, although it lacks a bit in auditability.” They would have to abuse the “addresses the challenge” section for it and exaggerate there to counter the equally weighted auditability.
dReps just shift the problem. The question on which proposals to vote for and against becomes the question which dRep to delegate to. We cannot know beforehand how a dRep would vote in detail, what their stance on certain details is, to what extent they will follow the broken assessments (if they are also done in the new system), …
Popular dReps would become an easy target for corruption. It does not even have to be outright bribery. It is already enough if the dRep has some favourite proposers in a certain area, or just some biases that are not completely in line with the principles they think they are following and that I expect from them.
Instead of PAs, vPAs, and dReps there should be public recommendations on how to vote by a diverse range of people.
They should state their general principles. – “I’m going to vote for projects with a proven track record of providing essential technology for the ecosystem.”, “I’m going to assess the proposals of newcomers very carefully.”, “I don’t believe that Metaverse/NFTs/SSI are a promising use case and am going to vote against them.”, “I will put an emphasis on ecological sustainability.”, …
And they should give a whole slate of votes also considering the comparison between proposals, optionally with rationales for single proposals. – “This does not seem auditable.”, “We cannot continue without this.”, “I’ve looked at the proposers in detail and don’t think they can do it.”, “This seems to be one of the killer applications for Cardano and the team already showed they can deliver.”, “This would be nice, but it requests too much money and the other proposals in the challenge are more important.”, …
A voter could then choose multiple of these recommenders who they think are trustworthy and the voting app would give them options to automatically vote if they agree, show the proposals where they disagree, override on single proposals where the voter has a strong opinion themselves, …
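As a rough sketch of how such a voting app could merge recommendations – everything here (recommender names, the yes/no slate format, the merge rules) is invented for illustration, not an existing Catalyst feature:

```python
# Hypothetical merge logic: auto-vote where all chosen recommenders
# agree, collect disagreements for manual review, and let the voter's
# own overrides always win.

slates = {  # recommender -> {proposal: "yes" | "no"} (invented data)
    "rec1": {"P1": "yes", "P2": "no",  "P3": "yes"},
    "rec2": {"P1": "yes", "P2": "yes", "P3": "yes"},
}
overrides = {"P3": "no"}  # proposals where the voter has a strong opinion

def merge(slates, overrides):
    ballot, to_review = {}, []
    proposals = {p for slate in slates.values() for p in slate}
    for p in sorted(proposals):
        if p in overrides:  # the voter's own opinion always wins
            ballot[p] = overrides[p]
            continue
        votes = {slate[p] for slate in slates.values() if p in slate}
        if len(votes) == 1:  # all recommenders agree: vote automatically
            ballot[p] = votes.pop()
        else:  # recommenders disagree: show it to the voter
            to_review.append(p)
    return ballot, to_review

print(merge(slates, overrides))  # ({'P1': 'yes', 'P3': 'no'}, ['P2'])
```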
Having them public will hopefully also spark much more discussion before the vote instead of the host of “I had no idea! Why didn’t anybody tell me beforehand?” we have now. Additionally, if a recommender followed by a significant share of the voters is publishing something that is not okay in the view of the public, it can be called out, discussed, and hopefully corrected before the recommendation becomes relevant in the voting phase.
Of course, there is ample source for potential conflicts here, but better have them publicly before the vote than half-publicly in assessment QA and only really publicly after the vote when it is too late.
If we think that a monetary incentive – like for PAs now – is necessary, it could be given by letting the voters distribute their voting power among the recommenders they found particularly helpful and distribute the recommendation rewards according to the cumulative voting power shares.
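A sketch of that reward distribution with invented figures – a fixed pot split in proportion to the cumulative voting power of the voters crediting each recommender:

```python
# Hypothetical reward split: each voter credits one recommender they
# found helpful; the pot is divided by cumulative credited voting power.

reward_pot = 10_000  # reward pot, e.g. in ADA (invented figure)

credits = [  # (voting power, credited recommender) per voter (invented)
    (500_000, "rec1"),
    (300_000, "rec2"),
    (200_000, "rec1"),
]

totals = {}
for power, rec in credits:
    totals[rec] = totals.get(rec, 0) + power

total_power = sum(totals.values())
rewards = {rec: reward_pot * p / total_power for rec, p in totals.items()}
print(rewards)  # {'rec1': 7000.0, 'rec2': 3000.0}
```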
There is no central website where we can see all the previously funded projects and their results. There is just some Google Docs spreadsheet with the progress reports and check marks indicating whether someone on the Catalyst team thought they were sufficient.
A public, accessible, well presented result overview should have been there from the start for at least three reasons:
- It is indispensable for marketing. If we want to show the general public how good this Catalyst thing is, we have to show what comes out of it. Obviously!
- If “the community” funds something it should have easy access to the results. I want to download the software that was programmed, watch the videos that were created, read the documents that were written.
- To assess if a proposer should get money again, we very much need to see how well they fulfilled their promises in previous projects, if there are any. We don’t want to give money to hot air producers again and again!
Ideally, there should also be a community vote on whether those projects were worth it. The yes/no decisions on whether a progress report is sufficient, made by very few Catalyst team members at IOG, cannot capture all the aspects that go into “Would I vote for it again?”, from the quality of the results to the cost efficiency.
At the moment, a variant of score voting with scores +1 (Yes), 0 (not voting), and -1 (No) is used. The proposals are sorted by Yes-No, i.e., the sum of all these scores.
It is well-known (https://en.wikipedia.org/wiki/Tactical_voting#Score_voting) that the best strategy for these types of voting system is to up-vote your favourites up to a certain threshold and down-vote all others.
If the other voters don’t do it, you exercise your voting power more efficiently than them. If the other voters also do it, you have to do it to not have a disadvantage compared to them.
Given the sorry state of the assessment system above, there are also a lot of legitimate reasons to do so. If a lot of proposals that you deem very essential for the ecosystem got mediocre scores, but a lot of proposals you deem nice-to-have or (worse) outright bad got better ones, you want to do everything to make sure the essential ones get funding. And that includes down-voting even the nice-to-have ones, since they are a risk to the essential ones.
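The incentive can be seen in a toy example with the power-weighted sum scoring described above; all wallets and numbers are invented:

```python
# Each ballot is (voting power, {proposal: +1 | -1}); omitted
# proposals count as 0. Proposals are ranked by the weighted sum.

def ranking(ballots):
    totals = {}
    for power, votes in ballots:
        for proposal, score in votes.items():
            totals[proposal] = totals.get(proposal, 0) + power * score
    return sorted(totals, key=totals.get, reverse=True)

others = [(1, {"A": 1, "B": 1}), (3, {"B": 1})]  # the other voters
honest = (2, {"A": 1})                # my wallet: up-vote A, abstain on B
strategic = (2, {"A": 1, "B": -1})    # up-vote A, down-vote everything else

print(ranking(others + [honest]))     # ['B', 'A'] - honest abstention
print(ranking(others + [strategic]))  # ['A', 'B'] - down-voting flips it
```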
One of the two thresholds that a proposal has to pass to be considered “approved” is that the sum of Yes and No votes – the total voting power cast on the proposal – is more than 1% of the registered voting power.
This has the consequence that a number of proposals only got approved – and a number growing from fund to fund even got funded – because people voted No on them.
This could simply be fixed by a threshold not on Yes+No, but on Yes or Yes-No.
This has been discussed in more detail in this thread:
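To illustrate with invented numbers how the current Yes+No threshold differs from a threshold on Yes alone:

```python
# Assumed: 1,000,000 registered voting power, so the 1% threshold
# is 10,000. All figures are invented for illustration.

REGISTERED_POWER = 1_000_000
THRESHOLD = 0.01 * REGISTERED_POWER

def approved_current(yes, no):
    """Current rule: total votes (Yes+No) must exceed the threshold."""
    return yes + no > THRESHOLD

def approved_fixed(yes, no):
    """Possible fix: only the Yes votes count toward the threshold."""
    return yes > THRESHOLD

# 6,000 Yes and 5,000 No: the No votes push the total over 1%.
print(approved_current(6_000, 5_000))  # True - approved because of No votes
print(approved_fixed(6_000, 5_000))    # False
```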
Voting systems that do not use the average or the sum of the votes, but the median of grades given by the voters (https://en.wikipedia.org/wiki/Highest_median_voting_rules) have the advantage that there is much less incentive to exaggerate down-voting.
They also have a nice voting experience: Voters give grades – “Excellent”, “Good”, “Okay”, “Bad” – and the result is the median grade: the grade for which half the voters gave a better and half a worse one. The more detailed result/ranking is then given by a fractional part expressing how close a proposal is to getting a better or worse median grade. (There is some choice in how to do this exactly.)
Grading proposals instead of having to decide between voting for or against them seems much more intuitive. And the chance to hurt your favourites by not giving the competition the worst grade is much less.
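A minimal sketch of such a rule (loosely in the spirit of majority judgment); the grade scale and the exact tie-break are illustrative choices, not a concrete proposal:

```python
GRADES = ["Bad", "Okay", "Good", "Excellent"]  # worst to best

def median_grade(votes):
    """Lower median of the given grades."""
    ranks = sorted(GRADES.index(v) for v in votes)
    return GRADES[ranks[(len(ranks) - 1) // 2]]

def score(votes):
    """Median grade plus a fractional tie-break: the share of voters
    strictly above the median minus the share strictly below it."""
    m = GRADES.index(median_grade(votes))
    n = len(votes)
    above = sum(GRADES.index(v) > m for v in votes) / n
    below = sum(GRADES.index(v) < m for v in votes) / n
    return (m, above - below)

a = ["Good", "Good", "Excellent", "Okay", "Good"]
b = ["Good", "Bad", "Excellent", "Good", "Okay"]
print(median_grade(a), median_grade(b))  # Good Good - same median grade
print(score(a) > score(b))               # True - a ranks above b
```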
Since we want to have a ranking of the proposals from funded first to funded last, Condorcet methods (https://en.wikipedia.org/wiki/Condorcet_method) are a natural choice. Each voter gives a ranking of the proposals from first to last and the method ensures that a proposal ranked above all other (remaining) proposals by a majority in pair-wise comparisons will also be the (next) top-ranked proposal in the result.
The biggest drawback of such methods is that they tend to be very complicated and therefore not necessarily transparent to the voters/users.
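The core of a Condorcet method is just the pair-wise majority comparison; the complexity comes from resolving cycles (Schulze, ranked pairs, …). A toy sketch with invented ballots:

```python
ballots = [  # each ballot ranks the proposals, best first (invented)
    ["A", "B", "C"],
    ["A", "C", "B"],
    ["B", "A", "C"],
]
candidates = {c for ballot in ballots for c in ballot}

def beats(x, y):
    """True if a majority of ballots rank x above y."""
    wins = sum(b.index(x) < b.index(y) for b in ballots)
    return wins > len(ballots) / 2

# The Condorcet winner beats every other candidate head-to-head.
winner = [c for c in candidates
          if all(beats(c, other) for other in candidates if other != c)]
print(winner)  # ['A'] - A beats B (2:1) and C (3:0)
```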
If none of the less known voting methods are acceptable for the community, plain approval – only voting for certain proposals with no option to say “No” – is still better than the current system.
It at least removes the strategic decision between abstaining and down-voting, leaving just the question whether a voter should vote for an okayish proposal if there is a risk that it could harm their absolute favourites.
A lot of the discussions in the previous days were around possible illegitimate influence of whales on the results. We cannot conclude much about it from the data that is public. We do not even see if the wallets voting Yes or No were larger on average, just if the wallets voting on a proposal at all were suspiciously large.
And it is not easy to decide when the vote of a whale becomes illegitimate. If I have a million ADA, why should it not be okay if I vote for my own proposal or that of a friend? And also down-vote the competition, since the voting system in use incentivises me to? How meagre do the results of a project have to be to diagnose a deliberate cash grab and not just a very badly executed project?
Only if we have enough proof that it was never planned to honestly do the proposed project – that the plan all along was to just get the funding, fake some progress, and give the whale a share of the profit – would that be enough to exclude them from further funding. And we need a process for that, too – a process where voters can see that it is employed, where the results are shown even if they are: “We do not see fraud here, for this and that reason.”
Unfortunately, we cannot really fix the grossly different voting power. Voting power by ADA is the only thing we have. “One person, one vote” cannot be reliably ensured with the tools we have – and, no, the SSI pipe dreams won’t bring a solution anytime soon. And even if it could, is it fair if people with just a couple of ADA, with very little skin in the game, get the same voting power in “our” ecosystem? “One wallet, one vote” would simply not make any sense, since large amounts of ADA can be distributed arbitrarily to wallets.
The hope can only be that better assessment and voting systems mitigate this a bit, so that the larger influence of whales is not as detrimental anymore.
The current tooling is scattered across systems:
- Presentation of proposals on Ideascale – a paid third-party service where most of the functionality is not even used, since we do the voting elsewhere (at least there is no mandatory account registration anymore and it can be read by all voters).
- Registration for voting on Cardano with the wallet app of your choice.
- Voting with a mobile-only voting app (more on that below), which submits the votes to an opaque voting system using an abandoned blockchain prototype – Jormungandr – running behind closed doors at IOG.
- Voting result evaluation and tracking of project progress in some hand-woven Google Docs spreadsheets.
This is only made a little more graspable by third-party services like https://cardanocataly.st/ or https://www.lidonation.com/en/project-catalyst/projects.
The very, very least should be that the votes can be audited publicly by the community. That they are published in a way that I can check the signatures of the voting keys whose registration I can see on Cardano proper. Maybe, this can only be done after the voting has closed – since there could be strategies when knowing other peoples’ previous votes – but after vote closing it simply has to be done.
“Don’t trust! Verify!” – We cannot wait years for one of the basic features of cryptographic solutions … and be left with something that, as far as verifiability is concerned, could from the voters’ perspective just as well be Google Forms.
Ideally – meaning months, not decades – it has to be a system that is integrated up to the result presentation already mentioned above. This cannot be impossible to achieve. And it has to be open to third-party clients where that makes sense. We, you, IOG know how to do it. We can choose with which wallet app to manage our ADA. We have to be able to choose with which voting app to scroll the proposals, plan and submit our votes, and track the results.
The voting app is a joke! Honestly!
For a task that is as complex as voting for hundreds of proposals, there has to be a desktop version. Period.
If you carefully deliberate your decision and plan it, you probably have it prepared somewhere in https://cardanocataly.st/voter-tool/, your own spreadsheet or whatnot. You have to have the possibility to import that. Having to click hundreds of times to put that into the voting app is simply not acceptable.
Import (and export) of votes is also indispensable for voting with several wallets. At least we don’t have to deinstall the app anymore to do that (which was an even bigger joke). But replaying the exact same votes with your second, third, and fourth wallet with just a few clicks is something that simply has to be possible.
It has to be visible in the (challenge) overview if a vote was already given for a certain proposal. It is just really bad UX that you can only see if you have visited that proposal, but not if you have actually voted on it and already submitted the vote.
And all of these UX improvements have to be available to everyone in the standard voting app. It does not help if there are some technological work-arounds for those who really know their way around.
The bad UX actively influences the result. Just look at how many more wallets vote in the challenge that is displayed first in the app. It is not more important or more interesting than the other challenges. It is just displayed first and the voters give up after going through it.
Altogether, this is “back to the drawing board” broken!
In my opinion, the entities holding the keys necessary to distribute the treasury funds should not allow another round to be distributed, before a significant part of the above points is addressed.
I understand that with the switch to the dRep system, some improvements are planned. The CIP-62 draft does look like a lot of it shall be shifted to dApps, but other than that, not much is known about the plans. We have to get the chance to take a thorough look at whether it really solves enough of the problems. For that, it has to be presented and discussed in detail (in written form and by a variety of people, not just by some insiders).
And, no, this is not solved by “Come to our town halls, let’s talk about it and build together.”! I know a lot of people fed up to the back teeth with this derailing – “We are working on it.”, “It’s still an experiment.”, “Just write a proposal on how you would fix it and maybe you are gonna get funded and extra-maybe the powers that be will even use it.”, …
This system is distributing millions of USD/ADA right now. It needs to be reworked completely with community input through a lot of channels. Channels open also to people who do not have the time to use their Wednesday evening talking about how a-ma-zing this Catalyst thing is.
Obviously, this cannot be fixed by one or two people responsible for it at IOG. The best time to put a lot more manpower into it and at least fix the obvious things would have been before Fund1. The next best time is now!
- Show us all assessments! Also the filtered-out ones.
- Show us the number of Yes and No voting wallets! We want to see if only a few whales pushed a proposal to funding.
- Show us the raw data from the voting chain! We want to see the signatures of the voting keys.
- Start a process to repair this! A process not just among the hardcore Catalyst bubble, but with the whole Cardano community.
- Do not start a new funding round, before this is fixed!