Overview
Prior to Fund 9, the assessment process had a more direct influence on the community's voting behaviour, as it was the default approach used to rank the proposals shown to voters. This larger influence was an issue because the assessment process could be easily gamed and produce low-quality or inaccurate assessments.
In Fund 10 the influence of the assessment process was more severely limited because voters need to open each proposal individually to view its assessments. For now this is the preferable outcome, as the quality of the assessments has been low due to how easy it is to game the incentive process.
Moving forward, it would be more useful from a learning perspective either to redirect the assessment funds towards experimenting with other processes and initiatives, or to overhaul how the assessment process works and is incentivised.
Problems
- Easily gamed - Anyone can contribute towards the assessment of proposals anonymously. Assessments can influence the outcome of the vote if a voter decides to act on a single assessment or on the aggregated scores that come out of a proposal's assessments. This creates an environment where the process can easily be gamed by malicious actors who introduce biased, incorrect or malicious assessments.
- Highly influenced outcome from poor reviews - A single malicious assessment is enough to drastically change a proposal's ranking compared to others. For instance, one bad assessment out of five carries a 20% weight in the final score, which can help or hinder a proposal's chances of success (a quick numeric sketch follows this list). This is a significant problem for the effectiveness of the assessment process: even when the other assessments are high quality, the total score is heavily influenced by the lower-quality ones.
- High assessment approach variability - Currently assessors review a proposal in three broad assessment areas - impact, feasibility and value for money. There are a number of factors in each of these areas that an assessor could focus on to justify their final score and opinion. This variability can lead to assessors providing very different scores and viewpoints from one another, because each is free to interpret the assessment process and criteria differently and to weight one factor more than another. The assessment process could benefit from more refined and explicit assessment factors to reduce this variability and make it easier for community members to assess proposals accurately and consistently. Alternatively, to accommodate a more complex and variable assessment process, another solution would be to fund more highly skilled assessors who would be responsible for being balanced in their assessments.
- Flawed incentives - Assessment is a skilled job that requires the assessor to understand the idea they are reviewing to a certain level, and then to fully understand and apply the assessment process. A skilled task such as this benefits from people who are sufficiently well informed about the idea, the ecosystem and the assessment process to make well-informed judgements. Currently the process incentivises anyone to participate, which easily leads to situations where the people doing the assessments are no better informed than the voters when judging a proposal. The other flaw is that the incentive is split between many people, which limits the talent willing to support the process because of the low compensation on offer. This potentially prevents skilled people from participating who might have been interested in the assessment process if the compensation were more reflective of their skill level.
- Lack of accountability - As assessors are anonymous, there is no accountability for them to be accurate and responsible for the outcomes of their assessments, which makes the system easier to game. An assessor is incentivised to complete the task in front of them as fast as possible to maximise their compensation, and can do so without risking their reputation because they remain anonymous.
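As a quick numeric sketch of the "poor reviews" point above (plain Python, assuming a proposal's final score is the simple average of its assessments, which is where the 20% figure comes from):

```python
# One low assessment out of five pulls a simple-average score down sharply.
def average_score(assessments):
    return sum(assessments) / len(assessments)

four_good = [5, 5, 5, 5]           # four high-quality assessments
with_one_bad = four_good + [1]     # plus a single poor or malicious assessment

print(average_score(four_good))     # 5.0
print(average_score(with_one_bad))  # 4.2 - one review shifts the score by 0.8
```

A drop from 5.0 to 4.2 can be enough to change a proposal's ranking relative to others in a close challenge.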
Suggestion option #1 - Swap smaller task incentive for full time incentives
Instead of paying a large number of anonymous people to assess proposals, the community review process could instead elect a number of publicly identifiable community members to contribute full time during the assessment phase. Electing people publicly into this role with a more meaningful incentive would help remove the lack of accountability, better align the incentives for attracting skilled assessors, reduce how easily the process can be gamed, and reduce the number of poor reviews that currently drag down proposals' average scores.
Suggestion option #2 - Remove community review process
As it is currently designed and incentivised, the assessment process does not deliver easy-to-digest information that is insightful and useful to voters when making decisions. The resources used for this process could be better spent on other experiments rather than on a process that is not yielding insightful and useful results. The community review process should be removed until further research and analysis is conducted to design a better process, or to determine better ways a similar outcome could be achieved.
Suggestion option #3 - Have another suggestion
Provide your suggestion and any rationale in the comments below.
Suggestion option #4 - Disagree, leave review process as is
Leave the community review assessment process as it currently is. Provide any rationale in the comments below.
- #1 - Swap smaller task incentive for full time incentives
- #2 - Remove community review process
- #3 - Have another suggestion
- #4 - Disagree, leave review process as is
3 Likes
It will be hard to remove bias, and limiting reviewers to a preset group of people feels like a step backward.
But maybe we can achieve two things at once when it comes to increasing quality: increasing and testing knowledge.
Community reviewers could be tested on challenges before being assigned to review.
In this way, reviewers would have to learn about the philosophy, technology, and constraints that are inherent to the challenge, creating a natural selection process.
These tests and the preparation sources would have to be set up by the challenge teams, and they should craft them with benevolence. To provide transparency and ensure fairness, the tests would have to be reviewed as well. In some cases there could be requirements such as having passed one of the relevant courses (e.g. Atala Prism).
The longer the timeframe to prepare, the more reviewers you will potentially get.
This could add another time horizon to the Catalyst rounds.
The decision process can be structured in different ways; one option could be that knowledge must be verifiable within a very short time span prior to the actual review process.
Ranking influences impact
The lower your score on the reviewer test, the lower the impact of your reviews (see the sketch at the end of this post).
Setting up these barriers would need to be budgeted; depending on the time horizon set, that budget would have to be issued up to two funds prior to the next one.
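One way to read the "ranking influences impact" idea above is as a weighted aggregation, where each review's contribution is scaled by the reviewer's test result. A minimal sketch, assuming review scores out of 5 and test results normalised to a 0-1 range (these parameters are illustrative, not part of any existing Catalyst process):

```python
# Weight each review's score by the reviewer's test result so that weaker
# test performance carries less influence on a proposal's final score.
def weighted_proposal_score(reviews):
    """reviews: list of (review_score, reviewer_test_score) pairs,
    with reviewer_test_score in the range 0..1."""
    total_weight = sum(test for _, test in reviews)
    if total_weight == 0:
        return None  # no reviews with any weight to aggregate
    return sum(score * test for score, test in reviews) / total_weight

# Two strong test performers rate a proposal 4/5, one weak performer rates it 1/5.
print(weighted_proposal_score([(4, 0.9), (4, 0.8), (1, 0.2)]))  # ~3.68 vs. a plain average of 3.0
```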
2 Likes
The bias factor is a big reason why having anonymous reviewers is such a problem. Whether it's a person or an AI language model writing the review, they can easily push a bias towards supporting some proposals over others. At least if the reviewers are public facing you would be able to filter out the bad actors much more quickly and permanently.
However, making reviewers public opens the door to bribery and corruption. This is why I prefer the idea of a smaller number of topic-based expert reviewers, or generally talented assessors, who are sufficiently well compensated to reward their assessment skills and align the incentives better: it is easier to spot bad actors, and the expected standard can be far higher.
Alternatively, the assessment process could be overhauled and made much simpler: a scale of 1 to 5 or similar, with much more precise criteria on what is required to achieve each level of score. This removes the ambiguity around why one proposal is scored higher than another and pushes the assessment process towards fact checking, which makes it more standardised and easier to automate. This approach would lack the nuance an expert assessment provides, but it helps to remove bias and increase speed and scalability.
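As a rough illustration of what a more precise 1-to-5 rubric could look like (the levels and criteria here are invented for the example, not an agreed Catalyst standard), scoring would reduce to checking which explicit conditions a proposal satisfies:

```python
# Hypothetical feasibility rubric: each level lists the checks a proposal must
# pass to earn that score. The criteria names are illustrative only.
FEASIBILITY_RUBRIC = {
    1: [],
    2: ["team identified"],
    3: ["team identified", "budget itemised"],
    4: ["team identified", "budget itemised", "prior delivery evidence linked"],
    5: ["team identified", "budget itemised", "prior delivery evidence linked",
        "milestones with measurable outputs"],
}

def rubric_score(satisfied_checks):
    """Return the highest rubric level whose checks are all satisfied."""
    score = 1
    for level, checks in sorted(FEASIBILITY_RUBRIC.items()):
        if all(check in satisfied_checks for check in checks):
            score = level
    return score

print(rubric_score({"team identified", "budget itemised"}))  # -> 3
```

Because each level is a checklist of verifiable facts, different assessors (or an automated check) should converge on the same score for the same proposal.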
Another area of improvement is to add other ways to filter, sort, rank and compare proposals, to make it easier to find and compare certain proposals and better enable voters to make their own decisions. This should happen regardless of which assessment process is adopted, to make voters better informed.
And the completely separate option I'm most excited about testing instead of all of these is a contributor-focused funding model such as the delegated idea selection approach. Under a contributor model you don't need to assess ideas; instead, the voters select the best contributors. Idea-based funding adds a serious amount of complexity and scales poorly with the number of voters - it adds an unnecessary amount of information for an entire community to review and compare. The very design of the process already centralises the pool of voters who are willing and capable of participating, because of how complex and time consuming it is to read and compare ideas before voting. When complexity increases, voters who are willing to participate may often just find an easier path, like picking whoever they already know will deliver.
1 Like
Thanks, I'm happy to see the emerging discussion here. I think any of the suggested changes to the review process… educating reviewers, trimming the set of reviewers to those with demonstrated commitment or qualification, or eliminating the review process entirely… would be an improvement over what was brought forward in Fund 10.
TL;DR: most importantly, we need to restore the ability for reviews to be challenged by proposers, instead of only by the anonymous and unaccountable Catalyst auditors.
In Fund 9, the potential damage done by verifiably uninformed or spurious reviews to the same presentation of our Fund 10 project (an annual project designed to renew funding every year) was limited because we (and others) had a chance to flag the offending reviews. Fund 10 leaves this audit process entirely centralised to second-level reviewers recruited from the same population… this assumes levels of skill, responsibility and attention from those auditors that we are certainly not getting.
We had spurious, uninformed, and potentially malicious reviews that were upheld in the audit process and this time there was nothing we could do about it… and it almost ruined the project, which would have left Cardano’s CIP Editing team without the bulk of its routine support.
A good part of our project statement was about the flawless execution of the same project in Fund 9, and reviewers still made the following unchallenged statements, which can be seen here:
- Reviewer makes repeated references to “the team” and “no information about other team members” for a solitary project (my own funding as a CIP editor, on a team funded from different sources including IOG and the Cardano Foundation). This easily spotted mistake on the reviewer’s part suggested “potential risks to completion” and that it was only “partially feasible” … i.e. no observation of GitHub and Catalyst prior proofs of completion with full visibility to the Catalyst and general community.
- The longest review makes a complete inventory of all past accomplishments that confirm feasibility, concludes that our project is both realistic and feasible, and then gives it a 3/5 score.
- Another review marks it down to 4/5 because the proposal, which he says contains detailed demonstrations of feasibility and demonstrations of trust in the Cardano community, was “too much reading”.
The greatest risk to our project in a close vote (due to another Catalyst problem identified in this series) was this low Feasibility score, even after satisfying 100% of the requirements for documentation and proof of Feasibility… with anonymous reviewers shooting it down without any accountability either from themselves or from the equally anonymous auditors. This is not “decentralisation”… this is a mob covered by an impenetrable bureaucracy.
If Catalyst is going to keep passing out tickets for casual workers to come and make casual income, either those workers need to be highly vetted / trained / audited, or we need to stop giving out the tickets.
The same solution recommended here for Voters (education) might also work for Reviewers… but given that not all of these opportunities for education will be taken up, we have to insist upon visibility and community audit. Otherwise we will keep seeing small projects vital to our system challenged & derailed by reviewers who don’t even understand what the proposals are about.
p.s. @danny_cryptofay I apologise if I missed this in one of the “town halls” or governance deliberations… but can you please post here about why Catalyst dropped the Review public audit process in the first place?