The star rating process is fundamentally broken in Catalyst.
PAs essentially give high ratings to proposals that are “well written” while remaining “neutral,” instead of being critical of the idea and asking difficult questions.
That’s why we find ourselves with highly rated BS.
This is misleading to voters because they tend to think that proposals with 4/5 stars are solid ideas & should be funded.
Average voters don’t have time to read through thousands of proposals, so they heavily rely on ratings.
I think we have two options:
1- We fix Catalyst
2- We change the way users are taxed
Let’s start with the easy fix.
Catalyst needs to overhaul the rating system: telling PAs to remain “neutral” while ignoring the fundamental idea behind a proposal is neither efficient nor effective.
If PAs chime in with their personal opinions, then some sort of corruption and bias may take place.
How do we fight this?
Simply by having more people involved, being critical, and letting the cream rise to the top.
PAs that pushed profitable, successful proposals will gain reputation.
PAs that pushed unprofitable, failed proposals will lose reputation.
If your reputation falls below average, you don’t get to participate. This is accountability.
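The reputation rule above can be sketched in code. This is a minimal illustration under assumptions of my own (a flat ±0.1 adjustment per outcome, eligibility gated on the group average), not an actual Catalyst mechanism:

```python
# Minimal sketch of the PA reputation rule described above.
# The names, the +/-0.1 step, and the average-based cutoff are
# illustrative assumptions, not part of any real Catalyst design.

from dataclasses import dataclass


@dataclass
class PA:
    name: str
    reputation: float = 1.0  # everyone starts at a neutral baseline

    def record_outcome(self, pushed_high_rating: bool, succeeded: bool) -> None:
        """Adjust reputation for a proposal this PA rated highly."""
        if not pushed_high_rating:
            return  # only high ratings count as "pushing" a proposal
        self.reputation += 0.1 if succeeded else -0.1


def eligible(pas: list[PA]) -> list[PA]:
    """Only PAs at or above the average reputation keep participating."""
    avg = sum(p.reputation for p in pas) / len(pas)
    return [p for p in pas if p.reputation >= avg]


alice, bob = PA("alice"), PA("bob")
alice.record_outcome(pushed_high_rating=True, succeeded=True)
bob.record_outcome(pushed_high_rating=True, succeeded=False)
print([p.name for p in eligible([alice, bob])])  # alice stays eligible
```

One design question a sketch like this exposes immediately: gating on the average means roughly half of PAs are always excluded, so a real scheme would need an absolute floor rather than a relative one.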
Changing the way we are taxed.
If we can’t fix Catalyst then let’s change the way we are taxed.
At the moment, we are sending our tax money to a treasury that’s going to fund grifters & scammers instead of pushing Cardano forward…
Why not overhaul the way we are taxed?
For example, we can have a specific section in our wallets that’s called “Tax ADA”
This ADA is only meant to be spent on proposals; essentially, each wallet holds a mini treasury.
This way we’re pushing power to the absolute edge and allowing users to decide where their Tax ADA is spent.
Once a proposal has reached its requested fund cap, the Tax ADA is removed from the user’s mini treasury and sent to the proposer.
This would heavily increase competitiveness & quality in the proposer market.
With this mini treasury solution, we solve the problem and all become PAs overnight.
If you disagree with this approach, are you saying we aren’t qualified to make our own individual decisions?
Please let me know if any of this is feasible. Looking forward to feedback from all, and to acknowledgment by the powers that be.
I totally agree that the star rating is broken. Even totally honest assessments that are intrinsically correct lead to a quite strange ordering in the overall view. They don’t answer the question “Should this be funded over the others?”, but really just “How fitting is it? How good is the work plan? How good is the auditability?”; yet voters then use them as an answer to “Should I vote for it?”.
A lot of the proposals I want to see funded, because they come from the projects that already keep this ecosystem running at this very moment, and from teams I know can deliver, got quite mediocre assessments. Probably not even in bad faith, if you look at the three assessment questions, but it leaves them in the middle of the ungraspable host of proposals. I hope they got enough votes through the other channels. …
I don’t think your proposal 2 – changing the “taxation” mechanism – is doable. You would have to keep a record of all transaction fees paid by each voter and then assign them the share of the treasury that they can decide on. And for this decision, they would again need much the same guidance as they need for voting now.
So, this leaves us with improving the assessments, maybe the voting system itself, and both in the light of Catalyst moving to this dRep thing.
I believe there are elements missing from the Quality Assurance process. Look into the incentive models as well as the QA guide or manual. Also of interest may be “proposal-splitting” and the guidelines around Challenge Teams, proposal flagging, slashing, and flash assessments. Proposers are able to ask for critical assessments to be removed, and most often they are removed. This process is called Proposer Flagging. After an assessment is flagged by a proposer, it is then reviewed by a Veteran Assessor.
It would be nice if more people were involved and there is a dire need for people to be involved in the beginning of the campaign as opposed to the end.
I never understood why tens of millions of Ada are funded toward dApps that will likely not be open sourced to the ecosystem and were only created so the devs can mint and sell their useless token. These projects are fine on their own; the potential for profit should drive development. But giving these teams thousands of dollars in funding when the proposal basically reads like they are going to sit around for half a year and then get around to putting some code together just seems so flawed and gross. It makes me wish the treasury wasn’t a thing.
In my opinion, treasury funds should be only around 5% of the reserve minted every epoch, and they should only fund core protocol development (where the devs don’t stand to make money off of it if successful). The funds need to be used to incentivize work that is critical and lacks proper attention, since on most blockchains it would be done on more of a volunteer basis.
Some valid points and good idea seeds. Here’s feedback that sprung up.
Some follow-up Qs on:
PA’s that pushed profitable & successful proposals will have an increase in reputation. PA’s that pushed unprofitable & failed proposals […]
Q: What metrics do we use for success or failure? What is profitable, and to whom?
Q: Who decides these parameters, and how?
Q: Who verifies the KPIs, and how?
Q: What stops a PA from just creating a new account when their rep is low?
2. Regarding this statement:
“Simply by having more people involved, being critical and let the cream rise to the top.”
Q → How do we get more unbiased, quality (critical) involvement/engagement if, even with PA rewards that high, the quality of engagement is that low (as you suggest)?
Q → What about limiting the number of (non-FOSS) proposals per team/individual?
Can you please elaborate on this:
“Once a proposal has reached its requested fund cap, the Tax ADA is removed from the user’s mini treasury and sent to the proposer.”
The main way to know if it was a success is to be able to check with verifiable evidence if what they claimed they could provide has actually been delivered.
These metrics should be provided by the proposer in the proposal.
All proposals should be profitable to Cardano, no one else, period.
Be it an increase in innovation for the system, profitability in the $ sense or an increase in adoption to bring in more users regardless of gender or race.
The PA should be there to measure if the proposal is realistic & good for Cardano.
It also has to be achievable and make financial sense, as our resources are limited.
We need to be able to verify each rep with a real identity so that they can’t just start from scratch and game the system.
You remove the financial incentive from the equation to weed out PAs who are only in it for the money, and replace them with people who fundamentally care about Cardano, who are protecting their investment and the treasury from scammers and grifters.
Funded proposals don’t have to be Free and Open Source Software as long as they are profitable and prove to be a good investment, whether through a high ROA for the treasury, provable adoption, or innovation.
To elaborate on the requested fund cap:
Once the sum pledged by mini treasury wallets that approved the proposal (each with an amount of their choice, within their means) reaches the requested fund cap, the funds are locked in and distributed over an arc of time until the project is complete.
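As a rough sketch of this pledge-until-cap flow (the class, the lock rule, and the equal-milestone payout schedule are my own illustrative assumptions, not a real Cardano or Catalyst design):

```python
# Illustrative sketch of the "mini treasury" pledge mechanism described
# above: wallets pledge Tax ADA until the requested cap is reached, then
# funds lock and pay out in tranches. All details are assumptions.

class Proposal:
    def __init__(self, cap_ada: float):
        self.cap_ada = cap_ada
        self.pledges: dict[str, float] = {}  # wallet id -> pledged Tax ADA
        self.locked = False

    def pledge(self, wallet: str, amount: float) -> None:
        """Record a wallet's pledge; lock once the cap is reached."""
        if self.locked:
            raise ValueError("fund cap already reached; pledges are locked in")
        self.pledges[wallet] = self.pledges.get(wallet, 0.0) + amount
        if sum(self.pledges.values()) >= self.cap_ada:
            self.locked = True  # Tax ADA leaves the mini treasuries

    def payout_schedule(self, milestones: int) -> list[float]:
        """Distribute the locked funds over an arc of time (equal tranches here)."""
        if not self.locked:
            raise ValueError("cap not reached yet")
        total = sum(self.pledges.values())
        return [total / milestones] * milestones


p = Proposal(cap_ada=100.0)
p.pledge("wallet-a", 60.0)
p.pledge("wallet-b", 40.0)
print(p.payout_schedule(4))  # four equal tranches once the cap is hit
```

A real version would of course need on-chain enforcement and milestone verification; the sketch only shows the accounting logic the post describes.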
There should also be technical PAs to assess the appropriateness of the solution, the technique, the level of skill, etc. I noticed a couple of proposals where I was rather put off by the proposer’s level of skill relative to the problem to be solved. Voters without technical knowledge or skill would most likely not pick up that the proposers were more ‘designers’ than developers of code that requires care in securing it. On one level I thought, well, ok, as long as it is properly audited; but then the overall solutions sometimes just pushed the ‘problem to be solved’ to another ‘problem to be solved’, and they were asking for quite a bit of funding. I voted both down, but I suspect many others were attracted to the potential solution and believed the proposers fully competent. The rating for both was 5 stars.
I’m not saying we should dispatch ADA from the treasury and send it to all individuals who contributed.
I’m saying we should stop the way we are being taxed and implement a new way to solve the problem long term.
The argument the OP puts forth places the star rating system in the spotlight, and I can agree that these ratings are garnered from a “neutral” point of view and are more meant to assess the quality of the proposal itself and its executability, rather than its value add for the ecosystem. In that light, then, what if we looked at improving the information received by the voter at the point of voting?
What if each proposal had a list of positive and negative statements about the proposed plan, presented to voters on the voting page? This gives voters a chance to see what Cardano and the ecosystem stand to gain from a proposal (the gains are rarely well articulated in the 140-character Solution) and where this proposal may be lacking, or things that may be overlooked in the criteria set by the PA process.
Positive statements could include things like: results in open source tooling or education, increases research in an under-researched field of interest, bridges blockchain communities, increases ecosystem security, etc.
Negatives could include things like: funds not for Cardano ecosystem development, project has already raised funds in another means, project is one of many of the same type already existing in the ecosystem, team is unknown or under-qualified, solution already exists, etc.
These positive and negative statements can be proposed by the proposers (yes, they will rarely propose a negative) and by PAs. They are then reviewed and voted on by vPAs, and the statements that win a large enough share of the votes from the PAs who provide a valid assessment for that proposal will appear on the voting app. (Proposers may not get to view the positives and negatives proposed by the PAs when reviewing assessments.) This will have to be streamlined in some way to prevent vPAs from having to reorganize thousands of similar comments about a piece. We could use a Mad Libs format for PA submissions as a streamlining mechanism.
This allows decision-influencing information about a proposal’s outcomes to be displayed to users as they vote; information which is currently not available to them unless they read each proposal they vote on in full. This can result in better-informed voting decisions, hopefully more impactful projects funded, and maybe the non-funding of proposals that don’t absolutely deserve or need it.
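The vPA filtering step described here could look something like the following sketch; the function name and the 50% vote-share threshold are assumptions for illustration, not part of any proposed spec:

```python
# Hedged sketch of the statement-filtering step described above:
# candidate statements are voted on, and only those clearing a
# vote-share threshold appear on the voting app. The 50% default
# threshold is an assumption.

def winning_statements(votes: dict[str, int], valid_assessors: int,
                       threshold: float = 0.5) -> list[str]:
    """Keep statements endorsed by a large enough share of valid assessors."""
    if valid_assessors <= 0:
        return []
    return [s for s, n in votes.items() if n / valid_assessors >= threshold]


votes = {"results in open source tooling": 8,
         "solution already exists": 3}
print(winning_statements(votes, valid_assessors=10))
# only the first statement clears the 50% bar
```

Streamlining the input (e.g. the Mad Libs format suggested above) would matter more than the tally itself, since free-text statements would first have to be deduplicated into these countable buckets.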
Agreed, PAs need to be curating proposals for better star ratings.
The voter is already saturated with proposals by the thousands.
It’s up to the assessors to curate and request more information during the scoring phase.
I agree, this would make it easier for the voter to see the + / - and then they can make up their own mind.
Hard disagree; a proposer shouldn’t be commenting on their own proposal.
It’s up to the PAs to be critical of it and provide an unbiased analysis.
I think there should also be some kind of rating system for past performance by entities that have been funded and completed projects on Catalyst, with a link for users to look at if they wish. That way proposers don’t need to waste precious space describing the new project, and voters have an idea of the proposer’s history. Showing a history of past projects submitted by the proposers, including whether funding was approved or not, can also give voters a better idea.
Another thing that should be rated is the ones rating the projects themselves.
I think the current rating system is bad and very easily can be gamed by teaming up with certain people and giving each other 5 star ratings.
It takes too much time to go through so many project descriptions, teams, histories, etc. to properly assess all projects. Ratings simplify this process a lot, and the rating of the project itself is the most important. People rating projects can downvote other notable projects so that a project the rater is involved or teamed with stands a chance. People rating projects must be graded at some point, measuring the ratings they have given against something like the number of projects actually approved and voted for by the community.
Projects need to be followed up on, but that’s the difficult part, as the Catalyst team is small.
There’s a work group getting prepared by Daniel Ribar, if you want to help in any way, make sure to send him your email address.