Open standard for incentives-based community-rating systems

Hi,
I was wondering if there is an open standard (smart-contract based) that could be used in community-based rating systems.

The prime example of what I call a “rating system” is Stack Exchange:
Users give up or down votes and thereby “rate” the validity of answers. On Stack Exchange the validity is reinforced by the person who asks the question, but in a smart-contract system the validity could be generated based on some smart-contract rules as well.

This general system can of course apply to many different web applications, so if one wants to use
a smart-contract based version of, let’s say, the “Stack Exchange rating system”, it does not have to be built from scratch.

My question: is there an open standard for smart-contract based implementations of such a system, and if not, wouldn’t it make sense to create one? Developers wouldn’t have to implement such a system from scratch, but could instead customize the existing standard for their own specific app needs.

Best,
T

Interesting topic. I’m not sure if there are standards being worked on, but I think the main obstacle to creating rating systems on permissionless networks is vulnerability to Sybil attacks. Basically, you can’t give every user account equal voting power, or the system can easily be gamed. Meaningful rating scores need to derive influence from some form of stake, ideally in a more meritocratic way than using monetary assets alone.

I guess these fundamental questions of how voting power is allocated need to be resolved first, before it makes sense to define standards. The only feasible way I could think of to achieve quality rating systems would be to establish decentralized reputation systems first.
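To make the Sybil-resistance point concrete, here is a toy sketch (not any existing standard, and all names are made up for illustration) of a vote tally in which each vote is weighted by the voter’s stake, so spinning up many empty accounts adds no influence:

```python
def weighted_score(votes, stakes):
    """Tally up/down votes weighted by stake.

    votes:  dict mapping voter -> +1 (up) or -1 (down)
    stakes: dict mapping voter -> stake amount (e.g. tokens locked)
    Accounts with no stake contribute nothing, which is what
    blunts a Sybil attack under this model.
    """
    return sum(direction * stakes.get(voter, 0)
               for voter, direction in votes.items())

votes  = {"alice": +1, "bob": -1, "sybil1": -1, "sybil2": -1}
stakes = {"alice": 100, "bob": 40, "sybil1": 0, "sybil2": 0}
print(weighted_score(votes, stakes))  # 60: the zero-stake accounts have no effect
```

Note that this only pushes the problem into how stake is allocated, which is exactly the open question above: purely monetary stake is plutocratic, which is why a reputation-derived stake might be preferable.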

Yes, Sybil attacks are one thing you want to avoid, and staking probably plays a part here.
Another application, besides Stack Exchange, is anything where a group of people has to find the “correct” solution. One example is the “reCAPTCHA” algorithm. Another is translating sentences in language apps. Internally, the algorithms compare the solutions given by different people, so someone who just translates wildly would often end up with wrong solutions and thus lose credibility, or earn less ADA, if correct answers were paid out in ADA.

But I do think that the structures required for these applications are fairly general, and the different use cases can be achieved by changing a particular parameter in the algorithm - for example, the number of people who have to check a particular item before it is deemed “true”. I don’t think we necessarily have to wait for reputation systems to be built, because reputation is only one part of this - and I am not sure it is the most important one.
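A minimal sketch of that idea, under my own assumptions (the quorum parameter, the credibility scores, and all names here are hypothetical, not from any existing system): an item is deemed “true” once some answer reaches a configurable number of matching submissions, and contributors whose answers disagree with the consensus lose credibility.

```python
from collections import Counter

QUORUM = 3  # the tunable parameter: matching answers needed to deem an item "true"

def resolve(answers, quorum=QUORUM):
    """answers: dict mapping contributor -> submitted answer.
    Returns the consensus answer once some answer has at least
    `quorum` matching submissions, otherwise None (undecided)."""
    answer, count = Counter(answers.values()).most_common(1)[0]
    return answer if count >= quorum else None

def update_credibility(cred, answers, consensus, reward=1, penalty=1):
    """Raise credibility for contributors who matched the consensus,
    lower it for those who answered differently."""
    for who, ans in answers.items():
        cred[who] = cred.get(who, 0) + (reward if ans == consensus else -penalty)
    return cred

answers = {"a": "hello", "b": "hello", "c": "hullo", "d": "hello"}
consensus = resolve(answers)  # "hello": three matching submissions meet the quorum
cred = update_credibility({}, answers, consensus)
print(consensus, cred)  # hello {'a': 1, 'b': 1, 'c': -1, 'd': 1}
```

Raising the quorum trades throughput for confidence, which is the kind of per-app knob a shared standard could expose; payouts in ADA could then be made proportional to credibility.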