I loved the ITN and Cardano, but I may not love mainnet because of a0!

And even if I agreed with that (which I do not), a0 would probably not be the way to go. There would seemingly be far fewer nasty side effects in enforcing that at most k’ pools (those with the most stake) have rewards and chances to validate a block in an epoch. You could take k’ = 10,000 or whatever. Then a newcomer would only have to become bigger than the k’th pool to enter the game, not arrive with a pledge 3 times higher than the currently largest established pools.
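To make the idea concrete, here is a minimal sketch of that eligibility rule (purely hypothetical; the pool names, stakes and k’ value are made up for illustration):

```python
# Hypothetical sketch of the "top k' pools" rule: only the k' largest pools by
# total stake earn rewards and get block-production slots in an epoch.
# Pool names, stakes and k' are made up for illustration only.

def eligible_pools(pool_stakes: dict, k_prime: int) -> set:
    """Return the k' pools with the most total stake."""
    ranked = sorted(pool_stakes, key=pool_stakes.get, reverse=True)
    return set(ranked[:k_prime])

pools = {"A": 120e6, "B": 90e6, "C": 45e6, "D": 2e6}
print(eligible_pools(pools, k_prime=3))
# {'A', 'B', 'C'} -- to enter, 'D' only has to overtake 'C', the smallest eligible pool.
```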

The premise you base your discussion on is wrong. Staking is not about having equal chances at power and money or some perverted form of decentralization; it’s about securing the network. And just because ‘the richest get richer’ (arguably, and only to a degree) does not mean the protocol is going to be centralized and useless.

There are many ways to become rich, so making staking 100% fair and equal makes very little difference. I can sell my house, buy ADA with the money and then triple it by trading. This would also make me (‘the rich’) richer and give me more power, and I can start running pools to become even richer. Of course there needs to be some degree of fairness and inclusiveness so pools can compete and we create a healthy market, which there will be, but I hope you can see how pointless it is to advocate that it should be 100% fair, equal and inclusive.

And your notion that ‘poor’ people cannot participate is not true either. You can run a pool with no pledge, and you can work together with others to get to a more optimal pledge, for example. Or you can try to accumulate more ADA.

4 Likes

It’ll all get worked out in time, I hope. Power to the wealthy is the exact opposite of the Cardano philosophy. Apparently 8 whales (and I absolutely hate whales with a passion) have more than 30% of ADA. Not sure how much of that is or will be staked.

He is right. The top 8 addresses own 30%.

1 Like

Of course. If you have a group of friends you trust, you can pool your ADA together and pledge it to your pool to make it more attractive to delegators.

Staking is not about having equal chances at power and money or a perverted form of decentralization, it’s about securing the network.

Man, I really think you missed my point here. I’m not saying that we should never sacrifice a little fairness and inclusiveness for security, if that’s really needed. My point is that this formula is not only needlessly unfair (the fact that it is unfair does not bring anything in terms of security), but actually worse: it is less secure than the 100% fair formula.

Anyway, I’ve made my proposal here:

I think we’ve covered the topic now. I did what I could and will stop posting here, unless (very) new elements come up.

Thanks to everyone for your time! :slight_smile:

2 Likes

No I didn’t. Read the rest of my reply. It is so annoying when someone quotes one sentence and then ignores everything else. If you don’t want to discuss things then don’t start a discussion.

You based your whole discussion on this premise:

This means that we will artificially give more (power and money) to those who already have much, and less to those who already have little. This can only lead to centralization in the long run. Decentralization is not only about how many pools there are; it is also about the fact that anyone should have the possibility to participate directly in the network in proportion to their means, no more, no less.

I debunked that. You want to change the formula based on wrong beliefs.

With the current formula there will be a healthy stake pool market and the network will be secure. Adding fairness adds no value and is risky, because every change carries risk. Things can go wrong, so you need a better reason.

And you claiming it’s more secure without much research, in a discussion that goes all over the place, doesn’t give me any confidence that it’s true. IOHK set a very high standard of due diligence, which anyone who makes a proposal should at least attempt to match; otherwise they are unlikely to be taken seriously.

5 Likes

Hi pparent76,

I have been following this conversation, but have delayed responding as I have been working hard to understand the pledge concept myself.

I think that the point of the a0 term is to limit the influence that “whales”, or large ADA holders, can have, while at the same time protecting against Sybil attacks. I know you are frustrated by the amount of work you have already spent trying to understand this, but have you by chance watched just the first 8 minutes of this video:

Emurgo walks through the Incentive Paper and explains the math behind the reward function. A0 works to bound the reward function so that the total rewards a pool receives plateau at the saturation point, encouraging a diversity of pools and guarding against concentration of stake in a small number of pools. The reward function also ensures that a leader with a very large stake does NOT have an unfair advantage at the beginning, and it leads to diversity within each pool.
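For anyone who prefers code to prose, here is a minimal sketch of the “plateau” idea described above, ignoring a0, costs and margins entirely; R and k are made-up numbers, not protocol values:

```python
# Illustrative sketch of the saturation "plateau": a pool's total reward grows
# with its stake only up to z0 = 1/k of the total stake, then flattens out.
# Ignores a0, costs and margins; R and k are made-up numbers, not protocol values.

R = 1000.0    # total rewards available per epoch (illustrative)
K = 10        # target number of pools (illustrative)
Z0 = 1.0 / K  # saturation point, as a fraction of total stake

def capped_pool_reward(sigma: float) -> float:
    """Reward of a pool controlling fraction `sigma` of the total stake."""
    return R * min(sigma, Z0)

for sigma in (0.02, 0.05, 0.10, 0.20):
    print(f"pool stake {sigma:.0%} -> reward {capped_pool_reward(sigma):.0f}")
# Beyond 10% of the total stake the reward no longer grows: it plateaus.
```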

Emurgo also set up a calculator showing concrete examples of the reward function at work. It is a 4 minute video.

I sense you are a bit burned out on this topic, but I welcome your thoughts.

4 Likes

@AllisonFromm Thanks a lot for this video, which seems to cover the topic of the reward formula and should give insight into why this formula was chosen this way! :slight_smile:

I will watch it this weekend, when I have time to think about it and can give it my full concentration, because I’m very busy right now. I’ll keep you updated on whether it changes my views or not! (And if so, in what way.)

Thanks again!

1 Like

Thanks Allison (@AllisonFromm) for the share, it’s always refreshing to get a fresh perspective. Welcome to the community :metal:

@pparent76 I thought of you as I was reading the updated paper on the reward sharing formula, which was just released earlier this month: https://arxiv.org/pdf/1807.11218.pdf

The authors specifically address why the current Cardano formula is more fair than a directly proportional reward sharing scheme.

Below are just two sections, from the abstract and from the introduction. The rest of the paper goes through the math that explains these conclusions.

If you have the time and energy to read through it, I welcome your thoughts.

Abstract:

We introduce and study reward sharing schemes (RSS) that promote the fair formation of stake pools in collaborative projects that involve a large number of stakeholders such as the maintenance of a proof-of-stake (PoS) blockchain. Our mechanisms are parameterized by a target value for the desired number of pools. We show that by properly incentivizing participants, the desired number of stake pools is a Nash equilibrium arising from rational play. Our equilibria also exhibit an efficiency / security tradeoff via a parameter that calibrates between including pools with the smallest cost and providing protection against Sybil attacks, the setting where a single stakeholder creates a large number of pools in the hopes to dominate the collaborative project. We then describe how RSS can be deployed in the PoS setting, mitigating a number of potential deployment attacks and protocol deviations that include censoring transactions, performing Sybil attacks with the objective to control the majority of stake, lying about the actual cost and others. Finally, we experimentally demonstrate fast convergence to equilibria in dynamic environments where players react to each other’s strategic moves over an indefinite period of interactive play. We also show how simple reward sharing schemes that are seemingly more “fair”, perhaps counterintuitively, converge to centralized equilibria.

Introduction:

There are three dominant approaches that have been considered in the PoS context. In the “direct democracy” approach, every stakeholder participates proportionally to their stake, which has the downside that the operational costs can be so high that they discourage participation from small stakeholders, resulting in so-called “whales” completely dominating the system or, in the worst case, having operations stopping altogether. In the “jury” approach, followed by PoS systems like [30, 12], a random subset of k stakeholders is elected at various intervals to carry out the task, which has the downside that either the jury tenure is short and most of the nodes need to be constantly operationally ready without necessarily doing anything, or the jury tenure is long (or predictable way ahead of time) and then the risk of someone subverting the project by paying the elected nodes with small stake is high. Finally, in the “representative democracy” approach, broadly followed by [25, 11, 27], the stakeholders can empower other stakeholders to represent them in project maintenance and subsequently share the rewards. Given that empowering is performed via stake as recorded in the ledger, representatives can be thought to form “stake pools” in analogy to the mining pools of Bitcoin. The focus of this work is to develop reward mechanisms and analyze them game theoretically for this third approach.

4 Likes

@AllisonFromm Ok, thanks a lot for sharing those very interesting links. I have not yet found time to study them in detail, but I will try to find time soon, and will give you my thoughts about all this!

Quick quote that seems interesting at first sight, from the conclusion of the paper (I will read it in more detail later):

With respect to Sybil attacks and the “rich get richer” problem, in our scheme, increasing Sybil-resilience affects inverse proportionally egalitarianism. An interesting research direction is to formally characterize the relationship between these two properties across the whole design space.

Funny that they release this paper now. I wonder whether the authors read this thread.

I used to work as an academic researcher. Having just glanced through it, I can assure you that paper took many months to write! :grin:

1 Like

Yes, sure, of course they did not write it overnight; a paper like that is a lot of work. That’s not what I meant! :wink:

The preliminary works/papers have been available for a very long time (this one is just a summary) in IOHK’s public repos/builds. You just had to do what some of us did: find and read them. They are well known to those who did.

2 Likes

I’ve watched part 3 of the Incentive paper lecture several times in a row. (I did not find time to read the full paper that was released earlier this month.) Here are my current thoughts and questions about it.

Up to 3:30 I’m fine, although he does not cover at all why Sybil attacks should be prevented with incentives to have pledge (a0-like) in the first place, rather than by enforcement. That is the most important point for me, if I am to change my mind.

3:40 "Another thing we wanted to take into account is to kind of make the advantage that a pool leader has kind of dampened, to avoid the unfair advantage a pool leader might get, when our curve was kind of like that"

This is the sentence that introduces the need for what I’ve called until now the “strange factor”. But I really don’t get what he’s talking about. The advantage that a pool leader has over whom?

3:50 "Because remember when you get stake early on it’s worth a lot, so we wanted to kind of limit that parameter"

I really don’t get what he means by that. Does he mean when you get delegated stake early on, or pledged stake early on? Maybe he means that pools with a large amount of pledge would be advantaged at the early stages, without the strange factor? That seems the most likely reading, but if so, why is it a problem, since the advantage is only temporary, lasting until the pool gets enough delegators (which it will if it has a high RoS, and which is in the best interest of the pool leader)?

Does he realize that this strange factor gives a huge unfair advantage to already established pools over newcomers, as I showed earlier here and here?

I’m keeping it to these questions for now, because I’m OK with all the rest; I do understand his explanations of the formula. But I don’t understand the motivations for it, and why it should be better than the most obvious formula: limit the delegated stake proportionally to the pledge. If someone did understand and wants to clarify, that would be most welcome. Also, if the authors want to come and explain or discuss, that would be great too. I will try to find more time to read the paper in the coming days.

I’ve just watched part 6, and he says:

"Because there is kind of a high jump at the beginning, if the pool leader has a high amount of stake they can quickly get an advantage at the beginning and we want to kind of dampen that effect"

This confirms to me that, with the strange factor, they have been trying to solve this “pools with a large proportion of pledge would be advantaged at early stages” problem. But that sounds to me like a false problem. If the pool leader has a high amount of pledge in comparison to delegated stake, then they have the advantage of a high RoS, but it is compensated by the disadvantage of getting very few fee rewards from delegators, in comparison to what they could earn while keeping the RoS competitive. So, all things considered, there is a certain fee percentage at which the pool leader has an interest in accepting delegated stake into their pool, because the fees over-compensate the loss in RoS. And delegators have an interest in joining that pool because it has a high RoS, until the pool reaches the average RoS and regains balance. Therefore, without the strange factor, the “unfair advantage” he is talking about and trying to “dampen” would actually only be a temporary imbalance, reflecting the fact that the pool needs to grow in line with its pledge; it would not be a real advantage to the pool leader, nor a real problem at all. (And the strange factor introduces as a side effect another huge unfair advantage, as shown above, which he does not seem to discuss at all.)
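To make the argument concrete, here is a toy model with made-up numbers (it is not the Cardano formula): a simple pledge bonus stands in for the un-dampened pledge term, and the output shows the pledge-heavy pool’s RoS edge fading as delegation arrives, while the operator’s income grows, i.e. the imbalance corrects itself:

```python
# Toy model (made-up numbers, NOT the Cardano formula): a simple pledge bonus
# stands in for the un-dampened pledge term. As delegation flows in, the
# delegator RoS falls back toward the no-bonus baseline while the operator's
# income grows, so the early advantage is a temporary imbalance, not a lasting edge.

R_RATE = 0.05     # baseline reward per unit of stake per epoch (illustrative)
BONUS_RATE = 0.5  # extra reward per unit of pledge (illustrative stand-in)
MARGIN = 0.03     # operator's fee margin (illustrative)

def pool_state(pledge: float, delegated: float):
    pool_stake = pledge + delegated
    pool_reward = R_RATE * (pool_stake + BONUS_RATE * pledge)
    operator = MARGIN * pool_reward + (1 - MARGIN) * pool_reward * pledge / pool_stake
    delegator_ros = (1 - MARGIN) * pool_reward / pool_stake
    return delegator_ros, operator

print(f"baseline RoS without any pledge bonus: {(1 - MARGIN) * R_RATE:.4f}")
for delegated in (1_000, 10_000, 100_000, 1_000_000):
    ros, income = pool_state(pledge=10_000, delegated=delegated)
    print(f"delegated={delegated:>9,}  delegator RoS={ros:.4f}  operator income={income:,.0f}")
```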

Again, if anyone has any more explanations, do not hesitate.

For now, this video strengthens my opinion, and actually tends to worry me even more about how this formula was chosen.

I still have to read the June paper.

I’ve read the June paper:

"(page 2)Finally, in the “representative democracy” approach, broadly followed by [25,11,27], the stakeholders can empower other stakeholders to represent them in project maintenance andsubsequently share the rewards. Given that empowering is performed via stake as recorded in theledger, representatives can be thought to form “stake pools” in analogy to the mining pools of Bitcoin.The focus of this work is to develop reward mechanisms and analyze them game theoretically for thisthird approach."

I really don’t get it! How can you hope that people will choose pools that represent them best with regard to maintenance and security of the system, selecting trustworthy people as in the concept of “representative democracy”, AND that they will AT THE SAME TIME follow the incentives of higher RoS you are putting in place with a0 to push pool operators towards a minimum amount of pledge? This really leaves me clueless.

2.2 Fair RSS’s and their Failure to Decentralise
Specifically consider the fair allocation that sets r(σ_i, a_i, i) = σ_i · R

Is this the scheme that is “seemingly more fair” and is mentioned in the abstract as converging to “centralized equilibria”? If so, without any limits it sounds very obvious that the optimum is to share costs across one big pool, so that will be the Nash equilibrium. I hope we’re not making a straw-man argument here, because it seems obvious that r(σ_i, a_i, i) = σ_i · R is bad, but that is not the case for all other “fair” reward schemes.
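A quick numeric illustration of why the purely proportional scheme drifts toward one big pool, assuming only a fixed operating cost per pool (all figures are made up):

```python
# Quick numeric check of why r(sigma) = sigma * R with a fixed per-pool cost
# tends toward one big pool: splitting the same stake across more pools just
# multiplies the cost, so net rewards are maximized by a single pool.
# All figures are made up for illustration.

R = 1000.0           # total rewards per epoch (illustrative)
COST_PER_POOL = 5.0  # fixed operating cost of one pool (illustrative)

def net_rewards(num_pools: int) -> float:
    """Net rewards left for stakeholders if all stake is split across `num_pools` pools."""
    return R - num_pools * COST_PER_POOL

for n in (1, 10, 100):
    print(f"{n:>3} pools -> net rewards {net_rewards(n):.0f}")
# One pool keeps the most rewards, so without any cap the equilibrium centralizes.
```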

(page 14) To address this issue we design a reward sharing scheme that guarantees that players can attract stake from other players only if they commit substantial stake to their own pool.

Then why don’t you simply limit the delegated stake proportionally to the pledged stake: r_k(σ, λ) = min(σ, 1/k, b0 · λ)? So simple, so natural! Why don’t you even mention this possibility (at least to show it’s bad, like you did with the other possibilities)? Didn’t you think about it? Didn’t you read this post?
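Spelled out as code, the proposed cap would look like this (k and b0 are illustrative values, not suggested parameters):

```python
# Minimal sketch of the cap proposed above: a pool's reward-bearing stake is
# limited both by the saturation point 1/k and by b0 times its pledge.
# The values of k and b0 are illustrative, not suggested protocol parameters.

K = 1000   # target number of pools (illustrative)
B0 = 20.0  # maximum reward-bearing stake as a multiple of the pledge (illustrative)

def reward_bearing_stake(sigma: float, lam: float) -> float:
    """r_k(sigma, lambda) = min(sigma, 1/k, b0 * lambda), where sigma is the
    pool's relative stake and lam the operator's relative pledge."""
    return min(sigma, 1.0 / K, B0 * lam)

# A pool with little pledge cannot soak up delegation, however much it attracts:
print(reward_bearing_stake(sigma=0.001, lam=0.00001))  # 0.0002: capped by the pledge
print(reward_bearing_stake(sigma=0.002, lam=0.001))    # 0.001: capped by 1/k
```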

The objective is to design a reward scheme that provides incentives to obtain an equilibrium that compares well with the above optimal solution. On the other hand, we feel that it is important that the mechanism is not unnecessarily restrictive and all players have the “right” to become pool leaders.

Why incentives? Why not enforce that the maximum delegated stake should be proportional to the pledged stake? It would not be unnecessarily restrictive, and all players would have the “right” to become pool leaders and to start small and grow progressively, contrary to the reward scheme you propose. Why, please?

The natural way to accomodate this in our scheme, would be to use the above reward function but apply it to σ + αλ, a weighted sum of the total pool stake σ and the allocated pool leader stake λ.

To my mind, the natural way is to apply min(σ, 1/k, b0 · λ). I would never have thought of σ + αλ naturally.
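A rough side-by-side of the two options, with made-up numbers and deliberately simplified (these are not the exact formulas from the paper): the weighted sum only adds a pledge bonus on top of the delegated stake, whereas the min cap gates rewards on pledge:

```python
# Rough side-by-side with made-up numbers (not the exact formulas from the paper):
# the weighted sum adds a pledge bonus to the stake that earns rewards, while
# the min cap gates the reward-bearing stake on the pledge.

ALPHA = 0.3  # weight on pledge in sigma + alpha * lambda (illustrative)
B0 = 20.0    # pledge multiple in the min-cap alternative (illustrative)
Z0 = 0.001   # saturation point 1/k (illustrative)

def weighted_sum_stake(sigma: float, lam: float) -> float:
    return min(sigma + ALPHA * lam, Z0)

def min_cap_stake(sigma: float, lam: float) -> float:
    return min(sigma, Z0, B0 * lam)

# A fully delegated pool with zero pledge:
print(weighted_sum_stake(sigma=0.001, lam=0.0))  # 0.001: still earns full rewards
print(min_cap_stake(sigma=0.001, lam=0.0))       # 0.0: no pledge, no reward-bearing stake
```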

Quite frankly, I’ve spent a crazy amount of time on this and found nothing new. I don’t understand why the authors don’t come here to explain and discuss their papers, which would be much simpler. If they don’t want to spend their time on it, then I will not keep spending mine. I’m not paid for this and my time is precious.

I really don’t want to go down the rabbit hole again. But this paragraph simply explains the third approach, the stake-pool approach, in which stakeholders delegate their stake to a stake pool (other stakeholders, i.e. representatives, identified by their pledge) that they have selected. The reason why they selected that pool is not relevant here, as people have different incentives (moral, financial) to different degrees. Also, financial incentives cover a range that can be risk-related, e.g. from low risk to high risk. So you cannot predict what an individual will do; you are instead trying to find an equilibrium based on people’s incentives. That’s why a statement about seeking the highest RoS is not really relevant, because the yearly RoS range does not differ too much. For example, imagine yearly interest between 10.3% and 11.7%: some people would choose pools with less interest if they are ideologically attached to them. But it’s complex. It’s like population genetics: small nuances in individuals.

Anyway, I won’t reply from now.

1 Like

@pparent76 You have clearly invested a HUGE amount of time in going through all the resources. I am impressed! I will also freely confess that your math skills are much better than mine. I can’t follow the formal proofs in the papers. I have focused on the conclusions and have taken it on faith that the math substantiates the arguments. So I very much welcome your challenges to the authors’ conclusions.

I will try to explain in my own words what I think the formula is trying to accomplish. If you have the time (and I wouldn’t at all blame you if you didn’t want to invest anything further at this point!!) I would welcome your feedback (or feedback from anyone else in the community) on a couple of questions:

  • Do you agree that my conclusions accurately reflect the authors’ intentions?
  • If not, I welcome your thoughts on where I went wrong.
  • If so, is your issue with the theory? Or is your concern that the formula doesn’t actually support the intentions?

Allison’s Summary:

“Pledge” is the amount of ADA a Stake Pool Operator must leave in their own Pool.

The amount of the Pledge should be high enough to make it very expensive for anyone to create multiple pools (thus preventing Sybil attacks).

If the rewards were distributed purely proportionally, someone with a large amount of ADA could create a Pool, fill it entirely with their own ADA, and capture all (or a lot) of the rewards based on their large amount of ADA. As a result, the rich would get richer.

So the problem is how to have a high enough Pledge to prevent Sybil attacks, but not so high that small pool operators are discouraged from participating, so that diversity is created amongst pools and within pools.

Rewards are therefore capped once the Pool reaches a certain amount of delegated stake (the saturation point). Purely for the sake of an example, I will assume a Pool’s reward is capped at 10 ADA. If only 1 ADA is staked in that pool, the reward will be 10 per ADA. If there are 2 ADA, the reward will be 5 per ADA. If there are 10 ADA, the reward for each ADA is 1. After that point, any additional Stake will reduce everyone’s reward.

But the above example is an oversimplification, as the reward itself is a dynamic number. It is not fixed at 10 regardless of the size of the stake in the pool: the reward increases as the stake in the pool increases, up to the cap. So it is not in the Pool Operator’s best interest to keep the stake at 1 ADA (using the example above), but rather to encourage delegation and keep the total reward for the pool growing until it reaches the cap. Rational delegators will therefore delegate to an unsaturated pool, because that is the way to earn the most rewards, as the pool’s total reward grows with additional stake up to the cap.
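The example above, written out in code with both versions side by side (all numbers are purely illustrative, as in the text):

```python
# The example above in code, in both versions described: the oversimplified one
# where the pool's total reward is a fixed 10, and the refined one where the
# total reward itself grows with stake up to the cap. Numbers are purely
# illustrative, as in the text.

CAP = 10.0         # maximum total reward for the pool (illustrative)
SATURATION = 10.0  # amount of stake at which the pool saturates (illustrative)

def per_ada_fixed_total(staked: float) -> float:
    """Oversimplified: the total reward is always 10, split across the stake."""
    return CAP / staked

def per_ada_growing_total(staked: float) -> float:
    """Refined: the total reward grows with stake up to the cap, then stays flat."""
    return CAP * min(staked, SATURATION) / SATURATION / staked

for staked in (1.0, 2.0, 10.0, 20.0):
    print(f"{staked:>4.0f} ADA staked: fixed-total {per_ada_fixed_total(staked):5.2f}/ADA, "
          f"growing-total {per_ada_growing_total(staked):.2f}/ADA")
# In both versions, stake beyond the saturation point only dilutes everyone's reward.
```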

But if the above mechanism were the end of the story, there would be no protection against Sybil attacks.

So a certain amount of pledge is required. The amount of the pledge works in tandem with the cost and the margin variables in the desirability rankings that wallets will publish. The higher the amount of pledge, the higher up in the rankings the pool will be.

However, the lower the costs and the lower the margin, the more desirable the pool will be, and the higher up in the wallet rankings it will appear. So a pool operator with high costs can compensate by having a higher pledge, and vice versa.

The “strange factor” changes the importance of pledge compared to cost in the calculation of rewards. The higher the value of A0, the more important the amount of the pledge becomes. A higher A0 value means that pools with higher pledge amounts can earn higher returns, because the final capped reward for that pool will be higher. A pool with a high pledge amount should therefore be more attractive to delegators and appear higher in the rankings. An A0 value of 0 means the amount of the pledge has no influence, and pools will have the same returns regardless of pledge amount.
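For concreteness, here is the pool maximal-reward formula as I understand it from IOHK’s Shelley delegation design documentation (corrections welcome if I have transcribed it wrong); the parameter values below are illustrative, not mainnet’s:

```python
# Sketch of the pool maximal-reward formula as I understand it from IOHK's
# Shelley delegation design documentation; if I have transcribed it wrong,
# corrections are welcome. Parameter values are illustrative, not mainnet's.

R = 1000.0    # total rewards available in the epoch (illustrative)
K = 150       # target number of pools (illustrative)
A0 = 0.3      # pledge-influence factor, the "strange factor" (illustrative)
Z0 = 1.0 / K  # saturation point

def max_pool_reward(sigma: float, s: float, a0: float = A0) -> float:
    """Maximal reward of a pool with relative stake `sigma` and relative
    operator pledge `s`, before costs and margin are deducted."""
    sigma_ = min(sigma, Z0)
    s_ = min(s, Z0)
    return (R / (1 + a0)) * (sigma_ + s_ * a0 * (sigma_ - s_ * (Z0 - sigma_) / Z0) / Z0)

# Two saturated pools: one with no pledge, one fully pledged by its operator.
print(max_pool_reward(sigma=Z0, s=0.0))          # no pledge: lower reward
print(max_pool_reward(sigma=Z0, s=Z0))           # fully pledged: higher, and A0 sets the gap
print(max_pool_reward(sigma=Z0, s=0.0, a0=0.0))  # with A0 = 0 ...
print(max_pool_reward(sigma=Z0, s=Z0, a0=0.0))   # ... pledge makes no difference: both equal
```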

While IOHK is doing everything possible to strike the right balance when setting the initial value of A0, if the community later sees that the value is pushing things too far in one direction, the community can decide to change the value.

At the risk of annoying you with one more suggestion of something to look at, I liked the graphs in this article: https://iohk.io/en/blog/posts/2018/10/29/preventing-sybil-attacks/ as I finally could see the impact of the A0 value.

Somehow my notifications were turned off previously. So if you do decide to respond, I should be able to reply immediately :slight_smile: I have enjoyed these exchanges and hope that all is well with you!

3 Likes