I loved the ITN and Cardano, but I may not love mainnet because of a0!

Just to make sure we understand each other: I noted that a higher pledge ratio (s/σ) was necessary in order to prevent Sybil attacks, not a higher absolute value. Do you agree with that?

Yes, I do.

But I don’t see at all how this is a flaw of my proposal, or how it relates to my proposal at all, and even less how it justifies further disadvantaging small pools in the official scheme.

With our proposal, attractive small pools will grow to saturation (if people behave rationally), in yours they won’t.

We are aiming for stake being roughly distributed equally among the k most attractive pools. Your proposal won’t achieve that. Your small-pledge pools would be unattractive, so people would instead delegate to pools that already have 1/k stake, so those pools would grow even bigger, which is exactly the centralization we want to avoid.

Note that the motivations of a small pool may not only be short-term profitability. Some will want to start small and grow progressively, some might want to take a position foreseeing ADA price increases, and some might simply want to participate in securing the network, not trusting it if they don’t participate in it directly.

They can do this under our scheme as well. The pledge-influence factor is quite low and won’t have a big effect on profits anyway.

I do understand, and I do understand that after such a large amount of work it can be hard to reconsider, although it might be best to do so, or at least to think about it.

Certainly. We are always willing to improve if something better comes along. The point is that even if some idea sounds good, it needs to be analyzed very carefully before it can be put into effect. We know our scheme may not be perfect, but we at least understand the mathematics and know it has the desired properties.

It has the properties that you saw as the desired properties at that time, but it might not have all the desired properties that we have not necessarily thought about yet. For example, the cartel problem I described above is not captured by the game-theoretic Nash-equilibrium analysis (quite the contrary), and yet it is obviously a very big problem.

We want the system to have the following desired properties: lead to an equilibrium with k pools of equal size (roughly), and be as fair as possible while at the same time being secure against Sybil attacks. I think those properties are very desirable indeed.

I don’t agree with what you call the “cartel” problem; I think you misunderstand the way “desirability” of pools is calculated: We use so-called “non-myopic” desirability to calculate pool ranking, which roughly means we look at potential rewards, not actual rewards. So in your hypothetical scenario, where we already have k saturated pools and a new pool comes along, that new pool can be ranked very highly from the very beginning, regardless of its current size. It won’t need higher pledge than existing pools in order to be competitive. As long as its combination of pledge, costs, performance and margin is good, it will be highly ranked and attract delegation.
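To illustrate the idea with a toy sketch (Python, deliberately simplified; this is not the exact formula from the design spec): ranking is based on the rewards a pool could offer once it reaches saturation, not on what it pays at its current size.

```python
# Toy illustration of "non-myopic" ranking (simplified, not the spec formula).
# Pools are ranked by the rewards delegators would get if the pool were
# saturated, so a brand-new pool with good parameters can rank highly at once.

def delegator_share(pool_rewards, costs, margin):
    """Rewards left for delegators after the operator takes costs and margin."""
    return max(pool_rewards - costs, 0.0) * (1.0 - margin)

def non_myopic_rank_key(rewards_at_saturation, costs, margin):
    # Rank by *potential* rewards (pool at saturation), not current rewards.
    return delegator_share(rewards_at_saturation, costs, margin)

# A new, cheap pool outranks an already saturated but more expensive one,
# even though its *current* rewards are still tiny.
new_pool = non_myopic_rank_key(rewards_at_saturation=1000.0, costs=50.0, margin=0.01)
big_pool = non_myopic_rank_key(rewards_at_saturation=1000.0, costs=300.0, margin=0.05)
print(new_pool > big_pool)  # True
```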

Furthermore, things like your “cartel problem” can certainly be analyzed with game theory. We haven’t done this in our existing paper, but it can certainly be done.

Overall I feel that you have not yet answered my questions; in particular, it could be useful for everyone to have a point-by-point answer to the flaws I saw in your scheme (cartel attack, partial protection against Sybil attacks, unnecessary unfairness, unnecessary complexity), and to be shown any flaw really specific to the reward scheme I propose.

I’m sorry about that. Here we go:

  • cartel attack: based on a misunderstanding of how pool ranking works (see above).
  • unnecessary unfairness: We believe that unfairness is a price you have to pay for Sybil protection. It’s a trade-off, and the Community will ultimately decide how important one is in comparison to the other. As I tried to explain above, your suggestion is unfair in exactly the same way by making pools with higher pledge more attractive to users. Your proposal has the additional problem that it doesn’t prevent centralization.
  • unnecessarily complicated: It was the simplest scheme we could think of that had the desired properties of leading to an equilibrium with k pools of equal size while offering Sybil protection. If anybody can suggest a simpler scheme with the same properties, we’ll certainly be happy to consider it. The formula may be complicated, but the ideas are actually quite simple: cap rewards at saturation to enforce decentralization, and make pledge matter to prevent Sybil attacks (see the sketch below).
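For reference, here is a rough, simplified Python sketch of the shape of the formula (illustrative parameter values; see the paper/design spec for the exact definition):

```python
# Rough sketch of the current ("a0") reward formula; the precise definition is
# in the paper / design spec. Parameter values here are purely illustrative.

def max_pool_reward(R, k, a0, sigma, s):
    """Potential reward of a pool with relative stake sigma and relative pledge s."""
    z0 = 1.0 / k              # saturation size
    sigma_ = min(sigma, z0)   # stake beyond saturation earns nothing extra
    s_ = min(s, z0)           # pledge beyond saturation gives no extra benefit
    return (R / (1 + a0)) * (sigma_ + s_ * a0 * (sigma_ - s_ * (z0 - sigma_) / z0) / z0)

# With a0 = 0 this is just R * min(sigma, 1/k): proportional rewards capped at
# saturation. With a0 > 0, a higher pledge raises the pool's potential reward.
R, k, a0 = 1000.0, 100, 0.3
print(max_pool_reward(R, k, a0, sigma=1/100, s=1/100))   # saturated, fully pledged
print(max_pool_reward(R, k, a0, sigma=1/100, s=1/1000))  # saturated, small pledge
```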

So we can deepen things if you wish. (Or, because I don’t want to be a burden leading an endless discussion, if you wish I can also leave you alone, and leave the community, while benevolently alerting you that from my point of view, I don’t see how keeping this rewards scheme will not lead to severe problems in the future.)

I’m indeed busy, as you can imagine, but I’m nevertheless happy to discuss, as long as we keep it rational and respectful. This is clearly a topic of great importance to the system, so we should try to explain it as well as possible.

We believe the scheme will work, but in the unlikely event that it doesn’t, we can always improve it.

6 Likes

Thanks a lot for this detailed answer.

With our proposal, attractive small pools will grow to saturation (if people behave rationally), in yours they won’t.

OK, I do agree that it does not necessarily lead to pools growing to saturation, although it seems the Nash equilibrium is still k saturated pools, because of shared operating costs (the same way you show that with purely proportional rewards r(σi) = σi·R the equilibrium is one big pool in section 2.2 of your paper). And actually in the same way that the ITN theoretically led to a k-pool Nash equilibrium (unless I missed something).

Anyway, to me this is an advantage, not a problem, because it pushes decentralization as far as people want to take it, within what is economically feasible. Can you explain why this would be a problem? From my point of view we should rather formulate things as: “We want the network to be decentralized, so we want to have at least k different pools, but there can be more if people want to.”

Your small-pledge pools would be unattractive, so people would instead delegate to pools that already have 1/k stake, so those pools would grow even bigger, which is exactly the centralization we want to avoid.

I do not understand why you say that. In my proposal, in terms of RoS it would be far more attractive to delegate to a small unsaturated pool than to a big over-saturated pool.

(Anyway, to solve a nasty attack that I would call “DDoS on a pool”, which concerns both your scheme and mine, I would advocate further changes to how rewards are distributed to delegates once a pool reaches saturation. I can explain more if you wish, because the attack also concerns you, it seems quite easy to carry out, and it will probably happen one day or another.)

The pledge-influence factor is quite low and won’t have a big effect on profits anyway.

Not on your direct profits on your pledge, but on your ability to attract delegates. (And if not, this means it is ineffective against Sybil attacks.) What is the use of having a pool if you cannot attract any delegates, when big pools attract 10 to 20 delegated ADA per pledged ADA?

The point is that even if some idea sounds good, it needs to be analyzed very carefully before it can be put into effect.

Of course, I agree with that.

Lead to an equilibrium with k pools of equal size (roughly)

As I said above it seems my model still leads to a Nash equilibrium with k pools, because of the operating costs that are shared. Don’t you agree with that?

I think you misunderstand the way “desirability” of pools is calculated: […] which roughly means we look at potential rewards, not actual rewards.

I think if you give people a “desirability” criterion that does not lead to higher profits, then people will not follow it. So this “desirability” criterion does not seem to me like something to rely on, because people won’t follow it (unless you intend to control people’s choices, in which case we’re headed towards something completely different! :wink: ). In particular, a new pool will necessarily take some time to grow to saturation (at least because of natural inertia), and in the meantime the first ones to delegate to it will really lose money. So no one will have an interest in doing it, even if your “desirability” indicator tells them to.

Also, I was already aware of that, and I wrote above “will people delegate according to what Daedalus tells them, or according to how much money they will make if they delegate cleverly with all publicly available data” (especially once bots are available for that and/or forks of Daedalus with different indicators).

We believe that unfairness is a price you have to pay for Sybil protection.

Do you think the scheme I proposed is not resistant to Sybil attacks?

As I tried to explain above, your suggestion is unfair in exactly the same way by making pools with higher pledge more attractive to users

My suggestion is fair in the sense that every ADA gives exactly the same rights whether it’s in the pocket of a large pool or a small pool, and in particular the same rights to rewards and to expected delegated ADA. Which is not the case in your scheme.

The cost of operating a node is an out-of-protocol thing that we cannot influence. So I don’t see how this makes my scheme unfair. The scheme itself is fair; something outside of it that we cannot control is not.

properties of leading to an equilibrium with k pools of equal size while offering Sybil protection

Do you think my scheme does not have these properties? If so can you explain why?

I’m indeed busy, as you can imagine, but I’m nevertheless happy to discuss, as long as we keep it rational and respectful.

Great then, let’s do that! :slight_smile:

Ps:

I think I get why you say that: you suppose that pools will set fixed fees according to their real costs, right? But we saw in practice on the ITN that many pools set their fixed fee to 0, including small pools, so I’m not sure we should make such an assumption.

1 Like

@pparent76: really impressed about your due diligence.

@Lars_Brunjes: maybe a stupid question. We have the Haskell Shelley Testnet. Can we try to simulate the different scenarios? I think that would be the best proof we can have. Or is this too difficult to simulate?

1 Like

Having at least k pools is not enough. Look at Bitcoin: They have many pools, but only four or five of them control more than 50% of hashing power. It’s not the number of pools that determine how decentralized a system is, but rather the number of pools that have a majority (of hashing power in the case of Bitcoin, of stake in the case of Cardano). It is absolutely essential for decentralization to prevent pools from growing too large.

Having at least k pools is not enough. Look at Bitcoin: They have many pools, but only four or five of them control more than 50% of hashing power. It’s not the number of pools that determine how decentralized a system is, but rather the number of pools that have a majority (of hashing power in the case of Bitcoin, of stake in the case of Cardano). It is absolutely essential for decentralization to prevent pools from growing too large.

Yes sure, I meant: have at least k pools and at most 1/k-th of the ability to create blocks for each pool. Sorry I did not make that precise, but that is what I had in mind.

If pool operators were rational, they would declare their true costs, yes. We argue in the paper why this is the case.

You are right that real people in real life (and on the ITN) don’t always behave rationally and that many declared very low costs. This was one of the reasons to introduce a minimal operational fee (initially set to $2000 per year) as a new parameter. This also adds Sybil protection, because with minimal operational fees, an attacker can’t make his Sybil pools more attractive by “dumping” costs.

Look at two pools with (close to minimal) costs of $30 per epoch. One is small and gets $50 from the rewards pot, the other is huge and gets $5000. Then rewards per delegated ADA will be roughly two and a half times as high for delegators to the big pool. True, they drop a bit when people oversaturate the big pool, but not as low as they would be when delegating to the small pool. So pools would grow larger than 1/k, which we want to prevent.
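To make the arithmetic concrete (a toy calculation, assuming pool rewards proportional to stake and zero margin):

```python
# Toy calculation for the two pools above: same $30 per-epoch costs, pool
# rewards assumed proportional to stake, margin assumed zero.

def delegator_fraction(pool_reward, costs):
    """Fraction of the pool's gross reward that reaches delegators."""
    return max(pool_reward - costs, 0.0) / pool_reward

small = delegator_fraction(pool_reward=50.0, costs=30.0)    # 0.40
big   = delegator_fraction(pool_reward=5000.0, costs=30.0)  # 0.994
print(big / small)  # ~2.5x higher rewards per delegated ADA in the big pool
```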

Under our scheme, the same would be true for actual rewards, but the small pool would be allowed to grow large (in spite of its lower pledge), and ranking would be determined by potential (non-myopic) rewards, so people would be attracted to the small pool and drive it to saturation.

If you have at least k pools and “at most 1/k of the ability to create blocks”, then the “block creating ability” of a pool would be influenced by the size of other pools (because total “block creating ability” must sum to one: each block has to be created by somebody).

This is bad, and it is a property of our scheme that I forgot to point out here until now: Rewards of one pool must be independent of what other pools do.

This is important, because if it was otherwise, pools would have incentive to sabotage each other. With our scheme, your own rewards are not influenced by the size of other pools.

I was not aware of this. You really want to kill small pools, don’t you? :wink: (Just joking here).

You may not like this, and it’s not the most urgent or most important point (that’s another debate), but I would instead advocate for:

  • Imposing 0 fixed fees for all pools
  • Having the same variable fee for every pool, voted/chosen in some way, on a regular basis.

This would set a sane basis for a real “representative democracy”, which you wanted to implement in the first place, as all pools would have exactly the same RoS, and then the only basis for a user’s choice of pool would be “who do I trust more?”.

By the way, you did not answer this (don’t you think it’s a problem?):

c-At the same time, the introduction of a0 and variable RoS also negatively impacts Sybil resilience in another way, because it incentivizes users to delegate according to RoS, so it discourages the natural tendency of users to delegate to someone or some organization they know directly and trust. This latter behavior is a very powerful brake on Sybil attacks.

Then large pools would be very profitable for the pool owners, and some smaller ones could still be profitable but less so, while the RoS would be the same for delegates. Let’s take your example again: a huge pool has $30 costs and gets $5000, so it’s highly profitable for the owner. On the other hand, a small pool has $30 costs and gets $50, so it’s much less profitable for the owner, but it still is profitable, so it is still viable. But you can delegate to both without any change in RoS. So it would preferably lead to only k pools (the Nash equilibrium), but maybe more if people distrust one another too much and want to create separate pools despite the financial incentives.

That will be all for me, for today, I will answer your other message tomorrow!

Cheers, and thanks for this very interesting discussion.

I really have a hard time following you in that last message.

But in your model you also give strong incentives to ‘have at least k pools and “at most 1/k of the ability to create blocks”’ by taking σ’ = min(σ, 1/k). So you obviously also want to have at least k pools and at most 1/k of the ability to create blocks for each pool, right? I really thought this sentence would be uncontroversial.

This is bad, and it is a property of our scheme that I forgot to point out here until now: Rewards of one pool must be independent of what other pools do.

Whether in my model or yours, we are organizing a competition between pools, right? Because if you take k = 100, there are obviously more than 100 pools that would like to reach saturation, but only a few will. You say it yourself: pools have to stay “competitive”.

I don’t see how, in a competition, your results (and therefore the rewards) could not depend on what others do, by definition. For example, if I and 999 other pool candidates would like to reach saturation, then if 900 of them give up I’m guaranteed to succeed, much less so if none gives up. So the behavior of other pools has a huge impact on my results, right?

Also, in your model, the RoS of my pool, its pledge, its fees, and so on, are to be compared with those of all other pools, right? If all other pools decide to increase their pledge, then I will probably have to do the same in order to stay competitive, otherwise my rewards will drop, because I will lose delegates, right? So we cannot say that “rewards of one pool must be independent of what other pools do”.

Also, in a “representative democracy” approach, which you wrote you wanted to develop, it is obvious that by nature the result of one “candidate” depends on what the others do.

I am sorry, I should have been clearer.

You are right, of course, that the parameters one pool sets influence the success of other pools and that in this sense, all pools are competing with each other. This is certainly a good thing: competition will keep the system efficient (high performance, low costs).

I should have been more precise: Our system has been designed to ensure that no pool has an incentive to sabotage another pool: if the performance of another pool is low and that pool gets fewer rewards than it could have, those “missed” rewards do not benefit other pools (instead, they go to the treasury).

And you are right - while this is all true, it is not directly related to the flaw in your idea. I apologize for that.

In your idea, pools with low pledge are forced to stay small (because of your maximal sigma / s ratio). So what happens with pools with high pledge? At first I thought you would allow them to become larger than 1/k. But then you said

So how do you achieve that?

How do you achieve the sigma / s ratio anyway?

Will you forbid people to delegate to a pool once that limit has been reached? Or kick them out of a pool if pool pledge goes down? What do those people do then? Delegate to another pool? What if there is no other pool? Can’t they delegate at all then? Even if they find another pool, that pool might be small and expensive, and people still are forced to go there.

Or do you, instead of forcing people to leave a pool (or preventing them from joining), somehow count their “votes” less? “Decrease their ability to create blocks”?
That would mean that people’s voting power in the system suddenly depends on which pool they have delegated to.

Note that in our system, voting power is strictly proportional to stake, as it should be in a “vanilla” proof-of-stake system. It’s the premise all security properties rely on.
In our system, rewards for exercising your voting power might depend on your delegation (although in equilibrium, they won’t), but at least “all stake is equal” as far as voting power is concerned.

Also note that in our system, there are no hard restrictions: People can delegate wherever they like. It may not be in their financial best interest, but nobody is stopping them from acting against those interests.

If I understand your suggestion correctly, you either have to prevent people from delegating to certain pools or - even worse - manipulate their voting power.

Even if there are enough pools for everybody to delegate to and you do not change the voting power of stake in any way, at the very least your system will lead to more than k pools, whereas ours (theoretically) leads to exactly k pools. We know there is a fat tail and that we will have more than k pools in reality, but your scheme is not even aiming at exactly k pools, so I believe there would be significantly more. That would put more strain on the network (maybe too much strain) and make the system much more expensive (because all those pools have operational costs that the Community has to pay for).

One result of our equilibrium that we prove in our paper is that without Sybil protection, the k cheapest pools will be the saturated ones, which means our system leads to a maximally efficient equilibrium. Sybil protection changes that, but in a controlled way - some efficiency is given up in exchange for protection.

I doubt that the same property holds for your scheme.

If you want to argue this point further, it would be helpful if you could be more precise about what exactly it is you are suggesting: how do you intend to enforce the σ/s ratio, and how do you intend to enforce the “at most 1/k ability to produce blocks” thingie?

3 Likes

So what happens with pools with high pledge?

You mean pools with pledge greater than 1/(k·b0), which we could call “saturated pledge”. Well, it’s simply useless for a pool to have such a high pledge. They could, but it doesn’t give them any additional advantage; they would be better off investing it in something else. They could also create a new pool with the remaining pledge, but I think that, in order not to fall back into Sybil attacks, users should not tolerate one organization or individual having more than one pool, and should not delegate to any secondary pool. And they should be taught to do that for the sake of network security.

Our system has been designed to ensure that no pool has an incentive to sabotage another pool

I think I can show you that in your system pools have incentives to sabotage others in a different way than you meant in that sentence, and have the ability to do it quite easily (the “DDoS on a pool” attack).

How do you achieve the sigma / s ratio anyway?

Very simple: if we take f(σ,s) = R · min(σ, 1/k, b0·s), there will obviously be strong incentives for people to stop delegating whenever the pool’s stake σ exceeds b0·s (the pool is then saturated), so we can be very confident that σ/s will stay below b0 (equivalently, the pledge ratio s/σ will stay above 1/b0). So we can control very precisely the pledge ratio we want with one single parameter.
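In code, the proposed formula is just this (a minimal sketch; here b0 is the maximum tolerated ratio of total stake to pledge):

```python
# Minimal sketch of the proposed formula: rewards proportional to stake,
# capped both at saturation (1/k) and at b0 times the pool's pledge.

def pool_reward(R, k, b0, sigma, s):
    """R: total rewards, sigma: pool's relative stake, s: pool's relative pledge."""
    return R * min(sigma, 1.0 / k, b0 * s)

R, k, b0 = 1000.0, 100, 10.0
# With pledge s = 1/(k*b0), the pool can absorb delegation up to full saturation:
print(pool_reward(R, k, b0, sigma=1/100, s=1/1000))   # 10.0 = R/k
# Stake beyond b0*s (or beyond 1/k) earns nothing, so rational delegators stop there:
print(pool_reward(R, k, b0, sigma=2/100, s=1/1000))   # still 10.0
```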

Will you forbid people to delegate to a pool once that limit has been reached?

No, people can delegate (and vote) the way they want, but I think we have to rethink the way we distribute rewards to delegates, so that someone who intentionally delegates to a saturated pool doesn’t get any reward at all (I have a very precise proposal for that). This is in any case crucial for resistance to the “DDoS on a pool” attack (including in your scheme). If we don’t do it, there will probably be problems.

and make the system much more expensive (because all those pools have operational costs that the Community has to pay for).

No, because there will be more than k pools only if people want it that way and are willing to pay the price. For the very reason you give, pools are still incentivized to grow to saturation whenever possible. Two small pools that merge will be more lucrative than if they did not merge, because they reduce costs for the same rewards. The only thing stopping them from doing so is lack of trust.

So if people distrust one another and prefer losing money to joining forces, then I don’t see any reason to prevent them from doing so. On the other hand, if people want to make as much money as possible, they will grow pools to saturation, and there will be k pools in my scheme as in yours.

[EDIT:] Just to make things very clear: equal RoS across pools for delegates does not at all mean equal return on investment for every pool owner.

If you want to argue this point further, it would be helpful if you could be more precise about what exactly it is you are suggesting: how do you intend to enforce the σ/s ratio, and how do you intend to enforce the “at most 1/k ability to produce blocks” thingie?

Now the only question I don’t have a definite answer to is whether f(σ,s) = R · min(σ, 1/k, b0·s) should be used only for reward calculation, or also to calculate the probability of validating a block. I would say it’s an open question; both options seem acceptable and have advantages and disadvantages. If we use it for the probability as well, then “at most 1/k ability to produce blocks” will be enforced; otherwise it will be very strongly incentivized. In practice I’m not sure we would see a huge difference between the two cases.

I will present the “DDoS on a pool” attack in detail when I have time.

What is this "DDoS on a pool" attack?

It takes advantage of 2 things:

1-“A pool with a too-low RoS will lose its delegates (at least a large part of them)”
2-“Delegating rather large amounts of ADA to a pool can lead to an important RoS drop”

  • a-In my scheme, by over-saturating a pool
  • b-In yours, by over-saturating a pool, or because of a0 (if the pool leader cannot keep up with the stake in terms of pledge)

An attacker might want the ability to take down a pool for several reasons:

  • To harm the Cardano network.

  • To blackmail the pool.

  • Because he thinks he can attract delegates that will have to leave that pool and find another one.

Here is how the attack would work:
1-Take control of a fairly large stake, but not that large either (at most 1/k needed)

2-Delegate it all at once to the targeted pool to make its RoS drop dramatically

3-Wait for other delegates of that pool to leave the pool

4-Undelegate the large stake from the pool, so that it has only little delegated stake left.

5-Repeat if needed, should the delegates rejoin the pool, until it’s clear to them that they will lose money if they rejoin.

6-Wait for the pool owner to give up and shut down its pool.

How to solve it in my scheme:

This is a proposal and it may be improvable (I don’t see any way to solve it in yours).

We need to change the way the reward obtained by the pool is distributed, inside the pool. We calculate reward per ADA within a pool this way:

reward_per_ada= obtained_reward_after_fees / min ( total_staked_ADA, 1/k , b0*total_pledge_ADA)

Of course, if the pool is saturated, it will not be able to give rewards at this rate to every ADA staked in it.

So suppose for each delegate i we have the following information:

  • t_i “time” of delegation (or re-delegation) to the pool
  • m_i the minimum amount of stake of the delegate since delegation
  • l_i live stake of the delegate (l_i >= m_i)

Here is the order in which to reward ADA in the pool, at the rate reward_per_ada, until no reward is left:

    1-Reward the pledge
    2-Reward delegates up to m_i by chronological order of t_i
    3-Reward delegates up to l_i-m_i by chronological order of t_i

This way, someone who delegates to a pool that is not saturated at that time is guaranteed a “normal” RoS for the amount of ADA delegated at that time, as long as he keeps his stake. On the contrary, an attacker who purposefully delegates to a saturated pool will get no reward at all.
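A rough sketch of this distribution rule (field names are illustrative; it assumes t_i, m_i and l_i are available for each delegator as above):

```python
# Sketch of the proposed in-pool distribution (illustrative names).
# ADA is paid at reward_per_ada in priority order until the pot is empty:
# 1) the pledge, 2) each delegator up to m_i (oldest delegation first),
# 3) each delegator's extra stake l_i - m_i (again oldest first).

def distribute(reward_after_fees, total_stake, saturation_stake, b0, pledge, delegators):
    """delegators: list of dicts with keys id, t (delegation time), m (min stake), l (live stake)."""
    reward_per_ada = reward_after_fees / min(total_stake, saturation_stake, b0 * pledge)
    pot = reward_after_fees
    payouts = {}

    def pay(key, amount_ada):
        nonlocal pot
        paid = min(amount_ada * reward_per_ada, pot)
        payouts[key] = payouts.get(key, 0.0) + paid
        pot -= paid

    pay("pledge", pledge)                                  # 1) pledge first
    by_age = sorted(delegators, key=lambda d: d["t"])
    for d in by_age:                                       # 2) up to m_i, oldest first
        pay(d["id"], d["m"])
    for d in by_age:                                       # 3) the extra l_i - m_i
        pay(d["id"], d["l"] - d["m"])
    return payouts
```

Here saturation_stake stands for the saturation size (1/k of the circulating stake) expressed in ADA, and the id/t/m/l field names are just placeholders for the per-delegate data described above.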

As always, I may be missing something, and I would love to hear your thoughts about all this. That will be all for me until tomorrow!

1 Like

Ps:

I just wanted to give a proper answer to this, to be very clear about what I propose at this point.

1- The central thing I propose is to replace the current reward formula (the one with a0) by f(σ,s) = R · min(σ, 1/k, b0·s). Nothing else. Nothing fancy; no further change is needed for this switch to make sense. (Let’s forget about changing the probability formula for now and suppose we leave it as you did: the probability of validating a block is strictly proportional to the real stake.)

2- In order to be resistant to the “DDoS on a pool” attack (and have other good properties), we should further improve the system by changing the way we distribute rewards inside a pool, as explained above.

3- If we want to improve things further, we can constrain pool fees as I said above.

And just to make sure I’ve replied 100%: if you implement points 1 and 2 from the latest message, you can state very precisely:

“Any pool containing only delegates who want to earn an RoS will always have a σ/s ratio under b0 and will control at most 1/k-th of the ability to produce blocks.”

This is true in particular for every Sybil-suspect pool that only attracts random delegates, because obviously people will only delegate randomly to a pool if it offers them an RoS.

Sorry for the multiple messages, and thanks for reading.

Summary

Apparently we are taking a pause in our discussion. So I’ll take this opportunity to thank you for this conversation, which has been very interesting so far, and to make a checkpoint and try to sum up what we’ve discussed, to make sure we’re on the same page.

Let’s focus on the two formulas’/schemes’ properties, comparison, and flaws, because we’ve been dispersing into many side discussions that might not be that central.

Please correct me if you don’t agree with the following, or if I missed something.

Current reward scheme

(the current formula, with a0)
You admitted that it has the following bad properties, for lack of a better alternative:

  • Not really fair

  • Complicated.

You did give an answer for the following problem:

  • The cartel problem: You said it would not happen because of a « desirability » indicator and pool ranking that you will provide to people. I told you that I think people will not follow an indicator in the long run if it is not really in their best interest, so the existence of an indicator cannot solve or change anything in itself, compared to the incentives the protocol itself really gives to people. You have not yet replied any further.

You have not yet given a detailed answer to the other flaws I pointed out:

  • Partial protection against Sybil attacks, especially the nasty side effect of introducing a0 on Sybil resilience, discouraging people from delegating according to who they know and trust.

  • « DDoS on a pool », which does not seem easily solvable with a0.

The reward scheme I propose

If we simply switch to the formula f(σ,s) = R · min(σ, 1/k, b0·s) (whether or not we also implement the change to how rewards are distributed within a pool, but without any other change), I claim that we have the following good properties:

  • Nash equilibrium with k pools

  • Sybil resistant

  • Fair (i.e. every ADA gives the same rights, whoever owns it)

  • Simple to understand

  • Not vulnerable to cartel attacks

It is really unclear to me, at this point, whether you admit that it has all these properties or not. If you don’t, can you say clearly which one(s) you think it does not have, and which one(s) you admit it has, so that we can work through demonstrations for the ones we disagree on?

If we also implement the change to how rewards are distributed within a pool, we also have:

  • Resistance to « DDoS on a pool »

You said multiple times that you doubted the scheme could have all the good properties, but you have not yet pointed to any precise property that it is definitely missing and that your scheme has. Or am I missing something?

2 Likes

A very big thank you to @Lars_Brunjes and @pparent76 for all the time you are putting into this discussion. I am following along and learning a lot, even given the limitations of my mathematical abilities.

When we are talking about “big” pools, how is “big” defined? Amount of total stake, amount of pool leader stake, total costs to run the pool, or something else?

I can’t follow this math. Do both pools have the same yearly costs of $3000?

PS when I mentioned the “limitations on my mathematical abilities” I meant that I can’t follow some of the complex notation used in the Rewards function and associated proofs. I did not mean that I lack the ability to do basic algebra :wink:

How is the total amount of rewards for each epoch calculated? I think the sources for the rewards are transaction fees and monetary expansion. But is there a general way to conceptualise how the total “pot” for any Epoch is determined?

When you say that the “performance of a pool is low” do you mean that the pool does not create some of the blocks for the slots where that pool was chosen by lottery to be the slot leader?

Are pool rewards tied to the absolute number of blocks created? Or to the proportion of blocks created compared to the total of elected slots?

I’m trying to understand the connection between the total amount of stake in the pool (which I think influences the opportunity to create blocks) and the distribution of rewards.

1 Like

“voting power” is a new concept for me. Are you referring to the governance concepts that will come with Voltaire?

And can you say more about how “rewards for exercising your voting power might depend on your delegation”?

Hi Allison,

thank you! Glad it helps explaining things!

Total stake delegated to the pool, which includes the owner’s pledge (technically, the pledge is just what the owner delegates to his or her own pool).

Yes, that was my assumption. The justification is that operational costs don’t really depend on the size of the pool (size in the sense above).

I never assumed you did. :smiley:

2 Likes