How to Improve Daedalus Rankings

Below is my proposal for improving Daedalus rankings.
I’m in the process of submitting this as a Cardano Improvement Proposal (CIP).
Please let me know what you think.

How to make Daedalus rankings less heavily weighted by stake and thus fairer.

Modify the current ranking equation by decreasing the influence of stake in the rankings, in order to give fairer rankings and improve the viability of the stake pool operator community and the network overall. To do this we need to remove the stated goal of having k fully saturated pools, with all other pools holding no stake other than owner pledge, which goes against the Cardano goal of decentralization.

There are two main reasons for changing the current ranking equation.

  1. Allow for more than k successful stakepools.

  2. Provide better decentralization away from a very few stakepool operators creating many pools.

This is a modification of the desirability function defined in section 5.6.1 Pool Desirability and Ranking of “Shelley Ledger: Delegation/Incentives Design Spec. (SL-D1 v.1.20, 2020/07/06)” as follows:

d(c, m, s, p) :=
0 if f(s, p) <= c,
(f(s, p) − c) * (1 − m) otherwise.

where:
c = fixed cost
m = variable margin
s = pledge
p = apparent performance
f(s, p) = pool rewards
o = pool stake

The idea is to divide the desirability by o in the above equation to give a better indication of the value to the delegator.
This gives:

d(c, m, s, p, o) :=
0 if f(s, p) <= c,
((f(s, p) − c) * (1 − m)) / o otherwise.

In order to easily accommodate backwards compatibility and provide a range of effect for stake, we can introduce a new parameter called stake_decentralization, denoted sd.

d(c, m, s, p, o, sd) :=
0 if f(s, p) <= c,
((f(s, p) − c) * (1 − m)) / (1 + (o * sd)) otherwise.

where stake_decentralization, sd, is any real number between 0 and 1 inclusive.

Setting sd to 0 restores the original desirability function.
Setting sd to 1 provides the fairest rankings across all pools.
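As a sketch of how the parameterized formula behaves (function and variable names are illustrative, not from any Cardano codebase):

```python
def desirability(pool_rewards, cost, margin, stake, sd):
    """Proposed stake-aware desirability d(c, m, s, p, o, sd).

    pool_rewards = f(s, p), cost = c, margin = m, stake = o,
    and sd = stake_decentralization in [0, 1].
    """
    if pool_rewards <= cost:
        return 0.0
    return (pool_rewards - cost) * (1 - margin) / (1 + stake * sd)
```

With sd = 0 the divisor is 1 and the original desirability is returned unchanged; with sd = 1 the desirability is fully normalized by pool stake.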

This desirability result can then be made non-myopic for ranking purposes as follows:

dnm[n] :=
average(d[1]…d[n]) if n <= (1 / h)
(dnm[n - 1] * (1 - h)) + (d[n] * h) otherwise

where:
n = epoch number beginning at n = 1 in the first epoch that the pool is eligible for potential rewards.
dnm[n] = the non-myopic desirability of the pool in the nth epoch.
h = smoothing factor, which is any real number between 0 and 1 exclusive; history is weighted by (1 - h) and the current epoch by h.

As an example, setting h to 0.1 would mean that you would use the average of all epochs for the first 10 epochs and after that you would use 90% of the previous epoch’s non-myopic historical desirability and 10% of the current epoch’s desirability to arrive at the new non-myopic desirability.
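A minimal sketch of this recurrence (using the blend weights from the example above, i.e. 90% history and 10% current when h = 0.1; function and variable names are mine):

```python
def non_myopic(desirabilities, h):
    """Non-myopic desirability per epoch, for 0 < h < 1.

    For the first 1/h epochs, use the running average of the pool's
    per-epoch desirabilities; afterwards, blend (1 - h) of the previous
    non-myopic value with h of the current epoch's desirability.
    """
    dnm = []
    for n, d in enumerate(desirabilities, start=1):
        if n <= 1 / h:
            dnm.append(sum(desirabilities[:n]) / n)
        else:
            dnm.append(dnm[-1] * (1 - h) + d * h)
    return dnm
```

A pool with constant desirability keeps a constant non-myopic value, while a run of good or bad epochs only shifts the ranking gradually.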

Using this stake-aware desirability equation gives a much fairer ranking of stakepools based on performance and pledge, which will encourage delegators to choose better pools.
It will also bring the rankings more in line with the general Cardano principle of increasing decentralization.

This proposal is backwards compatible with the current desirability by setting the stake_decentralization, sd, to 0.

9 Likes

@shawnim this is outstanding. I remember you posted a math tweak like this a few months back & I’m happy to see it concretely presented as an Improvement Proposal (edit: this was a different issue). The unbalanced distribution of pool stake comes up frequently on this forum & other social media and I will happily link to this thread as a means of doing something about it. Once you have an official CIP, Github or any other page with a more official expression please let us know and we’ll use that.

This makes far more sense in the long term than the notion a lot of people are clamouring for, which once you unclutter & formalise it would be weighting the block producer election in favour of pools with smaller stake. What they want is understandable (more chance of producing a block, for the sake of survival) but I can imagine how difficult it would be for nodes to achieve consensus if the election probability was defined as such a nonlinear, time-variant function.

The real vulnerability the small pools are feeling is invisibility, as well as an engineered undesirability, under the current arrangements & this solution would start addressing the problem at the source and ultimately solve the real inequity as well as the perceived one. Those claims of centralisation are not just the problem of bickering newcomers looking for “more than their fair share”… they’re carried into perceptions of Cardano by competing cryptocurrency supporters and skeptical investors, and therefore reflected in the value of ADA itself.

Yesterday I wrote about some of these things here (which under the circumstances came off as a pool pitch) in response to the week’s media reports that DeFi is growing like mad on Ethereum. The sudden, astonishing increases in transaction volume after Goguen release will hit Cardano in the same way, and if pool participation keeps shrinking according to a socially engineered Nash Equilibrium we will have an insufficient number of robust pools to handle a transaction load that could increase 100-fold overnight.

1 Like

@COSDpool Thanks for this excellent analysis and response!

The mathematical models that force the system into k saturated pools fail to take into account the people who built the 1000 other pools, who are critical early adopters needed to support the huge growth we all hope will come to Cardano.

I do not know of any compelling reason why 150 fully saturated pools would run a better network than 500 pools with varying levels of saturation. In fact I think the 500 pools will provide a more decentralized and resilient network capable of handling sharp increases in demand.

Raising k, which seems to be the popular answer, does not solve this problem.
You have to remove the goal of k saturated pools with all other pools killed off, and provide pool rankings that are based on costs, pledge and performance, not popularity.

3 Likes

The CIP is at:

1 Like

@COSDpool BTW, the “math tweak” you mention was probably my other CIP which is at:


on making pledge meaningful for reasonably sized pledges.
1 Like

The ranking is a bit more complicated than it looks. Please read my layman’s explanation of the ranking in Daedalus (and also the more detailed chapters) to fully understand how it works. Also, please keep in mind that apparent performance is not used in the desirability, but rather the hit rate estimates; see the ranking specification in the reference section of my gist below.

2 Likes

@_ilap Thanks for that excellent article!
I did have a misunderstanding about the desirability equation.
It turns out we don’t need to modify the desirability equation at all.
We just need to change the non-myopic ranking.
I will update the pull request and post here when that is done.

1 Like

Cool, glad to hear that.

1 Like

Here’s the updated text of the CIP which is now called Non-Centralizing Rankings.

Make Daedalus rankings fairer and non-centralizing by modifying the ranking methodology.

Modify the current ranking system by removing the centralizing Nash Equilibrium goal of the ranking methodology, in order to give fairer rankings and improve the viability of the stake pool operator community and the network overall. To do this we need to remove the stated goal of having k fully saturated pools, with all other pools holding no stake other than owner pledge, which goes against the Cardano goal of decentralization.

There are two main reasons for changing the current ranking methodology:

  1. Allow for more than k successful stakepools.

  2. Provide better decentralization away from a very few stakepool operators creating many pools.

This is a modification of the ranking methodology defined in section 5.6 Non-Myopic Utility of “Shelley Ledger: Delegation/Incentives Design Spec. (SL-D1 v.1.20, 2020/07/06)” as follows:

  1. Remove the following statement from section 5.6:

“The idea is to first rank all pools by “desirability”, to then assume that the k most desirable
pools will eventually be saturated, whereas all other pools will lose all their members, then to
finally base all reward calculations on these assumptions.”

  2. Remove the following statement from section 5.6.1:

"We predict that pools with rank ≤ k will eventually be saturated, whereas pools with rank > k will lose all members and only consist of the owner(s)."

  3. Add the following to section 5.6.1:

For all pools with proposed_pool_stake greater than saturation_warning_stake add k to their rank.
Where:
proposed_pool_stake = pool_live_stake + proposed_user_stake
saturation_warning_stake = (total_stake / k) * saturation_warning_level
saturation_warning_level is a real number greater than 0 representing the percentage of saturation which is undesirable. A proposed value for saturation_warning_level is 0.95, meaning 95% saturated.

For example, if a pool has non-myopic desirability rank of 3, pool_live_stake of 207m ADA, proposed_user_stake of 100k ADA with total_stake of 31.7b ADA, k = 150 and saturation_warning_level = 0.95, we would calculate:
207m + 100k > (31.7b / 150) * 0.95
and see that
207.1m > 200.8m
is true so we would change the pool rank to 153 (3 + k) and all pools previously ranked 4 through 153 would move up 1 rank.

  4. Remove section 5.6.2.

  5. Remove section 5.6.3.

  6. Remove section 5.6.4.

  7. Add to section 5.6.5.

For example, apparent performance, desirability and ranking can be made non-myopic for ranking purposes as follows:

dnm[n] :=
average(d[1]…d[n]) if n <= (1 / h)
(dnm[n - 1] * (1 - h)) + (d[n] * h) otherwise.

where:
n = epoch number beginning at n = 1 in the first epoch that the pool is eligible for potential rewards.
dnm[n] = the non-myopic desirability of the pool in the nth epoch.
h = smoothing factor, which is any real number between 0 and 1 exclusive; history is weighted by (1 - h) and the current epoch by h.

As an example, setting h to 0.1 would mean that you would use the average of all epochs for the first 10 epochs and after that you would use 90% of the previous epoch’s non-myopic historical desirability and 10% of the current epoch’s desirability to arrive at the new non-myopic desirability.

Using this non-centralizing ranking methodology gives a fairer ranking of stakepools based on performance, pledge and saturation, which will encourage delegators to choose better pools.
It will also bring the rankings more in line with the general Cardano principle of increasing decentralization.

This proposal does not break backwards compatibility because it is an off-chain change.
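The saturation rank adjustment described above can be sketched as follows (function and variable names are hypothetical; pools are given in desirability-rank order, with index 0 being rank 1):

```python
def saturation_adjusted_ranks(pool_live_stakes, proposed_user_stake,
                              total_stake, k, saturation_warning_level):
    """Add k to the rank of any pool that the proposed delegation
    would push past the saturation warning threshold; the remaining
    pools move up to fill the vacated ranks."""
    threshold = (total_stake / k) * saturation_warning_level
    flagged = [stake + proposed_user_stake > threshold
               for stake in pool_live_stakes]
    ranks = [0] * len(pool_live_stakes)
    next_rank = 1
    for i, over in enumerate(flagged):
        if over:
            ranks[i] = (i + 1) + k   # original rank plus k
        else:
            ranks[i] = next_rank
            next_rank += 1
    return ranks
```

Reproducing the worked example: a pool ranked 3 with 207m live stake and a proposed 100k delegation, against total_stake = 31.7b, k = 150 and warning level 0.95, moves to rank 153 while the pools behind it each move up one place.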

4 Likes

In case it helps anyone there was a “new document on the details of pool ranking (#1816)” announced in the 1.20.0 node release, which points ultimately to this (Stake Pool Ranking in Cardano dated September 8, 2020):

1 Like

Yeah, that is why I calculated the hit rate estimates (it is 0.775 for almost all pools) in the examples in my gist on Cardano ranking.

1 Like

Some feedback on this:
There is more than one “Nash Equilibrium” possible, and many of them represent “local maxima” but are otherwise sub-optimal; it’s probably useful to make sure that everyone agrees on what the optimization goal is.

In English: we need to nail down what we want this calculation to measure.

For example, we might assert that the “non-myopic” nash equilibrium needs to exhibit the following characteristics:

  • Maximizes the total rewards available in the system
  • Supports running pools as long-term viable businesses
  • Does not forcibly kill off pools outside K
  • Does not incentivize multiple new pools creation to game the system by having some pools “get lucky”
  • Following the rankings at each step should maximize the real returns and converge toward an equilibrium that meets the other criteria

Furthermore, we might observe that using an optimization that seeks to maximize next-epoch returns for delegators fails to meet the criteria above. (We can observe this empirically.)

The implication is we may need to evaluate the fitness of more than one measure:

  • the realized rewards from last-n epochs (including the effects of luck)
  • the expected returns for next epoch (excluding luck)
  • the expected returns for next-n epoch (excluding luck, but assuming saturation)
  • a long-term measure that accounts for maximizing rewards and system stability

I also suggest some measure as to whether variability in block creation is due to random chance vs a signal that a pool is not doing a good job creating its blocks. For example, 95% certain that underproduction is due to poor pool operation rather than random chance. (Assertion: block over-production should be disregarded in all circumstances, as noise.)
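One possible form such a measure could take (my own illustration, not part of the proposal) is a one-sided binomial test: if a pool was scheduled for some number of blocks and produced fewer, we can ask how likely that shortfall is by chance alone, assuming a well-run pool succeeds on each scheduled block with some benchmark probability.

```python
from math import comb

def underproduction_p_value(scheduled, produced, p_ok=0.95):
    """One-sided binomial tail: probability of producing `produced`
    or fewer blocks out of `scheduled` purely by chance, assuming a
    well-run pool succeeds on each scheduled block with probability
    p_ok. A small value (< 0.05) suggests genuine underperformance
    rather than bad luck. p_ok = 0.95 is an assumed benchmark, not
    a spec value."""
    return sum(comb(scheduled, i) * p_ok**i * (1 - p_ok)**(scheduled - i)
               for i in range(produced + 1))
```

A pool that produced 10 of 20 scheduled blocks would fail such a 95% test, while one that produced 19 of 20 would not; small pools with very few scheduled blocks would rarely be flagged, which matches the point about randomness dominating at small sample sizes.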

5 Likes

Good to have some feedback.
While reading about this or another “Nash equilibrium” and “not forcibly killing”, I wonder if we absolutely need to focus on one equilibrium.
Would it be possible, and make sense, to capture and list a couple of key indicators, perhaps even pre-calculate some higher-level values such as desirability, and then let the consumer decide what preferences to set, sort by, and then choose?

The current ticker presentation has a huge focus on an ordered list of numbers counting from 1 up to the last pool. Visually this number takes up by far most of the visible area.

Instead, a vertical list of pools with more detailed metrics would give much more information and “built-in” sorting functionality.

That leaves a lot of room for the different measures mentioned above.

Also might be worth a thought:
Colouring pools close to saturation, and slightly above it, orange makes sense (the range depends on the individual stake one holds in one’s wallet).
Also a strong red for everything above that.

But then it might make sense to use different shades of green for not-fully-saturated pools. This is a visual indicator for all the pools delegators should look at.
A light green could indicate a small pool with a lot of room for delegations. Then have a gradient to dark green for almost fully saturated pools.

Just an idea …

4 Likes

Are we talking about operation or ownership? Those two can be different, and it would mean different things in the two scenarios (when operator = owner, probably small businesses; when operator != owner, probably whales outsourcing their operation to ops, or in-house operation).

I think it just makes the rational assumption that a pool with that characteristic would not survive long term, and acts based on that assumption.

Also, a pool with 0 delegated stake (but some pledge) could still earn rewards weighted by a0 (around ~5% per annum), assuming it creates its blocks when they are scheduled, even while being removed from the favourable pools.

In simple words, this could be eliminated if the pools’ ROI did not follow the characteristics seen in the attached graph below but were instead weighted by pledge. I.e. there would be no incentive to create multiple pools with smaller pledge, because splitting into n low-pledge pools would give each a smaller yield than keeping a single pool. Of course, the operators’ ROI should be higher than the delegators’ ROI to incentivise them to operate.

Anyway, that killing mainly applies to the small businesses (op=owner) who do not have enough pledge to compensate their cost with rewards, i.e. cost >= pool reward. But it simply means a mum can run a pool at home, running cardano-node on an RPi4 with a relatively small pledge, and could gain ~5% yearly ROI on her pledge.

So, the killing of pools simply means they will earn only on their pledge (~5% per annum), and if that reward is not enough to cover their cost plus some yield, they have the option to delegate their stake (their pledge at that time) to other pools and shut down their operation if they want.

It’s like a “private pool” in the sense that they do not have any delegated stake beyond their pledge.

Large businesses should have enough pledge to cover their operational cost (usually the 340 ADA per epoch); if they cannot, then they are not large businesses.

So, I think, it does not mean killing off pools outside of k, but simply means the rest would become, in effect, private pools: killed as businesses.

Then the current RSS should be amended. Have a look at this graph, which shows the pools’ ROI proportional to their pledge. As you can see, the sweet spot (not in an economic sense) is at 0.000531 (the minimum of the function), which in ADA is 16,885,800 pledge for a minimum 6.4% yearly ROI, meaning that is the minimum of the rewards. So, the whales are incentivised to pledge fully to gain maximum reward, while the small pools are incentivised to split their pledge (like the 1PCT, ZZZ, BLOOM etc. pools).

A small pool with 1.3m will be able to gain maximum reward as a fully pledged pool.
You can see that this function is not optimal for controlling small pools, but incentivises them to split their pledge.

You can check the functions and tune the parameters of the plot in Desmos, which I have just created to explain/describe the issue.

UPDATED: added delegators yearly ROI in % (red, while the Pools yearly ROI is blue)

image

Yes, you are right: luck works against small pools if we consider performance under the assumption that they were fully saturated, because in that case their performance would be nearly 100% if they are playing honestly (which should not be relevant for the ranking calculation). At the moment, though, they are suffering from luck because of their low pledge and/or stake.

We should assume that performance will be nearly constant for pools that have some saturation (it now also depends on d).

I think that’s a false assumption with the Haskell code base and the current settings, as we need to distinguish between the small pools (in terms of stake), which are suffering from d, and the pools with a high amount of stake, since the threshold for creating a block depends on d and delegated stake (pledge included).

Also, if a pool has a 50% chance of generating a block in an epoch, it will create (on average) a block every two epochs; that does not mean it is playing nasty, only that it does not have luck.

So, a pool must have a certain stake to be a successful part of staking, i.e. 1/(21600*d), currently around 4.5m ADA (450K USD, pledge included), or around 1.5m when d = 0.

It’s a bit more complex, but if a pool wants to be safe then it should pledge that amount, which is not really feasible for small businesses. Of course IOG tried to help pools bootstrap, but they are not and won’t be favoured if luck is initially considered in the non-myopic ranking of the pools.

5 Likes

Thanks for the great discussion!

Colin: Supports running pools as long-term viable businesses

Pal: Are we talking about operation or ownership? Those two can be different, and it would mean different things in the two scenarios (when operator = owner, probably small businesses; when operator != owner, probably whales outsourcing their operation to ops, or in-house operation).

Shawn: It’s a good idea to take all 3 perspectives into consideration (owner, operator, owner/operator).

Colin: Does not forcibly kill off pools outside K

Pal: I think it just makes the rational assumption that a pool with that characteristic would not survive long term, and acts based on that assumption.

Shawn: The only reason a pool ranked at k + 1 would not survive long term is because the current rankings system is designed to kill them off. There is likely no real difference between pool k and pool k + 1. This goes back to what I believe is a false assumption that an “optimal” network has exactly k fully saturated pools and all other pools with no delegation.

Pal: Also, a pool with 0 delegated stake (but some pledge) could still earn rewards weighted by a0 (around ~5% per annum), assuming it creates its blocks when they are scheduled, even while being removed from the favourable pools.

Shawn: There is very little reason to go through all the trouble and cost to run a pool with no delegation unless you are a whale with massive pledge. It’s more efficient to delegate to a pool with pledge higher than your delegation that has low fees.

Pal: In simple words, this could be eliminated if the pools’ ROI did not follow the characteristics seen in the attached graph below but were instead weighted by pledge. I.e. there would be no incentive to create multiple pools with smaller pledge, because splitting into n low-pledge pools would give each a smaller yield than keeping a single pool. Of course, the operators’ ROI should be higher than the delegators’ ROI to incentivise them to operate. Anyway, that killing mainly applies to the small businesses (op=owner) who do not have enough pledge to compensate their cost with rewards, i.e. cost >= pool reward. But it simply means a mum can run a pool at home, running cardano-node on an RPi4 with a relatively small pledge, and could gain ~5% yearly ROI on her pledge.

Shawn: I believe those small businesses being killed off are much more valuable to running a stable and growing network than a few hobbyists running nodes at home.

Pal: So, the killing of pools simply means they will earn only on their pledge (~5% per annum), and if that reward is not enough to cover their cost plus some yield, they have the option to delegate their stake (their pledge at that time) to other pools and shut down their operation if they want.

Shawn: This is precisely the problem. There is no reason to kill off this valuable resource that will especially be needed as Goguen is released and there are much higher demands on the network. This seems very short-sighted and destructive for what appears to be solely an academic mathematical goal.

Pal: It’s like a “private pool” in the sense that they do not have any delegated stake beyond their pledge. Large businesses should have enough pledge to cover their operational cost (usually the 340 ADA per epoch); if they cannot, then they are not large businesses. So, I think, it does not mean killing off pools outside of k, but simply means the rest would become, in effect, private pools: killed as businesses.

Shawn: Which does essentially mean they are killed off. I don’t think we want to run the world financial operating system on a few preselected early winners and a few hobbyists.

Colin: Does not incentivize multiple new pools creation to game the system by having some pools “get lucky”

Pal: Then the current RSS should be amended. Have a look at this graph, which shows the pools’ ROI proportional to their pledge. As you can see, the sweet spot (not in an economic sense) is at 0.000531 (the minimum of the function), which in ADA is 16,885,800 pledge, meaning that is the minimum of the rewards. So, the whales are incentivised to pledge fully to gain maximum reward, while the small pools are incentivised to split their pledge (like the 1PCT, ZZZ, BLOOM etc. pools). A small pool with 1.3m will be able to gain maximum reward as a fully pledged pool. You can see that this function is not optimal for controlling small pools, but incentivises them to split their pledge. You can check the functions and tune the parameters of the plot in Desmos, which I have just created to explain/describe the issue. Desmos ROI (Pledge Versus Reward)

Shawn: I believe this may be addressed by my Curve Pledge Benefit CIP.

Colin: The implication is we may need to evaluate the fitness of more than one measure:
the realized rewards from last-n epochs (including the effects of luck)
the expected returns for next epoch (excluding luck)
the expected returns for next-n epoch (excluding luck, but assuming saturation)
a long term measure than accounts for maximizing rewards and system stability

Pal: Yes, you are right: luck works against small pools if we consider performance under the assumption that they were fully saturated, because in that case their performance would be nearly 100% if they are playing honestly (which should not be relevant for the ranking calculation). At the moment, though, they are suffering from luck because of their low pledge and/or stake. We should assume that performance will be nearly constant for pools that have some saturation (it now also depends on d).

Colin: I also suggest some measure as to whether variability in block creation is due to random chance vs a signal that a pool is not doing a good job creating its blocks. For example, 95% certain that underproduction is due to poor pool operation rather than random chance. (Assertion: block over-production should be disregarded in all circumstances, as noise.)

Pal: I think that’s a false assumption with the Haskell code base and the current settings, as we need to distinguish between the small pools (in terms of stake), which are suffering from d, and the pools with a high amount of stake, since the threshold for creating a block depends on d and delegated stake (pledge included).

Shawn: The randomness has a bigger impact on smaller pools, so you need a longer timeframe to reduce the variability due to randomness. Excluding overproduction probably does not help much: you are chopping off the half of the bell curve above the expected number of blocks, but you still have the variability on the lower side of the bell curve, which probably takes (almost) as long to determine apparent performance.

Pal: Also, if a pool has a 50% chance of generating a block in an epoch, it will create (on average) a block every two epochs; that does not mean it is playing nasty, only that it does not have luck. So, a pool must have a certain stake to be a successful part of staking, i.e. 1/(21600*d), currently around 4.5m ADA (450K USD, pledge included), or around 1.5m when d = 0. It’s a bit more complex, but if a pool wants to be safe then it should pledge that amount, which is not really feasible for small businesses. Of course IOG tried to help pools bootstrap, but they are not and won’t be favoured if luck is initially considered in the non-myopic ranking of the pools.

Shawn: If you have a non-myopic view of apparent performance that is the average of the actual blocks divided by the expected blocks over an initial period (say 10 epochs), and then a function as I described that weights the current apparent performance appropriately against the ongoing historical apparent performance, you can get an idea of the pool’s performance that includes its entire history but is weighted towards the most recent epochs.

1 Like

I also have a CIP in the works (Stake URI scheme for pools & delegation portfolios) addressing the same overall problem, which has been falling on dead air. This is how I phrased that vulnerability, as a summary of how a bunch of highly qualified nobodies in the Cardano ecosystem see the problem on the horizon:

I’d like to contribute more to the practical discussion, but generally cannot since @shawnim has already said everything that I would have said. I can only try to summarise the feelings of hundreds of people who are either too intimidated or too preoccupied with the social melee to voice themselves here.

I’m astonished that there aren’t more SPOs submitting +1’s about this proposal. This is one of the best conceived efforts to reform the rules of a game that was literally rigged from the beginning. Amazingly, the ones who are most adversely affected think the solution is in video production and social networking: which in the current arrangement is like trying to get a better berth on a sinking ship.

The CIP instructions indicate these comment threads are not only to refine proposals technically but also to assess a proposal’s urgency according to community input. I wish I could bring more people into this discussion to share their concerns about what will happen to Cardano and ADA if the world continues to see Shelley as a media-rich pump & dump that only produced an insincere decentralisation (in the protocol perhaps, but not the ecosystem).

As a bottom ranked SPO who missed the “early winners” snapshot late last year I’ve had a lot of time to see how the ecosystem looks from the absolute bottom. That huge bottom segment that IOHK and Emurgo still consider irrelevant from a “game theoretical” perspective (which other theoreticians might consider a source of resilience, redundancy and future proofing) contains amateurs, imitators, crooks, heroes, experts, and future thought leaders. Not all the inspiration about how a “world financial operating system” might develop is coming from the top. :face_with_monocle:

3 Likes

I would not want such active community members as all of you in this and other forum threads to become simple delegators.
The existing rating clearly does not contribute to decentralization or the growth of active community members.
If there were a way to give all of you a plus that would be taken into account in the pool rating, I would do so.
Our pool has an average pledge of 1.5M ADA plus low fees, but we are also on the sidelines. We could increase our pledge further, but it would not bring benefits. For now, we continue to hope for a change in the network parameters and an increase in the literacy of delegators, who will look at more than just the rating in the wallet.

1 Like

Thanks for this excellent reply which I think expresses the level of frustration with the system that most SPOs feel.

So everyone understands, a Nash Equilibrium is a stable state of a system involving the interaction of different participants, in which no participant can gain by a unilateral change of strategy if the strategies of the others remain unchanged.

The underlying problem that has created the current destructive situation is the assumption that a Nash Equilibrium is desirable.

I believe this assumption goes against both the overall goal of Cardano to be a decentralized global system and the reality that the Cardano network and ecosystem of SPOs is a dynamic and decentralized system.

The current ranking system is instead heavily influencing the network towards a stable and centralized state.

I made an adjustment to item 7 to better handle newer pools.
The text is now as follows:

  7. Add to section 5.6.5.

For example, apparent performance, desirability and ranking can be made non-myopic for ranking purposes as follows:

dnm[n] :=
average(d[1]…d[n], and[n + 1]…and[i]) if n < i
average(d[1]…d[n]) if n = i
(dnm[n - 1] * (1 - h)) + (d[n] * h) otherwise.

where:
n = epoch number beginning at n = 1 in the first epoch that the pool is eligible for potential rewards.
dnm[n] = the non-myopic desirability of the pool in the nth epoch.
d[n] = the desirability in the nth epoch unaware of historical desirability.
and[n] = the average desirability of the network as a whole in the nth epoch unaware of historical desirability.
h = smoothing factor, which is any real number between 0 and 1 exclusive; history is weighted by (1 - h) and the current epoch by h.
i = integer(1 / h), which is the initial number of epochs during which we use the average desirability.

As an example, setting h to 0.1 would mean that the initial number of epochs for using the averaging functions (i) would be 10. If a pool has been eligible to receive rewards (n) for 3 epochs then we use the average of the pool’s desirability for those 3 epochs and the overall network desirability for the prior 7 epochs. After the 10th epoch we would use 90% of the previous epoch’s non-myopic historical desirability and 10% of the current epoch’s desirability to arrive at the new non-myopic desirability.

This gives a more reasonable ranking for newer pools that do not have enough historical data to provide fair rankings.
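The adjusted recurrence can be sketched as follows (illustrative names; network_avg_d holds the per-epoch network-wide average desirability used to pad a young pool’s window, and the blend weights follow the 90%/10% example above for h = 0.1):

```python
def non_myopic_with_network(pool_d, network_avg_d, h):
    """Non-myopic desirability with network-average padding for new
    pools, for 0 < h < 1 and i = int(1 / h) initial epochs."""
    i = int(1 / h)
    dnm = []
    for n in range(1, len(pool_d) + 1):
        if n < i:
            # Pad the i-epoch window with the network average.
            window = pool_d[:n] + network_avg_d[n:i]
            dnm.append(sum(window) / i)
        elif n == i:
            dnm.append(sum(pool_d[:n]) / n)
        else:
            dnm.append(dnm[-1] * (1 - h) + pool_d[n - 1] * h)
    return dnm
```

A brand-new pool therefore starts at roughly the network average and its own record gradually takes over the window as epochs accumulate.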

2 Likes

Without going into all the mathematical equations, it seems perfectly obvious that the ranking is supposed to help delegators decide where to delegate. The size of a pool does not really have anything to do with this anymore, since it has been decided that “pledge” should not really have any influence. The ranking should be based on a pool’s performance in terms of ROS and also reliability (downtime etc.). The current system centralises delegation to the top pools. For what reason remains mysterious.