Correct. Lowering k will make a given pledge amount in ADA a smaller percentage of the saturation point and will lower the pledge benefit. However, increasing a0 won’t make much difference either, because almost all public pools have a pledge of less than 10% of the saturation point. Increasing k to e.g. 1000 will turn this 10% into 20%, but that is still small under the current pledge formula.
Correct, this will give them 30% more rewards, but nobody is talking about k being 5000 at the moment. We’re talking about 750 or 1000, so you’ll still need 45M or 34M to saturate a pool (and these figures increase over time as the reserve depletes).
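For reference, a minimal sketch of where those 45M/34M figures come from, assuming roughly 34B ADA in circulation (my assumption; the exact figure moves over time):

```python
CIRCULATING_ADA = 34_000_000_000  # assumed; grows as the reserve depletes

def saturation(k: int) -> float:
    """Stake at which a pool saturates: circulating supply / k."""
    return CIRCULATING_ADA / k

for k in (500, 750, 1000):
    print(f"k={k}: saturation ≈ {saturation(k) / 1e6:.0f}M ADA")
# k=500: 68M, k=750: 45M, k=1000: 34M
```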
If on average you’re producing 1 block per epoch, you have the following chances of producing x blocks in a certain epoch:
0 blocks: 37%
1 block: 37%
2 blocks: 18%
3 blocks: 6%
4 or more blocks: 2%
So if you take 680 ADA as a constant block reward (as in your example; the actual per-block reward is already lower and decreasing rather than staying constant) and zero margin:
37% of epochs will give 0 ADA to pool and 0 ADA to delegators
37% of epochs will give 340 ADA to pool and 340 ADA to delegators
18% of epochs will give 340 ADA to pool and 1020 ADA to delegators
6% of epochs will give 340 ADA to pool and 1700 ADA to delegators
2% of epochs will give 340 ADA to pool and at least 2380 ADA to delegators
So on average the pool gets about 215 ADA per epoch and delegators get about 465 ADA per epoch, instead of 340 ADA each… You should use those numbers for the small pool in your example.
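A minimal sketch of that arithmetic, assuming Poisson block production with a mean of 1 block per epoch and the flat 680/340 numbers from the example above:

```python
from math import exp, factorial

MEAN_BLOCKS = 1.0     # average blocks per epoch for the pool
BLOCK_REWARD = 680.0  # ADA per block, held constant for the example
FIXED_FEE = 340.0     # minimum fixed fee, taken at most once per epoch

def poisson(n: int) -> float:
    return exp(-MEAN_BLOCKS) * MEAN_BLOCKS ** n / factorial(n)

pool, delegators = 0.0, 0.0
for n in range(50):  # truncate the tail; terms beyond ~20 are negligible
    total = n * BLOCK_REWARD
    fee = min(FIXED_FEE, total)  # the fee is only collected if blocks were made
    pool += poisson(n) * fee
    delegators += poisson(n) * (total - fee)

print(f"pool ≈ {pool:.0f} ADA/epoch, delegators ≈ {delegators:.0f} ADA/epoch")
# pool ≈ 215, delegators ≈ 465, versus the naive 340/340 split
```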
Couldn’t you both just agree that it is still – just looking at the numbers – significantly less profitable to delegate to a pool in the 1 million range than in the 10 million range, although (because of the variance in blocks produced) it is not as bad as a calculation assuming exactly 1 block per epoch would imply?
Sorry. Need to stay calm while delineating economic reality.
It is economically non-viable to delegate to a pool with less than 10M total stake if the min fixed fee remains at 340. If the min fixed fee is dropped to 30 then it becomes economically non-viable to delegate to pools with less than about 1M total stake. If your pool is small then this is the reality for any delegator that is interested in earning rewards.
The epochs when you get 2 or 3 blocks instead of 1 do indeed help your returns, but nowhere near enough to offset the huge gap created by the minimum fee. Your rewards for delegating to a small pool (1 MM) are about 1% less on an annualized basis. That is, 3% instead of 4%, so you are giving up a quarter of your rewards.
The chance of block production follows a binomial distribution; in the current environment it’s pretty close to this for a 1MM pool (I have a 5% block loss in there, which is why it’s not 21600 blocks). In the epochs where you generate 2 blocks, you get paid double, but the fixed fee is only paid once.
Another way to think about that is that the fixed fee is effectively 42.8% lower than the listed value, so 1 MM pools “effectively” have a minimum fee around 195 Ada/epoch. (So… yeah, you can make a joke here about a minimum wage that applies only to the rich, for the express purpose of keeping the poor down.)
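A sketch of that “effective fee” calculation. The exact percentage depends on the pool’s expected blocks per epoch (itself a function of pool size, total stake, and block loss), so the mean below is an illustrative assumption chosen to match the quoted figures:

```python
from math import exp

def effective_fixed_fee(fee: float, mean_blocks: float) -> float:
    """Average fee collected per epoch: the fixed fee is paid at most once,
    and only in epochs with at least one block (Poisson approximation)."""
    return fee * (1.0 - exp(-mean_blocks))

# ~0.85 expected blocks/epoch for a 1MM pool reproduces the ~42.8% reduction:
print(round(effective_fixed_fee(340, 0.85)))  # ≈ 195 ADA/epoch
```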
Anyway, if the minimum fixed fee were set to 30 Ada, a delegator (motivated only by the staking reward) would be indifferent between staking with a nearly saturated 3%-margin pool and a 1MM-sized 0%-margin pool… assuming both charged only the minimum fixed fee.
(It may be difficult for a delegator choosing a stake pool to actually know that, though - the headline variable rate does seem to be weighed too heavily by delegators, while pool size and fixed fee tend to be overlooked.)
As promised, here are charts. This is the current reward level as a function of pool saturation, at pledge fractions from 0% to 100%, with a0=0.3 and minFee=340.
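For anyone who wants to reproduce these curves: a sketch of the current pool reward formula from the Shelley delegation design spec, where R is the epoch reward pot and stakes are expressed as fractions of total stake:

```python
def pool_reward(R: float, k: int, a0: float, sigma: float, s: float) -> float:
    """Maximal pool reward f(s, sigma) from the Shelley design spec.
    sigma = pool stake / total stake, s = pledge / total stake."""
    z0 = 1.0 / k
    sigma_ = min(sigma, z0)  # stake is capped at the saturation point
    s_ = min(s, z0)          # pledge beyond saturation has no extra effect
    return (R / (1 + a0)) * (sigma_ + s_ * a0 * (sigma_ - s_ * (z0 - sigma_) / z0) / z0)
```

Sweeping sigma from 0 to 1/k at pledge fractions from 0% to 100% (a0=0.3, k=500) reproduces this family of curves; minFee is charged separately, on top of this formula.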
Clearly being a whale and pledging to a private pool is what the reward formula was designed to favor.
Most operators have far less than 20% pledge, so multipools have become the favored strategy for operators.
What happens if we drop minFee to 30? Whales which pledge into private pools still earn higher rewards, but it becomes far easier for small pools to provide competitive yields exceeding 4% (on average).
What happens if we keep minFee=30 and increase a0 to 0.5?
Yields would drop for most people, and only a couple dozen ultra-whales would retain their yield.
Increasing a0 clearly only benefits ultra-whales pledging to private pools.
What happens if we keep minFee=30 and drop a0 to 0.1?
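Using the `pool_reward` sketch above (with a purely hypothetical R, just to show the shape of the effect), the a0 sweep behind these questions:

```python
R, k = 30_000_000, 500  # hypothetical epoch reward pot; current k
sigma = 1.0 / k         # a fully saturated pool
for a0 in (0.1, 0.3, 0.5):
    zero_pledge = pool_reward(R, k, a0, sigma, s=0.0)
    full_pledge = pool_reward(R, k, a0, sigma, s=sigma)
    print(f"a0={a0}: 100% pledge earns {full_pledge / zero_pledge - 1:+.0%} over 0% pledge")
# a0=0.1 -> +10%, a0=0.3 -> +30%, a0=0.5 -> +50%: raising a0 widens the gap in
# favor of fully pledged (private) pools, which only ultra-whales can fill.
```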
For now, let’s discuss and debate the principles of the Cardano reward equation.
Do we want ultra-whale pledge to have a yield advantage? 0.0%? 0.5%? 1.0%?
How harsh of an on-ramp do we want?
What should diminishing rewards based on stake look like? (currently based on k)
What should diminishing rewards based on pledge look like? (Nothing yet)
Should a0 be transformed from an ultra-whale boost parameter into a complement of k, but for pledge? (This would require a new equation and a hard fork, but no new parameters.)
I will be happy to lead the creation of an equation and CIP based on our principles, with diminishing rewards for stake (based on k) and diminishing rewards for inadequate pledge (a refactored a0).
Yes! I’ve read both and have taken inspiration from both.
It doesn’t feel ‘fair’ for an ultra-whale pledge to have a reward advantage OR disadvantage.
A slight and adjustable on-ramp is not a bad idea. With a parameter change it can be dropped to 0.
In principle, k should govern only maximum pool size and diminishing rewards. k should be set higher than the current real decentralization of block production, but not too high. We don’t need single groups like Binance collectively using more CPUs, NICs, and RAM than a Solana node. That’s dumb.
In principle, a0 should govern the influence of pledge: the more you pledge, the larger your pool can become. The way I would use a0 is very similar to @TobiasFancee’s implementation in his CIP. I give him credit and have borrowed his idea of a maximum pledge leverage ratio for pool size. This parameter should start out high (a0 = 50-100) and be reduced slowly over time, with care. By requiring pledge, dividing pools will not be favored.
Unlike @TobiasFancee and Casey Gibson, I don’t think any new parameters are necessary. Also, I just don’t like the way the a0 parameter is currently used for the benefit of ultra-whales, so I’m not going to use the R/(1+a0) term.
The equation should be computationally simple and elegant.
This is what I have so far for the rewards function F:
R = ((reserve * rho) + fees) * (1 - tau)
F( pool_stake, pool_pledge ) = R * min( a0 * pool_pledge / total_stake, pool_stake / total_stake, 1/k )
(The pledge term is divided by total_stake so that all three arguments are comparable stake fractions.)
So, what does this look like?
minFee = 30
k = 500
a0 = 10 (a pool pledging more than 10% of the saturation size can reach full saturation under k)
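A sketch of the proposed F with those parameters, to make the flat ceiling concrete (total stake assumed at 34B; R and the pools below are hypothetical; minFee sits outside F and is charged separately):

```python
TOTAL_STAKE = 34_000_000_000  # assumed total staked ADA
K, A0 = 500, 10               # parameters proposed above

def proposed_F(R: float, pool_stake: float, pool_pledge: float) -> float:
    """Proposed reward: linear in stake up to the lower of the k-saturation
    cap (1/k) and the pledge-leverage cap (a0 * pledge / total stake)."""
    return R * min(A0 * pool_pledge / TOTAL_STAKE,
                   pool_stake / TOTAL_STAKE,
                   1.0 / K)

R = 30_000_000         # hypothetical epoch reward pot
sat = TOTAL_STAKE / K  # 68M ADA saturation point
print(f"{proposed_F(R, sat, 0.10 * sat):.0f}")      # 60000: 10% pledge reaches the cap
print(f"{proposed_F(R, sat, sat):.0f}")             # 60000: extra pledge adds nothing
print(f"{proposed_F(R, 10_000_000, 100_000):.0f}")  # 882: 100k pledge caps at 1M stake
```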
I just want to interrupt this with a note about model risk. The platform will need to adapt and evolve over time, so change is something we both anticipate and require. Sidechains and other scalability solutions will require changes, and the growth of the platform is even now largely moving in directions that were not entirely foreseeable when the platform was being developed. At the same time, we need those changes to be as stable and predictable as possible; we don’t want the fundamental rules arbitrarily changing underneath people’s feet.
It’s helpful to think of updates as one of 3 things:
1 - Ledger Changes
Ledger changes need to be raised as CIPs, and the time frame for implementation is much longer than for parameter updates. The CIP forums host some of the discussion, and there is a biweekly call to discuss issues. I highly encourage you to engage with that process.
2 - Parameter Changes that represent a change in direction
Some parameter changes represent a conceptual change in direction. While these certainly have a lower barrier to change than CIPs, these will need to have overwhelming popular approval or go through some form of governance.
I don’t have an ETA for governance - I am not strongly tied into that process; it’s complicated. Some parameters are fairly opaque in function, and there are significant public misconceptions about what they do. Having a straight-up public vote on parameters every 36 hours may sound like an easy solution - but even for these there needs to be a long-term direction and consistency. There are trade-offs as well: decentralization, performance, and security are a famous example from Vitalik Buterin … but even things like the staking rewards creating a barrier for people engaging in other forms of economic activity need to be considered.
3 - Parameter Changes that represent staying on course
Some parameters need to be periodically adjusted as the price of Ada changes and to respond to changes in the ecosystem (like rewards being released into the ecosystem). These are about the only things that, as a custodian of the platform trying to keep it on course, we can really change right now.
For economic parameters: the minimum pool fee, the fixed transaction cost, and the portion of transaction cost based on byte size are the main ones that need to be updated as the price moves and as the reserve gets drawn down. (The changes to K were previously discussed as a several-step change, so this is again trying to deliver on that commitment - not a change of direction, but staying on a course that was previously set.)
For non-economic parameters: the block size, the size of scripts, and the timing of blocks could all be changed.
The whole point of that is: these are great ideas - but I don’t want to tie up the “stay on course” discussions with the “change the ledger rules” types of changes. Both are important - but they operate on different time scales.
This did not create an opportunity for more [groups] to create blocks. The actual distribution of block production by groups is equivalent to a k of only approximately 41 on average. It’s not even close to the intended 500.
The last year is proof that you were correct with this statement. Real decentralization hit a ceiling.
You didn’t mention that it was designed to allow groups like IOHK to earn 1.0% more staking yield than everybody else. If that economic benefit did not exist for IOHK, you would have an economic motivation to delegate to small pools for the same reward and improve real decentralization, k-effective. This is why I removed the current R/(1+a0) term, decided to totally refactor how the a0 factor is used, and chose a reward curve that is intentionally flat until the pledge/k diminishing limits apply.
I agree!
As you can see, I’ve accurately modeled the reality of the rewards formula and network decentralization (k versus k-effective)*. With one more year of observations since your blog post, it’s obvious that decentralization has not materially improved. I believe that delivering on the decentralization promise will deliver more economic benefit to the Cardano ecosystem than allowing a dozen ultra-whales to earn 1.0% more staking yield than everybody else.
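For readers new to the metric: a common way to express “the network produces blocks as if N equal-sized parties were minting” is the inverse Herfindahl index over per-group block-production shares; I’m assuming that is roughly what k-effective measures here:

```python
def k_effective(blocks_by_group: list[int]) -> float:
    """Effective number of equal-sized block producers (inverse Herfindahl):
    1.0 if one group mints everything, n if n groups mint equally."""
    total = sum(blocks_by_group)
    return 1.0 / sum((b / total) ** 2 for b in blocks_by_group)

# Toy example: four very unequal groups behave like ~1.9 equal-sized groups.
print(round(k_effective([700, 200, 50, 50]), 1))  # 1.9
```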
I’m going to join the bi-weekly meetings when I can and do my best to help IOHK researchers and analysts deliver a CIP ledger change for modifying the rewards formula.
*I still need to include the ‘shadow’ no-ticker saturated whale pools into k-effective
I’d like to reference portions of your blog post from 1 year ago:
(1) I pride myself on being an empiricist, which means I get to change my mind as soon as evidence comes along that shows something different. My understanding a year ago is not the same as my understanding now. I don’t feel bad about that at all.
(2) The blogs represent a certain degree of talking the company line, in that a lot of the more technical descriptions get watered down to make them more accessible. There is a degree of nuance that is lost when that happens - and to be honest - I think we erred too much on the side of making it seem more simple than it really was - as if we were just turning a knob from “more centralized” to “more decentralized”.
This did not create an opportunity for more [groups] to create blocks. The actual distribution of block production by groups is equivalent to a k of only approximately 41 on average. It’s not even close to the intended 500.
Just want to mention that K doesn’t represent a target number of pools; K sets the maximum size of a pool. (If all pools were fully saturated, K=500 would represent ~350 pools, because saturation is computed against circulating supply while only about 32B of the 45B ADA is staked.) So a K of 41 would only represent ~30 pools, not 41.
The main value of K is to push more pools toward the “saturated” size, even if that involves pools splitting. Being nearer the saturation point tends to be an economic advantage: smaller pools running at a loss can’t offer a better return. (There is a secondary benefit, in that it tends to force a redelegation event for the largest pools, but that is not the direct intent.)
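The arithmetic behind those pool counts, assuming saturation is computed against circulating supply (~45B ADA) while roughly 32B is staked - the figures used elsewhere in this thread:

```python
CIRCULATING = 45_000_000_000  # approx. ADA in circulation
STAKED = 32_000_000_000       # approx. ADA actively delegated

def saturated_pool_count(k: int) -> float:
    """How many fully saturated pools the staked ADA can fill."""
    saturation = CIRCULATING / k
    return STAKED / saturation  # equals k * STAKED / CIRCULATING

print(round(saturated_pool_count(500)))  # ~356 ("~350 pools")
print(round(saturated_pool_count(41)))   # ~29 ("~30 pools, not 41")
```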
You didn’t mention that was designed to allow groups like IOHK to earn 1.0% more staking yield than everybody else.
It’s designed so that a large player is not financially motivated to split fully saturated pools into numerous smaller pools. The best returns come from having a very low pledge relative to delegation and benefiting from that. While fully pledged pools are more profitable per unit of stake, you are forgoing all the public delegation you would otherwise be getting “for free”.
In general, using private, fully pledged pools does give better returns than staking to community pools - but it means a lot of foregone revenue relative to running a large number of public pools.
This is set at a level that keeps the economics about equal between running one fully saturated pool and splitting into two 50%-saturated pools and earning returns from those.
Honestly, that’s pretty insulting (not to mention wrong)
Splitting into multiple public pools would make us more money - it’s a lot better financially to earn rewards on other people’s capital. If they would actually let me compete, we’d control a lot more stake and be earning significantly more money. They have me on a leash; maybe one day they will let me off.
If anything, we have been actively looking to reduce our staking footprint - there are so many other demands on those funds that we tend to deploy them mostly in other ways.
I totally agree with this - I can say with 100% confidence that this would be the preferred approach for other assets that we develop: recurring, fee-based revenue rather than a lump sum up front.
If you draw curves for more pledge amounts between 20M and 68M {0.0, 3.4, 6.8, 10.2, 13.6, 17.0, 20.4, 23.8, 27.2, 30.6, 34.0, 37.4, 40.8, 44.2, 47.6, 51.0, 54.4, 57.8, 61.2, 64.6, 68.0} and extend the total pool stake from 100% to 200% of saturation (from 68M to 140M), I think our charts will agree very well.
One of my principles is to treat everybody, including Binance and IOG, fairly and equally, with no systematic advantages or disadvantages.
If they couldn’t offer any enhanced yield because of the flat ceiling I’ve proposed, then it would be a fair and free market for attracting delegators. As @7.4d4 has previously said, larger pools lose slot battles to smaller pools, so a large amount of additional pledge beyond what is required to reach full saturation would provide no advantage.
At least there won’t be an economic bias in yield against the small pool with 100k pledge trying to grow its delegation up to a maximum of 10M (if a0 = 100x). The small guy would get the same ~5.5% yield as the big whale-supported multipool yielding ~5.5%. Some groups need to be big, like Binance and IOG, and that’s OK.
That’s right! We need to stop thinking in terms of pools and start thinking in terms of Groups/Parties.
You were right a year ago about many things!
If the incentives worked well, the input parameters and the output result would likely be mathematically closer.
The Cardano network is currently producing blocks as if (on average, with extreme tails) 41 equal-sized groups/parties were each minting ~527 blocks per epoch. Converting that via 41 * ~32B/45B = 30 pools is not accurate.
I care about maximizing the number of groups/parties creating blocks. Getting more pools closer to saturation is not my figure of merit. K is an awesome parameter and should be set 2-3x higher than the actual decentralization of the network.
Exactly my point. I want every group that ‘gets off the leash’ to hit the same yield ceiling, so that even a tiny 100k-pledge, <10M-stake pool can earn the same percentage.
Yeah, but that’s one of the reasons I think calling it effective-K is more confusing than not! (I think it’s a nice way of considering things - it just isn’t that close to what K actually means.)
Yes, it is partly right; it always depends on how much risk the large player, as a business, would put into it.
It would be (exponentially) more profitable for them if they could split into, for example, 10 (or even 100) pools and attract delegators to get each of them nearly fully saturated. Of course it has risks, but a few would do it.
Also, we must consider the other pools: the average pledge was around 35K when I last checked, about two years ago.
They’re incentivised to slowly split their pledge, for example from 1M down to 2K per pool, or to reinvest their rewards in a new pool (which is what almost all the popular operators do). Their strategy (and mine would be very similar) is: once a pool gets popular, advertise a new one; when that one gets popular, do the same again. That is the tendency we can see.
The Cardano protocol should offer Sybil protection to prevent:
1. Having only a very small number of pools controlling the total delegated stake.
I think this is achieved by k: it caps the maximum pool size so that, in the ideal case, each pool’s power is proportional to 1/k.
2. An adversary generating as many pools as they can.
I think this is achieved by the 500 ADA pool deposit, which makes spinning up an unlimited number of pools costly; it works similarly to transaction fees. The effectiveness of the RSS (reward-sharing scheme) should be part of this too.
3. Power accumulation through multiple pools owned by one entity (large players/whales are not considered adversaries), which should be achieved by:
- the reward function (I am not sure about this one, but it seems very clean and tidy to me);
- the reward distribution functions, which I think have the biggest impact;
- the non-myopic ranking of the RSS, which I do not think has had the expected impact, especially when different wallets rank pools differently.
Why are large players not considered adversaries in point 3? Because PoS is based on the stake that entities hold. Also, over time it should converge toward more entities rather than fewer, unlike in today’s economic systems.
I would say points 1 and 2 are solved, but not point 3.
All of the above should also incentivise the delegators and the operators (owners included).
The ranking in wallets was meant to incentivise delegators: when they open a wallet, they should be able to clearly and rationally decide which pools to select. IMHO it’s undermined by a few things:
It does not give any meaningful information to Daedalus users; the ranking just confuses them. A simpler ranking would, for example, order pools by percentage ROI.
Different wallets invent their own rankings, often preferring their own platform’s pools etc., which further confuses users.
So it’s complicated, and we as humans cannot see through complex things; that’s why we create models.
I think SushiSwap was a real eye-opener for me, as was the growth of “yield farming” (contributing tokens to liquidity pools to earn rewards in other tokens.) That is something that I think the industry is still working through.
Governance has been a lot harder to implement than I expected. An effectively leaderless, shareholder model can lead to a lot of power struggles, coordination problems, and a general inability to pivot organizational strategy. In the case of parameters, coherent roadmaps for how the parameters need to change over time (and under what circumstances) are really important.
Parameters that represent (or are conceptually tied to) real-world costs expressed in Ada are especially problematic; when discussing parameters, people tend to assume much more price stability than actually exists, and they rarely think through the longer-term implications of who will maintain these things, and how.
Even driverless cars need a feedback mechanism to stay on the road; so having mechanisms that either don’t need updating - or, if they do, have clear responsibility and accountability for the changes and their timing - is pretty important.
The whole CIP process takes a lot longer than I had hoped and, to be honest, it’s filled with people who are just there waiting to speak, but not there to listen. There are a lot of individually good ideas, but no coherent structure for how they will work together.
Trying to write an article that hits the right balance between being simple enough to understand and complex enough to be actually accurate is really hard. The act of simplifying is one of information reduction - and sometimes the important bits get lost.