How is the slot leader actually determined?

I read somewhere that the slot leader is determined locally by the block producer node. Is that true, and if so, would that mean that there is a lottery on each BP node with potentially conflicting outcomes? When there are multiple BPs proposing a block for the same slot, how is it decided which one to take?

I was also wondering whether it is known upfront when a BP is expected to do some work. If not every slot is equal, there would be good and bad times to do maintenance (i.e., take the node down for a little while).

In other words, for how long can a block producer go offline without forfeiting its chance to mint a block?

cheers
– thomas

Hello,

I run a stake pool and through the command line I am able to see the block allocation for my pool at the start of the epoch.

Blocks are allocated via a lottery, like you say, so each ada is a lottery ticket. If someone’s ada delegated to TBO is chosen to create a block, TBO creates it. Those future blocks are known from the start of the epoch. I always post TBO’s block allocation number (not the dates and times) in our Telegram channel.
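
For anyone who wants the maths behind that lottery: under Ouroboros Praos every slot is an independent draw, and a pool’s per-slot chance depends only on the active slot coefficient f (0.05 on mainnet) and its share sigma of the active stake. A minimal sketch of that relationship in Python (my own code, nothing from the node):

# Rough sketch (not node code): per-slot leadership probability under Praos.
# f is the active slot coefficient (0.05 on mainnet), sigma is the pool's
# fraction of the total active stake.
def leader_probability(sigma: float, f: float = 0.05) -> float:
    return 1 - (1 - f) ** sigma

p = leader_probability(0.001)           # a pool holding 0.1% of the active stake
slots_per_epoch = 5 * 24 * 60 * 60      # 5-day epoch, one slot per second
print(f"{p:.8f} per slot, ~{p * slots_per_epoch:.1f} expected blocks per epoch")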

This is good because we know when we can perform updates etc. to the pool without missing a block.

This video is good at explaining staking and how blocks are chosen.

Thanks, could you perhaps share that CLI command? I still somehow doubt that the protocol relies on the goodwill of the SPO to keep the pool running if it’s known that there is zero chance of a reward. IOW, all pools that don’t have one of their tickets drawn could safely shut down (to save cost) until the next lottery. There must be some other motivation to stay online, which I’m likely not seeing.

I’m not the techy person behind our pool, so I can give you the command line, but I don’t know much else after that.

cd ~/git

./leaderLogs.py --pool-id <pool id here> --epoch 240 --vrf-skey $CNODE_HOME/priv/pool/TBO/vrf.skey -bft --sigma 0.0008755706338504188

If you put in another pool’s ID, it still shows your own slot allocation.
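
For what it’s worth, here is roughly what a leader-log script has to compute for every slot of the epoch: evaluate the pool’s VRF key over the slot number and the epoch nonce, and compare the result against a stake-weighted threshold. This is only a conceptual sketch; evaluate_vrf is a made-up stand-in of mine, not the actual script’s interface:

import hashlib
import math

def evaluate_vrf(vrf_skey, slot, epoch_nonce):
    # Stand-in for the real libsodium VRF evaluation; NOT cryptographically
    # meaningful, just here so the sketch runs end to end.
    digest = hashlib.sha512(f"{vrf_skey}:{slot}:{epoch_nonce}".encode()).digest()
    return int.from_bytes(digest, "big")

def is_slot_leader(slot, vrf_skey, epoch_nonce, sigma, f=0.05):
    cert_nat = evaluate_vrf(vrf_skey, slot, epoch_nonce)
    cert_nat_max = 2 ** 512
    # Map the VRF output to a value q and compare it against a threshold
    # that grows with the pool's stake fraction sigma.
    q = cert_nat_max / (cert_nat_max - cert_nat)
    threshold = math.exp(-sigma * math.log(1.0 - f))
    return q <= threshold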

Yeah, a pool could easily shut down for an epoch, turn on just before the snapshot to see if it has a block allocation, and, if not, close down again. It wouldn’t do any harm at all.

However, you have to be sure the nodes and relays will start up without any issues each time. Also, if you are running your pool off a cloud-based server, you’re not achieving much by turning it all off. Maybe it would matter for people running hardware servers in their own homes or something, but the power consumption is so low that I’m not sure it would really have any benefits.

I would say turning nodes and relays off when there is no block allocation is just unnecessary person-hours, with a small risk of problems when starting up again.

I’m a technical person and quite familiar with spinning up servers in a matter of minutes (if not seconds); we do this all the time in managed container environments. My question is not aiming so much at saving cost when there is no chance of winning; instead, I’d like to understand the protocol and what it actually does. Ultimately, if there are indeed good and bad times for an SPO to do maintenance, I’d think most SPOs would like to know about that.

Do you perhaps have the link to that python script?

Is it perhaps this one?

I think we have very quickly gone into an area I do not know. I hope someone who is able to answer these questions comments for you.

Thanks for your time and quick responses - much appreciated

Do these metrics play a role in this?

# Access the Prometheus metrics
docker exec -it prod curl 127.0.0.1:12798/metrics | sort

cardano_node_metrics_Forge_forge_about_to_lead_int 21024
cardano_node_metrics_Forge_node_not_leader_int 21024
cardano_node_metrics_slotsMissedNum_int 86
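
As far as I understand, those counters track the node’s own forging loop: slots where the leadership check ran, slots where it was not the leader, and slots it missed entirely (e.g. while it was down or still syncing). A small scrape of just those three counters, assuming the default metrics port 12798 shown above:

import urllib.request

WANTED = (
    "cardano_node_metrics_Forge_forge_about_to_lead_int",
    "cardano_node_metrics_Forge_node_not_leader_int",
    "cardano_node_metrics_slotsMissedNum_int",
)

def forge_counters(url="http://127.0.0.1:12798/metrics"):
    counters = {}
    for line in urllib.request.urlopen(url).read().decode().splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[0] in WANTED:
            counters[parts[0]] = int(parts[1])
    return counters

print(forge_counters())
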
  1. Typed Chronicles, every block-producer pool rolls a die whose probability of success is based on its stake.
  2. Statistically it can happen, and the pool with the lower leaderVRF (VRF output) value will win. A side effect is that, when more than one leader is elected for a slot, the pool with the smaller stake usually wins (statistically); see the sketch below.
  3. Already answered.
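
To illustrate point 2, here is a toy version of that tie-break (my own sketch, not node code): when two valid blocks compete for the same slot, the one with the lower leader VRF output is preferred.

# Toy illustration of the slot-battle tie-break from point 2 above:
# the block with the lower leader VRF output wins. Values are made up.
def prefer(block_a: dict, block_b: dict) -> dict:
    return block_a if block_a["leader_vrf"] < block_b["leader_vrf"] else block_b

a = {"pool": "TBO",   "leader_vrf": 1234}
b = {"pool": "OTHER", "leader_vrf": 5678}
print(prefer(a, b)["pool"])   # -> TBO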

This is lovely, thanks