Please help with checking my Stakepool configuration

Sorry, don’t know of a more direct way.

It would be interesting to have a tool that does the leaderlog computations for all epochs back to the one where you were last assigned blocks, to see if those at least are shown correctly. I haven't seen that as an option in cncli or cardano-cli, or is there one?


Many thanks for that - I will look into it - appreciate your help

Thanks, but it appears to pass everything on this site. The only issue is that the extended metadata is missing, which should not matter.

27 Epochs now and no slots assigned !!!

That’s a long stretch… but looks like bad luck.


Yes - I am hoping it is just down to luck. The trouble is that if I keep losing stake because of it, the chances to mint a block may become too small, and then I may never know :grinning:

If you go much longer I'll move some ADA over and ask some delegators to move over to try and pop a block.


That would be fantastic if that could be done - I feel bad for my delegators at the moment, as I just cannot be sure that everything is OK until a block comes through. Also, if you did that, I would be happy to reimburse the pool fee component (340 ADA) on a proportionate basis to anybody who helped out, if any block was produced.


28 Epochs now in a row

I am going to do the Maths on this. I think I am right, but please feel free to correct me if my assumptions are wrong.

With an ideal of 0.23, this would indicate that each Epoch I have a 23% chance of being selected as a slot leader, and therefore a 77% chance of not being selected. The chance of not being selected for 2 Epochs in a row is 0.77 * 0.77 = 0.593, or about 59%; for three Epochs in a row it is 0.77 * 0.77 * 0.77 = 0.457, or about 46%, and so on.

My calculation for not being selected for 28 Epochs would be 0.77 to the power of 28, which is 0.000663, equating to 6.6 chances in 10,000, or around one in 1,500.

Ok, I know these are the odds from the beginning, and each Epoch has the same chance of selection of around 0.23 or 23%, but right now the chance of going 28 Epochs in a row without selection is ridiculously low, and I am suggesting there is something wrong here. Also, my ideal has been as high as 0.32 at times, which would make the odds of this happening even more unlikely.
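The arithmetic above can be sketched in a few lines of Python (helper names are mine, not from any tool mentioned in the thread). I've also added a second variant under the assumption that "ideal" is an expected block count per epoch rather than a selection probability, in which case P(no blocks in one epoch) = e^-ideal; under either reading the 28-epoch dry spell comes out as very unlikely:

```python
# Back-of-envelope odds of a pool getting zero slots for many epochs in a row.
import math

def p_no_slots(ideal: float, epochs: int) -> float:
    """Post's model: treat 'ideal' as the per-epoch chance of being
    selected at least once, so P(no slots) = (1 - ideal) ** epochs."""
    return (1.0 - ideal) ** epochs

def p_no_slots_poisson(ideal: float, epochs: int) -> float:
    """Alternative (assumed) model: treat 'ideal' as the expected number
    of blocks per epoch, so P(0 blocks per epoch) = exp(-ideal) and
    P(no slots over N epochs) = exp(-ideal * N)."""
    return math.exp(-ideal * epochs)

if __name__ == "__main__":
    for ideal in (0.23, 0.32):
        for name, f in (("simple", p_no_slots), ("poisson", p_no_slots_poisson)):
            p = f(ideal, 28)
            print(f"ideal={ideal} {name}: P(no slots in 28 epochs) = "
                  f"{p:.6f} (about 1 in {1 / p:,.0f})")
```

With ideal = 0.23, the simple model reproduces the figures in the post (0.77² ≈ 0.593, 0.77²⁸ ≈ 0.000663); the Poisson reading gives a somewhat larger but still tiny probability, so the conclusion does not depend on which interpretation of "ideal" is correct.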

I have had email communication with IOG and they tell me the pool is OK and this is just down to luck.

Is there something fundamentally wrong with my understanding of how the Ouroboros protocol works? I believe I understand statistics and Maths, and yes, I know that things based on chance can have different outcomes, but something does not appear to add up here.

A similar thing happened to me once when I moved the node to another machine. Rotating the KES keys did the trick.

It was on testnet and the pool had a lot of stake (nsp11), so I could spot the error very quickly.


Thanks - I have already rotated the keys once but maybe I will try again

29 now

So what does an ideal of 0.23 really mean? I have had an ideal of between 0.23 and 0.3 for 29 Epochs and still no slot leader allocation. A 23% to 30% chance each Epoch, and yet no slot leader 29 times in a row. Maybe I just don't get it, but this does not seem right?

Can I put the mainnet pool on the testnet? My current testnet pool is a new pool.

This pool has not produced blocks for many epochs, and now he can't tell whether it is a problem with the pool's configuration or just bad luck.
My thought is: will the pool configuration from mainnet work on testnet? Because the testnet has free tADA, it would only take one epoch to determine whether there is a configuration problem.


Can you move the pool from the cloud back to the original server that was minting blocks successfully?


I am not sure, but I would imagine many of the configuration files would have to be changed. Also, that would involve renting more cloud servers, increasing my costs even more, and I just don't have any budget left to experiment like this.

I have thought about doing that, but I no longer have a static IP on the original server, which would complicate things. Ultimately the slot leader allocation should come down to the registration status and the amount staked; my understanding is that the actual server only becomes involved once a slot is allocated and it comes time to mint?


Hi @GrahamsNumberPlus1,

It sounds like this could happen to anyone, which is concerning. If I had a concern as an SPO that an upgrade may have failed for any reason, then I believe at this point I would set up DDNS for the WAN (see "Learn about Dynamic DNS" in Google Domains Help, for example; this may need to be done at the router, depending on your network configuration) as part of a strategy to roll back the upgrade as soon as possible.

When the old (working) server is up and running again, I would configure the new cloud server to run on testnet, and then take all the time that may be needed to troubleshoot, test, and learn what may be happening, including potentially looping in IOHK devs to observe the issue for the sake of the whole network.



Thank you for your assistance and suggestions. Everything has been checked many times, and IOHK have been looped in, as I had a support ticket going for a while. IOHK have stated that the Stakepool is OK and that all I can do is add more stake. This is not really helpful, because it looks increasingly unlikely that the cause is lack of stake. Also, the problem is that I am likely to keep losing stake, which is making it really hard.

The new server is configured exactly the same as the original server, after taking into account an IP change - all keys and configuration files are the same and in the appropriate locations, the firewall settings are the same, etc. I cannot roll back the upgrade now due to equipment limitations, and I no longer have access to a static IP, which complicates things. The thing is, I have been told by many people that slot allocation with the leaderlogs really has little to do with the server setup and depends more on the registration status and current stake of the pool. As the Stakepool had been minting blocks fine for 18 months with no change to any registration settings, I would have thought that slot allocation should be unaffected. Anyway, as you say, for the good of the protocol and all SPOs it is probably in the best interests of everyone that we isolate this issue.

I hear you… my suggestion is based on the theoretical assumption that there may be an issue or error in the configuration that is so far going undetected. If there is no such issue on your end, then moving the node back to the prior server would make no difference, and the effort would admittedly be of no use.

I would do everything possible to protect your pool’s stake. Having registered a pool last year, I would have no idea at all how to accumulate 300K stake, outside of the kindness of others. The focus from IOHK and the community at the moment seems to be pretty exclusively on developing Plutus, and there seems to be no significant or sustained interest or effort from IOHK or the community, for example, to welcome more delegators to the network. The cooperation between SPOs to develop the network further doesn’t generally seem to extend that far, now.

I delegated what I can currently afford to your pool (I could have bought a condo on … lol WAGMI).

