I have the ZORBA pool. My pool is updated to the latest stable version, in sync, with active KES, 3 relays with good uptime.
My problem is that I haven’t minted a block for 22 epochs, which is twice as long as usual.
How do I know if I am doing something wrong?
Have you been checking your allocated slots each epoch? I.e., have you missed any blocks you were allocated?
CARDANO_NODE_SOCKET_PATH='/run/cardano/mainnet-node.socket' cardano-cli query leadership-schedule \
--mainnet \
--genesis /etc/cardano/mainnet-shelley-genesis.json \
--stake-pool-id "blahblahblah" \
--vrf-signing-key-file /etc/cardano/private/vrf.skey \
--current
CARDANO_NODE_SOCKET_PATH='/opt/cardano/cnode/sockets/node0.socket' cardano-cli query leadership-schedule \
--mainnet \
--genesis /opt/cardano/cnode/files/shelley-genesis.json \
--stake-pool-id "pool1udt98825znk9uslz4v05gh5cd3s5e4g0cd9c4gzhcfpd2j2xm2u" \
--vrf-signing-key-file /opt/cardano/cnode/priv/pool/ZORBA/vrf.skey \
--current
The command returns an empty response.
Well, that means your pool is not a slot leader for any slot this epoch.
Are you running that command every epoch? We are now more than 3.5 days into the epoch, so you can change --current to --next and see what it says about your allocated slots for the next epoch.
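For example, reusing the socket, genesis, and key paths from your command above, only the final flag changes (the --next schedule only becomes computable once the next epoch’s nonce is fixed, roughly 1.5 days before the epoch boundary, which is why this works after day 3.5):

CARDANO_NODE_SOCKET_PATH='/opt/cardano/cnode/sockets/node0.socket' cardano-cli query leadership-schedule \
--mainnet \
--genesis /opt/cardano/cnode/files/shelley-genesis.json \
--stake-pool-id "pool1udt98825znk9uslz4v05gh5cd3s5e4g0cd9c4gzhcfpd2j2xm2u" \
--vrf-signing-key-file /opt/cardano/cnode/priv/pool/ZORBA/vrf.skey \
--next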
Would CNTools display anything if the leaderlog hadn’t been run?
Looks like bad luck to me. With an ideal of 0.1 blocks per epoch, 22 epochs without a block is not that unlikely.
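As a rough sanity check, treating block assignment as a Poisson process with your ideal of 0.1 blocks per epoch (an approximation, not how the ledger actually draws slot leaders), the chance of an empty 22-epoch stretch is:

awk 'BEGIN { ideal = 0.1; epochs = 22; printf "P(zero blocks in %d epochs) = %.1f%%\n", epochs, exp(-ideal * epochs) * 100 }'

That prints about 11.1%, so roughly one pool in nine with this stake would see a dry spell at least this long by pure chance.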
As @HeptaSean has suggested, I think it’s just a run of bad luck. It happens, and with lower delegation amounts it can look like really bad luck.
I can’t find anything wrong with the setup… I guess it’s bad luck.