Thanks zwirny, alexd. I was missing the ledger-state.json file. But even after making sure the file is available in /root/scripts, it’s still not showing an assigned slot for epoch 301. I should see 1, since I minted a block this epoch.
One note on ledger-state: I couldn’t pass it as an argument to the cncli leaderlog command. If I include a --ledger-state parameter when running cncli leaderlog, I get this error:
error: Found argument '--ledger-state' which wasn't expected, or isn't valid in this context
Did you mean --ledger-set?
USAGE:
cncli leaderlog --active-stake <active-stake> --byron-genesis <byron-genesis> --ledger-set <ledger-set> --pool-id <pool-id> --pool-stake <pool-stake> --pool-vrf-skey <pool-vrf-skey> --shelley-genesis <shelley-genesis>
So, I just made sure the ledger-state.json file is saved in /root/scripts.
Per the cncli install guide, this is listed as an assumption: “5. that you dump the ledger-state.json in /root/scripts/”.
Here is the output I see. One thing that looks wrong is epochSlots: it shows 0. Is this meant to be the total number of slots in the epoch, which should be about 5 days * 24 hours * 60 minutes * 60 seconds?
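For reference, the total-slots arithmetic above works out as follows, assuming mainnet’s 5-day epochs with one slot per second (both are mainnet genesis parameters):

```python
# Total slots in a mainnet epoch, assuming 5-day epochs at 1 slot/second.
slots_per_epoch = 5 * 24 * 60 * 60
print(slots_per_epoch)  # 432000
```

So if epochSlots were really the total slots in the epoch, 0 would clearly be wrong; it may instead be counting something else (such as the pool’s assigned slots).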
Hi Xpriens
Can you tell it’s wrong because you are comparing it to your own output for the current epoch?
I’ve verified all the inputs used by the cncli nonce command. My mainnet-byron-genesis.json and mainnet-shelley-genesis.json match the downloads from IOHK for the current build number. My cncli.db has synced 100%.
When I run the nonce command, the result matches what I see when I run the leaderlog. I assume we are not using extra entropy, so I ignored that parameter.
root@vmi683311:~/scripts# cncli nonce --byron-genesis /home/cardano/cardano-my-node/mainnet-byron-genesis.json --shelley-genesis /home/cardano/cardano-my-node/mainnet-shelley-genesis.json --ledger-set current
f5b2633269e0efb87760d029f39ceacb3a9e79c421afea4a77f49048afaa50ee
cardano@vmi683311:~/.local/bin$ ./ReLeaderLogs_linux.sh
Current Epoch: 301
Nonce: c0e8aa015de7703c6fbec6c85a0aafb0974082e1eb4808061ad2e5ef23a2fd62
Network Active Stake: 23857504839,454979
Pool Active Stake: 208446,510450
Pool Sigma: 8.737146313191817e-06
Checking SlotLeader Schedules for Stakepool: WOOF
Pool ID: c22942e1b855136643d1e6e5a75266fb891d87727a8cbf06acd17208
Leader At Slot: 171583 - Local Time 2021-11-08 15:24:34 - Scheduled Epoch Blocks: 1
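As a sanity check on the numbers above: the Pool Sigma line is simply the pool’s active stake divided by the network’s active stake (the commas in the script’s output are decimal separators). A quick sketch using the values printed above:

```python
# Reproduce Pool Sigma from the leaderlog output above.
# Sigma = pool's share of the network's active stake.
pool_stake = 208_446.510450            # Pool Active Stake (ADA)
network_stake = 23_857_504_839.454979  # Network Active Stake (ADA)

sigma = pool_stake / network_stake
print(sigma)  # ~8.7371e-06, matching the reported Pool Sigma
```

If the ratio you compute this way does not match the sigma your tooling reports, the stake snapshot being used is probably wrong, which would also throw off the leader schedule.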
Sorry, that was for the next epoch, 302. For the current epoch, 301, it’s
c0e8aa015de7703c6fbec6c85a0aafb0974082e1eb4808061ad2e5ef23a2fd62
which is the same as @Xpriens’s.
Yeah, I’ll have to look into this later, but it’s very puzzling. It appears everything is correct on my end. I compared things to the coincashew guide as well. All of it looks good.
I also had this issue of cncli being completely off, but nothing was incorrect with my node files or KES key.
I fixed the incorrect cncli leaderlog by deleting all cncli db files and restarting the sync. It is possible to get a corrupted cncli.db, and this will result in an incorrect cncli leaderlog.
Hopefully this resolves the issue for those who have the correct KES but an incorrect cncli leaderlog output. I only knew because my small pool minted a block that cncli leaderlog did not show. (We should all be trying to ensure that what is discussed on Telegram and Discord makes its way onto the forum.)
tl;dr: A possible solution is to remove and resync the cncli db, as it may be corrupted.
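For anyone wanting the concrete steps, a rough sketch of the fix is below. The db path, host, and port are assumptions based on a typical coincashew-style setup (/root/scripts, relay on 127.0.0.1:6000); adjust them to wherever your cncli.db actually lives and to your node’s listening port:

```shell
# Assumed location of the cncli database -- adjust to your setup.
DB=/root/scripts/cncli.db

# 1. Stop any running cncli sync process/service first.
# 2. Remove the (possibly corrupted) database:
rm -f "$DB"
# 3. Re-sync from scratch against your local node and exit when done
#    (host/port are assumptions; --no-service makes the sync a one-shot run):
cncli sync --host 127.0.0.1 --port 6000 --db "$DB" --no-service
```

A fresh sync against a fully synced local node should be much faster than the initial one, as described below.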
Yes, this is how I fixed things as well. I recall that when I ran the cncli sync the first time, it ran for many hours (longer than a day). It did in fact fully sync, and seemed to speed up at the tail end. However, using that sync data, I was getting incorrect leaderlog info, and my epoch nonce was not matching. I then deleted the cncli db data and re-ran the syncing process. It ran very quickly the second time (I don’t recall the exact timings, but it was less than a couple of hours, I believe). Using this data, the leaderlog reports produced a correct epoch nonce.
I have no idea what caused the issue the first time I synced. But if syncing is taking a long time for you, that probably means something is wrong, and you should delete the cncli db and re-run the sync.