./cncli.sh leaderlog after Alonzo

I am trying to run the command below on testnet, but I have a few doubts:

  1. Right now the command below is not passing the --db parameter because I am not sure what the correct value would be. FYI, no file like /opt/cardano/cnode/scripts/cncli.db exists for me. However, when I earlier ran ./cncli.sh sync, it displayed this info line:

SQLite blocklog DB created: /opt/cardano/cnode/guild-db/blocklog/blocklog.db

  2. Now, after the Alonzo HFC, won't the command below also expect a new parameter, e.g. --alonzo-genesis, similar to the Byron and Shelley ones, given that the file /opt/cardano/cnode/files/alonzo-genesis.json now also exists?

  3. What about the parameter --ledger-state "$config_path/ledger-state.json"? Where will I find this JSON file?

./cncli.sh leaderlog --pool-id d478838fdfb3f1fe6308edbe5af280fc349ffac332d632ca22e5041c --pool-vrf-skey /opt/cardano/cnode/priv/pool/SWEETPOOL1/vrf.skey --byron-genesis /opt/cardano/cnode/files/byron-genesis.json --shelley-genesis /opt/cardano/cnode/files/shelley-genesis.json --ledger-set 156 --tz America/Edmonton
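(For reference on point 3: the ledger-state.json is something you dump yourself from a synced node with cardano-cli. A minimal sketch, assuming the legacy public testnet magic 1097911063 and an arbitrary output path:)

cardano-cli query ledger-state --testnet-magic 1097911063 --out-file /opt/cardano/cnode/files/ledger-state.json   # output path is illustrative

(Note that this file can be very large, and newer cncli releases reportedly replaced --ledger-state with stake values taken from a stake snapshot, so check cncli leaderlog --help for your version.)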


If you have the latest version of cncli you should not worry, it will work.

You mean no new parameter is needed w.r.t. --alonzo-genesis?

Any input on points 1 & 3?

Is this the way you are running cncli?
Why not just run a simple ./cncli.sh leaderlog, since you have all the above information inside the script:
nano cncli.sh
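(For context, the wrapper already supports these invocations, all of which appear elsewhere in this thread; only the inline comments here are added:)

./cncli.sh sync               # builds/updates the sqlite DBs, as shown above
./cncli.sh leaderlog          # reads the pool name/paths from the env file and script
./cncli.sh leaderlog force    # forces a recalculation, as suggested further below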

Yeah, because I was facing some errors the original way.

Anyway, I tried once again with ./cncli.sh leaderlog and voilà, it worked!

However, it displayed the output below. What should I understand from it? Are there any issues with my Pool setup?

By the way, I registered two different Pools from the same BP, so I am wondering which Pool this command output corresponds to.

Checking for script updates…
~ CNCLI Leaderlog started ~
Node in sync, sleeping for 60s before running leaderlogs for current epoch
Leaderlogs already calculated for epoch 156, skipping!
Running leaderlogs for next epoch[157]
Leaderlog calculation for next epoch[157] completed and saved to blocklog DB
Leaderslots: 0 - Ideal slots for epoch based on active stake: 0 - Luck factor 0%

This is for testnet, right?

Leaderslots: 0 - Ideal slots for epoch based on active stake: 0 - Luck factor 0%

Meaning you don't have blocks assigned (bad luck based on your active stake).

Yes, it's for testnet.

But out of the 2 Pools on this BP which are currently registered with testnet, one has ~1M tADA, so I was thinking it must get at least one leader chance. Can any other factor influence this?

Still wondering which of the two Pools the above commands used for checking the leaderlog?

Type nano cncli.sh and check the paths… you can duplicate the script and edit the paths for each pool (a rough sketch follows below).
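(A rough sketch of that duplication idea; the copy names below are made up, and you would still have to point the copy at the second pool's own name and paths:)

cd /opt/cardano/cnode/scripts
cp cncli.sh cncli-pool2.sh    # hypothetical name for the second pool's copy
nano cncli-pool2.sh           # edit the pool name/paths inside the copy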

It seems cncli.sh picks the Pool name from the /opt/cardano/cnode/scripts/env file.

I had earlier set an older pool name there, which was de-registered some time back. I had forgotten about this config.

POOL_NAME="SWEETPOOL"

Now, I just changed it to another Pool name which is still registered and has ~1M tADA staked.

POOL_NAME="SWEETPOOL1"
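(A quick way to double-check which Pool the scripts will pick up, using only the path and variable already mentioned here:)

grep POOL_NAME /opt/cardano/cnode/scripts/env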

However, now leaderlog showed the output below :(

Checking for script updates…
~ CNCLI Leaderlog started ~
Node in sync, sleeping for 60s before running leaderlogs for current epoch
Leaderlogs already calculated for epoch 156, skipping!
Leaderlogs already calculated for epoch 157, skipping!

Try with ./cncli.sh leaderlog force (it will force a recalculation).

PS: if you edited the env file you will also need to restart the node to pick up the new configuration.
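(Putting those two steps together; the cnode service name is the one used later in this thread, adjust if yours differs:)

./cncli.sh leaderlog force    # recalculate leaderlogs even if already done for the epoch
sudo systemctl restart cnode  # reload the node with the edited env configuration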

Checking for script updates…
~ CNCLI Leaderlog started ~
Running leaderlogs for epoch 156 and adding leader slots not already in DB
Leaderlog calculation for epoch[156] completed and saved to blocklog DB
Leaderslots: 0 - Ideal slots for epoch based on active stake: 1.51 - Luck factor 0%
Running leaderlogs for next epoch[157]
LEADER: slot[37534033] slotInEpoch[79633] at[2021-09-17T18:27:29+00:00]
Leaderlog calculation for next epoch[157] completed and saved to blocklog DB
Leaderslots: 1 - Ideal slots for epoch based on active stake: 1.2 - Luck factor 83.33%
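(For reference, the luck factor is simply assigned slots divided by ideal slots: 1 / 1.2 ≈ 83.33% here, and 0 / 1.51 = 0% for epoch 156, which matches the output above. A quick sanity check:)

echo "scale=2; 100*1/1.2" | bc    # 83.33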

Perfect, remember to also restart the node.

Does it mean the node was connecting to testnet as the older Pool, which is not even registered?

:) I think you already know the answer :)

Oh my god, what a blunder. I have been waiting for the last 1 week for the node to get some blocks assigned :)

Don't worry, be happy that it happened on testnet… now you learnt something :)


Is running "sudo systemctl restart cnode" just on the BP node fine, or is the Relay also needed?

Yeah, that's why I played around with 3 wallets and 3 pools (registered, de-registered). Had I created it all clean (just 1 wallet, 1 Pool) I would not have known about it.

This also means it's not possible for a single BP node to work with two Pools in parallel?

Nope, you must have one node service per pool. I think you can create more users… like user 1 to be used for pool 1 and user 2 to be used for pool 2.

Or perhaps you will need to use tmux sessions and start the node manually (./cnode.sh, but inside cnode.sh you will need to edit the path to the pool folder); see the sketch after this post.

Just a theory, but you can play around, being on testnet.
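(A rough sketch of that theory; everything below is illustrative: the second directory /opt/cardano/cnode2, port 6001, pool name SWEETPOOL2, and key file names are all assumptions, and each instance needs its own database, socket, and credentials:)

tmux new -s pool2
cardano-node run \
  --config /opt/cardano/cnode2/files/config.json \
  --topology /opt/cardano/cnode2/files/topology.json \
  --database-path /opt/cardano/cnode2/db \
  --socket-path /opt/cardano/cnode2/sockets/node0.socket \
  --port 6001 \
  --shelley-kes-key /opt/cardano/cnode2/priv/pool/SWEETPOOL2/hot.skey \
  --shelley-vrf-key /opt/cardano/cnode2/priv/pool/SWEETPOOL2/vrf.skey \
  --shelley-operational-certificate /opt/cardano/cnode2/priv/pool/SWEETPOOL2/op.cert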


On the same BP, I have 2 Pools registered with testnet. However, now my BP is connected to testnet through the 1st Pool.

What is the overall status of my 2nd Pool? What does it mean for a Pool to be registered with testnet but not actually running as a Pool through a node?

Nothing, it just means it can't mint blocks in case it has blocks assigned.
You can go and deregister/retire it from cntools - Pool.
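(A minimal sketch of that retirement path, assuming the same scripts directory used throughout this thread; the exact menu wording may differ between cntools versions:)

cd /opt/cardano/cnode/scripts
./cntools.sh
# then navigate the menu: Pool -> Retire (wording may vary by cntools version)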