Hi everyone,
I am running the Cardano staking pool [CSPHD] and keeping track of the logs generated by the cardano-node running on the block-producer machine.
I spotted something unusual in the log:
Jun 17 17:35:58 ada-node cardano-node[3472708]: [ada-node:cardano.node.PeerSelection:Info:580] [2024-06-17 17:35:58.21 UTC] TraceGovernorWakeup
Jun 17 17:35:58 ada-node cardano-node[3472708]: [ada-node:cardano.node.PeerSelection:Info:580] [2024-06-17 17:35:58.21 UTC] TraceBigLedgerPeersRequest 15 0
Jun 17 17:35:58 ada-node cardano-node[3472708]: [ada-node:cardano.node.PeerSelection:Info:580] [2024-06-17 17:35:58.21 UTC] TracePublicRootsRequest 100 1
Jun 17 17:35:58 ada-node cardano-node[3472708]: [ada-node:cardano.node.LedgerPeers:Info:578] [2024-06-17 17:35:58.21 UTC] [String "UseLedgerPeers",Number (-1.0)]
Jun 17 17:35:58 ada-node cardano-node[3472708]: [ada-node:cardano.node.PublicRootPeers:Info:998] [2024-06-17 17:35:58.21 UTC] TracePublicRootRelayAccessPoint (fromList [])
Jun 17 17:35:58 ada-node cardano-node[3472708]: [ada-node:cardano.node.PublicRootPeers:Info:999] [2024-06-17 17:35:58.21 UTC] TracePublicRootRelayAccessPoint (fromList [])
Jun 17 17:35:58 ada-node cardano-node[3472708]: [ada-node:cardano.node.LedgerPeers:Info:578] [2024-06-17 17:35:58.21 UTC] [String "UseLedgerPeers",Number (-1.0)]
Jun 17 17:35:58 ada-node cardano-node[3472708]: [ada-node:cardano.node.PublicRootPeers:Info:998] [2024-06-17 17:35:58.21 UTC] TracePublicRootRelayAccessPoint (fromList [])
Jun 17 17:35:58 ada-node cardano-node[3472708]: [ada-node:cardano.node.PublicRootPeers:Info:999] [2024-06-17 17:35:58.21 UTC] TracePublicRootRelayAccessPoint (fromList [])
Jun 17 17:35:58 ada-node cardano-node[3472708]: [ada-node:cardano.node.PeerSelection:Info:580] [2024-06-17 17:35:58.21 UTC] TraceBigLedgerPeersResults (fromList []) 9 256s
Jun 17 17:35:58 ada-node cardano-node[3472708]: [ada-node:cardano.node.PeerSelection:Info:580] [2024-06-17 17:35:58.21 UTC] TracePublicRootsResults (PublicRootPeers {getPublicConfigPeers = fromList [], getBootstrapPeers = fromList [], getLedgerPeers = fromList [], getBigLedgerPeers = fromList []}) 9 256s
I wonder what the log suggests, specifically the lines related to PeerSelection and LedgerPeers, and whether anything needs to be taken care of.
Thanks in advance.
What do you think is unusual? Those look like the normal p2p log entries a node produces as it runs.
Thanks for your comment.
The content I usually see in the log looks more like:
Jun 17 18:44:01 ada-node cardano-node[3472708]: [ada-node:cardano.node.ChainDB:Notice:550] [2024-06-17 18:44:01.71 UTC] Chain extended, new tip: 4207de1a8ba04dbcee18d9ef402ad1de5120f69e91b46222c382a5b036bf54e7 at slot 127083549
Jun 17 18:44:01 ada-node cardano-node[3472708]: [ada-node:cardano.node.InboundGovernor:Info:574] [2024-06-17 18:44:01.97 UTC] TrInboundGovernorCounters (InboundGovernorCounters {coldPeersRemote = 1, idlePeersRemote = 0, warmPeersRemote = 0, hotPeersRemote = 1})
Jun 17 18:44:02 ada-node cardano-node[3472708]: [ada-node:cardano.node.LeadershipCheck:Info:566] [2024-06-17 18:44:02.00 UTC] {"chainDensity":4.8479814e-2,"credentials":"Cardano","delegMapSize":1346061,"kind":"TraceStartLeadershipCheck","slot":127083551,"utxoSize":11115554}
Jun 17 18:44:02 ada-node cardano-node[3472708]: [ada-node:cardano.node.Forge:Info:566] [2024-06-17 18:44:02.12 UTC] fromList [("credentials",String "Cardano"),("val",Object (fromList [("kind",String "TraceNodeNotLeader"),("slot",Number 1.27083551e8)]))]
The entries mentioned in my initial post do not occur that frequently.
I am not sure what String "UseLedgerPeers",Number (-1.0) implies and whether it is normal to see this.
Actually, this concern of mine arose from the fact that the pool I am operating mints blocks less frequently than I expected, and I am trying to find out the reasons.
Have you got that in your p2p topology?
You mean the topology configuration file? I think so; here is how it looks:
{
  "localRoots": [
    {
      "accessPoints": [
        {
          "address": "xxx.xxx.xxx.xxx",
          "port": 6000
        }
      ],
      "advertise": false,
      "valency": 1
    }
  ],
  "publicRoots": [
    {
      "accessPoints": [],
      "advertise": false
    }
  ],
  "useLedgerAfterSlot": -1
}
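As an aside, a topology file that fails to parse (for example because curly quotes crept in while copy-pasting) will cause trouble at startup, so a quick parse-and-sanity check can be worthwhile. A minimal Python sketch, assuming the file is saved as topology.json; the path and the specific checks are my assumptions, adjust to taste:

```python
import json

# Path is an assumption; point this at your actual topology file.
TOPOLOGY_PATH = "topology.json"

def check_topology(text: str) -> list[str]:
    """Return a list of problems found in the topology JSON text."""
    problems = []
    try:
        topo = json.loads(text)
    except json.JSONDecodeError as e:
        # Curly quotes pasted from a web page are a common cause here.
        return [f"not valid JSON: {e}"]
    for key in ("localRoots", "publicRoots", "useLedgerAfterSlot"):
        if key not in topo:
            problems.append(f"missing key: {key}")
    for root in topo.get("localRoots", []):
        if root.get("valency", 0) < 1:
            problems.append("localRoots entry with valency < 1")
    return problems

if __name__ == "__main__":
    with open(TOPOLOGY_PATH) as f:
        for p in check_topology(f.read()) or ["looks OK"]:
            print(p)
```

json.loads rejects curly quotes outright, which makes this an easy way to spot paste damage before the node does.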
From the gLiveView report, I believe the BP node is connected to my relay node:
│ P2P : enabled   Cold Peers : 0   Uni-Dir : 0 │
│ Incoming : 1    Warm Peers : 0   Bi-Dir  : 2 │
│ Outgoing : 1    Hot Peers  : 1   Duplex  : 0 │
Andrew_CSPHD:
"useLedgerAfterSlot":-1
This lines up with the UseLedgerPeers log.
Thank you.
It seems that the log does not indicate an abnormal situation.
With this, would you suggest anything to look into or take care of if the pool is chosen as slot leader less frequently than expected?
It all comes down to how much ada is delegated.
How much is delegated to your pool?
Ah, yeah, that will make it hard to get a block. You’ll be sitting close to a 10% chance. And remember, it is a lottery each epoch; on average you’ll get one block every ten epochs or so.
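To put rough numbers on that 10% figure: mainnet has 432,000 slots per epoch with an active slot coefficient of 0.05, so about 21,600 blocks are minted per epoch, and a pool's expected share is proportional to its fraction of active stake. A sketch; the stake figures in the example are placeholder assumptions, not CSPHD's actual delegation:

```python
import math

# Mainnet parameters: 432,000 slots per epoch, active slot
# coefficient f = 0.05, so ~21,600 blocks are minted per epoch.
SLOTS_PER_EPOCH = 432_000
ACTIVE_SLOT_COEFF = 0.05

def expected_blocks(pool_stake: float, total_stake: float) -> float:
    """Expected blocks per epoch, proportional to the stake fraction."""
    return SLOTS_PER_EPOCH * ACTIVE_SLOT_COEFF * pool_stake / total_stake

def prob_at_least_one(pool_stake: float, total_stake: float) -> float:
    """Poisson approximation of the chance of >= 1 block in an epoch."""
    return 1 - math.exp(-expected_blocks(pool_stake, total_stake))

if __name__ == "__main__":
    # Placeholder example: 100k ADA out of ~21.5B active stake.
    print(f"expected blocks: {expected_blocks(100_000, 21_500_000_000):.3f}")
    print(f"P(>=1 block):    {prob_at_least_one(100_000, 21_500_000_000):.3f}")
```

With those placeholder numbers the expectation works out to roughly 0.1 blocks per epoch, i.e. close to a 10% chance of any block in a given epoch, which matches the "1 every 10 epochs-ish" intuition.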
Are you checking the leaderlogs each epoch to see if you are allocated a block?
I do, I am following the instructions on:
Good.
Keep checking each epoch.
Try to get more delegation.
Make sure your KES keys are rotated correctly, especially if you didn’t mint a block during the KES period.
Make sure your relays have sufficient incoming and outgoing connections.
And that your relay and BP are communicating well.
Keep your nodes updated.
Other than that, there isn’t much more you can do. It all comes down to luck, and then making sure you don’t miss the blocks you do get allocated!
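On the relay/BP communication point, a simple TCP reachability probe can at least rule out firewall or routing problems; it only checks that the handshake succeeds, not the node-to-node protocol itself. A sketch with a placeholder host (the port matches the 6000 in the topology above):

```python
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    relay = ("xxx.xxx.xxx.xxx", 6000)  # placeholder relay address
    print("relay reachable" if can_reach(*relay) else "relay unreachable")
```

Run it from the BP toward the relay and vice versa; if either direction fails, look at firewall rules before digging into node configuration.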
Hi Jeremy,
I noticed that after running the slot leader check for the next epoch, RAM usage surges and stays occupied even after the check has finished.
I wonder if this is normal, or whether there is anything I should do to handle it.
Yes, it uses a lot. So make sure you have enough RAM (32 GB, I think, is the minimum recommended) and a swap file of, say, 16 GB or more available.
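On the RAM/swap point, a rough pre-flight check before kicking off a leaderlog run can confirm there is headroom. A sketch that reads /proc/meminfo on Linux; the 16 GiB threshold is an arbitrary assumption, adjust it to what your leaderlog tool actually needs:

```python
GIB = 1024 ** 2  # /proc/meminfo reports kB, so 1 GiB = 1024**2 kB

def parse_meminfo(text: str) -> dict[str, int]:
    """Parse /proc/meminfo-style text into {field: kB}."""
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        parts = rest.split()
        if parts:
            info[key.strip()] = int(parts[0])
    return info

def enough_for_leaderlog(info: dict[str, int],
                         need_gib: float = 16.0) -> bool:
    """Very rough check: available RAM plus free swap vs. a threshold."""
    avail = info.get("MemAvailable", 0) + info.get("SwapFree", 0)
    return avail >= need_gib * GIB

if __name__ == "__main__":
    with open("/proc/meminfo") as f:
        info = parse_meminfo(f.read())
    print("enough headroom" if enough_for_leaderlog(info) else "low memory")
```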
As a note, you don’t have to run the slot leader check on the BP. You just need the vrf.skey and it can be run on one of your relays.
Thank you so much for your reply.
I did configure the memory and swap as suggested by CoinCashew: 32 GB RAM + 8 GB swap.
Another concern of mine is that the memory is not released even after the check; I am not sure if this is normal.
I have found that some of the RAM is not released, although I am not sure what is holding it. If it looks like too much, a restart of your node should clear it up.