If an expert looked at the LiveView snapshot below, what would their conclusion be? Is the BP node on par, so-so, or not good? If somebody could comment on the BLOCK PROPAGATION section and “Missed slot leader checks”, that would be great.
Your block propagation does not look good; on all of my nodes it is >99.5%.
Missed slot leader checks looks pretty bad. If the hosting were of good quality, it would now be around 130: during the last epoch transition, “Missed slot leader checks” was between 125 and 130 for a few nodes for which I and other SPOs collected statistics, and on good hosting a block producer will have 0 missed slot leader checks between epoch transitions. Cheap hosting will probably not be good hosting.
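For reference, gLiveView reads this counter from the node’s Prometheus endpoint, so you can also check it outside the TUI. The port 12798 and the metric name `cardano_node_metrics_slotsMissedNum_int` below are assumptions based on a default cnode setup; check your own node config. A sketch:

```shell
# Fetch the node's Prometheus metrics (port is an assumption; see your config):
#   metrics=$(curl -s http://127.0.0.1:12798/metrics)
# For illustration, parse a small sample of that output instead:
metrics='cardano_node_metrics_slotsMissedNum_int 3
cardano_node_metrics_blockNum_int 7654321'
missed=$(printf '%s\n' "$metrics" | awk '/slotsMissedNum/ {print $2}')
echo "missed slot leader checks: $missed"
```

The counter resets when the node restarts, so compare it against the node’s uptime, not just the epoch.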
But I’ve seen a lot worse, and I’ve heard of pools missing more than 1% of their slot leader checks, too.
Thank you for your reply! Just wanted to clarify “Block propagation does not look good. It is >99.5% for all my nodes”. What measurement is >99.5% on your nodes? Within 1s? Within 3s?
Great, thank you!
This brings the next two questions.
What is a good enough hosting as far as a Cardano pool is concerned?
My BP and two relays are virtual servers. All three have 6 cores (2.8 GHz), 16 GB memory, a 400 GB SSD, a dedicated IP address, and 32 TB of traffic.
The pings to cnn.com, yahoo.com, and bbc.co.uk are 1.5, 10.0, and 1.6 ms respectively, which seems OK. What else can I measure to estimate the hosting quality? Some benchmark utility, maybe?
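Ping only measures network latency. For a rough look at disk and CPU (the two things an oversold VPS typically skimps on), something like the sketch below works with standard tools; the 64 MB write size and the loop count are arbitrary choices, and dedicated tools such as fio or sysbench will give better numbers:

```shell
# Disk: sequential write speed into the current directory (conv=fdatasync
# forces the data to actually hit the disk before dd reports a rate).
dd if=/dev/zero of=./ddtest.bin bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
rm -f ./ddtest.bin

# CPU: time a fixed busy loop in milliseconds (lower is better; compare the
# same loop across hosts rather than trusting the absolute number).
start_ns=$(date +%s%N)
i=0; while [ "$i" -lt 200000 ]; do i=$((i+1)); done
end_ns=$(date +%s%N)
echo "busy-loop ms: $(( (end_ns - start_ns) / 1000000 ))"
```

Run the disk test against the filesystem where the node’s database lives, since that is the I/O path that matters for block validation.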
Another aspect of this is Haskell runtime parameters. If anybody had success tweaking those, please share. The only parameter I changed is this: CPU_CORES=6.
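For context, CPU_CORES=6 in the cnode scripts ends up as the GHC RTS -N flag. If you want to experiment beyond that, RTS options can be passed on the node’s command line between +RTS and -RTS; a sketch (the file paths and port are placeholders, and whether any given RTS flag helps is workload-dependent):

```shell
cardano-node run \
  --config config.json \
  --topology topology.json \
  --database-path ./db \
  --socket-path ./node.socket \
  --port 6000 \
  +RTS -N6 -RTS
```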
Your configuration is very good, and the CPU_CORES setting is OK.
You cannot really know whether a hosting provider is of good quality before trying it. Price might be a good hint: if it is cheap, it is probably not good, because the provider is probably over-committing (overselling) its resources.
The numbers are slightly better if I run the node with --nonmoving-gc. This clearly comes at the expense of higher memory consumption, which is expected given the nature of the flag: memory consumption increased by ~40%. As a result, running cncli.sh leaderlog gets the node killed, because there is not enough memory for both cnode and the cncli leaderlog run. Rolling back to the original parameters.
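To see why leaderlog runs out of room on a 16 GB host, a back-of-envelope calculation (the 9 GB baseline RSS below is a hypothetical figure, not a measurement; the 40% is the increase observed above):

```shell
base_mb=9000                              # hypothetical node RSS, default GC
nonmoving_mb=$(( base_mb * 140 / 100 ))   # +40% under --nonmoving-gc
echo "approx nonmoving RSS: ${nonmoving_mb} MB"
# ~12.6 GB for the node alone leaves little of 16 GB for cncli leaderlog.
```

With those assumed numbers, the headroom left for cncli is a few gigabytes at best, which matches the OOM kill you saw.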