How to set up a POOL in a few minutes - and register using CNTOOLS

So it has data inside; try opening it with vi. Can you open other files?

It works with vi. After editing and saving, nano works fine too. Thank you.

So I finally got my stake pool up and running, but have run into a slight issue…
I also did a lot of tinkering and testing on the testnet; I wanted to get the hang of everything
and break it early rather than late, lol.

My nodes don't seem to shut down correctly. I'm not sure whether this also happened on the testnet, because starting the nodes there didn't take very long, just a few minutes.

However, this is after switching from testnet to mainnet and actually reinstalling from scratch (I messed up and didn't realize it was working, so I just reinstalled).

When I stop and start the service, this always seems to happen. This time it was the relay restart timer that resets the topology peers (or whatever it is that it does) after 86400 s (24 h).

A restart always seems to take somewhere between 20 and 30 minutes.
(screenshot)

The relay VM has 12c/24t, 24 GB RAM, and 1 TB of fast storage, running on a server shared with a few other things.
There is no resource contention, as far as I have been able to tell.

The problem seems to stem from the cnode service (or whatever it's called) not doing a graceful shutdown or restart.
I've been trying to solve this for days without luck. I figured maybe it was because I hadn't registered my pool yet, but now that I have, the problem persists.

I tried increasing
TimeoutStopSec=5
in cnode.service to 600, adding
RestartSignal=SIGINT
and a few other changes, like removing the ExecStop.

But removing the ExecStop just made the process walk the entire blockchain at startup, I think,
so that took over an hour or two… I stopped keeping track.
I set my cnode.service in /etc/systemd/system/ back to the defaults.

I'm not really well versed in all this systemd service SIGINT stuff,
but it seems to me that there is an issue here that isn't due to my configuration.
The issue occurs on both my BP and relay,
running Ubuntu Server 21.10.
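In case it helps others: a longer stop timeout is usually applied through a systemd drop-in rather than by editing the unit file in place. A minimal sketch, assuming the service is named cnode.service; the exact values here are illustrative, not the guild-ops defaults:

```ini
# /etc/systemd/system/cnode.service.d/override.conf
# Create with: sudo systemctl edit cnode
[Service]
# Allow up to 10 minutes for the node to flush its state before
# systemd escalates to SIGKILL (an unclean kill forces a chain replay).
TimeoutStopSec=600
# cardano-node shuts down cleanly on SIGINT (signal 2).
KillSignal=SIGINT
```

After saving, run sudo systemctl daemon-reload and restart the service; systemctl show cnode -p TimeoutStopUSec confirms the override took effect.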

Try nano cnode.sh, uncomment the line below, and increase the number of vCPUs used (the default is 2).

See if this helps:

nano cnode.sh

#CPU_CORES=2

My bare-metal BP with 6 vCPU / 32 GB RAM starts in ~3-4 minutes, but my VPS relay with the same 6 vCPU / 32 GB RAM starts in ~10 minutes.
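If you prefer to script the change, the uncomment can be done with sed. A sketch that operates on a throwaway copy and assumes the stock line reads exactly #CPU_CORES=2 (in practice, point the sed line at cnode.sh under your $CNODE_HOME/scripts directory):

```shell
#!/usr/bin/env bash
set -euo pipefail
# Demo on a throwaway copy of the relevant line from cnode.sh.
printf '%s\n' '#CPU_CORES=2' > /tmp/cnode-demo.sh
# Uncomment CPU_CORES and raise it to 4 (pick a value for your hardware).
sed -i 's/^#CPU_CORES=2/CPU_CORES=4/' /tmp/cnode-demo.sh
grep '^CPU_CORES=' /tmp/cnode-demo.sh   # shows: CPU_CORES=4
```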

cheers,

Yeah, I already set that to 24 so it can utilize all the threads. I tried changing it to lower counts earlier, like 12, because I have 12 cores, but then it only seems to utilize a correspondingly lower number in the node CPU %.

It seems to be a thread count: each step up in CPU_CORES adds about 50% more to the CPU (node) figure under NODE RESOURCE USAGE in gLiveView.

As you can see in the screenshot, it's at nearly 1200%, so it's basically using all cores.
My CPUs don't have much single-threaded performance, and they are quite old.

Still, even if the problem can be mitigated with enough compute, cnode doesn't seem to shut down correctly, which doesn't exactly help…

Understood, but 30 minutes is indeed a lot. If you have more relays, try:

sudo systemctl status cnode-tu-restart.timer

The relays should not all restart at the same time, and you can increase the restart timer beyond 24 hours (it's not mandatory to restart every 24 hours).
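If staggering is needed, the timer's schedule can also be overridden with a drop-in. A sketch assuming the unit is named cnode-tu-restart.timer; check its actual [Timer] directives first with systemctl cat cnode-tu-restart.timer, since the values below are illustrative (and if the stock timer uses OnUnitActiveSec instead, clear that directive the same way):

```ini
# /etc/systemd/system/cnode-tu-restart.timer.d/override.conf
# Create with: sudo systemctl edit cnode-tu-restart.timer
[Timer]
# An empty assignment clears the inherited schedule; then pick a fixed
# time of day -- use a different hour on each relay.
OnCalendar=
OnCalendar=*-*-* 03:00:00
# Extra random delay spreads the restarts even further apart.
RandomizedDelaySec=1800
```

Apply with sudo systemctl daemon-reload, then verify the next trigger time with systemctl list-timers cnode-tu-restart.timer.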

Yeah, the relay isn't really the concern; I haven't set up multiples of those yet, but I will, so for them it's just a minor annoyance.

However, the BP node also takes about the same time to restart due to the incorrect shutdown.
Can a node that is offline win a block?

Or what would the downside be of having such long restarts for a BP?
I don't really want to miss any blocks when the stake pool gets to that point.

I mean, sure, it can most likely run for a long, long time, but it will have to reboot eventually.

lol perfect excuse to get new hardware lol

But why are you restarting the BP?

Also, install cncli (block leaderlog) and check the blocks assigned; this way you will know when it's safe to perform changes on the BP.

No good reason; I'm trying to solve issues ahead of time. It would just be nice to know that it can actually start up as fast as possible,
for when I need that in the future.

Hmm, interesting: it did actually seem to start faster with just CPU_CORES=12.

Try with 8-10; do not use all of them.

Yeah, I'll keep tinkering with it until I get a best case over a few tries.
I might be getting higher single-core boost clocks; I think my CPUs can do something like an 800 MHz boost on top of 2266 MHz.

And then I will add the incorrect cnode shutdown to my long list of future things I want to fix,
if somebody else doesn't fix it in the meantime or hasn't already found a fix.

Thanks for your help again.

You are welcome! :beers:

I saw that they updated the cnode.sh script yesterday.

You can try downloading it again and giving it a shot.

I can't fix it before having optimized for failures :smiley:
It does look like the contributor upped the shutdown time from 5 s to 60 s,
which I have to assume is the value in cnode.service that defaulted to 5 s.

I already tried changing
TimeoutStopSec=5
to 600,
but maybe he changed more…

I'm more pondering whether the
ExecStop=/bin/bash -l -c "exec kill -2
might have something to do with it (assuming the 2 is seconds rather than some sort of parameter).

But yeah, I will certainly check out the new cntools scripts and see if that helps.
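For what it's worth, the 2 in kill -2 is a signal number, not a duration: signal 2 is SIGINT, the same signal Ctrl+C sends, and it's what asks the node for a clean shutdown. A quick shell sketch to confirm the mapping:

```shell
#!/usr/bin/env bash
# "kill -2 <pid>" sends signal number 2; it is not a 2-second timeout.
kill -l 2                          # print the name of signal 2: INT
trap 'echo "caught SIGINT"' INT    # handle SIGINT instead of dying
kill -2 $$                         # deliver signal 2 to this shell
```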

Reducing CPU_CORES=
in cnode.sh sure helped a ton: going from 24 to 12 pretty much halved my node startup time.
I'm still testing whether even lower helps.

(screenshot)
Restarted the relay at 24, after the BP managed to restart in about 14 minutes.
33 minutes later and I'm still waiting, lol.

Not sure why restart times are so inconsistent, but it might come down to blockchain density, or to the length of the segment the node decides to walk when restarting after a failed shutdown.

I tested my VPS relay and it took ~8 minutes…

Yeah, no doubt it's my old, low-clock-speed Xeon CPUs that are to blame.
37 minutes and now it's done, lol.

So I ran a fair number of tests… and waited a good deal.
The cnode.sh CPU_CORES= setting
seems to work best at my exact number of physical cores; most likely it has to do with hyperthreading.
There don't seem to be any gains from lowering the number further.

Dropping from 24 (an average of ~30 minutes for startup) to 12 gives a best so far of 12 minutes.
Tested:
CPU_CORES=24
CPU_CORES=12
CPU_CORES=6
CPU_CORES=2

The only change seems to be from 24 to 12; anything below gives me about the same time, within what I would expect to be my margin of error.

So it's most likely a single-threaded workload, which suffers from hyperthreading.
The reason I decided to go with CPU_CORES=12 is that even if startup isn't improved, something like block propagation or other such tasks might benefit from having more cores…
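A quick way to compare logical threads against physical cores when choosing a CPU_CORES value (assumes Linux with util-linux's lscpu available):

```shell
#!/usr/bin/env bash
# Logical CPUs (threads) visible to the OS:
nproc
# Physical cores: count unique (core, socket) pairs among online CPUs.
lscpu -b -p=Core,Socket | grep -v '^#' | sort -u | wc -l
```

With hyperthreading enabled the first number is typically twice the second, which matches the 24-thread/12-core box described above.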

Hi Alex, great guide, thank you. I had both a relay and a BP set up using the CoinCashew guide and a pool running on those two separate machines. Unfortunately both machines were lost and I had to rebuild new ones (which I have done using this guide, as CNTools seemed so much easier, and it was!).

I would simply like to retire my pool now but am unsure exactly how to do this. I thought I could recreate the pool on the new boxes using CNTools and then 'retire' it, but I am unsure whether this is the best way, and I am not even sure how to re-create my BP for my existing pool using CNTools… #confused.

I have all the cold keys from the previous BP on a USB drive. Could I please ask for your advice on the best way forward? Please forgive my ignorance…

Very useful guide, thank you for sharing.
