How to set up a POOL in a few minutes - and register using CNTOOLS

The ticker is LGBTQ and the pool ID is fc0ef418b0cce20c7a1ac7b8aec9dabd8e9e83b3fcb88c228b4f2c1d.
I can see it in pooltool.io, adapools.org and cardanostacking.info.

I guess it will appear in pool.vet sooner or later. Maybe it is because there is no active stake yet.

https://pool.vet/#fc0ef418b0cce20c7a1ac7b8aec9dabd8e9e83b3fcb88c228b4f2c1d

LOL, I didn't think to check with the pool ID! It seems there are still some things to fix. Thank you, sir, for your help!


Hello Alex, and a HUUUUUUGE thanks for your tutorial and all the precious assistance you provide!

I'm new here and currently building a new pool (step 3 in progress). I'm still learning, but I have some questions, if you can help please:
1- When running ./gLiveView.sh to check the sync status, I get this message before the gLiveView interface appears:
[screenshot]

Sometimes I don't get it:
[screenshot]

Apart from that, everything seems to be running properly:


[screenshots]

What do you think?

2- I have 2 relay nodes and one BP node. They are currently still syncing, but while I started them at the same time, they don't have the same progress, and one of them is really slow. The relays are at 86% and 99% progress, while the BP node is at only 15% (connected to 2 nodes with an average RTT of 18 ms).
[screenshot]
Any idea of the root cause of such a gap?
3- Does RTT have any impact on the syncing speed of the nodes?
4- Last but not least, I initially wanted to add more resiliency by having 2 BP nodes in the same pool behind my 2 relays, something like this comment: How to set up a POOL in a few minutes - and register using CNTOOLS - #98 by Gachi_Pool
Is there a way to achieve that? I'm not talking about zero loss of service on the block producers. It is okay if the block in flight at the moment of an incident is lost, but at least the secondary BP should be available quickly, without waiting for the primary node to come back to normal.

Hello,

  1. When you have that error, check whether the node restarted:

sudo systemctl status cnode

  2. Try to manually add the peers from the relays (which are faster) to the Producer's topology file, or try restarting the node.

  3. No, it shouldn't.

  4. You can add a second Producer as a backup and keep it as a private relay (don't register it to the network, but keep it connected with the relays; let it run as a relay). You will only need to upload these files:
    hot.skey
    vrf.skey
    node.cert

Do not run both as a core node (Producer) at the same time.
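A minimal sketch of staging those three files for a standby Producer. The paths here are local `mktemp` stand-ins for illustration; on a real setup you would copy the files (e.g. with scp) from the live Producer's key directory to the backup machine:

```shell
# Stand-in directories; on a real node the keys live under the
# CNTOOLS priv/pool/<POOL_NAME> directory (path is an assumption).
SRC=$(mktemp -d)
DST=$(mktemp -d)

# Stand-in key files matching the three Alex lists above.
touch "$SRC/hot.skey" "$SRC/vrf.skey" "$SRC/node.cert"

# Copy only those three files to the standby Producer's directory.
for f in hot.skey vrf.skey node.cert; do
  cp "$SRC/$f" "$DST/$f"
done

ls "$DST"
```

Remember the warning above: the standby must run as a plain relay until you actually need to promote it.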

Cheers,

Thanks a lot for the quick and clear answers:

1- here is the result:

cnode.service - Cardano Node
Loaded: loaded (/etc/systemd/system/cnode.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2021-04-04 08:57:49 UTC; 11h ago
Main PID: 208730 (cnode.sh)
Tasks: 17 (limit: 9444)
Memory: 3.2G
CGroup: /system.slice/cnode.service
├─208730 /bin/bash /opt/cardano/cnode/scripts/cnode.sh
└─208852 cardano-node run --topology /opt/cardano/cnode/files/topology.json --config /opt/card>

Apr 04 08:57:49 ADAPNode01 systemd[1]: Started Cardano Node.
Apr 04 08:57:57 ADAPNode01 cnode[208730]: WARN: A prior running Cardano node was not cleanly shutdown, sock>
Apr 04 08:58:03 ADAPNode01 cnode[208852]: Listening on http://127.0.0.1:12798

2- I updated the cnode/files/topology.json file as below, and I tried as well with only my relays as producers, but it is still not working:

{
  "resultcode": "402",
  "datetime": "2021-04-04 08:46:39",
  "clientIp": "173.82.212.45",
  "iptype": 4,
  "msg": "IP is not (yet) allowed to fetch this list",
  "Producers": [
    {
      "addr": "relays-new.cardano-mainnet.iohk.io",
      "port": 3001,
      "valency": 2,
      "debug": "default fallback result"
    },
    {
      "addr": "173.xx.xx.xx",
      "port": 9212, #My relays ports
      "valency": 1
    },
    {
      "addr": "173.xx.xx.xx", #My relays ports
      "port": 9212,
      "valency": 1
    }
  ]
}

I then restarted the node, and now I'm getting another error; it is not working at all:

Looks like cardano-node is running with socket-path as /opt/cardano/cnode/sockets/node0.socket, but the actual socket file does not exist.
This could occur if the node hasnt completed startup or if a second instance of node startup was attempted!
If this does not resolve automatically in a few minutes, you might want to restart your node and try again.

press any key to proceed…

Do you have an idea about what I am doing wrong?

Kindly note that I'm disabling the firewall for now, to avoid any problems from that side.

Thanks in advance

The topology file is the issue…
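The file pasted earlier contains curly quotes (from a copy/paste) and `#` comments, and JSON allows neither. A quick sketch of validating the topology file before restarting the node catches this kind of problem; the IPs and ports below are placeholders:

```shell
# Write a minimal two-relay topology (placeholder addresses/ports),
# the same shape as the working file shown in this thread.
cat > /tmp/topology.json <<'EOF'
{
  "Producers": [
    { "addr": "173.0.0.1", "port": 9212, "valency": 1 },
    { "addr": "173.0.0.2", "port": 9212, "valency": 1 }
  ]
}
EOF

# json.tool parses the file; any curly quote or '#' comment would fail here.
if python3 -m json.tool /tmp/topology.json >/dev/null 2>&1; then
  result="valid JSON"
else
  result="syntax error"
fi
echo "topology.json: $result"
```

Run the same `python3 -m json.tool` check against your real `$CNODE_HOME/files/topology.json` before every restart.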

I'm sorry; it is in fact working after setting the topology file with only this inside:

{
  "Producers": [
    {
      "addr": "RELAY1 IP ADDRESS",
      "port": RELAY_PORT,
      "valency": 1
    },
    {
      "addr": "RELAY2 IP ADDRESS",
      "port": RELAY_PORT,
      "valency": 1
    }
  ]
}
[screenshot]

No more errors… and the 2 peers shown are my relays.

Thanks a lot Alex… I hope to achieve this project quickly.


Hi @Alexd1985 ,

Do you have any specific guidelines about updating to 1.26.x? Is checking out 1.26.x and recompiling everything the only option?

Any recommendations on a backup before we move to 1.26.x?

Should we move to 1.26.x, or can we continue with 1.25.1 and see how the network takes 1.26.x?

Warm Regards,
MusicianVishal

For the moment I am testing the new release on my test relay; there is no rush to update to 1.26.1.


Hey @Alexd1985

Amazing man! You already had this info and the update instructions documented in a new post. I am sorry I did not notice that.

Thanks for the updates.

Regards,
MusicianVishal


sudo apt install libpam-google-authenticator

E: Unable to locate package libpam-google-authenticator

The above line is giving this error.

I hope it will work.

Cheers,


Try this command before installing libpam-google-authenticator, to refresh the package index:
sudo apt update


[screenshot: RelayMainNet]

I got this far with no hiccups whatsoever.
Now waiting for the node to sync, and then I will continue with STEP 4 - connect the nodes to each other.

Thanks for all your help!


Hi Alex,
Can your tutorial be used for the new version, 1.26.1, without any adjustment?

Yes, replace 1.25.1 with 1.26.1 (step 2).
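A dry-run sketch of what "replace 1.25.1 with 1.26.1" amounts to when building from source: the function below only prints the usual fetch/checkout/build commands instead of running them (remove the `echo`s on a real node; repo location and exact build targets are assumptions, check the guide's step 2 for your setup):

```shell
# Print the source-update commands for a given cardano-node tag
# (dry run; echoes instead of executing).
build_node() {
  ver="$1"
  echo "git fetch --all --tags"
  echo "git checkout tags/$ver"
  echo "cabal build cardano-node cardano-cli"
}

build_node 1.26.1
```

The point is that the update is just the step-2 build re-run with a new tag, so no separate procedure is needed.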


hello Alex,

How do I check the node's sync status with mainnet?
The last time I checked it was 98% complete, but the connection to the cloud server dropped.

Start gLiveView:

Connect to the node.

Go to
cd /opt/cardano/cnode/scripts (or cd $CNODE_HOME/scripts)

and run
./gLiveView.sh
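If gLiveView is unavailable, a rough manual check is also possible: `cardano-cli query tip --mainnet`, run on the node itself, reports the slot the node has reached, and comparing it with the network tip gives an approximate sync percentage. A sketch of the arithmetic with placeholder slot numbers:

```shell
# Placeholder values; on a real node, take node_slot from
# `cardano-cli query tip --mainnet` and network_slot from an explorer.
node_slot=25000000      # slot your node has reached
network_slot=26000000   # current network tip

# Integer percentage of chain synced.
sync_pct=$(( node_slot * 100 / network_slot ))
echo "sync: ${sync_pct}%"
```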


Good morning,
I would like to know if you have a guide like this where I can learn the CNTOOLS procedure for when I change the producer server: physically moving from a server built with CNTOOLS to a new one built with CNTOOLS.