Yes, your CNODE_PORT configured on the BP is 6000 (and the relay's is 6001). Is your relay running on the same machine?
Yes, the first relay is a second VM on the same host.
The second relay is on a hosted service.
Ok, then configure:
sudo ufw allow proto tcp from relay_ip to any port 6000
On the relay, leave port 6001 open so other public relays can connect.
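For reference, a minimal sketch of the two firewall setups (the address 203.0.113.10 is just a placeholder for your relay's IP):
# On the BP: only your own relay may reach the node port
sudo ufw allow proto tcp from 203.0.113.10 to any port 6000
# On the relay: any public relay may connect
sudo ufw allow 6001/tcp
sudo ufw status numbered   # verify the rules took effect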
ok thanks - I will do that
Thanks - I have replaced the quotes with those from the keyboard and have got a little further, but I now get the error
What error?
No, sorry - I don’t know why I posted that; it must have been an error on my part. I have the BP node working and the second relay working. I am just trying to work out why my first relay is not being seen, but it is probably a port or firewall issue, which I am looking into.
Hi there again - sorry, I am nearly there but can’t seem to get my first relay in order (the 2nd one on the hosted service is working fine). I only have outgoing peers showing in gLiveView and no incoming peers. There are also no transactions. I have checked the firewall and have it set to allow from all on port 6001, and I have updated the env and topologyUpdater files with the correct information. It is set up the same as my other relay, but something is preventing incoming peers. Any ideas what I have done wrong?
Is it the relay which is running on the same host as the BP?
Go to the logs folder, open topologyUpdater.log with nano, and show me the logs…
Tell me your relay's address and port so I can check whether it is open, or you can check it yourself.
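For example (relay.example.com is just a placeholder for your relay's address), from any other machine:
nc -vz relay.example.com 6001   # succeeds if something is listening and the firewall allows it
Or, on the relay itself, confirm cardano-node is actually bound to the port:
ss -tlnp | grep 6001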
Thanks - I think I am almost there now. I checked my cnode.sh file and I had the wrong port in it (as I made my relay from a clone of the BP node). After changing it to the correct port value of 6001 I can now see the service working. I can also see the relay as an outgoing peer on my BP node, but still cannot see it as an incoming peer on the BP node. The other relay is fine, as it can be seen as both an outgoing and an incoming peer. I have run the topologyUpdater.sh script again on the problem relay to see if that updates things, and will see how it goes.
Restart your BP.
If you set the correct IP and port in the BP's topology file, both relays should show up there, in both IN and OUT.
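For reference, a minimal sketch of what the BP's topology.json should contain (the addresses are placeholders for your two relays):
{
  "Producers": [
    { "addr": "203.0.113.10", "port": 6001, "valency": 1 },
    { "addr": "relay2.example.com", "port": 6001, "valency": 1 }
  ]
}
In the usual setup the BP's topology stays static with just these relay entries; only the relays run topologyUpdater.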
Thanks - yes, I now have both relays visible in the IN and OUT sections of gLiveView on the BP machine. Now I need to set up a script so that topologyUpdater.sh runs every 60 minutes on both relays. I only have the script from the CoinCashew guide, so I will have to adapt it for CNTools, I think?
Yes, and add it to crontab so it runs automatically every hour.
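A minimal crontab entry for this, assuming the default guild-operators script location (adjust the path to yours):
# crontab -e  (on each relay) - run topologyUpdater once per hour
33 * * * * /opt/cardano/cnode/scripts/topologyUpdater.sh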
Hi there again. When I set up my node using the Guild Operators guide I did not set up any crontab job to run the topologyUpdater script. However, when I look at the topologyUpdater_lastresult.json file I can see that topologyUpdater.sh appears to be running every hour anyway. This is the content of the file:
{ "resultcode": "504", "datetime": "2020-12-15 11:19:25", "clientIp": "86.0.242.62", "iptype": 4, "msg": "one request per hour please" }
{ "resultcode": "204", "datetime": "2020-12-15 11:29:57", "clientIp": "86.0.242.62", "iptype": 4, "msg": "glad you're staying with us" }
{ "resultcode": "504", "datetime": "2020-12-15 12:14:31", "clientIp": "86.0.242.62", "iptype": 4, "msg": "one request per hour please" }
{ "resultcode": "204", "datetime": "2020-12-15 13:14:38", "clientIp": "86.0.242.62", "iptype": 4, "msg": "glad you're staying with us" }
{ "resultcode": "204", "datetime": "2020-12-15 14:14:45", "clientIp": "86.0.242.62", "iptype": 4, "msg": "glad you're staying with us" }
{ "resultcode": "504", "datetime": "2020-12-15 14:43:36", "clientIp": "86.0.242.62", "iptype": 4, "msg": "one request per hour please" }
{ "resultcode": "204", "datetime": "2020-12-15 15:14:52", "clientIp": "86.0.242.62", "iptype": 4, "msg": "glad you're staying with us" }
{ "resultcode": "204", "datetime": "2020-12-15 16:14:59", "clientIp": "86.0.242.62", "iptype": 4, "msg": "glad you're staying with us" }
{ "resultcode": "204", "datetime": "2020-12-15 17:15:06", "clientIp": "86.0.242.62", "iptype": 4, "msg": "glad you're staying with us" }
{ "resultcode": "204", "datetime": "2020-12-15 18:15:17", "clientIp": "86.0.242.62", "iptype": 4, "msg": "glad you're staying with us" }
Does this mean that the update has already been automated through the systemd service processes?
Thanks once again.
Ok, I forgot… yes, systemd is doing its job.
If on the BP you can see your relays, the TX count incrementing, and info about KES and BLOCKS, you are ok.
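If you want to confirm that it really is systemd driving the hourly push, you can list the timers; the exact unit names depend on how deploy-as-systemd named them on your machine:
systemctl list-timers --all | grep -i cnode   # look for the topology-updater units
# then: sudo systemctl status <unit-name>.timer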
Ok, thanks very much for all the help you have provided. It looks like I have successfully upgraded to version 1.24.2. I really hope this assists others that may run into similar problems. I will mark this as solution completed.
If you have time, could I ask for help on one other issue? I want to install logMonitor.sh and CNCLI so that I can run some of the features in CNTools. I understand this will let me get information on the leader slots. However, when I try to set it up I am prompted to run cncli.sh init, but when I do I get this error:
$ ./cncli.sh init
ERROR: unable to locate the pools VRF vkey file or extract cbor hex string from: /opt/cardano/cnode/priv/pool/GNP1
I have checked, and I have my VRF key in that path and the key contains a cbor hex string, so I am not sure why it won’t proceed. Are you able to advise on what I could try? Many thanks.
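(For reference, one way to sanity-check the file cncli.sh is complaining about - the vrf.vkey filename is assumed from the default CNTools layout:)
ls -l /opt/cardano/cnode/priv/pool/GNP1/
jq -r '.cborHex' /opt/cardano/cnode/priv/pool/GNP1/vrf.vkey   # the key file is JSON and should contain a cborHex field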
Hi, in the env file I uncommented all the lines for the wallet- and pool-related files.
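Roughly, the lines in question look like this; the variable names may differ slightly between guild-operators versions, so treat them as illustrative:
# in the env file, uncomment/set the pool-related entries, e.g.
POOL_NAME="GNP1"
POOL_FOLDER="${CNODE_HOME}/priv/pool"
POOL_VRF_VK_FILENAME="vrf.vkey"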
Anyway, since last night we are in the Allegra era and a new CNCLI version, 0.4.2, was released… you need to install the new version in order to run it.
cd "$HOME/tmp"
curl -sS -o prereqs.sh https://raw.githubusercontent.com/cardano-community/guild-operators/master/scripts/cnode-helper-scripts/prereqs.sh
chmod 700 prereqs.sh
./prereqs.sh -c
Using apt to prepare packages for "Ubuntu" system
Updating system packages…
[sudo] password for
Installing missing prerequisite packages, if any…
Creating Folder Structure …
Environment Variable already set up!
previous CNCLI installation found, pulling latest version from GitHub…
updating RUST if needed…
building CNCLI…
Great, thanks for this - I was about to start working on it, but when I checked my nodes I noticed that my BP was showing an error in gLiveView:
COULD NOT CONNECT TO A RUNNING INSTANCE, 3 FAILED ATTEMPTS IN A ROW!
Now when I try to restart my node and run gLiveView I just get:
./gLiveView.sh
Node socket not set in env file and automatic detection failed! [source: gLiveView.sh]
I have tried uncommenting the socket line in the env file, but then it just returns the original error that it cannot connect to a running instance. I haven’t changed or altered anything, so I don’t know why this has happened. Can you think of anything? Sorry for this, but I thought I had everything up and running properly.
Also, there is no file in the sockets folder - I’m not sure if this is right?
Ok, first: to start ./gLiveView.sh, is your node up and running?
I don’t know if you are running the ./cnode.sh script in tmux or if you set it up as a systemd service.
If it is set up as systemd, try: sudo systemctl status cnode.service
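If the service is failing, a typical sequence would be something like this (the socket path assumes the default CNTools layout, so adjust if yours differs):
sudo systemctl restart cnode.service
sudo journalctl -u cnode.service -f       # watch the node start up
ls -l /opt/cardano/cnode/sockets/         # node0.socket should appear once the node is up
./gLiveView.sh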