Stuck at "Listening on http://127.0.0.1:12798" when running the ./cnode.sh script

I built up my node based on the steps below:


When I try to start the node, I get the following:
WARN: A prior running Cardano node was not cleanly shutdown, socket file still exists. Cleaning up.
Listening on http://127.0.0.1:12798

I ran:
killall cardano-node
rm -R /opt/cardano/cnode/db
but with no result.
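For reference, the warning is only about a leftover socket file, so deleting the whole db directory (which forces a full resync) should not be necessary. A minimal cleanup sketch, assuming the default guild-operators paths (the socket path below is an assumption; adjust to your install):

```shell
# Sketch: clean up only the stale socket left by an unclean shutdown.
# CNODE_HOME and the socket filename are assumptions based on the
# default cnode layout; adjust to your installation.
CNODE_HOME="${CNODE_HOME:-/opt/cardano/cnode}"
SOCKET="${CNODE_HOME}/sockets/node0.socket"

# Stop any leftover node process first.
killall -q cardano-node 2>/dev/null || true

# Remove the socket file only if it is actually a stale socket.
if [ -S "$SOCKET" ]; then
    rm -f "$SOCKET"
    echo "removed stale socket: $SOCKET"
else
    echo "no stale socket at $SOCKET"
fi
```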

My cardano-node version:
git rev eed250546fa9acec4c9de557b3e3551c1f682a30
My cardano-cli version:
cardano-cli 1.23.0 - linux-x86_64 - ghc-8.10

Please advise


There is no LiveView mode anymore starting with 1.23.0; you will see only "Listening on 127.0.0.1 …", but your node is working fine (you can check with Grafana, with the gLiveView.sh script, or with RTView).

Hi Alex
Got it, thanks. Another issue I am facing: when I edit the topology.json file (under /opt/cardano/cnode/files) to point my producer to my relay's IP (and vice versa), the initial parameters get overwritten. Even when I stop the cnode.service, edit the file, and start the service again, the issue remains.
Please advise.

Are you using the topology updater script?

Hi Alex
No, I edit the topology.json file with nano.
How can I use the topology updater? Please advise.

Are you using CNTools? Are your nodes synced with the network?

Yes, both nodes are fully synced, and I will use CNTools to generate keys etc.

Then you can run ./gLiveView.sh on each node to show the status of your nodes, peers, etc.

Sure. So you're saying to use the CNTools script to update the topology file?

Nope. I'm using the topology updater scripts.

Only for relays! On the BP it should always stay the same (manually pointing to your relays).

I am still confused about what needs to be done :frowning:
I edited the topology.json file and then ran ./topologyUpdater.sh, but the original parameters get overwritten.
Would you provide exact steps for how to update the topology file, please?

It is normal for it to be overwritten, because that is what the topology updater does… but only on relay nodes.

On the BP you should have the topologyUpdater script configured like this:

#MAX_PEERS=15 # Maximum number of peers to return on successful fetch
CUSTOM_PEERS="IP_RELAY1:PORT_RELAY1:1|IP_RELAY2:PORT_RELAY2:1"
This way, inside your BP topology file you will find only your relays.
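With that configuration, the BP's topology file would end up containing only the relay entries, roughly like this (a sketch; IPs and ports are placeholders, and the exact surrounding fields depend on the script version):

```json
{
  "Producers": [
    { "addr": "IP_RELAY1", "port": PORT_RELAY1, "valency": 1 },
    { "addr": "IP_RELAY2", "port": PORT_RELAY2, "valency": 1 }
  ]
}
```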

======================

Relays:

Using the latest script:

CNODE_TOPOLOGY="${CNODE_HOME}/files/topology.json" # Destination topology.json file you'd want to write output to
MAX_PEERS=13 # Maximum number of peers to return on successful fetch
CUSTOM_PEERS="Relay1_IP:port|BP_IP:port|Relay2_IP:port|relays-new.cardano-mainnet.iohk.io:3001:2"

This overwrites the topology file, which then looks like this:

{ "resultcode": "201", "networkMagic": "764824073", "ipType": 4, "Producers": [
{ "addr": "Relay1_IP", "port": x, "valency": 1 },
{ "addr": "BP_IP", "port": x, "valency": 1 },
{ "addr": "Relay2_IP", "port": x, "valency": 1 },
{ "addr": "relays-new.cardano-mainnet.iohk.io", "port": 3001, "valency": 2 },
{ "addr": "Public_relay", "port": x, "valency": 1, "distance": 25, "continent": "NA", "country": "US", "region": "MO" },
{ "addr": "Public_relay", "port": 6000, "valency": 1, "distance": 767, "continent": "NA", "country": "US", "region": "GA" },
{ "addr": "public_relay", "port": 6001, "valency": 1, "distance": 1150, "continent": "NA", "country": "US", "region": "MD" }
] }
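On the relays, the updater needs to run regularly for the fetch to succeed; a common convention is an hourly cron job. A sketch, assuming the default cnode script location (the path below is an assumption; adjust to your install):

```
# crontab -e entry: run topologyUpdater.sh once an hour (hypothetical path)
33 * * * * /opt/cardano/cnode/scripts/topologyUpdater.sh > /dev/null 2>&1
```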

BE CAREFUL: on the BP you should not set MAX_PEERS (comment it out with #).

Hi Alex,

Thanks for your prompt response, I really appreciate it.
In the CUSTOM_PEERS section, can I leave my relay's private IP, or should it be the public IP?

Public… but I believe you will need to configure port forwarding for that port if you are using NAT.
Are your servers running on the same machine?

Yes, both servers are running on the same machine.
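After setting up the port forward, you can verify from another machine that the relay's public port actually accepts TCP connections. A minimal sketch (the host and port are placeholders, and it assumes nc is installed):

```shell
# Sketch: TCP connect test against a relay's public endpoint.
# HOST and PORT are hypothetical; replace with your relay's
# public IP and the forwarded port.
HOST="${HOST:-127.0.0.1}"
PORT="${PORT:-6000}"

# -z: connect without sending data, -w 5: five-second timeout.
if nc -z -w 5 "$HOST" "$PORT" 2>/dev/null; then
    echo "reachable: $HOST:$PORT"
else
    echo "unreachable: $HOST:$PORT"
fi
```

If both nodes run on one machine, remember they must each listen on a distinct port, and each forwarded port must map to the right one.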

https://flowrpool.com/Video-Tutorial-Notes.txt
Here you can find the topology configuration and which IP should be used.

I'm using CNTools with no problems; it's working well.
What's your issue?

I also have no problems with CNTools so far; I find it a very handy script.


Thanks for the reply. All is good now!

I found this video to help a ton: https://www.youtube.com/watch?v=8cmIFwK5JbY&t=293s