Hacked Stake Pool? - Topology.json Injection

Team,

I just set up a stake pool following all of the official guidelines. However, all my transactions stopped and now I'm unable to access my wallet. Has anyone else had this happen to them?

It is quite upsetting considering it cost me 500 ADA and 340 ADA to set this up. Is there an easy way to get the coins back so I can spin up another server?

ERROR MESSAGE when running gLiveView.sh & cntools.sh
when running ./gLiveView.sh
when running ./cntools.sh
Looks like cardano-node is running with socket-path as /opt/cardano/cnode/sockets/node0.socket, but the actual socket file does not exist.
This could occur if the node hasn't completed startup or if a second instance of node startup was attempted! If this does not resolve automatically in a few minutes, you might want to restart your node and try again.

Troubleshooting Steps Taken
We tried restarting the node a few times, but it still won't work. To double-check the socket error itself, these are the kinds of commands worth running (a sketch using the default paths from the error message above):
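
ls -l /opt/cardano/cnode/sockets/node0.socket    # does the socket file actually exist?
pgrep -fa cardano-node                           # is a cardano-node process running at all?
sudo systemctl status cnode.service              # what does systemd report for the node service?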

This is what the topology.json file looked like before it was changed

{
  "Producers": [
    {
      "addr": "our-IP-address",
      "port": [our-port],
      "valency": 1
    },
    {
      "addr": "relays-new.cardano-mainnet.iohk.io",
      "port": 3001,
      "valency": 2
    }
  ]
}

This is what the topology.json file looked like after it was updated

{
  "resultcode": "201",
  "networkMagic": "764824073",
  "ipType": 4,
  "requestedIpVersion": "4",
  "Producers": [
    { "addr": "167.99.41.63", "port": 6000, "valency": 1, "distance": 5, "continent": "NA", "country": "NL", "region": "NY" },
    { "addr": "34.226.145.203", "port": 3001, "valency": 1, "distance": 353, "continent": "NA", "country": "US", "region": "VA" },
    { "addr": "18.191.2.182", "port": 3001, "valency": 1, "distance": 767, "continent": "NA", "country": "US", "region": "OH" },
    { "addr": "207.244.246.220", "port": 6002, "valency": 1, "distance": 1423, "continent": "NA", "country": "US", "region": "MO" },
    { "addr": "relay1.phoenixrising.io", "port": 3001, "valency": 1, "distance": 2192, "continent": "NA", "country": "US", "region": "TX" },
    { "addr": "165.22.209.166", "port": 6000, "valency": 1, "distance": 3859, "continent": "NA", "country": "IN", "region": "WA" },
    { "addr": "144.126.215.79", "port": 6000, "valency": 1, "distance": 4114, "continent": "NA", "country": "US", "region": "CA" },
    { "addr": "185.177.148.244", "port": 3008, "valency": 1, "distance": 5556, "continent": "EU", "country": "GB", "region": "ENG" },
    { "addr": "rl2-p1-ams3.dfii.co", "port": 3000, "valency": 1, "distance": 5852, "continent": "EU", "country": "NL", "region": "NH" },
    { "addr": "178.128.192.151", "port": 3002, "valency": 1, "distance": 6187, "continent": "EU", "country": "DE", "region": "HE" },
    { "addr": "51.103.134.253", "port": 6000, "valency": 1, "distance": 6308, "continent": "EU", "country": "CH", "region": "ZH" },
    { "addr": "168.119.173.201", "port": 6001, "valency": 1, "distance": 6369, "continent": "EU", "country": "DE", "region": "BY" },
    { "addr": "62.171.160.137", "port": 6000, "valency": 1, "distance": 6475, "continent": "EU", "country": "DE", "region": "BY" },
    { "addr": "89.136.74.203", "port": 6000, "valency": 1, "distance": 7263, "continent": "EU", "country": "RO", "region": "SJ" },
    { "addr": "123.16.134.167", "port": 3010, "valency": 1, "distance": 13122, "continent": "AS", "country": "VN", "region": "HN" },
    { "addr": "203.29.240.197", "port": 6000, "valency": 1, "distance": 17018, "continent": "OC", "country": "AU", "region": "SA" }
  ]
}


I had the exact same situation a few days ago when I first created my stake pool. No matter what I did, as soon as I ran systemctl start cnode.service, it would automatically add these additional hosts; it used to be just my producer node's IP. Hope there's someone who can clarify this.

Also, did you back up your wallet and pool keys? Is it not possible to restore them somewhere and retire the pool? Also, why did it cost you 340 ADA?
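
(If you need to check your backups: with the standard guild-operators/cntools layout the keys usually live under the priv folder; the exact paths below are an assumption and may differ on your setup.)

ls "${CNODE_HOME}/priv/wallet"   # wallets created via cntools
ls "${CNODE_HOME}/priv/pool"     # pool cold keys, certificates, etc.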


Go to the env file in the scripts folder and uncomment the socket variable; this is how we got the server back up and running. However, we still can't access the wallet, and when we try to access it we get this error:

error: cntools failed to load common env file please verify set values in 'user variables'
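
For anyone else hitting this, the env line we had to uncomment looks roughly like this (a sketch assuming the standard guild-operators env file; variable names may differ between versions):

# in ${CNODE_HOME}/scripts/env -- uncomment so it matches the node's --socket-path
SOCKET="${CNODE_HOME}/sockets/node0.socket"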


Should be fine, no one hacked anyone.

  • are you running the topology updater on your Producer?

  • go and run the prereqs again:


cd "$HOME/tmp"
curl -sS -o prereqs.sh https://raw.githubusercontent.com/cardano-community/guild-operators/master/scripts/cnode-helper-scripts/prereqs.sh
chmod 755 prereqs.sh
./prereqs.sh -f
. "${HOME}/.bashrc"

After this operation, go and modify the topology file and env file again.

  • check cntools again

Hi there Alex,

I thought producer nodes are not supposed to run the topology updater? I read in some other posts on this forum that only relay nodes need the topology updater. As for the issue above, I only see this many hosts in the topology.json for both of my relays. The producer one only has the 2 IPs of my two relays.
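
(For comparison, the producer's topology.json just looks something like this sketch, with placeholder addresses and ports:)

{
  "Producers": [
    { "addr": "<relay1-ip>", "port": 6000, "valency": 1 },
    { "addr": "<relay2-ip>", "port": 6000, "valency": 1 }
  ]
}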


OK, that's fine; totally correct.
Where is the issue then?

These are the services activated when you set up the topology updater as systemd units; cnode-tu-fetch.service is the one adding new hosts to your topology file (you can inspect the units as shown after this list):

  • cnode-tu-push.service: pushes a node alive message to the Topology Updater API
  • cnode-tu-push.timer: schedules the push service to execute once every hour
  • cnode-tu-fetch.service: fetches a fresh topology file before cnode.service is started/restarted
  • cnode-tu-restart.service: handles the restart of cardano-node (cnode.sh)
  • cnode-tu-restart.timer: schedules the cardano-node restart service, by default every 24h
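
If you want to see them in action, or stop the fetch from rewriting your topology on every restart, something like this should work (a sketch; unit names as listed above, adjust to how you deployed them):

systemctl list-timers 'cnode-tu-*'                   # show the updater timers and their schedules
systemctl status cnode-tu-fetch.service              # when was topology.json last rewritten?
sudo systemctl disable --now cnode-tu-fetch.service  # stop the automatic topology rewrite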

Cheers,


Just wanted to know if it's normal to have that many hosts and IPs in the topology.json for the relay. As far as I remember, when I first started building my relay last week, there were only a few, but now there are more than 10-20 of them.

Also, is it necessary to have these in the topology.json? I tried manually removing them, but they get recreated after I run systemctl start cnode.service.

From my understanding, I believe these are the various relays that the node tries to connect to.
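
One way to confirm why manual edits don't stick (a sketch; assumes the cnode-tu-* units from the list above are deployed):

systemctl cat cnode-tu-fetch.service   # shows the fetch step that rewrites topology.json
systemctl cat cnode.service            # shows how the node service ties in on start/restart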


Yes, it's normal… the topology updater script is doing this.
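
And if you want specific peers, e.g. your own block producer, to survive the automatic rewrite, the topologyUpdater.sh script has a CUSTOM_PEERS variable for that. A sketch, assuming the standard guild script; format is host:port with an optional valency, entries separated by |, and the address below is a placeholder:

# in ${CNODE_HOME}/scripts/topologyUpdater.sh -- uncomment and adjust
CUSTOM_PEERS="<your-bp-ip>:6000|relays-new.cardano-mainnet.iohk.io:3001:2"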


Alright, got it, thanks Alex
