Using Kubernetes and the nessusio Docker image for staking pool setup

I’m using the nessusio cardano-node Docker image. I’m following the GitBook tutorial made by @tomdx and also this tutorial. Right now I’m getting the following output:

[relay1-0:cardano.node.IpSubscription:Info:61] [2021-05-31 14:02:44.92 UTC] IPs: 0.0.0.0:0 [18.159.64.253:3001,172.30.0.131:16443] Connection Attempt End, destination 18.159.64.253:3001 outcome: ConnectSuccess
[relay1-0:cardano.node.IpSubscription:Error:61] [2021-05-31 14:02:45.38 UTC] IPs: 0.0.0.0:0 [18.159.64.253:3001,172.30.0.131:16443] Application Exception: 18.159.64.253:3001 HeaderError (At (Block {blockPointSlot = SlotNo 1598400, blockPointHash = 02b1c561715da9e540411123a6135ee319b02f60b9a11a603d3305556c04329f})) (HeaderProtocolError (HardForkValidationErrFromEra S (Z (WrapValidationErr {unwrapValidationErr = ChainTransitionError [OverlayFailure (VRFKeyBadNonce (Nonce "81e47a19e6b29b0a65b9591762ce5143ed30d0261e5d24a3201752506b20f15c") (SlotNo 1598400) (Nonce "e209c5d22fe51d58485759cc9edca776cb505c1afb2ba5f98f8a6a979a7589e4") (CertifiedVRF {certifiedOutput = OutputVRF {getOutputVRFBytes = "t\231\145\196\165ZhA\137S\209{Z<1\194\225]Yq\235\&7#!\161:\147\129Q\236x\207\195z\170\155\182mw\141\182\135\249\209\178\134\&3_:\167b\135\204\&4\205Z\172\230\163\226\EM\DC2\226\182"}, certifiedProof = CertPraosVRF "!\202\180:L)*\DC2\250\SOH\141V \240Z\EOT\n\183\245\141\SUB\191\ETXQ\"\EOT\155A\SOH'\160LD\253\204Z\249\129/i\178\237p\155\140\240\142\183)LG\137q\248\DLE\DC1\130W\183\162\149\DEL6=[5\225/18\154\192=\242\255\181\f\190\190\t"}))]})))) (Tip (SlotNo 1598359) 1bd5f00688e0dcbd0a33ef7eac0086cf13dc6dff4d9a1292877c2c02f0c77ef6 (BlockNo 1597092)) (Tip (SlotNo 28100545) d1b32f0bdfc4084c4ffd392bc3a41c13f1defcee5b61474f3144928921cca418 (BlockNo 2630903))
[relay1-0:cardano.node.IpSubscription:Info:61] [2021-05-31 14:02:45.38 UTC] IPs: 0.0.0.0:0 [18.159.64.253:3001,172.30.0.131:16443] Closed socket to 18.159.64.253:3001
[relay1-0:cardano.node.ErrorPolicy:Warning:52] [2021-05-31 14:02:45.38 UTC] IP 18.159.64.253:3001 ErrorPolicySuspendPeer (Just (ApplicationExceptionTrace (HeaderError (At (Block {blockPointSlot = SlotNo 1598400, blockPointHash = 02b1c561715da9e540411123a6135ee319b02f60b9a11a603d3305556c04329f})) (HeaderProtocolError (HardForkValidationErrFromEra S (Z (WrapValidationErr {unwrapValidationErr = ChainTransitionError [OverlayFailure (VRFKeyBadNonce (Nonce "81e47a19e6b29b0a65b9591762ce5143ed30d0261e5d24a3201752506b20f15c") (SlotNo 1598400) (Nonce "e209c5d22fe51d58485759cc9edca776cb505c1afb2ba5f98f8a6a979a7589e4") (CertifiedVRF {certifiedOutput = OutputVRF {getOutputVRFBytes = "t\231\145\196\165ZhA\137S\209{Z<1\194\225]Yq\235\&7#!\161:\147\129Q\236x\207\195z\170\155\182mw\141\182\135\249\209\178\134\&3_:\167b\135\204\&4\205Z\172\230\163\226\EM\DC2\226\182"}, certifiedProof = CertPraosVRF "!\202\180:L)*\DC2\250\SOH\141V \240Z\EOT\n\183\245\141\SUB\191\ETXQ\"\EOT\155A\SOH'\160LD\253\204Z\249\129/i\178\237p\155\140\240\142\183)LG\137q\248\DLE\DC1\130W\183\162\149\DEL6=[5\225/18\154\192=\242\255\181\f\190\190\t"}))]})))) (Tip (SlotNo 1598359) 1bd5f00688e0dcbd0a33ef7eac0086cf13dc6dff4d9a1292877c2c02f0c77ef6 (BlockNo 1597092)) (Tip (SlotNo 28100545) d1b32f0bdfc4084c4ffd392bc3a41c13f1defcee5b61474f3144928921cca418 (BlockNo 2630903))))) 200s 200s

When I inspect the environment variables of my two relay nodes, I get this for each one:
relay1

CARDANO_NETWORK=shelley-mode --testnet-magic 1097911063
CARDANO_CONFIG=/config/testnet-config.json
CARDANO_TOPOLOGY=/config/testnet-topology_relay1.json
CARDANO_BIND_ADDR=0.0.0.0
CARDANO_PORT=3001
CARDANO_DATABASE_PATH=/data/db
CARDANO_SOCKET_PATH=/data/node.socket
CARDANO_LOG_DIR=/opt/cardano/logs
CARDANO_PUBLIC_IP=3.94.3.136:30801
CARDANO_CUSTOM_PEERS=172.30.0.131:16443
CARDANO_UPDATE_TOPOLOGY=true
CARDANO_BLOCK_PRODUCER=false

relay2

CARDANO_NETWORK=shelley-mode --testnet-magic 1097911063
CARDANO_CONFIG=/config/testnet-config.json
CARDANO_TOPOLOGY=/config/testnet-topology_relay2.json
CARDANO_BIND_ADDR=0.0.0.0
CARDANO_PORT=3001
CARDANO_DATABASE_PATH=/data/db
CARDANO_SOCKET_PATH=/data/node.socket
CARDANO_LOG_DIR=/opt/cardano/logs
CARDANO_PUBLIC_IP=3.87.137.173:30802
CARDANO_CUSTOM_PEERS=172.30.0.107:16443
CARDANO_UPDATE_TOPOLOGY=true
CARDANO_BLOCK_PRODUCER=false
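
For reference, if I understand the image’s entrypoint correctly, these variables should end up as a cardano-node invocation roughly like this (relay1 values; the exact mapping is my assumption, not taken from the image source):

cardano-node run \
    --config /config/testnet-config.json \
    --topology /config/testnet-topology_relay1.json \
    --database-path /data/db \
    --socket-path /data/node.socket \
    --host-addr 0.0.0.0 \
    --port 3001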

These are the topology files for each one:

relay1

{
    "Producers": [
      {
        "addr": "18.159.64.253",
        "port": 3001,
        "valency": 1
      },
      {
        "addr": "172.30.0.131",
        "port": 16443,
        "valency": 1
      }
    ]
}

relay2

{
    "Producers": [
      {
        "addr": "18.159.64.253",
        "port": 3001,
        "valency": 1
      },
      {
        "addr": "172.30.0.107",
        "port": 16443,
        "valency": 1
      }
    ]
}
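
To double-check what each pod actually sees, I print the mounted files, e.g.:

kubectl exec relay1-0 -- cat /config/testnet-topology_relay1.json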

The value of CARDANO_NETWORK does not seem right. Is this supposed to run on the testnet? In that case, the value should just be testnet.
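
That is, just:

CARDANO_NETWORK=testnet

and let the image derive the matching network magic itself.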


I started out using testnet, but I got the same problem, so I tried this value instead.

Try to run a single relay (without k8s) using the CARDANO_NETWORK=testnet switch. If that works, try to do the same with k8s.
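
For example, something along these lines (container and volume names are only examples):

docker run --detach \
    --name=relay \
    -p 3001:3001 \
    -e CARDANO_NETWORK=testnet \
    -e CARDANO_UPDATE_TOPOLOGY=true \
    -v node-data:/opt/cardano/data \
    nessusio/cardano-node run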


I did that on my local machine. I can try it on my VPS. Perhaps it’s a problem related to ports exposed to the external world. I will try it and report back here.

I’m running only a single node on my AWS machine. I did:

docker run --detach \
    --name=relay \
    -p 3001:3001 \
    -e CARDANO_UPDATE_TOPOLOGY=true \
    -e CARDANO_TOPOLOGY="/var/cardano/config/topology.json" \
    -v node-data:/opt/cardano/data \
    -v $PWD/:/var/cardano/config \
    nessusio/cardano-node:dev run

And this is my topology.json:

{
    "Producers": [
      {
        "addr": "relays-new.cardano-testnet.iohkdev.io",
        "port": 3001,
        "valency": 1
      }
    ]
}

And I got this:

[aaf80fad:cardano.node.ErrorPolicy:Notice:65] [2021-05-31 20:27:24.50 UTC] IP 3.9.80.183:3001 ErrorPolicySuspendConsumer (Just (ConnectionExceptionTrace (SubscriberError {seType = SubscriberParallelConnectionCancelled, seMessage = "Parallel connection cancelled", seStack = []}))) 1s
[aaf80fad:cardano.node.DnsSubscription:Error:120] [2021-05-31 20:27:24.56 UTC] Domain: "relays-new.cardano-testnet.iohkdev.io" Application Exception: 54.151.49.138:3001 HandshakeError (Refused NodeToNodeV_6 "version data mismatch: NodeToNodeVersionData {networkMagic = NetworkMagic {unNetworkMagic = 1097911063}, diffusionMode = InitiatorAndResponderDiffusionMode} /= NodeToNodeVersionData {networkMagic = NetworkMagic {unNetworkMagic = 764824073}, diffusionMode = InitiatorAndResponderDiffusionMode}")
[aaf80fad:cardano.node.ErrorPolicy:Notice:65] [2021-05-31 20:27:24.56 UTC] IP 54.151.49.138:3001 ErrorPolicySuspendConsumer (Just (ApplicationExceptionTrace (HandshakeError (Refused NodeToNodeV_6 "version data mismatch: NodeToNodeVersionData {networkMagic = NetworkMagic {unNetworkMagic = 1097911063}, diffusionMode = InitiatorAndResponderDiffusionMode} /= NodeToNodeVersionData {networkMagic = NetworkMagic {unNetworkMagic = 764824073}, diffusionMode = InitiatorAndResponderDiffusionMode}")))) 200s

I’m using this Ubuntu Linux release:

NAME="Ubuntu"
VERSION="20.04.2 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.2 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal

But if I do

docker run --detach \
    --name=relay \
    -p 3001:3001 \
    -e CARDANO_UPDATE_TOPOLOGY=true \
    -v node-data:/opt/cardano/data \
    nessusio/cardano-node run    

Everything works fine.

EDIT

I forgot to add the CARDANO_NETWORK option for testnet. After adding -e CARDANO_NETWORK=testnet to docker run, I now get:

cardano-node: Wrong NetworkMagic in "/opt/cardano/data/protocolMagicId": NetworkMagic {unNetworkMagic = 764824073}, but expected: NetworkMagic {unNetworkMagic = 1097911063}

Is there a tutorial on how to run this image on testnet? @tomdx?

You cannot mix mainnet with testnet data. Have you tried with a clean volume?
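
For example (using the container and volume names from your command above):

docker stop relay
docker rm relay
docker volume rm node-data   # drops the old mainnet chain data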


I did that, and it works. So, jumping back to k8s, I probably need to delete the old volume data of the pods and then try again with the relay nodes.
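
Something like this, I guess (the StatefulSet and PVC names are just examples; they depend on how my manifests define them):

kubectl scale statefulset relay1 relay2 --replicas=0
kubectl delete pvc data-relay1-0 data-relay2-0
kubectl scale statefulset relay1 relay2 --replicas=1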