Domain: Failed to start all required subscriptions

I have arm64 Ubuntu machines and I want to run a cluster on them. The compatible Docker image I found is the one from nessusio. I deploy it with a .yml manifest on a Kubernetes cluster. When I inspect the logs with microk8s.kubectl logs --tail=500 -f relay2-0, I get:
[relay2-0:cardano.node.DnsSubscription:Warning:57] [2021-05-28 14:09:50.99 UTC] Domain: "relays-new.cardano-testnet.iohkdev.io" Failed to start all required subscriptions
[relay2-0:cardano.node.DnsSubscription:Warning:59] [2021-05-28 14:09:51.29 UTC] Domain: "$BPROD_CLUSTER_IP" Failed to start all required subscriptions

Why is this? The documentation in this GitBook is a little confusing to me, and I don't know how to configure the BPROD_CLUSTER_IP variable.
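In case it helps later readers: cardano-node does not expand shell variables inside topology.json, so a literal "$BPROD_CLUSTER_IP" string reaches the DNS subscription code and fails to resolve as a hostname. A minimal sketch of substituting the variable before the node starts (the file names and the IP value here are illustrative assumptions, not taken from this setup):

```shell
# Assumed value for illustration; in a real cluster it could come from e.g.:
#   microk8s.kubectl get svc cardano-producer1-service -o jsonpath='{.spec.clusterIP}'
BPROD_CLUSTER_IP="10.152.183.42"

# Template with the unexpanded placeholder, as in the ConfigMap
cat > topology.template.json <<'EOF'
{
  "Producers": [
    { "addr": "$BPROD_CLUSTER_IP", "port": 3001, "valency": 1 }
  ]
}
EOF

# Substitute the placeholder (envsubst from gettext would also work)
sed "s/\$BPROD_CLUSTER_IP/${BPROD_CLUSTER_IP}/" topology.template.json > topology.json
cat topology.json
```

The node would then read topology.json with a concrete address instead of the literal variable name.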

Hi!

What is the content of your topology file?

{
  "Producers": [
    {
      "addr": "relays-new.cardano-testnet.iohkdev.io",
      "port": 3001,
      "valency": 2
    },
    {
      "addr": "cardano-producer1-service.default.svc.cluster.local",
      "port": 3001,
      "valency": 1
    }
  ]
}

I have two relay nodes running.

Could you switch from the variable to an actual IP address?


Sorry, I hadn't updated the ConfigMap in Kubernetes; this is the real topology.json that I'm using:

{
  "Producers": [
    {
      "addr": "relays-new.cardano-testnet.iohkdev.io",
      "port": 3001,
      "valency": 2
    },
    {
      "addr": "$BPROD_CLUSTER_IP",
      "port": 3001,
      "valency": 1
    }
  ]
}

Yes - remove the second item from the list:

{
  "Producers": [
    {
      "addr": "relays-new.cardano-testnet.iohkdev.io",
      "port": 3001,
      "valency": 2
    }
  ]
}


It keeps giving me the same problem, but now only for relays-new.cardano-testnet.iohkdev.io.

Hmm - that is probably a bug. Check this topic:

So what you should do is resolve the IP address of relays-new.cardano-testnet.iohkdev.io and use one of the IPs you get:
nslookup relays-new.cardano-testnet.iohkdev.io
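For example, supposing nslookup returned 3.125.75.199 (an illustrative address for this sketch, not a guaranteed current relay IP), the entry could be pinned like this:

```shell
# Hypothetical resolved address; substitute one of the IPs nslookup actually returns
RELAY_IP="3.125.75.199"

# Write a topology.json that pins the IP instead of the DNS name
cat > topology.json <<EOF
{
  "Producers": [
    { "addr": "${RELAY_IP}", "port": 3001, "valency": 2 }
  ]
}
EOF
cat topology.json
```

Note that a pinned IP can go stale if the relay pool behind the DNS name changes, so it is a workaround rather than a permanent fix.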


OK, it works!

Great! It would be useful to comment on GitHub that you had the same issue…