Problem connecting relay and core node in AWS testnet setup

I am following the Cardano Stake Pool School stake pool course.

In lesson 4 I am stuck on creating the topology, with the core node only connecting to the relay node, and the relay connecting to the core node plus other relay nodes.

They both extend the chain when they connect to the outside world (the IOHK testnet node), but they don’t talk to each other. Any ideas what I am doing wrong (it might be something really basic…)?

A few more details:

  • Relay node and core node both run on AWS ec2 and in the same VPC
  • I have tried using both the public and the private IPv4 addresses. Port is 3000 and valency 2
  • I am getting an error message when running the node that perhaps contains relevant information: [ip-172-3:cardano.node.ErrorPolicy:Notice:134] [2021-03-20 14:16:45.19 UTC] IP XXXX:3000 ErrorPolicySuspendConsumer (Just (ConnectionExceptionTrace Network.Socket.connect: <socket: 42>: does not exist (Connection refused)))
  • Here XXXX is the IP address of the relay node (I get the same message on both nodes, with the other node’s IP address of course)
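A note on the log line above: “Connection refused” means the packets reached the other machine but nothing was listening on that port, so it is worth verifying which port cardano-node is actually bound to. A quick check, assuming `ss` from iproute2 is available:

```shell
# On the relay: list listening TCP sockets.
# cardano-node should show up bound to the port given via --port;
# if 3000 is missing from this list, the node is listening elsewhere.
ss -tln
```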

Perhaps there is something in my AWS setup I need to change?


Check whether the ports are opened in the firewall to allow incoming connections… but if the node doesn’t start, it could be an issue with outgoing connections (check the topology file).
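On the instance itself, the host firewall also has to allow the port. A minimal sketch, assuming ufw is the firewall in use and 3000 is the node port from this thread:

```shell
# Allow incoming connections to the node port and confirm the rule.
sudo ufw allow 3000/tcp
sudo ufw status verbose
```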



Hi Alex

Thanks for your response.

I think you are right that it might be a firewall or routing issue. The connection to outside nodes seems to work fine; it is just the connection between the relay and core node that fails, in both directions.

I am new to AWS but I have tried to make the firewall setup as open as possible (first get the connection - then worry about security), but it does not help.

The two nodes are in the same subnet of the same Virtual Private Cloud and should therefore connect via private IPv4, but perhaps this kind of connection is not enabled for the chain’s topology. On the other hand, I have deliberately tried to keep it vanilla and follow the instructions closely, so my setup should be pretty standard.
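For what it’s worth: even between instances in the same VPC and subnet, traffic still passes through each instance’s security group, so the node port must be allowed there as well. A sketch with the AWS CLI; the group ID and CIDR below are placeholders for your own values:

```shell
# Allow TCP 3000 from the VPC's private range into the instance's
# security group (placeholder group ID and CIDR; substitute your own).
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 3000 \
  --cidr 172.31.0.0/16
```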

Any other ideas I could try out?

Can you share the topology file for your producer?


{
  "Producers": [
    {
      "addr": "XX.XX.XX.XX",
      "port": 3000,
      "valency": 2
    },
    {
      "addr": "",
      "port": 3001,
      "valency": 2
    }
  ]
}

(XX.XX.XX.XX is the private IPv4 address of the relay.)

I have also tried with the public IPv4 address instead of the private address.
I also connect to the IOHK testnet node to keep the chain updated; the plan is to remove it once the relay/core connection works. If I run without it, there is no connection.

Question: did you start your relay with cardano-node on port 3000?

Try this from the BP:

telnet <relay_private_ip> 3000

Good question!

I have started it on 3001. I will change it to 3000 and retry.
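For anyone hitting the same thing: the port a node listens on is whatever was passed to `--port` when starting it, and that value has to match the `port` field in the other node’s topology file. A sketch of a relay start command; the file paths are placeholders from a typical course setup:

```shell
# Start the relay listening on 3000 so the BP's topology entry matches.
cardano-node run \
  --topology relay-topology.json \
  --database-path db-relay \
  --socket-path db-relay/node.socket \
  --host-addr 0.0.0.0 \
  --port 3000 \
  --config testnet-config.json
```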

:crazy_face::crazy_face: Ok, then that was the problem.
Alternatively, you can change the port in the Producer’s topology from 3000 to 3001.

I think it was :sweat_smile:

Not getting error messages anymore.

Damn, I have spent a good amount of time on this but didn’t even think in that direction; I created the start commands a while ago. Your question made it clear immediately. Thanks a lot for your help!! :muscle: :muscle:


You are welcome, anytime!

And if anyone is experiencing similar issues on Google Cloud Platform: the ports are blocked there by default.

So even if you set up ufw to allow a specific port, it will still be blocked at the Google VM level. Create a VPC firewall rule to open whatever ports you need.
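A sketch of such a rule with the gcloud CLI; the rule name and source range are placeholders:

```shell
# Open TCP 3000 for ingress (placeholder rule name and source range).
gcloud compute firewall-rules create allow-cardano-node \
  --direction=INGRESS \
  --allow=tcp:3000 \
  --source-ranges=0.0.0.0/0
```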
