I have a core and relay setup in which the core sits at home and the relay sits on a cloud instance (AWS).
For the core node at home I could not get it to connect to the cloud relay and vice versa.
The topology files on both were pointing at the correct external IPs, with the correct internal ports that the nodes run on.
In other words, the core node’s topology includes the ‘external IP of relay’ and ‘internal port of relay’, and likewise my relay’s topology contains the ‘external IP of the core’ and the ‘internal port of the core’.
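For reference, a topology file in the legacy flat format with a single producer entry looks roughly like this - the address and port below are placeholders, not my real values, and valency stays 1 for a single fixed address:

```json
{
  "Producers": [
    {
      "addr": "203.0.113.25",
      "port": 3001,
      "valency": 1
    }
  ]
}
```

The core’s file points at the relay this way, and the relay’s file points back at the core (plus whatever public relays it peers with).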
I tried this setup and got errors on both ends - the relay could not communicate with the core and vice versa. It wasn’t a topology issue.
Then I looked at the commands I was running to start up the core and relays.
For the core I had the following:
--host-addr ‘external IP of core node’
--port ‘internal port of core node’
and for the relay I had…
--host-addr ‘external IP of relay node’
--port ‘internal port of relay node’
Here ‘external IP’ means the IP the public sees on the public internet, and ‘internal port’ means the port open on the actual or virtual machine to receive traffic from the internet - on my local private network in the case of the core node, and on the cloud instance’s network in the case of my relay.
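Put together, the (non-working) core start-up looked roughly like this. The file paths, IP, and port are placeholders standing in for my actual values:

```shell
# 198.51.100.7 stands in for the core's EXTERNAL (public) IP - this is what failed
cardano-node run \
  --topology mainnet-topology.json \
  --database-path db \
  --socket-path db/node.socket \
  --config mainnet-config.json \
  --host-addr 198.51.100.7 \
  --port 3000
```

The relay’s start-up was the same shape, with its own external IP and port.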
This setup did not work for me at all: each instance could see the other, but could not connect to it.
When I looked at the cardano-node run command, the --host-addr flag says it takes a ‘hostname’, so I used the hostname of my internal core node. That did not work because the name could not be resolved to an IP - I guess because I’m not running an internal DNS server on my LAN, so nothing can map my machine’s ‘hostname’ to the IP it has been assigned (assuming DNS servers can even resolve bare hostnames that lack a suffix like .com, .io, or .org).
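A quick way to check whether a name will resolve at all is to ask the system resolver directly - getent consults the same lookup path (/etc/hosts, then DNS) that most programs use:

```shell
# Does this machine's hostname map to an address anywhere the resolver can see?
getent hosts "$(hostname)" || echo "no address found for $(hostname)"
```

If that prints nothing but the fallback message, --host-addr with that hostname will fail the same way.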
Anyway, since the --host-addr flag takes a hostname, and my hostname is basically just the name of the core node, I used the core node’s internal IP instead. So my core node start-up command now looks like this:
--host-addr ‘internal IP of core node’
--port ‘internal port of core node’
where the only change made to the core start up command is highlighted above.
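As far as I can tell, the reason this works is that --host-addr is the address the node binds its listening socket to, and a machine generally can’t bind to an IP that isn’t assigned to one of its own interfaces - behind NAT, the external IP never is. A sketch of the working command, again with placeholder values:

```shell
# 192.168.1.50 stands in for the core's INTERNAL LAN IP - an address
# the machine actually owns, so the bind succeeds
cardano-node run \
  --topology mainnet-topology.json \
  --database-path db \
  --socket-path db/node.socket \
  --config mainnet-config.json \
  --host-addr 192.168.1.50 \
  --port 3000
```

I believe --host-addr 0.0.0.0 (bind all interfaces) would also work, though I haven’t tried it myself.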
I did the same for the relay start-up command, changing its --host-addr flag to the internal IP of the relay.
Now my core and relay work flawlessly, communicating with one another without any issues.
The one question I have is if I use my internal IPs, does that make the nodes more vulnerable to attack since they will likely be publicly visible?
Anyway, my router at home is weird. It only has a firewall for IPv6, not IPv4, and the IPv6 firewall is enabled by default even though the IPv6 service itself is disabled by default. I know it’s not an IPv4 firewall because it won’t accept rules that use IPv4 addresses.
Instead, the router has port forwarding for IPv4 but not for IPv6, so I forward incoming traffic from the relay’s public IP address to the core node’s internal address on port 3000. I don’t know if this makes a difference - i.e., if I remove the port forwarding to my core, will it still work using the internal IP in the start-up command? I didn’t test it.
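One way to test that without touching the node would be a plain TCP probe from the relay side; if the probe stops connecting after the forwarding rule is removed, the rule is doing the work. A hypothetical check (the IP is a placeholder for the home router’s public address):

```shell
# From the relay: does anything answer on the forwarded port?
nc -vz 203.0.113.10 3000
```

My guess is it will stop working - the port forward is what maps traffic arriving at the router’s public IP onto the core’s internal IP.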
As far as the AWS relay instance, no port-forwarding was used there. I just set up the firewall to allow specific traffic to pass through from various relays to my relay as well as traffic from my core node.
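For the record, security-group rules like the ones I added can also be created from the AWS CLI - the group ID, port, and source address below are placeholders for your own values:

```shell
# Allow one specific peer (e.g. my core's public IP) to reach the relay's port
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 3001 \
  --cidr 203.0.113.10/32
```

The /32 restricts the rule to that single source address rather than a whole range.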
The only difference in this setup is that I’m using internal IPs instead of external IPs in the start-up commands, and that made all the difference.