Changed relay IP mid-epoch, what are my options?

So I switched the IPs on my relays (I wanted to make them elastic) and made the mistake of pushing up that change… meaning it looks like my BP won't recognize my relays for 2 more epochs. Is there anything I can do to get around this? And in the event that I ever have to change my IPs in the future, how would I get around this?

Hi,

Changing the IP will not affect the static connection configuration.

So it should work, as long as the new IPs are updated in the BP topology file and in the firewall rules.


Okay, so you are saying I should be good then?
Also, any idea why telnet from my relay to the BP works when I use the BP's private IP, but the connection times out when I use the elastic public IP? When I run gLiveView everything looks good, but when I manually check connections between relay and BP, the connection doesn't appear to be happening. (I am using AWS security groups for this.)

Confirmed: I tried the Reachability Analyzer on AWS, and between relay and producer it only works for the private IP, not the elastic IP.
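The manual telnet checks described above can also be scripted, so both paths are tested the same way. A minimal sketch in Python, where the addresses 10.0.1.10 (BP private IP), 203.0.113.25 (BP elastic IP), and port 6000 are hypothetical placeholders for your own values:

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical values -- substitute your BP's private IP, its elastic IP,
# and the cardano-node port you actually use.
checks = {"private": ("10.0.1.10", 6000), "elastic": ("203.0.113.25", 6000)}
for label, (host, port) in checks.items():
    status = "reachable" if tcp_reachable(host, port) else "NOT reachable"
    print(f"{label} {host}:{port} {status}")
```

If the private check passes and the elastic one fails, the node itself is fine and the problem is somewhere on the public path (security group, NACL, or routing).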


Are the nodes connected or not?

Perhaps you will need port forwarding (PF) on your local router? The elastic IP goes over the public internet, and if the port is not opened on the firewall then it is normal for it not to work.


Why port forwarding on the local router? I can SSH in to the elastic IP no problem; it's just the communication between the relay and producer. The security groups seem fine, because I am getting the proper behavior between relay and producer on the private IP; it's just something with the elastic IP. I don't think I have a firewall on the OS, as my iptables rules look empty.


How do I tell? Not sure what you are asking.

Alternatively, you can specify the Elastic IP address in a DNS record for your domain, so that your domain points to your instance
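For example, a topology entry could then reference the DNS name instead of a raw IP. A sketch, assuming a hypothetical domain relay1.example.com with an A record pointing at the relay's elastic IP, and a hypothetical port 6000:

```json
{
  "addr": "relay1.example.com",
  "port": 6000,
  "valency": 1
}
```

The advantage is that a future IP change only needs a DNS update, rather than editing every topology file that references the node.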

Did you try with DNS?

So how is my producer running if it isn’t communicating with my relays?

I haven't tried DNS; would I just use the AWS-provided DNS name to test?

You should try it as a test if it's free, but let me understand… your nodes are on AWS, and they are on the same network?

Same aws account, same region, different instances.

security group 1: allows ssh connection
security group 2: allows all tcp to relay port
security group 3: custom tcp to relay port from security group 1

relays: security group 1 and 2
producer: security group 1 and 3

So weird to me that these security groups work fine for the private IP but not the public elastic IP.

Did you contact the cloud support team? Perhaps additional setup is required? Each server has a different elastic IP assigned, right?

I haven't yet; I was really hoping it was something dumb I did. Yes, a different elastic IP for each server. I even tried a new one on the producer and changing the port in the security group.

Ok, then… check the SSH port for each server from here.

Do you see it open?

So that is saying they are closed.

However, the port I have designated to be open on my relays does show open.

So the cnode port shows open on both relays, right? And they have IN peers, right?

Your topology should look like:

  • BP to relays, and relays to BP: use the private network
  • Relays to other public nodes, and public nodes to relays: use public IPs (elastic IPs)
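Applied to the BP's topology file, that separation could look like this sketch, with hypothetical private addresses 10.0.1.11 and 10.0.1.12 for the two relays and a hypothetical port 6000:

```json
{
  "Producers": [
    { "addr": "10.0.1.11", "port": 6000, "valency": 1 },
    { "addr": "10.0.1.12", "port": 6000, "valency": 1 }
  ]
}
```

Only the relays' topology files would then carry public addresses, for their connections out to the rest of the network.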

Both relays show open. I'm still very new to this, so not sure if they have IN peers. All topologies are using the public elastic IPs.

BP topology:
{
  "Producers": [
    {
      "addr": RELAY2_IP,
      "port": RELAY1_PORT,
      "valency": 1
    },
    {
      "addr": RELAY2_IP,
      "port": RELAY2_PORT,  (same as relay 1)
      "valency": 1
    }
  ]
}
relay topologies:
{
  "Producers": [
    {
      "addr": PRODUCER_IP,
      "port": PRODUCER_PORT,
      "valency": 1
    },
    {
      "addr": "relays-new.cardano-mainnet.iohk.io",
      "port": 3001,
      "valency": 2
    }
  ]
}