Relay's incoming connections

Hi lovely community!

My relay is working fine. I have set my topology file to connect to 15 peers, and I can confirm through gLiveView that my relay connects to those 15 peers.
That said, I'm trying to understand why I have 62 incoming connections, many from duplicated IPs.

Thanks for sharing your thoughts around this :wink:
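A quick way to count incoming connections per source IP is something like this (a sketch assuming the relay listens on the default port 3001; adjust for your own setup):

```shell
# List established connections to the relay port (assumed 3001 here)
# and count how many come from each remote IP.
ss -tn state established '( sport = :3001 )' \
  | awk 'NR > 1 { split($4, a, ":"); print a[1] }' \
  | sort | uniq -c | sort -rn
```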


It could happen if old connections remained “hanging” while new connections were initialized… try to restart the node; do you get the same result?

Also, you can limit the incoming connections and block multiple connections from the same source.


Hi @Alexd1985
Thanks for your quick reply.
A restart didn’t change anything, and I’m left with more or less the same number of incoming connections.
I took a quick look at the incoming IPs:
A few of them correspond to my previous topology,
but a big part of them are not known to me.

I’ll play with iptables to limit the incoming connections.

Should incoming connections be only IPs from my topology ?



The topology file is for OUT peers, not for IN peers.

It is OK; I have around 30 IN peers.


Yes, I completely agree it is for out peers.
Don’t the peers in my topology also have to connect to my relays?
What is the use of the incoming connections?



I keep my relays connected; it’s not mandatory, but I do.

Incoming connections are what make the network a full mesh… you are connecting to other nodes, and other nodes are connecting to you.

How do you limit incoming connections, and what is the suggested number of incoming connections?


I have ~50 IN peers and I don’t see any issue… it’s better for block propagation, so I will not try to limit them as long as I don’t see any issues.

Do you have issues caused by IN peers?


I don’t think you can via any configuration in mainnet-config.json. (@Alexd1985 please correct me if I am wrong about that.)

I don’t think you will see issues as incoming peers increase either, due to the “pull”-based design of Cardano’s network stack. I believe (this is only my supposition) that the networking protocol will prioritize processing the data it requested over delivering data to others. Thus peers will wait on their requests until, if and when, your node gets around to answering. If your node doesn’t have time to answer, it won’t get around to it and won’t waste resources on the extra incoming peers.

External peers can then decide whether they might be better off pulling data from another node instead of yours if yours is too overloaded. This seems to be the basis for how the new P2P design works, with metrics kept about which peers are quickest at delivering the most recent blocks.

Currently, node operators change their mainnet-topology.json file to change peers, and they don’t have any visibility into which peers would be best for them to pick. P2P mode will automate this and will use your statistics to continually pick the best peers.

You can’t limit this from the configuration file; you must use iptables, but be careful not to block your PRODUCER.
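For reference, a minimal sketch along these lines (port 3001 and the producer IP 10.0.0.2 are placeholders; substitute your own values, and test before persisting the rules):

```shell
# Always accept your block producer first, so later limits never cut it off
# (10.0.0.2 and port 3001 are example values; replace with your own).
iptables -A INPUT -p tcp -s 10.0.0.2 --dport 3001 -j ACCEPT

# Reject new connections from any single source IP that already has
# more than 2 connections to the node port.
iptables -A INPUT -p tcp --syn --dport 3001 \
  -m connlimit --connlimit-above 2 -j REJECT --reject-with tcp-reset

# Cap the total number of concurrent incoming connections to the node
# port at 50 (--connlimit-mask 0 counts all sources together).
iptables -A INPUT -p tcp --syn --dport 3001 \
  -m connlimit --connlimit-above 50 --connlimit-mask 0 -j DROP
```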

more details here



nftables is even better for this sort of thing. You can use sets and add members dynamically.
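A hedged sketch of that idea (the table name "cardano", set name "relay_in", and port 3001 are made up for illustration; nftables can track per-IP connection counts in a dynamic set):

```shell
# Create a table, an input chain, and a dynamic set keyed on source IP
# (names "cardano" and "relay_in" are illustrative; port 3001 assumed).
nft add table inet cardano
nft add chain inet cardano input '{ type filter hook input priority 0; }'
nft add set inet cardano relay_in '{ type ipv4_addr; flags dynamic; }'

# Reject new connections from any source that already has more than 2
# connections to the relay port, tracking counts in the set.
nft add rule inet cardano input tcp dport 3001 ct state new \
    add @relay_in '{ ip saddr ct count over 2 }' reject with tcp reset
```

Because the set is dynamic, you can also inspect it at runtime with `nft list set inet cardano relay_in` to see which sources are currently tracked.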