Unpopular relay

Hey guys,
I’ve got a weird one here for you.
So on mainnet I’ve got 2 relays, both working fine, i.e. Processed TX increasing, OUT and IN peers present, topology_updater running every hour with ‘glad you’re staying with us’ in the logs, etc…

However, relay1, located in the US, has 20+ IN peers while relay2 in the UK has only 2!

Any ideas as to why relay2 might be so unpopular?

Kind regards,
Thomas

If everything is working fine, I believe it’s just random. In a month it may be exactly the opposite.
Have you been running them both for the same amount of time, or did the second one start later?

Yes, they’ve both been running for about 2 months. Relay1 very quickly had 20+ IN peers, while relay2 never went beyond 3.
My concern is that the more IN connections you have, the faster your blocks will be propagated, and relay2 is not optimized in that respect!

I am pretty sure it won’t be an issue but keep watching it.
I have 4 relays and it’s always changing, one week 2 of them have 20 IN and the other 2 have 5 and next week it’s the opposite.
Since the topology pull picks closer relays first, I suppose many more pools run in the US than in the UK.

How are the peers for my topology file selected?

We calculate the distance on the Earth’s surface from your node’s IP to all subscribed peers. We then order the peers by distance (closest first) and start by selecting one peer. We then skip some, pick the next, skip, pick, skip, pick … until we reach the end of the list (furthest away). The number of skipped records is calculated so that we end up with the desired number of peers.

Every requesting node has its personal distance to all other nodes.

We assume this should result in a well-distributed and interconnected peering network.
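
For those curious, here is a rough sketch of that skip/pick selection in shell. This is only an illustration of the description above, not the actual api.clio.one code; peers.txt and the desired count are made-up inputs.

# peers.txt: one subscribed peer per line, assumed to be already sorted by
# distance from the requesting node (closest first)
desired=15
total=$(wc -l < peers.txt)

# step size chosen so that picking every step-th record spreads the picks
# across the whole list and yields roughly the desired number of peers
step=$(( (total + desired - 1) / desired ))

# pick the 1st record, then every step-th record after it, capped at desired
awk -v step="$step" '(NR - 1) % step == 0' peers.txt | head -n "$desired"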


Cheers mate, thanks for sharing. I may add more relays soon; I don’t like the idea of having only one relay which is really well connected to the rest of the network… not that I’m close to producing my first block :slight_smile: but still! I suppose this will become a moot point when we move to P2P anyway.


You probably need to run relay-topology_pull.sh from time to time and restart the relays. When did you last run it?

To add to this, my relay, which is located in Germany, has peers from around the world (US, TH, GB, BE, IT, etc.).

Maybe you set up your relay-topology_pull.sh so it pulls a maximum of only 2-3 peers? There is a setting to configure how many peers you want to pull. I’m not sure, though, which script you are using.

Hello mate, thanks for the reply.
It is my understanding that the number of peers returned by the api.clio.one API is 15 by default. I’ve increased it to 20 on both relays, and I restarted both relays once I had downloaded the latest topology file.
Where I’m not following you is that my issue is with IN peers, not OUT peers. I’ve got excellent OUT peers; those are driven by the topology.json file.
IN peers are driven by other relays being given my own relays’ IPs when they pull a new topology file using the API.
So my second relay being unpopular must have to do with the way the API works out the proposed topology file (see the extract that Xpriens added above on the topic).
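
For reference, in the guild-operators style topologyUpdater the pull is, as far as I know, just a curl against the api.clio.one fetch endpoint with a max parameter. Something along these lines, where the output path, max value and network magic are only examples and may not match your setup:

# check your own relay-topology_pull.sh for the exact endpoint and variables it uses
curl -s -o "${CNODE_HOME}/files/topology.json" \
  "https://api.clio.one/htopology/v1/fetch/?max=20&magic=764824073"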

So I think my relay2 gets skipped a lot :slight_smile: for whatever reason; it would be interesting to hear from SPOs with relays in the UK.

OK, understood. Just out of curiosity, where do you see the statistics for IN peers?

Can you check with netstat how many incoming connections you have? Something like this:

netstat -n -a | grep 3001

I have like 10-15 incoming connections.
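
Note that the grep above matches both directions. To count only established inbound connections, you can filter on the local port instead; this assumes the relay listens on 3001, so adjust if yours differs:

# established connections where the LOCAL side is port 3001, i.e. inbound peers
netstat -tn | awk '$6 == "ESTABLISHED" && $4 ~ /:3001$/' | wc -l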

I get the stats from gLiveView.
netstat shows 36+ connections on relay1 and only 3 on relay2… just as shown by gLiveView actually :slight_smile:

Out of curiosity, which country are those IPs on relay2 located in?

You can check this with:

https://ipinfo.io/10.10.10.10

Just replace 10.10.10.10 with the correct IP.
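
The same lookup also works from the command line if that is easier; 10.10.10.10 is the same placeholder as above:

# returns the city, region and country for the given address
curl -s https://ipinfo.io/10.10.10.10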

I ran into the same issue, and recently solved it. My issue was related to the upgrade to 1.30.1. Here is a summary:

I had recently performed two separate modifications to the offending node.

  1. I switched from a Coincashew node to a CNTools node (using the same VM instance, which turned out to be a small mistake)
  2. Upgraded cardano-node and cardano-cli from 1.29 to 1.30.1.

What I discovered was that the old 1.29 binary from the Coincashew build was executing when I started my new CNTools relay, BUT when I checked the version with “cardano-cli version” and “cardano-node version” it said 1.30.1, and when I ran gLiveView it also said I was running 1.30.1. The way I discovered it was actually running the 1.29 binary is that “journalctl -f -u cnode-logmonitor.service” told me that CNTools had been upgraded to 1.30 but the node was still on 1.29. (I really wish I had documented this error, sorry.)

Solution: stop the node, delete the outdated 1.29 binary in /usr/local/bin, remove “/usr/local/bin” from $PATH in “/etc/environment”, delete the DB, resynchronize… wait… and wait… fixed.

** This solution will only be applicable if you also switched from Coincashew to CNTools.
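
If anyone wants to check for the same problem before going as far as a resync, a quick way to see which binaries the shell actually resolves is the standard bash type builtin (nothing specific to either guide):

# lists every cardano-node / cardano-cli found on $PATH, in resolution order;
# if the first hit is a stale copy in /usr/local/bin, that is the one being run
type -a cardano-node cardano-cli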

Hope this helps.
B

In the UK.

Thanks for the reply mate.
I don’t think this would apply to me as I have built my nodes with my own scripts, except for gLiveView and TopoUpdater.

I will add your UK relay to my relays’ peers. Let’s see what happens.


51.105.18.17 port 3001
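
For anyone wanting to do the same by hand, an entry for this address in the Producers list of the legacy (pre-P2P) topology.json is enough. The jq one-liner below is just one way to add it; it assumes jq is installed and that you back up the file first:

# append the posted relay to the Producers list (valency 1 for a single IP)
jq '.Producers += [{"addr":"51.105.18.17","port":3001,"valency":1}]' \
  topology.json > topology.json.new && mv topology.json.new topology.json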

Thx, but I have it from pooltools. :slight_smile:

Sure, just wanted to save you some time :slight_smile:


Seems I’m connected to your relay from one of my relays located in France. I will monitor the logs a bit for errors…

Everything seems to work

{"log":"\u001b[34m[relay1:cardano.node.IpSubscription:Info:518]\u001b[0m [2021-10-25 19:35:51.72 UTC] IPs: 
...
...
Connection Attempt End, destination 51.105.18.17:3001 outcome: ConnectSuccess\n","stream":"stdout","time":"2021-10-25T19:35:51.727595738Z"}
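
For anyone wanting to run the same check, grepping your node’s logs for the target address is enough to see whether the outbound connection attempts succeed. On a systemd setup that could look like the line below; the service name is only an example, and a Docker setup would use docker logs instead:

# “cnode” is a placeholder unit name, adjust to your own setup
journalctl -u cnode --since "1 hour ago" | grep "51.105.18.17"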

I will leave it as it is for a couple of days, then we can recheck.

Cheers mate, I can see your connection coming from 188.165.218.26 (Roubaix, FR).
Does that mean we could be speaking French? :slight_smile:
