I have one of my relays running in P2P mode on mainnet, and it seems to be performing better than my other relays in normal (“direct connection”) mode.
I added the following to my mainnet-config.json:
"TestEnableDevelopmentNetworkProtocols": true,
"EnableP2P": true,
"MaxConcurrencyBulkSync": 2,
"MaxConcurrencyDeadline": 4,
"TargetNumberOfRootPeers": 100,
"TargetNumberOfKnownPeers": 100,
"TargetNumberOfEstablishedPeers": 50,
"TargetNumberOfActivePeers": 20,
I also configured my mainnet-topology.json:
{
  "LocalRoots": {
    "groups": [
      {
        "localRoots": {
          "accessPoints": [
            {
              "address": "relay2",
              "port": 3000
            },
            {
              "address": "relay3",
              "port": 3000
            },
            {
              "address": "blockproducer",
              "port": 3000
            }
          ],
          "advertise": false
        },
        "valency": 3
      }
    ]
  },
  "PublicRoots": [
    {
      "publicRoots": {
        "accessPoints": [
          {
            "address": "relays-new.cardano-mainnet.iohk.io",
            "port": 3001
          }
        ],
        "advertise": false
      },
      "valency": 2
    }
  ],
  "useLedgerAfterSlot": 0
}
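After editing the topology file, I like to confirm that every access point still resolves from the relay. A rough sketch of that check (the local names here are just hostnames from my own hosts file / internal DNS):

import json
import socket

# Rough check that every access point in the topology resolves from this host.
with open("mainnet-topology.json") as f:
    topo = json.load(f)

points = []
for group in topo["LocalRoots"]["groups"]:
    points += group["localRoots"]["accessPoints"]
for group in topo["PublicRoots"]:
    points += group["publicRoots"]["accessPoints"]

for p in points:
    try:
        socket.getaddrinfo(p["address"], p["port"])
        print("resolves:  ", p["address"], p["port"])
    except socket.gaierror as err:
        print("UNRESOLVED:", p["address"], p["port"], err)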
The relay has logs like:
TrConnectionManagerCounters (ConnectionManagerCounters {fullDuplexConns = 0, duplexConns = 0, unidirectionalConns = 82, inboundConns = 32, outboundConns = 50})
TrInboundGovernorCounters (InboundGovernorCounters {coldPeersRemote = 0, idlePeersRemote = 1, warmPeersRemote = 0, hotPeersRemote = 31})
TrInboundGovernorCounters (InboundGovernorCounters {coldPeersRemote = 0, idlePeersRemote = 1, warmPeersRemote = 0, hotPeersRemote = 31})
TrConnectionManagerCounters (ConnectionManagerCounters {fullDuplexConns = 0, duplexConns = 0, unidirectionalConns = 82, inboundConns = 32, outboundConns = 50})
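To keep an eye on these counters over time, the lines are easy enough to scrape (a rough sketch; it just pulls the name = number pairs out of the trace lines shown above):

import re
import sys

# Pull the "name = number" pairs out of the counter trace lines shown above.
FIELD = re.compile(r"(\w+) = (\d+)")

for line in sys.stdin:
    if "ConnectionManagerCounters" in line or "InboundGovernorCounters" in line:
        counters = {name: int(value) for name, value in FIELD.findall(line)}
        print(counters)

Piping the node's log output through it prints one dictionary per counter line, which is enough to chart hot/warm peer counts over time.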
I monitor block receipt delay with a script on each relay, and the relay running in P2P mode appears to get blocks a little quicker than my other relays, which each have 20-24 directly configured peers fetched from “api.clio.one”.
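For reference, the delay figures are the gap between a slot's nominal start time and the wall-clock time the block arrived. A minimal sketch of that calculation, assuming mainnet's Shelley-era parameters (1-second slots, with absolute slot 4492800 beginning at Unix time 1596059091):

# Block receipt delay = wall-clock arrival time minus the slot's nominal start time.
SHELLEY_START_SLOT = 4_492_800
SHELLEY_START_TIME = 1_596_059_091  # 2020-07-29 21:44:51 UTC

def slot_start(slot: int) -> int:
    """Nominal start of a Shelley-era mainnet slot, as a Unix timestamp."""
    return SHELLEY_START_TIME + (slot - SHELLEY_START_SLOT)

def delay_ms(slot: int, arrival_unix: float) -> int:
    """Milliseconds between the slot's nominal start and the block's arrival."""
    return round((arrival_unix - slot_start(slot)) * 1000)

# Example: a block for slot 50158044 seen 1.47 s after the slot opened.
print(delay_ms(50_158_044, slot_start(50_158_044) + 1.47))  # -> 1470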
Here is a comparison of block receipt delays. The “P2P relay” column is the relay running P2P; the “non-P2P relay” column is a relay in normal mode with 24 directly configured outgoing peers and, at the moment, 12 incoming peers.
slot      P2P relay  non-P2P relay
50158044  1470ms     1510ms
50158061  750ms      780ms
50158063  560ms      550ms
50158079  1090ms     1200ms
50158146  1750ms     860ms
50158151  1820ms     1860ms
50158162  1060ms     1140ms
50158175  1060ms     1120ms
50158192  1410ms     1260ms
50158194  500ms      510ms
50158195  520ms      520ms
50158224  1540ms     1670ms
50158248  950ms      1060ms
50158260  890ms      960ms
50158274  1080ms     1090ms
50158382  1290ms     1530ms
50158393  1250ms     1410ms
50158452  950ms      1110ms
50158459  1300ms     1160ms
50158469  1170ms     1180ms
50158499  790ms      1030ms
In this list of block receipt delays there are 16 slots where the P2P relay was quicker, only 4 where it was slower, and 1 where the two tied.
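That tally can be checked directly from the numbers above:

# (P2P delay, non-P2P delay) pairs in ms, copied from the table above.
pairs = [
    (1470, 1510), (750, 780), (560, 550), (1090, 1200), (1750, 860),
    (1820, 1860), (1060, 1140), (1060, 1120), (1410, 1260), (500, 510),
    (520, 520), (1540, 1670), (950, 1060), (890, 960), (1080, 1090),
    (1290, 1530), (1250, 1410), (950, 1110), (1300, 1160), (1170, 1180),
    (790, 1030),
]
quicker = sum(p2p < other for p2p, other in pairs)
slower = sum(p2p > other for p2p, other in pairs)
ties = len(pairs) - quicker - slower
print(quicker, slower, ties)  # -> 16 quicker, 4 slower, 1 tie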
Is anyone else running 1.33.0 in P2P mode on mainnet?
What experience do others have running P2P on testnet?
Is there something else that should be configured?