Problem with increasing blocksize or processing requirements

Yes, there are problems with increasing block size.

The network propagation delay of blocks is tied to the way TCP operates during “slow start”. During this mode the amount of data in flight doubles every round trip.
The table below is a guesstimate of the number of round trips needed for different block sizes, assuming an initial congestion window of 10 segments and a maximum segment size payload of 1460 bytes.

Block size (bytes)    Round trips
0 … 14600             1
14601 … 29200         2
29201 … 58400         3
58401 … 90112         4
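The estimate behind the table can be sketched in a few lines of shell. The function name is mine; it follows the simplifying assumption in the post that a block can consume as many bytes per round trip as the congestion window at that round trip allows:

```shell
# round_trips BLOCK_SIZE -> prints the estimated slow-start round trips
round_trips() {
    capacity=$((10 * 1460))   # initcwnd of 10 segments * 1460-byte MSS payload
    trips=1
    size=$1
    while [ "$size" -gt "$capacity" ]; do
        capacity=$((capacity * 2))   # cwnd doubles each round trip in slow start
        trips=$((trips + 1))
    done
    echo "$trips"
}
```

For example, `round_trips 90112` prints 4, matching the last row of the table, and the estimate first exceeds 4 round trips above 116800 bytes.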

The guard rails in the draft constitution have identified this problem:
“MBBS-06 (x - “should”) The block size should not induce an additional Transmission Control Protocol (TCP) round trip. Any increase beyond this must be backed by performance analysis, simulation and benchmarking” (from draft-constitution/2024-12-05/draft-constitution-converted.md in the IntersectMBO/draft-constitution repository on GitHub). Using the previous assumptions, the next round trip would be needed above 116800 bytes.

The round-trip time itself is limited by the speed of light; on a good, uncongested network connection it is what it is. However, you can control the number of round trips a block requires by tweaking a few system options.

By default, if a connection remains unused for a short period of time, the congestion window shrinks back to its initial value of 3 or 10 segments. You can disable this with:
sudo sysctl -w net.ipv4.tcp_slow_start_after_idle=0
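Note that `sysctl -w` only changes the running kernel. To check the current value and make the setting survive a reboot, something like the following works (the drop-in file name is my choice; any file under /etc/sysctl.d/ will do):

```shell
# Check the current value (1 = shrink the window after idle, 0 = keep it)
sysctl net.ipv4.tcp_slow_start_after_idle

# Persist the setting across reboots via a sysctl drop-in file
echo "net.ipv4.tcp_slow_start_after_idle = 0" | sudo tee /etc/sysctl.d/90-tcp-no-idle-shrink.conf

# Reload all sysctl configuration
sudo sysctl --system
```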
This means that if you have already built up a large congestion window to, for example, your relays in the USA, you may be able to send an entire block in one round-trip time.

You can also increase the initial congestion window for sockets with:
ip route change default ... initcwnd 42 initrwnd 42
Use ip route show to get your default route and append initcwnd 42 initrwnd 42.
For example:
JUST PASTING THE EXAMPLE BELOW WILL NOT WORK AND MAY FORCE YOU TO ACCESS YOUR SYSTEM THROUGH THE CONSOLE.
ip route change default via 192.168.0.1 dev ens5 proto dhcp src 192.168.0.150 metric 100 initcwnd 42 initrwnd 42

If your node has IPv6 enabled run ip -6 route show and append initcwnd 42 initrwnd 42 to that default route too.
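One way to avoid retyping the route by hand is to let the shell capture it. This is only a sketch under the assumption that `ip -4 route show default` prints a single default route; test it from a console session, not over SSH, in case the route change goes wrong:

```shell
# Capture the current IPv4 default route exactly as the kernel reports it
DEFAULT_ROUTE=$(ip -4 route show default)

# Re-apply the same route with larger initial congestion and receive windows
# (word splitting of $DEFAULT_ROUTE is intentional here)
sudo ip route change $DEFAULT_ROUTE initcwnd 42 initrwnd 42

# Confirm the new options are present
ip -4 route show default
```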

I made the above changes on one of Cardano Foundation’s relays in Australia and added your relays as localroots. Did you notice an improvement in block propagation time since 2024-12-07?

The graph labeled “TERM Block Delay to Paris” shows the time it took a block you forged to reach a Cardano Foundation node in Paris. That node has no localroots; it makes all its connections through ledger or peer sharing. Even with so few samples, it shows a statistically significant decrease in delay, from roughly 640 ms to 290 ms, for blocks needing 4 round trips.
I’d be very interested to hear of your experience after making the above changes to one or two of your relays in Australia.
