Since the 1.29 hard fork, nodes running on 8 GB tend to choke every now and then and shut down. For nodes running on AWS, the next logical upgrade is 16 GB. On bare-metal servers that's a one-off upgrade that doesn't cost too much, but on AWS it doubles the cost of running them! For a smaller pool this is terrifying. Is there anything that can be done to optimise the RAM usage on these nodes other than turning TraceMempool off? Does anyone know if adding swap helps? Is it even possible to run nodes on 8 GB anymore?
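For context, here's roughly what those two mitigations look like — a minimal sketch, assuming the config file is `mainnet-config.json` in the current directory (`TraceMempool` is the actual config key; the paths and swap size are just examples from my setup):

```bash
# Turn off mempool tracing (commonly reported to cut RAM usage on 1.29+).
jq '.TraceMempool = false' mainnet-config.json > tmp.json \
  && mv tmp.json mainnet-config.json

# Add an 8 GB swap file so memory spikes page out instead of the
# OOM killer taking the node down.
sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

# Keep swappiness low so swap is only touched under real pressure;
# heavy swapping on EBS-backed instances will hurt node performance.
sudo sysctl vm.swappiness=10
```

My understanding is that swap will stop the hard crashes, but if the node's working set genuinely exceeds 8 GB it will run noticeably slower once it starts paging.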
With K=500, soon to be K=1000 in the future, do you think IOHK envisioned that smaller pools wouldn't be able to survive and that the ~2,800 pools today would shrink toward those numbers? Cardano has always been “by design”.
Side question about topology configuration: how many relays do other SPOs have, and are they used as backups or are they also connected to their BP? Right now I have four relays connected to my BP, dispersed across different AWS regions, but I suspect the high latency might be causing the BP to miss slots every now and then.
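For reference, the BP's topology looks roughly like this in the legacy (pre-P2P) format — the hostnames and ports below are placeholders for my four regions:

```bash
# Example BP topology listing all four relays (placeholder hostnames).
cat > mainnet-topology.json <<'EOF'
{
  "Producers": [
    { "addr": "relay-sydney.example.com",    "port": 3001, "valency": 1 },
    { "addr": "relay-us-east.example.com",   "port": 3001, "valency": 1 },
    { "addr": "relay-frankfurt.example.com", "port": 3001, "valency": 1 },
    { "addr": "relay-singapore.example.com", "port": 3001, "valency": 1 }
  ]
}
EOF
```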
You have too many relays; in my opinion two are more than enough. I made blocks with one single relay (the second one should be for redundancy, in case one of the relays is down).
So you could cancel two relays and use the savings to upgrade the remaining ones.
The reason for running four was to experiment with block propagation times: the idea being that with relays in America and Germany, the blocks I mint in Australia should propagate through the network faster and minimise slot-height battles. However, I'm not actually sure whether it has helped.
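One cheap way to sanity-check the latency side is to measure round-trip times from the BP to each relay — a rough sketch, using the same placeholder hostnames as above:

```bash
# Average RTT from the BP to each relay; anything consistently high adds
# directly to how long a freshly minted block takes to leave the BP.
for relay in relay-sydney.example.com relay-us-east.example.com \
             relay-frankfurt.example.com relay-singapore.example.com; do
  printf '%s: ' "$relay"
  ping -c 5 -q "$relay" | awk -F'/' '/^(rtt|round-trip)/ {print $5 " ms avg"}'
done
```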