Since the 1.29 hard fork, nodes running on 8 GB of RAM tend to choke every now and then and shut down. For nodes on AWS, the next logical upgrade is to 16 GB. For nodes on bare-metal servers this is a one-off upgrade that doesn't cost too much, but on AWS it doubles the running cost! For a smaller pool this is terrifying. Is there anything that can be done to optimise RAM usage on these nodes, other than turning `TraceMempool` off in the node config? Does anyone know if adding swap helps? Is it even possible to run nodes on 8 GB anymore?
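In case it helps anyone else experimenting: this is roughly the swap setup I'd try on a Linux instance. The 8 GB size and the swappiness value are just my assumptions, not a recommendation — swap won't make the node fast, it just gives it headroom instead of an OOM kill.

```shell
# Sketch: add an 8 GB swapfile on a Linux node (size is an assumption, adjust to taste)
sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Persist across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

# Prefer RAM and only spill to swap under real memory pressure
sudo sysctl vm.swappiness=10
```

Worth checking `free -h` afterwards to confirm the swap is active before assuming it helped.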
With k=500 now, and k=1000 planned for the future, do you think IOHK envisioned that smaller pools wouldn't be able to survive and that the ~2,800 pools we have today would shrink down towards those numbers? Cardano has always been "by design".
Side question about topology configuration: how many relays do other SPOs run, and are they used as backups, or are they all connected to the BP? Right now I have 4 relays connected to my BP, dispersed across different AWS regions, but I suspect the high latency might be causing the BP to miss slots every now and then.
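For context, my BP's topology file looks roughly like this (hostnames and ports are placeholders, and this is the legacy `cardano-node` topology format), with all four relays as producers:

```json
{
  "Producers": [
    { "addr": "relay-us.example.com", "port": 3001, "valency": 1 },
    { "addr": "relay-eu.example.com", "port": 3001, "valency": 1 },
    { "addr": "relay-ap.example.com", "port": 3001, "valency": 1 },
    { "addr": "relay-sa.example.com", "port": 3001, "valency": 1 }
  ]
}
```

I'm wondering whether it would be safer to keep only the lowest-latency relay(s) in the BP's producer list and leave the far-away regions purely as public relays.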