Node Size? Running out of space

Hello all. What drive size is everyone using for their nodes? My relay is on a 120 GB hard drive, and for the last few months it has crashed several times as it runs out of space. I've tried clearing archived logs as well as journals…
This is the output of df. I'm not sure if this is normal and it's simply time for a bigger drive, or whether I have extra files somewhere that I should be getting rid of. Any advice is appreciated. Thanks!

/opt/cardano/cnode/scripts$ df -H
Filesystem Size Used Avail Use% Mounted on
udev 8.3G 0 8.3G 0% /dev
tmpfs 1.7G 1.8M 1.7G 1% /run
/dev/sda5 118G 111G 682M 100% /
tmpfs 8.4G 0 8.4G 0% /dev/shm
tmpfs 5.3M 0 5.3M 0% /run/lock
tmpfs 8.4G 0 8.4G 0% /sys/fs/cgroup
/dev/loop0 132k 132k 0 100% /snap/bare/5
/dev/loop3 230M 230M 0 100% /snap/gnome-3-34-1804/77
/dev/loop4 59M 59M 0 100% /snap/core18/2409
/dev/loop5 9.5M 9.5M 0 100% /snap/canonical-livepatch/138
/dev/loop9 267M 267M 0 100% /snap/gnome-3-38-2004/106
/dev/loop12 230M 230M 0 100% /snap/gnome-3-34-1804/72
/dev/loop14 54M 54M 0 100% /snap/snap-store/547
/dev/loop15 86M 86M 0 100% /snap/gtk-common-themes/1534
/dev/loop18 57M 57M 0 100% /snap/snap-store/558
/dev/sda1 536M 4.1k 536M 1% /boot/efi
tmpfs 1.7G 37k 1.7G 1% /run/user/125
/dev/loop6 120M 120M 0 100% /snap/core/13308
/dev/loop7 50M 50M 0 100% /snap/snapd/16010
/dev/loop10 9.5M 9.5M 0 100% /snap/canonical-livepatch/146
/dev/loop1 97M 97M 0 100% /snap/gtk-common-themes/1535
/dev/loop16 421M 421M 0 100% /snap/gnome-3-38-2004/112
/dev/loop8 50M 50M 0 100% /snap/snapd/16292
/dev/loop13 120M 120M 0 100% /snap/core/13425
/dev/loop17 59M 59M 0 100% /snap/core18/2538
/dev/loop2 66M 66M 0 100% /snap/core20/1581
/dev/loop11 66M 66M 0 100% /snap/core20/1587
tmpfs 1.7G 8.2k 1.7G 1% /run/user/1001
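Before buying a bigger drive, it can help to confirm what is actually using the space. A minimal sketch (the NODE_DIR path is an assumption; adjust it to your install, e.g. /opt/cardano/cnode):

```shell
# List the ten largest entries under the node directory, biggest first, so you
# can see whether the chain db or stale logs are the culprit.
# NODE_DIR is an assumed path; override it for your layout.
NODE_DIR="${NODE_DIR:-.}"
du -sk "$NODE_DIR"/* 2>/dev/null | sort -rn | head -n 10
```

On a typical install the db directory will dominate; if logs or journals are near the top instead, housekeeping may buy you some time.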

Yep, I was having that issue on my 120 GB drives. I've moved them to 250 GB+ now. The db alone is a good 70 GB at this point, and with everything else, 120 GB isn't enough.


Am I able to just use some mirroring software between drives, or did you rebuild from scratch?

I just cloned the drive. It's not as simple as cloning on Windows, but the easiest way is to make a bootable USB of Ubuntu (or whatever release you're using), boot from it, and then use the GParted utility to clone the partitions to your new drive.

Otherwise it's a rebuild, but I successfully cloned two relays that way.


Clonezilla comes highly recommended for this sort of work. It's free, open source, and handles almost every type of file system you can throw at it.

I'm also running 250+ GB drives for nodes, which leaves approximately 50-60% free on a minimal build. Regardless, I recommend setting up housekeeping cron jobs to clear down logs. The storage growth rate will increase post-Vasil, as transaction bytes per second on chain will increase.
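As a sketch of that housekeeping, crontab entries (in /etc/crontab or /etc/cron.d format, which includes a user field) might look like this; the log path and retention periods are assumptions, so adjust them for your install:

```shell
# Nightly at 03:00: cap the systemd journal at 200 MB
0 3 * * * root journalctl --vacuum-size=200M
# Nightly at 03:30: delete rotated node logs older than 14 days
# (log path is an assumption; point it at wherever your node writes logs)
30 3 * * * root find /opt/cardano/cnode/logs -name '*.gz' -mtime +14 -delete
```

journalctl also supports `--vacuum-time=7d` if you prefer a retention window over a size cap.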

@jeremyisme is the 250 GB you're using just for the block-producing node (BPN), or is it needed for relays as well? Based on all my testing I assume it's only needed for the BPN, but I wanted to make sure before building my mainnet pool. I have a testnet pool up and running on 100 GB, but I do understand testnet has lower system requirements than mainnet.

Hi Jeremy! Good name.

I’m using 250 GB for both the relays and the BP on mainnet.

On testnet you can run much smaller drives given the smaller blockchain, but for mainnet 120 GB doesn't cut it anymore.


Hi @jeremyisme. Yes, agreed, Jeremy is a great name! :grin:
Thanks for the information. Much appreciated. :+1:t4: