Hey folks, I’m running low on disk space on both my block producer and relays. Below is the ‘du’ output. I’ve already pruned the log files, but I need to free up more space. My specific questions are:
The /opt/cardano/cnode/db/immutable directory is comparatively large. Can any of those files be deleted without corrupting the node?
The /home/ubuntu/git directory is also really large. Are there directories/files that are no longer necessary once the binaries have been built?
I assume the files under ./home/ubuntu/.ghcup belong to the Haskell compiler. Is that necessary for a running node?
The …/db/immutable directory has files that don’t appear to be getting cleaned up. The *.chunk, *.primary, *.secondary files have been there since I started the node and the number grows every day. The *.chunk files alone are 10-20 MB each.
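For reference, a quick way to watch that growth (the path is taken from the du output above; adjust `IMMUTABLE_DIR` to your own deployment):

```shell
# Count the immutable chunk files and total the directory's size.
# Growth here is expected: it's the chain itself, not leaked temp files.
IMMUTABLE_DIR="${IMMUTABLE_DIR:-/opt/cardano/cnode/db/immutable}"
ls "$IMMUTABLE_DIR"/*.chunk 2>/dev/null | wc -l   # how many chunks so far
du -sh "$IMMUTABLE_DIR" 2>/dev/null || true       # total size of the directory
```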
But you configured 8G for swap, so I assumed the space left is only 40G… how much free space do you have now? Don’t you have the option in AWS to increase the size of the disk?
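Increasing the disk in AWS is usually the easy part; afterwards you grow the partition and filesystem in place. A sketch, assuming an NVMe root volume with an ext4 filesystem (device names are assumptions, check yours with lsblk first):

```shell
lsblk                            # confirm the resized EBS volume is visible
sudo growpart /dev/nvme0n1 1     # grow partition 1 to fill the volume
sudo resize2fs /dev/nvme0n1p1    # grow ext4 (use xfs_growfs for XFS roots)
df -h /                          # verify the new space is available
```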
This one can be deleted, but you will need to download it again the next time you upgrade the node to a new version:
2839056 ./home/ubuntu/git/cardano-node
My recommendation is to increase the disk space if you have the possibility, because the network is growing and soon you will be out of space.
This being said, I would advise you to always provision your instance with an extra disk for all your applications.
Also, having this disk handled by LVM2 would make it all easier in the future.
It will save you time, and it will save you disk space, since you won’t need any of the build environment around.
If you know what you’re doing, you can run a full node with 10GB of RAM and 30GB of storage (including an unused 4GB swap file) and still only be using 90% of the disk, though I would anticipate having to raise some of these limits slightly in the next 2-3 months.
Send your logs off the node and store them somewhere else. There’s no need for them to use storage on the node itself.
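Until the logs are shipped elsewhere, you can at least cap how much disk journald keeps locally. A sketch of /etc/systemd/journald.conf settings (the values here are placeholders, tune to taste):

```ini
[Journal]
# Hard cap on the journal's on-disk usage
SystemMaxUse=200M
# Drop entries older than a week
MaxRetentionSec=7day
```

After editing, `sudo systemctl restart systemd-journald` applies the new limits.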
Delete all unused packages
Purge unused kernels or modules
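On Ubuntu, those two items map to something like this (review the proposed removal list before confirming):

```shell
sudo apt-get autoremove --purge   # remove unused packages, old kernels, and their configs
sudo apt-get clean                # empty the package cache under /var/cache/apt/archives
```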
If you need assistance, you can contact me - “I do tech for ADA™”
Totting up your disk usage, I see a total of 40GB. That leaves 8GB slack. I suspect that’s being used by Ubuntu?
Don’t delete anything from ‘immutable’ - it’s the blockchain history. If you do, the node will need to resync the chain. Generally, I wouldn’t touch anything in ‘db’. That’s been growing recently with the chain usage, but is about 18GB on my machine (and < 20GB on yours). So 48GB disk should be enough if it is all available to the node and you’re not eg running multiple instances, saving verbose logs or ledger states to the disk. The .chunk etc files are the actual blockchain data. Don’t be tempted to delete them.
You are also keeping 12GB for source builds (/home/ubuntu). You shouldn’t need most (perhaps any) of this once you’ve installed the binaries somewhere safe (./home/ubuntu/git/cardano-node/dist-newstyle/build is the build directory so will contain the binaries, I think; from your usage, the node doesn’t seem to have been installed anywhere else?). BUT if you build the node from source again, you will need that space, of course. So you either need to decide you’re not going to keep the node up to date, or provision enough disk to allow for a build, or use prebuilt docker or binary (or nix?) images, or build on another AWS instance and copy the binaries over to this system.
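To make the “install the binaries somewhere safe” step concrete, here’s a sketch (the dist-newstyle layout is assumed from the post above; verify the paths on your own box before deleting anything):

```shell
# Copy the built node binaries out of dist-newstyle so the build tree can be
# removed. cabal buries binaries several directories deep, so find them by name.
copy_node_binaries() {
  src="$1"   # the cardano-node source checkout
  dest="$2"  # somewhere safe, e.g. ~/.local/bin
  mkdir -p "$dest"
  find "$src/dist-newstyle/build" -type f \
    \( -name cardano-node -o -name cardano-cli \) \
    -exec cp {} "$dest/" \;
}
# Example: copy_node_binaries "$HOME/git/cardano-node" "$HOME/.local/bin"
```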
The 8GB swap file is also eating into your budget. My own preference is to have enough memory that I don’t need any swap, but that probably isn’t realistic for most people. You can reduce the swap if you provision enough memory and are comfortable with system administration (using swapon/swapoff and/or /etc/fstab), but the easier thing on AWS is probably to just increase the total amount of disk and accept the 8GB hit.
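If you do decide to drop the swap file, the sequence is short (the /swapfile path is an assumption; check /etc/fstab for the real one):

```shell
sudo swapoff /swapfile    # stop swapping to the file
sudo rm /swapfile         # reclaim the 8GB
# Finally, remove (or comment out) the matching /swapfile line in /etc/fstab
# so swap isn't re-enabled at the next boot.
```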
My BP had one SSD for the OS, and logs beat the blimey heck out of it. It comes with a free SSD for temp storage; I made this systemd service to turn it into swap (you can easily modify it if you aren’t on Azure but still get a free SSD without guaranteed persistence), and finally it had a couple more SSDs for non-OS stuff.
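For anyone wanting to replicate the idea, a hypothetical unit along these lines would do it (the /mnt/resource mount point is Azure’s temp-disk default; this is a sketch, not the poster’s actual service):

```ini
[Unit]
Description=Swap on the ephemeral (non-persistent) SSD, recreated each boot

[Service]
Type=oneshot
RemainAfterExit=yes
# The temp disk is wiped on deallocation, so the file is rebuilt every boot.
ExecStart=/usr/bin/fallocate -l 8G /mnt/resource/swapfile
ExecStart=/usr/bin/chmod 600 /mnt/resource/swapfile
ExecStart=/usr/sbin/mkswap /mnt/resource/swapfile
ExecStart=/usr/sbin/swapon /mnt/resource/swapfile
ExecStop=/usr/sbin/swapoff /mnt/resource/swapfile

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now`; since it only swaps on throwaway storage, losing the disk costs you nothing persistent.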
I’ve noticed that trimming, even every day or two, does wonders on them (GBs of blocks).
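Rather than remembering to trim manually, most systemd-based distros ship a weekly timer you can just switch on:

```shell
sudo systemctl enable --now fstrim.timer   # periodic TRIM of mounted filesystems
systemctl list-timers fstrim.timer         # confirm the next scheduled run
```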
If my service hasn’t run lately, you should see what I mean in the following two pictures:
I am an idiot and ran fstrim -a before doing df -h. Maybe someone else on Azure can do it. If not, I’ll do this in reverse order and show the horrors of NAND storage with chatty software that logs everything.