Disk space on a node

Hey folks, I’m running low on disk space on both my block producer and relays. Below is the ‘du’ output. I’ve already pruned the log files but I need to free up more. My specific questions are:

  1. The /opt/cardano/cnode/db/immutable directory is comparatively large. Can any of those files be deleted without corrupting the node?

  2. The /home/ubuntu/git is also really large. Are there directories/files that are no longer necessary once the binaries have been built?

  3. I assume the files under ./home/ubuntu/.ghcup belong to the Haskell compiler. Is that necessary for a running node?

20384828 ./opt
20384824 ./opt/cardano
20383884 ./opt/cardano/cnode
19538484 ./opt/cardano/cnode/db
16935736 ./opt/cardano/cnode/db/immutable
12123228 ./home
12123224 ./home/ubuntu
8388612 ./swapfile
4286952 ./home/ubuntu/git
4052204 ./home/ubuntu/.cabal
2839056 ./home/ubuntu/git/cardano-node
2661152 ./home/ubuntu/git/cardano-node/dist-newstyle
2557332 ./opt/cardano/cnode/db/ledger
2452736 ./home/ubuntu/.cabal/store
2452732 ./home/ubuntu/.cabal/store/ghc-8.10.4
2408100 ./home/ubuntu/.ghcup
2140052 ./home/ubuntu/.ghcup/ghc
2140048 ./home/ubuntu/.ghcup/ghc/8.10.4
1933264 ./home/ubuntu/git/cardano-node/dist-newstyle/build

Did you delete the log files from the archive folder?
Use cd /opt/cardano/cnode/logs/archive
and rm * to delete all files from the archive folder

what is your disk size?

Yes. I deleted the archived log files and updated the config.json to keep only 2 files. The machines are AWS instances configured with 48GB each.

The …/db/immutable directory has files that don’t appear to be getting cleaned up. The *.chunk, *.primary, *.secondary files have been there since I started the node and the number grows every day. The *.chunk files alone are 10-20 MB each.

But you configured 8G for swap, so I assume the space left is only 40G… how much free space do you have now? Don’t you have an option in AWS to increase the size of the disk?

This one can be deleted, but you will need to download and build it again the next time you upgrade the node to a new version:

2839056 ./home/ubuntu/git/cardano-node
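A sketch of that cleanup which keeps the git checkout itself, so a future upgrade only needs a rebuild rather than a fresh clone (paths and sizes are from the du listing above; including the cabal store is my own addition, and it gets rebuilt on the next compile):

```shell
# Build artifacts only -- the source checkout stays in place
rm -rf /home/ubuntu/git/cardano-node/dist-newstyle  # ~2.6 GB of build output
rm -rf /home/ubuntu/.cabal/store                    # ~2.4 GB of compiled dependencies
```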

My recommendation is to increase the disk space if you can, because the network is growing and you will soon run out of space.

1 Like

-h in du is your friend.
It stands for human-readable :wink:
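For example, rerunning the listing above in human-readable form, largest entries first:

```shell
# -h prints K/M/G units; sort -h understands them
du -h --max-depth=2 /opt/cardano/cnode | sort -hr | head -n 10
```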

Growing a disk in AWS is really easy; here is some doco if you need it:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html
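The in-instance side of that doc boils down to a few commands. The device names here are an assumption (check yours with lsblk first), and an XFS filesystem needs xfs_growfs instead of resize2fs:

```shell
lsblk                       # identify the root device and partition
sudo growpart /dev/xvda 1   # expand partition 1 to fill the enlarged EBS volume
sudo resize2fs /dev/xvda1   # grow the ext4 filesystem into the new space
df -h /                     # confirm the new size
```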

That being said, I would advise you to always provision your instance with an extra disk for all your applications.
Also, having this disk handled by LVM2 would make it all easier in the future.

2 Likes

+1

Did you do a Nix build on that box? If so, you can do …

> nix-collect-garbage -d

that would free a few GB.

If you run docker …

you can do …

> docker system prune --all

which would remove unused data.

This is a dangerous command without stating the path in the command. You’d not be the first to make a running system delete itself :wink:

1 Like

The subject was the files from the archive folder… corrected.

1 Like

I know - just wanted to note that. No hard feelings :slightly_smiling_face:

1 Like

Please always run pwd before running this command, and I would also use the explicit path.

TBH, avoid rm * altogether: archive/tar up whatever files you want to delete and just remove the one file.

Yes, it’s extra steps, but worth it.

Don’t worry, I used it; if you are in the right place there will be no issues :wink:

You don’t want to manually delete hundreds of files.

How does one get involved in nodes in the first place? What’s the function? What would I be contributing?

@beakersbike Unless you have a really really really good reason not to, just use the binaries from

https://hydra.iohk.io/job/Cardano/cardano-node/cardano-node-linux/latest-finished

It will save you time, and it will save you disk space, since you won’t need any of the build environment around.

If you know what you’re doing, you can run a full node with 10GB of RAM and 30GB of storage (including an unused 4GB swap file) and still only be using 90% of the disk, though I would anticipate having to raise some of these limits slightly in the next 2-3 months.

Send your logs off the node and store them somewhere else. There’s no need for them to use storage on the node itself.

Delete all unused packages

Purge unused kernels or modules
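If the box logs through systemd-journald (an assumption, but it is the Ubuntu default), you can also cap the journal so it never creeps back up between cleanups. A minimal fragment for /etc/systemd/journald.conf:

```ini
# /etc/systemd/journald.conf -- apply with: sudo systemctl restart systemd-journald
[Journal]
SystemMaxUse=200M
```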

If you need assistance, you can contact me: “I do tech for ADA”™

(also, use ‘du -h’… it’s easier to read…)

1 Like

Totting up your disk usage, I see a total of 40GB. That leaves 8GB slack. I suspect that’s being used by Ubuntu?

Don’t delete anything from ‘immutable’ - it’s the blockchain history. If you do, the node will need to resync the chain. Generally, I wouldn’t touch anything in ‘db’. That’s been growing recently with the chain usage, but is about 18GB on my machine (and < 20GB on yours). So 48GB disk should be enough if it is all available to the node and you’re not eg running multiple instances, saving verbose logs or ledger states to the disk. The .chunk etc files are the actual blockchain data. Don’t be tempted to delete them.

You are also keeping 12GB for source builds (/home/ubuntu). You shouldn’t need most (perhaps any) of this once you’ve installed the binaries somewhere safe (./home/ubuntu/git/cardano-node/dist-newstyle/build is the build directory so will contain the binaries, I think; from your usage, the node doesn’t seem to have been installed anywhere else?). BUT if you build the node from source again, you will need that space, of course. So you either need to decide you’re not going to keep the node up to date, or provision enough disk to allow for a build, or use prebuilt docker or binary (or nix?) images, or build on another AWS instance and copy the binaries over to this system.
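One way to do that “install the binaries somewhere safe” step; the find pattern is an assumption about where cabal drops the executable inside dist-newstyle, so verify the path on your own box first:

```shell
# Locate the built binary inside dist-newstyle and copy it out of the build tree
bin=$(find /home/ubuntu/git/cardano-node/dist-newstyle -type f -name cardano-node -executable | head -n 1)
sudo cp "$bin" /usr/local/bin/
/usr/local/bin/cardano-node version   # confirm the installed copy runs
```

After that, the whole build tree can go without taking the running node with it.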

The 8GB swap file is also eating into your budget. My own preference is to have enough memory that I don’t need any swap, but that probably isn’t realistic for most people. You can reduce the swap if you provision enough memory and are comfortable with system administration (using swapon/swapoff and/or /etc/fstab), but the easier thing on AWS is probably to just increase the total amount of disk and accept the 8GB hit.

2 Likes

My BP had one SSD for the OS, and logs beat the blimey heck out of it. It comes with a free SSD for temp storage; I made a systemd service to turn it into swap (you can easily modify it if you aren’t on Azure and still get a free SSD without guaranteed persistence), and finally it had a couple more SSDs for non-OS stuff.

I’ve noticed that trimming, even every day or two, does wonders on them (GBs of blocks).

If my service hasn’t run lately, you should see what I mean in the following two pictures:

I am an idiot and ran fstrim -a before doing df -h. Maybe someone else on Azure can do it. If not, I’ll do this in reverse order and show the horrors of NAND storage with chatty software that logs everything.

$ date
2021-11-09 01:17:00 EST
$ df -h | egrep "sdd|md127|root"
/dev/root        62G   42G   21G  67% /
/dev/md127      127G   19G  103G  16% /data
/dev/sdd1        32G   17G   14G  54% /mnt

Ah, no need. The trim service ran just 30 hours ago. Behold:

$ systemctl status fstrim.service
● fstrim.service - Discard unused blocks on filesystems from /etc/fstab
     Loaded: loaded (/lib/systemd/system/fstrim.service; static; vendor preset: enabled)
     Active: inactive (dead) since Mon 2021-11-08 00:00:23 UTC; 1 day 6h ago
TriggeredBy: ● fstrim.timer
       Docs: man:fstrim(8)
    Process: 180598 ExecStart=/sbin/fstrim --fstab --verbose --quiet (code=exited, status=0/SUCCESS)
   Main PID: 180598 (code=exited, status=0/SUCCESS)

Nov 08 00:00:10  systemd[1]: Starting Discard unused blocks on filesystems from /etc/fstab...
Nov 08 00:00:23  fstrim[180598]: /mnt: 15.3 GiB (16453980160 bytes) trimmed on /dev/disk/cloud/azure_resource-part1
Nov 08 00:00:23  fstrim[180598]: /data: 108.2 GiB (116153659392 bytes) trimmed on /dev/md127
Nov 08 00:00:23  fstrim[180598]: /boot/efi: 99.2 MiB (103973888 bytes) trimmed on /dev/sdc15
Nov 08 00:00:23  fstrim[180598]: /: 17.5 GiB (18799480832 bytes) trimmed on /dev/sdc1
Nov 08 00:00:23  systemd[1]: fstrim.service: Succeeded.
Nov 08 00:00:23  systemd[1]: Finished Discard unused blocks on filesystems from /etc/fstab.

I’m having this same error and I’m not sure what to delete without messing anything up (still learning).

I’m getting the errors:

Reading package lists… Error!
E: Write error - write (28: No space left on device)
E: The package lists or status file could not be parsed or opened.

I’m running DigitalOcean 16GB / 80GB (adding more after I can run anything)
Using the CoinCashew setup

du -h / df -h below:

Clear the old logs to try to save some space, or upgrade the server.

Cheers,

Usually the storage directory is /var/log/journal or /run/log/journal, but it doesn’t necessarily exist on your system.

If you just want to check the amount of space that the journal is currently occupying on your disk, simply type:

$ journalctl --disk-usage

If it turns out to be large, journalctl --vacuum-size=200M (or --vacuum-time=7d) will shrink it in place.