IOHK statement: A Beta release for Daedalus on Linux

Alleluia. Thanks so much Maki!

1 Like

Great News!

1 Like

I must be dreaming! Thank you Maki!

1 Like

Wow, great news. Thank you Maki!

I can see at least one issue w/ the Linux wallet.

Daedalus stores individual blocks and undo records as files under DB-1.0/blocks/data (macOS) or DB/blocks/data (Linux).
That means millions of files (~1.7M at the moment on my macOS install) are currently in use by Daedalus, and therefore millions of inodes on the filesystem.

This is not an issue on macOS’s APFS (inodes) or NTFS (file IDs), as they can have quintillions of inodes/IDs (signed 64-bit, i.e. 2^63-1, to be precise).

But Linux’s default ext4 FS is affected: at FS creation time, the number of inodes is calculated from the size of the underlying partition/volume, etc.

This means smaller filesystems (~20-50GB) will run out of free inodes very quickly, and it’s a fucked-up situation, as the inode count cannot be adjusted dynamically; you have to create a new FS w/ a larger number of inodes (mkfs.ext4 -N …).
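For illustration only (the device name below is a placeholder, and mkfs wipes whatever is on it), creating a fresh ext4 FS with more inodes looks roughly like this:

# example only: /dev/sdb1 is a placeholder device, mkfs destroys its contents
mkfs.ext4 -N 8000000 /dev/sdb1                  # set an absolute inode count
# or lower the bytes-per-inode ratio to get more inodes per GB:
mkfs.ext4 -i 4096 /dev/sdb1
tune2fs -l /dev/sdb1 | grep -i 'inode count'    # verify the resulting inode count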

I created an Ubuntu VM w/ a 20GB root disk, and it was hit hard. Daedalus got stuck at a certain percentage and did not go further, and it turned out the system had run out of free inodes.

root@nyx:~# df -i /
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/ubuntu--vg-root 1215840 1215840 0 100% /

root@nyx:~# dumpe2fs /dev/mapper/ubuntu--vg-root | egrep -w "^(Inode|Block)"
dumpe2fs 1.42.13 (17-May-2015)
Inode count: 1215840
Block count: 4854784
Block size: 4096
Inode blocks per group: 510
Inode size: 256

This can be a big pain in the arse for a lot of Linux users.

Secondly, these million-plus files all need to be read/written during recovery, verification, etc., by scanning the directories and reading/writing the files, which can cause very high I/O on all OSes.
For example, on macOS I have seen that just walking through these files with the ls command takes several minutes (more than 6 minutes).
On Linux it’s better, as free memory is used for various caches (dentry, inode, buffer, etc.), which basically means that when you access a file more than once it will be significantly quicker.

Examples:
macOS, nearly the same in both runs.
ilap$ time (ls -Rtl blocks | wc -l )
1695732

real 6m12.673s
user 0m22.483s
sys 2m7.617s

ilap$ time (ls -Rtl blocks | wc -l )
1695792

real 6m7.630s
user 0m22.452s
sys 2m7.236s

While on Linux it takes under two seconds on the second run (only ~230K files so far).
ilap@nyx:~/.local/share/Daedalus/mainnet/DB$ time (ls -Rtl | wc -l )
229788

real 0m6.229s
user 0m0.954s
sys 0m2.165s
ilap@nyx:~/.local/share/Daedalus/mainnet/DB$ time (ls -Rtl | wc -l )
229788

real 0m1.786s
user 0m0.921s
sys 0m0.931s
ilap@nyx:~/.local/share/Daedalus/mainnet/DB$
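To reproduce the cold-cache vs. warm-cache difference on Linux, you can drop the kernel caches between runs (root required); just a sketch, using the DB path from above:

sync; echo 3 | sudo tee /proc/sys/vm/drop_caches             # drop page/dentry/inode caches
time (ls -Rtl ~/.local/share/Daedalus/mainnet/DB | wc -l)    # cold-cache run
time (ls -Rtl ~/.local/share/Daedalus/mainnet/DB | wc -l)    # warm run, served from cache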

I think the “Connecting to Network” issue has several causes (I have identified three), but in my opinion at least one can be related to the very slow recovery of the blocks, which I have already experienced while playing w/ Daedalus.

8 Likes

Followed build instructions as per IOHK statement: A Beta release for Daedalus on Linux.

Installation was perfect on Ubuntu 16.04. Had my server hang up due to some other processes after syncing and generating a new wallet. Rebooted and connected without issue.

Daedalus seems much more solid on my Ubuntu system than on my Win 10 laptop, which constantly hangs on connecting to the network. The only solution I have found for this on Windows is to delete the DB-1.0 directory and start from scratch, though at least that workaround works every time.
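For what it’s worth, the rough Linux counterpart of that reset (assuming the default data location and that Daedalus is not running) would be something like:

# deletes the local chain copy and forces a full resync - quit Daedalus first
rm -rf ~/.local/share/Daedalus/mainnet/DB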

@_ilap that’s a nice report, have you posted it via the feedback form as well?

1 Like

It worked on Ubuntu MATE 17.10. However, on launch it doesn’t offer an option for where to save the blockchain data, and I saw a quick drop of 10 GB on my SSD within a few minutes. So I tried to remove this install and delete all folders related to the Daedalus wallet I could find, but I’m still missing 10 GB on my SSD. Can you please advise where other files related to this wallet are located, so I can delete them and recover the 10 GB?

Best, Linda

Linda, you can find the blockchain data at ~/.local/share/Daedalus/mainnet/DB/

I did that already, but I still have the 10 GB of data somewhere. I deleted this folder in the Caja file explorer. Maybe it wasn’t a clean delete? Could it be that Daedalus stores some data in /usr or /lib?

Then it should be in the Rubbish Bin instead, so empty the bin, but I am not familiar w/ these fancy file managers like MATE/Caja.

I already did that as well. Could it be that Daedalus stores some data in /usr or /lib?

Not really; it might be the binaries under the ~/.daedalus folder, but that’s only abt 400MB.
I would say just restart your Linux: open file descriptors are not released while some process is still doing I/O (reads/writes) on those files, for example if a syncing process was still running in the background when you deleted them.
As a result, df (disk free) won’t report the proper numbers; you can also try running “sync; sync; sync” to flush the caches.
Run du -sk ~/{.[a-z],}* | sort -n after the reboot if your FS is still occupied.
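If you’d rather not reboot, a general Linux way (not Daedalus-specific) to check for deleted-but-still-open files that are holding space is:

sudo lsof +L1    # lists open files with link count 0, i.e. deleted but still held open
# restart the listed process(es), after which df should show the space as free again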

3 Likes

Thanks _ilap! I tried this command, but to no avail; even after rebooting, the 8 GB is still occupying the disk somewhere…

Programs such as TreeSize are good for finding disk-space hogs on Windows; surely there are equivalents for Linux?
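One commonly used terminal equivalent on Linux is ncdu, a curses front-end to du (assuming a distro that packages it, e.g. Ubuntu):

sudo apt install ncdu
ncdu /    # interactively browse directories sorted by size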

Linda, what does df -i say? It could very well be the inode exhaustion issue…

It says the following for df -i

Filesystem Inodes IUsed IFree IUse% Mounted on
udev 1004477 521 1003956 1% /dev
tmpfs 1012207 820 1011387 1% /run
/dev/sda2 14123008 1568130 12554878 12% /
tmpfs 1012207 112 1012095 1% /dev/shm
tmpfs 1012207 6 1012201 1% /run/lock
tmpfs 1012207 18 1012189 1% /sys/fs/cgroup
/dev/sda1 0 0 0 - /boot/efi
tmpfs 1012207 31 1012176 1% /run/user/1000
/home/username/.Private 14123008 1568130 12554878 12% /home/username
/dev/dm-1 61054976 18646 61036330 1% /media/username/username

? du is a basic command from the coreutils package, so it should/must exist on your system.
Just try running it in this simpler form, in a terminal: cd ; du -sk * .[^.]* | sort -n

FYI, this was described by FP Complete in the audit report released today, page 19.

Currently, each block is stored in an individual file. However, as documented at Proposition: store blocks in less files · Issue #2224 · input-output-hk/cardano-sl · GitHub, this has disadvantages. There is a proposal to fix this issue at: https://github.com/input-output-hk/cardano-sl/blob/535c36cf9496958e96aabf57bb875012060b3b34/docs/proposals/block-storage.md. From the point of view of code simplicity and reliability, however, it would be better to use preexisting solutions designed to solve these problems.
One example would be SQLite, which explicitly documents its abilities to replace multiple small files on a filesystem with a single database: 35% Faster Than The Filesystem. There may be mitigating factors preventing the usage of such a more commonly used storage technology, but there is no evidence of such a conclusion having been reached.
Barring such evidence, our conclusion is that the current storage methodology, and plans for mitigating current risks, introduces undue risk to the project in terms of code complexity, potential data loss, and logic errors.
First occurrence: February 2018
Status: Acknowledged
IOHK Response
We acknowledge that the use of many small files is not a good long term design choice for several reasons. It was an expedient choice during rapid development. We are re-analysing the requirements of the whole storage subsystem, both block storage and associated indexes, and will then choose a new design. This process, including migration, must be done properly and we anticipate that it will take considerable time.
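For illustration only (this schema is made up for the example, not taken from the audit or the linked proposal), a single-database layout in the sqlite3 shell could look roughly like this:

sqlite3 blocks.db "CREATE TABLE IF NOT EXISTS blocks (
  hash  BLOB PRIMARY KEY,   -- block header hash
  slot  INTEGER,            -- slot/height, for range queries
  block BLOB,               -- serialised block
  undo  BLOB                -- serialised undo record
);"
# one file (one inode) instead of ~1.7M individual block/undo files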

3 Likes