IOHK statement: A Beta release for Daedalus on Linux

Thanks _ilap! I tried this command line but to no avail, even after rebooting, the 8GB is still occupying the disk somewhere…

Programs such as Treesize are good for finding filespace hogs on Windows, surely there are equivalents for Linux?

Linda, what does df -i say? It could very well be the inode exhaustion issue…

It says the following for df -i

Filesystem                 Inodes    IUsed    IFree IUse% Mounted on
udev                      1004477      521  1003956    1% /dev
tmpfs                     1012207      820  1011387    1% /run
/dev/sda2                14123008  1568130 12554878   12% /
tmpfs                     1012207      112  1012095    1% /dev/shm
tmpfs                     1012207        6  1012201    1% /run/lock
tmpfs                     1012207       18  1012189    1% /sys/fs/cgroup
/dev/sda1                       0        0        0     - /boot/efi
tmpfs                     1012207       31  1012176    1% /run/user/1000
/home/username/.Private  14123008  1568130 12554878   12% /home/username
/dev/dm-1                61054976    18646 61036330    1% /media/username/username
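(For anyone wanting to check inode usage programmatically rather than parsing `df -i` output, a small sketch using only the Python standard library's `os.statvfs`, which reports the same counters:)

```python
import os

def inode_usage(path="/"):
    """Return (used, total, percent_used) inodes for the filesystem at path."""
    st = os.statvfs(path)
    total = st.f_files           # total inodes on the filesystem
    free = st.f_ffree            # free inodes
    used = total - free
    pct = 100 * used / total if total else 0.0
    return used, total, pct

used, total, pct = inode_usage("/")
print(f"{used}/{total} inodes used ({pct:.1f}%)")
```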

du is a basic command from the coreutils package, so it should already exist on your system.
Just try running it in this simpler form, of course in a terminal: cd ; du -sk * .[^.]* | sort -n

FYI, this was described by FP Complete in the audit report released today, page 19.

Currently, each block is stored in an individual file. However, as documented at, this has disadvantages. There is a proposal to fix this issue at: From the point of view of code simplicity and reliability, however, it would be better to use preexisting solutions designed to solve these problems.
One example would be SQLite, which explicitly documents its abilities to replace multiple small files on a filesystem with a single database: There may be mitigating factors preventing the usage of such a more commonly used storage technology, but there is no evidence of such a conclusion having been reached.
Barring such evidence, our conclusion is that the current storage methodology, and plans for mitigating current risks, introduces undue risk to the project in terms of code complexity, potential data loss, and logic errors.
First occurrence: February 2018
Status: Acknowledged
IOHK Response
We acknowledge that the use of many small files is not a good long term design choice for several reasons. It was an expedient choice during rapid development. We are re-analysing the requirements of the whole storage subsystem, both block storage and associated indexes, and will then choose a new design. This process, including migration, must be done properly and we anticipate that it will take considerable time.
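As a rough illustration of the SQLite alternative the audit points at (a sketch only, not how cardano-sl actually stores blocks, and the schema here is invented), many small per-block files can be replaced by rows in a single database:

```python
import sqlite3

# One database instead of one file per block (illustrative schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE blocks (hash TEXT PRIMARY KEY, data BLOB)")

def put_block(block_hash: str, data: bytes) -> None:
    conn.execute("INSERT OR REPLACE INTO blocks VALUES (?, ?)", (block_hash, data))
    conn.commit()

def get_block(block_hash: str):
    row = conn.execute("SELECT data FROM blocks WHERE hash = ?",
                       (block_hash,)).fetchone()
    return row[0] if row else None

put_block("deadbeef", b"\x01\x02\x03")
print(get_block("deadbeef"))  # b'\x01\x02\x03'
```

A single file also sidesteps the inode-exhaustion problem discussed above, since millions of blocks no longer each consume an inode.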


This is mad, no wonder wallet restoration takes ages (and so does deleting the chain folder). Imagine how fast restoration could be if they switched to a DB and managed to implement the HD-address check on the DB side :smiley: (Not sure it’s possible, tho. And also security n’ stuff.)

Thank you, @unique, for the reference!

Every solution has its own drawbacks and benefits
These files are created at installation time, and when the blocks are downloaded the global UTxO must be built from them (Daedalus needs to validate every transaction to build its own UTxO, as it should not trust anybody but itself). From that point on, not much IO occurs (one block with on average 1-2 transactions every 20 seconds, plus some messages).

I think wallet restoration is quite a tricky and time-consuming process, because when the root key pair is created from the seed, that HD wallet can have 2 * 2^31 (hardened and non-hardened) addresses. During restoration, we need to go through all the UTxOs and check each individual address to see whether it belongs to that HD wallet or not.
Theoretically, the worst case is O(m * n), where m is the number of unspent addresses in the UTxO set and n is about 4 billion. So I do not think it reads the blocks during wallet restoration, only RocksDB’s UTxO database, but I am not sure. I will check this theory when I get home.
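Back-of-envelope arithmetic on that worst case (the 10-million-checks-per-second rate is purely an assumption, chosen for scale):

```python
ADDRESS_SPACE = 2 * 2**31     # hardened + non-hardened indices, as above
CHECKS_PER_SEC = 10_000_000   # assumed comparison throughput (illustrative)
m = 1_000                     # unspent addresses in the UTxO set

total_checks = m * ADDRESS_SPACE
days = total_checks / CHECKS_PER_SEC / 86_400
print(f"{total_checks:.2e} checks ~ {days:.1f} days")  # ~ 5.0 days
```

Even for a modest m, the naive scan is hopeless, which is why restoration cannot literally derive and compare all 2 * 2^31 addresses per UTxO entry.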

Thx @unique, I did not know that these kinds of audit reports were available, I am reading it now.

1 Like

@_ilap, thank you for your answer! As always it is a true pleasure to discuss implementation details with you!

Sorry, I haven’t yet had time to check the actual code for this. But how is it possible to create these files at installation time, if there should be a file per block, and until you’ve downloaded the chain you don’t know how many blocks there are at all?

I hoped that a “global” UTxO index would be used (for all addresses), but didn’t know whether it’s implemented in reality.

It should definitely not take O(m * n); it would be mad to iterate through all possible 2^32 wallet addresses :slight_smile:

If I understand it correctly, as described in the HD address payload doc, they are storing the encrypted “hierarchy path” of the address in the address itself, so when Daedalus has the private key of the wallet (at restoration) it can try to decrypt the address, and if it decrypts successfully, then this address belongs to this wallet.

So the address check takes constant time, and the whole restoration should take only O(m).
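A toy model of that idea (Python; an HMAC tag stands in for the real encrypted derivation-path payload, which Cardano implements quite differently in detail):

```python
import hmac, hashlib

def make_address(wallet_key: bytes, path: str) -> bytes:
    # The address carries a tag only the wallet key can reproduce.
    tag = hmac.new(wallet_key, path.encode(), hashlib.sha256).digest()[:8]
    return path.encode() + b"|" + tag

def belongs_to(wallet_key: bytes, address: bytes) -> bool:
    # Constant-time membership check: recompute the tag from the payload.
    path, _, tag = address.partition(b"|")
    expected = hmac.new(wallet_key, path, hashlib.sha256).digest()[:8]
    return hmac.compare_digest(tag, expected)

key = b"root-key"
utxo_addresses = [make_address(key, "0/1"), make_address(b"other-key", "0/2")]
mine = [a for a in utxo_addresses if belongs_to(key, a)]  # one check per entry: O(m)
print(len(mine))  # 1
```

Each UTxO entry needs exactly one check against the wallet key, so a scan over m entries is O(m) regardless of the 2^32 address space.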

If this is really the case - then my ramble is, sadly, wrong :slight_smile:
Sadly, because it would mean that the O(m) check of all positive balances itself (without I/O overhead) takes this much time at restoration, and that there’s not much algorithmic room for optimisation, as I had hoped.

In that case, we would have to just accept that at all times the full blockchain contains a whole lot of active addresses (with positive balances), and even a linear-complexity iteration over those takes ages :frowning:

Thank you, again, for keeping the discussion open and interesting! I will need to look into the details too, but I hope you won’t withhold any interesting findings :wink:

Previously I thought epoch snapshots (afaik, planned) could help speed up the restoration process, but now I’m not so sure about it, since, as I understand it, they would only help to build the UTxO index faster, while the eventual scan of that index would take the same time.

1 Like

I am sorry, I was not clear. By installation time I meant: download, install, run, and once all the blocks are downloaded and Daedalus is properly synced, then it’s installed.

You could be right if the wallet’s key can decrypt a belonging output’s address without using any extra info (i.e. it does not need anything like an index etc.).

So it seems that I wrongly assumed the decryption process needs some extra info to decrypt the address.
I need to dig a bit into the available docs and the source code again, but I need to learn Haskell, which is a bit different from the imperative languages I am familiar with.

I do not have knowledge of any internals, as I cannot properly read Haskell code, yet.

I thought it would make sense if the UTxO database were built in parallel, during the recreation of the blockchain (when the blocks and transactions are downloaded and verified), and not when a wallet is restored.

That would not make sense at all, as a new wallet would also need the utxo to verify any incoming transactions.
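A minimal sketch of what maintaining the UTxO set incrementally during sync could look like (Python dict; the field names and block shape here are invented, not cardano-sl’s actual types):

```python
# utxo maps (tx_id, output_index) -> (address, amount)
def apply_block(utxo: dict, block: list) -> None:
    """Update the UTxO set in place for each transaction in a block."""
    for tx in block:
        for spent in tx["inputs"]:              # remove consumed outputs
            utxo.pop(spent, None)
        for i, (addr, amount) in enumerate(tx["outputs"]):
            utxo[(tx["id"], i)] = (addr, amount)  # add newly created outputs

utxo = {}
genesis = [{"id": "tx0", "inputs": [], "outputs": [("alice", 100)]}]
spend   = [{"id": "tx1", "inputs": [("tx0", 0)], "outputs": [("bob", 60), ("alice", 40)]}]
apply_block(utxo, genesis)
apply_block(utxo, spend)
print(sorted(utxo))  # [('tx1', 0), ('tx1', 1)]
```

Applying each block as it is verified keeps the set current for free, so both transaction validation and later wallet restoration can read it directly.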

Anyway, I am probably wrong about a lot of things, but I am not here to be right, I am here to learn how Cardano exactly works, and I am really happy if somebody points out where I am wrong.

1 Like

A UTxO index makes absolute sense, you’re right about that without any doubt. It’s just that restoration takes so much time at the moment that I kinda assumed there isn’t one for now.

But please be sure that when I say something like “if there’s a utxo index, then it’s slow by nature”, I don’t at all mean “you’re wrong, because then Y would follow”; I just like to brainstorm best educated guesses, meaning something like “Your point is valid, and if we assume X, then Y would be possible, and it opens room for thought” =)

Unfortunately I’m also not fluent at all in Haskell, and the SL codebase is like a maze to me )

Thanks to _ilap!
I was trying to run Daedalus on Ubuntu (Oracle VM).
Straightforward installation, but starting up is another story.
Daedalus starts up fine but crashes the VM. I have increased the memory to 6GB and gave the VM 40GB to use, but am still unable to run Daedalus fully.
I have it running on windows 10 without issue.

It seems that the HD WALLETS doc and the other one are inconsistent, and probably the latter reflects the current codebase.

The HD WALLET doc says that “the utxo is traversed to find all addresses with positive balance corresponding to this root key and add them to storage along with their parents (wallets)”,
while the other states:
“we can iterate over the whole blockchain and try to decrypt all met addresses to determine which of them belong to our wallet.”

So, I think you were right and I was wrong on both (address traversal and O(m) as the payload can be easily checked).

1 Like

Pls, give us some more details.

I guess this version is not the same as the one in the AUR? The bin for download here is a newer version than the one on the AUR? Any idea when the AUR will be updated? Thanks

What’s in the AUR:

Cryptocurrency wallet

You mean the Arch Linux AUR? That PKGBUILD builds from GitHub releases; it serves a different purpose.

Just download this bin; it will install Nix plus deps into ~/.daedalus/ and save the data in ~/.local/share/Daedalus/

I have launched the resulting Daedalus wallet. The basic GUI launches and successfully syncs to 100%, but there are no further window interactions, i.e. “create new wallet”, etc. The primary menu (“About” etc.) is responsive, yet no wallet interaction steps appear to be available.

This is on Ubuntu 16.04.4

IOHK is waiting for this kind of feedback before releasing the first version. Did you send them all this info?

I did not. Could you, per chance, refer me to the best location to send it?

Sure, scroll down to the bottom of the page until “Click here for Email Support” :slight_smile: