Recently I analyzed my node build process a bit, and here are a couple of conclusions I drew from it. I thought I’d share them here, so you can draw your own.
Libsodium fork
First of all, I see some guides (e.g. Coincashew) saying to add the following lines to the cabal.project.local file:
package cardano-crypto-praos
flags: -external-libsodium-vrf
Cardano uses a custom fork of libsodium which exposes some internal functions and adds some new ones; that’s the library we all have to install. If I understood correctly, adding the lines above removes the need for that custom install (the minus disables the flag) and makes the build use a standard libsodium installation instead. The cardano-crypto-praos package also contains internal C code with the custom functions (apparently GHC can compile and link C code too…), and that code will then be used where needed. The documentation on cardano-node-wiki/docs/getting-started/install.md at main · input-output-hk/cardano-node-wiki · GitHub clearly states that this internal C code should ONLY be used for development purposes, so the developers don’t have to deal with custom libsodium installations. I guess that internal C code contains a lot of copy/paste of those exposed internal functions mentioned above.
So for compiling a node, you SHOULD NOT add the lines above (that way the custom code from the libsodium fork will be used and NOT the internal C code)!
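For reference, installing the custom fork typically looks something like the sketch below. The commit to check out is deliberately left open here — take the exact one referenced in the node release notes or the install docs — and the paths assume the default /usr/local prefix:

git clone https://github.com/input-output-hk/libsodium
cd libsodium
git checkout <commit-from-the-node-release-notes>   # use the exact commit the docs specify
./autogen.sh
./configure
make
sudo make install

export LD_LIBRARY_PATH="/usr/local/lib:$LD_LIBRARY_PATH"
export PKG_CONFIG_PATH="/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH"

The two exports are there so pkg-config can find the library at build time and the node binary can find it at run time.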
Optimization
Some guides (e.g. Coincashew again) say to execute the following:
cabal configure -O0 -w ghc-8.10.7
This will add two lines to the cabal.project.local file:
with-compiler: ghc-8.10.7
optimization: 0
The first is ok, but the second one disables optimization… Why would you want to do that? Without optimization the build compiles faster and probably gives smaller binaries, but I’d rather spend a little more time compiling and get a slightly bigger binary if that means my node runs more optimized, read: faster. So better change the -O0 to -O2 (which gives the most optimization).
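So the configure command would look like this instead (the same command as above, just with the higher optimization level; it should write optimization: 2 into cabal.project.local):

cabal configure -O2 -w ghc-8.10.7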
After doing some digging, however, it seems that this setting only applies to the code in the current package, while most of the code (that matters) is located in other packages! So in order to truly enable optimization everywhere, you’ll have to add the following, which enables it for all packages:
package *
optimization: 2
LLVM
I also compile the node with the LLVM backend. It takes longer to compile and gives bigger binaries (about twice as big, I think), but the generated code should be better optimized! The same applies as with the optimization above: you’ll need to enable it for the package itself and for all other packages if you want everything compiled with LLVM.
My complete cabal.project.local file looks as follows:
ignore-project: False
with-compiler: ghc-8.10.7
optimization: 2
program-options
ghc-options: -fllvm
package *
optimization: 2
ghc-options: -fllvm
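With that file in place, the actual build is the same as in the guides (these are the commands as I would run them; adjust the targets to taste):

cabal update
cabal build cardano-node cardano-cli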
When compiling with GHC 8.10.7, you’ll need to install an LLVM version between 9 and 12 on the machine where you compile your nodes. I’ve seen somewhere (not sure where anymore) that 13 would also work. I use LLVM version 12.0.0 myself, so I can’t confirm that.
Be sure to add the folder that contains the opt and llc binaries to your PATH. Check with opt --version and llc --version that the binaries can be located and that the right versions are used!
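On Ubuntu/Debian that could look something like this (the package name and install path are assumptions for those distributions; other distributions will differ):

sudo apt install llvm-12
export PATH="/usr/lib/llvm-12/bin:$PATH"
opt --version
llc --version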
Remarks
- You don’t need to clear your cabal package store when changing the optimization level or when switching between the native and LLVM backend. Already compiled packages won’t be reused if one of those settings differs; the packages will simply be compiled again and added to the store under a different hash. You can clear the store to save space, however (see the sketch after these remarks).
- I’m certainly no expert on this. I drew these conclusions from what I’ve read in the docs and from some experimenting of my own. Feel free to correct me if I drew the wrong conclusions.
- If (some of) my conclusions seem to be correct, the guides should be updated (especially concerning the first topic). Feel free to initiate this process yourself…
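For that store-clearing remark: by default the store lives under ~/.cabal/store, grouped per compiler version, so removing the GHC 8.10.7 subdirectory should be enough (this path is the default; adjust it if you configured a different store location):

rm -rf ~/.cabal/store/ghc-8.10.7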
