Outdated documentation

I know that documentation is a huge problem in Cardano world!

I really hope that this will improve at some point … somehow. Perhaps someone could take up the task of bringing at least https://developers.cardano.org/ up to date. Perhaps we could consider the reliability of updates when recommending sources. I have the feeling that, for example, https://cardano-community.github.io/guild-operators/ is much more reliable in that regard than https://www.coincashew.com/coins/overview-ada/guide-how-to-build-a-haskell-stakepool-node.

But, please, can people take totally outdated documentation that they don’t intend to maintain anymore offline or at least put a disclaimer at the top … in bold?

I cannot count how often I see help requests, where I have to ask: “Where did you get that?!?” … “Oh, I see, no that’s horribly outdated.”

A good start would be to do that with all guides on how to set up on the legacy testnet. …


On a related note, it seems that IOG wants to consolidate everything under https://book.world.dev.cardano.org/ now, but unfortunately, there’s not much there yet. https://docs.cardano.org/ is still as mixed as it always was, and there are still some things I have only found documented on https://input-output-hk.github.io/cardano-wallet/ (the Byron address format, for example).

If they manage to consolidate, that would be a huge win. Hoping for the best …


IMO, step-by-step instructions (primarily for initial learning) are best maintained on developers.cardano.org, but that needs a time-and-effort investment from SPOs themselves (as it aims to be a community-managed documentation site). While the build/install instructions on there are up to date, the SPO guide has to go through a revamp (WIP here; it’s been terribly slow due to the daunting number of pages and a lack of active community participation until recently). I can only hope the work continues without having to rely on a broken governance incentive system.

docs.cardano.org is managed by IO and, IMO, does not welcome much outside coordination.

Any other website can fall into the same holes (actively managed for a bit, then the main contributors move on and the site becomes stale), besides adding to the confusion for users checking between different sites.

The mentioned Guild-ops site also warns users (on its homepage, though not everyone actively reads it) to already be well versed with the manual commands (via the dev portal) before using scripts that focus on ease of use and the reduction of manual errors.


Also, this related post shows the outline of proposed updated and new material, along with a suggestion of how SPOs, devs, and other writers can participate (by choosing page(s) from the developing outline, notifying others what you’re working on, and then submitting pull requests to the above branch):

For those who haven’t contributed to the Developer Portal before, there’s this introduction, which is also linked in the website footer:

cc: @adatainment


Thank you for bringing this up @COSDpool and @rdlrt. There are a few people working on the revamp, but we’re really still looking for more to join in. There are so many guides all over the place, and they are really good, but it would be smart to “unite” them under one domain.

(if you are reading this and you want to contribute but have no idea where to start or how to deal with a git project with multiple people or any other hesitations, just ping me and I’m happy to help)


I have been thinking about how to write a step-by-step guide to setting up a cardano-node relay. But if I do this, it will follow this design:

  1. Set up a separate “build” computer, or virtual machine, that contains the compiler and dependencies. This could even be just your local PC. Build the software on that machine to produce deb packages.
  2. The deb packages will be configured to install the binaries, libraries, config, and database files in standard filesystem locations: /usr/bin, /usr/lib, /etc, /var/lib.
  3. The apt and dpkg package management tools will be used to install the debs and upgrade them when new versions are produced.
  4. Set up a separate “relay” computer, or virtual machine, for running the cardano-node software. This machine will have only the required software and nothing else. The cardano specific debs will be installed on this machine exactly like any other deb package that is part of the system.
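The build half of this plan (steps 1-3) could be sketched roughly like this; the package name, version, maintainer, and file locations are illustrative assumptions on my part, not the actual cardano-node packaging:

```shell
# Hypothetical sketch: stage a minimal .deb tree on the build machine.
# Package name, version, maintainer, and paths are illustrative only.
mkdir -p pkg/cardano-node/DEBIAN pkg/cardano-node/usr/bin
cat > pkg/cardano-node/DEBIAN/control <<'EOF'
Package: cardano-node
Version: 1.35.0-local1
Architecture: amd64
Maintainer: you@example.org
Description: cardano-node binaries (locally built)
EOF
# Copy the locally built binaries into the standard locations, e.g.:
#   cp dist/cardano-node dist/cardano-cli pkg/cardano-node/usr/bin/
# Build the package (guarded so the sketch also runs where dpkg is absent):
if command -v dpkg-deb >/dev/null; then
  dpkg-deb --build pkg/cardano-node cardano-node_1.35.0-local1_amd64.deb
fi
```

The resulting .deb can then be installed with apt or dpkg on the relay machine like any other package.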

To my mind, this is the smart, secure way to set things up. The relay machine should only be running the absolute essentials and nothing extra. It shouldn’t have different compiler environments and development libraries installed on it.

Unfortunately, the above seems to conflict with how the current guides advise people to set things up. Consequently, I have only put up instructions on how to build cardano-node as a deb package on my own GitHub repository. But even this may cause problems, because it will install the software in different locations from what the other guides say.

Also, I am not a fan of scripts that automatically check for updates, download software, compile it, and install new binary and library files. I also don’t like creating dependencies on lots of environment variables that other scripts then assume have been configured. For example, I try not to assume that the “CARDANO_NODE_SOCKET_PATH” environment variable is set and prefer to define it on the command line when calling cardano-cli. I certainly don’t want to create extra environment variables like “CNODE_HOME” and then make assumptions about the existence of the directory structure below it.
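As an example of that preference, a per-invocation variable scopes the setting to a single command instead of the whole shell session (the socket path here is just a placeholder):

```shell
# The variable is visible only inside the one child process it prefixes:
CARDANO_NODE_SOCKET_PATH=/run/cardano/node.socket \
  sh -c 'echo "inside child: $CARDANO_NODE_SOCKET_PATH"'

# ...and is not defined in the surrounding shell afterwards:
echo "in this shell: ${CARDANO_NODE_SOCKET_PATH:-unset}"
```

The same prefix form works in front of a real `cardano-cli query tip` call, so no script has to rely on the variable having been exported elsewhere.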

Obviously the above is just one point of view based on a personal preference. However, to avoid confusion, the guides need to follow some standard.

Maybe the community needs to decide about the following first:

  • What filesystem layout should be adopted for the guides.
  • Whether the relay / block producer machines that actually run the software should also be the compilation machines, with all the extra dependencies and customised PATH and LD_LIBRARY_PATH variables.
  • What standard environment variables should be expected to exist (preferably none).

Thanks for your availability and readiness to participate. There is already a plan in progress in the issue discussion here; it would be nice for tutorial guides not to make decisions for users, but instead to give bare-bones guidance on the “how”.

Some thoughts regarding the last response:

First of all, I think you’re referring to CoinCashew with “current guides” - the one discussed above is the Developer Portal in particular. It’s probably more productive to talk about what’s on the current pages for the revamp at the dev portal and to improve things there, rather than to talk about practices employed elsewhere.

  1. Set up a separate “build” computer, or virtual machine, that contains the compiler and dependencies. This could even be just your local pc. Build the software on that machine to produce deb packages.

That could be “a” practice, listed in an examples section for good practices, but it cannot be forced on all users. Not everyone is happy to make the binary available to all users on the OS (even if it’s managed by a single operator) and to pollute system folders with node config/DBs, which - across major releases with ledger-state schema changes or chain-DB changes - might often require copying between nodes of the same network (e.g., the upcoming LMDB might require config changes and a resync from genesis) to avoid longer downtimes. Neither does it suit orchestration in development environments using multiple node binaries, which might require a different version/commit per node. When we’re talking about good practices, it should be clear that good sysops/secops is a non-negotiable prerequisite.

The page for installing cardano-node gives both options: installing from source and downloading pre-compiled binaries from Hydra (which is linked to CI/CD against commits from the source GitHub repository). It does not advise or recommend that folks compile it themselves (or not).

Again, this is irrelevant to the guide on the Developer Portal. What specific scripts/tools use in their own domain (though even the mentioned tools do not need these as environment variables on the system; they can be specified in config files) is of no concern to tutorial/reference documentation steps, and isn’t currently included either :slight_smile:

The filesystem layout will likely never be the same between users - currently, a folder structure within the “${HOME}/cardano” folder is used (with an open suggestion PR to add a network folder underneath). As regards compilation machines, I think most experienced (key word) operators already have separate machines for this purpose, but you would likely not see them on the forum/SE asking for assistance.
The current guide does NOT require any environment variable; that’s up to the user once he sets up his software lifecycle.


This is the heart of the problem that the majority of users come to this forum for help about.

If I need to install any new piece of software on my Debian/Ubuntu machine, I will always use the apt package management tool to install it. If I were using Red Hat, SUSE, or one of the others, I would use their package management tool. Otherwise, that piece of software becomes an ongoing maintenance nightmare that I have to manage myself: I need to keep a record of every step of the upgrade procedure, carry it out on every machine the software is installed on, and carefully make sure that I don’t neglect any part of the sequence. For EVERY machine.

I do accept your argument though. It does come down to a personal preference between using the standard package management tool versus a remembered sequence of manual steps.

Users who want multiple different versions of cardano-node on the same machine, with different configurations, are more sophisticated. I had assumed that these how-to documents were not directed at them.

Is there a part of the guide that details how to install cardano-node on every relay and block producer machine once they have gone through the building process detailed in the above link? Or will most users repeat all these build steps on each separate machine, including the custom path, library and pkg config environment variables?

My impression is that most stake pool operators have the entire build chain and compilers on each of their relays and block producer machines. I don’t like that from a security perspective, but maybe I am wrong about that, or overly paranoid?


This is the heart of the problem that the majority of users come to this forum for help about.

As mentioned earlier, the ones who have gained enough experience are not visible. The noise only comes from those who are not yet comfortable with operations (especially sysops), which is a prerequisite skillset.

that piece of software becomes an ongoing maintenance nightmare that I have to specifically manage myself

You’d do that with packages too (building them, adapting them for new config changes and so on) :slight_smile:

That’s up to individuals - perhaps it can be covered in a best-practices section. The site shows both options and does not make a recommendation one way or the other. But I think that would be a good addition to the good-practices material.


But that is really my point. I want to do this once when there is a new version and then be able to roll that package out to every one of my separate machines.

I am really intrigued as to how most people manage their cardano software on multiple machines. I have 5 different relays and a block producer. The only thing I need to do to install the cardano software on each of these machines is:

apt install cardano-node

The only thing I need to do to upgrade all of the software on each machine is:

apt dist-upgrade

I build the deb once and install it everywhere I want. The package manager takes care of installing all the dependencies too (which don’t include the entire build toolchain and compilers).
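The build-once, install-everywhere flow I’m describing could be scripted roughly like this; the hostnames and the package filename are assumptions, and the remote commands are echoed rather than executed, so the sketch is a harmless dry run:

```shell
# Hypothetical sketch: push one locally built .deb to every node and
# install it there. Hostnames and the filename are assumptions; drop the
# `echo`s to turn the dry run into the real scp/ssh invocations.
DEB=cardano-node_1.35.0-local1_amd64.deb
HOSTS="relay1 relay2 relay3 relay4 relay5 bp1"
for host in $HOSTS; do
  echo scp "$DEB" "$host:/tmp/$DEB"
  echo ssh "$host" sudo apt install -y "/tmp/$DEB"
done
```

With a small local apt repository instead of scp, the per-host step collapses to the plain `apt install cardano-node` shown above.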

I don’t know why people in Cardano land don’t like their Linux package managers. It is the tool they use to manage every other piece of software on their machines. After all, there is a reason that every popular Linux distribution has a package manager. To my mind, “apt” was the killer app of Linux distributions and is the reason that Debian and its derivatives like Ubuntu dominated.

Anyway, I will quit the package manager evangelist role. I don’t really want to create conflicting documentation if it will only confuse more people. I do agree that there should be one standard setup for novices to follow and I think it should put the software in a consistent layout.


I am sure there are a few different options (though noting that a bigger % of SPOs on mainnet likely run a 2-relay + 1-BP combination). I think those using a lot of nodes would have stitched up solutions of sorts:

  • they’d want to use ansible or equivalent orchestration
  • some who are using nix/docker will have a separate process
  • those doing it manually would test the update on one machine and hopefully run it for a few days before updating the others, even on mainnet (as is often the case, updates tend to land alongside breaking changes)
  • For a lot of SPOs (myself included), the deployment can differ between relays too (different CPU counts, P2P vs non-P2P, mempool settings on relays, subscription to topology updater, use of leaderlogs, versions, use of offline workflows for cold keys on the BP, different port configs between instances if using multiple hosts behind the same router, restart timings for systemd, presence of monitoring which needs different tracers set, etc.). Managing these via packaging/orchestration is a bit more tricky, as one needs to start considering options (when/where to overwrite, what should/shouldn’t be touched, when/whether to restart, etc.). I’d think that for such cases, updating the node using binaries from Hydra is an almost trivial, no-brainer part of operations. Often, across upgrades, the time-consuming part is the DB/ledger rebuild, where folks might provision the DB from one upgraded node beforehand and go through node restarts accordingly

For ledger-state changes between breaking versions, you might also need to orchestrate copying the db folder (to avoid 4-5 hours of rebuilding it).
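Such a copy might be orchestrated along these lines; the host name and database path are assumptions, and the commands are echoed, so this is only a dry-run sketch:

```shell
# Hypothetical sketch: seed a freshly upgraded node's chain DB from an
# already-synced node on the same network, skipping hours of replay.
# Host and path are assumptions; remove the `echo`s to run it for real.
SRC_HOST=synced-relay
DB_DIR=/var/lib/cardano/db   # adjust to your actual layout
echo ssh "$SRC_HOST" sudo systemctl stop cardano-node   # consistent copy
echo rsync -a --delete "$SRC_HOST:$DB_DIR/" "$DB_DIR/"
echo ssh "$SRC_HOST" sudo systemctl start cardano-node
```

Stopping the source node first matters because copying the db folder while the node is writing to it can produce an inconsistent snapshot.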

I mean, I am absolutely not saying package management has no use case - it works well for some. I just meant it can’t be enforced on all usage modes. Regardless, it’s always good to have options on the table, especially as currently there isn’t one.

In spite of mentions of the prerequisite skillset, there are still a lot of folks learning on the go without participating enough in test networks. But if I look at the past 3 years, a lot of folks have learnt a lot, and their queries stopped after a few initial months, while a few have taken the passive role of “if it works, don’t touch it”. Hopefully, documenting good practices will encourage folks to adopt them accordingly.


I just did a search to see how many software packages Debian manages with its package manager and this is a quote:

" For Linux distributions such as Debian GNU/Linux, there exist more than 60.000 different software packages. All of them have a specific role. In this article we explain how does the package management reliably manage this huge number of software packages during an installation, an update, or a removal in order to keep your system working and entirely stable."

Whichever Linux distribution you choose, they all have package managers for installing and upgrading software. I have different mail servers, different web servers and other software running on several different machines all with different configurations and some with an amazing level of customised settings. I upgrade every single one with the package manager and it would do my head in if I had to do everything manually. I don’t understand why Cardano land wants to reinvent the wheel around this aspect of software management.

I guess we will just have to agree to disagree about using a package manager, especially for novice users which the guides are directed at. I won’t labour the point anymore. Sorry for the rant.