Specs on Block Producer Node (From Minimal to Future Proofing)

As an aspiring pool operator late to the Cardano staking “game”, having started this journey from scratch about a week ago, I see a lot of varying opinions on the desired specs of a Cardano block-producing node.

Most of us are aware of the minimum, but vague, specs from Cardano:

  • 4 GB of RAM
  • 24 GB of hard disk space
  • a good network connection and about 1 GB of bandwidth per hour
  • a public IPv4 address,

but that’s very general, minimal, and therefore not optimal.
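As a starting point, those minimums can be sanity-checked on a candidate Linux box with a small script. This is just a sketch: the thresholds are the figures quoted above and will need raising as the chain grows.

```shell
#!/usr/bin/env bash
# Sketch: sanity-check a candidate host against the quoted minimums.
# The thresholds are just the figures listed above; adjust as they evolve.
MIN_RAM_GB=4
MIN_DISK_GB=24

check() {  # check <label> <actual_gb> <minimum_gb>
  if [ "$2" -ge "$3" ]; then
    echo "OK:  $1 ${2}G (minimum ${3}G)"
  else
    echo "LOW: $1 ${2}G (minimum ${3}G)"
  fi
}

ram_gb=$(awk '/MemTotal/ {printf "%d", $2/1024/1024}' /proc/meminfo)
disk_gb=$(df -BG --output=avail / | tail -1 | tr -dc '0-9')

check "RAM " "$ram_gb" "$MIN_RAM_GB"
check "disk" "$disk_gb" "$MIN_DISK_GB"
```

Bandwidth and public-IP checks are left out since they depend on your ISP and router setup.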

Lots of variables have to be taken into account.

For those stake pool operators with experience on the ITN, and for current mainnet SPOs: what would your standards be to meet demand right now, and in the future, when the Cardano network gets as much traffic as, say, Ethereum?



FIOS or regular internet?
Dedicated line or not?
Distance of relays to BP node?
Relay nodes at home along with the BP, or relay nodes in the cloud and the BP at home, or all three on a private, secure-as-possible cloud service, possibly with some type of insurance, if there is such a thing?
Do you think geographical location is an important factor at this early stage of business development: e.g., avoiding areas prone to natural disasters (tornadoes, hurricanes, seismic activity, etc.) and choosing areas with strong internet infrastructure, FIOS, etc.?

Server specs for average node:
Do you need an actual ‘server-room’ server and rack, or is a high-end desktop enough?
How many cores: 4, 8, or more?
How much memory: 8, 16 GB, or more, and what type: DDR3, DDR4, ECC?
What kind of drive: traditional HDD, SATA SSD, M.2 SSD, etc.? Are they hot-swappable? Is RAID involved?
Do you use redundancy for your BP node other than a UPS, like a second backup BP node with a UPS on both?
Is one person enough to run the server, or do you need more?
What mechanism do you use to switch to your backup node, if you have one?
Do you use a PXE server to copy a newly compiled or installed BP image onto an identical backup BP server?

The node software:

Building from source code or installing via Nix?

Taking regular snapshots of all directories for back up in case something goes wrong?

Using software to safeguard your BP, like Wireshark, Nmap, or firewalls, physical or otherwise?
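On the firewall side, a common baseline on Ubuntu is ufw: deny everything inbound, allow SSH, and open the node port only to your relays. A sketch, where the relay IP 203.0.113.10 and port 3001 are placeholders for your own values:

```shell
# Sketch: lock down a block producer with ufw (Ubuntu).
# 203.0.113.10 and port 3001 are placeholders for your relay and node port.
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh                                          # keep your own way in
sudo ufw allow from 203.0.113.10 to any port 3001 proto tcp # relay traffic only
sudo ufw enable
sudo ufw status verbose
```

Repeat the `allow from` rule per relay; the producer should never accept connections from arbitrary hosts.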

Virtual node, container node, or just one actual OS running the BP node?

Mac, Windows, or Linux? (I’m using Linux; it’s what I’m most comfortable with and used to.)

Future Proofing:

If the Cardano network has as much traffic as Ethereum, what aspects of your SPO operations would you change down the line?

I know this is a lengthy post. I’m just trying to get a sense of the approaches taken by ADA SPOs, so I can figure out what kind of build to do. I’m used to building my own computers, but building out a block producer with relays, as SPOs do, seems at least in theory like taking things to a whole new level, something I haven’t done before and don’t really know how to approach.

Any insights are appreciated, no matter how small. :slight_smile:

Hi JT,

I suggest getting your hands dirty on testnet so you can get a feel for it yourself. We have an SPO running cardano-node on an Android phone, so performance is not really an issue.

Bare metal is fine; a cloud VPS is fine. I recommend 1-2 vCPU / 4 GiB RAM with 4 GiB swap per node, with good network connectivity (dedicated if running from home). We will likely need to scale resources with Goguen, but certain services will be opt-in, so you will want to plan accordingly.

I would recommend running your relays on separate machines from your block producer, and it does make sense to have a redundant block producer on standby (running as a relay) that you can swap in for updates - but DO NOT run more than one producer simultaneously, as that will fork the chain and will be punished in the future.
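A note on what “swap” means in practice: the same cardano-node binary acts as a relay or a producer depending on whether the operational keys are passed at startup. A sketch, where all file paths and the port are placeholders:

```shell
# Standby machine running as a plain relay: no operational keys passed.
cardano-node run \
  --topology topology.json \
  --database-path db \
  --socket-path node.socket \
  --config config.json \
  --host-addr 0.0.0.0 --port 3001

# Failover: stop the old producer FIRST, then restart the standby with the
# keys. Two live producers at once would fork the chain, as noted above.
cardano-node run \
  --topology topology.json \
  --database-path db \
  --socket-path node.socket \
  --config config.json \
  --host-addr 0.0.0.0 --port 3001 \
  --shelley-kes-key kes.skey \
  --shelley-vrf-key vrf.skey \
  --shelley-operational-certificate node.cert
```

Keeping the standby fully synced as a relay means failover is just a stop, a restart with keys, and a topology tweak.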

You will also want to create all keys offline, and build and sign transactions on an offline machine (then submit the signed transactions from the hot node).
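That offline flow looks roughly like this with cardano-cli. This is only a sketch: the tx-in, address, fee, and TTL values are placeholders, and fee calculation is elided.

```shell
# OFFLINE machine: build the transaction body, then sign it.
# All values below are placeholders for illustration.
cardano-cli transaction build-raw \
  --tx-in "<txid>#0" \
  --tx-out "$(cat payment.addr)+1000000" \
  --fee 200000 \
  --invalid-hereafter 50000000 \
  --out-file tx.raw

cardano-cli transaction sign \
  --tx-body-file tx.raw \
  --signing-key-file payment.skey \
  --mainnet \
  --out-file tx.signed

# Carry only tx.signed (e.g. via USB) to the ONLINE hot node and submit:
cardano-cli transaction submit --tx-file tx.signed --mainnet
```

The point of the split is that payment.skey and the pool cold keys never touch a networked machine.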

Yes, you can do this by yourself.

Your friend, FROG


Awesome! Thank you so much! I already built a node from source code on my laptop - 2 cores/4 threads, 4 GB RAM, and a 120 GB SSD. I’m just using standard internet/non-FIOS; speeds are great, though.
I didn’t bother going on the testnet because someone else - I believe it was on YouTube - told me the testnet had closed down when mainnet launched, so I didn’t look into it.

Here is a link to the CF official stake pool school course documentation which will get you rocking on testnet:



Thanks again, FROG. Yes, the link you provided is the source I used to build my laptop node in the first place. I stopped building when I got to signing transactions and installing relay nodes, because I didn’t want to muck things up on mainnet, believing the testnet was closed.

Correction: The link you provided is for the testnet, which is great! The actual guide I used to build out my laptop has similar steps but came from this source, which did not mention testnet: https://www.coincashew.com/coins/overview-ada/guide-how-to-build-a-haskell-stakepool-node

My pleasure, JT.

I look forward to seeing you on mainnet when you’re ready!



I was never able to get the older testnet build for ‘friends and family’ using the Nix package manager on Ubuntu to work (https://testnets.cardano.org/en/cardano/get-started/installing-and-running-the-cardano-node/building-the-node-using-nix/).
I got the Nix/Ubuntu combination (Nix package manager on Ubuntu) to work a few days ago by accident, because I used the mainnet install instructions and files while rushing through the install, thinking I was installing the testnet version. It took between 2 and 3 hours to build the core node on my laptop using Nix, but it works nicely: just launch the node via the mainnet script, with no copy-and-paste like in the ‘stake pool course’.

Don’t get me wrong: I very much enjoyed and appreciated the stake pool course, because it’s where I really learned how the stake pool side of Cardano works. I realize the value of that course is teaching you the ins and outs of administering an SPO, but at the end there is no mention of the choices for how to set up core and relay nodes, i.e., the configuration of your SPO: what OS for the relays, what OS for the core, and where the relays and the core should be located. I don’t mind learning this on my own through experimentation, but there is not much mention anymore of the Nix/Ubuntu install. Is that still supported?
I saw from an older article how IOHK uses Nix for testnet deployment: https://iohk.io/en/blog/posts/2019/02/06/how-we-use-nix-at-iohk/

What caught my attention is this excerpt: “The rollback features of NixOS and NixOps have already proved invaluable several times in our staging deployments: the whole network can be redeployed with a single command, or just specific nodes.” That sounds very useful, but the GitHub repo shows the last activity for iohk-ops (https://github.com/input-output-hk/iohk-ops) was in February of this year, and the Nix/Ubuntu install is not mentioned as much as before. Does that mean IOHK is stepping away from this way of setting up your SPO relays and node?

The last question: if you do a Nix/Ubuntu install, do you let the apt side of Ubuntu go entirely and let Nix handle all package management, updates, and upgrades from that point on? And if you can’t find a package on Nix that you’re used to using on Ubuntu, can you still install it via apt-get without breaking anything?

Anyway, I experimented a little:

If you install a new package under the Nix package manager, e.g., nix-env -i package1, both nix-shell and a regular bash shell will see the package and be able to launch it from the CLI.

However, if you install a new package using apt or apt-get, e.g., sudo apt install package2, only the bash shell can launch it; it can’t be launched from nix-shell. What’s worse, if you then remove the package (sudo apt remove package2) and have nix-env install the same package2, both nix-shell and bash will fail to find it: it’s now located in the Nix store (/nix/path/to/package2.symlink.to.package2.binary), but the shell still searches the path from the earlier apt install, in this case /usr/local/bin, which no longer contains the binary for package2. Even creating a symbolic link to the package2 link in the Nix store doesn’t work under nix-shell or bash.

If you create the symbolic link as sudo, it works for nix-shell, but not for bash, because bash is still looking in the same location, /usr/local/bin. I believe it works under nix-shell because all the package links in the Nix store are owned by root with across-the-board permissions: chmod 777.

Which begs the question: if all the packages are owned by root, doesn’t it make things somewhat unsafe that you can execute root-owned files as a regular user without sudo? The Nix store links have root:root ownership and 777 permissions, and the binaries they link to generally have root:root ownership and 555 (read/execute) permissions.
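Most of the “stale path” effect above is plain PATH ordering plus the shell’s command-location cache, and it can be reproduced without Nix or apt at all. A sketch using two throwaway stub directories (the /tmp dirs stand in for /usr/local/bin and the Nix profile bin; `pkg` is a made-up name):

```shell
# Sketch: reproduce the "stale path" effect with two stub install dirs.
aptdir=$(mktemp -d)   # pretend apt install location
nixdir=$(mktemp -d)   # pretend Nix profile location
PATH="$nixdir:$aptdir:$PATH"

printf '#!/bin/sh\necho apt-version\n' > "$aptdir/pkg"
chmod +x "$aptdir/pkg"
pkg                    # prints "apt-version": only the apt copy exists

rm "$aptdir/pkg"       # stands in for: sudo apt remove pkg
printf '#!/bin/sh\necho nix-version\n' > "$nixdir/pkg"   # nix-env -i pkg
chmod +x "$nixdir/pkg"

pkg || true            # the shell may still try the remembered, deleted path
hash -r                # forget cached command locations
pkg                    # prints "nix-version": PATH is searched afresh
```

So after juggling packages between managers, `hash -r` (or a fresh shell) is often all that’s needed for bash to find the surviving copy.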

Anyway, if the Nix/Ubuntu install is still a viable installation method that is in some ways superior to building from source on Ubuntu in terms of updates, upgrades, rollback, deployment integrity, and ease of use, I’d like to look further into it.

I know I’m rambling on, but this post is here for others who are curious, and also for myself, so I can someday make sense of the pros and cons of the various installation methods and SPO node configurations for various setups.


Anyway, I answered my own question for the time being in terms of what setup I’ll launch. When you install via either nix-env or apt/apt-get, either shell (nix or bash) can generally ‘see’ the installed package; it’s just a matter of whether you can execute that package/app from within that particular shell.
I should be focusing on just finishing the stake pool course. I still have metadata submission to do, and whatever else follows that.
Once I finish that, I might do another run at the testnet from scratch for more practice, then deploy the simplest mainnet setup to manage and keep running without downtime: likely Ubuntu Server with a few extra packages thrown in for X11 support, so as not to waste valuable cloud resources on eye candy. If I run into problems, I can come here, because an Ubuntu setup is what most people seem to be using, whether at home or in the cloud.
Once I get my relays and core deployed on the cloud on mainnet, and I’m used to maintaining, upgrading, and administering them, I can experiment with different types of deployments at home. It’s too early to know which setup is best, and parameters will change over time. Chasing some less-used, cutting-edge setup when I don’t even know what I’m doing, just because I want my setup optimized now for uptime and ease of maintenance, doesn’t make much sense. I’ll take a simple path for now, master and deploy that, then practice different deployments on the side on testnet if time permits. First, I should focus on making my first SPO succeed on mainnet before worrying about the best deployment option.