Running a Stake Pool

Is there a guide on how to create and run a stake pool? What level of sophistication is needed to do this?

I would say decent coding skills are a minimum, plus strong security practices, DDoS protection, and a reasonably fair distribution of your nodes across the globe, etc…
Have a look at @MegaWind’s post below; in my opinion he has a good chance of being chosen by IOHK, because this is professional-level stuff.


This sounds quite complex. I thought running a pool required one server plus a backup. You are talking about nodes being distributed all over the world. Is that a reference to stakers?

No, that’s a reference to the nodes on which you run the stake delegated by the stakers (even though, technically, the coins never leave their wallets; they only assign you their rights).
In case of catastrophic failures, you need several backups spread across different places to ensure the sustainability of your pool. :slight_smile:
I read/heard from Charles somewhere that this was a prerequisite for IOHK.

I hope those 100 pools will have the same quality and uptime, so we don’t end up with people flocking to official IOHK servers above and beyond the 1/k threshold. The mockups show some sort of quality metric (I guess based on server metrics like uptime), so I know they are thinking about it.

It will be curious to see what Lars Brunjes comes up with to motivate 100% pool uptime within epochs. He mentioned that there should be some probabilistic metric in place. I think the best way would be to introduce a system of checkpoints uniformly distributed across the epoch timeline, perhaps using the same source of entropy as for stake selection.
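Just to make the checkpoint idea concrete, here is a speculative sketch (all names, slot counts, and checkpoint numbers below are my own illustrative assumptions, not anything IOHK has specified): derive the checkpoint slots deterministically from a shared entropy seed, so every participant computes the same schedule, then score a pool by the fraction of checkpoints at which it was observed online.

```python
import hashlib
import random

EPOCH_SLOTS = 21600   # assumed slots per epoch (illustrative only)
CHECKPOINTS = 100     # assumed number of uptime probes per epoch

def checkpoint_slots(epoch_seed: bytes, n: int = CHECKPOINTS) -> list[int]:
    """Derive n pseudo-random checkpoint slots from a shared entropy seed.

    Because the schedule is a pure function of the seed, anyone holding
    the seed can recompute the same checkpoints independently.
    """
    seed = int.from_bytes(hashlib.sha256(epoch_seed).digest(), "big")
    rng = random.Random(seed)
    return sorted(rng.sample(range(EPOCH_SLOTS), n))

def uptime_score(online_slots: set[int], epoch_seed: bytes) -> float:
    """Fraction of checkpoints at which the pool responded (0.0 to 1.0)."""
    points = checkpoint_slots(epoch_seed)
    return sum(1 for s in points if s in online_slots) / len(points)

# Example: a pool that went offline for the last 10% of the epoch
# scores roughly in proportion to the time it was actually up.
seed = b"epoch-42-entropy"
online = set(range(0, int(EPOCH_SLOTS * 0.9)))
print(round(uptime_score(online, seed), 2))
```

Since the checkpoints are unpredictable before the epoch's entropy is revealed, a pool can't game the metric by being online only at known probe times, which seems to be the point of tying it to the stake-selection entropy.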

Can’t wait for this to take off. Cardano is shaping up to be a true winner in the space. I have no doubt it will overtake both Ethereum and Bitcoin in both functionality and market size.


Personally I don’t fear the 100% uptime requirement. First, once you have invested the infrastructure and time to get everything up and running, it’s probably more work to interrupt it somewhere in the last part of an epoch than to just keep your uptime going; otherwise you have to resync before the next epoch starts.
And even if someone decides to save money on virtual CPU cycles because he runs his node on AWS, it should be pretty easy to remember this behaviour in future epochs.

For sure these nodes have to run on “enterprise grade” hardware and not on the “old notebook”. But we also know that the mainnet (SL) should remain as simple and straightforward as possible.
Running nodes for sidechains could happen on the same systems, but who says it can’t be a wide variety of different systems fulfilling those chains’ requirements?


What I guess is the more useful question (for me anyway): what requirements and challenges does running a staking pool pose that aren’t already captured by running distributed systems at high availability?

100% (or at least 99.99%) uptime is not a scary thing for any systems engineer worth his stones in 2018… but I’m curious what nuances specific to running the SL would make things more challenging here.