I’m David, creator of Funky Penguin’s Geek Cookbook - I’ve previously run a mining pool for the infamously friendly (yet not so profitable!) TurtleCoin, and AFAIK I was the first (and last?) one brave (stupid?) enough to do it with Kubernetes!
| | |
| --- | --- |
| Variable fee | 3.141% (pi) |
| Pool ID / hash | 704250c3f9c01ccaba43aa137c4185befa811cea0693fd736ee04e8e |
I’m in New Zealand, but most of my clients are in the US, and the pool infrastructure is in San Francisco (DigitalOcean’s SFO2 datacenter).
Pool Infrastructure / Redundancy
The pool runs within a managed Kubernetes cluster provided by DigitalOcean. The pods are resourced at 4GB RAM and 2 vCPUs each.
Here’s a summary of the architecture, as currently described on the website (I’ll flesh this out with the security and CI/CD elements soon):
We run two relays in a StatefulSet. Relays can take a long time to restart, and without a relay the block producer can’t produce, so we want to ensure we always have at least one relay available. StatefulSets support rolling updates, meaning any change to the config or container image applied by CI/CD is rolled out to one of the two relays, and only when that relay is fully synced again is the change applied to the other relay.
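To make the rolling-update behaviour concrete, here’s a minimal sketch of what such a StatefulSet could look like. The names, image, and labels are illustrative assumptions, not the pool’s actual manifest:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: relay                  # illustrative name
spec:
  serviceName: relay           # headless service backing the StatefulSet
  replicas: 2                  # two relays, so one is always up during updates
  updateStrategy:
    type: RollingUpdate        # update one pod at a time
  selector:
    matchLabels:
      app: relay
  template:
    metadata:
      labels:
        app: relay
    spec:
      containers:
        - name: relay
          image: example/cardano-node:latest   # placeholder image name
          resources:
            requests:
              memory: 4Gi
              cpu: "2"
```

With `RollingUpdate`, Kubernetes waits for each updated pod to report Ready before touching the next one — which is exactly why the readiness checks described below matter.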
Similarly, we run our block producer as a StatefulSet. The StatefulSet isn’t strictly necessary, since we only have a single block producer, but it keeps the helm chart simpler to use StatefulSets for both the producer and the relays. If there were ever a way to run multiple producers (I understand that’s forbidden ATM), then switching to an HA design would simply be a matter of updating `replicas: 2` and pushing a helm update.
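In practice, that would be a one-line values change. The key name below is an assumption about the chart’s layout, just to illustrate the idea:

```yaml
# values.yaml (hypothetical key name)
producer:
  replicas: 2
# then roll it out with something like:
#   helm upgrade pool ./chart -f values.yaml
```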
In Kubernetes’ microservices architectures, pods/containers rarely talk directly to other pods/containers. Inter-microservice communication is facilitated by services. A service is a simple round-robin load balancer, allowing you to scale up various components without having to alter other components. The relay nodes communicate with the producer node via its service.
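A sketch of the producer’s service might look like this — the name, labels, and port are illustrative assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: producer            # relays dial this stable name, never a pod IP
spec:
  selector:
    app: producer           # matches the producer StatefulSet's pod labels
  ports:
    - port: 3001            # illustrative cardano-node port
      targetPort: 3001
```

Because the relays only know the service name, the producer pod can be rescheduled or replaced without any relay config change.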
Similarly, the relay nodes are only accessed via their own service. The producer node’s topology config only permits it to connect to a single host: the relay service. The service takes care of ensuring that each incoming connection is passed to one of the two pods in the relay StatefulSet, and uses readiness probes to ensure that only fully-synced pods are available as backend endpoints.
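One way to express "only fully-synced pods receive traffic" is a readiness probe on the relay container. The probe command below is a placeholder — the real check would query the node’s sync state however the node exposes it:

```yaml
# Fragment of the relay pod spec (probe command is hypothetical, not the pool's actual check)
containers:
  - name: relay
    image: example/cardano-node:latest
    readinessProbe:
      exec:
        command: ["/scripts/is-synced.sh"]  # hypothetical script: exits 0 only at chain tip
      initialDelaySeconds: 60
      periodSeconds: 30
```

While a pod fails its readiness probe, the service simply drops it from the endpoint list — no traffic reaches a relay that is still catching up.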
The relay service is “exposed” as a LoadBalancer service, meaning it’s assigned a public IP by DigitalOcean. All incoming connections ingress via the load balancer, then hit the service, and are then directed to the applicable pod.
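Sketched out, the externally-facing service is just a second service of type `LoadBalancer` over the same relay pods (again, names and port are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: relay-external
spec:
  type: LoadBalancer        # DigitalOcean provisions a public IP for this
  selector:
    app: relay              # same relay pods the internal service targets
  ports:
    - port: 3001            # illustrative public port
      targetPort: 3001
```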
I’ve spent 20+ years working with technology. I’m a solution architect, with a broad range of experience and skills. I’m an AWS Certified Solution Architect (Professional), a CNCF-Certified Kubernetes Administrator (pending examination!), a helm-chart developer, and a remote worker.
This stuff is my bread and butter - check out Funky Penguin’s Geek Cookbook for more details
Why should users check out your pool?
I’m friendly and responsive. I need your delegation for the pool to be successful, and in return you’ll get… my gratitude!
My long term goal is to polish up the components of the pool (the helm chart, the docker image, the CI/CD pipeline) and present them as an easy-to-follow “recipe” along with the other recipes in the Geek’s Cookbook. Currently we’re at the MVP (minimum viable… pool?) level, but here are the components thus far:
- Helm chart for deploying the pool
- GitHub repo with actions to deploy updates to the pool when changes to the repo are made
I’ll be adding dashboarding and Discord bot-tooling in time, and would be happy to engage with delegators on improvements / ideas!