Introducing KyleK's Stake Pool [KYLEK] - Just a guy and a Kubernetes cluster

Hello Cardano community!

I’m Kyle Kurzhal, a software engineer who is looking to contribute to the community with my skills. I originally came across the Cardano project in 2017 and loved the fact that there was a serious emphasis on research (also, I love the choice of Haskell for the project)! I decided to buy what I could at the time with my limited resources, and loosely monitored progress over time. Now that I’m able, I want to bolster the ecosystem by running a stake pool of my own!

Aside from how cool the project is, one of the driving motivations for my interest in Cardano is my distrust of the U.S. dollar. With my government continuing its reckless spending and diving further into debt, I have great concern for this and future generations; this is simply not sustainable. I truly believe that this project and community provide another avenue for people to have control of their money and means of doing business.

I’m currently planning to begin producing content within the next few weeks in order to introduce more people to cryptocurrency (particularly Cardano). Additionally, if I have the time, I’ve got a few ideas for dapps that I’d like to develop to drive engagement in the ecosystem.

Ticker - KYLEK
Pledge - 15,436.5 ₳
Fixed Fee - 340 ₳
Margin - 5%

Website - https://kylek.online (content coming soon)
YouTube - Kyle Kurzhal

Pool ID - pool1aj697twgk5kt7c6kjtkpwejqqhljvf3hzsmsamswzy42v24cw5h

Location - USA

Pool Infrastructure
The pool is running in a regional Kubernetes cluster in the USA via Google Kubernetes Engine (GKE). The master nodes are managed by GKE, and there are two pool nodes running in separate availability zones (2 vCPUs, 8 GB memory, and a 50 GB SSD each). I’m considering upping some of these resources.
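If it helps to picture the setup, a pod spec along these lines captures the sizing (a rough sketch with hypothetical names, not my exact manifests; the image is the one IOG publishes on Docker Hub):

```yaml
# Rough sketch, not my exact production manifest; names are hypothetical.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cardano-relay
spec:
  serviceName: cardano-relay
  replicas: 1
  selector:
    matchLabels:
      app: cardano-relay
  template:
    metadata:
      labels:
        app: cardano-relay
    spec:
      containers:
        - name: cardano-node
          image: inputoutput/cardano-node   # IOG's image on Docker Hub
          resources:
            requests:
              cpu: "2"          # matches the 2 vCPUs per node
              memory: 8Gi
            limits:
              memory: 8Gi
          volumeMounts:
            - name: db
              mountPath: /data/db   # the node's chain database
  volumeClaimTemplates:
    - metadata:
        name: db
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 50Gi       # matches the 50 GB SSD
```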

Pool Redundancy
Google Kubernetes Engine (GKE) manages several master nodes across separate availability zones in order to ensure that the Cardano nodes are up and running at all times. I’m also running two Kubernetes worker nodes in two separate availability zones. In the case that one becomes unresponsive, Kubernetes will automatically reschedule its pods onto the node that is still running.
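To guarantee that the two pod replicas never land in the same zone, an anti-affinity rule along these lines works (again a sketch; the app label matches the hypothetical one above):

```yaml
# Sketch: forbid two cardano-relay pods from sharing an availability zone.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: cardano-relay   # hypothetical pod label
        topologyKey: topology.kubernetes.io/zone
```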

Something else that I’m considering is running another relay node to make sure that any potential downtime is minimized.

Operator Experience
I’ve spent the last 6-7 years working as a full stack software engineer. At my last company, since we were a small startup, I had to handle both the software development and ops. Because of this, I moved our team to Docker and Kubernetes (in GKE) in order to more easily manage our deployments, guarantee availability, and give us just the right amount of control over our systems. At my current employer, I am working with a system that is in the process of moving production to Kubernetes.

Why should I check out your pool?

  1. You’ll be directly supporting a small stake pool, a family, and Cardano’s decentralization.
  2. You’ll be indirectly supporting my efforts to share content/information with the greater community.
  3. If I’m able to spend time writing dapps, then you’ll also be indirectly supporting the Cardano economy.

Very cool that you are running on k8s clusters. Have you created any documentation or guides on it? I’m learning about Terraform and k8s, so that’s why I ask 🙂

Thanks!

Hi @kylek,
thanks for sharing your stake pool. I’m currently learning Kubernetes and still don’t fully understand it. In Kubernetes, we have two types of nodes: the master node and the worker nodes. In the case of high availability (HA), we have several master nodes, at least 2 master nodes.

I would like to ask some basic questions:

  1. Does a node (master or worker) always refer to one server? In other words, is it possible to run a master node and a worker node on one server/PC, especially in the case of HA?
  2. If the answer to the above is yes, is the minimal hardware requirement for high availability 3 VPSs (2 master nodes and 1 worker node) or 4 VPSs (2 master nodes and 2 worker nodes)?

I haven’t created any guides, though I think it’s pretty straightforward to do it yourself if you understand Kubernetes and are willing to put in the time to figure out how to deploy from the Docker image that IOG provides.

I actually have never attempted to use Terraform, but I wish you the best in figuring it out! The way that I did it was to write my own configs and Helm charts from scratch in order to get everything working.
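To give a feel for what “from scratch” meant in practice, the values file for a hand-rolled chart might look something like this (hypothetical names and an example tag; only the image repository comes straight from Docker Hub):

```yaml
# Hypothetical values.yaml for a hand-rolled cardano-node chart.
image:
  repository: inputoutput/cardano-node  # the image IOG provides
  tag: "1.27.0"                         # example: pin an explicit release
node:
  port: 3001                            # port the relay listens on
  configFiles:                          # shipped to the pod via a ConfigMap
    config: mainnet-config.json
    topology: mainnet-topology.json
persistence:
  size: 50Gi                            # chain database volume
resources:
  requests:
    cpu: "2"
    memory: 8Gi
```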

Yes, a Kubernetes node (whether it’s a master or a worker) is supposed to be a single server. Technically, you could run multiple “nodes” as VMs on a single machine, but that kind of defeats the point of using Kubernetes in the first place. Something you can do, though, is run a single machine as both a master and a worker node (I believe I did something like this in the past, setting up 3 bare-metal servers that each acted as both master and worker).

Also, for a high availability setup, you’ll need at least 3 master nodes. That way, if one goes down (either the machine dies or there is some kind of network outage), the other two can continue to operate your cluster. This is pretty common for most software (ElasticSearch, for example).

Lastly, if you’re trying to achieve a truly highly available system, you’ll need to make sure that your nodes are spread out across different data centers. If something happens to one data center (power outage, network outage, etc.), this allows your system to continue running. If you’re only running 3 master nodes, you’d want all of them in different data centers/availability zones. You’d also want your worker nodes spread out across separate data centers/availability zones (how many worker nodes you run in total will just depend on your personal risk tolerance).
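If you also want the pods themselves to stay spread across those zones, topology spread constraints are one way to express it (a sketch; the label is an assumption):

```yaml
# Sketch: keep matching pods evenly spread across zones (at most 1 apart).
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: cardano-relay   # hypothetical pod label
```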

Here’s some documentation that outlines what an HA setup in Kubernetes looks like (Options for Highly Available topology | Kubernetes). Note: I’ve been using the word “master”, but the documentation refers to these as “control plane” nodes.


Thanks @kylek for your fast reply!
Regarding the deployment, Kubernetes needs a yml file where we can configure an app from a Docker image. Did you create your own Dockerfile or use an image from Docker Hub?
I’ve tested cardano-node from my Linux command line. There is a lot to deal with manually, such as creating the keys and registering the stake pool. How can we do these things in the yml file? Did you create the keys and do the registration separately, outside of the deployment yml?

Cheers!

I used the Docker image that IOG provides on Docker Hub and it has been working just fine for me.

I did create all my keys manually, but I didn’t do it inside the pods. The only thing that I put into the pods were the signed transactions that I needed to run on the blockchain. To get these into the pods, I ran kubectl exec ... bash to run a bash interpreter in the pod/container, and then I just copy/pasted the content from the signed transaction files (and any other files that I needed) into the terminal.

Once all the signing keys were created, I created Kubernetes Secrets for them that get loaded into the pods so that the executable can reference them when it starts up.
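Roughly, that pattern looks like the following (a sketch with hypothetical names; the --shelley-* flags are the real cardano-node options for the key files):

```yaml
# Sketch only: ship the signing keys to the pod as a Kubernetes Secret.
apiVersion: v1
kind: Secret
metadata:
  name: pool-keys            # hypothetical name
type: Opaque
stringData:
  kes.skey: "<KES signing key contents>"
  vrf.skey: "<VRF signing key contents>"
  node.cert: "<operational certificate contents>"
---
# Fragment of the pod template: mount the Secret where the node expects it.
spec:
  volumes:
    - name: pool-keys
      secret:
        secretName: pool-keys
  containers:
    - name: cardano-node
      volumeMounts:
        - name: pool-keys
          mountPath: /keys
          readOnly: true
      # The block producer is then started with:
      #   --shelley-kes-key /keys/kes.skey
      #   --shelley-vrf-key /keys/vrf.skey
      #   --shelley-operational-certificate /keys/node.cert
```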


Just to ensure that I don’t misunderstand, does IOG refer to IOHK? And its Docker image is here?


Yes, IOG (Input Output Global) is the new name of IOHK (Input Output Hong Kong).