Sparky on k8s cluster

Sparky is a lightweight CI server written in Raku. It uses Bailador for the UI and Sparrow/Sparrowdo as the automation engine. Initially the server was written to run on a single machine and did not scale well, so it could only handle small to medium loads, mostly working in localhost mode.

Now, with the help of a k8s cluster, Sparky can easily be turned into an industrial-scale CI server:



How does it work?

A user sends requests to run jobs on Sparky, where jobs are arbitrary tasks executed as part of your CI/CD processes.

A user could be the Sparky cron jobs mechanism or a real user issuing HTTP requests, including other applications consuming the Sparky file triggering protocol.
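As a rough illustration, here is what file-based triggering might look like; the directory name and JSON payload format are assumptions made for the sake of the sketch, not Sparky's documented protocol:

```python
import json
import os
import tempfile
import time

# Hypothetical trigger directory watched by the jobs dispatcher.
queue_dir = os.path.join(tempfile.mkdtemp(), ".triggers")
os.makedirs(queue_dir, exist_ok=True)

# An external application requests a build by dropping a file
# describing the job it wants to run:
trigger = {"project": "my-app", "description": "triggered by upstream deploy"}
path = os.path.join(queue_dir, str(int(time.time())))
with open(path, "w") as f:
    json.dump(trigger, f)

# The dispatcher side would periodically pick pending files up
# from the directory and schedule the corresponding builds:
pending = os.listdir(queue_dir)
print(pending)
```

The appeal of such a protocol is that any process able to write a file can schedule a build, with no HTTP client required.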

Depending on the level of load, k8s spins workers up or down to handle requests; this is achieved by the standard Kubernetes auto-scaling mechanism.
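For example, auto scaling could be expressed with a HorizontalPodAutoscaler manifest; the Deployment name and thresholds below are hypothetical, chosen only to show the shape of the config:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: sparky-workers
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sparky            # hypothetical Deployment running Sparky nodes
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # scale out when average CPU goes above 80%
```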

Every k8s node represents a docker container that runs:

* Sparky web UI instance ( Bailador / Bulma web application )
* Sparkyd – Sparky jobs queue dispatcher
* Runtime environment for jobs execution ( Raku + Sparrow )
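A node image bundling these three pieces might look roughly like the Dockerfile below; the base image, port and commands are assumptions for illustration, not an official Sparky image:

```dockerfile
# Hypothetical sketch of a Sparky node image.
FROM rakudo-star:latest

# Install the Sparky server and the Sparrow toolchain (module names
# as published in the Raku ecosystem; adjust as needed).
RUN zef install --/test Sparky Sparrow6

# Port for the Bailador web UI (an assumption here).
EXPOSE 3000

# Run the jobs queue dispatcher in the background
# and the web UI in the foreground.
CMD sparkyd & exec sparky-web
```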

Benefits

Using k8s for a Sparky infrastructure has two benefits:

* simplicity and reliability
* scalability


Simplicity

In a k8s setup Sparky runs jobs in docker containers. This is quite efficient: docker containers are mortal, so a user doesn’t have to worry much if CI/CD scripts break an environment; after all, k8s will re-spawn a new instance in a while if the old one becomes unavailable. And since a docker image is immutable by its nature, we don’t have to worry much about the state of the underlying docker instances.

Scalability

One of the reasons people choose Kubernetes is that it handles load automatically. Now we might have dozens of Sparky jobs running in the cluster at the same time, which is never achievable with default Sparky running in localhost mode. Thus, k8s will take care of increasing load and will launch new instances if the workload starts to grow.

Underlying Sparky file system

Sparky uses an sqlite database as well as static files to store jobs state:

* sqlite database ( builds meta data )
* static files ( reports, lock and cache files )
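To illustrate the first item, here is a minimal sketch of build metadata kept in SQLite; the table layout is an assumption for demonstration purposes, not Sparky’s actual schema:

```python
import sqlite3

# In-memory database for the sketch; Sparky would keep a file
# on the shared volume so every instance sees the same state.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE builds (
        id INTEGER PRIMARY KEY,
        project TEXT NOT NULL,
        state TEXT NOT NULL,          -- e.g. queued / running / passed / failed
        started_at TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")
conn.execute(
    "INSERT INTO builds (project, state) VALUES (?, ?)",
    ("my-app", "queued"),
)
conn.commit()

# Any instance sharing the same database file sees the same builds:
row = conn.execute("SELECT project, state FROM builds WHERE id = 1").fetchone()
print(row)  # ('my-app', 'queued')
```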

Persistent file system

Because docker by design does not keep state, we need to make some effort to keep the Sparky file system persistent. That means all containers should share the same files and sqlite database, not just copies of those inside each individual container. The file system should also survive when underlying docker instances are gone and relaunched, and should not be tied to particular docker containers.

Luckily this is achievable by using standard k8s volumes mechanism.

A user can choose between different flavors, but they all boil down to the fact that the underlying file system stays permanent across various docker instances and is thus capable of keeping the underlying Sparky state.

Possible options:

* AzureFile
* CephFS file system
* Persistent Volume Claim
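For instance, a Persistent Volume Claim shared by all Sparky pods might look like the manifest below; the claim name, size and storage class are hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sparky-state
spec:
  accessModes:
    - ReadWriteMany        # all Sparky pods mount the same files
  resources:
    requests:
      storage: 5Gi
  # storageClassName depends on the chosen backend
  # (e.g. azurefile, cephfs)
```

Every pod would then mount this claim at the path where Sparky keeps its sqlite database and static files, so state survives container restarts.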


Future thoughts

I’ve not yet tried to run Sparky in a k8s cluster using the approach described, but I am pretty sure that once it’s done Sparky could be used in industrial-level projects. If you want to try Sparky in your company, please give me a shout 🙂

Stay tuned.



Thanks for reading.

6 thoughts on “Sparky on k8s cluster”

    1. Hi @p6steve!

      There are many ways to deploy a k8s cluster. But for real production-like tasks, people usually rely on cloud-provided solutions, like:

      * Azure – AKS – https://azure.microsoft.com/en-us/services/kubernetes-service/
      * AWS – https://aws.amazon.com/kubernetes/

      If you just want to play with k8s, you may spin it up on your localhost using the kubeadm utility:

      https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/

      I am not sure if there are any Raku modules/tools for k8s deployment. But one time I deployed a k8s cluster on AWS hosts using Sparrowdo/Sparky:

      Sparrowdo scenario – https://github.com/melezhik/sparrowdo/blob/master/examples/k8s.raku

      Raku hosts file – hosts.raku:

      [
        %( host => "192.168.0.1", tags => "master,name=master" ),
        %( host => "192.168.0.2", tags => "worker,master_ip=192.168.0.1,name=worker1" ),
        %( host => "192.168.0.3", tags => "worker,master_ip=192.168.0.1,name=worker2" ),
      ]

      How to run:

      sparrowdo --host=hosts.raku --sparrowfile=examples/k8s.raku --tags=master # bootstrap master and worker nodes first

      sparrowdo --host=hosts.raku --sparrowfile=examples/k8s.raku --tags=worker,token=foobarbaz,cert_hash=blablabla # join workers to the cluster

      See more on how to run Sparrowdo with Sparky – https://github.com/melezhik/sparrowdo/blob/master/doc/sparky-integration.md

      HTH

      Aleksei


  1. > I would prefer to follow your path than to make one of my own

    cool, reach out to me via irc/mail if you need any help, I’m interested in promoting Sparrow/Sparky stuff 🙂

