Sparky is a lightweight CI server written in Raku. It uses Bailador for the UI and Sparrow/Sparrowdo as an automation engine. Initially the server was designed to run on a single machine and did not scale well, so it could only handle a small or medium load, mostly working in localhost mode.
Now, with the help of a k8s cluster, Sparky can easily be turned into an industrial-scale CI server.
How does it work?
A user sends requests to run jobs on Sparky, where jobs are arbitrary tasks executed as part of your CI/CD processes. A user could be the Sparky cron jobs mechanism, real users issuing HTTP requests, or other applications consuming Sparky's file triggering protocol.
Depending on the load, k8s scales workers up or down to handle the requests; this is achieved by the standard Kubernetes auto-scaling mechanism.
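A minimal sketch of what that could look like, assuming the Sparky workers run as a Deployment named sparky; the name, replica bounds and CPU threshold are all assumptions, not something shipped with Sparky:

```yaml
# Hypothetical autoscaling sketch: scale the "sparky" Deployment on CPU load.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: sparky
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sparky            # assumed Deployment name (see the sketch below)
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # assumed threshold; tune for your workload
```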
Every k8s node represents a docker container that runs (see the deployment sketch after this list):
* Sparky web UI instance ( Bailador / Bulma web application )
* Sparkyd – Sparky jobs queue dispatcher
* Runtime environment for jobs execution ( Raku + Sparrow )
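Here is a minimal sketch of such a worker Deployment, the one the autoscaler sketch above points at. The image name sparky:latest, the web UI entry point and the port are assumptions; only sparkyd is named in this post, and even its invocation may differ in a real image:

```yaml
# Hypothetical worker sketch: one container running sparkyd plus the web UI.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sparky
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sparky
  template:
    metadata:
      labels:
        app: sparky
    spec:
      containers:
        - name: sparky
          image: sparky:latest          # assumed image with Raku + Sparrow + Sparky installed
          command: ["/bin/sh", "-c"]
          args:
            # start the jobs queue dispatcher in the background,
            # then run the web UI in the foreground (assumed entry point)
            - "sparkyd & exec raku bin/sparky-web.raku"
          ports:
            - containerPort: 3000       # assumed Bailador HTTP port
          resources:
            requests:
              cpu: 250m                 # gives the CPU-based autoscaler something to measure
```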
Benefits
Using k8s for Sparky infrastructure has two benefits:
* simplicity and reliability
* scalability
Simplicity
In a k8s setup Sparky runs jobs in docker containers. This is quite efficient: docker containers are mortal, so a user doesn’t have to worry much if CI/CD scripts break an environment; k8s will re-spawn a new instance in a while if the old one becomes unavailable. And since a docker image is immutable by its nature, we don’t have to worry much about the state of the underlying docker instances.
Scalability
One of the reasons people choose Kubernetes is that it handles load automatically. Now we might have dozens of Sparky jobs running in the cluster at the same time, something never achievable in the default setup where Sparky runs in localhost mode. k8s will take care of an increasing workload and launch new instances when the load starts to grow.
Underlying Sparky file system
Sparky uses an SQLite database as well as static files to store job state:
- SQLite database ( builds metadata )
- static files ( reports, lock and cache files )
Persistent file system
Because docker containers are stateless by design, we need to make some effort to keep the Sparky file system persistent. That means all containers should share the same files and SQLite database, not just copies of them local to a single container. The file system should also survive when underlying docker instances are gone and relaunched; it must not be tied to docker containers.
Luckily this is achievable using the standard k8s volumes mechanism.
A user can choose between different flavors, but they all boil down to the fact that the underlying file system stays permanent across various docker instances and is thus capable of keeping Sparky’s state. A minimal sketch follows the list of options below.
Possible options:
* AzureFile
* CephFS file system
* Persistent Volume Claim
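For example, a PersistentVolumeClaim could back Sparky's data directory. This is only a sketch under assumptions: the storage size is arbitrary, ReadWriteMany access needs a suitable storage class (such as AzureFile or CephFS), and Sparky itself does not prescribe any of these values:

```yaml
# Hypothetical sketch: a shared claim for Sparky's SQLite database and static files.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sparky-data
spec:
  accessModes:
    - ReadWriteMany        # lets several Sparky pods share the same files
  resources:
    requests:
      storage: 5Gi         # assumed size for the db, reports, lock and cache files
```

The claim would then be referenced from the worker Deployment's pod template via volumes and volumeMounts, mounted at Sparky's working directory (assumed here to be ~/.sparky), so every re-spawned container sees the same build metadata and reports.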
Future thoughts
I have not yet tried running Sparky in a k8s cluster using the approach described here, but I am pretty sure that once it’s done Sparky could be used in industrial-level projects. If you want to try Sparky in your company, please give me a shout 🙂
Stay tuned.
—
Thank you for reading.