Since I announced RICH – the Rakudo Issues Confirmation Helper – I’ve been playing a lot with automating test cases for known Rakudo bugs.
One thing I’ve found really interesting in this journey is how Sparrow could be a good fit for expressing issues through automation scenarios.
While I don’t pretend to substitute the existing Roast test system with Sparrow, I’d like to highlight an alternative approach here; maybe Rakudo devs will pay attention to the tool 🙂 and start using it one day.
The following is just some examples and thoughts, not meant to be a complete “user” guide.
Chasing a bug
It all starts with a user describing a bug through the Rakudo GitHub issues page. Let’s take a look at a fresh one, issue #4119:
The Problem
Chaining operators are always iffy, however is assoc<chain> doesn’t make a custom operator iffy (unlike is equiv with a chaining operator, which does).
Expected Behavior
raku -e 'sub infix:<ab>($a, $b) is assoc<chain> { True }; say 1 !ab 2'
False
Actual Behavior
raku -e 'sub infix:<ab>($a, $b) is assoc<chain> { True }; say 1 !ab 2'
===SORRY!=== Error while compiling -e
Cannot negate ab because additive operators are not iffy enough
So we have 3 essential parts in the issue definition:
* Human readable description
* Expected behavior code + expected output
* Actual behavior code + actual output
Let’s analyze all these piece by piece.
Description
This is just a brief description of the issue; it’s not meant to be used in the automation process, but it helps developers understand the issue from a high level. Let’s skip it.
Expected behavior / Actual behavior
These bits are the most important for test automation purposes, because they are Raku scenarios expressing the issue. The first one, as you could guess, shows example code and the desired output; the second one is the same code but with the real output and probably an unsuccessful exit code.
BDD Approach
The BDD paradigm reinforces the idea of a close relationship between software users and software developers; it tries to bridge the gap between those two groups. One of its approaches is that users express the desired system behavior in runnable scenarios, which are both a human-readable specification and test code.
The classical Given/When/Then statement structure is one of these methods.
In the case of a Rakudo bug it could be written as:
Given: I have this version of Rakudo
When: I run this code
Then: it should exit successfully and produce this output
Let’s Sparrow it!
Here comes the most interesting part. Sparrow has some TDD features by design, so it’s quite easy to implement the idea through this tool. The rest of the post is just an example of a Sparrow workflow for automating tests for Rakudo bugs.
Given: I have this version of Rakudo
This statement does not need any explicit coding (but see raku --version in the following script) and is “ensured” by the working environment a test runs against. Usually users catch bugs on their laptops 🙂
When: I run this code
The body of a test is just a simple Bash script that gets executed by the Sparrow command line tool s6. Again, in the spirit of the idea, it should literally reproduce the bug the way you got it. In most cases a Bash one-liner is enough:
mkdir -p issues/4119
issues/4119/task.bash
set -x
set -e
raku --version
raku -e 'sub infix:<ab>($a, $b) is assoc<chain> { True }; say 1 !ab 2';
Then: it should exit successfully and produce this output
Now let’s just run the Sparrow task and see the result:
s6 --task-run issues/4119
[sparrowtask] :: run sparrow task issues/4119
[sparrowtask] :: run thing issues/4119
[issues/4119] :: stderr: ++ set -e
++ raku --version
[issues/4119] :: This is Rakudo version 2020.07 built on MoarVM version 2020.07
[issues/4119] :: implementing Raku 6.d.
[issues/4119] :: stderr: ++ raku -e 'sub infix:<ab>($a, $b) is assoc<chain> { True }; say 1 !ab 2'
[issues/4119] :: stderr: ===SORRY!=== Error while compiling -e
Cannot negate ab because additive operators are not iffy enough
at -e:1
[issues/4119] :: stderr: ------> a, $b) is assoc<chain> { True }; say 1 !⏏ab 2
expecting any of:
infix
infix stopper
[issues/4119] :: task exit status: 1
[issues/4119] :: task issues/4119 FAILED
Actual / Expected output
In the given example the sample code exited with an error without even producing the desired output; this is the simplest form of a test. But what if the code exits with 0 yet produces the wrong output? This is quite easy to handle with the Sparrow task check mechanism.
Let’s take a look at another example, the known Rakudo issue #4118:
issues/4118/task.bash
set -x
set -e
raku --version
raku -e "say qq{===};
say [⊖] (1,2,3), (1,2,3), (1,2,3);
say [⊖] (0,1,2), (0,1,2), (0,1,2);
say qq{===}
" 2>&1;
[sparrowtask] :: run sparrow task issues/4118/
[sparrowtask] :: run thing issues/4118/
[issues/4118/] :: stderr: ++ set -e
[issues/4118/] :: stderr: ++ raku --version
[issues/4118/] :: This is Rakudo version 2020.07 built on MoarVM version 2020.07
[issues/4118/] :: implementing Raku 6.d.
[issues/4118/] :: stderr: ++ raku -e 'say qq{===};
say [⊖] (1,2,3), (1,2,3), (1,2,3);
say [⊖] (0,1,2), (0,1,2), (0,1,2);
say qq{===}
'
[issues/4118/] :: ===
[issues/4118/] :: Set()
[issues/4118/] :: Set(0)
[issues/4118/] :: ===
[task check] stdout match (s) <===> True
[task check] stdout match (s) <^^ 'Set()' $$> True
[task check] stdout match (s) <^^ 'Set()' $$> False
[task check] stdout match (s) <===> False
=================
TASK CHECK FAIL
In this test we ensure that the code sample produces Set() twice. The Sparrow task checks DSL is very handy in this case.
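The task.check file driving these checks is not shown above; judging by the report lines, a reconstruction would look roughly like this (in the Sparrow check DSL, begin:/end: should mark a sequential block and regexp: introduces a Raku regex; treat the exact syntax as my assumption):
begin:
===
regexp: ^^ 'Set()' $$
regexp: ^^ 'Set()' $$
===
end: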
Conclusion
Sparrow allows one to write test scenarios in a way that is as close as possible to what users get in their environments; it could be an efficient testing and collaboration tool, enabling users and developers to interact on Rakudo bugs without unnecessary additional layers.
Sparky is a lightweight CI server written in Raku. It uses Bailador for the UI and Sparrow/Sparrowdo as the automation engine. Initially the server was written to run on a single machine and did not scale well, so it could only handle a small/medium load, mostly working in localhost mode.
Now, with the help of a k8s cluster, Sparky can easily be turned into an industrial-scale CI server:
How does it work?
A user sends requests to run jobs to Sparky, where jobs are arbitrary tasks executed as a part of your CI/CD processes.
Depending on the level of load, k8s scales workers up or down to handle the requests; this is achieved by the standard Kubernetes auto-scaling mechanism.
Every k8s node represents a docker container that runs:
* a Sparky web UI instance (Bailador / Bulma web application)
* Sparkyd – the Sparky jobs queue dispatcher
* a runtime environment for jobs execution (Raku + Sparrow)
Benefits
Using k8s for a Sparky infrastructure has two benefits:
* simplicity and reliability
* scalability
Simplicity
In a k8s setup Sparky runs jobs in docker containers. It’s quite efficient: docker containers are mortal, and a user doesn’t have to worry much if CI/CD scripts break an environment; after all, k8s will re-spawn a new instance in a while if the old one becomes unavailable. And as docker is immutable by its nature, we don’t have to worry much about the state of the underlying docker instances.
Scalability
One of the reasons people choose Kubernetes is that it handles load automatically. Now we might have dozens of Sparky jobs running in the cluster at the same time, which is never achievable with default Sparky running in localhost mode. Thus, k8s will take care of an increasing load and launch new instances as the workload grows.
Underlying Sparky file system
Sparky uses an sqlite database as well as static files to store jobs state:
Because docker by design does not keep state, we need to make some effort to make the Sparky file system persistent. That means all containers should share the same files and sqlite database, not just copies local to each container. The file system should also survive underlying docker instances being destroyed and relaunched, rather than being tied to particular docker containers.
Luckily, this is achievable using the standard k8s volumes mechanism.
A user can choose between different flavors, but they all boil down to the fact that the underlying file system stays permanent across various docker instances and is thus capable of keeping Sparky’s state.
I haven’t tried to run Sparky in a k8s cluster using the described approach yet, but I am pretty sure that once it’s done, Sparky could be used in industrial-level projects. If you want to try Sparky in your company, please give me a shout 🙂
Rakudo releases can be tough, because the language is still in a very active development stage and new issues arrive daily. Let me introduce my attempt to help release managers and Rakudo developers keep track of how existing issues are addressed by daily Rakudo commits. Enter the RIP service (maybe I should choose a better name?) – Rakudo Issues Player.
The service allows one to describe existing issues as playable Rakudo scenarios that get automatically replayed for every new Rakudo commit.
So we have the recent issues report page with links to Rakudo GH issues and test reports:
Filing a new issue
To file a new issue, one goes to the Rakudo GH issues page as usual and files it. Just one extra step is required so that the issue will be checked against further commits:
* Go to https://rakudist.raku.org/play/
* Fill in your code snippet (it could be a Test scenario or any Raku code that exits non-zero for negative cases); see the sketch after this list
* Name your play issue-$issue-number
* Run the play (only needed once)
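For instance, a play for a bug like issue #4119 (covered earlier on this blog) could be a tiny Test scenario that exits non-zero while the bug is present (a sketch; wrapping the code in EVAL is just one way to turn the compile-time error into a test failure):
use MONKEY-SEE-NO-EVAL;
use Test;

plan 1;

# issue #4119: negating a chain-associative custom infix should compile
lives-ok {
    EVAL q[sub infix:<ab>($a, $b) is assoc<chain> { True }; say 1 !ab 2]
}, "custom chain operator is iffy";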
That is it. RIP will never let your issue get abandoned. It’ll run the play for every new commit and update the report page.
Feedback
RIP is still at quite an early and experimental stage. I’d like to hear feedback from Rakudo developers and the Raku community.
PS. I have a slogan idea as well. Just a thought :-)). RIP service – let your issues rest in peace.
—
PS. Update: @Liztormato suggested another name for the service – RICH – Rakudo Issues Confirmation Helper.
I just released the first version of my new Markdown-to-PDF converter, pfft. It runs on Linux, at least as far back as Ubuntu 18.04.
Why yet another converter when pandoc and many others already do the job? Size. Pandoc uses TeX to generate PDFs. All the other converters I found use a Web browser in one way or another to make the PDF. Pandoc is 50 MB by itself, not counting TeX! In less than 1.2 MB (a 5.25″ floppy 😉 ) and a single file, pfft will do the job.
Of course, there is a catch: pfft uses Pango and Cairo to make the PDFs. Those have their own dependencies, but are installed on many Ubuntu systems by default! So pfft itself does not add to the dependency load those systems already carry.
(By the way, I use and appreciate both TeX and Pandoc. They are great tools! …
I’ve recently launched an experimental service called RakuPlay. It allows users to run Raku code snippets against different versions of Rakudo, including Rakudo developers’ SHA commits.
It also supports automatic Raku module installation using Rakufile syntax.
A common user page looks like this:
The Rakudo developer page allows one to run code against certain Rakudo commits:
Once a user hits the “submit” button, RakuPlay runs the code in a respective docker container (you can also choose an OS image).
It takes a while on the first run, as the RakuPlay environment is not set up yet, but subsequent runs should be pretty fast (RakuPlay will reuse existing environments).
Once the code is executed, a user can find the code execution report among the available reports:
I started the project just for fun, and because 99% of the code was already there as a part of the Rakudist project.
If the Raku community finds the project promising, maybe I could invest more time in it.
Some benefits from my point of view:
For Rakudo developers:
* Rakudo Commits. Rakudo developers could easily run any code (including usage of Raku modules) and share the results. One doesn’t need to have Rakudo compiled at a certain version to run code against it; all you need is a browser.
* Common Platform. RakuPlay could be a common platform for all devs to share results, discuss, etc. It could contain code examples, user scenarios, use cases and test results. It could be a good addition to the IRC channel.
* Quick Tests. Sometimes people forget or don’t want to write test cases for their commits, maybe because it takes a bit more effort compared with the code changes (somehow I’ve found quite a number of “tests needed” issues in the Rakudo repo). RakuPlay could be a “draft” where the author of a commit or issue reproduces their idea in code and gives a link to others. Later, one can pick up an existing RakuPlay build and “replay” it against other commits. The build is always complete and informative, as it contains a Rakudo version and a code snippet, as well as the output. Later on, a dev could convert a draft into a real Roast test.
For the Raku community as a whole
* The same idea applies to the whole community, just with a slight variation. People could easily run any code to give examples of how to use their code (Raku module authors) or to express problems they’ve encountered running someone else’s code (e.g. referencing RakuPlay builds from GH issues).
In the long run, the service could facilitate the Raku language’s growth and make it easier for newbies to learn the language.
—
Thank you for reading. Please share your feedback on Reddit.
Sparrow is a Raku-based automation tool built around the idea of Sparrow plugins – small reusable pieces of code that run as command line tools or Raku functions.
Raku:
my %state = task-run "say name", "name", %(
bird => "Sparrow"
);
say %state<name>;
CLI:
$ s6 --plg-run name@bird=Sparrow
One can even create wrappers for existing command line tools, converting them into Raku functions. Besides the wrapper script itself, such a plugin ships a dependency file, which is needed so that Sparrow can install the wrapped Raku module during plugin installation.
So eventually we might have a repository of raku-utils plugins for every Raku module exposing a command line interface:
$ s6 --search raku-utils
One day I might create a script that downloads all zef distributions, sorts out those having bin/ scripts, and creates Sparrow wrappers for all of them. That would add dozens of new plugins to the existing Sparrow ecosystem at no cost.
And this would make it possible to run those scripts as pure Raku functions, using the Sparrow plugins interface!
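For example, calling such a wrapper could look like this (the plugin name and its options parameter are hypothetical; the task-run interface is the same as shown above):
# a hypothetical raku-utils plugin wrapping the `mi6` cli tool from App::Mi6
my %state = task-run "run mi6", "raku-utils-mi6", %(
    options => "build"
);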
Conclusion
I’ve introduced the idea of adding Sparrow plugins for existing Raku command line tools shipped as a part of Raku modules.
“RakuOps series – an attempt to show people who write in Raku how to use the language in daily DevOps tasks – automation, configuration management, Docker containers and so on”
It’s been two weeks that I’ve been playing with Sparrowdo – an automation tool written in Raku and based on the Sparrow automation framework. Now it’s time to share some cool features I’ve added recently. But before I do that, let me remind you how it all started.
Multiple hosts management
After publishing issue number 1, I received a comment from @bobthecimerian in an r/rakulang reddit post:
“Assume for the sake of discussion that I want to manage 5 machines with Sparrow6 and run Docker on all of them. Do I have to install Sparrow6 on all of them, and deploy Sparrow6 tasks to all of them? Then I use ssh, or ssh through the Sparrow6 DSL, to run tasks that install Docker and other software? Do I have to manage ssh authorized keys and network addresses for each machine that I am configuring myself, or does Sparrow6 have tasks or other tools to make that management easier?”
So, I thought – “Wait … what a cool use case I can reveal here, I just need to add some features to Sparrowdo and that is it!”
Why?
The idea of managing multiple hosts is quite common. Say you have a bunch of related VMs in your network and you want to manage them consistently – installing the same packages, running services, and so on. Or you have a multi-tier application – frontend/backend/database – and you need to manage the configuration of each node specifically, but still need to connect those nodes through different protocols. Of course, in the days of immutable infrastructure and Kubernetes these types of tasks could be solved using Docker. But what if I want something lightweight, flexible and not involving industrial-scale efforts? Here is where Sparrowdo could be a good alternative, especially for people writing in Raku.
Dependencies
This is what we need for this tutorial. You don’t have to install these tools unless you want to experiment with the topic in practice:
* Terraform to create ec2 instances in amazon aws
* a free tier Amazon account
* aws cli to launch ec2 instances with Terraform
* Sparrowdo to provision hosts
* Sparky – a Sparrowdo backend to asynchronously execute Sparrowdo scenarios
Spin up infrastructure
Creating bare-bones infrastructure is relatively easy with Terraform – a multi-cloud infrastructure deployment tool and the de-facto industry standard for infrastructure management. I am not a big fan of Terraform’s declarative-style DSL, but it works really well when we just need to spin up infrastructure without a provisioning stage (see later).
So let’s create a Terraform scenario that creates 3 ec2 Linux instances with Ubuntu OS, representing the frontend, backend and database nodes:
$ mkdir ~/terraform-example
$ cd ~/terraform-example
$ nano example.tf
But before we launch the Terraform script, we need to set up passwordless ssh to allow the Sparrowdo provision stage to run from my laptop.
What I need is to generate an ssh key and import its public part into my amazon account. When Terraform creates the ec2 instances it will reference this key, which makes amazon insert the public part into the hosts’ configuration and finally enables passwordless ssh connections from my laptop to those hosts:
The clever bit here is that we create a key pair named “my-key” and reference it inside Terraform using the key-name attribute.
Run Terraform
Now let’s run Terraform to create our first infrastructure consisting of 3 hosts.
$ terraform apply -auto-approve
aws_instance.example: Creating…
aws_instance.example2: Creating…
aws_instance.example3: Creating…
aws_instance.example: Still creating… [10s elapsed]
aws_instance.example2: Still creating… [10s elapsed]
aws_instance.example3: Still creating… [10s elapsed]
aws_instance.example: Still creating… [20s elapsed]
aws_instance.example2: Still creating… [20s elapsed]
aws_instance.example3: Still creating… [20s elapsed]
aws_instance.example2: Creation complete after 24s [id=i-0af378c47f68a1250]
aws_instance.example3: Creation complete after 24s [id=i-082ad29992e0c83eb]
aws_instance.example: Creation complete after 24s [id=i-0c15a8a728ad71302]
Once we apply the Terraform configuration to aws, in literally seconds we get 3 ec2 instances with Ubuntu OS up and running in the amazon cloud. Cool!
Sparrowdo
In devops terminology, provisioning is the stage when we apply configuration to bare-bones infrastructure resources, for example virtual machines. This is where Sparrowdo starts shining, because it’s what the tool was designed for.
Let’s install Sparrowdo itself first. Sparrowdo is installed as a zef module:
$ zef install Sparrowdo --/test
Now let’s create a simple Sparrowdo scenario which will define provision logic.
Our first scenario – sparrowfile – will be as simple as that:
$ mkdir -p ~/sparrowdo-examples
$ cd ~/sparrowdo-examples
$ nano sparrowfile
package-install "nano";
Installing the nano editor (which I am a big fan of) on all the nodes should be enough to test our first simple Sparrowdo configuration.
Sparky
Because we are going to run Sparrowdo in asynchronous mode, we need to install Sparky – an asynchronous Sparrowdo runner. As a benefit, it comes with a nice web UI where build statuses are tracked and logs are visible:
The last 3 commands initialize Sparky’s internal database and run the Sparky queue dispatcher with the Sparky web UI, which is accessible at the 127.0.0.1:3000 endpoint.
But before we try to run any Sparrowdo provision, let’s work out how we learn the hosts’ network addresses, bearing in mind we don’t want to hardcode them into our configuration.
Terraform state
What is cool about Terraform is that it keeps the infrastructure’s internal data in JSON format in a special file called the state:
$ cat ~/terraform-example/terraform.tfstate
So it’s relatively easy to create a simple Raku script that parses the file and fetches all the required configuration data:
$ cd ~/sparrowdo-examples
$ nano hosts.aws.raku
use JSON::Tiny;
my $data = from-json("/home/melezhik/terraform-example/terraform.tfstate".IO.slurp);
my @aws-instances = $data<resources><>.grep({
.<type> eq "aws_instance"
}).map({
%(
host => .<instances>[0]<attributes><public_dns>
)
});
@aws-instances;
If we dump the @aws-instances array, we will see all 3 instances with their public DNS address data:
If we pass the script as the host parameter, Sparrowdo will be clever enough to run it and, because the script’s last statement is the @aws-instances array, take it as the input hosts list:
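The exact invocation is not preserved in this post, but judging by the flags discussed below it would be something like sparrowdo --host=hosts.aws.raku --bootstrap, producing: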
queue build for [ec2-54-237-6-19.compute-1.amazonaws.com] on [worker-3] queue build for [ec2-52-23-177-193.compute-1.amazonaws.com] on [worker-2] queue build for [ec2-54-90-19-170.compute-1.amazonaws.com] on [worker-2]
This command launches the nano editor installation on all 3 hosts. The --bootstrap flag asks Sparrowdo to install all the Sparrow dependencies first, because we run the provision for the first time.
As seen in the output, Sparrowdo has triggered 3 builds and they have been added to the Sparky queue. If we open the Sparky web UI we can see that 2 builds are already being executed:
And the third one is kept in the queue:
After a while we can see that all 3 instances are provisioned:
So all 3 hosts have been successfully provisioned. If we ssh to any of the hosts, we will see that the nano editor is present.
Build logs
The Sparky UI allows one to see build logs, where one can find a lot of details about how the configuration was provisioned. For example:
rakudo-pkg is already the newest version (2020.06-01).
0 upgraded, 0 newly installed, 0 to remove and 117 not upgraded.
===> Installing: Sparrow6:ver<0.0.25>
1 bin/ script [s6] installed to:
/opt/rakudo-pkg/share/perl6/site/bin
18:37:03 07/16/2020 [repository] index updated from http://rakudist.raku.org/repo//api/v1/index
18:37:07 07/16/2020 [install package(s): nano.perl] trying to install nano ...
18:37:07 07/16/2020 [install package(s): nano.perl] installer - apt-get
18:37:07 07/16/2020 [install package(s): nano.perl] Package: nano
18:37:07 07/16/2020 [install package(s): nano.perl] Version: 2.5.3-2ubuntu2
18:37:07 07/16/2020 [install package(s): nano.perl] Status: install ok installed
[task check] stdout match <Status: install ok installed> True
Now let’s see how we can provision hosts individually, depending on the roles assigned to them. Remember we have frontend, backend and database hosts?
Custom configurations
The latest Sparrowdo release comes with an awesome feature called tags. Tags allow one to assign arbitrary variables to each host and branch the installation logic depending on those variables.
Let’s tweak the host inventory script hosts.aws.raku so that the resulting @aws-instances array includes elements with tags:
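The tweaked snippet is not reproduced here, but judging by the fuller script shown further below, each mapped element would gain a tags key along these lines:
%(
    host => .<instances>[0]<attributes><public_dns>,
    tags => "name={.<instances>[0]<attributes><tags><Name>}"
)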
Another cool thing about tags is that one can pass --tags as a command line argument, where it acts as a filter leaving only certain types of hosts. Say we only want to update the database host:
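The original command is not shown here; it would be along these lines (assuming the inventory script above):
$ sparrowdo --host=hosts.aws.raku --tags=database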
If we pass multiple tags using a "," delimiter, they act as an AND condition. For example:
--tags=database,production
will only process hosts having both the database and production tags.
Hosts attributes
And the last but not least feature of tags is key/value data. If we set a tag in name=value format, Sparrowdo will process it as a named attribute:
my $v = tags()<name>
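This makes per-role branching in a sparrowfile straightforward; a small illustrative sketch (the role name and package choices here are made up):
# branch provision logic on the host's `name` tag
if tags()<name> eq "database" {
    package-install "sqlite";
}
else {
    package-install "nano";
}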
This is how we pass arbitrary data into the Sparrowdo context using the same tag syntax. For example, let’s modify the hosts inventory script to pass the IP address of the backend node:
$ nano ~/sparrowdo-examples/hosts.aws.raku
use JSON::Tiny;
my $data = from-json("/home/melezhik/terraform-example/terraform.tfstate".IO.slurp);
my $backend-ip;
my @aws-instances = $data<resources><>.grep({
.<type> eq "aws_instance"
}).map({
if .<instances>[0]<attributes><tags><Name> eq "backend" {
$backend-ip = .<instances>[0]<attributes><public_ip>
}
%(
host => .<instances>[0]<attributes><public_dns>,
tags => "name={.<instances>[0]<attributes><tags><Name>}"
)
});
for @aws-instances -> $i {
    # append the backend address to each host's tags string; "," is the tags delimiter
    $i<tags> ~= ",backend_ip={$backend-ip}"
}
@aws-instances;
Now the @aws-instances array has the following structure:
Sparrowdo supports different host formats, including localhost and docker; please read the documentation for more details.
Conclusion
Sparrowdo and Sparky are flexible tools that allow one to asynchronously provision virtual resources. In this tutorial we’ve seen how easily one can spin up a multi-tier application consisting of 3 nodes from scratch.
Moreover, Sparrowdo works nicely with well-known tools like Terraform, which makes it even more attractive and practical.
See you soon in RakuOps issue number 3; please let me know what you want to hear about next time.
RakuOps series – an attempt to show people who write in Raku how to use the language in daily DevOps tasks – automation, configuration management, building Docker containers and so on.
While I don’t know for sure which topics will attract the community’s interest, I hope that during this series I’ll get some feedback, so I can adjust future posts according to actual people’s needs.
How to Build Docker Containers Using Raku and Sparrow
This is the first post in the series, where I am going to show how to use Raku and Sparrow – a Raku automation framework – to build Docker images. We will start with a simple Dockerfile example and then we’ll see how to use Sparrow to extend the image building process.
Why
People usually use the Dockerfile DSL to build Docker images. However, the usage of Dockerfiles is limited and quickly gets cumbersome when it comes to more sophisticated cases. The user ends up with extensive shell scripting spread through various RUN commands or similar, which is very hard to maintain in the long run.
Moreover, if one chooses to change the underlying Docker container’s OS, they will have to rewrite all the code, which often contains distro-specific RUN commands.
In this post we will see how to use Raku and the batteries-included Sparrow automation tool to create Docker build scenarios in a more portable and easier to maintain way.
As a result, one can start using Raku to create high-level scenarios, gaining access to all the power of the language, while plenty of Sparrow plugins reduce the effort of writing code for typical configuration tasks – installing native packages, managing users, configuration files and so on.
Prerequisites
To build a Docker container we will need the following set of tools:
Rakudo
Sparrow
Git
Docker
Rakudo installation is pretty straightforward; just follow the instructions on the https://rakudo.org/downloads web site.
To install the Sparrow toolkit, we need to install the Sparrow6 Raku module:
zef install --/test Sparrow6
Sparrow bootstrap
To bootstrap Sparrow on a Docker instance we need to build a base Docker image first. That image should include the Rakudo and Sparrow binaries. Thanks to @jjmerelo there is a jjmerelo/alpine-raku base Docker image with Alpine Linux and the Rakudo binary pre-installed, so our Dockerfile can be pretty simple:
$ mkdir -p RakuOps/docker-sparrow
$ cd RakuOps/docker-sparrow
$ cat Dockerfile
FROM jjmerelo/alpine-raku
RUN zef install --/test Sparrow6
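Building this image (with docker build --tag rakuops:1.0 ., the same command used below) and listing it with docker images gives: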
REPOSITORY TAG IMAGE ID CREATED SIZE
rakuops 1.0 a2cbc605ec5e 3 minutes ago 139MB
First run
Now, having a base Docker image, let’s run our very first Sparrow scenario. All we need is to add a file called sparrowfile using the Docker ADD directive. Our first scenario will be as simple as a Bash “Hello World” echo command:
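Judging by the error trace and build log that follow, the sparrowfile is a one-liner calling the bash DSL function:
$ cat sparrowfile
bash "echo Hello World";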
As one can notice, a Sparrow scenario is just plain Raku code with some DSL constructions. Let’s modify the Dockerfile and rebuild the image.
$ cat Dockerfile
ADD sparrowfile .
RUN raku -MSparrow6::DSL sparrowfile
$ docker build --tag rakuops:1.0 .
Sending build context to Docker daemon 5.632kB
Step 1/4 : FROM jjmerelo/alpine-raku
---> c0ecb08ec5db
Step 2/4 : RUN zef install --/test Sparrow6
---> Using cache
---> a2cbc605ec5e
Step 3/4 : ADD sparrowfile .
---> 74c7ee71a303
Step 4/4 : RUN raku -MSparrow6::DSL sparrowfile
---> Running in c73e1a7d568f
unknown plugin bash
in method plugin-install at /root/raku-install/share/perl6/site/sources/5D155994EC979DF8EF1FDED7148646312D9073E3 (Sparrow6::Task::Repository::Helpers::Plugin) line 115
in sub task-run at /root/raku-install/share/perl6/site/sources/DB0BB8A1D70970E848E2F38D2FC0C39E4F904283 (Sparrow6::DSL::Common) line 12
in sub bash at /root/raku-install/share/perl6/site/sources/7662EE0EFF4206F474B7CC4AEF229F1A86EC8FFF (Sparrow6::DSL::Bash) line 33
in sub bash at /root/raku-install/share/perl6/site/sources/7662EE0EFF4206F474B7CC4AEF229F1A86EC8FFF (Sparrow6::DSL::Bash) line 7
in block <unit> at sparrowfile line 1
The very first run failed with an unknown plugin bash error, which means one needs to provision Docker with a Sparrow repository – a storage for all the dependencies required by Sparrow scenarios.
While there are many ways to do that, for our tutorial the use of a local file repository seems the easiest one.
Local Sparrow repository
A local Sparrow repository contains all the Sparrow plugins, deployed to your local file system. To create one, we need to initialize the repository structure first:
Once we have an empty repository, let’s populate it with Sparrow plugins taken from the source code. Right now we only need the specific bash plugin, so let’s upload only this one:
We’re going to use the Docker COPY command to copy the repository files into the Docker cache. But first we need to copy the files to the current working directory so they are available to the COPY command during the Docker build:
$ cp -r ~/repo .
$ cat Dockerfile
RUN apk add bash perl
COPY repo/ /root/repo/
RUN s6 --index-update
RUN raku -MSparrow6::DSL sparrowfile
$ docker build --tag rakuops:1.0 .
Sending build context to Docker daemon 11.26kB
Step 1/7 : FROM jjmerelo/alpine-raku
---> c0ecb08ec5db
Step 2/7 : RUN zef install --/test Sparrow6
---> Using cache
---> a2cbc605ec5e
Step 3/7 : RUN apk add bash perl
---> Using cache
---> d9011d4e64db
Step 4/7 : ADD sparrowfile .
---> Using cache
---> adb1df57e1c0
Step 5/7 : COPY repo/ /root/repo/
---> Using cache
---> 3ed6bfaf4183
Step 6/7 : RUN s6 --index-update
---> Running in 6edfc480bde7
17:03:59 06/29/2020 [repository] update local index
17:03:59 06/29/2020 [repository] index updated from file:///root/repo/api/v1/index
Removing intermediate container 6edfc480bde7
---> 7eccb5889a80
Step 7/7 : RUN raku -MSparrow6::DSL sparrowfile
---> Running in af6eb4b2d9ee
17:04:02 06/29/2020 [repository] installing bash, version 0.002001
17:04:05 06/29/2020 [bash: echo Hello World] Hello World
As we can see from the log, the Sparrow scenario successfully finishes, printing “Hello World” to stdout. The line installing bash, version 0.002001 means the Sparrow plugin has been successfully pulled from the Docker cache and installed into the container’s file system.
Build all plugins
To use the rest of the Sparrow plugins in Docker build scenarios, we need to add them to the Docker cache the same way we did for the bash plugin:
Now let’s update the Docker cache by copying the repository files to the current working directory; on the next run the Docker COPY command will pick the files up and push them into the Docker image.
$ cd ~/RakuOps/docker-sparrow/
$ cp -r ~/repo .
Sparrow plugins
Now we’re free to use any plugin we’ve just added. Say we need to install the nano editor in our Docker image. Sparrow provides a cross-platform package-generic plugin to install native packages:
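The sparrowfile at this step is not reproduced in the post; given the package-generic plugin just mentioned, it would presumably boil down to the same one-liner used earlier in this series:
package-install "nano";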
We’ve just seen how one can use Raku and Sparrow to build Docker images. The advantage of the approach is that one is no longer limited by the Dockerfile syntax and can leverage all the power of Raku to express any sophisticated build logic. On the other hand, Sparrow provides a lot of handy primitives and plugins for typical build tasks, some of which I’m going to share in the next posts.
A few days ago several discussions have been launched where people try to deal with managing non-Raku / native dependencies of Raku modules. While a solution is far from being found, or at least from being complete, here is my Sparrow take on the problem.
Raku-native-deps
Raku-native-deps is a Sparrow plugin that parses a META6.json file and turns it into native package dependencies. It has a lot of limitations (e.g. it only supports CentOS and only parses `:from<native>` statements), but it gives one a sense of the approach:
my %state = task-run "get packages", "raku-native-deps", %(
path => "META6.json"
);
for %state<packages><> -> $i {
say "package: $i<package>"
}
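For context, the `:from<native>` statements the plugin looks for are ordinary entries in a module’s META6.json depends list, along these lines (an illustrative fragment, not copied from a real module):
"depends" : [
    "DBIish",
    "sqlite3:from<native>"
]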
Basically, one just gives it the path to a module’s META file; the plugin parses the file, converting it into native package dependencies, and then it’s possible to install those using the underlying package manager:
for %state<packages><> -> $i {
package-install $i<package>
}
Full scenario
So full scenario to install a module with native dependencies would be:
# Fetch module and get a directory where it's fetched
my %state = task-run 'fetch dbd-sqlite', 'zef-fetch', %(
identity => 'DBD::SQLite'
);
# Build native packages list from META6.json
my %state2 = task-run "get packages", "raku-native-deps", %(
path => "{%state<directory>}/META6.json"
);
# Install native packages (libsqlite3)
for %state2<packages><> -> $i {
package-install $i<package>;
}
# Install module, at this point external dependencies are installed
# So this step will only install Raku dependencies and module itself
zef "DBD::SQLite";
RakuDist integration
RakuDist – a Raku modules testing service – uses this method to test distributions containing native dependencies. Known module examples:
The approach is not complete; right now it can solve the installation of native dependencies for a single module (but not recursively for the native dependencies of the module’s own dependencies). One can read the ongoing discussion here – https://github.com/ugexe/zef/issues/356 – and suggest ideas.
Nowadays many Raku module authors ship cli tools as a part of their Raku module distributions. RakuDist provides a dead easy way to test those scripts. The benefit: it takes minimal coding and is fully integrated into the RakuDist service.
Cli application example
Say, we have a script.raku shipped as a part of a Raku module.
$ cat bin/script.raku
if @*ARGS[0] eq "--version" {
    say "app version: 0.1.0"
}
elsif @*ARGS[0] eq "--help" {
    help();
}
else {
    my @params = @*ARGS;
    # do some stuff
}
To test a script installation, one needs to create a .tomty/ sub-directory in the module root directory and place some test scenarios there. Scenarios should be written in Tomty – a simple Raku framework for black box testing:
$ mkdir .tomty
$ nano .tomty/00-script-version.pl6
task-run ".tomty/tasks/app-version/";
$ mkdir -p .tomty/tasks/app-version/
$ nano .tomty/tasks/app-version/task.bash
script.raku --version
The 00-script-version scenario runs the script with some parameters (version info) and verifies a successful status code.
To verify the script’s STDOUT, create a check file with some Raku regular expressions:
$ nano .tomty/tasks/app-version/task.check
regexp: "app version:" \s+ \d+ '.' \d+ '.' \d+
You can add more scenarios; they will all be executed in a row:
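For instance, a second scenario checking the help screen could follow the same layout (a hypothetical example):
$ nano .tomty/01-script-help.pl6
task-run ".tomty/tasks/app-help/";

$ nano .tomty/tasks/app-help/task.bash
script.raku --help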