Lightweight Markdown-to-PDF converter: pfft

I Fight for the Users

I just released the first version of my new Markdown-to-PDF converter, pfft.  It runs on Linux, at least as far back as Ubuntu 18.04.

Why yet another converter when pandoc and many others already do the job?  Size.  Pandoc uses TeX to generate PDFs.  All the other converters I found use a Web browser in one way or another to make the PDF.  Pandoc is 50 MB by itself, not counting TeX!  In less than 1.2 MB (a 5.25″ floppy 😉 ) and a single file, pfft will do the job.

Of course, there is a catch: pfft uses Pango and Cairo to make the PDFs.  Those have their own dependencies, but are installed on many Ubuntu systems by default!  So pfft itself does not add to the dependency load those systems already carry.

(By the way, I use and appreciate both TeX and Pandoc.  They are great tools! …


RakuPlay introduction

I’ve recently launched an experimental service called RakuPlay. It allows users to run Raku code snippets against different versions of Rakudo, including specific Rakudo commit SHAs.

It also supports automatic installation of Raku modules using Rakufile syntax.

A typical user page looks like this:

The Rakudo developer page allows one to run code against specific Rakudo commits:

Once a user hits the “submit” button, RakuPlay runs the code in the respective Docker container ( you can also choose an OS image ).

The first run takes a while, as the RakuPlay environment is not yet set up, but subsequent runs should be pretty fast ( as RakuPlay reuses existing environments ).

Once the code is executed, a user can find the code execution report among the available reports:

Reports are kept in the system for a while ( 10K maximum ), so you can share a build with others via an HTTP link – see for example the HTTP::Tiny report or the Initial set of tests one dim native shaped str arrays report.

The future of the project

I started the project just for fun, and because 99% of the code was already there as a part of the Rakudist project.

If the Raku community finds the project promising, maybe I could invest more time in it.

Some benefits from my point of view:

For Rakudo developers:

* Rakudo Commits. Rakudo developers could easily run any code (including code using Raku modules) and share results. One doesn’t need a Rakudo compiled to a certain version to run code against; all you need is a browser.

* Common Platform. RakuPlay could be a common platform for all devs to share results, discuss, etc. It could contain code examples, user scenarios, use cases and test results. It could be a good addition to the IRC channel.

* Quick Tests. Sometimes people forget or don’t want to write test cases for their commits, maybe because it takes a bit more effort compared to the code changes themselves ( somehow I’ve found quite a number of “tests needed” issues in the Rakudo repo ). RakuPlay could be a “draft” where the author of a commit or issue reproduces their idea in code and gives a link to others. Later, one can pick up an existing RakuPlay build and “replay” it against other commits. The build is always complete and informative, as it contains a Rakudo version and a code snippet, as well as the output. Later on, a dev could convert a draft into a real Roast test.

For the Raku community as a whole:

* The same idea applies to the whole community, with a slight variation. People could easily run any code to give examples of how to use their code ( Raku module authors ) or to express problems they’ve encountered running someone else’s code ( e.g. referencing RakuPlay builds from GitHub issues ).

In the long run, the service could facilitate the growth of the Raku language and make it easier for newbies to learn it.

Thank you for reading. Please share your feedback on Reddit.


Raku-Utils Proposal

Sparrow is a Raku-based automation tool that comes with the idea of Sparrow plugins – small reusable pieces of code that run as command line tools or Raku functions.


my %state = task-run "say name", "name", %(
  bird => "Sparrow"
);

say %state<name>;


$ s6 --plg-run name@bird=Sparrow

One can even create wrappers for existing command line tools converting them into Raku functions:

Wrapper code:

$ cat task.bash

curl $(config args)

Raku function:

task-run ".", %(
  args => [
      "output" => "data.html"

Wrappers for Raku modules command line scripts

Many Raku module authors nowadays ship their distributions with command line tools to provide handy console functionality for their modules.

It’s relatively easy to repackage those tools into Sparrow plugins. For example, for the App::Mi6 module’s mi6 tool:

task-run "mi6 release", "raku-utils-mi6", %(

  args => [
      jobs => 2


Sparrow wrapper:

$ cat task.bash

mi6 $(config args)

$ cat sparrow.json

{
    "name" : "raku-utils-mi6",
    "description" : "mi6 cli",
    "version" : "0.0.1",
    "category" : "utils"
}

$ cat depends.raku


The last file is needed so that Sparrow can install the Raku module dependency during plugin installation.

So eventually we might have a repository of raku-utils plugins for every Raku module exposing a command line interface:

$ s6 --search raku-utils

One day, I might create a script that downloads all zef distributions, sorts out those having bin/ scripts and creates Sparrow wrappers for all of them. That would add dozens of new plugins to the existing Sparrow ecosystem at no cost.

And this would make it possible to run those scripts as pure Raku functions, using the Sparrow plugins interface!
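To make the idea concrete, here is a hedged Python sketch of such a generator script: it scans a directory of fetched distributions for bin/ scripts and emits one sparrow.json-style record per script. The directory layout and plugin names here are made up for illustration; a real script would first fetch the distributions via zef.

```python
import json
import os
import tempfile

# Build a throwaway distribution layout: one dist ships a bin/ script,
# the other doesn't. The raku-utils-* naming mirrors the proposal above.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "App-Mi6", "bin"))
open(os.path.join(root, "App-Mi6", "bin", "mi6"), "w").close()
os.makedirs(os.path.join(root, "Text-CSV"))

plugins = []
for dist in sorted(os.listdir(root)):
    bin_dir = os.path.join(root, dist, "bin")
    if not os.path.isdir(bin_dir):
        continue  # skip distributions without command line scripts
    for script in sorted(os.listdir(bin_dir)):
        # one sparrow.json-style record per command line script
        plugins.append({
            "name": f"raku-utils-{script}",
            "description": f"{script} cli",
            "version": "0.0.1",
            "category": "utils",
        })

print(json.dumps(plugins, indent=2))
```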


I’ve introduced the idea of adding Sparrow plugins for existing Raku command line tools shipped as a part of Raku modules.

I’d be happy to get feedback on that.



RakuOps. Issue Number 2.

The RakuOps series is an attempt to show people who write in Raku how to use the language in daily DevOps tasks – automation, configuration management, Docker containers and so on.

It’s been two weeks since I started playing with Sparrowdo – an automation tool written in Raku and based on the Sparrow automation framework. Now it’s time to share some cool features I’ve added recently. But before I do that, let me remind you how it all started.

Multiple hosts management

After publishing issue number 1, I received a comment from @bobthecimerian in the r/rakulang Reddit post:

“Assume for the sake of discussion that I want to manage 5 machines with Sparrow6 and run Docker on all of them. Do I have to install Sparrow6 on all of them, and deploy Sparrow6 tasks to all of them? Then I use ssh, or ssh through the Sparrow6 DSL, to run tasks that install Docker and other software? Do I have to manage ssh authorized keys and network addresses for each machine that I am configuring myself, or does Sparrow6 have tasks or other tools to make that management easier?”

So, I thought – “Wait … what a cool use case I can reveal here. I just need to add some features to Sparrowdo and that’s it!”


The idea of managing multiple hosts is quite common. Say you have a bunch of related VMs in your network, and you want to manage them consistently – installing the same packages, running services, and so on. Or you have a multi-tier application – frontend/backend/database – and you need to manage the configuration of each node specifically, but still need to connect those nodes through different protocols. Of course, in the days of immutable infrastructure and Kubernetes these types of tasks could be solved using Docker. But what if I want something lightweight, flexible and not involving industrial-scale efforts? Here is where Sparrowdo could be a good alternative, especially for people writing in Raku.


This is what we need for this tutorial. You don’t have to install these tools unless you want to experiment with the topic in practice:

* Terraform to create ec2 instances in Amazon AWS
* A free tier Amazon account
* AWS cli to launch ec2 instances with Terraform
* Sparrowdo to provision hosts
* Sparky – a Sparrowdo backend to asynchronously execute Sparrowdo scenarios

Spin up infrastructure

Creating bare bone infrastructure is relatively easy with Terraform – a multi-cloud infrastructure deployment tool and the de-facto industry standard for infrastructure management. I am not a big fan of Terraform’s declarative-style DSL, but it works really well when we just need to spin up an infrastructure without a provisioning stage (see later).

So let’s create a Terraform scenario to create 3 ec2 Linux instances with Ubuntu OS, representing frontend, backend and database nodes:

$ mkdir ~/terraform-example
$ cd ~/terraform-example
$ nano

resource "aws_instance" "example" {

  ami           = "ami-2757f631"
  instance_type = "t2.micro"
  key_name = "mylaptop"

  tags = {
    Name = "frontend"

resource "aws_instance" "example2" {
  ami           = "ami-2757f631"
  instance_type = "t2.micro"
  key_name = "mylaptop"

  tags = {
    Name = "backend"

resource "aws_instance" "example3" {
  ami           = "ami-2757f631"
  instance_type = "t2.micro"
  key_name = "my-key"

  tags = {
   Name = "database"

Ssh keys

But before we launch the Terraform script, we need to enable a passwordless ssh setup to allow the Sparrowdo provision stage to run from my laptop.

What I need is to generate an ssh key and import its public part to my Amazon account. When Terraform creates the ec2 instances it will reference this key, which makes Amazon insert the public part into the hosts’ configuration and finally enables passwordless ssh connects from my laptop to those hosts:

$ ssh-keygen -t rsa -C "my-key" -f ~/.ssh/my-key

$ aws ec2 import-key-pair --key-name "my-key" --public-key-material fileb://~/.ssh/

The clever bit here is that we create a key pair named "my-key" and reference it inside Terraform using the key_name attribute.

Run terraform

Now let’s run terraform to create our first infrastructure consisting of 3 hosts.

$ terraform apply -auto-approve

aws_instance.example: Creating…
aws_instance.example2: Creating…
aws_instance.example3: Creating…
aws_instance.example: Still creating… [10s elapsed]
aws_instance.example2: Still creating… [10s elapsed]
aws_instance.example3: Still creating… [10s elapsed]
aws_instance.example: Still creating… [20s elapsed]
aws_instance.example2: Still creating… [20s elapsed]
aws_instance.example3: Still creating… [20s elapsed]
aws_instance.example2: Creation complete after 24s [id=i-0af378c47f68a1250]
aws_instance.example3: Creation complete after 24s [id=i-082ad29992e0c83eb]
aws_instance.example: Creation complete after 24s [id=i-0c15a8a728ad71302]

Once we apply the Terraform configuration to AWS, in literally seconds we get 3 ec2 instances with Ubuntu OS up and running in the Amazon cloud. Cool!


In DevOps terminology, provisioning is the stage when we apply configuration to bare bone infrastructure resources, for example virtual machines. This is where Sparrowdo starts shining, because it’s what the tool was designed for.

Let’s install Sparrowdo itself first. Sparrowdo is installed as a zef module:

$ zef install Sparrowdo --/test

Now let’s create a simple Sparrowdo scenario which will define provision logic.

Our first scenario – sparrowfile – will be as simple as that:

mkdir -p ~/sparrowdo-examples
cd ~/sparrowdo-examples
nano sparrowfile

package-install "nano";

Installing the nano editor ( which I am a big fan of ) on all the nodes should be enough to test our first simple Sparrowdo configuration.


Because we are going to run Sparrowdo in asynchronous mode, we need to install Sparky – an asynchronous Sparrowdo runner. As a benefit, it comes with a nice web UI where build statuses are tracked and logs are visible:

$ mkdir ~/sparky-git
$ cd ~/sparky-git
$ git clone
$ zef install .

$ mkdir -p ~/.sparky/projects
$ raku db-init.pl6

$ nohup sparkyd &
$ nohup raku bin/sparky-web.pl6

The last 3 commands initialize the Sparky internal database and run the Sparky queue dispatcher along with the Sparky web UI, which is accessible at a web endpoint.

But before we try to run any Sparrowdo provision, let’s understand how we learn the hosts’ network addresses, bearing in mind that we don’t want to hardcode them into our configuration.

Terraform state

What is cool about Terraform is that it keeps infrastructure internal data in a special JSON file called the state:

$ cat ~/terraform-example/terraform.tfstate
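To get a feel for the structure before writing the Raku script, here is a hedged Python sketch over a synthetic state fragment. Only the fields used by the inventory script below (resources[].type and instances[0].attributes.public_dns) are taken from the real layout; all values are made up.

```python
import json

# Minimal synthetic terraform.tfstate fragment; a real state file
# carries much more data than shown here.
state = json.loads("""
{
  "resources": [
    {
      "type": "aws_instance",
      "instances": [
        {
          "attributes": {
            "public_dns": "ec2-host-1.example",
            "tags": { "Name": "frontend" }
          }
        }
      ]
    },
    { "type": "aws_key_pair", "instances": [] }
  ]
}
""")

# Pick public DNS names of all aws_instance resources, skipping other types
hosts = [r["instances"][0]["attributes"]["public_dns"]
         for r in state["resources"]
         if r["type"] == "aws_instance"]

print(hosts)
```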

So it’s relatively easy to create a simple Raku script that parses the file and fetches all required configuration data:

$ cd ~/sparrowdo-examples
$ nano

use JSON::Tiny;

my $data = from-json("/home/melezhik/terraform-example/terraform.tfstate".IO.slurp);

my @aws-instances = $data<resources><>.grep({
  .<type> eq "aws_instance"
}).map({
  %(
    host => .<instances>[0]<attributes><public_dns>
  )
});

@aws-instances;

If we dump the @aws-instances array, we will see all 3 instances with their public DNS address data:

    host => "",
    host => "",
    host => "",

If we pass the script as the host parameter, Sparrowdo will be clever enough to run it and, because the last script statement is the @aws-instances array, take that as the input hosts list:

$ sparrowdo --host=aws.raku --ssh_user=ubuntu --bootstrap

queue build for [] on [worker-3]
queue build for [] on [worker-2]
queue build for [] on [worker-2]

This command will launch the nano editor installation on all 3 hosts. The --bootstrap flag asks Sparrowdo to install all Sparrow dependencies first, because we run the provision for the first time.

As seen in the output, Sparrowdo has triggered 3 builds and they got added to the Sparky queue. If we open up the Sparky web UI, we can see that 2 builds are already being executed:

And the third one is kept in a queue:

After a while we can see that all 3 instances are provisioned:

So all 3 hosts have been successfully provisioned. If we ssh to any of the hosts, we will see that the nano editor is present.

Build logs

The Sparky UI allows one to see build logs, where one can find a lot of details on how the configuration was provisioned. For example:

rakudo-pkg is already the newest version (2020.06-01).
0 upgraded, 0 newly installed, 0 to remove and 117 not upgraded.
===> Installing: Sparrow6:ver<0.0.25>

1 bin/ script [s6] installed to:
18:37:03 07/16/2020 [repository] index updated from
18:37:07 07/16/2020 [install package(s): nano.perl] trying to install nano ...
18:37:07 07/16/2020 [install package(s): nano.perl] installer - apt-get
18:37:07 07/16/2020 [install package(s): nano.perl] Package: nano
18:37:07 07/16/2020 [install package(s): nano.perl] Version: 2.5.3-2ubuntu2
18:37:07 07/16/2020 [install package(s): nano.perl] Status: install ok installed
[task check] stdout match <Status: install ok installed> True

Now let’s see how we can provision hosts specifically, depending on the roles assigned to them. Remember we have frontend, backend and database hosts?

Custom configurations

The latest Sparrowdo release comes with an awesome feature called tags. Tags allow one to assign arbitrary variables per host, and to branch installation logic depending on those variables.

Let’s tweak the host inventory script so that the resulting @aws-instances array includes elements with tags:

    host => "",
    tags => "aws,frontend" 
    host => "",
    tags => "aws,backend"
    host => "",
    tags => "aws,database"

As one can see, tags are basically plain strings with comma-separated values.

To handle tags within Sparrowdo scenarios, one should use the tags() function:

$ nano sparrowdo-examples/sparrowfile

if tags()<database> {

  # Database specific code here

  package-install "mysql-server";

} elsif tags()<backend> {

  # Install backend application
  # and dependencies
  package-install "mysql-client";

  user "app";

  directory "/home/app/cro-example", %(
    owner => "app",
    group => "app"
  );

  git-scm "", %(
    user => "app",
    to => "/home/app/cro-example"
  );

  zef ".", %(
    user => "app",
    cwd => "/home/app/cro-example"
  );

} elsif tags()<frontend> {

  # Install Nginx server
  # as a frontend
  package-install "nginx";

}


This simple example shows that we can create a single provision scenario where different nodes are configured differently depending on their roles.

Now we can run Sparrowdo the same way as we did before, and node configurations will be updated according to their types:

$ cd ~/sparrowdo-examples

$ sparrowdo --ssh_user=ubuntu

Filtering by tags

Another cool thing about tags is that one can pass --tags as a command line argument, and it will act as a filter leaving only certain types of hosts. Say we only want to update the database host:

$ sparrowdo --ssh_user=ubuntu --tags=database

If we pass multiple tags using a "," delimiter – e.g. --tags=database,production – it acts as an AND condition and will only process hosts that have both the database and production tags.
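As an illustration, here is a hedged Python sketch of such an AND filter (not Sparrowdo's actual implementation; the host names and tag sets are made up):

```python
# Inventory with comma-separated tag strings, as described above
hosts = [
    {"host": "host-1", "tags": "aws,database,production"},
    {"host": "host-2", "tags": "aws,database,staging"},
    {"host": "host-3", "tags": "aws,frontend,production"},
]

def filter_hosts(hosts, tags_arg):
    # every tag passed via --tags must be present on the host (AND semantics)
    wanted = set(tags_arg.split(","))
    return [h["host"] for h in hosts
            if wanted <= set(h["tags"].split(","))]

print(filter_hosts(hosts, "database,production"))  # only host-1 has both tags
```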

Hosts attributes

And last but not least, tags support key/value data. If you set a tag in name=value format, Sparrowdo will process it as a named attribute:

my $v = tags()<name>;
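A hypothetical sketch (in Python, not Sparrowdo's implementation) of how such tag strings could be parsed – plain entries act as boolean flags, name=value entries become named attributes:

```python
def parse_tags(tag_string):
    # split the comma-separated tag string into flags and attributes
    tags = {}
    for entry in tag_string.split(","):
        if "=" in entry:
            name, value = entry.split("=", 1)
            tags[name] = value      # name=value entry -> named attribute
        else:
            tags[entry] = True      # plain entry -> boolean flag
    return tags

print(parse_tags("aws,backend,backend_ip=10.0.0.5"))
```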

This is how we pass arbitrary data into the Sparrowdo context using the same tag syntax. For example, let’s modify the hosts inventory script to pass the IP address of the backend node:

$ nano ~/sparrowdo-examples/

use JSON::Tiny;

my $data = from-json("/home/melezhik/terraform-example/terraform.tfstate".IO.slurp);

my $backend-ip;

my @aws-instances = $data<resources><>.grep({
  .<type> eq "aws_instance"
}).map({

  if .<instances>[0]<attributes><tags><Name> eq "backend" {
    $backend-ip = .<instances>[0]<attributes><public_ip>
  }

  %(
    host => .<instances>[0]<attributes><public_dns>,
    tags => "name={.<instances>[0]<attributes><tags><Name>}"
  )
});

for @aws-instances -> $i {
  $i<tags> ~= ",backend_ip={$backend-ip}"
}

@aws-instances;


Now the @aws-instances array has the following structure:

    host => "",
    tags => "aws,frontend,backend_ip=" 
    host => "",
    tags => "aws,backend,backend_ip="
    host => "",
    tags => "aws,database,backend_ip="

So, for the database part we might have the following Sparrowdo scenario, to allow the host with backend_ip to connect to the mysql server:

if tags()<database> {

  my %state = task-run "set mysql", "set-mysql", %(
    user       => "test",
    database   => "test",
    allow_host => tags()<backend_ip>
  );

  if %state<restart> {
    service-restart "mysql"
  }

}

Let’s rerun Sparrowdo to apply the changes to the MySQL server:

$ sparrowdo --ssh_user=ubuntu --tags=database

Other host formats

Sparrowdo supports different host formats, including localhost and docker; please read the documentation for more details.


Sparrowdo and Sparky are flexible tools that allow one to asynchronously provision virtual resources. In this tutorial we’ve seen how easily one can spin up a multi-tier application consisting of 3 nodes from scratch.

Moreover, Sparrowdo works nicely with well-known tools like Terraform, which makes it even more attractive and practical.

See you soon in RakuOps issue number 3. Please let me know what you want to hear about next time.

Thank you for reading!

Aleksei Melezhik

RakuOps. Issue 1.

The RakuOps series is an attempt to show people who write in Raku how to use the language in daily DevOps tasks – automation, configuration management, building Docker containers and so on.

While I don’t know for sure which topics will attract the community’s interest, I hope that during this series I’ll get some feedback so I can adjust my future posts according to actual needs.

How to Build Docker Containers Using Raku and Sparrow

This is the first post in the series, where I am going to show how to use Raku and Sparrow – a Raku automation framework – to build Docker images. We will start with a simple Dockerfile example and then we’ll see how to use Sparrow to extend the image building process.


People usually use the Dockerfile DSL to build Docker images. However, the usage of a Dockerfile is limited and quickly gets cumbersome when it comes to more sophisticated cases. The user ends up with extensive shell scripting spread through various RUN commands, which is very hard to maintain in the long run.

Moreover, if one chooses to change the underlying Docker container’s OS, they will have to rewrite all the code, which often contains distro-specific RUN commands.

In this post we will see how to use Raku and the batteries-included Sparrow automation tool to create Docker build scenarios in a more portable and easier to maintain way.

As a result, one can start using Raku to create high-level scenarios, gaining access to all the power of the language. Plenty of Sparrow plugins also reduce the effort of writing code for typical configuration tasks – installing native packages, users, configuration files and so on.


To build a Docker container we will need the following set of tools:

  • Rakudo
  • Sparrow
  • Git
  • Docker

Rakudo installation is pretty straightforward, just follow the instructions on the web site.

To install the Sparrow toolkit, we need to install the Sparrow6 Raku module:

zef install --/test Sparrow6

Sparrow bootstrap

To bootstrap Sparrow on a Docker instance we need to build a Docker image first. That image should include the Rakudo and Sparrow binaries. Thanks to @jjmerelo there is a jjmerelo/alpine-raku base Docker image with Alpine Linux and the Rakudo binary pre-installed, so our Dockerfile can be pretty simple:

$ mkdir -p RakuOps/docker-sparrow
$ cd RakuOps/docker-sparrow

$ cat Dockerfile

FROM jjmerelo/alpine-raku
RUN zef install --/test Sparrow6

$ docker build --tag rakuops:1.0 .

Sending build context to Docker daemon  2.048kB
Step 1/2 : FROM jjmerelo/alpine-raku
latest: Pulling from jjmerelo/alpine-raku
df20fa9351a1: Already exists
a901eee946d8: Pull complete
Digest: sha256:3e22846977d60ccbe2d06a47da4a5e78c6aca7af395d57873d3a907bea811838
Status: Downloaded newer image for jjmerelo/alpine-raku:latest
 ---> c0ecb08ec5db
Step 2/2 : RUN zef install --/test Sparrow6
 ---> Running in ae2a0dc8848f
===> Searching for: Sparrow6
===> Updating cpan mirror:
===> Searching for missing dependencies: File::Directory::Tree, Hash::Merge, YAMLish, JSON::Tiny, Data::Dump
===> Searching for missing dependencies: MIME::Base64
===> Installing: File::Directory::Tree:auth<labster>
===> Installing: Hash::Merge:ver<1.0.1>:auth<github:scriptkitties>:api<1>
===> Installing: MIME::Base64:ver<1.2.1>:auth<github:retupmoca>
===> Installing: YAMLish:ver<0.0.5>
===> Installing: JSON::Tiny:ver<1.0>
===> Installing: Data::Dump:ver<v.0.0.11>:auth<github:tony-o>
===> Installing: Sparrow6:ver<0.0.24>

1 bin/ script [s6] installed to:
===> Updated cpan mirror:
===> Updating p6c mirror:
===> Updated p6c mirror:
Removing intermediate container ae2a0dc8848f
 ---> a2cbc605ec5e
Successfully built a2cbc605ec5e
Successfully tagged rakuops:1.0

$ docker images

REPOSITORY             TAG                 IMAGE ID            CREATED             SIZE
rakuops                1.0                 a2cbc605ec5e        3 minutes ago       139MB

First run

Now, having a base Docker image, let’s run our very first Sparrow scenario. All we need is to add a file called sparrowfile using the Docker ADD directive. Our first scenario will be as simple as the Bash “Hello World” echo command:

$ cat sparrowfile

bash "echo 'Hello World'", %(
    description => "hello world"

As one can notice, a Sparrow scenario is just plain Raku code with some DSL constructions. Let’s modify the Dockerfile and rebuild the image.

$ cat Dockerfile

FROM jjmerelo/alpine-raku
RUN zef install --/test Sparrow6
ADD sparrowfile .
RUN raku -MSparrow6::DSL sparrowfile

$ docker build --tag rakuops:1.0 .

Sending build context to Docker daemon  5.632kB
Step 1/4 : FROM jjmerelo/alpine-raku
 ---> c0ecb08ec5db
Step 2/4 : RUN zef install --/test Sparrow6
 ---> Using cache
 ---> a2cbc605ec5e
Step 3/4 : ADD sparrowfile .
 ---> 74c7ee71a303
Step 4/4 : RUN raku -MSparrow6::DSL sparrowfile
 ---> Running in c73e1a7d568f
unknown plugin bash
  in method plugin-install at /root/raku-install/share/perl6/site/sources/5D155994EC979DF8EF1FDED7148646312D9073E3 (Sparrow6::Task::Repository::Helpers::Plugin) line 115
  in sub task-run at /root/raku-install/share/perl6/site/sources/DB0BB8A1D70970E848E2F38D2FC0C39E4F904283 (Sparrow6::DSL::Common) line 12
  in sub bash at /root/raku-install/share/perl6/site/sources/7662EE0EFF4206F474B7CC4AEF229F1A86EC8FFF (Sparrow6::DSL::Bash) line 33
  in sub bash at /root/raku-install/share/perl6/site/sources/7662EE0EFF4206F474B7CC4AEF229F1A86EC8FFF (Sparrow6::DSL::Bash) line 7
  in block <unit> at sparrowfile line 1

The very first run has failed with an unknown plugin bash error, which means one needs to provision Docker with a Sparrow repository – a storage for all dependencies required in Sparrow scenarios.

While there are many ways to do that, for our tutorial the use of a local file repository seems the easiest one.

Local Sparrow repository

A local Sparrow repository contains all Sparrow plugins, deployed to your local file system. To create one, we need to initialize a repository structure first:

$ s6 --repo-init ~/repo

16:41:31 06/29/2020 [repository] repo initialization
16:41:31 06/29/2020 [repository] initialize Sparrow6 repository for /home/scheck/repo

Now that we have an empty repository, let’s populate it with Sparrow plugins taken from the source code. Right now we only need the specific bash plugin, so let’s upload only this one:

$ git clone ~/sparrow-plugins

$ cd ~/sparrow-plugins/bash

$ s6 --upload
16:41:36 06/29/2020 [repository] upload plugin
16:41:36 06/29/2020 [repository] upload bash@0.2.1

Copy repository to Docker cache

We’re going to use the Docker COPY command to copy the repository files into the Docker cache. But first we need to copy the files to the current working directory so they will be available for the COPY command during the Docker build:

$ cp -r ~/repo .

$ cat Dockerfile

FROM jjmerelo/alpine-raku
RUN zef install --/test Sparrow6
RUN apk add bash perl
ADD sparrowfile .
COPY repo/ /root/repo/
RUN s6 --index-update
RUN raku -MSparrow6::DSL sparrowfile

$ docker build --tag rakuops:1.0 .

Sending build context to Docker daemon  11.26kB
Step 1/7 : FROM jjmerelo/alpine-raku
 ---> c0ecb08ec5db
Step 2/7 : RUN zef install --/test Sparrow6
 ---> Using cache
 ---> a2cbc605ec5e
Step 3/7 : RUN apk add bash perl
 ---> Using cache
 ---> d9011d4e64db
Step 4/7 : ADD sparrowfile .
 ---> Using cache
 ---> adb1df57e1c0
Step 5/7 : COPY repo/ /root/repo/
 ---> Using cache
 ---> 3ed6bfaf4183
Step 6/7 : RUN s6 --index-update
 ---> Running in 6edfc480bde7
17:03:59 06/29/2020 [repository] update local index
17:03:59 06/29/2020 [repository] index updated from file:///root/repo/api/v1/index
Removing intermediate container 6edfc480bde7
 ---> 7eccb5889a80
Step 7/7 : RUN raku -MSparrow6::DSL sparrowfile
 ---> Running in af6eb4b2d9ee
17:04:02 06/29/2020 [repository] installing bash, version 0.002001
17:04:05 06/29/2020 [bash: echo Hello World] Hello World

As we can see from the log, the Sparrow scenario successfully finishes, printing “Hello World” to stdout. The line installing bash, version 0.002001 means the Sparrow plugin has been successfully pulled from the Docker cache and installed into the container file system.

Build all plugins

To use the rest of the Sparrow plugins in Docker build scenarios we need to add them to the Docker cache the same way we did for the bash plugin:

$ cd ~/sparrow-plugins
$ find  -maxdepth 2 -mindepth 2 -name sparrow.json -execdir s6 --upload \;
17:11:56 06/29/2020 [repository] upload plugin
17:11:56 06/29/2020 [repository] upload ado-read-variable-groups@0.0.1
17:11:56 06/29/2020 [repository] upload plugin
17:11:56 06/29/2020 [repository] upload ambari-hosts@0.0.1
17:11:57 06/29/2020 [repository] upload plugin
17:11:57 06/29/2020 [repository] upload ansible-install@0.0.2
17:11:58 06/29/2020 [repository] upload plugin
17:11:58 06/29/2020 [repository] upload ansible-tutorial@0.0.1
17:11:59 06/29/2020 [repository] upload plugin
17:11:59 06/29/2020 [repository] upload app-cpm-wrapper@0.0.6
... output truncated ...

Now let’s update the Docker cache by copying the repository files to the current working directory; in the next run the Docker COPY command will pick the files up and push them to the Docker image.

$ cd ~/RakuOps/docker-sparrow/
$ cp -r ~/repo .

Sparrow plugins

Now we’re free to use any plugin we’ve just added. Say we need to install the nano editor in our Docker image. Sparrow provides a cross-platform package-generic plugin to install native packages:

$ cat sparrowfile

package-install "nano";

$ docker build --tag rakuops:1.0 .

Sending build context to Docker daemon  2.012MB
Step 1/7 : FROM jjmerelo/alpine-raku
 ---> c0ecb08ec5db
Step 2/7 : RUN zef install --/test Sparrow6
 ---> Using cache
 ---> a2cbc605ec5e
Step 3/7 : RUN apk add bash perl
 ---> Using cache
 ---> d9011d4e64db
Step 4/7 : ADD sparrowfile .
 ---> 7a3bb7329d46
Step 5/7 : COPY repo/ /root/repo/
 ---> 0c029612c55c
Step 6/7 : RUN s6 --index-update
 ---> Running in 356d29ed8049
17:16:56 06/29/2020 [repository] update local index
17:16:56 06/29/2020 [repository] index updated from file:///root/repo/api/v1/index
Removing intermediate container 356d29ed8049
 ---> 18876a3d6396
Step 7/7 : RUN raku -MSparrow6::DSL sparrowfile
 ---> Running in bd07fecae4f0
17:16:58 06/29/2020 [repository] installing bash, version 0.002001
17:17:00 06/29/2020 [bash: echo Hello World] Hello World
17:17:00 06/29/2020 [repository] installing package-generic, version 0.004001
17:17:02 06/29/2020 [install package(s): nano.perl] fetch
17:17:02 06/29/2020 [install package(s): nano.perl] fetch
17:17:02 06/29/2020 [install package(s): nano.perl] v3.12.0-103-g1699efe1cd []
17:17:02 06/29/2020 [install package(s): nano.perl] v3.12.0-106-g2b11e345c6 []
17:17:02 06/29/2020 [install package(s): nano.perl] OK: 12730 distinct packages available
17:17:03 06/29/2020 [install package(s): nano.perl] trying to install nano ...
17:17:03 06/29/2020 [install package(s): nano.perl] installer - apk
17:17:03 06/29/2020 [install package(s): nano.perl] (1/2) Installing libmagic (5.38-r0)
17:17:03 06/29/2020 [install package(s): nano.perl] (2/2) Installing nano (4.9.3-r0)
17:17:03 06/29/2020 [install package(s): nano.perl] Executing busybox-1.31.1-r19.trigger
17:17:03 06/29/2020 [install package(s): nano.perl] OK: 67 MiB in 32 packages
17:17:03 06/29/2020 [install package(s): nano.perl] Installed:                                Available:
17:17:03 06/29/2020 [install package(s): nano.perl] nano-4.9.3-r0                           = 4.9.3-r0
17:17:03 06/29/2020 [install package(s): nano.perl] nano
Removing intermediate container bd07fecae4f0
 ---> 408d35e1e3fd
Successfully built 408d35e1e3fd
Successfully tagged rakuops:1.0


We’ve just seen how one can use Raku and Sparrow to build Docker images. The advantage of the approach is that one is no longer limited by the Dockerfile syntax and can leverage all the power of Raku to express any sophisticated build logic. On the other hand, Sparrow provides a lot of handy primitives and plugins for typical build tasks, some of which I’m going to share in the next posts.

Managing External Raku Dependencies using Sparrow

A few days ago several discussions were launched where people tried to deal with managing non-Raku / native dependencies for Raku modules. While a solution is far from being found, or at least from being complete, here is my Sparrow take on the problem.


Raku-native-deps is a Sparrow plugin that parses a META6.json file and turns it into native package dependencies. It has a lot of limitations, e.g. only supporting CentOS and only parsing `:from<native>` statements, but it could give one a sense of the approach:

my %state = task-run "get packages", "raku-native-deps", %(
  path => "META6.json"
);

for %state<packages><> -> $i {
  say "package: $i<package>"
}

Basically, one just gives it a path to the module’s META file and the plugin parses the file, converting it to native package dependencies; then it’s possible to install them using the underlying package manager:

for %state<packages><> -> $i {
  package-install $i<package>
}
Full scenario

So the full scenario to install a module with native dependencies would be:

# Fetch the module and get the directory where it's fetched
my %state = task-run 'fetch dbd-sqlite', 'zef-fetch', %(
  identity => 'DBD::SQLite'
);

# Build the native packages list from META6.json
my %state2 = task-run "get packages", "raku-native-deps", %(
  path => "{%state<directory>}/META6.json"
);

# Install native packages (libsqlite3)
for %state2<packages><> -> $i {
  package-install $i<package>;
}

# Install the module; at this point external dependencies are installed,
# so this step will only install Raku dependencies and the module itself
zef "DBD::SQLite";

RakuDist integration

RakuDist – the Raku modules testing service – uses this method to test distributions containing native dependencies. Examples of known modules:

DBD::SQLite ( META6 pull request – )
LibCurl ( META6 pull request – )
GPGME ( META6 pull request – )

Further thoughts

The approach is not complete. Right now it can handle installation of native dependencies for a single module ( but not recursively for the native dependencies of the module’s own dependencies ). One can read the ongoing discussion here – and suggest ideas.

Thanks for reading


RakuDist – Dead Easy Way to Test Raku Cli Applications

Nowadays many Raku module authors ship cli tools as part of their Raku module distributions.
RakuDist provides a dead easy way to test those scripts. The benefit: it takes minimal coding and is
fully integrated into the RakuDist service.

Cli application example

Say we have a script.raku shipped as part of a Raku module.

$ cat bin/script.raku

if @*ARGS[0] eq "--version" {
  say "app version: 0.1.0"
} elsif @*ARGS[0] eq "--help" {
  # print usage info
} else {
  my @params = @*ARGS;
  # do some stuff
}

To test a script installation one needs to create a .tomty/ subdirectory in the module root directory and place some test scenarios there. Scenarios should be written in Tomty – a simple Raku framework for black box testing:

$ mkdir .tomty

$ nano .tomty/00-script-version.pl6

task-run ".tomty/tasks/app-version/";

$ mkdir -p .tomty/tasks/app-version/

$ nano .tomty/tasks/app-version/task.bash

script.raku --version

The 00-script-version scenario runs the script with the --version parameter and verifies a successful exit code.

To verify the script’s STDOUT, create a check file with some Raku regular expressions:

$ nano .tomty/tasks/app-version/task.check

regexp: "app version:" \s+ \d+ '.' \d+ '.' \d+

You can add more scenarios; they will all be executed in a row.
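For example, one could add a second scenario for the --help flag in the same fashion ( the file and directory names here are illustrative; Tomty picks scenarios up by their numeric prefixes ):

```
$ nano .tomty/01-script-help.pl6

task-run ".tomty/tasks/app-help/";

$ mkdir -p .tomty/tasks/app-help/

$ nano .tomty/tasks/app-help/task.bash

script.raku --help
```

With both scenarios in place, 00-script-version and 01-script-help would run one after another on every RakuDist build.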


Ship it and test it!

Now just add .tomty to your CPAN module distribution and the tests will be automatically run by RakuDist!

That is it, stay tuned!

Beta Testing Starts

Thanks to Raku community members @AlexDaniel and @rba who supported the idea of bringing RakuDist to the community infrastructure, so, long story short: is available!

Check the service out to test your Raku distribution against various versions of Rakudo. Ubuntu and Debian OS images are available.

The service is in beta stage, feel free to post bugs or suggestions to RakuDist GitHub project.

There are also API docs if you prefer a programmatic interface instead of launching tests through a web form.

Thanks for reading.


RakuDist Update. Long Queue Short.


RakuDist is a service that enables Raku module authors to test their distributions across different Rakudo and OS versions.

A lot of things have happened to the project recently.

I am too busy to write up all the details, but to make a long story short, there are two important facts.

RakuDist has got a nice web UI, so people can launch builds using a convenient HTML form – try it now!

In the future I’ll probably find a proper domain name for the service, but for now it’s the link just mentioned.

And secondly, RakuDist is now powered by a Sparky backend, which means all builds run through 100% asynchronous queues, and hopefully my VM will cope with the load if people start using the service proactively.

That is it. Thank you for reading. As usual, I’d appreciate comments from the Raku community.





2020.15 An eASTer Surprise

Rakudo Weekly News

Jonathan Worthington tweeted that they finally found the time and the voice to record the presentation they had planned for the German Perl and Raku Workshop. You can either watch the video and/or look through the slides. It basically touches on these four subjects:

  • Where is Rakudo now with regards to macros
  • Why it’s time to overhaul the Rakudo compiler frontend
  • The design of RakuAST, an AST for mere mortals
  • A tentative time-path with milestones

Yours truly is particularly excited about the concept of RakuAST, which should allow building executable code without having to resort to using EVAL, with all of its security and performance implications. Exciting times!

Reintroducing ArrayHash

Sterling Hanenkamp redesigned / refactored their ArrayHash module, which originally predated the Great List Refactor, and wrote a very interesting blog post about it.

So you have an idea for a project…

Then this round of Perl Foundation…

View original post 942 more words