Chasing Rakudo bugs with Sparrow

Since I announced RICH – the Rakudo Issues Confirmation Helper – I've been playing a lot with automating test cases for known Rakudo bugs.

One thing I’ve found really interesting in this journey is how Sparrow could be a good fit to express issues through automation scenarios.

While I am not proposing to substitute the existing Roast test system with Sparrow, I'd like to highlight an alternative approach here, and maybe Rakudo devs will pay attention to the tool 🙂 and start using it one day.

What follows is just some examples and thoughts, and is not meant to be a complete user guide.

Chasing a bug

It all starts with a user describing a bug on the Rakudo GitHub issues page. Let's take a look at a fresh one, issue #4119:

The Problem

Chaining operators are always iffy, however is assoc<chain> doesn’t make a custom operator iffy (unlike is equiv with a chaining operator, which does).

Expected Behavior

raku -e 'sub infix:<ab>($a, $b) is assoc<chain> { True }; say 1 !ab 2'

False

Actual Behavior

raku -e 'sub infix:<ab>($a, $b) is assoc<chain> { True }; say 1 !ab 2'

===SORRY!=== Error while compiling -e

Cannot negate ab because additive operators are not iffy enough

By @leont

So we have 3 essential parts in the issue definition:

* Human readable description
* Expected behavior code + expected output
* Actual behavior code + actual output


Let's analyze these pieces one by one.

Description

This is just a brief description of the issue. It's not meant to be used in the automation process, but it helps developers understand the issue at a high level. Let's skip it.


Expected behavior / Actual behavior

These two bits are the most important for test automation purposes, because they are Raku scenarios expressing the issue. The first, as you could guess, shows example code and the desired output; the second is the same code, but with the real output and, most likely, an unsuccessful exit code.

BDD Approach

The BDD paradigm reinforces the idea of a close relationship between software users and software developers; it tries to bridge the gap between those two groups. One approach is to have users express desired system behavior in runnable scenarios, which are both human-readable specification and test code.

The classical Given/When/Then statement format is one of these methods.

In the case of a Rakudo bug it could be written as:

Given: I have this version of Rakudo 
When: I run this code
Then: it should exit successfully and produce this output

Let’s Sparrow it!

Here comes the most interesting part. Sparrow has some TDD features by design, so it's quite easy to implement the idea through this tool. The rest of the post is just an example of a Sparrow workflow for automating Rakudo bug tests.

Given: I have this version of Rakudo

This statement does not need any explicit coding ( but see raku --version in the following script ) and is "ensured" by the working environment a test gets run against. Usually users catch bugs on their laptops 🙂

When: I run this code

The body of a test is just a simple Bash script that gets executed by the Sparrow command line tool s6. Again, in the spirit of the idea, it should literally reproduce the bug, the way you got it. In most cases a Bash one-liner is enough:

$ mkdir -p issues/4119

issues/4119/task.bash
set -x
set -e

raku --version

raku -e 'sub infix:<ab>($a, $b) is assoc<chain> { True }; say 1 !ab 2';

Then: it should exit successfully and produce this output

Now let's just run the Sparrow task and see the result:

s6 --task-run issues/4119

[sparrowtask] :: run sparrow task issues/4119
[sparrowtask] :: run thing issues/4119
[issues/4119] :: stderr: ++ set -e
++ raku --version
[issues/4119] :: This is Rakudo version 2020.07 built on MoarVM version 2020.07
[issues/4119] :: implementing Raku 6.d.
[issues/4119] :: stderr: ++ raku -e 'sub infix:<ab>($a, $b) is assoc<chain> { True }; say 1 !ab 2'
[issues/4119] :: stderr: ===SORRY!=== Error while compiling -e
Cannot negate ab because additive operators are not iffy enough
at -e:1
[issues/4119] :: stderr: ------> a, $b) is assoc<chain> { True }; say 1 !⏏ab 2
    expecting any of:
        infix
        infix stopper
[issues/4119] :: task exit status: 1
[issues/4119] :: task issues/4119 FAILED

Actual / Expected output

In the given example the sample code exited with an error without even producing the desired output; this is the simplest form of test. But what if the code exits with 0 yet produces wrong output? This is handled quite easily with the Sparrow task check mechanism.

Let's take a look at another known Rakudo issue, #4118:

issues/4118/task.bash

set -x
set -e

raku --version

raku -e "say qq{===};
  say [⊖] (1,2,3), (1,2,3), (1,2,3);
  say [⊖] (0,1,2), (0,1,2), (0,1,2);
  say qq{===}
" 2>&1;

issues/4118/task.check

begin:
===
regexp: ^^ 'Set()' $$
regexp: ^^ 'Set()' $$
===
end:

s6 --task-run issues/4118/

[sparrowtask] :: run sparrow task issues/4118/
[sparrowtask] :: run thing issues/4118/
[issues/4118/] :: stderr: ++ set -e
[issues/4118/] :: stderr: ++ raku --version
[issues/4118/] :: This is Rakudo version 2020.07 built on MoarVM version 2020.07
[issues/4118/] :: implementing Raku 6.d.
[issues/4118/] :: stderr: ++ raku -e 'say qq{===};
  say [⊖] (1,2,3), (1,2,3), (1,2,3);
  say [⊖] (0,1,2), (0,1,2), (0,1,2);
  say qq{===}
'
[issues/4118/] :: ===
[issues/4118/] :: Set()
[issues/4118/] :: Set(0)
[issues/4118/] :: ===
[task check] stdout match (s) <===> True
[task check] stdout match (s) <^^ 'Set()' $$> True
[task check] stdout match (s) <^^ 'Set()' $$> False
[task check] stdout match (s) <===> False
=================
TASK CHECK FAIL

In this test we ensure that the code sample produces Set() twice. The Sparrow task check DSL is very handy in this case.
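
As a side note, the same mechanism could later assert the expected output for issue #4119. A minimal sketch, assuming the fix lands and the desired False output from the issue report, would be a one-line check file:

issues/4119/task.check

regexp: ^^ 'False' $$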


Conclusion

Sparrow allows one to write test scenarios as close as possible to what users get in their environments. It could be an efficient testing and collaboration tool, enabling users and developers to interact on Rakudo bugs efficiently without unnecessary additional layers.

More Sparrow scenarios for Rakudo bugs can be found here – https://github.com/melezhik/RakuPlay/tree/main/issues



Thanks for reading

Sparky on k8s cluster

Sparky is a lightweight CI server written in Raku. It uses Bailador for the UI and Sparrow/Sparrowdo as the automation engine. Initially the server was written to run on a single machine and did not scale well, so it could only handle a small/medium load, mostly working in localhost mode.

Now, with the help of a k8s cluster, Sparky can easily be turned into an industrial-scale CI server:



How does it work?

A user sends requests to run jobs on Sparky, where jobs are arbitrary tasks executed as part of your CI/CD processes.

A user could be the Sparky cron jobs mechanism or real users issuing HTTP requests, including other applications consuming the Sparky file triggering protocol.

Depending on the level of load, k8s will scale workers up or down to handle the requests; this is achieved by the standard Kubernetes auto-scaling mechanism.
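
For instance, assuming the Sparky workers run as a k8s Deployment ( the name sparky-worker below is hypothetical ), the standard autoscaler could be enabled with a single kubectl command:

kubectl autoscale deployment sparky-worker --min=1 --max=10 --cpu-percent=70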

Every k8s node represents a Docker container that runs:

* Sparky web UI instance ( Bailador / Bulma web application )
* Sparkyd – Sparky jobs queue dispatcher
* Runtime environment for jobs execution ( Raku + Sparrow )

Benefits

Using k8s for a Sparky infrastructure has two benefits:

* simplicity and reliability
* scalability


Simplicity

In a k8s setup Sparky runs jobs in Docker containers. It's quite efficient, as Docker containers are mortal and a user doesn't have to worry much if CI/CD scripts break an environment; after all, k8s will re-spawn a new instance in a while if the old one becomes unavailable. And as Docker is immutable by its nature, we don't have to worry much about the state of the underlying Docker instances.

Scalability

One of the reasons people choose Kubernetes is that it handles load automatically. Now we might have dozens of Sparky jobs running in the cluster at the same time, something never achievable with default Sparky running in localhost mode. k8s will take care of the growing load and launch new instances as the workload starts to increase.

Underlying Sparky file system

Sparky uses an SQLite database as well as static files to store job state:

  • SQLite database ( build metadata )
  • static files ( reports, lock and cache files )

Persistent file system

Because Docker by design does not keep state, we need to make some effort to keep the Sparky file system persistent. That means all containers should share the same files and SQLite database, not just copies local to each container. The file system should also survive when underlying Docker instances go away and get relaunched, and not be tied to the Docker containers.

Luckily this is achievable by using the standard k8s volumes mechanism.

A user can choose between different flavors, but they all boil down to the fact that the underlying file system stays permanent across various Docker instances and is thus capable of keeping the Sparky state, as sketched after the list below.

Possible options:

* AzureFile
* CephFS file system
* Persistent Volume Claim
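
As a sketch, a shared claim for the Sparky state might look like this ( a minimal example, assuming the cluster has a storage backend that supports the ReadWriteMany mode, like the CephFS or AzureFile options above ):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sparky-state
spec:
  accessModes:
    - ReadWriteMany # all Sparky containers share the same files and SQLite database
  resources:
    requests:
      storage: 5Gi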


Future thoughts

I have not yet tried to run Sparky in a k8s cluster using the described approach, but I am pretty sure that once it's done Sparky could be used in industrial-level projects. If you want to try Sparky in your company, please give me a shout 🙂

Stay tuned.



Thank you for reading.

Rakudo Issues Player

Rakudo releases can be tough, because the language is still under heavy development and new issues arrive daily. Let me introduce my attempt to help release managers and Rakudo developers keep track of how existing issues are addressed by daily Rakudo commits. Enter the RIP service ( maybe I should choose a better name? ) – Rakudo Issues Player.

The service allows one to describe existing issues as playable Rakudo scenarios that get automatically replayed for every new Rakudo commit.

So we have a recent issues report page with links to Rakudo GH issues and test reports:

Filing a new issue

To file a new issue, one goes to the Rakudo GH issues page as usual and files it. Just one extra step is required so that the issue will be checked against further commits:

* Go to https://rakudist.raku.org/play/
* Fill in your code snippet ( it could be a Test scenario or any Raku code that exits non-zero for negative cases; see the sketch after this list )
* Name your play issue-$issue-number
* Run the play ( this only needs to happen once )
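
For illustration, a hypothetical play for issue #4119 could be written as a Test scenario that exits non-zero while the bug persists ( the EVAL trick is just one way to catch a compile-time error ):

use MONKEY-SEE-NO-EVAL;
use Test;

plan 1;

# the snippet from issue #4119: should compile and print False once the bug is fixed
lives-ok {
  EVAL q[sub infix:<ab>($a, $b) is assoc<chain> { True }; say 1 !ab 2]
}, "chain-associative custom operator can be negated";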

That is it. RIP will never let your issue get abandoned. It'll run the play for every new commit and update the report page.

Feedback

RIP is still at a quite early and experimental stage. I'd like to hear feedback from Rakudo developers and the Raku community.

PS. I have a slogan idea as well. Just a thought :-)). RIP service – let your issues rest in peace.




PS. Update: @Liztormato suggested another name for the service – RICH – Rakudo Issues Confirmation Helper.



Alexey

Lightweight Markdown-to-PDF converter: pfft

I Fight for the Users

I just released the first version of my new Markdown-to-PDF converter, pfft.  It runs on Linux, at least as far back as Ubuntu 18.04.

Why yet another converter when pandoc and many others already do the job?  Size.  Pandoc uses TeX to generate PDFs.  All the other converters I found use a Web browser in one way or another to make the PDF.  Pandoc is 50 MB by itself, not counting TeX!  In less than 1.2 MB (a 5.25″ floppy 😉 ) and a single file, pfft will do the job.

Of course, there is a catch: pfft uses Pango and Cairo to make the PDFs.  Those have their own dependencies, but are installed on many Ubuntu systems by default!  So pfft itself does not add to the dependency load those systems already carry.

(By the way, I use and appreciate both TeX and Pandoc.  They are great tools! …


RakuPlay introduction

I've recently launched an experimental service called RakuPlay. It allows users to run Raku code snippets against different versions of Rakudo, including specific Rakudo commit SHAs.

It also supports automatic Raku modules installation using Rakufile syntax.

A common user page looks like this:



The Rakudo developer page allows one to run code against certain Rakudo commits:



Once a user hits the "submit" button, RakuPlay will run the code in the respective Docker container ( you can also choose an OS image ).

It takes a while on the first run, as the RakuPlay environment is not yet set up, but subsequent runs should be pretty fast ( as RakuPlay will reuse existing environments ).

Once the code is executed, a user can find the execution report among the available reports:



Reports are kept in the system for a while ( 10K maximum ), so you can share a build with others via an HTTP link – see for example the HTTP::Tiny report or the "Initial set of tests one dim native shaped str arrays" report.


The future of the project

I started the project just for fun, and because 99% of the code was already there as part of the RakuDist project.

If the Raku community finds the project promising, maybe I could invest more time in it.

Some benefits from my point of view:

For Rakudo developers:

* Rakudo Commits. Rakudo developers could easily run any code ( including code using Raku modules ) and share the results. One doesn't need to have Rakudo compiled at a certain version to run code against; all you need is a browser.

* Common Platform. RakuPlay could be a common platform for all devs to share results, discuss, etc. RakuPlay could contain code examples, user scenarios, use cases and test results. It could be a good addition to the IRC channel.

* Quick Tests. Sometimes people forget or don't want to write test cases for their commits, maybe because it takes a bit more effort in comparison with the code changes ( somehow I've found quite a number of "tests needed" issues in the Rakudo repo ). RakuPlay could be a "draft" where the author of a commit or issue reproduces their idea in code and gives a link to others. Later one can pick up an existing RakuPlay build and "replay" it against other commits. The build is always complete and informative, as it contains a Rakudo version and a code snippet, as well as the output. Later on a dev could convert a draft into a real Roast test.

For Raku community as a whole

* The same idea applies to the whole community, just with a slight variation. People could easily run any code to give examples of how to use their code ( Raku module authors ) or to express problems they've encountered running someone else's code ( e.g. referencing RakuPlay builds from GH issues ).

In the long run, the service could facilitate the growth of the Raku language and make it easier for newbies to learn it.



Thank you for reading. Please share your feedback on Reddit.

Alexey

Raku-Utils Proposal

Sparrow is a Raku-based automation tool that comes with the idea of Sparrow plugins – small reusable pieces of code run from the command line or as Raku functions.

Raku:

my %state = task-run "say name", "name", %(
  bird => "Sparrow"
);

say %state<name>;

Cli:

$ s6 --plg-run name@bird=Sparrow

One can even create wrappers for existing command line tools converting them into Raku functions:

Wrapper code:

$ cat task.bash

curl $(config args)

Raku function:

task-run ".", %(
  args => [
    ['fail','location'],
    %(
      "output" => "data.html"
    );
    'http://raku.org'
  ]
);
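
Presumably the args structure above maps arrays to long options, hashes to options with values, and plain strings to positional arguments, so the wrapper would end up executing something like:

curl --fail --location --output data.html http://raku.org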

Wrappers for Raku modules command line scripts

Many Raku module authors nowadays ship their distributions with command line tools to provide handy console functionality for their modules.

It's relatively easy to repackage those tools into Sparrow plugins. For example, for the App::Mi6 module's mi6 tool:

task-run "mi6 release", "raku-utils-mi6", %(

  args => [
    'release',
    ['verbose'],
    %(
      jobs => 2
    )
  ]

);

Sparrow wrapper:

$ cat task.bash

mi6 $(config args)
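
Under the same assumption about the args expansion, the task-run call above would presumably execute something like:

mi6 release --verbose --jobs 2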

$ cat sparrow.json
{
    "name" : "raku-utils-mi6",
    "description" : "mi6 cli",
    "version" : "0.0.1",
    "category" : "utils"
}

$ cat depends.raku

App::Mi6

The last file is needed so that Sparrow can install the Raku module dependency during plugin installation.

So eventually we might have a repository of raku-utils plugins for every Raku module exposing a command line interface:

$ s6 --search raku-utils

One day, I might create a script that downloads all zef distributions, sorts out those having bin/ scripts, and creates Sparrow wrappers for all of them. That would add dozens of new plugins to the existing Sparrow ecosystem at no cost.

And this would make it possible to run those scripts as pure Raku functions, using the Sparrow plugin interface!

Conclusion

I've introduced the idea of adding Sparrow plugins for existing Raku command line tools shipped as part of Raku modules.

I'd be happy to get feedback on that.

Thanks

Alexey

RakuOps. Issue Number 2.

RakuOps series – an attempt to show people who write in Raku how to use the language in daily DevOps tasks – automation, configuration management, Docker containers and so on.

It's been two weeks that I've been playing with Sparrowdo – an automation tool written in Raku and based on the Sparrow automation framework. Now it's time to share some cool features I've added recently. But before doing that, let me remind you how it all started.

Multiple hosts management

After publishing an issue number 1, I received a comment from @bobthecimerian in r/rakulang reddit post:

“Assume for the sake of discussion that I want to manage 5 machines with Sparrow6 and run Docker on all of them. Do I have to install Sparrow6 on all of them, and deploy Sparrow6 tasks to all of them? Then I use ssh, or ssh through the Sparrow6 DSL, to run tasks that install Docker and other software? Do I have to manage ssh authorized keys and network addresses for each machine that I am configuring myself, or does Sparrow6 have tasks or other tools to make that management easier?”

So, I thought – “Wait … what a cool use case I can reveal here, I just need to add some features to Sparrowdo and that is it!”

Why?

The idea of managing multiple hosts is quite common. Say, you have a bunch of related VMs in your network and you want to manage them consistently – installing the same packages, running services, and so on. Or you have a multi-tier application – frontend/backend/database – and you need to manage the configuration of each node specifically, but still need to connect those nodes through different protocols. Of course, in the days of immutable infrastructure and Kubernetes these types of tasks could be solved using Docker. But what if I want something lightweight, flexible and not involving industrial-scale efforts? Here is where Sparrowdo could be a good alternative, especially for people writing in Raku.

Dependencies

This is what we need for this tutorial. You don't have to install these tools unless you want to experiment with the topic in practice, but here they are:

* Terraform to create EC2 instances in Amazon AWS
* A free tier Amazon account
* The AWS CLI to launch EC2 instances with Terraform
* Sparrowdo to provision hosts
* Sparky – a Sparrowdo backend to asynchronously execute Sparrowdo scenarios

Spin up infrastructure

Creating bare-bones infrastructure is relatively easy with Terraform – a multi-cloud infrastructure deployment tool. It's the de facto industry standard for infrastructure management. I am not a big fan of Terraform's declarative-style DSL, but it works really well when we just need to spin up infrastructure without a provisioning stage (see later).

So let's create a Terraform scenario that creates 3 EC2 Linux instances with Ubuntu, representing frontend, backend and database nodes:

$ mkdir ~/terraform-example
$ cd terraform-example
$ nano example.tf

resource "aws_instance" "example" {

  ami           = "ami-2757f631"
  instance_type = "t2.micro"
  key_name = "my-key"

  tags = {
    Name = "frontend"
  }
}

resource "aws_instance" "example2" {
  ami           = "ami-2757f631"
  instance_type = "t2.micro"
  key_name = "my-key"

  tags = {
    Name = "backend"
  }
}

resource "aws_instance" "example3" {
  ami           = "ami-2757f631"
  instance_type = "t2.micro"
  key_name = "my-key"

  tags = {
   Name = "database"
  }
}

Ssh keys

But before we launch the Terraform script, we need to set up passwordless SSH to allow the Sparrowdo provision stage to run from my laptop.

What I need is to generate an SSH key and import its public part into my Amazon account. When Terraform creates the EC2 instances it will reference this key, which makes Amazon insert the public part into the hosts' configurations and finally enables passwordless SSH connects from my laptop to those hosts:

$ ssh-keygen -t rsa -C "my-key" -f ~/.ssh/my-key

$ aws ec2 import-key-pair --key-name "my-key" --public-key-material fileb://~/.ssh/my-key.pub

The clever bit here is that we create a key pair named "my-key" and reference it inside Terraform using the key_name attribute.

Run terraform

Now let’s run terraform to create our first infrastructure consisting of 3 hosts.

$ terraform apply -auto-approve

aws_instance.example: Creating…
aws_instance.example2: Creating…
aws_instance.example3: Creating…
aws_instance.example: Still creating… [10s elapsed]
aws_instance.example2: Still creating… [10s elapsed]
aws_instance.example3: Still creating… [10s elapsed]
aws_instance.example: Still creating… [20s elapsed]
aws_instance.example2: Still creating… [20s elapsed]
aws_instance.example3: Still creating… [20s elapsed]
aws_instance.example2: Creation complete after 24s [id=i-0af378c47f68a1250]
aws_instance.example3: Creation complete after 24s [id=i-082ad29992e0c83eb]
aws_instance.example: Creation complete after 24s [id=i-0c15a8a728ad71302]


Once we apply the Terraform configuration to AWS, in literally seconds we get 3 EC2 instances with Ubuntu up and running in the Amazon cloud. Cool!

Sparrowdo

In DevOps terminology, provisioning is the stage where we apply configuration to bare-bones infrastructure resources, for example virtual machines. This is where Sparrowdo starts shining, because this is what the tool was designed for.

Let’s install Sparrowdo itself first. Sparrowdo is installed as a zef module:

$ zef install Sparrowdo --/test

Now let's create a simple Sparrowdo scenario which will define the provision logic.

Our first scenario – sparrowfile – will be as simple as that:

mkdir -p ~/sparrowdo-examples
cd ~/sparrowdo-examples
nano sparrowfile

package-install "nano";

Installing the nano editor ( which I am a big fan of ) on all the nodes should be enough to test our first simple Sparrowdo configuration.

Sparky

Because we are going to run Sparrowdo in asynchronous mode, we need to install Sparky – an asynchronous Sparrowdo runner. As a benefit, it comes with a nice web UI where build statuses are tracked and logs are visible:

$ mkdir ~/sparky-git
$ cd ~/sparky-git
$ git clone https://github.com/melezhik/sparky.git
$ cd sparky
$ zef install .

$ mkdir -p ~/.sparky/projects
$ raku db-init.pl6

$ nohup sparkyd &
$ nohup raku bin/sparky-web.pl6 &

The last 3 commands initialize the Sparky internal database and run the Sparky queue dispatcher along with the Sparky web UI, which is accessible at the 127.0.0.1:3000 endpoint.

But before we try to run any Sparrowdo provisioning, let's figure out how we learn the hosts' network addresses, bearing in mind that we don't want to hardcode them into our configuration.

Terraform state

What is cool about Terraform is that it keeps internal infrastructure data in a special file called the state, in JSON format:

$ cat ~/terraform-example/terraform.tfstate

So it’s relatively easy to create a simple Raku script that parses the file and fetches all required configuration data:

$ cd ~/sparrowdo-examples
$ nano hosts.aws.raku

use JSON::Tiny;

my $data = from-json("/home/melezhik/terraform-example/terraform.tfstate".IO.slurp);

my @aws-instances = $data<resources><>.grep({
  .<type> eq "aws_instance"
}).map({
  %(
    host => .<instances>[0]<attributes><public_dns>
  )
});

@aws-instances;

If we dump the @aws-instances array, we will see all 3 instances with their public DNS addresses:

[
  {
    host => "ec2-54-237-6-19.compute-1.amazonaws.com",
  },
  {
    host => "ec2-52-23-177-193.compute-1.amazonaws.com",
  },
  {
    host => "ec2-54-90-19-170.compute-1.amazonaws.com",
  },
]

If we pass the script as the host parameter, Sparrowdo will be clever enough to run it and, because the last statement of the script is the @aws-instances array, take it as the input hosts list:

$ sparrowdo --host=hosts.aws.raku --ssh_user=ubuntu --bootstrap

queue build for [ec2-54-237-6-19.compute-1.amazonaws.com] on [worker-3]
queue build for [ec2-52-23-177-193.compute-1.amazonaws.com] on [worker-2]
queue build for [ec2-54-90-19-170.compute-1.amazonaws.com] on [worker-2]

This command will launch the nano editor installation on all 3 hosts. The --bootstrap flag asks Sparrowdo to install all Sparrow dependencies first, because we run the provisioning for the first time.

As seen in the output, Sparrowdo has triggered 3 builds and they got added to the Sparky queue. If we open up the Sparky web UI we can see that 2 builds are already being executed:

And the third one is kept in a queue:

After a while we can see that all 3 instances are provisioned:

So all 3 hosts have been successfully provisioned. If we ssh to any of the hosts, we will see that the nano editor is present.

Build logs

The Sparky UI allows one to see build logs, where one can find a lot of details on how the configuration was provisioned. For example:

rakudo-pkg is already the newest version (2020.06-01).
0 upgraded, 0 newly installed, 0 to remove and 117 not upgraded.
===> Installing: Sparrow6:ver<0.0.25>

1 bin/ script [s6] installed to:
/opt/rakudo-pkg/share/perl6/site/bin
18:37:03 07/16/2020 [repository] index updated from http://rakudist.raku.org/repo//api/v1/index
18:37:07 07/16/2020 [install package(s): nano.perl] trying to install nano ...
18:37:07 07/16/2020 [install package(s): nano.perl] installer - apt-get
18:37:07 07/16/2020 [install package(s): nano.perl] Package: nano
18:37:07 07/16/2020 [install package(s): nano.perl] Version: 2.5.3-2ubuntu2
18:37:07 07/16/2020 [install package(s): nano.perl] Status: install ok installed
[task check] stdout match <Status: install ok installed> True


Now let's see how we can provision hosts individually, depending on the roles assigned to them. Remember we have frontend, backend and database hosts?

Custom configurations

The latest Sparrowdo release comes with an awesome feature called tags. Tags allow one to assign arbitrary variables to each host and branch the installation logic depending on those variables.

Let's tweak the host inventory script hosts.aws.raku so that the resulting @aws-instances array includes elements with tags:

[
  {
    host => "ec2-54-237-6-19.compute-1.amazonaws.com",
    tags => "aws,frontend" 
  },
  {
    host => "ec2-52-23-177-193.compute-1.amazonaws.com",
    tags => "aws,backend"
  },
  {
    host => "ec2-54-90-19-170.compute-1.amazonaws.com",
    tags => "aws,database"
  },
]

As one can see, tags are basically plain strings with comma-separated values.
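
For completeness, here is a minimal sketch of how the inventory script's map block could attach those tags, assuming the EC2 Name tags ( frontend, backend, database ) match the roles:

use JSON::Tiny;

my $data = from-json("/home/melezhik/terraform-example/terraform.tfstate".IO.slurp);

my @aws-instances = $data<resources><>.grep({
  .<type> eq "aws_instance"
}).map({
  %(
    host => .<instances>[0]<attributes><public_dns>,
    # assumes the EC2 "Name" tag holds the host role
    tags => "aws,{.<instances>[0]<attributes><tags><Name>}"
  )
});

@aws-instances;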

To handle tags within Sparrowdo scenarios one should use the tags() function:

$ nano ~/sparrowdo-examples/sparrowfile

if tags()<database> {

  # Database specific code here

  package-install "mysql-server"; 

} elsif tags()<backend> {

  # Install Backend application 
  # And dependencies
 
  package-install "mysql-client";

  user "app";

  directory "/home/app/cro-example", %(
    owner => "app",
    group => "app"
  );

  git-scm "https://github.com/melezhik/cro-example.git", %(
    user => "app",
    to => "/home/app/cro-example"
  );

  zef ".", %(
     user => "app",
     cwd => "/home/app/cro-example"
  );

} elsif tags()<frontend> {

  # Install Nginx server
  # As a frontend
 
  package-install "nginx";

}

This simple example shows that we can create a single provision scenario where different nodes are configured differently depending on their roles.

Now we can run Sparrowdo the same way as we did before, and node configurations will be updated according to their roles:

$ cd ~/sparrowdo-examples

$ sparrowdo --host=hosts.aws.raku --ssh_user=ubuntu

Filtering by tags

Another cool thing about tags is that one can pass --tags as a command line argument, and it will act as a filter leaving only certain types of hosts. Say, we only want to update the database host:

$ sparrowdo --host=hosts.aws.raku --ssh_user=ubuntu --tags=database

If we pass multiple tags using a "," delimiter, it will act as an AND condition. For example:

--tags=database,production

will only process hosts that have both the database and the production tags.

Hosts attributes

And the last but not least feature of tags is key/value data. If a tag is set in name=value format, Sparrowdo will process it as a named attribute:

my $v = tags()<name>

This is how we pass arbitrary data into the Sparrowdo context using the same tag syntax. For example, let's modify the hosts inventory script to pass the IP address of the backend node:

$ nano ~/sparrowdo-examples/hosts.aws.raku

use JSON::Tiny;

my $data = from-json("/home/melezhik/terraform-example/terraform.tfstate".IO.slurp);
my $backend-ip;
my @aws-instances = $data<resources><>.grep({
  .<type> eq "aws_instance"
}).map({

   if .<instances>[0]<attributes><tags><Name> eq "backend" {
     $backend-ip = .<instances>[0]<attributes><public_ip>
   }

  %(
    host => .<instances>[0]<attributes><public_dns>,
    tags => "aws,{.<instances>[0]<attributes><tags><Name>}"
  )
});

for @aws-instances -> $i {
  $i<tags> ~= ",backend_ip={$backend-ip}"
}

@aws-instances;


Now the @aws-instances array has the following structure:

[
  {
    host => "ec2-54-237-6-19.compute-1.amazonaws.com",
    tags => "aws,frontend,backend_ip=54.90.19.170" 
  },
  {
    host => "ec2-52-23-177-193.compute-1.amazonaws.com",
    tags => "aws,backend,backend_ip=54.90.19.170"
  },
  {
    host => "ec2-54-90-19-170.compute-1.amazonaws.com",
    tags => "aws,database,backend_ip=54.90.19.170"
  },
]

So, for the database part we might have the following Sparrowdo scenario, to allow the host with backend_ip to connect to the MySQL server:

if tags()<database> {

  my %state = task-run "set mysql", "set-mysql", %( 
    user => "test", 
    database => "test", 
    allow_host => tags()<backend_ip>, 
  ); 
 
  if %state<restart> { 
    service-restart "mysql" 
  }

}

Let's rerun Sparrowdo to apply the changes to the MySQL server:

$ sparrowdo --host=hosts.aws.raku --ssh_user=ubuntu --tags=database

Other hosts formats

Sparrowdo supports different host formats, including localhost and docker; please read the documentation for more details.

Conclusion

Sparrowdo and Sparky are flexible tools that allow one to asynchronously provision virtual resources. In this tutorial we've seen how easily one can spin up a multi-tier application consisting of 3 nodes from scratch.

Moreover, Sparrowdo works nicely with well-known tools like Terraform, which makes it even more attractive and practical.

See you soon in RakuOps issue number 3, and please let me know what you want to hear about next time.

Thank you for reading!


Aleksei Melezhik

RakuOps. Issue 1.

RakuOps series – an attempt to show people who write in Raku how to use the language in daily DevOps tasks – automation, configuration management, building Docker containers and so on.

While I don't know for sure which topics will attract the community's interest, I hope that during this series I'll get some feedback, so I can adjust my future posts according to actual people's needs.

How to Build Docker Containers Using Raku and Sparrow

This is the first post in the series, where I am going to show how to use Raku and Sparrow – a Raku automation framework – to build Docker images. We will start with a simple Dockerfile example, and then we'll see how to use Sparrow to extend the image building process.

Why

People usually use the Dockerfile DSL to build Docker images. However, the usage of Dockerfiles is limited and quickly gets cumbersome when it comes to more sophisticated cases. The user ends up with extensive shell scripting spread through various RUN commands or similar, which is very hard to maintain in the long run.

Moreover, if one chooses to change the underlying Docker container's OS, they will have to rewrite all the code, which often contains distro-specific RUN commands.

In this post we will see how to use Raku and the batteries-included Sparrow automation tool to create Docker build scenarios in a more portable and easier-to-maintain way.

As a result, one can start using Raku to create high-level scenarios, gaining access to all the power of the language, while plenty of Sparrow plugins reduce the effort of writing code for typical configuration tasks – installing native packages, users, configuration files and so on.

Prerequisites

To build the Docker container we will need the following set of tools:

  • Rakudo
  • Sparrow
  • Git
  • Docker

Rakudo installation is pretty straightforward; just follow the instructions on the https://rakudo.org/downloads web site.

To install the Sparrow toolkit, we need to install the Sparrow6 Raku module:

zef install --/test Sparrow6

Sparrow bootstrap

To bootstrap Sparrow on a Docker instance we need to build a Docker image first. That image should include the Rakudo and Sparrow binaries. Thanks to @jjmerelo there is a jjmerelo/alpine-raku base Docker image with Alpine Linux and the Rakudo binary pre-installed, so our Dockerfile can be pretty simple:

$ mkdir -p RakuOps/docker-sparrow
$ cd RakuOps/docker-sparrow

$ cat Dockerfile

FROM jjmerelo/alpine-raku
RUN zef install --/test Sparrow6

$ docker build --tag rakuops:1.0 .

Sending build context to Docker daemon  2.048kB
Step 1/2 : FROM jjmerelo/alpine-raku
latest: Pulling from jjmerelo/alpine-raku
df20fa9351a1: Already exists
a901eee946d8: Pull complete
Digest: sha256:3e22846977d60ccbe2d06a47da4a5e78c6aca7af395d57873d3a907bea811838
Status: Downloaded newer image for jjmerelo/alpine-raku:latest
 ---> c0ecb08ec5db
Step 2/2 : RUN zef install --/test Sparrow6
 ---> Running in ae2a0dc8848f
===> Searching for: Sparrow6
===> Updating cpan mirror: https://raw.githubusercontent.com/ugexe/Perl6-ecosystems/master/cpan1.json
===> Searching for missing dependencies: File::Directory::Tree, Hash::Merge, YAMLish, JSON::Tiny, Data::Dump
===> Searching for missing dependencies: MIME::Base64
===> Installing: File::Directory::Tree:auth<labster>
===> Installing: Hash::Merge:ver<1.0.1>:auth<github:scriptkitties>:api<1>
===> Installing: MIME::Base64:ver<1.2.1>:auth<github:retupmoca>
===> Installing: YAMLish:ver<0.0.5>
===> Installing: JSON::Tiny:ver<1.0>
===> Installing: Data::Dump:ver<v.0.0.11>:auth<github:tony-o>
===> Installing: Sparrow6:ver<0.0.24>

1 bin/ script [s6] installed to:
/root/raku-install/share/perl6/site/bin
===> Updated cpan mirror: https://raw.githubusercontent.com/ugexe/Perl6-ecosystems/master/cpan1.json
===> Updating p6c mirror: https://raw.githubusercontent.com/ugexe/Perl6-ecosystems/master/p6c1.json
===> Updated p6c mirror: https://raw.githubusercontent.com/ugexe/Perl6-ecosystems/master/p6c1.json
Removing intermediate container ae2a0dc8848f
 ---> a2cbc605ec5e
Successfully built a2cbc605ec5e
Successfully tagged rakuops:1.0

$ docker images

REPOSITORY             TAG                 IMAGE ID            CREATED             SIZE
rakuops                1.0                 a2cbc605ec5e        3 minutes ago       139MB

First run

Now, having a base Docker image, let's run our very first Sparrow scenario. All we need is to add a file called sparrowfile using the Docker ADD directive. Our first scenario will be as simple as a Bash "Hello World" echo command:

$ cat sparrowfile

bash "echo 'Hello World'", %(
    description => "hello world"
);

As one could notice, a Sparrow scenario is just plain Raku code with some DSL constructions. Let's modify the Dockerfile and rebuild the image.

$ cat Dockerfile

ADD sparrowfile .
RUN raku -MSparrow6::DSL sparrowfile

$ docker build --tag rakuops:1.0 .

Sending build context to Docker daemon  5.632kB
Step 1/4 : FROM jjmerelo/alpine-raku
 ---> c0ecb08ec5db
Step 2/4 : RUN zef install --/test Sparrow6
 ---> Using cache
 ---> a2cbc605ec5e
Step 3/4 : ADD sparrowfile .
 ---> 74c7ee71a303
Step 4/4 : RUN raku -MSparrow6::DSL sparrowfile
 ---> Running in c73e1a7d568f
unknown plugin bash
  in method plugin-install at /root/raku-install/share/perl6/site/sources/5D155994EC979DF8EF1FDED7148646312D9073E3 (Sparrow6::Task::Repository::Helpers::Plugin) line 115
  in sub task-run at /root/raku-install/share/perl6/site/sources/DB0BB8A1D70970E848E2F38D2FC0C39E4F904283 (Sparrow6::DSL::Common) line 12
  in sub bash at /root/raku-install/share/perl6/site/sources/7662EE0EFF4206F474B7CC4AEF229F1A86EC8FFF (Sparrow6::DSL::Bash) line 33
  in sub bash at /root/raku-install/share/perl6/site/sources/7662EE0EFF4206F474B7CC4AEF229F1A86EC8FFF (Sparrow6::DSL::Bash) line 7
  in block <unit> at sparrowfile line 1

The very first run has failed with an unknown plugin bash error, which means one needs to provision Docker with a Sparrow repository – a storage for all dependencies required in Sparrow scenarios.

While there are many ways to do that, for our tutorial the use of a local file repository seems the easiest.

Local Sparrow repository

A local Sparrow repository contains Sparrow plugins deployed to your local file system. To create one, we need to initialize the repository structure first:

$ s6 --repo-init ~/repo

16:41:31 06/29/2020 [repository] repo initialization
16:41:31 06/29/2020 [repository] initialize Sparrow6 repository for /home/scheck/repo

Now that we have an empty repository, let's populate it with Sparrow plugins taken from the source code. Right now we only need the bash plugin, so let's upload just this one:

$ git clone https://github.com/melezhik/sparrow-plugins ~/sparrow-plugins

$ cd ~/sparrow-plugins/bash

$ s6 --upload
16:41:36 06/29/2020 [repository] upload plugin
16:41:36 06/29/2020 [repository] upload bash@0.2.1

Copy repository to Docker cache

We're going to use the Docker COPY command to copy repository files into the Docker cache. But first we need to copy the files to the current working directory so they will be available to the COPY command during the Docker build:


$ cp -r ~/repo .

$ cat Dockerfile

RUN apk add bash perl
COPY repo/ /root/repo/
RUN s6 --index-update
RUN raku -MSparrow6::DSL sparrowfile

$ docker build --tag rakuops:1.0 .

Sending build context to Docker daemon  11.26kB
Step 1/7 : FROM jjmerelo/alpine-raku
 ---> c0ecb08ec5db
Step 2/7 : RUN zef install --/test Sparrow6
 ---> Using cache
 ---> a2cbc605ec5e
Step 3/7 : RUN apk add bash perl
 ---> Using cache
 ---> d9011d4e64db
Step 4/7 : ADD sparrowfile .
 ---> Using cache
 ---> adb1df57e1c0
Step 5/7 : COPY repo/ /root/repo/
 ---> Using cache
 ---> 3ed6bfaf4183
Step 6/7 : RUN s6 --index-update
 ---> Running in 6edfc480bde7
17:03:59 06/29/2020 [repository] update local index
17:03:59 06/29/2020 [repository] index updated from file:///root/repo/api/v1/index
Removing intermediate container 6edfc480bde7
 ---> 7eccb5889a80
Step 7/7 : RUN raku -MSparrow6::DSL sparrowfile
 ---> Running in af6eb4b2d9ee
17:04:02 06/29/2020 [repository] installing bash, version 0.002001
17:04:05 06/29/2020 [bash: echo Hello World] Hello World

As we can see from the log, the Sparrow scenario successfully finishes, printing "Hello World" to stdout. The line installing bash, version 0.002001 means the Sparrow plugin has been successfully pulled from the Docker cache and installed into the container file system.

Build all plugins

To use the rest of the Sparrow plugins in Docker build scenarios, we need to add them to the Docker cache the same way we did for the bash plugin:

$ cd ~/sparrow-plugins
$ find  -maxdepth 2 -mindepth 2 -name sparrow.json -execdir s6 --upload \;
17:11:56 06/29/2020 [repository] upload plugin
17:11:56 06/29/2020 [repository] upload ado-read-variable-groups@0.0.1
17:11:56 06/29/2020 [repository] upload plugin
17:11:56 06/29/2020 [repository] upload ambari-hosts@0.0.1
17:11:57 06/29/2020 [repository] upload plugin
17:11:57 06/29/2020 [repository] upload ansible-install@0.0.2
17:11:58 06/29/2020 [repository] upload plugin
17:11:58 06/29/2020 [repository] upload ansible-tutorial@0.0.1
17:11:59 06/29/2020 [repository] upload plugin
17:11:59 06/29/2020 [repository] upload app-cpm-wrapper@0.0.6
... output truncated ...

Now let's update the Docker cache by copying the repository files to the current working directory; on the next run the Docker COPY command will pick the files up and push them into the Docker image.

$ cd ~/RakuOps/docker-sparrow/
$ cp -r ~/repo .

Sparrow plugins

Now we're free to use any plugin we've just added. Say we need to install the nano editor in our Docker image. Sparrow provides a cross-platform package-generic plugin to install native packages:

$ cat sparrowfile

package-install "nano";

$ docker build --tag rakuops:1.0 .

Sending build context to Docker daemon  2.012MB
Step 1/7 : FROM jjmerelo/alpine-raku
 ---> c0ecb08ec5db
Step 2/7 : RUN zef install --/test Sparrow6
 ---> Using cache
 ---> a2cbc605ec5e
Step 3/7 : RUN apk add bash perl
 ---> Using cache
 ---> d9011d4e64db
Step 4/7 : ADD sparrowfile .
 ---> 7a3bb7329d46
Step 5/7 : COPY repo/ /root/repo/
 ---> 0c029612c55c
Step 6/7 : RUN s6 --index-update
 ---> Running in 356d29ed8049
17:16:56 06/29/2020 [repository] update local index
17:16:56 06/29/2020 [repository] index updated from file:///root/repo/api/v1/index
Removing intermediate container 356d29ed8049
 ---> 18876a3d6396
Step 7/7 : RUN raku -MSparrow6::DSL sparrowfile
 ---> Running in bd07fecae4f0
17:16:58 06/29/2020 [repository] installing bash, version 0.002001
17:17:00 06/29/2020 [bash: echo Hello World] Hello World
17:17:00 06/29/2020 [repository] installing package-generic, version 0.004001
17:17:02 06/29/2020 [install package(s): nano.perl] fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86_64/APKINDEX.tar.gz
17:17:02 06/29/2020 [install package(s): nano.perl] fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/community/x86_64/APKINDEX.tar.gz
17:17:02 06/29/2020 [install package(s): nano.perl] v3.12.0-103-g1699efe1cd [http://dl-cdn.alpinelinux.org/alpine/v3.12/main]
17:17:02 06/29/2020 [install package(s): nano.perl] v3.12.0-106-g2b11e345c6 [http://dl-cdn.alpinelinux.org/alpine/v3.12/community]
17:17:02 06/29/2020 [install package(s): nano.perl] OK: 12730 distinct packages available
17:17:03 06/29/2020 [install package(s): nano.perl] trying to install nano ...
17:17:03 06/29/2020 [install package(s): nano.perl] installer - apk
17:17:03 06/29/2020 [install package(s): nano.perl] (1/2) Installing libmagic (5.38-r0)
17:17:03 06/29/2020 [install package(s): nano.perl] (2/2) Installing nano (4.9.3-r0)
17:17:03 06/29/2020 [install package(s): nano.perl] Executing busybox-1.31.1-r19.trigger
17:17:03 06/29/2020 [install package(s): nano.perl] OK: 67 MiB in 32 packages
17:17:03 06/29/2020 [install package(s): nano.perl] Installed:                                Available:
17:17:03 06/29/2020 [install package(s): nano.perl] nano-4.9.3-r0                           = 4.9.3-r0
17:17:03 06/29/2020 [install package(s): nano.perl] nano
Removing intermediate container bd07fecae4f0
 ---> 408d35e1e3fd
Successfully built 408d35e1e3fd
Successfully tagged rakuops:1.0

Conclusion

We've just seen how one can use Raku and Sparrow to build Docker images. The advantage of this approach is that one is no longer limited by the Dockerfile syntax and can leverage all the power of Raku to express sophisticated build logic. On the other hand, Sparrow provides a lot of handy primitives and plugins for typical build tasks, and some of them I'm going to share in the next posts.

Managing External Raku Dependencies using Sparrow

A few days ago several discussions were launched where people tried to deal with managing non-Raku / native dependencies for Raku modules. While a solution is far from found, or at least from complete, here is my Sparrow take on the problem.

Raku-native-deps

Raku-native-deps is a Sparrow plugin that parses a META6.json file and turns it into native package dependencies. It has a lot of limitations, e.g. only supporting CentOS and only parsing `:from<native>` statements, but it could give one a sense of the approach:

my %state = task-run "get packages", "raku-native-deps", %(
  path => "META6.json"
);

for %state<packages><> -> $i {
  say "package: $i<package>"
}

Basically one just gives it a path to the module's META file; the plugin parses the file, converting it to native package dependencies, and then it's possible to install them using the underlying package manager:

for %state<packages><> -> $i {
  package-install $i<package>
}
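
For reference, the kind of statement the plugin looks for is a dependency marked with :from<native> in META6.json. A hypothetical entry might look like this ( the exact layout varies between modules; see the pull requests referenced below ):

"depends" : [
  "sqlite3:from<native>"
]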



Full scenario

So the full scenario to install a module with native dependencies would be:

# Fetch module and get a directory where it's fetched
my %state = task-run 'fetch dbd-sqlite', 'zef-fetch', %(
  identity => 'DBD::SQLite'
);

# Build native packages list from META6.json
my %state2 = task-run "get packages", "raku-native-deps", %(
  path => "{%state<directory>}/META6.json"
);

# Install native packages (libsqlite3)
for %state2<packages><> -> $i {
  package-install $i<package>;
}

# Install module, at this point external dependencies are installed
# So this step will only install Raku dependencies and module itself

zef "DBD::SQLite";



RakuDist integration

RakuDist – the Raku module testing service – uses this method to test distributions containing native dependencies. Known module examples:

DBD::SQLite ( META6 pull request – https://github.com/CurtTilmes/raku-dbsqlite/pull/10 )
LibCurl ( META6 pull request – https://github.com/CurtTilmes/raku-libcurl/pull/15 )
GPGME ( META6 pull request – https://github.com/CurtTilmes/raku-gpgme/pull/1 )

Further thoughts

The approach is not complete. Right now it can solve the installation of native dependencies for a single module ( but not recursively for the native dependencies of the module's own dependencies ). One can read the ongoing discussion here – https://github.com/ugexe/zef/issues/356 – and suggest ideas.


Thanks for reading



Aleksei

RakuDist – Dead Easy Way to Test Raku Cli Applications

Nowadays many Raku module authors ship CLI tools as part of their Raku module distributions. RakuDist provides a dead easy way to test those scripts. The benefit: it takes minimal coding and is fully integrated into the RakuDist service.

Cli application example

Say, we have a script.raku shipped as a part of a Raku module.

$ cat bin/script.raku

if @*ARGS[0] eq "--version" {
  say "app version: 0.1.0"
} elsif @*ARGS[0] eq "--help" {
  help();
} else {
  my @params = @*ARGS;
  # do some stuff
}


To test the script installation one needs to create a .tomty/ subdirectory in the module root directory and place some test scenarios there. Scenarios should be written in Tomty – a simple Raku framework for black box testing:

$ mkdir .tomty

$ nano .tomty/00-script-version.pl6

task-run ".tomty/tasks/app-version/";

$ mkdir -p .tomty/tasks/app-version/

$ nano .tomty/tasks/app-version/task.bash

script.raku --version

The 00-script-version scenario runs the script with the --version parameter and verifies a successful exit code.

To verify the script's STDOUT, create a check file with some Raku regular expressions:

$ nano .tomty/tasks/app-version/task.check

regexp: "app version:" \s+ \d+ '.' \d+ '.' \d+


You can add more scenarios; they will all be executed in a row, for example:

.tomty/01-script-help.pl6
.tomty/02-script-run-with-some-params.pl6
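
For instance, a hypothetical 01-script-help.pl6 scenario could follow exactly the same pattern:

$ nano .tomty/01-script-help.pl6

task-run ".tomty/tasks/app-help/";

$ nano .tomty/tasks/app-help/task.bash

script.raku --help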

Ship it and test it!

Now just add .tomty to your CPAN module distribution, and the tests will be automatically run by RakuDist!

That is it, stay tuned!