Minoca OS automation with Sparrowdo


Hello! Minoca is a new operating system for the world of connected devices. In this post I am going to show you how one can enable configuration management of running Minoca instances with the help of Sparrowdo.

Download the latest Minoca build

$ wget http://www.minocacorp.com/download/nightlies/latest-x86/Minoca-pc.zip
$ unzip Minoca-pc.zip

Start Minoca OS instance

$ qemu-system-x86_64 -enable-kvm -m 2000 -net nic,model=i82559er -net user,hostfwd=tcp::2222-:22,hostfwd=tcp::8888-:80 -hda pc.img

Set up remote ssh access

Inside running Minoca instance:

$ opkg update 
$ opkg install openssh bash
$ /etc/init.d/sshd start
$ passwd # we will use this password when ssh-ing from the Host OS after `ssh-copy-id` gets run

Inside Host OS:

$ ssh-copy-id -p 2222 root@
$ ssh -p 2222 root@
$ exit

Create some Sparrowdo scenario

$ nano sparrowfile
use v6;
use Sparrowdo;
package-install ("nano", "zsh", "nginx");
user "alexey";
directory "/var/data/bar", %( owner => "alexey");
service-stop "nginx";
service-start "nginx";
http-ok %( port => 8888 );

Run Sparrowdo scenario for Minoca instance

$ sparrowdo --host= --ssh_user=root --ssh_port=2222 --no_sudo --sparrowfile=sparrowfile --bootstrap --format=production

The output:

running sparrow bootstrap for host: ... 
bootstrap for minoca
Downloading http://www.minocacorp.com/packages/0.4/i686/main/Packages.gz.
Inflating http://www.minocacorp.com/packages/0.4/i686/main/Packages.gz.
Updated list of available packages in /var/opkg-lists/main.
Outthentic is up to date. (0.3.9)
Sparrow is up to date. (0.2.48)
running sparrow tasks on ... 
target OS is - minoca
push [task] install packages: nano zsh nginx OK
push [task] create user alexey OK
push [task] create directory /var/data/bar OK
push [task] stop service nginx OK
push [task] start service nginx OK
push [task] run bash: curl -fsSLk -D - --retry 3 -o /dev/ ... OK
SPL file /opt/sparrow/sparrow.list is empty
get index updates from SparrowHub ... OK
set up task box file - /home/melezhik/.sparrowdo//opt/sparrow/task-box.json - OK
public@package-generic is uptodate (0.3.7)
public@user is uptodate (0.2.1)
public@directory is uptodate (0.1.4)
public@service is uptodate (0.1.13)
public@bash is uptodate (0.1.6)
running task box from /opt/sparrow/sparrow-cache/task-box.json ... 
2017-09-21 02:27:12 : [task] install packages: nano zsh nginx [path] modules/opkg/ [params] action:install package:nano
2017-09-21 02:27:12 : [task] install packages: nano zsh nginx [path] modules/opkg/ [params] action:install package:zsh
2017-09-21 02:27:12 : [task] install packages: nano zsh nginx [path] modules/opkg/ [params] action:install package:nginx
2017-09-21 02:27:13 : [task] create user alexey [path] modules/create/
2017-09-21 02:27:13 : [task] create directory /var/data/bar [path] modules/create/
2017-09-21 02:27:14 : [task] stop service nginx [path] modules/stop/ [params] os:minoca service:nginx
2017-09-21 02:27:14 : [task] start service nginx [path] modules/start/ [params] os:minoca service:nginx
2017-09-21 02:27:14 : [task] run bash: curl -fsSLk -D - --retry 3 -o /dev/ ... [path] modules/bash-command/ [params] envvars:

Building Perl6 Applications with Docker and Ducky

Docker containers let developers spin up environments and deploy applications quickly and easily. Dockerfile, Ansible and Chef are common means to configure bootstrapped Docker instances, however there is another way to do this …

Ducky is a lightweight Docker provisioning tool that makes it easy to deploy Docker containers by writing declarative JSON scenarios:

$ cat ducky.json
[
    {
        "task" : "install perl6",
        "plugin" : "rakudo-install",
        "data" : {
            "url" : "https://github.com/nxadm/rakudo-pkg/releases/download/2017.07/perl6-rakudo-moarvm-CentOS7.3.1611-20170700-01.x86_64.rpm"
        }
    }
]

This is how we bootstrap Rakudo on a Docker box by using Ducky and this simple scenario. Now let’s pull a CentOS image and run a Docker container based on it:

$ docker pull centos
$ docker run -d -i -t -v $PWD:/var/ducky --name ducky-centos centos

The only requirement here is that the running Docker container should have the current working directory ( which holds the Ducky json file ) mounted as /var/ducky.

Ducky picks up the ducky.json file placed in the current working directory and executes the scenario on the running Docker container, named ducky-centos:

$ ducky.bash ducky-centos

Here is a piece of Ducky’s output ( only the last lines are shown for the sake of brevity ):


Under the hood Ducky installs the Sparrow client on the container and then runs it to execute the tasks defined in the Ducky json file. The tasks are described in the Sparrow task box format.

So a Ducky json file is just a Sparrow Task Box file. That means you can declare Sparrow plugins with parameters here, aka Sparrow tasks, which get executed on a Docker container. Available plugins are listed, documented and stored at SparrowHub – the Sparrow plugins repository.
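A task box file, then, is just a JSON array of task entries, each naming a plugin and its parameters. Here is a minimal hypothetical example that runs the bash plugin ( the task/plugin/data keys follow the structure shown in this post; the entry itself is made up for illustration ):

```json
[
    {
        "task" : "print uptime",
        "plugin" : "bash",
        "data" : {
            "command" : "uptime"
        }
    }
]
```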

In this scenario we use the rakudo-install plugin to install Rakudo as a system package. The plugin documentation is available at the SparrowHub site.

Thus there are more things you could do with Ducky than just installing software – you’re only limited by the existing Sparrow plugins.

A typical use case is to run Test::Harness against a Perl6 project. Let’s do it for Bailador, which is “A light-weight route-based web application framework for Perl 6”:

$ git clone https://github.com/Bailador/Bailador.git
$ cd Bailador
$ cat ducky.json
[
    {
        "task" : "install perl6",
        "plugin" : "rakudo-install",
        "data" : {
            "url" : "https://github.com/nxadm/rakudo-pkg/releases/download/2017.07/perl6-rakudo-moarvm-CentOS7.3.1611-20170700-01.x86_64.rpm"
        }
    },
    {
        "task" : "installs Bailador dependencies",
        "plugin" : "zef",
        "data" : {
            "list" : [ "." ],
            "options" : "--deps-only"
        }
    },
    {
        "task" : "run t/ tests",
        "plugin" : "bash",
        "data" : {
            "command" : "prove6 -l",
            "envvars" : {
                "PATH" : "/opt/rakudo/bin:/opt/rakudo/share/perl6/site/bin:/root/.rakudobrew/moar-nom/install/share/perl6/site/bin:$PATH"
            }
        }
    }
]

The Ducky json is quite self-explanatory; here we define some standard steps to build and test the project:

* Install Rakudo
* Install Bailador dependencies picked from the META6.json file
* Run t/ tests with prove6

Ok, let’s give it a run ( don’t forget that we should first launch the Docker container with the current working directory mounted as /var/ducky ):

$ docker run -d -i -t -v $PWD:/var/ducky --name ducky-bailador centos
$ ducky.bash ducky-bailador

Here are the last lines of the Ducky output:


In this scenario we use 2 other plugins – bash, to execute arbitrary Bash code, and zef, a simple wrapper for the Zef manager, a tool to install Perl6 modules. The plugins documentation is available at the SparrowHub site.

Further thoughts.

Ducky and Sparrow are cross-platform tools, meaning you can successfully run the same scenarios on a variety of Linux platforms ( provided that Bash is installed ). For example, the last scenario will succeed when running against an Alpine Linux docker image:

$ docker pull melezhik/alpine-perl6
$ docker run -d -i -t -v $PWD:/var/ducky --name ducky-bailador-alpine melezhik/alpine-perl6
$ ducky.bash ducky-bailador-alpine

Thus it becomes extremely useful when you want to test a project against different environments while just sitting at your developer box and running cheap docker containers.

And last but not least: if for some reason you’re not satisfied with the existing Sparrow plugins, you can easily write a new one to cover your needs. I have written plenty of posts on how to do this; you may start with this one – Outthentic – quick way to develop user’s scenarios.

Regards and have fun with your coding and automation.

How to use Chef and Sparrowdo together

Good team member.

Chef is a well recognized configuration management tool which I use extensively at my current work. However I keep pushing Sparrowdo – a Perl6 configuration management tool – and find that those two tools play nicely together.

In this post I am going to give a few examples on how I use Sparrowdo to simplify and improve Chef cookbooks development workflow.

Running chef client on a target host.

Here is the most useful scenario in which I use Chef and Sparrowdo together. My working environment implies launching Amazon EC2 instances that get configured by Chef. Instead of ssh-ing to an instance and running a chef-client on it, I delegate this task to Sparrowdo, using a wrapper called Sparrowdo::Chef::Client.

Let’s install the module:

$ zef install Sparrowdo::Chef::Client

And then create a simple Sparrowdo scenario:

$ cat sparrowfile
module_run 'Chef::Client', %(
    run-list => [
        "recipe[foo]",
        "recipe[bar]"
    ],
    log-level => 'info',
    force-formatter => True
);

Here we’re just running two recipes called foo and bar, and defining some chef client settings, like the log level and enabling the force-formatter option. Now we can run a chef-client on the target host:

$ sparrowdo --host=$target_host
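Under the hood this boils down to a chef-client run with matching flags. As a rough sketch ( an assumption about what Sparrowdo::Chef::Client generates; the exact command the wrapper builds may differ ), the parameters above map onto standard chef-client options like this:

```shell
# Map the sparrowfile parameters onto chef-client flags:
# -o/--override-runlist, -l (log level) and --force-formatter are real
# chef-client options; the command is only printed here for illustration.
run_list="recipe[foo],recipe[bar]"
log_level="info"
cmd="chef-client -o $run_list -l $log_level --force-formatter"
echo "$cmd"
```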

Post deployment checks.

It is always a good idea to check a server’s state right after a deployment. There are reasons why I prefer not to keep such checks inside my Chef scenarios. And it seems there is a trend here, as new monitoring and audit tools appear on the open source market, with InSpec and goss among them, to list a few.

Likewise Sparrowdo has some built-in facilities to quickly test an infrastructure.

Let me give you a few examples.

Check system processes.

Say we reconfigure an Nginx server using some Chef recipes. Sometimes Chef is not able to ensure that Nginx starts successfully after the deploy, or even if it does, I don’t want to grep huge chef client logs ( sometimes there is a load of them ) to find out whether Nginx got started successfully. Happily, there is a dead simple solution – the use of Sparrowdo asserts:

$ cat healthcheck.pl6
proc-exists 'nginx';

This function checks that the file /var/run/nginx.pid exists and that the process with the PID taken from that file is running. If you need to handle uncommon pid file paths, you can always set the path explicitly:

$ cat healthcheck.pl6
proc-exists-by-pid 'nginx server', '/var/run/nginx/nginx.pid';

Moreover, if you only know the “name” of the process ( well, technically speaking, a regular expression to match the process command ), simply have this:

$ cat healthcheck.pl6
proc-exists-by-footprint 'nginx web server', 'nginx\s+master';

Having this simple Sparrowdo scenario, just run it against a target server to check that the Nginx process exists:

$ sparrowdo --host=$target_host --sparrowfile=healthcheck.pl6
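These asserts reduce to classic Unix checks: read a pid file and probe the process, or grep the process table. A plain-shell sketch of the same logic ( illustrative only, not Sparrowdo’s actual implementation ):

```shell
#!/bin/sh
# proc-exists-by-pid, conceptually: the pid file must exist and the pid
# stored in it must belong to a live (signalable) process.
proc_exists_by_pid() {
  pid_file=$1
  [ -f "$pid_file" ] || return 1
  kill -0 "$(cat "$pid_file")" 2>/dev/null
}

# proc-exists-by-footprint, conceptually: match a regex against the
# process table.
proc_exists_by_footprint() {
  ps -eo args | grep -qE "$1"
}

# Demo against the current shell process, which certainly exists:
echo $$ > /tmp/demo.pid
proc_exists_by_pid /tmp/demo.pid && echo "pid check: ok"
```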

I always put Sparrowdo scenarios and Chef cookbook files together and commit them to the Git repository:

$ git add sparrowfile healthcheck.pl6
$ git commit -a -m 'sparrowdo scenarios for chef cookbook'

And finally let me give an example of checking web application endpoints by sending http requests. Say we have a Chef recipe which deploys an application that should be accessible by an http GET / request. Sparrowdo exposes handy http-ok asserts to deal with such checks:

$ cat healthcheck.pl6
http-ok;

That is it! This is the simplest form of the http-ok function call, verifying that the web application responds to requests for the GET / route. Under the hood it just:

  1. resolves the hostname as the one you run sparrowdo against
  2. issues an http request using the curl utility:
$ curl -f http://$target_host

There are more options for how you can call the http-ok function. For example, you may define endpoints and set the http port:

$ cat healthcheck.pl6
http-ok( port => '8080', path => '/Foo/Bar' );

See the Sparrowdo documentation for a full description of the http assert functions.


Using Sparrowdo and Chef together can be an efficient approach when developing and testing server configuration scenarios. Sparrowdo has proven able to adapt to any requirements and play nicely with other ecosystems and frameworks, being an intelligent “glue” to bind various tools together and get your work done.

Building Bailador Docker images with Sparrowdo

Bailador is a light-weight route-based web application framework for Perl 6. Thanks to Gabor Szabo who has invited me to join the project and see how I can help the team.

I decided to direct my efforts at configuration management and deployment tasks. At the moment the Bailador developers need help in this area.

Docker is quite a popular way to distribute applications across teams, so I gave it a try. Welcome to bailador-docker-sparrowdo – a small project to help the Bailador developers easily check the latest changes in the Bailador source code.

What can you do by using bailador-docker-sparrowdo:

* Build docker image with the sample Bailador application.
* Start the sample application.
* Update an existing docker image by picking up the latest changes from Bailador source code repository ( github ).

Let me show in more detail how this can be done using sparrowdo.

Build docker image

First of all you need to check out bailador-docker-sparrowdo and run `docker build` command:

$ git clone https://github.com/melezhik/bailador-docker-sparrowdo.git 
$ cd bailador-docker-sparrowdo
$ docker build -t bailador .

It takes a few minutes to build the image. Under the hood it:

* Pulls the alpine-perl6 base image with Alpine Linux and Perl6/zef pre-installed; the image was created by Juan Julián Merelo Guervós.

* Installs sparrow/sparrowdo, as it is used as the configuration management tool for the sample application.

Only a few instructions can be found in the Dockerfile:

FROM jjmerelo/alpine-perl6
ENV PATH=/root/.rakudobrew/moar-nom/install/share/perl6/site/bin:$PATH
RUN apk add wget git curl bash build-base gcc perl-dev
RUN cpan App::cpanminus
RUN cpanm -q --notest https://github.com/melezhik/outthentic.git Sparrow
RUN zef install https://github.com/melezhik/sparrowdo.git
# ...

The rest of the configuration is done by sparrowdo itself, by running the `sparrowdo` command during the build process:

COPY sparrowfile    /tmp/
RUN sparrowdo --local_mode --sparrowfile=/tmp/sparrowfile --no_sudo

Here is the content of sparrowdo scenario:

directory '/var/data';

bash 'zef install Path::Iterator';
bash 'zef install TAP::Harness';
bash 'zef install HTTP::MultiPartParser';

bash q:to/DONE/;
  set -e
  cd /var/data/
  if test -d /var/data/Bailador; then
    cd Bailador
    git pull
    cd ../
  else
    git clone https://github.com/Bailador/Bailador.git
  fi
DONE

bash "cd /var/data/Bailador && zef --deps-only install .";

bash "cd /var/data/Bailador && prove6 -l";

bash "cd /var/data/Bailador && zef install --/test --force .";

Right now it is simple enough, but using Sparrowdo gives me the freedom to create arbitrarily sophisticated build scenarios in the future, which could hardly be expressed with a “Dockerfile only” approach.

Eventually, as new requirements come or new build scenarios are needed, I will add more sparrowdo scenarios to effectively manage the build process.

Finally, I have added a few lines to the end of Dockerfile to copy sample Bailador application script and to declare the default entry point to launch the application:

COPY entrypoint.sh  /tmp/
COPY example.p6w    /tmp/
ENTRYPOINT ["/tmp/entrypoint.sh"]

The sample application, example.p6w, is very simple and is only meant as a way to check that Bailador works correctly:

use Bailador;
get '/' => sub {
    "hello world"
}
baile;

Here is how the application gets run via the entry point script, entrypoint.sh:

perl6 /tmp/example.p6w

Run application

Once the image is ready you can run a docker container based on it; if everything was built correctly you will get the running sample application:

docker run -d -p 3000:3000 bailador

Check the application by sending an http request:

$ curl 127.0.0.1:3000

Update Docker image

A promising feature of bailador-docker-sparrowdo is that you can check the latest Bailador source changes by updating the existing Docker image, so you don’t have to rebuild the image from scratch, saving your time. This is possible because the main configuration and build logic is embedded into the image through the sparrowdo installed into it.

This is how you can do this.

First, find the existing bailador image and run a container from it:

docker run -it -p 3000:3000 --entrypoint bash bailador

Notice that we don’t detach the container as we did when we just wanted to run the sample application. Moreover, we override the default entry point, as we need a bash shell to log into the container.

Once logged into the container, just run the sparrowdo scenario again, as we did when building the image via the Dockerfile:

sparrowdo --no_sudo --local_mode --sparrowfile=/tmp/sparrowfile

Sparrowdo will pick up the latest changes to the Bailador source code from github and apply them.

To ensure that the sample application runs on the new code, let’s run it manually:

perl6 /tmp/example.p6w
^C # to stop the application

Now we can update the image by using the `docker commit` command. Without exiting the running Docker container, let’s do that from a parallel console:

$ docker ps # to find out the image id
$ docker commit $image_id bailador # to commit changes made in container into bailador image
$ docker stop -t 1 $container_id # stop current Docker container 

Great. Now our image is updated and contains the latest Bailador source code changes. We can run the sample application the same way as we did before:

docker run -d -p 3000:3000 bailador


Docker is an efficient and powerful tool to share applications across teams. Sparrowdo plays nicely with the Docker ecosystem, providing a comprehensive DSL to describe arbitrarily complicated scenarios for building Docker images and, even more impressively, updating existing images “on the fly”, saving developers’ time.

Join Bailador project

If you’d like to get involved in the Bailador project, contact me and I’ll send you an invitation to our Slack channel.

Using Python dependencies in Sparrow plugins

The latest version of Sparrow has brought a new feature for those who would like to write sparrow plugins in the Python language.

Now you can declare Python/pip dependencies with the help of a requirements.txt file:

$ cat requirements.txt
hackhttp==1.0.4

Let’s create a simple plugin to make http requests using hackhttp python library.

$ cat story.py
import hackhttp
from outthentic import *

url = config()['url']
hh = hackhttp.hackhttp()

code, head, html, redirect_url, log = hh.http(url)

print code

$ touch story.check

$ cat sparrow.json
{
    "name" : "python-sparrow-plugin",
    "description": "test sparrow plugin for python",
    "version" : "0.0.4",
    "url" : "https://github.com/melezhik/python-sparrow-plugin"
}

Now let’s upload the plugin to SparrowHub and give it a run:

$ sparrow plg upload
sparrow.json file validated ...
plugin python-sparrow-plugin version 0.000004 upload OK

$ sparrow plg install python-sparrow-plugin

upgrading public@python-sparrow-plugin from version 0.0.3 to version 0.000004 ...
Download https://sparrowhub.org/plugins/python-sparrow-plugin-v0.000004.tar.gz --- 200
Downloading/unpacking hackhttp==1.0.4 (from -r requirements.txt (line 1))
  Downloading hackhttp-1.0.4.tar.gz
  Running setup.py (path:/tmp/pip_build_melezhik/hackhttp/setup.py) egg_info for package hackhttp
Installing collected packages: hackhttp
  Running setup.py install for hackhttp
Successfully installed hackhttp
Cleaning up...

$ sparrow plg run python-sparrow-plugin --param url=http://example.com
•[plg] python-sparrow-plugin at 2017-05-12 17:07:17

ok scenario succeeded

And finally, if you prefer to get things done with Perl6/Sparrowdo, use this piece of code as a starting point:

$ cat sparrowfile

my $url = 'http://example.com';
task-run "http get $url", 'python-sparrow-plugin', %( url => $url );

Sparrowdo command line API

The command line API makes it possible to run sparrow plugins and modules remotely on a target server by using only the console client. There are a lot of things you can do with this API!

Running plugins with parameters

Executing sparrow plugins: the form in which you run them via the command line is:

--task_run=plg-name@plg_param=plg_value,plg_param=plg_value ...
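That is: the plugin name, then `@`, then comma-separated key=value pairs. A quick shell illustration of how such a spec splits apart ( parsing only; the spec string here is a hypothetical example, not how sparrowdo itself implements it ):

```shell
# Split a --task_run specification into its parts using shell
# parameter expansion.
spec='bash@command=uptime,user=app'   # hypothetical example spec
plugin=${spec%%@*}   # part before the first '@'
params=${spec#*@}    # part after it
echo "plugin: $plugin"
echo "params: $params"
```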

Let me give a few examples of how you can use it.

Execute bash commands

Here is where the bash sparrow plugin comes in handy.

1. Single bash command

$ uptime 
$ sparrowdo --host=remote.server --task_run=bash@command=uptime


2. Compound commands

Say you want to execute multiple bash commands chained by a logical “AND”? It’s easy:

$ ps uax | grep nginx | grep -v grep && service nginx stop
$ sparrowdo --host=remote.server \
--task_run=bash@command='ps uax|grep nginx|grep -v grep && service nginx stop'


3. Multiple bash commands

Alternatively you may pass more than one `--task_run` option to execute several bash commands sequentially:

$ ls -l; uptime ; df -h; 
$ sparrowdo --host=remote.server \
--task_run=bash@command='ls -l' \
--task_run=bash@command=uptime \
--task_run=bash@command='df -h'


4. Run command under user’s account

Say you want to execute a bash command under a specific user, not root? It’s easy to do by using the sparrow bash plugin:

$ sparrowdo --host=remote.server \


Install system packages

Use the package-generic plugin. This is a cross-platform installer with support for some popular Linux distros – Debian/Ubuntu/CentOS.

Install mc, nano and tree packages:

$ sparrowdo --host=remote.server \
--task_run=package-generic@list='mc nano tree'



Install CPAN packages

Use the cpan-package plugin to install CPAN packages. There are many options with it. Say I want to create a web-app user and install some CPAN packages into the user’s home …

$ sparrowdo --host=remote.server \
--task_run=user@name=web-app \
--task_run=cpan-package@list='CGI DBI',\


What else? Any sparrow plugin can be run the same way:

--task_run=plg-name@plg_param=plg_value,plg_param=plg_value ...

Find one you need at https://sparrowhub.org/search and just use it!

Running modules with parameters

Sparrow modules are higher-level entities, but you can use them the same way as sparrow plugins – to apply pieces of configuration to your servers remotely.

Choose this form:

$ sparrowdo --module_run=module-name@mod_param=mod_value,mod_param=mod_value ...

Here are some examples.

1. Install nginx with custom document-root

Use Sparrowdo::Nginx module:

$ sparrowdo --host=remote.server \

This command produces too much output, so I am not showing its screenshot here.

2. Install CPAN packages from GitHub repositories

There is a Sparrowdo::Cpanm::GitHub module to handle this; it accepts many options and it’s even possible to install modules from Git branches.

Let’s install CGI.pm from master branch at https://github.com/leejo/CGI.pm :

$ sparrowdo --host=remote.server \

This command produces too much output, so I am not showing its screenshot here.

3. Fetching remote file

And finally, last but not least, an example of a Sparrowdo module to fetch files over http – it’s called Sparrowdo::RemoteFile.

Say I want to fetch some basic-auth protected URL and place the file into a specific directory?
Well, let’s do it in one shot:

$ sparrowdo --host=remote.server \

This command produces too much output, so I am not showing its screenshot here.


The Sparrowdo command line API provides an easy and simple way to configure servers remotely using only the console client, with no coding at all, in the style of bash one-liners.

But if you are looking for something more complicated and powerful – consider using Sparrowdo scenarios!


Simple META6::bin wrapper

Recently Wenzel P. P. Peppmeyer ( aka gfldex ) released a nice helper to start Perl6 projects from scratch – it’s called META6::bin.

$ zef install META6::bin

The META6::bin module enables creating a Perl6 project from scratch; for example, this is how quickly one can bootstrap a new Perl6 module called Foo::Bar:

$ meta6 --new-module=Foo::Bar

There are a lot of options for the meta6 client – take a look at the documentation. META6::bin cares about git/github things, setting up a git repository for your freshly started projects, creating the META6.json file, populating the t/ directory, and so on.

I have created a simple wrapper around meta6 script. The reasons for that:

* I don’t want to remember all the options I use when launching the meta6 client to bootstrap my projects
* I have predefined settings I always use, so I don’t want to enter them every time I run the meta6 command line.

Here is my solution – a sparrow plugin with the analogous name, meta6-bin. Under the hood it just calls the meta6 client with parameters, but you can easily customize them by using sparrow tasks:

$ sparrow plg install meta6-bin
$ sparrow project create perl6-projects
$ sparrow task add  perl6-projects meta6-bin new

Having this defined, you may easily create new Perl6 module projects, running meta6 with some default options:

$ sparrow task run perl6-projects/new --param name=Foo::Bar --param path=~/my-projects/

The only two obligatory parameters you have to set here are name – the module name – and path – the directory location where you want to create the project files.

Here is how you can create a project inside current working directory:

$ sparrow task run perl6-projects/new --param name=Foo::Bar --param path=$PWD

And finally let’s tune some settings to meet specific requirements; say I don’t want to initialize a git repository for my projects and I have a predefined root location to keep my work:

$ export EDITOR=nano
$ sparrow task ini perl6-projects/new
options --force --skip-git --skip-github
path /opt/projects

Now we have “memorized” our settings in the sparrow task, so that they get applied on subsequent meta6-bin runs:

$ sparrow task run perl6-projects/new --param name=Foo::Bar

Hope this short post was useful.
Regards and stay tuned with Perl6/Sparrow/Sparrowdo.

Writing pre-commit hooks with Sparrow


Pre-commit is a framework for managing and maintaining multi-language pre-commit hooks. Developers write hooks to be triggered so that some preliminary/useful job gets done before updates arrive at your git repo. The idea is quite old, but pre-commit lets you install and integrate hooks into existing git repos with minimal effort.

Writing hooks with sparrow

Sparrow is a universal automation tool and I found it quite easy to use sparrow to write hooks for pre-commit framework. Let me show how. Say I need to run prove tests for Perl6 code.

The code of the hook is trivial and looks like this:

prove -vr -e 'perl6 -Ilib' t/

Let’s wrap this script into a sparrow plugin; here are a few simple steps:

1. Write a story:

$ cat story.bash
set -x
set -e
path=$(config path)
echo path is: $path
cd $path
prove -vr -e 'perl6 -Ilib' t/

A quick remark here. We pass the Perl6 project directory location explicitly via the path parameter, as an absolute file path. This requirement exists because sparrow does not preserve the current working directory when executing plugins.

2. Leave the story check file empty, as we don’t need any extra checks here:

$ touch story.check

3. And create plugin meta file:

$ cat sparrow.json
{
  "name" : "perl6-prove",
  "description" : "pre-commit hook - runs prove for Perl6 project",
  "version" : "0.0.1",
  "category" : "utilities",
  "url" : "https://github.com/melezhik/perl6-prove"
}

4. Now we can upload our freshly baked plugin to SparrowHub:

$ sparrow plg upload

Using sparrow plugin in pre-commit hooks

First of all we need to install the sparrow plugin on our system and see that our hook works on a test Perl6 project.

Install the plugin:

$ sparrow plg install perl6-prove

Set up git repository and project files:

$ git init 
$ ... # Create files and directories, git add, and so on ..

Create a simple Perl6 test:

$ cat t/00.t
use v6;
use Test;
plan 1;
ok 1, 'I am ok';

Then we need to set up pre-commit hooks yaml.

Our pre-commit hook yaml will be:

$ cat .pre-commit-config.yaml
-   repo: local
    hooks:
    -   id: perl6-prove
        name: perl6-prove
        entry: bash -c "sparrow plg run perl6-prove --param path=$PWD"
        language: system
        always_run: true
        files: ''

Here we use the so called “local” repository and the “system” language; bear in mind that sparrow comes as an external system command.

Now let’s commit our changes to trigger hook execution:

$ git commit -a -mtest-commit
[master 98cf098] test-commit
 1 file changed, 6 insertions(+)
 create mode 100644 t/00.t

You can also trigger the hook directly, without committing anything:

$ pre-commit run perl6-prove --verbose
[WARNING] Unstaged files detected.
[INFO] Stashing unstaged files to /home/vagrant/.pre-commit/patch1489306073.
[perl6-prove] perl6-prove................................................Passed
hookid: perl6-prove

[p] perl6-prove at 2017-03-12 08:07:53
path is: /home/vagrant/projects/pre-commit-test
t/00.t ..
ok 1 - I am ok
All tests successful.
Files=1, Tests=1,  1 wallclock secs ( 0.02 usr  0.00 sys +  0.16 cusr  0.03 csys =  0.21 CPU)
Result: PASS
ok      scenario succeeded

At the end of this post, let me give a short summary.

Implementing pre-commit hooks via Sparrow plugins

Roughly speaking, the pre-commit framework supports two types of hooks – external ones, which are installed into the system manually, and ones located in github repositories and installed by pre-commit itself.

I see several possible benefits sparrow can bring you when developing hook scripts as sparrow ( external ) plugins:

– Sparrow plugins are external and highly decoupled from the hooks/project structure.

– Indeed, they are versioned and packaged pieces of software. One can maintain and release new versions of plugins in a way that is predictable and transparent for the end user.

– You can always install/remove/upgrade/downgrade versions of a sparrow plugin independently of the pre-commit framework itself.

– Sparrow provides a reasonable alternative for managing hook script dependencies: sparrow takes care of dependency resolution during plugin installation. It’s CPAN/carton for Perl5 and RubyGems/bundler for Ruby. Let me know if you need support for other package managers.
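For a Perl5 plugin, for example, that means shipping a carton-style cpanfile alongside the story, which sparrow resolves at install time ( the dependencies below are made up for illustration ):

```perl
# cpanfile: declares the plugin's CPAN dependencies (hypothetical example)
requires 'JSON';
requires 'LWP::UserAgent', '>= 6.00';
```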

Regards and have fun with your automation!

Manage goss scenarios with sparrow


Goss is a YAML based serverspec alternative for validating a server’s configuration. It’s written in the Go language. It’s quite an interesting and promising young project I came across via the reddit/devops channel.

Let me show how one can distribute goss scenarios using sparrow tasks.

Before diving into technical stuff let me explain why this could be useful:

  • you want to organize multiple goss scenarios into logical groups and manage them via a unified interface
  • you want to share some goss yamls across your team, making it possible to quickly run your goss tests against many applications
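If goss is new to you: a goss spec is plain declarative YAML describing the desired state of a server. A minimal hypothetical example asserting that the sshd service is enabled and running:

```yaml
service:
  sshd:
    enabled: true
    running: true
```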

Ok, let’s go.

Installing sparrow goss plugin

This part is really easy.

$ sparrow index update # we want fresh index from SparrowHub
$ sparrow plg install goss

You’ll find detailed information on goss sparrow plugin at https://sparrowhub.org/info/goss

Set up sparrow project and tasks

Ok, now let’s create a sparrow project and tasks. These are just simple abstractions to split many goss tests into logical groups.

$ sparrow project audit # we will keep all goss scenarios here
$ sparrow task add audit nginx  goss # nginx test suite
$ sparrow task add audit mysql goss # mysql test suite

Running the sparrow task list command, we see our new project and tasks:

$ sparrow task list
[sparrow task list]

Set up goss tests

Now let’s populate our goss tests. We should read the goss spec first, but it’s really easy.

One for nginx:

$ sparrow task ini audit/nginx 

action validate
goss << HERE
process:
  nginx:
    running: true
port:
  tcp:80:
    listening: true
service:
  nginx:
    enabled: true
    running: true
HERE


And one for mysql:

$ sparrow task ini audit/mysql 

action validate
goss << HERE
process:
  mysqld:
    running: true
port:
  tcp:3306:
    listening: true
service:
  mysql:
    enabled: true
    running: true
HERE


Run goss tests

Now we can run goss tests separately for nginx and mysql.

One for nginx:

$ sparrow task run audit/nginx
[t] nginx
@ goss wrapper

[t] nginx modules/generate-goss-yaml/ params: cache_dir:/home/vagrant/.outthentic/tmp/9595/story-1 at 2017-03-07 16:18:13
generated goss yaml at /home/vagrant/.outthentic/tmp/9595/story-1/goss.yaml
ok      scenario succeeded

[t] nginx modules/validate/ at 2017-03-07 16:18:13
ok 1 - Process: nginx: running: matches expectation: [true]
ok 2 - Port: tcp:80: listening: matches expectation: [true]
ok 3 - Port: tcp:80: ip: matches expectation: [[""]]
ok 4 - Service: nginx: enabled: matches expectation: [true]
ok 5 - Service: nginx: running: matches expectation: [true]
ok      scenario succeeded

And one for mysql:

$ sparrow task run audit/mysql
[t] mysql
@ goss wrapper

[t] mysql modules/generate-goss-yaml/ params: cache_dir:/home/vagrant/.outthentic/tmp/9901/story-1 at 2017-03-08 08:19:14
generated goss yaml at /home/vagrant/.outthentic/tmp/9901/story-1/goss.yaml
ok      scenario succeeded

[t] mysql modules/validate/ at 2017-03-08 08:19:14
ok 1 - Process: mysqld: running: matches expectation: [true]
ok 2 - Port: tcp:3306: listening: matches expectation: [true]
ok 3 - Port: tcp:3306: ip: matches expectation: [[""]]
ok 4 - Service: mysql: enabled: matches expectation: [true]
ok 5 - Service: mysql: running: matches expectation: [true]
ok      scenario succeeded

Sharing goss tests

An interesting use case is sharing your goss tests. Sparrow makes it possible to save your tasks to SparrowHub – the central sparrow repository – so that you can share them with others.

Say you want someone else to run your goss scenarios on a remote server. Provided that they install the sparrow client there, it is really easy.

Upload remote task

$ sparrow remote task upload audit/nginx "goss audit for nginx"
$ sparrow remote task share audit/nginx
$ sparrow remote task upload audit/mysql "goss audit for mysql"
$ sparrow remote task share audit/mysql

Install and run remote task

Having logged into the other server, just run:

$ sparrow remote task run melezhik@audit/nginx
$ sparrow remote task run melezhik@audit/mysql

More on remote tasks can be found in the sparrow documentation – https://github.com/melezhik/sparrow#remote-tasks

Running goss scenarios with sparrowdo

Alternatively, you may want to use the Perl6 interface to sparrow and run goss scenarios using sparrowdo:

$ cat sparrowfile

task-run 'run goss for mysql', 'goss', %( action => 'validate' , goss => q:to/HERE/ );
port:
  tcp:3306:
    listening: true
service:
  mysql:
    enabled: true
    running: true
process:
  mysqld:
    running: true
HERE

$ sparrowdo --host=

Regards and have fun with automation.

Outthentic – a quick way to develop user scenarios


Outthentic is a development kit for rapid development of user scripts and test scenarios. Outthentic is an essential part of the Sparrow system. Let’s see how easy script development might be with the Outthentic framework.

Bits and pieces of theory

First of all let’s create a project for all our scripts.

$ mkdir tutorial
$ cd tutorial/

Ok, now let’s create our first script. We are going to use the Bash language here, but Outthentic plays nice with many languages (*), as we will see later.

(*) These are Bash, Perl5, Python and Ruby.

Let’s say we want to create a simple script to check the status of the nginx web server:

$ touch story.check

$ cat story.bash
service nginx status

Let me explain what we’ve done so far.

We have created a script, story.bash, and an empty check file, story.check.

In Outthentic there is the term story, which is just an abstraction for some script and its check file. We may call scripts story scenarios and check files story check files. We may also refer to the story script and the story check file as story data or story files.

To make Outthentic tell one story from another, we should put story files into different directories. Technically speaking, a story is just a directory with some story files inside.

When we say “run or execute the story”, it means we execute the story script and apply the rules from the story check file to verify the script’s stdout.

Another good way to think of stories is as the elementary units of the Outthentic framework, from which bigger things like Outthentic suites or projects are built.

Conversely, an Outthentic project, or suite, is just a container for Outthentic stories.

That’s enough of theory. Let’s get back to our small script.

Here, in the example, the story scenario is a small Bash script doing a useful job. The story check file could contain some check rules to verify the stdout emitted by the script. Right now we don’t want to verify the script’s stdout, so we just leave the story check file empty (*).

(*) In the latest versions of Outthentic, check files are no longer obligatory, so if you’re not going to validate a script’s stdout, just don’t create a check file.

Now let’s run the script, or, as we would say in Outthentic terminology, run the story.

Let’s get strun – the console client that executes scenarios in Outthentic stories:

$ strun 
at 2017-02-08 15:47:56
* nginx is running
ok    scenario succeeded

Ok. Good. Everything should be clear from reading strun’s report. We see that nginx is running. At least this is what the service nginx status command tells us. What’s happening under the hood when we invoke strun?

Strun is a [s]tory [r]unner – a utility that runs the story script story.bash and then checks if its exit code is 0. On a successful exit code strun prints “scenario succeeded” in its report. The overall “STATUS SUCCEED” line means that all the project’s scripts have succeeded. Right now there is only one script – story.bash – though very soon we will see that there might be more than one script in an Outthentic project.
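
That core loop can be sketched in plain bash. This is a simplification for illustration only – the real strun is a Perl5 program with a richer report format – and run_story_sketch is a name made up for this sketch:

```shell
# Simplified emulation of strun's behaviour for a single story:
# run the scenario, report on its exit code, then apply each line of the
# check file as a plain-text match against the captured stdout.
run_story_sketch() {
  local story_dir="$1" out rule
  out=$(bash "$story_dir/story.bash" 2>&1)
  if [ $? -eq 0 ]; then
    echo "ok    scenario succeeded"
  else
    echo "not ok    scenario failed"
  fi
  while IFS= read -r rule; do
    [ -z "$rule" ] && continue          # skip empty lines in the check file
    if printf '%s\n' "$out" | grep -qF -- "$rule"; then
      echo "ok    text has '$rule'"
    else
      echo "not ok    text has '$rule'"
    fi
  done < "$story_dir/story.check"
}

# Demo on a throwaway story directory:
dir=$(mktemp -d)
printf 'echo nginx is running\n' > "$dir/story.bash"
printf 'nginx is running\n'      > "$dir/story.check"
run_story_sketch "$dir"
rm -rf "$dir"
```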

But before diving into the details of scenario development, let me show how strun reports a failing scenario. Let’s shut nginx down and run our story again:

$ sudo /etc/init.d/nginx stop
$ strun 
at 2017-02-08 15:57:25
* nginx is not running
not ok    scenario succeeded

Check lists and check files

Check lists are rules written in the Outthentic::DSL language to verify the stdout emitted by the story script. Do you remember that we left the check file empty? Now let’s add some check rules:

$ cat story.check 
nginx is running

Now let’s start nginx over again and re-run our story:

$ sudo /etc/init.d/nginx start
$ strun 
at 2017-02-08 16:02:38
* nginx is running
ok    scenario succeeded
ok    text has 'nginx is running'

Good, we see a new line has appeared in the strun report:

ok  text has 'nginx is running'

Strun executes the story.bash script and then checks if the script’s stdout includes the string “nginx is running”.

You may use Perl5 regexes in check rules as well:

$ cat story.check 
regexp: nginx\s+is\s+running
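
Outside Outthentic, the same match can be reproduced with plain grep; note that Perl’s \s becomes [[:space:]] in grep’s POSIX ERE dialect (the sample stdout below is made up for the demonstration):

```shell
# Rough grep equivalent of the regexp: check rule above.
sample_stdout=" * nginx   is   running"
printf '%s\n' "$sample_stdout" |
  grep -qE 'nginx[[:space:]]+is[[:space:]]+running' && echo "rule matched"
```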

Outthentic::DSL makes possible a lot of other, more complex checks; please follow this tutorial to see more examples. But for now, let’s just see how we can use check rules in our script development.

So far this type of check is meaningless, as it seems that the command service nginx status does all the job, and if it succeeds there is no need to analyse stdout to verify that nginx is running – unless you are truly paranoid and want to add double checks :).

But let’s rewrite our story scenario to see how useful story checks can be. What if, instead of “consulting” the service nginx status command, we want to look at the process list on our server? Let’s rewrite our story:

$ cat story.bash 
ps uax | grep nginx

$ cat story.check 
nginx: master
nginx: worker

Now let’s give it run and see the results:

$ strun 
at 2017-02-08 16:13:19
root     21274  0.0  0.0  85884  1332 ?        Ss   16:02   0:00 nginx: master process /usr/sbin/nginx
www-data 21275  0.0  0.0  86220  1756 ?        S    16:02   0:00 nginx: worker process
melezhik 21406  0.0  0.0  17156   944 pts/1    R+   16:13   0:00 grep nginx
ok    scenario succeeded
ok    text has 'nginx: master'
ok    text has 'nginx: worker'

Ok. Now we see that our check rules (“nginx: master” and “nginx: worker”) are working and verifying that the nginx server processes are seen in the process list. This is much more detailed information compared with what we get from the simple service nginx status command.

What is more important, the command ps uax | grep nginx might succeed with exit code zero even when the nginx server is not running (guess why? because the grep command itself appears in the process list!), and this is where check rules become handy – to verify that some command succeeds or fails even when it doesn’t return a proper exit code.
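
The false positive is easy to reproduce with simulated ps snapshots. A common shell workaround, by the way, is the bracket trick – writing the pattern as [n]ginx so that grep’s own command line no longer matches it:

```shell
# What ps sees during the naive pipeline: grep's own command line
# contains "nginx", so the check "finds" nginx even if no server runs.
printf 'root 1 init\nuser 42 grep nginx\n' \
  | grep -q nginx && echo "naive: nginx found"

# With the bracket trick the grep command line is "grep [n]ginx"; the
# pattern [n]ginx still matches the literal string "nginx", but not
# "[n]ginx", so a missing server is reported honestly.
printf 'root 1 init\nuser 42 grep [n]ginx\n' \
  | grep -q '[n]ginx' || echo "bracket trick: nginx is not running"
```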

Let’s compare check rules against simple exit codes.

Check rules vs exit codes

Sometimes you don’t have to define any special check rules to verify that your script succeeds; obviously, most modern software provides valid exit codes you can rely upon. But sometimes a normal (zero) exit code does not mean that the command succeeded. The previous example shows the idea. It is pretty simple, but it could be considered a “template” for the type of test scenario where you want to “grep” some information from a script’s stdout to verify that everything goes fine. Actually, this is what people usually do when typing a $cmd | grep foo command in a terminal.

Another good example of when an exit code can’t be a good criterion is insertion into a database. Say, the first time you insert a record it does not exist, and you are happy when the script does the insertion and returns a zero exit code. The next time you run the script, the record already exists, and the script throws a bad exit code and a proper message (something like “the table record with the given ID already exists …”). If you only need to ensure that the record with the given ID gets inserted into the database, you can write the following check rule and be safe:

$ cat story.check

regexp: table record (created|already exists)
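
The scenario can be sketched in plain bash; here insert_record is a made-up stand-in for a real database client, and check plays the role of the rule above:

```shell
# insert_record "creates" a record on the first run and reports a
# duplicate (with a non-zero exit code) on every following run.
STATE=$(mktemp -u)

insert_record() {
  if [ -e "$STATE" ]; then
    echo "table record already exists"
    return 1          # bad exit code, although this outcome is fine for us
  fi
  touch "$STATE"
  echo "table record created"
}

# Emulates the check rule: regexp: table record (created|already exists)
check() { grep -Eq 'table record (created|already exists)'; }

insert_record | check && echo "run 1 ok"   # record created
insert_record | check && echo "run 2 ok"   # duplicate, but still ok for us
rm -f "$STATE"
```

Judging by exit codes alone, the second run would count as a failure; the check rule treats both outcomes as success.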

Outthentic suites

As I said at the beginning, there might be more than one script in an Outthentic project. In terms of Outthentic, we can talk about Outthentic projects, or Outthentic suites – a bunch of related Outthentic stories. Strun uses directories to tell one story from another. Let’s add a new story to the project we created before; we have to reorganize the directory layout:

$ tree 
├── check-nginx
│   ├── story.bash
│   └── story.check
└── start-nginx
    ├── story.bash
    └── story.check

The content of the check-nginx/* files remains the same. Check-nginx is the story that checks the nginx web server status.

Now there is a new story – start-nginx – which, as you can imagine, runs the nginx server.

The content of start-nginx/story.bash file is pretty simple:

$ cat  start-nginx/story.bash
sudo service nginx start

We leave the content of file start-nginx/story.check empty.

The strun client has a --story option to set the story to run.

$ strun  --story start-nginx
start-nginx/ at 2017-02-08 16:52:27
ok    scenario succeeded

If no --story option is given, strun will run the file story.bash (*) in the current working directory. So we can create a default story which just says that the user should choose one of the two stories to run – check-nginx or start-nginx:

$ cat  story.bash 
echo "usage: strun --story (check-nginx|start-nginx)"

(*) Or actually one of four files, if it exists – story.pl, story.bash, story.py or story.rb – which, as you can guess, relates to the language you write your scenarios in – Perl5, Bash, Python or Ruby.

Having more than one story in a project allows you to have many small scripts which you can then run independently. But sometimes we want to take another approach – call some scripts from others. Let’s see how we can achieve this.

Story modules

Story modules (or, in short, just modules) are scripts called from other scripts.
When called, modules may be given input parameters, aka story variables.

Consider an example of a simple package manager.

Let’s say we want to write a script that installs packages, taken as an input string of space-separated items:

script "package-foo package-bar package-baz"

Outthentic provides a highly effective API for handling command line parameters, so we can pass the package list via the --param option:

$ strun --param packages="package-foo package-bar package-baz"

Now let’s split our task into two simple scripts: one to parse the input parameters and another to install a given package. The overall project structure will be:

$ tree
├── hook.bash
└── modules
    └── install-package
        ├── story.bash
        └── story.check

Let’s explain the new project structure.

First of all, we notice a file named hook.bash. This is the hook. By using hooks we can extend strun functionality. Under the hood, hooks are just simple scripts executed before the story scenario. Hook functionality is described in the Outthentic documentation, in the Hooks section. At the moment, all we have to know about hooks is that they are scripts that get run before the story scenario.

The directory modules/install-package holds the new Outthentic story install-package. When we place story files under the modules/ directory, we define story modules.

Story modules are usual Outthentic stories that are called from other stories by using hook files. Let me show how it works:

$ cat hook.bash 
for p in $(config packages); do
  run_story install-package package $p
done

This simple Bash code does the following:

1. Parses the input parameters using the config() function provided by Outthentic
2. Splits the input string by spaces and iterates over the package list, calling the story module install-package and passing the package name as a parameter:

run_story install-package package $p

Let’s see how the story module is implemented; it’s, again, a very simple Bash script:

$ cat modules/install-package/story.bash 
package=$(story_var package)
echo install $package ...

What’s happening in the install-package/story.bash script?

1. The package name is assigned to a variable by using the Outthentic story_var() function
2. The package install command is executed (*).

(*) For demonstration purposes we don’t run real package install here.
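
The whole hook/module flow can be emulated in plain bash in one file. Here config() and story_var() are local stand-ins for the helpers that strun injects at run time, and run_story simply calls the module function directly:

```shell
PACKAGES="nginx mysql perl"

config()    { echo "$PACKAGES"; }   # stand-in for the Outthentic config() helper
story_var() { echo "$STORY_VAR"; }  # stand-in for the Outthentic story_var() helper

install_package() {                 # plays the role of modules/install-package/story.bash
  local package
  package=$(story_var)
  echo "install $package ..."
}

run_story() {                       # run_story <module> <var-name> <var-value>
  STORY_VAR="$3"
  install_package
}

# hook.bash logic: iterate over the package list, calling the module per item
for p in $(config); do
  run_story install-package package "$p"
done
```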

Let’s summarize.

* Story modules are very useful when designing your script system.
* This mechanism fosters splitting a complex task into simple ones and enables code reuse via the “script libraries” pattern.

You may find more information about Outthentic modules at the documentation pages, in the section Run stories from other stories.

Let’s run our story suite to see all in action:

$ strun  --param packages='nginx mysql perl'
modules/install-package/ params: package:nginx at 2017-02-09 11:29:23
install nginx ...
ok    scenario succeeded

modules/install-package/ params: package:mysql at 2017-02-09 11:29:23
install mysql ...
ok    scenario succeeded

modules/install-package/ params: package:perl at 2017-02-09 11:29:23
install perl ...
ok    scenario succeeded

In the next section we’ll see how we can add configuration to our suites.

Suite configuration

It is extremely useful to provide sane defaults for script input parameters.

Outthentic has a lot of tools for this. Let me show you one of them.

Consider a script which checks that the nginx server is listening on a given http port:

$ cat story.bash 
sudo netstat -nlp|grep nginx

$ cat story.check

When we run the story, we see that nginx is available on port 80, as expected:

$ strun 
at 2017-02-09 12:22:48
tcp        0      0    *               LISTEN      21899/nginx     
tcp6       0      0 :::80                   :::*                    LISTEN      21899/nginx     
ok    scenario succeeded
ok    text has ''

Now we want to make the port parameter of the script configurable:

$ cat suite.yaml
port: 80

Once we define a configuration file called suite.yaml at the top of the project directory, strun will read it, and the configuration data will be available via the config() function:


$ cat story.check 
generator: << CODE
port=$(config port)
CODE

This check file shows an example of generators – a special DSL construct to generate check rules at runtime. More information about generators can be found in the Outthentic::DSL documentation, in the generators section.

We can override the default values by passing command line parameters:

$ strun --param port=443
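
The resulting behaviour – suite.yaml supplies the default, --param overrides it – can be sketched as a tiny bash function; effective_port is a made-up name standing in for config(port):

```shell
effective_port() {
  local override="$1"     # value coming from --param port=..., may be empty
  echo "${override:-80}"  # fall back to the suite.yaml default
}

effective_port        # prints 80  (the suite.yaml default)
effective_port 443    # prints 443 (the --param override wins)
```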

Indeed, Outthentic provides many sophisticated and efficient ways to configure your scripts, with support for well-recognized formats like JSON, YAML and Config::General. Please follow the Outthentic documentation to read more – see the section on suite configuration.

There is more than one language to write your scripts in

And finally, as I said at the very beginning, you are free to choose between several languages when developing scripts in the Outthentic framework. The Outthentic API is implemented for the following languages:

* Perl5
* Bash
* Python
* Ruby

For example, this is how the hook file in the package manager suite could be written in Perl5:

$ cat hook.pl 
for my $p ( split /\s+/, config()->{packages} ) {
  run_story( "install-package", { package => $p } );
}

Outthentic provides a universal API for all the listed languages:

  • Handling input parameters
  • Developing multi-script systems using story modules
  • Enabling configuration with rich support for well-known formats

Scripts distribution

The next step is to distribute your scripts written with the Outthentic framework. Sparrow is an Outthentic script manager that allows you to share your scripts across any Linux boxes, provided that Perl5 is installed.

Further reading

For further reading I would recommend the comprehensive article “Sparrow plugins evolution”.

Script examples

Script examples presented in this post can be found here.