Using Python dependencies in Sparrow plugins

The latest version of Sparrow brings a new feature for those who would like to write Sparrow plugins in Python.

Now you can declare Python/pip dependencies with the help of a requirements.txt file:

$ cat requirements.txt
hackhttp==1.0.4

Let’s create a simple plugin that makes http requests using the hackhttp Python library.

$ cat story.py
import hackhttp
from outthentic import *

url = config()['url']
hh = hackhttp.hackhttp()

code, head, html, redirect_url, log = hh.http(url)

print code

$ touch story.check

$ cat sparrow.json
{
    "name" : "python-sparrow-plugin",
    "description": "test sparrow plugin for python",
    "version" : "0.0.4",
    "url" : ""
}

Now let’s upload the plugin to SparrowHub and give it a run:

$ sparrow plg upload
sparrow.json file validated ...
plugin python-sparrow-plugin version 0.000004 upload OK

$ sparrow plg install python-sparrow-plugin

upgrading public@python-sparrow-plugin from version 0.0.3 to version 0.000004 ...
Download --- 200
Downloading/unpacking hackhttp==1.0.4 (from -r requirements.txt (line 1))
  Downloading hackhttp-1.0.4.tar.gz
  Running (path:/tmp/pip_build_melezhik/hackhttp/ egg_info for package hackhttp
Installing collected packages: hackhttp
  Running install for hackhttp
Successfully installed hackhttp
Cleaning up...

$ sparrow plg run python-sparrow-plugin --param url=
[plg] python-sparrow-plugin at 2017-05-12 17:07:17

ok scenario succeeded
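
For comparison, the same check can be sketched with only the Python standard library. This is an illustration (using urllib instead of the third-party hackhttp module the plugin above relies on):

```python
# Fetch a URL and return the HTTP status code, mirroring what the
# hackhttp-based story does (Python 3 standard library only).
from urllib.request import urlopen

def http_status(url):
    # urlopen raises URLError on connection problems;
    # resp.status holds the response code (e.g. 200)
    with urlopen(url) as resp:
        return resp.status
```

Note that the original story targets Python 2 (`print code`), while this sketch assumes Python 3.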

And finally, if you prefer to get things done with Perl6/Sparrowdo, use this piece of code as a starting point:

$ cat sparrowfile

my $url = '';
task-run "http get $url", 'python-sparrow-plugin', %( url => $url );

Sparrowdo command line API

The command line API makes it possible to run sparrow plugins and modules remotely on a target server using the console client; there are a lot of things you can do with this API!

Running plugins with parameters

Executing sparrow plugins. The form in which you run them via the command line is:

--task_run=plg-name@plg_param=plg_value,plg_param=plg_value ...
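
To make the format concrete, here is how such a specification could be parsed – a hypothetical Python sketch, not Sparrowdo’s actual parser:

```python
def parse_task_run(spec):
    # "plg-name@p1=v1,p2=v2" -> ("plg-name", {"p1": "v1", "p2": "v2"})
    name, sep, tail = spec.partition("@")
    params = {}
    if sep:
        for pair in tail.split(","):
            key, _, value = pair.partition("=")
            params[key] = value
    return name, params
```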

Let me drop a few examples of how you can use it.

Execute bash commands

Here is where the bash sparrow plugin could be handy.

1. Single bash command

$ uptime 
$ sparrowdo --host=remote.server --task_run=bash@command=uptime


2. Compound commands

Say you want to execute multiple bash commands chained by a logical “AND”; it’s easy:

$ ps uax | grep nginx|grep -v grep  && service nginx stop
$ sparrowdo --host=remote.server \
--task_run=bash@command='ps uax|grep nginx|grep -v grep && service nginx stop'


3. Multiple bash commands

Alternatively, you may pass more than one `--task_run` chunk to execute many bash commands consecutively:

$ ls -l; uptime ; df -h; 
$ sparrowdo --host=remote.server \
--task_run=bash@command='ls -l' \
--task_run=bash@command=uptime \
--task_run=bash@command='df -h'


4. Run command under user’s account

Say you want to execute a bash command as a specific user, not root? It’s easy to do using the sparrow bash plugin:

$ sparrowdo --host=remote.server \


Install system packages

Use the package-generic plugin. This is a cross-platform installer with support for some popular Linux distros – Debian/Ubuntu/CentOS.

Install the mc, nano and tree packages:

$ sparrowdo --host=remote.server \
--task_run=package-generic@list='mc nano tree'



Install CPAN packages

Use the cpan-package plugin to install CPAN packages. There are many options here. Say I want to create a web-app user and install some CPAN packages into the user’s home …

$ sparrowdo --host=remote.server \
--task_run=user@name=web-app \
--task_run=cpan-package@list='CGI DBI',\


What else? Any sparrow plugin can be run the same way:

--task_run=plg-name@plg_param=plg_value,plg_param=plg_value ...

Find the one you need at SparrowHub and just use it!

Running modules with parameters

Sparrow modules are higher-level entities, but you can use them the same way as sparrow plugins – to apply pieces of configuration to your servers remotely.

Choose this form:

$ sparrowdo --module_run=module-name@mod_param=mod_value,mod_param=mod_value ...

Here are some examples.

1. Install nginx with a custom document root

Use the Sparrowdo::Nginx module:

$ sparrowdo --host=remote.server \

This command produces too much output, so I am not showing its screenshot here.

2. Install CPAN packages that come from GitHub repositories

There is a Sparrowdo::Cpanm::GitHub module to handle this; it accepts many options, and it’s even possible to install modules from Git branches:

Let’s install from the master branch at :

$ sparrowdo --host=remote.server \

This command produces too much output, so I am not showing its screenshot here.

3. Fetching remote file

And finally, the next but not the last example: a Sparrowdo module to fetch files over http, called Sparrowdo::RemoteFile.

Say I want to fetch some basic-auth protected URL and place the file into a specific directory?
Well, let’s do it in one shot:

$ sparrowdo --host=remote.server \

This command produces too much output, so I am not showing its screenshot here.


The Sparrowdo command line API provides an easy and simple way to configure servers remotely using only the console client, with no coding at all, in the style of bash one-liners.

But if you are looking for something more complicated and powerful – consider using Sparrowdo scenarios!


Simple META6::bin wrapper

Recently Wenzel P. P. Peppmeyer (aka gfldex) released a nice helper for starting Perl6 projects from scratch – it’s called META6::bin:

$ zef install META6::bin

The META6::bin module enables creating a Perl6 project from scratch; for example, this is how quickly one can bootstrap a new Perl6 module called Foo::Bar:

$ meta6 --new-module=Foo::Bar

There are a lot of options for the meta6 client; take a look at the documentation. META6::bin takes care of git/github things – setting up a git repository for your freshly started project, creating the META6.json file, populating the t/ directory, and so on.

I have created a simple wrapper around the meta6 script. The reasons for that:

* I don’t want to remember all the options I use when launching the meta6 client to bootstrap my projects
* I have predefined settings I always use, so I don’t want to enter them every time I run the meta6 command line.

Here is my solution – a sparrow plugin with the analogous name – meta6-bin. Under the hood it just calls the meta6 client with parameters. But you can easily customize those by using sparrow tasks:

$ sparrow plg install meta6-bin
$ sparrow project create perl6-projects
$ sparrow task add  perl6-projects meta6-bin new

Having this defined, you may easily create new Perl6 module projects, running meta6 with some default options:

$ sparrow task run perl6-projects/new --param name=Foo::Bar --param path=~/my-projects/

The only two obligatory parameters you have to set here are name – the module name – and path – the directory where you want the project files created.

Here is how you can create a project inside current working directory:

$ sparrow task run perl6-projects/new --param name=Foo::Bar --param path=$PWD

And finally, let’s tune some settings to meet our specific requirements; say I don’t want to initialize a git repository for my projects, and I have a predefined root location to keep my work:

$ export EDITOR=nano
$ sparrow task ini perl6-projects/new
options --force --skip-git --skip-github
path /opt/projects

Now we “memorize” our settings in the sparrow task so that they are applied on subsequent meta6-bin runs:

$ sparrow task run perl6-projects/new --param name=Foo::Bar

Hope this short post was useful.
Regards, and stay tuned with Perl6/Sparrow/Sparrowdo.

Writing pre-commit hooks with Sparrow


Pre-commit is a framework for managing and maintaining multi-language pre-commit hooks. Developers write hooks to be triggered so that some preliminary/useful job gets done before updates arrive in your git repo. The idea is quite old, but pre-commit lets you install and integrate hooks into existing git repos with minimal effort.

Writing hooks with sparrow

Sparrow is a universal automation tool, and I found it quite easy to use sparrow to write hooks for the pre-commit framework. Let me show how. Say I need to run prove tests for Perl6 code.

The code of the hook is trivial and looks like this:

prove -vr -e 'perl6 -Ilib' t/

Let’s wrap this script into a sparrow plugin; here are a few simple steps:

1. Write a story:

$ cat story.bash
set -x
set -e
path=$(config path)
echo path is: $path
cd $path
prove -vr -e 'perl6 -Ilib' t/

A quick remark here. We pass the Perl6 project directory location explicitly via the path parameter as an absolute file path. This is required because sparrow does not preserve the current working directory when executing plugins.

2. Leave the story check file empty, as we don’t need extra checks here:

$ touch story.check

3. And create plugin meta file:

$ cat sparrow.json
{
  "name" : "perl6-prove",
  "description" : "pre-commit hook - runs prove for Perl6 project",
  "version" : "0.0.1",
  "category" : "utilities",
  "url" : ""
}

4. Now we can upload our freshly baked plugin to SparrowHub:

$ sparrow plg upload

Using sparrow plugin in pre-commit hooks

First of all, we need to install the sparrow plugin on our system and check that our hook works on a test Perl6 project.

Install the plugin:

$ sparrow plg install perl6-prove

Set up git repository and project files:

$ git init 
$ ... # Create files and directories, git add, and so on ..

Create a simple Perl6 test:

$ cat t/00.t
use v6;
use Test;
plan 1;
ok 1, 'I am ok';

Then we need to set up the pre-commit hooks yaml.

Our pre-commit hook yaml will be:

$ cat .pre-commit-config.yaml
-   repo: local
    hooks:
    -   id: perl6-prove
        name: perl6-prove
        entry: bash -c "sparrow plg run perl6-prove --param path=$PWD"
        language: system
        always_run: true
        files: ''

Here we use the so-called “local” repository and language “system”, bearing in mind that sparrow comes as an external system command.

Now let’s commit our changes to trigger hook execution:

$ git commit -a -mtest-commit
[master 98cf098] test-commit
 1 file changed, 6 insertions(+)
 create mode 100644 t/00.t

You can also trigger the hook directly not committing anything:

$ pre-commit run perl6-prove --verbose
[WARNING] Unstaged files detected.
[INFO] Stashing unstaged files to /home/vagrant/.pre-commit/patch1489306073.
[perl6-prove] perl6-prove................................................Passed
hookid: perl6-prove

[p] perl6-prove at 2017-03-12 08:07:53
path is: /home/vagrant/projects/pre-commit-test
t/00.t ..
ok 1 - I am ok
All tests successful.
Files=1, Tests=1,  1 wallclock secs ( 0.02 usr  0.00 sys +  0.16 cusr  0.03 csys =  0.21 CPU)
Result: PASS
ok      scenario succeeded

Let me wrap up this post with a short summary.

Implementing pre-commit hooks via Sparrow plugins

Roughly speaking, the pre-commit framework supports two types of plugins – external ones, which are installed into the system manually, and ones located in github repositories and installed by pre-commit itself.

I see several possible benefits sparrow can bring you when developing hook scripts as sparrow (external) plugins:

– Sparrow plugins are external and highly decoupled from the hooks/project structure.

– They are versioned and packaged pieces of software. One can maintain and release new versions of plugins in a way that is predictable and transparent for the end user.

– You can always install/remove/upgrade/downgrade versions of a sparrow plugin independently of the pre-commit framework itself.

– Sparrow provides a reasonable alternative for managing hook script dependencies: sparrow takes care of dependency resolution during plugin installation. It’s CPAN/carton for Perl5 and RubyGems/bundler for Ruby. Let me know if you need support for other package managers.

Regards and have fun with your automation!

Manage goss scenarios with sparrow


Goss is a YAML-based serverspec alternative for validating a server’s configuration. It’s written in Go. It’s a quite interesting and promising young project I came across via the reddit/devops channel.

Let me show how one can distribute goss scenarios using sparrow tasks.

Before diving into the technical stuff, let me explain why this could be useful:

  • you want to organize multiple goss scenarios into logical groups and manage them via a unified interface
  • you want to share goss yamls across your team, to make it possible to quickly run your goss tests against many applications

Ok, let’s go.

Installing sparrow goss plugin

This part is really easy.

$ sparrow index update # we want fresh index from SparrowHub
$ sparrow plg install goss

You’ll find detailed information on the goss sparrow plugin at

Set up sparrow project and tasks

Ok, now let’s create a sparrow project and tasks. These are just simple abstractions to split many goss tests into various logical groups.

$ sparrow project create audit # we will keep all goss scenarios here
$ sparrow task add audit nginx  goss # nginx test suite
$ sparrow task add audit mysql goss # mysql test suite

Running the sparrow task list command, we see our new project and tasks:

$ sparrow task list
[sparrow task list]

Set up goss tests

Now let’s populate our goss tests. We should read the goss spec first, but it’s really easy.

One for nginx:

$ sparrow task ini audit/nginx 

action validate
goss << HERE
port:
  tcp:80:
    listening: true
service:
  nginx:
    enabled: true
    running: true
process:
  nginx:
    running: true
HERE

And one for mysql:

$ sparrow task ini audit/mysql 

action validate
goss << HERE
port:
  tcp:3306:
    listening: true
service:
  mysql:
    enabled: true
    running: true
process:
  mysqld:
    running: true
HERE

Run goss tests

Now we can run goss tests separately for nginx and mysql.

One for nginx:

$ sparrow task run audit/nginx
[t] nginx
@ goss wrapper

[t] nginx modules/generate-goss-yaml/ params: cache_dir:/home/vagrant/.outthentic/tmp/9595/story-1 at 2017-03-07 16:18:13
generated goss yaml at /home/vagrant/.outthentic/tmp/9595/story-1/goss.yaml
ok      scenario succeeded

[t] nginx modules/validate/ at 2017-03-07 16:18:13
ok 1 - Process: nginx: running: matches expectation: [true]
ok 2 - Port: tcp:80: listening: matches expectation: [true]
ok 3 - Port: tcp:80: ip: matches expectation: [[""]]
ok 4 - Service: nginx: enabled: matches expectation: [true]
ok 5 - Service: nginx: running: matches expectation: [true]
ok      scenario succeeded

And one for mysql:

$ sparrow task run audit/mysql
[t] mysql
@ goss wrapper

[t] mysql modules/generate-goss-yaml/ params: cache_dir:/home/vagrant/.outthentic/tmp/9901/story-1 at 2017-03-08 08:19:14
generated goss yaml at /home/vagrant/.outthentic/tmp/9901/story-1/goss.yaml
ok      scenario succeeded

[t] mysql modules/validate/ at 2017-03-08 08:19:14
ok 1 - Process: mysqld: running: matches expectation: [true]
ok 2 - Port: tcp:3306: listening: matches expectation: [true]
ok 3 - Port: tcp:3306: ip: matches expectation: [[""]]
ok 4 - Service: mysql: enabled: matches expectation: [true]
ok 5 - Service: mysql: running: matches expectation: [true]
ok      scenario succeeded
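
Under the hood, the generate-goss-yaml step seen in the reports renders the compact task configuration into a goss.yaml file. Here is an illustrative Python sketch of that kind of rendering (not the plugin’s real code; the resource layout mimics goss’s port/service/process checks):

```python
def to_goss_yaml(data, indent=0):
    # Render a nested dict of goss resources into simple YAML text.
    lines = []
    pad = "  " * indent
    for key, value in data.items():
        if isinstance(value, dict):
            lines.append("%s%s:" % (pad, key))
            lines.append(to_goss_yaml(value, indent + 1))
        else:
            # goss expects lowercase booleans (true/false)
            lines.append("%s%s: %s" % (pad, key, str(value).lower()))
    return "\n".join(lines)

nginx_checks = {
    "port": {"tcp:80": {"listening": True}},
    "service": {"nginx": {"enabled": True, "running": True}},
    "process": {"nginx": {"running": True}},
}
```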

Sharing goss tests

An interesting use case is sharing your goss tests. Sparrow makes it possible to save your tasks at SparrowHub – the central sparrow repository – so that you can share tasks with others.

Say you want someone else to run your goss scenarios on a remote server. Provided that they install the sparrow client there, it is really easy.

Upload remote task

$ sparrow remote task upload audit/nginx "goss audit for nginx"
$ sparrow remote task share audit/nginx
$ sparrow remote task upload audit/mysql "goss audit for mysql"
$ sparrow remote task share audit/mysql

Install and run remote task

Having logged into the other server, just run:

$ sparrow remote task run melezhik@audit/nginx
$ sparrow remote task run melezhik@audit/mysql

More on remote tasks can be found in the sparrow documentation –

Running goss scenarios with sparrowdo

Alternatively, you may want to use the Perl6 interface to sparrow and run goss scenarios using sparrowdo:

$ cat sparrowfile

task-run 'run goss for mysql', 'goss', %( action  => 'validate' , goss => q:to/HERE/ );

port:
  tcp:3306:
    listening: true
service:
  mysql:
    enabled: true
    running: true
process:
  mysqld:
    running: true
HERE

$ sparrowdo --host=

Regards and have fun with automation.

Outthentic – quick way to develop user scenarios


Outthentic is a development kit for the rapid development of user scripts and test scenarios. Outthentic is an essential part of the Sparrow system. Let’s see how easy script development can be with the Outthentic framework.

Project boilerplate

First of all let’s create a project to hold all our scripts.

$ mkdir tutorial
$ cd tutorial/

Ok, now let’s create our first script. We are going to use the Bash language here, but Outthentic plays nice with many languages(*), as we will see later.

(*) these are Bash, Perl5, Python and Ruby

Let’s say we want to create a simple script to check nginx status:

$ touch story.check
$ cat story.bash
service nginx status

Let’s review what we have done so far. We have created a story file called “story.bash” and an empty story check file “story.check”.

The story file is a plain bash script that does a useful job. The story check file may contain check rules to verify stdout from the story file. Right now we don’t want to verify the story stdout, so we just leave the story check file empty.

Outthentic requires that every script be paired with a story check file.

Now let’s run the script – or, in Outthentic terminology, run the story. Let’s get strun – a console utility that executes scenarios in Outthentic:

$ strun 

 at 2017-02-08 15:47:56
 * nginx is running
ok    scenario succeeded

Ok. Good. All should be clear from reading the strun output. We see that nginx is running. At least this is what “service nginx status” tells us. What’s happening under the hood when we invoke “strun”?

Strun – a [s]tory [r]unner – is a utility that runs the story file “story.bash” and then checks whether its exit code is 0. In the case of a successful exit code, strun prints “scenario succeeded” in its report. An overall “STATUS SUCCEED” line means all the scripts in our project succeeded.

Right now there is only one script – “story.bash”; very soon, though, we will see that there may be more than one script in an outthentic project.

But before diving into more details of scenario development, let me show how strun reports a scenario that fails to succeed. Let’s shut nginx down and re-run the story:

$ sudo /etc/init.d/nginx stop
$ strun 

 at 2017-02-08 15:57:25
 * nginx is not running
not ok    scenario succeeded

Check lists

Check lists are rules written in the Outthentic::DSL language to verify stdout coming from the story script. Recall that we left the story check file empty; now let’s add some check rules to it:

$ cat story.check 
nginx is running

Now let’s start nginx over again and re-run our story:

$ sudo /etc/init.d/nginx start
$ strun 

 at 2017-02-08 16:02:38
 * nginx is running
ok    scenario succeeded
ok    text has 'nginx is running'

Good, we see a new line has appeared in the strun report:

ok    text has 'nginx is running'

Strun executes the “story.bash” script and then checks whether its STDOUT includes the string “nginx is running”.

You may use Perl5 regexes in check rules as well:

$ cat story.check 
regexp: nginx\s+is\s+running

Outthentic::DSL makes a lot of other complex checks possible, but let’s go ahead and see how we can use check rules in our script development.

So far this type of check looks meaningless, as “service nginx status” seems to do all the job, and if it succeeds there is no need to inspect stdout to verify that nginx is running – unless you are truly paranoid and want to add double checks 🙂

But let’s rewrite our story scenario to see how useful story checks can be. What if, instead of consulting the “service nginx status” command, we want to look at the process list on our server?

$ cat story.bash 
ps uax | grep nginx

$ cat story.check 
nginx: master
nginx: worker

Now let’s give it run and see results:

$ strun 

 at 2017-02-08 16:13:19
root     21274  0.0  0.0  85884  1332 ?        Ss   16:02   0:00 nginx: master process /usr/sbin/nginx
www-data 21275  0.0  0.0  86220  1756 ?        S    16:02   0:00 nginx: worker process
melezhik 21406  0.0  0.0  17156   944 pts/1    R+   16:13   0:00 grep nginx
ok    scenario succeeded
ok    text has 'nginx: master'
ok    text has 'nginx: worker'

Ok. Now we see that our check rules (“nginx: master” and “nginx: worker”) are verified, which means the nginx server processes appear in the process list as nginx master and nginx worker. This is more detailed information than that from the simple “service nginx status” command.

What is more important, “ps uax|grep nginx” might succeed with a zero exit code even when the nginx server is not running (guess why?). And this is where check rules become handy. Now let’s summarize.

Check rules VS exit code.

Sometimes you don’t have to define any check rules to verify that your script succeeded; obviously, most modern software provides a valid exit code you can rely upon. But sometimes a normal (zero) exit code does not mean overall success. This test shows the idea. It is pretty simple, but it can be considered a basic example of test scenarios where you want to “grep” some information from the script stdout to verify that everything went fine. Actually, this is what people usually do when hitting a “foo|grep baz” command.

Another good example where the exit code can’t be a good criterion is insertion into a database. Say the first time you insert a record it does not exist, and you are ok with the script doing the insertion and returning a zero exit code. The next time you run the script, the record already exists, and the script throws a bad exit code and a proper message (something like “record with given ID already exists …”). If, after all, you care only about the record’s existence, you can’t rely on the exit code here. An alternative approach is to verify the script’s work by the messages appearing at stdout:

$ cat story.check

regexp: record (created|already exists)
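
To make the two kinds of rules concrete, here is a toy Python re-implementation of the matching logic – a sketch only; Outthentic’s real DSL is much richer:

```python
import re

def check_stdout(stdout, rules):
    # Each rule is either a plain string (substring match) or
    # "regexp: <pattern>" (Perl5-style regex, approximated with Python's re).
    results = {}
    for rule in rules:
        if rule.startswith("regexp:"):
            pattern = rule[len("regexp:"):].strip()
            results[rule] = re.search(pattern, stdout) is not None
        else:
            results[rule] = rule in stdout
    return results
```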

Outthentic suites

As I said at the beginning, there may be more than one script in a project. In Outthentic terms we can talk about an outthentic project, or outthentic suite – a bunch of related stories. Strun uses directories to tell one story from another. Let’s add a new story to our suite to start the nginx service; we will reorganize the directory layout on the way:

$ tree 
├── check-nginx
│   ├── story.bash
│   └── story.check
└── start-nginx
    ├── story.bash
    └── story.check

The content of the check-nginx/* files remains the same. This is the story that checks nginx state. The content of the start-nginx/story.bash file is pretty simple:

$ cat  start-nginx/story.bash
sudo service nginx start

We leave file start-nginx/story.check empty.

Strun uses the “--story” option to set a story to run. If no “--story” option is given, strun tries to run the file story.bash (*) in the current working directory:

$ strun  --story start-nginx

start-nginx/ at 2017-02-08 16:52:27
ok    scenario succeeded
(*) Or actually one of four files, if they exist – story.pl, story.bash, story.py, story.rb – as you can guess, this relates to the language you write scenarios in – Perl5, Bash, Python or Ruby.

Having more than one story in your project helps you split a large task into small independent scripts to be run separately. But sometimes we want to take another approach – calling one script from another. Let’s see how we can achieve this.

Story modules

Story modules (or, in short, just modules) are scripts called from other scripts.
When called, modules may be given input parameters, aka story variables.

Consider an example of simple package manager.

Let’s say we want to write a script that installs packages taken from an input list passed as a string of space-separated items:

"package-foo package-bar package-baz"

Outthentic provides a very flexible API to handle command line input parameters, so we can pass the package list via the “--param” option:

$ strun --param packages="package-foo package-bar package-baz"

Now let’s split our task into two simple scripts: one to parse input parameters and another to install a given package. The overall project structure will be:

$ tree
├── hook.bash
├── meta.txt
└── modules
    └── install-package
        ├── story.bash
        └── story.check

Let’s explain the new project structure.

First of all, we notice a file called “hook.bash”. Hooks are a way to extend strun functionality. Under the hood, hooks are simple scripts executed before the story file.

Second, if we look at the project root directory, we find neither a story file nor a story check file. It’s ok. The existence of a file called “meta.txt” informs strun that this is a meta story. A meta story is an outthentic story which does not have a story file at all.

The meta file is just a plain text file. It can be empty, but you may place some helpful info here to be dumped when the story is executed:

$ cat meta.txt 
simple package manager

Hooks and meta stories are well described in the Outthentic documentation in the “Hooks API” section, but let’s go ahead.

The last new thing we notice in our project is the directory “modules/install-package”, with content very similar to that of an outthentic story (a story file and a story check file). Everything kept under the “modules/” directory is treated as story modules.

Story modules, as I already said, are usual outthentic stories, but they are called from other stories – or, to be accurate, from hook files. Let’s see how this happens:

$ cat hook.bash 
for p in $(config packages); do
  run_story install-package package $p
done

This simple bash code does the following:

1. Parses input parameters using the ubiquitous “config” function provided by Outthentic
2. Splits the packages string by spaces and, for every item, calls a story module named “install-package”:

run_story install-package package $p

The story module is passed an input parameter, or story variable, named “package”, holding the name of the package to be installed.

Let’s see how the story module is implemented; it’s very simple:

$ cat modules/install-package/story.bash 
package=$(story_var package)
echo install $package ...

What do we do in the “modules/install-package/story.bash” script?

1. Parse the story input parameter using the handy “story_var” function
2. Run the install command (*) for the given package.

(*) For demonstration purposes we don’t run a real package install here using yum or apt-get.
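
The hook/module interplay can be modeled in Python terms like this (the function names mirror the Bash helpers above and are purely illustrative):

```python
def install_package(story_vars):
    # Story module: read its "package" story variable and "install" it.
    return "install %s ..." % story_vars["package"]

def hook(config):
    # Hook: split the "packages" parameter on whitespace and call the
    # module once per item, like `run_story install-package package $p`.
    return [install_package({"package": p})
            for p in config["packages"].split()]
```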

Let’s summarize. Story modules are very useful when designing your script system. This mechanism encourages you to split a complex task into simple ones and reuse code via “script libraries”.

Plenty of information about story modules can be found in the Outthentic docs in the “Upstream and Downstream stories” section.

Now let’s run our story suite:

$ strun  --param packages='nginx mysql perl'

@ simple package manager

modules/install-package/ params: package:nginx at 2017-02-09 11:29:23
install nginx ...
ok    scenario succeeded

modules/install-package/ params: package:mysql at 2017-02-09 11:29:23
install mysql ...
ok    scenario succeeded

modules/install-package/ params: package:perl at 2017-02-09 11:29:23
install perl ...
ok    scenario succeeded

In the next section we’ll see how to supply our suites with a default configuration.


Sometimes it’s useful to provide sane defaults for our script parameters. Outthentic comes with a lot of ways to do this. Let’s show one.

Consider a script which checks whether a running nginx listens on a given port:

$ cat story.bash 
sudo netstat -nlp|grep nginx
$ cat story.check

Running the suite, we see that nginx is available on port 80, as we expected:

$ strun 

 at 2017-02-09 12:22:48
tcp        0      0    *               LISTEN      21899/nginx     
tcp6       0      0 :::80                   :::*                    LISTEN      21899/nginx     
ok    scenario succeeded
ok    text has ''

Say nginx listens on another port, and we want to make this parameter configurable for the script. Not a problem:

$ cat story.check 
generator: <

Generators are a way to build check lists at run time. We can see now that the port variable is passed as an input parameter. Now let’s provide a sane default for the port:

$ cat  suite.ini
port 80

Later, if we want to override the default setting, we can say:

$ strun --param port=443
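
Conceptually, suite.ini supplies defaults that --param values override. In Python terms (an illustration of the semantics, not Outthentic’s code):

```python
def effective_config(defaults, cli_params):
    # suite.ini values serve as defaults; --param values win.
    merged = dict(defaults)
    merged.update(cli_params)
    return merged
```

So a bare `strun` sees port=80, while `strun --param port=443` sees port=443.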

Outthentic provides other methods to handle script configuration, among them JSON/YAML/Config::General/command line formats and nested parameters. Please follow the documentation at the “Suite Configuration” section.

There is more than one language to write your script

And finally, as I said at the very beginning, you are free to choose among many languages to develop scripts with the Outthentic framework. This is the list of supported languages:

* Perl5
* Bash
* Python
* Ruby

This is how the hook file for the package manager script could be written in Perl5:

$ cat hook.pl
for my $p ( split /\s+/, config()->{packages}) {
  run_story("install-package", { package => $p });
}

Outthentic provides a unified API for all the listed languages to make script development easy and simple:

  • Handling input parameters
  • Developing multi-script systems using story modules and the “--story” option
  • Enabling configuration with rich support of well-known formats like Config::General/YAML/JSON/command line

Script distribution

This article only describes how one can use Outthentic for script development. If you want to distribute your scripts, use Sparrow – the outthentic scripts manager.

For further reading I would recommend the comprehensive article – “Sparrow plugins evolution”.

The script examples presented in this paper can be found here.

Regards. The author of Sparrow/Outthentic – Alexey Melezhik

ssh/scp commands with Sparrowdo

Sometimes you need to execute remote commands or copy files to remote hosts using ssh/scp. Here is how you can do it using the Sparrowdo ssh/scp core-dsl functions.


Issuing ssh commands

The shortest form to do this is to call the `ssh‘ function with the minimum of required parameters – the command to execute and the remote host address.

ssh 'uptime', %( host => '' )

Usually people use ssh public-key authentication, so it is possible to set a path to an ssh private key and provide a user id:

ssh 'uptime', %(
  host    => '',
  user    => 'old-dog',
  ssh-key => 'keys/id_rsa'
);

Note that the ssh private key only needs to be stored on the master host where sparrowdo runs; no other actions need to be taken, as sparrowdo takes care of copying(*) the ssh private key to the target host. It’s handy!

(*) By the way – sparrowdo will remove the private ssh key from the target host once the ssh command is done.

There are many options of the `ssh’ function you may read about in the sparrowdo docs; here are just a few more examples.

You may run multi-line bash commands, by the way:

ssh q:to/CMD/, %( host => '', user => 'old_dog');
  set -e
  apt-get update
  DEBIAN_FRONTEND=noninteractive apt-get install -y -qq curl
CMD

Or don’t execute the same command twice, relying on the existence of a file located on the target server:

ssh "rm file", %(  host => '' , create => '/do/not/run/twice' );
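
The idea behind the `create‘ flag can be sketched generically in Python (illustrative only; this is not Sparrowdo’s implementation):

```python
import os

def run_once(guard_path, action):
    # Skip `action` if the guard file exists; otherwise run it and
    # create the guard so subsequent runs become no-ops.
    if os.path.exists(guard_path):
        return False
    action()
    open(guard_path, "w").close()
    return True
```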

And finally, you may set alternative descriptions for your commands, which will be shown in the sparrowdo report to help you understand what a command does:

ssh "cat patch.sql | mysql", %(
  description => 'patching my database',
  host => ''
);

Issuing scp commands

The `scp’ command is akin to the `ssh’ one, except that it deals with remote file copying. Nothing more to say here but to show some examples.

Copy a number of files to remote hosts:

scp %( 
  data    => "/var/file1 /var/file2 /var/file3",
  host    => "", 
  user    => "Me", 
  ssh-key => "keys/id_rsa",
);

Note that the files to copy should exist on the target host. If they don’t, you may copy them from the master host first using the `file‘ function:

file '/var/file1', %( content =>  ( slurp 'files/file1' ) );
file '/var/file2', %( content =>  ( slurp 'files/file2' ) );
file '/var/file3', %( content =>  ( slurp 'files/file3' ) );

In the same way as for the `ssh’ command, you may prevent copying the same file twice if some file exists on the target host:

scp %( 
  data    => "/var/biiiiiiig-file",
  host    => "", 
  create  => "/tmp/do/not/copy/me/twice"
);

And finally one may copy files FROM master to target host, using the `pull' flag:

scp %( 
  data    => "/var/data/dir",
  host    => "master-host:/var/file1", 
  pull    => 1, 
  ssh-key => "keys/id_rsa", 
);

That is it. Stay tuned with Sparrowdo Automation.  🙂

Sparrow plugins vs ansible modules


Both ansible modules and sparrow plugins are building blocks for solving elementary tasks in configuration management and deployment automation. Ansible modules are used in higher-level playbook scenarios written in YAML; sparrow plugins are used in high-level sparrowdo scenarios written in Perl6.

Language support

Ansible – you may choose any language to write modules in. Out of the box, ansible provides seamless support for module development only in Python ( shortcuts ); for other languages you have to use third-party libraries ( native to the language you write the module in ) to make module development and integration easier.

Sparrow – you write plugins in one of three languages: Perl5, Bash or Ruby. For module development sparrow provides a unified API ( available for all languages ) to make plugin development and integration easy and seamless, though this API is not as extensive as the Python shortcuts API for ansible modules.

System design

Ansible – ansible modules are autonomous units of code that solve elementary tasks. Under the hood, a module is just a single file of code. Ansible modules can neither depend on nor call other modules.

Sparrow – sparrow plugins are very similar to ansible modules in being autonomous, closed units of code that solve elementary tasks. But sparrow provides yet another level of freedom for plugin developers. A sparrow plugin is actually a suite of scripts, and scripts may call other scripts with parameters. Such a design makes it easy to split even an elementary task into scripts "speaking" to each other. Consider a trivial example: installing / removing software packages. We can think of a plugin that copes with the whole elementary task ( installing / removing packages ) but under the hood splits it into two scripts: one for package installation, another for package removal. This idea is expressed in the comprehensive post Sparrow plugins evolution.

Here is a simple illustration of what I have said.


System integration

Ansible – ansible modules are the smaller parts of higher-level configuration scenarios called playbooks. An ansible playbook is a YAML-driven DSL to declare a list of tasks: ansible modules with parameters.


Sparrow – like ansible modules, sparrow plugins are the smaller parts of an overall system: sparrowdo, a configuration management tool written in Perl6. Sparrowdo scenarios are Perl6 code that runs sparrow tasks ( sparrow plugins with parameters ).


End user interface

Ansible – ansible modules get called via playbooks using a YAML DSL to declare module calls and pass parameters to them. It is also possible to run ansible modules via the command line client, passing parameters as command line arguments.

Below is an example of an ansible playbook using the ansible module yum to install the httpd software package:

$ cat playbook.yml
- hosts: webservers
  tasks:
  - name: ensure apache is at the latest version
    yum: name=httpd state=latest

Sparrow – sparrow plugins get called via sparrowdo scenarios using a Perl6 API. Plugin parameters get passed as Perl6 hashes. One may also use the sparrow console client to run sparrow plugins as-is via the command line, without sparrowdo. There are a lot of options here: command line parameters, parameters in JSON / YAML format, Config::General format parameters.

Below is the sparrowdo equivalent of the ansible yum module installing the latest version of httpd. Two flavours of the API are shown: the core-dsl and the plugin API.

$ cat sparrowfile

# you can use a short core-dsl API flavour:
package-install 'httpd'; 

# or low level plugin API flavour:
task-run 'ensure apache is at the latest version', 'package-generic', %(
   list => 'httpd'
);

Processing input parameters

Ansible – input parameters come as key=value pairs (*); when developing a module you should parse the input and "split" it into pieces of data to get the variables you need. There are plenty of "helpers" for many languages ( like Perl5, Ruby ) to simplify this process; otherwise you have to parse the input data explicitly inside the ansible module.

(*) Nested input parameters are possible
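
To make the key=value convention concrete, here is a minimal Python sketch of what such parsing amounts to ( an illustrative model, not ansible's actual implementation; the function name is made up ):

```python
import shlex

def parse_module_args(raw):
    """Split an ansible-style "key=value key2=value2" argument string
    into a dict; shlex keeps quoted values with spaces intact."""
    args = {}
    for token in shlex.split(raw):
        key, _, value = token.partition("=")
        args[key] = value
    return args

print(parse_module_args('name=httpd state=latest'))
# {'name': 'httpd', 'state': 'latest'}
```

This is roughly the boilerplate every module would need without helpers, which is exactly what the shortcuts API described below removes.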

Ansible provides a high-level Python API for ansible modules called shortcuts, allowing you to automatically parse input and create parameter accessors, declare parameter types, set default values, check required parameters and do other useful things.

Below is an example of module parameter processing using the python ansible API:

$ cat library/
from ansible.module_utils.basic import *

def main():

  fields = { "message": {"default": "Hi!", "type": "str" } }
  module = AnsibleModule(argument_spec=fields)
  # some other code here to return results
if __name__ == '__main__':
  main()

Sparrow – in a similar way, sparrow provides a unified API ( available for all languages ) to access input parameters, so you don't have to parse the input data at all.

Thus, irrespective of the language you write a plugin in, you get a programming API to access input parameters. Plugin developers can define a so-called default configuration so that plugin input parameters ( if not set explicitly ) get initialized with sane defaults.

Below is the sparrow equivalent of the ansible module accessing a named input parameter. We are going to use Bash here.

# this is plugin scenario:
$ cat story.bash
message=$(config message)

# this is default configuration:
$ cat story.ini
message = Hi!

And this is how sparrow handles nested input parameters!

$ cat sparrowfile
task-run "run my task", 'foo-plugin', %( 
 foo => { 
    bar => { 
      baz  => 'BAZ'
    } 
 } 
);

$ cat story.bash 
baz=$(config foo.bar.baz)

Return results

Ansible – ansible modules return results as JSON. There are some essential points about how ansible modules return results:

* the exit code of an ansible module script gets ignored
* the only requirement for a module is that it should print specially formatted JSON ( containing required fields ) to STDOUT
* if no valid JSON appears in the module's output, it is considered a failure
* STDOUT/STDERR generated by a module ( if any ) is not seen in the playbook output
* thus if a module developer wants to return some value, he/she always has to pack the data into JSON format and return it as a JSON string

Below is an example of an ansible module that returns the current time:

$ cat library/
import datetime
import json

date = str(datetime.datetime.now())
print json.dumps({
    "time" : date
})

Sparrow – sparrow plugins can return whatever they like; actually sparrow does not care ( but see the "handle results" section ) about what appears on STDOUT/STDERR. There are some essential points about how sparrow plugins return results:

* the exit code is important; it should be 0, otherwise sparrow treats the plugin execution as a failure
* STDOUT from a plugin simply gets redirected to the sparrowdo output, so you always see what is happening under the hood; no wrapping of results into JSON takes place as for ansible modules

Below is the sparrow equivalent of the ansible module returning the current time; we are going to use Perl5 here:

$ cat
print scalar localtime;

Handle results

Ansible – as ansible modules return structured JSON data, it is possible to assign the data included in the JSON to ansible variables and use them at the upper level ( inside playbooks ).

Below is an example of a simple echo module which just returns what it gets as input:

$ cat playbook.yml
- hosts: localhost
  tasks:
    - name: tell me what I say
          message: "hi there!" 
      register: result
    - debug: var=result  

$ cat library/
from ansible.module_utils.basic import *

def main():

    module = AnsibleModule(argument_spec={})
    response = {"you_said": module.params['message']}
    module.exit_json(changed=True, meta=response)

if __name__ == '__main__':
    main()

Sparrow – as was told, sparrow does not care about WHAT appears on a plugin's STDOUT. Well, that's not entirely true. Plugin developers can define check rules to validate the STDOUT coming from plugin scripts. Such validation consists of matching STDOUT lines against Perl regexes and many other things you can get acquainted with in the Outthentic::DSL documentation pages ( a sparrow embedded DSL to validate text output ). The output validation result impacts the overall execution status of the sparrow plugin; thus if validation checks fail, the plugin itself fails. Such embedded testing facilities make it easy to develop plugins for automation testing or audit purposes.

Probably there is nothing to add here as an example besides this dummy code 🙂

$ cat sparrowfile
task-run "tell me what I say", "echo", %( message => 'hi there!' );

$ cat story.bash
echo you said $(config message)

A trivial check rule for the script output would be:

$ cat story.check
generator:  config()->{message}

Deployment process

Ansible – many ansible modules get shipped as a core part of ansible itself: ready to use, no extra deployment effort needed. Users write custom modules and host them in SCM ( github, gitlab, svn ); finally, modules are just files that get checked out into a directory on the master host from which you push ansible tasks to remote hosts, so no special deployment actions need to be taken besides downloading the ansible module files. The ansible modules ecosystem thus splits into:

* the main Ansible repository – modules shipped as ansible core
* custom ansible modules

So ansible follows a pure agentless schema with a push approach. No module deployment happens on the target host; ansible only pushes modules as files to where they are executed.

Below is a schematic view of ansible custom modules deployment:


Sparrow – sparrow plugins are actually packaged scripts that get delivered like any kind of software package: deb, rpm, rubygems, CPAN. Sparrow exposes a console manager to download and install sparrow plugins. Sparrowdo compiles scenarios into a list of meta data and copies it to the remote host. Then the sparrow manager gets run ( over ssh ) on the remote host to pick up the meta data and then download, install and execute the plugins.

So sparrow follows a client-server schema with a push approach, and plugin deployment happens on the side of the target host.

Sparrow plugins have versions, ownership and documentation. Sparrow plugins are hosted at the central plugins repository – SparrowHub.

Here is a meta data example of the sparrow plugin "package-generic" which installs software packages:

{
    "name" : "package-generic",
    "version" : "0.2.16",
    "description": "Generic package manager. Installs packages using OS specific package managers (yum,apt-get)",
    "url" : ""
}

There is no rigid separation between custom and "core" plugins in the sparrow ecosystem. Every plugin uploaded to SparrowHub immediately becomes accessible for end users and sparrowdo scenarios. For security reasons sparrow provides the ability to host so-called "private" plugins in remote git repositories. Such plugins can be "mixed in" to the standard sparrow pipeline.

Below is a schematic view of sparrow plugins deployment:



Managing dependencies

Ansible – ansible provides no built-in facilities to manage dependencies at the level of an ansible module; you would probably handle that at the level above, in ansible playbooks. Thus if your module depends on some software library, you have to take care of such dependency resolution somewhere else.

Sparrow – sparrow provides facilities to manage dependencies at the level of a sparrow plugin. Thus if a plugin depends on software libraries, you may declare such dependencies in the plugin scope so that the plugin manager takes care of dependency resolution when the plugin is installed. For the time being, dependencies for the Perl5 and Ruby languages are supported: CPAN modules for Perl5 via cpanfile and RubyGems for Ruby via Gemfile.
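
For illustration, a hypothetical plugin depending on the CPAN module JSON and the Ruby gem rest-client might ship dependency files like these ( the dependency names are made-up examples, not taken from a real plugin ):

```
$ cat cpanfile
requires 'JSON';

$ cat Gemfile
source 'https://rubygems.org'
gem 'rest-client'
```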


Ansible gained big success due to its extensive ecosystem of existing ansible modules. Though when comparing the module development process with the one existing in sparrow ( sparrow plugins ), I find some interesting and promising features sparrow might show in this field. To sum them up:

* Playbooks vs sparrowdo scenarios – sparrowdo provides an imperative Perl6 language interface against the declarative way of ansible playbooks written in YAML. While for some tasks such a declarative approach is fine, there are cases when we need to add imperative style to our configuration scenarios, as provided by any modern general purpose language, where YAML for sure does not fit.

* Script-oriented design – due to its script-oriented design, sparrow plugins provide you a way to split whole tasks into many simple scripts interacting with each other. This is actually what we usually do in regular scripting for routine tasks, so why not bring it here? 🙂

* Modules/plugins management and life cycle – sparrow plugins are even more loosely coupled with the configuration management tool itself than we see with ansible. They are developed, debugged, hosted and managed independently, without even knowledge of the sparrowdo configuration management tool. This makes the plugin development process more effective and less painful.

* Bash/shell scripting – sparrow provides much better support for "straightforward" bash/shell scripting than ansible, due to the aforementioned limitation of the latter on return results and the "JSON" interface. It is hard to understand what is going wrong when executing ansible bash scripts, as ansible hides all the STDOUT/STDERR they generate. Meanwhile sparrow honestly shows what comes from executed bash/shell commands.

* Programming API – sparrow provides a unified API for all the languages; every language has "equal" rights in the sparrow ecosystem and shares the same possibilities in terms of API. Meanwhile ansible modules tend to be written in Python, as it seems the most seamless way to develop ansible modules.

* Testing facilities – sparrow exposes built-in test facilities which expand sparrow usage beyond deployment tasks to testing/monitoring/audit needs.

Sparrow plugins evolution


Black boxes and APIs.

Sparrow plugins are an underlying, essential part of the sparrowdo system. On the one hand they are just scripts that solve various tasks, like creating user accounts, populating configuration files or removing directories. On the other hand they are more or less black boxes with a well-defined API exposed to the external world.

Sparrowdo uses sparrow plugins as building blocks to manage and automate remote servers. In this article I am going to give an informal introduction to the sparrow/sparrowdo ecosystem, with a focus on its central part, the heart of it all – sparrow plugins. As much as possible, I will try not to burden the material with low-level technical details which might be confusing for an unprepared user; however, simple code examples and diagrams will sometimes occur here, hopefully helping you catch the main ideas and not get strayed.

Bottom of the system.

Well. Not to dive too much into technical aspects, let me try to explain informally what sparrow plugins are. We start from the very bottom of the system, as if we did not want to know anything about sparrowdo and only wanted to play with sparrow plugins ( indeed, it's possible without sparrowdo itself! ). These are the few basic entities we have to meet first:

* Scenarios
* Stories
* Suites
* Plugins
* Tasks
* Task boxes

Every single step ahead will lead us to the whole picture of sparrow ecosystem.


Scenarios are just scripts written in one of the languages of choice – Perl5, Ruby or Bash. Sparrow provides a unified, language-agnostic API for script developers, so they can leverage:

* Easy script configuration ( in various formats – command line, Config::General, JSON / YAML )
* Multi-script scenarios – the ability to call one script from another with parameters
* Check rules – the ability to verify scenario output with an embedded DSL


Picture  1. Scenarios & Stories.


Stories are an abstraction for scenarios and their check rules. In sparrow terms, scenarios are always accompanied by a check list file – a list of definitions written in a special DSL to verify script output. In the trivial case it could be an empty file, so no checking takes place. If the user defines some patterns in the check list file to validate scenario output, the verification is performed.

Say we have a simple "hello world" scenario:

$ cat
print "hello world!\n"

In sparrow, a scenario is executed by the story executor called strun, which runs the script and checks that its exit code is zero, which is treated as success. If check rules are supplied, the scenario STDOUT gets validated against them. Consider a trivial check rule for the script above:

$ cat story.check
hello world

It just checks that the script output contains the string. There are many more things you can do when validating, like regex checking, capturing and handling matched data and so on; follow the Outthentic::DSL module documentation to learn more. Many plugins have no or only simple check rules, but if you write monitoring / testing / audit scripts, the check rules feature can be extremely useful. Another interesting idea behind check rules is "self-testing scripts", but I am not going to talk much about this here 🙂
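
To make the idea concrete, here is a minimal Python sketch of this kind of output validation ( a deliberately simplified model, not strun's actual implementation ): every plain check rule must occur somewhere in the captured STDOUT, otherwise the run fails.

```python
def validate(stdout, check_rules):
    """Simplified model of story output validation: every plain
    check rule must appear as a substring of some STDOUT line."""
    lines = stdout.splitlines()
    for rule in check_rules:
        if not any(rule in line for line in lines):
            return False  # rule not matched: the story fails
    return True

print(validate("hello world!\n", ["hello world"]))  # True
```

The real DSL additionally supports regexes, capture blocks and generators, but the pass/fail contract is the same: unmatched rules turn a zero exit code into a failed story.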

Ok, if we take a look at picture number 1, we will get a visual summary of all we've learned so far.

Let’s go ahead and talk about sparrow suites.



Picture  2. Suites.

Suites are related stories. One may have many scripts related to one task, or split a complex thing into small scripts interacting with each other. A sparrow suite always has a "main" story ( denoted as "FIRST" in picture number 2 ) which "calls" the others in a chain, so we end up with a tree of stories. A story being called is a "downstream" story; a story calling downstream stories is an "upstream" story; obviously the same story could be both upstream and downstream. When a story gets called, it may be given story parameters. A unified API is provided to handle story parameters in whatever language you choose to write a scenario. For Bash we could have such code:

# upstream story
run_story my-story message 'hello world'  

# downstream story my-story.bash
message=$(story_var message)
echo $message

The same code in Perl would be:

# upstream story
run_story("my-story", { message => 'hello world' });  

# downstream story
my $message = story_var('message');
print $message;

Story parameters being passed can be nested, which is pretty well represented via Perl or Ruby hashes. Even Bash is supported ( with some limitations ):

# set parameters at upstream story:

# Perl
run_story("S1", { message => { hello => 'world' } } );

# Ruby
run_story "S1", { :message => { :hello => 'world' } }  

# Bash
run_story S1 message.hello world

# access parameters at downstream story:

# Perl
story_var('message')->{hello};

# Ruby
story_var('message')['hello']

# Bash
$(story_var message.hello )

Technical details on sparrow stories can be found in the Outthentic module documentation.

In the same way as stories accept parameters and handle them using the unified API, one may configure a sparrow suite. Say we want to pass some global parameters as suite input. Let's first create a default suite configuration; it could be ( one of the options, see later ) a Config::General format file:

$ cat suite.ini

<app>
  <servers_and_ports>
    nginx 80
    tomcat 8080
    dev_server 3000
  </servers_and_ports>
</app>

$ strun --ini suite.ini

Now we have a unified API to access global parameters:

# access global parameters at story:

# Perl
config()->{app}->{servers_and_ports}->{nginx};

# Ruby 
config['app']['servers_and_ports']['nginx']

# Bash 
$(config app.servers_and_ports.nginx )

We can even override suite global parameters at run time via the command line:

$ strun --param app.servers_and_ports.nginx=81

And finally we could use a JSON/YAML format to store global parameters:

$ cat suite.json

{
  "app": {
    "servers_and_ports": {
      "nginx" : 80,
      "tomcat" : 8080,
      "dev_server": 3000
    }
  }
}

$ strun --json suite.json

Default configuration and Hash merge.

If we have a suite.ini configuration file for our suite, it is considered the default configuration file. Thanks to Hash::Merge it is possible to override the default values ( merging the two files into one hash ) with a custom configuration file:

$ cat suite.ini

  bar = bar-default-value
  baz = baz-default-value

$ strun --ini suite.ini # load a default configuration

$ cat
  bar = bar-new-value

$ strun --ini 

# will override bar value to `bar-new-value`
# baz value will remain default.
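
The merge semantics can be sketched in a few lines of Python ( an illustrative model, not the Hash::Merge implementation ): values from the custom configuration win, while keys absent from it keep their defaults.

```python
def merge(default, custom):
    """Recursively merge two config dicts; custom values override defaults."""
    result = dict(default)
    for key, value in custom.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge(result[key], value)  # descend into nested sections
        else:
            result[key] = value
    return result

default = {"bar": "bar-default-value", "baz": "baz-default-value"}
custom = {"bar": "bar-new-value"}
print(merge(default, custom))
# {'bar': 'bar-new-value', 'baz': 'baz-default-value'}
```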

To learn more about suites and their interfaces, take a look at the Outthentic documentation.

Now let's see how suites become plugins.

Sparrow plugins

Picture 3. Sparrow plugins distribution system.

Plugins are packaged suites, ready for distribution. From the end user's point of view plugins act as suites, so they "inherit" all the features we have learned so far.

But plugins add some extra value to the system:

* name and version
* dependencies ( CPAN / RubyGems )
* ownership

Every plugin has a name to be identified by in the global sparrow system; this is as obvious as having names for software packages. Plugins also have versions, so plugin developers may release, and plugin users may utilize, various versions of a plugin:

$ sparrow plg search nginx # search nginx related plugins
$ sparrow plg install nginx-check # install nginx-check plugin
$ sparrow plg run nginx-check # run nginx-check plugin
$ sparrow plg run nginx-check --version 0.0.8 # running a specific version

Sparrow has a quite extensive API for managing plugins which we can't focus on here; please follow the documentation if you are interested. What is important here is that plugins are small bits of software distributed the same way as in many package systems like apt, CPAN, rpm, RubyGems and so on.


In the spirit of ansible modules, sparrow plugins do not depend on other plugins, but we can use any software libraries in our scenarios. Currently a plugin developer can declare CPAN dependencies in a cpanfile or RubyGems dependencies in a Gemfile, so that such dependencies will be installed. Sparrow adjusts the running environment ( setting library paths for Perl and Ruby ) so that the installed libraries are accessible in the running scenario. It's very handy!


To publish plugins to the central repository SparrowHub you need to get an account there. It is also possible to distribute so-called private plugins hosted in remote git repositories.


All of this can be written in a simple JSON format. This is how sparrow plugins get registered in the sparrow system:

{
  "version" : "",
  "name"    : "nginx-check",
  "description" : "checks if nginx server is healthy by executing low level system checks ( ps, pid, etime )",
  "url"         : ""
}

Sparrow tasks

Sparrow plugins are bound to a default suite configuration; there is not much you can do about it, except redefine global parameters at run time:

$ sparrow plg run foo --param a=1 --param b=2

Picture 4. Sparrow plugins and tasks.

Sparrow tasks give you way more agility. Tasks are plugins with custom configurations. Tasks have names and are grouped by projects:

$ sparrow task add foo-project foo-task foo
$ sparrow task ini foo-project/foo-task
a = 100
b = 200
$ sparrow task run foo-project/foo-task

There is a lot of information about sparrow tasks in the Sparrow documentation pages.

Ok, it’s been a long trip. We are approaching the end of evolution here 🙂 And this is sparrow task boxes.

Task box

A task box is a collection of sparrow tasks; we can write it as JSON:

[
  {
    "task" : "foo-task",
    "plugin" : "foo-plugin",
    "global_parameters" : {
       "a" : 1,
       "b" : 2
    }
  },
  {
    "task" : "bar-task",
    "plugin" : "bar-plugin",
    "global_parameters" : {
      "aa" : 1,
      "bb" : 2
    }
  }
]

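Conceptually, running a task box is just walking the list and invoking each plugin with its parameters. Here is a minimal Python sketch of that idea ( a model only; `run_task_box` and the callback are made-up names, and the callback stands in for something like `sparrow plg run` ):

```python
import json

# A task box with the same shape as the JSON above.
task_box = json.loads("""
[
  {"task": "foo-task", "plugin": "foo-plugin", "global_parameters": {"a": 1, "b": 2}},
  {"task": "bar-task", "plugin": "bar-plugin", "global_parameters": {"aa": 1, "bb": 2}}
]
""")

def run_task_box(box, run_plugin):
    """Run the tasks consecutively; run_plugin(plugin_name, params)
    stands in for the real plugin execution."""
    for entry in box:
        run_plugin(entry["plugin"], entry["global_parameters"])

run_task_box(task_box, lambda plugin, params: print(plugin, params))
```
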
Sparrow task boxes are a way to run many sparrow plugins with parameters, consecutively. This is actually what Sparrowdo does when compiling sparrowdo scenarios:

$ cat sparrowfile
user "zookeeper";
directory "/var/data/zoo";
file "/var/data/zoo/birds.txt", %( owner => 'zookeeper' );

The given code gets compiled into a sparrow task box:

[
  {
     "plugin" : "user", 
     "task" : "create user zookeeper", 
     "data" : { "name" : "zookeeper", "action" : "create" } 
  },
  {
     "plugin" : "directory", 
     "task" : "create directory /var/data/zoo", 
     "data" : { "path" : "/var/data/zoo", "action" : "create" } 
  },
  {
     "plugin" : "file", 
     "task" : "create file /var/data/zoo/birds.txt", 
     "data" : { 
        "owner" : "zookeeper", 
        "action" : "create", 
        "target" : "/var/data/zoo/birds.txt" 
     }
  }
]

From the very bottom of the system we have reached the sparrow evolution end point – high level configuration management scenarios written in Perl6. But under the hood it's just JSON that gets pushed to the sparrow client, which does the low level job of executing sparrow plugins 🙂 See the last picture:


Picture 5. Sparrow plugins evolution.


Let’s summarize what we’ve learned in this article:

* Sparrow plugins are scripts written in one of the languages of choice: Perl5/Bash/Ruby
* Outthentic – a core sparrow component – is a development and execution kit enabling some frequently used features when writing automation scenarios: testing script output, reusing other scripts and passing script configuration parameters
* To distribute scripts, they are packaged and uploaded to the central repository – SparrowHub
* The sparrow client is a command line tool to install, configure and run plugins
* Sparrowdo acts as a high level system built upon sparrow plugins, to write automation scenarios in the Perl6 language and then execute them as sparrow "plugin primitives", with JSON as the internal presentation format and scp/ssh as the transport

I hope this was a helpful article; please post your comments, questions and ideas here.


— Alexey Melezhik
