Simple META6::bin wrapper

Recently Wenzel P. P. Peppmeyer ( aka gfldex ) released a nice helper to start Perl6 projects from scratch – it’s called META6::bin

$ zef install META6::bin

The META6::bin module enables creation of a Perl6 project from scratch. For example, this is how quickly one can bootstrap a new Perl6 module called Foo::Bar:

$ meta6 --new-module=Foo::Bar

There are a lot of options for the meta6 client; take a look at the documentation. META6::bin cares about git/github things, setting up a git repository for your freshly started projects, creating the META6.json file, populating the t/ directory, and so on.
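
For example, you may combine --new-module with the options that show up later in this post ( see the META6::bin documentation for the authoritative list ):

$ meta6 --new-module=Foo::Bar --force --skip-git --skip-github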

I have created a simple wrapper around the meta6 script. The reasons for that:

* I don’t want to remember all the options I use when launching the meta6 client to bootstrap my projects
* I have predefined settings I always use, so I don’t want to enter them every time I run the meta6 command line.

Here is my solution – a sparrow plugin with the analogous name – meta6-bin. Under the hood it just calls the meta6 client with parameters, but you can easily customize them by using sparrow tasks:

$ sparrow plg install meta6-bin
$ sparrow project create perl6-projects
$ sparrow task add  perl6-projects meta6-bin new

Having this defined, you may easily create new Perl6 module projects, running meta6 with some default options:

$ sparrow task run perl6-projects/new --param name=Foo::Bar --param path=~/my-projects/

The only two obligatory parameters you have to set here are name – the module name – and path – the directory where you want to create the project files.

Here is how you can create a project inside current working directory:

$ sparrow task run perl6-projects/new --param name=Foo::Bar --param path=$PWD

And finally, let’s tune some settings to meet our specific requirements. Say I don’t want to initialize a git repository for my projects and I have a predefined root location to keep my work:

$ export EDITOR=nano
$ sparrow task ini perl6-projects/new
options --force --skip-git --skip-github
path /opt/projects

Now we “memorize” our settings into the sparrow task so that they are applied on the next meta6-bin runs:

$ sparrow task run perl6-projects/new --param name=Foo::Bar

Hope this short post was useful.
Regards, and stay tuned with Perl6/Sparrow/Sparrowdo.

Writing pre-commit hooks with Sparrow

Introduction

Pre-commit is a framework for managing and maintaining multi-language pre-commit hooks. Developers write hooks to be triggered so that some preliminary/useful job gets done before updates arrive at your git repo. The idea is quite old, but pre-commit lets you install and integrate hooks into existing git repos with minimal effort.
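
For reference, installing the framework and activating it for a git repo boils down to two commands:

$ pip install pre-commit   # install the framework
$ pre-commit install       # set up the hooks for the current git repo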

Writing hooks with sparrow

Sparrow is a universal automation tool, and I found it quite easy to use sparrow to write hooks for the pre-commit framework. Let me show how. Say I need to run prove tests for Perl6 code.

The code of the hook is trivial and looks like this:

prove -vr -e 'perl6 -Ilib' t/

Let’s wrap this script into a sparrow plugin; here are a few simple steps:

1. Write a story:

$ cat story.bash
set -x
set -e
path=$(config path)
echo path is: $path
cd $path
prove -vr -e 'perl6 -Ilib' t/

A quick remark here. We pass the Perl6 project directory location explicitly via the path parameter as an absolute file path. This requirement exists because sparrow does not preserve the current working directory when executing plugins.

2. Leave the story check file empty, as we don’t need any extra checks here:

$ touch story.check

3. And create plugin meta file:

$ cat sparrow.json
{
  "name" : "perl6-prove",
  "description" : "pre-commit hook - runs prove for Perl6 project",
  "version" : "0.0.1",
  "category" : "utilities",
  "url" : "https://github.com/melezhik/perl6-prove"
}

4. Now we can upload our freshly baked plugin to SparrowHub:

$ sparrow plg upload

Using sparrow plugin in pre-commit hooks

First of all we need to install the sparrow plugin on our system and see that our hook works on a test Perl6 project.

Install the plugin:

$ sparrow plg install perl6-prove

Set up git repository and project files:

$ git init 
$ ... # Create files and directories, git add, and so on ..

Create a simple Perl6 test:

$ cat t/00.t
use v6;
use Test;
plan 1;
ok 1, 'I am ok';
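
Before wiring the plugin into pre-commit we can run it directly to see that it works ( note the explicit path parameter discussed above ):

$ sparrow plg run perl6-prove --param path=$PWD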

Then we need to set up the pre-commit hooks yaml.

Our pre-commit hook yaml will be:

$ cat .pre-commit-config.yaml
-   repo: local
    hooks:
    -   id: perl6-prove
        name: perl6-prove
        entry: bash -c "sparrow plg run perl6-prove --param path=$PWD"
        language: system
        always_run: true
        files: ''

Here we use the so-called “local” repository and set language to “system”, bearing in mind that sparrow comes as an external system command.

Now let’s commit our changes to trigger hook execution:

$ git commit -a -mtest-commit
perl6-prove..............................................................Passed
[master 98cf098] test-commit
 1 file changed, 6 insertions(+)
 create mode 100644 t/00.t

You can also trigger the hook directly without committing anything:

$ pre-commit run perl6-prove --verbose
[WARNING] Unstaged files detected.
[INFO] Stashing unstaged files to /home/vagrant/.pre-commit/patch1489306073.
[perl6-prove] perl6-prove................................................Passed
hookid: perl6-prove

[p] perl6-prove at 2017-03-12 08:07:53
path is: /home/vagrant/projects/pre-commit-test
t/00.t ..
1..1
ok 1 - I am ok
ok
All tests successful.
Files=1, Tests=1,  1 wallclock secs ( 0.02 usr  0.00 sys +  0.16 cusr  0.03 csys =  0.21 CPU)
Result: PASS
ok      scenario succeeded
STATUS  SUCCEED

At the end of this post let me summarize.

Implementing pre-commit hooks via Sparrow plugins

Roughly speaking, the pre-commit framework supports two types of hooks – external ones, which are installed into the system manually, and ones located in github repositories and installed by pre-commit itself.
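
For completeness, a hook hosted in a github repository would be referenced in .pre-commit-config.yaml roughly like this ( someuser/some-hooks and some-hook are placeholder names; the pin key is sha in older pre-commit versions and rev in newer ones ):

-   repo: https://github.com/someuser/some-hooks
    sha: v1.0.0
    hooks:
    -   id: some-hook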

I see these possible benefits sparrow can bring you when developing hook scripts as sparrow ( external ) plugins:

– Sparrow plugins are external and highly decoupled from the hooks/project structure.

– Indeed, they are versioned and packaged pieces of software. One can maintain and release new versions of plugins in a way predictable and transparent for the end user.

– You can always install/remove/upgrade/downgrade versions of a sparrow plugin independently of the pre-commit framework itself.

– Sparrow provides a reasonable alternative for managing hook script dependencies: sparrow takes care of dependency resolution during plugin installation. It’s CPAN/carton for Perl5 and RubyGems/bundler for Ruby. Let me know if you need other package managers supported.


Regards and have fun with your automation!

Manage goss scenarios with sparrow

Introduction

Goss is a YAML based serverspec alternative tool for validating a server’s configuration. It’s written in Go. It’s a quite interesting and promising young project I came across via the reddit/devops channel.
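
For reference, standalone goss usage looks roughly like this ( see the goss docs for details ):

$ goss add service nginx   # record a test for the current state of the nginx service
$ goss validate            # run the tests from goss.yaml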

Let me show how one can distribute goss scenarios using sparrow tasks.

Before diving into technical stuff let me explain why this could be useful:

  • you want to organize multiple goss scenarios by logical groups and manage them via a unified interface
  • you want to share some goss yamls across your team, making it possible to quickly run your goss tests against many applications

Ok, let’s go.

Installing sparrow goss plugin

This part is really easy.

$ sparrow index update # we want fresh index from SparrowHub
$ sparrow plg install goss

You’ll find detailed information on the goss sparrow plugin at https://sparrowhub.org/info/goss

Set up sparrow project and tasks

Ok, now let’s create a sparrow project and tasks. These are just simple abstractions to split many goss tests into various logical groups.

$ sparrow project audit # we will keep all goss scenarios here
$ sparrow task add audit nginx  goss # nginx test suite
$ sparrow task add audit mysql goss # mysql test suite

Running the sparrow task list command, we see our new project and tasks:

$ sparrow task list
[sparrow task list]
 [audit]
  audit/mysql
  audit/nginx

Set up goss tests

Now let’s populate our goss tests. We should read the goss spec first, but it’s really easy.

One for nginx:

$ sparrow task ini audit/nginx 

action validate
goss << HERE
port:
  tcp:80:
    listening: true
    ip:
    - 0.0.0.0
service:
  nginx:
    enabled: true
    running: true
process:
  nginx:
    running: true

HERE

And one for mysql:

$ sparrow task ini audit/mysql 

action validate
goss << HERE
port:
  tcp:3306:
    listening: true
    ip:
    - 127.0.0.1
service:
  mysql:
    enabled: true
    running: true
process:
  mysqld:
    running: true

HERE

Run goss tests

Now we can run goss tests separately for nginx and mysql.

One for nginx:

$ sparrow task run audit/nginx
[t] nginx
@ goss wrapper

[t] nginx modules/generate-goss-yaml/ params: cache_dir:/home/vagrant/.outthentic/tmp/9595/story-1 at 2017-03-07 16:18:13
generated goss yaml at /home/vagrant/.outthentic/tmp/9595/story-1/goss.yaml
ok      scenario succeeded

[t] nginx modules/validate/ at 2017-03-07 16:18:13
1..5
ok 1 - Process: nginx: running: matches expectation: [true]
ok 2 - Port: tcp:80: listening: matches expectation: [true]
ok 3 - Port: tcp:80: ip: matches expectation: [["0.0.0.0"]]
ok 4 - Service: nginx: enabled: matches expectation: [true]
ok 5 - Service: nginx: running: matches expectation: [true]
ok      scenario succeeded
STATUS  SUCCEED

And one for mysql:

$ sparrow task run audit/mysql
[t] mysql
@ goss wrapper

[t] mysql modules/generate-goss-yaml/ params: cache_dir:/home/vagrant/.outthentic/tmp/9901/story-1 at 2017-03-08 08:19:14
generated goss yaml at /home/vagrant/.outthentic/tmp/9901/story-1/goss.yaml
ok      scenario succeeded

[t] mysql modules/validate/ at 2017-03-08 08:19:14
1..5
ok 1 - Process: mysqld: running: matches expectation: [true]
ok 2 - Port: tcp:3306: listening: matches expectation: [true]
ok 3 - Port: tcp:3306: ip: matches expectation: [["127.0.0.1"]]
ok 4 - Service: mysql: enabled: matches expectation: [true]
ok 5 - Service: mysql: running: matches expectation: [true]
ok      scenario succeeded
STATUS  SUCCEED

Sharing goss tests

An interesting use case is sharing your goss tests. Sparrow makes it possible to save your tasks at SparrowHub – the central sparrow repository – so you can share tasks with others.

Say you want someone else to run your goss scenarios on a remote server. Provided that the sparrow client is installed there, it is really easy.

Upload remote task

$ sparrow remote task upload audit/nginx "goss audit for nginx"
$ sparrow remote task share audit/nginx
$ sparrow remote task upload audit/mysql "goss audit for mysql"
$ sparrow remote task share audit/mysql


Install and run remote task

Having logged into the other server, just run:

$ sparrow remote task run melezhik@audit/nginx
$ sparrow remote task run melezhik@audit/mysql

More on remote tasks can be found in the sparrow documentation – https://github.com/melezhik/sparrow#remote-tasks

Running goss scenarios with sparrowdo

Alternatively, you may want to use the Perl6 interface to sparrow and run goss scenarios using sparrowdo:

$ cat sparrowfile

task-run 'run goss for mysql', 'goss', %( action  => 'validate' , goss => q:to/HERE/ );

port:
  tcp:3306:
    listening: true
    ip:
    - 127.0.0.1
service:
  mysql:
    enabled: true
    running: true
process:
  mysqld:
    running: true

HERE

$ sparrowdo --host=192.168.0.1

Regards and have fun with automation.

Outthentic – quick way to develop user’s scenarios

Introduction

Outthentic is a development kit for the rapid development of user scripts and test scenarios. Outthentic is an essential part of the Sparrow system. Let’s see how easy script development might be with the Outthentic framework.

Bits and pieces of theory

First of all let’s create a project for all our scripts.

$ mkdir tutorial
$ cd tutorial/

Ok, now let’s create our first script. We are going to use the Bash language here, but Outthentic plays nice with many languages (*), as we will see later.

(*) These are Bash, Perl5, Python and Ruby.

Let’s say we want to create a simple script to check the status of the nginx web server:

$ touch story.check

$ cat story.bash
service nginx status

Let me explain what we’ve done so far.

We have created a script story.bash and an empty check file story.check.

In Outthentic there is the term story, which is just an abstraction for a script and its check file. We refer to scripts as story scenarios and to check files as story check files. We may also refer to the story script and the story check file as story data or story files.

To let Outthentic tell one story from another, we should put story files into different directories. Technically speaking, a story is just a directory with some story files inside.
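
For instance, the minimal layout of a single-story project is just:

$ tree
.
├── story.bash
└── story.check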

When we say “run or execute the story” it means we execute the story script and apply the rules from the story check file to verify the script’s stdout.

Another good explanation of stories is that they are the elementary units of the Outthentic framework, used to build bigger things like Outthentic suites or projects.

Conversely, an Outthentic project or suite is just a container with Outthentic stories.

That’s enough of theory. Let’s get back to our small script.

Here, in the example, the story scenario is a small Bash script doing a useful job. The story check file could contain some check rules to verify the stdout emitted by the script. Right now we don’t want to verify the script’s stdout, so we just leave the story check file empty (*).

(*) In the latest versions of Outthentic check files are no longer obligatory, so if you’re not going to validate a script’s stdout just don’t create a check file.

Now let’s run the script, or as we would say in Outthentic terminology – run the story.

Let’s use strun – the console client that executes scenarios in Outthentic stories:

$ strun 
at 2017-02-08 15:47:56
* nginx is running
ok    scenario succeeded
STATUS    SUCCEED

Ok. Good. All should be clear from reading strun’s report. We see that nginx is running. At least this is what the service nginx status command tells us. What’s happening under the hood when we invoke strun?

Strun is a [s]tory [r]unner – a utility that runs the story script story.bash and then checks if its exit code is 0. In case of a successful exit code strun prints “scenario succeeded” in its report. The overall “STATUS SUCCEED” line means that all the project’s scripts have succeeded. Right now there is only one script – story.bash; very soon though we will see that there may be more than one script in an Outthentic project.

But before diving into details about scenario development, let me show how strun reports when a scenario fails. Let’s shut nginx down and run our story again:

$ sudo /etc/init.d/nginx stop
$ strun 
at 2017-02-08 15:57:25
* nginx is not running
not ok    scenario succeeded
STATUS    FAILED (256)

Check lists and check files

Check lists are rules written in the Outthentic::DSL language to verify the stdout emitted by a story script. Do you remember that we left the check file empty? Now let’s add some check rules:

$ cat story.check 
nginx is running

Now let’s start nginx again and re-run our story:

$ sudo /etc/init.d/nginx start
$ strun 
at 2017-02-08 16:02:38
* nginx is running
ok    scenario succeeded
ok    text has 'nginx is running'
STATUS    SUCCEED

Good, we see a new line has appeared in the strun report:

ok  text has 'nginx is running'

Strun executes the story.bash script and then checks if the script’s stdout includes the string “nginx is running”.

You may use Perl5 regexes in the check rules as well:

$ cat story.check 
regexp: nginx\s+is\s+running

Outthentic::DSL makes a lot of other complex checks possible; please follow the tutorial to see more examples. But for now let’s just see how we can use check rules in our script development.

So far this type of check is meaningless: it seems that the service nginx status command does all the job, and if it succeeds there is no need to analyse the stdout to verify that nginx is running – unless you are truly paranoid and want to add double checks :).

But let’s rewrite our story scenario to see how useful story checks can be. What if, instead of “consulting” the service nginx status command, we want to look at the process list of our server? Let’s rewrite our story:

$ cat story.bash 
ps uax | grep nginx

$ cat story.check 
nginx: master
nginx: worker

Now let’s give it a run and see the results:

$ strun 
at 2017-02-08 16:13:19
root     21274  0.0  0.0  85884  1332 ?        Ss   16:02   0:00 nginx: master process /usr/sbin/nginx
www-data 21275  0.0  0.0  86220  1756 ?        S    16:02   0:00 nginx: worker process
melezhik 21406  0.0  0.0  17156   944 pts/1    R+   16:13   0:00 grep nginx
ok    scenario succeeded
ok    text has 'nginx: master'
ok    text has 'nginx: worker'
STATUS    SUCCEED

Ok. Now we see that our check rules ( “nginx: master” and “nginx: worker” ) are working and “verifying” that the nginx server processes appear in the process list. This is much more detailed information in comparison with what we get from the simple service nginx status command.

More importantly, the command ps uax|grep nginx might succeed with a zero exit code even when the nginx server is not running ( guess why? because the grep command itself appears in the process list! ), and this is where check rules become handy – to verify that some commands succeed or fail even though they don’t return a proper exit code.

Let’s compare check rules against simple exit codes.


Check rules vs exit codes.

Sometimes you don’t have to define any special check rules to verify that your script succeeds; obviously most modern software provides valid exit codes you can rely upon. But sometimes a normal ( zero ) exit code does not mean that the command succeeded. The previous example shows the idea. It is pretty simple, but could be considered a “template” for the type of test scenario where you want to “grep” some information from the script’s stdout to verify that everything goes fine. Actually this is what people usually do when typing a $cmd|grep foo command in a terminal.

Another good example where the exit code couldn’t be a good criterion is insertion into a database. Say, the first time you insert a record it does not exist and you are ok when the script does the insertion and returns a zero exit code. The next time you run the script the record already exists and the script throws a bad exit code and a proper message ( something like: the table record with the given ID already exists … ). If you only need to ensure that the record with the given ID gets inserted into the database, you can write the following check rule and will be safe:

$ cat story.check

regexp: table record (created|already exists)
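
A matching story script might look like this sketch ( insert-record here is a hypothetical command standing in for your real insertion logic ):

$ cat story.bash
# hypothetical insertion command; "|| true" tolerates the non-zero
# exit code on re-runs - the check rule does the real verification
./insert-record --id 100 || true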

Outthentic suites

As I said at the beginning, there may be more than one script in an Outthentic project. In terms of Outthentic we can talk about Outthentic projects or Outthentic suites – a bunch of related Outthentic stories. Strun uses directories to tell one story from another. Let’s add a new story to the project we created before; we have to reorganize the directory layout:

$ tree 
.
├── check-nginx
│   ├── story.bash
│   └── story.check
└── start-nginx
    ├── story.bash
    └── story.check

The content of the check-nginx/* files remains the same. Check-nginx is a story to check the nginx web server status.

Now there is a new story – start-nginx – which, as you can imagine, starts the nginx server.

The content of the start-nginx/story.bash file is pretty simple:

$ cat  start-nginx/story.bash
sudo service nginx start

We leave the content of the file start-nginx/story.check empty.

The strun client has a --story option to set the story to run.

$ strun  --story start-nginx
start-nginx/ at 2017-02-08 16:52:27
ok    scenario succeeded
STATUS    SUCCEED

If no --story option is given, strun will run the file story.bash (*) in the current working directory. So we can create a default story which just says that the user should choose one of the two stories to run – check-nginx or start-nginx:

$ cat  story.bash 
echo usage: strun --story (start-nginx|check-nginx)

(*) Or actually one of four files if it exists – story.pl, story.bash, story.py, story.rb – as you can guess this relates to the language you write scenarios in – Perl5, Bash, Python or Ruby.

Having more than one story in the project allows you to have many small scripts which you can then run independently. But sometimes we want to take another approach – calling one script from others. Let’s see how we can achieve this.

Story modules

Story modules ( or in short just modules ) are scripts called from other scripts.
When called, modules may be given input parameters aka story variables.

Consider an example of a simple package manager.

Let’s say we want to write a script to install packages taken as an input string of space separated items:

script "package-foo package-bar package-baz"

Outthentic provides a highly effective API to handle command line parameters, so we can pass the package list via the --param option:

$ strun --param packages="package-foo package-bar package-baz"

Now let’s split our task into two simple scripts: one to parse input parameters and another to install a given package. The overall project structure will be:

$ tree
.
├── hook.bash
└── modules
    └── install-package
        ├── story.bash
        └── story.check

Let’s explain the new project structure.

First of all we notice a file named hook.bash. This is the hook. By using hooks we can extend strun functionality. Under the hood hooks are just simple scripts executed before the story scenario. Hooks functionality is described in the Outthentic documentation, in the Hooks section. At the moment all we have to know about hooks is that they are scripts that get run before the story scenario.

The directory modules/install-package holds the new Outthentic story install-package. When we place story files under the modules/ directory we define story modules.

Story modules are usual Outthentic stories which are called from other stories by using hook files. Let me show how it works:

$ cat hook.bash 
for p in $(config packages); do
  run_story install-package package $p
done

This simple Bash code does the following:

1. Parses input parameters using the config() function provided by Outthentic
2. Splits the input string by spaces and iterates over the packages list, calling the story module install-package and passing the package name as a parameter:

run_story install-package package $p

Let’s see how the story module is implemented; it’s again a very simple Bash script:

$ cat modules/install-package/story.bash 
package=$(story_var package)
echo install $package ...

What’s happening in the install-package/story.bash script?

1. The package name is assigned to a variable by using the Outthentic story_var() function
2. The package install command is executed (*).

(*) For demonstration purposes we don’t run a real package install here.

Let’s summarize.

* Story modules are very useful when designing your script system.
* This mechanism fosters splitting a complex task into simple ones and enables code reuse via the “script libraries” pattern.

You may find more information about Outthentic modules at the documentation pages, section Run stories from other stories.

Let’s run our story suite to see all in action:

$ strun  --param packages='nginx mysql perl'
modules/install-package/ params: package:nginx at 2017-02-09 11:29:23
install nginx ...
ok    scenario succeeded

modules/install-package/ params: package:mysql at 2017-02-09 11:29:23
install mysql ...
ok    scenario succeeded

modules/install-package/ params: package:perl at 2017-02-09 11:29:23
install perl ...
ok    scenario succeeded
STATUS    SUCCEED

In the next section we’ll see how we may add configuration to our suites.

Suite configuration

It is extremely useful to provide a sane default for script input parameters.

Outthentic has a lot of tools to do this. Let me show one of them.

Consider a script which proves that the nginx server is listening on a given http port:

$ cat story.bash 
sudo netstat -nlp|grep nginx

$ cat story.check 
0.0.0.0:80

When we run the story we see that nginx is available at port 80, as we expected:

$ strun 
at 2017-02-09 12:22:48
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      21899/nginx     
tcp6       0      0 :::80                   :::*                    LISTEN      21899/nginx     
ok    scenario succeeded
ok    text has '0.0.0.0:80'
STATUS    SUCCEED

Now we want to make the port parameter configurable for the script:

$ cat suite.yaml
port: 80

Once we define a configuration file called suite.yaml at the top of the project directory, strun will read it and the configuration data will be available via the config() function:

$ cat story.check 
generator: << CODE
!bash
port=$(config port)
echo 0.0.0.0:$port
CODE

This check file shows an example of generators – a special DSL to generate check rules at runtime. More information about generators can be found in the Outthentic::DSL documentation, in the generators section.

We can override default values by passing command line parameters:

$ strun --param port=443

Indeed, Outthentic provides many sophisticated and efficient methods to configure your scripts, with support for well recognized formats like JSON, YAML and Config::General. Please follow the Outthentic documentation to read more, the section suite configuration.

There is more than one language to write your script

And finally, as I said at the very beginning, you are free to choose between several languages when developing scripts with the Outthentic framework. The Outthentic API is implemented for the following languages:

* Perl5
* Bash
* Python
* Ruby

For example, this is how the hook file in the package manager suite could be written in Perl5:

$ cat hook.pl 
for my $p ( split /\s+/, config()->{packages}) {
  run_story("install-package", { package => $p });
}

Outthentic provides a universal API for all the listed languages:

  • Handling input parameters
  • Developing multi-script systems using story modules
  • Enabling configuration with rich support of well known formats

Scripts distribution

The next step is to distribute your scripts written with the Outthentic framework. Sparrow is an Outthentic script manager allowing you to share your scripts across any Linux boxes, provided that Perl5 is installed.

Further reading

For further reading I would recommend the comprehensive article – “Sparrow plugins evolution”

Script examples

Script examples presented in this post can be found here.

ssh/scp commands with Sparrowdo

Sometimes you need to execute remote commands or copy files to remote hosts using ssh/scp commands. Here is how you can do it using the Sparrowdo ssh/scp core-dsl functions.

Issuing ssh commands

The shortest way to do this is to call the `ssh` function with the minimum of required parameters – a command to execute and the remote host address.

ssh 'uptime', %( host => '192.168.0.1' )

Usually people use ssh public-key authentication, so it is possible to set a path to the ssh private key and provide a user id:

ssh 'uptime', %(
  host    => '192.168.0.1',
  user    => 'old-dog',
  ssh-key => 'keys/id_rsa'
);

Note that the ssh private key only needs to be stored at the master host where sparrowdo runs; no other actions need to be taken, sparrowdo will take care of copying(*) the ssh private key to the target host. It’s handy!

(*) By the way – sparrowdo will remove the private ssh key from the target host when the ssh command is done.

There are many options of the `ssh` function you may read about in the sparrowdo docs; here are just a few more examples.

You may run multi-line bash commands btw:

ssh q:to/CMD/, %( host => '192.168.0.1', user => 'old_dog');
  set -e
  apt-get update
  DEBIAN_FRONTEND=noninteractive apt-get install -y -qq curl
CMD

Or don’t execute the same command twice, relying on the existence of a file located at the target server:

ssh "rm file", %(  host => '192.168.0.1' , create => '/do/not/run/twice' );

And finally, you may set alternative descriptions for your commands, which will be shown in the sparrowdo report to help you understand what your command does:

ssh "cat patch.sql | mysql", %(
  description => 'patching my database',
  host => '192.168.0.1'
);

Issuing scp commands

The `scp` function is akin to the `ssh` one, except it deals with remote file copying. Nothing much to say here besides showing some examples.

Copy a number of files to the remote host 192.168.0.1:

scp %( 
  data    => "/var/file1 /var/file2 /var/file3",
  host    => "192.168.0.1:/var/", 
  user    => "Me", 
  ssh-key => "keys/id_rsa", 
);

Note that the files to copy should exist on the target host. If they don’t, you may copy them from the master host first using the `file` function:

file '/var/file1', %( content =>  ( slurp 'files/file1' ) );
file '/var/file2', %( content =>  ( slurp 'files/file2' ) );
file '/var/file3', %( content =>  ( slurp 'files/file3' ) );

The same way as for the `ssh` function, you may prevent copying the same file twice if some file exists at the target host:

scp %( 
  data    => "/var/biiiiiiig-file",
  host    => "192.168.0.1:/var/data", 
  create  => "/tmp/do/not/copy/me/twice"
)

And finally, one may copy files FROM the master host to the target host, using the `pull` flag:

scp %( 
  data    => "/var/data/dir",
  host    => "master-host:/var/file1", 
  pull    => 1, 
  ssh-key => "keys/id_rsa", 
);

That is it. Stay tuned with Sparrowdo Automation.  🙂

Sparrow plugins vs ansible modules

Introduction

Both ansible modules and sparrow plugins are building blocks to solve elementary tasks in configuration management and deployment automation. Ansible modules are used in higher level playbook scenarios written in YAML; sparrow plugins are used in high level sparrowdo scenarios written in Perl6.

Languages support

Ansible – you may choose any language to write modules. Out of the box ansible provides seamless module development support for Python only ( shortcuts ); for other languages you should use third-party libraries ( native to the language you write a module in ) to make the module development and integration process easier.

Sparrow – you write plugins in one of three languages – Perl5, Bash or Ruby. Sparrow provides a unified ( available for all languages ) API to make plugin development and integration easy and seamless, though such an API is not as extensive as the Python shortcuts API for ansible modules.

System design

Ansible – ansible modules are autonomous units of code that solve an elementary task. Under the hood it’s just a single file of code. Ansible modules can neither depend on nor call other modules.

Sparrow – sparrow plugins are very similar to ansible modules in being autonomous, closed units of code that solve an elementary task. But sparrow provides yet another level of freedom for the plugin developer. Sparrow plugins are actually suites of scripts. Scripts may call other scripts with parameters. Such a design makes it easy to split even an elementary task into scripts “speaking” to each other. Consider a trivial example – installing / removing software packages. We can think about a plugin to cope with the whole elementary task ( installing / removing packages ) but under the hood split it into two scripts – one for package installation, another for package removal. This idea is expressed in the comprehensive post – Sparrow plugins evolution.

System integration

Ansible – ansible modules are the smaller part of higher level configuration scenarios called playbooks. An ansible playbook is a YAML driven dsl to declare a list of tasks – ansible modules with parameters.

Sparrow – like ansible modules, sparrow plugins are the smaller part of an overall system – sparrowdo – a configuration management tool written in Perl6. Sparrowdo scenarios are Perl6 code to run sparrow tasks – sparrow plugins with parameters.

End user interface

Ansible – ansible modules get called via playbooks using a YAML DSL to declare module calls and pass parameters to them. It is also possible to run ansible modules via the command line client, passing parameters as command line arguments.
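
For instance, an ad-hoc run of the yum module from the command line looks like this:

$ ansible webservers -m yum -a "name=httpd state=latest"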

Below is an example of an ansible playbook with the ansible module yum installing the httpd software package

$ cat playbook.yml
---
- hosts: webservers
  tasks:
  - name: ensure apache is at the latest version
    yum: name=httpd state=latest


Sparrow – sparrow plugins get called via sparrowdo scenarios using a Perl6 API. Plugin parameters get passed as Perl6 Hashes. Also, one may use the sparrow console client to run sparrow plugins as is via the command line, without sparrowdo. There are a lot of options here – command line parameters, parameters in JSON / YAML format, Config::General format parameters.
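
For instance, the command line flavour might look like this ( using the package-generic plugin shown below ):

$ sparrow plg run package-generic --param list=httpd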

Below is the sparrowdo equivalent to the ansible yum module installing the latest version of httpd.
Two flavours of the API are shown – the core-dsl and the plugin API:

$ cat sparrowfile

# you can use a short core-dsl API flavour:
package-install 'httpd'; 

# or low level plugin API flavour:
task-run 'ensure apache is at the latest version', 'package-generic', %(
   list => 'httpd'
);

Processing input parameters

Ansible – input parameters come as key=value pairs (*); when developing a module you should parse the input and “split” it into the pieces of data to get the variables you need. There are plenty of “helpers” for many languages ( like Perl5, Ruby ) to simplify this process, or else you have to parse input data explicitly inside the ansible module.

(*) Nested input parameters are possible

Ansible provides a high level Python API for ansible modules called shortcuts which allows you to automatically parse input and create parameter accessors, declare parameter types, set default values, check required parameters and do other useful things.

Below is an example of module parameter processing using the python ansible API:

$ cat library/greetings.py
from ansible.module_utils.basic import *

def main():

  fields = { "message": {"default": "Hi!", "type": "str" } }
  module = AnsibleModule(argument_spec=fields)
  message = module.params['message']
  # some other code here to return results

if __name__ == '__main__':
    main()


Sparrow – in a similar way, sparrow provides a unified ( available for all languages ) API to access input parameters, so you don’t have to parse the input data at all.

Thus, irrespective of the language you write a plugin in, you get a programming API to access input parameters. Plugin developers can define a so called default configuration so that plugin input parameters ( if not set explicitly ) get initialized with sane defaults.

Below is the sparrow equivalent to the ansible module accessing a named input parameter. We are going to use Bash here.

# this is plugin scenario:
$ cat story.bash
message=$(config message)

# this is default configuration:
$ cat story.ini
message = Hi!

And this is how sparrow handles nested input parameters!

$ cat sparrowfile
task-run "run my task", 'foo-plugin', %( 
 foo => { 
    bar => { 
      baz  => 'BAZ'
    }
  }
);

$ cat story.bash 
baz=$(config foo.bar.baz)

Return results

Ansible – ansible modules return results as JSON. There are some essential points about how ansible modules return results:

* the exit code of an ansible module script gets ignored
* the only requirement for a module is that it prints specially formatted ( containing required fields ) JSON to STDOUT
* if no valid JSON appears at the module’s output it is considered a failure
* STDOUT/STDERR generated by the module ( if any ) is not seen at the playbook output
* thus if a module developer wants to return some value he/she always has to pack the data into JSON format and return it as a JSON string

Below is an example of an ansible module that returns the current time.

$ cat library/currentime.py
import datetime
import json

date = str(datetime.datetime.now())
print json.dumps({
    "time" : date
})


Sparrow – sparrow plugins can return whatever they want; actually sparrow does not care ( but see the “Handle results” section ) about what appears at STDOUT/STDERR. There are some essential points about how sparrow plugins return results:

* the exit code is important – it should be 0, otherwise sparrow treats the plugin execution as a failure
* STDOUT from a plugin simply gets redirected to the sparrowdo output, so you always see what is happening under the hood; no wrapping of results into JSON takes place as it does for ansible modules

Below is the sparrow equivalent to the ansible module returning the current time; we are going to use Perl5 here:

$ cat story.pl
print scalar localtime;

Handle results

Ansible – as ansible modules return structured JSON data, it is possible to assign data included in the JSON to some ansible variables and use them at the upper level ( inside playbooks ).

Below is an example of a simple echo module which just returns what it gets as input

$ cat playbook.yml
- hosts: localhost
  tasks:
    - name: tell me what I say
      echo:
         message: "hi there!" 
      register: result
    - debug: var=result  

$ cat library/echo.py
from ansible.module_utils.basic import *

def main():

    module = AnsibleModule(argument_spec={"message": {"type": "str"}})
    response = {"you_said": module.params['message']}
    module.exit_json(changed=True, meta=response)


if __name__ == '__main__':  
    main()

Sparrow – as was told, sparrow does not care about WHAT appears at a plugin’s STDOUT. Well, not quite true. Plugin developers can define check rules to validate the STDOUT coming from plugin scripts. Such validation consists of matching STDOUT lines against Perl regexes and many other things you can get acquainted with at the Outthentic::DSL documentation pages – a sparrow embedded DSL to validate text output. The output validation result impacts the overall execution status of a sparrow plugin; thus if validation checks fail it results in the failure of the plugin itself. Such embedded testing facilities make it easy to develop plugins for automation testing or audit purposes.

Probably there is nothing to add here except this dummy example 🙂

$ cat sparrowfile
task-run "tell me what I say", "echo", %( message => 'hi there!' )

$ cat story.bash
echo you said $(config message)

A trivial check rule for script output will be:

$ cat story.check
generator:  config()->{message}

Deployment process

Ansible – many ansible modules get shipped as a core part of ansible itself, ready to use, with no extra deployment effort. Users write custom modules and host them in SCM ( github, gitlab, svn ); finally, modules are just files that get checked out into a directory on the master host from which you push ansible tasks against remote hosts, so no special deployment actions need to be taken besides getting the ansible module files downloaded. The ansible modules ecosystem thus consists of:

* the main Ansible repository – modules shipped as ansible core
* custom ansible modules

So ansible follows a pure agentless schema with a push approach. No module deployment happens at the target host; ansible only pushes modules as files to where they are executed.

Sparrow – sparrow plugins are actually packaged scripts delivered like any kind of software package – deb, rpm, rubygems, CPAN. Sparrow exposes a console manager to download and install sparrow plugins. Sparrowdo compiles scenarios into a list of meta data and copies it to the remote host. Then the sparrow manager gets run ( over ssh ) on the remote host to pick up the meta data and then download, install and execute the plugins.

So sparrow follows a client-server schema with a push approach, and plugin deployment happens on the side of the target host.

Sparrow plugins have versions, ownership and documentation. Sparrow plugins get hosted at the central plugins repository – SparrowHub.

Here is a metadata example of the sparrow plugin “package-generic” that installs software packages:

{
    "name" : "package-generic",
    "version" : "0.2.16",
    "description": "Generic package manager. Installs packages using OS specific package managers (yum,apt-get)",
    "url" : "https://github.com/melezhik/package-generic"
}

There is no rigid separation between custom and “core” plugins in the sparrow ecosystem. Every plugin uploaded to SparrowHub immediately becomes accessible for end users and sparrowdo scenarios. For security reasons sparrow provides the ability to host so called “private” plugins at remote git repositories. Such plugins can be “mixed in” to the standard sparrow pipeline.

Dependencies

Ansible – ansible provides no built-in facilities to manage dependencies at the level of an ansible module; probably you would have it at the level above – ansible playbooks. Thus if your module depends on some software library you should take care of such dependency resolution somewhere else.

Sparrow – sparrow provides facilities to manage dependencies at the level of a sparrow plugin. Thus if a plugin depends on software libraries you may declare such dependencies at the plugin scope so that the plugin manager will take care of dependency resolution at plugin installation time. For the time being dependencies for the Perl5 and Ruby languages are supported: CPAN modules for Perl5 via cpanfile and RubyGems for Ruby via Gemfile.
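
For example, a Perl5 plugin’s cpanfile might declare ( DBI here is just an illustrative dependency ):

$ cat cpanfile
requires 'DBI';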

Summary

Ansible gained big success due to its extensive ecosystem of existing ansible modules. Though when comparing the module development process with the one existing in sparrow ( sparrow plugins ) I find some interesting and promising features sparrow might show in this field. To sum them up:

* Playbooks VS sparrowdo scenarios – sparrowdo provides an imperative Perl6 language interface against the declarative way of ansible playbooks written in YAML. While for some tasks such a declarative approach is fine, there are cases when we need to add imperative style to our configuration scenarios, as provided by any modern general purpose language, where YAML for sure does not fit.

* Script oriented design – due to its script oriented design, sparrow plugins provide you a way to split a whole task into many simple scripts interacting with each other. This is actually what we usually do when doing regular scripting for routine tasks, so why not bring it here? 🙂

* Modules/Plugins management and life cycle – sparrow plugins are even more loosely coupled with the configuration management tool itself than we see with ansible. They are developed, debugged, hosted and managed independently, without even knowledge about the sparrowdo configuration management tool. This makes the process of plugin development more effective and less painful.

* Bash/Shell scripting – sparrow provides much better support for “straightforward” bash/shell scripting than ansible, due to the aforementioned limitations of the latter on return results and the “JSON” interface. It is hard to understand what is going wrong when executing ansible bash scripts as ansible hides all the STDOUT/STDERR they generate. Meanwhile sparrow honestly shows what comes from executed bash/shell commands.

* Programming API – sparrow provides a unified API for all the languages; it means every language has “equal” rights in the sparrow ecosystem and shares the same possibilities in terms of the API. Meanwhile ansible modules tend to be written in Python as it seems the most seamless way to develop ansible modules.

* Testing facilities – sparrow exposes builtin test facilities which expand sparrow usage to not only deployment tasks but also testing/monitoring/audit needs.

Sparrow plugins evolution

Introduction

Black boxes and APIs.

Sparrow plugins are the underlying, essential part of the sparrowdo system. On the one hand they are just scripts that solve various tasks, like creating user accounts, populating configuration files or removing directories. On the other hand they are more or less black boxes with a well defined API exposed to the external world.

Sparrowdo uses sparrow plugins as building blocks to manage and automate remote servers. In this article I am going to give an informal introduction to the sparrow/sparrowdo ecosystem with the focus on its central part, the heart of it all – sparrow plugins. As much as possible I will try not to burden the material with low level technical details which might be confusing for an unprepared user; however, sometimes simple code examples and diagrams will occur here, hopefully helping you to catch the main ideas and not get lost.

Bottom of the system.

Well. Not to dive too much into technical aspects, let me try to explain informally what sparrow plugins are. We start from the very bottom of the system, as if we did not want to know anything about sparrowdo and only wanted to play with sparrow plugins ( indeed it’s possible without sparrowdo itself! ), so these are a few basic entities we have to meet first:

* Scenarios
* Stories
* Suites
* Plugins
* Tasks
* Task boxes

Every single step ahead will lead us to the whole picture of the sparrow ecosystem.

Scenarios

Scenarios are just scripts written in one of the languages of choice – Perl5, Ruby or Bash. Sparrow provides a unified, language agnostic API for script developers, so they can leverage:

* Easy script configuration ( available in various formats – command line, Config::General, JSON / YAML )
* Multi script scenarios – the ability to call one script from another with parameters
* Check rules – the ability to verify scenario output with an embedded DSL

Picture  1. Scenarios & Stories.

Stories

Stories are an abstraction for a scenario and its check rules. In terms of sparrow, scenarios are always accompanied by a check list file – a list of definitions written in a special DSL to verify script output. In the trivial case it could be an empty file, thus no checking is done. If the user defines some patterns in the check list file to validate the scenario output, the verification is done.

Say we have a simple scenario “hello world”:

$ cat story.pl
print "hello world!\n"

In sparrow a scenario is executed by the story executor called strun, which runs scripts and checks if the exit code is zero, which is treated as a successful status. If check rules are supplied, the scenario STDOUT gets validated against such rules. Consider a trivial check rule for the script above:

$ cat story.check
hello world

It just checks that the script output contains the string “hello world”. There are many more things you can do when validating, like regex checking, capturing and handling matching data and so on – follow the Outthentic::DSL module documentation to get more. Many plugins have no or only simple check rules. But if you write monitoring / testing / audit scripts the check rules feature could be extremely useful. Another interesting idea behind check rules is “self-testing scripts“, but I am not going to talk much about this here 🙂
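
For instance, a regex based rule for the same script could be:

$ cat story.check
regexp: hello\s+\w+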

Ok, if we take a look at picture number 1, we will get a visual summary of all we’ve learned so far.

Let’s go ahead and talk about sparrow suites.

Suites

Picture  2. Suites.

Suites are related stories. One may have many scripts related to one task, or split a complex thing into small scripts interacting with each other. A sparrow suite always has a “main” story ( denoted as “FIRST” at picture number 2 ), which “calls” others in a chain. So we end up with a tree of stories. A story being called is a “downstream” story; a story calling downstream stories is an “upstream” story; obviously the same story could be both upstream and downstream. When a story gets called it might be given story parameters. A unified API is provided to handle story parameters for whatever language you choose to write a scenario in. For Bash we could have such code:

# upstream story
run_story my-story message 'hello world'  

# downstream story my-story.bash
message=$(story_var message)
echo $message

The same code in Perl would be:

# upstream story
run_story("my-story", { message => 'hello world' });  

# downstream story
my $message = story_var('message');
print $message;

Story parameters being passed can be nested, which is pretty well represented via Perl or Ruby hashes. And even Bash is supported ( with some limitations ):

# set parameters at upstream story:

# Perl
run_story("S1", { message => { hello => 'world' } } );

# Ruby
run_story "S1", { :message => { :hello => 'world' } }  

# Bash
run_story S1 message.hello world

# access parameters at downstream story:

# Perl
story_var('message')->{'hello'};

# Ruby
story_var('message')['hello'];

# Bash
$(story_var message.hello )

Technical details on sparrow stories can be found in the Outthentic module documentation.

The same way as stories accept parameters and handle them using a unified API, one may configure sparrow suites. Say we want to pass some global parameters as suite input. Let’s first create a default suite configuration; it could be ( one of the options, see later ) a Config::General format file:

$ cat suite.ini

<app>
    <servers_and_ports>
        nginx 80
        tomcat 8080
        dev_server 3000
    </servers_and_ports>
</app>


$ strun --ini suite.ini

Now we have a unified API to access global parameters:

# access global parameters at story:
# Perl
config()->{'app'}{'servers_and_ports'}{'nginx'};
# Ruby 
config('app')['servers_and_ports']['nginx'];
# Bash 
$(config app.servers_and_ports.nginx )

We can even override the suite global parameters at run time via the command line:

$ strun --param app.servers_and_ports.nginx=81

And finally, we can use JSON/YAML format to store global parameters:

$ cat suite.json

{
  "app": {
   "servers_and_ports": {
     "nginx" : 80,
     "tomcat" : 8080
     "dev_server": 3000
   }    
  }
}

$ strun --json suite.json

Default configuration and Hash merge.

If we have a suite.ini configuration file for our suite it is considered the default configuration file. Thanks to Hash::Merge it is possible to override ( merge two files into one Hash ) the default values using a custom configuration file:

$ cat suite.ini

  bar = bar-default-value
  baz = baz-default-value

$ strun --ini suite.ini # load a default configuration

$ cat suite.my.ini
  bar = bar-new-value

$ strun --ini suite.my.ini 

# will override bar value to `bar-new-value`
# baz value will remain default.

To know more about suites and their interfaces take a look at Outthentic documentation – https://github.com/melezhik/outthentic

Now let’s see how suites become plugins.

Sparrow plugins

Picture 3. Sparrow plugins distribution system.

Plugins are packaged suites ready for distribution. From the end user point of view plugins act as suites, so they “inherit” all the features we have learned so far.

But there are some extra values plugins add into the system:

* name and version
* dependencies ( CPAN / RubyGems )
* ownership

Every plugin has a name to be identified by in the global sparrow system. This is obvious, like having names for software packages. Plugins also have versions, so plugin developers may release, and plugin users may utilize, various versions of a plugin:

$ sparrow plg search nginx # search nginx related plugins
$ sparrow plg install nginx-check # install nginx-check plugin
$ sparrow plg run nginx-check # run nginx-check plugin
$ sparrow plg run nginx-check --version 0.0.8 # run a specific version

Sparrow has quite an extensive API for managing plugins which we can’t focus on here; please follow the documentation if you are interested. What is important here is that plugins are small bits of software distributed the same way as we see in many package systems like apt, CPAN, rpm, RubyGems and so on.


Dependencies

In the spirit of ansible modules, sparrow plugins do not depend on other plugins, but we can use any software libraries in our scenarios. Currently a plugin developer can declare CPAN dependencies in a cpanfile or RubyGems dependencies in a Gemfile, so that such dependencies will be installed. Sparrow adjusts the running environment ( setting library paths for Perl and Ruby ) so that installed libraries will be accessible in the running scenario. It’s very handy!
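
For example, a plugin using Ruby gems might ship a Gemfile like this ( nokogiri is just an illustrative dependency ):

$ cat Gemfile
source 'https://rubygems.org'
gem 'nokogiri'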

Ownership

To publish plugins into the central repository SparrowHub you need to get an account there. It is also possible to distribute so called private plugins hosted at remote git repositories.

Metadata

All of the above can be written down in a simple JSON format. This is how sparrow plugins get registered in the sparrow system:

{
  "version" : "0.0.7.5",
  "name"    : "nginx-check",
  "description" : "checks if nginx server is healthy by executing low level system checks ( ps, pid, etime )",
  "url"         : "https://github.com/melezhik/nginx-check"
}

Sparrow tasks

Sparrow plugins are bound to a default suite configuration; there is not that much you can do about it, only redefine global parameters at run time:

$ sparrow plg run foo --param a=1 --param b=2

Picture 4. Sparrow plugins and tasks.

Sparrow tasks give you far more agility. Tasks are plugins with custom configurations. Tasks have names and are grouped by projects:

$ sparrow task add foo-project foo-task foo
$ sparrow task ini foo-project/foo-task
a = 100
b = 200
$ sparrow task run foo-project/foo-task

There is a lot of information about sparrow tasks at the Sparrow documentation pages.

Ok, it’s been a long trip. We are approaching the end of the evolution here 🙂 And this is sparrow task boxes.

Task box

A task box is a collection of sparrow tasks; we can write it as JSON:

[
  {
    "task" : "foo-task",
    "plugin" : "foo-plugin",
    "global_parameters" : {
       "a" : 1,
       "b" : 2
     }
  },
  {
    "task" : "bar-task",
    "plugin" : "bar-plugin",
    "global_parameters" : {
      "aa" : 1,
      "bb" : 2
     }
  }
]

Sparrow task boxes are a way to run many sparrow plugins with parameters, consecutively. This is actually what Sparrowdo does when compiling sparrowdo scenarios:

$ cat sparrowfile
user "zookeeper";directory "/var/data/zoo";
file "/var/data/zoo/birds.txt", %( owner => 'zookeeper' );

The given code gets compiled into a sparrow task box:

[ 
  { 
     "plugin" : "user", 
     "task" : "create user zookeeper", 
     "data" : { "name" : "zookeeper", "action" : "create" } 
   }, 
   { 
     "plugin" : "directory", 
     "task" : "create directory /var/data/zoo", 
     "data" : { "path" : "/var/data/zoo", "action" : "create" } 
   }, 
   { 
     "plugin" : "file", 
     "task" : "create file /var/data/zoo/birds.txt", 
     "data" : { 
        "owner" : "zookeeper", 
        "action" : "create", 
        "target" : "/var/data/zoo/birds.txt" 
     } 
   } 
]

From the very bottom of the system we have reached the sparrow evolution end point – high level configuration management scenarios written in Perl6. But under the hood it’s just JSON that gets pushed to the sparrow client, so it will do the low level job by executing sparrow plugins 🙂 , see the last picture:

Picture 5. Sparrow plugins evolution.

Summary

Let’s summarize what we’ve learned in this article:

* Sparrow plugins are scripts written in one of the languages of choice: Perl5/Bash/Ruby
* Outthentic – the core sparrow component – is a development and execution kit that enables some frequently used features when writing automation scenarios: testing script output, reusing other scripts and passing script configuration parameters.
* To distribute scripts they are packaged and uploaded into the central repository – SparrowHub
* The sparrow client is a command line tool to install, configure and run plugins.
* Sparrowdo acts as a high level system built upon sparrow plugins to write automation scenarios in the Perl6 language and then execute them as sparrow “plugin primitives”, with JSON as the internal representation format and scp/ssh as the transport.

I hope this was a helpful article; please post your comments, questions and ideas here.

Thanks.

— Alexey Melezhik