How to use Chef and Sparrowdo together

A good team member.

Chef is a well-recognized configuration management tool which I use extensively at my current work. However, I keep pushing Sparrowdo – a Perl6 configuration management tool – and find that these two tools play nicely together.

In this post I am going to give a few examples of how I use Sparrowdo to simplify and improve the Chef cookbook development workflow.

Running chef-client on a target host.

Here is the most useful scenario where I use Chef and Sparrowdo together. My working environment implies launching Amazon EC2 instances which get configured by Chef. Instead of ssh-ing to an instance and running a chef-client on it, I delegate this task to Sparrowdo using a wrapper called Sparrowdo::Chef::Client.

Let’s install the module:

$ zef install Sparrowdo::Chef::Client

And then create a simple Sparrowdo scenario:

$ cat sparrowfile
module_run 'Chef::Client', %(
    run-list => [
      "recipe[foo]",
      "recipe[baz]"
    ],
    log-level => 'info',
    force-formatter => True
);

Here we're just running two recipes called foo and baz, and defining some chef-client settings, like the log level and enabling the force-formatter option. Now we can run a chef-client on a target host:

$ sparrowdo --host=$target_host

Post-deployment checks.

It is always a good idea to check a server's state right after the deployment. There are reasons why I prefer not to keep such checks inside my Chef scenarios. And there seems to be a trend here, as new monitoring and audit tools keep appearing on the open source market – InSpec and goss among them, to list a few.

Likewise Sparrowdo has some built-in facilities to quickly test an infrastructure.

Let me give you a few examples.

Check system processes.

Say we reconfigure an Nginx server by using some Chef recipes. Sometimes Chef is not able to ensure that Nginx starts successfully after the deploy, and even if it does, I don't want to grep huge chef-client logs ( sometimes there is a load of them ) to find out whether Nginx got started successfully. Happily, there is a dead simple solution – Sparrowdo asserts:

$ cat healthcheck.pl6
proc-exists 'nginx';

This function checks that the file /var/run/nginx.pid exists, and that the related process with the PID taken from the file exists as well. If you need to handle uncommon pid file paths, you can always set the path explicitly:

$ cat healthcheck.pl6
proc-exists-by-pid 'nginx server', '/var/run/nginx/nginx.pid';

Moreover, if you only know the "name" of the process ( well, technically speaking, a regular expression to match the process command ), simply have this:

$ cat healthcheck.pl6
proc-exists-by-footprint 'nginx web server', 'nginx\s+master';

Having this simple Sparrowdo scenario, just run it against a target server to check that the Nginx process exists:

$ sparrowdo --host=$target_host --sparrowfile=healthcheck.pl6

I always put Sparrowdo scenarios and Chef cookbook files together and commit them to the Git repository:

$ git add sparrowfile healthcheck.pl6
$ git commit -a -m 'sparrowdo scenarios for chef cookbook'

And finally, let me give an example of checking web application endpoints by sending http requests. Say we have a Chef recipe which deploys an application that should be accessible by an http GET / request. Sparrowdo exposes the handy http-ok assert to deal with such checks:

$ cat healthcheck.pl6
http-ok;

That is it! This is the simplest form of the http-ok function call; it verifies that the web application is responsive by accepting requests for the GET / route. Under the hood it just:

  1. resolves the hostname as the one you run sparrowdo against
  2. issues an http request using the curl utility:
$ curl -f http://$target_host

There are various options for calling the http-ok function. For example, you may define endpoints and set the http port:

$ cat healthcheck.pl6
http-ok(port  => '8080' , path => '/Foo/Bar' );

Follow the Sparrowdo documentation for a full description of the http assert functions.
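
For instance, nothing stops you from combining process and http asserts in a single healthcheck scenario. A minimal sketch, built only from the functions shown above:

$ cat healthcheck.pl6
proc-exists 'nginx';
http-ok(port => '8080', path => '/Foo/Bar');

Run it with the same sparrowdo command as before.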

Conclusion

Using Sparrowdo and Chef together can be an efficient approach when developing and testing server configuration scenarios. Sparrowdo has proven to be able to adapt to any requirements and play nicely with other ecosystems and frameworks, being an intelligent "glue" to bind together various tools and get your work done.

Building Bailador Docker images with Sparrowdo

Bailador is a light-weight route-based web application framework for Perl6. Thanks to Gabor Szabo, who invited me to join the project and see how I can help the team.

I decided to put my efforts into configuration management and deployment tasks – at the moment the Bailador developers need help in this area.

Docker is quite a popular way to distribute applications across teams, so I gave it a try. Welcome to bailador-docker-sparrowdo – a small project to help the Bailador developers easily check the latest changes in the Bailador source code.

What you can do by using bailador-docker-sparrowdo:

* Build a docker image with the sample Bailador application.
* Start the sample application.
* Update an existing docker image by picking up the latest changes from the Bailador source code repository ( github ).

Let me show in more detail how this could be done by using sparrowdo.

Build docker image

First of all you need to check out bailador-docker-sparrowdo and run the `docker build` command:

$ git clone https://github.com/melezhik/bailador-docker-sparrowdo.git 
$ cd bailador-docker-sparrowdo
$ docker build -t bailador .

It takes a few minutes to build the image. Under the hood it:

* Pulls the alpine-perl6 base image with Alpine Linux and Perl6/zef pre-installed; the image was created by Juan Julián Merelo Guervós.

* Installs sparrow/sparrowdo, as it is used as the configuration management tool for the sample application.

Only a few instructions can be found in the Dockerfile:

FROM jjmerelo/alpine-perl6
ENV PATH=/root/.rakudobrew/moar-nom/install/share/perl6/site/bin:$PATH
RUN apk add wget git curl bash build-base gcc perl-dev
RUN cpan App::cpanminus
RUN cpanm -q --notest https://github.com/melezhik/outthentic.git Sparrow
RUN zef install https://github.com/melezhik/sparrowdo.git
# ...

The rest of the configuration is done by sparrowdo itself, by running the `sparrowdo` command during the build process:

COPY sparrowfile    /tmp/
RUN sparrowdo --local_mode --sparrowfile=/tmp/sparrowfile --no_sudo

Here is the content of the sparrowdo scenario:

directory '/var/data';

bash 'zef install Path::Iterator';
bash 'zef install TAP::Harness';
bash 'zef install HTTP::MultiPartParser';

bash(q:to/HERE/);
  set -e
  cd /var/data/
  if test -d  /var/data/Bailador; then
    cd Bailador
    git pull
    cd ../
  else
    git clone https://github.com/Bailador/Bailador.git
  fi
HERE

bash "cd /var/data/Bailador  && zef --depsonly install .";

bash "cd /var/data/Bailador && prove6 -l";

bash "cd /var/data/Bailador && zef install --/test --force .";

Right now it is simple enough, but the use of Sparrowdo gives me the freedom to create arbitrarily sophisticated build scenarios in the future, which could hardly be expressed by using the "only Dockerfile" approach.

Eventually, as new requirements come or new build scenarios are needed, I will add more sparrowdo scenarios to effectively manage the build process.
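
For instance, a future version of the scenario might pre-install extra Perl6 distributions or prepare extra directories – a sketch using only the primitives already present in the sparrowfile above ( the module and path names here are hypothetical ):

directory '/var/log/bailador';

bash 'zef install JSON::Fast';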

Finally, I have added a few lines to the end of the Dockerfile to copy the sample Bailador application script and to declare the default entry point to launch the application:

COPY entrypoint.sh  /tmp/
COPY example.p6w    /tmp/
ENTRYPOINT ["/tmp/entrypoint.sh"]
EXPOSE 3000

The sample application, example.p6w, is very simple and is only meant as a way to check that Bailador works correctly:

use Bailador;
get '/' => sub {
    "hello world"
}
baile(3000,'0.0.0.0');

Here is how the application gets run via the entry point script, entrypoint.sh:

#!/bin/bash
perl6 /tmp/example.p6w

Run application

Once the image is ready you can run a docker container based on it; if everything was built correctly you will get the running sample application:

docker run -d -p 3000:3000 bailador

Check the application by sending an http request – it should respond with the "hello world" greeting defined in example.p6w:

curl 127.0.0.1:3000

Update Docker image

A promising feature of bailador-docker-sparrowdo is that you can check the latest Bailador source changes by updating the existing Docker image, so you don't have to rebuild the image from scratch, which saves your time. This is possible because the main configuration and build logic is embedded into the image through the sparrowdo installed into it.

This is how you can do this.

First, find the existing bailador image and run a container from it:

docker run -it -p 3000:3000 --entrypoint bash bailador

Notice that we don't detach the container as we did when we just wanted to run the sample application. Moreover, we override the default entry point, as we need a bash shell to log into the container.

Once logged into the container, just run the sparrowdo scenario again, as we did when building the image via the Dockerfile:

sparrowdo --no_sudo --local_mode --sparrowfile=/tmp/sparrowfile

Sparrowdo will pick up the latest changes to the Bailador source code from github and apply them.

To ensure that the sample application runs on the new code, let's run it manually:

perl6 /tmp/example.p6w
^C # to stop the application

Now we can update the image by using the `docker commit` command. Without exiting from the running Docker container, let's do that in a parallel console:

$ docker ps # to find out the container id
$ docker commit $container_id bailador # to commit changes made in the container into the bailador image
$ docker stop -t 1 $container_id # stop the current Docker container

Great. Now our image is updated and contains the latest Bailador source code changes. We can run the sample application the same way as we did before:

docker run -d -p 3000:3000 bailador

Conclusion

Docker is an efficient and powerful tool to share applications across teams. Sparrowdo plays nicely with the Docker ecosystem, providing a comprehensive DSL to describe arbitrarily complicated scenarios to build Docker images and – more impressively – to update existing images "on the fly", saving developers' time.

Join Bailador project

If you’d like to get involved in the Bailador project, contact me and I’ll send you an invitation to our Slack channel.

Using Python dependencies in Sparrow plugins

The latest version of Sparrow brings a new feature for those who would like to write sparrow plugins using the Python language.

Now you can declare Python/pip dependencies with the help of a requirements.txt file:

$ cat requirements.txt
hackhttp==1.0.4

Let's create a simple plugin to make http requests using the hackhttp python library.

$ cat story.py
import hackhttp
from outthentic import *

url = config()['url']
hh = hackhttp.hackhttp()

code, head, html, redirect_url, log = hh.http(url)

print(code)

$ touch story.check

$ cat sparrow.json
{
    "name" : "python-sparrow-plugin",
    "description": "test sparrow plugin for python",
    "version" : "0.0.4",
    "url" : "https://github.com/melezhik/python-sparrow-plugin"
}

Now let’s upload the plugin to SparrowHub and give it a run:

$ sparrow plg upload
sparrow.json file validated ...
plugin python-sparrow-plugin version 0.000004 upload OK

$ sparrow plg install python-sparrow-plugin

upgrading public@python-sparrow-plugin from version 0.0.3 to version 0.000004 ...
Download https://sparrowhub.org/plugins/python-sparrow-plugin-v0.000004.tar.gz --- 200
Downloading/unpacking hackhttp==1.0.4 (from -r requirements.txt (line 1))
  Downloading hackhttp-1.0.4.tar.gz
  Running setup.py (path:/tmp/pip_build_melezhik/hackhttp/setup.py) egg_info for package hackhttp
    
Installing collected packages: hackhttp
  Running setup.py install for hackhttp
    
Successfully installed hackhttp
Cleaning up...

$ sparrow plg run python-sparrow-plugin --param url=http://example.com
•[plg] python-sparrow-plugin at 2017-05-12 17:07:17

200
ok scenario succeeded
STATUS SUCCEED

And finally, if you prefer to get things done via Perl6/Sparrowdo, use this piece of code as a starting point:

$ cat sparrowfile

my $url = 'http://example.com';
task-run "http get $url", 'python-sparrow-plugin', %( url => $url );

Sparrowdo command line API

The command line API makes it possible to run sparrow plugins and modules remotely on a target server by using only the console client – there are a lot of things you can do with this API!

Running plugins with parameters

Executing sparrow plugins: the form in which you run them via the command line is:

--task_run=plg-name@plg_param=plg_value,plg_param=plg_value ...

Let me drop a few examples of how you can use it.

Execute bash commands

Here is where the bash sparrow plugin comes in handy.

1. Single bash command

$ uptime 
=>
$ sparrowdo --host=remote.server --task_run=bash@command=uptime


2. Compound commands

Say you want to execute multiple bash commands chained by a logical "AND" – it's easy:

$ ps uax | grep nginx|grep -v grep && service nginx stop
=>
$ sparrowdo --host=remote.server \
--task_run=bash@command='ps uax|grep nginx|grep -v grep && service nginx stop'


3. Multiple bash commands

Alternatively, you may pass more than one `--task_run` chunk to execute many bash commands consequently:

$ ls -l; uptime ; df -h; 
=>
$ sparrowdo --host=remote.server \
--task_run=bash@command='ls -l' \
--task_run=bash@command=uptime \
--task_run=bash@command='df -h'


4. Run command under user’s account

Say you want to execute a bash command under a specific user, not root? It's easy to do by using the sparrow bash plugin:

$ sparrowdo --host=remote.server \
--task_run=bash@command=id,user=nginx



Install system packages

Use the package-generic plugin. This is a cross-platform installer with support for some popular Linux distros – Debian/Ubuntu/CentOS.

Install the mc, nano and tree packages:

$ sparrowdo --host=remote.server \
--task_run=package-generic@list='mc nano tree'



Install CPAN packages

Use the cpan-package plugin to install CPAN packages. There are many options with it. Say I want to create a web-app user and install some CPAN packages into the user's home …

$ sparrowdo --host=remote.server \
--task_run=user@name=web-app \
--task_run=cpan-package@list='CGI DBI',\
install-base=/home/web-app/,user=web-app


What else? Any sparrow plugin can be run the same way:

--task_run=plg-name@plg_param=plg_value,plg_param=plg_value ...

Find one you need at https://sparrowhub.org/search and just use it!

Running modules with parameters

Sparrow modules are higher-level entities, but you can use them the same way as you do sparrow plugins – to apply pieces of configuration to your servers remotely.

Choose this form:

$ sparrowdo --module_run=module-name@mod_param=mod_value,mod_param=mod_value ...

Here are some examples.

1. Install nginx with custom document-root

Use Sparrowdo::Nginx module:

$ sparrowdo --host=remote.server \
--module_run=Nginx@document_root=/var/www/data

This command produces too much output, so I am not showing its screenshot here.

2. Install CPAN packages from GitHub repositories

There is a Sparrowdo::Cpanm::GitHub module to handle this; it accepts many options and it's even possible to install modules from Git branches.

Let’s install CGI.pm from master branch at https://github.com/leejo/CGI.pm :

$ sparrowdo --host=remote.server \
--module_run=Cpanm::GitHub@user=leejo,project=CGI.pm,branch=master

This command produces too much output, so I am not showing its screenshot here.

3. Fetching remote file

And finally, the next but not the last example of a Sparrowdo module – fetching files over http. It's called Sparrowdo::RemoteFile.

Say I want to fetch some auth-basic protected URL and place the file into a specific directory?
Well, let's do it in one shot:

$ sparrowdo --host=remote.server \
--module_run=RemoteFile@user=fox,password=red,location=http://archive.server/file.tar.gz,location=/opt/data/

This command produces too much output, so I am not showing its screenshot here.

Conclusion

The Sparrowdo command line API provides an easy and simple way to configure servers remotely by using only the console client, with no coding at all, in the style of bash one-liners.

But if you are looking for something more complicated and powerful – consider using Sparrowdo scenarios!


Simple META6::bin wrapper

Recently Wenzel P. P. Peppmeyer ( aka gfldex ) released a nice helper to start Perl6 projects from scratch – it's called META6::bin.

$ zef install META6::bin

The META6::bin module enables creating a Perl6 project from scratch. For example, this is how quickly one can bootstrap a new Perl6 module called Foo::Bar:

$ meta6 --new-module=Foo::Bar

There are a lot of options to the meta6 client – take a look at the documentation. META6::bin cares about git/github things, setting up a git repository for your freshly started projects, creating the META6.json file, populating the t/ directory, and so on.

I have created a simple wrapper around the meta6 script. The reasons for that:

* I don't want to remember all the options I use when launching the meta6 client to bootstrap my projects.
* I have predefined settings I always use, so I don't want to enter them every time I run the meta6 command line.

Here is my solution – a sparrow plugin with the analogous name – meta6-bin. Under the hood it just calls the meta6 client with parameters, but you can easily customize those by using sparrow tasks:

$ sparrow plg install meta6-bin
$ sparrow project create perl6-projects
$ sparrow task add  perl6-projects meta6-bin new

Having this defined, you may easily create new Perl6 module projects, running meta6 with some default options:

$ sparrow task run perl6-projects/new --param name=Foo::Bar --param path=~/my-projects/

The only two obligatory parameters you have to set here are name – the module name – and path – the directory location where you want to create the project files.

Here is how you can create a project inside the current working directory:

$ sparrow task run perl6-projects/new --param name=Foo::Bar --param path=$PWD

And finally, let's tune some settings to meet our specific requirements. Say I don't want to initialize a git repository for my projects, and I have a predefined root location to keep my work:

$ export EDITOR=nano
$ sparrow task ini perl6-projects/new
options --force --skip-git --skip-github
path /opt/projects

Now we "memorize" our settings into the sparrow task, so that we can apply them to subsequent meta6-bin runs:

$ sparrow task run perl6-projects/new --param name=Foo::Bar

Hope this short post was useful.
Regards and stay tuned with Perl6/Sparrow/Sparrowdo.

Writing pre-commit hooks with Sparrow

Introduction

Pre-commit is a framework for managing and maintaining multi-language pre-commit hooks. Developers write hooks to be triggered so that some preliminary/useful job gets done before updates arrive to your git repo. The idea is quite old, but pre-commit lets you install and integrate hooks into existing git repos with minimal effort.

Writing hooks with sparrow

Sparrow is a universal automation tool, and I found it quite easy to use sparrow to write hooks for the pre-commit framework. Let me show how. Say I need to run prove tests for Perl6 code.

The code of the hook is trivial and looks like:

prove -vr -e 'perl6 -Ilib' t/

Let's wrap this script into a sparrow plugin; here are a few simple steps:

1. Write a story:

$ cat story.bash
set -x
set -e
path=$(config path)
echo path is: $path
cd $path
prove -vr -e 'perl6 -Ilib' t/

A quick remark here. We pass the Perl6 project directory location explicitly via the path parameter, as an absolute file path. This requirement comes from the fact that sparrow does not preserve the current working directory when executing plugins.

2. Leave the story check file empty, as we don't need any extra checks here:

$ touch story.check

3. And create plugin meta file:

$ cat sparrow.json
{
  "name" : "perl6-prove",
  "description" : "pre-commit hook - runs prove for Perl6 project",
  "version" : "0.0.1",
  "category" : "utilities",
  "url" : "https://github.com/melezhik/perl6-prove"
}

4. Now we can upload our freshly baked plugin to SparrowHub:

$ sparrow plg upload

Using sparrow plugin in pre-commit hooks

First of all we need to install the sparrow plugin on our system and see that our hook works on a test Perl6 project.

Install the plugin:

$ sparrow plg install perl6-prove

Set up git repository and project files:

$ git init 
$ ... # Create files and directories, git add, and so on ..

Create a simple Perl6 test:

$ cat t/00.t
use v6;
use Test;
plan 1;
ok 1, 'I am ok';
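
At this point you can already run the plugin by hand, the same way the hook will do it later:

$ sparrow plg run perl6-prove --param path=$PWD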

Then we need to set up pre-commit hooks yaml.

Our pre-commit hook yaml will be:

$ cat .pre-commit-config.yaml
-   repo: local
    hooks:
    -   id: perl6-prove
        name: perl6-prove
        entry: bash -c "sparrow plg run perl6-prove --param path=$PWD"
        language: system
        always_run: true
        files: ''

Here we use the so-called "local" repository and the "system" language – bear in mind that sparrow comes as an external system command.
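
One more step – the hook has to be wired into the repo's .git/hooks once, using pre-commit's own installer:

$ pre-commit install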

Now let’s commit our changes to trigger hook execution:

$ git commit -a -mtest-commit
perl6-prove..............................................................Passed
[master 98cf098] test-commit
 1 file changed, 6 insertions(+)
 create mode 100644 t/00.t

You can also trigger the hook directly, without committing anything:

$ pre-commit run perl6-prove --verbose
[WARNING] Unstaged files detected.
[INFO] Stashing unstaged files to /home/vagrant/.pre-commit/patch1489306073.
[perl6-prove] perl6-prove................................................Passed
hookid: perl6-prove

[p] perl6-prove at 2017-03-12 08:07:53
path is: /home/vagrant/projects/pre-commit-test
t/00.t ..
1..1
ok 1 - I am ok
ok
All tests successful.
Files=1, Tests=1,  1 wallclock secs ( 0.02 usr  0.00 sys +  0.16 cusr  0.03 csys =  0.21 CPU)
Result: PASS
ok      scenario succeeded
STATUS  SUCCEED

To wrap up this post, let me give a short summary.

Implementing pre-commit hooks via Sparrow plugins

Roughly speaking, the pre-commit framework supports two types of hooks – external ones, which are installed into the system manually, and ones located in github repositories and installed by pre-commit itself.

Here are the possible benefits sparrow can bring you when developing hook scripts as sparrow ( external ) plugins:

– Sparrow plugins are external and highly decoupled from the hooks/project structure.

– They are versioned and packaged pieces of software. One can maintain and release new versions of plugins in a way predictable and transparent for the end user.

– You can always install/remove/upgrade/downgrade versions of a sparrow plugin independently of the pre-commit framework itself.

– Sparrow provides a reasonable alternative for managing hook script dependencies, taking care of dependency resolution during plugin installation. It's CPAN/carton for Perl5 and RubyGems/bundler for Ruby ( see the sketch below ). Let me know if you need other package managers supported.
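
For instance, a Perl5-based plugin can ship a cpanfile next to its story files, using the standard carton format ( the module below is just an illustration ):

$ cat cpanfile
requires 'JSON';

Sparrow will then resolve the dependency when the plugin gets installed.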


Regards and have fun with your automation!

Manage goss scenarios with sparrow

Introduction

Goss is a YAML-based serverspec alternative tool for validating a server's configuration. It's written in Go. It's a quite interesting and promising young project I came across via the reddit/devops channel.

Let me show how one can distribute goss scenarios using sparrow tasks.

Before diving into the technical stuff, let me explain why this could be useful:

  • you want to organize multiple goss scenarios into logical groups and manage them via a unified interface
  • you want to share some goss yamls across your team, making it possible to quickly run your goss tests against many applications

Ok, let’s go.

Installing sparrow goss plugin

This part is really easy.

$ sparrow index update # we want fresh index from SparrowHub
$ sparrow plg install goss

You'll find detailed information on the goss sparrow plugin at https://sparrowhub.org/info/goss

Set up sparrow project and tasks

Ok, now let's create a sparrow project and tasks. These are just simple abstractions to split many goss tests into various logical groups.

$ sparrow project audit # we will keep all goss scenarios here
$ sparrow task add audit nginx  goss # nginx test suite
$ sparrow task add audit mysql goss # mysql test suite

Running sparrow task list command we see our new project and tasks:

$ sparrow task list
[sparrow task list]
 [audit]
  audit/mysql
  audit/nginx

Set up goss tests

Now let's populate our goss tests. We should read the goss spec first, but it's really easy.

One for nginx:

$ sparrow task ini audit/nginx 

action validate
goss << HERE
port:
  tcp:80:
    listening: true
    ip:
    - 0.0.0.0
service:
  nginx:
    enabled: true
    running: true
process:
  nginx:
    running: true

HERE

And one for mysql:

$ sparrow task ini audit/mysql 

action validate
goss << HERE
port:
  tcp:3306:
    listening: true
    ip:
    - 127.0.0.1
service:
  mysql:
    enabled: true
    running: true
process:
  mysqld:
    running: true

HERE

Run goss tests

Now we can run goss tests separately for nginx and mysql.

One for nginx:

$ sparrow task run audit/nginx
[t] nginx
@ goss wrapper

[t] nginx modules/generate-goss-yaml/ params: cache_dir:/home/vagrant/.outthentic/tmp/9595/story-1 at 2017-03-07 16:18:13
generated goss yaml at /home/vagrant/.outthentic/tmp/9595/story-1/goss.yaml
ok      scenario succeeded

[t] nginx modules/validate/ at 2017-03-07 16:18:13
1..5
ok 1 - Process: nginx: running: matches expectation: [true]
ok 2 - Port: tcp:80: listening: matches expectation: [true]
ok 3 - Port: tcp:80: ip: matches expectation: [["0.0.0.0"]]
ok 4 - Service: nginx: enabled: matches expectation: [true]
ok 5 - Service: nginx: running: matches expectation: [true]
ok      scenario succeeded
STATUS  SUCCEED

And one for mysql:

$ sparrow task run audit/mysql
[t] mysql
@ goss wrapper

[t] mysql modules/generate-goss-yaml/ params: cache_dir:/home/vagrant/.outthentic/tmp/9901/story-1 at 2017-03-08 08:19:14
generated goss yaml at /home/vagrant/.outthentic/tmp/9901/story-1/goss.yaml
ok      scenario succeeded

[t] mysql modules/validate/ at 2017-03-08 08:19:14
1..5
ok 1 - Process: mysqld: running: matches expectation: [true]
ok 2 - Port: tcp:3306: listening: matches expectation: [true]
ok 3 - Port: tcp:3306: ip: matches expectation: [["127.0.0.1"]]
ok 4 - Service: mysql: enabled: matches expectation: [true]
ok 5 - Service: mysql: running: matches expectation: [true]
ok      scenario succeeded
STATUS  SUCCEED

Sharing goss tests

An interesting use case is sharing your goss tests. Sparrow makes it possible to save your tasks at SparrowHub – the central sparrow repository – so that you can share tasks with others.

Say you want someone else to run your goss scenarios on a remote server. Provided that one installs the sparrow client there, it is really easy.

Upload remote task

$ sparrow remote task upload audit/nginx "goss audit for nginx"
$ sparrow remote task share audit/nginx
$ sparrow remote task upload audit/mysql "goss audit for mysql"
$ sparrow remote task share audit/mysql


Install and run remote task

Having logged into the other server, just do this:

$ sparrow remote task run melezhik@audit/nginx
$ sparrow remote task run melezhik@audit/mysql

More on remote tasks can be found in the sparrow documentation – https://github.com/melezhik/sparrow#remote-tasks

Running goss scenarios with sparrowdo

Alternatively, you may want to use the Perl6 interface to sparrow and run goss scenarios using sparrowdo:

$ cat sparrowfile

task-run 'run goss for mysql', 'goss', %( action  => 'validate' , goss => q:to/HERE/ );

port:
  tcp:3306:
    listening: true
    ip:
    - 127.0.0.1
service:
  mysql:
    enabled: true
    running: true
process:
  mysqld:
    running: true

HERE

$ sparrowdo --host=192.168.0.1

Regards and have fun with automation.

Outthentic – quick way to develop user scenarios

Introduction

Outthentic is a development kit for rapid development of user scripts and test scenarios. Outthentic is an essential part of the Sparrow system. Let's see how easy script development may be using the Outthentic framework.

Project boilerplate

First of all let’s create a project to hold all our scripts.

$ mkdir tutorial
$ cd tutorial/

Ok, now let's create our first script. We are going to use the Bash language here, but Outthentic plays nicely with many languages(*), as we will see later.

(*) these are Bash, Perl5, Python and Ruby

Let's say we want to create a simple script to check the nginx status:

$ touch story.check
$ cat story.bash
service nginx status

Let's understand what we have done so far. We have created a story file called "story.bash" and an empty story check file "story.check".

The story file is a plain bash script that does the useful job. The story check file could contain some check rules to verify the stdout from the story file. Right now we don't want to verify the story stdout, so we just leave the story check file empty.

Outthentic requires that every script be paired with a story check file.

Now let's run the script or, to say it in Outthentic terminology, run the story. Let's get strun – a console utility that executes scenarios in Outthentic:

$ strun 

 at 2017-02-08 15:47:56
 * nginx is running
ok    scenario succeeded
STATUS    SUCCEED

Ok. Good. All should be clear from reading the strun output. We see that nginx is running; at least this is what "service nginx status" tells us. What's happening under the hood when we invoke "strun"?

Strun – a [s]tory [r]unner – is a utility that runs the story file "story.bash" and then checks if its exit code is 0. In case of a successful exit code, strun prints "scenario succeeded" in its report. The overall "STATUS SUCCEED" line means all the scripts coming from our project succeeded.

Right now there is only one script – "story.bash" – though very soon we will see that there might be more than one script in an outthentic project.

But before diving into more details of scenario development, let me show how strun reports a scenario that fails to succeed. Let's shut nginx down and re-run the story:

$ sudo /etc/init.d/nginx stop
$ strun 

 at 2017-02-08 15:57:25
 * nginx is not running
not ok    scenario succeeded
STATUS    FAILED (256)

Check lists

Check lists are rules written in the Outthentic::DSL language to verify the stdout coming from the story script. Recalling that we left the story check file empty, let's now add some check rules to it:

$ cat story.check 
nginx is running

Now let's start nginx again and re-run our story:

$ sudo /etc/init.d/nginx start
$ strun 

 at 2017-02-08 16:02:38
 * nginx is running
ok    scenario succeeded
ok    text has 'nginx is running'
STATUS    SUCCEED

Good, we see a new line has appeared in the strun report:

ok    text has 'nginx is running'

Strun executes the "story.bash" script and then checks if its STDOUT includes the string "nginx is running".

You may use Perl5 regexes in check rules as well:

$ cat story.check 
regexp: nginx\s+is\s+running

Outthentic::DSL makes possible a lot of other complex checks, but let's go ahead and see how we can use check rules in our script development.

So far this type of check looks meaningless, as "service nginx status" seems to do all the job, and if it succeeds there is no need to track stdout to verify that nginx is running – unless you are truly paranoid and want to add double checks 🙂

But let's rewrite our story scenario to see how useful story checks might be. What if, instead of consulting the "service nginx status" command, we want to look at the process list on our server?

$ cat story.bash 
ps uax | grep nginx

$ cat story.check 
nginx: master
nginx: worker

Now let's give it a run and see the results:

$ strun 

 at 2017-02-08 16:13:19
root     21274  0.0  0.0  85884  1332 ?        Ss   16:02   0:00 nginx: master process /usr/sbin/nginx
www-data 21275  0.0  0.0  86220  1756 ?        S    16:02   0:00 nginx: worker process
melezhik 21406  0.0  0.0  17156   944 pts/1    R+   16:13   0:00 grep nginx
ok    scenario succeeded
ok    text has 'nginx: master'
ok    text has 'nginx: worker'
STATUS    SUCCEED

Ok. Now we see our check rules ( "nginx: master" and "nginx: worker" ) are verified, which means the nginx server "appears" in the process list as master and worker processes. This is more detailed information compared with what we get from the simple "service nginx status" command.

What is more important, "ps uax|grep nginx" might succeed with exit code zero even when the nginx server is not running ( guess why? – look at the grep process in the listing above ). And this is where check rules become handy. Now let's summarize.

Check rules VS exit code.

Sometimes you don't have to define any check rules to verify that your script succeeded; obviously, most modern software provides a valid exit code you can rely upon. But sometimes a normal ( zero ) exit code does not mean an overall success. This test shows the idea. It is pretty simple, but it could be considered a basic example of such test scenarios, where you want to "grep" some information from the script stdout to verify that everything goes fine. Actually, this is what people usually do when hitting a "foo|grep baz" command.

Another good example where the exit code can't be a good criterion is insertion into a database. Say the first time you insert a record it does not exist, and you are ok when the script does the insertion and returns a zero exit code. The next time you run the script the record already exists, and the script throws a bad exit code and a proper message ( something like "record with given ID already exists …" ). If after all you care only about the record's existence, you can't rely on the exit code here. So an alternative approach could be to verify the script's work by the messages appearing at stdout:

$ cat story.check

regexp: record (created|already exists)
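
A sketch of the matching story file – the insert-record command here is purely hypothetical, standing in for your real database client call; we deliberately ignore its exit code and let the check rule above judge success by the printed message:

$ cat story.bash
insert-record --id 100 || true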

Outthentic suites

As I said at the beginning, there might be more than one script in a project. In terms of Outthentic we can talk about an outthentic project, or outthentic suite – a bunch of related stories. Strun utilizes directories to tell one story from another. Let's add a new story to our suite to start the nginx service; we will reorganize the directory layout on the way:

$ tree 
.
├── check-nginx
│   ├── story.bash
│   └── story.check
└── start-nginx
    ├── story.bash
    └── story.check

The content of the check-nginx/* files remains the same – this is the story checking the nginx state. The content of the start-nginx/story.bash file is pretty simple:

$ cat  start-nginx/story.bash
sudo service nginx start

We leave the file start-nginx/story.check empty.

Strun uses the "--story" option to set a story to run. If no "--story" option is given, strun tries to run the file story.bash (*) in the current working directory:

$ strun  --story start-nginx

start-nginx/ at 2017-02-08 16:52:27
ok    scenario succeeded
STATUS    SUCCEED
(*) Or actually one of four files, if it exists – story.pl, story.bash, story.py, story.rb – as you can guess, this relates to the language you write scenarios in: Perl5, Bash, Python or Ruby.

Having more than one story in your project helps you split a large task into small independent scripts to be run distinctly. But sometimes we want to take another approach – call one script from others. Let's see how we can achieve this.

Story modules

Story modules ( or, in short, just modules ) are scripts being called from other scripts.
When called, modules might be given input parameters, aka story variables.

Consider an example of a simple package manager.

Let's say we want to write a script to install packages taken from an input list, passed as a string of space separated items:

"package-foo package-bar package-baz"

Outthentic provides a very flexible API to handle command line input parameters, so we can pass the package list via the "--param" option:

$ strun --param packages="package-foo package-bar package-baz"

Now let's split our task into two simple scripts: one to parse input parameters and another to install a given package. The overall project structure will be:

$ tree
.
├── hook.bash
├── meta.txt
└── modules
    └── install-package
        ├── story.bash
        └── story.check

Let’s explain a new project structure.

First of all we notice a file called "hook.bash". Hooks are a way to extend strun functionality. Under the hood, hooks are simple scripts to be executed before the story file.

Second thing: if we look at the project root directory, we find neither a story file nor a story check file here. It's ok. The existence of a file called "meta.txt" informs strun that this is a meta story. A meta story is an outthentic story which does not have a story file at all.

The meta file is just a plain text file. It could be empty, but you may place some helpful info here to be dumped when the story is executed:

$ cat meta.txt 
simple package manager

Hooks and meta stories are described in detail in the Outthentic documentation, in the "Hooks API" section, but let's go ahead.

The last new thing we notice in our project is the directory "modules/install-package", with content very similar to that of an outthentic story ( a story file and a story check file ). Well, everything kept under the "modules/" directory is treated as story modules.

Story modules, as I already said, are usual outthentic stories, but called from other stories or, to be accurate, from hook files. Let's see how this happens:

$ cat hook.bash 
for p in $(config packages); do
  run_story install-package package $p
done

This simple bash code does the following:

1. Parses input parameters using the ubiquitous "config" function provided by Outthentic
2. Splits the packages string by spaces and for every item calls a story module named "install-package":

run_story install-package package $p

The story module is passed an input parameter, or story variable, named "package", holding the name of the package being installed.

Let’s see how story module is implemented, it’s very simple:

$ cat modules/install-package/story.bash 
package=$(story_var package)
echo install $package ...

What do we do in the "modules/install-package/story.bash" script?

1. Parse the story input parameter by using the handy "story_var" function
2. Run the install command (*) for the given package.

(*) For demonstration purposes we don't run a real package install here using the yum or apt-get package manager – see the sketch below for a real-world variant.
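
A real-world variant of the module might look like this – a sketch assuming a Debian/Ubuntu box:

$ cat modules/install-package/story.bash
package=$(story_var package)
# assumes an apt-based distro; swap in yum for RPM-based systems
sudo apt-get install -y $package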

Let's summarize. Story modules are very useful when designing your script system. This mechanism encourages you to split a complex task into simple ones and enables code reuse via "script libraries".

Plenty of information about story modules can be found in the Outthentic docs, in the "Upstream and Downstream stories" section.

Now let’s run our story suite:

$ strun  --param packages='nginx mysql perl'

@ simple package manager

modules/install-package/ params: package:nginx at 2017-02-09 11:29:23
install nginx ...
ok    scenario succeeded

modules/install-package/ params: package:mysql at 2017-02-09 11:29:23
install mysql ...
ok    scenario succeeded

modules/install-package/ params: package:perl at 2017-02-09 11:29:23
install perl ...
ok    scenario succeeded
STATUS    SUCCEED

In the next section we'll see how to supply our suites with a default configuration.

Suite configuration

Sometimes it’s useful to provide a sane default for our script parameters. Outthentic comes with a lot of ways to do this. Let’s show one.

Consider a script which checks if a running nginx is listening on a given port:

$ cat story.bash 
sudo netstat -nlp|grep nginx
$ cat story.check 
0.0.0.0:80

Running the suite, we see that nginx is available on port 80, as we expected:

$ strun 

 at 2017-02-09 12:22:48
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      21899/nginx     
tcp6       0      0 :::80                   :::*                    LISTEN      21899/nginx     
ok    scenario succeeded
ok    text has '0.0.0.0:80'
STATUS    SUCCEED

Say nginx listens on another port and we want to make this parameter configurable for the script. Not a problem:

$ cat story.check 
generator: <

Generators are a way to build check lists at run time. We can see that the port variable is passed as an input parameter. Now let's provide a sane default for the port:

$ cat  suite.ini
port 80

Later, if we want to override the default setting, we can say:

$ strun --param port=443

Outthentic provides other methods to handle script configuration, among them JSON/YAML/Config::General/command line formats and nested parameters. Please follow the documentation, section "Suite Configuration".

There is more than one language to write your script

And finally, as I said at the very beginning, you are free to choose among many languages to develop scripts with the Outthentic framework. This is the list of supported languages:

* Perl5
* Bash
* Python
* Ruby

This is how the hook file for the package manager script could be written in Perl5:

$ cat hook.pl 
for my $p ( split /\s+/, config()->{packages}) {
  run_story("install-package", { package => $p });
}

Outthentic provides a unified API for all the listed languages to make script development easy and simple ( see the Python version of our hook below ):

  • Handling input parameters
  • Developing multi-script systems using story modules and the "--story" option
  • Enabling configuration with rich support of well known formats like Config::General/YAML/JSON/command line
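
To illustrate the point, here is the same hook rewritten in Python – a sketch assuming the Python binding exposes the same config()/run_story calls as the Perl5 one shown above:

$ cat hook.py
from outthentic import *

for p in config()['packages'].split():
    run_story('install-package', {'package': p})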

Script distribution

This article only describes how one can use Outthentic in script development. If you want to distribute your scripts, use Sparrow – the outthentic scripts manager.

For further reading I would recommend the comprehensive article "Sparrow plugins evolution".

The script examples presented in this article could be found here.


Regards. The author of Sparrow/Outthentic – Alexey Melezhik

ssh/scp commands with Sparrowdo

Sometimes you need to execute remote commands or copy files to remote hosts using the ssh/scp commands. Here is how you can do it using the Sparrowdo ssh/scp core-dsl functions.


Issuing ssh commands

The shortest form to do this is to call the `ssh' function with the minimum of required parameters – the command to execute and the remote host address:

ssh 'uptime', %( host => '192.168.0.1' )

Usually people use ssh public-key authentication, so it is possible to set a path to an ssh private key and provide a user id:

ssh 'uptime', %(
  host    => '192.168.0.1',
  user    => 'old-dog',
  ssh-key => 'keys/id_rsa'
);

Note that the ssh private key only has to be stored on the master host where sparrowdo runs – no other actions need to be taken, sparrowdo will take care of copying(*) the ssh private key to the target host. It's handy!

(*) By the way, sparrowdo will remove the private ssh key from the target host when the ssh command is done.

There are many options of the `ssh' function you may read about in the sparrowdo docs; here are just a few more examples.

You may run multi-line bash commands btw:

ssh q:to/CMD/, %( host => '192.168.0.1', user => 'old_dog');
  set -e
  apt-get update
  DEBIAN_FRONTEND=noninteractive apt-get install -y -qq curl
CMD

Or don't execute the same command twice, relying on the existence of a file located on the target server:

ssh "rm file", %(  host => '192.168.0.1' , create => '/do/not/run/twice' );

And finally, you may set alternative descriptions for your commands, which will be shown in the sparrowdo report to help you understand what a command does:

ssh "cat patch.sql | mysql", %(
  description => 'patching my database',
  host => '192.168.0.1'
);

Issuing scp commands

The `scp' function is akin to the `ssh' one, except it deals with remote file copying. Nothing more to say here but to show some examples.

Copy a number of files to the remote host 192.168.0.1:

scp %( 
  data    => "/var/file1 /var/file2 /var/file3",
  host    => "192.168.0.1:/var/", 
  user    => "Me", 
  ssh-key => "keys/id_rsa", 
);

Note that the files to copy should exist on the target host. If they don't, you may copy them from the master host first using the `file' function:

file '/var/file1', %( content =>  ( slurp 'files/file1' ) );
file '/var/file2', %( content =>  ( slurp 'files/file2' ) );
file '/var/file3', %( content =>  ( slurp 'files/file3' ) );

The same way as you do for the `ssh' function, you may prevent copying the same file twice if some file exists on the target host:

scp %( 
  data    => "/var/biiiiiiig-file",
  host    => "192.168.0.1:/var/data", 
  create  => "/tmp/do/not/copy/me/twice"
)

And finally, one may copy files FROM the master host to the target host, using the `pull' flag:

scp %( 
  data    => "/var/data/dir",
  host    => "master-host:/var/file1", 
  pull    => 1, 
  ssh-key => "keys/id_rsa", 
);

That is it. Stay tuned with Sparrowdo Automation.  🙂
