Sparrowdo on Docker

With the latest Sparrowdo commits it’s now possible to run Sparrowdo tasks on running Docker containers:

Spin up a Docker container:

$ docker pull bitnami/minideb-extras # Debian minimal
$ docker run -d -it --name instance0 bitnami/minideb-extras bash

Create some Sparrowdo scenario:

$ cat sparrowfile
use v6;
use Sparrowdo;
task-run 'check disk available space', 'df-check', %( threshold => 80 );
bash 'pwd';

Run the Sparrowdo scenario on the running Docker container:

$ sparrowdo --docker=instance0 --bootstrap --no_sudo --format=production

Output:

running sparrow tasks on 127.0.0.1 ... 
target OS is - ubuntu
push [task] check disk available space [plg] df-check OK
push [task] run bash: pwd ... OK
SPL file /opt/sparrow/sparrow.list is empty
get index updates from SparrowHub ... OK
set up task box file - /home/melezhik/.sparrowdo//opt/sparrow/task-box.json - OK
public@df-check is uptodate (0.2.3)
public@bash is uptodate (0.1.6)
running task box from /opt/sparrow/sparrow-cache/task-box.json ... 
2017-09-22 11:12:39 : [task] check disk available space [plg] df-check [path] /
threshhold: 80
2017-09-22 11:12:39 : [task] run bash: pwd ... [path] modules/bash-command/ [params] envvars:

 
Caveats

You should have bash and curl preinstalled in your Docker container.


Minoca OS automation with Sparrowdo

Introduction

Hello! Minoca is a new operating system for the world of connected devices. In this post I am going to show you how one can enable configuration management of running Minoca instances with the help of Sparrowdo.

Download the latest Minoca build

$ wget http://www.minocacorp.com/download/nightlies/latest-x86/Minoca-pc.zip
$ unzip Minoca-pc.zip

Start Minoca OS instance

$ qemu-system-x86_64 -enable-kvm -m 2000 -net nic,model=i82559er -net user,hostfwd=tcp::2222-:22,hostfwd=tcp::8888-:80 -hda pc.img

Set up remote ssh access

Inside running Minoca instance:

$ opkg update 
$ opkg install openssh bash
$ /etc/init.d/sshd start
$ passwd # we will use this password when running `ssh-copy-id` from the host OS

Inside Host OS:

$ ssh-copy-id -p 2222 root@127.0.0.1
$ ssh -p 2222 root@127.0.0.1
$ exit

Create some Sparrowdo scenario

$ nano sparrowfile
use v6;
use Sparrowdo;
package-install ("nano", "zsh", "nginx");
user "alexey";
directory "/var/data/bar", %( owner => "alexey");
service-stop "nginx";
service-start "nginx";
http-ok %( port => 8888 );

Run Sparrowdo scenario for Minoca instance

$ sparrowdo --host=127.0.0.1 --ssh_user=root --ssh_port=2222 --no_sudo --sparrowfile=sparrowfile --bootstrap --format=production

The output:

running sparrow bootstrap for host: 127.0.0.1 ... 
bootstrap for minoca
Downloading http://www.minocacorp.com/packages/0.4/i686/main/Packages.gz.
Inflating http://www.minocacorp.com/packages/0.4/i686/main/Packages.gz.
Updated list of available packages in /var/opkg-lists/main.
/usr/bin/curl
/usr/bin/perl
/usr/bin/cpanm
/usr/bin/sparrow
Outthentic is up to date. (0.3.9)
Sparrow is up to date. (0.2.48)
running sparrow tasks on 127.0.0.1 ... 
target OS is - minoca
push [task] install packages: nano zsh nginx OK
push [task] create user alexey OK
push [task] create directory /var/data/bar OK
push [task] stop service nginx OK
push [task] start service nginx OK
push [task] run bash: curl -fsSLk -D - --retry 3 127.0.0.1:8888 -o /dev/ ... OK
SPL file /opt/sparrow/sparrow.list is empty
get index updates from SparrowHub ... OK
set up task box file - /home/melezhik/.sparrowdo//opt/sparrow/task-box.json - OK
public@package-generic is uptodate (0.3.7)
public@user is uptodate (0.2.1)
public@directory is uptodate (0.1.4)
public@service is uptodate (0.1.13)
public@bash is uptodate (0.1.6)
running task box from /opt/sparrow/sparrow-cache/task-box.json ... 
2017-09-21 02:27:12 : [task] install packages: nano zsh nginx [path] modules/opkg/ [params] action:install package:nano
2017-09-21 02:27:12 : [task] install packages: nano zsh nginx [path] modules/opkg/ [params] action:install package:zsh
2017-09-21 02:27:12 : [task] install packages: nano zsh nginx [path] modules/opkg/ [params] action:install package:nginx
2017-09-21 02:27:13 : [task] create user alexey [path] modules/create/
2017-09-21 02:27:13 : [task] create directory /var/data/bar [path] modules/create/
2017-09-21 02:27:14 : [task] stop service nginx [path] modules/stop/ [params] os:minoca service:nginx
2017-09-21 02:27:14 : [task] start service nginx [path] modules/start/ [params] os:minoca service:nginx
2017-09-21 02:27:14 : [task] run bash: curl -fsSLk -D - --retry 3 127.0.0.1:8888 -o /dev/ ... [path] modules/bash-command/ [params] envvars:

Building Perl6 Applications with Docker and Ducky

Docker containers allow developers to run environments and deploy applications easily and quickly. Dockerfile, Ansible, and Chef are the usual means of configuring bootstrapped Docker instances; however, there is another way to do this …

Ducky is a lightweight Docker provisioning tool that lets you deploy Docker containers simply by writing JSON scenarios in a declarative way:

$ cat ducky.json
[
 {
   "task" : "install perl6",
   "plugin" : "rakudo-install",
   "data" : {
   "url" : "https://github.com/nxadm/rakudo-pkg/releases/download/2017.07/perl6-rakudo-moarvm-CentOS7.3.1611-20170700-01.x86_64.rpm"
  }
 }
]

This is how we bootstrap Rakudo on a Docker box using Ducky and this simple scenario. Now let's pull the CentOS image and run a Docker container based on it:

$ docker pull centos
$ docker run -d -i -t -v $PWD:/var/ducky --name ducky-centos centos

The only requirement here is that the running Docker container has the current working directory (which holds the Ducky JSON file) mounted as /var/ducky.

Ducky picks up the ducky.json file placed in the current working directory and executes the scenario on the running Docker container, named ducky-centos:

$ ducky.bash ducky-centos

Here is a piece of a screenshot of Ducky's output (only the last lines are shown for the sake of brevity):

[screenshot: Ducky output]

Under the hood, Ducky installs the Sparrow client on the container and then runs it to execute the tasks defined in the Ducky JSON file. The tasks are described in the Sparrow task box format.

So the Ducky JSON is just a Sparrow task box file. That means you can declare Sparrow plugins with parameters here, a.k.a. Sparrow tasks, which are executed on a Docker container. Available plugins are listed, documented and stored at SparrowHub, the Sparrow plugins repository.

In this scenario we use the rakudo-install plugin to install Rakudo as a system package. The plugin documentation is available at the SparrowHub site.

There are more things you can do with Ducky than just installing software; you're only limited by the existing Sparrow plugins.

A typical use case is to run Test::Harness against a Perl6 project. Let's do it for Bailador, which is "a light-weight route-based web application framework for Perl 6":

$ git clone https://github.com/Bailador/Bailador.git
$ cd Bailador
$ cat ducky.json 

[

  {
    "task" : "install perl6",
    "plugin" : "rakudo-install",
    "data" : {
      "url" : "https://github.com/nxadm/rakudo-pkg/releases/download/2017.07/perl6-rakudo-moarvm-CentOS7.3.1611-20170700-01.x86_64.rpm"
    }
  },
  {
    "task" : "installs Bailador dependencies",
    "plugin" : "zef",
    "data" : {
        "list" : [ "." ],
        "options" : "--deps-only"
    }
  },
  {
    "task" : "run t/ tests",
    "plugin" : "bash",
    "data" : {
        "command" : "prove6 -l",
        "envvars" : {
          "PATH" : "/opt/rakudo/bin:/opt/rakudo/share/perl6/site/bin:/root/.rakudobrew/moar-nom/install/share/perl6/site/bin:$PATH"
        }
    }
  }
]

The Ducky JSON is quite self-explanatory; here we define some standard steps to build and test the project:

* Install Rakudo
* Install Bailador dependencies picked from the META6 file
* Run t/ tests with prove6

Ok, let’s give it a run: ( don’t forget that we should first launch Docker container with the current working directory mounted as /var/ducky ):

$ docker run -d -i -t -v $PWD:/var/ducky --name ducky-bailador centos
$ ducky.bash ducky-bailador

Here are the last lines of the Ducky output:

[screenshot: Ducky output]

In this scenario we use two other plugins: bash, to execute arbitrary Bash code, and zef, a simple wrapper for the Zef manager, a tool to install Perl6 modules. The plugins' documentation is available at the SparrowHub site.

Further thoughts.

Ducky and Sparrow are cross-platform tools, meaning you can successfully run the same scenarios on a variety of Linux platforms (provided that Bash is installed). For example, the last scenario will succeed when run against an Alpine Linux docker image:

$ docker pull melezhik/alpine-perl6
$ docker run -d -i -t -v $PWD:/var/ducky --name ducky-bailador-alpine melezhik/alpine-perl6
$ ducky.bash ducky-bailador-alpine

Thus it becomes extremely useful when you want to test a project against different environments while just sitting at your developer box and running cheap docker containers.

And last but not least: if for some reason you're not satisfied with the existing Sparrow plugins, you can easily write a new one to cover your needs. I have written plenty of posts on how to do this; you may start with this one: Outthentic – quick way to develop user's scenarios.

Regards, and have fun with your coding and automation.

How to use Chef and Sparrowdo together

Good team member.

Chef is a well-recognized configuration management tool which I use extensively at my current work. However, I keep pushing Sparrowdo, a Perl6 configuration management tool, and find that those two tools play nicely together.

In this post I am going to give a few examples of how I use Sparrowdo to simplify and improve the Chef cookbook development workflow.

Running chef client on a target host.

Here is the most useful scenario in which I use Chef and Sparrowdo together. My working environment involves launching Amazon EC2 instances that get configured by Chef. Instead of ssh-ing to an instance and running a chef-client on it, I delegate this task to Sparrowdo using a wrapper called Sparrowdo::Chef::Client.

Let’s install the module:

$ zef install Sparrowdo::Chef::Client

And then create a simple Sparrowdo scenario:

$ cat sparrowfile
module_run 'Chef::Client', %(
    run-list => [
      "recipe[foo]",
      "recipe[baz]"
    ],
    log-level => 'info',
    force-formatter => True
);

Here we’re just running two recipes called foo and bar. And define some chef client’s settings, like log level and enabling force-formater option. Now we can run a chef-client on a target host:

$ sparrowdo --host=$target_host

Post deployments checks.

It is always a good idea to check a server's state right after a deployment. There are reasons why I prefer not to keep such checks inside my Chef scenarios. And it seems there is a trend here, as new monitoring and audit tools keep appearing on the open source market, InSpec and goss among them, to name a few.

Likewise Sparrowdo has some built-in facilities to quickly test an infrastructure.

Let me give you a few examples.

Check system processes.

Say we reconfigure an Nginx server using some Chef recipes. Sometimes Chef is not able to ensure that Nginx starts successfully after the deploy, and even if it does, I don't want to grep huge chef client logs (sometimes there is a load of them) to find out whether Nginx got started successfully. Happily, there is a dead simple solution: Sparrowdo asserts.

$ cat healthcheck.pl6
proc-exists 'nginx';

This function checks that the file /var/run/nginx.pid exists and that the related process, with the PID taken from the file, exists as well. If you need to handle uncommon paths for pid files, you can always set the path explicitly:

$ cat healthcheck.pl6
proc-exists-by-pid 'nginx server', '/var/run/nginx/nginx.pid';

Moreover, if you only know the "name" of the process (well, technically speaking, a regular expression to match the process command), simply use this:

$ cat healthcheck.pl6
proc-exists-by-footprint 'nginx web server', 'nginx\s+master';

Having this simple Sparrowdo scenario, just run it against a target server to check that the Nginx process exists:

$ sparrowdo --host=$target_host --sparrowfile=healthcheck.pl6

I always put Sparrowdo scenarios and Chef cookbook files together and commit them to the Git repository:

$ git add sparrowfile healthcheck.pl6
$ git commit -a -m 'sparrowdo scenarios for chef cookbook'

And finally, let me give an example of checking web application endpoints by sending http requests. Say we have a Chef recipe which deploys an application that should be accessible via an http GET / request. Sparrowdo exposes handy http-ok asserts to deal with such checks:

$ cat healthcheck.pl6
http-ok;

That is it! This is the simplest form of the http-ok function call, verifying that the web application is responsive, accepting requests for the GET / route. Under the hood it just:

  1. resolves the hostname as the one you run sparrowdo against
  2. issues an http request using the curl utility:
$ curl -f http://$target_host

There are options for how you can call the http-ok function. For example, you may define an endpoint and set the http port:

$ cat healthcheck.pl6
http-ok(port => '8080', path => '/Foo/Bar');

See the Sparrowdo documentation for a full description of the http assert functions.
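
These checks compose naturally in a single scenario. Here is a minimal sketch of a healthcheck.pl6 combining the process and http checks from this post; the port and path values are the same illustrative ones used above, not anything your deployment requires:

$ cat healthcheck.pl6
# process check: the nginx master process should be up
proc-exists 'nginx';
# endpoint check: the application should answer GET /Foo/Bar on port 8080
http-ok %( port => '8080', path => '/Foo/Bar' );

Run it the same way as the earlier healthcheck scenarios:

$ sparrowdo --host=$target_host --sparrowfile=healthcheck.pl6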

Conclusion

Using Sparrowdo and Chef together can be an efficient approach when developing and testing server configuration scenarios. Sparrowdo has proven able to adapt to any requirements and play nicely with other ecosystems and frameworks, acting as an intelligent "glue" that binds various tools together and gets your work done.

Building Bailador Docker images with Sparrowdo

Bailador is a light-weight route-based web application framework for Perl 6. Thanks to Gabor Szabo, who invited me to join the project and see how I can help the team.

I decided to put my efforts into configuration management and deployment tasks; at the moment the Bailador developers need help in this area.

Docker is quite a popular way to distribute applications across teams, so I gave it a try. Welcome to bailador-docker-sparrowdo, a small project to help the Bailador developers easily check the latest changes in the Bailador source code.

What can you do by using bailador-docker-sparrowdo:

* Build a docker image with the sample Bailador application.
* Start the sample application.
* Update an existing docker image by picking up the latest changes from the Bailador source code repository (GitHub).

Let me show in more detail how this can be done using sparrowdo.

Build docker image

First of all, you need to check out bailador-docker-sparrowdo and run the `docker build` command:

$ git clone https://github.com/melezhik/bailador-docker-sparrowdo.git 
$ cd bailador-docker-sparrowdo
$ docker build -t bailador .

It takes a few minutes to build the image. Under the hood it:

* Pulls the alpine-perl6 base image, with Alpine Linux and Perl6/zef preinstalled; the image was created by Juan Julián Merelo Guervós.

* Installs sparrow/sparrowdo, as it is used as the configuration management tool for the sample application.

Only a few instructions can be found in the Dockerfile:

FROM jjmerelo/alpine-perl6
ENV PATH=/root/.rakudobrew/moar-nom/install/share/perl6/site/bin:$PATH
RUN apk add wget git curl bash build-base gcc perl-dev
RUN cpan App::cpanminus
RUN cpanm -q --notest https://github.com/melezhik/outthentic.git Sparrow
RUN zef install https://github.com/melezhik/sparrowdo.git
# ...

The rest of the configuration is done by sparrowdo itself, by running the `sparrowdo` command during the build process:

COPY sparrowfile    /tmp/
RUN sparrowdo --local_mode --sparrowfile=/tmp/sparrowfile --no_sudo

Here is the content of the sparrowdo scenario:

directory '/var/data';

bash 'zef install Path::Iterator';
bash 'zef install TAP::Harness';
bash 'zef install HTTP::MultiPartParser';

bash(q:to/HERE/);
  set -e
  cd /var/data/
  if test -d  /var/data/Bailador; then
    cd Bailador
    git pull
    cd ../
  else
    git clone https://github.com/Bailador/Bailador.git
  fi
HERE

bash "cd /var/data/Bailador && zef --deps-only install .";

bash "cd /var/data/Bailador && prove6 -l";

bash "cd /var/data/Bailador && zef install --/test --force .";

Right now it is simple enough, but the use of Sparrowdo gives me the freedom to create more sophisticated build scenarios in the future, which could hardly be expressed using an "only Dockerfile" approach.

Eventually, as new requirements come or new build scenarios are needed, I will add more sparrowdo scenarios to effectively manage the build process; one possible direction is sketched below.
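
For example, a future scenario might build a specific Bailador branch instead of master. A minimal sketch of such an extension, reusing only the primitives already used above; the $branch variable is a hypothetical parameter, not something the current repository defines:

my $branch = 'dev'; # hypothetical: the branch to build and test

bash(qq:to/HERE/);
  set -e
  cd /var/data/Bailador
  git fetch origin
  git checkout $branch
  git pull origin $branch
HERE

bash "cd /var/data/Bailador && zef --deps-only install .";
bash "cd /var/data/Bailador && prove6 -l";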

Finally, I have added a few lines to the end of the Dockerfile to copy the sample Bailador application script and declare the default entry point to launch the application:

COPY entrypoint.sh  /tmp/
COPY example.p6w    /tmp/
ENTRYPOINT ["/tmp/entrypoint.sh"]
EXPOSE 3000

The sample application, example.p6w, is very simple and is only meant as a way to check that Bailador works correctly:

use Bailador;
get '/' => sub {
    "hello world"
}
baile(3000,'0.0.0.0');

Here is how the application gets run via the entry point script, entrypoint.sh:

#!/bin/bash
perl6 /tmp/example.p6w

Run application

Once the image is ready, you run a docker container based on it; if everything is built correctly you will get the running sample application:

docker run -d -p 3000:3000 bailador

Check the application by sending an http request:

curl 127.0.0.1:3000
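
The same check can be expressed as a Sparrowdo assert and kept next to the other scenarios. A minimal sketch, assuming the container is started with --name bailador-app (a hypothetical name; the docker run command above does not set one) and that port 3000 is published as shown; if sparrow were not already baked into the image you would add --bootstrap, as in the first post:

$ cat healthcheck.pl6
# verify the sample Bailador application answers GET / on port 3000
http-ok %( port => 3000 );

$ docker run -d -p 3000:3000 --name bailador-app bailador
$ sparrowdo --docker=bailador-app --no_sudo --sparrowfile=healthcheck.pl6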

Update Docker image

The promising feature of bailador-docker-sparrowdo is that you can check the latest Bailador source changes by updating an existing Docker image, so you don't have to rebuild the image from scratch, which saves you time. This is possible because the main configuration and build logic is embedded into the image through the sparrowdo that gets installed into it.

This is how you can do this.

First, find the existing bailador image and run a container from it:

docker run -it -p 3000:3000 --entrypoint bash bailador

Notice that we don’t detach the image as we did when just wanted to run the sample application. Moreover we override default enrty point as we need an bash shell to login into container.

Once logged into the container, just run the sparrowdo scenario again, as we did when building the image via the Dockerfile:

sparrowdo --no_sudo --local_mode --sparrowfile=/tmp/sparrowfile

Sparrowdo will pick up the latest changes to the Bailador source code from GitHub and apply them.

To ensure that the sample application runs on the new code, let's run it manually:

perl6 /tmp/example.p6w
^C # to stop the application

Now we can update the image by using the `docker commit` command. Without exiting the running Docker container, let's do that in a parallel console:

$ docker ps # to find out the container id
$ docker commit $container_id bailador # commit changes made in the container into the bailador image
$ docker stop -t 1 $container_id # stop the current Docker container

Great. Now our image is updated and contains the latest Bailador source code changes. We can run the sample application the same way as we did before:

docker run -d -p 3000:3000 bailador

Conclusion

Docker is an efficient and powerful tool for sharing applications across teams. Sparrowdo plays nicely with the Docker ecosystem, providing a comprehensive DSL to describe arbitrarily complicated scenarios for building Docker images and, even more impressively, for updating existing images "on the fly", saving developers' time.

Join Bailador project

If you’d like to get involved in the Bailador project, contact me and I’ll send you an invitation to our Slack channel.

Using Python dependencies in Sparrow plugins

The latest version of Sparrow brings a new feature for those who would like to write sparrow plugins in Python.

Now you can declare Python/pip dependencies with the help of a requirements.txt file:

$ cat requirements.txt
hackhttp==1.0.4

Let’s create a simple plugin to make http requests using hackhttp python library.

$ cat story.py
# Sparrow plugin scenario written in Python
import hackhttp
from outthentic import *  # provides config() with the plugin parameters

url = config()['url']     # the url parameter passed to the plugin
hh = hackhttp.hackhttp()

# issue the request and print the http response code
code, head, html, redirect_url, log = hh.http(url)

print code

$ touch story.check

$ cat sparrow.json
{
    "name" : "python-sparrow-plugin",
    "description": "test sparrow plugin for python",
    "version" : "0.0.4",
    "url" : "https://github.com/melezhik/python-sparrow-plugin"
}

Now let’s upload the plugin to SparrowHub and give it a run:

$ sparrow plg upload
sparrow.json file validated ...
plugin python-sparrow-plugin version 0.000004 upload OK

$ sparrow plg install python-sparrow-plugin

upgrading public@python-sparrow-plugin from version 0.0.3 to version 0.000004 ...
Download https://sparrowhub.org/plugins/python-sparrow-plugin-v0.000004.tar.gz --- 200
Downloading/unpacking hackhttp==1.0.4 (from -r requirements.txt (line 1))
  Downloading hackhttp-1.0.4.tar.gz
  Running setup.py (path:/tmp/pip_build_melezhik/hackhttp/setup.py) egg_info for package hackhttp
    
Installing collected packages: hackhttp
  Running setup.py install for hackhttp
    
Successfully installed hackhttp
Cleaning up...

$ sparrow plg run python-sparrow-plugin --param url=http://example.com
•[plg] python-sparrow-plugin at 2017-05-12 17:07:17

200
ok scenario succeeded
STATUS SUCCEED

And finally, if you prefer to get things done with Perl6/Sparrowdo, use this piece of code as a starting point:

$ cat sparrowfile

my $url = 'http://example.com';
task-run "http get $url", 'python-sparrow-plugin', %( url => $url );

Sparrowdo command line API

The command line API makes it possible to run sparrow plugins and modules remotely on a target server using only the console client; there are a lot of things you can do with this API!

Running plugins with parameters

Executing sparrow plugins: here is the list of plugins you may use, and the form for running them via the command line is:

--task_run=plg-name@plg_param=plg_value,plg_param=plg_value ...

Let’s  me drop a few examples, how you can use it.

Execute bash commands

Here is where the bash sparrow plugin can be handy.

1. Single bash command

$ uptime 
=>
$ sparrowdo --host=remote.server --task_run=bash@command=uptime

[screenshot: sparrowdo bash uptime output]

2. Compound commands

Say you want to execute multiple bash commands chained by a logical "AND"; it's easy:

$ ps uax | grep nginx | grep -v grep && service nginx stop
=>
$ sparrowdo --host=remote.server \
--task_run=bash@command='ps uax|grep nginx|grep -v grep && service nginx stop'

[screenshot: sparrowdo compound bash command output]

3. Multiple bash commands

Alternatively, you may pass more than one `--task_run` chunk to execute several bash commands consecutively:

$ ls -l; uptime ; df -h; 
=>
$ sparrowdo --host=remote.server \
--task_run=bash@command='ls -l' \
--task_run=bash@command=uptime \
--task_run=bash@command='df -h'

 

4. Run a command under a user's account

Say you want to execute a bash command under a specific user, not root? It's easy to do using the sparrow bash plugin:

$ sparrowdo --host=remote.server \
--task_run=bash@command=id,user=nginx

[screenshot: sparrowdo bash command run as the nginx user]


Install system packages

Use the package-generic plugin. This is a cross-platform installer with support for some popular Linux distros: Debian/Ubuntu/CentOS.

Install mc, nano and tree packages:

$ sparrowdo --host=remote.server \
--task_run=package-generic@list='mc nano tree'

[screenshot: sparrowdo package-generic output]

 

Install CPAN packages

Use the cpan-package plugin to install CPAN packages. There are many options with it. Say I want to create a web-app user and install some CPAN packages into the user's home …

$ sparrowdo --host=remote.server \
--task_run=user@name=web-app \
--task_run=cpan-package@list='CGI DBI',\
install-base=/home/web-app/,user=web-app

[screenshot: sparrowdo cpan-package output]

What else? Any sparrow plugin can be run the same way:

--task_run=plg-name@plg_param=plg_value,plg_param=plg_value ...

Find one you need at https://sparrowhub.org/search and just use it!
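
Once such a one-liner grows or needs to live under version control, the same task translates directly into a Sparrowdo scenario. Here is a minimal sketch of the uptime example as a sparrowfile; the task name is arbitrary:

$ cat sparrowfile
use v6;
use Sparrowdo;
# same as --task_run=bash@command=uptime
task-run 'run uptime', 'bash', %( command => 'uptime' );

$ sparrowdo --host=remote.server --sparrowfile=sparrowfile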

Running modules with parameters

Sparrow modules are higher-level entities, but you can use them the same way as you do sparrow plugins: to apply pieces of configuration to your servers remotely.

Choose this form:

$ sparrowdo --module_run=module-name@mod_param=mod_value,mod_param=mod_value ...

Here are some examples.

1. Install nginx with a custom document root

Use Sparrowdo::Nginx module:

$ sparrowdo --host=remote.server \
--module_run=Nginx@document_root=/var/www/data

This command produces too much output, so I am not showing its screenshot here.
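
If you prefer a scenario over a one-liner, the same module can be invoked from a sparrowfile via module_run, just as the Chef post did with Sparrowdo::Chef::Client. A minimal sketch; the document_root parameter is taken from the command above:

$ cat sparrowfile
# same as --module_run=Nginx@document_root=/var/www/data
module_run 'Nginx', %( document_root => '/var/www/data' );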

2. Install CPAN packages that come from GitHub repositories

There is a Sparrowdo::Cpanm::GitHub module to handle this; it accepts many options, and it's even possible to install modules from Git branches.

Let’s install CGI.pm from master branch at https://github.com/leejo/CGI.pm :

$ sparrowdo --host=remote.server \
--module_run=Cpanm::GitHub@user=leejo,project=CGI.pm,branch=master

This command produces too much output, so I am not showing its screenshot here.

3. Fetching remote file

And finally, the last but not the least example is a Sparrowdo module to fetch files over http; it's called Sparrowdo::RemoteFile.

Say I want to fetch a basic-auth protected URL and place the file into a specific directory?
Well, let's do it in one shot:

$ sparrowdo --host=remote.server \
--module_run=RemoteFile@user=fox,password=red,location=http://archive.server/file.tar.gz,location=/opt/data/

This command produces too much output, so I am not showing its screenshot here.

Conclusion

The Sparrowdo command line API provides an easy and simple way to configure servers remotely using only the console client, with no coding at all, in the style of bash one-liners.

But if you are looking for something more complicated and powerful, consider using Sparrowdo scenarios, like the sketch below!
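
For example, the cpan-package one-liner above could grow into a small scenario that is committed to a repository and rerun as needed. A minimal sketch combining primitives shown earlier in these posts; the package list and paths are the same illustrative values as in the one-liner:

$ cat sparrowfile
use v6;
use Sparrowdo;

# create the web-app user, as in --task_run=user@name=web-app
task-run 'create web-app user', 'user', %( name => 'web-app' );

# install CPAN packages into the user's home, as in the cpan-package one-liner
task-run 'install CPAN packages', 'cpan-package', %(
  list           => 'CGI DBI',
  'install-base' => '/home/web-app/',
  user           => 'web-app',
);

$ sparrowdo --host=remote.server --sparrowfile=sparrowfile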