Sparrow plugins vs Ansible modules

Introduction

Both Ansible modules and Sparrow plugins are building blocks for solving elementary tasks in configuration management and deployment automation. Ansible modules are used in higher-level playbook scenarios written in YAML; Sparrow plugins are used in higher-level Sparrowdo scenarios written in Perl 6.

Language support

Ansible – you may choose any language to write modules in. Out of the box, Ansible provides seamless module-development support for Python only ( the so-called shortcuts ); for other languages you have to use third-party libraries ( native to the language you write the module in ) to make the development and integration process easier.

Sparrow – you write plugins in one of three languages – Perl 5, Bash or Ruby. Sparrow provides a unified API ( available for all three languages ) to make plugin development and integration easy and seamless, though this API is not as extensive as the Python shortcuts API for Ansible modules.

System design

Ansible – Ansible modules are autonomous units of code that solve an elementary task. Under the hood a module is just a single file of code. Ansible modules can neither depend on nor call other modules.

Sparrow – Sparrow plugins are very similar to Ansible modules in that they are autonomous, closed units of code solving elementary tasks. But Sparrow gives the plugin developer one more level of freedom: a Sparrow plugin is actually a suite of scripts, and scripts may call other scripts with parameters. This design makes it easy to split even an elementary task into scripts “speaking” to each other. Consider a trivial example – installing / removing software packages. We can think of one plugin that copes with the whole elementary task ( installing / removing packages ) but under the hood splits it into two scripts – one for package installation, another for package removal. This idea is elaborated in the comprehensive post – Sparrow plugins evolution.
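To make the “suite of scripts” idea concrete, here is a minimal self-contained sketch ( the file names and the stubbed-out task are hypothetical, not a real SparrowHub plugin ): one entry script dispatching to two helper scripts, one per sub-task.

```shell
#!/bin/bash
# install.bash – helper for package installation (stubbed out for illustration)
cat > install.bash <<'EOF'
echo "installing package: $1"
EOF

# remove.bash – helper for package removal (stubbed out for illustration)
cat > remove.bash <<'EOF'
echo "removing package: $1"
EOF

# story.bash – the entry script: picks a helper based on the requested action
cat > story.bash <<'EOF'
action=$1
package=$2
case "$action" in
  install) bash install.bash "$package" ;;
  remove)  bash remove.bash  "$package" ;;
esac
EOF

bash story.bash install httpd   # prints: installing package: httpd
bash story.bash remove httpd    # prints: removing package: httpd
```

Each helper stays trivially small, while the suite as a whole covers the elementary task.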

Here is a simple illustration of what I have said.

sparrow-plugins-design1

System integration

Ansible – Ansible modules are the smaller parts of higher-level configuration scenarios called playbooks. An Ansible playbook is a YAML-driven DSL declaring a list of tasks – Ansible modules with parameters.

ansible-modules-and-playbooks

Sparrow – like Ansible modules, Sparrow plugins are the smaller parts of an overall system – Sparrowdo – a configuration management tool written in Perl 6. Sparrowdo scenarios are Perl 6 code that runs sparrow tasks – Sparrow plugins with parameters.

sparrow-plugins-and-sparrowdo1

End user interface

Ansible – Ansible modules get called via playbooks, using a YAML DSL to declare module calls and pass parameters to them. It is also possible to run Ansible modules via the command-line client, passing parameters as command-line arguments.

Below is an example of an Ansible playbook using the Ansible module yum to install the httpd software package:

$ cat playbook.yml
---
- hosts: webservers
  tasks:
  - name: ensure apache is at the latest version
    yum: name=httpd state=latest


Sparrow – Sparrow plugins get called via Sparrowdo scenarios using a Perl 6 API. Plugin parameters are passed as Perl 6 hashes. One may also use the Sparrow console client to run Sparrow plugins as-is from the command line, without Sparrowdo. There are a lot of options here – command-line parameters, parameters in JSON / YAML format, Config::General format parameters.

Below is the Sparrowdo equivalent of the Ansible yum module installing the latest version of httpd.
Two flavours of API are shown – the core DSL and the plugin API:

$ cat sparrowfile

# you can use a short core-dsl API flavour:
package-install 'httpd'; 

# or low level plugin API flavour:
task-run 'ensure apache is at the latest version', 'package-generic', %(
   list => 'httpd'
);

Processing input parameters

Ansible – input parameters come as key=value pairs (*); when developing a module you should parse the input and “split” it into pieces of data to get the variables you need. There are plenty of “helpers” for many languages ( like Perl 5 and Ruby ) to simplify this process, or else you have to parse the input data explicitly inside the Ansible module.

(*) Nested input parameters are possible
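For illustration, parsing such key=value input boils down to splitting pairs. Here is a minimal Python sketch ( simplified – the real Ansible helpers also handle type coercion, check modes and complex args; the function name is mine, not an Ansible API ):

```python
import shlex

def parse_module_args(raw):
    """Split an Ansible-style key=value argument string into a dict.
    Simplified sketch: real helpers do much more (types, defaults, nesting)."""
    params = {}
    # shlex honours quotes, so msg="hi there" stays a single token
    for token in shlex.split(raw):
        if '=' in token:
            key, value = token.split('=', 1)
            params[key] = value
    return params

print(parse_module_args('name=httpd state=latest'))
# → {'name': 'httpd', 'state': 'latest'}
```

This is roughly the chore the shortcuts API ( described next ) takes off the developer's shoulders.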

Ansible provides a high-level Python API for Ansible modules, called shortcuts, that allows you to automatically parse input and create parameter accessors, declare parameter types, set default values, check required parameters and do other useful things.

Below is an example of module parameter processing using the Python Ansible API:

$ cat library/greetings.py
from ansible.module_utils.basic import *

def main():

    fields = {"message": {"default": "Hi!", "type": "str"}}
    module = AnsibleModule(argument_spec=fields)
    message = module.params['message']
    # return results back to the playbook
    module.exit_json(changed=False, meta={"message": message})

if __name__ == '__main__':
    main()


Sparrow – in a similar way, Sparrow provides a unified API ( available for all languages ) to access input parameters, so you don't have to parse input data at all.

Thus, irrespective of the language you write a plugin in, you get a programming API to access input parameters. Plugin developers can define a so-called default configuration so that plugin input parameters ( if not set explicitly ) get initialized with sane defaults.

Below is the Sparrow equivalent of the Ansible module accessing a named input parameter. We are going to use Bash here:

# this is plugin scenario:
$ cat story.bash
message=$(config message)

# this is default configuration:
$ cat story.ini
message = Hi!

And this is how Sparrow handles nested input parameters:

$ cat sparrowfile
task-run "run my task", 'foo-plugin', %( 
 foo => { 
    bar => { 
      baz  => 'BAZ'
    }
  }
);

$ cat story.bash 
baz=$(config foo.bar.baz)

Return results

Ansible – Ansible modules return results as JSON. There are some essential points about how Ansible modules return results:

* the exit code of an Ansible module script gets ignored
* the only requirement for a module is that it prints specially formatted JSON ( containing the required fields ) to STDOUT
* if no valid JSON appears in the module's output, it is considered a failure
* STDOUT/STDERR generated by a module ( if any ) is not seen in the playbook output
* thus, if a module developer wants to return some value, he/she always has to pack the data into JSON format and return it as a JSON string

Below is an example of an Ansible module returning the current time:

$ cat library/currentime.py
import datetime
import json

date = str(datetime.datetime.now())
print(json.dumps({
    "time": date
}))


Sparrow – Sparrow plugins can return whatever they want; actually Sparrow does not care ( but see the “Handle results” section ) what appears on STDOUT/STDERR. There are some essential points about how Sparrow plugins return results:

* the exit code is important; it should be 0, otherwise Sparrow treats the plugin execution as a failure
* STDOUT from a plugin simply gets redirected to the Sparrowdo output, so you always see what is happening under the hood; no wrapping of results into JSON takes place as for Ansible modules

Below is the Sparrow equivalent of the Ansible module returning the current time; we are going to use Perl 5 here:

$ cat story.pl
print scalar localtime;

Handle results

Ansible – as Ansible modules return structured JSON data, it is possible to assign data included in the JSON to Ansible variables and use them at the upper level ( inside playbooks ).

Below is an example of a simple echo module which just returns what it gets as input:

$ cat playbook.yml
- hosts: localhost
  tasks:
    - name: tell me what I say
      echo:
         message: "hi there!" 
      register: result
    - debug: var=result  

$ cat library/echo.py
from ansible.module_utils.basic import *

def main():

    # declare the "message" parameter so it appears in module.params
    fields = {"message": {"type": "str"}}
    module = AnsibleModule(argument_spec=fields)
    response = {"you_said": module.params['message']}
    module.exit_json(changed=True, meta=response)

if __name__ == '__main__':
    main()

Sparrow – as was said, Sparrow does not care about WHAT appears on a plugin's STDOUT. Well, that is not quite true. Plugin developers can define check rules to validate the STDOUT coming from plugin scripts. Such validation consists of matching STDOUT lines against Perl regexes and many other things you can get acquainted with at the Outthentic::DSL documentation pages – a Sparrow-embedded DSL to validate text output. The output validation result impacts the overall execution status of a Sparrow plugin; thus if validation checks fail, the plugin itself fails. Such embedded testing facilities make it easy to develop plugins for automation testing or audit purposes.

Probably there is nothing to add here as an example besides this dummy code 🙂

$ cat sparrowfile
task-run "tell me what I say", "echo", %( message => 'hi there!' )

$ cat story.bash
echo you said $(config message)

A trivial check rule for the script output would be:

$ cat story.check
generator:  config()->{message}
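A slightly richer check could combine a plain-string rule ( the line must appear in the output ) with a regexp rule. This is a sketch assuming Outthentic::DSL conventions, not the plugin's real check file:

```
you said
regexp: you\s+said\s+\S+
```

If either rule fails to match the script's STDOUT, the whole plugin run is reported as a failure.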

Deployment process

Ansible – many Ansible modules get shipped as a core part of Ansible itself – ready to use, no extra deployment effort needed. Users write custom modules and host them in SCM ( github, gitlab, svn ); finally, modules are just files that get checked out into a directory on the master host from which you push Ansible tasks to remote hosts, so no special deployment actions are needed besides downloading the Ansible module files. The Ansible modules ecosystem thus consists of two large parts:

* the main Ansible repository – modules shipped as Ansible core
* custom Ansible modules

So Ansible follows a pure agentless schema with a push approach. No module deployment happens at the target host; Ansible only pushes modules as files to where they are executed.

Below is a schematic view of ansible custom modules deployment:

ansible-modules-deploy

Sparrow – Sparrow plugins are actually packaged scripts that get delivered like any kind of software package – deb, rpm, RubyGems, CPAN. Sparrow exposes a console manager to download and install Sparrow plugins. Sparrowdo compiles scenarios into a list of metadata and copies it to the remote host. Then the Sparrow manager gets run ( over ssh ) on the remote host to pick up the metadata and then download, install and execute the plugins.

So Sparrow follows a client/server schema with a push approach, and plugin deployment happens on the side of the target host.

Sparrow plugins have versions, ownership and documentation. Sparrow plugins are hosted at the central plugin repository – SparrowHub.

Here is a metadata example of the Sparrow plugin “package-generic” that installs software packages:

{
    "name" : "package-generic",
    "version" : "0.2.16",
    "description": "Generic package manager. Installs packages using OS specific package managers (yum,apt-get)",
    "url" : "https://github.com/melezhik/package-generic"
}

There is no rigid separation between custom and “core” plugins in the Sparrow ecosystem. Every plugin uploaded to SparrowHub immediately becomes accessible to end users and Sparrowdo scenarios. For security reasons, Sparrow provides the ability to host so-called “private” plugins in remote git repositories. Such plugins can be “mixed in” to the standard Sparrow pipeline.

Below is a schematic view of sparrow plugins deployment:

sparrowdo-system2

Dependencies

Ansible – Ansible provides no built-in facilities to manage dependencies at the level of an Ansible module; you would probably handle this one level up, in Ansible playbooks. Thus if your module depends on some software library, you should take care of resolving that dependency somewhere else.

Sparrow – Sparrow provides facilities to manage dependencies at the level of a Sparrow plugin. Thus if a plugin depends on software libraries, you may declare such dependencies at the plugin scope, so that the plugin manager takes care of dependency resolution at plugin install time. For the time being, dependencies for the Perl 5 and Ruby languages are supported: CPAN modules for Perl 5 via cpanfile and RubyGems for Ruby via Gemfile.
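For instance ( the dependency names here are purely illustrative ), a Perl 5 plugin would ship a cpanfile alongside its scripts, using the standard cpanfile syntax:

```perl
# cpanfile for a hypothetical Perl 5 plugin:
# these CPAN modules get installed by the plugin manager
# at plugin install time
requires 'JSON';
requires 'DBI', '>= 1.6';
```

A Ruby plugin would declare its gems the same way in a standard Gemfile.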

Summary

Ansible gained big success due to the extensive ecosystem of existing Ansible modules. Though when comparing the module development process with the one existing in Sparrow ( Sparrow plugins ), I find some interesting and promising features Sparrow shows in this field. To sum them up:

* Playbooks vs Sparrowdo scenarios – Sparrowdo provides an imperative Perl 6 language interface against the declarative way of Ansible playbooks written in YAML. While for some tasks such a declarative approach is fine, there are cases when we need to add to our configuration scenarios the imperative style provided by a modern general-purpose language, where YAML for sure does not fit.

* Script-oriented design – due to its script-oriented design, Sparrow plugins provide a way to split a whole task into many simple scripts interacting with each other. This is actually what we usually do when writing regular scripts for routine tasks, so why not bring it here? 🙂

* Module/plugin management and life cycle – Sparrow plugins are even more loosely coupled with the configuration management tool itself than what we see with Ansible. They are developed, debugged, hosted and managed independently, without even knowing about the Sparrowdo configuration management tool. This makes the plugin development process more effective and less painful.

* Bash/shell scripting – Sparrow provides much better support for “straightforward” bash/shell scripting than Ansible, due to the aforementioned limitations of the latter on returning results and the JSON interface. It is hard to understand what is going wrong when executing Ansible bash scripts, as Ansible hides all the STDOUT/STDERR they generate. Meanwhile, Sparrow honestly shows what comes from the executed bash/shell commands.

* Programming API – Sparrow provides a unified API for all the languages; it means every language has “equal” rights in the Sparrow ecosystem and shares the same possibilities in terms of API. Meanwhile, Ansible modules tend to be written in Python, as that seems the most seamless way to develop Ansible modules.

* Testing facilities – Sparrow exposes built-in test facilities, which expands Sparrow usage beyond deployment tasks to testing/monitoring/audit needs.


6 thoughts on “Sparrow plugins vs Ansible modules”

  1. I think Sparrow’s weak points are..
    Hosts need Perl or Ruby and need to install CPAN modules.
    Ansible says that it is an agentless architecture, but it also actually needs Python on hosts.
    Remote host dependencies, like specific versions of languages and modules, are hard to maintain and fragile in real-field sysadmin jobs ( there are various OS versions and nasty legacy systems .. 😦 ) “The postage costs more than the goods.”

    I think the ideal system automation system is:
    * remote jobs are performed only with pure bash scripts and busybox-level unix commands ( nowadays Dockerfile does that ! DO NOT MAKE YOUR OWN DSL ! BASH IS BETTER )
    * command result checks ( redirect stdout/stderr to the master server; parse and check locally on the master )
    * parsing and modifying files like configuration ( fetch the remote file to local, modify it with local tools and languages – no remote server tool dependency! – then send it to the remote again )

    This architecture doesn’t need any remote dependencies.. it only needs ssh, bash and unix commands.

    Do not forget real field is tough.


  2. Hi ! Thanks for feedback. My follow up:

    > Hosts need perl or ruby and Need to install CPAN module.

    not that accurate; the sparrow client should be installed, which is about a 10-20 ( depending on OS ) CPAN packages footprint. Which is a quite low cost, I would say. But yeah, it is still not agentless. But to be honest, when people say “agentless” they are a bit tricky – _somehow_ you need to bootstrap things on the target host, so you need _software_ there, more or less. Even bash could be treated here as software you need to bootstrap things on the target host! 🙂

    So, the sparrow client itself is a tiny CPAN module with quite a low dependency chain. In my tests it took less than a minute to bootstrap it on a CentOS host.

    Other things are required ONLY for plugins. But the same goes for ansible/chef/whatever. If you are going to use some PiP/CPAN/RubyGems/whatever libraries on the _target_host_ you should install them there somehow.

    > I think the ideal system automation system is
    * remote jobs are performed only with pure bash scripts and busybox-level unix commands ( nowadays Dockerfile does that ! DO NOT MAKE YOUR OWN DSL ! BASH IS BETTER )

    Two points here:

    – I am afraid Docker is not about configuration management at all; it is about delivery and easy application bootstrapping via Linux LXC containers. The configuration/deployment part is still out of the scope here.

    Docker just provides a very simple and basic level in this way – a Dockerfile – an entry point from which you build your Docker image. But look at HOW people do that – there are MANY ways to build docker images – ansible/chef/bash/whatever. So Docker is by no means a silver bullet here, and should not be.

    – Bash is fine. If you look at sparrow plugins ( https://sparrowhub.org/search ) you will see a lot of them written in Bash. But bash is a pain when you want to do something more than calling system commands. This is why people create DSLs ( chef/ansible ) to make it possible to write complex things easily, without “struggling” with bash. But again, I am not against Bash. The idea behind sparrow is quite simple. It does not force you to use a specific language. It provides integration glue and a useful API to make script development easy and fun, so that you focus on the script itself, not on supplemental tasks like input parameter handling. So with sparrow you always choose the proper language to write in – whether it’s Bash, Perl or Ruby ( btw I am going to add Python support soon )

    > command result check && parsing modifing files like configuration

    this is how sparrow works

    > this architecture doesn’t need any remote dependencies.. only need ssh, bash, unix command.

    Indeed, this is how sparrow works. It is always up to the DEVELOPER to decide if he/she needs some “optional” dependencies on the target host ( like CPAN/RubyGems packages ) to implement some _complex_ things, or if he/she will be fine _with just bash_ only ( which of course is fine for some tasks ) and so writes plugins in pure Bash. Again, sparrow does not force you to use any dependencies. It is more a lightweight orchestration/integration/delivery tool for your scripts, rather than a monster/heavy software tool hard to use at target servers.


  3. > sparrow client should be installed which is about 10-20 ( depending on OS ) CPAN packages footprint. Which is quite low cost I would say.

    metacpan reverse dependencies of Sparrow show 39 dependent modules.
    Installing modules is not a simple task. Recent CentOS or Ubuntu minimal installs – CentOS’s perl, Ubuntu’s perl-base packages – don’t contain all Perl modules of the core distribution ( CentOS’s perl-core and Ubuntu’s perl packages do contain the full Perl core distribution ). So in that condition you can’t use CPAN ( CPAN bootstrapping behavior also differs among Perl versions ) and even can’t do cpanm bootstrapping with wget -O – https://cpanmin.us | perl because of missing-Perl-core-module errors.
    It’s a big hurdle for new users.
    Ansible also doesn’t work if the remote host only has Python 3 ( recent minimal installs of Linux distributions don’t include Python 2. SIGH…. CHEF? PUPPET? fucking Ruby, SIGH SIGH.. )
    You can easily fall into dependency hell even before starting.
    Most system automation tools make mistakes like these.

    When managing several thousands of servers, do not expect all servers to be in an ideal and identical environment.
    The REAL FIELD is not like several dozens of identically managed, ideal dev environments.

    And most hosts in production services have only private IPs.
    So the Sparrow client can’t download plugins from each host without a proxy or an ACL opened.
    I think that would be another hurdle in a real service environment.

    > But bash is pain when you want to do something more then calling system commands. This is why people creates a DSL ( chef/ansible ) to make it possible write complex things easy, not “struggling” with bash.

    If some task is hard to handle with bash, like templating or modifying configuration files using a DSL, it can be done by fetching the file to the master’s local machine and manipulating it with local languages and tools.
    Except for such tasks, bash can do well.. we do not need a remotely-executed DSL.

    > command result check && parsing modifing files like configuration
    > this is how sparrow works

    If it’s true, why does the client-side Sparrow module have an Outthentic(::DSL) dependency ??


    1. > metacpan reverse dependencies of Sparrow shows 39 dependent modules. …

      – 🙂 well, looks like you overestimate the overheads. “curl https://cpanmin.us -o /bin/cpanm && chmod +x /bin/cpanm” works smoothly on most environments I have ever tried ( and indeed many people use this bootstrap ). Even with “bare-bones” Perls, sparrow installation should not take too long, even though we don’t install CPAN modules via rpm/deb equivalents.

      – “reverse dependencies” – did you really mean REVERSE, or probably FULL dependencies? But even if it’s 38 CPAN modules, it is not a big deal at all; consider using `cpanm --notest`, which is pretty fast as it skips the time-consuming unit-test stage …

      > When managing over several thousands of servers, do not expect all servers are in ideal and same environment.
      REAL FILED is not like several dozons of identically managed ideal dev. enviroment.

      Not sure what you mean. I never said that sparrow or any other CM tool expects this .. Could you please rephrase your statement?

      > If some task is hard to handle with bash like template or modifying configuration file using DSL, it can be done with fetching file to master’s local and manipulate with local languages and tools. Excepts such tasks, bash can doing well.., do not need remotely-executed DSL.

      Well, not sure if it is ALWAYS a good choice. Even though you are able to “prepare” files on the master host and then push them to the target, there are plenty of tasks where you have to process things _right on the target host_, and you apparently won’t be happy having bash only there … 🙂

      > If it’s true, why client side Sparrow module has Outthentic(::DSL) dependency ??

      Probably now I realize that I misunderstood your initial idea ( baking things MOSTLY on the master host and then pushing them to the target server ) – see my comment on your previous point. Anyway, some things could be done on the master host – _some_, but not ALL – so it seems like you overestimate the challenge of doing things “in place”; on the other hand, if you try to “blindly” compile files on the master host without taking the target host environment into account, it might be even more difficult to maintain such a design, and it would eventually be an error-prone approach that is hard to debug.

      And finally … I don’t say that all the logic should be implemented on the target server only; sparrowdo/Perl6 does a lot of useful work ( which is the proper place for this _kind of_ job ) – enabling ssh access, input parameter type checking, probing the target OS version, and so on …

