Thursday, March 07, 2013

Return codes from system commands in Perl

So say I'm running a Perl script which then calls some system command, let's say 'echo blah'.  Running that returns three things to us:
  1. the actual output (stdout), which would be "blah",
  2. any error output (stderr), which would be "" i.e. empty,
  3. and the return code, which would be "0".
So until now I had to choose which one of those I cared about most and use the appropriate way of running the command to suit.  Let me explain:

my $result = `echo blah`; # captures "blah" into $result
my $result = system("echo blah"); # returns the wait status; echo's output goes straight to the terminal
..and I stopped paying attention about there.
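
(For the record, the actual exit code can still be dug out of Perl's $? after either of those, though that still won't capture stderr:)

my $result = system("echo blah");  # "blah" prints straight to the terminal
my $exit_code = $? >> 8;           # the high byte of the wait status is the exit code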

Suffice it to say, no method I knew about allowed me to capture all three at the same time.  Until I discovered IO::CaptureOutput.  Check out this foo:
$ cat ./perl.system.calls.sh 
#!/usr/bin/perl
use strict;
use IO::CaptureOutput qw/capture_exec/;

my @cmd = "echo blah";

my ($stdout, $stderr, $success, $exit_code) = capture_exec(@cmd);
chomp($stdout);

print "running '@cmd'\n";
print "stdout = $stdout\n";
print "stderr = $stderr\n";
print "success = $success\n";
print "exit_code = $exit_code\n";

$ ./perl.system.calls.sh 
running 'echo blah'
stdout = blah
stderr = 
success = 1
exit_code = 0

So now I can trap these in my code and handle each as I see fit.  It means better error handling, and software that runs (and stops!) how and when one might want it to.  And that's cool.
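
For example, here's a minimal sketch of the kind of error handling this enables (the deliberately broken command is just for illustration):

#!/usr/bin/perl
use strict;
use warnings;
use IO::CaptureOutput qw/capture_exec/;

# a command that should fail, purely for illustration
my @cmd = ("ls", "/no/such/path");

my ($stdout, $stderr, $success, $exit_code) = capture_exec(@cmd);

unless ($success) {
    chomp($stderr);
    die "'@cmd' failed (exit code $exit_code): $stderr\n";
}
print $stdout;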

Wednesday, March 06, 2013

Migrating VMs on KVM without shared storage

So you have some virtual machine hosts running the Linux virtualisation software KVM on a current release of Ubuntu.  One of the hosts needs some urgent maintenance requiring an outage soon, but that host has several business-critical virtual machines on it.  So you need to migrate those VMs to the other host.

That's cool though, because virsh allows us to migrate virtual machines to different hosts, assuming they share the same storage pool.  But our hosts don't.  Ahh, but the current version of virsh allows us to migrate virtual machines which aren't using shared storage.  Sweet.  But is an even reasonably recent version of virsh available for Ubuntu?  No.  Poop.  Compile the new KVM from source?  Migrate to Debian over the weekend?  Spend zillions on VMware?  Ummmmmm.. no, no and..  no (funnily enough).

But we can do custom hack foo code (and so can you)!! :)

$ ./manage.vm.sh
This command needs to be run as root (use sudo)

$ sudo ./manage.vm.sh
usage: manage.vm.sh list: List all guests on the current host
usage: manage.vm.sh backup vmname: Shutdown, backup & restart a guest
usage: manage.vm.sh migrate vmname destination host="": Shutdown, copy, define remote, destroy local & startup remote

$ sudo ./manage.vm.sh migrate ansible feathertail
Considering migrating ansible to feathertail
 ..testing to see if we have the correct permissions on the remote host
 ..yay! connectivity with remote host established
Parsing KVM configuration for ansible
 ..assuming config file is at /etc/libvirt/qemu/ansible.xml
 ..attempting to determine datafile location
 ..ansible is configured to use /var/lib/libvirt/images/ansible/tmpfyi1SJ.qcow2
Checking to see if the guest is running
  !!WARNING   guest is still running, initiating shutdown.  Is an outage OK? .. ..enter L8 to verify:  L8
 ..acknowledging verification commencified
 ..shutting down, please wait  :) :) :) ansible now not running

Are you sure you want to proceed migrating ansible to feathertail? ..enter Q2 to verify: Q2

..and so on.  It actually works.  Does cool stuff like:
  • Automatically parses config files from /etc/libvirt/qemu/
  • Will migrate a vm with multiple datafiles
  • Renames local .xml and data files before undefining the vm.  Read: roll-back.
  • Prompts before shutting down a vm and again before migrating it.
  • Copies .xml and data file(s) remotely via rsync.
  • Defines the vm on the remote host and starts it.
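
For the curious, the core of the migrate step boils down to something like this minimal Perl sketch (the hostnames and paths are placeholders, and the real script does far more checking, prompting and waiting than this):

#!/usr/bin/perl
use strict;
use warnings;

my $vm     = "ansible";                                # placeholder guest name
my $remote = "feathertail";                            # placeholder destination host
my $xml    = "/etc/libvirt/qemu/$vm.xml";
my $disk   = "/var/lib/libvirt/images/$vm/$vm.qcow2";  # really parsed out of the .xml

# ask the guest to shut down (the real script waits until it's actually off)
system("virsh", "shutdown", $vm) == 0 or die "shutdown failed\n";

# copy the config and data file(s) to the remote host
system("rsync", "-a", $xml, "$remote:/etc/libvirt/qemu/") == 0 or die "rsync of xml failed\n";
system("rsync", "-a", $disk, "$remote:/var/lib/libvirt/images/$vm/") == 0 or die "rsync of data failed\n";

# define the guest on the remote host and start it
system("ssh", $remote, "virsh define /etc/libvirt/qemu/$vm.xml && virsh start $vm") == 0
    or die "remote define/start failed\n";
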
Anyhow, have a play, and if you like it or have feedback or whatever, please let me know.

Logging the connection status of your ADSL router

A while back, learning Perl coincided with some issues I had with my ADSL:

  • Random disconnects
  • Unknown uptimes and connection speeds
  • Unreliable connection speeds
  • Unable to reset the device via the command line
  • And a general feeling of being ripped off by the ISP.

So I hit it with the Perl hammer and produced a 24k monolith which solves all that and more, but will probably never work for anyone else :)  If that sounds like a challenge, then have a look at adsl-status on GitHub.

$ ./adsl-status.sh
ADSL2 synced @~ 657KB/s (avg 662, max 747). Up 24238 mins (avg 8562, max 89341); 16 day 19 hr 57 min 21 sec.

$ ./adsl-status.sh --help
usage: adsl.sync[.dev].sh [--verbose|--silent|--help|--debug] | [--reset]

$ ./adsl-status.sh --verbose
verbose, adsl: clean_text_split = Annex Type ADSL2 -- Upstream 967600 -- Downstream 5374560 -- Elaspsed Time 16 day 19 hr 59 min 14 sec

verbose, adsl: adsl_annex_type = ADSL2
verbose, adsl: connection_uptime_in_seconds = 1454354
verbose, adsl: down_bits_per_second / up_bits_per_second (bps) = 5374560/967600
verbose, adsl: down_kilobits_per_second / up_kilobits_per_second (kbs) = 5375/968
verbose, adsl: down_megabits_per_second / up_megabits_per_second (mbs) ~ 5.5/1
verbose, adsl: down_kilobytes_per_second / up_kilobytes_per_second (KB/s) ~ 657/119
yada yada..
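
Under the hood it's mostly scraping a stats line off the router and doing unit maths on it.  A rough sketch of that parsing, assuming a stats line like the one in the verbose output above (the rounding-up is guessed from the sample numbers):

#!/usr/bin/perl
use strict;
use warnings;
use POSIX qw/ceil/;

# a sample stats line as scraped from the router ("Elaspsed" really is how it spells it)
my $line = "Annex Type ADSL2 -- Upstream 967600 -- Downstream 5374560 -- Elaspsed Time 16 day 19 hr 59 min 14 sec";

my ($annex, $up_bps, $down_bps) =
    $line =~ /Annex Type (\S+) -- Upstream (\d+) -- Downstream (\d+)/
    or die "couldn't parse the stats line\n";

# bits per second -> kilobytes per second (8 bits per byte, 1024 bytes per KB)
my $down_KBps = ceil($down_bps / 8 / 1024);
my $up_KBps   = ceil($up_bps / 8 / 1024);

print "$annex synced @~ ${down_KBps}KB/s down, ${up_KBps}KB/s up\n";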

Starting out with GitHub on Ubuntu

OK.  Ten mile high view.  Sometimes I write code, but a lot of the code I write is too monolithic to post to my blog.  Seems to me I should use some kind of publicly accessible revision control system to check in the code, and then link to that from my blog.  We already use Subversion at work and some of my code lives there, but allowing public access to that makes no sense.  Sometimes code lives at home and never even sees the work SVN system.  What would be cool is some system which allows me to control code from wherever I am, and yet allow public access to anyone.  Enter Git and GitHub.

So if all the cool kids are using Git, how do we get it installed, running and working so I can actually link to it?  GitHub have a very nice help system and I used that to do what I have done here:
  • Installed git on my Ubuntu machine
  • Created an account on GitHub
  • Logged into GitHub
  • On GitHub, created a new repository called hooliowobbits/testing
  • On my machine I ran
    $ git clone https://github.com/hooliowobbits/testing.git
    Cloning into 'testing'...
    remote: Counting objects: 3, done.
    remote: Total 3 (delta 0), reused 0 (delta 0)
    Unpacking objects: 100% (3/3), done.
    $ cd testing/
    $ ls -lha 
    total 8.0K
    drwxr-xr-x 8 hoolio datarw 4.0K Mar  6 11:39 .git
    -rw-r--r-- 1 hoolio datarw   28 Mar  6 11:39 README.md
    $ cat README.md
    testing
    =======
    sandbox foo
    $ echo blah > blah.txt
    $ git add blah.txt
    $ git commit blah.txt
    [master 5f1522c] this is just a note i added when i first typed git commit blah
     1 file changed, 1 insertion(+)
     create mode 100644 blah.txt
    $ git push
    Username for 'https://github.com': hooliowobbits
    Password for 'https://hooliowobbits@github.com':
    To https://github.com/hooliowobbits/testing.git
       2ebbe5e..5f1522c  master -> master
    
    
  • Then I visited GitHub again and saw my blah.txt sitting there :)
Right now I can't presume to know much more about Git and GitHub than this, but clearly this opens up a world of possibilities.  Suddenly, with one command on my machine, the world can see my code, use it, comment on it, fork it, whatever.

Now, let's code!

Internode quota from the Ubuntu command line

I have an Internode ADSL connection and I'm running Ubuntu Server (12.04).  I wanted to be able to check my internet quota from the command line, and I found a Perl script to help do that, but there weren't quite enough instructions there for me to make it work.  It wasn't too hard to fix though:

$ wget http://zwitterion.org/software/internode-quota-check/internode-quota-check
$ chmod +x internode-quota-check
$ mv internode-quota-check internode-quota-check.sh
$ sudo apt-get install libwww-mechanize-perl libreadonly-perl
$
$ ./internode-quota-check.sh man
you don't seem to have a ~/.fetchmailrc, so I'll prompt you.
To avoid extra dependencies, your password will be echoed.
Username: juliusroberts
Password: passwordhere
Run this command to create a ~/.fetchmailrc file:
echo '# poll mail.internode.on.net user "juliusroberts" password "passwordhere"' >> ~/.fetchmailrc
$ echo '# poll mail.internode.on.net user "juliusroberts" password "passwordhere"'>> ~/.fetchmailrc
$
$ ./internode-quota-check.sh
juliusroberts: 132.032 GB (88.0%) and 13.6 days (48.5%) left on 150 GB, 24 Mb/s plan.
$

So I then went and added ./internode-quota-check.sh to the bottom of my ~/.bashrc file.  So now when I log in to my server I see straight away how much internets I have left, yay :)

Monday, March 04, 2013

Use Perl to check Spamhaus status in Nagios

We had an issue where something on our internal network was tripping an SMTP spam filter at spamhaus.org.  We thought we'd fixed it once, only to be bitten again a few months later when payslips from our payroll system were bouncing (bad).  As well as actually investigating the root cause, we created a Nagios check to query Spamhaus programmatically.  Creating a custom Nagios check is well documented on the Nagios website.

#!/usr/bin/perl
#
# Quick perl script to check spamhaus to see if we're blocked again, see https://rt.wilderness.org.au:444/rt/Ticket/Display.html?id=73259
#
# This script returns values consistent with the nagios return code specification at http://nagiosplug.sourceforge.net/developer-guidelines.html#AEN76
# 0     OK          The plugin was able to check the service and it appeared to be functioning properly
# 1     Warning     The plugin was able to check the service, but it appeared to be above some "warning" threshold or did not appear to be working properly
# 2     Critical    The plugin detected that either the service was not running or it was above some "critical" threshold
# 3     Unknown     Invalid command line arguments were supplied to the plugin or low-level failures internal to the plugin 
#                       (such as unable to fork, or open a tcp socket) that prevent it from performing the specified operation. Higher-level errors 
#                       (such as name resolution errors, socket timeouts, etc) are outside of the control of plugins and should generally NOT be reported as UNKNOWN states.

use strict;
my $our_external_ip = "xxx.xxx.xxx.xxx"; 
my $exit_value=3;

# run the wget command and save its output to the $results variable.
my $results=`/usr/bin/wget --random-wait -U mozilla -O /tmp/spamcheck.dat -o /tmp/spamcheck.log http://www.spamhaus.org/query/ip/$our_external_ip && grep $our_external_ip /tmp/spamcheck.dat`; 
chomp($results); 
#print $results."\n";

# check to see if we score RED, i.e. we're on a blocklist
if ($results =~ m/red/) {
    print "ALERT: Block found.  Check http://www.spamhaus.org/query/ip/$our_external_ip\n";
    $exit_value=2;
} 

# if we got this far, we saw no RED.  Check to see that we get at least one GREEN.
elsif ($results =~ m/green/) {
    print "OK; We got a green and no red.\n";
    $exit_value=0; 
} 

# ALERT: We got no red AND no green; therefore there must be some issue somewhere!
else {
    print "ALERT: No valid return codes detected; page-load/dns/internet/scripting issue?\n";
    $exit_value=3; 
}

#print "\nPerl hopes it's returning a \$exit_value of $exit_value\n\n";
exit $exit_value;

Note that /tmp/spamcheck.* will need to be globally writable.

That then results in a nice web GUI telling us all isn't well (again).  Happy with my work, I told my boss, who said "I don't like it, I want it to say OK", to which I replied "I can do that ;)", and now you can too.  Happy automation :)
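
PS: one way to oblige a boss like that (a sketch only; keep the CRITICAL exit code so Nagios still alerts, and just soften the words):

    print "OK ..well, not really: we're blocked again.  Check http://www.spamhaus.org/query/ip/$our_external_ip\n";
    $exit_value=2;    # still exits CRITICAL, whatever the text says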