Disk failure and surprises

Once in a while – and especially if you have a System with an uptime > 300d – HW tends to fail.

It is a good thing if you have a Cluster, where you can do the maintenance on one Node while the important stuff keeps running on the other ones. It is also good to always have a Backup of the whole content, in case a disk fails.

One word before I continue, regarding Software-RAIDs: I once had a big problem with a HW RAID Controller going bonkers and spent a week finding a matching replacement controller to get the data back. At least for redundant Servers it is okay for me to go with SW RAID (e.g. mdraid). And if you can, you should go with ZFS in any case :-).

Anyhow, if you see graphs like this one:

[Graph: sda-week]

You know that something is going terribly wrong.

A quick check states the obvious:

# cat /proc/mdstat
Personalities : [raid1]
md3 : active raid1 sda4[0](F) sdb4[1]
      1822442815 blocks super 1.2 [2/1] [_U]

md2 : active raid1 sda3[0](F) sdb3[1]
      1073740664 blocks super 1.2 [2/1] [_U]

md1 : active raid1 sda2[0](F) sdb2[1]
      524276 blocks super 1.2 [2/1] [_U]

md0 : active raid1 sda1[0](F) sdb1[1]
      33553336 blocks super 1.2 [2/1] [_U]

unused devices: <none>

So /dev/sda seems to be gone from the RAID. Let’s do the checks.

Hardware check

hdparm:

# hdparm -I /dev/sda

/dev/sda:
HDIO_DRIVE_CMD(identify) failed: Input/output error 

smartctl:

# smartctl -a /dev/sda
smartctl 5.41 2011-06-09 r3365 [x86_64-linux-2.6.32-34-pve] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

Vendor:               /0:0:0:0
Product:
User Capacity:        600,332,565,813,390,450 bytes [600 PB]
Logical block size:   774843950 bytes
scsiModePageOffset: response length too short, resp_len=47 offset=50 bd_len=46
>> Terminate command early due to bad response to IEC mode page
A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.

/dev/sda is dead, Jim

The next thing is to schedule a Disk Replacement, move all services to another host and prepare the host for a maintenance shutdown.

Preparation for disk replacement

Stop all running Containers:

# for VE in $(vzlist -Ha -o veid); do vzctl stop $VE; done

I also disabled the “start at boot” option to have a quick startup of the Proxmox Node.
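Disabling that option for all Containers can be scripted with the same loop as above (a sketch using vzctl's --onboot flag):

# for VE in $(vzlist -Ha -o veid); do vzctl set $VE --onboot no --save; done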

Next: Remove the faulty disk from the md-RAID:

# mdadm /dev/md0 -r /dev/sda1
# mdadm /dev/md1 -r /dev/sda2
# mdadm /dev/md2 -r /dev/sda3
# mdadm /dev/md3 -r /dev/sda4

Shutting down the System.

… some guy in the DC walked to the server at the expected time and replaced the faulty disk …

After that, the system is online again.

  1. copy partition table from /dev/sdb to /dev/sda
    # sgdisk -R /dev/sda /dev/sdb
    
  2. generate a new GUID for /dev/sda
    # sgdisk -G /dev/sda
    

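To verify that both disks now carry an identical layout, you can print and compare both partition tables:

# sgdisk -p /dev/sda
# sgdisk -p /dev/sdb
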
Then add /dev/sda to the RAID again.

# mdadm /dev/md0 -a /dev/sda1
# mdadm /dev/md1 -a /dev/sda2
# mdadm /dev/md2 -a /dev/sda3
# mdadm /dev/md3 -a /dev/sda4



# cat /proc/mdstat
Personalities : [raid1]
md3 : active raid1 sda4[2] sdb4[1]
      1822442815 blocks super 1.2 [2/1] [_U]
      [===>.................]  recovery = 16.5% (301676352/1822442815) finish=329.3min speed=76955K/sec

md2 : active raid1 sda3[2] sdb3[1]
      1073740664 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sda2[2] sdb2[1]
      524276 blocks super 1.2 [2/1] [_U]
        resync=DELAYED

md0 : active raid1 sda1[2] sdb1[1]
      33553336 blocks super 1.2 [2/2] [UU]

unused devices: <none>

After nearly 12h, the resync was completed:

[Graph: diskstats_iops-day]

and then this happened:

# vzctl start 300
# vzctl enter 300
enter into CT 300 failed
Unable to open pty: No such file or directory

There are plenty of comments if you search for “Unable to open pty: No such file or directory”.

But

# vzctl exec 300 /sbin/MAKEDEV tty
# vzctl exec 300 /sbin/MAKEDEV pty
# vzctl exec 300 mknod --mode=666 /dev/ptmx c 5 2

did not help:

# vzctl enter 300
enter into CT 300 failed
Unable to open pty: No such file or directory    

And

# strace -ff vzctl enter 300

produces a lot of garbage – meaning stacktraces that did not help to solve the problem.
What finally allowed us to enter the container was mounting a devpts file system inside of it:

# vzctl exec 300 mount -t devpts none /dev/pts

But having a look into the process list was quite devastating:

# vzctl exec 300 ps -A
  PID TTY          TIME CMD
    1 ?        00:00:00 init
    2 ?        00:00:00 kthreadd/300
    3 ?        00:00:00 khelper/300
  632 ?        00:00:00 ps    

That is not really what you expect from the process list of a Mail-/Web-Server, is it?
After looking around in the system and searching through some configuration files, it became obvious that there had been a system update in the past, but someone had forgotten to install upstart. So that should be easy to fix, right?

# vzctl exec 300 apt-get install upstart
Reading package lists...
Building dependency tree...
Reading state information...
The following packages will be REMOVED:
  sysvinit
The following NEW packages will be installed:
  upstart
WARNING: The following essential packages will be removed.
This should NOT be done unless you know exactly what you are doing!
  sysvinit
0 upgraded, 1 newly installed, 1 to remove and 0 not upgraded.
Need to get 486 kB of archives.
After this operation, 851 kB of additional disk space will be used.
You are about to do something potentially harmful.
To continue type in the phrase 'Yes, do as I say!'
 ?] Yes, do as I say!

BUT:

Err http://ftp.debian.org/debian/ wheezy/main upstart amd64 1.6.1-1
  Could not resolve 'ftp.debian.org'
Failed to fetch http://ftp.debian.org/debian/pool/main/u/upstart/upstart_1.6.1-1_amd64.deb  Could not resolve 'ftp.debian.org'
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

No Network – doooh.

So… that was that. Next plan: chroot. Let's start.
First we need to shut down the container – or at least what is still running of it:

# vzctl stop 300
Stopping container ...
Container is unmounted    

Second, we have to mount a bunch of devices to the FS:

# mount -o bind /dev /var/lib/vz/private/300/dev
# mount -o bind /dev/shm /var/lib/vz/private/300/dev/shm
# mount -o bind /proc /var/lib/vz/private/300/proc
# mount -o bind /sys /var/lib/vz/private/300/sys

Then perform the chroot and the installation itself:

# chroot /var/lib/vz/private/300 /bin/bash -i

# apt-get install upstart
# exit      

At last, umount all the things:

# umount -l /var/lib/vz/private/300/sys
# umount -l /var/lib/vz/private/300/proc
# umount -l /var/lib/vz/private/300/dev/shm
# umount -l /var/lib/vz/private/300/dev

If you have trouble because some of the devices are busy, kill the processes that you find with:

# lsof /var/lib/vz/private/300/dev
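With the -t flag, lsof prints just the PIDs, so you can feed them directly to kill (a sketch – review the list first before killing anything):

# kill $(lsof -t /var/lib/vz/private/300/dev)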

Or just search for everything that still has a handle inside the container:

# lsof 2> /dev/null | egrep '/var/lib/vz/private/300'    

Try to umount again :-).

Now restart the container again:

# vzctl start 300
Starting container ...
Container is mounted
Adding IP address(es): 10.10.10.130
Setting CPU units: 1000
Setting CPUs: 2
Container start in progress...

And finally:

 # vzctl enter 300
 root@300 #  
 root@300 # ps -A | wc -l
 142

And this looks a lot better \o/.

Writing Munin Plugins pt3: Some Stats about VMWare Fusion

In a project where we needed VMs capable of doing CI for Java and also CI for an iOS Application (using XCode Build Bots), we decided to go with a Mac OS Server as the Host Platform, using VMWare Fusion as the base Virtualisation System. We had several VMs there (Windows, Solaris, Linux and Mac OS). Doing proper Monitoring for these VMs was not that easy. We already had a working Munin Infrastructure, but no Plugin for displaying VMWare Fusion Stats existed.

The first approach was to use the included VMTools for gathering the information, since we already used them to start/stop/restart VMs via CLI/SSH:

#!/bin/bash

echo "starting VMS..."
VM_PATH=/Users/Shared/VMs
TOOL_PATH=/Applications/VMTools
$TOOL_PATH/vmrun -T fusion start $VM_PATH/Mac_OS_X_10.9.vmwarevm/Mac_OS_X_10.9.vmx nogui

or

#!/bin/bash

echo "starting VMS..."
VM_PATH=/Users/Shared/VMs
TOOL_PATH=/Applications/VMTools
$TOOL_PATH/vmrun -T fusion stop $VM_PATH/Mac_OS_X_10.9.vmwarevm/Mac_OS_X_10.9.vmx
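By the way, vmrun also offers a list command, which is handy for a quick check of what is currently running (same TOOL_PATH as in the scripts above):

/Applications/VMTools/vmrun -T fusion list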

But it was very hard to extract the interesting Data from the Log Files (statistical data is only really supported in VMWare ESXi). So we chose the direct way and read the live data using ps. This approach is therefore also applicable to other Applications.

Our goal was to get at least three Graphs (% of used CPU, % of used Memory and physically used Memory), sorted by VM Name.

ps -A | grep vmware-vmx

provides us with a list of all running vmware processes. Since we only need specific Data, we add some more filters:

ps -A -c -o pcpu,pmem,rss=,args,comm -r | grep vmware-vmx

29,4 14,0 2341436   J2EE.vmx                                                 vmware-vmx
1,7 12,9 2164200    macos109.vmx                                             vmware-vmx
1,4 17,0 2844044    windows.vmx                                              vmware-vmx
0,7  6,0 1002784    Jenkins.vmx                                              vmware-vmx
0,0  0,0    624     grep vmware-vmx      

where this is the description (man ps) of the used columns:

  • %cpu percentage CPU usage (alias pcpu)
  • %mem percentage memory usage (alias pmem)
  • rss the real memory (resident set) size of the process (in 1024 byte units).

You might see several things: First, we have our data and the Name of each VM. Second, we have to get rid of the last line, since that is our grep process. Third, we might need some String Operations/Number Calculations to get valid Data at the end.

Since Perl is a good choice if you need to do some String Operations, the Plugin is written in Perl :-).

Let’s have a look.
The Config Element is quite compact (e.g. for the physical mem):

my $cmd = "ps -A -c -o pcpu,pmem,rss=,args,comm -r | grep vmware-vmx";
my $output = `$cmd`;
my @lines=split(/\n/,$output);
...
if( $type eq "mem" ) {
    print $base_config;
    print "graph_args --base 1024 -r --lower-limit 0\n";    
    print "graph_title absolute Memory usage per VM\n";
    print "graph_vlabel Memory usage\n";
    print "graph_info The Graph shows the absolute Memory usage per VM\n";  
    foreach my $line(@lines) {
        if( $line  =~ /(?<!grep)$/ ) {  
            my @vm = ();
            my $count = 0;
            my @array=split(/ /,$line); 
            foreach my $entry(@array) {
                if( length($entry) > 2 ){
                    $vm[$count]=$entry;
                    $count++;
                }
            }
            $vm[3] = clean_vmname($vm[3]);  
            if( $vm[3] =~ /(?<!comm)$/) {           
                if( $lcount > 0 ){
                    print "$vm[3]_mem.draw STACK\n";
                } else {
                    print "$vm[3]_mem.draw AREA\n";
                }
                print "$vm[3]_mem.label $vm[3]\n";
                print "$vm[3]_mem.type GAUGE\n";            
                $lcount++;      
            }           
        }
    }                       
}
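
For the example ps output from above, the generated Config Output would look roughly like this (VM names already cleaned, the first VM drawn as AREA, the rest stacked on top):

J2EE_mem.draw AREA
J2EE_mem.label J2EE
J2EE_mem.type GAUGE
macos109_mem.draw STACK
macos109_mem.label macos109
macos109_mem.type GAUGE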

After the basic Setup (Category, Graph Type, Labels, etc.) we go through each line of the output of the ps command, filtering out the line containing grep.
We use the stacked Graph Method: the first entry has to be the base Graph, the following ones are just layered on top of the first. To get clean VM Names, we have a quite simple function clean_vmname:

sub clean_vmname {
    my $vm_name = $_[0];
    $vm_name =~ s/\.vmx//;
    $vm_name =~ s/\./\_/g;
    return $vm_name;
}

The Code that delivers the Data looks similar. We are just piping the values from the ps command to the output:

foreach my $line(@lines) {
    if( $line  =~ /(?<!grep)$/ ) {
        my @vm = ();
        my $count = 0;
        my @array=split(/ /,$line); 
        foreach my $entry(@array) {
            if( length($entry) > 2 ){
                $vm[$count]=$entry;
                $count++;
            }
        }
        $vm[3] = clean_vmname($vm[3]);
        if( $vm[3] =~ /(?<!comm)$/) {   
            if( $type eq "pcpu" ) {
                print "$vm[3]_pcpu.value $vm[0]\n";
            }
            if( $type eq "pmem" ) {
                print "$vm[3]_pmem.value $vm[1]\n";
            }
            if( $type eq "mem" ) {
                my $value =  ($vm[2]*1024);
                print "$vm[3]_mem.value $value\n";
            }
        }
    }
}   

You can find the whole plugin here on GitHub.

Here are some example Graphs you will get as a result:

[Graphs: fusion_mem-month, fusion_pcpu-month, fusion_pmem-month]

Writing Munin Plugins pt2: counting VPNd Connections

Preamble

Every Munin Plugin should have a preamble by default:

#!/usr/bin/env perl
# -*- perl -*-

=head1 NAME

dar_vpnd - a Plugin for displaying VPN Stats for the Darwin (MacOS) vpnd Service.

=head1 INTERPRETATION

The Plugin displays the number of active VPN connections.

=head1 CONFIGURATION

No Configuration necessary!

=head1 AUTHOR

Philipp Haussleiter <philipp@haussleiter.de> (email)

=head1 LICENSE

GPLv2

=cut

# MAIN
use warnings;
use strict;

As you can see, this Plugin will use Perl as the Plugin language.

After that you have some information about the Plugin Usage:

  • Name of the Plugin + some description
  • Interpretation of the delivered Data
  • Information about the Plugins Configuration (not necessary here, we will see that in the other Plugins)
  • Author Name + Contact Email
  • License

# MAIN marks the beginning of the (main) code.

Next you see some Perl Setup, using strict and enabling warnings.

Gathering Data

First you should always have a basic idea of how you want to collect your Data (e.g. which user will use what command to get what kind of data).

For example, we can get all VPN Connections on Mac OS (Server) by searching the process list for pppd processes:

ps -ef | grep ppp
    0   144     1   0  5Mär14 ??        10:35.34 vpnd -x -i com.apple.ppp.l2tp
    0 29881   144   0  4:12pm ??         0:00.04 pppd serverid com.apple.ppp.l2tp nodetach proxyarp plugin L2TP.ppp ms-dns 10.XXX.YYY.1 debug logfile /var/log/ppp/vpnd.log idle 7200 noidlesend lcp-echo-interval 60 lcp-echo-failure 5 mru 1500 mtu 1280 receive-all ip-src-address-filter 1 novj noccp intercept-dhcp require-mschap-v2 plugin DSAuth.ppp plugin2 DSACL.ppp l2tpmode answer :10.XXX.YYY.233
    0 22567   144   0  4:12pm ??         0:00.04 pppd serverid com.apple.ppp.l2tp nodetach proxyarp plugin L2TP.ppp ms-dns 10.XXX.YYY.1 debug logfile /var/log/ppp/vpnd.log idle 7200 noidlesend lcp-echo-interval 60 lcp-echo-failure 5 mru 1500 mtu 1080 receive-all ip-src-address-filter 1 novj noccp intercept-dhcp require-mschap-v2 plugin DSAuth.ppp plugin2 DSACL.ppp l2tpmode answer :10.XXX.YYY.234    

To collect only the IPs, you need some more RegExp work, using awk:

ps -ef | awk '/[p]ppd/ {print substr($NF,2);}'
10.XXX.YYY.233
10.XXX.YYY.234

We are only interested in the total Connection Count. So we use wc for counting all IPs:

ps -ef | awk '/[p]ppd/ {print substr($NF,2);}' | wc -l
       2

So we now have a basic command that gives us the Count of currently connected users.

Configuration

The next thing is how your Data should be handled by the Munin System.
Your Plugin needs to provide Information about the Field Setup.

The most basic (Perl) Code looks like this:

if ( exists $ARGV[0] and $ARGV[0] eq "config" ) {
    # Config Output
    print "...";    
} else {
    # Data Output
    print "...";
}

For more Information about fieldnames, please follow the above Link.

Our Plugin Source looks like this:

# MAIN
use warnings;
use strict;


my $cmd = "ps -ef | awk '/[p]ppd/ {print substr(\$NF,2);}' | wc -l";

if ( exists $ARGV[0] and $ARGV[0] eq "config" ) {
    print "graph_category VPN\n";
    print "graph_args --base 1024 -r --lower-limit 0\n";    
    print "graph_title Number of VPN Connections\n";
    print "graph_vlabel VPN Connections\n";
    print "graph_info The Graph shows the Number of VPN Connections\n"; 
    print "connections.label Number of VPN Connections\n";
    print "connections.type GAUGE\n";   
} else {
    my $output = `$cmd`;
    print "connections.value $output";
}

Implementation

To test the Plugin you can use munin-run:

> /opt/local/sbin/munin-run dar_vpnd config
graph_category VPN
graph_args --base 1024 -r --lower-limit 0
graph_title Number of VPN Connections
graph_vlabel VPN Connections
graph_info The Graph shows the Number of VPN Connections
connections.label Number of VPN Connections
connections.type GAUGE
> /opt/local/sbin/munin-run dar_vpnd
connections.value        1

Example Graphs

Some basic (long-term) Graphs look like this:

[Graph: munin_vpnd_connections_macos]

Writing Munin Plugins pt1: Overview

Writing your own Munin Plugins

Around February this year, we at innoQ had the need to set up a Mac OS based CI for a Project. Besides building and integrating some standard Java Software, we also had to set up a Test Environment with Solaris/Weblogic, a Mac OS System for doing CI for an iOS Application and a Linux System that contains the Jenkins CI itself.
Additionally, the whole Setup should be reachable via VPN (the iOS Application itself should also be able to reach the Resources via VPN).

To have the least possible obstacles in setting up the iOS CI and the iOS (iPad) VPN Access, we decided to use Mac OS Server as the basic Host OS. As the need for Resources is somewhat limited for the other Systems (Solaris/Weblogic, Linux/Jenkins), we also decided to do a basic VM Setup with VMWare Fusion.

Since we have a decent Munin Monitoring Setup in our Company for all our Systems, we needed Monitoring for all Services used in this Setup.

Besides the Standard Plugins (like Network/CPU/RAM/Disk), that was basically:

  • Jenkins CI
  • VMware Fusion
  • VPN

After searching through the Munin Plugin Repository, we couldn't find any plugins providing the necessary monitoring. So the only choice was to write our own set of plugins. Since all three Plugins use different approaches for collecting the Data, I plan to write three different posts here, one for each Plugin. The Sources are available online here and might be added to the main Munin Repo as soon as the Pull Requests are accepted.

How Munin works

But first a brief overview of Munin. Munin is a TCP based Service that normally has one Master and one Node on each System that needs to be monitored. The Master asks all Nodes periodically for Monitoring Updates.
The Node Service delivering the updated Data runs on Port 4949 by default. To add some level of security, you normally whitelist the IPs that are allowed to query the Node's Data.
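
On the Node side, that whitelist lives in the munin-node configuration (usually /etc/munin/munin-node.conf); the allow directives are regular expressions matching the Master's IP (192.168.0.10 here is just a placeholder for your Master's address):

port 4949
allow ^127\.0\.0\.1$
allow ^192\.168\.0\.10$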

You can use normal telnet for accessing the Node's Data:

telnet localhost 4949
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
# munin node at amun

Every Node delivers Information about specific Services provided by Plugins. To get an overview of the configured plugins, you issue a list:

# munin node at amun
list
df df_inode fusion_mem fusion_pcpu fusion_pmem if_en0 if_err_en0 load lpstat netstat ntp_offset processes users

A Plugin always provides a Configuration Output and a Data Output. By default, if you query a Plugin, you get the Data Output:

# munin node at amun
df
_dev_disk1s2.value 34
_dev_disk0s1.value 48
_dev_disk3s2.value 62
_dev_disk2s1.value 6
_dev_disk2s2.value 32

To trigger the Config Output, you need to append config to your command:

# munin node at amun
df config
graph_title Filesystem usage (in %)
graph_args --upper-limit 100 -l 0
graph_vlabel %
graph_scale no
_dev_disk0s1.label /Volumes/Untitled
_dev_disk1s2.label /
_dev_disk2s1.label /Volumes/System-reserviert
_dev_disk2s2.label /Volumes/Windows 7
_dev_disk3s2.label /Volumes/Data

You can also use the tool munin-run for doing a basic test (it is installed together with your munin-node Binary):

 munin-run df
_dev_disk1s2.value 34
_dev_disk0s1.value 48
_dev_disk3s2.value 62
_dev_disk2s1.value 6
_dev_disk2s2.value 32
munin-run df config
graph_title Filesystem usage (in %)
graph_args --upper-limit 100 -l 0
graph_vlabel %
graph_scale no
_dev_disk0s1.label /Volumes/Untitled
_dev_disk1s2.label /
_dev_disk2s1.label /Volumes/System-reserviert
_dev_disk2s2.label /Volumes/Windows 7
_dev_disk3s2.label /Volumes/Data

Summary

So a Plugin needs to provide Output for both modes:

  • Configuration Output when run with the config argument
  • The (normal) Data Output when called without additional arguments

Plugins are simple Scripts or Executables that can be written in any Programming language that the Node's System supports (e.g. Bash, Perl, Python, etc.).
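
A minimal Plugin in plain Shell might look like this (just a sketch with a made-up example, counting sshd processes):

#!/bin/sh
if [ "$1" = "config" ]; then
    # Config Output
    echo "graph_title Number of sshd processes"
    echo "graph_vlabel processes"
    echo "sshd.label sshd processes"
else
    # Data Output ([s]shd keeps the grep itself out of the count)
    echo "sshd.value $(ps -A | grep -c '[s]shd')"
fi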

Since it is one of the easier Plugins, we will have a look at the Plugin monitoring the VPN Connections on our Mac OS Server in the next Post.

Creating encrypted Volumes on ZFS Pools

One of the most anticipated Features of ZFS is transparent Encryption. But since Oracle decided not to make updates from Solaris 11 available as Open Source, the Feature of on-Disk Encryption is not available in Illumos (i.e. Open-Source) based Distributions. But there are some ways to create transparently encrypted ZPools with the currently available ZFS Version, using pktool, lofiadm, zfs and zpool.

lofiadm – administer files available as block devices through lofi

http://www.unix.com/man-page/opensolaris/1m/lofiadm

That means you can use normal Files as Block Devices while adding some Features to them (e.g. compression and also encryption). The Goal of this Post is to create a transparently encrypted Volume that uses a Key-File for decryption (which might be stored on a USB stick, or uploaded once via a Browser to mount the device). For an easy start, I created a Vagrant File based on OmniOS here.

If you do not know Vagrant, here is an easy Start for you:

  1. Get yourself a VirtualBox Version matching your Platform: https://www.virtualbox.org/wiki/Downloads
  2. Get yourself a Vagrant Version matching your Platform: http://www.vagrantup.com/downloads.html
  3. Move to the Folder where you have saved your Vagrantfile
  4. Start your Box (this will need some time, since the OmniOS Box needs to be downloaded first)
    vagrant up
  5. After your box is finished, you can ssh into it with
    vagrant ssh
  6. Have a look around:
    zpool status

    You will find exactly one (Root-) Pool configured in that system:

      pool: rpool
     state: ONLINE
      scan: none requested
    config:
    
            NAME        STATE     READ WRITE CKSUM
            rpool       ONLINE       0     0     0
              c1d0s0    ONLINE       0     0     0

Next we want to create our encrypted Device; for that we need some “files” to use with lofiadm. One very handy feature of ZFS is the possibility to also create Volumes (ZVols) in your ZPool.
First we need to find out how big our Pool is:

zpool list

will give us an overview of the configured Volumes and File Systems:

NAME           SIZE  ALLOC   FREE  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
rpool         39,8G  2,28G  37,5G         -     5%  1.00x  ONLINE  -
vagrant-priv      -      -      -         -      -      -  FAULTED  -

So we have roughly 37G of free space. For this Test we would like to create an encrypted Volume with 2G of Space.
Creating a ZVol is as easy as creating a normal ZFS Folder:

sudo zfs create -V 2G rpool/export/home/vagrant-priv

You can now see the new ZVol with the reserved size of 2G:

zfs list
NAME                             USED  AVAIL  REFER  MOUNTPOINT
rpool                           5,34G  33,8G  35,5K  /rpool
rpool/ROOT                      1,74G  33,8G    31K  legacy
rpool/ROOT/omnios               1,74G  33,8G  1,46G  /
rpool/ROOT/omniosvar              31K  33,8G    31K  legacy
rpool/dump                       512M  33,8G   512M  -
rpool/export                    2,06G  33,8G    32K  /export
rpool/export/home               2,06G  33,8G    41K  /export/home
rpool/export/home/vagrant-priv  2,06G  35,9G    16K  -
rpool/swap                      1,03G  34,8G  34,4M  -

Next we need a Key for en-/de-crypting the Device. That can be done with pktool:

> pktool genkey keystore=file outkey=lofi.key keytype=aes keylen=256 print=y
< Key Value ="93af08fcfa9fc89724b5ee33dc244f219ac6ce75d73df2cb1442dc4cd12ad1c4"

We can now use this key with lofiadm to create an encrypted Device:

> sudo lofiadm -a /dev/zvol/rdsk/rpool/export/home/vagrant-priv -c aes-256-cbc -k lofi.key
< /dev/lofi/1

lofi.key is the File that contains the Key for the Encryption. You can keep it in that folder or move it to another device. If you want to reactivate the device (we will see later how to do this), you will need that key file again.
/dev/lofi/1 is our encrypted Device. We can use that for creating a new (encrypted) ZPool:

sudo zpool create vagrant-priv /dev/lofi/1

You can now use that Pool as a normal ZPool (including Quotas, Compression, etc.):

> zpool status

< pool: vagrant-priv
 state: ONLINE
  scan: none requested
config:

        NAME           STATE     READ WRITE CKSUM
        vagrant-priv   ONLINE       0     0     0
          /dev/lofi/1  ONLINE       0     0     0

errors: No known data errors

You should change the Folder permissions of that mount-point:

sudo chown -R vagrant:other vagrant-priv

Creating some Test-Files:

cd /vagrant-priv/
mkfile 100m file2
> du -sh *
< 100M   file2

So what happens if we want to deactivate that Pool?

  1. Leave the Mount-Point:
    cd /
  2. Deactivate the Pool:
    sudo zpool export vagrant-priv
  3. Deactivate the Lofi Device:
    sudo lofiadm -d /dev/lofi/1

That’s all. Now let’s reboot the system and see how we can re-attach that Pool again.

Leave the Vagrant Box:

> exit
< logout
< Connection to 127.0.0.1 closed.

Restart the Box:

> vagrant halt
< [default] Attempting graceful shutdown of VM...
> vagrant up
...
< Waiting for machine to boot. This may take a few minutes...
< [default] VM already provisioned. Run `vagrant provision` or use `--provision` to force it

Re-Enter the Box:

vagrant ssh

So where is our Pool?

zpool status

Only gives us the default root-Pool.
First we need to re-create our Lofi-Device:

> sudo lofiadm -a /dev/zvol/rdsk/rpool/export/home/vagrant-priv -c aes-256-cbc -k lofi.key
< /dev/lofi/1

Instead of creating a new ZPool (which would delete our previously created Data), we need to import that ZPool. That can be done in two steps, using zpool. First we need to find our Pool:

sudo zpool import -d /dev/lofi/

That lists all ZPools that are on Devices in that Directory. We need to find the id of “our” Pool (that only needs to be done once, since the id stays the same as long as the Pool exists).

...   
   pool: vagrant-priv
     id: 1140053612317909839
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        vagrant-priv   ONLINE
          /dev/lofi/1  ONLINE
...

We can now import that ZPool using the id 1140053612317909839:

sudo zpool import -d  /dev/lofi/ 1140053612317909839

After that we can again access our Pool as usual:

> cd /vagrant-priv/
> du -sh *
< 100M   file2

Plotting UNIX Processes with DOT

Inspired by this Post, I started playing around with ps, nodejs and GraphViz.

After reading some ps man Pages, I found the necessary ps parameters.
For MacOS I used:

ps -A -c -o pid,ppid,pcpu,comm,uid -r

For Linux I used:

ps -A -o pid,ppid,pcpu,comm,uid

You then get some Output like:

    PID    PPID %CPU COMMAND           UID
      1       0  0.0 init                0
      2       1  0.0 kthreadd            0
      3       2  0.0 migration/0         0
      4       2  0.0 ksoftirqd/0         0
      5       2  0.0 migration/0         0
      6       2  0.0 watchdog/0          0

So you get the ProcessID, the Parent ProcessID, the CPU Usage (which I am not using for plotting atm), the Command and the UserID.
I created a simple Node Script that you can either run directly under MacOS (for all other Unices you need to update the ps command),
or you can give the script a previously generated ps output for parsing:

plotPS.sh /tmp/host.log > /tmp/host.dot

The resulting DOT Code is then piped into a DOT File.
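
To render the DOT File into an image, you can use the dot binary from the GraphViz package:

dot -Tpng /tmp/host.dot -o /tmp/host.png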

Here are some examples:

My MacOS Laptop:

[Graph: MacBook Pro]

A Single Linux Host with Dovecot and Apache2/Passenger:

[Graph: Apache2 / Mail Server]

A Linux Host with OpenVZ and KVM Instances:

[Graph: OpenVZ / KVM Host]

In the original Post, there were also dependencies between the CPU Usage and the size of the graph nodes; it would also be more useful to only plot the processes of one VM from the inside.
But I guess for one evening the result is okay :-).

Build and Test Project TOX under MacOS

Some Steps to do

  1. You need XCode with the CLI Tools installed (see here)
  2. If you are using MacPorts (you really should), you need to install all necessary Dependencies:
    port install libtool automake autoconf libconfig-hr libsodium cmake
  3. Checkout the Project TOX Core Repository:
    git clone --recursive https://github.com/irungentoo/ProjectTox-Core.git
  4. cd ProjectTox-Core
    cmake .
    make all
  5. You need two tools:
    DHT_bootstrap in /other
    and nTox in /testing
  6. Bootstrap Tox (aka get your Public Key):
    ./DHT_bootstrap
    Keys saved successfully
    Public key: EA7D7BD2566A208F83F81F8876DE6C1BDC1F8CA1788300296E5D4F4CB142CD77
    Port: 33445

    The key is also in PUBLIC_ID.txt in the same Directory.

  7. Run nTox like so:
    ./ntox 198.46.136.167 33445 728925473812C7AAC482BE7250BCCAD0B8CB9F737BF3D42ABD34459C1768F854

    Where:

    198.46.136.167 is some Tox Node,
    33445 is the Port of that Tox Node and
    728925473812C7AAC482BE7250BCCAD0B8CB9F737BF3D42ABD34459C1768F854 is the Public Key of that Tox Node.
  8. Et voilà:
    /h for list of commands
    [i] ID: C759C4FC6511CEED3EC846C0921229CA909F37CAA2DCB1D8B31479C5838DF94C
    >>

    You can add a friend:

    /f ##PUBLIC_ID##

    List your friends:

    /l

    Message a friend:

    /m ##friend_list_index##  ##message##

Downgrading Subversion from 1.8 to 1.7 in MacPorts

bash-3.2# cd /tmp
bash-3.2# svn co http://svn.macports.org/repository/macports/trunk/dports/devel/subversion --revision 108493
A    subversion/files
A    subversion/files/patch-Makefile.in.diff
A    subversion/files/patch-osx_unicode_precomp.diff
A    subversion/files/config_impl.h.patch
A    subversion/files/servers.default
A    subversion/Portfile
Checked out revision 108493.
bash-3.2# cd subversion/
bash-3.2# port install
--->  Computing dependencies for subversion
--->  Fetching archive for subversion
--->  Attempting to fetch subversion-1.7.10_1.darwin_12.x86_64.tbz2 from http://mse.uk.packages.macports.org/sites/packages.macports.org/subversion
--->  Attempting to fetch subversion-1.7.10_1.darwin_12.x86_64.tbz2.rmd160 from http://mse.uk.packages.macports.org/sites/packages.macports.org/subversion
...
--->  Scanning binaries for linking errors: 100.0%
--->  No broken files found.
bash-3.2# port installed subversion
The following ports are currently installed:
  subversion @1.7.10_1
  subversion @1.8.1_1 (active)
bash-3.2# port activate subversion @1.7.10_1
--->  Computing dependencies for subversion
--->  Deactivating subversion @1.8.1_1
--->  Cleaning subversion
--->  Activating subversion @1.7.10_1
--->  Cleaning subversion
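
Switching back to Subversion 1.8 later is just another activate, using the same syntax as above:

bash-3.2# port activate subversion @1.8.1_1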

Adding LDAP Authentication to a Play! 2 Application

Since Play! 1 is not really supported anymore, I will describe the steps for accessing Data from your LDAP Directory with a Play! 2 Application in this Post.

Prerequisites

As also mentioned in my last Post, for this example we are using the vagrant-rundeck-ldap Vagrant VM I already described here.

Setup

After you have set up a basic Play! 2 Application with

play new ldap-test

we need to update our Application's Dependencies. To achieve this, we need to change project/Build.scala:


import sbt._
import Keys._
import play.Project._

object ApplicationBuild extends Build {

  val appName         = "play2-ldap-example"
  val appVersion      = "1.0-SNAPSHOT"

  val appDependencies = Seq(
    // Add your project dependencies here,
    javaCore,
    javaJdbc,
    javaEbean,
    "com.innoq.liqid" % "ldap-connector" % "1.3"
  )

  val main = play.Project(appName, appVersion, appDependencies).settings(
    // Add your own project settings here      
  )
}

The Command:


play compile

should download all necessary Project Dependencies and will do an initial Compilation of all your Project Files (currently not really many).

You also need to add the LDAP Settings to your Application's configuration. For this example we will use an external Properties File, conf/ldap.properties:


# Settings for LiQID
# ~~~~~
ldap.user.objectClasses=person
ldap.group.objectClasses=groupOfUniqueNames

default.ldap=ldap1
# LDAP Listing, divided by ","
ldap.listing=ldap1

ldap1.ou_people=ou=users
ldap1.ou_group=ou=roles
ldap1.url=ldap://localhost:3890
ldap1.principal=dc=Manager,dc=example,dc=com
ldap1.credentials=password

ldap1.base_dn=dc=example,dc=com
ldap1.admin.group.id=admin

ldap1.user.id.attribute=cn
ldap1.user.object.class=person

ldap1.group.id.attribute=cn
ldap1.group.object.class=groupOfUniqueNames
ldap1.group.member.attribute=uniqueMember

Implementation

Since Play! 2 does not really support the use of Before Filters, we will use Custom Actions to make our Login Authentication work.
We will create a new Package app/actions with two new files: an annotation interface BasicAuth and the implementation itself, BasicAuthAction.
The Annotation will be used to mark a specific Controller as using Basic Auth.
So let's start with BasicAuth:


package actions;

import play.mvc.With;

import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Inherited;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@With(BasicAuthAction.class)
@Retention(RetentionPolicy.RUNTIME)
@Target({ ElementType.METHOD, ElementType.TYPE })
@Inherited
@Documented
public @interface BasicAuth {
}

After that you can annotate Controllers with @BasicAuth (but this won't work yet, since the Implementation is still missing).
Here then the BasicAuthAction:



package actions;

import com.ning.http.util.Base64;

import models.User;
import play.mvc.Action;
import play.mvc.Http.Context;
import play.mvc.Result;

public class BasicAuthAction extends Action {

	private static final String AUTHORIZATION = "authorization";
	private static final String WWW_AUTHENTICATE = "WWW-Authenticate";
	private static final String REALM = "Basic realm=\"play2-ldap-example\"";

	@Override
	public Result call(Context context) throws Throwable {

		String authHeader = context.request().getHeader(AUTHORIZATION);
		if (authHeader == null) {
			return sendAuthRequest(context);
		}

		String auth = authHeader.substring(6);

		byte[] decodedAuth = Base64.decode(auth);
		String[] credString = new String(decodedAuth, "UTF-8").split(":");

		if (credString == null || credString.length != 2) {
			return sendAuthRequest(context);
		}

		String username = credString[0];
		String password = credString[1];
		User authUser = User.authenticate(username, password);
		if (authUser == null) {
			return sendAuthRequest(context);
		}
		context.request().setUsername(username);
		return delegate.call(context);
	}

	private Result sendAuthRequest(Context context) {
		context.response().setHeader(WWW_AUTHENTICATE, REALM);
		return unauthorized();
	}
}

As you can see, there are no LDAP-specific Dependencies at all, since all the necessary Logic is in the User Model, in User.authenticate(username, password). So let's have a look into that Model:


package models;

import play.Play;

import com.innoq.ldap.connector.LdapHelper;
import com.innoq.ldap.connector.LdapUser;
import com.innoq.liqid.utils.Configuration;

public class User {

	private static LdapHelper HELPER = getHelper();

	public String sn;
	public String cn;
	public String dn;

	public User(String cn) {
		this.cn = cn;
	}

	public String toString() {
		StringBuilder sb = new StringBuilder();
		sb.append("cn: ").append(cn).append("\n");
		sb.append("sn: ").append(sn).append("\n");
		sb.append("dn: ").append(dn).append("\n");
		return sb.toString();
	}

	public static User authenticate(String username, String password) {
		if (HELPER.checkCredentials(username, password)) {
			return new User(username);
		}
		return null;
	}

	public static User getUser(String username) {
		LdapUser ldapUser = (LdapUser) LdapHelper.getInstance().getUser(
				username);
		User user = new User(username);
		user.cn = ldapUser.get("cn");
		user.sn = ldapUser.get("sn");
		user.dn = ldapUser.getDn();
		return user;
	}

	private static LdapHelper getHelper() {
		Configuration.setPropertiesLocation(Play.application().path()
				.getAbsolutePath()
				+ "/conf/ldap.properties");
		return LdapHelper.getInstance();
	}
}

You also have a static Instance of the LDAP Helper; checking the User Credentials and loading a User are two different Methods. The last thing is to load a User from the LDAP Directory. Here is the Admin Controller:


package controllers;

import models.User;
import actions.BasicAuth;
import play.mvc.Controller;
import play.mvc.Result;
import views.html.Admin.index;

@BasicAuth
public class Admin extends Controller {
    public static Result index() {
    	User u = User.getUser(request().username());
        return ok(index.render("Hello Admin!", u));
    }
}

And here is the Template File that is used:


@(message: String, user: User)

@main("Admin Index") {
@{message} 
<p>   
<pre>
@{user}
</pre>
</p>
}
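
To test the whole thing, you can request the protected page with Basic Auth via curl (a sketch; it assumes a route like GET /admin pointing to Admin.index, with jdoe/secret being valid credentials in your LDAP Directory):

curl -i -u jdoe:secret http://localhost:9000/admin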

Links

You can find the Sources of that Library here: https://github.com/innoq/LiQID
You can find an example Project here: https://github.com/phaus/play-ldap/tree/master/play2-ldap-example