Munin does have some limitations. It does not scale well (to hundreds of servers), and I find it particularly painful to create aggregated graphs (for example, an aggregated network graph of two or more hosts). But I know these issues are being worked on.

Okay, enough talk – let’s monitor Bind:

First we need to enable logging. Create a log directory and add logging directives to the Bind configuration file (here on Ubuntu):

# mkdir /var/log/bind9
# chown bind:bind /var/log/bind9
# cat /etc/bind/named.conf.options
logging {
        channel b_log {
                file "/var/log/bind9/bind.log" versions 30 size 1m;
                print-time yes;
                print-category yes;
                print-severity yes;
                severity info;
        };

        channel b_debug {
                file "/var/log/bind9/debug.log" versions 2 size 1m;
                print-time yes;
                print-category yes;
                print-severity yes;
                severity dynamic;
        };

        channel b_query {
                file "/var/log/bind9/query.log" versions 2 size 1m;
                print-time yes;
                severity info;
        };

        category default { b_log; b_debug; };
        category config { b_log; b_debug; };
        category queries { b_query; };
};
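Before restarting, it is worth validating the edited file; Bind ships a checker for exactly this:

```shell
# silent with exit status 0 when the configuration parses cleanly
named-checkconf /etc/bind/named.conf
```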

Restart bind:

# /etc/init.d/bind9 restart
  Stopping domain name service: named.
  Starting domain name service: named.

You can now see the log files being populated under /var/log/bind9/.
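The flat query log is also easy to mine with standard tools. A minimal sketch, assuming the format the b_query channel above produces (print-time yes, no category/severity prefix) and using made-up sample data:

```shell
# write a few sample lines in the query.log format (hypothetical data)
cat > /tmp/query.log <<'EOF'
16-Jul-2011 09:45:01.123 client 192.168.1.5#1234: query: example.com IN A +
16-Jul-2011 09:45:02.456 client 192.168.1.5#1235: query: example.org IN MX +
16-Jul-2011 09:45:03.789 client 10.0.0.9#5353: query: example.net IN A +
EOF

# field 4 is ip#port: strip the port and count queries per client, busiest first
awk '{ split($4, a, "#"); print a[1] }' /tmp/query.log | sort | uniq -c | sort -rn
#   2 192.168.1.5
#   1 10.0.0.9
```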

Next, configure Munin:

Make sure the Munin user (“munin”) can read your Bind log files.

We need two additional plugins: “bind” and “bind_rndc”. If you can’t find them in your default install, head over here.

The “bind” plugin should work right away. “bind_rndc”, however, needs to read the “rndc.key” file, which is only readable by the user “bind”. You have two options: either run the plugin as root, or add the user “munin” to the group “bind” and make the “rndc.key” file group-readable. For the sake of simplicity, I run the plugin as root here, so you need to add:
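For completeness, the group-based alternative looks like this on Debian/Ubuntu (run as root; this assumes the key lives at /etc/bind/rndc.key):

```shell
adduser munin bind              # put the munin user in the bind group
chgrp bind /etc/bind/rndc.key   # make sure the key belongs to the group
chmod 640 /etc/bind/rndc.key    # owner read/write, group read
/etc/init.d/munin-node restart  # pick up the new group membership
```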

# cat /etc/munin/plugin-conf.d/munin-node
  [bind*]
  user root
  env.querystats /var/log/bind9/named.stats

Next restart Munin:

# /etc/init.d/munin-node restart
  Stopping munin-node: done.
  Starting munin-node: done.

Munin runs every five minutes, so go grab a coffee and… wait.
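Or, instead of waiting for the next poll, exercise the new plugins by hand; munin-run executes a plugin with the same configuration munin-node would use, and the node itself listens on TCP port 4949 by default:

```shell
# run the plugins directly, honouring /etc/munin/plugin-conf.d/
munin-run bind
munin-run bind_rndc
# or ask the running node for the same values
printf 'fetch bind\nquit\n' | nc localhost 4949
```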

In order to have automatic and unattended security updates in Ubuntu, one needs to install the corresponding package:

sudo aptitude install unattended-upgrades

The file /etc/apt/apt.conf.d/10periodic needs to be created with the following content:

APT::Periodic::Enable "1";
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Download-Upgradeable-Packages "1";
APT::Periodic::AutocleanInterval "5";
APT::Periodic::Unattended-Upgrade "1";
APT::Periodic::RandomSleep "1800";

Also, change the first few lines of /etc/apt/apt.conf.d/50unattended-upgrades as follows so that only updates from the official Ubuntu origins are considered:

Unattended-Upgrade::Allowed-Origins {
        "Ubuntu lucid-security";
        "Ubuntu lucid-updates";
};

Unattended-Upgrade::Package-Blacklist {
};

Unattended-Upgrade::Mail "root@localhost";

Unattended-Upgrade::Remove-Unused-Dependencies "false";
Unattended-Upgrade::Automatic-Reboot "false";

It is vital to redo these settings after an upgrade to a new distro release.

If configured correctly, the following command should echo the configured value (e.g. UnattendedUpgradeInterval='1'):

apt-config shell UnattendedUpgradeInterval APT::Periodic::Unattended-Upgrade
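You can also exercise the whole machinery without touching any packages; the unattended-upgrade script (note: singular, unlike the package name) has a dry-run mode:

```shell
# show what would be upgraded, with verbose reasoning, but install nothing
sudo unattended-upgrade --dry-run --debug
```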

There is an easy-to-use tool that facilitates the quick’n’dirty move of the contents of one IMAP account to a new server. Its name is imapsync.

In Ubuntu, you can install it like this:

apt-get install imapsync -y

Now, create two files in your home directory for the old and the new account, each containing the password of the respective account. They are called passfile_1 and passfile_2 from now on.

Now run imapsync with the following options:

imapsync --host1 {old_imap_host} --user1 {old_imap_user} --authmech1 LOGIN \
         --passfile1 passfile_1 --port1 993 --ssl1 \
         --host2 {new_imap_host} --user2 {new_imap_user} --authmech2 LOGIN \
         --passfile2 passfile_2 --port2 993 --ssl2

You just need to adjust the ports (143 for plain IMAP vs. 993 for SSL) and the authentication mechanisms to your needs and you’re set.
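If you have several mailboxes to move, it can help to build the call once in a small wrapper; a sketch where all host and user names are placeholders:

```shell
#!/bin/sh
# build the imapsync invocation from variables so it can be reused per mailbox;
# old.example.com/new.example.com and usera/userb are hypothetical
HOST1=old.example.com USER1=usera
HOST2=new.example.com USER2=userb

set -- imapsync \
  --host1 "$HOST1" --user1 "$USER1" --authmech1 LOGIN \
  --passfile1 "$HOME/passfile_1" --port1 993 --ssl1 \
  --host2 "$HOST2" --user2 "$USER2" --authmech2 LOGIN \
  --passfile2 "$HOME/passfile_2" --port2 993 --ssl2

echo "will run: $*"
# "$@"   # uncomment to actually start the migration
```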

To show how this works, here’s an example from a personal migration:

Turned ON syncinternaldates, will set the internal dates (arrival dates) on host2 same as host1.
Will try to use LOGIN authentication on host1
Will try to use LOGIN authentication on host2
From imap server [servera] port [993] user [usera]
To   imap server [serverb] port [993] user [userb]
Host servera says it has NO CAPABILITY for AUTHENTICATE LOGIN
Success login on [servera] with user [usera] auth [LOGIN]
Host serverb says it has NO CAPABILITY for AUTHENTICATE LOGIN
Success login on [serverb] with user [userb] auth [LOGIN]
host1: state Authenticated
host2: state Authenticated
From separator and prefix: [.][INBOX.]
To   separator and prefix: [.][INBOX.]
++++ Calculating sizes ++++
From Folder [INBOX]                             Size:   4122535 Messages:   252
From Folder [INBOX.Belangrijk]                  Size:  13395371 Messages:   127
flags from: [\Seen]["16-Jul-2011 09:45:30 +0200"]
Copied msg id [2053] to folder INBOX.Mailinglijsten.Debian.Bugs msg id [2053]
+ NO msg #2054 [9+vsr6Mzv58wM/L4Jt4Bvg:7095] in INBOX.Mailinglijsten.Debian.Bugs
+ Copying msg #2054:7095 to folder INBOX.Mailinglijsten.Debian.Bugs

oops, I did it again…

When I reinstalled my monitoring VPS with Ubuntu 10.04 LTS and Nagios3, I got this annoying error message again:

Error: Could not stat() command file ‘/var/lib/nagios3/rw/nagios.cmd’!

In “/etc/nagios3/nagios.cfg”, “check_external_commands=1” was already set. So something more was required to make it run on Debian/Ubuntu…

Somewhere deep in my memory I knew there was a Debian way to solve this permission problem.

This time I’ll write it down here, just as a reminder to myself, should I ever have to install it again.

/etc/init.d/nagios3 stop

dpkg-statoverride --update --add nagios www-data 2710 /var/lib/nagios3/rw

dpkg-statoverride --update --add nagios nagios 751 /var/lib/nagios3

/etc/init.d/nagios3 start
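The 2710 mode is the interesting part: the setgid bit keeps new files group-owned, and the bare execute bit lets the www-data group enter the directory (to reach the nagios.cmd pipe) without being able to list or read it. A quick demonstration on a throwaway directory:

```shell
# try mode 2710 on a scratch directory to see what it grants
mkdir -p /tmp/statoverride-demo
chmod 2710 /tmp/statoverride-demo
stat -c '%a %A' /tmp/statoverride-demo
# → 2710 drwx--s---
```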

LVM-based KVM guests are fast, but you lose some flexibility; playing with KVM on my desktop, I prefer file-based images.

Converting from LVM images to qcow2 isn’t hard, but the documentation is sparse.

  1. Use qemu-img to convert the LVM volume to qcow2 format:
qemu-img convert -O qcow2 /dev/vg_name/lv_name /var/lib/libvirt/images/image_name.qcow2

If you want the image compressed, add ‘-c’ right after the word convert.

  2. Edit the XML for the image:
virsh edit image_name

modify the disk stanza: change the type attribute of the <disk> element from ‘block’ to ‘file’, add a type to the driver line, and on the source line change ‘dev’ to ‘file’ and adjust the path:

<driver name='qemu' type='qcow2'/>

<source file='/var/lib/libvirt/images/image_name.qcow2'/>
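Put together, the edited disk stanza should look roughly like this (the target line is unchanged from your existing definition; the device names here are just examples):

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/image_name.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
```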

Creating images on top of a base image allows quick rollouts of many boxes from a single install – for example, I have a ‘golden image’ of Ubuntu; I can stop that VM and create two servers using the original VM disk as a base file, writing changes to different files.

qemu-img create -b original_image.qcow2 -f qcow2 clone_image01.qcow2

qemu-img create -b original_image.qcow2 -f qcow2 clone_image02.qcow2

Taking this further I can then snapshot both images so once I start making changes, rolling back to a point in time prior to the changes is very easy:

qemu-img snapshot -c snapshot_name vm_image_name.qcow2
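qemu-img can also list, revert to, and delete those snapshots (do this while the VM is shut down):

```shell
qemu-img snapshot -l vm_image_name.qcow2                 # list snapshots in the image
qemu-img snapshot -a snapshot_name vm_image_name.qcow2   # roll back (apply)
qemu-img snapshot -d snapshot_name vm_image_name.qcow2   # delete a snapshot
```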