
Snapshot-based backups

LVM snapshots have been around quite a while - but their copy-on-write (COW) implementation was flawed: with multiple snapshots active concurrently, performance degraded to unacceptable levels. This was finally fixed in the LVM2 2.02.90 release, which introduced thin provisioning support.

As the new LVM2 feature - called “thin provisioning” - promises no serious performance penalty with multiple snapshots active at the same time, we plan to use a snapshotting tool called Snapper to build a simple “Time Machine”-like backup system for OpenNode local storage pools.

More info about Snapper can be found here: en.opensuse.org/Portal:Snapper

Prerequisites

# As the LVM2 snapshot copy-on-write (COW) implementation
# was fixed (performance-wise) in version 2.02.90,
# verify that the LVM2 version you are running is
# 2.02.90 or greater
yum list installed | grep lvm2.x86_64
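
# Optionally cross-check with the LVM tools themselves,
# which report their own version directly
lvm version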

# In order to use the new LVM2 snapshot implementation
# we need to convert the existing storage LVM LV into a thin LV
# - please follow the instructions in the next section.
# Check the last column (VFree) of the vgs output for available
# free space - you need more than the current /storage usage
vgs
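
# For comparison, check how much space /storage currently uses
df -h /storage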

# As this howto is tailored for a new OpenNode OS release,
# it is assumed that the /vz partition will be merged into
# the /storage volume - see the section below on how to achieve that

Converting /storage LVM LV into thin LV

# Install thin provisioning tools -
# needed, for example, for thin LV activation at boot time
yum install -y device-mapper-persistent-data

# NB!!! Thin pool metadata area resizing is not supported at the
# moment, and filling it up deadlocks the whole pool read-only,
# with no recovery options besides a re-install -
# so plan the pool metadata area size carefully!
# 256 MB per terabyte of pool space should be enough -
# but let's add some overhead just in case.
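# A rough worked example: for the 100 GB pool created below,
# the 256 MB/TB rule of thumb calls for only ~25-30 MB of metadata,
# so the 2 GB used here is deliberately generous headroom.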

# Create LVM thin pool
lvcreate --thinpool pool -L 100G --poolmetadatasize 2G VolGroupL0
# and add thin storage volume
lvcreate -T VolGroupL0/pool -V 50G --name thin_storage
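
# Optional sanity check: confirm the pool and thin volume exist and
# watch the Data%/Meta% columns as the pool fills (assuming your
# lvm2 build exposes the metadata_percent report field)
lvs -a -o +metadata_percent VolGroupL0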

# Create EXT4 filesystem on new LVM LV
mkfs.ext4 /dev/VolGroupL0/thin_storage

# Migrate data from current /storage
mount /dev/VolGroupL0/thin_storage /mnt
service vz stop
# (trailing slash makes rsync copy hidden files too)
rsync -av --progress /storage/ /mnt/
umount /mnt
umount /storage
sed -i 's/\/dev\/mapper\/VolGroupL0-storage/\/dev\/mapper\/VolGroupL0-thin_storage/g' /etc/fstab
mount /storage
service vz start
lvremove /dev/VolGroupL0/storage
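
# Optional sanity check: confirm /storage is now served by
# the thin volume
mount | grep /storage
df -h /storage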

Merging /vz into /storage/local

# As the upcoming OpenNode OS release will include
# migrating the /vz partition into /storage/local,
# this howto assumes the same layout

# You need to stop all containers
service vz stop

# Migrating /vz directory structure
mkdir -p /storage/local/vz
cd /vz
mv -v * /storage/local/vz/
umount /vz

# Removing /vz fstab mount entry and vz logical volume
sed -i '/\/vz/d' /etc/fstab
lvremove -f /dev/VolGroupL0/vz

# Creating symlink to new location
rmdir /vz
cd / && ln -s /storage/local/vz

# Starting up containers again
service vz start
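
# Optional sanity check: confirm the symlink is in place and the
# containers came back up (vzlist ships with OpenVZ)
ls -ld /vz
vzlist -a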

Installing Snapper

# Create Snapper yum repo file
cat << EOF > /etc/yum.repos.d/snapper.repo
[filesystems_snapper]
name=snapper (CentOS_CentOS-6)
type=rpm-md
baseurl=http://download.opensuse.org/repositories/filesystems:/snapper/CentOS_CentOS-6/
gpgcheck=1
gpgkey=http://download.opensuse.org/repositories/filesystems:/snapper/CentOS_CentOS-6/repodata/repomd.xml.key
enabled=1
EOF

# Install Snapper
yum install -y snapper libsnapper-python
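
# Confirm both packages were installed
rpm -q snapper libsnapper-python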

# Create /storage volume snapper config
snapper -c storage create-config --fstype="lvm(ext4)" /storage
service crond restart
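
# Snapper's hourly timeline is driven by cron - you can confirm
# the job landed in cron.hourly (the exact filename varies by
# package build; openSUSE packages typically name it suse.de-snapper)
ls /etc/cron.hourly/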

Managing Snapper

# By default Snapper will snapshot /storage every hour
# and will preserve the last 10 hourly, daily, monthly,
# and yearly snapshots.
# Review and customize the /storage config file:
nano -w /etc/snapper/configs/storage
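
# The timeline behaviour is controlled by the TIMELINE_* variables
# (TIMELINE_CREATE, TIMELINE_CLEANUP, TIMELINE_LIMIT_HOURLY, etc.) -
# a quick way to review just those:
grep TIMELINE /etc/snapper/configs/storage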

# Display snapshots list for /storage
snapper -c storage list
    
# Access a snapshot (replace nr# with a number from the list above)
snapper -c storage mount nr#
ls -l /storage/.snapshots/nr#/snapshot/
snapper -c storage umount nr#

# Manually create snapshot
snapper -c storage create --description "some description"
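
# Snapper can also pair pre/post snapshots around a risky change -
# a brief sketch (pass the number printed by the 'pre' call to
# the 'post' call):
snapper -c storage create --type pre --print-number --description "before change"
# ... perform the risky change here ...
snapper -c storage create --type post --pre-number nr# --description "after change"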

# Track filesystem changes between snapshots (NB! Can take a long time!)
# Snapshot number 0 refers to the current filesystem state
snapper -c storage status 1..0
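
# Once status has flagged changes, individual files can be inspected
# or rolled back - a sketch, with a hypothetical file path:
snapper -c storage diff 1..2 /storage/local/vz/private/101/etc/my.cnf
# Revert everything changed between the two snapshots (use with care!)
snapper -c storage undochange 1..2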