LVM (Logical Volume Manager) gives us the ability to create snapshots. An LVM snapshot is a
logical volume that temporarily preserves the original data of a changing logical volume, so its data
can be backed up while preserving consistency. The snapshot must be created prior to any critical
modification of the logical volume in order to fulfill its role.
Note: the snapshot volume needs to be large enough to store the data that will change while it exists. If
more data changes than the snapshot can hold, the snapshot becomes unusable. That means
we have to estimate how much data will change on the original volume while the snapshot
exists, so the snapshot can hold all those changes.
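Because an overflowing snapshot is lost for good, it is worth checking its fill level while it exists. The sketch below separates the decision from the lvs call so the logic stands on its own; the volume name in the usage comment is only a placeholder:

```shell
#!/bin/sh
# check_usage USAGE LIMIT: warn when a snapshot's Data% (as printed by
# `lvs -o data_percent`) has reached the given limit.
check_usage() {
    usage=${1%%.*}   # drop the fractional part, e.g. "85.32" -> "85"
    limit=$2
    if [ "$usage" -ge "$limit" ]; then
        echo "WARNING: snapshot ${usage}% full"
    else
        echo "OK: snapshot ${usage}% full"
    fi
}

# In practice, feed it the Data% column of lvs (placeholder LV name):
#   check_usage "$(lvs --noheadings -o data_percent vg/my_snap | tr -d ' ')" 80
check_usage 85.32 80
```

Run from cron or a monitoring hook, a check like this gives you a chance to grow the snapshot before it fills up.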
Below we will create a snapshot of a logical volume (backing up the logical volume), make changes on
the logical volume, and restore its previous state using the LVM snapshot.
Introduction
By creating logical volume snapshots you are able to freeze the current state of any of your logical volumes. This
means that you can very easily create a backup and, once needed, roll back to the original logical volume state. This
method is very similar to what you may already know from virtualization software such as VirtualBox or VMware,
where you can simply take a snapshot of an entire virtual machine and revert to it in case something goes wrong.
Using LVM snapshots therefore allows you to take control of your system's logical volumes, whether on your
personal laptop or on a server. This tutorial is self-contained, as no previous experience with Logical Volume Manager
is required.
Scenario
In this article we will explain how to manually create and restore logical volume snapshots. Since we do not assume
any previous experience with Logical Volume Manager, we will start from scratch using a dummy physical hard drive
/dev/sdb with a size of 1073 MB. Here are all the steps in a nutshell:
First we create two partitions on our /dev/sdb drive. These partitions will be of type "8e Linux LVM" and
will be used to create physical volumes
Once both partitions are created, we use the pvcreate command to create the physical volumes
In this step we create a new volume group and a single 200 MB logical volume using an ext4
filesystem
Mount our new logical volume and create some sample data
Take a snapshot and remove the sample data
Roll back the logical volume snapshot
Logical Volume Manager allows you to create a volume group consisting of multiple physical volumes. Physical
volumes can be entire hard drives or separate partitions, and they can reside on one or many hard drives, partitions,
USB drives, SANs, etc. To increase a logical volume's size you can add additional physical volumes. Once
you create a volume group you can then create multiple logical volumes in it while completely
disregarding the physical volume layer. A volume group can be resized at any time by adding more physical volumes,
so new logical volumes can be created or existing ones resized.
Create partitions
First, we need to create partitions and mark them as physical volumes. Here is the physical disk we are going to
work with:
# fdisk -l /dev/sdb
Let's create two primary partitions. Here we are using fdisk to do this job. Feel free to use any other partitioning tool
to do the job, such as cfdisk, parted, etc.
# fdisk /dev/sdb
All commands to type are highlighted in bold. For each partition, enter n (new partition), p (primary), choose the
partition number and size, and then use t to change the partition type to 8e (Linux LVM). Once both partitions are
created, write the table to disk with w; fdisk confirms with:
Syncing disks.
If you followed the above steps, your new partition table on the disk /dev/sdb will now look similar to the one below:
# fdisk -l /dev/sdb
Disk /dev/sdb: 1073 MB, 1073741824 bytes
At this point we mark both partitions as physical volumes. Please note that you do not have to follow the same pattern
as in this tutorial. For example, you could simply partition the entire disk as a single partition instead of two. Use
pvcreate to create the physical volumes:
# pvcreate /dev/sdb[1-2]
Writing physical volume data to disk "/dev/sdb1"
Physical volume "/dev/sdb1" successfully created
Writing physical volume data to disk "/dev/sdb2"
Physical volume "/dev/sdb2" successfully created
Now it is time to create a volume group. For this we use the tool vgcreate. The new volume group will have the name
"volume_group":
# vgcreate volume_group /dev/sdb1 /dev/sdb2
After execution of the above command you will have a new volume group named "volume_group". This new
volume group consists of two physical volumes:
/dev/sdb1
/dev/sdb2
You can see the stats of your new volume group using vgdisplay command:
# vgdisplay
--- Volume group ---
VG Name volume_group
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 2
Act PV 2
VG Size 592.00 MiB
PE Size 4.00 MiB
Total PE 148
Alloc PE / Size 0 / 0
Free PE / Size 148 / 592.00 MiB
VG UUID 37jef7-3q3E-FyZS-lMPG-5Jzi-djdO-BgPIPa
If all went smoothly we can now finally create a logical volume. The size of the logical volume must not exceed the
free space of your volume group. Let's create a new logical volume called "volume1" with a size of 200 MB and format
it with an ext4 filesystem:
# lvcreate -L 200M -n volume1 volume_group
You can see the definition of your new logical volume using the lvdisplay command. Take note of the LV Path value, as
you will need it when creating a filesystem on your new "volume1" logical volume.
# lvdisplay
--- Logical volume ---
LV Path /dev/volume_group/volume1
LV Name volume1
VG Name volume_group
LV UUID YcPtZH-mZ1J-OQQu-B4nj-MWo0-yC18-m77Vuz
LV Write Access read/write
LV Creation host, time debian, 2013-05-08 12:53:17 +1000
LV Status available
# open 0
LV Size 200.00 MiB
Current LE 50
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:0
# mkfs.ext4 /dev/volume_group/volume1
Finally, we have come to the point where we can take a snapshot of the logical volume created in the previous section.
We will also need some sample data on our logical volume "volume1", so that once we revert from the snapshot
we can confirm the entire process by comparing the original data with the data recovered from the snapshot.
Understanding Snapshots
In order to understand how snapshots work, we first need to understand what a logical volume consists of and how
data is stored. This concept is similar to well-known symbolic links: when you create a symbolic link to a file you are
not creating a copy of the actual file, you simply create a reference to it. A logical volume stores data in a similar
fashion, and it consists of two essential parts:
metadata pointers
data blocks
When a snapshot is created, Logical Volume Manager simply creates a copy of all metadata pointers on a separate
logical volume. Metadata does not consume much space, and therefore you are able to snapshot, let's say, a
2 GB logical volume into a 5 MB snapshot volume. The snapshot volume only starts to grow once you start altering
data on the original logical volume. That means every time you remove or edit a file on the original logical volume, a
copy of that file's old data is written to the snapshot volume. For simple changes you may need a snapshot volume of
around 5-10% of the original logical volume's size. If you are prepared to make many changes on your original logical
volume, then you will need a lot more than 10%. Let's get started:
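The copy-on-write behaviour described above can be imitated with plain files, purely as a toy model (LVM really operates on blocks, not whole files): old content is copied into the snapshot area only the first time it changes, so untouched data costs the snapshot nothing.

```shell
#!/bin/sh
# Toy copy-on-write: snapshot/ receives a file's original content the
# first time that file is modified in origin/; unchanged files are
# never copied, which is why a small snapshot can cover a large volume.
set -e
work=$(mktemp -d)
mkdir "$work/origin" "$work/snapshot"
echo "hello" > "$work/origin/file1"
echo "world" > "$work/origin/file2"

cow_write() {   # cow_write FILE NEW_CONTENT
    f=$1; shift
    # preserve the old version once, before the first modification
    [ -e "$work/snapshot/$f" ] || cp "$work/origin/$f" "$work/snapshot/$f"
    printf '%s\n' "$*" > "$work/origin/$f"
}

cow_write file1 "changed data"
cat "$work/snapshot/file1"   # the old content of file1 is preserved
ls "$work/snapshot"          # only file1 was copied; file2 was not
```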
Sample Data
First, create a new mount point directory for "volume1" and mount it:
# mkdir /mnt/volume1
# mount /dev/volume_group/volume1 /mnt/volume1
# cd /mnt/volume1
# cp -r /sbin/ .
# du -s sbin/
8264 sbin/
Using the previous commands we have copied the entire /sbin directory into /mnt/volume1. The size of
/mnt/volume1/sbin/ is currently 8264 KB.
Creating a Snapshot
Now we are going to create a snapshot of the logical volume "volume1". In the process, Logical Volume Manager will
create a new separate logical volume. This new logical volume will have a size of 20 MB and will be called
"volume1_snapshot":
# lvcreate -s -L 20M -n volume1_snapshot /dev/volume_group/volume1
Execute the lvs command to confirm that the new snapshot volume has been created:
# lvs
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
volume1 volume_group owi-aos- 200.00m
volume1_snapshot volume_group swi-a-s- 20.00m volume1 0.06
Now that the snapshot has been created we can start altering data on "volume1" for example by removing the entire
content:
# cd /mnt/volume1
# rm -fr sbin/
After this operation you can consult the lvs command again and see that Data% on volume1_snapshot has now increased.
If you want to, you can now mount your snapshot volume to confirm that the original data from "volume1" still exists.
Before we revert our logical volume snapshot, let's first confirm that our /mnt/volume1/sbin data is indeed missing:
# du -s /mnt/volume1/sbin
du: cannot access `/mnt/volume1/sbin': No such file or directory
To roll back, merge the snapshot into its origin volume:
# lvconvert --merge /dev/volume_group/volume1_snapshot
After execution of the above command the logical volume "volume1" will roll back once it is re-activated. Therefore,
what needs to be done next is to re-activate "volume1". First, make sure that you unmount your "volume1":
# umount /mnt/volume1
# lvchange -a n /dev/volume_group/volume1
# lvchange -a y /dev/volume_group/volume1
As a last step, mount your logical volume "volume1" again and confirm that all data has been recovered:
# mount /dev/volume_group/volume1 /mnt/volume1
Conclusion
The above was a basic example of snapshot manipulation using Logical Volume Manager. The usefulness of logical
volume snapshots is enormous and will surely help you with your tasks, whether you are a system administrator or a
developer. Although you can use the setup above to create multiple snapshots for backup recovery, keep in mind that
your backups live within the same volume group, so any low-level physical volume problem may render your
snapshots useless.
As system administrators we have to perform activities that lead to filesystem and data changes,
so it is always a good idea to keep a backup. Taking a backup every time might not be possible,
although scheduled backups, such as a nightly run, remain one option for protecting our data. But
what if you know you are planning to perform an application upgrade that touches thousands of files
and you need an immediate way back? Instead of copying all the files, which needs a huge amount of storage
and a lot of copy time, we prefer to take a snapshot.
NOTE: An LVM snapshot should NOT be mistaken for a backup.
IMPORTANT NOTE: In LVM we have a feature where a snapshot's size can be extended, but that
does not mean that data modified after creating the snapshot will also be restored just because the
snapshot was extended at a later point in time.
A snapshot will revert the volume to the state at the point when it was taken, and any data modified after taking the
snapshot will be overwritten.
# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data system -wi-a----- 2.00g
opt system -wi-ao---- 2.00g
root system -wi-ao---- 2.00g
swap system -wi-ao---- 4.00g
tmp system -wi-ao---- 12.00g
var system -wi-ao---- 2.00g
For the sake of this article we will create a snapshot of our 'data' partition. But before that, let's check the
existing content of our data partition:
# df -Th /data/
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/system-data ext4 2.0G 6.1M 1.8G 1% /data
# ll /data/
total 28
drwxr-xr-x 2 root root 4096 Sep 13 03:24 dir1
drwxr-xr-x 2 root root 4096 Sep 13 03:24 dir2
drwxr-xr-x 2 root root 4096 Sep 13 03:24 dir3
-rw-r--r-- 1 root root 0 Sep 13 2017 file1
-rw-r--r-- 1 root root 0 Sep 13 2017 file2
-rw-r--r-- 1 root root 0 Sep 13 2017 file3
drwx------ 2 root root 16384 Sep 13 03:17 lost+found
Create the snapshot volume with lvcreate:
# lvcreate -s -n snap_data -L 1G /dev/system/data
Here we are creating a snapshot volume of 1 GB in size (snap_data) for the /dev/system/data logical
volume; -s marks it as a snapshot, -n sets its name, and /dev/system/data is the path of the logical volume whose
snapshot has to be created.
These snapshots are read-write, so you can mount the snapshot volume and check its data/content:
# mount /dev/system/snap_data /mnt
# ll /mnt/
total 28
drwxr-xr-x 2 root root 4096 Sep 13 03:24 dir1
drwxr-xr-x 2 root root 4096 Sep 13 03:24 dir2
drwxr-xr-x 2 root root 4096 Sep 13 03:24 dir3
-rw-r--r-- 1 root root 0 Sep 13 2017 file1
-rw-r--r-- 1 root root 0 Sep 13 2017 file2
-rw-r--r-- 1 root root 0 Sep 13 2017 file3
drwx------ 2 root root 16384 Sep 13 03:17 lost+found
We can get the details of the snapshot and its parent logical volume:
# lvs /dev/system/snap_data
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
snap_data system swi-aos--- 1.00g data 0.00
As you can see, the new logical snapshot volume is visible in the above information.
IMPORTANT NOTE: Make the snapshot volume as large as the origin volume if you expect the
snapshot to be kept around for a long period and a high amount of data to be written to the source
partition, as there is a chance of the snapshot getting corrupted if the amount of changed data
becomes larger than the snapshot volume size.
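As a rough sizing aid, the snapshot only has to hold the data you expect to change; the helper below simply computes a percentage of the origin's size (the 15% in the example is an arbitrary assumption, not a recommendation from this article):

```shell
#!/bin/sh
# snap_size ORIGIN_MIB PERCENT: suggested snapshot size in MiB,
# computed as PERCENT of the origin volume's size (integer math).
snap_size() {
    echo $(( $1 * $2 / 100 ))
}

# e.g. a 2048 MiB origin where ~15% of the data is expected to change:
snap_size 2048 15   # -> 307
```

Passing 100 reproduces the equal-size advice above for an origin that may be rewritten completely.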
# df -h /data/
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/system-data 2.0G 6.1M 1.8G 1% /data
Let me put some dummy content of about 1 GB in size into the /data partition:
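A hypothetical way to generate such dummy content is dd; the path and size in the usage comment are placeholders. Every block written this way is new data the snapshot's CoW area must absorb, which is what pushes it past its limit here:

```shell
#!/bin/sh
# fill PATH MIB: write MIB mebibytes of zeroes to PATH, then flush.
fill() {
    dd if=/dev/zero of="$1" bs=1M count="$2" 2>/dev/null
    sync
}

# e.g. on the mounted origin volume (placeholder path):
#   fill /data/dummy.img 1024
```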
# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync
Convert
data system owi-aos--- 2.00g
mgtserv system -wi-ao---- 2.00g
opt system -wi-ao---- 2.00g
root system -wi-ao---- 2.00g
snap_data system swi-I-s--- 1.00g data 100.00
swap system -wi-ao---- 4.00g
tmp system -wi-ao---- 12.00g
var system -wi-ao---- 2.00g
As you can see, 100% of the snapshot size is occupied, which means this snapshot is now corrupted
and cannot be used:
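The corruption is also visible in the lvs attribute string: its fifth character is the LV state, and a capital 'I' there marks an invalid snapshot. A minimal sketch for extracting that character:

```shell
#!/bin/sh
# state_of LV_ATTR: print the LV state, i.e. the 5th character of the
# attribute string shown by lvs ('a' = active, 'I' = invalid snapshot).
state_of() {
    printf '%s\n' "$1" | cut -c5
}

# attribute strings as shown in the lvs outputs in this article:
state_of "swi-a-s---"   # -> a (healthy, active snapshot)
state_of "swi-I-s---"   # -> I (invalid, overflowed snapshot)
```

In a script you would feed it live output, e.g. state_of "$(lvs --noheadings -o lv_attr system/snap_data | tr -d ' ')".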
# lvdisplay /dev/system/snap_data
--- Logical volume ---
LV Path /dev/system/snap_data
LV Name snap_data
VG Name system
LV UUID NwJSQu-NjIr-7Qn0-Wo0q-ig7d-3apy-eChdWD
LV Write Access read/write
LV Creation host, time nds18-rdssrv, 2017-09-13 04:23:35 -0400
LV snapshot status INACTIVE destination for data
LV Status available
# open 0
LV Size 2.00 GiB
Current LE 64
COW-table size 1.00 GiB
COW-table LE 32
Snapshot chunk size 4.00 KiB
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:12
# dmsetup status
system-var: 0 4194304 linear
system-snap_data: 0 4194304 snapshot Invalid
As you can see, once the snapshot becomes full it becomes unusable, so it must be removed:
# lvremove -f /dev/system/snap_data
Logical volume "snap_data" successfully removed
NOTE: The automatic method is preferred, because once the snapshot volume reaches 100% usage the snapshot
gets corrupted, as seen above.
Let's create a fresh snapshot of the same size and inspect it:
# lvcreate -s -n snap_data -L 1G /dev/system/data
# lvdisplay /dev/system/snap_data
--- Logical volume ---
LV Path /dev/system/snap_data
LV Name snap_data
VG Name system
LV UUID ETHmgE-sgz0-4o7Q-3GDQ-pUy4-CJPo-D3nlIe
LV Write Access read/write
LV Creation host, time nds18-rdssrv, 2017-09-13 05:21:37 -0400
LV snapshot status active destination for data
LV Status available
# open 0
LV Size 2.00 GiB
Current LE 64
COW-table size 1.00 GiB
COW-table LE 32
Allocated to snapshot 0.00%
Snapshot chunk size 4.00 KiB
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:12
Now extend the snapshot's copy-on-write area manually with lvextend:
# lvextend -L +1G /dev/system/snap_data
# lvdisplay /dev/system/snap_data
--- Logical volume ---
LV Path /dev/system/snap_data
LV Name snap_data
VG Name system
LV UUID ETHmgE-sgz0-4o7Q-3GDQ-pUy4-CJPo-D3nlIe
LV Write Access read/write
LV Creation host, time nds18-rdssrv, 2017-09-13 05:21:37 -0400
LV snapshot status active destination for data
LV Status available
# open 0
LV Size 2.00 GiB
Current LE 64
COW-table size 2.00 GiB
COW-table LE 64
Allocated to snapshot 0.00%
Snapshot chunk size 4.00 KiB
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:12
As you can see, the LVM COW table (Copy-On-Write) size has increased from 1 GB to 2 GB.
# vim /etc/lvm/lvm.conf
# Configuration option activation/snapshot_autoextend_threshold.
# Auto-extend a snapshot when its usage exceeds this percent.
# Setting this to 100 disables automatic extension.
# The minimum value is 50 (a smaller value is treated as 50.)
# Also see snapshot_autoextend_percent.
# Automatic extension requires dmeventd to be monitoring the LV.
#
# Example
# Using 70% autoextend threshold and 20% autoextend size, when a 1G
# snapshot exceeds 700M, it is extended to 1.2G, and when it exceeds
# 840M, it is extended to 1.44G:
# snapshot_autoextend_threshold = 70
#
snapshot_autoextend_threshold = 70
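The threshold alone only decides when to act; snapshot_autoextend_percent decides how much is added each time. A pair matching the behaviour demonstrated below (grow by 50% once usage passes 70%) would look like this; note that both options live in the activation section of lvm.conf and that dmeventd must be monitoring the LV:

```
# /etc/lvm/lvm.conf, activation section
snapshot_autoextend_threshold = 70
snapshot_autoextend_percent = 50
```

With these settings a 1 GB snapshot is extended to 1.5 GB when its used CoW space crosses roughly 0.7 GB.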
Next we again need to start from scratch, which means creating a new snapshot volume.
Now let's again try to fill up the /data partition with some more data.
Let's check the snapshot volume usage: since the snapshot volume was 1 GB, it was easily able to absorb this
change without extending its volume size:
# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data system owi-aos--- 2.00g
mgtserv system -wi-ao---- 2.00g
opt system -wi-ao---- 2.00g
root system -wi-ao---- 2.00g
snap_data system swi-a-s--- 1.00g data 50.21
Next, let's create one more dummy file with an additional 512 MB of data.
Now let's check the lvs status for snap_data. As you can see, the volume size increased to 1.5 GB, since we had
asked LVM to extend the snapshot by 50% whenever the threshold is reached:
# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data system owi-aos--- 2.00g
mgtserv system -wi-ao---- 2.00g
opt system -wi-ao---- 2.00g
root system -wi-ao---- 2.00g
snap_data system swi-a-s--- 1.50g data 66.94
swap system -wi-ao---- 4.00g
tmp system -wi-ao---- 12.00g
var system -wi-ao---- 2.00g
So this worked perfectly. But make sure the amount of changed data on the origin does not grow beyond what the
1.5 GB snapshot can hold, or else the snapshot will again get corrupted.
How to restore/merge the snapshot?
Once you are done, it is time to restore your snapshot.
IMPORTANT NOTE: Before merging the snapshot, make sure that the origin volume is in an
unmounted state, or else the merge will be postponed until the next activation of the logical volume.
Below are examples of both scenarios.
Start the merge with lvconvert:
# lvconvert --merge /dev/system/snap_data
Since the /data partition is in a mounted state, the snapshot merge is deferred; to complete the merge you have to
perform a manual LVM re-activation.
# umount /data/
# lvchange -an /dev/system/data
# lvchange -ay /dev/system/data
# mount /dev/system/data /data
# ll /data/
total 524320
drwxr-xr-x 2 root root 4096 Sep 13 03:24 dir1
drwxr-xr-x 2 root root 4096 Sep 13 03:24 dir2
drwxr-xr-x 2 root root 4096 Sep 13 03:24 dir3
-rw-r--r-- 1 root root 0 Sep 13 03:24 file1
-rw-r--r-- 1 root root 0 Sep 13 03:24 file2
-rw-r--r-- 1 root root 0 Sep 13 03:24 file3
drwx------ 2 root root 16384 Sep 13 03:17 lost+found
NOTE: If for some reason you cannot perform this re-activation, the snapshot merge will happen
during the next reboot of the node.
If you check from another terminal, you can see that the data partition has the "O" attribute, which means the
snapshot is merging with its origin:
# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data system Owi-a-s--- 2.00g 49.74
mgtserv system -wi-ao---- 2.00g
opt system -wi-ao---- 2.00g
root system -wi-ao---- 2.00g
swap system -wi-ao---- 4.00g
tmp system -wi-ao---- 12.00g
var system -wi-ao---- 2.00g
And you are done here; our 'data' partition is back to its original state:
# ll /data/
total 524320
drwxr-xr-x 2 root root 4096 Sep 13 03:24 dir1
drwxr-xr-x 2 root root 4096 Sep 13 03:24 dir2
drwxr-xr-x 2 root root 4096 Sep 13 03:24 dir3
-rw-r--r-- 1 root root 0 Sep 13 03:24 file1
-rw-r--r-- 1 root root 0 Sep 13 03:24 file2
-rw-r--r-- 1 root root 0 Sep 13 03:24 file3
drwx------ 2 root root 16384 Sep 13 03:17 lost+found