
Backup and Restore Logical Volume using LVM Snapshot

LVM (Logical Volume Manager) gives us the ability to create Snapshots. An LVM Snapshot is a
logical volume that temporarily preserves the original data of a changing logical volume, so its data
can be backed up consistently. The Snapshot must be created prior to any critical modifications on
the Logical Volume in order to fulfill its role.
Note: the Snapshot Volume needs to be large enough to store the data that will change while it exists.
If more data changes than the Snapshot can hold, the Snapshot becomes unusable. That means we
have to “predict” how much data will change on the original Volume while the Snapshot exists, so
that the Snapshot can hold all those changes.
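If the prediction turns out to be too low, the Snapshot does not have to be lost: its usage can be watched and the Snapshot grown before it fills up. A minimal sketch, assuming the storage_snapshot Volume created in step 2 below and free space left in the fedora Volume Group:

[root@tuxfixer ~]# lvs -o lv_name,origin,lv_size,data_percent fedora
[root@tuxfixer ~]# lvextend -L +5G /dev/fedora/storage_snapshot

The data_percent column shows how full each Snapshot's COW table is; extending the Snapshot while it is still below 100% keeps it usable.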

Below we will create a Snapshot of a Logical Volume (backing up the Logical Volume), make changes on
the Logical Volume and restore its previous state using the LVM Snapshot.
Steps:

1. Verify existing Logical Volumes and their mount points


[root@tuxfixer ~]# lvdisplay
--- Logical volume ---
LV Path /dev/fedora/swap
LV Name swap
VG Name fedora
LV UUID eCqPuY-10lE-jE73-BveE-jgSc-6V9W-b29s4q
LV Write Access read/write
LV Creation host, time tuxfixer, 2016-02-10 23:34:47 +0100
LV Status available
# open 2
LV Size 3.88 GiB
Current LE 992
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1

--- Logical volume ---


LV Path /dev/fedora/root
LV Name root
VG Name fedora
LV UUID wdM8rg-GeKd-GgJq-3KdJ-2iHr-qnXV-OwbITW
LV Write Access read/write
LV Creation host, time tuxfixer, 2016-02-10 23:34:48 +0100
LV Status available
# open 1
LV Size 45.63 GiB
Current LE 11681
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0

--- Logical volume ---


LV Path /dev/fedora/storage
LV Name storage
VG Name fedora
LV UUID N1iB1P-W3Dh-0NIv-aUJd-P6hF-KMUv-NTeweI
LV Write Access read/write
LV Creation host, time tuxfixer, 2016-04-10 19:14:54 +0200
LV Status available
# open 1
LV Size 20.00 GiB
Current LE 5120
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:3
[root@tuxfixer ~]# df -hT
Filesystem Type Size Used Avail Use% Mounted on
devtmpfs devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs tmpfs 2.0G 124K 2.0G 1% /dev/shm
tmpfs tmpfs 2.0G 1.2M 2.0G 1% /run
tmpfs tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/mapper/fedora-root ext4 45G 6.6G 36G 16% /
tmpfs tmpfs 2.0G 64K 2.0G 1% /tmp
/dev/vda1 ext4 477M 135M 313M 31% /boot
tmpfs tmpfs 396M 24K 396M 1% /run/user/42
tmpfs tmpfs 396M 0 396M 0% /run/user/1000
/dev/mapper/fedora-storage ext4 20G 5.2G 14G 28% /mnt
2. Backup Logical Volume (Create LVM Snapshot of the Logical Volume)
Let’s back up the storage Logical Volume by creating its Snapshot. We assume that the data changes
on the storage Logical Volume will not exceed 10 GB while the Snapshot exists. That’s why we will
create a Snapshot of the storage Logical Volume (/dev/fedora/storage) called storage_snapshot that
is 10 GB in size:
[root@tuxfixer ~]# lvcreate -s -n storage_snapshot -L 10G /dev/fedora/storage
Logical volume "storage_snapshot" created.
Verify created LVM Snapshot:

[root@tuxfixer ~]# lvdisplay


--- Logical volume ---
LV Path /dev/fedora/swap
LV Name swap
VG Name fedora
LV UUID eCqPuY-10lE-jE73-BveE-jgSc-6V9W-b29s4q
LV Write Access read/write
LV Creation host, time tuxfixer, 2016-02-10 23:34:47 +0100
LV Status available
# open 2
LV Size 3.88 GiB
Current LE 992
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1

--- Logical volume ---


LV Path /dev/fedora/root
LV Name root
VG Name fedora
LV UUID wdM8rg-GeKd-GgJq-3KdJ-2iHr-qnXV-OwbITW
LV Write Access read/write
LV Creation host, time tuxfixer, 2016-02-10 23:34:48 +0100
LV Status available
# open 1
LV Size 45.63 GiB
Current LE 11681
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0

--- Logical volume ---


LV Path /dev/fedora/storage
LV Name storage
VG Name fedora
LV UUID N1iB1P-W3Dh-0NIv-aUJd-P6hF-KMUv-NTeweI
LV Write Access read/write
LV Creation host, time tuxfixer, 2016-04-10 19:14:54 +0200
LV snapshot status source of storage_snapshot [active]
LV Status available
# open 1
LV Size 20.00 GiB
Current LE 5120
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:3

--- Logical volume ---


LV Path /dev/fedora/storage_snapshot
LV Name storage_snapshot
VG Name fedora
LV UUID d9eSG0-qp9I-fNU3-H8G5-iSBV-nkXh-vUEfhm
LV Write Access read/write
LV Creation host, time tuxfixer, 2016-04-10 20:10:04 +0200
LV snapshot status active destination for storage
LV Status available
# open 0
LV Size 20.00 GiB
Current LE 5120
COW-table size 10.00 GiB
COW-table LE 2560
Allocated to snapshot 0.00%
Snapshot chunk size 4.00 KiB
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:6
Note: the data allocated to the storage_snapshot Volume is 0.00% at the moment, because no data
changes have been made on the storage Volume since the Snapshot was created.
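A quick way to watch that counter move as data changes on the origin is to grep the same field out of lvdisplay:

[root@tuxfixer ~]# lvdisplay /dev/fedora/storage_snapshot | grep "Allocated to snapshot"

After writing or deleting files on /mnt, the percentage grows as changed blocks are copied into the Snapshot.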
3. Test LVM Snapshot
The storage Volume is mounted in the /mnt directory and the content of the directory is:
[root@tuxfixer ~]# ls -l /mnt
total 5090328
-rw-r--r--. 1 root root 1469054976 Apr 10 22:11 Fedora-Live-Workstation-x86_64-23-10.iso
drwx------. 2 root root 16384 Apr 10 19:22 lost+found
-rw-------. 1 root root 3743416320 Apr 10 22:13 rhel-server-7.0-x86_64-dvd.iso
Now test the Snapshot (/dev/fedora/storage_snapshot) to see if it holds the data changes – let’s
remove the .iso files from the storage Volume (/dev/fedora/storage):
[root@tuxfixer ~]# rm -f /mnt/*.iso
Verify /mnt directory content:
[root@tuxfixer ~]# ls -l /mnt
total 16
drwx------. 2 root root 16384 Apr 10 19:22 lost+found
We have removed the .iso files; in the next step we will try to restore them.

4. Restore Logical Volume using LVM Snapshot


Before we restore the storage Volume, we have to unmount it from the mount point:
[root@tuxfixer ~]# umount /mnt
Now let’s try to restore the storage Volume content by merging storage_snapshot back into that
Volume:
[root@tuxfixer ~]# lvconvert --merge /dev/fedora/storage_snapshot
Merging of volume storage_snapshot started.
storage: Merged: 100.0%
Mount the storage Volume back in the /mnt directory and verify its content:
[root@tuxfixer ~]# mount /dev/fedora/storage /mnt
[root@tuxfixer ~]# ls -l /mnt
total 3979284
-rw-r--r--. 1 root root 331350016 Apr 10 22:35 debian-8.4.0-i386-netinst.iso
drwx------. 2 root root 16384 Apr 10 19:22 lost+found
-rw-------. 1 root root 3743416320 Apr 10 22:34 rhel-server-7.0-x86_64-dvd.iso
The previously removed files are back in the /mnt directory – the Logical Volume has been restored
from the Snapshot.
Verify existing Logical Volumes after merging:

[root@tuxfixer ~]# lvs


LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root fedora -wi-ao---- 45.63g
storage fedora -wi-ao---- 20.00g
swap fedora -wi-ao---- 3.88g
Note: the LVM Snapshot (/dev/fedora/storage_snapshot) no longer appears in the Logical Volume
list, because it was merged with the original Volume (/dev/fedora/storage).
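Merging is only needed when you actually want to roll back. If the changes made on the storage Volume turn out to be fine and you simply want to drop the point-in-time copy, the Snapshot can be removed instead of merged; a sketch:

[root@tuxfixer ~]# lvremove /dev/fedora/storage_snapshot

This frees the 10 GB reserved for the COW table and leaves the origin Volume untouched.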

Introduction

By creating Logical Volume snapshots you are able to freeze the current state of any of your logical volumes. This
means that you can very easily create a backup and, once needed, roll back to the original logical volume state. This
method is very similar to what you already know from virtualization software such as VirtualBox or VMware,
where you can simply take a snapshot of an entire virtual machine and revert back in case something goes wrong.
Therefore, using LVM snapshots allows you to take control of your system's logical volumes, whether on your
personal laptop or a server. This tutorial is self-contained, as no previous experience with Logical Volume Manager is
required.
Scenario

In this article we will explain how to manually create and restore logical volume snapshots. Since we do not assume
any previous experience with Logical Volume Manager, we will start from scratch using a dummy physical hard drive
/dev/sdb with a size of 1073 MB. Here are all the steps in a nutshell:

- First we will create two partitions on our /dev/sdb drive. These partitions will be of type "8e Linux LVM" and
will be used to create physical volumes
- Once both partitions are created we use the pvcreate command to create the physical volumes
- In this step we create a new Volume Group and a single 200MB logical volume with an ext4
filesystem
- Mount our new logical volume and create some sample data
- Take a snapshot and remove the sample data
- Roll back the logical volume snapshot

Creating a Logical Volume


Logical Volume Manager Basics

Here is a quick-start definition of the logical volume manager:

The logical volume manager allows you to create a Volume Group consisting of multiple physical volumes. Physical
volumes can be entire hard drives or separate partitions, and they can reside on one or multiple hard drives,
partitions, USB drives, SANs etc. To increase a Logical Volume's size you can add additional physical volumes. Once
you create a Volume Group you can then create multiple Logical Volumes and at the same time completely
disregard the physical volume layer. A Volume Group can be resized at any time by adding more physical volumes,
so new logical volumes can be created or existing ones resized.
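As an illustration of that last point, once the volume group and logical volume from this tutorial exist, growing them could look like the sketch below. The spare partition /dev/sdc1 is purely hypothetical, and the -r (--resizefs) flag assumes a filesystem that fsadm can grow, such as ext4:

# pvcreate /dev/sdc1
# vgextend volume_group /dev/sdc1
# lvextend -L +100M -r /dev/volume_group/volume1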

Create partitions

First, we need to create partitions and mark them as physical volumes. Here is the physical disk we are going to
work with:

# fdisk -l /dev/sdb

Disk /dev/sdb: 1073 MB, 1073741824 bytes


255 heads, 63 sectors/track, 130 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x335af99c

Device Boot Start End Blocks Id System

Let's create two primary partitions. Here we are using fdisk to do this job. Feel free to use any other partitioning tool
for this, such as cfdisk, parted etc.

# fdisk /dev/sdb
The commands typed at the fdisk prompt are shown below:

Command (m for help): n

Partition type:

p primary (0 primary, 0 extended, 4 free)

e extended

Select (default p): p

Partition number (1-4, default 1):

Using default value 1

First sector (2048-2097151, default 2048):

Using default value 2048

Last sector, +sectors or +size{K,M,G} (2048-2097151, default 2097151): +400M

Command (m for help): n

Partition type:

p primary (1 primary, 0 extended, 3 free)

e extended

Select (default p): p

Partition number (1-4, default 2): 2

First sector (821248-2097151, default 821248):

Using default value 821248


Last sector, +sectors or +size{K,M,G} (821248-2097151, default 2097151):
+200M

Command (m for help): t

Partition number (1-4): 1

Hex code (type L to list codes): 8e

Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help): t

Partition number (1-4): 2

Hex code (type L to list codes): 8e

Changed system type of partition 2 to 8e (Linux LVM)

Command (m for help): w

The partition table has been altered!

Calling ioctl() to re-read partition table.

Syncing disks.

If you followed the above steps, your new partition table on the disk /dev/sdb will now look similar to the one below:

# fdisk -l /dev/sdb
Disk /dev/sdb: 1073 MB, 1073741824 bytes

255 heads, 63 sectors/track, 130 cylinders, total 2097152 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x335af99c

Device Boot Start End Blocks Id System

/dev/sdb1 2048 821247 409600 8e Linux LVM

/dev/sdb2 821248 1230847 204800 8e Linux LVM
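The same layout can also be produced non-interactively. A hedged equivalent using parted (mklabel wipes the existing partition table, and the boundaries below only approximate the 400MB/200MB partitions created above):

# parted -s /dev/sdb mklabel msdos
# parted -s /dev/sdb mkpart primary 1MiB 401MiB
# parted -s /dev/sdb set 1 lvm on
# parted -s /dev/sdb mkpart primary 401MiB 601MiB
# parted -s /dev/sdb set 2 lvm on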

Create Physical Volumes

At this point we mark both partitions as physical volumes. Please note that you do not have to follow the same pattern
as in this tutorial; for example, you could simply partition the entire disk with a single partition instead of two. Use
pvcreate to create the physical volumes:

# pvcreate /dev/sdb[1-2]
Writing physical volume data to disk "/dev/sdb1"
Physical volume "/dev/sdb1" successfully created
Writing physical volume data to disk "/dev/sdb2"
Physical volume "/dev/sdb2" successfully created

Create Volume Group

Now it is time to create a Volume Group. For this we use the vgcreate tool. The new Volume Group will be named
"volume_group".

# vgcreate volume_group /dev/sdb1 /dev/sdb2


Volume group "volume_group" successfully created

After executing the above command you will have a new volume group named "volume_group". This new
volume group will consist of two physical volumes:
- /dev/sdb1
- /dev/sdb2

You can see the stats of your new volume group using vgdisplay command:

# vgdisplay
--- Volume group ---
VG Name volume_group
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 2
Act PV 2
VG Size 592.00 MiB
PE Size 4.00 MiB
Total PE 148
Alloc PE / Size 0 / 0
Free PE / Size 148 / 592.00 MiB
VG UUID 37jef7-3q3E-FyZS-lMPG-5Jzi-djdO-BgPIPa

Creating Logical Volumes

If all went smoothly we can now finally create a logical volume. The size of the logical volume must not exceed the
free space in your volume group. Let's create a new logical volume called "volume1" of size 200 MB and format it
with the ext4 filesystem.

# lvcreate -L 200 -n volume1 volume_group


Logical volume "volume1" created

You can see the definition of your new logical volume using the lvdisplay command. Take note of the LV Path value,
as you will need it when creating a filesystem on your new "volume1" logical volume.

# lvdisplay
--- Logical volume ---
LV Path /dev/volume_group/volume1
LV Name volume1
VG Name volume_group
LV UUID YcPtZH-mZ1J-OQQu-B4nj-MWo0-yC18-m77Vuz
LV Write Access read/write
LV Creation host, time debian, 2013-05-08 12:53:17 +1000
LV Status available
# open 0
LV Size 200.00 MiB
Current LE 50
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:0

Now you can create an ext4 filesystem on your logical volume:

# mkfs.ext4 /dev/volume_group/volume1
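If you want to double-check the result before mounting, blkid should now report an ext4 signature on the logical volume:

# blkid /dev/volume_group/volume1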

Logical Volume Snapshot

Finally, we have come to the point where we can take a snapshot of the logical volume created in the previous section.
For this we will also need some sample data on our Logical Volume "volume1", so that once we revert from the snapshot
we can confirm the entire process by comparing the original data with the data recovered from the snapshot.

Understanding Snapshots

In order to understand how snapshots work, we first need to understand what a logical volume consists of and how data
is stored. This concept is similar to well-known symbolic links: when you create a symbolic link to a file you are not
creating a copy of the actual file, you simply create a reference to it. A logical volume stores data in a
similar fashion and consists of two essential parts:

- metadata pointers
- data blocks

When a snapshot is created, the Logical Volume Manager simply creates a copy of all the metadata pointers in a separate
logical volume. Metadata does not consume much space, and therefore you are able to snapshot, say, a
2GB logical volume into a 5MB snapshot volume. The snapshot volume only starts to grow once you start altering data on
the original logical volume: every time you remove or edit a file on the original logical volume, a copy
of that data is created on the snapshot volume. For simple changes you may need a snapshot volume of
around 5-10% of the original logical volume's size. If you are prepared to make many changes on your original logical
volume, then you will need a lot more than 10%. Let's get started:

Sample Data

First, create a new mount point directory for "volume1" and mount it:

# mkdir /mnt/volume1
# mount /dev/volume_group/volume1 /mnt/volume1

Enter "volume1" mount point and copy some sample data:

# cd /mnt/volume1
# cp -r /sbin/ .
# du -s sbin/
8264 sbin/
With the previous commands we have copied the entire /sbin directory into /mnt/volume1. The size of
/mnt/volume1/sbin/ is currently 8264 KB.

Creating a Snapshot

Now we are going to create a snapshot of the logical volume "volume1". In the process, Logical Volume Manager will
create a new, separate logical volume. This new logical volume will have a size of 20MB and will be called
"volume1_snapshot":

# lvcreate -s -L 20M -n volume1_snapshot /dev/volume_group/volume1


Logical volume "volume1_snapshot" created

Execute the lvs command to confirm that the new snapshot volume has been created:

# lvs
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
volume1 volume_group owi-aos- 200.00m
volume1_snapshot volume_group swi-a-s- 20.00m volume1 0.06

Now that the snapshot has been created we can start altering data on "volume1", for example by removing its entire
content:

# cd /mnt/volume1
# rm -fr sbin/

After this operation you can run the lvs command again and see that Data% on volume1_snapshot has increased.
If you want to, you can now mount your snapshot volume to confirm that the original data from "volume1" still exists.
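A hedged sketch of that check, using a hypothetical mount point /mnt/volume1_snapshot (unmount it again before attempting the merge in the next section):

# mkdir /mnt/volume1_snapshot
# mount -o ro /dev/volume_group/volume1_snapshot /mnt/volume1_snapshot
# du -s /mnt/volume1_snapshot/sbin
# umount /mnt/volume1_snapshot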

Revert Logical Volume Snapshot

Before we revert our logical volume snapshot, let's first confirm that our /mnt/volume1/sbin data is still missing:

# du -s /mnt/volume1/sbin
du: cannot access `/mnt/volume1/sbin': No such file or directory

Recovering a logical volume from its snapshot consists of two steps:

- scheduling a snapshot merge for the next logical volume activation
- deactivating and re-activating the logical volume

To schedule the snapshot rollback, execute the following command:


# lvconvert --merge /dev/volume_group/volume1_snapshot
Can't merge over open origin volume
Merging of snapshot volume1_snapshot will start next activation.

After executing the above command, the logical volume "volume1" will roll back once it is activated. Therefore,
what needs to be done next is to re-activate "volume1". First, make sure that you unmount "volume1":

# umount /mnt/volume1

Deactivate and activate your volume:

# lvchange -a n /dev/volume_group/volume1
# lvchange -a y /dev/volume_group/volume1

As a last step, mount your logical volume "volume1" again and confirm that all the data has been recovered:

# mount /dev/volume_group/volume1 /mnt/volume1


# du -s /mnt/volume1/sbin
8264 /mnt/volume1/sbin

Conclusion

The above was a basic example of snapshot manipulation using Logical Volume Manager. The usefulness of logical
volume snapshots is enormous and they will surely help you with your tasks, whether you are a system administrator or a
developer. Although you can use the setup above to create multiple snapshots for backup recovery, keep in mind that
snapshots live within the same Logical Volume Group as their origin, so any low-level physical volume
problems may render your snapshots useless.

Understanding LVM snapshots (create, merge, remove, extend)


Deepak Prasad, Wednesday, September 13, 2017

The steps below were tested on Red Hat Enterprise Linux 7.

As system administrators we have to perform activities that lead to filesystem and data changes,
so it is always a good idea to keep a backup. Taking a backup every time might not be possible,
and scheduled backups (for example a nightly run) do protect our data, but what if you are planning
an application upgrade that touches thousands of files and you need an immediate way back? Instead
of copying all the files, which needs a huge amount of storage and a lot of copy time, we prefer to
take a snapshot.
NOTE: An LVM snapshot should NOT be mistaken for a backup.

How does snapshot sizing work? How is it different from a backup?


LVM snapshots are very space efficient. By space efficient I mean that when a snapshot is created for a
logical volume with 10GB of space, the snapshot initially takes very little space (next to nothing), but as the
content of the source volume changes, the space used by the snapshot grows accordingly. Hence a
snapshot is easy to create but should not be kept for a long time, as it will end up eating a lot of space. In
such cases a backup is the preferred option instead of a snapshot.

So how space efficient is the LVM snapshot?


Lots of change = not very space efficient
Small amount of change = very space efficient

IMPORTANT NOTE: LVM allows a snapshot's size to be extended, but extending it at a later
point in time does not mean that data modified after the snapshot was created will be preserved.

Merging the snapshot reverts the volume to the state at the point the snapshot was taken, and any data
modified after taking the snapshot will be overwritten.

Let us start with some examples

I have a setup with the logical volumes below:

# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data system -wi-a----- 2.00g
opt system -wi-ao---- 2.00g
root system -wi-ao---- 2.00g
swap system -wi-ao---- 4.00g
tmp system -wi-ao---- 12.00g
var system -wi-ao---- 2.00g

For the sake of this article we will create a snapshot of our 'data' partition. But before that, let's check the
existing content of our data partition.

Our logical volume is mounted on the /data directory:

# df -Th /data/
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/system-data ext4 2.0G 6.1M 1.8G 1% /data

with the following content:

# ll /data/
total 28
drwxr-xr-x 2 root root 4096 Sep 13 03:24 dir1
drwxr-xr-x 2 root root 4096 Sep 13 03:24 dir2
drwxr-xr-x 2 root root 4096 Sep 13 03:24 dir3
-rw-r--r-- 1 root root 0 Sep 13 2017 file1
-rw-r--r-- 1 root root 0 Sep 13 2017 file2
-rw-r--r-- 1 root root 0 Sep 13 2017 file3
drwx------ 2 root root 16384 Sep 13 03:17 lost+found

So the same content is expected after performing an LVM snapshot merge.

To create a snapshot, use the command below:
# lvcreate -L 1G -s -n snap_data /dev/system/data
Using default stripesize 64.00 KiB.
Logical volume "snap_data" created.

Here,

-L means the size to assign to the snapshot Logical Volume

-s or (--snapshot) means create a snapshot volume
-n or (--name) means the name of the snapshot logical volume

Lastly, /dev/system/data is the path of the logical volume whose snapshot has to be created.

Here we are creating a snapshot volume of 1G in size (snap_data) for the /dev/system/data logical
volume.
These snapshots are read/write, so you can mount the snapshot volume and check the data/content.

# mount /dev/system/snap_data /mnt/

Check the content of the snapshot

# ll /mnt/
total 28
drwxr-xr-x 2 root root 4096 Sep 13 03:24 dir1
drwxr-xr-x 2 root root 4096 Sep 13 03:24 dir2
drwxr-xr-x 2 root root 4096 Sep 13 03:24 dir3
-rw-r--r-- 1 root root 0 Sep 13 2017 file1
-rw-r--r-- 1 root root 0 Sep 13 2017 file2
-rw-r--r-- 1 root root 0 Sep 13 2017 file3
drwx------ 2 root root 16384 Sep 13 03:17 lost+found

We can get the details of the snapshot and the parent logical volume

# lvs /dev/system/snap_data
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
snap_data system swi-aos--- 1.00g data 0.00

As you can see, the new snapshot logical volume is visible in the above information.

IMPORTANT NOTE: If the origin volume is going to be left running for a long period and a large
amount of data will be written to the source partition, make the snapshot volume as large as the
origin volume, because the snapshot gets invalidated once the amount of changed data exceeds the
snapshot volume's size.

What if I start working on this origin partition and it gets filled up with more than 1GB of data?

Here my snapshot volume size is 1G, while the actual volume size is 2G, of which only a few MB are used:

# df -h /data/
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/system-data 2.0G 6.1M 1.8G 1% /data

Let me put some dummy content of 1G in size in the /data partition:

# dd if=/dev/zero of=/data/dummy_file2 bs=1G count=1 oflag=dsync


1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 98.5545 s, 10.9 MB/s

Check the status of the snapshot:

# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data system owi-aos--- 2.00g
mgtserv system -wi-ao---- 2.00g
opt system -wi-ao---- 2.00g
root system -wi-ao---- 2.00g
snap_data system swi-I-s--- 1.00g data 100.00
swap system -wi-ao---- 4.00g
tmp system -wi-ao---- 12.00g
var system -wi-ao---- 2.00g

As you can see, 100% of the snapshot size is now occupied, which means the snapshot is invalid
and cannot be used.

Check this further using the commands below:

# lvdisplay /dev/system/snap_data
--- Logical volume ---
LV Path /dev/system/snap_data
LV Name snap_data
VG Name system
LV UUID NwJSQu-NjIr-7Qn0-Wo0q-ig7d-3apy-eChdWD
LV Write Access read/write
LV Creation host, time nds18-rdssrv, 2017-09-13 04:23:35 -0400
LV snapshot status INACTIVE destination for data
LV Status available
# open 0
LV Size 2.00 GiB
Current LE 64
COW-table size 1.00 GiB
COW-table LE 32
Snapshot chunk size 4.00 KiB
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:12
# dmsetup status
system-var: 0 4194304 linear
system-snap_data: 0 4194304 snapshot Invalid

# grep Snapshot /var/log/messages


Sep 13 03:32:19 nds18-rdssrv lvm[12168]: WARNING: Snapshot system-snap_data is now
80.02% full.
Sep 13 03:54:07 nds18-rdssrv lvm[1595]: WARNING: Snapshot system-snap_data is now
83.52% full.
Sep 13 04:29:52 nds18-rdssrv lvm[1606]: WARNING: Snapshot system-snap_data changed
state to: Invalid and should be removed.

As you can see, once the snapshot becomes full it becomes unusable, so it must be removed:

# lvremove -f /dev/system/snap_data
Logical volume "snap_data" successfully removed

How can we avoid such scenarios?


We have the option to automatically extend the snapshot volume when its usage reaches a configured
threshold, or to manually extend the snapshot size before it becomes 100% occupied.

NOTE: The automatic method is preferred, since once the snapshot volume reaches 100% usage the snapshot
gets invalidated, as seen above.

For manually extending the snapshot volume


Before extending the snapshot volume

# lvdisplay /dev/system/snap_data
--- Logical volume ---
LV Path /dev/system/snap_data
LV Name snap_data
VG Name system
LV UUID ETHmgE-sgz0-4o7Q-3GDQ-pUy4-CJPo-D3nlIe
LV Write Access read/write
LV Creation host, time nds18-rdssrv, 2017-09-13 05:21:37 -0400
LV snapshot status active destination for data
LV Status available
# open 0
LV Size 2.00 GiB
Current LE 64
COW-table size 1.00 GiB
COW-table LE 32
Allocated to snapshot 0.00%
Snapshot chunk size 4.00 KiB
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:12

# lvextend -L +1G /dev/system/snap_data


Size of logical volume system/snap_data changed from 1.00 GiB (32 extents) to 2.00 GiB (64 extents).
Logical volume system/snap_data successfully resized.

After extending the snapshot volume

# lvdisplay /dev/system/snap_data
--- Logical volume ---
LV Path /dev/system/snap_data
LV Name snap_data
VG Name system
LV UUID ETHmgE-sgz0-4o7Q-3GDQ-pUy4-CJPo-D3nlIe
LV Write Access read/write
LV Creation host, time nds18-rdssrv, 2017-09-13 05:21:37 -0400
LV snapshot status active destination for data
LV Status available
# open 0
LV Size 2.00 GiB
Current LE 64
COW-table size 2.00 GiB
COW-table LE 64
Allocated to snapshot 0.00%
Snapshot chunk size 4.00 KiB
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:12

As you can see, the LVM COW (Copy-On-Write) table size has increased from 1 GiB to 2 GiB.

For automatically extending the snapshot volume


Open the file below and change the snapshot_autoextend_threshold value from 100 to some value
lower than 100, and set snapshot_autoextend_percent to the percentage by which you would like the
snapshot volume to be extended; more detail on these options is provided in the comments inside the file itself.

# vim /etc/lvm/lvm.conf
# Configuration option activation/snapshot_autoextend_threshold.
# Auto-extend a snapshot when its usage exceeds this percent.
# Setting this to 100 disables automatic extension.
# The minimum value is 50 (a smaller value is treated as 50.)
# Also see snapshot_autoextend_percent.
# Automatic extension requires dmeventd to be monitoring the LV.
#
# Example
# Using 70% autoextend threshold and 20% autoextend size, when a 1G
# snapshot exceeds 700M, it is extended to 1.2G, and when it exceeds
# 840M, it is extended to 1.44G:
# snapshot_autoextend_threshold = 70
#
snapshot_autoextend_threshold = 70

# Configuration option activation/snapshot_autoextend_percent.


# Auto-extending a snapshot adds this percent extra space.
# The amount of additional space added to a snapshot is this
# percent of its current size.
#
# Example
# Using 70% autoextend threshold and 20% autoextend size, when a 1G
# snapshot exceeds 700M, it is extended to 1.2G, and when it exceeds
# 840M, it is extended to 1.44G:
# snapshot_autoextend_percent = 20
#
snapshot_autoextend_percent = 50

For this article I will use the above values.

Next we need to start from scratch again, which means creating a new snapshot volume:

# lvcreate -L 1G -s -n snap_data /dev/system/data
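Automatic extension only works while dmeventd is monitoring the snapshot, as the lvm.conf comment above notes. A hedged check, assuming an LVM version that exposes the seg_monitor field:

# lvs -o lv_name,seg_monitor system
# lvchange --monitor y /dev/system/snap_data

The second command turns monitoring on for the snapshot if the first one reports it as not monitored.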

Now let's again try to fill up the /data partition with more than 1G of data.

In the first attempt I will create a dummy file of 512MB:

# dd if=/dev/zero of=/data/dummy_file2 bs=512M count=1 oflag=dsync


1+0 records in
1+0 records out
536870912 bytes (537 MB) copied, 16.3479 s, 32.8 MB/s

Let's check the snapshot volume size; since the snapshot volume was 1G, it was easily able to absorb this
change without extending its volume size:
# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data system owi-aos--- 2.00g
mgtserv system -wi-ao---- 2.00g
opt system -wi-ao---- 2.00g
root system -wi-ao---- 2.00g
snap_data system swi-a-s--- 1.00g data 50.21

Next let's create one more dummy file with an additional 512M of data:

# dd if=/dev/zero of=/data/dummy_file bs=512M count=1 oflag=dsync


1+0 records in
1+0 records out
536870912 bytes (537 MB) copied, 28.1028 s, 19.1 MB/s

Now let's check the lvs status for snap_data. As you can see, the volume size increased to 1.5GB, since we had
asked LVM to extend the snapshot by 50% when the threshold is reached:

# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data system owi-aos--- 2.00g
mgtserv system -wi-ao---- 2.00g
opt system -wi-ao---- 2.00g
root system -wi-ao---- 2.00g
snap_data system swi-a-s--- 1.50g data 66.94
swap system -wi-ao---- 4.00g
tmp system -wi-ao---- 12.00g
var system -wi-ao---- 2.00g

So this worked perfectly. But make sure the origin's changed data does not grow beyond 1.5GB, or else
the snapshot will again become invalid.

How to restore/merge the snapshot?

Once you are done, it is time to restore your snapshot.

IMPORTANT NOTE: Before merging the snapshot, make sure that the origin volume is unmounted,
or else the merge will be postponed until the next activation of the logical volume.
Below are examples for both scenarios.

If the origin volume is mounted


To merge the snapshot, use the command below:

# lvconvert --merge /dev/system/snap_data


Can't merge until origin volume is closed.
Merging of snapshot system/snap_data will occur on next activation of system/data.

Since the /data partition is mounted, the snapshot merge was deferred; to perform the merge you have to
re-activate the logical volume manually.

Unmount the 'data' partition

# umount /data/

De-activate the data partition

# lvchange -an /dev/system/data

Re-activate the same

# lvchange -ay /dev/system/data


Now validate the content of the 'data' partition.
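(The volume has to be mounted again first; assuming the original mount point, something like:)

# mount /dev/system/data /data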

# ll /data/
total 524320
drwxr-xr-x 2 root root 4096 Sep 13 03:24 dir1
drwxr-xr-x 2 root root 4096 Sep 13 03:24 dir2
drwxr-xr-x 2 root root 4096 Sep 13 03:24 dir3
-rw-r--r-- 1 root root 0 Sep 13 03:24 file1
-rw-r--r-- 1 root root 0 Sep 13 03:24 file2
-rw-r--r-- 1 root root 0 Sep 13 03:24 file3
drwx------ 2 root root 16384 Sep 13 03:17 lost+found

NOTE: If for some reason you cannot perform this re-activation, the snapshot merge will happen
during the next reboot of the node.

If the origin volume is unmounted

# lvconvert --merge /dev/system/snap_data


Merging of volume system/snap_data started.
data: Merged: 33.20%
data: Merged: 54.44%
data: Merged: 75.88%
data: Merged: 98.10%
data: Merged: 100.00%

On another terminal, if you check, you can see that the data partition has the "O" attribute, which means
a snapshot is merging into the origin:

# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data system Owi-a-s--- 2.00g 49.74
mgtserv system -wi-ao---- 2.00g
opt system -wi-ao---- 2.00g
root system -wi-ao---- 2.00g
swap system -wi-ao---- 4.00g
tmp system -wi-ao---- 12.00g
var system -wi-ao---- 2.00g

And we are done here; once it is mounted again, our 'data' partition is back to its original state:

# ll /data/
total 524320
drwxr-xr-x 2 root root 4096 Sep 13 03:24 dir1
drwxr-xr-x 2 root root 4096 Sep 13 03:24 dir2
drwxr-xr-x 2 root root 4096 Sep 13 03:24 dir3
-rw-r--r-- 1 root root 0 Sep 13 03:24 file1
-rw-r--r-- 1 root root 0 Sep 13 03:24 file2
-rw-r--r-- 1 root root 0 Sep 13 03:24 file3
drwx------ 2 root root 16384 Sep 13 03:17 lost+found
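The merge consumes the snapshot volume, so snap_data should no longer exist. A quick check, expected to report that the logical volume cannot be found:

# lvs /dev/system/snap_data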
