1. How to reduce an LVM logical volume?
First of all, make sure sufficient disk space is available before reducing the logical volume (otherwise it could result in data loss). Also, make sure you have a valid data backup before going forward and making any changes.
It is important to shrink the filesystem before reducing the logical volume, to prevent data loss/corruption. The resize2fs program resizes ext2, ext3, or ext4 file systems. It can be used to enlarge or shrink an unmounted file system located on a device. Follow the steps below to reduce a logical volume by, for example, 500GB:
1) Unmount the filesystem
# umount /dev/VG00/LV00
2) Scan and check the filesystem to be on the safe side (resize2fs requires a clean, forced check before shrinking):
# e2fsck -f /dev/VG00/LV00
3) Shrink the filesystem with resize2fs as follows:
# resize2fs /dev/VG00/LV00 xxxxM
where xxxxM is the new, smaller size of the filesystem. It must be no larger than the target size of the logical volume (current size minus 500GB), or the filesystem will be damaged by the lvreduce in the next step.
4) Reduce the logical volume by 500GB with lvreduce:
# lvreduce -L -500G VG00/LV00
This reduces the size of logical volume LV00 in volume group VG00 by 500GB.
5) Mount the filesystem and check the disk space with the df -h command.
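As a combined illustration, here is what the full sequence might look like when reducing a hypothetical 2TB logical volume LV00 in VG00 by 500GB (the sizes and the /data mount point are assumptions for the example):
# umount /dev/VG00/LV00
# e2fsck -f /dev/VG00/LV00
# resize2fs /dev/VG00/LV00 1500G      (shrink the filesystem to 1500GB, no larger than the new LV size)
# lvreduce -L -500G VG00/LV00         (then shrink the LV itself by 500GB)
# mount /dev/VG00/LV00 /data
# df -h /data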
2. What is the difference between Linux and Linux LVM partition types?
There is no specific advantage: both partition types can work with LVM.
The type id is only for informative purposes. Logical volumes don't have a concept of a type; they're just block devices. They do not have a partition ID or type. They are composed of physical extents (PEs), which may be spread over multiple physical volumes (PVs), each of which could be a partition or a complete disk. LVM logical volumes are normally treated like individual partitions, not as disks, so there's no partition table and therefore no partition type id to look for.
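For instance, the partition flag can be set (or left alone) with parted without affecting LVM's ability to use the partition; this sketch assumes a hypothetical disk /dev/sdb with one partition:
# parted /dev/sdb set 1 lvm on    (marks partition 1 with the LVM flag; purely informational)
# pvcreate /dev/sdb1              (works whether or not the flag is set)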
The maximum number of stripes in LVM is 128. The optimal number of stripes depends on the storage devices used for the LVM logical volume.
In the case of local physical disks connected via SAS or some other protocol, the optimal number of stripes is equal to the number of disks.
In the case of SAN storage presented to the Linux machine in the form of LUNs, there may be no advantage to a striped logical volume (a single LUN may provide optimal performance), or there may be an advantage. With a LUN coming from SAN storage, the LUN is often a chunk of storage that comes from a SAN volume, and the SAN volume is often a RAID volume with some sort of striping and/or parity.
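As an illustration, a striped logical volume across four local disks could be created as follows; the volume group name vg_data, the stripe size, and the LV size are assumptions for the example:
# lvcreate -i 4 -I 64 -L 100G -n lv_striped vg_data
Here -i 4 sets the number of stripes (one per disk) and -I 64 sets a 64KB stripe size.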
6. LVM commands are failing with this error: Can't open exclusively. Mounted filesystem?
We often face the problem that we cannot create a logical volume on a disk that is part of multipathed storage, after creating a new disk partition with parted.
Multipathing uses the mpath* name to refer to storage rather than the sd* disk name, since multiple sd* names can refer to the same storage.
The error message below appears because the multipath daemon is accessing the sd* device, and the LVM commands cannot open the device exclusively. Commands such as pvcreate /dev/sd* fail with:
Error: Can't open exclusively. Mounted filesystem?
To resolve the issue:
Run # fuser -m -v /dev/sd* to see which processes are accessing the device.
If multipathd appears, run multipath -ll to determine which mpath* device maps to that disk.
Run pvcreate /dev/mapper/mpath* to successfully create a physical volume on the device.
Continue creating the volume group, logical volume, and filesystem using the correct path to the disk. A sketch of this sequence follows below.
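For example, assuming the new disk is /dev/sdc and multipath maps it as mpath3 (both names are hypothetical):
# fuser -m -v /dev/sdc                 (shows multipathd holding the device)
# multipath -ll                        (shows that sdc is a path of mpath3)
# pvcreate /dev/mapper/mpath3          (create the PV on the multipath device instead)
# vgcreate vg_san /dev/mapper/mpath3   (then continue with VG, LV, and filesystem as usual)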
As a general guideline for specifying the region size for mirrors that are larger than 1.5TB, you could take your mirror size in terabytes and round that number up to the next power of 2, using that number as the -R argument to the lvcreate command. For example, if your mirror size is 1.5TB, you could specify -R 2. If your mirror size is 3TB, you could specify -R 4. For a mirror size of 5TB, you could specify -R 8. For more information, refer to section 4.4.3, Creating Mirrored Volumes.
The above calculation can also be used to decide the region size for large mirrored LVM volumes of 16-20 TB. For example, when creating a clustered mirror LVM volume with a size of 20TB, set a region size of 32 using the -R 32 argument to lvcreate, as shown below:
$ lvcreate -m1 -L 20T -R 32 -n mirror vol_group
9. LVM2 volume group in partial mode with physical volumes marked missing even though they are available in RHEL
Sometimes, attempting to modify a volume group or logical volume fails due to missing devices that are not actually missing:
# lvextend -l+100%PVS /dev/myvg/lv02 /dev/mapper/mpath80
WARNING: Inconsistent metadata found for VG myvg - updating to use version 89
Missing device /dev/mapper/mpath73 reappeared, updating metadata for VG myvg to version 89.
Device still marked missing because of allocated data on it, remove volumes and consider vgreduce --removemissing.
Any attempt to change a VG or LV claims PVs are missing:
Cannot change VG myvg while PVs are missing.
Consider vgreduce --removemissing.
LVM physical volumes are marked with the missing (m) flag in pvs output even though they are healthy and available:
PV                  VG   Fmt  Attr PSize  PFree
/dev/mapper/mpath24 myvg lvm2 a-m  56.20G 0
The volume group is marked as partial, causing LVM commands to fail:
VG   #PV #LV #SN Attr   VSize VFree
myvg  42  10   0 wz-pn- 2.31T 777.11G
Restore each missing physical volume with:
# vgextend --restoremissing <volume group> <physical volume>
# vgextend --restoremissing myvg /dev/mapper/mpath24
Volume group myvg successfully extended
# lvscan
# vgchange -ay
Volumes are now ready to mount as usual with the mount command and/or to add to the /etc/fstab file.
Check that the multipath storage is actually available to the host:
[root@host ~]# multipath -l
mpath1 (350011c600365270c) dm-8 HP 36.4G,ST336754LC
[size=34G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
\_ 0:0:1:0 sda 8:0 [active][undef]
mpath5 (3600508b4001070510000b00001610000) dm-11 HP,HSV300
[size=15G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
\_ 4:0:0:3 sde 8:64 [active][undef]
\_ 5:0:0:3 sdo 8:224 [active][undef]
\_ round-robin 0 [prio=0][enabled]
\_ 4:0:1:3 sdj 8:144 [active][undef]
\_ 5:0:1:3 sdt 65:48 [active][undef]
mpath11 (3600508b4000f314a0000400001600000) dm-13 HP,HSV300
[size=500G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
\_ 4:0:0:5 sdg 8:96 [active][undef]
\_ 5:0:0:5 sdq 65:0 [active][undef]
\_ round-robin 0 [prio=0][enabled]
\_ 4:0:1:5 sdl 8:176 [active][undef]
\_ 5:0:1:5 sdv 65:80 [active][undef]
mpath4 (3600508b4001070510000b000000d0000) dm-10 HP,HSV300
[size=750G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
\_ 4:0:0:2 sdd 8:48 [active][undef]
\_ 5:0:0:2 sdn 8:208 [active][undef]
\_ round-robin 0 [prio=0][enabled]
\_ 4:0:1:2 sdi 8:128 [active][undef]
\_ 5:0:1:2 sds 65:32 [active][undef]
mpath10 (3600508b4000f314a0000400001090000) dm-12 HP,HSV300
[size=350G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
\_ 4:0:0:4 sdf 8:80 [active][undef]
\_ 5:0:0:4 sdp 8:240 [active][undef]
\_ round-robin 0 [prio=0][enabled]
\_ 4:0:1:4 sdk 8:160 [active][undef]
\_ 5:0:1:4 sdu 65:64 [active][undef]
mpath3 (3600508b4001070510000b000000a0000) dm-9 HP,HSV300
[size=750G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
\_ 4:0:0:1 sdc 8:32 [active][undef]
\_ 5:0:0:1 sdm 8:192 [active][undef]
\_ round-robin 0 [prio=0][enabled]
\_ 4:0:1:1 sdh 8:112 [active][undef]
\_ 5:0:1:1 sdr 65:16 [active][undef]
Check that the physical volumes are being scanned and seen by LVM:
[root@host ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/dm-14
  VG Name               VolG_CFD
  Allocated PE          1063
  PV UUID               PKvqoX-hWfx-dQUv-9NCL-Re78-LyIa-we69rm

  --- Physical volume ---
  PV Name               /dev/dm-8
  VG Name               vg00
  PV Size               33.92 GB / not usable 12.89 MB
  Allocatable           yes
  PE Size (KByte)       32768
  Total PE              1085
  Free PE               111
  Allocated PE          974
  PV UUID               GnjUsb-NxJR-aLgC-fga8-Ct1q-cf89-xhaaFs
11. How to Grow an LVM Physical Volume after resizing the disk?
Note : This procedure has the potential to lose data on the disk if done improperly; we strongly recommend a backup be performed before proceeding.
For example, we have resized a disk from 50GB to 120GB:
# pvs
  PV         VG         Fmt  Attr PSize  PFree
  /dev/sdb1  VolGroup02 lvm2 a--  50.00g 2.00g
Whilst the underlying storage (eg: /dev/sdb) may have been resized, the partition we are using as a physical volume (eg: /dev/sdb1) remains at the smaller size.
We'll need to resize the partition, then resize the physical volume, before we can proceed.
First, we confirm the actual storage size with fdisk -ul /dev/sdb and observe the increased disk size. Depending on how the storage is presented, we may need to reboot for this to appear.
We then need to resize the partition on the disk. We can achieve this by observing the starting sector in fdisk -ul /dev/sdb, then removing the partition with fdisk and re-creating it with the same starting sector but the (default) last sector of the drive as the ending sector. Then write the partition table and confirm the change (and the correct starting sector) with fdisk -ul /dev/sdb.
Now we are ready to run pvresize /dev/sdb1 to grow the PV onto the rest of the expanded partition. This will create free extents within the volume group, into which we can then grow a logical volume.
If we run the LV resize with lvresize -r, it will grow the filesystem within the logical volume as well. The whole sequence is sketched below.
12. Delete an LVM partition
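The usual sequence is to unmount the filesystem, remove the logical volume, and then, if they are no longer needed, remove the volume group and physical volume as well. This is a minimal sketch assuming a hypothetical volume /dev/VG00/LV01 on /dev/sdb1:
# umount /dev/VG00/LV01
# lvremove /dev/VG00/LV01
# vgremove VG00        (only if no other LVs remain in the VG)
# pvremove /dev/sdb1   (only if the disk is leaving LVM entirely)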
13. Restore a volume group in Red Hat Enterprise Linux if one of the physical volumes that constitutes the volume group has failed.
NOTE : These commands have the potential to corrupt data and should be executed at one's own discretion.
This procedure requires a recent backup of the LVM configuration. This can be generated with the command vgcfgbackup and is stored in the file /etc/lvm/backup/<volume group name>.
The /etc/lvm/archive directory also contains recent configurations that are created when modifications to the volume group metadata are made. It is recommended that these files be regularly backed up to a safe location so that they will be available if required for recovery purposes.
Assuming a physical volume that was part of a volume group has been lost, the following procedure may be followed. The procedure will replace the failed physical volume with a new disk, leaving any remaining logical volumes accessible for recovery purposes.
The procedure for recovery is as follows:
1. Execute the following command to display information about the volume group in question:
# vgdisplay --partial --verbose
The output will be similar to the following (note that the --partial flag is required to activate or manipulate a volume group that has one or more physical volumes missing, and that use of this flag with LVM2 activation commands (vgchange -ay) will force volumes to be activated in a read-only state):
Partial mode. Incomplete volume groups will be activated read-only.
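For instance, an incomplete volume group could be activated read-only with the following; the volume group name vg01 is hypothetical:
# vgchange -ay --partial vg01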
The LVM snapshot works by logging the changes to the filesystem to the snapshot partition, rather than mirroring the partition. Thus, when you create a snapshot partition, you do not need to use space equal to the size of the partition that you are taking a snapshot of, but rather the amount of changes that it will undergo during the lifetime of the snapshot. This is a function of both how much data is being written to the partition and how long you intend to keep the LVM snapshot.
The example below shows LVM snapshot creation. Here we create a logical volume of 500MB to use as the snapshot volume. This will allow 500MB of changes on the volume we are taking a snapshot of during the lifetime of the snapshot.
The following command will create /dev/ops/dbbackup as a snapshot of /dev/ops/databases.
# lvcreate -L500M -s -n dbbackup /dev/ops/databases
lvcreate -- WARNING: the snapshot must be disabled if it gets full
lvcreate -- INFO: using default snapshot chunk size of 64 KB for "/dev/ops/dbbackup"
lvcreate -- doing automatic backup of "ops"
lvcreate -- logical volume "/dev/ops/dbbackup" successfully created
Now we create the mount point and mount the snapshot.
# mkdir /mnt/ops/dbbackup
# mount /dev/ops/dbbackup /mnt/ops/dbbackup
mount: block device /dev/ops/dbbackup is write-protected, mounting read-only
After performing the backup of the snapshot partition, we release the snapshot. The snapshot will be automatically released when it fills up, but maintaining it incurs a system overhead in the meantime.
# umount /mnt/ops/dbbackup
# lvremove /dev/ops/dbbackup
lvremove -- do you really want to remove "/dev/ops/dbbackup"? [y/n]: y
lvremove -- doing automatic backup of volume group "ops"
lvremove -- logical volume "/dev/ops/dbbackup" successfully removed
16. How can I boot from an LVM snapshot on Red Hat Enterprise Linux?
The snapshot has to be in the same volume group as the original root logical volume. Often, other file systems (eg. /var, /usr) should be snapshotted at the same time if they are separate file systems from root.
Procedure:
Step 1 : Create a snapshot of any local filesystems (for RHEL6, it is recommended that you do not put a '-' in the name, as that makes addressing the volume more complicated):
# lvcreate -s -n varsnapshot -L 1G /dev/VolGroup00/var
# lvcreate -s -n rootsnapshot -L 2G /dev/VolGroup00/root
Step 2 : Mount the root snapshot so we can change the /etc/fstab of the snapshot version:
# mkdir /mnt/snapshot
# mount /dev/VolGroup00/rootsnapshot /mnt/snapshot
# vi /mnt/snapshot/etc/fstab
Step 3 : Change the entries in /mnt/snapshot/etc/fstab to point to the snapshot volumes rather than the original devices:
/dev/VolGroup00/rootsnapshot / ext3 defaults 1 1
/dev/VolGroup00/varsnapshot /var ext3 defaults 1 2
Step 4 : Now unmount the snapshot:
# cd /tmp
# umount /mnt/snapshot
Step 5 : Add an entry in grub to boot into the snapshot:
Step 5a : For Red Hat Enterprise Linux 5, copy the current default grub.conf entry and make a new entry pointing to the snapshot version, as sketched below.
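Purely as an illustration, a grub.conf with such a snapshot entry might look like the following; the kernel version, device paths, and title are hypothetical:
default=0
timeout=5
title Red Hat Enterprise Linux Server (LVM snapshot)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/VolGroup00/rootsnapshot
        initrd /initrd-2.6.18-8.el5.img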
Step 6 : Now you can boot into the snapshot by choosing the correct grub menu entry. To boot back onto the real LVM device, just select the original grub menu entry.
Step 7 : You can verify that you are booted into the snapshot version by checking which LVM device is mounted:
# mount | grep VolGroup00
/dev/mapper/VolGroup00-rootsnapshot on / type ext4 (rw)
/dev/mapper/VolGroup00-varsnapshot on /var type ext4 (rw)
You can remove the snapshot with the following procedure:
Step 1) Remove the grub entry from /boot/grub/grub.conf for your snapshot volume.
Step 2) Boot into (or ensure you are already booted into) the real LVM volume:
# mount | grep VolGroup00
/dev/mapper/VolGroup00-root on / type ext4 (rw)
/dev/mapper/VolGroup00-var on /var type ext4 (rw)
Step 3) Remove the snapshot volumes:
# lvremove /dev/VolGroup00/rootsnapshot
# lvremove /dev/VolGroup00/varsnapshot
Summary
To boot into an LVM snapshot of the root filesystem, you must change only the following locations:
/etc/fstab on the LVM snapshot volume (do not change fstab on the real volume)
/boot/grub/grub.conf, to add an entry that points to the snapshot device as the root disk.
There is no need to rebuild initrd to boot into the snapshot.
17. How much memory is consumed by an LVM snapshot?
An LVM snapshot implements a clone of an existing logical volume (LV) by tracking exceptions (a write request to an area on the origin volume triggers the creation of an exception on the snapshot) to the data on the origin volume. An exception tracks one or more chunks of data that have changed since the time the snapshot was taken. The size of a chunk of data is determined at the time of snapshot creation (see the lvcreate man page, chunksize option) and must be a power of 2, from 4kB to 512kB. The default chunk size is 4kB.
Each exception consumes a small amount of memory in the kernel. The memory consumed by one exception can be found by examining the dm-snapshot-ex slab cache statistics in /proc/slabinfo. A single object in this cache is a single exception.
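The per-exception object size can be checked on a running system by reading the slab statistics; the exact cache name may vary across kernel versions:
# grep dm-snapshot /proc/slabinfo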
The LVM snapshot implementation is efficient, and if possible, it stores more than one chunk of data for each exception. In the worst case, only one chunk of data will be stored for each exception. In the best case, 255 consecutive chunks of data will be stored for each exception. The write I/O patterns determine whether LVM can store consecutive chunks of data in a single exception. If the I/O patterns are mostly sequential, more than one sequential chunk of data will be stored for each exception, and the memory usage will tend towards the best case. However, if the I/O pattern is more random, the memory usage will tend towards the worst case.
i386 (32-bit): 145 exceptions per page (assuming 4kB page size, approximately 28 bytes per exception)
x86_64 (64-bit): 112 exceptions per page (assuming 4kB page size, approximately 36.5 bytes per exception)
Approximate calculation for memory usage:
The memory needs of an LVM snapshot may be estimated with a simple calculation involving the number of exceptions (assuming each exception stores only one chunk of data), the number of exceptions per page, and the page size, in particular:
W = (N / E) * P
where
W is the worst-case memory usage,
N is the number of exceptions,
E is the number of exceptions per page, and
P is the page size.
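As a worked example with assumed numbers: suppose 100GB of data changes on the origin volume, the chunk size is 4kB, and the host is x86_64 (112 exceptions per 4kB page). Then N = 100GB / 4kB = 26,214,400 exceptions, and W = (26,214,400 / 112) * 4kB, roughly 915MB of kernel memory in the worst case.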