
MIGRATE UFS TO ZFS

This document shows how to migrate Solaris 10 from a UFS root filesystem to ZFS. Please back up
the existing UFS filesystem before starting this procedure (a sample ufsdump command is
sketched below). Suppose we have a server with Solaris 10 x86 installed on the first disk ( c1t0d0 )
and a free second disk ( c1t1d0 ). We will copy the root filesystem to the second disk, which
will be formatted with ZFS.
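A minimal backup sketch using ufsdump is shown here; the dump target /backup/root.dump is only an
example and must live on storage outside the two disks involved in this migration (a tape device
such as /dev/rmt/0 would work as well):

bash-3.00# ufsdump 0uf /backup/root.dump /dev/rdsk/c1t0d0s0

The format and df -h output below show the starting point: the UFS root on the first disk and the
empty second disk.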
bash-3.00# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <DEFAULT cyl 2085 alt 2 hd 255 sec 63>
          /pci@0,0/pci1000,8000@14/sd@0,0
       1. c1t1d0 <DEFAULT cyl 2086 alt 2 hd 255 sec 63>
          /pci@0,0/pci1000,8000@14/sd@1,0
Specify disk (enter its number): ^D
bash-3.00#
bash-3.00# df -h
Filesystem             size   used  avail capacity  Mounted on
/dev/dsk/c1t0d0s0       12G   3.3G   8.4G    28%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   4.9G   548K   4.9G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap1.so.1
                        12G   3.3G   8.4G    28%    /lib/libc.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                   4.9G   748K   4.9G     1%    /tmp
swap                   4.9G    28K   4.9G     1%    /var/run
bash-3.00#

We need to prepare the second disk before creating the new ZFS filesystem on top of it. We will use
the format command to set up the partition table on the second disk. Here is the step-by-step guide:
1. Invoke the format command with the -e (expert) option on c1t1d0 as shown below:
bash-3.00# format -e c1t1d0
selecting c1t1d0
[disk formatted]

FORMAT MENU:
        disk       - select a disk
        type       - select (define) a disk type
        partition  - select (define) a partition table
        current    - describe the current disk
        format     - format and analyze the disk
        fdisk      - run the fdisk program
        repair     - repair a defective sector
        label      - write label to the disk
        analyze    - surface analysis
        defect     - defect list management
        backup     - search for backup labels
        verify     - read and display labels
        save       - save new disk/partition definitions
        inquiry    - show vendor, product and revision
        scsi       - independent SCSI mode selects
        cache      - enable, disable or query SCSI disk cache
        volname    - set 8-character volume name
        !<cmd>     - execute <cmd>, then return
        quit
format>

2. Type p to display the current partition table of the c1t1d0 disk. If c1t1d0 is a brand new disk,
you will probably see a warning about the fdisk table as shown below:
format> p
WARNING - This disk may be in use by an application that has
          modified the fdisk table. Ensure that this disk is
          not currently in use before proceeding to use fdisk.
format>

3. To continue, create a new fdisk partition table: type the fdisk command and answer y to accept
the default partition layout.
format> fdisk
No fdisk table exists. The default partition for the disk is:
  a 100% "SOLARIS System" partition
Type "y" to accept the default partition,  otherwise type "n" to edit the
 partition table.
y
format>

4. Type p to display the partition menu:


format> p

PARTITION MENU:
        0      - change `0' partition
        1      - change `1' partition
        2      - change `2' partition
        3      - change `3' partition
        4      - change `4' partition
        5      - change `5' partition
        6      - change `6' partition
        7      - change `7' partition
        9      - change `9' partition
        select - select a predefined table
        modify - modify a predefined partition table
        name   - name the current table
        print  - display the current table
        label  - write partition map and label to the disk
        !<cmd> - execute <cmd>, then return
        quit
partition>

5. Type p again to display the current partition layout:


partition> p
Current partition table (original):
Total disk cylinders available: 2085 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0               0         (0/0/0)           0
  1 unassigned    wm       0               0         (0/0/0)           0
  2     backup    wu       0 - 2084       15.97GB    (2085/0/0) 33495525
  3 unassigned    wm       0               0         (0/0/0)           0
  4 unassigned    wm       0               0         (0/0/0)           0
  5 unassigned    wm       0               0         (0/0/0)           0
  6 unassigned    wm       0               0         (0/0/0)           0
  7 unassigned    wm       0               0         (0/0/0)           0
  8       boot    wu       0 -    0        7.84MB    (1/0/0)       16065
  9 unassigned    wm       0               0         (0/0/0)           0
partition>
6. In this example we assign the whole disk space to partition #0:
partition> 0
Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0               0         (0/0/0)           0
Enter partition id tag[unassigned]:
Enter partition permission flags[wm]:
Enter new starting cyl[0]: 0
Enter partition size[0b, 0c, 0e, 0.00mb, 0.00gb]: 2084c
partition>
partition> p
Current partition table (unnamed):
Total disk cylinders available: 2085 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0 - 2083       15.96GB    (2084/0/0) 33479460
  1 unassigned    wm       0               0         (0/0/0)           0
  2     backup    wu       0 - 2084       15.97GB    (2085/0/0) 33495525
  3 unassigned    wm       0               0         (0/0/0)           0
  4 unassigned    wm       0               0         (0/0/0)           0
  5 unassigned    wm       0               0         (0/0/0)           0
  6 unassigned    wm       0               0         (0/0/0)           0
  7 unassigned    wm       0               0         (0/0/0)           0
  8       boot    wu       0 -    0        7.84MB    (1/0/0)       16065
  9 unassigned    wm       0               0         (0/0/0)           0
partition>

7. The last thing to do is to make the changes permanent by invoking the label command:
partition> label
[0] SMI Label
[1] EFI Label

Specify Label type[0]: 0
Ready to label disk, continue? yes
partition>

8. Finish by typing q twice as shown below:


partition> q

FORMAT MENU:
        disk       - select a disk
        type       - select (define) a disk type
        partition  - select (define) a partition table
        current    - describe the current disk
        format     - format and analyze the disk
        fdisk      - run the fdisk program
        repair     - repair a defective sector
        label      - write label to the disk
        analyze    - surface analysis
        defect     - defect list management
        backup     - search for backup labels
        verify     - read and display labels
        save       - save new disk/partition definitions
        inquiry    - show vendor, product and revision
        scsi       - independent SCSI mode selects
        cache      - enable, disable or query SCSI disk cache
        volname    - set 8-character volume name
        !<cmd>     - execute <cmd>, then return
        quit
format> q
bash-3.00#

Now that the second disk is prepared, we can continue by creating the new ZFS pool called rpool.
We will use the zpool create command to build the pool on slice 0 of the second disk (a ZFS root
pool must live on a slice with an SMI label, which is why we labeled the disk that way in the
previous step).
bash-3.00# zpool create -f rpool c1t1d0s0

If we try the command without -f, we get an invalid vdev specification error, because slice 0 starts
at cylinder 0 and therefore overlaps the boot slice 8. Since that overlap is expected here, we can
proceed by supplying the -f option:


bash-3.00# zpool create rpool c1t1d0s0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c1t1d0s0 overlaps with /dev/dsk/c1t1d0s8
bash-3.00# zpool create -f rpool c1t1d0s0
bash-3.00#

We can use zpool list and zpool status to check the new pool we just created:


bash-3.00# zpool list
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
rpool  15.9G    94K  15.9G     0%  ONLINE  -
bash-3.00# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c1t1d0s0  ONLINE       0     0     0

errors: No known data errors
bash-3.00#

To copy the whole root partition into the new rpool, we will use the Solaris 10 Live Upgrade
feature. The lucreate command creates a ZFS boot environment inside the rpool and automatically
copies the whole root partition into the new pool. It might take some time to finish, depending on
the size of the root partition (it may appear to hang at the Copying stage for a while; a way to
watch the copy's progress is shown after the output below).
bash-3.00# lucreate -n zfsBE -p rpool
Checking GRUB menu...
Analyzing system configuration.
No name for current boot environment.
INFORMATION: The current boot environment is not named - assigning name <c1t0d0s0>.
Current boot environment is named <c1t0d0s0>.
Creating initial configuration for primary boot environment <c1t0d0s0>.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <c1t0d0s0> PBE Boot Device </dev/dsk/c1t0d0s0>.
Comparing source boot environment <c1t0d0s0> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t1d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <zfsBE>.
Source boot environment is <c1t0d0s0>.
Creating boot environment <zfsBE>.
Creating file systems on boot environment <zfsBE>.
Creating <zfs> file system for </> in zone <global> on <rpool/ROOT/zfsBE>.
Populating file systems on boot environment <zfsBE>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment <zfsBE>.
Creating compare database for file system </rpool/ROOT>.
Creating compare database for file system </>.
Updating compare databases on boot environment <zfsBE>.
Making boot environment <zfsBE> bootable.
Updating bootenv.rc on ABE <zfsBE>.
File </boot/grub/menu.lst> propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE <zfsBE> in GRUB menu
Population of boot environment <zfsBE> successful.
Creation of boot environment <zfsBE> successful.
bash-3.00#
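While lucreate sits at the Copying stage, you can follow its progress from a second terminal by
watching the USED column of the new boot environment dataset grow (a simple check, not part of the
original run; rpool/ROOT/zfsBE is the dataset that lucreate reports creating above):

bash-3.00# zfs list rpool/ROOT/zfsBE

When USED approaches the roughly 3.3G that df -h reported for the UFS root, the copy is almost done.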

To check the status of the new boot environment we can use the lustatus command:
bash-3.00# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
c1t0d0s0                   yes      yes    yes       no     -
zfsBE                      yes      no     no        yes    -
bash-3.00#

To make the new boot environment ( zfsBE ) active, we will use the luactivate command as shown
below:
bash-3.00# luactivate zfsBE
Generating boot-sign, partition and slice information for PBE <c1t0d0s0>
A Live Upgrade Sync operation will be performed on startup of boot environment <zfsBE>.
Generating boot-sign for ABE <zfsBE>
NOTE: File </etc/bootsign> not found in top level dataset for BE <zfsBE>
Generating partition and slice information for ABE <zfsBE>
Boot menu exists.
Generating multiboot menu entries for PBE.
Generating multiboot menu entries for ABE.
Disabling splashimage
Re-enabling splashimage
No more bootadm entries. Deletion of bootadm entries is complete.
GRUB menu default setting is unaffected
Done eliding bootadm entries.

**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Boot from the Solaris failsafe or boot in Single User mode from Solaris
Install CD or Network.

2. Mount the Parent boot environment root slice to some directory (like
/mnt). You can use the following command to mount:

     mount -Fufs /dev/dsk/c1t0d0s0 /mnt

3. Run <luactivate> utility with out any arguments from the Parent boot
environment root slice, as shown below:

     /mnt/sbin/luactivate

4. luactivate, activates the previous working boot environment and
indicates the result.

5. Exit Single User mode and reboot the machine.

**********************************************************************

Modifying boot archive service
Propagating findroot GRUB for menu conversion.
File </etc/lu/installgrub.findroot> propagation successful
File </etc/lu/stage1.findroot> propagation successful
File </etc/lu/stage2.findroot> propagation successful
File </etc/lu/GRUB_capability> propagation successful
Deleting stale GRUB loader from all BEs.
File </etc/lu/installgrub.latest> deletion successful
File </etc/lu/stage1.latest> deletion successful
File </etc/lu/stage2.latest> deletion successful
Activation of boot environment <zfsBE> successful.
bash-3.00#
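Before rebooting, it does not hurt to run lustatus once more and confirm that the Active On Reboot
column now shows yes for zfsBE (an optional sanity check, not shown in the original session):

bash-3.00# lustatus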

The last thing to do before rebooting is to install the boot loader on the master boot record of the
second disk ( c1t1d0s0 ):
bash-3.00# installgrub -fm /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0

Then we need to reboot the server using the init command (remember, luactivate warned not to use
reboot, halt, or uadmin):


bash-3.00# init 6
updating /platform/i86pc/boot_archive
propagating updated GRUB menu
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <zfsBE> as <mount-point>//boot/grub/menu.lst.prev.
File </boot/grub/menu.lst> propagation successful
File </etc/lu/GRUB_backup_menu> propagation successful
File </etc/lu/menu.cksum> propagation successful
File </sbin/bootadm> propagation successful
bash-3.00#

The system will automatically boot from the second disk ( c1t1d0 ) using the ZFS boot environment.
With the df command we can easily verify that the system now runs on a ZFS root filesystem:
# df -h
Filesystem             size   used  avail capacity  Mounted on
rpool/ROOT/zfsBE        16G   3.4G   6.7G    35%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   4.3G   356K   4.3G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap1.so.1
                        10G   3.4G   6.7G    35%    /lib/libc.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                   4.3G    40K   4.3G     1%    /tmp
swap                   4.3G    24K   4.3G     1%    /var/run
rpool                   16G    29K   6.7G     1%    /rpool
rpool/ROOT              16G    18K   6.7G     1%    /rpool/ROOT
#
# zpool list
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
rpool  15.9G  4.95G  10.9G    31%  ONLINE  -
# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             8.96G  6.67G  29.5K  /rpool
rpool/ROOT        3.45G  6.67G    18K  /rpool/ROOT
rpool/ROOT/zfsBE  3.45G  6.67G  3.45G  /
rpool/dump        1.50G  6.67G  1.50G  -
rpool/swap        4.01G  10.7G    16K  -
# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c1t1d0s0  ONLINE       0     0     0

errors: No known data errors
#

We can now delete the old UFS boot environment using the ludelete command:
bash-3.00# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
c1t0d0s0                   yes      no     no        yes    -
zfsBE                      yes      yes    yes       no     -
bash-3.00#
bash-3.00# ludelete -f c1t0d0s0
System has findroot enabled GRUB
Updating GRUB menu default setting
Changing GRUB menu default setting to <0>
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <zfsBE> as <mount-point>//boot/grub/menu.lst.prev.
File </etc/lu/GRUB_backup_menu> propagation successful
Successfully deleted entry from GRUB menu
Determining the devices to be marked free.
Updating boot environment configuration database.
Updating boot environment description database on all BEs.
Updating all boot environment configuration databases.
Boot environment <c1t0d0s0> deleted.
bash-3.00#

Now that disk #0 is unused, we can use it as a mirror disk by attaching c1t0d0 to the existing
rpool.
bash-3.00# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <DEFAULT cyl 2085 alt 2 hd 255 sec 63>
          /pci@0,0/pci1000,8000@14/sd@0,0
       1. c1t1d0 <DEFAULT cyl 2085 alt 2 hd 255 sec 63>
          /pci@0,0/pci1000,8000@14/sd@1,0
Specify disk (enter its number): ^D
bash-3.00# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c1t1d0s0  ONLINE       0     0     0

errors: No known data errors
bash-3.00#

But first we need to copy the partition layout of c1t1d0 to c1t0d0 (a quick way to verify the copy
is shown after the output below):

bash-3.00# prtvtoc /dev/rdsk/c1t1d0s2 | fmthard -s - /dev/rdsk/c1t0d0s2
fmthard: Partition 0 overlaps partition 8. Overlap is allowed
        only on partition on the full disk partition).
fmthard: Partition 8 overlaps partition 0. Overlap is allowed
        only on partition on the full disk partition).
fmthard:  New volume table of contents now in place.
bash-3.00#
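As an optional check before attaching the disk, you can print the freshly written label on the first
disk and compare it with the slice layout shown earlier for c1t1d0 (the two tables should match):

bash-3.00# prtvtoc /dev/rdsk/c1t0d0s2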

Then we can attach c1t0d0 to the rpool using the following command:
bash-3.00# zpool attach -f rpool c1t1d0s0 c1t0d0s0

Once c1t0d0s0 is attached to the rpool, the system will synchronize the two disks (in ZFS terms this
is called resilvering). Don't reboot the system before the resilvering process has completed. We can
monitor it using the zpool status command:
bash-3.00# zpool status
  pool: rpool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h3m, 9.23% done, 0h35m to go
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0

errors: No known data errors
bash-3.00#

It will probably take a while for the resilvering process to finish:


bash-3.00# zpool status
  pool: rpool
 state: ONLINE
 scrub: resilver completed after 0h13m with 0 errors on Sun Sep  1 11:58:52 2013
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0

errors: No known data errors
bash-3.00#

The last thing to do is to install the boot loader on c1t0d0 as well, so that the system can boot
from either half of the mirror:


bash-3.00# installgrub -fm /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0
