ZFS Cheatsheet Col's Tech

ZFS Cheatsheet 20Sep06

Updated on 21 Aug '08 - Added section on sharing information following a comment.
Updated on 11 Sept '07 - Updated to show functionality available in Nevada build 71.

This came round on one of the many internal aliases. Thanks, Jim Laurent.
The three primary goals of ZFS are:

1. Highly scalable (128-bit) data repository
2. Ease of administration
3. Guaranteed on-disk data integrity

Source code and community support are available at
http://www.opensolaris.org/os/community/zfs/

Sample ZFS commands and usage

$ man zpool
$ man zfs

Get familiar with the command structure and options.

$ su
Password:
# cd /
# mkfile 100m disk1 disk2 disk3 disk5
# mkfile 50m disk4
# ls -l disk*
-rw------T   1 root     root     104857600 Sep 11 12:15 disk1
-rw------T   1 root     root     104857600 Sep 11 12:15 disk2
-rw------T   1 root     root     104857600 Sep 11 12:15 disk3
-rw------T   1 root     root      52428800 Sep 11 12:15 disk4
-rw------T   1 root     root     104857600 Sep 11 12:15 disk5

http://lildude.co.uk/zfs-cheatsheet[5/15/2012 2:25:20 PM]

Create some "virtual devices" or vdevs as described in the zpool documentation. These can also be real disk slices if you have them available.
# zpool create myzfs /disk1 /disk2
# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
myzfs                   191M     94K    191M     0%  ONLINE     -

Create a storage pool and check its size and usage.

# zpool status -v
  pool: myzfs
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        myzfs       ONLINE       0     0     0
          /disk1    ONLINE       0     0     0
          /disk2    ONLINE       0     0     0

errors: No known data errors

Get a more detailed status of the zfs storage pool.


# zpool destroy myzfs
# zpool list
no pools available

Destroy a zfs storage pool.

# zpool create myzfs mirror /disk1 /disk4
invalid vdev specification
use '-f' to override the following errors:
mirror contains devices of different sizes

Attempting to create a zfs pool with different-size vdevs fails. Using the -f option forces it to succeed, but the pool only gets the space allowed by the smallest device.

# zpool create myzfs mirror /disk1 /disk2 /disk3
# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
myzfs                  95.5M    112K   95.4M     0%  ONLINE     -
# zpool status -v
  pool: myzfs
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        myzfs       ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            /disk1  ONLINE       0     0     0
            /disk2  ONLINE       0     0     0
            /disk3  ONLINE       0     0     0

errors: No known data errors

Create a mirrored storage pool; in this case, a 3-way mirrored storage pool.
# zpool detach myzfs /disk3
# zpool status -v
  pool: myzfs
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        myzfs       ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            /disk1  ONLINE       0     0     0
            /disk2  ONLINE       0     0     0

errors: No known data errors

Detach a device from a mirrored pool.
# zpool attach myzfs /disk1 /disk3
# zpool status -v
  pool: myzfs
 state: ONLINE
 scrub: resilver completed with 0 errors on Tue Sep 11 13:31:49 2007
config:

        NAME        STATE     READ WRITE CKSUM
        myzfs       ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            /disk1  ONLINE       0     0     0
            /disk2  ONLINE       0     0     0
            /disk3  ONLINE       0     0     0

Attach a device to a pool. This creates a two-way mirror if the pool is not already a mirror; otherwise it adds another mirror device, in this case making it a 3-way mirror.

# zpool remove myzfs /disk3
cannot remove /disk3: only inactive hot spares can be removed
# zpool detach myzfs /disk3

Attempt to remove a device from a pool. In this case it's a mirror, so we must use "zpool detach".

# zpool add myzfs spare /disk3
# zpool status -v
  pool: myzfs
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        myzfs       ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            /disk1  ONLINE       0     0     0
            /disk2  ONLINE       0     0     0
        spares
          /disk3    AVAIL

Add a hot spare to a storage pool.
# zpool remove myzfs /disk3
# zpool status -v
  pool: myzfs
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        myzfs       ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            /disk1  ONLINE       0     0     0
            /disk2  ONLINE       0     0     0

errors: No known data errors

Remove a hot spare from a pool.

# zpool offline myzfs /disk1
# zpool status -v
  pool: myzfs
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
 scrub: resilver completed with 0 errors on Tue Sep 11 13:39:25 2007
config:

        NAME        STATE     READ WRITE CKSUM
        myzfs       DEGRADED     0     0     0
          mirror    DEGRADED     0     0     0
            /disk1  OFFLINE      0     0     0
            /disk2  ONLINE       0     0     0

errors: No known data errors

Take the specified device offline. No attempt to read or write to the device will take place until it's brought back online. Use the -t option to temporarily offline a device; a reboot will then bring the device back online.


# zpool online myzfs /disk1
# zpool status -v
  pool: myzfs
 state: ONLINE
 scrub: resilver completed with 0 errors on Tue Sep 11 13:47:14 2007
config:

        NAME        STATE     READ WRITE CKSUM
        myzfs       ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            /disk1  ONLINE       0     0     0
            /disk2  ONLINE       0     0     0

errors: No known data errors

Bring the specified device online.


# zpool replace myzfs /disk1 /disk3
# zpool status -v
  pool: myzfs
 state: ONLINE
 scrub: resilver completed with 0 errors on Tue Sep 11 13:25:48 2007
config:

        NAME        STATE     READ WRITE CKSUM
        myzfs       ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            /disk3  ONLINE       0     0     0
            /disk2  ONLINE       0     0     0

errors: No known data errors

Replace a disk in a pool with another disk, for example when a disk fails.

# zpool scrub myzfs

Perform a scrub of the storage pool to verify that everything checksums correctly. On mirror or raidz pools, ZFS will automatically repair any damage. WARNING: scrubbing is I/O intensive.
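Because scrubbing is I/O intensive, it's common to schedule it for quiet hours rather than run it by hand. A hedged sketch of a root crontab fragment (the schedule and pool name are illustrative, not from the original article):

```shell
# Illustrative root crontab entry: scrub the myzfs pool
# every Sunday at 03:00. Install with 'crontab -e' as root,
# and adjust the pool name and schedule to taste.
0 3 * * 0 /usr/sbin/zpool scrub myzfs
```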

# zpool export myzfs
# zpool list
no pools available

Export a pool from the system for importing on another system.

# zpool import -d / myzfs
# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
myzfs                  95.5M    114K   95.4M     0%  ONLINE     -

Import a previously exported storage pool. If -d is not specified, this command searches /dev/dsk. As we're using files in this example, we need to specify the directory containing the files used by the storage pool.

ZFS Cheatsheet Col's Tech


# zpool upgrade
This system is currently running ZFS pool version 8.
All pools are formatted using this version.
# zpool upgrade -v
This system is currently running ZFS pool version 8.
The following versions are supported:
VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history
 5   Compression using the gzip algorithm
 6   pool properties
 7   Separate intent log devices
 8   Delegated administration
For more information on a particular version, including supported releases, see:
http://www.opensolaris.org/os/community/zfs/version/N
Where 'N' is the version number.

Display the pool's format version. The -v flag shows the features supported by the current version. Use the -a flag to upgrade all pools to the latest on-disk version. Pools that are upgraded will no longer be accessible to systems running older versions.
# zpool iostat 5
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
myzfs        112K  95.4M      0      4     26  11.4K
myzfs        112K  95.4M      0      0      0      0
myzfs        112K  95.4M      0      0      0      0

Get I/O statistics for the pool. The trailing 5 makes it repeat the report every 5 seconds.
# zfs create myzfs/colin
# df -h
Filesystem             kbytes    used   avail capacity  Mounted on
...
myzfs/colin               64M     18K     63M     1%    /myzfs/colin

Create a file system and check it with the standard df -h command. File systems are automatically mounted by default under a path matching the pool name (here /myzfs). See the Mountpoints section of the zfs man page for more details.

# zfs list
NAME           USED  AVAIL  REFER  MOUNTPOINT
myzfs          139K  63.4M    19K  /myzfs
myzfs/colin     18K  63.4M    18K  /myzfs/colin

List current zfs file systems.

# zpool add myzfs /disk1
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses mirror and new vdev is file

Attempting to add a single vdev to a mirrored set fails.

# zpool add myzfs mirror /disk1 /disk5
# zpool status -v
  pool: myzfs
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        myzfs       ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            /disk3  ONLINE       0     0     0
            /disk2  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            /disk1  ONLINE       0     0     0
            /disk5  ONLINE       0     0     0

errors: No known data errors

Add a mirrored set of vdevs.


# zfs create myzfs/colin2
# zfs list
NAME           USED  AVAIL  REFER  MOUNTPOINT
myzfs          172K   159M    21K  /myzfs
myzfs/colin     18K   159M    18K  /myzfs/colin
myzfs/colin2    18K   159M    18K  /myzfs/colin2

Create a second file system. Note that both file systems show 159M available because no quotas are set; each could grow to fill the pool.

# zfs set reservation=20m myzfs/colin
# zfs list -o reservation
RESERV
  none
   20M
  none

Reserve a specified amount of space for a file system, ensuring that other users don't take up all of the space.

# zfs set quota=20m myzfs/colin2
# zfs list -o quota myzfs/colin myzfs/colin2
QUOTA
 none
  20M

Set and view quotas.
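Quota figures can also be consumed by scripts. A hedged sketch, not from the original article: it assumes a build recent enough to support zfs list -H (omit headers) and -p (exact byte values), and uses printf lines as a stand-in for live `zfs list -Hp -o name,used,quota` output so it runs anywhere:

```shell
# Flag datasets using more than 80% of their quota.
# 'zfs list -Hp -o name,used,quota' prints tab-separated exact byte
# counts (an unset quota comes through non-numeric/zero and is skipped).
# The printf lines below are a stand-in sample of that output.
printf '%s\t%s\t%s\n' \
    myzfs 176128 0 \
    myzfs/colin 18432 0 \
    myzfs/colin2 18874368 20971520 |
awk -F'\t' '$3 > 0 && $2 / $3 > 0.8 {
    printf "%s at %.0f%% of quota\n", $1, 100 * $2 / $3 }'
```

On a live system you would replace the printf stand-in with the real zfs list pipeline.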

# zfs set compression=on myzfs/colin2
# zfs list -o compression
COMPRESS
     off
     off
      on

Turn on and verify compression.

# zfs set sharenfs=on myzfs/colin2
# zfs get sharenfs myzfs/colin2
NAME          PROPERTY  VALUE     SOURCE
myzfs/colin2  sharenfs  on        local

Share a filesystem over NFS. There is no need to modify /etc/dfs/dfstab, as the filesystem will be shared automatically on boot.

# zfs set sharesmb=on myzfs/colin2
# zfs get sharesmb myzfs/colin2
NAME          PROPERTY  VALUE     SOURCE
myzfs/colin2  sharesmb  on        local

Share a filesystem over CIFS/SMB. This will make your ZFS filesystem accessible to Windows users.

# zfs snapshot myzfs/colin@test
# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
myzfs             20.2M   139M    21K  /myzfs
myzfs/colin         18K   159M    18K  /myzfs/colin
myzfs/colin@test      0      -    18K  -
myzfs/colin2        18K  20.0M    18K  /myzfs/colin2

Create a snapshot called test.

# zfs rollback myzfs/colin@test

Roll back to a snapshot.

# zfs clone myzfs/colin@test myzfs/colin3
# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
myzfs             20.2M   139M    21K  /myzfs
myzfs/colin         18K   159M    18K  /myzfs/colin
myzfs/colin@test      0      -    18K  -
myzfs/colin2        18K  20.0M    18K  /myzfs/colin2
myzfs/colin3          0   139M    18K  /myzfs/colin3

A snapshot is not directly addressable; a clone must be made. The target dataset can be located anywhere in the ZFS hierarchy, and will be created as the same type as the original.

# zfs destroy myzfs/colin2
# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
myzfs             20.1M   139M    22K  /myzfs
myzfs/colin         18K   159M    18K  /myzfs/colin
myzfs/colin@test      0      -    18K  -
myzfs/colin3          0   139M    18K  /myzfs/colin3

Destroy a filesystem.

# zfs destroy myzfs/colin
cannot destroy 'myzfs/colin': filesystem has children
use '-r' to destroy the following datasets:
myzfs/colin@test

Attempt to destroy a filesystem that has a child, in this case the snapshot. We must either remove the snapshot, or make a clone and promote the clone.

# zfs promote myzfs/colin3
# zfs list
NAME                USED  AVAIL  REFER  MOUNTPOINT
myzfs              20.1M   139M    21K  /myzfs
myzfs/colin            0   159M    18K  /myzfs/colin
myzfs/colin3         18K   139M    18K  /myzfs/colin3
myzfs/colin3@test      0      -    18K  -
# zfs destroy myzfs/colin
# zfs list
NAME                USED  AVAIL  REFER  MOUNTPOINT
myzfs               147K   159M    21K  /myzfs
myzfs/colin3         18K   159M    18K  /myzfs/colin3
myzfs/colin3@test      0      -    18K  -

Promote a clone filesystem so it is no longer dependent on its "origin" snapshot. This makes the snapshot a child of the cloned filesystem, after which we can delete the original filesystem.

# zfs rename myzfs/colin3 myzfs/bob
# zfs list
NAME              USED  AVAIL  REFER  MOUNTPOINT
myzfs             153K   159M    21K  /myzfs
myzfs/bob          18K   159M    18K  /myzfs/bob
myzfs/bob@test       0      -    18K  -
# zfs rename myzfs/bob@test myzfs/bob@newtest
# zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
myzfs                146K   159M    20K  /myzfs
myzfs/bob             18K   159M    18K  /myzfs/bob
myzfs/bob@newtest       0      -    18K  -

Rename a filesystem, and separately rename a snapshot.

# zfs get all
NAME   PROPERTY    VALUE                  SOURCE
myzfs  type        filesystem             -
myzfs  creation    Tue Sep 11 14:21 2007  -
myzfs  used        146K                   -
myzfs  available   159M                   -
myzfs  referenced  20K                    -
[...]

Display properties for the given datasets. This can be refined further using options.

# zpool destroy myzfs
cannot destroy 'myzfs': pool is not empty
use '-f' to force destruction anyway

You can't destroy a pool with active filesystems.

ZFS Cheatsheet Col's Tech


# zfs unmount myzfs/bob
# df -h
myzfs                  159M     20K    159M     1%    /myzfs

Unmount a ZFS file system.

# zfs mount myzfs/bob
# df -h
myzfs                  159M     20K    159M     1%    /myzfs
myzfs/bob              159M     18K    159M     1%    /myzfs/bob

Mount a ZFS filesystem. This is usually done automatically on boot.

# zfs send myzfs/bob@newtest | ssh localhost zfs receive myzfs/backup
# zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
myzfs                   172K   159M    20K  /myzfs
myzfs/backup             18K   159M    18K  /myzfs/backup
myzfs/backup@newtest       0      -    18K  -
myzfs/bob                18K   159M    18K  /myzfs/bob
myzfs/bob@newtest          0      -    18K  -

Create a stream representation of the snapshot and redirect it to zfs receive. In this example I've redirected to the localhost for illustration purposes. This can be used to back up to a remote host, or even to a local file.

# zpool history
History for 'myzfs':
2007-09-11.15:35:50 zpool create myzfs mirror /disk1 /disk2 /disk3
2007-09-11.15:36:00 zpool detach myzfs /disk3
2007-09-11.15:36:10 zpool attach myzfs /disk1 /disk3
2007-09-11.15:36:53 zpool detach myzfs /disk3
2007-09-11.15:36:59 zpool add myzfs spare /disk3
2007-09-11.15:37:09 zpool remove myzfs /disk3
2007-09-11.15:37:18 zpool offline myzfs /disk1
2007-09-11.15:37:27 zpool online myzfs /disk1
2007-09-11.15:37:37 zpool replace myzfs /disk1 /disk3
2007-09-11.15:37:47 zpool scrub myzfs
2007-09-11.15:37:57 zpool export myzfs
2007-09-11.15:38:05 zpool import -d / myzfs
2007-09-11.15:38:52 zfs create myzfs/colin
2007-09-11.15:39:27 zpool add myzfs mirror /disk1 /disk5
2007-09-11.15:39:38 zfs create myzfs/colin2
2007-09-11.15:39:50 zfs set reservation=20m myzfs/colin
2007-09-11.15:40:18 zfs set quota=20m myzfs/colin2
2007-09-11.15:40:35 zfs set compression=on myzfs/colin2
2007-09-11.15:40:48 zfs snapshot myzfs/colin@test
2007-09-11.15:40:59 zfs rollback myzfs/colin@test
2007-09-11.15:41:11 zfs clone myzfs/colin@test myzfs/colin3
2007-09-11.15:41:25 zfs destroy myzfs/colin2
2007-09-11.15:42:12 zfs promote myzfs/colin3
2007-09-11.15:42:26 zfs rename myzfs/colin3 myzfs/bob
2007-09-11.15:42:57 zfs destroy myzfs/colin
2007-09-11.15:43:23 zfs rename myzfs/bob@test myzfs/bob@newtest
2007-09-11.15:44:30 zfs receive myzfs/backup

Display the command history of all storage pools. This can be limited to a single pool by specifying its name on the command line. The history is only stored for existing pools; once you've destroyed the pool, you'll no longer have access to its history.
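The timestamped history lends itself to post-processing with standard tools. A small sketch (the awk tally is my illustration, not part of the original output; the printf lines replay an excerpt of the history above so it runs without a live pool):

```shell
# Tally zpool vs zfs operations in saved 'zpool history' output.
# On a live system, pipe 'zpool history myzfs' straight into awk;
# here printf replays a captured excerpt instead.
printf '%s\n' \
    "2007-09-11.15:35:50 zpool create myzfs mirror /disk1 /disk2 /disk3" \
    "2007-09-11.15:36:00 zpool detach myzfs /disk3" \
    "2007-09-11.15:38:52 zfs create myzfs/colin" \
    "2007-09-11.15:40:48 zfs snapshot myzfs/colin@test" |
awk '$2 == "zpool" { zp++ } $2 == "zfs" { zf++ }
     END { printf "zpool ops: %d, zfs ops: %d\n", zp, zf }'
```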

# zpool destroy -f myzfs
# zpool status -v
no pools available

Use the -f option to destroy a pool that has filesystems created on it.


Tagged with: Software and Cheatsheet

26 Comments


1

Alexandre Dumont 11:12 AM Tuesday, 22 Apr 2008

Thanks for this very good Cheatsheet.


Regarding destroying a snapshot having a dependent clone, you get the following error:
# zfs destroy local/alex@1
cannot destroy 'local/alex@1': snapshot has dependent clones
use '-R' to destroy the following datasets:
local/clone1
I was wondering if there is a way to see all the zfs datasets that actually are clones of a snapshot (without having to try and fail on a destroy). I found the following way:
# zfs get origin
NAME                 PROPERTY  VALUE         SOURCE
local                origin    -             -
local/alex           origin    -             -
local/alex@1         origin    -             -
local/alex/subdir01  origin    -             -
local/alex/subdir02  origin    -             -
local/clone1         origin    local/alex@1  -
That way you can see which zfs is a clone of a snapshot.
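Alex's trick can be taken a step further with the scripting-friendly -H flag, which makes zfs get emit tab-separated name/property/value/source lines. A hedged sketch (the printf lines stand in for live `zfs get -H origin` output, so it runs on a system without ZFS):

```shell
# List only datasets that are clones (origin != '-').
# 'zfs get -H origin' emits tab-separated name/property/value/source;
# printf below replays a sample of that output.
printf '%s\torigin\t%s\t-\n' \
    local - \
    local/alex - \
    local/alex@1 - \
    local/clone1 local/alex@1 |
awk -F'\t' '$3 != "-" { printf "%s is a clone of %s\n", $1, $3 }'
```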

Colin 12:45 PM Tuesday, 22 Apr 2008

Great tip Alex. Thanks.



chris 5:11 PM Wednesday, 13 Aug 2008

How do you properly share a ZFS pool? I've got a pool that I created using
zpool create sata /dev/dsk/c6d0 /dev/dsk/c7d0
zfs create /sata/satapool
I'm trying to use a ZFS pool on an OpenSolaris server from a Windows XP PC with UNIX services installed.

Colin 5:43 PM Wednesday, 13 Aug 2008

Chris: You can share it by running either:


# zfs set sharenfs=on /sata/satapool

... and then access the filesystem from your XP machine over NFS
OR
# zfs set sharesmb=on /sata/satapool

... and then access the filesystem from your XP machine over CIFS/SMB (you shouldn't need USFW for this).
This last one does require you to be running a certain level of OpenSolaris/SolarisExpress, but I can't
remember which rev off the top of my head. Check which rev of nevada introduces native CIFS support.

Brooke 10:55 PM Thursday, 29 Jan 2009

How about creating an ALT BE for LiveUpgrade while using ZFS? Does it require two different zpools, and if so, if you only have two disks this would force you into handing disk slices off to ZFS (something ZFS prefers you don't do, for performance reasons).

Colin 9:51 AM Friday, 30 Jan 2009

Brooke: If you're using ZFS for a boot disk, you have to use slices anyway at the moment (see my HOWTO:
Migrate a UFS Root Filesystem to ZFS post), so yes, you'll need to use slices for the LU. This isn't the case in
OpenSolaris though as it uses a different upgrade mechanism taking better advantage of ZFS snapshots.

Dave 3:46 PM Tuesday, 15 Sep 2009


Great cheat sheet! My only question is if your OS drive gets borked for one reason or another, what's the best
way to get your ZFS pool back online? Is it as simple as reinstalling the OS and then importing the ZFS pool,
or can you not do that since you (probably) did not export the pool prior to the OS dying? Do you need to
recreate the pool with the various vdevs that compose it?

Colin 2:54 PM Wednesday, 16 Sep 2009

Dave: Assuming you've rebuilt your OS, you should be able to import the pool into the newly built OS. zpool import alone should list it, and you can then attempt to import it using zpool import [poolname]. If this fails because it wasn't cleanly exported, you can force it with the -f option.

Patrick 1:01 PM Saturday, 23 Jan 2010

Colin,
Thank you for a great resource. I just had the same problem as Dave at #7 and have managed to restore my
zfs pool without resorting to my tape drives. Marvellous!

10

Pala 12:48 PM Friday, 9 Apr 2010

Hi Colin,
This may sound weird, but I have one query: can we create zfs pools on a server with UFS installed?
Thanks,
Pala.

11

Colin 2:56 PM Friday, 9 Apr 2010

@Pala
Yes you can, but you'll need to use a disk or slice that is not currently in use by a UFS filesystem as you'll lose
the data on that disk/slice.
For testing or educational purposes, you could use files on a UFS filesystem as "virtual disk devices" like I do
right at the beginning of this cheatsheet, but I wouldn't recommend this for production use as the performance
will be quite poor in comparison.

12

Pala 7:13 PM Monday, 12 Apr 2010


Colin,
Thank you for the clarification. Are you going to post about zones/containers in the near future?
Cheers,
Pala.

13

rush 4:39 PM Wednesday, 14 Apr 2010

Thanks for this sheet!


If you use FreeBSD you might consider this:
http://mfsbsd.vx.sk/
It's a FreeBSD 8.0 install CD which enables you to install a zfs-only system easily.
Cheers,
rush

14

stc 3:43 AM Saturday, 29 May 2010

Great stuff, Colin! What's your recommendation on migrating a ZFS pool from devices on one san array to
devices on another array? A couple of constraints: 1) online migration is preferred. 2) new devices are smaller
size than existing devices. 3) the number of new devices in the pool is greater than the number of existing
devices in the pool (due to the smaller size of the new devices)
Thoughts?

15

Colin 8:06 AM Saturday, 29 May 2010

@stc As your underlying devices are changing, your best method is a "zfs send | zfs receive" to effectively backup and restore in one go. Sadly no online migration, but once you have your two copies, you can just export the old pool and import the new one, changing its mount point to that of the old.

16

Jimbo 5:08 AM Tuesday, 8 Jun 2010

Hey Colin, I just setup a 4 disk raid5 pool. Then I noticed somehow using the create command only three of the
four disks were added (After I imported the pool and filled it with data). Can I add in the 4th easily without losing
the data, or do I have to break it and start over? Any ideas? Thanks!

17

Colin 8:30 AM Tuesday, 8 Jun 2010

@Jimbo: by RAID5, I assume you mean RAID-Z. Sadly, you can't add another disk to a RAID-Z configuration.
You can only add a set of disks for an additional top-level vdev. See here for more details.

18

Brian 5:50 PM Monday, 14 Jun 2010

Great list!
Typing "zdb" with no arguments produces a list of pools and the storage devices they are using. It's a quick way to find out the serial number of a drive in a zpool.

Pingbacks
ZFS Cheatsheet Updated for snv_71 :: Col's Tech Stuff
HOWTO: Create a ZFS Filesystem OS X Leopard :: Col's Tech Stuff
Zfs versions feature madness I see 0xDEAD disks
Added Sharing Examples to ZFS Cheatsheet :: Col's Tech Stuff
Chester LUG Blog Tech Meet 1
links for 2009-07-15 /home/servrrockr
kb.hurricane-ridge.com / Bookmarks for October 1, 2009 through October 2, 2009
Topant de cap Blog Archive Sistemes d'arxiu amb compressió i ZFS


Comments Closed
If you have any further questions or comments, feel free to send them to me directly.

The views expressed on this web site are my own and do not necessarily reflect the views of Oracle/Sun Microsystems.
Copyright 2005 - 2012 Colin Seymour All rights reserved. Privacy Policy