Date:      Tue, 3 Jun 2014 22:29:10 +0000 (UTC)
From:      Warren Block <wblock@FreeBSD.org>
To:        doc-committers@freebsd.org, svn-doc-projects@freebsd.org
Subject:   svn commit: r45002 - projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs
Message-ID:  <201406032229.s53MTAKL027218@svn.freebsd.org>

Author: wblock
Date: Tue Jun  3 22:29:10 2014
New Revision: 45002
URL: http://svnweb.freebsd.org/changeset/doc/45002

Log:
  Latest edits for clarity and consistency.

Modified:
  projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml

Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml
==============================================================================
--- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml	Tue Jun  3 21:18:23 2014	(r45001)
+++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml	Tue Jun  3 22:29:10 2014	(r45002)
@@ -48,8 +48,8 @@
     overcome many of the major problems found in previous
     designs.</para>
 
-  <para>Originally developed at &sun;, ongoing <acronym>ZFS</acronym>
-    development has moved to the <link
+  <para><acronym>ZFS</acronym> was originally developed at &sun;.
+    Ongoing open source development has since moved to the <link
       xlink:href="http://open-zfs.org">OpenZFS Project</link>.</para>
 
   <para><acronym>ZFS</acronym> has three major design goals:</para>
@@ -116,8 +116,8 @@
       pool.  This new space is then made available to all of the file
       systems.  <acronym>ZFS</acronym> also has a number of different
       properties that can be applied to each file system, giving many
-      advantages to creating a number of different filesystems and
-      datasets rather than a single monolithic filesystem.</para>
+      advantages to creating a number of different file systems and
+      datasets rather than a single monolithic file system.</para>
   </sect1>
 
   <sect1 xml:id="zfs-quickstart">
@@ -227,7 +227,7 @@ example on /example (zfs, local)
 example/data on /example/data (zfs, local)
 example/compressed on /example/compressed (zfs, local)</screen>
 
-      <para>After creatopm, <acronym>ZFS</acronym> datasets can be
+      <para>After creation, <acronym>ZFS</acronym> datasets can be
 	used like any file system.  However, many other features are
 	available which can be set on a per-dataset basis.  In the
 	example below, a new file system called
@@ -516,7 +516,7 @@ errors: No known data errors</screen>
       replacing, and managing disks.  The
       <link linkend="zfs-zfs"><command>zfs</command></link> utility
       deals with creating, destroying, and managing datasets,
-      both <link linkend="zfs-term-filesystem">filesystems</link> and
+      both <link linkend="zfs-term-filesystem">file systems</link> and
       <link linkend="zfs-term-volume">volumes</link>.</para>
 
     <sect2 xml:id="zfs-zpool-create">
@@ -534,10 +534,85 @@ errors: No known data errors</screen>
 	the vdev.  The exceptions are mirrors, which allow additional
 	disks to be added to the vdev, and stripes, which can be
 	upgraded to mirrors by attaching an additional disk to the
-	vdev.  Although additional vdevs can be added to a pool, the
-	layout of the pool cannot be changed once the pool has been
-	created.  Instead the data must be backed up and the pool
-	destroyed and recreated.</para>
+	vdev.  Although additional vdevs can be added to expand a
+	pool, the layout of the pool cannot be changed after pool
+	creation.  Instead, the data must be backed up and the
+	pool destroyed and recreated.</para>
+
+      <para>Create a simple mirror pool:</para>
+
+      <screen>&prompt.root; <userinput>zpool create <replaceable>mypool</replaceable> mirror <replaceable>/dev/ada1</replaceable> <replaceable>/dev/ada2</replaceable></userinput>
+&prompt.root; <userinput>zpool status</userinput>
+  pool: mypool
+ state: ONLINE
+  scan: none requested
+config:
+
+        NAME        STATE     READ WRITE CKSUM
+        mypool      ONLINE       0     0     0
+          mirror-0  ONLINE       0     0     0
+            ada1    ONLINE       0     0     0
+            ada2    ONLINE       0     0     0
+
+errors: No known data errors</screen>
+
+      <para>Multiple vdevs can be created at once.  Specify multiple
+	groups of disks separated by the vdev type keyword,
+	<literal>mirror</literal> in this example:</para>
+
+      <screen>&prompt.root; <userinput>zpool create <replaceable>mypool</replaceable> mirror <replaceable>/dev/ada1</replaceable> <replaceable>/dev/ada2</replaceable> mirror <replaceable>/dev/ada3</replaceable> <replaceable>/dev/ada4</replaceable></userinput>
+  pool: mypool
+ state: ONLINE
+  scan: none requested
+config:
+
+        NAME        STATE     READ WRITE CKSUM
+        mypool      ONLINE       0     0     0
+          mirror-0  ONLINE       0     0     0
+            ada1    ONLINE       0     0     0
+            ada2    ONLINE       0     0     0
+          mirror-1  ONLINE       0     0     0
+            ada3    ONLINE       0     0     0
+            ada4    ONLINE       0     0     0
+
+errors: No known data errors</screen>
+
+      <para>Pools can also be constructed using partitions rather than
+	whole disks.  Putting <acronym>ZFS</acronym> in a separate
+	partition allows the same disk to have other partitions for
+	other purposes.  In
+	particular, partitions with bootcode and file systems needed
+	for booting can be added.  This allows booting from disks that
+	are also members of a pool.  There is no performance penalty
+	on &os; when using a partition rather than a whole disk.
+	Using partitions also allows the administrator to
+	<emphasis>under-provision</emphasis> the disks, using less
+	than the full capacity.  If a future replacement disk of the
+	same nominal size as the original actually has a slightly
+	smaller capacity, the smaller partition will still fit, and
+	the replacement disk can still be used.</para>
+
+      <para>Create a
+	<link linkend="zfs-term-vdev-raidz">RAID-Z2</link> pool using
+	partitions:</para>
+
+      <screen>&prompt.root; <userinput>zpool create <replaceable>mypool</replaceable> raidz2 <replaceable>/dev/ada0p3</replaceable> <replaceable>/dev/ada1p3</replaceable> <replaceable>/dev/ada2p3</replaceable> <replaceable>/dev/ada3p3</replaceable> <replaceable>/dev/ada4p3</replaceable> <replaceable>/dev/ada5p3</replaceable></userinput>
+&prompt.root; <userinput>zpool status</userinput>
+  pool: mypool
+ state: ONLINE
+  scan: none requested
+config:
+
+        NAME        STATE     READ WRITE CKSUM
+        mypool      ONLINE       0     0     0
+          raidz2-0  ONLINE       0     0     0
+            ada0p3  ONLINE       0     0     0
+            ada1p3  ONLINE       0     0     0
+            ada2p3  ONLINE       0     0     0
+            ada3p3  ONLINE       0     0     0
+            ada4p3  ONLINE       0     0     0
+            ada5p3  ONLINE       0     0     0
+
+errors: No known data errors</screen>
 
       <para>A pool that is no longer needed can be destroyed so that
 	the disks can be reused.  Destroying a pool involves first
@@ -559,20 +634,183 @@ errors: No known data errors</screen>
 	<link linkend="zfs-term-vdev">vdev types</link> allow disks to
 	be added to the vdev after creation.</para>
 
+      <para>A pool created with a single disk lacks redundancy.
+	Corruption can be detected but
+	not repaired, because there is no other copy of the data.
+
+	The <link linkend="zfs-term-copies">copies</link> property may
+	be able to recover from a small failure such as a bad sector,
+	but does not provide the same level of protection as mirroring
+	or <acronym>RAID-Z</acronym>.  Starting with a pool consisting
+	of a single disk vdev, <command>zpool attach</command> can be
+	used to add an additional disk to the vdev, creating a mirror.
+	<command>zpool attach</command> can also be used to add
+	additional disks to a mirror group, increasing redundancy and
+	read performance.  If the disks being used for the pool are
+	partitioned, replicate the layout of the first disk onto the
+	second.  <command>gpart backup</command> and
+	<command>gpart restore</command> can be used to make this
+	process easier.</para>
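+
+      <para>For example, assuming <replaceable>ada0</replaceable> is
+	the existing partitioned disk and
+	<replaceable>ada1</replaceable> is the blank disk being added,
+	the partition layout can be copied with a command like
+	this:</para>
+
+      <screen>&prompt.root; <userinput>gpart backup <replaceable>ada0</replaceable> | gpart restore <replaceable>ada1</replaceable></userinput></screen>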
+
+      <para>Upgrade the single disk (stripe) vdev
+	<replaceable>ada0p3</replaceable> to a mirror by attaching
+	<replaceable>ada1p3</replaceable>:</para>
+
+      <screen>&prompt.root; <userinput>zpool status</userinput>
+  pool: mypool
+ state: ONLINE
+  scan: none requested
+config:
+
+        NAME        STATE     READ WRITE CKSUM
+        mypool      ONLINE       0     0     0
+          ada0p3    ONLINE       0     0     0
+
+errors: No known data errors
+&prompt.root; <userinput>zpool attach <replaceable>mypool</replaceable> <replaceable>ada0p3</replaceable> <replaceable>ada1p3</replaceable></userinput>
+Make sure to wait until resilver is done before rebooting.
+
+If you boot from pool 'mypool', you may need to update
+boot code on newly attached disk 'ada1p3'.
+
+Assuming you use GPT partitioning and 'da0' is your new boot disk
+you may use the following command:
+
+        gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
+&prompt.root; <userinput>gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 <replaceable>ada1</replaceable></userinput>
+bootcode written to ada1
+&prompt.root; <userinput>zpool status</userinput>
+  pool: mypool
+ state: ONLINE
+status: One or more devices is currently being resilvered.  The pool will
+        continue to function, possibly in a degraded state.
+action: Wait for the resilver to complete.
+  scan: resilver in progress since Fri May 30 08:19:19 2014
+        527M scanned out of 781M at 47.9M/s, 0h0m to go
+        527M resilvered, 67.53% done
+config:
+
+        NAME        STATE     READ WRITE CKSUM
+        mypool      ONLINE       0     0     0
+          mirror-0  ONLINE       0     0     0
+            ada0p3  ONLINE       0     0     0
+            ada1p3  ONLINE       0     0     0  (resilvering)
+
+errors: No known data errors
+&prompt.root; <userinput>zpool status</userinput>
+  pool: mypool
+ state: ONLINE
+  scan: resilvered 781M in 0h0m with 0 errors on Fri May 30 08:15:58 2014
+config:
+
+        NAME        STATE     READ WRITE CKSUM
+        mypool      ONLINE       0     0     0
+          mirror-0  ONLINE       0     0     0
+            ada0p3  ONLINE       0     0     0
+            ada1p3  ONLINE       0     0     0
+
+errors: No known data errors</screen>
+
       <para>When adding disks to the existing vdev is not an option,
-	as for RAID-Z, the other option is to add a vdev to the pool.
-	It is possible, but discouraged, to mix vdev types.
-	<acronym>ZFS</acronym> stripes data across each of the vdevs.
-	For example, if there are two mirror vdevs, then this is
-	effectively a <acronym>RAID</acronym> 10, striping the writes
-	across the two sets of mirrors.  Space is allocated so that
-	each vdev reaches 100% full at the same time, so there is a
-	performance penalty if the vdevs have different amounts of
-	free space.</para>
+	as for <acronym>RAID-Z</acronym>, an alternative method is to
+	add another vdev to the pool.  Additional vdevs provide higher
+	performance, distributing writes across the vdevs.  Each vdev
+	is responsible for providing its own redundancy.  It is
+	possible, but discouraged, to mix vdev types, like
+	<literal>mirror</literal> and <literal>RAID-Z</literal>.
+	Adding a non-redundant vdev to a pool containing mirror or
+	<acronym>RAID-Z</acronym> vdevs risks the data on the entire
+	pool.  Writes are distributed, so the failure of the
+	non-redundant disk will result in the loss of a fraction of
+	every block that has been written to the pool.</para>
+
+      <para>Data is striped across each of the vdevs.  For example,
+	with two mirror vdevs, this is effectively a
+	<acronym>RAID</acronym> 10 that stripes writes across two sets
+	of mirrors.  Space is allocated so that each vdev reaches 100%
+	full at the same time.  There is a performance penalty if the
+	vdevs have different amounts of free space, as a
+	disproportionate amount of the data is written to the less
+	full vdev.</para>
+
+      <para>When attaching additional devices to a boot pool, remember
+	to update the bootcode.</para>
+
+      <para>Attach a second mirror group (<filename>ada2p3</filename>
+	and <filename>ada3p3</filename>) to the existing
+	mirror:</para>
+
+      <screen>&prompt.root; <userinput>zpool status</userinput>
+  pool: mypool
+ state: ONLINE
+  scan: resilvered 781M in 0h0m with 0 errors on Fri May 30 08:19:35 2014
+config:
+
+        NAME        STATE     READ WRITE CKSUM
+        mypool      ONLINE       0     0     0
+          mirror-0  ONLINE       0     0     0
+            ada0p3  ONLINE       0     0     0
+            ada1p3  ONLINE       0     0     0
+
+errors: No known data errors
+&prompt.root; <userinput>zpool add <replaceable>mypool</replaceable> mirror <replaceable>ada2p3</replaceable> <replaceable>ada3p3</replaceable></userinput>
+&prompt.root; <userinput>gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 <replaceable>ada2</replaceable></userinput>
+bootcode written to ada2
+&prompt.root; <userinput>gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 <replaceable>ada3</replaceable></userinput>
+bootcode written to ada3
+&prompt.root; <userinput>zpool status</userinput>
+  pool: mypool
+ state: ONLINE
+  scan: scrub repaired 0 in 0h0m with 0 errors on Fri May 30 08:29:51 2014
+config:
+
+        NAME        STATE     READ WRITE CKSUM
+        mypool      ONLINE       0     0     0
+          mirror-0  ONLINE       0     0     0
+            ada0p3  ONLINE       0     0     0
+            ada1p3  ONLINE       0     0     0
+          mirror-1  ONLINE       0     0     0
+            ada2p3  ONLINE       0     0     0
+            ada3p3  ONLINE       0     0     0
+
+errors: No known data errors</screen>
 
       <para>Currently, vdevs cannot be removed from a pool, and disks
 	can only be removed from a mirror if there is enough remaining
-	redundancy.</para>
+	redundancy.  If only one disk in a mirror group remains, it
+	ceases to be a mirror and reverts to being a stripe, risking
+	the entire pool if that remaining disk fails.</para>
+
+      <para>Remove a disk from a three-way mirror group:</para>
+
+      <screen>&prompt.root; <userinput>zpool status</userinput>
+  pool: mypool
+ state: ONLINE
+  scan: scrub repaired 0 in 0h0m with 0 errors on Fri May 30 08:29:51 2014
+config:
+
+        NAME        STATE     READ WRITE CKSUM
+        mypool      ONLINE       0     0     0
+          mirror-0  ONLINE       0     0     0
+            ada0p3  ONLINE       0     0     0
+            ada1p3  ONLINE       0     0     0
+            ada2p3  ONLINE       0     0     0
+
+errors: No known data errors
+&prompt.root; <userinput>zpool detach <replaceable>mypool</replaceable> <replaceable>ada2p3</replaceable></userinput>
+&prompt.root; <userinput>zpool status</userinput>
+  pool: mypool
+ state: ONLINE
+  scan: scrub repaired 0 in 0h0m with 0 errors on Fri May 30 08:29:51 2014
+config:
+
+        NAME        STATE     READ WRITE CKSUM
+        mypool      ONLINE       0     0     0
+          mirror-0  ONLINE       0     0     0
+            ada0p3  ONLINE       0     0     0
+            ada1p3  ONLINE       0     0     0
+
+errors: No known data errors</screen>
     </sect2>
 
     <sect2 xml:id="zfs-zpool-status">
@@ -580,10 +818,10 @@ errors: No known data errors</screen>
 
       <para>Pool status is important.  If a drive goes offline or a
 	read, write, or checksum error is detected, the corresponding
-	error count is incremented.  The <command>status</command>
-	output shows the configuration and status of each device in
-	the pool, in addition to the status of the entire pool
-	Actions that need to be taken and details about the last <link
+	error count increases.  The <command>status</command> output
+	shows the configuration and status of each device in the pool
+	and the status of the entire pool.  Actions that need to be
+	taken and details about the last <link
 	  linkend="zfs-zpool-scrub"><command>scrub</command></link>
 	are also shown.</para>
 
@@ -622,42 +860,165 @@ errors: No known data errors</screen>
     <sect2 xml:id="zfs-zpool-replace">
       <title>Replacing a Functioning Device</title>
 
-      <para>There are a number of situations in which it may be
-	desirable to replace a disk with a different disk.  This
-	process requires connecting the new disk at the same time as
-	the disk to be replaced.  <command>zpool replace</command>
-	will copy all of the data from the old disk to the new one.
-	After this operation completes, the old disk is disconnected
-	from the vdev.  If the new disk is larger than the old disk,
-	it may be possible to grow the zpool, using the new space.
-	See
-	<link linkend="zfs-zpool-online">Growing a Pool</link>.</para>
+      <para>There are a number of situations where it may be
+	desirable to replace one disk with a different disk.  When
+	replacing a working disk, the process keeps the old disk
+	online during the replacement.  The pool never enters a
+	<link linkend="zfs-term-degraded">degraded</link> state,
+	reducing the risk of data loss.
+	<command>zpool replace</command> copies all of the data from
+	the old disk to the new one.  After the operation completes,
+	the old disk is disconnected from the vdev.  If the new disk
+	is larger than the old disk, it may be possible to grow the
+	zpool, using the new space.  See <link
+	  linkend="zfs-zpool-online">Growing a Pool</link>.</para>
+
+      <para>Replace a functioning device in the pool:</para>
+
+      <screen>&prompt.root; <userinput>zpool status</userinput>
+  pool: mypool
+ state: ONLINE
+  scan: none requested
+config:
+
+        NAME        STATE     READ WRITE CKSUM
+        mypool      ONLINE       0     0     0
+          mirror-0  ONLINE       0     0     0
+            ada0p3  ONLINE       0     0     0
+            ada1p3  ONLINE       0     0     0
+
+errors: No known data errors
+&prompt.root; <userinput>zpool replace <replaceable>mypool</replaceable> <replaceable>ada1p3</replaceable> <replaceable>ada2p3</replaceable></userinput>
+Make sure to wait until resilver is done before rebooting.
+
+If you boot from pool 'zroot', you may need to update
+boot code on newly attached disk 'ada2p3'.
+
+Assuming you use GPT partitioning and 'da0' is your new boot disk
+you may use the following command:
+
+        gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
+&prompt.root; <userinput>gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 <replaceable>ada2</replaceable></userinput>
+&prompt.root; <userinput>zpool status</userinput>
+  pool: mypool
+ state: ONLINE
+status: One or more devices is currently being resilvered.  The pool will
+        continue to function, possibly in a degraded state.
+action: Wait for the resilver to complete.
+  scan: resilver in progress since Mon Jun  2 14:21:35 2014
+        604M scanned out of 781M at 46.5M/s, 0h0m to go
+        604M resilvered, 77.39% done
+config:
+
+        NAME             STATE     READ WRITE CKSUM
+        mypool           ONLINE       0     0     0
+          mirror-0       ONLINE       0     0     0
+            ada0p3       ONLINE       0     0     0
+            replacing-1  ONLINE       0     0     0
+              ada1p3     ONLINE       0     0     0
+              ada2p3     ONLINE       0     0     0  (resilvering)
+
+errors: No known data errors
+&prompt.root; <userinput>zpool status</userinput>
+  pool: mypool
+ state: ONLINE
+  scan: resilvered 781M in 0h0m with 0 errors on Mon Jun  2 14:21:52 2014
+config:
+
+        NAME        STATE     READ WRITE CKSUM
+        mypool      ONLINE       0     0     0
+          mirror-0  ONLINE       0     0     0
+            ada0p3  ONLINE       0     0     0
+            ada2p3  ONLINE       0     0     0
+
+errors: No known data errors</screen>
     </sect2>
 
     <sect2 xml:id="zfs-zpool-resilver">
       <title>Dealing with Failed Devices</title>
 
       <para>When a disk in a pool fails, the vdev to which the disk
-	belongs will enter the
-	<link linkend="zfs-term-degraded">Degraded</link> state.  In
-	this state, all of the data stored on the vdev is still
-	available, but performance may be impacted because missing
-	data must be calculated from the available redundancy.  To
-	restore the vdev to a fully functional state, the failed
-	physical device must be replaced, and <acronym>ZFS</acronym>
-	must be instructed to begin the
-	<link linkend="zfs-term-resilver">resilver</link> operation,
-	where data that was on the failed device will be recalculated
-	from available redundancy and written to the replacement
-	device.  After the process has completed, the vdev will return
-	to <link linkend="zfs-term-online">Online</link> status.  If
-	the vdev does not have any redundancy, or if multiple devices
-	have failed and there is not enough redundancy to compensate,
-	the pool will enter the
-	<link linkend="zfs-term-faulted">Faulted</link> state.  When a
+	belongs enters the
+	<link linkend="zfs-term-degraded">degraded</link> state.  All
+	of the data is still available, but performance may be reduced
+	because missing data must be calculated from the available
+	redundancy.  To restore the vdev to a fully functional state,
+	the failed physical device must be replaced.
+	<acronym>ZFS</acronym> is then instructed to begin the
+	<link linkend="zfs-term-resilver">resilver</link> operation.
+	Data that was on the failed device is recalculated from
+	available redundancy and written to the replacement device.
+	After completion, the vdev returns to
+	<link linkend="zfs-term-online">online</link> status.</para>
+
+      <para>If the vdev does not have any redundancy, or if multiple
+	devices have failed and there is not enough redundancy to
+	compensate, the pool enters the
+	<link linkend="zfs-term-faulted">faulted</link> state.  If a
 	sufficient number of devices cannot be reconnected to the
-	pool, then the pool will be inoperative, and data must be
-	restored from backups.</para>
+	pool, the pool becomes inoperative and data must be restored
+	from backups.</para>
+
+      <para>When a disk has failed, <command>zpool status</command>
+	shows the <acronym>GUID</acronym> of the failed device rather
+	than its device name, and that <acronym>GUID</acronym> is used
+	to identify the failed disk in
+	<command>zpool replace</command>.  If the replacement device
+	has the same device name as the failed device, the new device
+	name does not have to be specified.</para>
+
+      <para>Replace a failed disk using
+	<command>zpool replace</command>:</para>
+
+      <screen>&prompt.root; <userinput>zpool status</userinput>
+  pool: mypool
+ state: DEGRADED
+status: One or more devices could not be opened.  Sufficient replicas exist for
+        the pool to continue functioning in a degraded state.
+action: Attach the missing device and online it using 'zpool online'.
+   see: http://illumos.org/msg/ZFS-8000-2Q
+  scan: none requested
+config:
+
+        NAME                    STATE     READ WRITE CKSUM
+        mypool                  DEGRADED     0     0     0
+          mirror-0              DEGRADED     0     0     0
+            ada0p3              ONLINE       0     0     0
+            316502962686821739  UNAVAIL      0     0     0  was /dev/ada1p3
+
+errors: No known data errors
+&prompt.root; <userinput>zpool replace <replaceable>mypool</replaceable> <replaceable>316502962686821739</replaceable> <replaceable>ada2p3</replaceable></userinput>
+&prompt.root; <userinput>zpool status</userinput>
+  pool: mypool
+ state: DEGRADED
+status: One or more devices is currently being resilvered.  The pool will
+        continue to function, possibly in a degraded state.
+action: Wait for the resilver to complete.
+  scan: resilver in progress since Mon Jun  2 14:52:21 2014
+        641M scanned out of 781M at 49.3M/s, 0h0m to go
+        640M resilvered, 82.04% done
+config:
+
+        NAME                        STATE     READ WRITE CKSUM
+        mypool                      DEGRADED     0     0     0
+          mirror-0                  DEGRADED     0     0     0
+            ada0p3                  ONLINE       0     0     0
+            replacing-1             UNAVAIL      0     0     0
+              15732067398082357289  UNAVAIL      0     0     0  was /dev/ada1p3/old
+              ada2p3                ONLINE       0     0     0  (resilvering)
+
+errors: No known data errors
+&prompt.root; <userinput>zpool status</userinput>
+  pool: mypool
+ state: ONLINE
+  scan: resilvered 781M in 0h0m with 0 errors on Mon Jun  2 14:52:38 2014
+config:
+
+        NAME        STATE     READ WRITE CKSUM
+        mypool      ONLINE       0     0     0
+          mirror-0  ONLINE       0     0     0
+            ada0p3  ONLINE       0     0     0
+            ada2p3  ONLINE       0     0     0
+
+errors: No known data errors</screen>
     </sect2>
 
     <sect2 xml:id="zfs-zpool-scrub">
@@ -828,7 +1189,7 @@ SHA1 (/healer) = 2753eff56d77d9a536ece66
 	device has no self-healing capabilities.  That is also the
 	reason why checksums are so important in
 	<acronym>ZFS</acronym> and should not be disabled for any
-	reason.  No &man.fsck.8; or similar filesystem consistency
+	reason.  No &man.fsck.8; or similar file system consistency
 	check program is required to detect and correct this and the
 	pool was still available during the time there was a problem.
 	A scrub operation is now required to overwrite the corrupted
@@ -910,13 +1271,20 @@ errors: No known data errors</screen>
     <sect2 xml:id="zfs-zpool-online">
       <title>Growing a Pool</title>
 
-      <para>The usable size of a redundant pool is limited by the size
-	of the smallest device in the vdev.  If each device in the
-	vdev is replaced sequentially, after the smallest device has
-	completed the <link linkend="zfs-zpool-replace">replace</link>
-	or <link linkend="zfs-term-resilver">resilver</link>
-	operation, the pool can grow based on the size of the new
-	smallest device.  This expansion is triggered by using
+      <para>The usable size of a redundant pool is limited by the
+	capacity of the smallest device in each vdev.  The smallest
+	device can be replaced with a larger device.  After completing
+	a <link linkend="zfs-zpool-replace">replace</link> or
+	<link linkend="zfs-term-resilver">resilver</link> operation,
+	the pool can grow to use the capacity of the new device.  For
+	example, consider a mirror of a 1&nbsp;TB drive and a
+	2&nbsp;TB drive.  The usable space is 1&nbsp;TB.  Then the
+	1&nbsp;TB drive is replaced with another 2&nbsp;TB drive, and
+	the resilvering process duplicates existing data.  Because
+	both of the devices now have 2&nbsp;TB capacity, the mirror's
+	available space can be grown to 2&nbsp;TB.</para>
+
+      <para>Expansion is triggered by using
 	<command>zpool online</command> with <option>-e</option> on
 	each device.  After expansion of all devices, the additional
 	space becomes available to the pool.</para>
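+
+      <para>For example, assuming both members of a mirror,
+	<replaceable>ada0p3</replaceable> and
+	<replaceable>ada1p3</replaceable>, have already been replaced
+	with larger partitions, the additional capacity can be claimed
+	like this:</para>
+
+      <screen>&prompt.root; <userinput>zpool online -e <replaceable>mypool</replaceable> <replaceable>ada0p3</replaceable></userinput>
+&prompt.root; <userinput>zpool online -e <replaceable>mypool</replaceable> <replaceable>ada1p3</replaceable></userinput></screen>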
@@ -938,6 +1306,10 @@ errors: No known data errors</screen>
 	potentially resulting in unexpected behavior by the
 	applications which had open files on those datasets.</para>
 
+      <para>Export a pool that is not in use:</para>
+
+      <screen>&prompt.root; <userinput>zpool export mypool</userinput></screen>
+
       <para>Importing a pool automatically mounts the datasets.  This
 	may not be the desired behavior, and can be prevented with
 	<option>-N</option>.  <option>-o</option> sets temporary
@@ -948,6 +1320,26 @@ errors: No known data errors</screen>
 	might have to be forced with <option>-f</option>.
 	<option>-a</option> imports all pools that do not appear to be
 	in use by another system.</para>
+
+      <para>List all available pools for import:</para>
+
+      <screen>&prompt.root; <userinput>zpool import</userinput>
+   pool: mypool
+     id: 9930174748043525076
+  state: ONLINE
+ action: The pool can be imported using its name or numeric identifier.
+ config:
+
+        mypool      ONLINE
+          ada2p3    ONLINE</screen>
+
+      <para>Import the pool with an alternative root directory:</para>
+
+      <screen>&prompt.root; <userinput>zpool import -o altroot=<replaceable>/mnt</replaceable> <replaceable>mypool</replaceable></userinput>
+&prompt.root; <userinput>zfs list</userinput>
+NAME                 USED  AVAIL  REFER  MOUNTPOINT
+mypool               110K  47.0G    31K  /mnt/mypool</screen>
     </sect2>
 
     <sect2 xml:id="zfs-zpool-upgrade">
@@ -962,6 +1354,9 @@ errors: No known data errors</screen>
 	Older pools can be upgraded, but pools with newer features
 	cannot be downgraded.</para>
 
+      <para>Upgrade a v28 pool to support
+	<literal>Feature Flags</literal>:</para>
+
       <screen>&prompt.root; <userinput>zpool status</userinput>
   pool: mypool
  state: ONLINE
@@ -979,7 +1374,30 @@ config:
 	    ada0    ONLINE       0     0     0
 	    ada1    ONLINE       0     0     0
 
-errors: No known data errors</screen>
+errors: No known data errors
+&prompt.root; <userinput>zpool upgrade</userinput>
+This system supports ZFS pool feature flags.
+
+The following pools are formatted with legacy version numbers and can
+be upgraded to use feature flags.  After being upgraded, these pools
+will no longer be accessible by software that does not support feature
+flags.
+
+VER  POOL
+---  ------------
+28   mypool
+
+Use 'zpool upgrade -v' for a list of available legacy versions.
+Every feature flags pool has all supported features enabled.
+&prompt.root; <userinput>zpool upgrade mypool</userinput>
+This system supports ZFS pool feature flags.
+
+Successfully upgraded 'mypool' from version 28 to feature flags.
+Enabled the following features on 'mypool':
+  async_destroy
+  empty_bpobj
+  lz4_compress
+  multi_vdev_crash_dump</screen>
 
       <para>The newer features of <acronym>ZFS</acronym> will not be
 	available until <command>zpool upgrade</command> has
@@ -987,6 +1405,57 @@ errors: No known data errors</screen>
 	features will be provided by upgrading, as well as which
 	features are already supported.</para>
 
+      <para>Upgrade a pool to support additional feature flags:</para>
+
+      <screen>&prompt.root; <userinput>zpool status</userinput>
+  pool: mypool
+ state: ONLINE
+status: Some supported features are not enabled on the pool. The pool can
+        still be used, but some features are unavailable.
+action: Enable all features using 'zpool upgrade'. Once this is done,
+        the pool may no longer be accessible by software that does not support
+        the features. See zpool-features(7) for details.
+  scan: none requested
+config:
+
+        NAME        STATE     READ WRITE CKSUM
+        mypool      ONLINE       0     0     0
+          mirror-0  ONLINE       0     0     0
+	    ada0    ONLINE       0     0     0
+	    ada1    ONLINE       0     0     0
+
+errors: No known data errors
+&prompt.root; <userinput>zpool upgrade</userinput>
+This system supports ZFS pool feature flags.
+
+All pools are formatted using feature flags.
+
+
+Some supported features are not enabled on the following pools. Once a
+feature is enabled the pool may become incompatible with software
+that does not support the feature. See zpool-features(7) for details.
+
+POOL  FEATURE
+---------------
+zstore
+      multi_vdev_crash_dump
+      spacemap_histogram
+      enabled_txg
+      hole_birth
+      extensible_dataset
+      bookmarks
+      filesystem_limits
+&prompt.root; <userinput>zpool upgrade mypool</userinput>
+This system supports ZFS pool feature flags.
+
+Enabled the following features on 'mypool':
+  spacemap_histogram
+  enabled_txg
+  hole_birth
+  extensible_dataset
+  bookmarks
+  filesystem_limits</screen>
+
       <warning>
 	<para>The boot code on systems that boot from a pool must be
 	  updated to support the new pool version.  Use
@@ -1129,7 +1598,7 @@ data                      288G  1.53T   
 	attempted with <option>-n</option>.  The details of the
 	proposed operation are displayed without it actually being
 	performed.  This helps confirm that the operation will do what
-	the user expects.</para>
+	the user intends.</para>
     </sect2>
   </sect1>
 
@@ -1146,19 +1615,22 @@ data                      288G  1.53T   
       <title>Creating and Destroying Datasets</title>
 
       <para>Unlike traditional disks and volume managers, space in
-	<acronym>ZFS</acronym> is not preallocated.  With traditional
-	file systems, after all of the space was partitioned and
-	assigned, there was no way to add an additional file system
-	without adding a new disk.  With <acronym>ZFS</acronym>, new
-	file systems can be created at any time.  Each <link
+	<acronym>ZFS</acronym> is <emphasis>not</emphasis>
+	preallocated.  With traditional file systems, after all of the
+	space is partitioned and assigned, there is no way to add an
+	additional file system without adding a new disk.  With
+	<acronym>ZFS</acronym>, new file systems can be created at any
+	time.  Each <link
 	  linkend="zfs-term-dataset"><emphasis>dataset</emphasis></link>
 	has properties including features like compression,
-	deduplication, caching and quoteas, as well as other useful
+	deduplication, caching, and quotas, as well as other useful
 	properties like readonly, case sensitivity, network file
-	sharing, and a mount point.  Each separate dataset can be
-	administered, <link linkend="zfs-zfs-allow">delegated</link>,
+	sharing, and a mount point.  Datasets can be nested inside
+	each other, and child datasets will inherit properties from
+	their parents.  Each dataset can be administered,
+	<link linkend="zfs-zfs-allow">delegated</link>,
 	<link linkend="zfs-zfs-send">replicated</link>,
-	<link linkend="zfs-zfs-snapshot">snapshoted</link>,
+	<link linkend="zfs-zfs-snapshot">snapshotted</link>,
 	<link linkend="zfs-zfs-jail">jailed</link>, and destroyed as a
 	unit.  There are many advantages to creating a separate
 	dataset for each different type or set of files.  The only
@@ -1167,10 +1639,84 @@ data                      288G  1.53T   
 	slower, and the mounting of hundreds or even thousands of
 	datasets can slow the &os; boot process.</para>
 
+      <para>Create a new dataset and enable <link
+	  linkend="zfs-term-compression-lz4">LZ4
+	  compression</link> on it:</para>
+
+      <screen>&prompt.root; <userinput>zfs list</userinput>
+NAME                  USED  AVAIL  REFER  MOUNTPOINT
+mypool                781M  93.2G   144K  none
+mypool/ROOT           777M  93.2G   144K  none
+mypool/ROOT/default   777M  93.2G   777M  /
+mypool/tmp            176K  93.2G   176K  /tmp
+mypool/usr            616K  93.2G   144K  /usr
+mypool/usr/home       184K  93.2G   184K  /usr/home
+mypool/usr/ports      144K  93.2G   144K  /usr/ports
+mypool/usr/src        144K  93.2G   144K  /usr/src
+mypool/var           1.20M  93.2G   608K  /var
+mypool/var/crash      148K  93.2G   148K  /var/crash
+mypool/var/log        178K  93.2G   178K  /var/log
+mypool/var/mail       144K  93.2G   144K  /var/mail
+mypool/var/tmp        152K  93.2G   152K  /var/tmp
+&prompt.root; <userinput>zfs create -o compress=lz4 <replaceable>mypool/usr/mydataset</replaceable></userinput>
+&prompt.root; <userinput>zfs list</userinput>
+NAME                   USED  AVAIL  REFER  MOUNTPOINT
+mypool                 781M  93.2G   144K  none
+mypool/ROOT            777M  93.2G   144K  none
+mypool/ROOT/default    777M  93.2G   777M  /
+mypool/tmp             176K  93.2G   176K  /tmp
+mypool/usr             704K  93.2G   144K  /usr
+mypool/usr/home        184K  93.2G   184K  /usr/home
+mypool/usr/mydataset  87.5K  93.2G  87.5K  /usr/mydataset
+mypool/usr/ports       144K  93.2G   144K  /usr/ports
+mypool/usr/src         144K  93.2G   144K  /usr/src
+mypool/var            1.20M  93.2G   610K  /var
+mypool/var/crash       148K  93.2G   148K  /var/crash
+mypool/var/log         178K  93.2G   178K  /var/log
+mypool/var/mail        144K  93.2G   144K  /var/mail
+mypool/var/tmp         152K  93.2G   152K  /var/tmp</screen>
+
       <para>Destroying a dataset is much quicker than deleting all
 	of the files that reside on the dataset, as it does not
 	involve scanning all of the files and updating all of the
-	corresponding metadata.  In modern versions of
+	corresponding metadata.</para>
+
+      <para>Destroy the previously-created dataset:</para>
+
+      <screen>&prompt.root; <userinput>zfs list</userinput>
+NAME                   USED  AVAIL  REFER  MOUNTPOINT
+mypool                 880M  93.1G   144K  none
+mypool/ROOT            777M  93.1G   144K  none
+mypool/ROOT/default    777M  93.1G   777M  /
+mypool/tmp             176K  93.1G   176K  /tmp
+mypool/usr             101M  93.1G   144K  /usr
+mypool/usr/home        184K  93.1G   184K  /usr/home
+mypool/usr/mydataset   100M  93.1G   100M  /usr/mydataset
+mypool/usr/ports       144K  93.1G   144K  /usr/ports
+mypool/usr/src         144K  93.1G   144K  /usr/src
+mypool/var            1.20M  93.1G   610K  /var
+mypool/var/crash       148K  93.1G   148K  /var/crash
+mypool/var/log         178K  93.1G   178K  /var/log
+mypool/var/mail        144K  93.1G   144K  /var/mail
+mypool/var/tmp         152K  93.1G   152K  /var/tmp
+&prompt.root; <userinput>zfs destroy <replaceable>mypool/usr/mydataset</replaceable></userinput>
+&prompt.root; <userinput>zfs list</userinput>
+NAME                  USED  AVAIL  REFER  MOUNTPOINT
+mypool                781M  93.2G   144K  none
+mypool/ROOT           777M  93.2G   144K  none
+mypool/ROOT/default   777M  93.2G   777M  /
+mypool/tmp            176K  93.2G   176K  /tmp
+mypool/usr            616K  93.2G   144K  /usr
+mypool/usr/home       184K  93.2G   184K  /usr/home
+mypool/usr/ports      144K  93.2G   144K  /usr/ports
+mypool/usr/src        144K  93.2G   144K  /usr/src
+mypool/var           1.21M  93.2G   612K  /var
+mypool/var/crash      148K  93.2G   148K  /var/crash
+mypool/var/log        178K  93.2G   178K  /var/log
+mypool/var/mail       144K  93.2G   144K  /var/mail
+mypool/var/tmp        152K  93.2G   152K  /var/tmp</screen>
+
+      <para>In modern versions of
 	<acronym>ZFS</acronym>, <command>zfs destroy</command>
 	is asynchronous, and the free space might take several
 	minutes to appear in the pool.  Use
@@ -1183,10 +1729,10 @@ data                      288G  1.53T   
 	datasets, then the parent cannot be destroyed.  To destroy a
 	dataset and all of its children, use <option>-r</option> to
 	recursively destroy the dataset and all of its children.
-	<option>-n -v</option> can be used to list datasets and
-	snapshots that would be destroyed and, in the case of
-	snapshots, how much space would be reclaimed by the actual
-	destruction.</para>
+	Use <option>-n</option> <option>-v</option> to list the
+	datasets and snapshots that would be destroyed by the
+	operation without actually destroying anything.  Space that
+	would be reclaimed by the destruction of snapshots is also
+	shown.</para>
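+
+      <para>For example, assuming the dataset
+	<replaceable>mypool/usr/mydataset</replaceable> and its
+	snapshots still exist, a dry run shows what would be destroyed
+	and how much space would be reclaimed, without removing
+	anything:</para>
+
+      <screen>&prompt.root; <userinput>zfs destroy -rvn <replaceable>mypool/usr/mydataset</replaceable></userinput></screen>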
     </sect2>
 
     <sect2 xml:id="zfs-zfs-volume">
@@ -1201,14 +1747,14 @@ data                      288G  1.53T   
 	protocols like <acronym>iSCSI</acronym> or
 	<acronym>HAST</acronym>.</para>
 
-      <para>A volume can be formatted with any file system.  To the
-	user, it will appear as if they are working with a regular
-	disk using that specific filesystem and not
-	<acronym>ZFS</acronym>.  Putting ordinary file systems on
-	zvols provides features those file systems would not normally
-	have.  For example, using the compression property on a
-	250&nbsp;MB volume allows creation of a compressed
-	<acronym>FAT</acronym> filesystem.</para>
+      <para>A volume can be formatted with any file system, or used
+	without a file system to store raw data.  To the user, a
+	volume appears to be a regular disk.  Putting ordinary file
+	systems on these <emphasis>zvols</emphasis> provides features
+	that ordinary disks or file systems do not normally have.  For
+	example, using the compression property on a 250&nbsp;MB
+	volume allows creation of a compressed <acronym>FAT</acronym>
+	file system.</para>
 
       <screen>&prompt.root; <userinput>zfs create -V 250m -o compression=on tank/fat32</userinput>
 &prompt.root; <userinput>zfs list tank</userinput>
@@ -1241,11 +1787,56 @@ Filesystem           Size Used Avail Cap
 	dataset).  This behavior can be prevented with
 	<option>-u</option>.</para>
 
-      <para>Snapshots can also be renamed in this way.  Due to the
+      <para>Rename a dataset and move it to be under a different
+	parent dataset:</para>
+
+      <screen>&prompt.root; <userinput>zfs list</userinput>
+NAME                   USED  AVAIL  REFER  MOUNTPOINT
+mypool                 780M  93.2G   144K  none
+mypool/ROOT            777M  93.2G   144K  none
+mypool/ROOT/default    777M  93.2G   777M  /
+mypool/tmp             176K  93.2G   176K  /tmp
+mypool/usr             704K  93.2G   144K  /usr
+mypool/usr/home        184K  93.2G   184K  /usr/home
+mypool/usr/mydataset  87.5K  93.2G  87.5K  /usr/mydataset
+mypool/usr/ports       144K  93.2G   144K  /usr/ports
+mypool/usr/src         144K  93.2G   144K  /usr/src
+mypool/var            1.21M  93.2G   614K  /var
+mypool/var/crash       148K  93.2G   148K  /var/crash
+mypool/var/log         178K  93.2G   178K  /var/log
+mypool/var/mail        144K  93.2G   144K  /var/mail
+mypool/var/tmp         152K  93.2G   152K  /var/tmp
+&prompt.root; <userinput>zfs rename <replaceable>mypool/usr/mydataset</replaceable> <replaceable>mypool/var/newname</replaceable></userinput>
+&prompt.root; <userinput>zfs list</userinput>
+NAME                  USED  AVAIL  REFER  MOUNTPOINT
+mypool                780M  93.2G   144K  none
+mypool/ROOT           777M  93.2G   144K  none
+mypool/ROOT/default   777M  93.2G   777M  /
+mypool/tmp            176K  93.2G   176K  /tmp
+mypool/usr            616K  93.2G   144K  /usr
+mypool/usr/home       184K  93.2G   184K  /usr/home
+mypool/usr/ports      144K  93.2G   144K  /usr/ports
+mypool/usr/src        144K  93.2G   144K  /usr/src
+mypool/var           1.29M  93.2G   614K  /var
+mypool/var/crash      148K  93.2G   148K  /var/crash
+mypool/var/log        178K  93.2G   178K  /var/log
+mypool/var/mail       144K  93.2G   144K  /var/mail
+mypool/var/newname   87.5K  93.2G  87.5K  /var/newname
+mypool/var/tmp        152K  93.2G   152K  /var/tmp</screen>
+
+      <para>Snapshots can also be renamed like this.  Due to the
 	nature of snapshots, they cannot be renamed into a different
 	parent dataset.  To rename a recursive snapshot, specify
 	<option>-r</option>, and all snapshots with the same name in
 	child datasets will also be renamed.</para>
+
+      <screen>&prompt.root; <userinput>zfs list -t snapshot</userinput>
+NAME                                USED  AVAIL  REFER  MOUNTPOINT
+mypool/var/newname@first_snapshot      0      -  87.5K  -
+&prompt.root; <userinput>zfs rename <replaceable>mypool/var/newname@first_snapshot</replaceable> <replaceable>new_snapshot_name</replaceable></userinput>
+&prompt.root; <userinput>zfs list -t snapshot</userinput>
+NAME                                   USED  AVAIL  REFER  MOUNTPOINT
+mypool/var/newname@new_snapshot_name      0      -  87.5K  -</screen>
     </sect2>
 
     <sect2 xml:id="zfs-zfs-set">
@@ -1295,94 +1886,134 @@ tank    custom:costcenter  -            
       <para><link linkend="zfs-term-snapshot">Snapshots</link> are one
 	of the most powerful features of <acronym>ZFS</acronym>.  A
 	snapshot provides a read-only, point-in-time copy of the
-	dataset.  With the ZFS Copy-On-Write (COW) implementation,
+	dataset.  With Copy-On-Write (<acronym>COW</acronym>),
 	snapshots can be created quickly by preserving the older
-	version of the data on disk.  When no snapshot is created, ZFS
-	reclaims the space for future use.  Snapshots preserve disk
-	space by recording only the differences that happened between
-	snapshots.  Snapshots are allowed only on whole datasets, not
-	on individual files or directories.  When a snapshot is
-	created from a dataset, everything contained in it is
-	duplicated.  This includes the filesystem properties, files,
-	directories, permissions, and so on.</para>
-
-      <para>Snapshots in <acronym>ZFS</acronym> provide a variety of
-	features that other filesystems with snapshot functionality
-	lack.  A typical example for snapshots is to have a quick way
-	of backing up the current state of the filesystem when a risky
-	action like a software installation or a system upgrade is
-	performed.  If the action fails, the snapshot can be rolled
-	back and the system has the same state as when the snapshot
-	was created.  If the upgrade was successful, the snapshot can
-	be deleted to free up space.  Without snapshots, a failed
-	upgrade often requires a restore from backup, which is
-	tedious, time consuming, and may require a downtime in which
-	the system cannot be used as normal.  Snapshots can be rolled
-	back quickly and can be taken when the system is running in
-	normal operation, with little or no downtime.  The time
-	savings are enormous considering multi-terabyte storage
-	systems and the time required to copy the data from backup.
-	Snapshots are not a replacement for a complete backup of a
-	pool, but can be used as a quick and easy way to store a copy
-	of the dataset at a specific point in time.</para>
+	version of the data on disk.  If no snapshots exist, space is
+	reclaimed for future use when data is rewritten or deleted.
+	Snapshots preserve disk space by recording only the
+	differences between the current dataset and a previous
+	version.  Snapshots are allowed only on whole datasets, not on
+	individual files or directories.  When a snapshot is created
+	from a dataset, everything contained in it is duplicated.
+	This includes the file system properties, files, directories,
+	permissions, and so on.  Snapshots use no additional space
+	when they are first created, only consuming space as the
+	blocks they reference are changed.  Recursive snapshots taken
+	with <option>-r</option> create a snapshot with the same name
+	on the dataset and all of its children, providing a consistent
+	moment-in-time snapshot of all of the file systems.  This can
+	be important when an application has files on multiple
+	datasets that are related or dependent upon each other.
+	Without snapshots, a backup would have copies of the files
+	from different points in time.</para>
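+
+      <para>For example, a recursive snapshot of
+	<replaceable>mypool/usr</replaceable> and all of its child
+	datasets can be created like this (the snapshot name
+	<replaceable>before_upgrade</replaceable> is only an
+	example):</para>
+
+      <screen>&prompt.root; <userinput>zfs snapshot -r <replaceable>mypool/usr@before_upgrade</replaceable></userinput></screen>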
+
+      <para>Snapshots in <acronym>ZFS</acronym> provide a variety of
+	features that even other file systems with snapshot
+	functionality lack.  A typical example of snapshot use is to
+	have a quick way of backing up the current state of the file
+	system when a risky action like a software installation or a
+	system upgrade is performed.  If the action fails, the
+	snapshot can be rolled back and the system has the same state
+	as when the snapshot was created.  If the upgrade was
+	successful, the snapshot can be deleted to free up space.
+	Without snapshots, a failed upgrade often requires a restore
+	from backup, which is tedious, time consuming, and may require
+	downtime during which the system cannot be used.  Snapshots

*** DIFF OUTPUT TRUNCATED AT 1000 LINES ***


