Date:      Thu, 15 Aug 2013 02:28:44 +0000 (UTC)
From:      Warren Block <wblock@FreeBSD.org>
To:        doc-committers@freebsd.org, svn-doc-projects@freebsd.org
Subject:   svn commit: r42548 - projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs
Message-ID:  <201308150228.r7F2SiIN088135@svn.freebsd.org>

Author: wblock
Date: Thu Aug 15 02:28:43 2013
New Revision: 42548
URL: http://svnweb.freebsd.org/changeset/doc/42548

Log:
  Whitespace-only fixes.  Translators, please ignore.

Modified:
  projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml

Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml
==============================================================================
--- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml	Thu Aug 15 02:01:36 2013	(r42547)
+++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml	Thu Aug 15 02:28:43 2013	(r42548)
@@ -36,32 +36,35 @@
   <sect1 id="zfs-differences">
     <title>What Makes <acronym>ZFS</acronym> Different</title>
 
-    <para><acronym>ZFS</acronym> is significantly different from any previous file system
-      owing to the fact that it is more than just a file system.  <acronym>ZFS</acronym>
-      combines the traditionally separate roles of volume manager and
-      file system, which provides unique advantages because the file
-      system is now aware of the underlying structure of the disks.
-      Traditional file systems could only be created on a single disk
-      at a time; if there were two disks, then two separate file
-      systems would have to be created.  In a traditional hardware
+    <para><acronym>ZFS</acronym> is significantly different from any
+      previous file system because it is more than just
+      a file system.  <acronym>ZFS</acronym> combines the
+      traditionally separate roles of volume manager and file system,
+      which provides unique advantages because the file system is now
+      aware of the underlying structure of the disks.  Traditional
+      file systems could only be created on a single disk at a time;
+      if there were two disks, then two separate file systems would
+      have to be created.  In a traditional hardware
       <acronym>RAID</acronym> configuration, this problem was worked
       around by presenting the operating system with a single logical
       disk made up of the space provided by a number of disks, on top
       of which the operating system placed its file system.  Even in
       the case of software <acronym>RAID</acronym> solutions like
-      <acronym>GEOM</acronym>, the <acronym>UFS</acronym> file system living on top of
-      the <acronym>RAID</acronym> transform believed that it was
-      dealing with a single device.  <acronym>ZFS</acronym>'s combination of the volume
-      manager and the file system solves this and allows the creation
-      of many file systems all sharing a pool of available storage.
-      One of the biggest advantages to <acronym>ZFS</acronym>'s awareness of the physical
-      layout of the disks is that <acronym>ZFS</acronym> can grow the existing file
-      systems automatically when additional disks are added to the
-      pool.  This new space is then made available to all of the file
-      systems.  <acronym>ZFS</acronym> also has a number of different properties that can
-      be applied to each file system, creating many advantages to
-      creating a number of different filesystems and datasets rather
-      than a single monolithic filesystem.</para>
+      <acronym>GEOM</acronym>, the <acronym>UFS</acronym> file system
+      living on top of the <acronym>RAID</acronym> transform believed
+      that it was dealing with a single device.
+      <acronym>ZFS</acronym>'s combination of the volume manager and
+      the file system solves this and allows the creation of many file
+      systems all sharing a pool of available storage.  One of the
+      biggest advantages to <acronym>ZFS</acronym>'s awareness of the
+      physical layout of the disks is that <acronym>ZFS</acronym> can
+      grow the existing file systems automatically when additional
+      disks are added to the pool.  This new space is then made
+      available to all of the file systems.  <acronym>ZFS</acronym>
+      also has a number of different properties that can be applied to
+      each file system, which makes it advantageous to create a number
+      of different file systems and datasets rather than a single
+      monolithic file system.</para>
   </sect1>
 
   <sect1 id="zfs-quickstart">
@@ -69,7 +72,8 @@
 
    <para>There is a startup mechanism that allows &os; to mount
       <acronym>ZFS</acronym> pools during system initialization.  To
-      enable it, add this line to <filename>/etc/rc.conf</filename>:</para>
+      enable it, add this line to
+      <filename>/etc/rc.conf</filename>:</para>
 
     <programlisting>zfs_enable="YES"</programlisting>
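
As an aside for readers following along: the knob above normally takes effect at the next boot, but assuming the stock rc.d script name (zfs), the service can also be started right away:

    # service zfs start
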
 
@@ -135,8 +139,9 @@ drwxr-xr-x  21 root  wheel  512 Aug 29 2
 
       <screen>&prompt.root; <userinput>zfs set compression=off example/compressed</userinput></screen>
 
-      <para>To unmount a file system, use <command>zfs umount</command> and
-	then verify by using <command>df</command>:</para>
+      <para>To unmount a file system, use
+	<command>zfs umount</command> and then verify by using
+	<command>df</command>:</para>
 
       <screen>&prompt.root; <userinput>zfs umount example/compressed</userinput>
 &prompt.root; <userinput>df</userinput>
@@ -146,8 +151,9 @@ devfs               1       1        0  
 /dev/ad0s1d  54098308 1032864 48737580     2%    /usr
 example      17547008       0 17547008     0%    /example</screen>
 
-      <para>To re-mount the file system to make it accessible again, use <command>zfs mount</command>
-	and verify with <command>df</command>:</para>
+      <para>To re-mount the file system to make it accessible again,
+	use <command>zfs mount</command> and verify with
+	<command>df</command>:</para>
 
       <screen>&prompt.root; <userinput>zfs mount example/compressed</userinput>
 &prompt.root; <userinput>df</userinput>
@@ -214,9 +220,9 @@ example/data        17547008       0 175
       <para>There is no way to prevent a disk from failing.  One
 	method of avoiding data loss due to a failed hard disk is to
 	implement <acronym>RAID</acronym>.  <acronym>ZFS</acronym>
-	supports this feature in its pool design.  <acronym>RAID-Z</acronym> pools
-	require 3 or more disks but yield more usable space than
-	mirrored pools.</para>
+	supports this feature in its pool design.
+	<acronym>RAID-Z</acronym> pools require 3 or more disks but
+	yield more usable space than mirrored pools.</para>
 
       <para>To create a <acronym>RAID-Z</acronym> pool, issue the
 	following command and specify the disks to add to the
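
The command listing that follows this paragraph falls outside the hunk; as a rough sketch (the pool name storage and the disks da0, da1, and da2 are placeholders), it takes the form:

    # zpool create storage raidz da0 da1 da2
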
@@ -727,31 +733,35 @@ errors: No known data errors</screen>
 
       <para>Some of the features provided by <acronym>ZFS</acronym>
 	are RAM-intensive, so some tuning may be required to provide
-	maximum efficiency on systems with limited <acronym>RAM</acronym>.</para>
+	maximum efficiency on systems with limited
+	<acronym>RAM</acronym>.</para>
 
       <sect3>
 	<title>Memory</title>
 
 	<para>At a bare minimum, the total system memory should be at
-	  least one gigabyte.  The amount of recommended <acronym>RAM</acronym> depends
-	  upon the size of the pool and the <acronym>ZFS</acronym> features which are
-	  used.  A general rule of thumb is 1&nbsp;GB of RAM for every 1&nbsp;TB
-	  of storage.  If the deduplication feature is used, a general
-	  rule of thumb is 5&nbsp;GB of RAM per TB of storage to be
-	  deduplicated.  While some users successfully use <acronym>ZFS</acronym> with
-	  less <acronym>RAM</acronym>, it is possible that when the system is under heavy
-	  load, it may panic due to memory exhaustion.  Further tuning
-	  may be required for systems with less than the recommended
-	  RAM requirements.</para>
+	  least one gigabyte.  The amount of recommended
+	  <acronym>RAM</acronym> depends upon the size of the pool and
+	  the <acronym>ZFS</acronym> features which are used.  A
+	  general rule of thumb is 1&nbsp;GB of RAM for every
+	  1&nbsp;TB of storage.  If the deduplication feature is used,
+	  a general rule of thumb is 5&nbsp;GB of RAM per TB of
+	  storage to be deduplicated.  While some users successfully
+	  use <acronym>ZFS</acronym> with less <acronym>RAM</acronym>,
+	  the system may panic due to memory exhaustion when under
+	  heavy load.  Further tuning may be required for systems
+	  with less than the recommended amount of
+	  <acronym>RAM</acronym>.</para>
       </sect3>
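
To make the rules of thumb above concrete (illustrative arithmetic only): a 4 TB pool suggests roughly 4 GB of RAM at 1 GB per TB, and enabling deduplication on the same pool raises the guideline to roughly 4 x 5 GB = 20 GB.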
 
       <sect3>
 	<title>Kernel Configuration</title>
 
-	<para>Due to the <acronym>RAM</acronym> limitations of the &i386; platform, users
-	  using <acronym>ZFS</acronym> on the &i386; architecture should add the
-	  following option to a custom kernel configuration file,
-	  rebuild the kernel, and reboot:</para>
+	<para>Due to the <acronym>RAM</acronym> limitations of the
+	  &i386; platform, users of <acronym>ZFS</acronym> on that
+	  architecture should add the following option to a
+	  custom kernel configuration file, rebuild the kernel, and
+	  reboot:</para>
 
 	<programlisting>options        KVA_PAGES=512</programlisting>
 
@@ -831,20 +841,22 @@ vfs.zfs.vdev.cache.size="5M"</programlis
   <sect1 id="zfs-term">
     <title><acronym>ZFS</acronym> Features and Terminology</title>
 
-    <para><acronym>ZFS</acronym> is a fundamentally different file system because it
-      is more than just a file system.  <acronym>ZFS</acronym> combines the roles of
-      file system and volume manager, enabling additional storage
-      devices to be added to a live system and having the new space
-      available on all of the existing file systems in that pool
-      immediately.  By combining the traditionally separate roles,
-      <acronym>ZFS</acronym> is able to overcome previous limitations that prevented
-      <acronym>RAID</acronym> groups being able to grow.  Each top level device in a
-      zpool is called a vdev, which can be a simple disk or a <acronym>RAID</acronym>
-      transformation such as a mirror or <acronym>RAID-Z</acronym> array.  <acronym>ZFS</acronym> file
-      systems (called datasets), each have access to the combined
-      free space of the entire pool.  As blocks are allocated from
-      the pool, the space available to each file system
-      decreases.  This approach avoids the common pitfall with
+    <para><acronym>ZFS</acronym> is a fundamentally different file
+      system because it is more than just a file system.
+      <acronym>ZFS</acronym> combines the roles of file system and
+      volume manager, enabling additional storage devices to be added
+      to a live system and having the new space available on all of
+      the existing file systems in that pool immediately.  By
+      combining the traditionally separate roles,
+      <acronym>ZFS</acronym> is able to overcome previous limitations
+	      that prevented <acronym>RAID</acronym> groups from being able to
+      grow.  Each top level device in a zpool is called a vdev, which
+      can be a simple disk or a <acronym>RAID</acronym> transformation
+      such as a mirror or <acronym>RAID-Z</acronym> array.
+	      <acronym>ZFS</acronym> file systems (called datasets) each have
+      access to the combined free space of the entire pool.  As blocks
+      are allocated from the pool, the space available to each file
+      system decreases.  This approach avoids the common pitfall with
       extensive partitioning where free space becomes fragmented
       across the partitions.</para>
 
@@ -855,21 +867,22 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	    <entry id="zfs-term-zpool">zpool</entry>
 
 	    <entry>A storage pool is the most basic building block of
-	      <acronym>ZFS</acronym>.  A pool is made up of one or more vdevs, the
-	      underlying devices that store the data.  A pool is then
-	      used to create one or more file systems (datasets) or
-	      block devices (volumes).  These datasets and volumes
-	      share the pool of remaining free space.  Each pool is
-	      uniquely identified by a name and a
+	      <acronym>ZFS</acronym>.  A pool is made up of one or
+	      more vdevs, the underlying devices that store the data.
+	      A pool is then used to create one or more file systems
+	      (datasets) or block devices (volumes).  These datasets
+	      and volumes share the pool of remaining free space.
+	      Each pool is uniquely identified by a name and a
 	      <acronym>GUID</acronym>.  The zpool also controls the
 	      version number and therefore the features available for
 	      use with <acronym>ZFS</acronym>.
 
 	      <note>
-		<para>&os;&nbsp;9.0 and 9.1 include support for <acronym>ZFS</acronym> version
-		  28.  Future versions use <acronym>ZFS</acronym> version 5000 with
-		  feature flags.  This allows greater
-		  cross-compatibility with other implementations of
+		<para>&os;&nbsp;9.0 and 9.1 include support for
+		  <acronym>ZFS</acronym> version 28.  Future versions
+		  use <acronym>ZFS</acronym> version 5000 with feature
+		  flags.  This allows greater cross-compatibility with
+		  other implementations of
 		  <acronym>ZFS</acronym>.</para>
 	      </note></entry>
 	  </row>
@@ -879,9 +892,10 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 
 	    <entry>A zpool is made up of one or more vdevs, which
 	      themselves can be a single disk or a group of disks, in
-	      the case of a <acronym>RAID</acronym> transform.  When multiple vdevs are
-	      used, <acronym>ZFS</acronym> spreads data across the vdevs to increase
-	      performance and maximize usable space.
+	      the case of a <acronym>RAID</acronym> transform.  When
+	      multiple vdevs are used, <acronym>ZFS</acronym> spreads
+	      data across the vdevs to increase performance and
+	      maximize usable space.
 
 	      <itemizedlist>
 		<listitem>
@@ -901,12 +915,12 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 
 		<listitem>
 		  <para id="zfs-term-vdev-file">
-		    <emphasis>File</emphasis> - In addition to
-		    disks, <acronym>ZFS</acronym> pools can be backed by regular files,
-		    this is especially useful for testing and
-		    experimentation.  Use the full path to the file
-		    as the device path in the zpool create command.
-		    All vdevs must be at least 128&nbsp;MB in
+		    <emphasis>File</emphasis> - In addition to disks,
+		    <acronym>ZFS</acronym> pools can be backed by
+		    regular files; this is especially useful for
+		    testing and experimentation.  Use the full path to
+		    the file as the device path in the zpool create
+		    command.  All vdevs must be at least 128&nbsp;MB in
 		    size.</para>
 		</listitem>
 
@@ -934,86 +948,93 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 		<listitem>
 		  <para id="zfs-term-vdev-raidz">
 		    <emphasis><acronym>RAID-Z</acronym></emphasis> -
-		    <acronym>ZFS</acronym> implements <acronym>RAID-Z</acronym>, a variation on standard
-		    <acronym>RAID-5</acronym> that offers better distribution of parity
-		    and eliminates the "<acronym>RAID-5</acronym> write hole" in which
+		    <acronym>ZFS</acronym> implements
+		    <acronym>RAID-Z</acronym>, a variation on standard
+		    <acronym>RAID-5</acronym> that offers better
+		    distribution of parity and eliminates the
+		    "<acronym>RAID-5</acronym> write hole" in which
 		    the data and parity information become
-		    inconsistent after an unexpected restart.  <acronym>ZFS</acronym>
-		    supports 3 levels of <acronym>RAID-Z</acronym> which provide
-		    varying levels of redundancy in exchange for
-		    decreasing levels of usable storage.  The types
-		    are named <acronym>RAID-Z1</acronym> through <acronym>RAID-Z3</acronym> based on the number
-		    of parity devices in the array and the number
-		    of disks that the pool can operate
-		    without.</para>
-
-		  <para>In a <acronym>RAID-Z1</acronym> configuration with 4 disks,
-		    each 1&nbsp;TB, usable storage will be 3&nbsp;TB
-		    and the pool will still be able to operate in
-		    degraded mode with one faulted disk.  If an
-		    additional disk goes offline before the faulted
-		    disk is replaced and resilvered, all data in the
-		    pool can be lost.</para>
-
-		  <para>In a <acronym>RAID-Z3</acronym> configuration with 8 disks of
-		    1&nbsp;TB, the volume would provide 5&nbsp;TB of
-		    usable space and still be able to operate with
-		    three faulted disks.  Sun recommends no more
-		    than 9 disks in a single vdev.  If the
-		    configuration has more disks, it is recommended
-		    to divide them into separate vdevs and the pool
-		    data will be striped across them.</para>
-
-		  <para>A configuration of 2 <acronym>RAID-Z2</acronym> vdevs
-		    consisting of 8 disks each would create
-		    something similar to a <acronym>RAID-60</acronym> array.  A <acronym>RAID-Z</acronym>
-		    group's storage capacity is approximately the
-		    size of the smallest disk, multiplied by the
-		    number of non-parity disks.  Four 1&nbsp;TB disks
-		    in <acronym>RAID-Z1</acronym> has an effective size of approximately
-		    3&nbsp;TB, and an array of eight 1&nbsp;TB disks in <acronym>RAID-Z3</acronym> will
-		    yield 5&nbsp;TB of usable space.</para>
+		    inconsistent after an unexpected restart.
+		    <acronym>ZFS</acronym> supports 3 levels of
+		    <acronym>RAID-Z</acronym> which provide varying
+		    levels of redundancy in exchange for decreasing
+		    levels of usable storage.  The types are named
+		    <acronym>RAID-Z1</acronym> through
+		    <acronym>RAID-Z3</acronym> based on the number of
+		    parity devices in the array and the number of
+		    disks that the pool can operate without.</para>
+
+		  <para>In a <acronym>RAID-Z1</acronym> configuration
+		    with 4 disks, each 1&nbsp;TB, usable storage will
+		    be 3&nbsp;TB and the pool will still be able to
+		    operate in degraded mode with one faulted disk.
+		    If an additional disk goes offline before the
+		    faulted disk is replaced and resilvered, all data
+		    in the pool can be lost.</para>
+
+		  <para>In a <acronym>RAID-Z3</acronym> configuration
+		    with 8 disks of 1&nbsp;TB, the volume would
+		    provide 5&nbsp;TB of usable space and still be
+		    able to operate with three faulted disks.  Sun
+		    recommends no more than 9 disks in a single vdev.
+		    If the configuration has more disks, it is
+		    recommended to divide them into separate vdevs and
+		    the pool data will be striped across them.</para>
+
+		  <para>A configuration of 2
+		    <acronym>RAID-Z2</acronym> vdevs consisting of 8
+		    disks each would create something similar to a
+		    <acronym>RAID-60</acronym> array.  A
+		    <acronym>RAID-Z</acronym> group's storage capacity
+		    is approximately the size of the smallest disk,
+		    multiplied by the number of non-parity disks.
+		    Four 1&nbsp;TB disks in <acronym>RAID-Z1</acronym>
+		    have an effective size of approximately 3&nbsp;TB,
+		    and an array of eight 1&nbsp;TB disks in
+		    <acronym>RAID-Z3</acronym> will yield 5&nbsp;TB of
+		    usable space.</para>
 		</listitem>
 
 		<listitem>
 		  <para id="zfs-term-vdev-spare">
-		    <emphasis>Spare</emphasis> - <acronym>ZFS</acronym> has a special
-		    pseudo-vdev type for keeping track of available
-		    hot spares.  Note that installed hot spares are
-		    not deployed automatically; they must manually
-		    be configured to replace the failed device using
+		    <emphasis>Spare</emphasis> -
+		    <acronym>ZFS</acronym> has a special pseudo-vdev
+		    type for keeping track of available hot spares.
+		    Note that installed hot spares are not deployed
+		    automatically; they must manually be configured to
+		    replace the failed device using
 		    <command>zpool replace</command>.</para>
 		</listitem>
 
 		<listitem>
 		  <para id="zfs-term-vdev-log">
-		    <emphasis>Log</emphasis> - <acronym>ZFS</acronym> Log Devices, also
-		    known as ZFS Intent Log (<acronym>ZIL</acronym>)
-		    move the intent log from the regular pool
-		    devices to a dedicated device.  The <acronym>ZIL</acronym>
-		    accelerates synchronous transactions by using
-		    storage devices (such as
-		    <acronym>SSD</acronym>s) that are faster
-		    than those used for the main pool.  When
-		    data is being written and the application
-		    requests a guarantee that the data has been
-		    safely stored, the data is written to the faster
-		    <acronym>ZIL</acronym> storage, then later flushed out to the
-		    regular disks, greatly reducing the latency of
-		    synchronous writes.  Log devices can be
-		    mirrored, but <acronym>RAID-Z</acronym> is not supported.  If
-		    multiple log devices are used, writes will be
-		    load balanced across them.</para>
+		    <emphasis>Log</emphasis> - <acronym>ZFS</acronym>
+		    Log Devices, also known as ZFS Intent Log
+		    (<acronym>ZIL</acronym>) move the intent log from
+		    (<acronym>ZIL</acronym>), move the intent log from
+		    The <acronym>ZIL</acronym> accelerates synchronous
+		    transactions by using storage devices (such as
+		    <acronym>SSD</acronym>s) that are faster than
+		    those used for the main pool.  When data is being
+		    written and the application requests a guarantee
+		    that the data has been safely stored, the data is
+		    written to the faster <acronym>ZIL</acronym>
+		    storage, then later flushed out to the regular
+		    disks, greatly reducing the latency of synchronous
+		    writes.  Log devices can be mirrored, but
+		    <acronym>RAID-Z</acronym> is not supported.  If
+		    multiple log devices are used, writes will be load
+		    balanced across them.</para>
 		</listitem>
 
 		<listitem>
 		  <para id="zfs-term-vdev-cache">
 		    <emphasis>Cache</emphasis> - Adding a cache vdev
 		    to a zpool will add the storage of the cache to
-		    the <acronym>L2ARC</acronym>.  Cache devices cannot be mirrored.
-		    Since a cache device only stores additional
-		    copies of existing data, there is no risk of
-		    data loss.</para>
+		    the <acronym>L2ARC</acronym>.  Cache devices
+		    cannot be mirrored.  Since a cache device only
+		    stores additional copies of existing data, there
+		    is no risk of data loss.</para>
 		</listitem>
 	      </itemizedlist></entry>
 	  </row>
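
A minimal sketch of attaching the spare, log, and cache vdev types described in this hunk to an existing pool, and of manually pressing the spare into service for a failed disk; the pool name mypool and the devices ada0 through ada3 are placeholders:

    # zpool add mypool spare ada1
    # zpool add mypool log ada2
    # zpool add mypool cache ada3
    # zpool replace mypool ada0 ada1
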
@@ -1022,51 +1043,53 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	    <entry id="zfs-term-arc">Adaptive Replacement
 	      Cache (<acronym>ARC</acronym>)</entry>
 
-	    <entry><acronym>ZFS</acronym> uses an Adaptive Replacement Cache
-	      (<acronym>ARC</acronym>), rather than a more
-	      traditional Least Recently Used
-	      (<acronym>LRU</acronym>) cache.  An
-	      <acronym>LRU</acronym> cache is a simple list of items
-	      in the cache sorted by when each object was most
-	      recently used; new items are added to the top of the
-	      list and once the cache is full items from the bottom
-	      of the list are evicted to make room for more active
-	      objects.  An <acronym>ARC</acronym> consists of four
-	      lists; the Most Recently Used (<acronym>MRU</acronym>)
-	      and Most Frequently Used (<acronym>MFU</acronym>)
-	      objects, plus a ghost list for each.  These ghost
-	      lists track recently evicted objects to prevent them
-	      from being added back to the cache.  This increases the
-	      cache hit ratio by avoiding objects that have a
-	      history of only being used occasionally.  Another
-	      advantage of using both an <acronym>MRU</acronym> and
-	      <acronym>MFU</acronym> is that scanning an entire
-	      filesystem would normally evict all data from an
-	      <acronym>MRU</acronym> or <acronym>LRU</acronym> cache
-	      in favor of this freshly accessed content.  In the
-	      case of <acronym>ZFS</acronym>, since there is also an
+	    <entry><acronym>ZFS</acronym> uses an Adaptive Replacement
+	      Cache (<acronym>ARC</acronym>), rather than a more
+	      traditional Least Recently Used (<acronym>LRU</acronym>)
+	      cache.  An <acronym>LRU</acronym> cache is a simple list
+	      of items in the cache sorted by when each object was
+	      most recently used; new items are added to the top of
+	      the list and once the cache is full items from the
+	      bottom of the list are evicted to make room for more
+	      active objects.  An <acronym>ARC</acronym> consists of
+	      four lists: the Most Recently Used
+	      (<acronym>MRU</acronym>) and Most Frequently Used
+	      (<acronym>MFU</acronym>) objects, plus a ghost list for
+	      each.  These ghost lists track recently evicted objects
+	      to prevent them from being added back to the cache.
+	      This increases the cache hit ratio by avoiding objects
+	      that have a history of only being used occasionally.
+	      Another advantage of using both an
+	      <acronym>MRU</acronym> and <acronym>MFU</acronym> is
+	      that scanning an entire filesystem would normally evict
+	      all data from an <acronym>MRU</acronym> or
+	      <acronym>LRU</acronym> cache in favor of this freshly
+	      accessed content.  In the case of
+	      <acronym>ZFS</acronym>, since there is also an
 	      <acronym>MFU</acronym> that only tracks the most
-	      frequently used objects, the cache of the most
-	      commonly accessed blocks remains.</entry>
+	      frequently used objects, the cache of the most commonly
+	      accessed blocks remains.</entry>
 	  </row>
 
 	  <row>
-	    <entry id="zfs-term-l2arc"><acronym>L2ARC</acronym></entry>
+	    <entry
+	      id="zfs-term-l2arc"><acronym>L2ARC</acronym></entry>
 
 	    <entry>The <acronym>L2ARC</acronym> is the second level
 	      of the <acronym>ZFS</acronym> caching system.  The
 	      primary <acronym>ARC</acronym> is stored in
 	      <acronym>RAM</acronym>, however since the amount of
 	      available <acronym>RAM</acronym> is often limited,
-	      <acronym>ZFS</acronym> can also make use of <link
-		linkend="zfs-term-vdev-cache">cache</link>
+	      <acronym>ZFS</acronym> can also make use of
+	      <link linkend="zfs-term-vdev-cache">cache</link>
 	      vdevs.  Solid State Disks (<acronym>SSD</acronym>s) are
 	      often used as these cache devices due to their higher
 	      speed and lower latency compared to traditional spinning
-	      disks.  An <acronym>L2ARC</acronym> is entirely optional, but having one
-	      will significantly increase read speeds for files that
-	      are cached on the <acronym>SSD</acronym> instead of
-	      having to be read from the regular spinning disks.  The
+	      disks.  An <acronym>L2ARC</acronym> is entirely
+	      optional, but having one will significantly increase
+	      read speeds for files that are cached on the
+	      <acronym>SSD</acronym> instead of having to be read from
+	      the regular spinning disks.  The
 	      <acronym>L2ARC</acronym> can also speed up <link
 		linkend="zfs-term-deduplication">deduplication</link>
 	      since a <acronym>DDT</acronym> that does not fit in
@@ -1092,48 +1115,51 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	    <entry id="zfs-term-cow">Copy-On-Write</entry>
 
 	    <entry>Unlike a traditional file system, when data is
-	      overwritten on <acronym>ZFS</acronym> the new data is written to a
-	      different block rather than overwriting the old data in
-	      place.  Only once this write is complete is the metadata
-	      then updated to point to the new location of the data.
-	      This means that in the event of a shorn write (a system
-	      crash or power loss in the middle of writing a file), the
-	      entire original contents of the file are still available
-	      and the incomplete write is discarded.  This also means
-	      that <acronym>ZFS</acronym> does not require a &man.fsck.8; after an unexpected
+	      overwritten on <acronym>ZFS</acronym> the new data is
+	      written to a different block rather than overwriting the
+	      old data in place.  Only once this write is complete is
+	      the metadata then updated to point to the new location
+	      of the data.  This means that in the event of a shorn
+	      write (a system crash or power loss in the middle of
+	      writing a file), the entire original contents of the
+	      file are still available and the incomplete write is
+	      discarded.  This also means that <acronym>ZFS</acronym>
+	      does not require a &man.fsck.8; after an unexpected
 	      shutdown.</entry>
 	  </row>
 
 	  <row>
 	    <entry id="zfs-term-dataset">Dataset</entry>
 
-	    <entry>Dataset is the generic term for a <acronym>ZFS</acronym> file system,
-	      volume, snapshot or clone.  Each dataset will have a
-	      unique name in the format:
-	      <literal>poolname/path@snapshot</literal>.  The root of
-	      the pool is technically a dataset as well.  Child
-	      datasets are named hierarchically like directories; for
-	      example, <literal>mypool/home</literal>, the home dataset,
-	      is a child of <literal>mypool</literal> and inherits properties from it.
-	      This can be expanded further by creating
-	      <literal>mypool/home/user</literal>.  This grandchild
-	      dataset will inherit properties from the parent and
-	      grandparent.  It is also possible to set properties
-	      on a child to override the defaults inherited from the
-	      parents and grandparents.  <acronym>ZFS</acronym> also allows
-	      administration of datasets and their children to be
-	      delegated.</entry>
+	    <entry>Dataset is the generic term for a
+	      <acronym>ZFS</acronym> file system, volume, snapshot or
+	      clone.  Each dataset will have a unique name in the
+	      format: <literal>poolname/path@snapshot</literal>.  The
+	      root of the pool is technically a dataset as well.
+	      Child datasets are named hierarchically like
+	      directories; for example,
+	      <literal>mypool/home</literal>, the home dataset, is a
+	      child of <literal>mypool</literal> and inherits
+	      properties from it.  This can be expanded further by
+	      creating <literal>mypool/home/user</literal>.  This
+	      grandchild dataset will inherit properties from the
+	      parent and grandparent.  It is also possible to set
+	      properties on a child to override the defaults inherited
+	      from the parents and grandparents.
+	      <acronym>ZFS</acronym> also allows administration of
+	      datasets and their children to be delegated.</entry>
 	  </row>
 
 	  <row>
 	    <entry id="zfs-term-volum">Volume</entry>
 
-	    <entry>In addition to regular file system datasets, <acronym>ZFS</acronym>
-	      can also create volumes, which are block devices.
-	      Volumes have many of the same features, including
-	      copy-on-write, snapshots, clones and checksumming.
-	      Volumes can be useful for running other file system
-	      formats on top of <acronym>ZFS</acronym>, such as <acronym>UFS</acronym> or in the case of
+	    <entry>In addition to regular file system datasets,
+	      <acronym>ZFS</acronym> can also create volumes, which
+	      are block devices.  Volumes have many of the same
+	      features, including copy-on-write, snapshots, clones and
+	      checksumming.  Volumes can be useful for running other
+	      file system formats on top of <acronym>ZFS</acronym>,
+	      such as <acronym>UFS</acronym>, or in the case of
 	      virtualization or exporting <acronym>iSCSI</acronym>
 	      extents.</entry>
 	  </row>
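
A brief sketch of the Dataset and Volume entries above; the pool name mypool, the gzip setting, and the 1 GB size are placeholders:

    # zfs create mypool/home
    # zfs create mypool/home/user
    # zfs set compression=gzip mypool/home
    # zfs get -r compression mypool/home
    # zfs create -V 1G mypool/vol0

The child dataset mypool/home/user inherits the compression property set on mypool/home, and the volume should show up as a block device under /dev/zvol/mypool/vol0.
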
@@ -1142,41 +1168,40 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	    <entry id="zfs-term-snapshot">Snapshot</entry>
 
 	    <entry>The <link
-		linkend="zfs-term-cow">copy-on-write</link>
-
-	      design of <acronym>ZFS</acronym> allows for nearly instantaneous consistent
-	      snapshots with arbitrary names.  After taking a snapshot
-	      of a dataset (or a recursive snapshot of a parent
-	      dataset that will include all child datasets), new data
-	      is written to new blocks (as described above), however
-	      the old blocks are not reclaimed as free space.  There
-	      are then two versions of the file system, the snapshot
-	      (what the file system looked like before) and the live
-	      file system; however no additional space is used.  As
-	      new data is written to the live file system, new blocks
-	      are allocated to store this data.  The apparent size of
-	      the snapshot will grow as the blocks are no longer used
-	      in the live file system, but only in the snapshot.
-	      These snapshots can be mounted (read only) to allow for
-	      the recovery of previous versions of files.  It is also
-	      possible to <link
-		linkend="zfs-zfs-snapshot">rollback</link>
-	      a live file system to a specific snapshot, undoing any
-	      changes that took place after the snapshot was taken.
-	      Each block in the zpool has a reference counter which
+		linkend="zfs-term-cow">copy-on-write</link> design of
+	      <acronym>ZFS</acronym> allows for nearly instantaneous
+	      consistent snapshots with arbitrary names.  After taking
+	      a snapshot of a dataset (or a recursive snapshot of a
+	      parent dataset that will include all child datasets),
+	      new data is written to new blocks (as described above);
+	      however, the old blocks are not reclaimed as free space.
+	      There are then two versions of the file system, the
+	      snapshot (what the file system looked like before) and
+	      the live file system; however, no additional space is
+	      used.  As new data is written to the live file system,
+	      new blocks are allocated to store this data.  The
+	      apparent size of the snapshot will grow as the blocks
+	      are no longer used in the live file system, but only in
+	      the snapshot.  These snapshots can be mounted (read
+	      only) to allow for the recovery of previous versions of
+	      files.  It is also possible to
+	      <link linkend="zfs-zfs-snapshot">rollback</link> a live
+	      file system to a specific snapshot, undoing any changes
+	      that took place after the snapshot was taken.  Each
+	      block in the zpool has a reference counter which
 	      indicates how many snapshots, clones, datasets or
 	      volumes make use of that block.  As files and snapshots
 	      are deleted, the reference count is decremented; once a
 	      block is no longer referenced, it is reclaimed as free
-	      space.  Snapshots can also be marked with a <link
-		linkend="zfs-zfs-snapshot">hold</link>,
-	      once a snapshot is held, any attempt to destroy it will
-	      return an EBUSY error.  Each snapshot can have multiple
-	      holds, each with a unique name.  The <link
-		linkend="zfs-zfs-snapshot">release</link>
-	      command removes the hold so the snapshot can then be
-	      deleted.  Snapshots can be taken on volumes, however
-	      they can only be cloned or rolled back, not mounted
+	      space.  Snapshots can also be marked with a
+	      <link linkend="zfs-zfs-snapshot">hold</link>; once a
+	      snapshot is held, any attempt to destroy it will return
+	      an EBUSY error.  Each snapshot can have multiple holds,
+	      each with a unique name.  The
+	      <link linkend="zfs-zfs-snapshot">release</link> command
+	      removes the hold so the snapshot can then be deleted.
+	      Snapshots can be taken on volumes; however, they can only
+	      be cloned or rolled back, not mounted
 	      independently.</entry>
 	  </row>
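
A sketch of the snapshot operations mentioned in this entry; the dataset mypool/home, the snapshot name monday, and the hold tag keepme are placeholders:

    # zfs snapshot mypool/home@monday
    # zfs rollback mypool/home@monday
    # zfs hold keepme mypool/home@monday
    # zfs destroy mypool/home@monday
    # zfs release keepme mypool/home@monday
    # zfs destroy mypool/home@monday

While the hold is in place, the first destroy is refused with EBUSY; after the release, the second one succeeds.
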
 
@@ -1206,13 +1231,16 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 
 	    <entry>Every block that is allocated is also checksummed
 	      (the algorithm used is a per dataset property, see:
-	      <command>zfs set</command>).  <acronym>ZFS</acronym> transparently validates the checksum of
-	      each block as it is read, allowing <acronym>ZFS</acronym> to detect silent
-	      corruption.  If the data that is read does not match the
-	      expected checksum, <acronym>ZFS</acronym> will attempt to recover the data
-	      from any available redundancy, like mirrors or <acronym>RAID-Z</acronym>.  Validation of all checksums can be triggered with
-	      the
-	      <link linkend="zfs-term-scrub"><command>scrub</command></link>
+	      <command>zfs set</command>).  <acronym>ZFS</acronym>
+	      transparently validates the checksum of each block as it
+	      is read, allowing <acronym>ZFS</acronym> to detect
+	      silent corruption.  If the data that is read does not
+	      match the expected checksum, <acronym>ZFS</acronym> will
+	      attempt to recover the data from any available
+	      redundancy, like mirrors or <acronym>RAID-Z</acronym>.
+	      Validation of all checksums can be triggered with the
+	      <link
+		linkend="zfs-term-scrub"><command>scrub</command></link>
 	      command.  Available checksum algorithms include:
 
 	      <itemizedlist>
@@ -1238,90 +1266,96 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	  <row>
 	    <entry id="zfs-term-compression">Compression</entry>
 
-	    <entry>Each dataset in <acronym>ZFS</acronym> has a compression property,
-	      which defaults to off.  This property can be set to one
-	      of a number of compression algorithms, which will cause
-	      all new data that is written to this dataset to be
-	      compressed as it is written.  In addition to the
-	      reduction in disk usage, this can also increase read and
-	      write throughput, as only the smaller compressed version
-	      of the file needs to be read or written.
+	    <entry>Each dataset in <acronym>ZFS</acronym> has a
+	      compression property, which defaults to off.  This
+	      property can be set to one of a number of compression
+	      algorithms, which will cause all new data that is
+	      written to this dataset to be compressed as it is
+	      written.  In addition to the reduction in disk usage,
+	      this can also increase read and write throughput, as
+	      only the smaller compressed version of the file needs to
+	      be read or written.
 
 	      <note>
-		<para><acronym>LZ4</acronym> compression is only available after &os;
-		  9.2</para>
+		<para><acronym>LZ4</acronym> compression is only
+		  available in &os;&nbsp;9.2 and later.</para>
 	      </note></entry>
 	  </row>
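
As a sketch of setting the property described above on one dataset (mypool/data is a placeholder; lz4 assumes a system recent enough per the note):

    # zfs set compression=lz4 mypool/data
    # zfs get compression,compressratio mypool/data

The compressratio property reports the space saving actually achieved.
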
 
 	  <row>
 	    <entry id="zfs-term-deduplication">Deduplication</entry>
 
-	    <entry><acronym>ZFS</acronym> has the ability to detect duplicate blocks of
-	      data as they are written (thanks to the checksumming
-	      feature).  If deduplication is enabled, instead of
-	      writing the block a second time, the reference count of
-	      the existing block will be increased, saving storage
-	      space.  To do this, <acronym>ZFS</acronym> keeps a deduplication
-	      table (<acronym>DDT</acronym>) in memory, containing the
-	      list of unique checksums, the location of that block and
-	      a reference count.  When new data is written, the
-	      checksum is calculated and compared to the list.  If a
-	      match is found, the data is considered to be a
-	      duplicate.  When deduplication is enabled, the checksum
-	      algorithm is changed to <acronym>SHA256</acronym> to
-	      provide a secure cryptographic hash.  <acronym>ZFS</acronym> deduplication
-	      is tunable; if dedup is on, then a matching checksum is
-	      assumed to mean that the data is identical.  If dedup is
-	      set to verify, then the data in the two blocks will be
-	      checked byte-for-byte to ensure it is actually identical
-	      and if it is not, the hash collision will be noted by
-	      <acronym>ZFS</acronym> and the two blocks will be stored separately.  Due
-	      to the nature of the <acronym>DDT</acronym>, having to
-	      store the hash of each unique block, it consumes a very
-	      large amount of memory (a general rule of thumb is
-	      5-6&nbsp;GB of RAM per 1&nbsp;TB of deduplicated data).
-	      In situations where it is not practical to have enough
-	      <acronym>RAM</acronym> to keep the entire <acronym>DDT</acronym> in memory,
-	      performance will suffer greatly as the <acronym>DDT</acronym> will need to
-	      be read from disk before each new block is written.
-	      Deduplication can make use of the <acronym>L2ARC</acronym> to store the
-	      <acronym>DDT</acronym>, providing a middle ground between fast system
-	      memory and slower disks.  Consider
-	      using <acronym>ZFS</acronym> compression instead, which often provides
-	      nearly as much space savings without the additional
-	      memory requirement.</entry>
+	    <entry><acronym>ZFS</acronym> has the ability to detect
+	      duplicate blocks of data as they are written (thanks to
+	      the checksumming feature).  If deduplication is enabled,
+	      instead of writing the block a second time, the
+	      reference count of the existing block will be increased,
+	      saving storage space.  To do this,
+	      <acronym>ZFS</acronym> keeps a deduplication table
+	      (<acronym>DDT</acronym>) in memory, containing the list
+	      of unique checksums, the location of that block and a
+	      reference count.  When new data is written, the checksum
+	      is calculated and compared to the list.  If a match is
+	      found, the data is considered to be a duplicate.  When
+	      deduplication is enabled, the checksum algorithm is
+	      changed to <acronym>SHA256</acronym> to provide a secure
+	      cryptographic hash.  <acronym>ZFS</acronym>
+	      deduplication is tunable; if dedup is on, then a
+	      matching checksum is assumed to mean that the data is
+	      identical.  If dedup is set to verify, then the data in
+	      the two blocks will be checked byte-for-byte to ensure
+	      it is actually identical and if it is not, the hash
+	      collision will be noted by <acronym>ZFS</acronym> and
+	      the two blocks will be stored separately.  Because the
+	      <acronym>DDT</acronym> must store the hash of each
+	      unique block, it consumes a very large amount of memory
+	      (a general rule of thumb is 5-6&nbsp;GB of RAM per
+	      1&nbsp;TB of deduplicated data).  In
+	      situations where it is not practical to have enough
+	      <acronym>RAM</acronym> to keep the entire
+	      <acronym>DDT</acronym> in memory, performance will
+	      suffer greatly as the <acronym>DDT</acronym> will need
+	      to be read from disk before each new block is written.
+	      Deduplication can make use of the
+	      <acronym>L2ARC</acronym> to store the
+	      <acronym>DDT</acronym>, providing a middle ground
+	      between fast system memory and slower disks.  Consider
+	      using <acronym>ZFS</acronym> compression instead, which
+	      often provides nearly as much space savings without the
+	      additional memory requirement.</entry>
 	  </row>
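
A sketch of enabling the feature above on a single dataset (mypool/data is a placeholder); dedup=verify corresponds to the byte-for-byte check described in the entry:

    # zfs set dedup=on mypool/data
    # zfs set dedup=verify mypool/data
    # zpool list mypool

The DEDUP column of zpool list reports the ratio achieved across the pool.
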
 
 	  <row>
 	    <entry id="zfs-term-scrub">Scrub</entry>
 
-	    <entry>In place of a consistency check like &man.fsck.8;, <acronym>ZFS</acronym> has
-	      the <literal>scrub</literal> command, which reads all
-	      data blocks stored on the pool and verifies their
-	      checksums against the known good checksums stored
-	      in the metadata.  This periodic check of all the data
-	      stored on the pool ensures the recovery of any corrupted
-	      blocks before they are needed.  A scrub is not required
-	      after an unclean shutdown, but it is recommended that
-	      you run a scrub at least once each quarter.  <acronym>ZFS</acronym>
-	      compares the checksum for each block as it is read in
-	      the normal course of use, but a scrub operation makes
-	      sure even infrequently used blocks are checked for
-	      silent corruption.</entry>
+	    <entry>In place of a consistency check like &man.fsck.8;,
+	      <acronym>ZFS</acronym> has the <literal>scrub</literal>
+	      command, which reads all data blocks stored on the pool
+	      and verifies their checksums against the known good
+	      checksums stored in the metadata.  This periodic check
+	      of all the data stored on the pool ensures the recovery
+	      of any corrupted blocks before they are needed.  A scrub
+	      is not required after an unclean shutdown, but it is
+	      recommended that you run a scrub at least once each
+	      quarter.  <acronym>ZFS</acronym> compares the checksum
+	      for each block as it is read in the normal course of
+	      use, but a scrub operation makes sure even infrequently
+	      used blocks are checked for silent corruption.</entry>
 	  </row>
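
A sketch of running and monitoring the operation described above (mypool is a placeholder):

    # zpool scrub mypool
    # zpool status mypool
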
 
 	  <row>
 	    <entry id="zfs-term-quota">Dataset Quota</entry>
 
-	    <entry><acronym>ZFS</acronym> provides very fast and accurate dataset, user
-	      and group space accounting in addition to quotas and
-	      space reservations.  This gives the administrator fine
-	      grained control over how space is allocated and allows
-	      critical file systems to reserve space to ensure other
-	      file systems do not take all of the free space.
+	    <entry><acronym>ZFS</acronym> provides very fast and
+	      accurate dataset, user and group space accounting in
+	      addition to quotas and space reservations.  This gives
+	      the administrator fine grained control over how space is
+	      allocated and allows critical file systems to reserve
+	      space to ensure other file systems do not take all of
+	      the free space.
 
-	      <para><acronym>ZFS</acronym> supports different types of quotas: the
-		dataset quota, the <link
+	      <para><acronym>ZFS</acronym> supports different types of
+		quotas: the dataset quota, the <link
 		  linkend="zfs-term-refquota">reference
 		  quota (<acronym>refquota</acronym>)</link>, the
 		<link linkend="zfs-term-userquota">user
@@ -1381,9 +1415,9 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	      dataset tries to use all of the free space, at least
 	      10&nbsp;GB of space is reserved for this dataset.  If a
 	      snapshot is taken of
-	      <filename class="directory">storage/home/bob</filename>, the space used by
-	      that snapshot is counted against the reservation.  The
-	      <link
+	      <filename class="directory">storage/home/bob</filename>,
+	      the space used by that snapshot is counted against the
+	      reservation.  The <link
 		linkend="zfs-term-refreservation">refreservation</link>
 	      property works in a similar way, except it
 	      <emphasis>excludes</emphasis> descendants, such as

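To round out the last two hunks, a sketch of the quota and reservation properties they describe; the sizes are placeholders, and storage/home/bob is the dataset from the chapter's own reservation example:

    # zfs set quota=20G storage/home/bob
    # zfs set refquota=20G storage/home/bob
    # zfs set reservation=10G storage/home/bob
    # zfs set refreservation=10G storage/home/bob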

