Date:      Mon, 28 Jan 2013 15:13:29 +0000 (UTC)
From:      Dru Lavigne <dru@FreeBSD.org>
To:        doc-committers@freebsd.org, svn-doc-all@freebsd.org, svn-doc-head@freebsd.org
Subject:   svn commit: r40781 - head/en_US.ISO8859-1/books/handbook/vinum
Message-ID:  <201301281513.r0SFDTIt013672@svn.freebsd.org>

Author: dru
Date: Mon Jan 28 15:13:29 2013
New Revision: 40781
URL: http://svnweb.freebsd.org/changeset/doc/40781

Log:
  White space fix only. Translators can ignore.
  
  Approved by:  bcr (mentor)

Modified:
  head/en_US.ISO8859-1/books/handbook/vinum/chapter.xml

Modified: head/en_US.ISO8859-1/books/handbook/vinum/chapter.xml
==============================================================================
--- head/en_US.ISO8859-1/books/handbook/vinum/chapter.xml	Mon Jan 28 15:12:10 2013	(r40780)
+++ head/en_US.ISO8859-1/books/handbook/vinum/chapter.xml	Mon Jan 28 15:13:29 2013	(r40781)
@@ -31,25 +31,25 @@
 
     <itemizedlist>
       <listitem>
-        <para>They can be too small.</para>
+	<para>They can be too small.</para>
       </listitem>
 
       <listitem>
-        <para>They can be too slow.</para>
+	<para>They can be too slow.</para>
       </listitem>
 
       <listitem>
-        <para>They can be too unreliable.</para>
+	<para>They can be too unreliable.</para>
       </listitem>
     </itemizedlist>
 
     <para>Various solutions to these problems have been proposed and
-      implemented.  One way some users safeguard themselves against such
-      issues is through the use of multiple, and sometimes redundant,
-      disks.  In addition to supporting various cards and controllers
-      for hardware RAID systems, the base &os; system includes the
-      Vinum Volume Manager, a block device driver that implements
-      virtual disk drives.  <emphasis>Vinum</emphasis> is a
+      implemented.  One way some users safeguard themselves against
+      such issues is through the use of multiple, and sometimes
+      redundant, disks.  In addition to supporting various cards and
+      controllers for hardware RAID systems, the base &os; system
+      includes the Vinum Volume Manager, a block device driver that
+      implements virtual disk drives.  <emphasis>Vinum</emphasis> is a
       so-called <emphasis>Volume Manager</emphasis>, a virtual disk
       driver that addresses these three problems.  Vinum provides more
       flexibility, performance, and reliability than traditional disk
@@ -57,26 +57,27 @@
       individually and in combination.</para>
 
     <para>This chapter provides an overview of potential problems with
-      traditional disk storage, and an introduction to the Vinum Volume
-      Manager.</para>
+      traditional disk storage, and an introduction to the Vinum
+      Volume Manager.</para>
 
     <note>
-      <para>Starting with &os;&nbsp;5, Vinum has been rewritten in order
-	to fit into the GEOM architecture (<xref linkend="GEOM"/>),
-	retaining the original ideas, terminology, and on-disk
-	metadata.  This rewrite is called <emphasis>gvinum</emphasis>
-	(for <emphasis> GEOM vinum</emphasis>).  The following text
-	usually refers to <emphasis>Vinum</emphasis> as an abstract
-	name, regardless of the implementation variant.  Any command
-	invocations should now be done using
-	the <command>gvinum</command> command, and the name of the
-	kernel module has been changed
-	from <filename>vinum.ko</filename>
-	to <filename>geom_vinum.ko</filename>, and all device nodes
-	reside under <filename class="directory">/dev/gvinum</filename> instead
-	of <filename class="directory">/dev/vinum</filename>.  As of &os;&nbsp;6, the old
-	Vinum implementation is no longer available in the code
-	base.</para>
+      <para>Starting with &os;&nbsp;5, Vinum has been rewritten in
+	order to fit into the GEOM architecture (<xref
+	  linkend="GEOM"/>), retaining the original ideas,
+	terminology, and on-disk metadata.  This rewrite is called
+	<emphasis>gvinum</emphasis> (for <emphasis> GEOM
+	  vinum</emphasis>).  The following text usually refers to
+	<emphasis>Vinum</emphasis> as an abstract name, regardless of
+	the implementation variant.  Any command invocations should
+	now be done using the <command>gvinum</command> command, and
+	the name of the kernel module has been changed from
+	<filename>vinum.ko</filename> to
+	<filename>geom_vinum.ko</filename>, and all device nodes
+	reside under <filename
+	  class="directory">/dev/gvinum</filename> instead of
+	<filename class="directory">/dev/vinum</filename>.  As of
+	&os;&nbsp;6, the old Vinum implementation is no longer
+	available in the code base.</para>
     </note>
 
   </sect1>
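
A minimal sketch of getting the rewritten module loaded on FreeBSD 5.x
or later; the loader.conf line is the same one that appears later in
this chapter, and gvinum(8) also loads the module by itself when first
invoked:

    # echo 'geom_vinum_load="YES"' >> /boot/loader.conf  # load at every boot
    # kldload geom_vinum                                 # or load it immediately
    # ls /dev/gvinum                                     # Vinum device nodes appear here
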
@@ -86,7 +87,7 @@
 
     <indexterm><primary>Vinum</primary></indexterm>
     <indexterm><primary>RAID</primary>
-    <secondary>software</secondary></indexterm>
+      <secondary>software</secondary></indexterm>
 
     <para>Disks are getting bigger, but so are data storage
       requirements.  Often you will find you want a file system that
@@ -137,8 +138,7 @@
       it uses several smaller disks with the same aggregate storage
       space.  Each disk is capable of positioning and transferring
       independently, so the effective throughput increases by a factor
-      close to the number of disks used.
-    </para>
+      close to the number of disks used.</para>
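
(As a rough worked example with assumed figures: four disks that each
sustain 50 MB/s give an aggregate ceiling near 4 x 50 = 200 MB/s; the
next paragraph covers why real transfers stay below that factor.)
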
 
     <para>The exact throughput improvement is, of course, smaller than
       the number of disks involved: although each drive is capable of
@@ -175,9 +175,9 @@
     <para>
       <figure id="vinum-concat">
 	<title>Concatenated Organization</title>
+
 	<graphic fileref="vinum/vinum-concat"/>
-      </figure>
-    </para>
+      </figure></para>
 
     <indexterm>
       <primary>disk striping</primary>
@@ -200,152 +200,150 @@
 
     <footnote>
       <para><acronym>RAID</acronym> stands for <emphasis>Redundant
-      Array of Inexpensive Disks</emphasis> and offers various forms
-      of fault tolerance, though the latter term is somewhat
-      misleading: it provides no redundancy.</para> </footnote>.
-
-    Striping requires somewhat more effort to locate the data, and it
-    can cause additional I/O load where a transfer is spread over
-    multiple disks, but it can also provide a more constant load
-    across the disks.  <xref linkend="vinum-striped"/> illustrates the
-    sequence in which storage units are allocated in a striped
-    organization.</para>
+	  Array of Inexpensive Disks</emphasis> and offers various
+	forms of fault tolerance, though the latter term is somewhat
+	misleading: it provides no redundancy.</para> </footnote>.
+
+      Striping requires somewhat more effort to locate the
+      data, and it can cause additional I/O load where a transfer is
+      spread over multiple disks, but it can also provide a more
+      constant load across the disks.  <xref linkend="vinum-striped"/>
+      illustrates the sequence in which storage units are allocated in
+      a striped organization.</para>
 
     <para>
       <figure id="vinum-striped">
-        <title>Striped Organization</title>
+	<title>Striped Organization</title>
+
 	<graphic fileref="vinum/vinum-striped"/>
-      </figure>
-    </para>
+      </figure></para>
   </sect1>
 
   <sect1 id="vinum-data-integrity">
     <title>Data Integrity</title>
 
-      <para>The final problem with current disks is that they are
-	unreliable.  Although disk drive reliability has increased
-	tremendously over the last few years, they are still the most
-	likely core component of a server to fail.  When they do, the
-	results can be catastrophic: replacing a failed disk drive and
-	restoring data to it can take days.</para>
-
-      <indexterm>
-	<primary>disk mirroring</primary>
-      </indexterm>
-      <indexterm>
-	<primary>Vinum</primary>
-	<secondary>mirroring</secondary>
-      </indexterm>
-      <indexterm>
-	<primary>RAID-1</primary>
-      </indexterm>
-
-      <para>The traditional way to approach this problem has been
-	<emphasis>mirroring</emphasis>, keeping two copies of the data
-	on different physical hardware.  Since the advent of the
-	<acronym>RAID</acronym> levels, this technique has also been
-	called <acronym>RAID level 1</acronym> or
-	<acronym>RAID-1</acronym>.  Any write to the volume writes to
-	both locations; a read can be satisfied from either, so if one
-	drive fails, the data is still available on the other
-	drive.</para>
+    <para>The final problem with current disks is that they are
+      unreliable.  Although disk drive reliability has increased
+      tremendously over the last few years, they are still the most
+      likely core component of a server to fail.  When they do, the
+      results can be catastrophic: replacing a failed disk drive and
+      restoring data to it can take days.</para>
 
-      <para>Mirroring has two problems:</para>
+    <indexterm>
+      <primary>disk mirroring</primary>
+    </indexterm>
+    <indexterm><primary>Vinum</primary>
+      <secondary>mirroring</secondary>
+    </indexterm>
+    <indexterm><primary>RAID-1</primary>
+    </indexterm>
 
-	<itemizedlist>
-	  <listitem>
-	    <para>The price.  It requires twice as much disk storage as
-	      a non-redundant solution.</para>
-	  </listitem>
+    <para>The traditional way to approach this problem has been
+      <emphasis>mirroring</emphasis>, keeping two copies of the data
+      on different physical hardware.  Since the advent of the
+      <acronym>RAID</acronym> levels, this technique has also been
+      called <acronym>RAID level 1</acronym> or
+      <acronym>RAID-1</acronym>.  Any write to the volume writes to
+      both locations; a read can be satisfied from either, so if one
+      drive fails, the data is still available on the other
+      drive.</para>
 
-	  <listitem>
-	    <para>The performance impact.  Writes must be performed to
-	      both drives, so they take up twice the bandwidth of a
-	      non-mirrored volume.  Reads do not suffer from a
-	      performance penalty: it even looks as if they are
-	      faster.</para>
-	  </listitem>
-	</itemizedlist>
-
-      <para><indexterm><primary>RAID-5</primary></indexterm>An
-	alternative solution is <emphasis>parity</emphasis>,
-	implemented in the <acronym>RAID</acronym> levels 2, 3, 4 and
-	5.  Of these, <acronym>RAID-5</acronym> is the most
-	interesting. As implemented in Vinum, it is a variant on a
-	striped organization which dedicates one block of each stripe
-	to the parity of the other blocks. As implemented by Vinum, a
-	<acronym>RAID-5</acronym> plex is similar to a striped plex,
-	except that it implements <acronym>RAID-5</acronym> by
-	including a parity block in each stripe.  As required by
-	<acronym>RAID-5</acronym>, the location of this parity block
-	changes from one stripe to the next.  The numbers in the data
-	blocks indicate the relative block numbers.</para>
+    <para>Mirroring has two problems:</para>
 
-      <para>
-	<figure id="vinum-raid5-org">
-	  <title>RAID-5 Organization</title>
-	  <graphic fileref="vinum/vinum-raid5-org"/>
-	</figure>
-      </para>
-
-      <para>Compared to mirroring, <acronym>RAID-5</acronym> has the
-	advantage of requiring significantly less storage space.  Read
-	access is similar to that of striped organizations, but write
-	access is significantly slower, approximately 25% of the read
-	performance.  If one drive fails, the array can continue to
-	operate in degraded mode: a read from one of the remaining
-	accessible drives continues normally, but a read from the
-	failed drive is recalculated from the corresponding block from
-	all the remaining drives.
-      </para>
+    <itemizedlist>
+      <listitem>
+	<para>The price.  It requires twice as much disk storage as
+	  a non-redundant solution.</para>
+      </listitem>
+
+      <listitem>
+	<para>The performance impact.  Writes must be performed to
+	  both drives, so they take up twice the bandwidth of a
+	  non-mirrored volume.  Reads do not suffer from a
+	  performance penalty: it even looks as if they are
+	  faster.</para>
+      </listitem>
+    </itemizedlist>
+
+    <para><indexterm><primary>RAID-5</primary></indexterm>An
+      alternative solution is <emphasis>parity</emphasis>, implemented
+      in the <acronym>RAID</acronym> levels 2, 3, 4 and 5.  Of these,
+      <acronym>RAID-5</acronym> is the most interesting.  As
+      implemented in Vinum, it is a variant on a striped organization
+      which dedicates one block of each stripe to the parity of
+      the other blocks.  As implemented by Vinum, a
+      <acronym>RAID-5</acronym> plex is similar to a striped plex,
+      except that it implements <acronym>RAID-5</acronym> by
+      including a parity block in each stripe.  As required by
+      <acronym>RAID-5</acronym>, the location of this parity block
+      changes from one stripe to the next.  The numbers in the data
+      blocks indicate the relative block numbers.</para>
+
+    <para>
+      <figure id="vinum-raid5-org">
+	<title>RAID-5 Organization</title>
+
+	<graphic fileref="vinum/vinum-raid5-org"/>
+      </figure></para>
+
+    <para>Compared to mirroring, <acronym>RAID-5</acronym> has the
+      advantage of requiring significantly less storage space.  Read
+      access is similar to that of striped organizations, but write
+      access is significantly slower, approximately 25% of the read
+      performance.  If one drive fails, the array can continue to
+      operate in degraded mode: a read from one of the remaining
+      accessible drives continues normally, but a read from the
+      failed drive is recalculated from the corresponding block from
+      all the remaining drives.</para>
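
By analogy with the concatenated and striped examples later in this
chapter, a RAID-5 plex might be declared as sketched below.  The drive
names and devices are illustrative, and the "org raid5" keyword with a
stripe size is assumed to follow the same configuration grammar as
"org striped"; at least three subdisks are needed so that one block
per stripe can hold the parity:

    drive c device /dev/da5h
    drive d device /dev/da6h
    drive e device /dev/da7h
    volume raid5vol
      plex org raid5 512k
        sd length 128m drive c
        sd length 128m drive d
        sd length 128m drive e
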
   </sect1>
 
   <sect1 id="vinum-objects">
     <title>Vinum Objects</title>
-      <para>In order to address these problems, Vinum implements a four-level
-	hierarchy of objects:</para>
 
-      <itemizedlist>
-	<listitem>
-	  <para>The most visible object is the virtual disk, called a
-	    <emphasis>volume</emphasis>.  Volumes have essentially the same
-	    properties as a &unix; disk drive, though there are some minor
-	    differences.  They have no size limitations.</para>
-	</listitem>
+    <para>In order to address these problems, Vinum implements a
+      four-level hierarchy of objects:</para>
 
-	<listitem>
-	  <para>Volumes are composed of <emphasis>plexes</emphasis>,
-	    each of which represents the total address space of a
-	    volume.  This level in the hierarchy thus provides
-	    redundancy.  Think of plexes as individual disks in a
-	    mirrored array, each containing the same data.</para>
-	</listitem>
+    <itemizedlist>
+      <listitem>
+	<para>The most visible object is the virtual disk, called a
+	  <emphasis>volume</emphasis>.  Volumes have essentially the
+	  same properties as a &unix; disk drive, though there are
+	  some minor differences.  They have no size
+	  limitations.</para>
+      </listitem>
 
-	<listitem>
-	  <para>Since Vinum exists within the &unix; disk storage
-	    framework, it would be possible to use &unix;
-	    partitions as the building block for multi-disk plexes,
-	    but in fact this turns out to be too inflexible:
-	    &unix; disks can have only a limited number of
-	    partitions.  Instead, Vinum subdivides a single
-	    &unix; partition (the <emphasis>drive</emphasis>)
-	    into contiguous areas called
-	    <emphasis>subdisks</emphasis>, which it uses as building
-	    blocks for plexes.</para>
-	</listitem>
+      <listitem>
+	<para>Volumes are composed of <emphasis>plexes</emphasis>,
+	  each of which represents the total address space of a
+	  volume.  This level in the hierarchy thus provides
+	  redundancy.  Think of plexes as individual disks in a
+	  mirrored array, each containing the same data.</para>
+      </listitem>
 
-	<listitem>
-	  <para>Subdisks reside on Vinum <emphasis>drives</emphasis>,
-	    currently &unix; partitions.  Vinum drives can
-	    contain any number of subdisks.  With the exception of a
-	    small area at the beginning of the drive, which is used
-	    for storing configuration and state information, the
-	    entire drive is available for data storage.</para>
-	</listitem>
-      </itemizedlist>
+      <listitem>
+	<para>Since Vinum exists within the &unix; disk storage
+	  framework, it would be possible to use &unix; partitions
+	  as the building block for multi-disk plexes, but in fact
+	  this turns out to be too inflexible: &unix; disks can have
+	  only a limited number of partitions.  Instead, Vinum
+	  subdivides a single &unix; partition (the
+	  <emphasis>drive</emphasis>) into contiguous areas called
+	  <emphasis>subdisks</emphasis>, which it uses as building
+	  blocks for plexes.</para>
+      </listitem>
+
+      <listitem>
+	<para>Subdisks reside on Vinum <emphasis>drives</emphasis>,
+	  currently &unix; partitions.  Vinum drives can contain any
+	  number of subdisks.  With the exception of a small area at
+	  the beginning of the drive, which is used for storing
+	  configuration and state information, the entire drive is
+	  available for data storage.</para>
+      </listitem>
+    </itemizedlist>
 
-      <para>The following sections describe the way these objects provide the
-	functionality required of Vinum.</para>
+    <para>The following sections describe the way these objects
+      provide the functionality required of Vinum.</para>
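
Read bottom-up, the configuration keywords introduced later in this
chapter map directly onto this four-level hierarchy; the simple volume
from the configuration-file section below serves as a sketch:

    drive a device /dev/da3h
    volume myvol
      plex org concat
        sd length 512m drive a

The subdisk is carved out of drive a, the concatenated plex maps it
into a single address space, and the volume exposes that plex as the
virtual disk /dev/gvinum/myvol.
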
 
     <sect2>
       <title>Volume Size Considerations</title>
@@ -358,6 +356,7 @@
 
     <sect2>
       <title>Redundant Data Storage</title>
+
       <para>Vinum implements mirroring by attaching multiple plexes to
 	a volume.  Each plex is a representation of the data in a
 	volume.  A volume may contain between one and eight
@@ -395,8 +394,9 @@
 
     <sect2>
       <title>Which Plex Organization?</title>
-      <para>The version of Vinum supplied with &os;&nbsp;&rel.current; implements
-	two kinds of plex:</para>
+
+      <para>The version of Vinum supplied with &os;&nbsp;&rel.current;
+	implements two kinds of plex:</para>
 
       <itemizedlist>
 	<listitem>
@@ -409,7 +409,7 @@
 	    measurable.  On the other hand, they are most susceptible
 	    to hot spots, where one disk is very active and others are
 	    idle.</para>
-        </listitem>
+	</listitem>
 
 	<listitem>
 	  <para>The greatest advantage of striped
@@ -427,19 +427,20 @@
 	</listitem>
       </itemizedlist>
 
-      <para><xref linkend="vinum-comparison"/> summarizes the advantages
-	and disadvantages of each plex organization.</para>
+      <para><xref linkend="vinum-comparison"/> summarizes the
+	advantages and disadvantages of each plex organization.</para>
 
       <table id="vinum-comparison" frame="none">
 	<title>Vinum Plex Organizations</title>
+
 	<tgroup cols="5">
 	  <thead>
 	    <row>
 	      <entry>Plex type</entry>
-	  	<entry>Minimum subdisks</entry>
-	  	<entry>Can add subdisks</entry>
-	  	<entry>Must be equal size</entry>
-	  	<entry>Application</entry>
+	      <entry>Minimum subdisks</entry>
+	      <entry>Can add subdisks</entry>
+	      <entry>Must be equal size</entry>
+	      <entry>Application</entry>
 	    </row>
 	  </thead>
 
@@ -449,8 +450,8 @@
 	      <entry>1</entry>
 	      <entry>yes</entry>
 	      <entry>no</entry>
-	      <entry>Large data storage with maximum placement flexibility
-	        and moderate performance</entry>
+	      <entry>Large data storage with maximum placement
+		flexibility and moderate performance</entry>
 	    </row>
 
 	    <row>
@@ -458,8 +459,8 @@
 	      <entry>2</entry>
 	      <entry>no</entry>
 	      <entry>yes</entry>
-	      <entry>High performance in combination with highly concurrent
-		access</entry>
+	      <entry>High performance in combination with highly
+		concurrent access</entry>
 	    </row>
 	  </tbody>
 	</tgroup>
@@ -471,7 +472,7 @@
     <title>Some Examples</title>
 
     <para>Vinum maintains a <emphasis>configuration
-      database</emphasis> which describes the objects known to an
+	database</emphasis> which describes the objects known to an
       individual system.  Initially, the user creates the
       configuration database from one or more configuration files with
       the aid of the &man.gvinum.8; utility program.  Vinum stores a
@@ -482,11 +483,11 @@
 
     <sect2>
       <title>The Configuration File</title>
-      <para>The configuration file describes individual Vinum objects.  The
-	definition of a simple volume might be:</para>
 
-      <programlisting>
-    drive a device /dev/da3h
+      <para>The configuration file describes individual Vinum objects.
+	The definition of a simple volume might be:</para>
+
+      <programlisting>    drive a device /dev/da3h
     volume myvol
       plex org concat
         sd length 512m drive a</programlisting>
@@ -505,9 +506,9 @@
 	</listitem>
 
 	<listitem>
-	  <para>The <emphasis>volume</emphasis> line describes a volume.
-	    The only required attribute is the name, in this case
-	    <emphasis>myvol</emphasis>.</para>
+	  <para>The <emphasis>volume</emphasis> line describes a
+	    volume.  The only required attribute is the name, in this
+	    case <emphasis>myvol</emphasis>.</para>
 	</listitem>
 
 	<listitem>
@@ -535,8 +536,8 @@
 	</listitem>
       </itemizedlist>
 
-      <para>After processing this file, &man.gvinum.8; produces the following
-	output:</para>
+      <para>After processing this file, &man.gvinum.8; produces the
+	following output:</para>
 
       <programlisting width="97">
       &prompt.root; gvinum -&gt; <userinput>create config1</userinput>
@@ -554,15 +555,16 @@
 
 	S myvol.p0.s0           State: up       PO:        0  B Size:        512 MB</programlisting>
 
-      <para>This output shows the brief listing format of &man.gvinum.8;.  It
-	is represented graphically in <xref linkend="vinum-simple-vol"/>.</para>
+      <para>This output shows the brief listing format of
+	&man.gvinum.8;.  It is represented graphically in <xref
+	  linkend="vinum-simple-vol"/>.</para>
 
       <para>
 	<figure id="vinum-simple-vol">
 	  <title>A Simple Vinum Volume</title>
+
 	  <graphic fileref="vinum/vinum-simple-vol"/>
-	</figure>
-      </para>
+	</figure></para>
 
       <para>This figure, and the ones which follow, represent a
 	volume, which contains the plexes, which in turn contain the
@@ -587,8 +589,7 @@
 	that a drive failure will not take down both plexes.  The
 	following configuration mirrors a volume:</para>
 
-      <programlisting>
-	drive b device /dev/da4h
+      <programlisting>	drive b device /dev/da4h
 	volume mirror
       plex org concat
         sd length 512m drive a
@@ -628,9 +629,9 @@
       <para>
 	<figure id="vinum-mirrored-vol">
 	  <title>A Mirrored Vinum Volume</title>
+
 	  <graphic fileref="vinum/vinum-mirrored-vol"/>
-	</figure>
-      </para>
+	</figure></para>
 
       <para>In this example, each plex contains the full 512&nbsp;MB
 	of address space.  As in the previous example, each plex
@@ -650,8 +651,7 @@
 	shows a volume with a plex striped across four disk
 	drives:</para>
 
-	<programlisting>
-	drive c device /dev/da5h
+	<programlisting>	drive c device /dev/da5h
 	drive d device /dev/da6h
 	volume stripe
 	plex org striped 512k
@@ -660,9 +660,9 @@
 	  sd length 128m drive c
 	  sd length 128m drive d</programlisting>
 
-      <para>As before, it is not necessary to define the drives which are
-	already known to Vinum.  After processing this definition, the
-	configuration looks like:</para>
+      <para>As before, it is not necessary to define the drives which
+	are already known to Vinum.  After processing this definition,
+	the configuration looks like:</para>
 
       <programlisting width="92">
 	Drives:         4 (4 configured)
@@ -695,27 +695,26 @@
       <para>
 	<figure id="vinum-striped-vol">
 	  <title>A Striped Vinum Volume</title>
+
 	  <graphic fileref="vinum/vinum-striped-vol"/>
-	</figure>
-      </para>
+	</figure></para>
 
       <para>This volume is represented in
-	<xref linkend="vinum-striped-vol"/>.  The darkness of the stripes
-	indicates the position within the plex address space: the lightest stripes
-	come first, the darkest last.</para>
+	<xref linkend="vinum-striped-vol"/>.  The darkness of the
+	stripes indicates the position within the plex address space:
+	the lightest stripes come first, the darkest last.</para>
     </sect2>
 
     <sect2>
       <title>Resilience and Performance</title>
 
-      <para><anchor id="vinum-resilience"/>With sufficient hardware, it
-	is possible to build volumes which show both increased
+      <para><anchor id="vinum-resilience"/>With sufficient hardware,
+	it is possible to build volumes which show both increased
 	resilience and increased performance compared to standard
 	&unix; partitions.  A typical configuration file might
 	be:</para>
 
-      <programlisting>
-	volume raid10
+      <programlisting>	volume raid10
       plex org striped 512k
         sd length 102480k drive a
         sd length 102480k drive b
@@ -729,19 +728,20 @@
         sd length 102480k drive a
         sd length 102480k drive b</programlisting>
 
-      <para>The subdisks of the second plex are offset by two drives from those
-	of the first plex: this helps ensure that writes do not go to the same
-	subdisks even if a transfer goes over two drives.</para>
+      <para>The subdisks of the second plex are offset by two drives
+	from those of the first plex: this helps ensure that writes do
+	not go to the same subdisks even if a transfer goes over two
+	drives.</para>
 
-      <para><xref linkend="vinum-raid10-vol"/> represents the structure
-	of this volume.</para>
+      <para><xref linkend="vinum-raid10-vol"/> represents the
+	structure of this volume.</para>
 
       <para>
 	<figure id="vinum-raid10-vol">
 	  <title>A Mirrored, Striped Vinum Volume</title>
+
 	  <graphic fileref="vinum/vinum-raid10-vol"/>
-        </figure>
-      </para>
+	</figure></para>
     </sect2>
   </sect1>
 
@@ -762,19 +762,21 @@
       drives may be up to 32 characters long.</para>
 
     <para>Vinum objects are assigned device nodes in the hierarchy
-      <filename class="directory">/dev/gvinum</filename>.  The configuration shown above
-      would cause Vinum to create the following device nodes:</para>
+      <filename class="directory">/dev/gvinum</filename>.  The
+      configuration shown above would cause Vinum to create the
+      following device nodes:</para>
 
     <itemizedlist>
       <listitem>
 	<para>Device entries for each volume.
-	  These are the main devices used by Vinum.  Thus the configuration
-	  above would include the devices
+	  These are the main devices used by Vinum.  Thus the
+	  configuration above would include the devices
 	  <filename class="devicefile">/dev/gvinum/myvol</filename>,
 	  <filename class="devicefile">/dev/gvinum/mirror</filename>,
 	  <filename class="devicefile">/dev/gvinum/striped</filename>,
-	  <filename class="devicefile">/dev/gvinum/raid5</filename> and
-	  <filename class="devicefile">/dev/gvinum/raid10</filename>.</para>
+	  <filename class="devicefile">/dev/gvinum/raid5</filename>
+	  and <filename
+	    class="devicefile">/dev/gvinum/raid10</filename>.</para>
       </listitem>
 
       <listitem>
@@ -785,15 +787,15 @@
       <listitem>
 	<para>The directories
 	  <filename class="directory">/dev/gvinum/plex</filename>, and
-	  <filename class="directory">/dev/gvinum/sd</filename>, which contain
-	  device nodes for each plex and for each subdisk,
+	  <filename class="directory">/dev/gvinum/sd</filename>, which
+	  contain device nodes for each plex and for each subdisk,
 	  respectively.</para>
       </listitem>
     </itemizedlist>
 
-    <para>For example, consider the following configuration file:</para>
-	<programlisting>
-	drive drive1 device /dev/sd1h
+    <para>For example, consider the following configuration
+      file:</para>
+    <programlisting>	drive drive1 device /dev/sd1h
 	drive drive2 device /dev/sd2h
 	drive drive3 device /dev/sd3h
 	drive drive4 device /dev/sd4h
@@ -804,11 +806,11 @@
         sd length 100m drive drive3
         sd length 100m drive drive4</programlisting>
 
-    <para>After processing this file, &man.gvinum.8; creates the following
-      structure in <filename class="directory">/dev/gvinum</filename>:</para>
+    <para>After processing this file, &man.gvinum.8; creates the
+      following structure in <filename
+	class="directory">/dev/gvinum</filename>:</para>
 
-    <programlisting>
-	drwxr-xr-x  2 root  wheel       512 Apr 13 16:46 plex
+    <programlisting>	drwxr-xr-x  2 root  wheel       512 Apr 13 16:46 plex
 	crwxr-xr--  1 root  wheel   91,   2 Apr 13 16:46 s64
 	drwxr-xr-x  2 root  wheel       512 Apr 13 16:46 sd
 
@@ -839,15 +841,16 @@
 	  utilities, notably &man.newfs.8;, which previously tried to
 	  interpret the last letter of a Vinum volume name as a
 	  partition identifier.  For example, a disk drive may have a
-	  name like <filename class="devicefile">/dev/ad0a</filename> or
-	  <filename class="devicefile">/dev/da2h</filename>.  These names represent
-	  the first partition (<devicename>a</devicename>) on the
-	  first (0) IDE disk (<devicename>ad</devicename>) and the
-	  eighth partition (<devicename>h</devicename>) on the third
-	  (2) SCSI disk (<devicename>da</devicename>) respectively.
-	  By contrast, a Vinum volume might be called
-	  <filename class="devicefile">/dev/gvinum/concat</filename>, a name which has
-	  no relationship with a partition name.</para>
+	  name like <filename class="devicefile">/dev/ad0a</filename>
+	  or <filename class="devicefile">/dev/da2h</filename>.  These
+	  names represent the first partition
+	  (<devicename>a</devicename>) on the first (0) IDE disk
+	  (<devicename>ad</devicename>) and the eighth partition
+	  (<devicename>h</devicename>) on the third (2) SCSI disk
+	  (<devicename>da</devicename>) respectively.  By contrast, a
+	  Vinum volume might be called <filename
+	    class="devicefile">/dev/gvinum/concat</filename>, a name
+	  which has no relationship with a partition name.</para>
 
 	<para>In order to create a file system on this volume, use
 	  &man.newfs.8;:</para>
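
The newfs invocation itself falls outside this hunk; presumably it is
along the lines of the following, using the volume node named above
(illustrative, not part of this diff):

    # newfs /dev/gvinum/concat
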
@@ -864,8 +867,8 @@
       Vinum, but this is not recommended.  The standard way to start
       Vinum is as a kernel module (<acronym>kld</acronym>).  You do
       not even need to use &man.kldload.8; for Vinum: when you start
-      &man.gvinum.8;, it checks whether the module has been loaded, and
-      if it is not, it loads it automatically.</para>
+      &man.gvinum.8;, it checks whether the module has been loaded,
+      and if it is not, it loads it automatically.</para>
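
A sketch of what that looks like in practice, using subcommands that
appear elsewhere in this chapter ("l" produces the brief listing shown
in the earlier examples):

    # gvinum             # loads geom_vinum.ko automatically if necessary
    gvinum -> l          # brief listing of drives, volumes, plexes, subdisks
    gvinum -> quit
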
 
 
     <sect2>
@@ -878,7 +881,7 @@
 	configuration files.  For example, a disk configuration might
 	contain the following text:</para>
 
-	<programlisting width="119">volume myvol state up
+      <programlisting width="119">volume myvol state up
 volume bigraid state down
 plex name myvol.p0 state up org concat vol myvol
 plex name myvol.p1 state up org concat vol myvol
@@ -909,96 +912,96 @@ sd name bigraid.p0.s4 drive e plex bigra
 	  if they have been assigned different &unix; drive
 	  IDs.</para>
 
-      <sect3 id="vinum-rc-startup">
-	<title>Automatic Startup</title>
-
-	<para>
-	<emphasis>Gvinum</emphasis> always
-	features an automatic startup once the kernel module is
-	loaded, via &man.loader.conf.5;.  To load the
-	<emphasis>Gvinum</emphasis> module at boot time, add
-	<literal>geom_vinum_load="YES"</literal> to
-	<filename>/boot/loader.conf</filename>.</para>
-
-	<para>When you start Vinum with the <command>gvinum
-	  start</command> command, Vinum reads the configuration
-	  database from one of the Vinum drives.  Under normal
-	  circumstances, each drive contains an identical copy of the
-	  configuration database, so it does not matter which drive is
-	  read.  After a crash, however, Vinum must determine which
-	  drive was updated most recently and read the configuration
-	  from this drive.  It then updates the configuration if
-	  necessary from progressively older drives.</para>
-
-      </sect3>
-    </sect2>
-  </sect1>
-
-  <sect1 id="vinum-root">
-    <title>Using Vinum for the Root Filesystem</title>
+	<sect3 id="vinum-rc-startup">
+	  <title>Automatic Startup</title>
 
-    <para>For a machine that has fully-mirrored filesystems using
-      Vinum, it is desirable to also mirror the root filesystem.
-      Setting up such a configuration is less trivial than mirroring
-      an arbitrary filesystem because:</para>
+	  <para><emphasis>Gvinum</emphasis> always features an
+	    automatic startup once the kernel module is loaded, via
+	    &man.loader.conf.5;.  To load the
+	    <emphasis>Gvinum</emphasis> module at boot time, add
+	    <literal>geom_vinum_load="YES"</literal> to
+	    <filename>/boot/loader.conf</filename>.</para>
 
-    <itemizedlist>
-      <listitem>
-	<para>The root filesystem must be available very early during
-	  the boot process, so the Vinum infrastructure must already be
-	  available at this time.</para>
-      </listitem>
-      <listitem>
-	<para>The volume containing the root filesystem also contains
-	  the system bootstrap and the kernel, which must be read
-	  using the host system's native utilities (e. g. the BIOS on
-	  PC-class machines) which often cannot be taught about the
-	  details of Vinum.</para>
-      </listitem>
-    </itemizedlist>
+	  <para>When you start Vinum with the <command>gvinum
+	      start</command> command, Vinum reads the configuration
+	    database from one of the Vinum drives.  Under normal
+	    circumstances, each drive contains an identical copy of
+	    the configuration database, so it does not matter which
+	    drive is read.  After a crash, however, Vinum must
+	    determine which drive was updated most recently and read
+	    the configuration from this drive.  It then updates the
+	    configuration if necessary from progressively older
+	    drives.</para>
+	</sect3>
+      </sect2>
+    </sect1>
+
+    <sect1 id="vinum-root">
+      <title>Using Vinum for the Root Filesystem</title>
+
+      <para>For a machine that has fully-mirrored filesystems using
+	Vinum, it is desirable to also mirror the root filesystem.
+	Setting up such a configuration is less trivial than mirroring
+	an arbitrary filesystem because:</para>
 
-    <para>In the following sections, the term <quote>root
-      volume</quote> is generally used to describe the Vinum volume
-      that contains the root filesystem.  It is probably a good idea
-      to use the name <literal>"root"</literal> for this volume, but
-      this is not technically required in any way.  All command
-      examples in the following sections assume this name though.</para>
+      <itemizedlist>
+	<listitem>
+	  <para>The root filesystem must be available very early
+	    during the boot process, so the Vinum infrastructure must
+	    already be available at this time.</para>
+	</listitem>
+	<listitem>
+	  <para>The volume containing the root filesystem also
+	    contains the system bootstrap and the kernel, which must
+	    be read using the host system's native utilities (e. g.
+	    the BIOS on PC-class machines) which often cannot be
+	    taught about the details of Vinum.</para>
+	</listitem>
+      </itemizedlist>
 
-    <sect2>
-      <title>Starting up Vinum Early Enough for the Root
-	Filesystem</title>
+      <para>In the following sections, the term <quote>root
+	  volume</quote> is generally used to describe the Vinum
+	volume that contains the root filesystem.  It is probably a
+	good idea to use the name <literal>"root"</literal> for this
+	volume, but this is not technically required in any way.  All
+	command examples in the following sections assume this name
+	though.</para>
+
+      <sect2>
+	<title>Starting up Vinum Early Enough for the Root
+	  Filesystem</title>
 
-      <para>There are several measures to take for this to
-	happen:</para>
+	<para>There are several measures to take for this to
+	  happen:</para>
 
-      <itemizedlist>
-	<listitem>
-	  <para>Vinum must be available in the kernel at boot-time.
-	    Thus, the method to start Vinum automatically described in
-	    <xref linkend="vinum-rc-startup"/> is not applicable to
-	    accomplish this task, and the
-	    <literal>start_vinum</literal> parameter must actually
-	    <emphasis>not</emphasis> be set when the following setup
-	    is being arranged.	The first option would be to compile
-	    Vinum statically into the kernel, so it is available all
-	    the time, but this is usually not desirable.  There is
-	    another option as well, to have
-	    <filename>/boot/loader</filename> (<xref
-	    linkend="boot-loader"/>) load the vinum kernel module
-	    early, before starting the kernel.	This can be
-	    accomplished by putting the line:</para>
+	<itemizedlist>
+	  <listitem>
+	    <para>Vinum must be available in the kernel at boot-time.
+	      Thus, the method to start Vinum automatically described
+	      in <xref linkend="vinum-rc-startup"/> is not applicable
+	      to accomplish this task, and the
+	      <literal>start_vinum</literal> parameter must actually
+	      <emphasis>not</emphasis> be set when the following setup
+	      is being arranged.  The first option would be to compile
+	      Vinum statically into the kernel, so it is available all
+	      the time, but this is usually not desirable.  There is
+	      another option as well, to have
+	      <filename>/boot/loader</filename> (<xref
+		linkend="boot-loader"/>) load the vinum kernel module
+	      early, before starting the kernel.  This can be
+	      accomplished by putting the line:</para>
 
-	  <programlisting>geom_vinum_load="YES"</programlisting>
+	    <programlisting>geom_vinum_load="YES"</programlisting>
 
 	  <para>into the file
 	    <filename>/boot/loader.conf</filename>.</para>
 	</listitem>
 
 	<listitem>
-	  <para>For <emphasis>Gvinum</emphasis>, all startup
-	  is done automatically once the kernel module has been
-	  loaded, so the procedure described above is all that is
-	  needed.</para>
+	  <para>For <emphasis>Gvinum</emphasis>, all startup is done
+	    automatically once the kernel module has been loaded, so
+	    the procedure described above is all that is
+	    needed.</para>
 	</listitem>
       </itemizedlist>
     </sect2>
@@ -1012,7 +1015,7 @@ sd name bigraid.p0.s4 drive e plex bigra
 	<filename>/boot/loader</filename>) from the UFS filesystem, it
 	is all but impossible to also teach it about internal Vinum
 	structures so it could parse the Vinum configuration data, and
-	figure out the elements of a boot volume itself.	Thus,
+	figure out the elements of a boot volume itself.  Thus,
 	some tricks are necessary to provide the bootstrap code with
 	the illusion of a standard <literal>"a"</literal> partition
 	that contains the root filesystem.</para>
@@ -1036,19 +1039,19 @@ sd name bigraid.p0.s4 drive e plex bigra
 	filesystem.  The bootstrap process will, however, only use one
 	of these replicas for finding the bootstrap and all the files,
 	until the kernel eventually mounts the root filesystem
-	itself.	 Each single subdisk within these plexes will then
+	itself.  Each single subdisk within these plexes will then
 	need its own <literal>"a"</literal> partition illusion, for
 	the respective device to become bootable.  It is not strictly
 	needed that each of these faked <literal>"a"</literal>
 	partitions is located at the same offset within its device,
 	compared with other devices containing plexes of the root
-	volume.	 However, it is probably a good idea to create the
+	volume.  However, it is probably a good idea to create the
 	Vinum volumes that way so the resulting mirrored devices are
 	symmetric, to avoid confusion.</para>
 
-      <para>In order to set up these <literal>"a"</literal> partitions,
-	for each device containing part of the root volume, the
-	following needs to be done:</para>
+      <para>In order to set up these <literal>"a"</literal>
+	partitions, for each device containing part of the root
+	volume, the following needs to be done:</para>
 
       <procedure>
 	<step>
@@ -1094,9 +1097,9 @@ sd name bigraid.p0.s4 drive e plex bigra
 	    <literal>"offset"</literal> value for the new
 	    <literal>"a"</literal> partition.  The
 	    <literal>"size"</literal> value for this partition can be
-	    taken verbatim from the calculation above.	The
+	    taken verbatim from the calculation above.  The
 	    <literal>"fstype"</literal> should be
-	    <literal>4.2BSD</literal>.	The
+	    <literal>4.2BSD</literal>.  The
 	    <literal>"fsize"</literal>, <literal>"bsize"</literal>,
 	    and <literal>"cpg"</literal> values should best be chosen
 	    to match the actual filesystem, though they are fairly
@@ -1144,8 +1147,7 @@ sd name bigraid.p0.s4 drive e plex bigra
       <para>After the Vinum root volume has been set up, the output of
 	<command>gvinum l -rv root</command> could look like:</para>
 
-	<screen>
-...
+      <screen>...
 Subdisk root.p0.s0:
 		Size:        125829120 bytes (120 MB)
 		State: up
@@ -1156,37 +1158,35 @@ Subdisk root.p1.s0:
 		Size:        125829120 bytes (120 MB)
 		State: up
 		Plex root.p1 at offset 0 (0  B)
-		Drive disk1 (/dev/da1h) at offset 135680 (132 kB)
-	</screen>
+		Drive disk1 (/dev/da1h) at offset 135680 (132 kB)</screen>
 
       <para>The values to note are <literal>135680</literal> for the
 	offset (relative to partition
-	<filename class="devicefile">/dev/da0h</filename>).  This translates to 265
-	512-byte disk blocks in <command>bsdlabel</command>'s terms.
-	Likewise, the size of this root volume is 245760 512-byte

*** DIFF OUTPUT TRUNCATED AT 1000 LINES ***


