Date:      Tue, 8 Apr 2008 07:10:36 -0700 (PDT)
From:      Federico Galvez-Durand <federicogalvezdurand@yahoo.com>
To:        FreeBSD-gnats-submit@FreeBSD.org, freebsd-doc@FreeBSD.org
Subject:   Re: docs/122052: minor update on handbook section 20.7.1
Message-ID:  <539168.26150.qm@web58006.mail.re3.yahoo.com>
In-Reply-To: <200803241540.m2OFe3Qq016618@freefall.freebsd.org>


Well, now the minor update is not that minor.
Find attached a patch file.
This patch ->
deprecates:
handbook/vinum-object-naming.html
handbook/vinum-access-bottlenecks.html
handbook/vinum/vinum-concat.png		
handbook/vinum/vinum-raid10-vol.png	
handbook/vinum/vinum-simple-vol.png	
handbook/vinum/vinum-striped.png
handbook/vinum/vinum-mirrored-vol.png	
handbook/vinum/vinum-raid5-org.png	
handbook/vinum/vinum-striped-vol.png
creates:
handbook/vinum-disk-performance-issues.html
handbook.new/vinum/vinum-concat.png	
handbook.new/vinum/vinum-raid01.png	
handbook.new/vinum/vinum-raid10.png	
handbook.new/vinum/vinum-simple.png
handbook.new/vinum/vinum-raid0.png	
handbook.new/vinum/vinum-raid1.png	
handbook.new/vinum/vinum-raid5.png
updates:
all remaining handbook/vinum-*.html 
handbook/raid.html
handbook/virtualization.html.

I think I cannot attach the new PNG files here.
Please advise how to submit them.




Content-Disposition: inline; filename="patch01.txt"

diff -r -u handbook.orig/docbook.css handbook/docbook.css
--- handbook.orig/docbook.css	2008-03-22 05:33:04.000000000 +0100
+++ handbook/docbook.css	2008-04-05 15:28:57.000000000 +0200
@@ -129,6 +129,26 @@
 	color: #000000;
 }
 
+TABLE.CLASSTABLE {
+	border-collapse: collapse;
+	border-top: 2px solid gray;
+	border-bottom: 2px solid gray;
+}
+
+TABLE.CLASSTABLE TH {
+	border-top: 	2px solid gray;
+	border-right: 	1px solid gray;
+	border-left: 	1px solid gray;
+	border-bottom: 	2px solid gray;
+}
+
+TABLE.CLASSTABLE TD {
+	border-top: 	1px solid gray;
+	border-right: 	1px solid gray;
+	border-left: 	1px solid gray;
+	border-bottom: 	1px solid gray;
+}
+
 .FILENAME {
 	color: #007a00;
 }
diff -r -u handbook.orig/vinum-config.html handbook/vinum-config.html
--- handbook.orig/vinum-config.html	2008-03-22 05:43:54.000000000 +0100
+++ handbook/vinum-config.html	2008-04-08 14:56:10.000000000 +0200
@@ -7,8 +7,8 @@
 <meta name="GENERATOR" content="Modular DocBook HTML Stylesheet Version 1.79" />
 <link rel="HOME" title="FreeBSD Handbook" href="index.html" />
 <link rel="UP" title="The Vinum Volume Manager" href="vinum-vinum.html" />
-<link rel="PREVIOUS" title="Object Naming" href="vinum-object-naming.html" />
-<link rel="NEXT" title="Using Vinum for the Root Filesystem" href="vinum-root.html" />
+<link rel="PREVIOUS" title="Vinum Objects" href="vinum-objects.html" />
+<link rel="NEXT" title="Using Vinum for the Root File System" href="vinum-root.html" />
 <link rel="STYLESHEET" type="text/css" href="docbook.css" />
 <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
 </head>
@@ -22,7 +22,7 @@
 </tr>
 
 <tr>
-<td width="10%" align="left" valign="bottom"><a href="vinum-object-naming.html"
+<td width="10%" align="left" valign="bottom"><a href="vinum-objects.html"
 accesskey="P">Prev</a></td>
 <td width="80%" align="center" valign="bottom">Chapter 20 The Vinum Volume Manager</td>
 <td width="10%" align="right" valign="bottom"><a href="vinum-root.html"
@@ -34,24 +34,281 @@
 </div>
 
 <div class="SECT1">
-<h1 class="SECT1"><a id="VINUM-CONFIG" name="VINUM-CONFIG">20.8 Configuring
-Vinum</a></h1>
+<h1 class="SECT1"><a id="VINUM-CONFIG" name="VINUM-CONFIG">20.6 Configuring Vinum</a></h1>
 
 <p>The <tt class="FILENAME">GENERIC</tt> kernel does not contain Vinum. It is possible to
 build a special kernel which includes Vinum, but this is not recommended. The standard
 way to start Vinum is as a kernel module (<acronym class="ACRONYM">kld</acronym>). You do
-not even need to use <a
-href="http://www.FreeBSD.org/cgi/man.cgi?query=kldload&amp;sektion=8"><span
-class="CITEREFENTRY"><span class="REFENTRYTITLE">kldload</span>(8)</span></a> for Vinum:
+not even need to use 
+<a href="http://www.FreeBSD.org/cgi/man.cgi?query=kldload&amp;sektion=8">
+<span class="CITEREFENTRY"><span class="REFENTRYTITLE">kldload</span>(8)</span></a> 
+for Vinum:
 when you start <a
-href="http://www.FreeBSD.org/cgi/man.cgi?query=gvinum&amp;sektion=8"><span
-class="CITEREFENTRY"><span class="REFENTRYTITLE">gvinum</span>(8)</span></a>, it checks
+href="http://www.FreeBSD.org/cgi/man.cgi?query=gvinum&amp;sektion=8">
+<span class="CITEREFENTRY"><span class="REFENTRYTITLE">gvinum</span>(8)</span></a>, 
+it checks
 whether the module has been loaded, and if it is not, it loads it automatically.</p>
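+
+<p>If you prefer to check or load the kernel module by hand anyway, the standard module tools can be used; the module is named <tt class="FILENAME">geom_vinum.ko</tt>. For example (the exact <tt class="LITERAL">kldstat</tt> output depends on your system):</p>
+
+<pre class="PROGRAMLISTING">
+#kldload geom_vinum
+#kldstat | grep geom_vinum
+</pre>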
 
 <div class="SECT2">
-<h2 class="SECT2"><a id="AEN27361" name="AEN27361">20.8.1 Startup</a></h2>
+<h2 class="SECT2"><a id="AEN27361" name="AEN27361">20.6.1 Preparing a Disk</a></h2>
+<p>Vinum needs a 
+<a href="http://www.FreeBSD.org/cgi/man.cgi?query=bsdlabel&amp;sektion=8">
+<span class="CITEREFENTRY"><span class="REFENTRYTITLE">bsdlabel</span>(8)</span></a>
+on your disk.</p>
+<p>Assuming 
+<tt class="FILENAME">/dev/ad1</tt>
+is the device in use and your Vinum Volume will use the whole disk, it is advisable to initialize the device with a single Slice, using 
+<a href="http://www.FreeBSD.org/cgi/man.cgi?query=fdisk&amp;sektion=8">
+<span class="CITEREFENTRY"><span class="REFENTRYTITLE">fdisk</span>(8)</span></a>. The following command creates 
+a single Slice
+<tt class="DEVICENAME">s1</tt>
+over the whole disk
+<tt class="FILENAME">/dev/ad1</tt>.
+</p>
+<pre class="PROGRAMLISTING">
+#fdisk -vI ad1
+</pre>
+
+<p> After creating 
+the disk Slice, it can be labeled:
+</p>
+<pre class="PROGRAMLISTING">
+#bsdlabel -w ad1s1
+</pre>
+
+<p>The bsdlabel utility cannot write an adequate label for Vinum automatically; you need to edit the standard label: </p>
+<pre class="PROGRAMLISTING">
+#bsdlabel -e ad1s1
+</pre>
+<p>This will show something similar to:</p>
+<pre class="PROGRAMLISTING">
+# /dev/ad1s1:
+8 partitions:
+#        size   offset    fstype   [fsize bsize bps/cpg]
+  a:  1048241       16    unused        0     0     0                    
+  c:  1048257        0    unused        0     0         # "raw" part, don't edit
+</pre>
+
+<p>You need to edit the partitions. Since this disk is not bootable (it could be, see 
+<a href="vinum-root.html#VINUM-ROOT">Section 20.7</a>), you can rename partition 
+<tt class="DEVICENAME">a</tt>
+to partition
+<tt class="DEVICENAME">h</tt>
+and replace 
+<tt class="LITERAL">fstype</tt>
+<tt class="LITERAL">unused</tt>
+with 
+<tt class="LITERAL">vinum</tt>.
+The fields <tt class="LITERAL"> fsize bsize bps/cpg</tt> have no meaning for 
+<tt class="LITERAL">fstype vinum</tt>.
+</p>
+<pre class="PROGRAMLISTING">
+# /dev/ad1s1:
+8 partitions:
+#        size   offset    fstype   [fsize bsize bps/cpg]
+  c:  1048257        0    unused        0     0         # "raw" part, don't edit
+  h:  1048241       16     vinum                    
+</pre>
+</div>
+
+<div class="SECT2">
+<h2 class="SECT2"><a id="AEN27371" name="AEN27371">20.6.2 Configuration File</a></h2>
+<p>This file can be placed anywhere on your system. Once 
+<a href="http://www.FreeBSD.org/cgi/man.cgi?query=gvinum&amp;sektion=8">
+<span class="CITEREFENTRY"><span class="REFENTRYTITLE">gvinum</span>(8)</span></a>
+ has executed the instructions in this file, it will not use it anymore; everything is stored in Vinum's own configuration database. Still, keep this file in a safe place: you may need it in case of a Volume crash. 
+</p>
+<p>The following configuration creates a Volume named 
+<tt class="FILENAME">Simple</tt>
+containing a drive named 
+<tt class="FILENAME">diskB</tt>
+based on the device
+<tt class="FILENAME">/dev/ad1s1h</tt>. The 
+<tt class="LITERAL">plex</tt>
+organization is 
+<tt class="LITERAL">concat</tt> 
+and contains only one 
+<tt class="LITERAL">subdisk (sd)</tt>.
+</p>
+<pre class="PROGRAMLISTING">
+drive diskB device /dev/ad1s1h
+volume Simple 
+	plex org concat
+	sd drive diskB
+</pre>
+</div>
+
+<div class="SECT2">
+<h2 class="SECT2"><a id="AEN27381" name="AEN27381">20.6.3 Creating a Volume </a></h2>
+
+<p> Once you have prepared your disk and created a configuration file, you can use 
+<a href="http://www.FreeBSD.org/cgi/man.cgi?query=gvinum&amp;sektion=8">
+<span class="CITEREFENTRY"><span class="REFENTRYTITLE">gvinum</span>(8)</span></a>
+to create a Volume.
+</p>
+<pre class="PROGRAMLISTING">
+#gvinum create Simple
+1 drive:
+D diskB                 State: up	/dev/ad1s1h	A: 0/511 MB (0%)
+
+1 volume:
+V Simple                State: up	Plexes:       1	Size:        511 MB
+
+1 plex:
+P Simple.p0           C State: up	Subdisks:     1	Size:        511 MB
+
+1 subdisk:
+S Simple.p0.s0          State: up	D: diskB        Size:        511 MB
+</pre>
+
+
+<p> At this point, a new entry has been created for your Volume:</p>
+
+<pre class="PROGRAMLISTING">
+#ls -l /dev/gvinum
+crw-r-----  1 root  operator    0,  89 Mar 26 17:17 /dev/gvinum/Simple
+
+/dev/gvinum/plex:
+total 0
+crw-r-----  1 root  operator    0,  86 Mar 26 17:17 Simple.p0
+
+/dev/gvinum/sd:
+total 0
+crw-r-----  1 root  operator    0,  83 Mar 26 17:17 Simple.p0.s0
+</pre>
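+
+<p>If you ever lose the original configuration file, a configuration-file style description of what is stored in the Vinum database can be printed with the 
+<tt class="LITERAL">printconfig</tt> command of 
+<a href="http://www.FreeBSD.org/cgi/man.cgi?query=gvinum&amp;sektion=8">
+<span class="CITEREFENTRY"><span class="REFENTRYTITLE">gvinum</span>(8)</span></a> 
+(the output format may differ slightly from your original file):</p>
+
+<pre class="PROGRAMLISTING">
+#gvinum printconfig
+</pre>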
+
+</div>
+
+<div class="SECT2">
+<h2 class="SECT2"><a id="AEN27382" name="AEN27382">20.6.4 Starting a Volume </a></h2>
+
+<p>After creating a Volume, you need to start it so that the system can access its objects:</p>
+<pre class="PROGRAMLISTING">
+#gvinum start Simple
+</pre>
+
+<p>The start process can be slow, depending on the size of the subdisk or subdisks contained in your plex. Enter gvinum and use the 
+<tt class="LITERAL">l</tt> 
+command to see whether the state of all your subdisks is already 
+<tt class="LITERAL">up</tt>.
+</p>
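+
+<p>For the Simple Volume created above, the listing printed by <tt class="LITERAL">l</tt> looks like the output already shown by the <tt class="LITERAL">create</tt> command:</p>
+
+<pre class="PROGRAMLISTING">
+#gvinum
+gvinum -&gt; l
+1 drive:
+D diskB                 State: up	/dev/ad1s1h	A: 0/511 MB (0%)
+
+1 volume:
+V Simple                State: up	Plexes:       1	Size:        511 MB
+
+1 plex:
+P Simple.p0           C State: up	Subdisks:     1	Size:        511 MB
+
+1 subdisk:
+S Simple.p0.s0          State: up	D: diskB        Size:        511 MB
+</pre>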
+<p>gvinum prints a message for each subdisk whose start process has completed.</p>
+</div>
+
+<div class="SECT2">
+<h2 class="SECT2"><a id="AEN27381" name="AEN27381">20.6.5 Creating a File System </a></h2>
+
+<p>After having created a Volume, you need to create a file system on it using
+<a href="http://www.FreeBSD.org/cgi/man.cgi?query=newfs&amp;sektion=8">
+<span class="CITEREFENTRY"><span class="REFENTRYTITLE">newfs</span>(8)</span></a>:
+</p>
 
-<p>Vinum stores configuration information on the disk slices in essentially the same form
+<pre class="PROGRAMLISTING">
+#newfs /dev/gvinum/Simple
+</pre>
+<p> If no errors are reported, you should check the file system: </p>
+<pre class="PROGRAMLISTING">
+#fsck -t ufs /dev/gvinum/Simple
+</pre>
+<p> If no errors are reported, you can mount the file system: </p>
+<pre class="PROGRAMLISTING">
+#mount /dev/gvinum/Simple /mnt
+</pre>
+
+<p>At this point, if everything seems right, it is advisable to reboot your machine and repeat the test:</p>
+<pre class="PROGRAMLISTING">
+#fsck -t ufs /dev/gvinum/Simple
+</pre>
+<p> If no errors are reported, you can mount the file system: </p>
+<pre class="PROGRAMLISTING">
+#mount /dev/gvinum/Simple /mnt
+</pre>
+<p>If everything still looks fine, then you have succeeded in creating a Vinum Volume.</p>
+
+</div>
+
+<div class="SECT2">
+<h2 class="SECT2"><a id="AEN27391" name="AEN27391">20.6.6 Mounting a Volume Automatically  </a></h2>
+
+<p> In order to have your Volumes mounted automatically you need two things:</p>
+<ul>
+<li>
+Set 
+<tt class="LITERAL"> geom_vinum_load="YES" </tt> 
+in
+<tt class="FILENAME">/boot/loader.conf</tt>. 
+</li>
+<li>
+Add an entry in
+<tt class="FILENAME"> /etc/fstab </tt>
+for your Volume (e.g. Simple). The mountpoint in this example is the directory
+<tt class="FILENAME">/space</tt>. See 
+<a href="http://www.FreeBSD.org/cgi/man.cgi?query=fstab&amp;sektion=5">
+<span class="CITEREFENTRY"><span class="REFENTRYTITLE">fstab</span>(5)</span></a> 
+and 
+<a href="http://www.FreeBSD.org/cgi/man.cgi?query=mount&amp;sektion=8">
+<span class="CITEREFENTRY"><span class="REFENTRYTITLE">mount</span>(8)</span></a>
+for details.
+<pre class="PROGRAMLISTING">
+#
+# Device                Mountpoint  FStype  Options     Dump    Pass#
+#
+[...]
+/dev/gvinum/Simple      /space      ufs     rw          2       2
+</pre>
+</li>
+
+</ul>
+<p>Your Volume will be checked by 
+<a href="http://www.FreeBSD.org/cgi/man.cgi?query=fsck&amp;sektion=8">
+<span class="CITEREFENTRY"><span class="REFENTRYTITLE">fsck</span>(8)</span></a>
+at boot time if you specify a non-zero value in the 
+<tt class="LITERAL">Pass#</tt> 
+field; the 
+<tt class="LITERAL">Dump</tt> 
+field is used by 
+<a href="http://www.FreeBSD.org/cgi/man.cgi?query=dump&amp;sektion=8">
+<span class="CITEREFENTRY"><span class="REFENTRYTITLE">dump</span>(8)</span></a>.</p>
+
+</div>
+
+<div class="SECT2">
+<h2 class="SECT2"><a id="AEN27392" name="AEN27392">20.6.7 Troubleshooting </a></h2>
+
+<div class="SECT3">
+<h3 class="SECT3"><a id="AEN27392" name="AEN27392">20.6.7.1 Creating a File System </a></h3>
+<p>The process of starting a Volume may take a long time; you must be sure this process has completed before creating a file system. At the time of this writing,
+<a href="http://www.FreeBSD.org/cgi/man.cgi?query=newfs&amp;sektion=8">
+<span class="CITEREFENTRY"><span class="REFENTRYTITLE">newfs</span>(8)</span></a>
+will not complain if you create a file system while the start process is still in progress. Even running 
+<a href="http://www.FreeBSD.org/cgi/man.cgi?query=fsck&amp;sektion=8">
+<span class="CITEREFENTRY"><span class="REFENTRYTITLE">fsck</span>(8)</span></a>
+on the new file system may tell you everything is OK. But most probably you will not be able to use the Volume later on, after rebooting your machine.
+</p>
+
+<p>If your Volume does not pass this check, you can try to repeat the process:</p>
+<pre class="PROGRAMLISTING">
+#gvinum start Simple
+#newfs /dev/gvinum/Simple
+#fsck -t ufs /dev/gvinum/Simple
+</pre>
+<p>If everything looks fine, then reboot your machine.</p>
+<pre class="PROGRAMLISTING">
+#shutdown -r now
+</pre>
+<p> Then execute again:</p>
+<pre class="PROGRAMLISTING">
+#fsck -t ufs /dev/gvinum/Simple
+#mount /dev/gvinum/Simple /mnt
+</pre>
+<p>It should now work without problems.</p>
+
+</div>
+
+<div class="SECT2">
+<h2 class="SECT2"><a id="AEN27392" name="AEN27392">20.6.8 Miscellaneous Notes </a></h2>
+
+<p>Vinum stores configuration information on disk slices in essentially the same form
 as in the configuration files. When reading from the configuration database, Vinum
 recognizes a number of keywords which are not allowed in the configuration files. For
 example, a disk configuration might contain the following text:</p>
@@ -86,18 +343,11 @@
 to identify drives correctly even if they have been assigned different <span
 class="TRADEMARK">UNIX</span>&reg; drive IDs.</p>
 
-<div class="SECT3">
-<h3 class="SECT3"><a id="VINUM-RC-STARTUP" name="VINUM-RC-STARTUP">20.8.1.1 Automatic
-Startup</a></h3>
-
-<div class="NOTE">
-<blockquote class="NOTE">
-<p><b>Note:</b> This information only relates to the historic Vinum implementation. <span
-class="emphasis"><i class="EMPHASIS">Gvinum</i></span> always features an automatic
-startup once the kernel module is loaded.</p>
-</blockquote>
 </div>
 
+<div class="SECT2">
+<h2 class="SECT2"><a id="AEN27393" name="AEN27393">20.6.9 Differences for FreeBSD 4.X</a></h2>
+
 <p>In order to start Vinum automatically when you boot the system, ensure that you have
 the following line in your <tt class="FILENAME">/etc/rc.conf</tt>:</p>
 
@@ -119,8 +369,7 @@
 does not matter which drive is read. After a crash, however, Vinum must determine which
 drive was updated most recently and read the configuration from this drive. It then
 updates the configuration if necessary from progressively older drives.</p>
-</div>
-</div>
+
 </div>
 
 <div class="NAVFOOTER">
@@ -128,7 +377,7 @@
 <table summary="Footer navigation table" width="100%" border="0" cellpadding="0"
 cellspacing="0">
 <tr>
-<td width="33%" align="left" valign="top"><a href="vinum-object-naming.html"
+<td width="33%" align="left" valign="top"><a href="vinum-objects.html"
 accesskey="P">Prev</a></td>
 <td width="34%" align="center" valign="top"><a href="index.html"
 accesskey="H">Home</a></td>
@@ -137,10 +386,10 @@
 </tr>
 
 <tr>
-<td width="33%" align="left" valign="top">Object Naming</td>
+<td width="33%" align="left" valign="top">Vinum Objects</td>
 <td width="34%" align="center" valign="top"><a href="vinum-vinum.html"
 accesskey="U">Up</a></td>
-<td width="33%" align="right" valign="top">Using Vinum for the Root Filesystem</td>
+<td width="33%" align="right" valign="top">Using Vinum for the Root File System</td>
 </tr>
 </table>
 </div>
diff -r -u handbook.orig/vinum-data-integrity.html handbook/vinum-data-integrity.html
--- handbook.orig/vinum-data-integrity.html	2008-03-22 05:43:54.000000000 +0100
+++ handbook/vinum-data-integrity.html	2008-04-08 13:00:38.000000000 +0200
@@ -7,7 +7,7 @@
 <meta name="GENERATOR" content="Modular DocBook HTML Stylesheet Version 1.79" />
 <link rel="HOME" title="FreeBSD Handbook" href="index.html" />
 <link rel="UP" title="The Vinum Volume Manager" href="vinum-vinum.html" />
-<link rel="PREVIOUS" title="Access Bottlenecks" href="vinum-access-bottlenecks.html" />
+<link rel="PREVIOUS" title="Disk Performance Issues" href="vinum-disk-performance-issues.html" />
 <link rel="NEXT" title="Vinum Objects" href="vinum-objects.html" />
 <link rel="STYLESHEET" type="text/css" href="docbook.css" />
 <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
@@ -22,7 +22,7 @@
 </tr>
 
 <tr>
-<td width="10%" align="left" valign="bottom"><a href="vinum-access-bottlenecks.html"
+<td width="10%" align="left" valign="bottom"><a href="vinum-disk-performance-issues.html"
 accesskey="P">Prev</a></td>
 <td width="80%" align="center" valign="bottom">Chapter 20 The Vinum Volume Manager</td>
 <td width="10%" align="right" valign="bottom"><a href="vinum-objects.html"
@@ -34,73 +34,142 @@
 </div>
 
 <div class="SECT1">
-<h1 class="SECT1"><a id="VINUM-DATA-INTEGRITY" name="VINUM-DATA-INTEGRITY">20.4 Data
-Integrity</a></h1>
+<h1 class="SECT1"><a id="VINUM-DATA-INTEGRITY" name="VINUM-DATA-INTEGRITY">20.4 Data Integrity</a></h1>
 
-<p>The final problem with current disks is that they are unreliable. Although disk drive
-reliability has increased tremendously over the last few years, they are still the most
-likely core component of a server to fail. When they do, the results can be catastrophic:
-replacing a failed disk drive and restoring data to it can take days.</p>
-
-<p>The traditional way to approach this problem has been <span class="emphasis"><i
-class="EMPHASIS">mirroring</i></span>, keeping two copies of the data on different
-physical hardware. Since the advent of the <acronym class="ACRONYM">RAID</acronym>
-levels, this technique has also been called <acronym class="ACRONYM">RAID level
-1</acronym> or <acronym class="ACRONYM">RAID-1</acronym>. Any write to the volume writes
-to both locations; a read can be satisfied from either, so if one drive fails, the data
-is still available on the other drive.</p>
+<p>Although disk drive reliability has increased tremendously over the last few years, disk drives are still the core component of a server most likely to fail. When they do, the results can be catastrophic: replacing a failed disk drive and restoring data to it can take a long time.</p>
 
-<p>Mirroring has two problems:</p>
+<p>The traditional way to approach this problem has been <span class="emphasis"><i class="EMPHASIS">mirroring</i></span>, keeping two copies of the data on different physical hardware. Since the advent of the <acronym class="ACRONYM">RAID</acronym> levels, this technique has also been called <acronym class="ACRONYM">RAID level 1</acronym> or <acronym class="ACRONYM">RAID-1</acronym>. </p>
+
+<p>An alternative solution is using an 
+<span class="emphasis"><i class="EMPHASIS">error-correcting code</i></span>. 
+This strategy is implemented in <acronym class="ACRONYM">RAID</acronym> levels 2, 3, 4, 5 and 6. Of these, <acronym class="ACRONYM">RAID-5</acronym> is the most interesting: for every stripe a simple 
+<span class="emphasis"><i class="EMPHASIS">parity check code</i></span> is generated over its data blocks and stored as part of the stripe. 
+For arrays with a large number of disks, RAID-5 might not provide enough protection; in that case more complex error-correcting codes (e.g. Reed-Solomon) may provide better results. 
+</p>
+<p>RAID levels can be nested to create other RAID configurations with improved resilience. Of these, RAID-0+1 and RAID-1+0 are explained here. Under certain conditions, these arrays can keep working in degraded mode with up to N/2 broken disks; however, as few as two broken disks can stop the array if they fail in the wrong positions. In both cases, a single failed disk is always tolerated.</p>
+<p>
+Therefore, when reasoning about a failure of a RAID-0+1 or RAID-1+0 array, you are considering either the probability of two disks failing at the same time, or of the first broken disk not being replaced before a second one fails. On top of that, the second disk has to fail in a very specific position inside the array. 
+</p> 
+<p>
+In modern storage facilities, mission-critical arrays are implemented with hot-plug technology, allowing a broken disk to be replaced without stopping the array. The mathematics of a second disk failing before the first broken one has been replaced is clear enough; the practical likelihood of such an event, however, has to be estimated, and it depends mainly on replacement policies and spare-part management, which are beyond the scope of this discussion. </p>
+<p>Therefore, a more interesting discussion of RAID-0+1 and RAID-1+0 reliability should be based on the Mean Time Between Failures (MTBF) of the devices in use and on other variables provided by the disk drive manufacturer and the storage facility administration.
+</p>
+<p>For the sake of simplicity, all disks (N) in a RAID are assumed to have the same capacity (CAP) and R/W characteristics; this is not required in general.</p>
+<p>In the figures, data stored in a RAID is represented by (X;Y;Z). Data striped across an array of disks is represented by (X0,X1,X2...; Y0,Y1,Y2...; Z0,Z1,Z2...).  
+</p>
+
+<div class="SECT2">
+<h2 class="SECT2"><a id="VINUM-DATA-INTEGRITY-RAID1" name="VINUM-DATA-INTEGRITY-RAID1">20.4.1 RAID-1: Mirror</a></h2>
+<p>In a mirrored array, any write to the volume writes to both disks; a read can be satisfied from either disk, so if one fails, the data is still available on the other one.</p>
+
+<div class="FIGURE"><a id="VINUM-RAID1" name="VINUM-RAID1"></a>
+<p><img  src="vinum/vinum-raid1.png" /></p>
+<p><b>Figure 20-3. RAID-1 Organization</b></p>
+</div>
 
 <ul>
 <li>
-<p>The price. It requires twice as much disk storage as a non-redundant solution.</p>
+<p>The total storage capacity is CAP*N/2.</p>
 </li>
 
 <li>
-<p>The performance impact. Writes must be performed to both drives, so they take up twice
-the bandwidth of a non-mirrored volume. Reads do not suffer from a performance penalty:
-it even looks as if they are faster.</p>
+<p>Write performance is impacted because all data must be written to both drives, taking up twice the bandwidth of a non-mirrored volume. Reads do not suffer a performance penalty.
+</p>
 </li>
 </ul>
 
-<p>An alternative solution is <span class="emphasis"><i
-class="EMPHASIS">parity</i></span>, implemented in the <acronym
-class="ACRONYM">RAID</acronym> levels 2, 3, 4 and 5. Of these, <acronym
-class="ACRONYM">RAID-5</acronym> is the most interesting. As implemented in Vinum, it is
-a variant on a striped organization which dedicates one block of each stripe to parity of
-the other blocks. As implemented by Vinum, a <acronym class="ACRONYM">RAID-5</acronym>
-plex is similar to a striped plex, except that it implements <acronym
-class="ACRONYM">RAID-5</acronym> by including a parity block in each stripe. As required
-by <acronym class="ACRONYM">RAID-5</acronym>, the location of this parity block changes
-from one stripe to the next. The numbers in the data blocks indicate the relative block
-numbers.</p>
+</div>
+
+<div class="SECT2">
+<h2 class="SECT2"><a id="VINUM-DATA-INTEGRITY-RAID5" name="VINUM-DATA-INTEGRITY-RAID5">20.4.2 RAID-5</a></h2>
 
-<p></p>
+<p>As implemented in Vinum, RAID-5 is a variant of the striped plex organization which dedicates one block of each stripe to the parity of the other blocks (Px,Py,Pz). 
+As required by <acronym class="ACRONYM">RAID-5</acronym>, the location of this parity block changes from one stripe to the next. The numbers in the data blocks indicate the relative block numbers (X0,X1,Px; Y0,Py,Y1; Pz,Z0,Z1;...).</p>
 
 <div class="FIGURE"><a id="VINUM-RAID5-ORG" name="VINUM-RAID5-ORG"></a>
-<p><b>Figure 20-3. RAID-5 Organization</b></p>
+<p><img  src="vinum/vinum-raid5.png" /></p>
+<p><b>Figure 20-4. RAID-5 Organization</b></p>
+</div>
+
+<ul>
+<li><p> 
+The total capacity of the array is equal to (N-1)*CAP. 
+</p></li>
+<li><p> 
+At least 3 disks are necessary. 
+</p></li>
+<li><p>
+Read access is similar to that of striped organizations, but write access is significantly slower: in order to update (write) one striped block, you need to read the other blocks of the stripe and recompute the parity block before writing the new block and the new parity. This effect can be mitigated on systems with a large R/W cache, where the other blocks do not need to be read again to compute the new parity. A short illustration of the parity mechanism follows this list. 
+</p></li>
+
+<li><p>
+If one drive fails, the array can continue to operate in degraded mode: a read from one of the remaining accessible drives continues normally, but a read from the failed drive is recalculated from the corresponding block on all the remaining drives.
+</p></li>
+</ul>
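+
+<p>As an illustration of the parity mechanism (a simplified sketch with arbitrary one-byte values; real RAID-5 works on whole blocks, not single bytes), the parity is the bitwise XOR of the data blocks of a stripe, so any one missing block can be rebuilt by XORing the remaining blocks with the parity:</p>
+
+<pre class="PROGRAMLISTING">
+#X0=0xA5; X1=0x3C
+#printf 'Px = 0x%02X\n' $(( X0 ^ X1 ))
+Px = 0x99
+#printf 'X0 = 0x%02X\n' $(( 0x99 ^ X1 ))
+X0 = 0xA5
+</pre>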
 
-<p><img src="vinum/vinum-raid5-org.png" /></p>
 </div>
 
-<br />
-<br />
-<p>Compared to mirroring, <acronym class="ACRONYM">RAID-5</acronym> has the advantage of
-requiring significantly less storage space. Read access is similar to that of striped
-organizations, but write access is significantly slower, approximately 25% of the read
-performance. If one drive fails, the array can continue to operate in degraded mode: a
-read from one of the remaining accessible drives continues normally, but a read from the
-failed drive is recalculated from the corresponding block from all the remaining
-drives.</p>
+
+<div class="SECT2">
+<h2 class="SECT2"><a id="VINUM-DATA-INTEGRITY-RAID01" name="VINUM-DATA-INTEGRITY-RAID01">20.4.3 RAID-0+1</a></h2>
+
+<p>
+In Vinum, a RAID-0+1 volume can be constructed in a straightforward way: a volume containing two striped plexes mirrors one stripe set onto the other. In this array, resilience is improved and more than one disk can fail without compromising functionality. Performance is degraded when the array is forced to work without the full set of disks. 
+</p>
+<div class="FIGURE"><a id="VINUM-RAID01" name="VINUM-RAID01"></a>
+<p><img  src="vinum/vinum-raid01.png" /></p>
+<p><b>Figure 20-5. RAID-0+1 Organization</b></p>
 </div>
 
+<ul>
+<li><p>
+The total storage capacity is CAP*N/2.
+</p></li>
+<li><p> 
+At least 4 disks are necessary. 
+</p></li>
+<li><p>
+This array stops working when at least one disk fails in each of the striped plexes (e.g. DiskB and DiskF), but it can keep working in degraded mode with up to N/2 disks down as long as they all belong to the same plex (e.g. DiskE, DiskF and DiskG).
+
+</p></li>
+</ul>
+
+</div>
+
+<div class="SECT2">
+<h2 class="SECT2"><a id="VINUM-DATA-INTEGRITY-RAID10" name="VINUM-DATA-INTEGRITY-RAID10">20.4.4 RAID-1+0</a></h2>
+
+<p>
+In Vinum, a RAID-1+0 cannot be constructed by a simple manipulation of plexes. You need to construct the mirrored volumes (e.g. m0, m1, m2...) first and then use these mirrors as the drives of a striped plex. 
+In this array, resilience is improved and more than one disk can fail without compromising the functionality. Performance is degraded when the array is forced to work without the full set of disks. 
+</p>
+
+<div class="FIGURE"><a id="VINUM-RAID10" name="VINUM-RAID10"></a>
+<p><img  src="vinum/vinum-raid10.png" /></p>
+<p><b>Figure 20-6. RAID-1+0 Organization</b></p>
+</div>
+
+<ul>
+<li><p>
+The total storage capacity is CAP*N/2.
+</p></li>
+<li><p> 
+At least 4 disks are necessary. 
+</p></li>
+<li><p>
+This array stops working when both disks of the same mirror fail (e.g. DiskB and DiskC), but it can keep working in degraded mode with up to N/2 disks down as long as no two of them belong to the same mirror (e.g. DiskB, DiskE and DiskF).
+
+</p></li>
+</ul>
+</div>
+
+
 <div class="NAVFOOTER">
 <hr align="LEFT" width="100%" />
 <table summary="Footer navigation table" width="100%" border="0" cellpadding="0"
 cellspacing="0">
 <tr>
-<td width="33%" align="left" valign="top"><a href="vinum-access-bottlenecks.html"
+<td width="33%" align="left" valign="top"><a href="vinum-disk-performance-issues.html"
 accesskey="P">Prev</a></td>
 <td width="34%" align="center" valign="top"><a href="index.html"
 accesskey="H">Home</a></td>
@@ -109,7 +178,7 @@
 </tr>
 
 <tr>
-<td width="33%" align="left" valign="top">Access Bottlenecks</td>
+<td width="33%" align="left" valign="top">Disk Performance Issues</td>
 <td width="34%" align="center" valign="top"><a href="vinum-vinum.html"
 accesskey="U">Up</a></td>
 <td width="33%" align="right" valign="top">Vinum Objects</td>
diff -r -u handbook.orig/vinum-examples.html handbook/vinum-examples.html
--- handbook.orig/vinum-examples.html	2008-03-22 05:43:54.000000000 +0100
+++ handbook/vinum-examples.html	2008-04-08 14:30:25.000000000 +0200
@@ -3,12 +3,12 @@
 <html xmlns="http://www.w3.org/1999/xhtml">
 <head>
 <meta name="generator" content="HTML Tidy, see www.w3.org" />
-<title>Some Examples</title>
+<title>Vinum Examples</title>
 <meta name="GENERATOR" content="Modular DocBook HTML Stylesheet Version 1.79" />
 <link rel="HOME" title="FreeBSD Handbook" href="index.html" />
 <link rel="UP" title="The Vinum Volume Manager" href="vinum-vinum.html" />
-<link rel="PREVIOUS" title="Vinum Objects" href="vinum-objects.html" />
-<link rel="NEXT" title="Object Naming" href="vinum-object-naming.html" />
+<link rel="PREVIOUS" title="Using Vinum for the Root File System" href="vinum-root.html" />
+<link rel="NEXT" title="Virtualization" href="virtualization.html" />
 <link rel="STYLESHEET" type="text/css" href="docbook.css" />
 <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
 </head>
@@ -22,10 +22,10 @@
 </tr>
 
 <tr>
-<td width="10%" align="left" valign="bottom"><a href="vinum-objects.html"
+<td width="10%" align="left" valign="bottom"><a href="vinum-root.html"
 accesskey="P">Prev</a></td>
 <td width="80%" align="center" valign="bottom">Chapter 20 The Vinum Volume Manager</td>
-<td width="10%" align="right" valign="bottom"><a href="vinum-object-naming.html"
+<td width="10%" align="right" valign="bottom"><a href="virtualization.html"
 accesskey="N">Next</a></td>
 </tr>
 </table>
@@ -34,30 +34,29 @@
 </div>
 
 <div class="SECT1">
-<h1 class="SECT1"><a id="VINUM-EXAMPLES" name="VINUM-EXAMPLES">20.6 Some
-Examples</a></h1>
-
-<p>Vinum maintains a <span class="emphasis"><i class="EMPHASIS">configuration
-database</i></span> which describes the objects known to an individual system. Initially,
-the user creates the configuration database from one or more configuration files with the
-aid of the <a href="http://www.FreeBSD.org/cgi/man.cgi?query=gvinum&amp;sektion=8"><span
-class="CITEREFENTRY"><span class="REFENTRYTITLE">gvinum</span>(8)</span></a> utility
-program. Vinum stores a copy of its configuration database on each disk slice (which
-Vinum calls a <span class="emphasis"><i class="EMPHASIS">device</i></span>) under its
-control. This database is updated on each state change, so that a restart accurately
-restores the state of each Vinum object.</p>
-
+<h1 class="SECT1"><a id="VINUM-EXAMPLES" name="VINUM-EXAMPLES">20.8 Vinum Examples</a></h1>
+<p>
+All disks in the following examples are identical in capacity (512 MB) and R/W characteristics. However, the size reported by 
+<a href="http://www.FreeBSD.org/cgi/man.cgi?query=gvinum&amp;sektion=8"><span class="CITEREFENTRY"><span class="REFENTRYTITLE">gvinum</span>(8)</span></a> 
+ is 511 MB. This is normal in practice: a real disk is rarely exactly 536870912 bytes, and some space (approx. 8 KB) is reserved by 
+<a href="http://www.FreeBSD.org/cgi/man.cgi?query=bsdlabel&amp;sektion=8"><span class="CITEREFENTRY"><span class="REFENTRYTITLE">bsdlabel</span>(8)</span></a>. 
+The stripe size is 256k in all examples.  
+</p>
+<p>
+For the sake of simplicity, only three stripes out of many are represented in the Figures. 
+</p>
 <div class="SECT2">
-<h2 class="SECT2"><a id="AEN27183" name="AEN27183">20.6.1 The Configuration File</a></h2>
+<h2 class="SECT2"><a id="VINUM-EXAMPLE-SIMPLE" name="VINUM-EXAMPLE-SIMPLE">20.8.1 A Simple Volume</a></h2>
 
 <p>The configuration file describes individual Vinum objects. The definition of a simple
 volume might be:</p>
 
 <pre class="PROGRAMLISTING">
-    drive a device /dev/da3h
-    volume myvol
-      plex org concat
-        sd length 512m drive a
+#cat simple.conf
+drive diskB device /dev/ad1s1h
+volume Simple 
+	plex org concat
+	sd drive diskB
 </pre>
 
 <p>This file describes four Vinum objects:</p>
@@ -67,79 +66,65 @@
 <p>The <span class="emphasis"><i class="EMPHASIS">drive</i></span> line describes a disk
 partition (<span class="emphasis"><i class="EMPHASIS">drive</i></span>) and its location
 relative to the underlying hardware. It is given the symbolic name <span
-class="emphasis"><i class="EMPHASIS">a</i></span>. This separation of the symbolic names
+class="emphasis"><i class="EMPHASIS">diskB</i></span>. This separation of the symbolic names
 from the device names allows disks to be moved from one location to another without
 confusion.</p>
 </li>
 
 <li>
 <p>The <span class="emphasis"><i class="EMPHASIS">volume</i></span> line describes a
-volume. The only required attribute is the name, in this case <span class="emphasis"><i
-class="EMPHASIS">myvol</i></span>.</p>
+Vinum Volume. The only required attribute is the name, in this case <span class="emphasis"><i
+class="EMPHASIS">Simple</i></span>.</p>
 </li>
 
 <li>
-<p>The <span class="emphasis"><i class="EMPHASIS">plex</i></span> line defines a plex.
+<p>The <span class="emphasis"><i class="EMPHASIS">plex</i></span> line defines a Vinum Plex.
 The only required parameter is the organization, in this case <span class="emphasis"><i
 class="EMPHASIS">concat</i></span>. No name is necessary: the system automatically
 generates a name from the volume name by adding the suffix <span class="emphasis"><i
-class="EMPHASIS">.p</i></span><span class="emphasis"><i class="EMPHASIS">x</i></span>,
-where <span class="emphasis"><i class="EMPHASIS">x</i></span> is the number of the plex
+class="EMPHASIS">.p</i></span><span class="emphasis"><i class="EMPHASIS">${x}</i></span>,
+where <span class="emphasis"><i class="EMPHASIS">${x}</i></span> is the number of the plex
 in the volume. Thus this plex will be called <span class="emphasis"><i
-class="EMPHASIS">myvol.p0</i></span>.</p>
+class="EMPHASIS">Simple.p0</i></span>.</p>
 </li>
 
 <li>
-<p>The <span class="emphasis"><i class="EMPHASIS">sd</i></span> line describes a subdisk.
+<p>The <span class="emphasis"><i class="EMPHASIS">sd</i></span> line describes a Vinum subdisk.
 The minimum specifications are the name of a drive on which to store it, and the length
 of the subdisk. As with plexes, no name is necessary: the system automatically assigns
 names derived from the plex name by adding the suffix <span class="emphasis"><i
-class="EMPHASIS">.s</i></span><span class="emphasis"><i class="EMPHASIS">x</i></span>,
-where <span class="emphasis"><i class="EMPHASIS">x</i></span> is the number of the
+class="EMPHASIS">.s</i></span><span class="emphasis"><i class="EMPHASIS">${x}</i></span>,
+where <span class="emphasis"><i class="EMPHASIS">${x}</i></span> is the number of the
 subdisk in the plex. Thus Vinum gives this subdisk the name <span class="emphasis"><i
-class="EMPHASIS">myvol.p0.s0</i></span>.</p>
+class="EMPHASIS">Simple.p0.s0</i></span>.</p>
 </li>
 </ul>
 
-<p>After processing this file, <a
-href="http://www.FreeBSD.org/cgi/man.cgi?query=gvinum&amp;sektion=8"><span
-class="CITEREFENTRY"><span class="REFENTRYTITLE">gvinum</span>(8)</span></a> produces the
-following output:</p>
-
-<pre class="PROGRAMLISTING">
-      <samp class="PROMPT">#</samp> gvinum -&gt; <kbd
-class="USERINPUT">create config1</kbd>
-      Configuration summary
-      Drives:         1 (4 configured)
-      Volumes:        1 (4 configured)
-      Plexes:         1 (8 configured)
-      Subdisks:       1 (16 configured)
-     
-    D a                     State: up       Device /dev/da3h        Avail: 2061/2573 MB (80%)
-    
-    V myvol                 State: up       Plexes:       1 Size:        512 MB
-    
-    P myvol.p0            C State: up       Subdisks:     1 Size:        512 MB
-    
-    S myvol.p0.s0           State: up       PO:        0  B Size:        512 MB
-</pre>
-
-<p>This output shows the brief listing format of <a
-href="http://www.FreeBSD.org/cgi/man.cgi?query=gvinum&amp;sektion=8"><span
-class="CITEREFENTRY"><span class="REFENTRYTITLE">gvinum</span>(8)</span></a>. It is
-represented graphically in <a href="vinum-examples.html#VINUM-SIMPLE-VOL">Figure
-20-4</a>.</p>
+<p>After processing this file, 
+<a href="http://www.FreeBSD.org/cgi/man.cgi?query=gvinum&amp;sektion=8"><span class="CITEREFENTRY"><span class="REFENTRYTITLE">gvinum</span>(8)</span></a> 
+produces the following output:</p>
 
-<p></p>
+<pre class="PROGRAMLISTING">
+<samp class="PROMPT">#</samp> gvinum create <kbd class="USERINPUT">simple.conf</kbd>
+1 drive:
+D diskB                 State: up	/dev/ad1s1h	A: 0/511 MB (0%)
+
+1 volume:
+V Simple                State: up	Plexes:       1	Size:        511 MB
+
+1 plex:
+P Simple.p0           C State: up	Subdisks:     1	Size:        511 MB
+
+1 subdisk:
+S Simple.p0.s0          State: up	D: diskB        Size:        511 MB
+</pre>
 
 <div class="FIGURE"><a id="VINUM-SIMPLE-VOL" name="VINUM-SIMPLE-VOL"></a>
-<p><b>Figure 20-4. A Simple Vinum Volume</b></p>
 
-<p><img src="vinum/vinum-simple-vol.png" /></p>
+<p><img src="vinum/vinum-simple.png" /></p>
+<p><b>Figure 20-4. A Simple Vinum Volume</b></p>
 </div>
 
-<br />
-<br />
 <p>This figure, and the ones which follow, represent a volume, which contains the plexes,
 which in turn contain the subdisks. In this trivial example, the volume contains one
 plex, and the plex contains one subdisk.</p>
@@ -147,181 +132,320 @@
 <p>This particular volume has no specific advantage over a conventional disk partition.
 It contains a single plex, so it is not redundant. The plex contains a single subdisk, so
 there is no difference in storage allocation from a conventional disk partition. The
-following sections illustrate various more interesting configuration methods.</p>
+following sections illustrate more interesting configuration methods.</p>
 </div>
 
+
 <div class="SECT2">
-<h2 class="SECT2"><a id="AEN27231" name="AEN27231">20.6.2 Increased Resilience:
-Mirroring</a></h2>
+<h2 class="SECT2"><a id="VINUM-EXAMPLE-RAID1" name="VINUM-EXAMPLE-RAID1">20.8.2 RAID-1: Mirrored set</a></h2>
 
-<p>The resilience of a volume can be increased by mirroring. When laying out a mirrored
-volume, it is important to ensure that the subdisks of each plex are on different drives,
+<p>The resilience of a volume can be increased by mirroring
+(<a href="vinum-data-integrity.html#VINUM-DATA-INTEGRITY-RAID1">Section 20.4.1</a>). 
+When laying out a mirrored volume, it is important to ensure that the subdisks of each plex are on different drives,
 so that a drive failure will not take down both plexes. The following configuration
 mirrors a volume:</p>
 
 <pre class="PROGRAMLISTING">
-   drive b device /dev/da4h
-    volume mirror
-      plex org concat
-        sd length 512m drive a
-      plex org concat
-        sd length 512m drive b
+#cat mirror.conf
+drive diskB device /dev/ad1s1h
+drive diskC device /dev/ad2s1h
+volume Mirror
+	plex org concat
+	sd drive diskB
+	plex org concat
+	sd drive diskC
 </pre>
 
-<p>In this example, it was not necessary to specify a definition of drive <span
-class="emphasis"><i class="EMPHASIS">a</i></span> again, since Vinum keeps track of all
-objects in its configuration database. After processing this definition, the
+<p>
+After processing this definition, the
 configuration looks like:</p>
 
 <pre class="PROGRAMLISTING">
-   Drives:         2 (4 configured)
-    Volumes:        2 (4 configured)
-    Plexes:         3 (8 configured)
-    Subdisks:       3 (16 configured)
-    
-    D a                     State: up       Device /dev/da3h        Avail: 1549/2573 MB (60%)
-    D b                     State: up       Device /dev/da4h        Avail: 2061/2573 MB (80%)
-
-    V myvol                 State: up       Plexes:       1 Size:        512 MB
-    V mirror                State: up       Plexes:       2 Size:        512 MB
-  
-    P myvol.p0            C State: up       Subdisks:     1 Size:        512 MB
-    P mirror.p0           C State: up       Subdisks:     1 Size:        512 MB
-    P mirror.p1           C State: initializing     Subdisks:     1 Size:        512 MB
-  
-    S myvol.p0.s0           State: up       PO:        0  B Size:        512 MB
-    S mirror.p0.s0          State: up       PO:        0  B Size:        512 MB
-    S mirror.p1.s0          State: empty    PO:        0  B Size:        512 MB
+#gvinum create mirror.conf
+2 drives:
+D diskC                 State: up	/dev/ad2s1h	A: 0/511 MB (0%)
+D diskB                 State: up	/dev/ad1s1h	A: 0/511 MB (0%)
+
+1 volume:
+V Mirror                State: up	Plexes:       2	Size:        511 MB
+
+2 plexes:
+P Mirror.p1           C State: up	Subdisks:     1	Size:        511 MB
+P Mirror.p0           C State: up	Subdisks:     1	Size:        511 MB
+
+2 subdisks:
+S Mirror.p1.s0          State: up	D: diskC        Size:        511 MB
+S Mirror.p0.s0          State: up	D: diskB        Size:        511 MB
 </pre>
 
-<p><a href="vinum-examples.html#VINUM-MIRRORED-VOL">Figure 20-5</a> shows the structure
-graphically.</p>
-
-<p></p>
-
 <div class="FIGURE"><a id="VINUM-MIRRORED-VOL" name="VINUM-MIRRORED-VOL"></a>
-<p><b>Figure 20-5. A Mirrored Vinum Volume</b></p>
-
-<p><img src="vinum/vinum-mirrored-vol.png" /></p>
+<p><img src="vinum/vinum-raid1.png" /></p>
+<p><b>Figure 20-5. A RAID-1 Vinum Volume</b></p>
 </div>
 
-<br />
-<br />
-<p>In this example, each plex contains the full 512&nbsp;MB of address space. As in the
-previous example, each plex contains only a single subdisk.</p>
 </div>
 
 <div class="SECT2">
-<h2 class="SECT2"><a id="AEN27245" name="AEN27245">20.6.3 Optimizing Performance</a></h2>
+<h2 class="SECT2"><a id="VINUM-EXAMPLE-RAID0" name="VINUM-EXAMPLE-RAID0">20.8.3 RAID-0: Striped set</a></h2>
 
-<p>The mirrored volume in the previous example is more resistant to failure than an
-unmirrored volume, but its performance is less: each write to the volume requires a write
-to both drives, using up a greater proportion of the total disk bandwidth. Performance
+<p>The RAID-1 volume in the previous example is more resistant to failure than a
+simple volume, but its write performance is lower, because each write to the volume requires a write
+to both drives, using a greater share of the total disk bandwidth. Performance
 considerations demand a different approach: instead of mirroring, the data is striped
-across as many disk drives as possible. The following configuration shows a volume with a
-plex striped across four disk drives:</p>
+(<a href="vinum-disk-performance-issues.html#VINUM-PERFORMANCE-ISSUES-STRIPING">Section 20.3.2</a>) 
+across as many disk drives as possible. This configuration does not provide data protection against failure.
+The following configuration shows a volume with a
+plex striped across three disk drives:</p>
+
+<pre class="PROGRAMLISTING">
+#cat striped.conf
+drive diskB device /dev/ad1s1h
+drive diskC device /dev/ad2s1h
+drive diskD device /dev/ad3s1h
+volume Stripes
+	plex org striped 256k
+	sd drive diskB
+	sd drive diskC
+	sd drive diskD
+</pre>
 
 <pre class="PROGRAMLISTING">
-   drive c device /dev/da5h
-    drive d device /dev/da6h
-    volume stripe
-    plex org striped 512k
-      sd length 128m drive a
-      sd length 128m drive b
-      sd length 128m drive c
-      sd length 128m drive d
-</pre>
-
-<p>As before, it is not necessary to define the drives which are already known to Vinum.
-After processing this definition, the configuration looks like:</p>
-
-<pre class="PROGRAMLISTING">
-   Drives:         4 (4 configured)
-    Volumes:        3 (4 configured)
-    Plexes:         4 (8 configured)
-    Subdisks:       7 (16 configured)
-  
-    D a                     State: up       Device /dev/da3h        Avail: 1421/2573 MB (55%)
-    D b                     State: up       Device /dev/da4h        Avail: 1933/2573 MB (75%)
-    D c                     State: up       Device /dev/da5h        Avail: 2445/2573 MB (95%)
-    D d                     State: up       Device /dev/da6h        Avail: 2445/2573 MB (95%)
-  
-    V myvol                 State: up       Plexes:       1 Size:        512 MB
-    V mirror                State: up       Plexes:       2 Size:        512 MB
-    V striped               State: up       Plexes:       1 Size:        512 MB
-  
-    P myvol.p0            C State: up       Subdisks:     1 Size:        512 MB
-    P mirror.p0           C State: up       Subdisks:     1 Size:        512 MB
-    P mirror.p1           C State: initializing     Subdisks:     1 Size:        512 MB
-    P striped.p1            State: up       Subdisks:     1 Size:        512 MB
-  
-    S myvol.p0.s0           State: up       PO:        0  B Size:        512 MB
-    S mirror.p0.s0          State: up       PO:        0  B Size:        512 MB
-    S mirror.p1.s0          State: empty    PO:        0  B Size:        512 MB
-    S striped.p0.s0         State: up       PO:        0  B Size:        128 MB
-    S striped.p0.s1         State: up       PO:      512 kB Size:        128 MB
-    S striped.p0.s2         State: up       PO:     1024 kB Size:        128 MB
-    S striped.p0.s3         State: up       PO:     1536 kB Size:        128 MB
+#gvinum create striped.conf
+3 drives:
+D diskD                 State: up	/dev/ad3s1h	A: 0/511 MB (0%)
+D diskC                 State: up	/dev/ad2s1h	A: 0/511 MB (0%)
+D diskB                 State: up	/dev/ad1s1h	A: 0/511 MB (0%)
+
+1 volume:
+V Stripes               State: up	Plexes:       1	Size:       1534 MB
+
+1 plex:
+P Stripes.p0          S State: up	Subdisks:     3	Size:       1534 MB
+
+3 subdisks:
+S Stripes.p0.s2         State: up	D: diskD        Size:        511 MB
+S Stripes.p0.s1         State: up	D: diskC        Size:        511 MB
+S Stripes.p0.s0         State: up	D: diskB        Size:        511 MB
 </pre>
 
 <p></p>
 
 <div class="FIGURE"><a id="VINUM-STRIPED-VOL" name="VINUM-STRIPED-VOL"></a>
-<p><b>Figure 20-6. A Striped Vinum Volume</b></p>
 
-<p><img src="vinum/vinum-striped-vol.png" /></p>
+<p><img src="vinum/vinum-raid0.png" /></p>
+<p><b>Figure 20-6. A Striped Vinum Volume</b></p>
+</div>
+
+</div>
+
+<div class="SECT2">
+<h2 class="SECT2"><a id="VINUM-EXAMPLE-RAID5" name="VINUM-EXAMPLE-RAID5">20.8.4 RAID-5: Striped set with distributed parity</a></h2>
+<p>The resilience of a striped set can be improved by adding distributed parity; this configuration is known as RAID-5
+(<a href="vinum-data-integrity.html#VINUM-DATA-INTEGRITY-RAID5">Section 20.4.2</a>). 
+ The cost of this strategy is the space consumed by the parity data (the capacity of one disk of the array) and slower write access. The minimum number of disks required is 3, and the array continues operating, in degraded mode, when one disk fails.
+</p>
+
+<pre class="PROGRAMLISTING">
+#cat raid5.conf
+drive diskB device /dev/ad1s1h
+drive diskC device /dev/ad2s1h
+drive diskD device /dev/ad3s1h
+volume Raid5 
+	plex org raid5 256k
+	sd drive diskB
+	sd drive diskC
+	sd drive diskD
+</pre>
+
+
+<pre class="PROGRAMLISTING">
+#gvinum create raid5.conf
+3 drives:
+D diskD                 State: up	/dev/ad3s1h	A: 0/511 MB (0%)
+D diskC                 State: up	/dev/ad2s1h	A: 0/511 MB (0%)
+D diskB                 State: up	/dev/ad1s1h	A: 0/511 MB (0%)
+
+1 volume:
+V Raid5                   State: up	Plexes:       1	Size:       1023 MB
+
+1 plex:
+P Raid5.p0             R5 State: up	Subdisks:     3	Size:       1023 MB
+
+3 subdisks:
+S Raid5.p0.s2             State: up	D: diskD        Size:        511 MB
+S Raid5.p0.s1             State: up	D: diskC        Size:        511 MB
+S Raid5.p0.s0             State: up	D: diskB        Size:        511 MB
+</pre>
+
+<div class="FIGURE"><a id="VINUM-RAID5-VOL" name="VINUM-RAID5-VOL"></a>
+<p><img src="vinum/vinum-raid5.png" /></p>
+<p><b>Figure 20-7. A RAID-5 Vinum Volume</b></p>
 </div>
 
-<br />
-<br />
-<p>This volume is represented in <a href="vinum-examples.html#VINUM-STRIPED-VOL">Figure
-20-6</a>. The darkness of the stripes indicates the position within the plex address
-space: the lightest stripes come first, the darkest last.</p>
 </div>
 
 <div class="SECT2">
-<h2 class="SECT2"><a id="AEN27257" name="AEN27257">20.6.4 Resilience and
-Performance</a></h2>
+<h2 class="SECT2"><a id="VINUM-EXAMPLE-RAID01" name="VINUM-EXAMPLE-RAID01">20.8.5 RAID-0+1</a></h2>
 
-<p><a id="VINUM-RESILIENCE" name="VINUM-RESILIENCE"></a>With sufficient hardware, it is
+<p>With sufficient hardware, it is
 possible to build volumes which show both increased resilience and increased performance
 compared to standard <span class="TRADEMARK">UNIX</span>&reg; partitions. A typical
-configuration file might be:</p>
+configuration file for a RAID-0+1 
+(<a href="vinum-data-integrity.html#VINUM-DATA-INTEGRITY-RAID01">Section 20.4.3</a>) 
+might be:</p>
+
+<pre class="PROGRAMLISTING">
+#cat raid01.conf
+drive diskB device /dev/da0s1h
+drive diskC device /dev/da1s1h
+drive diskD device /dev/da2s1h
+drive diskE device /dev/da3s1h
+drive diskF device /dev/da4s1h
+drive diskG device /dev/da5s1h
+volume RAID01
+	plex org striped 256k
+		sd drive diskB
+		sd drive diskC
+		sd drive diskD
+	plex org striped 256k
+		sd drive diskE
+		sd drive diskF
+		sd drive diskG
+</pre>
 
 <pre class="PROGRAMLISTING">
-   volume raid10
-      plex org striped 512k
-        sd length 102480k drive a
-        sd length 102480k drive b
-        sd length 102480k drive c
-        sd length 102480k drive d
-        sd length 102480k drive e
-      plex org striped 512k
-        sd length 102480k drive c
-        sd length 102480k drive d
-        sd length 102480k drive e
-        sd length 102480k drive a
-        sd length 102480k drive b
+# gvinum create raid01.conf
+6 drives:
+D diskG                 State: up	/dev/da5s1h	A: 0/511 MB (0%)
+D diskF                 State: up	/dev/da4s1h	A: 0/511 MB (0%)
+D diskE                 State: up	/dev/da3s1h	A: 0/511 MB (0%)
+D diskD                 State: up	/dev/da2s1h	A: 0/511 MB (0%)
+D diskC                 State: up	/dev/da1s1h	A: 0/511 MB (0%)
+D diskB                 State: up	/dev/da0s1h	A: 0/511 MB (0%)
+
+1 volume:
+V RAID01               State: up	Plexes:       2	Size:       1535 MB
+
+2 plexes:
+P RAID01.p1          S State: up	Subdisks:     3	Size:       1535 MB
+P RAID01.p0          S State: up	Subdisks:     3	Size:       1535 MB
+
+6 subdisks:
+S RAID01.p1.s2         State: up	D: diskG        Size:        511 MB
+S RAID01.p1.s1         State: up	D: diskF        Size:        511 MB
+S RAID01.p1.s0         State: up	D: diskE        Size:        511 MB
+S RAID01.p0.s2         State: up	D: diskD        Size:        511 MB
+S RAID01.p0.s1         State: up	D: diskC        Size:        511 MB
+S RAID01.p0.s0         State: up	D: diskB        Size:        511 MB
 </pre>
 
-<p>The subdisks of the second plex are offset by two drives from those of the first plex:
-this helps ensure that writes do not go to the same subdisks even if a transfer goes over
-two drives.</p>
+<p>In this example the two striped plexes use disjoint sets of drives, so the failure of any
+single drive always leaves one complete plex intact.</p>
 
-<p><a href="vinum-examples.html#VINUM-RAID10-VOL">Figure 20-7</a> represents the
-structure of this volume.</p>
-
-<p></p>
+<div class="FIGURE"><a id="VINUM-RAID01-VOL" name="VINUM-RAID01-VOL"></a>
+<p><img src="vinum/vinum-raid01.png" /></p>
+<p><b>Figure 20-8. A RAID-0+1 Vinum Volume</b></p>
+</div>
 
-<div class="FIGURE"><a id="VINUM-RAID10-VOL" name="VINUM-RAID10-VOL"></a>
-<p><b>Figure 20-7. A Mirrored, Striped Vinum Volume</b></p>
+</div>
 
-<p><img src="vinum/vinum-raid10-vol.png" /></p>
 </div>
 
-<br />
-<br />
+<div class="SECT2">
+<h2 class="SECT2"><a id="VINUM-EXAMPLE-RAID10" name="VINUM-EXAMPLE-RAID10">20.8.6 RAID-1+0</a></h2>
+
+<p>With sufficient hardware, it is possible to build volumes which show both increased resilience and increased performance
+compared to standard <span class="TRADEMARK">UNIX</span>&reg; partitions in more than one way. The RAID-1+0 configuration differs from RAID-0+1 in the way mirrors and stripes are used. A typical configuration file for a RAID-1+0 
+(<a href="vinum-data-integrity.html#VINUM-DATA-INTEGRITY-RAID10">Section 20.4.4</a>) 
+might be:</p>
+
+<pre class="PROGRAMLISTING">
+#cat raid10_ph1.conf
+drive diskB device /dev/da0s1h
+drive diskC device /dev/da1s1h
+drive diskD device /dev/da2s1h
+drive diskE device /dev/da3s1h
+drive diskF device /dev/da4s1h
+drive diskG device /dev/da5s1h
+volume m0
+	plex org concat
+		sd drive diskB
+	plex org concat
+		sd drive diskC
+volume m1
+	plex org concat
+		sd drive diskD
+	plex org concat
+		sd drive diskE
+volume m2
+	plex org concat
+		sd drive diskF
+	plex org concat
+		sd drive diskG
+
+#cat raid10_ph2.conf
+drive dm0 device /dev/gvinum/m0
+drive dm1 device /dev/gvinum/m1
+drive dm2 device /dev/gvinum/m2
+
+volume RAID10
+	plex org striped 256k
+		sd drive dm0
+		sd drive dm1
+		sd drive dm2
+</pre>
+
+<pre class="PROGRAMLISTING">
+#gvinum create raid10_ph1.conf
+#gvinum create raid10_ph2.conf
+</pre>
+
+<pre class="PROGRAMLISTING">
+# gvinum list
+9 drives:
+D dm2                   State: up	/dev/gvinum/sd/m2.p0.s0	A: 0/511 MB (0%)
+D dm1                   State: up	/dev/gvinum/sd/m1.p0.s0	A: 0/511 MB (0%)
+D dm0                   State: up	/dev/gvinum/sd/m0.p0.s0	A: 0/511 MB (0%)
+D diskG                 State: up	/dev/da5s1h	A: 0/511 MB (0%)
+D diskF                 State: up	/dev/da4s1h	A: 0/511 MB (0%)
+D diskE                 State: up	/dev/da3s1h	A: 0/511 MB (0%)
+D diskD                 State: up	/dev/da2s1h	A: 0/511 MB (0%)
+D diskC                 State: up	/dev/da1s1h	A: 0/511 MB (0%)
+D diskB                 State: up	/dev/da0s1h	A: 0/511 MB (0%)
+
+4 volumes:
+V RAID10                State: up	Plexes:       1	Size:       1534 MB
+V m2                    State: up	Plexes:       2	Size:        511 MB
+V m1                    State: up	Plexes:       2	Size:        511 MB
+V m0                    State: up	Plexes:       2	Size:        511 MB
+
+7 plexes:
+P RAID10.p0           S State: up	Subdisks:     3	Size:       1534 MB
+P m2.p1               C State: up	Subdisks:     1	Size:        511 MB
+P m2.p0               C State: up	Subdisks:     1	Size:        511 MB
+P m1.p1               C State: up	Subdisks:     1	Size:        511 MB
+P m1.p0               C State: up	Subdisks:     1	Size:        511 MB
+P m0.p1               C State: up	Subdisks:     1	Size:        511 MB
+P m0.p0               C State: up	Subdisks:     1	Size:        511 MB
+
+9 subdisks:
+S RAID10.p0.s2          State: up	D: dm2          Size:        511 MB
+S RAID10.p0.s1          State: up	D: dm1          Size:        511 MB
+S RAID10.p0.s0          State: up	D: dm0          Size:        511 MB
+S m2.p1.s0              State: up	D: diskG        Size:        511 MB
+S m2.p0.s0              State: up	D: diskF        Size:        511 MB
+S m1.p1.s0              State: up	D: diskE        Size:        511 MB
+S m1.p0.s0              State: up	D: diskD        Size:        511 MB
+S m0.p1.s0              State: up	D: diskC        Size:        511 MB
+S m0.p0.s0              State: up	D: diskB        Size:        511 MB
+</pre>
+
+<div class="FIGURE"><a id="VINUM-RAID10-VOL" name="VINUM-RAID10-VOL"></a>
+<p><img src="vinum/vinum-raid10.png" /></p>
+<p><b>Figure 20-9. A RAID-1+0 Volume</b></p>
 </div>
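+
+<p>As an illustrative final step (the mount point <tt class="FILENAME">/raid10</tt> is an arbitrary
+choice for this sketch), a file system can then be created on the new volume and mounted:</p>
+
+<pre class="PROGRAMLISTING">
+# newfs /dev/gvinum/RAID10
+# mkdir /raid10
+# mount /dev/gvinum/RAID10 /raid10
+</pre>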
+
 </div>
 
 <div class="NAVFOOTER">
@@ -329,11 +453,11 @@
 <table summary="Footer navigation table" width="100%" border="0" cellpadding="0"
 cellspacing="0">
 <tr>
-<td width="33%" align="left" valign="top"><a href="vinum-objects.html"
+<td width="33%" align="left" valign="top"><a href="vinum-root.html"
 accesskey="P">Prev</a></td>
 <td width="34%" align="center" valign="top"><a href="index.html"
 accesskey="H">Home</a></td>
-<td width="33%" align="right" valign="top"><a href="vinum-object-naming.html"
+<td width="33%" align="right" valign="top"><a href="virtualization.html"
 accesskey="N">Next</a></td>
 </tr>
 
@@ -341,7 +465,7 @@
 <td width="33%" align="left" valign="top">Vinum Objects</td>
 <td width="34%" align="center" valign="top"><a href="vinum-vinum.html"
 accesskey="U">Up</a></td>
-<td width="33%" align="right" valign="top">Object Naming</td>
+<td width="33%" align="right" valign="top">Virtualization</td>
 </tr>
 </table>
 </div>
diff -r -u handbook.orig/vinum-intro.html handbook/vinum-intro.html
--- handbook.orig/vinum-intro.html	2008-03-22 05:43:54.000000000 +0100
+++ handbook/vinum-intro.html	2008-04-08 14:23:40.000000000 +0200
@@ -3,12 +3,12 @@
 <html xmlns="http://www.w3.org/1999/xhtml">
 <head>
 <meta name="generator" content="HTML Tidy, see www.w3.org" />
-<title>Disks Are Too Small</title>
+<title>Introduction</title>
 <meta name="GENERATOR" content="Modular DocBook HTML Stylesheet Version 1.79" />
 <link rel="HOME" title="FreeBSD Handbook" href="index.html" />
 <link rel="UP" title="The Vinum Volume Manager" href="vinum-vinum.html" />
 <link rel="PREVIOUS" title="The Vinum Volume Manager" href="vinum-vinum.html" />
-<link rel="NEXT" title="Access Bottlenecks" href="vinum-access-bottlenecks.html" />
+<link rel="NEXT" title="Disk Performance Issues" href="vinum-disk-performance-issues.html" />
 <link rel="STYLESHEET" type="text/css" href="docbook.css" />
 <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
 </head>
@@ -25,7 +25,7 @@
 <td width="10%" align="left" valign="bottom"><a href="vinum-vinum.html"
 accesskey="P">Prev</a></td>
 <td width="80%" align="center" valign="bottom">Chapter 20 The Vinum Volume Manager</td>
-<td width="10%" align="right" valign="bottom"><a href="vinum-access-bottlenecks.html"
+<td width="10%" align="right" valign="bottom"><a href="vinum-disk-performance-issues.html"
 accesskey="N">Next</a></td>
 </tr>
 </table>
@@ -34,14 +34,24 @@
 </div>
 
 <div class="SECT1">
-<h1 class="SECT1"><a id="VINUM-INTRO" name="VINUM-INTRO">20.2 Disks Are Too
-Small</a></h1>
+<h1 class="SECT1"><a id="VINUM-INTRO" name="VINUM-INTRO">20.2 Introduction</a></h1>
+
+<p>
+Ever since computers began to be used as data storage devices, the problem of keeping that storage safe and reliable has been studied.
+</p>
+<p>
+Several strategies have been developed; one of the most interesting is the Redundant Array of Inexpensive Disks (RAID).
+The term RAID was coined in 1987 by David A. Patterson, Garth A. Gibson and Randy Katz at the University of California, Berkeley. They studied the possibility of making two or more drives appear as a single device to the host system, and published the paper &#8220;A Case for Redundant Arrays of Inexpensive Disks (RAID)&#8221; at the SIGMOD conference in June 1988. The idea of using redundant disk arrays is older, however: it was first patented by Norman Ken Ouchi at IBM. That patent, awarded in 1978 (U.S. patent 4,092,732) and titled &#8220;System for recovering data stored in failed memory unit&#8221;, describes what would later be named RAID-5 with full stripe writes, and acknowledges that disk mirroring or duplexing (RAID-1) and protection with dedicated parity (RAID-4) were prior art at the time the patent was filed.
+</p>
+<p>
+Vinum is a volume manager: software capable of implementing the RAID-0, RAID-1 and RAID-5 organizations. Hardware RAID controllers are very popular nowadays, and some of them offer significantly better performance than a comparable software RAID setup. Nevertheless, a software volume manager provides more flexibility and can also be used in combination with a hardware controller.
+</p>
+<p>
+Since FreeBSD 5, Vinum has been integrated into the GEOM framework
+(<a href="http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/geom.html">Chapter 19</a>),
+which also provides alternative ways of implementing RAID-0 and RAID-1.
+</p>
 
-<p>Disks are getting bigger, but so are data storage requirements. Often you will find
-you want a file system that is bigger than the disks you have available. Admittedly, this
-problem is not as acute as it was ten years ago, but it still exists. Some systems have
-solved this by creating an abstract device which stores its data on a number of
-disks.</p>
 </div>
 
 <div class="NAVFOOTER">
@@ -53,7 +63,7 @@
 accesskey="P">Prev</a></td>
 <td width="34%" align="center" valign="top"><a href="index.html"
 accesskey="H">Home</a></td>
-<td width="33%" align="right" valign="top"><a href="vinum-access-bottlenecks.html"
+<td width="33%" align="right" valign="top"><a href="vinum-disk-performance-issues.html"
 accesskey="N">Next</a></td>
 </tr>
 
@@ -61,7 +71,7 @@
 <td width="33%" align="left" valign="top">The Vinum Volume Manager</td>
 <td width="34%" align="center" valign="top"><a href="vinum-vinum.html"
 accesskey="U">Up</a></td>
-<td width="33%" align="right" valign="top">Access Bottlenecks</td>
+<td width="33%" align="right" valign="top">Disk Performance Issues</td>
 </tr>
 </table>
 </div>
diff -r -u handbook.orig/vinum-objects.html handbook/vinum-objects.html
--- handbook.orig/vinum-objects.html	2008-03-22 05:43:54.000000000 +0100
+++ handbook/vinum-objects.html	2008-04-08 14:34:32.000000000 +0200
@@ -8,7 +8,7 @@
 <link rel="HOME" title="FreeBSD Handbook" href="index.html" />
 <link rel="UP" title="The Vinum Volume Manager" href="vinum-vinum.html" />
 <link rel="PREVIOUS" title="Data Integrity" href="vinum-data-integrity.html" />
-<link rel="NEXT" title="Some Examples" href="vinum-examples.html" />
+<link rel="NEXT" title="Vinum Configuration" href="vinum-config.html" />
 <link rel="STYLESHEET" type="text/css" href="docbook.css" />
 <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
 </head>
@@ -25,7 +25,7 @@
 <td width="10%" align="left" valign="bottom"><a href="vinum-data-integrity.html"
 accesskey="P">Prev</a></td>
 <td width="80%" align="center" valign="bottom">Chapter 20 The Vinum Volume Manager</td>
-<td width="10%" align="right" valign="bottom"><a href="vinum-examples.html"
+<td width="10%" align="right" valign="bottom"><a href="vinum-config.html"
 accesskey="N">Next</a></td>
 </tr>
 </table>
@@ -36,15 +36,14 @@
 <div class="SECT1">
 <h1 class="SECT1"><a id="VINUM-OBJECTS" name="VINUM-OBJECTS">20.5 Vinum Objects</a></h1>
 
-<p>In order to address these problems, Vinum implements a four-level hierarchy of
-objects:</p>
+<p>Vinum implements a four-level hierarchy of objects:</p>
 
 <ul>
 <li>
 <p>The most visible object is the virtual disk, called a <span class="emphasis"><i
 class="EMPHASIS">volume</i></span>. Volumes have essentially the same properties as a
 <span class="TRADEMARK">UNIX</span>&reg; disk drive, though there are some minor
-differences. They have no size limitations.</p>
+differences. Their size is not limited by the size of an individual drive.</p>
 </li>
 
 <li>
@@ -103,31 +102,34 @@
 <div class="SECT2">
 <h2 class="SECT2"><a id="AEN27129" name="AEN27129">20.5.3 Performance Issues</a></h2>
 
-<p>Vinum implements both concatenation and striping at the plex level:</p>
+<p>Vinum implements Concatenation, Striping and RAID-5 at the plex level:</p>
 
 <ul>
 <li>
-<p>A <span class="emphasis"><i class="EMPHASIS">concatenated plex</i></span> uses the
+<p>A <span class="emphasis"><i class="EMPHASIS">Concatenated</i></span> plex uses the
 address space of each subdisk in turn.</p>
 </li>
 
 <li>
-<p>A <span class="emphasis"><i class="EMPHASIS">striped plex</i></span> stripes the data
+<p>A <span class="emphasis"><i class="EMPHASIS">Striped</i></span> plex stripes the data
 across each subdisk. The subdisks must all have the same size, and there must be at least
 two subdisks in order to distinguish it from a concatenated plex.</p>
 </li>
+<li>
+<p>Like a striped plex, a <span class="emphasis"><i class="EMPHASIS">RAID-5</i></span> plex stripes the data
+across each subdisk. The subdisks must all have the same size, and there must be at least three of them;
+with fewer, mirroring would be more efficient.</p>
+</li>
 </ul>
 </div>
 
 <div class="SECT2">
-<h2 class="SECT2"><a id="AEN27139" name="AEN27139">20.5.4 Which Plex
-Organization?</a></h2>
+<h2 class="SECT2"><a id="AEN27139" name="AEN27139">20.5.4 Which Plex Organization?</a></h2>
 
-<p>The version of Vinum supplied with FreeBSD 7.0 implements two kinds of plex:</p>
+<p>The version of Vinum supplied with FreeBSD 7.0 implements three kinds of plex:</p>
 
 <ul>
 <li>
-<p>Concatenated plexes are the most flexible: they can contain any number of subdisks,
+<p><span class="emphasis"><i class="EMPHASIS">Concatenated</i></span> plexes are the most flexible: they can contain any number of subdisks,
 and the subdisks may be of different length. The plex may be extended by adding
 additional subdisks. They require less <acronym class="ACRONYM">CPU</acronym> time than
 striped plexes, though the difference in <acronym class="ACRONYM">CPU</acronym> overhead
@@ -136,29 +138,30 @@
 </li>
 
 <li>
-<p>The greatest advantage of striped (<acronym class="ACRONYM">RAID-0</acronym>) plexes
+<p>The greatest advantage of <span class="emphasis"><i class="EMPHASIS">Striped</i></span> (<acronym class="ACRONYM">RAID-0</acronym>) plexes
 is that they reduce hot spots: by choosing an optimum sized stripe (about 256&nbsp;kB),
 you can even out the load on the component drives. The disadvantages of this approach are
 (fractionally) more complex code and restrictions on subdisks: they must be all the same
 size, and extending a plex by adding new subdisks is so complicated that Vinum currently
 does not implement it. Vinum imposes an additional, trivial restriction: a striped plex
-must have at least two subdisks, since otherwise it is indistinguishable from a
+must have at least two subdisks, otherwise it is indistinguishable from a
 concatenated plex.</p>
 </li>
+<li>
+<p><span class="emphasis"><i class="EMPHASIS">RAID-5</i></span> plexes are effectively an extension of striped
+plexes. Compared to striped plexes, they offer the advantage of fault tolerance, at the cost of higher
+storage overhead and significantly higher <acronym class="ACRONYM">CPU</acronym> overhead, particularly for
+writes. The code is an order of magnitude more complex than for concatenated and striped plexes. Like
+striped plexes, RAID-5 plexes must have equal-sized subdisks and cannot currently be extended. Vinum
+enforces a minimum of three subdisks for a RAID-5 plex, since any smaller number would not make sense.
+A minimal configuration sketch is shown after this list.</p>
+</li>
 </ul>
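+<p>The sketch below (the drive and volume names, device paths and stripe size are arbitrary, chosen only
+for this illustration) describes a RAID-5 plex over three disks:</p>
+
+<pre class="PROGRAMLISTING">
+drive r1 device /dev/da1s1h
+drive r2 device /dev/da2s1h
+drive r3 device /dev/da3s1h
+volume raid5vol
+	plex org raid5 256k
+		sd drive r1
+		sd drive r2
+		sd drive r3
+</pre>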
 
-<p><a href="vinum-objects.html#VINUM-COMPARISON">Table 20-1</a> summarizes the advantages
-and disadvantages of each plex organization.</p>
-
 <div class="TABLE"><a id="VINUM-COMPARISON" name="VINUM-COMPARISON"></a>
-<p><b>Table 20-1. Vinum Plex Organizations</b></p>
+<p><b>Table 20-1. Vinum Plex Organizations: Advantages and Disadvantages</b></p>
 
-<table border="0" frame="void" class="CALSTABLE">
-<col />
-<col />
-<col />
-<col />
-<col />
+<table class="CLASSTABLE">
 <thead>
 <tr>
 <th>Plex type</th>
@@ -171,7 +174,7 @@
 
 <tbody>
 <tr>
-<td>concatenated</td>
+<td>Concatenated</td>
 <td>1</td>
 <td>yes</td>
 <td>no</td>
@@ -179,18 +182,76 @@
 </tr>
 
 <tr>
-<td>striped</td>
+<td>Striped</td>
 <td>2</td>
 <td>no</td>
 <td>yes</td>
 <td>High performance in combination with highly concurrent access</td>
 </tr>
+
+<tr>
+<td>RAID-5</td>
+<td>3</td>
+<td>no</td>
+<td>yes</td>
+<td>Highly reliable storage, efficient read access; write performance is moderate</td>
+</tr>
+
 </tbody>
 </table>
+
 </div>
+
+<div class="SECT2">
+<h2 class="SECT2"><a id="AEN27149" name="AEN27149">20.5.5 Object Naming</a></h2>
+
+<p>Vinum assigns default names to plexes and subdisks, although they
+may be overridden. Overriding the default names is not recommended: experience with the
+VERITAS volume manager, which allows arbitrary naming of objects, has shown that this
+flexibility does not bring a significant advantage, and it can cause confusion.</p>
+
+<p>Names may contain any non-blank character, but it is recommended to restrict them to
+letters, digits and the underscore character. The names of volumes, plexes and subdisks
+may be up to 64 characters long, and the names of drives may be up to 32 characters
+long.</p>
+
+<p>Vinum objects are assigned device nodes under the hierarchy <tt class="FILENAME">/dev/gvinum</tt>:</p>
+
+<ul>
+
+<li>
+<p>For each volume created, there is a direct <tt class="FILENAME">/dev/gvinum/My-Volume-Name</tt> entry.</p>
+</li>
+
+<li>
+<p>The directories <tt class="FILENAME">/dev/gvinum/plex</tt> and <tt class="FILENAME">/dev/gvinum/sd</tt> contain device nodes for each plex and for
+each subdisk, respectively.</p>
+</li>
+</ul>
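+
+<p>As an illustrative sketch, for a volume named <tt class="LITERAL">RAID10</tt> with one striped plex of
+three subdisks (the example from <a href="vinum-examples.html#VINUM-EXAMPLE-RAID10">Section 20.8.5</a>),
+the default names result in device nodes such as:</p>
+
+<pre class="PROGRAMLISTING">
+/dev/gvinum/RAID10
+/dev/gvinum/plex/RAID10.p0
+/dev/gvinum/sd/RAID10.p0.s0
+</pre>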
+
 </div>
+
+<div class="SECT2">
+<h2 class="SECT2"><a id="AEN27150" name="AEN27150">20.5.6 Differences for FreeBSD 4.X</a></h2>
+
+<p>Under FreeBSD 4.X, Vinum objects are assigned device nodes in the hierarchy <tt class="FILENAME">/dev/vinum</tt>. These include:
+</p>
+<ul>
+<li>
+<p>The control devices <tt class="FILENAME">/dev/vinum/control</tt> and <tt class="FILENAME">/dev/vinum/controld</tt>, which are used by 
+<a href="http://www.FreeBSD.org/cgi/man.cgi?query=vinum&amp;sektion=8"><span class="CITEREFENTRY"><span class="REFENTRYTITLE">vinum</span>(8)</span></a> 
+and the Vinum daemon respectively.</p>
+</li>
+
+<li>
+<p>A directory <tt class="FILENAME">/dev/vinum/drive</tt> with entries for each drive.
+These entries are in fact symbolic links to the corresponding disk nodes.</p>
+</li>
+
+
+</ul>
+
 </div>
 
+
 <div class="NAVFOOTER">
 <hr align="LEFT" width="100%" />
 <table summary="Footer navigation table" width="100%" border="0" cellpadding="0"
@@ -200,7 +261,7 @@
 accesskey="P">Prev</a></td>
 <td width="34%" align="center" valign="top"><a href="index.html"
 accesskey="H">Home</a></td>
-<td width="33%" align="right" valign="top"><a href="vinum-examples.html"
+<td width="33%" align="right" valign="top"><a href="vinum-config.html"
 accesskey="N">Next</a></td>
 </tr>
 
@@ -208,7 +269,7 @@
 <td width="33%" align="left" valign="top">Data Integrity</td>
 <td width="34%" align="center" valign="top"><a href="vinum-vinum.html"
 accesskey="U">Up</a></td>
-<td width="33%" align="right" valign="top">Some Examples</td>
+<td width="33%" align="right" valign="top">Configuring Vinum</td>
 </tr>
 </table>
 </div>
diff -r -u handbook.orig/vinum-root.html handbook/vinum-root.html
--- handbook.orig/vinum-root.html	2008-03-22 05:43:54.000000000 +0100
+++ handbook/vinum-root.html	2008-04-08 14:28:55.000000000 +0200
@@ -3,12 +3,12 @@
 <html xmlns="http://www.w3.org/1999/xhtml">
 <head>
 <meta name="generator" content="HTML Tidy, see www.w3.org" />
-<title>Using Vinum for the Root Filesystem</title>
+<title>Using Vinum for the Root File system</title>
 <meta name="GENERATOR" content="Modular DocBook HTML Stylesheet Version 1.79" />
 <link rel="HOME" title="FreeBSD Handbook" href="index.html" />
 <link rel="UP" title="The Vinum Volume Manager" href="vinum-vinum.html" />
 <link rel="PREVIOUS" title="Configuring Vinum" href="vinum-config.html" />
-<link rel="NEXT" title="Virtualization" href="virtualization.html" />
+<link rel="NEXT" title="Vinum Examples" href="vinum-examples.html" />
 <link rel="STYLESHEET" type="text/css" href="docbook.css" />
 <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
 </head>
@@ -25,7 +25,7 @@
 <td width="10%" align="left" valign="bottom"><a href="vinum-config.html"
 accesskey="P">Prev</a></td>
 <td width="80%" align="center" valign="bottom">Chapter 20 The Vinum Volume Manager</td>
-<td width="10%" align="right" valign="bottom"><a href="virtualization.html"
+<td width="10%" align="right" valign="bottom"><a href="vinum-examples.html"
 accesskey="N">Next</a></td>
 </tr>
 </table>
@@ -34,110 +34,56 @@
 </div>
 
 <div class="SECT1">
-<h1 class="SECT1"><a id="VINUM-ROOT" name="VINUM-ROOT">20.9 Using Vinum for the Root
-Filesystem</a></h1>
+<h1 class="SECT1"><a id="VINUM-ROOT" name="VINUM-ROOT">20.7 Using Vinum for the Root
+File system</a></h1>
 
-<p>For a machine that has fully-mirrored filesystems using Vinum, it is desirable to also
-mirror the root filesystem. Setting up such a configuration is less trivial than
-mirroring an arbitrary filesystem because:</p>
+<p>For a machine that has fully-mirrored file systems using Vinum, it is desirable to also
+mirror the root file system. Setting up such a configuration is less trivial than
+mirroring an arbitrary file system because:</p>
 
 <ul>
 <li>
-<p>The root filesystem must be available very early during the boot process, so the Vinum
+<p>The root file system must be available very early during the boot process, so the Vinum
 infrastructure must already be available at this time.</p>
 </li>
 
 <li>
-<p>The volume containing the root filesystem also contains the system bootstrap and the
+<p>The volume containing the root file system also contains the system bootstrap and the
 kernel, which must be read using the host system's native utilities (e. g. the BIOS on
 PC-class machines) which often cannot be taught about the details of Vinum.</p>
 </li>
 </ul>
 
 <p>In the following sections, the term &#8220;root volume&#8221; is generally used to
-describe the Vinum volume that contains the root filesystem. It is probably a good idea
+describe the Vinum volume that contains the root file system. It is probably a good idea
 to use the name <tt class="LITERAL">"root"</tt> for this volume, but this is not
 technically required in any way. All command examples in the following sections assume
 this name though.</p>
 
 <div class="SECT2">
-<h2 class="SECT2"><a id="AEN27394" name="AEN27394">20.9.1 Starting up Vinum Early Enough
-for the Root Filesystem</a></h2>
+<h2 class="SECT2"><a id="AEN27394" name="AEN27394">20.7.1 Starting up Vinum Early Enough
+for the Root File system</a></h2>
 
-<p>There are several measures to take for this to happen:</p>
-
-<ul>
-<li>
-<p>Vinum must be available in the kernel at boot-time. Thus, the method to start Vinum
-automatically described in <a href="vinum-config.html#VINUM-RC-STARTUP">Section
-20.8.1.1</a> is not applicable to accomplish this task, and the <tt
-class="LITERAL">start_vinum</tt> parameter must actually <span class="emphasis"><i
-class="EMPHASIS">not</i></span> be set when the following setup is being arranged. The
-first option would be to compile Vinum statically into the kernel, so it is available all
-the time, but this is usually not desirable. There is another option as well, to have <tt
-class="FILENAME">/boot/loader</tt> (<a href="boot-blocks.html#BOOT-LOADER">Section
-12.3.3</a>) load the vinum kernel module early, before starting the kernel. This can be
-accomplished by putting the line:</p>
+<p>Vinum must be available in the kernel at boot time.
+Add the following line to <tt class="FILENAME">/boot/loader.conf</tt> (<a href="boot-blocks.html#BOOT-LOADER">Section
+12.3.3</a>) in order to load the Vinum kernel module early enough:</p>
 
 <pre class="PROGRAMLISTING">
 geom_vinum_load="YES"
 </pre>
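+
+<p>As a quick, illustrative check (any way of listing loaded kernel modules will do), the presence of the
+module after the next boot can be verified with:</p>
+
+<pre class="PROGRAMLISTING">
+# kldstat | grep geom_vinum
+</pre>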
 
-<p>into the file <tt class="FILENAME">/boot/loader.conf</tt>.</p>
-</li>
-
-<li>
-<div class="NOTE">
-<blockquote class="NOTE">
-<p><b>Note:</b> For <span class="emphasis"><i class="EMPHASIS">Gvinum</i></span>, all
-startup is done automatically once the kernel module has been loaded, so the procedure
-described above is all that is needed. The following text documents the behaviour of the
-historic Vinum system, for the sake of older setups.</p>
-</blockquote>
-</div>
-
-<p>Vinum must be initialized early since it needs to supply the volume for the root
-filesystem. By default, the Vinum kernel part is not looking for drives that might
-contain Vinum volume information until the administrator (or one of the startup scripts)
-issues a <tt class="COMMAND">vinum start</tt> command.</p>
-
-<div class="NOTE">
-<blockquote class="NOTE">
-<p><b>Note:</b> The following paragraphs are outlining the steps needed for FreeBSD 5.X
-and above. The setup required for FreeBSD 4.X differs, and is described below in <a
-href="vinum-root.html#VINUM-ROOT-4X">Section 20.9.5</a>.</p>
-</blockquote>
-</div>
-
-<p>By placing the line:</p>
-
-<pre class="PROGRAMLISTING">
-vinum.autostart="YES"
-</pre>
-
-<p>into <tt class="FILENAME">/boot/loader.conf</tt>, Vinum is instructed to automatically
-scan all drives for Vinum information as part of the kernel startup.</p>
-
-<p>Note that it is not necessary to instruct the kernel where to look for the root
-filesystem. <tt class="FILENAME">/boot/loader</tt> looks up the name of the root device
-in <tt class="FILENAME">/etc/fstab</tt>, and passes this information on to the kernel.
-When it comes to mount the root filesystem, the kernel figures out from the device name
-provided which driver to ask to translate this into the internal device ID (major/minor
-number).</p>
-</li>
-</ul>
 </div>
 
 <div class="SECT2">
-<h2 class="SECT2"><a id="AEN27424" name="AEN27424">20.9.2 Making a Vinum-based Root
+<h2 class="SECT2"><a id="AEN27424" name="AEN27424">20.7.2 Making a Vinum-based Root
 Volume Accessible to the Bootstrap</a></h2>
 
 <p>Since the current FreeBSD bootstrap is only 7.5 KB of code, and already has the burden
-of reading files (like <tt class="FILENAME">/boot/loader</tt>) from the UFS filesystem,
+of reading files (like <tt class="FILENAME">/boot/loader</tt>) from the UFS file system,
 it is sheer impossible to also teach it about internal Vinum structures so it could parse
 the Vinum configuration data, and figure out about the elements of a boot volume itself.
 Thus, some tricks are necessary to provide the bootstrap code with the illusion of a
-standard <tt class="LITERAL">"a"</tt> partition that contains the root filesystem.</p>
+standard <tt class="LITERAL">"a"</tt> partition that contains the root file system.</p>
 
 <p>For this to be possible at all, the following requirements must be met for the root
 volume:</p>
@@ -153,9 +99,9 @@
 </ul>
 
 <p>Note that it is desirable and possible that there are multiple plexes, each containing
-one replica of the root filesystem. The bootstrap process will, however, only use one of
+one replica of the root file system. The bootstrap process will, however, only use one of
 these replica for finding the bootstrap and all the files, until the kernel will
-eventually mount the root filesystem itself. Each single subdisk within these plexes will
+eventually mount the root file system itself. Each single subdisk within these plexes will
 then need its own <tt class="LITERAL">"a"</tt> partition illusion, for the respective
 device to become bootable. It is not strictly needed that each of these faked <tt
 class="LITERAL">"a"</tt> partitions is located at the same offset within its device,
@@ -186,18 +132,18 @@
 
 <pre class="SCREEN">
 <samp class="PROMPT">#</samp> <kbd class="USERINPUT">bsdlabel -e <tt
-class="REPLACEABLE"><i>devname</i></tt></kbd>
+class="REPLACEABLE"><i>${devname}</i></tt></kbd>
 </pre>
 
 <p>for each device that participates in the root volume. <tt
-class="REPLACEABLE"><i>devname</i></tt> must be either the name of the disk (like <tt
+class="REPLACEABLE"><i>${devname}</i></tt> must be either the name of the disk (like <tt
 class="DEVICENAME">da0</tt>) for disks without a slice (aka. fdisk) table, or the name of
 the slice (like <tt class="DEVICENAME">ad0s1</tt>).</p>
 
 <p>If there is already an <tt class="LITERAL">"a"</tt> partition on the device
-(presumably, containing a pre-Vinum root filesystem), it should be renamed to something
+(presumably, containing a pre-Vinum root file system), it should be renamed to something
 else, so it remains accessible (just in case), but will no longer be used by default to
-bootstrap the system. Note that active partitions (like a root filesystem currently
+bootstrap the system. Note that active partitions (like a root file system currently
 mounted) cannot be renamed, so this must be executed either when being booted from a
 &#8220;Fixit&#8221; medium, or in a two-step process, where (in a mirrored situation) the
 disk that has not been currently booted is being manipulated first.</p>
@@ -209,7 +155,7 @@
 partition can be taken verbatim from the calculation above. The <tt
 class="LITERAL">"fstype"</tt> should be <tt class="LITERAL">4.2BSD</tt>. The <tt
 class="LITERAL">"fsize"</tt>, <tt class="LITERAL">"bsize"</tt>, and <tt
-class="LITERAL">"cpg"</tt> values should best be chosen to match the actual filesystem,
+class="LITERAL">"cpg"</tt> values should best be chosen to match the actual file system,
 though they are fairly unimportant within this context.</p>
 
 <p>That way, a new <tt class="LITERAL">"a"</tt> partition will be established that
@@ -225,20 +171,20 @@
 
 <pre class="SCREEN">
 <samp class="PROMPT">#</samp> <kbd class="USERINPUT">fsck -n /dev/<tt
-class="REPLACEABLE"><i>devname</i></tt>a</kbd>
+class="REPLACEABLE"><i>${devname}</i></tt>a</kbd>
 </pre>
 </li>
 </ol>
 </div>
 
 <p>It should be remembered that all files containing control information must be relative
-to the root filesystem in the Vinum volume which, when setting up a new Vinum root
-volume, might not match the root filesystem that is currently active. So in particular,
+to the root file system in the Vinum volume which, when setting up a new Vinum root
+volume, might not match the root file system that is currently active. So in particular,
 the files <tt class="FILENAME">/etc/fstab</tt> and <tt
 class="FILENAME">/boot/loader.conf</tt> need to be taken care of.</p>
 
 <p>At next reboot, the bootstrap should figure out the appropriate control information
-from the new Vinum-based root filesystem, and act accordingly. At the end of the kernel
+from the new Vinum-based root file system, and act accordingly. At the end of the kernel
 initialization process, after all devices have been announced, the prominent notice that
 shows the success of this setup is a message like:</p>
 
@@ -248,7 +194,7 @@
 </div>
 
 <div class="SECT2">
-<h2 class="SECT2"><a id="AEN27486" name="AEN27486">20.9.3 Example of a Vinum-based Root
+<h2 class="SECT2"><a id="AEN27486" name="AEN27486">20.7.3 Example of a Vinum-based Root
 Setup</a></h2>
 
 <p>After the Vinum root volume has been set up, the output of <tt class="COMMAND">gvinum
@@ -293,7 +239,7 @@
 class="LITERAL">"offset"</tt> parameter is the sum of the offset within the Vinum
 partition <tt class="LITERAL">"h"</tt>, and the offset of this partition within the
 device (or slice). This is a typical setup that is necessary to avoid the problem
-described in <a href="vinum-root.html#VINUM-ROOT-PANIC">Section 20.9.4.3</a>. It can also
+described in <a href="vinum-root.html#VINUM-ROOT-PANIC">Section 20.7.4.3</a>. It can also
 be seen that the entire <tt class="LITERAL">"a"</tt> partition is completely within the
 <tt class="LITERAL">"h"</tt> partition containing all the Vinum data for this device.</p>
 
@@ -303,13 +249,13 @@
 </div>
 
 <div class="SECT2">
-<h2 class="SECT2"><a id="AEN27507" name="AEN27507">20.9.4 Troubleshooting</a></h2>
+<h2 class="SECT2"><a id="AEN27507" name="AEN27507">20.7.4 Troubleshooting</a></h2>
 
 <p>If something goes wrong, a way is needed to recover from the situation. The following
 list contains few known pitfalls and solutions.</p>
 
 <div class="SECT3">
-<h3 class="SECT3"><a id="AEN27510" name="AEN27510">20.9.4.1 System Bootstrap Loads, but
+<h3 class="SECT3"><a id="AEN27510" name="AEN27510">20.7.4.1 System Bootstrap Loads, but
 System Does Not Boot</a></h3>
 
 <p>If for any reason the system does not continue to boot, the bootstrap can be
@@ -324,26 +270,26 @@
 
 <p>When ready, the boot process can be continued with a <tt class="COMMAND">boot
 -as</tt>. The options <code class="OPTION">-as</code> will request the kernel to ask for
-the root filesystem to mount (<code class="OPTION">-a</code>), and make the boot process
-stop in single-user mode (<code class="OPTION">-s</code>), where the root filesystem is
+the root file system to mount (<code class="OPTION">-a</code>), and make the boot process
+stop in single-user mode (<code class="OPTION">-s</code>), where the root file system is
 mounted read-only. That way, even if only one plex of a multi-plex volume has been
 mounted, no data inconsistency between plexes is being risked.</p>
 
-<p>At the prompt asking for a root filesystem to mount, any device that contains a valid
-root filesystem can be entered. If <tt class="FILENAME">/etc/fstab</tt> had been set up
+<p>At the prompt asking for a root file system to mount, any device that contains a valid
+root file system can be entered. If <tt class="FILENAME">/etc/fstab</tt> had been set up
 correctly, the default should be something like <tt
 class="LITERAL">ufs:/dev/gvinum/root</tt>. A typical alternate choice would be something
 like <tt class="LITERAL">ufs:da0d</tt> which could be a hypothetical partition that
-contains the pre-Vinum root filesystem. Care should be taken if one of the alias <tt
+contains the pre-Vinum root file system. Care should be taken if one of the alias <tt
 class="LITERAL">"a"</tt> partitions are entered here that are actually reference to the
 subdisks of the Vinum root device, because in a mirrored setup, this would only mount one
-piece of a mirrored root device. If this filesystem is to be mounted read-write later on,
+piece of a mirrored root device. If this file system is to be mounted read-write later on,
 it is necessary to remove the other plex(es) of the Vinum root volume since these plexes
 would otherwise carry inconsistent data.</p>
 </div>
 
 <div class="SECT3">
-<h3 class="SECT3"><a id="AEN27530" name="AEN27530">20.9.4.2 Only Primary Bootstrap
+<h3 class="SECT3"><a id="AEN27530" name="AEN27530">20.7.4.2 Only Primary Bootstrap
 Loads</a></h3>
 
 <p>If <tt class="FILENAME">/boot/loader</tt> fails to load, but the primary bootstrap
@@ -352,12 +298,12 @@
 point, using the <b class="KEYCAP">space</b> key. This will make the bootstrap stop in
 stage two, see <a href="boot-blocks.html#BOOT-BOOT1">Section 12.3.2</a>. An attempt can
 be made here to boot off an alternate partition, like the partition containing the
-previous root filesystem that has been moved away from <tt class="LITERAL">"a"</tt>
+previous root file system that has been moved away from <tt class="LITERAL">"a"</tt>
 above.</p>
 </div>
 
 <div class="SECT3">
-<h3 class="SECT3"><a id="VINUM-ROOT-PANIC" name="VINUM-ROOT-PANIC">20.9.4.3 Nothing
+<h3 class="SECT3"><a id="VINUM-ROOT-PANIC" name="VINUM-ROOT-PANIC">20.7.4.3 Nothing
 Boots, the Bootstrap Panics</a></h3>
 
 <p>This situation will happen if the bootstrap had been destroyed by the Vinum
@@ -381,9 +327,32 @@
 </div>
 
 <div class="SECT2">
-<h2 class="SECT2"><a id="VINUM-ROOT-4X" name="VINUM-ROOT-4X">20.9.5 Differences for
+<h2 class="SECT2"><a id="VINUM-ROOT-4X" name="VINUM-ROOT-4X">20.7.5 Differences for
 FreeBSD 4.X</a></h2>
 
+<p>Vinum must be initialized early since it needs to supply the volume for the root
+file system. By default, the Vinum kernel part is not looking for drives that might
+contain Vinum volume information until the administrator (or one of the startup scripts)
+issues a <tt class="COMMAND">vinum start</tt> command.</p>
+
+<p>By placing the line:</p>
+
+<pre class="PROGRAMLISTING">
+vinum.autostart="YES"
+</pre>
+
+<p>into <tt class="FILENAME">/boot/loader.conf</tt>, Vinum is instructed to automatically
+scan all drives for Vinum information as part of the kernel startup.</p>
+
+<p>Note that it is not necessary to instruct the kernel where to look for the root
+file system. <tt class="FILENAME">/boot/loader</tt> looks up the name of the root device
+in <tt class="FILENAME">/etc/fstab</tt>, and passes this information on to the kernel.
+When it comes to mount the root file system, the kernel figures out from the device name
+provided which driver to ask to translate this into the internal device ID (major/minor
+number).</p>
+
 <p>Under FreeBSD 4.X, some internal functions required to make Vinum automatically scan
 all disks are missing, and the code that figures out the internal ID of the root device
 is not smart enough to handle a name like <tt class="FILENAME">/dev/vinum/root</tt>
@@ -402,7 +371,7 @@
 listed, nor is it necessary to add each slice and/or partition explicitly, since Vinum
 will scan all slices and partitions of the named drives for valid Vinum headers.</p>
 
-<p>Since the routines used to parse the name of the root filesystem, and derive the
+<p>Since the routines used to parse the name of the root file system, and derive the
 device ID (major/minor number) are only prepared to handle &#8220;classical&#8221; device
 names like <tt class="FILENAME">/dev/ad0s1a</tt>, they cannot make any sense out of a
 root volume name like <tt class="FILENAME">/dev/vinum/root</tt>. For that reason, Vinum
@@ -422,7 +391,7 @@
 name of the root device string being passed (that is, <tt class="LITERAL">"vinum"</tt> in
 our case), it will use the pre-allocated device ID, instead of trying to figure out one
 itself. That way, during the usual automatic startup, it can continue to mount the Vinum
-root volume for the root filesystem.</p>
+root volume for the root file system.</p>
 
 <p>However, when <tt class="COMMAND">boot -a</tt> has been requesting to ask for entering
 the name of the root device manually, it must be noted that this routine still cannot
@@ -447,7 +416,7 @@
 accesskey="P">Prev</a></td>
 <td width="34%" align="center" valign="top"><a href="index.html"
 accesskey="H">Home</a></td>
-<td width="33%" align="right" valign="top"><a href="virtualization.html"
+<td width="33%" align="right" valign="top"><a href="vinum-examples.html"
 accesskey="N">Next</a></td>
 </tr>
 
@@ -455,7 +424,7 @@
 <td width="33%" align="left" valign="top">Configuring Vinum</td>
 <td width="34%" align="center" valign="top"><a href="vinum-vinum.html"
 accesskey="U">Up</a></td>
-<td width="33%" align="right" valign="top">Virtualization</td>
+<td width="33%" align="right" valign="top">Vinum Examples</td>
 </tr>
 </table>
 </div>
diff -r -u handbook.orig/vinum-vinum.html handbook/vinum-vinum.html
--- handbook.orig/vinum-vinum.html	2008-03-22 05:43:54.000000000 +0100
+++ handbook/vinum-vinum.html	2008-04-08 14:40:26.000000000 +0200
@@ -8,7 +8,7 @@
 <link rel="HOME" title="FreeBSD Handbook" href="index.html" />
 <link rel="UP" title="System Administration" href="system-administration.html" />
 <link rel="PREVIOUS" title="UFS Journaling Through GEOM" href="geom-gjournal.html" />
-<link rel="NEXT" title="Disks Are Too Small" href="vinum-intro.html" />
+<link rel="NEXT" title="Introduction" href="vinum-intro.html" />
 <link rel="STYLESHEET" type="text/css" href="docbook.css" />
 <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
 </head>
@@ -42,21 +42,20 @@
 
 <dt>20.1 <a href="vinum-vinum.html#VINUM-SYNOPSIS">Synopsis</a></dt>
 
-<dt>20.2 <a href="vinum-intro.html">Disks Are Too Small</a></dt>
+<dt>20.2 <a href="vinum-intro.html">Introduction</a></dt>
 
-<dt>20.3 <a href="vinum-access-bottlenecks.html">Access Bottlenecks</a></dt>
+<dt>20.3 <a href="vinum-disk-performance-issues.html">Disk Performance Issues</a></dt>
 
 <dt>20.4 <a href="vinum-data-integrity.html">Data Integrity</a></dt>
 
 <dt>20.5 <a href="vinum-objects.html">Vinum Objects</a></dt>
 
-<dt>20.6 <a href="vinum-examples.html">Some Examples</a></dt>
+<dt>20.6 <a href="vinum-config.html">Configuring Vinum</a></dt>
 
-<dt>20.7 <a href="vinum-object-naming.html">Object Naming</a></dt>
+<dt>20.7 <a href="vinum-root.html">Using Vinum for the Root File system</a></dt>
 
-<dt>20.8 <a href="vinum-config.html">Configuring Vinum</a></dt>
+<dt>20.8 <a href="vinum-examples.html">Vinum Examples</a></dt>
 
-<dt>20.9 <a href="vinum-root.html">Using Vinum for the Root Filesystem</a></dt>
 </dl>
 </div>
 
@@ -86,7 +85,9 @@
 users safeguard themselves against such issues is through the use of multiple, and
 sometimes redundant, disks. In addition to supporting various cards and controllers for
 hardware RAID systems, the base FreeBSD system includes the Vinum Volume Manager, a block
-device driver that implements virtual disk drives. <span class="emphasis"><i
+device driver
+(<a href="http://www.FreeBSD.org/cgi/man.cgi?query=vinum&amp;sektion=4"><span class="CITEREFENTRY"><span class="REFENTRYTITLE">vinum</span>(4)</span></a>)
+that implements virtual disk drives. <span class="emphasis"><i
 class="EMPHASIS">Vinum</i></span> is a so-called <span class="emphasis"><i
 class="EMPHASIS">Volume Manager</i></span>, a virtual disk driver that addresses these
 three problems. Vinum provides more flexibility, performance, and reliability than
@@ -100,12 +101,13 @@
 <blockquote class="NOTE">
 <p><b>Note:</b> Starting with FreeBSD 5, Vinum has been rewritten in order to fit into
 the GEOM architecture (<a href="geom.html">Chapter 19</a>), retaining the original ideas,
-terminology, and on-disk metadata. This rewrite is called <span class="emphasis"><i
-class="EMPHASIS">gvinum</i></span> (for <span class="emphasis"><i class="EMPHASIS">GEOM
-vinum</i></span>). The following text usually refers to <span class="emphasis"><i
+terminology, and on-disk metadata. This rewrite is called 
+<a href="http://www.FreeBSD.org/cgi/man.cgi?query=gvinum&amp;sektion=8"><span class="CITEREFENTRY"><span class="REFENTRYTITLE">gvinum</span>(8)</span></a>
+(for <span class="emphasis"><i class="EMPHASIS">GEOM vinum</i></span>). The following text usually refers to <span class="emphasis"><i
 class="EMPHASIS">Vinum</i></span> as an abstract name, regardless of the implementation
-variant. Any command invocations should now be done using the <tt
-class="COMMAND">gvinum</tt> command, and the name of the kernel module has been changed
+variant. Any command invocations should now be done using the 
+<a href="http://www.FreeBSD.org/cgi/man.cgi?query=gvinum&amp;sektion=8"><span class="CITEREFENTRY"><span class="REFENTRYTITLE">gvinum</span>(8)</span></a>
+command, and the name of the kernel module has been changed
 from <tt class="FILENAME">vinum.ko</tt> to <tt class="FILENAME">geom_vinum.ko</tt>, and
 all device nodes reside under <tt class="FILENAME">/dev/gvinum</tt> instead of <tt
 class="FILENAME">/dev/vinum</tt>. As of FreeBSD 6, the old Vinum implementation is no
@@ -132,7 +134,7 @@
 <td width="33%" align="left" valign="top">UFS Journaling Through GEOM</td>
 <td width="34%" align="center" valign="top"><a href="system-administration.html"
 accesskey="U">Up</a></td>
-<td width="33%" align="right" valign="top">Disks Are Too Small</td>
+<td width="33%" align="right" valign="top">Introduction</td>
 </tr>
 </table>
 </div>
--- /dev/null	2008-04-08 15:00:00.000000000 +0200
+++ handbook/vinum-disk-performance-issues.html	2008-04-08 15:09:49.000000000 +0200
@@ -0,0 +1,148 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
+    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
+<html xmlns="http://www.w3.org/1999/xhtml">
+<head>
+<meta name="generator" content="HTML Tidy, see www.w3.org" />
+<title>Disk Performance Issues</title>
+<meta name="GENERATOR" content="Modular DocBook HTML Stylesheet Version 1.79" />
+<link rel="HOME" title="FreeBSD Handbook" href="index.html" />
+<link rel="UP" title="The Vinum Volume Manager" href="vinum-vinum.html" />
+<link rel="PREVIOUS" title="Introduction" href="vinum-intro.html" />
+<link rel="NEXT" title="Data Integrity" href="vinum-data-integrity.html" />
+<link rel="STYLESHEET" type="text/css" href="docbook.css" />
+<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
+</head>
+<body class="SECT1" bgcolor="#FFFFFF" text="#000000" link="#0000FF" vlink="#840084"
+alink="#0000FF">
+<div class="NAVHEADER">
+<table summary="Header navigation table" width="100%" border="0" cellpadding="0"
+cellspacing="0">
+<tr>
+<th colspan="3" align="center">FreeBSD Handbook</th>
+</tr>
+
+<tr>
+<td width="10%" align="left" valign="bottom"><a href="vinum-intro.html"
+accesskey="P">Prev</a></td>
+<td width="80%" align="center" valign="bottom">Chapter 20 The Vinum Volume Manager</td>
+<td width="10%" align="right" valign="bottom"><a href="vinum-data-integrity.html"
+accesskey="N">Next</a></td>
+</tr>
+</table>
+
+<hr align="LEFT" width="100%" />
+</div>
+
+<div class="SECT1">
+<h1 class="SECT1"><a id="VINUM-PERFORMANCE-ISSUES" name="VINUM-PERFORMANCE-ISSUES">20.3 Disk Performance Issues</a></h1>
+
+<p>Modern systems frequently need to access data in a highly concurrent manner. For
+example, large FTP or HTTP servers can maintain thousands of concurrent sessions and have
+multiple 100&nbsp;Mbit/s connections to the outside world. 
+</p>
+
+<p>
+The most critical parameter is the load that a transfer places on the disk subsystem, in other words the
+time for which a transfer occupies a drive.
+</p>
+
+<p>In any disk transfer, the drive must first position the heads, wait for the first
+sector to pass under the read head, and then perform the transfer. These actions can be
+considered to be atomic: it does not make any sense to interrupt them.
+For the relatively small transfers typical of highly concurrent access, the time spent positioning the
+heads dominates the actual data transfer time.</p>
+
+<p>The traditional and obvious solution to this bottleneck is &#8220;more
+spindles&#8221;: rather than using one large disk, it uses several smaller disks with the
+same aggregate storage space. Each disk is capable of positioning and transferring
+independently, so the effective throughput increases by a factor close to the number of
+disks used.</p>
+
+<p>The exact throughput improvement is, of course, smaller than the number of disks
+involved: although each drive is capable of transferring in parallel, there is no way to
+ensure that the requests are evenly distributed across the drives. Inevitably the load on
+one drive will be higher than on another.</p>
+
+<p>The evenness of the load on the disks is strongly dependent on the way the data is
+shared across the drives. In the following discussion, it is convenient to think of the
+disk storage as a large number of data sectors which are addressable by number, rather
+like the pages in a book. 
+</p>
+
+<div class="SECT2">
+<h2 class="SECT2"><a id="VINUM-PERFORMANCE-ISSUES-CONCAT" name="VINUM-PERFORMANCE-ISSUES-CONCAT">20.3.1 Concatenation</a></h2>
+
+<p>The most obvious method is to divide the virtual disk into
+groups of consecutive sectors the size of the individual physical disks and store them in
+this manner, rather like taking a large book and tearing it into smaller sections. This
+method is called <span class="emphasis"><i class="EMPHASIS">concatenation</i></span> and
+has the advantage that the disks are not required to have any specific size
+relationships. It works well when the access to the virtual disk is spread evenly about
+its address space. When access is concentrated on a smaller area, the improvement is less
+marked. <a href="vinum-disk-performance-issues.html#VINUM-CONCAT">Figure 20-1</a> illustrates
+the sequence in which storage units are allocated in a concatenated organization.</p>
+
+<p></p>
+
+<div class="FIGURE"><a id="VINUM-CONCAT" name="VINUM-CONCAT"></a>
+<p><img src="vinum/vinum-concat.png" /></p>
+<p><b>Figure 20-1. Concatenated Organization</b></p>
+</div>
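+
+<p>As a minimal sketch (the drive names, volume name and device paths are arbitrary, chosen only for this
+illustration), a concatenated organization over two disks could later be described to Vinum as:</p>
+
+<pre class="PROGRAMLISTING">
+drive c1 device /dev/ad1s1h
+drive c2 device /dev/ad2s1h
+volume concatvol
+	plex org concat
+		sd drive c1
+		sd drive c2
+</pre>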
+
+</div>
+
+<div class="SECT2">
+<h2 class="SECT2"><a id="VINUM-PERFORMANCE-ISSUES-STRIPING" name="VINUM-PERFORMANCE-ISSUES-STRIPING">20.3.2 Striping</a></h2>
+
+<p>An alternative mapping is to divide the address space into smaller, equal-sized
+components and store them sequentially on different devices. For example, the first 256
+sectors may be stored on the first disk, the next 256 sectors on the next disk and so on.
+After filling the last disk, the process repeats until the disks are full. This mapping
+is called <span class="emphasis"><i class="EMPHASIS">striping</i></span> or <acronym
+class="ACRONYM">RAID-0</acronym>. Striping requires somewhat
+more effort to locate the data, and it can cause additional I/O load where a transfer is
+spread over multiple disks, but it can also provide a more constant load across the
+disks. <a href="vinum-disk-performance-issues.html#VINUM-STRIPED">Figure 20-2</a> illustrates
+the sequence in which storage units are allocated in a striped organization.</p>
+
+<p></p>
+
+<div class="FIGURE"><a id="VINUM-STRIPED" name="VINUM-STRIPED"></a>
+<p><img src="vinum/vinum-raid0.png" /></p>
+<p><b>Figure 20-2. Striped Organization</b></p>
+</div>
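+
+<p>Similarly, as a minimal sketch with arbitrary names and device paths, a striped organization over two
+disks could later be described to Vinum as:</p>
+
+<pre class="PROGRAMLISTING">
+drive s1 device /dev/ad1s1h
+drive s2 device /dev/ad2s1h
+volume stripevol
+	plex org striped 256k
+		sd drive s1
+		sd drive s2
+</pre>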
+
+</div>
+
+<div class="NAVFOOTER">
+<hr align="LEFT" width="100%" />
+<table summary="Footer navigation table" width="100%" border="0" cellpadding="0"
+cellspacing="0">
+<tr>
+<td width="33%" align="left" valign="top"><a href="vinum-intro.html"
+accesskey="P">Prev</a></td>
+<td width="34%" align="center" valign="top"><a href="index.html"
+accesskey="H">Home</a></td>
+<td width="33%" align="right" valign="top"><a href="vinum-data-integrity.html"
+accesskey="N">Next</a></td>
+</tr>
+
+<tr>
+<td width="33%" align="left" valign="top">Introduction</td>
+<td width="34%" align="center" valign="top"><a href="vinum-vinum.html"
+accesskey="U">Up</a></td>
+<td width="33%" align="right" valign="top">Data Integrity</td>
+</tr>
+</table>
+</div>
+
+<p align="center"><small>This, and other documents, can be downloaded from <a
+href="ftp://ftp.FreeBSD.org/pub/FreeBSD/doc/">ftp://ftp.FreeBSD.org/pub/FreeBSD/doc/</a>.</small></p>
+
+<p align="center"><small>For questions about FreeBSD, read the <a
+href="http://www.FreeBSD.org/docs.html">documentation</a> before contacting &#60;<a
+href="mailto:questions@FreeBSD.org">questions@FreeBSD.org</a>&#62;.<br />
+For questions about this documentation, e-mail &#60;<a
+href="mailto:doc@FreeBSD.org">doc@FreeBSD.org</a>&#62;.</small></p>
+</body>
+</html>
+
--- handbook.orig/virtualization.html	2008-03-22 05:43:54.000000000 +0100
+++ handbook/virtualization.html	2008-04-08 15:14:45.000000000 +0200
@@ -7,8 +7,8 @@
 <meta name="GENERATOR" content="Modular DocBook HTML Stylesheet Version 1.79" />
 <link rel="HOME" title="FreeBSD Handbook" href="index.html" />
 <link rel="UP" title="System Administration" href="system-administration.html" />
-<link rel="PREVIOUS" title="Using Vinum for the Root Filesystem"
-href="vinum-root.html" />
+<link rel="PREVIOUS" title="Vinum Examples"
+href="vinum-examples.html" />
 <link rel="NEXT" title="FreeBSD as a Guest OS" href="virtualization-guest.html" />
 <link rel="STYLESHEET" type="text/css" href="docbook.css" />
 <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
@@ -23,7 +23,7 @@
 </tr>
 
 <tr>
-<td width="10%" align="left" valign="bottom"><a href="vinum-root.html"
+<td width="10%" align="left" valign="bottom"><a href="vinum-examples.html"
 accesskey="P">Prev</a></td>
 <td width="80%" align="center" valign="bottom"></td>
 <td width="10%" align="right" valign="bottom"><a href="virtualization-guest.html"
@@ -117,7 +117,7 @@
 <table summary="Footer navigation table" width="100%" border="0" cellpadding="0"
 cellspacing="0">
 <tr>
-<td width="33%" align="left" valign="top"><a href="vinum-root.html"
+<td width="33%" align="left" valign="top"><a href="vinum-examples.html"
 accesskey="P">Prev</a></td>
 <td width="34%" align="center" valign="top"><a href="index.html"
 accesskey="H">Home</a></td>
@@ -126,7 +126,7 @@
 </tr>
 
 <tr>
-<td width="33%" align="left" valign="top">Using Vinum for the Root Filesystem</td>
+<td width="33%" align="left" valign="top">Vinum Examples</td>
 <td width="34%" align="center" valign="top"><a href="system-administration.html"
 accesskey="U">Up</a></td>
 <td width="33%" align="right" valign="top">FreeBSD as a Guest OS</td>
--- handbook.orig/raid.html	2008-03-22 05:43:54.000000000 +0100
+++ handbook/raid.html	2008-04-08 15:43:16.000000000 +0200
@@ -93,8 +93,8 @@
 </div>
 
 <p>Next, consider how to attach them as part of the file system. You should research both
-<a href="http://www.FreeBSD.org/cgi/man.cgi?query=vinum&amp;sektion=8"><span
-class="CITEREFENTRY"><span class="REFENTRYTITLE">vinum</span>(8)</span></a> (<a
+<a href="http://www.FreeBSD.org/cgi/man.cgi?query=vinum&amp;sektion=4"><span
+class="CITEREFENTRY"><span class="REFENTRYTITLE">vinum</span>(4)</span></a> (<a
 href="vinum-vinum.html">Chapter 20</a>) and <a
 href="http://www.FreeBSD.org/cgi/man.cgi?query=ccd&amp;sektion=4"><span
 class="CITEREFENTRY"><span class="REFENTRYTITLE">ccd</span>(4)</span></a>. In this
@@ -309,17 +309,18 @@
 <div class="SECT3">
 <h3 class="SECT3"><a id="VINUM" name="VINUM">18.4.1.2 The Vinum Volume Manager</a></h3>
 
-<p>The Vinum Volume Manager is a block device driver which implements virtual disk
+<p>The Vinum Volume Manager is a block device driver
+(<a href="http://www.FreeBSD.org/cgi/man.cgi?query=vinum&amp;sektion=4"><span class="CITEREFENTRY"><span class="REFENTRYTITLE">vinum</span>(4)</span></a>)
+which implements virtual disk
 drives. It isolates disk hardware from the block device interface and maps data in ways
 which result in an increase in flexibility, performance and reliability compared to the
-traditional slice view of disk storage. <a
-href="http://www.FreeBSD.org/cgi/man.cgi?query=vinum&amp;sektion=8"><span
-class="CITEREFENTRY"><span class="REFENTRYTITLE">vinum</span>(8)</span></a> implements
+traditional slice view of disk storage. Vinum implements
 the RAID-0, RAID-1 and RAID-5 models, both individually and in combination.</p>
 
-<p>See <a href="vinum-vinum.html">Chapter 20</a> for more information about <a
-href="http://www.FreeBSD.org/cgi/man.cgi?query=vinum&amp;sektion=8"><span
-class="CITEREFENTRY"><span class="REFENTRYTITLE">vinum</span>(8)</span></a>.</p>
+<p>See <a href="vinum-vinum.html">Chapter 20</a> for more information about the most recent Vinum implementation, <a
+href="http://www.FreeBSD.org/cgi/man.cgi?query=gvinum&amp;sektion=8"><span class="CITEREFENTRY"><span class="REFENTRYTITLE">gvinum</span>(8)</span></a>, which runs under the GEOM architecture
+(<a href="geom.html">Chapter 19</a>).</p>
 </div>
 </div>
 

--0-968980677-1207663836=:26150--


