Date:      Fri, 23 Aug 2002 13:26:54 +0200 (CEST)
From:      Christian Brueffer <chris@unixpages.org>
To:        FreeBSD-gnats-submit@FreeBSD.org
Subject:   docs/41934: [PATCH] Several fixes for handbook/vinum/chapter.sgml
Message-ID:  <20020823112654.003B4ABC7@milan.hitnet.rwth-aachen.de>


>Number:         41934
>Category:       docs
>Synopsis:       [PATCH] Several fixes for handbook/vinum/chapter.sgml
>Confidential:   no
>Severity:       non-critical
>Priority:       low
>Responsible:    freebsd-doc
>State:          open
>Quarter:        
>Keywords:       
>Date-Required:
>Class:          doc-bug
>Submitter-Id:   current-users
>Arrival-Date:   Fri Aug 23 04:30:01 PDT 2002
>Closed-Date:
>Last-Modified:
>Originator:     Christian Brueffer
>Release:        FreeBSD 4.6-STABLE i386
>Organization:
>Environment:
System: FreeBSD milan.hitnet.rwth-aachen.de 4.6-STABLE FreeBSD 4.6-STABLE #2: Fri Jun 28 12:47:08 CEST 2002 chris@milan.hitnet.rwth-aachen.de:/usr/obj/usr/src/sys/LORIEN i386


	
>Description:
	Two patches are attached: the first contains actual content
	fixes, the second contains three whitespace fixes.

	- Add missing <acronym></acronym> tags
	- Fix an enumeration
	- Fix a typo
	- 15,000 rpm disks have been around for some time, so change
	  some numbers accordingly ;-)
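
	The revised figures can be sanity-checked with a short script
	(a sketch; the 10 kB transfer size and drive specs are the ones
	used in the chapter's example):

	```python
	# Sanity-check the drive-performance figures in the patched text.

	def rotational_latency_ms(rpm: float) -> float:
	    """Average rotational latency: half a revolution, in milliseconds."""
	    ms_per_revolution = 60_000 / rpm
	    return ms_per_revolution / 2

	def transfer_time_us(size_bytes: float, rate_mb_s: float) -> float:
	    """Time to transfer size_bytes at rate_mb_s (MB/s), in microseconds."""
	    return size_bytes / (rate_mb_s * 1_000_000) * 1_000_000

	# 10,000 rpm drives (the original text): 3 ms average rotational latency.
	print(rotational_latency_ms(10_000))        # 3.0
	# 15,000 rpm drives: 2 ms.
	print(rotational_latency_ms(15_000))        # 2.0
	# A 10 kB transfer at 70 MB/s takes roughly 143 us, i.e. "about 150 us".
	print(round(transfer_time_us(10_000, 70)))  # 143
	```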
>How-To-Repeat:
	
>Fix:

	



--- vinum.fixes.diff begins here ---
--- chapter.sgml	Thu Aug 22 13:36:27 2002
+++ chapter.sgml.fixes	Fri Aug 23 13:07:28 2002
@@ -55,7 +55,7 @@
 
 
     <para>Disks are getting bigger, but so are data storage requirements.
-      Often you ill find you want a file system that is bigger than the disks
+      Often you will find you want a file system that is bigger than the disks
       you have available.  Admittedly, this problem is not as acute as it was
       ten years ago, but it still exists.  Some systems have solved this by
       creating an abstract device which stores its data on a number of disks.</para>
@@ -70,7 +70,7 @@
       disks.</para>
 
     <para>Current disk drives can transfer data sequentially at up to
-      30 MB/s, but this value is of little importance in an environment
+      70 MB/s, but this value is of little importance in an environment
       where many independent processes access a drive, where they may
       achieve only a fraction of these values.  In such cases it is more
       interesting to view the problem from the viewpoint of the disk
@@ -85,10 +85,10 @@
 
     <para><anchor id="vinum-latency">
       Consider a typical transfer of about 10 kB: the current generation of
-      high-performance disks can position the heads in an average of 6 ms.  The
-      fastest drives spin at 10,000 rpm, so the average rotational latency
-      (half a revolution) is 3 ms.  At 30 MB/s, the transfer itself takes about
-      350 &mu;s, almost nothing compared to the positioning time.  In such a
+      high-performance disks can position the heads in an average of 3.5 ms.  The
+      fastest drives spin at 15,000 rpm, so the average rotational latency
+      (half a revolution) is 2 ms.  At 70 MB/s, the transfer itself takes about
+      150 &mu;s, almost nothing compared to the positioning time.  In such a
       case, the effective  transfer rate drops to a little over 1 MB/s and is
       clearly highly dependent on the transfer size.</para>
 
@@ -151,7 +151,7 @@
       For example, the first 256 sectors may be stored on the first disk, the
       next 256 sectors on the next disk and so on.  After filling the last
       disk, the process repeats until the disks are full.  This mapping is called
-      <emphasis>striping</emphasis> or RAID-0.
+      <emphasis>striping</emphasis> or <acronym>RAID-0</acronym>.
 
     <footnote>
       <indexterm>
@@ -250,7 +250,7 @@
 	</figure>
       </para>
 
-      <para>Compared to mirroring, RAID-5 has the advantage of requiring
+      <para>Compared to mirroring, <acronym>RAID-5</acronym> has the advantage of requiring
 	significantly less storage space.  Read access is similar to that of
 	striped organizations, but write access is significantly slower,
 	approximately 25% of the read performance.  If one drive fails, the array
@@ -470,7 +470,7 @@
 	    the system automatically assigns names derived from the plex name by
 	    adding the suffix <emphasis>.s</emphasis><emphasis>x</emphasis>, where
 	    <emphasis>x</emphasis> is the number of the subdisk in the plex.  Thus
-	    Vinum gives this subdisk the name <emphasis>myvol.p0.s0</emphasis></para>
+	    Vinum gives this subdisk the name <emphasis>myvol.p0.s0</emphasis>.</para>
 	</listitem>
       </itemizedlist>
 
@@ -736,8 +736,8 @@
       </listitem>
 
       <listitem>
-	<para>The directories <devicename>/dev/vinum/plex</devicename> and
-	  <devicename>/dev/vinum/sd</devicename>, 
+	<para>The directories <devicename>/dev/vinum/plex</devicename>,
+	  <devicename>/dev/vinum/sd</devicename>, and
 	  <devicename>/dev/vinum/rsd</devicename>, which contain block device
 	  nodes for each plex and block and character device nodes respectively 
 	  for each subdisk.</para>
--- vinum.fixes.diff ends here ---
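
The striping layout described in the patched text (the first 256 sectors on
the first disk, the next 256 on the second, and so on, wrapping around) can
be sketched as a simple address mapping; the function name and the
256-sector default are illustrative, not Vinum's actual implementation:

```python
def stripe_map(sector: int, num_disks: int, stripe_size: int = 256):
    """Map a logical sector to (disk, physical sector) under RAID-0 striping."""
    stripe = sector // stripe_size    # which stripe the sector falls in
    disk = stripe % num_disks         # stripes rotate across the disks
    offset = (stripe // num_disks) * stripe_size + sector % stripe_size
    return disk, offset

# Sectors 0-255 land on disk 0, sectors 256-511 on disk 1, and so on;
# after the last disk the mapping wraps back to disk 0.
print(stripe_map(0, 4))     # (0, 0)
print(stripe_map(256, 4))   # (1, 0)
print(stripe_map(1025, 4))  # (0, 257)
```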

--- vinum.whitespace.diff begins here ---
--- chapter.sgml.fixes	Fri Aug 23 13:07:28 2002
+++ chapter.sgml.whitespace	Fri Aug 23 13:19:26 2002
@@ -53,7 +53,6 @@
       addresses these three problems.  Let us look at them in more detail.  Various
       solutions to these problems have been proposed and implemented:</para>
 
-
     <para>Disks are getting bigger, but so are data storage requirements.
       Often you will find you want a file system that is bigger than the disks
       you have available.  Admittedly, this problem is not as acute as it was
@@ -89,7 +88,7 @@
       fastest drives spin at 15,000 rpm, so the average rotational latency
       (half a revolution) is 2 ms.  At 70 MB/s, the transfer itself takes about
       150 &mu;s, almost nothing compared to the positioning time.  In such a
-      case, the effective  transfer rate drops to a little over 1 MB/s and is
+      case, the effective transfer rate drops to a little over 1 MB/s and is
       clearly highly dependent on the transfer size.</para>
 
     <para>The traditional and obvious solution to this bottleneck is
@@ -233,7 +232,7 @@
       <para><indexterm><primary>RAID-5</primary></indexterm>An alternative
 	solution is <emphasis>parity</emphasis>, implemented in the
 	<acronym>RAID</acronym> levels 2, 3, 4 and 5.  Of these,
-	<acronym>RAID-5</acronym> is the most interesting. As implemented
+	<acronym>RAID-5</acronym> is the most interesting.  As implemented
 	in Vinum, it is a variant on a striped organization which dedicates
 	one block of each stripe to parity of the other blocks: As implemented
 	by Vinum, a <acronym>RAID-5</acronym> plex is similar to a
--- vinum.whitespace.diff ends here ---
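
The parity organization mentioned in the second hunk above rests on XOR:
the parity block of a stripe is the XOR of its data blocks, so any single
lost block can be rebuilt from the survivors. A minimal illustration (the
block contents are made up):

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together, column by column."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

data = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]  # three data blocks in a stripe
parity = xor_blocks(data)                        # the dedicated parity block

# If one data block is lost, XOR of the survivors plus parity restores it.
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt == data[1])  # True
```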

>Release-Note:
>Audit-Trail:
>Unformatted:
