Date:      Mon, 9 Sep 2013 21:08:58 +0000 (UTC)
From:      Gabor Pali <pgj@FreeBSD.org>
To:        doc-committers@freebsd.org, svn-doc-all@freebsd.org, svn-doc-head@freebsd.org
Subject:   svn commit: r42635 - head/en_US.ISO8859-1/htdocs/news/status
Message-ID:  <201309092108.r89L8wUA021202@svn.freebsd.org>

Author: pgj
Date: Mon Sep  9 21:08:57 2013
New Revision: 42635
URL: http://svnweb.freebsd.org/changeset/doc/42635

Log:
  - Many minor fixes in the CAM report
  
  Submitted by:	bjk, wblock

Modified:
  head/en_US.ISO8859-1/htdocs/news/status/report-2013-07-2013-09.xml

Modified: head/en_US.ISO8859-1/htdocs/news/status/report-2013-07-2013-09.xml
==============================================================================
--- head/en_US.ISO8859-1/htdocs/news/status/report-2013-07-2013-09.xml	Mon Sep  9 16:05:59 2013	(r42634)
+++ head/en_US.ISO8859-1/htdocs/news/status/report-2013-07-2013-09.xml	Mon Sep  9 21:08:57 2013	(r42635)
@@ -135,51 +135,52 @@
     </links>
 
     <body>
-      <p>Last year's high-performance storage vendors reported
-	performance bottleneck in &os; block storage subsystem, limiting
-	peak performance around 300-500K IOPS.  While that is still more
-	then enough for average systems, detailed investigation has
-	shown number of places that require radical improvement.
-	Unmapped I/O support implemented early this year already
-	improved I/O performance by about 30% and moved more accents
-	toward GEOM and CAM subsystems scalability.  Fixing these issues
-	was the goal of this project.</p>
+      <p>Last year, high-performance storage vendors reported a
+	performance bottleneck in the &os; block storage subsystem,
+	limiting peak performance to around 300-500K IOPS.  While that
+	is still more than enough for average systems, detailed
+	investigation has shown a number of places that require radical
+	improvement.  Unmapped I/O support implemented early this year
+	already improved I/O performance by about 30% and shifted the
+	focus toward GEOM and CAM subsystem scalability.  Fixing these
+	issues was the goal of this project.</p>
 
-      <p>The existing GEOM design assumed the most of I/O handling to be
-	done by only two kernel threads (<tt>g_up()</tt> and
+      <p>The existing GEOM design assumed most I/O handling to be done
+	by only two kernel threads (<tt>g_up()</tt> and
 	<tt>g_down()</tt>).  That simplified locking in some cases, but
 	limited potential SMP scalability and created additional
-	scheduler overhead.  This project introduces concept of direct
-	I/O dispatch into GEOM for cases where it is know to be safe and
-	efficient.  That implies marking some of GEOM consumers and
-	providers with one or two new flags, declaring situations when
-	direct function call can be used instead of normal request
-	queuing.  That allows to avoid any context switches inside GEOM
+	scheduler overhead.  This project introduces the concept of
+	direct I/O dispatch into GEOM for cases where it is known to be
+	safe and efficient.  That implies marking some GEOM consumers
+	and providers with one or two new flags, declaring situations
+	when a direct function call can be used instead of normal request
+	queuing.  This avoids context switches inside GEOM
 	for the most widely used topologies, simultaneously processing
 	multiple I/Os from multiple calling threads.</p>
 
       <p>Having GEOM passing through multiple concurrent calls down to
 	the underlying layers exposed major lock congestion in CAM.  In
-	existing CAM design all devices connected to the same ATA/SCSI
-	controller are sharing single lock, which can be quite busy due
+	the existing CAM design, all devices connected to the same
+	ATA/SCSI controller share a single lock, which can be quite busy due
 	to multiple controller hardware accesses and/or code logic.
-	Experiments have shown that applying only above GEOM direct
+	Experiments have shown that applying only the above GEOM direct
 	dispatch changes burns up to 60% of system CPU time or even more
 	in attempts to obtain these locks by multiple callers, killing
-	any benefits of GEOM direct dispatch.  To overcome that new
+	any benefits of GEOM direct dispatch.  To overcome that, a new
 	fine-grained CAM locking design was implemented.  It implies
 	splitting big per-SIM locks into several smaller ones: per-LUN
-	locks, per-bus locks, queue locks, etc.  After these changes
-	remaining per-SIM lock protects only controller driver
-	internals, reducing lock congestion down to acceptable level and
-	allowing to keep compatibility with existing drivers.</p>
+	locks, per-bus locks, queue locks, etc.  After these changes,
+	the remaining per-SIM lock protects only the controller driver
+	internals, reducing lock congestion down to an acceptable level
+	and keeping compatibility with existing drivers.</p>
 
-      <p>Together GEOM and CAM changes twice increase peak I/O rate,
+      <p>Together, the GEOM and CAM changes double the peak I/O rate,
 	reaching up to 1,000,000 IOPS on contemporary hardware.</p>
 
-      <p>The changes were tested by number of people and are going to be
+      <p>The changes were tested by a number of people and will be
 	committed into &os; <tt>head</tt> and merged to
-	<tt>stable/10</tt> after the end of &os; 10.0 release cycle.</p>
+	<tt>stable/10</tt> after the end of the &os; 10.0 release
+	cycle.</p>
 
       <p>The project is sponsored by iXsystems, Inc.</p>
     </body>


