Date:      Mon, 02 Sep 2013 11:49:33 +0300
From:      Alexander Motin
To:        FreeBSD SCSI
Subject:   [RFC][CFT] GEOM direct dispatch and fine-grained CAM locking


I would like to invite more people to review and test my patches for 
improving CAM and GEOM scalability, which you could watch being developed 
over the last six months in the project/camlock SVN branch. A full diff of 
that branch against the present head (r255131) can be found here:

The heavy CAM changes there were focused on reducing the scope of the SIM 
lock to protecting only SIM internals, not the CAM core. That reduces lock 
congestion many times over, especially under heavily parallel request 
submission together with the GEOM changes below. A more detailed 
description of the changes was posted earlier here:

The GEOM changes were focused on avoiding switches to the GEOM up/down 
threads in relatively simple setups where the respective classes do not 
require them (and are explicitly marked so). That saves context switches 
and lets systems with several HBAs and disks talk to them concurrently 
(which is where the CAM locking changes come in handy). The following 
classes were modified to support it: DEV, DISK, LABEL, MULTIPATH, NOP, 
PART, RAID (partially), STRIPE, ZERO, VFS, ZFS::VDEV, ZFS::ZVOL and some 
others. Requests to/from other classes will be queued to the GEOM threads, 
as before.

Together these changes double block subsystem performance on high-IOPS (at 
least 100-200K IOPS) benchmarks, reaching up to a million total IOPS, 
while keeping full compatibility with all major ABIs/KBIs.

Since we are already in the 10.0 release process and the changes are quite 
big, my plan is to wait and commit them to the head branch after the 
freeze ends, and then merge them to stable/10. I hope the release process 
stays on schedule so that this work is not delayed for another six months.

This work is sponsored by iXsystems, Inc.

Alexander Motin
