Date:      Wed, 5 Nov 2003 11:39:19 -0500 (EST)
From:      Robert Watson <rwatson@freebsd.org>
To:        Igor Sysoev <is@rambler-co.ru>
Cc:        Alan Cox <alc@cs.rice.edu>
Subject:   Re: Update: Debox sendfile modifications
Message-ID:  <Pine.NEB.3.96L.1031105113837.71158J-100000@fledge.watson.org>
In-Reply-To: <Pine.BSF.4.21.0311051848350.3103-100000@is.park.rambler.ru>

On Wed, 5 Nov 2003, Igor Sysoev wrote:

> On Wed, 5 Nov 2003, Robert Watson wrote:
> 
> > On Wed, 5 Nov 2003, Igor Sysoev wrote:
> > 
> > > As to worker kthreads, I think it's better to queue the AIO operation,
> > > as is done in src/sys/kern/vfs_aio.c:aio_qphysio().
> > 
> > One of the things that worries me about the proposal to use kernel worker
> > threads to perform the I/O is that this can place a fairly low upper bound
> > on effective parallelism, unless the kernel threads themselves can issue
> > the I/Os asynchronously.  In the network stack itself, we are event- and
> > queue-driven without blocking--if we can maintain the apparent semantics
> > to the application, it would be very nice to be able to handle that at the
> > socket layer itself.  I.e., not waste a thread + stack per "in-progress" 
> > operation, and instead have a worker or two that simply propel operations
> > up and down the stack (similar to geom_up and geom_down). 
> 
> As far as I understand, src/sys/kern/vfs_aio.c:aio_qphysio() (which
> handles AIO on raw disks) does not use kthreads and simply queues the
> operations.

It sounds like we're actually agreeing with each other.
Currently, AIO does use threads for non-character devices, so in the
socket case it will be using a worker thread. 
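
To make the contrast concrete, here is a minimal userland sketch of the
queue-plus-single-worker model, in the spirit of geom_up/geom_down.  All
the names here (struct op, submit_op, worker_main) are made up for
illustration -- this is not the vfs_aio.c or GEOM code, just the shape of
the idea: many in-flight operations share one worker thread and stack,
rather than consuming a thread apiece.

#include <err.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/*
 * A queued operation.  In the real kernel this would be something like
 * a struct bio or an AIO control block; here it is just an integer tag.
 */
struct op {
    struct op *next;
    int        id;
};

static struct op       *head, *tail;    /* FIFO of pending operations */
static pthread_mutex_t  qlock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t   qcv = PTHREAD_COND_INITIALIZER;

/*
 * Submitting an operation is just an enqueue plus a wakeup; the
 * submitter never blocks waiting for the I/O itself (the
 * aio_qphysio-style fast path).
 */
static void
submit_op(int id)
{
    struct op *o;

    if ((o = malloc(sizeof(*o))) == NULL)
        err(1, "malloc");
    o->id = id;
    o->next = NULL;
    pthread_mutex_lock(&qlock);
    if (tail != NULL)
        tail->next = o;
    else
        head = o;
    tail = o;
    pthread_cond_signal(&qcv);
    pthread_mutex_unlock(&qlock);
}

/*
 * A single worker propels every queued operation, so N in-flight
 * operations cost one thread + stack, not N of them.
 */
static void *
worker_main(void *arg)
{
    struct op *o;

    (void)arg;
    for (;;) {
        pthread_mutex_lock(&qlock);
        while (head == NULL)
            pthread_cond_wait(&qcv, &qlock);
        o = head;
        head = o->next;
        if (head == NULL)
            tail = NULL;
        pthread_mutex_unlock(&qlock);
        if (o->id < 0) {                /* sentinel: shut down */
            free(o);
            break;
        }
        printf("completed op %d\n", o->id);
        free(o);
    }
    return (NULL);
}

int
main(void)
{
    pthread_t t;
    int i;

    pthread_create(&t, NULL, worker_main, NULL);
    for (i = 0; i < 8; i++)             /* eight in-flight operations... */
        submit_op(i);                   /* ...one worker thread */
    submit_op(-1);                      /* sentinel so the toy terminates */
    pthread_join(t, NULL);
    return (0);
}

Build with: cc -pthread sketch.c && ./a.out.  The sentinel op (id -1)
only exists so the toy exits cleanly; in a real subsystem the worker
would run for the lifetime of the kernel.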

Robert N M Watson             FreeBSD Core Team, TrustedBSD Projects
robert@fledge.watson.org      Network Associates Laboratories