From owner-freebsd-current@FreeBSD.ORG Wed Apr  2 18:10:20 2003
Date: Wed, 2 Apr 2003 18:10:14 -0800 (PST)
From: Julian Elischer <julian@elischer.org>
To: Matthew Dillon
cc: current@freebsd.org
Subject: Re: libthr and 1:1 threading.
In-Reply-To: <200304030157.h331veVm087635@apollo.backplane.com>

A thought on 'fixing AIO..'

On Wed, 2 Apr 2003, Matthew Dillon wrote:

>     A better solution would be to implement a new system call, similar to
>     pread(), which simply checks the buffer cache and returns a short read
>     or an error if the data is not present.  If the call fails, you would
>     then know that reading that data would block in the disk subsystem, and
>     you could back off to a more expensive mechanism like AIO.  If you want
>     to select() on it, you would then simply use kqueue with EVFILT_AIO and
>     AIO.  A system call pread_cache(), or perhaps we could even use
>     recvmsg() with a flag.  Such an interface would not have to touch the
>     filesystem code, only the buffer cache and the VM page cache, and
>     could be implemented in less than a day.

Just as a point of interest, we now have the ability for a non-threaded
program to have several threads in the kernel.  By this I mean it would be
theoretically possible to re-implement aio_read() in terms of some
background threads (doing synchronous I/O) in the kernel that the program
is not aware of.  We don't do this at the moment (hmm, actually we do, but
only in KSE programs), but we have the infrastructure that would allow it
to be done by someone who has a spare day or so.

Basically the aio_read() would return, but the process would have left a
worker thread in the kernel completing the work, and since that thread is
attached to the process, when it is reactivated on data arrival the correct
address space would be there automatically.  All the 'exit' cases would be
handled automatically, etc.
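
For illustration, a rough user-level sketch of the fallback Matt describes
might look like the code below.  The pread_cache() call is only proposed,
so its prototype and the assumption that it fails with EWOULDBLOCK when the
data is not cached are guesses on my part; the aio_read() registered with a
kqueue via SIGEV_KEVENT / EVFILT_AIO is the existing completion interface.

/*
 * Sketch only: try a cache-only read first, and fall back to AIO with
 * kqueue notification if the data would have to come off the disk.
 */
#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>
#include <aio.h>
#include <errno.h>
#include <signal.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical syscall: returns only data already in the buffer/VM cache. */
ssize_t pread_cache(int fd, void *buf, size_t nbytes, off_t offset);

static ssize_t
read_prefer_cache(int kq, int fd, void *buf, size_t nbytes, off_t offset)
{
        struct aiocb cb;
        struct kevent ev;
        ssize_t n;

        /* Fast path: succeeds only if the pages are already cached. */
        n = pread_cache(fd, buf, nbytes, offset);
        if (n >= 0 || errno != EWOULDBLOCK)
                return (n);

        /*
         * Slow path: queue an aio_read() whose completion posts an
         * EVFILT_AIO event to the caller's kqueue.
         */
        memset(&cb, 0, sizeof(cb));
        cb.aio_fildes = fd;
        cb.aio_buf = buf;
        cb.aio_nbytes = nbytes;
        cb.aio_offset = offset;
        cb.aio_sigevent.sigev_notify = SIGEV_KEVENT;
        cb.aio_sigevent.sigev_notify_kqueue = kq;
        cb.aio_sigevent.sigev_value.sival_ptr = &cb;

        if (aio_read(&cb) != 0)
                return (-1);

        /* Only one request is outstanding here, so any event is ours. */
        if (kevent(kq, NULL, 0, &ev, 1, NULL) != 1)
                return (-1);
        return (aio_return(&cb));
}

The point is just that the cheap cache-only read covers the common case,
and the full AIO path (and its completion event) is only paid for when the
read really would have blocked.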