From owner-freebsd-bugs  Wed Mar  8 17:30:57 1995
From: terry@cs.weber.edu (Terry Lambert)
Message-Id: <9503090118.AA03354@cs.weber.edu>
Subject: Re: QIC-80 problem
To: bakul@netcom.com (Bakul Shah)
Date: Wed, 8 Mar 95 18:18:04 MST
Cc: joerg_wunsch@uriah.heep.sax.de, henryk@gaja.ipan.lublin.pl,
    freebsd-bugs@FreeBSD.org
In-Reply-To: <199503082335.PAA24741@netcom22.netcom.com> from "Bakul Shah" at Mar 8, 95 03:35:49 pm
Sender: bugs-owner@FreeBSD.org

> > One drawback, though: team makes some assumptions about signals and
> > pipes that are bad.
>
> Such as?

[ ... ]

> > The problems with its assumptions manifest on MP Sun machines,
> > among others, by causing "broken pipe" messages instead of dumping
> > output.
>
> I have used team on MP SGI machines where it worked fine,
> and not MP Sun machines, but this surprises me.  team reads
> (multiple times if necessary) from the input until enough
> data is received to fill up a blocksize buffer or until EOF.
> Similarly for output.  Perhaps the broken pipe is associated
> with something else?
>
> I am not doubting what you say, but I'd like to know under what
> circumstances team does not work.

The child process on another processor exits before the controlling
process on the first processor.  I haven't really tracked it further
than that.  Apparently the IPC delivery of SIGCHLD is "broken" for the
way team is using it.  With a less, shall we say, "creative" use of
the preprocessor, this would be easier to track down.  8-).

> The same thing can happen with a uni-process program too.

Oh, yes -- it *is* what is happening with the "tar tzf - XXX | ft"
that started this thread in the first place.  But that's why
interleaving the I/O to ft won't help if ft isn't running sufficiently
frequently.  Adding processes won't help: ft isn't stalled for lack
of input.

> > Actually, a filter that used multiple outstanding async reads and
> > used async writes to dump the data would probably have significantly
> > higher performance than team because it would avoid context switching.
>
> But can you have *multiple* outstanding async reads on the
> same file/device from a *single* process?  I didn't think
> so.

Not on the same descriptor, that's true.  You'd need to dup it.  Or,
if you are using LWP semantics anyway, use pread/pwrite asynchronously
so you can specify a seek offset (a sketch of this follows below).

In reality, you'd only save ~40uS by keeping an outstanding call going
on both the read and the write (the system call initiation overhead
for two calls).  On the other hand, on a big file, this could add up.

> > I suggest "super team" so it can have a cool name like "steam".
>
> Well, such a program is not using a `team' (of processes), so
> any derivation of team would be even less appropriate.
> (`team' itself is not at all descriptive of what it does.)

It would be using a team of threads of control in a single process
context instead of multiple processes or multiple threads.
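A minimal sketch of the pread() idea (assuming a POSIX pread() and two
block offsets picked purely for illustration; this is not anything team
itself does) -- each call names its own offset, so two requests on the
same descriptor never fight over the shared file position the way plain
read() calls would:

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define BLK 65536	/* illustrative block size */

    int
    main(int argc, char *argv[])
    {
    	char	buf0[BLK], buf1[BLK];
    	ssize_t	n0, n1;
    	int	fd;

    	if (argc != 2) {
    		fprintf(stderr, "usage: %s file\n", argv[0]);
    		exit(1);
    	}
    	if ((fd = open(argv[1], O_RDONLY)) == -1) {
    		perror("open");
    		exit(1);
    	}

    	/*
    	 * Each pread() carries an explicit offset, so the two calls
    	 * (one per LWP, say) can be outstanding at the same time on
    	 * one descriptor without dup()ing it.
    	 */
    	n0 = pread(fd, buf0, BLK, (off_t)0);
    	n1 = pread(fd, buf1, BLK, (off_t)BLK);

    	printf("block 0: %ld bytes, block 1: %ld bytes\n",
    	    (long)n0, (long)n1);
    	close(fd);
    	return 0;
    }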
A pthreads implementation would have more context switching overhead,
so it would probably not perform as well, since it would have to swap
out the stack and register set and (potentially) flush the pipeline if
the processor were a good one (see "User space threads and SPARC
register windows", a University of Washington CS department technical
report).

					Terry Lambert
					terry@cs.weber.edu
---
Any opinions in this posting are my own and not those of my present
or previous employers.