Date: Mon, 22 Sep 1997 14:25:45 -0600 (MDT)
From: Nate Williams
To: "Justin T. Gibbs"
Cc: Nate Williams, Bruce Evans, current@freebsd.org
Subject: Re: cvs commit: src/sys/conf files src/sys/dev/vx if_vx.c
 if_vxreg.h src/sys/i386/apm apm.c src/sys/i386/conf GENERIC files.i386
 src/sys/i386/eisa 3c5x9.c aha1742.c aic7770.c bt74x.c eisaconf.c
 eisaconf.h if_fea.c if_vx_eisa.c src/sys/i386/i386 autoconf.c ...
Message-Id: <199709222025.OAA02565@rocky.mt.sri.com>
In-Reply-To: <199709221944.NAA29456@pluto.plutotech.com>
References: <199709221812.MAA01622@rocky.mt.sri.com>
 <199709221944.NAA29456@pluto.plutotech.com>

> >> But running time isn't the only thing to consider.  As I mentioned
> >> before, untimeout/timeout are often called from an interrupt context
> >> and the old algorithm caused an indeterminate delay in this scenario,
> >> potentially causing problems for that device and any that share the
> >> same interrupt.
> >
> > 'softclock()' is also called with some interrupts masked as well,
> > isn't it?
>
> It runs at splhigh() while traversing callout entries and
> splsoftclock() when calling timeouts.  The new implementation will
> traverse at most 100 entries before lowering its IPL from splhigh()
> so that other interrupt handlers can run.

I need to look at the code.  Hmm, seems kind of silly to me:

        ++steps;
        if (steps >= MAX_SOFTCLOCK_STEPS) {
                nextsoftcheck = c;
                splx(s);        /* Give hardclock() a chance. */
                s = splhigh();
                c = nextsoftcheck;
                steps = 0;
        }

Does lowering the spl level between those two lines *really* give
anything a chance to get work done?

> >> You also have to consider that timeout/untimeout calls occur at
> >> indeterminate rates, but softclock runs at a fixed rate, meaning
> >> that the amount of work it performs scales better than if that work
> >> were pushed off into either of timeout or untimeout.
> >
> > True, but if its 'worst-case' time happens often enough, we're
> > penalizing the system *a lot* more than during timeout/untimeout,
> > which happen far less often.
>
> Although this may be true today, the point about it scaling still
> holds true.  If you increase the frequency of untimeout/timeout calls,
> the new system scales very well in that you will still encounter your
> 'worst-case' time at the same rate as you did originally.

Assuming the frequency of the corresponding timeout/untimeout calls is
greater than the clock frequency, yes.  However, if a timeout and its
corresponding untimeout happen less often than softclock runs, it's a
lose compared to the original implementation.  With the low frequency
of softclock, I suspect it's now an 'overall' win.
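
Thinking about my spl question above a bit more: if I'm remembering
the i386 spl code right, splx() itself checks the pending-interrupt
mask and runs any handlers that were held off while we sat at
splhigh(), so even that two-line window may be real.  Here's a small
compilable user-land sketch of the batching pattern, NOT the kernel
code; splhigh()/splx() are stubs and every name in it is made up for
illustration:

        /*
         * User-land sketch of the bounded traversal: walk the list at
         * "splhigh", but every MAX_SOFTCLOCK_STEPS entries drop back
         * to the saved IPL so blocked handlers get a chance to run.
         */
        #include <stddef.h>
        #include <stdio.h>

        #define MAX_SOFTCLOCK_STEPS 100 /* entries walked before yielding */

        struct callout {
                struct callout  *c_next;
        };

        static int fake_ipl;            /* stand-in priority level */

        static int
        splhigh(void)
        {
                int s = fake_ipl;

                fake_ipl = 7;           /* mask everything */
                return (s);
        }

        static void
        splx(int s)
        {
                /*
                 * The real splx() checks the pending-interrupt mask
                 * here and runs anything that was held off; that is
                 * the "chance" the comment in softclock() refers to.
                 */
                fake_ipl = s;
        }

        static void
        softclock_walk(struct callout *head)
        {
                struct callout *c;
                int s, steps = 0;

                s = splhigh();
                for (c = head; c != NULL; c = c->c_next) {
                        /* ... expire or skip the entry here ... */
                        if (++steps >= MAX_SOFTCLOCK_STEPS) {
                                /*
                                 * The real code saves nextsoftcheck
                                 * first, since a handler running in
                                 * this window may untimeout() the
                                 * entry we're holding.
                                 */
                                splx(s);
                                s = splhigh();
                                steps = 0;
                        }
                }
                splx(s);
        }

        int
        main(void)
        {
                static struct callout list[250];
                size_t i;

                for (i = 0; i < 249; i++)
                        list[i].c_next = &list[i + 1];
                list[249].c_next = NULL;
                softclock_walk(list);
                printf("walked 250 entries, yielding every %d\n",
                    MAX_SOFTCLOCK_STEPS);
                return (0);
        }

If that's right, then the answer to my own question is probably
'yes', at least for interrupts that were already pending when the
IPL dropped.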
> > The 'gotcha' is that I don't know if this is a 'normal' case, since
> > the paper didn't test normal cases, but instead did lots of
> > timeout/untimeouts in a user-land process, which stacks the test
> > data in favor of the new implementation.
>
> If you don't have lots of callouts outstanding, softclock has little
> to do.

In its current implementation, doesn't it have to decrement every item
on the list, and thus walk and modify *every* callout in the list?

> >> Allocate an array of ints of size ncallout.  In softclock,
> >> increment the array entry corresponding to the number of entries
> >> traversed in a softclock run.  By periodically scanning this array,
> >> you'll get a good idea of the value of 'h' for your system.  Add
> >> three more ints that count the number of invocations of softclock,
> >> untimeout, and timeout and you should be able to draw conclusions
> >> from that.
> >
> > But, that doesn't give me any latency #'s for any of the operations.
> > Knowing how often they are called is one thing, and knowing how many
> > entries are traversed is good, but *times* are the bigger issue.
>
> You can infer times from the other information.  As the work performed
> in softclock is at most O(n), you only have to call either timeout or
> untimeout twice in each 10ms period to know that you've won.

With the current softclock() frequency of 100Hz, yes.  When it gets
faster, then maybe. :)
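
FWIW, here's roughly the instrumentation I read you as suggesting,
mocked up as a compilable user-land program rather than kernel code.
The value of NCALLOUT, the fake workload in main(), and all of the
names are made up for illustration:

        /*
         * Histogram of "entries traversed per softclock() run", plus
         * counters for softclock/timeout/untimeout invocations.
         */
        #include <stdio.h>
        #include <stdlib.h>

        #define NCALLOUT 256    /* stand-in for the kernel's ncallout */

        static int steps_hist[NCALLOUT]; /* [n] = runs walking n entries */
        static int softclock_calls, timeout_calls, untimeout_calls;

        /* Would run at the end of each (simulated) softclock() pass. */
        static void
        record_softclock_run(int steps)
        {
                softclock_calls++;
                if (steps >= 0 && steps < NCALLOUT)
                        steps_hist[steps]++;
        }

        int
        main(void)
        {
                int n;

                /* Fake workload: pretend softclock ran 1000 times. */
                for (n = 0; n < 1000; n++)
                        record_softclock_run(rand() % 8);
                timeout_calls = 500;    /* made-up counts */
                untimeout_calls = 450;

                /* The periodic scan: read 'h' off the distribution. */
                for (n = 0; n < NCALLOUT; n++)
                        if (steps_hist[n] != 0)
                                printf("%4d entries traversed: %d runs\n",
                                    n, steps_hist[n]);
                printf("softclock=%d timeout=%d untimeout=%d\n",
                    softclock_calls, timeout_calls, untimeout_calls);
                return (0);
        }

Scanning the non-zero buckets every so often would show where 'h'
really sits for a given workload, though as I said it still doesn't
give wall-clock latency numbers.


Nate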