Date:      Tue, 2 Apr 1996 20:02:37 -0500 (EST)
From:      "John S. Dyson" <toor@dyson.iquest.net>
To:        bde@zeta.org.au (Bruce Evans)
Cc:        bde@zeta.org.au, freebsd-current@freefall.freebsd.org, kuku@gilberto.physik.rwth-aachen.de, phk@critter.tfs.com
Subject:   Re: calcru: negative time:
Message-ID:  <199604030102.UAA01512@dyson.iquest.net>
In-Reply-To: <199604021527.BAA29463@godzilla.zeta.org.au> from "Bruce Evans" at Apr 3, 96 01:27:15 am

> 
> >If there are some times where splhigh() is on for too long, that needs
> >to be changed (we aren't using another interlock mechanism when we
> >should be.)  Splhigh is correct for its intended purpose -- lock out
> >ANY other access to the VM data structures.  Splimp is just a hack in
> 
> splvm() would be correct.  splhigh() locks out access to _all_ data
> structures.  AFAIK, hardclock() doesn't touch any vm data.  statclock()
> touches some vm statistics.  This could probably be handled without
> locking so much.  statclock() is careful not to touch user pages for
> profiling ticks.  This may depend on our fuswintr() and suswintr()
> always failing so that profiling ticks aren't added immediately.
>
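
(Aside, for anyone following along: the deferral Bruce means is the
addupc_intr() path.  A from-memory sketch -- 4.4BSD-style, field names
approximate, so don't take it as gospel:

	void
	addupc_intr(p, pc, ticks)
		register struct proc *p;
		register u_long pc;
		u_int ticks;
	{
		register struct uprof *prof = &p->p_stats->p_prof;
		register caddr_t addr;
		register int v;

		addr = prof->pr_base + PC_TO_INDEX(pc, prof);
		if ((v = fuswintr(addr)) == -1 ||
		    suswintr(addr, v + ticks) == -1) {
			/*
			 * Profile buffer page not resident, and we
			 * won't fault at interrupt time: park the tick
			 * and let addupc_task() charge it at AST time.
			 */
			prof->pr_addr = pc;
			prof->pr_ticks = ticks;
			need_proftick(p);
		}
	}

If fuswintr()/suswintr() always fail on our platform, every profiling
tick takes the deferred path.)
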
So we should probably make splvm mask net_imask|bio_imask|tty_imask?
(Keeping tty_imask mostly because the tty code might need to do
mallocs at interrupt level some time in the future.)
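
Something like this, in the style of the other spl routines (i386
cpl/_imask plumbing assumed -- a sketch, not a patch):

	/*
	 * splvm(): block every interrupt source that can touch the
	 * VM structures, but not the clocks.  OR the mask bits into
	 * cpl and hand back the old level for splx().
	 */
	static __inline int
	splvm(void)
	{
		int ocpl = cpl;

		cpl |= net_imask | bio_imask | tty_imask;
		return (ocpl);
	}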

> 
> splimp() doesn't contain splbio(), so it must have once been safe to
> handle disk interrupts in the middle of vm operations.  Has this changed?
> splhigh() is more or less the union of splimp() (which is usually >=
> spltty() due to other bogons), splbio() and the clock part of splclock(),
> so the switch from splimp() to splhigh() essentially added the masking
> of bio interrupts together with (unnecessarily, I hope) the masking of
> clock interrupts.
> 
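
To make Bruce's union explicit (i386 mask names; the clock part is
conceptual, since splhigh() simply blocks everything):

	#define	SPLIMP_BITS	(net_imask | tty_imask)	  /* splimp >= spltty */
	#define	SPLHIGH_BITS	(SPLIMP_BITS | bio_imask) /* + clock irqs */

so the splimp() -> splhigh() switch added exactly the bio and clock
masking.
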
We probably need to keep bio_imask until we come up with a good kernel
threading mechanism.  Right now, I/O completion can cause manipulation
of the pages/page queues (hence bio_imask, and perhaps net_imask as well).
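
That is, anything walking the page queues has to hold off an
interrupt-time biodone() -- schematically (queue/field names from the
VM code, roughly):

	int s = splhigh();	/* today; would become splvm() */

	/*
	 * An I/O completion can move pages between queues, so the
	 * queues must stay consistent across the whole section.
	 */
	TAILQ_REMOVE(&vm_page_queue_inactive, m, pageq);
	TAILQ_INSERT_TAIL(&vm_page_queue_active, m, pageq);

	splx(s);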

Really, if we wanted to optimize the situation, we could limit the malloc
code to net_imask (I don't think that anything else mallocs at interrupt
level) and the rest of the VM to net_imask|bio_imask.  However, we would
need to document that very, very carefully for future maintainers.
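
Concretely, the split might look like this (splmem is a made-up name;
same cpl-style sketch as above):

	/*
	 * Interrupt-level mallocs only come from the net code, so the
	 * malloc arena itself could get by with the lighter mask...
	 */
	static __inline int
	splmem(void)
	{
		int ocpl = cpl;

		cpl |= net_imask;
		return (ocpl);
	}

	/*
	 * ...while the rest of the VM still has to block I/O
	 * completion (dropping tty_imask if the ttys never malloc
	 * at interrupt level).
	 */
	static __inline int
	splvm(void)
	{
		int ocpl = cpl;

		cpl |= net_imask | bio_imask;
		return (ocpl);
	}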

John


