Date:      Mon, 13 Nov 1995 23:25:12 -0800 (PST)
From:      Julian Elischer <julian@ref.tfs.com>
To:        davidg@Root.COM
Cc:        uhclem%nemesis@fw.ast.com, simonm@dcs.gla.ac.uk, current@freebsd.org
Subject:   Re: Disk I/O that binds
Message-ID:  <199511140725.XAA26807@ref.tfs.com>
In-Reply-To: <199511140715.XAA00155@corbin.Root.COM> from "David Greenman" at Nov 13, 95 11:15:00 pm

Actually, here's an answer that might tackle it from another angle:
if the raised priority that a process gets after getting a block is
lower than the raised priority it gets after being suspended for a
second or two, then on a busy system, whenever processes start getting
held up, the hog process will start to lose out a little more. And if
it gets its read of the read-ahead buffer in just a little later, the
head might have had a chance to get past it, in which case it will
have to go all the way around again. At least with this method, an
un-busy system sees less degradation.


Processes that read a lot are constantly cycling into high priority
after every read, but the sleeps are so short that these processes are
basically permanently at raised priority, especially with read-ahead
and track caches making the disks so damned fast.

> 
> >Another answer may be in the way we allocate blocks on the disk..
> >maybe we should allocate less on any particular cylinder group and
> >jump around more often???
> >
> >(might be an easier answer :)
> 
>    Oh, I know, let's allocate all the blocks in reverse order - 7219 7218 7217,
> etc. ...or better yet, let's use the new /dev/random driver to generate a
> block list. Yeah, I *like* that idea. :-)
> 
> -DG
> 
