Date:      Fri,  5 Jan 2007 01:58:00 +0100
From:      lulf@stud.ntnu.no
To:        freebsd-geom@freebsd.org, freebsd-current@freebsd.org
Subject:   Pluggable Disk Schedulers in GEOM
Message-ID:  <20070105015800.s3rqdzgm8k8owk4s@webmail.ntnu.no>

Hi,

I was wondering whether anyone has started on the pluggable
disk-scheduler project on the "new ideas" page yet.

I have been thinking about how one could implement this in GEOM by
creating a lightweight scheduler API/framework integrated into GEOM.
The framework would be in charge of selecting which schedulers the
g_up and g_down threads use.
I've put down some design goals for this:
1. Little/no overhead in I/O processing with default scheduling
   compared to the "old" way.
2. Easily modifiable, preferably with on-the-fly switching of
   schedulers (see the sysctl sketch below).
3. Make it possible for many different schedulers to be implemented,
   without forcing too alien an interface on them, but at the same
   time without restricting them too much.
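
To make goal 2 a bit more concrete, here is a minimal sketch of what
on-the-fly switching could look like through a sysctl. All the names
here (g_sched_name, g_sched_switch()) are made up for illustration;
nothing like this exists in the tree:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/sysctl.h>

SYSCTL_DECL(_kern_geom);

static char g_sched_name[32] = "default";

int g_sched_switch(const char *name);	/* hypothetical framework entry */

static int
sysctl_kern_geom_sched(SYSCTL_HANDLER_ARGS)
{
	char name[sizeof(g_sched_name)];
	int error;

	strlcpy(name, g_sched_name, sizeof(name));
	error = sysctl_handle_string(oidp, name, sizeof(name), req);
	if (error != 0 || req->newptr == NULL)
		return (error);
	/* Have the framework quiesce I/O and install the new scheduler. */
	return (g_sched_switch(name));
}
SYSCTL_PROC(_kern_geom, OID_AUTO, sched, CTLTYPE_STRING | CTLFLAG_RW,
    NULL, 0, sysctl_kern_geom_sched, "A", "active I/O scheduler");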

More specifically, my plan was to change
g_up_procbody()/g_down_procbody() to ask the scheduler framework
which scheduler to use, and then implement procedures in that
framework to handle the details of loading, switching and unloading
different I/O schedulers. Then I would extract the default I/O
scheduler and try out some other ways to schedule I/O. Also, I'm not
sure how I would handle each scheduler's way of organizing its queue.
One should allow different types of bioqs for the schedulers, since
they may have different needs when organizing queues (a heap, maybe).
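
To sketch the interface I have in mind (all names hypothetical, just
to illustrate the shape; this is not existing GEOM API): each
scheduler would register a small ops structure and keep its queue
behind an opaque pointer, so a plain FIFO bioq and a heap-based
scheduler could sit behind the same hooks:

#include <sys/param.h>
#include <sys/bio.h>

struct g_sched_ops {
	const char	*gs_name;		/* "default", "heap", ... */
	void		*(*gs_init)(void);	/* allocate private queue */
	void		(*gs_fini)(void *sc);	/* tear it down again */
	void		(*gs_enqueue)(void *sc, struct bio *bp);
	struct bio	*(*gs_dequeue)(void *sc); /* next bio, or NULL */
};

/* Scheduler currently installed for the g_down path. */
extern struct g_sched_ops *g_sched_current;

void	g_sched_dispatch(struct bio *bp); /* hypothetical: to driver */

/*
 * g_down_procbody() would then push each request through the
 * installed scheduler instead of dispatching it directly:
 */
static void
g_sched_deliver(void *sc, struct bio *bp)
{
	struct bio *next;

	g_sched_current->gs_enqueue(sc, bp);
	while ((next = g_sched_current->gs_dequeue(sc)) != NULL)
		g_sched_dispatch(next);
}

The extracted default scheduler could then presumably just wrap the
existing bioq machinery behind gs_enqueue/gs_dequeue, which should
keep the overhead close to what we have today (goal 1).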

I've started with some of my tampering in a p4 branch, lulf_gpds. I
have a DESCRIPTION document there that may explain some of my
thoughts and problems further. A little code is written, but I want
to hear others' thoughts on this before I go crashing around doing
things I might later regret :)

I was also thinking of an alternative way to implement this: a
"gpds" layer that could provide different schedulers to service I/O
requests. That would allow more fine-grained scheduling decisions,
for example deciding that the system drive is used in one
characteristic way for which a specific scheduling algorithm is more
appropriate, while another drive has a different characteristic and
should therefore use a different algorithm. However, this should
also be doable directly in GEOM as previously described, with a bit
more tampering with other code; that is probably the most efficient
way, since it avoids the overhead of another GEOM class.
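
To illustrate the gpds-layer idea (again with made-up names, reusing
the g_sched_ops sketch from above): the per-instance softc would
carry its own scheduler, so the system drive and a data drive could
each run a different algorithm:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/bio.h>
#include <geom/geom.h>

struct g_gpds_softc {
	struct g_sched_ops	*sc_sched;	/* algorithm for this drive */
	void			*sc_queue;	/* scheduler-private queue */
};

static void
g_gpds_start(struct bio *bp)
{
	struct g_gpds_softc *sc;

	sc = bp->bio_to->geom->softc;	/* instance behind this provider */
	sc->sc_sched->gs_enqueue(sc->sc_queue, bp);
	wakeup(sc);	/* per-instance worker thread drains the queue */
}

The worker thread would then dequeue in scheduler order, clone each
bio and pass it to the consumer below, which is exactly where the
extra per-class overhead mentioned above would come from.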

I also have some questions about the GEOM layer itself. Does the VM
manager actually swap pages out to disk via GEOM, or does it do that
by itself (which would make more sense in terms of efficiency)?

I'd like to hear the GEOM gurus' views on this.
Is this something that sounds doable and worth spending time on?
Is there something I've overlooked? Have I completely lost my mind?
I sometimes manage to write something a bit different from what my
mind is thinking :)

Anyway, I'd like to research this topic a bit, just to see how much
different I/O scheduling matters for different purposes.

Comments are welcomed!

-- 
Ulf Lilleengen
