Date:      Mon, 16 Oct 1995 20:05:44 +0000
From:      Matt Thomas <matt@lkg.dec.com>
To:        Bruce Evans <bde@zeta.org.au>
Cc:        se@zpr.uni-koeln.de, hackers@freebsd.org
Subject:   Re: IPX now available 
Message-ID:  <199510162005.UAA03092@whydos.lkg.dec.com>
In-Reply-To: Your message of "Mon, 16 Oct 1995 11:42:07 +1000." <199510160142.LAA05419@godzilla.zeta.org.au> 


In <199510160142.LAA05419@godzilla.zeta.org.au>, you wrote:

> >significant sense.  However, when the world is dynamic the autoconfig
> >code now becomes passive and the LKMs are the active components.
> 
> >Indeed, one very real way to look at this would be to use the DATA_SET
> >capabilities that currently exist to store a vector of configuration/
> >initialization entries.  So when the kernel is linked, all the
> >initializers for  devices, file systems, protocols, etc. are present in
> >this data set.  The system will define a set of synchronization points and
> >ordering points within each sync point.  So when the kernel initializes, it
> >sorts these configuration entries and calls them one by one in order.
> 
> >This results in a number of benefits.  The first is that the exact same
> >modules can be used for both static-linked kernel and dynamically loaded
> >kernel.  Another benefit is that you can develop and debug almost all
> >the mechanisms needed for a truly dynamic kernel using the existing static
> >kernel framework.
> 
> I think this is backwards.  I see linker vectors as just a trick to
> help initialize things when you don't have fully dynamic initialization.
> I think the static framework should be changed to look more like the
> dynamic framework instead of vice versa.

In a dynamic kernel, you'd just read the init linker vector from the
module and process each entry (placing it into the ordered init table
in the right place).

Thinking about it, it would make sense to have a callback mechanism
instead, where each module can schedule a routine to be called after a
sync point is reached (after its first sync point, that is -- see below).

But in either case, the point is that each module determines when
it gets called to do something.  There is no de facto ordering
established like there currently is in init_main().
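
Roughly, something like this (all of the names here are made up purely
for illustration; this isn't an existing interface):

typedef void (*sync_callback_t)(void *);

struct sync_hook {
	struct sync_hook *sh_next;	/* next hook waiting on this sync point */
	sync_callback_t	  sh_func;	/* routine to call */
	void		 *sh_arg;	/* argument handed to it */
};

#define	SYNCPOINT_MAX	16

static struct sync_hook	*sync_hooks[SYNCPOINT_MAX];	/* pending hooks */
static int		 sync_reached[SYNCPOINT_MAX];	/* sync point passed? */

int
sync_callback_register(int syncpoint, struct sync_hook *sh,
    sync_callback_t func, void *arg)
{
	if (syncpoint < 0 || syncpoint >= SYNCPOINT_MAX)
		return (-1);
	if (sync_reached[syncpoint]) {
		(*func)(arg);		/* already past it: call immediately */
		return (0);
	}
	sh->sh_func = func;
	sh->sh_arg = arg;
	sh->sh_next = sync_hooks[syncpoint];
	sync_hooks[syncpoint] = sh;
	return (0);
}

/* Called by the kernel proper as it reaches each sync point. */
void
sync_point_reached(int syncpoint)
{
	struct sync_hook *sh;

	sync_reached[syncpoint] = 1;
	for (sh = sync_hooks[syncpoint]; sh; sh = sh->sh_next)
		(*sh->sh_func)(sh->sh_arg);
}

With something like that, init_main() shrinks to little more than
announcing sync points; everything else hangs itself off of them.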

Note that in a dynamically linked kernel there is no /kernel, so you
will need something to replace nlist("/kernel").

> >...
> >I strongly disagree.  In my view, a LKM controls its own fate.  The kernel
> >(or other LKMs on which this LKM depends) merely provides services which it
> >can use.  Indeed, as far as the system is concerned there should be no
> >difference between a FS LKM or a device LKM or protocol LKM or a syscall LKM.
> >Once the LKM's configuration entry point has been called, the LKM is in
> >control.
> 
> Yes, the dynamic framework (at least for device drivers) should be:
> 
> 	. a single entry point (for initialization) in each module
> 	. autonomous modules
> 
> The only difference for the static framework should be that the list
> of initialization routines is kept in a vector instead of in a file.

Which is what I stated above (except that instead of just a vector
of routine pointers, one has a vector of struct pointers which carry
ordering information as well as the function pointer).

> Vectors aren't suitable for handling complicated orderings.  The linker
> may put things in the wrong order so you need a meta-routine to decide
> the order or more intelligence in the module initialization routines.  I
> think that except for early kernel initialization, all the intelligence
> should be in the module initialization routines.

Agreed.  As I said, when the kernel initializes, it (the kernel, via a
meta-routine -- not the loader) sorts the module init vector using the
information in each structure.  (Since almost all modules will need to
be called at some point, it makes sense to at least encode the call
in a small static structure rather than requiring some code to invoke
a callback registration routine.)
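
Concretely, I'm picturing something along these lines (the names are
illustrative only; in a static kernel the entries would be gathered
into a linker set with DATA_SET, in a dynamic kernel they'd be read
out of the module being loaded):

struct kern_init {
	int	ki_syncpoint;		/* which sync point this runs after */
	int	ki_order;		/* ordering within that sync point */
	void	(*ki_func)(void *);	/* initialization routine */
	void	*ki_arg;		/* argument handed to it */
};

/* Stand-in for the linker set: a NULL-terminated vector of entries. */
extern struct kern_init *kern_inits[];

/*
 * The meta-routine: sort the vector by (sync point, order) and call
 * each entry in turn.  An insertion sort is plenty at boot time.
 */
void
kern_init_run(void)
{
	struct kern_init *tmp;
	int i, j, n;

	for (n = 0; kern_inits[n]; n++)
		continue;
	for (i = 1; i < n; i++) {
		tmp = kern_inits[i];
		for (j = i; j > 0 &&
		    (kern_inits[j - 1]->ki_syncpoint > tmp->ki_syncpoint ||
		    (kern_inits[j - 1]->ki_syncpoint == tmp->ki_syncpoint &&
		    kern_inits[j - 1]->ki_order > tmp->ki_order)); j--)
			kern_inits[j] = kern_inits[j - 1];
		kern_inits[j] = tmp;
	}
	for (i = 0; i < n; i++)
		(*kern_inits[i]->ki_func)(kern_inits[i]->ki_arg);
}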

> Device drivers will
> need to be able to sleep in their probe and attach routines for other
> reasons.  When tsleep() at probe and attach time is implemented,
> complicated time-dependent orderings (aka races :-) will be easy to
> implement - just sleep until the modules that you depend on are
> loaded.

It depends on when drivers will be probed, but if we assume it's before
process initialization, then this begs for kernel threads.  While
highly desirable, I think kernel threads would be overkill if added
just for this.  Instead, use of timeout() and proper sync points would
achieve the same capability at far less implementation cost.
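
A minimal sketch of what I mean, assuming the existing timeout() and hz
interfaces (the driver name and the ready flag are hypothetical):

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>

extern int mydev_pci_ready;	/* hypothetical flag set at the PCI sync point */

static void
mydev_deferred_attach(void *arg)
{
	if (!mydev_pci_ready) {
		/* Dependency not ready yet; try again in ~1/10 second. */
		timeout(mydev_deferred_attach, arg, hz / 10);
		return;
	}
	/* ... the real probe/attach work goes here ... */
}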

When I talk about sync points, I'm talking about the points at which
major parts of the kernel become ready for use.  These include (but are
not limited to, and are not listed in temporal order; a rough enumeration
is sketched after the list):

	timer services
	VM (including malloc/free)
	scheduler ready
	core networking (netisr/domain)
	<BUS> ready
	device configuration done
	rootfs available
	going to single user
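
As an enumeration, purely for illustration and in no particular order:

enum kern_syncpoint {
	SP_TIMER,		/* timer services */
	SP_VM,			/* VM, including malloc/free */
	SP_SCHED,		/* scheduler ready */
	SP_NET,			/* core networking (netisr/domain) */
	SP_BUS_PCI,		/* <BUS> ready -- one per bus, PCI shown here */
	SP_DEVCONF,		/* device configuration done */
	SP_ROOTFS,		/* rootfs available */
	SP_SINGLEUSER		/* going to single user */
};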

For instance, my de driver would register a callback to run after PCI
support is ready, while core networking would wait only for the VM init
to complete.
There's an awful lot of time that the kernel spends waiting for something
to finish that it could be using to do something more productive.  While
kernel threads would be an elegant way of doing this, an event driven
dispatcher could be just as effective and simpler to implement.
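
Purely illustrative, but tying it back to the registration sketch above,
the de driver's end of it might look like:

#include <sys/param.h>

static struct sync_hook de_hook;

static void
de_attach_all(void *arg)
{
	/* ... walk the PCI bus and attach each de unit ... */
}

void
de_modinit(void)
{
	/* Defer the real work until PCI support has announced itself. */
	sync_callback_register(SP_BUS_PCI, &de_hook, de_attach_all, NULL);
}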

Matt Thomas               Internet:   matt@lkg.dec.com
3am Software Foundry      WWW URL:    <currently homeless>
Westford, MA              Disclaimer: Digital disavows all knowledge
                                      of this message



