Date:      Thu, 22 Jul 2004 20:05:20 -0700 (PDT)
From:      "Bruce R. Montague" <brucem@mail.cruzio.com>
To:        freebsd-hackers@freebsd.org
Subject:   Re: "Next Generation" kernel configuration?
Message-ID:  <200407230305.i6N35KhW000696@mail.cruzio.com>



 Hi, re rule-based configuration, Chris Pressey noted:

 > That's the easy part.  The hard part is discovering the dependencies.


My impression is that almost all rule-based expert
systems of sufficient complexity that deal with a
dynamic field have failed because of this, that is,
due to the difficulty of determining the current
dependencies (rule discovery).  Even the experts
don't actually know; each will know some, but nobody
will know all, certainly not when the real dependencies
are evolving all the time.  Even worse, there may
be many combinations of things that just don't work
but nobody realizes it yet, new things that break a
lot of old dependencies in unknown ways, etc.  Even
the experts will hit this and ask on a mailing list,
"I did this and look what happened, anybody got any
ideas?"  So the experts will know how to solve the
problem, but not in a way that can be automated.

Unix has been pretty good over its life at resisting
combinatorial complexity.  RSX, for instance, had a
relatively high degree of optional API sets, optional
API features, and similar things with kernel
primitives; this introduced a very fine level of
granularity that made for a bad combinatorial
dependency explosion (part of this resulted from the
old OS/360 mantra of one system that would scale
across a very wide family, combined with paranoia
about memory use).  Feature sets selected for server
components depended on other feature sets, kernel
feature sets, API feature sets, driver feature sets,
etc., and vice versa.
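
To make the combinatorial problem concrete, here's a
toy sketch (the feature names and conflict rules are
made up for illustration - they are not actual RSX or
FreeBSD options): with n independent build knobs there
are 2^n possible builds, and the known incompatibility
rules only ever cover the combinations somebody has
already hit.

```python
from itertools import product

# Hypothetical option knobs - illustrative only, not real RSX/FreeBSD options.
features = ["FANCY_IPC", "TINY_EXEC", "OVERLAY_LOADER", "ASYNC_QIO"]

# Pairwise incompatibilities the "experts" have discovered so far.
conflicts = {("TINY_EXEC", "OVERLAY_LOADER"), ("FANCY_IPC", "ASYNC_QIO")}

def is_valid(build):
    """A build is a set of enabled features; reject known-bad pairs."""
    return not any({a, b} <= build for (a, b) in conflicts)

# Every on/off combination of the knobs: 2**len(features) builds to vet.
builds = [{f for f, on in zip(features, bits) if on}
          for bits in product((0, 1), repeat=len(features))]

valid = [b for b in builds if is_valid(b)]
print(len(builds), len(valid))  # 16 9
```

Four knobs already mean 16 builds to reason about;
a few dozen knobs and exhaustive testing is hopeless,
quite apart from all the bad combinations nobody has
written a rule for yet.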

My impression - I don't know if it's true - is that
the RSX experience made DEC say "never again".  One
important reason was testing.  Testing a system when
few others would actually be built exactly like it
raises issues... it's good to know that it at least
works... but how "fragile" is it with respect to
other build combinations?

The "large e-mail list" as the build expert-system
of choice, combined with a simple mechanism (flat
files) to act as control knobs, is likely a big
advantage open source systems have over most
proprietary systems.  It would be interesting to
know how many people world-wide are reasonably
competent to build FreeBSD from source compared to
how many can do the same for NT.  Maybe all the more
reason to package something as an "Assistant"-type
educational, verification, or visualization tool for
stable, well-known core dependencies.  FreeBSD will
be around for a long time, and such a tool, if
nothing else, might help get people on board without
any impact on the current state of affairs.  In any
case, it's an interesting problem, and systems
complexity is not likely to go down anytime soon!
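
As a small sketch of what the verification angle of
such an "Assistant" could look like - the option
names, flat-file format, and dependency table here
are all hypothetical, not FreeBSD's actual ones - a
tool could read the flat-file knobs and check them
against a table of stable, well-known dependencies:

```python
# Hypothetical dependency table: option -> options it requires.
# These names are made up for illustration; they are not real kernel options.
REQUIRES = {
    "OPT_NET_FILTER": {"OPT_NET"},
    "OPT_SOFT_RAID": {"OPT_DISK"},
}

def check_config(text):
    """Parse 'option NAME' lines from a flat config file and report any
    enabled option whose known prerequisites are not also enabled."""
    enabled = {line.split()[1] for line in text.splitlines()
               if line.strip().startswith("option ")}
    problems = []
    for opt in sorted(enabled):
        missing = REQUIRES.get(opt, set()) - enabled
        if missing:
            problems.append((opt, sorted(missing)))
    return problems

sample = """\
option OPT_NET
option OPT_NET_FILTER
option OPT_SOFT_RAID
"""
print(check_config(sample))  # [('OPT_SOFT_RAID', ['OPT_DISK'])]
```

A real tool would need its rule table kept current,
which is exactly the rule-discovery problem above -
but for the stable, well-known core dependencies it
could at least flag the obvious mistakes.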

 
 - bruce





