Date:      Fri, 7 Jun 1996 12:15:38 +0200 (MET DST)
From:      grog@lemis.de (Greg Lehey)
To:        hackers@freebsd.org (FreeBSD Hackers), freebsd-stable@freebsd.org (FreeBSD Stable Users), FreeBSD-current@freebsd.org (FreeBSD current users)
Subject:   The -stable problem: my view
Message-ID:  <199606071015.MAA00708@allegro.lemis.de>

Sorry for the cross-posting, but I think we need to involve people in
all three groups, since -current and -hackers will both be involved
when -stable goes away.

I buy most of Jordan's arguments about getting rid of -stable (though
I'm not sure why CVS should be the problem.  Sure, I don't like it
either, but the way I see it, that's mainly a problem of
documentation), and so I'm not going to argue against killing -stable,
even though some good arguments have been put forward for its
retention.

To sum up my viewpoint, I see two problems with the present setup.
For the most part, these aren't original ideas, but so much mail has
gone by on the subject that I think it's a good idea to summarize:

1. -current and -stable diverge too much.  This means that -stable
   really isn't, it's -dusty, and the occasions on which -current
   updates get folded into -stable are fiascos like we've experienced
   in the last week.  That wasn't the intention.

2. -current goes through periods of greater and less stability.  It's
   not practical for somebody who wants to run a stable system to
   track -current.  On the other hand, the more stable periods of
   -current work very well.

The real problem, as I see it, is finding a compromise between these
two problems.  Lots of people want a stable version of FreeBSD, but
they also want bugs fixed.  Many -stable users also want new features,
such as support for new hardware.  The -stable branch has diverged too
far. 

What we need are shorter branches: say, we start a -stable branch at a
point on the -current branch where things are relatively stable.  Then
we update it with bug fixes only for a relatively short period (say 4
to 8 weeks).  *Then we ditch it and start again at a new point on the
-current tree*.  These branches could be called things like
2.2.1-stable, 2.2.2-stable, and so on.  That way, we could keep our
relative stability while keeping the -stable branches much more up to
date.
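Concretely, the short-lived branches above could be cut with ordinary CVS branch tags, something like this (the tag and module names here are purely illustrative, not a proposal for actual naming):

```sh
# Branch a new -stable off a known-good point on -current:
cvs rtag -b RELENG_2_2_1 src        # create the 2.2.1-stable branch tag
cvs checkout -r RELENG_2_2_1 src    # check it out; commit bug fixes here

# Four to eight weeks later, abandon it and branch afresh:
cvs rtag -b RELENG_2_2_2 src
```

The point is that each branch is cheap to create and cheap to abandon, so there's no pressure to keep merging -current into an ever-older branch.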

----------------------------------------------------------------------

From Jordan's perspective, this is the main problem.  From my personal
perspective, it's completely irrelevant.  I have another *big*
problem: I've been trying to rebuild -stable for 5 days now, and I'm
still not much closer to success than I was at the beginning.
Yesterday I threw away everything I had and started again with a new
checkout and a new make world.  It's still barfing in an xterm behind
this one as I write.  

My problem is simple: the build procedure is screwed up.  It makes the
assumption that I really want to run the version I'm building on the
machine I'm building it on.  It confuses the build environment with
the execution environment.  It installs components of the new system
in the execution environment before the build is finished.  As a
result, if anything goes wrong, you end up with a system in an
indeterminate state.  This is a particular nuisance if header files
have changed, and I think this is the biggest problem so far.

There's no need for this.  I've already modified my build environment
to only use the header files in the /usr/src hierarchy, and it's easy
enough to ensure that the executables and libraries also only come
from the build environment.
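The principle is simple enough to sketch in a few lines of shell: install everything into a staging tree, and don't let anything touch the running system until the whole build has succeeded.  (The variable and file names below are invented for illustration; this is not how the current Makefiles work, which is exactly the complaint.)

```shell
#!/bin/sh
# Sketch: keep build products out of the live system until the end.
DESTDIR=${DESTDIR:-/tmp/stage}      # staging root (illustrative name)
mkdir -p "$DESTDIR/usr/bin"

# Stand-in for the "build" step: produce an artifact in an object tree.
OBJDIR=$(mktemp -d)
echo '#!/bin/sh' > "$OBJDIR/newtool"
chmod +x "$OBJDIR/newtool"

# "Install" into the staging area only.  If the build blows up halfway
# through, the running system is untouched and still consistent.
cp "$OBJDIR/newtool" "$DESTDIR/usr/bin/newtool"

# Only after everything has built and installed cleanly would you copy
# the staging tree onto the real system, in one final step.
ls "$DESTDIR/usr/bin"
```

With something like this, a failed build leaves you with a dirty staging directory, not a half-upgraded /usr.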

     In case you're interested in the header files, you do

     ln -s /usr/src/sys/i386/include /usr/src/include/machine

     and in the Makefiles, you add 

     CFLAGS += -nostdinc -I/usr/src/include -I/sys -I/sys/sys -I/sys/i386/include

     Possibly I've missed some header files in this, but that's just a
     matter of including them.  Similar considerations would apply to
     paths for libraries and executables, but I haven't got that far
     yet.

In addition, the build process depends far too much on removing
components and rebuilding them.  This makes builds take forever.  For
example, to rebuild a kernel, you first remove all the kernel objects.
Why?  BSD/OS has an almost identical build procedure, but it doesn't
expect you to remove what you have.  You do have to perform a make
depend, of course, but even that can be automated.  If somebody can
point me to an example of where the dependency rules don't work, I'd
be interested to see it.  
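There's nothing magic about not removing objects: it's the ordinary timestamp comparison that make has always done.  A toy illustration (file names invented), comparing mtimes the way make does instead of deleting everything first:

```shell
#!/bin/sh
# Toy illustration of make-style incremental rebuilds: recompile only
# when the source is newer than the object.
cd "$(mktemp -d)"
echo 'source v1' > vfs.c
cp vfs.c vfs.o                      # stand-in for "compile"
sleep 1
echo 'source v2' > vfs.c            # source changed; object is now stale

if [ vfs.c -nt vfs.o ]; then        # same test make performs on mtimes
    echo 'rebuilding vfs.o'
    cp vfs.c vfs.o
else
    echo 'vfs.o is up to date'
fi
```

Run it twice in the same directory and the second pass does nothing, which is the whole point: untouched objects cost nothing.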

One possible argument is: what do you do if the definitions in the
Makefile change?  This can require files to be recompiled.  Sure, if
the IDENT definition in the Makefile changes, you can expect to have
to recompile a whole lot of stuff, but there are ways to ensure that
that isn't necessary.  The most obvious, if not the most elegant, is
to make all objects depend on the Makefile, and then simply not to
touch the Makefile unless something in it actually changes.  A
somewhat more sophisticated method would be to put the definitions in
a separate file which the Makefile includes, and have the objects
depend only on that file.  Does anybody have dependencies that
couldn't be solved by this kind of method?
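The included-file method would look something like this (file and variable names are illustrative, not taken from the real kernel Makefiles):

```make
# Sketch: keep the tunable definitions in their own file and make the
# objects depend on that file, not on the Makefile itself.  Editing
# unrelated parts of the Makefile then forces no recompilation.

.include "opt_ident.mk"         # contains, say, IDENT=-DGENERIC

vfs.o: vfs.c opt_ident.mk
	${CC} ${CFLAGS} ${IDENT} -c vfs.c
```

Change IDENT and opt_ident.mk's timestamp moves, so everything that depends on it rebuilds; change anything else in the Makefile and nothing rebuilds at all.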

So now you'll come and say, "OK, do it".  I'm not just bitching: I am
prepared to revise the whole build procedure.  I think it would not
take much longer than I've spent trying to build the current version.
What do you people think?

Greg



