Date:      Wed, 7 Mar 2012 22:43:05 +0100
From:      Polytropon <>
To:        David Jackson <>
Subject:   Re: Still having trouble with package upgrades
Message-ID:  <>
In-Reply-To: <>
References:  <> <> <>

On Wed, 7 Mar 2012 12:42:52 -0500, David Jackson wrote:
> On Wed, Mar 7, 2012 at 11:58 AM, Polytropon <> wrote:
> > David, allow me to add a few thoughts:
> >
> > On Wed, 7 Mar 2012 11:28:47 -0500, David Jackson wrote:
> > > As for compile options, the solution is simple, compile in all feature
> > > options and the most commonly used settings into the binary packages, for
> > > the standard i386 CPU.
> >
> > I think this can develop into a major problem in certain
> > countries where listening to MP3 is illegal. :-)
> >
> >
> You are talking about the codec.

Mostly, yes, but also about "what to include": For example,
the mplayer port can build mplayer _and_ mencoder. For a
GUI version, there's gmplayer and gmencoder. A "universal
package" would contain them all.

> What Ubuntu seems to do is distribute these codecs as a seperate nonfree
> addon package which are then loaded by applications at run time. You see,
> options do not necessarily have to be compiled into programs, they can be
> loaded at libraries and then loaded by programs at run time if they are
> available.

I know this approach; it's effective within the Linux eco-
system and its particular view of "free vs. nonfree".
However, delegating installation and updating tasks from
system tools to individual applications looks... hmmm...
very old-fashioned and wrong to me. Just imagine 100
installed applications refusing to start, each inter-
actively nagging you that updates may be available,
that you should install them now, and reboot. That kind
of exaggeration is an example of how to do it totally wrong.

Loading things at runtime is something different from
permanently installing things to the system. A web page
loads a Javascript source file at runtime, but do you
want it to automatically install a web server to your
system? :-)
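
As a toy sketch of that distinction (hypothetical, not how
mplayer or Ubuntu actually implement it): a program can probe
for an optional helper at run time and degrade gracefully,
instead of having the feature compiled in permanently:

```shell
#!/bin/sh
# Hypothetical run-time feature probe: extra functionality is
# enabled only if an optional helper happens to be installed.
have_feature() {
    command -v "$1" >/dev/null 2>&1
}

for tool in sh surely-not-installed-encoder; do
    if have_feature "$tool"; then
        echo "$tool: available"
    else
        echo "$tool: not installed"
    fi
done
```

The program stays usable either way; nothing gets installed
behind the user's back.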

> > > If people want customisations then they can build
> > > the software for themselves.
> >
> > That's what they'll do anyway. :-)
> >
> >
> No, usually they do not. Few people except for hard core geeks want to mess
> around with compile options. most will use runtime configuration through a
> GUI which is faster.

Well, I'm not a hard core geek, but I have to make things
run on limited resources. For example, what if you need
to turn a 300 MHz P2 into a usable workstation? There's no
other way than dealing with /etc/make.conf and looking at
port options.
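
For illustration, a minimal /etc/make.conf fragment of the
kind meant here; the exact knobs are only an example and
depend on the ports involved:

```
# Hypothetical tuning for an old i686-class machine
CPUTYPE?=pentium2
CFLAGS=-O2 -pipe
# Skip heavyweight dependencies on a resource-limited box
WITHOUT_X11=yes
```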

Those who intend to customize things usually are familiar
with the options that are presented, even though those
options might look like logorrhea to others. Most option
screens are full of words (of dependencies or features)
that do not make any sense (and there's no way to conclude
what they do except doing a web search). For those who
tweak, they are no obstacle, but for newcomers they may
really be annoying: "Do I need KLOMPATSH and SHLORTZ
support? And if I do, what do I need them for?" :-)

> > > When a new kernel is released, there is no reason to reinstall all of the
> > > packages on the system at the same time. Since the kernel and userland
> > > packages have different development cycles, there is no reason why there
> > > has to be synchronization of the upgrading.
> >
> > It sometimes is neccessary, for example if kernel interfaces
> > have changed. There is some means of compatibility provided by
> > the compat_ ports. But if you start upgrading things, libraries
> > can break, and the system may become unstable (in terms of not
> > being able of running certain programs anymore). Just see how
> > "kernel and world are out of sync" errors can even cause the
> > system to stop booting. Degrading the inner workings of the OS
> > to "just another package" can cause trouble. "Simple updates"
> > as they are often performed on Linux systems can render the
> > whole installation totally unusable because "something minor"
> > went wrong. :-)
> >
> >
> >
> A well designed system will provide backwards compatability through various
> strategies and this does not necessarily need to affect internal software
> design as the backwards compatability can also be provided by compatability
> layers and glue code.

Please do not underestimate the complexity of an operating
system. It is not a simple brick of chocolate. It's very
complicated, and even "easy" things like backwards compatibility
and universal interfaces need a lot of complexity "behind
the scenes". The more versions "to skip", the more work is
needed to keep it running. Just have a look at today's (!)
common mainframe operating systems that still allow you to
address a card punch in your program. :-)

> > > An OS that requires a user to reinstall
> > > everything just to upgrade the kernel is not user friendly.
> >
> > Why do consider a user being supposed to mess with kernels?
> > This question can show that I'm already too old: Programs
> > are for users, kernels are for sysadmins. Sysadmins do stuff
> > properly, even if they shoot their foot in order to learn
> > an important lesson. :-)
> >
> >
> Users have to upgrade the kernel, with a well designed OS this is a process
> that does not require any sort of problems for the user.

You didn't answer the question: WHY do they have to? :-)

I see a collision of two paradigms here:

Install once, then use. This approach means that you stay
fully functional within a specific conglomerate of software
which will work. Things may break only when you try to do
an upgrade. The set of features you can access is constant.

Keep up to date. This approach requires you to constantly
upgrade things, and because of the inter-program-relations
(dependencies, libraries), things can also break along the
way. A system that has been changed is not guaranteed to
work the next day. The set of features provided by this
approach is not constant: it may increase, but may decrease
too (so you can lose functionality after updating).

Which approach _you_ will choose depends on your individual
needs and preferences.

Which _tools_ you will use to follow your chosen approach...
that's a different question.

For keeping a FreeBSD OS and system updated, tools using
the binary way are:
	* freebsd-update
	* portsnap
	* pkg_add -r
	* portupgrade -PP
If they fail, there is a _reason_ why they do. Investigating
that reason should help to solve the problem.
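
To make the list concrete, here is a sketch of such a
binary-only update run - shown as a dry run that merely
echoes the commands, since on a real FreeBSD system they
must be executed as root:

```shell
#!/bin/sh
# Dry run: RUN=echo prints each command instead of executing it.
# On a real system, set RUN="" and run as root.
RUN=echo

$RUN freebsd-update fetch install   # update the base system (kernel + world)
$RUN portsnap fetch update          # refresh the ports tree
$RUN portupgrade -PPa               # upgrade installed software, packages only
```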

> Since good kernel
> backwards compatability strategies will assure that the new kernel will
> drop into place of the old one without causing problems.

Unless it's a custom kernel. :-)

> Kernel upgrading should be done through the main package update tool and
> the kernel itself distributed as a package, as Ubuntu does it.

I think most Linusi handle it that way.

In Linux land, there is no real differentiation between
terms like "the operating system" and "installed programs".
Every creator of a Linux distribution chooses his kernel
and his "base applications" by selecting from a big pool
of packages. So the "system" of Linux A may not be the
same as of Linux B or Linux C. There is no centrally
developed and tested operating system _as such_.

FreeBSD however has "the operating system" which is
maintained by the FreeBSD team to make sure quality
requirements are met, documentation is available and
changes are well tested. This means that following
an update path like -STABLE or -RELEASE-pX will
typically not break things - unlike some Linux update
paths that may turn your computer into a nice paper
weight because the new kernel package doesn't boot.

Packages, unlike the OS, have no differentiation of
update paths such as -RELEASE (very well tested) or
-STABLE (well tested); one could think some of them
are like -CURRENT (a development branch that isn't
even guaranteed to compile successfully in all
imaginable cases).

> This is also
> how Windows Update does it as well. It can be done automatically with
> automatic updates and the user does not need to worry about it.

And when things break, you start poking a pile of
garbage with a long pole. You have _no_ means of
diagnosis, no other way than reverting to the last
version (if that's even possible in such a case).

Automated updates may be fine for home users, but
they may be critical and even DEADLY when applied
without care to an important server. As FreeBSD is
a multi-purpose OS, it will have to also consider
those situations.

> Not everyone who uses an OS is a system administrator.

Users don't use an OS. They use programs. Or files.
Or pictures. Or devices. Or apps.

Depending on the experience and knowledge of the users
in question, the idea of what they use may change
over time.

> Do you really think
> that anyone who owns an Ipad or has a home desktop computer should be
> required just to apply a kernel upgrade? The good thing is that kernel
> upgrades do not need a system administrator. A well designed kernel will
> not be so problematic that this will be required.

If everything worked as intended, that would be a lovely
world. But sadly, things break, more often than you'd like
them to. The problem is that if you need to diagnose a
problem, you deliberately want to go the manual path, to
see all the "hidden bits", because that's the only chance
to get the system up again - unless you re-install and
re-configure everything.

> Dependancy problems will not exist if the kernel development follows sound
> strategies for backwards compatability, which can include providing a
> compatability layer with glue code, which means backwards compatability
> need not necessarily affect the internals of the software system.

Glue code is often considered a main reason for bloat,
which generally is a bad thing.

Magdeburg, Germany
Happy FreeBSD user since 4.0
Andra moi ennepe, Mousa, ...
