Date:      Sun, 5 Dec 1999 17:06:58 -0500 (EST)
From:      Robert Watson <robert@cyrus.watson.org>
To:        freebsd-audit@freebsd.org
Subject:   Re: [Re: Several FreeBSD-3.3 vulnerabilities] (fwd)
Message-ID:  <Pine.BSF.3.96.991205170627.6435C-100000@fledge.watson.org>


Here's the bugtraq post I referred to in my previous post.

  Robert N M Watson 

robert@fledge.watson.org              http://www.watson.org/~robert/
PGP key fingerprint: AF B5 5F FF A6 4A 79 37  ED 5F 55 E9 58 04 6A B1
TIS Labs at Network Associates, Safeport Network Services

---------- Forwarded message ----------
Date: Thu, 2 Dec 1999 17:01:46 -0500
From: Robert Watson <robert@CYRUS.WATSON.ORG>
Reply-To: Robert Watson <robert+freebsd@cyrus.watson.org>
To: BUGTRAQ@SECURITYFOCUS.COM
Subject: Re: [Re: Several FreeBSD-3.3 vulnerabilities]

WARNING: this is a long email talking about auditing responsibility, risk
evaluation, and communicating about risk and vulnerabilities in the
context of a free operating system environment where third party code is
redistributed.  If you get bored, skip to the next section or the
conclusion.

On Wed, 1 Dec 1999, Brock Tellier wrote:

<snip>

> >This one is a hole in the vendor-provided software, which wants to install
> >it setuid uucp by default. With ~2800 third-party apps shipped with
> >FreeBSD, we can't be held responsible for the security of all of them :-)
>
> This is the statement I have a bit of a problem with.  Sure there are 2800
> ports, but how many of these are suid/sgid?  I'm thinking *maybe* 50 that I
> saw when I did a full install of 3.3-RELEASE.  Fifty apps, most of which are
> small like xmindpath, isn't a ridiculous number to audit.  At LEAST auditing
> them for command-line overflows and setting up a /tmp watcher.
> You may not be legally responsible, or be able to take responsibility for the
> quality of the code, but when you allow a third-party to put a *suid* program
> into your distribution you imply some sort of trust with the end-user
> regarding its security integrity.  At least to the point that we can assume
> that someone has taken the time to xmindpath -arg $BUF.  Note that this isn't
> specifically directed at FreeBSD or free OS's.

<snip>

> No, I contacted security-officer@freebsd.org who responded that HE had
> contacted the maintainers.  That was the last I ever heard of it.

So there has, of course, been a lot of traffic on freebsd-security,
freebsd-audit, and elsewhere about the impact of your recent advisories,
and the implications for our security process.  I think it's important to
address a number of issues that you and others have raised, to point out
that the process is probably not fixed yet, and to solicit feedback on how
the FreeBSD Project (and numerous other projects in the same situation)
should be handling this kind of thing.

Issue 1: Third Party Applications

One objection commonly passed around is that you are identifying these
bugs as "FreeBSD vulnerabilities", and indeed, they are vulnerabilities
that can arise from installing FreeBSD in the documented manner, and then
adding packages bundled with the FreeBSD distribution.  However, the
source base of these applications is neither written nor maintained by the
FreeBSD Project.  There are two classes of security issues here: first,
the application itself (as designed by the developers) may have security
problems, be they buffer overflows, poor design, or the like.

The second is that during the porting process, we may introduce security
problems that did not exist in the original un-FreeBSD'd version of the
application.  The second type, we clearly must take full responsibility
for--if it's our code, it's our problem.

The problem of the code in the base third party application itself is also
serious, and harder to address.  As has been pointed out, the source base
underlying the ports collection is enormous--there are almost 3000 third
party applications in the ports collection currently, and all of them will
have "security considerations".  Clearly we *cannot* audit all of the
code.  It is simply not feasible for a project of our size--we can skim
some of the code, and encourage the original application developers to do
proper auditing themselves.  We can also reject applications as "unsafe"
or tag them as "unsafe", which is a strategy that we have not yet
employed, but probably should.  You point out that there is a particular
class of applications that needs close attention: setuid and
setgid applications, that is, applications that rely on elevated
privileges to perform their function.  You also observe that they are
relatively small in number.  I think it's worth pointing out that in fact,
almost all code in the ports collection increases risk and exposure:
chances are, they interact with the network or third parties in many cases
(browsers, mail servers (imap anyway? :-), etc).  Almost any third party
application introduces some risk.  We can concentrate on subsets of these
that are particularly unfortunate (setuid/setgid, daemons), but we're
bumping into the risk factor.  Ideally the application developers do some
of this for us, right?
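
Just to make that subset concrete: here is a minimal sketch (Python,
purely illustrative -- the /usr/local prefix, the crash heuristic, and
the whole approach are my assumptions, not an existing FreeBSD tool) of
the kind of quick survey Brock describes, walking the ports prefix for
setuid/setgid binaries and giving each one the crude "xmindpath -arg
$BUF" treatment:

#!/usr/bin/env python
# Illustrative sketch only: survey the setuid/setgid binaries installed
# by ports (assumed to live under /usr/local) and smoke-test each one
# with an oversized command-line argument -- roughly the
# "xmindpath -arg $BUF" check quoted above.
import os
import stat
import subprocess

PREFIX = "/usr/local"       # assumed ports install prefix
BIG_ARG = "A" * 20000       # oversized argument to provoke argv overflows

def privileged_binaries(prefix):
    """Yield regular files under prefix with the setuid/setgid bit set."""
    for dirpath, _dirs, files in os.walk(prefix):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                mode = os.lstat(path).st_mode
            except OSError:
                continue
            if stat.S_ISREG(mode) and mode & (stat.S_ISUID | stat.S_ISGID):
                yield path

def smoke_test(path):
    """Run the binary with one huge argument; a negative return code
    means it died on a signal (e.g. SIGSEGV) and deserves a closer look."""
    try:
        proc = subprocess.run([path, BIG_ARG], capture_output=True,
                              timeout=5)
    except (subprocess.TimeoutExpired, OSError):
        return None
    return proc.returncode

if __name__ == "__main__":
    for binary in sorted(privileged_binaries(PREFIX)):
        rc = smoke_test(binary)
        flag = "CRASHED" if rc is not None and rc < 0 else "ok"
        print("%-8s %s" % (flag, binary))

A pass like that obviously catches nothing beyond the most naive argv
overflows -- no environment handling, no /tmp races -- but it is cheap
enough to run across the whole prefix after a full install.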

Issue 2: Identifying Risk and Informing the User

When a user chooses to install an operating system, they are placing a
certain amount of trust in the vendor, be it Microsoft, Sun, or FreeBSD.
This means they've accepted the level of auditing/security that the
vendors consider acceptable, and assume that unless otherwise informed,
all software provided by the vendor should be considered in that
equivalence class of trust.  In the FreeBSD model, ports do not fall into
that same class, and as such should be identified to the user.  I would
like to see a clear disclaimer pop up before entering the packages install
component of sysinstall:

	The packages collection is made up of applications provided by
	third parties, and adapted for use under FreeBSD.  Because these
	applications are not part of the FreeBSD source base, they may
	not have been through the same rigorous auditing and review
	process, and as such may suffer from
	limitations beyond the control of the FreeBSD Project, including
	security limitations.  By installing these packages, you accept
	the associated risk.

Clearly the wording needs work, but you get the idea: if we're going to
accept risk because of the integration of third party applications in our
install process, we need to let the user know so they can choose *not* to
accept the risk.

Similarly, we need to identify applications that have increased exposure
to the "unsafe world", that is, setuid/setgid applications that interact
with general users or the network, and applications that run without
elevated privileges, but with exposure to third parties (e.g., netscape
connecting to servers, irc, popd, etc).  We could also be a bit more
discerning about how we select ports and make them available.  Yes, it is
acceptable to pop up yet another dialog for the user that says:

	This application sucks.  Yes, it honestly does.  It was written by
	the *WORST* coder in the world, and as such it is full of security
	holes.  It's swiss cheese.  It makes a pile of tofu look like a
	well-audited piece of code.  We don't think you should install it,
	but because there is demand, we're making it available so that you
	can evaluate the program, your use of the program, and your
	environment and see if it is appropriate.  Go ahead at your own
	risk, and feel free to contact the developer to tell them we told
	you this.

And the default state of the ports collection should be to show this
warning for every port, turning it off only once we feel comfortable
with a given port.
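
The mechanism for this could be almost trivial.  Purely as a
hypothetical sketch (neither the SECURITY_REVIEWED field nor the
confirm_install helper below exists anywhere in sysinstall or the ports
infrastructure), the warning could hang off a single flag in each
port's metadata, defaulting to "not reviewed":

# Hypothetical sketch of a default-on warning gate for package installs.
# Nothing here exists in sysinstall or the ports tree; SECURITY_REVIEWED
# is an invented per-port metadata field used to illustrate the idea.

PORT_METADATA = {
    # port name        -> has the project signed off on its security?
    "xmindpath":         {"SECURITY_REVIEWED": False},
    "some-mail-daemon":  {"SECURITY_REVIEWED": False},
    "well-audited-app":  {"SECURITY_REVIEWED": True},
}

WARNING = ("This application is provided by a third party and has NOT\n"
           "been through the FreeBSD Project's own audit.  By installing\n"
           "it you accept the associated risk.  Continue?")

def confirm_install(port_name, ask=input):
    """Default state: warn for every port; skip the warning only for
    ports explicitly marked as reviewed."""
    meta = PORT_METADATA.get(port_name, {})
    if meta.get("SECURITY_REVIEWED", False):
        return True
    answer = ask("%s\n%s [y/N] " % (port_name, WARNING))
    return answer.strip().lower() in ("y", "yes")

if __name__ == "__main__":
    for port in ("well-audited-app", "xmindpath"):
        print("install %s: %s" % (port, confirm_install(port)))

The important property is the default: every port starts out untrusted,
and a port only stops warning when somebody consciously flips the flag.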

Issue 3: Improved and Formal Communications Model

So I've painted a picture of different code components with varying
degrees of trust (base code, ports, etc).  Now we need to think about what
to do when a vulnerability is found.  In the best case, we are notified in
advance so that we can prepare fixes for release at the same time as the
announcement of the vulnerability--this provides end users with a good
combination of open disclosure and vendor-provided patches.  As
such, we provide the security-officer@freebsd.org address as a uniform
submission location for such vulnerability notifications.  Somewhere, we
slipped, because the fixes weren't out there.

In the case of a bug in the base source tree, there may be a delay due to
communicating with developers of that section of the tree, but all parties
are clearly identified and the task is fairly easy.  With a third party
development model, there are more people involved in the process.  The
process you discovered goes something like this:

	Bug Reporter --> Security Officer --> Port Maintainer

Optionally, there's a fourth party involved: --> Application Developer,
and the application developer may want to be "in" on the process.  Each
step along the way involves a delay -- we wait for each party to check
their email, live their lives (your choice: you're 9 months pregnant and
your water just broke: you can go to the delivery room, or you can fix a bug..
:-), and so there are timeouts.  If the bug is being actively exploited,
there may be shortcut approaches available, but because of the large scale
of some of the software, it's important to get the designer involved in
fixes, as there may be implications to any changes made by a party not
understanding the code as well (such as introducing new security problems:
a correct fix is better than no fix, and an incorrect fix can actually
make things worse).  The part that presumably got broken was the "timeout
on ping to next stage, take action to limit the effects of the bug".
Presumably the choices go something like this:

	Advisory and one of:	withdraw the port
				patch the port
				update the port for application developer
				    fixes

When communications break down, we should start moving up the list until
something works with sufficient rapidity.  I'd say, give two days to the
ports developer, and two days to the application developer.  If, after
four days, a fix isn't found, go for the lowest option on the list that is
feasible, with a worst case of withdrawing the port (disabling it and
issuing an advisory).
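
To pin the timeout structure down a little, here is a small sketch of
the escalation logic just described.  The two-day windows match the
numbers above, but notify() and fix_available() are placeholders for
the actual back-and-forth with the maintainer and the application
developer; the only point is that every expired window maps to a
concrete fallback rather than to silence:

# Sketch of the escalation policy described above.  notify() and
# fix_available() stand in for real communication with the port
# maintainer and the application developer; only the timeout structure
# and the worst-case fallback are the point.

STAGES = [
    ("port maintainer",       2),   # days allowed before escalating
    ("application developer", 2),
]

def handle_vulnerability(port, notify, fix_available):
    """Walk the notification chain a day at a time; if nobody produces
    a fix within the allotted windows, fall back to withdrawing the
    port and issuing an advisory (the worst case in the list above)."""
    day = 0
    for party, window in STAGES:
        notify(party, port)
        for _ in range(window):
            day += 1
            if fix_available(port, day):
                return "day %d: advisory + fixed port (%s responded)" \
                       % (day, party)
    return "day %d: advisory + port withdrawn (no fix received)" % day

if __name__ == "__main__":
    # Toy run: the application developer ships a fix on day 3.
    result = handle_vulnerability(
        "xmindpath",
        notify=lambda party, port: print("notify %s about %s"
                                         % (party, port)),
        fix_available=lambda port, day: day >= 3,
    )
    print(result)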

There are perfectly legitimate reasons for poor communications: poor media
for communication, poor availability of opportunity, etc.  What there is
no legitimate reason for is being unable to decide what to do when
communication fails: it's like disconnected operation on a notebook: you
can't do nothing, so it's best to fail as best you can :-).

The communications and action model needs to reflect the development
model.


And to conclude before too many people have given up: this should not look
like an unfamiliar problem to anyone out there.  All the free operating
systems I know of have to deal with it, as well as most of the commercial
vendors.  When a hardware vendor irritatingly bundles AOL Instant
Messenger with your Win98 machine, they've made a decision about trusted
code bases, etc.  Probably not consciously and with the necessary degree
of thought, but they have.  The same goes for Microsoft bundling parts of
BSAFE for browser security, etc.  Both these organizations (providers and
facilitators of third party code distribution) and users need to
understand the risks, and work out appropriate responses.  And also the
application developers: you make everyone look bad :-).

  Robert N M Watson

robert@fledge.watson.org              http://www.watson.org/~robert/
PGP key fingerprint: AF B5 5F FF A6 4A 79 37  ED 5F 55 E9 58 04 6A B1
TIS Labs at Network Associates, Safeport Network Services



To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-audit" in the body of the message



