Date:      Fri, 2 Jan 2009 17:44:12 +0100
From:      cpghost <>
Subject:   Foiling MITM attacks on source and ports trees
Message-ID:  <>


With MITM attacks [1] on the rise, I'm concerned about the integrity
of local /usr/src, /usr/doc, and /usr/ports trees fetched through csup
(and portsnap) from master or mirror servers.


There's already a small protection against MITM on the distfiles in
ports: the distinfo files contain MD5 and SHA256 digests. This is an
excellent idea that could be extended to *all* files in /usr/src,
/usr/doc, and /usr/ports.
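For instance, checking a distfile against its distinfo entry boils
down to something like this (a minimal Python sketch; the
"SHA256 (name) = hex" line format follows the ports convention, but
the helper names are mine):

```python
# Minimal sketch: verify a fetched distfile against its distinfo SHA256.
import hashlib

def parse_distinfo(text):
    """Map each distfile name to its expected SHA256 hex digest."""
    digests = {}
    for line in text.splitlines():
        if line.startswith("SHA256 ("):
            name = line[len("SHA256 ("):line.index(")")]
            digests[name] = line.split("=", 1)[1].strip()
    return digests

def sha256_matches(path, expected_hex):
    """Hash the file in chunks and compare against the expected digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex
```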

What I'd like to have is a way to check the fetched /usr/src,
/usr/doc, and /usr/ports files against a *digitally signed* list of
(file, revision, digest) tuples that would be generated on-the-fly and
on-demand, so that any modification of the files in transit would be
detected (provided the checker program runs on an uncompromised host,
of course).
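The checker side could look roughly like this (a Python sketch; it
assumes the tuple list's GnuPG signature has already been verified out
of band, e.g. with `gpg --verify`, and the tuple format itself is a
hypothetical one of mine):

```python
# Sketch of the checker: compare local files against a signed
# (file, revision, digest) list. Signature verification of the list
# itself is assumed to have happened before this runs.
import hashlib
import os

def sha256_file(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def check_tree(root, tuples):
    """tuples: iterable of (relpath, revision, sha256_hex).
    Returns a list of (relpath, reason) for files that fail."""
    bad = []
    for relpath, _rev, digest in tuples:
        path = os.path.join(root, relpath)
        if not os.path.exists(path):
            bad.append((relpath, "missing"))
        elif sha256_file(path) != digest:
            bad.append((relpath, "digest mismatch"))
    return bad
```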

This should not only apply to up-to-the-minute current files, but also
to files fetched, say, a few weeks or months ago (e.g. because they
are deployed in stable production servers).

Assuming there's a secure way (which is not affected by MITM) to
obtain a master public key (GnuPG key) of the FreeBSD Project, it
would be nice to have a mechanism in place that would:

1. create a compressed list of (file, revision, md5/sha1/...digest)
   tuples for /usr/src, /usr/doc, and /usr/ports trees,

2. sign this list with the master private key of the project and
   make it available.
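Steps 1 and 2 could be sketched like this (the one-line-per-tuple gzip
format and the single revision tag per run are assumptions of mine;
signing shells out to gpg, which would have to hold the project's
private key):

```python
# Sketch of the generator: walk a tree, emit a compressed list of
# (file, revision, digest) lines, then detach-sign the result.
import gzip
import hashlib
import os
import subprocess

def build_digest_list(root, revision, out_path):
    """Write 'relpath revision sha256' lines, gzip-compressed."""
    with gzip.open(out_path, "wt") as out:
        for dirpath, dirs, files in os.walk(root):
            dirs.sort()   # deterministic traversal order
            for name in sorted(files):
                path = os.path.join(dirpath, name)
                rel = os.path.relpath(path, root)
                digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
                out.write(f"{rel} {revision} {digest}\n")

def sign(path):
    """Produce path + '.sig'; requires a keyring holding the signing key."""
    subprocess.run(["gpg", "--detach-sign", "--output", path + ".sig", path],
                   check=True)
```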

Because the number of revisions for any specific file can be huge,
this list could grow very fast. It may be more economical to have the
program create (file, revision, digest) tuples only for a limited
number of revisions: typically as many as accumulate between the start
and end of a csup run on a slow link, or at most, say, 24 hours'
worth, starting at an arbitrary date in the past.

To save CPU cycles, previously computed (file, revision, digest)
tuples could be permanently cached in an RDBMS, in Subversion, or
wherever else is appropriate.
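As a sketch of such a cache (SQLite here standing in for "an RDBMS";
the schema and helper names are made up):

```python
# Sketch: cache (file, revision, digest) tuples in SQLite so that
# repeated requests never recompute a digest for the same revision.
import sqlite3

def open_cache(path=":memory:"):
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS digests (
                      file     TEXT,
                      revision TEXT,
                      digest   TEXT,
                      PRIMARY KEY (file, revision))""")
    return db

def get_or_compute(db, file, revision, compute):
    """Return the cached digest, or call compute(file, revision) once
    and remember the result."""
    row = db.execute("SELECT digest FROM digests WHERE file=? AND revision=?",
                     (file, revision)).fetchone()
    if row:
        return row[0]
    digest = compute(file, revision)
    db.execute("INSERT INTO digests VALUES (?, ?, ?)",
               (file, revision, digest))
    db.commit()
    return digest
```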

Oh, we could always use SSL between csup and the servers as fallback,
but SSL is not without flaws and I doubt that all mirrors would have
valid certificates, defeating the whole purpose of foiling MITM
attacks. And SSL alone doesn't permit checking the integrity of an
older snapshot after the fact.

Any ideas? Could this be implemented as a Subversion plugin (since
it must access previous revisions of files and previously computed
digests)? Given read-only access to the repository, a set of simple
Python scripts or C/C++ programs could easily implement the basic
functionality and cache the results for fast retrieval by other
scripts. But how well will all this scale?


Cordula's Web.
