Date:      Wed, 30 May 2001 00:58:54 -0700
From:      Terry Lambert <tlambert2@mindspring.com>
To:        Dave Hayes <dave@jetcafe.org>
Cc:        Nadav Eiron <nadav@cs.Technion.AC.IL>, hackers@FreeBSD.ORG, Jason Andresen <jandrese@mitre.org>
Subject:   Re: technical comparison
Message-ID:  <3B14A83E.C73D2499@mindspring.com>
References:  <200105232340.QAA07127@hokkshideh.jetcafe.org>

Dave Hayes wrote:
> You can't make that assumption just yet (although it seems
> reasonable). We really don't know exactly what the problem they are
> trying to solve is. Network news sites running old versions of
> software (as an example, I know someone who still runs CNEWS) have
> very clear reasons for phenomena resembling 60,000 files in one
> directory.

I think it's the "how can we come up with an artificial
benchmark to prove the opinions we already have" problem...

Right up there with the "Polygraph" web caching "benchmark",
which intentionally stacks the deck to test cache replacement,
and for which the people who get the best numbers are those
who "cheat back" and use random replacement instead of LRU or
some other sane algorithm, since the test intentionally
destroys locality of reference.
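
To illustrate (a toy sketch, not Polygraph's actual workload): once
requests are drawn uniformly at random, there is no locality of
reference left for LRU to exploit, so its bookkeeping buys nothing
over random replacement.  The object count, cache size, and request
count below are made-up numbers.

/*
 * Toy cache simulation: NOBJ distinct objects, CSIZE cache slots,
 * NREQ uniformly random requests (i.e. no locality of reference).
 * Compares hit rates for LRU vs. random replacement.
 */
#include <stdio.h>
#include <stdlib.h>

#define NOBJ    10000           /* distinct objects */
#define CSIZE   1000            /* cache slots */
#define NREQ    500000          /* requests */

int
main(void)
{
    static int lru[CSIZE], rnd[CSIZE];  /* object ids held by each cache */
    int nlru = 0, nrnd = 0;             /* slots in use */
    long lru_hits = 0, rnd_hits = 0;
    int i, j, obj;

    srandom(1);
    for (i = 0; i < NREQ; i++) {
        obj = random() % NOBJ;

        /* LRU: linear scan, most recently used kept at slot 0. */
        for (j = 0; j < nlru; j++)
            if (lru[j] == obj)
                break;
        if (j < nlru)
            lru_hits++;
        else if (nlru < CSIZE)
            j = nlru++;                 /* fill an empty slot */
        else
            j = CSIZE - 1;              /* evict the least recently used */
        for (; j > 0; j--)              /* move/insert at the front */
            lru[j] = lru[j - 1];
        lru[0] = obj;

        /* Random replacement: linear scan, evict a random victim. */
        for (j = 0; j < nrnd; j++)
            if (rnd[j] == obj)
                break;
        if (j < nrnd)
            rnd_hits++;
        else if (nrnd < CSIZE)
            rnd[nrnd++] = obj;
        else
            rnd[random() % CSIZE] = obj;
    }
    printf("LRU hit rate:    %.3f\n", (double)lru_hits / NREQ);
    printf("random hit rate: %.3f\n", (double)rnd_hits / NREQ);
    return (0);
}

With a uniform request stream both caches settle at a hit rate of
roughly CSIZE/NOBJ, so the policy that pays for LRU bookkeeping just
loses; that's the sense in which the test rewards "cheating back".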

People have made the same complaint about the lmbench micro
benchmarks, which test things that aren't really meaningful
any more (e.g. NULL system call overhead, when we have things
like kqueue, etc.).
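
For what it's worth, the number reported as "null system call"
latency comes from something like the loop below (a rough sketch; I'm
assuming getppid() as the cheap call, which is what lmbench-style
harnesses typically use).  It tells you how fast you can cross the
user/kernel boundary doing nothing, which says very little about how
an event-driven server built on kqueue will actually behave.

/*
 * The flavor of "null system call" measurement lmbench reports:
 * time a cheap syscall in a tight loop and divide by the count.
 * (getppid() here; lmbench's own harness is more careful about
 * timing resolution and loop overhead.)
 */
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

#define NITER   1000000

int
main(void)
{
    struct timeval start, end;
    double usec;
    int i;

    gettimeofday(&start, NULL);
    for (i = 0; i < NITER; i++)
        (void)getppid();                /* trap into the kernel and back */
    gettimeofday(&end, NULL);

    usec = (end.tv_sec - start.tv_sec) * 1e6 +
        (end.tv_usec - start.tv_usec);
    printf("%.3f usec per call\n", usec / NITER);
    return (0);
}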

I'm largely unimpressed with benchmarks written to beat a
particular drum for political reasons, rather than as a
tool for optimizing something that's meaningful to real
world performance under actual load conditions.  Call me
crazy that way...

-- Terry
