Date:      Fri, 01 Jun 2001 15:17:09 -0700
From:      Terry Lambert <tlambert2@mindspring.com>
To:        "Albert D. Cahalan" <acahalan@cs.uml.edu>
Cc:        freebsd-hackers@FreeBSD.ORG, jandrese@mitre.org
Subject:   Re: Real "technical comparison"
Message-ID:  <3B181465.25B1311A@mindspring.com>
References:  <200105310120.f4V1KIw327035@saturn.cs.uml.edu>

"Albert D. Cahalan" wrote:
> 
> > This "postmark" test is useless self flagellation.
> 
> The benchmark tests what it was meant to test: performance
> on huge directories.

Which is useless, since only degenerate software results
in huge directories.

I have yet to see one example of software that produces this
degenerate case for which more modern software, providing at
least equivalent functionality, is not generally available.

Unless you want to count the "postmark" program itself.


> > The intent of the "test" is obviously intended to show
> > certain facts which we all know to be self-evident under
> > strange load conditions which are patently "unreal".
> 
> That apps designed with UFS in mind don't usually create
> such directories is irrelevant. Those that do are being
> pushed past their original design, which does happen!

Only about 5% of computer systems currently in use have
filesystems capable of not "failing" this test:

o	AIX systems with JFS
o	SGI systems with XFS
o	Obsolete OS/2 systems with JFS
o	Obsolete OS/2 systems with HPFS
o	Obsolete NT 1.x and 2.x systems with HPFS
o	Experimental Linux systems with the incompletely
	implemented XFS
o	Experimental Linux systems with the incompletely
	implemented ReiserFS
o	Experimental FreeBSD systems with the incompletely
	implemented XFS
o	Experimental FreeBSD systems with my patches from
	1995 for trie-structured directory storage for
	Berkeley FFS
o	FreeBSD systems running IFS (Inode FS), where there
	are no directory entries, and everything is by inode
	number

On the other hand, we have:

SVR3 UFS; SVR4 UFS; SVR4 VxFS (Veritas); Solaris VxFS; SVR4
NWFS; DOS FAT; DOS FAT16; VFAT; VFAT32; NTFS; AFS; CODA; NFSv1;
NFSv2; NFSv3; NFSv4; Mac HFS; ExtFS; Ext2FS; Ext3FS; EFS; EmFS;
LFS; SpriteFS; AdvFS; RFS; TFS; SFS; HRFS; DTFS; MFS; FlFS;
SVFS; SV1KFS; Acer Fast FS; Xenix FS; BFS; IFS; etc..

I could go on... are you getting the picture?  Only a moron
implements his code to run only on marginal platforms.  If
you are implementing commercial code, you often only know
the development environment, not the target environment.

You might as well write your code without bzero'ing your
sockaddr_in structs before using them "because everything
is Linux".

> Some people think 60 MB of RAM is tiny.

Count me as one of them.  I use ~64MB of RAM just for the
per-connection mbufs that hold the "struct tcptemp"s
_just in case I need to send keepalives_, in my example of
250,000 connections on my production server, which actually
_does_ support that many connections.  Your 60MB would leave
me with only enough memory to eke out a mere 15,625 connections.


> How about a real benchmark?
> 
> At www.spec.org I see SPECweb99 numbers for Solaris, AIX,
> Linux, Windows, Tru64, and HP-UX. FreeBSD must be hiding,
> because I don't see it. BSDI, Walnut Creek, and WindRiver
> all have failed to submit results.
> 
> (the cost is just loose change for WindRiver)

I don't represent WindRiver.  If you would care to front me
the $US 800, I would be happy to run those tests, if they
happen to be your favorites.

Until then, my benchmark is what I can achieve on real
hardware in a real application.

[ ... "show some numbers" ... ]

I did: 250,000 simultaneous connections; and that's not
nearly what I've actually achieved, merely what I choose
to disclose.

-- Terry
