Skip site navigation (1)Skip section navigation (2)
Date:      Fri, 01 Jun 2001 11:13:09 -0700
From:      Terry Lambert <tlambert2@mindspring.com>
To:        Rik van Riel <riel@conectiva.com.br>
Cc:        "Andresen,Jason R." <jandrese@mitre.org>, freebsd-hackers@FreeBSD.ORG
Subject:   Re: Real "technical comparison"
Message-ID:  <3B17DB35.10E24F09@mindspring.com>
References:  <Pine.LNX.4.21.0105301555430.12540-100000@imladris.rielhome.conectiva>

next in thread | previous in thread | raw e-mail | index | archive | help
Rik van Riel wrote:
> 
> On Wed, 30 May 2001, Terry Lambert wrote:
> 
> > The intent of the "test" is obviously intended to show
> > certain facts which we all know to be self-evident under
> > strange load conditions which are patently "unreal".
> 
> > I would suggest a better test would be to open _at least_
> > 250,000 connections to a server
> 
> That would certainly qualify for the "patently
> unreal" part, but I don't know what else you
> want to prove here.

I have a system ready to go to production that has been
tested well in excess of that number of connections.  My
numbers over 250,000 are currently classified by the
people paying me to do the work.

You may remember when I found and fixed the cred structure
reference count rollover at 32,7xx network connections, in
4.3, recently.  That was for this project.


> > This could easily be the case with, for example, a pager
> > network or other content broadcasting system, or an EAI
> > tool, such as IBM's MQ-Series.
> 
> Doing a gigabit per second in 3kB per second connections
> doesn't seem all that realistic when you realise that
> they'll want their messages only acknowledged when they
> are safely on disk, etc...  Think "transactions".

Consider that HTTP 1.1 persistant connections are frequently
idle, as users view the pages they downloaded.  In many
applications, the speed is determined by the human needing
to assimilate the information, which was presented quickly.

Given average statistics on latency between page loads on
browsers with humans attached to them, I rather expect that
an HTTP 1.1 server that served 250,000 connections would
have no trouble statistically keeping up with T1 speeds,
for the full 250,000 connections; that's about the highest
possible DSL rate, assuming your house was next door to
the LATE.

This is just a real-world example that a layman would be
expected to intuitively understand, if they couldn't
understand the pager network or content broadcasting
system real-world examples.

-- Terry

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-hackers" in the body of the message




Want to link to this message? Use this URL: <https://mail-archive.FreeBSD.org/cgi/mid.cgi?3B17DB35.10E24F09>