Date:      Fri, 3 Sep 2004 21:44:02 +0530
From:      Subhro <subhro.kar@gmail.com>
To:        "freebsd-questions@FreeBSD.ORG" <freebsd-questions@freebsd.org>
Cc:        simon@synatech.com.au
Subject:   Re: 100,000 TCP connections - kernel tuning advice wanted
Message-ID:  <b2807d040409030914352ad60c@mail.gmail.com>
In-Reply-To: <20040903120734.GA28796@pobox.com.>
References:  <20040903120734.GA28796@pobox.com.>

Please post the output of netstat -m.
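
For reference, netstat -m reports mbuf and mbuf-cluster usage, which is
usually the first resource to run short with that many connections.  As a
purely illustrative sketch (not from this thread), the related kern.ipc
limits can also be read programmatically with sysctlbyname(3); the OIDs
below are simply the ones I would expect to matter in this setup:

/*
 * Hypothetical illustration: read a few FreeBSD tunables that relate to
 * mbuf and socket limits.  Compile with: cc -o ipclimits ipclimits.c
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

static void show(const char *oid)
{
    int val = 0;
    size_t len = sizeof(val);

    /* sysctlbyname(3) fetches the current value of a named OID. */
    if (sysctlbyname(oid, &val, &len, NULL, 0) == -1) {
        perror(oid);
        return;
    }
    printf("%-22s %d\n", oid, val);
}

int main(void)
{
    show("kern.ipc.nmbclusters");   /* mbuf clusters available to the stack */
    show("kern.ipc.maxsockets");    /* hard cap on open sockets */
    show("kern.ipc.somaxconn");     /* listen(2) backlog limit */
    show("kern.maxfiles");          /* system-wide open file limit */
    return 0;
}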

Regards
S.


On Fri, 3 Sep 2004 22:07:35 +1000, Simon Lai <simon@synatech.com.au> wrote:
> 
> Hi all,
> 
> As part of a team, I am working on a TCP multiplexor using FreeBSD.  On side A
> we have 100,000 TCP connections accepting packets, which are multiplexed
> onto a single TCP connection on Side B.  Packets going B->A are
> demultiplexed in the reverse direction.  Details:
> 
> - FreeBSD version is 5.2-RELEASE. The kernel has been recompiled with
>  DEVICE_POLLING enabled and unused devices removed.  The
>  HZ parameter has been varied across 1000, 2000 and 4000, but this
>  does not significantly alter our results.  We have also experimented with
>  the idle and trap sysctls for polling.
> - our network card is an Intel EtherExpress Pro, running at 100 Mbit/s
> - UDP is not an option for us
> - Average payload size is 50-100 bytes.  The payload is preceded
>  by a 32-bit value, which is the size of the payload, so reading
>  is a matter of grabbing the size, allocating a buffer and then
>  doing the read (see the framing sketch after this list).  Minimal
>  processing is done on the packet.
> - We are using our own specialized memory management. We use writev and
>  readv wherever possible.
> - socket buffers have been increased to 1MB on the B side, but are the
>  default size on side A.
> - we are using kevent/kqueue - this task would be impossible without them
>  (see the event loop sketch after this list)
> - our current test box has 1.5 GB of RAM and a 1 GHz Athlon CPU.  While we might
>  go for a faster CPU, we would like to stay within our current RAM constraints.
> - Side A is connected to a test client, which has 20% idle time.
> - Side B is connected via a switch to another test box, which just echoes the
>  packets back for testing purposes. It has significant idle time.
> - Our current rough measurements, using top, show 30% user time and 60%
>  kernel time when this app is running.  This multiplexing app is the only
>  app running on the machine.  The machine is CPU bound - the multiplexing
>  requires no disk I/O.
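
A side note on the framing described above: with a 32-bit length prefix
followed by the payload, I assume the read path looks roughly like the
sketch below.  This is only my own illustration - the ntohl() byte-order
assumption and the simple retry loops are mine, not from the post:

/*
 * Sketch only: read one length-prefixed record from a connected socket.
 * Real code driven by kqueue would keep per-connection state instead,
 * since only part of a record may be readable at any one time.
 */
#include <sys/types.h>
#include <arpa/inet.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>

/* Returns a malloc()ed payload of *lenp bytes, or NULL on error/EOF. */
static unsigned char *read_record(int fd, uint32_t *lenp)
{
    uint32_t netlen;
    unsigned char *buf;
    ssize_t n, off;

    /* 1. read the 4-byte size prefix */
    for (off = 0; off < (ssize_t)sizeof(netlen); off += n) {
        n = read(fd, (char *)&netlen + off, sizeof(netlen) - off);
        if (n <= 0)
            return NULL;
    }
    *lenp = ntohl(netlen);          /* assumed network byte order */

    /* 2. allocate a buffer of that size and read the payload */
    if ((buf = malloc(*lenp)) == NULL)
        return NULL;
    for (off = 0; off < (ssize_t)*lenp; off += n) {
        n = read(fd, buf + off, *lenp - off);
        if (n <= 0) {
            free(buf);
            return NULL;
        }
    }
    return buf;
}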
> 
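Likewise, a bare-bones kevent/kqueue loop of the kind described might look
like the following.  Again this is just a sketch; handle_readable() is a
hypothetical stand-in for the demux/framing code, not anything from the
original post:

/*
 * Skeleton kqueue event loop (illustration only).
 */
#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>
#include <err.h>

#define MAXEVENTS 1024

extern void handle_readable(int fd);    /* application-specific handler */

void event_loop(int listen_fd)
{
    struct kevent change, events[MAXEVENTS];
    int kq, i, n;

    if ((kq = kqueue()) == -1)
        err(1, "kqueue");

    /* watch the listening socket; accepted sockets get the same EV_ADD */
    EV_SET(&change, listen_fd, EVFILT_READ, EV_ADD, 0, 0, NULL);
    if (kevent(kq, &change, 1, NULL, 0, NULL) == -1)
        err(1, "kevent register");

    for (;;) {
        n = kevent(kq, NULL, 0, events, MAXEVENTS, NULL);
        if (n == -1)
            err(1, "kevent wait");
        for (i = 0; i < n; i++)
            handle_readable((int)events[i].ident);
    }
}
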
> Currently we are getting 4000-6000 packets/sec unidirectional throughput,
> depending upon the mix of packet types/sizes.  This goes up to
> 5000-7000 packets/sec for 50,000 connections.
> 
> We are seeking advice on what kernel tunables we can tweak to improve
> packet throughput.  The constants are TCP, 100,000 connections, and
> 50-100 byte packet sizes.
> 
> All help appreciated.
> 
> Regs
> 
> Simon
> 



-- 
Subhro Sankha Kar
School of Information Technology
Block AQ-13/1 Sector V
ZIP 700091
India


