Date:      Fri, 04 Feb 2005 11:03:31 -0600
From:      Guy Helmer <ghelmer@palisadesys.com>
To:        freebsd-net@freebsd.org
Subject:   Netgraph performance question
Message-ID:  <4203AAE3.4090906@palisadesys.com>

A while back, Maxim Konovalov made a commit to usr.sbin/ngctl/main.c to
increase its socket receive buffer size so that 'ngctl list' could cope
with a large number of nodes, and Ruslan Ermilov responded that setting
the sysctls net.graph.recvspace=200000 and net.graph.maxdgram=200000
was a good idea on a system with many nodes.

I'm getting what I consider to be sub-par performance under FreeBSD 5.3
from a userland program that uses an ng_socket node connected into an
ng_tee to play with packets traversing an ng_bridge, and I finally have
an opportunity to look into this.  I say "sub-par" because when we
tested this configuration with three 2.8GHz Xeon machines with Gigabit
Ethernet interfaces at 1000Mbps full-duplex, a single TCP stream peaked
at about 12MB/sec through the bridging machine, as measured by NetPIPE
and netperf.
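
For context, here is a stripped-down sketch of the kind of hookup I
mean (not our actual code; the node name "mytee" and the hook names
are just placeholders, and our real setup does more than snoop copies).
It creates an ng_socket node, connects it to one of the tee's hooks,
and reads packets from the data socket.  Build with -lnetgraph.

#include <sys/types.h>
#include <sys/socket.h>
#include <netgraph.h>
#include <netgraph/ng_message.h>
#include <err.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
        struct ngm_connect con;
        u_char buf[65536];
        char hook[64];
        int csock, dsock, len;

        /* Create the ng_socket node; gives us control + data sockets. */
        if (NgMkSockNode("monitor", &csock, &dsock) < 0)
                err(1, "NgMkSockNode");

        /* Connect our hook "snoop" to the tee's "right2left" hook. */
        memset(&con, 0, sizeof(con));
        snprintf(con.path, sizeof(con.path), "mytee:");
        snprintf(con.ourhook, sizeof(con.ourhook), "snoop");
        snprintf(con.peerhook, sizeof(con.peerhook), "right2left");
        if (NgSendMsg(csock, ".:", NGM_GENERIC_COOKIE, NGM_CONNECT,
            &con, sizeof(con)) < 0)
                err(1, "NGM_CONNECT");

        /* Read copies of the bridged packets from the data socket. */
        for (;;) {
                len = NgRecvData(dsock, buf, sizeof(buf), hook);
                if (len < 0)
                        err(1, "NgRecvData");
                printf("%d bytes on hook %s\n", len, hook);
        }
}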

I'm wondering whether bumping the recvspace would help, whether
changing the ng_socket hook to queue incoming data would help, whether
it would be best to replace ng_socket with a memory-mapped interface,
or whether anyone has other ideas that would improve performance.
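
On the first option: by "bumping the recvspace" I mean either raising
net.graph.recvspace as above, or growing the buffer on just our data
socket with SO_RCVBUF, roughly like the untested helper below (dsock
would be the data socket from the sketch above; the request is still
limited by kern.ipc.maxsockbuf).  Whether that actually buys anything
here is exactly what I'm unsure about.

#include <sys/types.h>
#include <sys/socket.h>
#include <err.h>

/* Try to enlarge the receive buffer on the ng_socket data socket. */
static void
grow_rcvbuf(int dsock, int bytes)
{
        if (setsockopt(dsock, SOL_SOCKET, SO_RCVBUF,
            &bytes, sizeof(bytes)) < 0)
                warn("setsockopt(SO_RCVBUF)");
}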

Thanks in advance for any advice,
Guy Helmer

-- 
Guy Helmer, Ph.D.
Principal System Architect
Palisade Systems, Inc.


