From: Guy Helmer <ghelmer@palisadesys.com>
Date: Fri, 04 Feb 2005 11:03:31 -0600
To: freebsd-net@freebsd.org
Subject: Netgraph performance question
Message-ID: <4203AAE3.4090906@palisadesys.com>

A while back, Maxim Konovalov committed a change to usr.sbin/ngctl/main.c
that increases its socket receive buffer size so that 'ngctl list' can cope
with a large number of nodes, and Ruslan Ermilov responded that setting the
sysctls net.graph.recvspace=200000 and net.graph.maxdgram=200000 was a good
idea on a system with a large number of nodes.

I'm getting what I consider to be sub-par performance under FreeBSD 5.3 from
a userland program that uses ng_socket nodes connected to an ng_tee to play
with packets traversing an ng_bridge, and I finally have an opportunity to
look into this. I say "sub-par" because when we tested this configuration
using three 2.8GHz Xeon machines with Gigabit Ethernet interfaces at
1000Mbps full-duplex, we measured peak throughput of about 12MB/sec for a
single TCP stream through the bridging machine, as reported by both NetPIPE
and netperf.

I'm wondering whether bumping net.graph.recvspace would help, whether
changing the ng_socket hook to queue incoming data would help, whether it
would be best to replace ng_socket with a memory-mapped interface, or
whether anyone has other ideas that would improve performance. (I've
appended a couple of rough sketches below my signature to make the question
concrete.)

Thanks in advance for any advice,
Guy Helmer

--
Guy Helmer, Ph.D.
Principal System Architect
Palisade Systems, Inc.
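
For reference, the first thing I tried was bumping the two sysctls Ruslan
mentioned. That's a one-liner with sysctl(8), but here is a minimal C sketch
of doing it programmatically via sysctlbyname(3), in case anyone wants to
fold it into a test harness. The 200000 value is just the figure from the
earlier thread, not a tuned number, and the size query is there because I'm
not assuming the width of these oids; it must run as root.

	#include <sys/types.h>
	#include <sys/sysctl.h>
	#include <err.h>
	#include <stdio.h>

	static void
	set_sysctl(const char *name, unsigned long value)
	{
		unsigned long lval = value;
		unsigned int ival = (unsigned int)value;
		size_t len;

		/* Ask for the oid's size so the new value has the right width. */
		if (sysctlbyname(name, NULL, &len, NULL, 0) == -1)
			err(1, "sysctlbyname(%s)", name);
		if (len == sizeof(ival)) {
			if (sysctlbyname(name, NULL, NULL, &ival, sizeof(ival)) == -1)
				err(1, "set %s", name);
		} else {
			if (sysctlbyname(name, NULL, NULL, &lval, sizeof(lval)) == -1)
				err(1, "set %s", name);
		}
		printf("%s=%lu\n", name, value);
	}

	int
	main(void)
	{
		/* Values suggested in the earlier thread, not tuned figures. */
		set_sysctl("net.graph.recvspace", 200000);
		set_sysctl("net.graph.maxdgram", 200000);
		return (0);
	}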
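
And here is roughly the shape of the userland side I'm describing: an
ng_socket node whose data socket is connected to one of the tee's hooks,
with SO_RCVBUF raised on the data socket the way Maxim's ngctl change raised
it for the control socket. This is a minimal sketch, not our actual code:
the node path "tee0:", the hook names, and the 2MB buffer are placeholders,
and how the tee is spliced into the bridge path is elided. Build with
"cc -o ngtap ngtap.c -lnetgraph".

	#include <sys/types.h>
	#include <sys/socket.h>
	#include <err.h>
	#include <string.h>
	#include <netgraph.h>
	#include <netgraph/ng_message.h>

	#define TEE_PATH "tee0:"	/* placeholder: path to the ng_tee node */
	#define TEE_HOOK "left2right"	/* placeholder: which tee hook to tap */
	#define OUR_HOOK "tap"
	#define RCVBUF	 (2 * 1024 * 1024) /* placeholder; capped by kern.ipc.maxsockbuf */

	int
	main(void)
	{
		struct ngm_connect cn;
		u_char buf[65536];
		char hook[64];		/* >= NG_HOOKLEN + 1 */
		int cs, ds, len, rcv = RCVBUF;

		/* Create an ng_socket node: cs is the control socket,
		 * ds is the data socket. */
		if (NgMkSockNode(NULL, &cs, &ds) < 0)
			err(1, "NgMkSockNode");

		/* Enlarge the data socket's receive buffer, as ngctl now
		 * does for its control socket. */
		if (setsockopt(ds, SOL_SOCKET, SO_RCVBUF, &rcv, sizeof(rcv)) < 0)
			warn("setsockopt(SO_RCVBUF)");

		/* Connect our data hook to the tee. */
		strlcpy(cn.path, TEE_PATH, sizeof(cn.path));
		strlcpy(cn.ourhook, OUR_HOOK, sizeof(cn.ourhook));
		strlcpy(cn.peerhook, TEE_HOOK, sizeof(cn.peerhook));
		if (NgSendMsg(cs, ".:", NGM_GENERIC_COOKIE, NGM_CONNECT,
		    &cn, sizeof(cn)) < 0)
			err(1, "NGM_CONNECT");

		/* Read each frame from the tee, optionally play with it,
		 * and write it back out the hook it arrived on. */
		for (;;) {
			len = NgRecvData(ds, buf, sizeof(buf), hook);
			if (len < 0)
				err(1, "NgRecvData");
			/* ... inspect/rewrite buf[0..len) here ... */
			if (NgSendData(ds, hook, buf, len) < 0)
				err(1, "NgSendData");
		}
	}

The question stands regardless of the sketch: even with both buffers
enlarged, every packet still takes two trips across the user/kernel boundary
through the ng_socket data socket, which is why I'm asking about queueing on
the hook or a memory-mapped alternative.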