From owner-freebsd-net@FreeBSD.ORG Mon Jul 7 16:20:10 2008
Message-ID: <48724238.2020103@freebsd.org>
Date: Mon, 07 Jul 2008 18:20:08 +0200
From: Andre Oppermann <andre@freebsd.org>
To: Bruce Evans
Cc: FreeBSD Net <freebsd-net@freebsd.org>, Bart Van Kerckhove,
    Ingo Flaschberger, Paul
In-Reply-To: <20080708002228.G680@besplex.bde.org>
Subject: Re: Freebsd IP Forwarding performance (question, and some info)
    [7-stable, current, em, smp]

Bruce Evans wrote:
> On Mon, 7 Jul 2008, Andre Oppermann wrote:
>
>> Paul,
>>
>> to get a systematic analysis of the performance please do the following
>> tests and put them into a table for easy comparison:
>>
>> 1. inbound pps w/o loss with interface in monitor mode
>>    (ifconfig em0 monitor)
>> ...
>
> I won't be running many of these tests, but found this one interesting --
> I didn't know about monitor mode.  It gives the following behaviour:
>
> -monitor ttcp receiving on bge0 at 397 kpps: 35% idle (8.0-CURRENT) 13.6 cm/p
>  monitor ttcp receiving on bge0 at 397 kpps: 83% idle (8.0-CURRENT)  5.8 cm/p
> -monitor ttcp receiving on em0  at 580 kpps:  5% idle (~5.2)        12.5 cm/p
>  monitor ttcp receiving on em0  at 580 kpps: 65% idle (~5.2)         4.8 cm/p
>
> cm/p = k8-dc-misses (bge0 system)
> cm/p = k7-dc-misses (em0 system)
>
> So it seems that the major overheads are not near the driver (as I already
> knew), and upper layers are responsible for most of the cache misses.
> The packet header is accessed even in monitor mode, so I think most of
> the cache misses in upper layers are not related to the packet header.
> Maybe they are due mainly to perfect non-locality for mbufs.

Monitor mode doesn't access the payload packet header.  It only looks at
the mbuf (which carries a structure called the mbuf packet header).
The mbuf header is hot in the cache because the driver just touched it
and filled in the information.  The packet content (the payload) is cold
and just arrived via DMA in DRAM.

-- 
Andre
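To make the hot/cold split concrete, here is a minimal C sketch against the
standard FreeBSD mbuf(9) API.  The function names count_monitor() and
peek_ip_proto() are invented for illustration and are not code from this
thread: the first reads only m_pkthdr metadata the driver just wrote (cache
hit), while the second dereferences the payload through mtod() and takes the
cache miss on the DMA'd data.

/*
 * Minimal sketch, not code from this thread: count_monitor() and
 * peek_ip_proto() are invented names; the mbuf fields and macros are
 * the standard FreeBSD mbuf(9) API.
 */
#include <sys/param.h>
#include <sys/mbuf.h>
#include <netinet/in.h>
#include <netinet/ip.h>

/*
 * Roughly what monitor mode touches per packet: only mbuf metadata.
 * m_pkthdr was just filled in by the driver, so this load hits the
 * CPU cache.
 */
static int
count_monitor(struct mbuf *m)
{

	return (m->m_pkthdr.len);	/* metadata only, payload untouched */
}

/*
 * Normal input processing has to look into the payload.  The first
 * access through mtod() reads a cache line that was written by DMA
 * straight into DRAM, so it is almost guaranteed to miss.
 */
static uint8_t
peek_ip_proto(struct mbuf *m)
{
	struct ip *ip;

	if (m->m_len < (int)sizeof(struct ip) &&
	    (m = m_pullup(m, sizeof(struct ip))) == NULL)
		return (0);		/* m_pullup() freed the chain */
	ip = mtod(m, struct ip *);	/* first touch of the DMA'd payload */
	return (ip->ip_p);
}

This is consistent with the numbers quoted above: once the payload is never
touched, the measured data-cache misses per packet drop from roughly 13 to
roughly 5.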