Date: Sun, 24 Dec 2006 12:39:38 +1100 (EST)
From: Bruce Evans
To: Robert Watson
Cc: cvs-src@freebsd.org, Scott Long, src-committers@freebsd.org,
    cvs-all@freebsd.org, John Polstra
Subject: Re: cvs commit: src/sys/dev/bge if_bge.c
In-Reply-To: <20061223213014.U35809@fledge.watson.org>
Message-ID: <20061224120307.P24444@delplex.bde.org>
References: <20061223213014.U35809@fledge.watson.org>

On Sat, 23 Dec 2006, Robert Watson wrote:

> On Sat, 23 Dec 2006, John Polstra wrote:
>
>>> That said, dropping and regrabbing the driver lock in the rxeof routine
>>> of any driver is bad.  It may be safe to do, but it incurs horrible
>>> performance penalties.  It essentially allows the time-critical, high
>>> priority RX path to be constantly preempted by the lower priority
>>> if_start or if_ioctl paths.  Even without this preemption and priority
>>> inversion, you're doing an excessive number of expensive lock ops in
>>> the fast path.

It's not very time-critical or high priority for bge or any other device
that has a reasonably large rx ring.  With a ring size of 512 and an rx
interrupt occurring not too near the end (say at the halfway point), you
have 256 packet times to finish processing the interrupt.  For normal
1518-byte packets at 1Gbps, 256 packet times is about 3 ms.  bge's rx ring
size is actually larger than 512 for most hardware.

>> We currently make this a lot worse than it needs to be by handing off
>> the received packets one at a time, unlocking and relocking for every
>> packet.  It would be better if the driver's receive interrupt handler
>> would harvest all of the incoming packets and queue them locally.  Then,
>> at the end, hand off the linked list of packets to the network stack
>> wholesale, unlocking and relocking only once.  (Actually, the list could
>> probably be handed off at the very end of the interrupt service routine,
>> after the driver has already dropped its lock.)  We wouldn't even need a
>> new primitive, if ether_input() and the other if_input() functions were
>> enhanced to deal with a possible list of packets instead of just a
>> single one.

Do a bit more than that and you have reinvented fast interrupt handling
:-).  However, with large buffers the complications of fast interrupt
handling are not really needed.
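To make the batching concrete, a minimal sketch along the lines John
describes might look like the following.  This is not the real bge_rxeof();
the xx_* names, the XX_LOCK()/XX_UNLOCK() macros and the xx_rx_next()
ring-harvest helper are placeholders.  The harvested mbufs are collected on
an m_nextpkt chain and handed to the stack with the driver lock dropped and
retaken only once:

	#include <sys/param.h>
	#include <sys/mbuf.h>
	#include <net/if.h>
	#include <net/if_var.h>

	/* All xx_* names are placeholders, not the actual bge(4) code. */
	static void
	xx_rxeof(struct xx_softc *sc)
	{
		struct ifnet *ifp = sc->xx_ifp;
		struct mbuf *m, *head, *tail;

		XX_LOCK_ASSERT(sc);
		head = tail = NULL;

		/* Harvest the whole rx ring while the driver lock is held. */
		while ((m = xx_rx_next(sc)) != NULL) {
			m->m_nextpkt = NULL;
			if (head == NULL)
				head = m;
			else
				tail->m_nextpkt = m;
			tail = m;
		}

		/* Hand the chain to the stack with the lock dropped, once. */
		XX_UNLOCK(sc);
		while ((m = head) != NULL) {
			head = m->m_nextpkt;
			m->m_nextpkt = NULL;
			(*ifp->if_input)(ifp, m);
		}
		XX_LOCK(sc);
	}

If ether_input() and the other if_input() routines were taught to take such
an m_nextpkt chain directly, the second loop would collapse to a single
call, which is the enhancement John suggests.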
A fast interrupt handler would queue all the packets (taking care not to be
blocked by normal spinlocks etc., unlike the "fast" interrupt handlers in
-current) and then schedule a low[er] priority thread to finish the
handling.  With large buffers, the lower priority thread can just be
scheduled immediately.

> I try this experiment every few years, and generally don't measure much
> improvement.  I'll try it again with 10Gbps early next year once back in
> the office again.  The more interesting transition is between the link
> layer and the network layer, which is high on my list of topics to look
> into in the next few weeks.  In particular, reworking the ifqueue
> handoff.  The tricky bit is balancing latency, overhead, and
> concurrency...

These are very unbalanced now, so you don't have to worry about breaking
the balance :-).  I normally unbalance them to optimize latency (45-60 us
ping latency).

Bruce
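For the fast-handler split described above, a rough sketch might be the
following; again every xx_* name is a placeholder, and
xx_schedule_rx_thread() stands in for whatever mechanism (a fast taskqueue,
a swi, a dedicated ithread) actually wakes the lower priority half:

	/* Same headers as the earlier sketch; all xx_* names are placeholders. */
	static void
	xx_intr_fast(void *arg)		/* registered as a "fast" handler */
	{
		struct xx_softc *sc = arg;
		struct mbuf *m;

		/* Only the driver-private spin lock is taken here. */
		mtx_lock_spin(&sc->xx_rxq_mtx);
		while ((m = xx_rx_next(sc)) != NULL) {
			m->m_nextpkt = NULL;
			if (sc->xx_rxq_tail != NULL)
				sc->xx_rxq_tail->m_nextpkt = m;
			else
				sc->xx_rxq_head = m;
			sc->xx_rxq_tail = m;
		}
		mtx_unlock_spin(&sc->xx_rxq_mtx);

		/* Placeholder: kick whatever runs xx_rx_deferred(). */
		xx_schedule_rx_thread(sc);
	}

	static void
	xx_rx_deferred(void *arg)	/* runs in the low[er] priority thread */
	{
		struct xx_softc *sc = arg;
		struct ifnet *ifp = sc->xx_ifp;
		struct mbuf *m, *head;

		/* Take the whole queued chain in one go. */
		mtx_lock_spin(&sc->xx_rxq_mtx);
		head = sc->xx_rxq_head;
		sc->xx_rxq_head = sc->xx_rxq_tail = NULL;
		mtx_unlock_spin(&sc->xx_rxq_mtx);

		while ((m = head) != NULL) {
			head = m->m_nextpkt;
			m->m_nextpkt = NULL;
			(*ifp->if_input)(ifp, m);
		}
	}

The spin lock here is private to these two routines, so the fast handler
cannot be held up by if_start() or if_ioctl() holding the ordinary driver
mutex, which is the priority inversion complained about above.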