From: Andre Oppermann <oppermann@networx.ch>
Date: Tue, 04 Dec 2012 21:02:16 +0100
To: Adrian Chadd
Cc: Barney Cordoba, John Baldwin, freebsd-net@freebsd.org
Subject: Re: Latency issues with buf_ring

On 04.12.2012 20:34, Adrian Chadd wrote:
> .. and it's important to note that buf_ring itself doesn't have the
> race condition; it's the general driver implementation that's racy.
>
> I have the same races in ath(4) with the watchdog programming. Exactly
> the same issue.

Our IF_* stack/driver boundary handoff isn't up to the task anymore, and
the interactions are either poorly defined or poorly understood in many
places.

I've had a few chats with yongari@ and am experimenting with a
modernized interface in my branch. The reason I stumbled across this is
that I'm extending the hardware offload feature set and found that the
stack and the drivers (and the drivers among themselves) are not really
in sync with regard to behavior.

For most if not all Ethernet drivers from 100 Mbit/s upwards, the TX DMA
rings are so large that buffering at the IFQ level no longer makes sense
and only adds latency. The stack could simply place everything directly
into the TX DMA ring and not even try to soft-queue. If the TX DMA ring
is full, ENOBUFS is returned instead of filling yet another queue.
However, there are ALTQ interactions and other mechanisms that have to
be considered as well, which makes this a bit more involved.

I'm coming up with a draft and some benchmark results for an updated
stack/driver boundary in the next weeks, before xmas.

-- 
Andre
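
A minimal sketch of the driver-level handoff race discussed above:
buf_ring itself stays consistent, but the enqueue-then-trylock pattern
many if_transmit implementations use leaves a window where a packet
sits in the ring with nobody scheduled to drain it. All foo_* names
here are hypothetical, not taken from any real driver.

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/mbuf.h>
#include <sys/buf_ring.h>
#include <net/if.h>
#include <net/if_var.h>

struct foo_softc {
	struct mtx	 foo_tx_mtx;	/* protects the TX DMA ring */
	struct buf_ring	*foo_br;	/* software staging ring */
	/* ... hardware TX ring state ... */
};

static void	foo_start_locked(struct foo_softc *);	/* drains foo_br into TX DMA */

static int
foo_transmit(struct ifnet *ifp, struct mbuf *m)
{
	struct foo_softc *sc = ifp->if_softc;
	int error;

	/* Step 1: enqueue onto the per-queue buf_ring. */
	error = drbr_enqueue(ifp, sc->foo_br, m);
	if (error != 0)
		return (error);

	/*
	 * Step 2: try to drain.  If another thread already holds the
	 * TX lock we rely on it to pick up the new mbuf, but it may be
	 * past its last look at the ring and about to unlock, leaving
	 * the packet parked until the next transmit or the watchdog
	 * fires.  The window is here, in the driver, not inside
	 * buf_ring.
	 */
	if (mtx_trylock(&sc->foo_tx_mtx)) {
		foo_start_locked(sc);
		mtx_unlock(&sc->foo_tx_mtx);
	}
	return (0);
}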
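
And a sketch of the direct-to-hardware model described in the mail:
if_transmit encapsulates straight into the TX DMA ring and returns
ENOBUFS when the ring is full instead of soft-queueing. It reuses the
hypothetical foo_softc above; foo_encap is likewise made up, the
mbuf-ownership convention on failure (freed here vs. handed back) is
exactly the kind of semantics the updated boundary would have to pin
down, and ALTQ is ignored entirely.

static int	foo_encap(struct foo_softc *, struct mbuf **);	/* loads *m into the TX DMA ring */

static int
foo_transmit_direct(struct ifnet *ifp, struct mbuf *m)
{
	struct foo_softc *sc = ifp->if_softc;
	int error;

	mtx_lock(&sc->foo_tx_mtx);
	/*
	 * TX DMA rings on modern NICs hold hundreds to thousands of
	 * descriptors, so encapsulate straight into the hardware ring
	 * rather than staging in a software queue first.
	 */
	error = foo_encap(sc, &m);
	mtx_unlock(&sc->foo_tx_mtx);

	if (error != 0 && m != NULL) {
		/*
		 * ENOBUFS (ring full) or an encapsulation failure: drop
		 * the chain and report the error so the stack sees
		 * back-pressure instead of an ever-growing soft queue.
		 */
		m_freem(m);
	}
	return (error);
}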