From: Robert Watson <rwatson@FreeBSD.org>
Date: Tue, 12 Jun 2007 20:14:19 +0100 (BST)
To: Alfred Perlstein
Cc: src-committers@freebsd.org, Andre Oppermann, Kip Macy, cvs-all@freebsd.org, Jack Vogel, cvs-src@freebsd.org, Sam Leffler
Subject: Re: cvs commit: src/sys/net if.h
Message-ID: <20070612201209.D43948@fledge.watson.org>
In-Reply-To: <20070612182306.GQ96936@elvis.mu.org>

On Tue, 12 Jun 2007, Alfred Perlstein wrote:

>>> AFAICT FreeBSD can't currently benefit from this as there is no CPU
>>> affinity for connections.  I may be wrong, but I see lower
>>> single-connection throughput using a receive queue per core than using
>>> a single receive queue.  RSS is done by hashing a TCP tuple (I'm
>>> deliberately vague because at least with cxgb there are multiple
>>> combinations; the default is the standard 4-tuple) to a receive queue.
>>
>> If you're looking at concurrent TCP input processing, the tcbinfo lock
>> is likely one source of overhead due to high contention.  I had hoped
>> to make further progress on this for 7.0 (it's already better than 6.0
>> in a number of ways), but the instability of 7.x over the last month
>> scuttled that project.  It will have to be an 8.0 thing, but perhaps we
>> can look at an MFC if that goes well.  I have some initial prototyping
>> but have been waiting for TCP to settle down again a bit before really
>> digging in.
>
> Robert, have you added placeholder fields to objects that require them
> for support?  This would help the MFC effort.

I'm not yet at a point where I'm comfortable enough with the prototyping
to believe that I have all the right things to put in.  I've had to
refactor the pcbinfo structure, for example.  It would also be nice to
have multiple timer threads so that timers can run on the same CPUs as
the tcpcb is normally processed on.  This is, however, basically a weak
affinity model designed to limit lock contention, not to eliminate the
possibility of cross-CPU execution.  What we need now is a feedback
system so that the scheduler can weight its choices more intelligently.

Robert N M Watson
Computer Laboratory
University of Cambridge