From owner-freebsd-net@FreeBSD.ORG Sun Mar 30 11:22:09 2008
Message-ID: <47EF77DE.6040200@FreeBSD.org>
Date: Sun, 30 Mar 2008 14:22:06 +0300
From: Alexander Motin <mav@FreeBSD.org>
To: Robert Watson
Cc: freebsd-hackers@freebsd.org, FreeBSD Net
Subject: Re: Multiple netgraph threads
In-Reply-To: <20080330112846.Y5921@fledge.watson.org>
References: <47EF4F18.502@FreeBSD.org> <20080330112846.Y5921@fledge.watson.org>

Robert Watson wrote:
> FYI, you might be interested in some similar work I've been doing
> in the rwatson_netisr branch in Perforce, which:
> Adds per-CPU netisr threads

Thanks. From the beginning, netgraph has used the concept of direct
function calls, where the level of parallelism is limited by the data
source. In that situation, multiple netisr threads will give benefits.

> My initial leaning would be that we would like to avoid adding too many
> more threads that will do per-packet work, as that leads to excessive
> context switching.

Netgraph uses queueing only as a last resort, when a direct call is not
possible due to locking or stack limitations. For example, while working
with kernel socket (*upcall)() handlers I ran into many issues that made
it impossible to use the received data freely without queueing, because
the upcall() caller holds locks that lead to unexpected LORs in the
socket/TCP/UDP code. In the case of such forced queueing, the node
becomes an independent data source that can be pinned to and processed
by whatever specialized thread or netisr is able to handle it more
effectively.

--
Alexander Motin
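
As an illustration of the dispatch policy described above, here is a
minimal userland C sketch of the "direct call when safe, queue only as
a last resort" pattern. This is not actual netgraph code; all names
(dispatch(), enqueue_item(), struct item) are hypothetical stand-ins
for the idea: deliver data with a direct function call when the
caller's context allows it, and queue the work for a worker/netisr-like
thread only when the caller holds locks that make a direct call unsafe.

/*
 * Sketch only: not netgraph internals. Models "direct call first,
 * forced queueing as a last resort".
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct item {                        /* one unit of work (packet stand-in) */
	void  (*rcvdata)(void *);    /* node's receive routine */
	void   *data;
	struct item *next;
};

static struct item *queue_head;      /* queue drained by a worker thread */
static pthread_mutex_t queue_mtx = PTHREAD_MUTEX_INITIALIZER;

static void
enqueue_item(struct item *it)
{
	pthread_mutex_lock(&queue_mtx);
	it->next = queue_head;
	queue_head = it;
	pthread_mutex_unlock(&queue_mtx);
}

/*
 * Dispatch entry point: prefer the direct call (no context switch);
 * queue only when the caller's locking state forbids a direct call,
 * as in the socket upcall case described in the mail.
 */
static void
dispatch(struct item *it, bool caller_holds_locks)
{
	if (!caller_holds_locks) {
		it->rcvdata(it->data);       /* direct call */
		free(it);
	} else {
		enqueue_item(it);            /* forced queueing */
	}
}

static void
print_data(void *data)
{
	printf("received: %s\n", (const char *)data);
}

int
main(void)
{
	struct item *it = malloc(sizeof(*it));

	it->rcvdata = print_data;
	it->data = "hello";
	dispatch(it, false);                 /* safe context: direct call */
	return (0);
}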