From owner-freebsd-current@FreeBSD.ORG Sun Oct 8 23:00:05 2006
X-Original-To: freebsd-current@freebsd.org
Delivered-To: freebsd-current@freebsd.org
Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 230E816A407; Sun, 8 Oct 2006 23:00:05 +0000 (UTC) (envelope-from kmacy@fsmware.com)
Received: from demos.bsdclusters.com (demos.bsdclusters.com [69.55.225.36]) by mx1.FreeBSD.org (Postfix) with ESMTP id 871F243D72; Sun, 8 Oct 2006 23:00:04 +0000 (GMT) (envelope-from kmacy@fsmware.com)
Received: from demos.bsdclusters.com (demos [69.55.225.36]) by demos.bsdclusters.com (8.12.8p1/8.12.8) with ESMTP id k98N00lZ032025; Sun, 8 Oct 2006 16:00:01 -0700 (PDT) (envelope-from kmacy@fsmware.com)
Received: from localhost (kmacy@localhost) by demos.bsdclusters.com (8.12.8p1/8.12.8/Submit) with ESMTP id k98N00Za032022; Sun, 8 Oct 2006 16:00:00 -0700 (PDT)
X-Authentication-Warning: demos.bsdclusters.com: kmacy owned process doing -bs
Date: Sun, 8 Oct 2006 15:59:59 -0700 (PDT)
From: Kip Macy
X-X-Sender: kmacy@demos.bsdclusters.com
To: Attilio Rao
In-Reply-To: <3bbf2fe10610081555r67265368sf7f12edbf35bff0d@mail.gmail.com>
Message-ID: <20061008155817.G29803@demos.bsdclusters.com>
References: <2fd864e0610080423q7ba6bdeal656a223e662a5d@mail.gmail.com> <20061008135031.G83537@demos.bsdclusters.com> <4529667D.8070108@fer.hr> <200610090634.31297.davidxu@freebsd.org> <20061008225150.GK793@funkthat.com> <3bbf2fe10610081555r67265368sf7f12edbf35bff0d@mail.gmail.com>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Cc: John-Mark Gurney, freebsd-current@freebsd.org, David Xu, Ivan Voras
Subject: Re: [PATCH] MAXCPU alterable in kernel config - needs testers
X-BeenThere: freebsd-current@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Discussions about the use of FreeBSD-current
X-List-Received-Date: Sun, 08 Oct 2006 23:00:05 -0000
> > How would you see a sched_lock decomposition (and, if it is possible,
> > how many locks it could be decomposed in?)

Rather than having a per-thread lock, Solaris uses the lock of the current container that a thread is associated with (CPU, run queue, sleep queue, etc.) to serialize thread updates. I think this is probably the best approach. A per-process spin lock would not scale well for large multi-threaded apps.

-Kip