From owner-freebsd-arch Mon Nov 12 15:35:51 2001
Delivered-To: freebsd-arch@freebsd.org
Message-ID: <3BF05CFE.EAE5EEE4@mindspring.com>
Date: Mon, 12 Nov 2001 15:36:30 -0800
From: Terry Lambert
Reply-To: tlambert2@mindspring.com
To: John Baldwin
Cc: freebsd-arch@FreeBSD.org, Robert Watson
Subject: Re: cur{thread/proc}, or not.
Sender: owner-freebsd-arch@FreeBSD.ORG

John Baldwin wrote:
> > If so, then no locking is required, since the LOCK CMPXCHG can
> > be utilized to do atomic increment and decrement on the
> > reference counting, without needing locks.
>
> Except that people keep complaining about using atomic ops for
> ref counts, however that can be done later as an optimization.

Is this the MIPS argument?  There is a way around this problem on
brain-damaged processors, one that has been known to CS for a long
time.  A heavy-weight, idempotent-but-not-atomic portable approach
would make these people happy, since then their pet processors would
not look so much like pigs compared to other processors that were
handicapped by having to run the same code.

I don't think of it as a premature optimization so much as a
premature generalization.
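For illustration, a minimal sketch of lock-free reference counting of
the kind being argued about, written with C11 atomics (which compile
down to a LOCK-prefixed increment/decrement on x86); the `struct obj`,
`obj_hold`, and `obj_drop` names are hypothetical, not the actual
FreeBSD crhold()/crfree() code:

```c
#include <stdatomic.h>

/* Hypothetical refcounted object; no mutex protects the count. */
struct obj {
    atomic_uint refcount;
};

/* Take a reference: one atomic increment, no lock acquired. */
static void obj_hold(struct obj *o)
{
    atomic_fetch_add_explicit(&o->refcount, 1, memory_order_relaxed);
}

/* Drop a reference; returns 1 when the caller released the last
 * reference and is therefore responsible for freeing the object. */
static int obj_drop(struct obj *o)
{
    return atomic_fetch_sub_explicit(&o->refcount, 1,
                                     memory_order_acq_rel) == 1;
}
```

The relaxed/acq-rel ordering split is the usual refcount idiom:
acquires need no ordering, but the final release must synchronize
with the destruction of the object.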
If we want to be general, then we should provide C code for all but
the very platform-specific things, since this would be incredibly
more useful for any port attempt than doing P/V idempotent counting.

> Regarding object credentials, I agree, and I thought that this
> was how things were already performed.

Not where the proc or thread is used to reference the cred, though
there is much code that uses the read-only reference.

> > I think this is the wrong direction, but if you wanted to do this,
> > I think that you would need to put the cur* symbols into the per
> > CPU private pages.  This is problematic in the extreme, because it
> > means that you must set these values each time going down, in order
> > to be able to substitute a per CPU global for the stack reference.
>
> Errr, Terry.  Where do you think curthread/curproc lives now?  It's
> _already_ in a per-CPU page.  We set curthread/curproc on each
> context switch.

Yes.  That is Evil Overhead That Must Go Away.  My use of "need" was
probably not emphatic enough -- I should have said "MUST forever
after".  This isn't really very clear without my example, where I do
the processing as the result of an interrupt, rather than in the
context of a process.  :-(.

> > I would much rather that the credentials be object referenced off
> > of non-process, non-thread objects, based on whatever the correct
> > scoping really is, for the security model you want to enforce.  My
> > "accept" example is only one of a class of changes that could
> > facilitate this.
>
> I agree with this.  I think Robert's question wasn't just about
> socket credentials, however; his question was why pass a proc
> pointer (or thread pointer) all the way down the stack that is
> implicitly assumed to be curproc/curthread in several places,
> instead of just using curproc/curthread -- to which your only
> response seems to be to suggest that we "change" to doing something
> that we already do.
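A toy sketch of the object-scoped credential idea being discussed: the
accepting code takes one reference on the credential and hangs it off
the new socket, so deeper layers consult the object rather than
curproc/curthread.  All names here (my_cred, my_socket, cred_hold,
etc.) are illustrative assumptions, not the actual FreeBSD API, and
locking around the count is elided:

```c
#include <stdlib.h>

/* Hypothetical credential with a (non-atomic, for brevity) count. */
struct my_cred {
    int c_refs;
    int c_uid;
};

/* Hypothetical socket holding a read-only credential reference. */
struct my_socket {
    struct my_cred *so_cred;    /* set once, at accept() time */
};

static struct my_cred *cred_hold(struct my_cred *c)
{
    c->c_refs++;
    return c;
}

static void cred_free(struct my_cred *c)
{
    if (--c->c_refs == 0)
        free(c);
}

/* At accept() time: scope the credential to the socket object... */
static void socket_set_cred(struct my_socket *so, struct my_cred *c)
{
    so->so_cred = cred_hold(c);
}

/* ...so a deep layer can check policy without touching curproc. */
static int socket_may_send(const struct my_socket *so)
{
    return so->so_cred->c_uid == 0;     /* toy policy check */
}
```

The point of the sketch is scoping: the policy check at the bottom of
the stack needs no proc or thread pointer passed down to it, and no
per-CPU "cur" global, because the credential travels with the object.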
No; I think that most of the passed references to proc/curproc can be
eliminated.  Now, of course, we will have to deal with the cruft idea
of "curcred"...

I dislike the idea of "cur" anything.  It means that we have to
assume top-down procedural processing, with queueing breaks at both
interrupt and NETISR (to cite specific examples).  Doing this is
demonstrably the wrong thing to do, even if we ignore the global
non-cacheable per CPU page overhead.

If anyone has any reservations about this, I suggest they do some
network performance testing with the Duke University port of the
LRP + RESCON code to FreeBSD 4.3, from the original Rice University
code (before anyone gets too happy, there is a non-commercial use
license on this, and I personally think a queued fair-share scheduler
has significantly lower overhead than resource containers, for what
that's worth).  Your connections-per-second rate alone will triple if
you use this approach.

-- Terry

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-arch" in the body of the message