From owner-freebsd-smp Sun Aug 4 6:30:36 2002 Delivered-To: freebsd-smp@freebsd.org Received: from mx1.FreeBSD.org (mx1.FreeBSD.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 5B85137B400 for ; Sun, 4 Aug 2002 06:30:35 -0700 (PDT) Received: from hotmail.com (f135.pav2.hotmail.com [64.4.37.135]) by mx1.FreeBSD.org (Postfix) with ESMTP id 27C4443E65 for ; Sun, 4 Aug 2002 06:30:35 -0700 (PDT) (envelope-from shaileshvd12@hotmail.com) Received: from mail pickup service by hotmail.com with Microsoft SMTPSVC; Sun, 4 Aug 2002 06:30:35 -0700 Received: from 202.138.113.5 by pv2fd.pav2.hotmail.msn.com with HTTP; Sun, 04 Aug 2002 13:30:34 GMT X-Originating-IP: [202.138.113.5] From: "shailesh dange" To: freebsd-smp@FreeBSD.org Subject: hardware notes Date: Sun, 04 Aug 2002 13:30:34 +0000 Mime-Version: 1.0 Content-Type: text/plain; format=flowed Message-ID: X-OriginalArrivalTime: 04 Aug 2002 13:30:35.0075 (UTC) FILETIME=[1CF84130:01C23BBB] Sender: owner-freebsd-smp@FreeBSD.ORG Precedence: bulk List-ID: List-Archive: (Web Archive) List-Help: (List Instructions) List-Subscribe: List-Unsubscribe: X-Loop: FreeBSD.org hell, I AM SHAILESH DANGE FROM INDIA. I NERVOUS SOME HARDWARE PROBLE YOU CAN SOLVED SO I CAN APPLY FOR SOME NOTES GIVEN TO YOU HOPE YOU CAN GIVE ME FREE. MY PROBLE IS COMPUTER IS ALWAYS HAG AND OUTPUT NOT DISPLAY CLEARY ONE MONITOR. WHAT IS THIS PROBLE SO TELL ME THIS EMAIL ADDRESS shaileshvd12@hotmail.com ok REPLAY TO ME ONE THIS SUBJECT QUICK AS POSIBLE I WAIT FOR YOUR REPLAY MY NEXT REQUEST IS I WANT SOME BEST NOTES ONE HARDWARE PART. YOUR TRULY SHAILESH _________________________________________________________________ Send and receive Hotmail on your mobile device: http://mobile.msn.com To Unsubscribe: send mail to majordomo@FreeBSD.org with "unsubscribe freebsd-smp" in the body of the message From owner-freebsd-smp Sun Aug 4 18:27:15 2002 Delivered-To: freebsd-smp@freebsd.org Received: from mx1.FreeBSD.org (mx1.FreeBSD.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 2A48937B400 for ; Sun, 4 Aug 2002 18:27:14 -0700 (PDT) Received: from iguana.icir.org (iguana.icir.org [192.150.187.36]) by mx1.FreeBSD.org (Postfix) with ESMTP id D2B4D43E4A for ; Sun, 4 Aug 2002 18:27:13 -0700 (PDT) (envelope-from rizzo@iguana.icir.org) Received: (from rizzo@localhost) by iguana.icir.org (8.11.6/8.11.3) id g751RD315085; Sun, 4 Aug 2002 18:27:13 -0700 (PDT) (envelope-from rizzo) Date: Sun, 4 Aug 2002 18:27:13 -0700 From: Luigi Rizzo To: smp@freebsd.org Subject: how to create per-cpu variables in SMP kernels ? Message-ID: <20020804182713.A14944@iguana.icir.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.2.5.1i Sender: owner-freebsd-smp@FreeBSD.ORG Precedence: bulk List-ID: List-Archive: (Web Archive) List-Help: (List Instructions) List-Subscribe: List-Unsubscribe: X-Loop: FreeBSD.org Hi, I would like to know how does the FreeBSD kernel (both in -current and -stable) handle per-cpu variables such as curproc/curthread, cpuid, and maybe more. Is there maybe any linker magic or similar things that can be used to create more of these ? How expensive is to access them compared to regular variables ? 
thanks luigi To Unsubscribe: send mail to majordomo@FreeBSD.org with "unsubscribe freebsd-smp" in the body of the message From owner-freebsd-smp Sun Aug 4 23:45:20 2002 Delivered-To: freebsd-smp@freebsd.org Received: from mx1.FreeBSD.org (mx1.FreeBSD.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 6EA5A37B400 for ; Sun, 4 Aug 2002 23:45:19 -0700 (PDT) Received: from harrier.mail.pas.earthlink.net (harrier.mail.pas.earthlink.net [207.217.120.12]) by mx1.FreeBSD.org (Postfix) with ESMTP id 1508143E65 for ; Sun, 4 Aug 2002 23:45:19 -0700 (PDT) (envelope-from tlambert2@mindspring.com) Received: from pool0179.cvx40-bradley.dialup.earthlink.net ([216.244.42.179] helo=mindspring.com) by harrier.mail.pas.earthlink.net with esmtp (Exim 3.33 #1) id 17bbcD-0001Wn-00; Sun, 04 Aug 2002 23:45:18 -0700 Message-ID: <3D4E1ECB.348978D1@mindspring.com> Date: Sun, 04 Aug 2002 23:44:27 -0700 From: Terry Lambert X-Mailer: Mozilla 4.79 [en] (Win98; U) X-Accept-Language: en MIME-Version: 1.0 To: Luigi Rizzo Cc: smp@freebsd.org Subject: Re: how to create per-cpu variables in SMP kernels ? References: <20020804182713.A14944@iguana.icir.org> Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Sender: owner-freebsd-smp@FreeBSD.ORG Precedence: bulk List-ID: List-Archive: (Web Archive) List-Help: (List Instructions) List-Subscribe: List-Unsubscribe: X-Loop: FreeBSD.org Luigi Rizzo wrote: > I would like to know how does the FreeBSD kernel (both in -current > and -stable) handle per-cpu variables such as curproc/curthread, cpuid, > and maybe more. Is there maybe any linker magic or similar things > that can be used to create more of these ? It puts them in a seperate per CPU page and or a per-CPU register. > How expensive is to access them compared to regular variables ? Depends on the specific variable's implementation. If you are asking because you want to add one, then don't. 8-). They damage symmetry (obviously). -- Terry To Unsubscribe: send mail to majordomo@FreeBSD.org with "unsubscribe freebsd-smp" in the body of the message From owner-freebsd-smp Mon Aug 5 1:53:45 2002 Delivered-To: freebsd-smp@freebsd.org Received: from mx1.FreeBSD.org (mx1.FreeBSD.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 78BE037B400 for ; Mon, 5 Aug 2002 01:53:42 -0700 (PDT) Received: from iguana.icir.org (iguana.icir.org [192.150.187.36]) by mx1.FreeBSD.org (Postfix) with ESMTP id 23A5A43E5E for ; Mon, 5 Aug 2002 01:53:42 -0700 (PDT) (envelope-from rizzo@iguana.icir.org) Received: (from rizzo@localhost) by iguana.icir.org (8.11.6/8.11.3) id g758re717732; Mon, 5 Aug 2002 01:53:40 -0700 (PDT) (envelope-from rizzo) Date: Mon, 5 Aug 2002 01:53:40 -0700 From: Luigi Rizzo To: Terry Lambert Cc: smp@freebsd.org Subject: Re: how to create per-cpu variables in SMP kernels ? 
Message-ID: <20020805015340.A17716@iguana.icir.org> References: <20020804182713.A14944@iguana.icir.org> <3D4E1ECB.348978D1@mindspring.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.2.5.1i In-Reply-To: <3D4E1ECB.348978D1@mindspring.com>; from tlambert2@mindspring.com on Sun, Aug 04, 2002 at 11:44:27PM -0700 Sender: owner-freebsd-smp@FreeBSD.ORG Precedence: bulk List-ID: List-Archive: (Web Archive) List-Help: (List Instructions) List-Subscribe: List-Unsubscribe: X-Loop: FreeBSD.org On Sun, Aug 04, 2002 at 11:44:27PM -0700, Terry Lambert wrote: > > I would like to know how does the FreeBSD kernel (both in -current > > and -stable) handle per-cpu variables such as curproc/curthread, cpuid, ... > > How expensive is to access them compared to regular variables ? > > Depends on the specific variable's implementation. If you are asking > because you want to add one, then don't. 8-). They damage symmetry i am asking because in the code I see several instance of things like p = curproc; in a context where curproc is not supposed to change. Is there a performance bonus in doing this, or not ? cheers luigi To Unsubscribe: send mail to majordomo@FreeBSD.org with "unsubscribe freebsd-smp" in the body of the message From owner-freebsd-smp Mon Aug 5 1:56: 1 2002 Delivered-To: freebsd-smp@freebsd.org Received: from mx1.FreeBSD.org (mx1.FreeBSD.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 7E01937B401 for ; Mon, 5 Aug 2002 01:55:54 -0700 (PDT) Received: from falcon.mail.pas.earthlink.net (falcon.mail.pas.earthlink.net [207.217.120.74]) by mx1.FreeBSD.org (Postfix) with ESMTP id 06A0643E65 for ; Mon, 5 Aug 2002 01:55:54 -0700 (PDT) (envelope-from tlambert2@mindspring.com) Received: from pool0081.cvx22-bradley.dialup.earthlink.net ([209.179.198.81] helo=mindspring.com) by falcon.mail.pas.earthlink.net with esmtp (Exim 3.33 #1) id 17bdeW-0000m3-00; Mon, 05 Aug 2002 01:55:48 -0700 Message-ID: <3D4E3D55.38ECB8E@mindspring.com> Date: Mon, 05 Aug 2002 01:54:45 -0700 From: Terry Lambert X-Mailer: Mozilla 4.79 [en] (Win98; U) X-Accept-Language: en MIME-Version: 1.0 To: Luigi Rizzo Cc: smp@freebsd.org Subject: Re: how to create per-cpu variables in SMP kernels ? References: <20020804182713.A14944@iguana.icir.org> <3D4E1ECB.348978D1@mindspring.com> <20020805015340.A17716@iguana.icir.org> Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Sender: owner-freebsd-smp@FreeBSD.ORG Precedence: bulk List-ID: List-Archive: (Web Archive) List-Help: (List Instructions) List-Subscribe: List-Unsubscribe: X-Loop: FreeBSD.org Luigi Rizzo wrote: > i am asking because in the code I see several instance of things like > > p = curproc; > > > in a context where curproc is not supposed to change. Is there a > performance bonus in doing this, or not ? Yes. Look at the definition of curproc. 
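For reference, the definition chain being pointed at looks roughly like this in -current at the time (an approximate sketch; the real macro bodies live in sys/proc.h and sys/i386/include/pcpu.h, and the full expansion is shown later in this thread):

#define curproc    (curthread->td_proc)    /* sys/proc.h */
#define curthread  PCPU_GET(curthread)     /* pcpu headers (approximate) */

/*
 * PCPU_GET(curthread) expands to inline assembler that reads the
 * pc_curthread member of the local struct pcpu through an %fs-relative
 * address, so every textual use of curproc costs at least one
 * segment-override memory reference.
 */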
-- Terry To Unsubscribe: send mail to majordomo@FreeBSD.org with "unsubscribe freebsd-smp" in the body of the message From owner-freebsd-smp Mon Aug 5 2: 2:43 2002 Delivered-To: freebsd-smp@freebsd.org Received: from mx1.FreeBSD.org (mx1.FreeBSD.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 58DF137B400 for ; Mon, 5 Aug 2002 02:02:41 -0700 (PDT) Received: from iguana.icir.org (iguana.icir.org [192.150.187.36]) by mx1.FreeBSD.org (Postfix) with ESMTP id 1681243E3B for ; Mon, 5 Aug 2002 02:02:41 -0700 (PDT) (envelope-from rizzo@iguana.icir.org) Received: (from rizzo@localhost) by iguana.icir.org (8.11.6/8.11.3) id g7592eB17942; Mon, 5 Aug 2002 02:02:40 -0700 (PDT) (envelope-from rizzo) Date: Mon, 5 Aug 2002 02:02:40 -0700 From: Luigi Rizzo To: Terry Lambert Cc: smp@freebsd.org Subject: Re: how to create per-cpu variables in SMP kernels ? Message-ID: <20020805020239.B17716@iguana.icir.org> References: <20020804182713.A14944@iguana.icir.org> <3D4E1ECB.348978D1@mindspring.com> <20020805015340.A17716@iguana.icir.org> <3D4E3D55.38ECB8E@mindspring.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.2.5.1i In-Reply-To: <3D4E3D55.38ECB8E@mindspring.com>; from tlambert2@mindspring.com on Mon, Aug 05, 2002 at 01:54:45AM -0700 Sender: owner-freebsd-smp@FreeBSD.ORG Precedence: bulk List-ID: List-Archive: (Web Archive) List-Help: (List Instructions) List-Subscribe: List-Unsubscribe: X-Loop: FreeBSD.org On Mon, Aug 05, 2002 at 01:54:45AM -0700, Terry Lambert wrote: ... > > i am asking because in the code I see several instance of things like > > > > p = curproc; > > > > > > in a context where curproc is not supposed to change. Is there a > > performance bonus in doing this, or not ? > > Yes. Look at the definition of curproc. which was my question in the first place :) From what i have seen, in -current at least, reading one of these variables merely means accessing them through %fs, but other than that they look no different than other globals. Am i wrong ? (in fact, i wonder if the code couldn't be made a little bit smarter -- if as you say these variables are in a specially mapped page, they should look just the same as others, don't they ? cheers luigi To Unsubscribe: send mail to majordomo@FreeBSD.org with "unsubscribe freebsd-smp" in the body of the message From owner-freebsd-smp Mon Aug 5 2:35:25 2002 Delivered-To: freebsd-smp@freebsd.org Received: from mx1.FreeBSD.org (mx1.FreeBSD.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id AF25337B400 for ; Mon, 5 Aug 2002 02:35:21 -0700 (PDT) Received: from snipe.mail.pas.earthlink.net (snipe.mail.pas.earthlink.net [207.217.120.62]) by mx1.FreeBSD.org (Postfix) with ESMTP id 514BA43E42 for ; Mon, 5 Aug 2002 02:35:21 -0700 (PDT) (envelope-from tlambert2@mindspring.com) Received: from pool0028.cvx22-bradley.dialup.earthlink.net ([209.179.198.28] helo=mindspring.com) by snipe.mail.pas.earthlink.net with esmtp (Exim 3.33 #1) id 17beGh-00079m-00; Mon, 05 Aug 2002 02:35:16 -0700 Message-ID: <3D4E4690.A468DDC8@mindspring.com> Date: Mon, 05 Aug 2002 02:34:08 -0700 From: Terry Lambert X-Mailer: Mozilla 4.79 [en] (Win98; U) X-Accept-Language: en MIME-Version: 1.0 To: Luigi Rizzo Cc: smp@freebsd.org Subject: Re: how to create per-cpu variables in SMP kernels ? 
References: <20020804182713.A14944@iguana.icir.org> <3D4E1ECB.348978D1@mindspring.com> <20020805015340.A17716@iguana.icir.org> <3D4E3D55.38ECB8E@mindspring.com> <20020805020239.B17716@iguana.icir.org> Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Sender: owner-freebsd-smp@FreeBSD.ORG Precedence: bulk List-ID: List-Archive: (Web Archive) List-Help: (List Instructions) List-Subscribe: List-Unsubscribe: X-Loop: FreeBSD.org Luigi Rizzo wrote: > > > i am asking because in the code I see several instance of things like > > > > > > p = curproc; > > > > > > > > > in a context where curproc is not supposed to change. Is there a > > > performance bonus in doing this, or not ? > > > > Yes. Look at the definition of curproc. > > which was my question in the first place :) > From what i have seen, in -current at least, reading one of these > variables merely means accessing them through %fs, but other than that > they look no different than other globals. Am i wrong ? > (in fact, i wonder if the code couldn't be made a little bit > smarter -- if as you say these variables are in a specially > mapped page, they should look just the same as others, don't they ? Yes and no. There are cache effects. Loading it into a register, which "p = curproc;" will most likely do, is more effecient than an indirect, in any case. The main cache effect would only hit you on a large enough function where your TLB got reloaded. The per CPU page is a seperate page, so it's technically not going to be in the L2, if you migrate a process between CPUs, even if you otherwise have enough locality for it to be. The reason that it's only an impact in a large enough function is that you're going to take the hit no matter what executing the "p = curproc;". If you are talking about -current, the thread structure has replaced the proc, really, since it's possible for a proc to be running on more than one CPU at a time, with different threads. In the 4.x case, the proc is not shared between CPUs. Probably, it'd be worth it to establish a mapping for the page containing the stuff that's transiently per CPU, while a KSE is bound to a CPU. The tradeoff for any per-CPU pages are that they're subtracted from the overall common KVA space. Ideally, the only difference would be the contents of a single register -- or each CPU would truly have non-contended unshared memory. Are you in the process of looking at Hyperthreading issues? 
-- Terry To Unsubscribe: send mail to majordomo@FreeBSD.org with "unsubscribe freebsd-smp" in the body of the message From owner-freebsd-smp Mon Aug 5 10:29:44 2002 Delivered-To: freebsd-smp@freebsd.org Received: from mx1.FreeBSD.org (mx1.FreeBSD.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id CD4FA37B423 for ; Mon, 5 Aug 2002 10:29:33 -0700 (PDT) Received: from sccrmhc01.attbi.com (sccrmhc01.attbi.com [204.127.202.61]) by mx1.FreeBSD.org (Postfix) with ESMTP id 4E0494413C for ; Mon, 5 Aug 2002 10:20:32 -0700 (PDT) (envelope-from julian@elischer.org) Received: from InterJet.elischer.org ([12.232.206.8]) by sccrmhc01.attbi.com (InterMail vM.4.01.03.27 201-229-121-127-20010626) with ESMTP id <20020805172010.OGGU23732.sccrmhc01.attbi.com@InterJet.elischer.org>; Mon, 5 Aug 2002 17:20:10 +0000 Received: from localhost (localhost.elischer.org [127.0.0.1]) by InterJet.elischer.org (8.9.1a/8.9.1) with ESMTP id KAA64952; Mon, 5 Aug 2002 10:17:11 -0700 (PDT) Date: Mon, 5 Aug 2002 10:17:10 -0700 (PDT) From: Julian Elischer To: Luigi Rizzo Cc: Terry Lambert , smp@freebsd.org Subject: Re: how to create per-cpu variables in SMP kernels ? In-Reply-To: <20020805015340.A17716@iguana.icir.org> Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII Sender: owner-freebsd-smp@FreeBSD.ORG Precedence: bulk List-ID: List-Archive: (Web Archive) List-Help: (List Instructions) List-Subscribe: List-Unsubscribe: X-Loop: FreeBSD.org On Mon, 5 Aug 2002, Luigi Rizzo wrote: > On Sun, Aug 04, 2002 at 11:44:27PM -0700, Terry Lambert wrote: > > > I would like to know how does the FreeBSD kernel (both in -current > > > and -stable) handle per-cpu variables such as curproc/curthread, cpuid, > ... > > > How expensive is to access them compared to regular variables ? > > > > Depends on the specific variable's implementation. If you are asking > > because you want to add one, then don't. 8-). They damage symmetry > > i am asking because in the code I see several instance of things like > > p = curproc; > peter has said a couple of times that the per-cpu implelemntation might be slower on some architectures.. It was once said that this was true for x86 but I don't know if I believe that. > > in a context where curproc is not supposed to change. Is there a > performance bonus in doing this, or not ? > > cheers > luigi > > To Unsubscribe: send mail to majordomo@FreeBSD.org > with "unsubscribe freebsd-smp" in the body of the message > To Unsubscribe: send mail to majordomo@FreeBSD.org with "unsubscribe freebsd-smp" in the body of the message From owner-freebsd-smp Mon Aug 5 15:14:21 2002 Delivered-To: freebsd-smp@freebsd.org Received: from mx1.FreeBSD.org (mx1.FreeBSD.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 28A6537B400 for ; Mon, 5 Aug 2002 15:14:16 -0700 (PDT) Received: from canning.wemm.org (canning.wemm.org [192.203.228.65]) by mx1.FreeBSD.org (Postfix) with ESMTP id C923043E42 for ; Mon, 5 Aug 2002 15:14:15 -0700 (PDT) (envelope-from peter@wemm.org) Received: from wemm.org (localhost [127.0.0.1]) by canning.wemm.org (Postfix) with ESMTP id B0C732A7D6; Mon, 5 Aug 2002 15:14:15 -0700 (PDT) (envelope-from peter@wemm.org) X-Mailer: exmh version 2.5 07/13/2001 with nmh-1.0.4 To: Luigi Rizzo Cc: Terry Lambert , smp@freebsd.org Subject: Re: how to create per-cpu variables in SMP kernels ? 
In-Reply-To: <20020805015340.A17716@iguana.icir.org> Date: Mon, 05 Aug 2002 15:14:15 -0700 From: Peter Wemm Message-Id: <20020805221415.B0C732A7D6@canning.wemm.org> Sender: owner-freebsd-smp@FreeBSD.ORG Precedence: bulk List-ID: List-Archive: (Web Archive) List-Help: (List Instructions) List-Subscribe: List-Unsubscribe: X-Loop: FreeBSD.org Luigi Rizzo wrote: > On Sun, Aug 04, 2002 at 11:44:27PM -0700, Terry Lambert wrote: > > > I would like to know how does the FreeBSD kernel (both in -current > > > and -stable) handle per-cpu variables such as curproc/curthread, cpuid, > ... > > > How expensive is to access them compared to regular variables ? > > > > Depends on the specific variable's implementation. If you are asking > > because you want to add one, then don't. 8-). They damage symmetry > > i am asking because in the code I see several instance of things like > > p = curproc; > > > in a context where curproc is not supposed to change. Is there a > performance bonus in doing this, or not ? Sort-of. There is both a compile time issue and a runtime issue. Using the %fs:variable segment overrides doesn't make a lot of difference, but the compiler is effectively wired so that they are treated as volatile. ie: p = curproc; foo(curproc); bar(curproc); return curproc; .. will cause *4* memory references with segment overrides. However: p = curproc; foo(p); bar(p); return p; .. will use *1*. Actually, this isn't quite correct on -current since there isn't a curproc percpu variable. It is really: #define curproc (curthread->td_proc) so the example above has actually got 8 memory references vs 2. Sure, you will probably hit L1 cache, but there is no guarantee of that. In the 'p' cases, it will probably end up as a register, but that is up to the compiler to figure out the best use of resources. Secondly, there is a compile time issue. "curproc" and "curthread" expand to monster macros that the compiler has to untangle and optimize. It contributes to compile time and memory to represent it in the rtl tree. Minimizing unnecessary overuse of them adds up over time. An example from -current.. This: static __inline int sigonstack(size_t sp) { register struct thread *td = curthread; struct proc *p = td->td_proc; return ((p->p_flag & P_ALTSTACK) ? ((sp - (size_t)p->p_sigstk.ss_sp) < p->p_sigstk.ss_size) : 0); } Becomes: static __inline int sigonstack(size_t sp) { register struct thread *td = ({ __typeof(((struct pcpu *)0)->pc_curthrea d) __result; if (sizeof(__result) == 1) { u_char __b; __asm volatile("movb %%fs: %1,%0" : "=r" (__b) : "m" (*(u_char *)(((size_t)(&((struct pcpu *)0)->pc_curthre ad))))); __result = *(__typeof(((struct pcpu *)0)->pc_curthread) *)&__b; } else if (sizeof(__result) == 2) { u_short __w; __asm volatile("movw %%fs:%1,%0" : "=r " (__w) : "m" (*(u_short *)(((size_t)(&((struct pcpu *)0)->pc_curthread))))); __ result = *(__typeof(((struct pcpu *)0)->pc_curthread) *)&__w; } else if (sizeof( __result) == 4) { u_int __i; __asm volatile("movl %%fs:%1,%0" : "=r" (__i) : "m" (*(u_int *)(((size_t)(&((struct pcpu *)0)->pc_curthread))))); __result = *(__ty peof(((struct pcpu *)0)->pc_curthread) *)&__i; } else { __result = *({ __typeof( ((struct pcpu *)0)->pc_curthread) *__p; __asm volatile("movl %%fs:%1,%0; addl %2 ,%0" : "=r" (__p) : "m" (*(struct pcpu *)(((size_t)(&((struct pcpu *)0)->pc_prvs pace)))), "i" (((size_t)(&((struct pcpu *)0)->pc_curthread)))); __p; }); } __res ult; }); struct proc *p = td->td_proc; return ((p->p_flag & 0x4000000) ? 
((sp - (size_t)p->p_sigstk.ss_sp) < p->p_sigstk.ss_size) : 0); } However, if I change it like this: static __inline int sigonstack(size_t sp) { return ((curproc->p_flag & P_ALTSTACK) ? ((sp - (size_t)curproc->p_sigstk.ss_sp) < curproc->p_sigstk.ss_size) : 0); } it becomes: static __inline int sigonstack(size_t sp) { return (((({ __typeof(((struct pcpu *)0)->pc_curthread) __result; if (si zeof(__result) == 1) { u_char __b; __asm volatile("movb %%fs:%1,%0" : "=r" (__b) : "m" (*(u_char *)(((size_t)(&((struct pcpu *)0)->pc_curthread))))); __result = *(__typeof(((struct pcpu *)0)->pc_curthread) *)&__b; } else if (sizeof(__result ) == 2) { u_short __w; __asm volatile("movw %%fs:%1,%0" : "=r" (__w) : "m" (*(u_ short *)(((size_t)(&((struct pcpu *)0)->pc_curthread))))); __result = *(__typeof (((struct pcpu *)0)->pc_curthread) *)&__w; } else if (sizeof(__result) == 4) { u _int __i; __asm volatile("movl %%fs:%1,%0" : "=r" (__i) : "m" (*(u_int *)(((size _t)(&((struct pcpu *)0)->pc_curthread))))); __result = *(__typeof(((struct pcpu *)0)->pc_curthread) *)&__i; } else { __result = *({ __typeof(((struct pcpu *)0)- >pc_curthread) *__p; __asm volatile("movl %%fs:%1,%0; addl %2,%0" : "=r" (__p) : "m" (*(struct pcpu *)(((size_t)(&((struct pcpu *)0)->pc_prvspace)))), "i" (((si ze_t)(&((struct pcpu *)0)->pc_curthread)))); __p; }); } __result; })->td_proc)-> p_flag & 0x4000000) ? ((sp - (size_t)(({ __typeof(((struct pcpu *)0)->pc_curthread) __resu lt; if (sizeof(__result) == 1) { u_char __b; __asm volatile("movb %%fs:%1,%0" : "=r" (__b) : "m" (*(u_char *)(((size_t)(&((struct pcpu *)0)->pc_curthread))))); __result = *(__typeof(((struct pcpu *)0)->pc_curthread) *)&__b; } else if (sizeo f(__result) == 2) { u_short __w; __asm volatile("movw %%fs:%1,%0" : "=r" (__w) : "m" (*(u_short *)(((size_t)(&((struct pcpu *)0)->pc_curthread))))); __result = *(__typeof(((struct pcpu *)0)->pc_curthread) *)&__w; } else if (sizeof(__result) == 4) { u_int __i; __asm volatile("movl %%fs:%1,%0" : "=r" (__i) : "m" (*(u_int *)(((size_t)(&((struct pcpu *)0)->pc_curthread))))); __result = *(__typeof(((st ruct pcpu *)0)->pc_curthread) *)&__i; } else { __result = *({ __typeof(((struct pcpu *)0)->pc_curthread) *__p; __asm volatile("movl %%fs:%1,%0; addl %2,%0" : "= r" (__p) : "m" (*(struct pcpu *)(((size_t)(&((struct pcpu *)0)->pc_prvspace)))), "i" (((size_t)(&((struct pcpu *)0)->pc_curthread)))); __p; }); } __result; })-> td_proc)->p_sigstk.ss_sp) < (({ __typeof(((struct pcpu *)0)->pc_curthread) __res ult; if (sizeof(__result) == 1) { u_char __b; __asm volatile("movb %%fs:%1,%0" : "=r" (__b) : "m" (*(u_char *)(((size_t)(&((struct pcpu *)0)->pc_curthread))))); __result = *(__typeof(((struct pcpu *)0)->pc_curthread) *)&__b; } else if (size of(__result) == 2) { u_short __w; __asm volatile("movw %%fs:%1,%0" : "=r" (__w) : "m" (*(u_short *)(((size_t)(&((struct pcpu *)0)->pc_curthread))))); __result = *(__typeof(((struct pcpu *)0)->pc_curthread) *)&__w; } else if (sizeof(__result ) == 4) { u_int __i; __asm volatile("movl %%fs:%1,%0" : "=r" (__i) : "m" (*(u_in t *)(((size_t)(&((struct pcpu *)0)->pc_curthread))))); __result = *(__typeof(((s truct pcpu *)0)->pc_curthread) *)&__i; } else { __result = *({ __typeof(((struct pcpu *)0)->pc_curthread) *__p; __asm volatile("movl %%fs:%1,%0; addl %2,%0" : " =r" (__p) : "m" (*(struct pcpu *)(((size_t)(&((struct pcpu *)0)->pc_prvspace)))) , "i" (((size_t)(&((struct pcpu *)0)->pc_curthread)))); __p; }); } __result; })- >td_proc)->p_sigstk.ss_size) : 0); } Also, when you get a syntax error due to a #define 
collision in the middle of that mess, which would you rather be trying to debug the preprocessor output from? Cheers, -Peter -- Peter Wemm - peter@wemm.org; peter@FreeBSD.org; peter@yahoo-inc.com "All of this is for nothing if we don't go to the stars" - JMS/B5 To Unsubscribe: send mail to majordomo@FreeBSD.org with "unsubscribe freebsd-smp" in the body of the message From owner-freebsd-smp Mon Aug 5 23: 6: 9 2002 Delivered-To: freebsd-smp@freebsd.org Received: from mx1.FreeBSD.org (mx1.FreeBSD.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id DFBA637B400 for ; Mon, 5 Aug 2002 23:06:03 -0700 (PDT) Received: from iguana.icir.org (iguana.icir.org [192.150.187.36]) by mx1.FreeBSD.org (Postfix) with ESMTP id 9498043E65 for ; Mon, 5 Aug 2002 23:06:03 -0700 (PDT) (envelope-from rizzo@iguana.icir.org) Received: (from rizzo@localhost) by iguana.icir.org (8.11.6/8.11.3) id g7665uu27107; Mon, 5 Aug 2002 23:05:56 -0700 (PDT) (envelope-from rizzo) Date: Mon, 5 Aug 2002 23:05:56 -0700 From: Luigi Rizzo To: Peter Wemm Cc: Terry Lambert , smp@freebsd.org Subject: Re: how to create per-cpu variables in SMP kernels ? Message-ID: <20020805230556.C26751@iguana.icir.org> References: <20020805015340.A17716@iguana.icir.org> <20020805221415.B0C732A7D6@canning.wemm.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.2.5.1i In-Reply-To: <20020805221415.B0C732A7D6@canning.wemm.org>; from peter@wemm.org on Mon, Aug 05, 2002 at 03:14:15PM -0700 Sender: owner-freebsd-smp@FreeBSD.ORG Precedence: bulk List-ID: List-Archive: (Web Archive) List-Help: (List Instructions) List-Subscribe: List-Unsubscribe: X-Loop: FreeBSD.org Hi Peter, thanks for the explaination. I still have a few doubts on this (let's restrict to the -current case where the code seems more readable): --- MINOR DETAIL --- * I wonder why the macro __PCPU_GET() in sys/i386/include/pcpu.h cannot store directly into __result for operand sizes of 1,2,4 instead of going through a temporary variable. I.e. what would be wrong in having #define __PCPU_GET(name) ({ \ __pcpu_type(name) __result; \ \ if (sizeof(__result) == 1) { \ __asm __volatile("movb %%fs:%1,%0" \ : "=r" (__result) \ : "m" (*(u_char *)(__pcpu_offset(name)))); \ } else if (sizeof(__result) == 2) { \ Probably the same holds for __PCPU_SET(). --- OVERALL IMPLEMENTATION OF THE PER-CPU DATA --- Partly following Terry's description, i thought an arrangement like the following could be relatively simple to implement and not require any recourse to assembly code, does not impact the compiler's ability to do optimizations, and does not require an extra segment descriptor to access the struct pcpu. It relies on the following variables, my_pcpu to access the pcpu data of the local processor, all_pcpu to view all pcpu data (including our own, at a different mapping in vm space): struct pcpu *my_pcpu; struct pcpu *all_pcpu[MAXCPU]; /* XXX volatile */ Early in the boot process we allocate MAXCPU physical pages, and MAXCPU+1 entries in the VM space. Individual pcpu structs go at the beginning of each of the physical pages, and the VM -> physical mapping of the first MAXCPU VM entries is the same for all processors. Then all_pcpu[i] can be initialized with a pointer to the beginning of the i-th VM page. The MAXCPU+1-th VM entry maps differently on each CPU, so that it effectively permits access to the per-cpu data. my_pcpu can be initialized with a pointer to the MAXCPU+1-th VM page. 
At this point, curproc and all other per-cpu variables for the local CPU can be accessed through my_pcpu->curproc and similar, whereas we can get to other cpu's data with all_pcpu[i]->curproc without the need for using %fs or special assembly language to access these fields. Then we can discuss how/where to put "volatile" keywords. In principle, all references through all_pcpu[] should be readonly and treated as volatile, with perhaps the exception of some section of code at machine startup. On the contrary we could safely assume that references through my_pcpu are non-volatile as the local processor should be the only one to mess with them Anything wrong with this description ? cheers luigi On Mon, Aug 05, 2002 at 03:14:15PM -0700, Peter Wemm wrote: ... > Sort-of. There is both a compile time issue and a runtime issue. > > Using the %fs:variable segment overrides doesn't make a lot of difference, > but the compiler is effectively wired so that they are treated as volatile. ... To Unsubscribe: send mail to majordomo@FreeBSD.org with "unsubscribe freebsd-smp" in the body of the message From owner-freebsd-smp Mon Aug 5 23:33: 8 2002 Delivered-To: freebsd-smp@freebsd.org Received: from mx1.FreeBSD.org (mx1.FreeBSD.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 2B56837B400 for ; Mon, 5 Aug 2002 23:32:59 -0700 (PDT) Received: from k6.locore.ca (k6.locore.ca [198.96.117.170]) by mx1.FreeBSD.org (Postfix) with ESMTP id 1201A43E4A for ; Mon, 5 Aug 2002 23:32:58 -0700 (PDT) (envelope-from jake@k6.locore.ca) Received: from k6.locore.ca (jake@localhost.locore.ca [127.0.0.1]) by k6.locore.ca (8.12.5/8.12.3) with ESMTP id g766cHVA091500; Tue, 6 Aug 2002 02:38:17 -0400 (EDT) (envelope-from jake@k6.locore.ca) Received: (from jake@localhost) by k6.locore.ca (8.12.5/8.12.3/Submit) id g766cH49091499; Tue, 6 Aug 2002 02:38:17 -0400 (EDT) Date: Tue, 6 Aug 2002 02:38:17 -0400 From: Jake Burkholder To: Luigi Rizzo Cc: Peter Wemm , Terry Lambert , smp@FreeBSD.ORG Subject: Re: how to create per-cpu variables in SMP kernels ? Message-ID: <20020806023816.D76014@locore.ca> References: <20020805015340.A17716@iguana.icir.org> <20020805221415.B0C732A7D6@canning.wemm.org> <20020805230556.C26751@iguana.icir.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.2.5i In-Reply-To: <20020805230556.C26751@iguana.icir.org>; from rizzo@icir.org on Mon, Aug 05, 2002 at 11:05:56PM -0700 Sender: owner-freebsd-smp@FreeBSD.ORG Precedence: bulk List-ID: List-Archive: (Web Archive) List-Help: (List Instructions) List-Subscribe: List-Unsubscribe: X-Loop: FreeBSD.org Apparently, On Mon, Aug 05, 2002 at 11:05:56PM -0700, Luigi Rizzo said words to the effect of; > Hi Peter, > thanks for the explaination. > I still have a few doubts on this (let's restrict to the -current > case where the code seems more readable): > > --- MINOR DETAIL --- > > * I wonder why the macro __PCPU_GET() in sys/i386/include/pcpu.h > cannot store directly into __result for operand sizes of 1,2,4 > instead of going through a temporary variable. I.e. what would > be wrong in having > > #define __PCPU_GET(name) ({ \ > __pcpu_type(name) __result; \ > \ > if (sizeof(__result) == 1) { \ > __asm __volatile("movb %%fs:%1,%0" \ > : "=r" (__result) \ > : "m" (*(u_char *)(__pcpu_offset(name)))); \ > } else if (sizeof(__result) == 2) { \ > > Probably the same holds for __PCPU_SET(). 
The code has to work for all types; if __result is a struct timeval, having it as an output in the asm statement doesn't compile. It all gets optimized out anyway. > > --- OVERALL IMPLEMENTATION OF THE PER-CPU DATA --- > > Partly following Terry's description, i thought an arrangement > like the following could be relatively simple to implement and not > require any recourse to assembly code, does not impact the compiler's > ability to do optimizations, and does not require an extra > segment descriptor to access the struct pcpu. > > It relies on the following variables, my_pcpu to access the > pcpu data of the local processor, all_pcpu to view all pcpu > data (including our own, at a different mapping in vm space): > > struct pcpu *my_pcpu; > > struct pcpu *all_pcpu[MAXCPU]; /* XXX volatile */ > > Early in the boot process we allocate MAXCPU physical pages, > and MAXCPU+1 entries in the VM space. Individual pcpu structs > go at the beginning of each of the physical pages, and the > VM -> physical mapping of the first MAXCPU VM entries is the > same for all processors. Then all_pcpu[i] can be initialized > with a pointer to the beginning of the i-th VM page. > > The MAXCPU+1-th VM entry maps differently on each CPU, > so that it effectively permits access to the per-cpu data. > my_pcpu can be initialized with a pointer to the MAXCPU+1-th VM page. > > At this point, curproc and all other per-cpu variables for the > local CPU can be accessed through > > my_pcpu->curproc > > and similar, whereas we can get to other cpu's data with > > all_pcpu[i]->curproc > > without the need for using %fs or special assembly language to > access these fields. > > Then we can discuss how/where to put "volatile" keywords. > In principle, all references through all_pcpu[] should be > readonly and treated as volatile, with perhaps the exception of > some section of code at machine startup. On the contrary we could > safely assume that references through my_pcpu are non-volatile > as the local processor should be the only one to mess with them > > Anything wrong with this description ? This doesn't work because the page directory is per-process, not per-cpu. To implement this you would need a fixed page directory entry which pointed to a different page table page on each cpu, which mapped the different per-cpu pages to the same virtual address. If 2 processes which shared page directories were running concurrently on 2 cpus, they would both see the same per-cpu data (one of then would get the wrong struct pcpu). Basically the struct pcpu's cannot all be mapped to the same virtual address. Jake > > cheers > luigi > > On Mon, Aug 05, 2002 at 03:14:15PM -0700, Peter Wemm wrote: > ... > > Sort-of. There is both a compile time issue and a runtime issue. > > > > Using the %fs:variable segment overrides doesn't make a lot of difference, > > but the compiler is effectively wired so that they are treated as volatile. > ... 
> > To Unsubscribe: send mail to majordomo@FreeBSD.org > with "unsubscribe freebsd-smp" in the body of the message To Unsubscribe: send mail to majordomo@FreeBSD.org with "unsubscribe freebsd-smp" in the body of the message From owner-freebsd-smp Tue Aug 6 0:40:25 2002 Delivered-To: freebsd-smp@freebsd.org Received: from mx1.FreeBSD.org (mx1.FreeBSD.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id CB84F37B400 for ; Tue, 6 Aug 2002 00:40:20 -0700 (PDT) Received: from gull.mail.pas.earthlink.net (gull.mail.pas.earthlink.net [207.217.120.84]) by mx1.FreeBSD.org (Postfix) with ESMTP id 60F3643E3B for ; Tue, 6 Aug 2002 00:40:20 -0700 (PDT) (envelope-from tlambert2@mindspring.com) Received: from pool0211.cvx22-bradley.dialup.earthlink.net ([209.179.198.211] helo=mindspring.com) by gull.mail.pas.earthlink.net with esmtp (Exim 3.33 #1) id 17bywg-0007Mn-00; Tue, 06 Aug 2002 00:39:59 -0700 Message-ID: <3D4F7D1B.4D91400A@mindspring.com> Date: Tue, 06 Aug 2002 00:39:07 -0700 From: Terry Lambert X-Mailer: Mozilla 4.79 [en] (Win98; U) X-Accept-Language: en MIME-Version: 1.0 To: Luigi Rizzo Cc: Peter Wemm , smp@freebsd.org Subject: Re: how to create per-cpu variables in SMP kernels ? References: <20020805015340.A17716@iguana.icir.org> <20020805221415.B0C732A7D6@canning.wemm.org> <20020805230556.C26751@iguana.icir.org> Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Sender: owner-freebsd-smp@FreeBSD.ORG Precedence: bulk List-ID: List-Archive: (Web Archive) List-Help: (List Instructions) List-Subscribe: List-Unsubscribe: X-Loop: FreeBSD.org Luigi Rizzo wrote: > Hi Peter, > thanks for the explaination. > I still have a few doubts on this (let's restrict to the -current > case where the code seems more readable): > > --- MINOR DETAIL --- > > * I wonder why the macro __PCPU_GET() in sys/i386/include/pcpu.h > cannot store directly into __result for operand sizes of 1,2,4 > instead of going through a temporary variable. I.e. what would > be wrong in having It could, if someone wrote the code for the compiler to be able to understand the operand sizes. 8-). The main problem is operand sizes that don't fit in a single register (ANSI C permits structures). > Partly following Terry's description, i thought an arrangement > like the following could be relatively simple to implement and not > require any recourse to assembly code, does not impact the compiler's > ability to do optimizations, and does not require an extra > segment descriptor to access the struct pcpu. > > It relies on the following variables, my_pcpu to access the > pcpu data of the local processor, all_pcpu to view all pcpu > data (including our own, at a different mapping in vm space): > > struct pcpu *my_pcpu; > > struct pcpu *all_pcpu[MAXCPU]; /* XXX volatile */ NO. This can not work. The problem is that the per-CPU are is mapped into the same location on each CPU -- and *totally inaccessible* to other CPUs. The entire point of having a per CPU area in the first place is to avoid indexing by the CPU ID, and to be able to *know* that the data stored there does not require protection of a mutex on read/write operations. Note that the CPUID is also not guaranteed to be a contiguous and adjacent space. Basically, this means your attempt to dereference all_pcpu will only work for the local processor data area *on the local processor*. In general, creating anything that needs this information in the first place is really a *big* mistake. 
In order to ensure the idempotence of the data being accessed, which could be a large structure, you would need to introduce locks. This is true even if the other CPUs only ever read the data, unless the data is capable of being modified or read with atomic instructions (this limits you to 32 bit values on Pentium class hardware). The only way this works without locks is if the data is statistical. In addition, the data in any page shared this way, with no locks, would have to reside in a page which is non-cacheable, to avoid caching in the L1 or L2, and the associated invalidate on writes having to be signalled to all processors. That means that whatever your CPU vs. memory bus multiplier, the expense of accessing it is divided by that (e.g. a 1.3GHz CPU with a 433MHz memory bus will take three clock cycles, minimum, to fetch data from the page). > Early in the boot process we allocate MAXCPU physical pages, > and MAXCPU+1 entries in the VM space. Individual pcpu structs > go at the beginning of each of the physical pages, and the > VM -> physical mapping of the first MAXCPU VM entries is the > same for all processors. Then all_pcpu[i] can be initialized > with a pointer to the beginning of the i-th VM page. > > The MAXCPU+1-th VM entry maps differently on each CPU, > so that it effectively permits access to the per-cpu data. > my_pcpu can be initialized with a pointer to the MAXCPU+1-th VM page. It won't work, for the reasons stated above. In addition, if you were to map it in alternate page map entries per CPU, you would find that you would run into TLB shootdown bugs on Intel and AMD processors (the way you would have to use this would guarantee that the shootdown was not delivered for most of the SMP L2/Bridge chipsets out there). > At this point, curproc and all other per-cpu variables for the > local CPU can be accessed through > > my_pcpu->curproc > > and similar, whereas we can get to other cpu's data with > > all_pcpu[i]->curproc The variables only exist on the CPU in question. That is why they are called "per CPU". > without the need for using %fs or special assembly language to > access these fields. Peter pointed out that these were "assumed volatile"; that's a simple way of saying "not permitted to be cached" or "must be explicitly fetched each time". This is what I was referring to originally, when I stated that there would be an additional dereference that would normally not be there, as it would be hidden by the cache hardware. Whether this is done with explicit overrides, or mapping the pages as non-cacheable (and non-global), is really irrelevent: it's six of one type of overhead, and a half a dozen of another. > Then we can discuss how/where to put "volatile" keywords. > In principle, all references through all_pcpu[] should be > readonly and treated as volatile, with perhaps the exception of > some section of code at machine startup. On the contrary we could > safely assume that references through my_pcpu are non-volatile > as the local processor should be the only one to mess with them > > Anything wrong with this description ? Er... how can "curproc" be read-only? You *REALLY* want to avoid per-CPU data, if you can. The stuff that's there now is really only there because it's unavoidable: any time you share a contention domain in memory, you add to the bus contention, and decrease the value of running a shared memory multiprocessor in the first place. 
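A minimal sketch of the "statistical" case where lockless sharing can be acceptable, assuming only MAXCPU and PCPU_GET(cpuid) from the -current pcpu headers; pkt_stats and the function names are hypothetical, not existing kernel symbols:

/*
 * Hypothetical per-CPU packet counter.  Each CPU normally increments
 * only its own slot, and a reader sums a snapshot with no mutex; the
 * result is a statistic, not a precise value.
 */
struct pkt_stats {
	u_long	ps_count;
	char	ps_pad[64 - sizeof(u_long)];	/* keep slots on separate cache lines */
};
static struct pkt_stats pkt_stats[MAXCPU];

static __inline void
pkt_stats_bump(void)
{
	pkt_stats[PCPU_GET(cpuid)].ps_count++;	/* local CPU's slot, no lock */
}

static u_long
pkt_stats_total(void)
{
	u_long total;
	int i;

	total = 0;
	for (i = 0; i < MAXCPU; i++)		/* snapshot; may lag the writers */
		total += pkt_stats[i].ps_count;
	return (total);
}

If the incrementing thread is preempted and migrated between reading cpuid and doing the increment it may occasionally bump a neighbouring slot; for statistics that is usually tolerable, which is exactly the distinction being drawn above.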
-- Terry To Unsubscribe: send mail to majordomo@FreeBSD.org with "unsubscribe freebsd-smp" in the body of the message From owner-freebsd-smp Tue Aug 6 0:43:52 2002 Delivered-To: freebsd-smp@freebsd.org Received: from mx1.FreeBSD.org (mx1.FreeBSD.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 5A1E637B400 for ; Tue, 6 Aug 2002 00:43:51 -0700 (PDT) Received: from gull.mail.pas.earthlink.net (gull.mail.pas.earthlink.net [207.217.120.84]) by mx1.FreeBSD.org (Postfix) with ESMTP id CE15F43E72 for ; Tue, 6 Aug 2002 00:43:45 -0700 (PDT) (envelope-from tlambert2@mindspring.com) Received: from pool0211.cvx22-bradley.dialup.earthlink.net ([209.179.198.211] helo=mindspring.com) by gull.mail.pas.earthlink.net with esmtp (Exim 3.33 #1) id 17bz0A-0001rF-00; Tue, 06 Aug 2002 00:43:35 -0700 Message-ID: <3D4F7DF3.1E0CEF3A@mindspring.com> Date: Tue, 06 Aug 2002 00:42:43 -0700 From: Terry Lambert X-Mailer: Mozilla 4.79 [en] (Win98; U) X-Accept-Language: en MIME-Version: 1.0 To: Jake Burkholder Cc: Luigi Rizzo , Peter Wemm , smp@FreeBSD.ORG Subject: Re: how to create per-cpu variables in SMP kernels ? References: <20020805015340.A17716@iguana.icir.org> <20020805221415.B0C732A7D6@canning.wemm.org> <20020805230556.C26751@iguana.icir.org> <20020806023816.D76014@locore.ca> Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Sender: owner-freebsd-smp@FreeBSD.ORG Precedence: bulk List-ID: List-Archive: (Web Archive) List-Help: (List Instructions) List-Subscribe: List-Unsubscribe: X-Loop: FreeBSD.org Jake Burkholder wrote: > This doesn't work because the page directory is per-process, not per-cpu. I think this is backwards from what he wants, since what he's looking to do is to try and unify the accounting space for his scheduler in the SMP sense. The real answer is: "don't unify the accounting space; account per CPU as if the other CPUs did not exist, instead". However... > To implement this you would need a fixed page directory entry which pointed > to a different page table page on each cpu, which mapped the different > per-cpu pages to the same virtual address. If 2 processes which shared > page directories were running concurrently on 2 cpus, they would both > see the same per-cpu data (one of then would get the wrong struct pcpu). > Basically the struct pcpu's cannot all be mapped to the same virtual > address. This is an incredibly good point: process mappings are in fact a shared resource, and therefore can not be per CPU, per se. -- Terry To Unsubscribe: send mail to majordomo@FreeBSD.org with "unsubscribe freebsd-smp" in the body of the message From owner-freebsd-smp Tue Aug 6 1:32:37 2002 Delivered-To: freebsd-smp@freebsd.org Received: from mx1.FreeBSD.org (mx1.FreeBSD.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 46C4E37B400 for ; Tue, 6 Aug 2002 01:32:36 -0700 (PDT) Received: from iguana.icir.org (iguana.icir.org [192.150.187.36]) by mx1.FreeBSD.org (Postfix) with ESMTP id F224E43E5E for ; Tue, 6 Aug 2002 01:32:35 -0700 (PDT) (envelope-from rizzo@iguana.icir.org) Received: (from rizzo@localhost) by iguana.icir.org (8.11.6/8.11.3) id g768WVq28009; Tue, 6 Aug 2002 01:32:31 -0700 (PDT) (envelope-from rizzo) Date: Tue, 6 Aug 2002 01:32:31 -0700 From: Luigi Rizzo To: Jake Burkholder Cc: Peter Wemm , Terry Lambert , smp@FreeBSD.ORG Subject: Re: how to create per-cpu variables in SMP kernels ? 
Message-ID: <20020806013231.A27897@iguana.icir.org> References: <20020805015340.A17716@iguana.icir.org> <20020805221415.B0C732A7D6@canning.wemm.org> <20020805230556.C26751@iguana.icir.org> <20020806023816.D76014@locore.ca> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.2.5.1i In-Reply-To: <20020806023816.D76014@locore.ca>; from jake@locore.ca on Tue, Aug 06, 2002 at 02:38:17AM -0400 Sender: owner-freebsd-smp@FreeBSD.ORG Precedence: bulk List-ID: List-Archive: (Web Archive) List-Help: (List Instructions) List-Subscribe: List-Unsubscribe: X-Loop: FreeBSD.org On Tue, Aug 06, 2002 at 02:38:17AM -0400, Jake Burkholder wrote: ... > This doesn't work because the page directory is per-process, not per-cpu. > To implement this you would need a fixed page directory entry which pointed > to a different page table page on each cpu, which mapped the different > per-cpu pages to the same virtual address. If 2 processes which shared > page directories were running concurrently on 2 cpus, they would both > see the same per-cpu data (one of then would get the wrong struct pcpu). > Basically the struct pcpu's cannot all be mapped to the same virtual > address. so how does the use of %fs solve the problem ? sorry if the question is naive... cheers luigi To Unsubscribe: send mail to majordomo@FreeBSD.org with "unsubscribe freebsd-smp" in the body of the message From owner-freebsd-smp Tue Aug 6 5:55:48 2002 Delivered-To: freebsd-smp@freebsd.org Received: from mx1.FreeBSD.org (mx1.FreeBSD.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 79FFB37B400 for ; Tue, 6 Aug 2002 05:55:46 -0700 (PDT) Received: from mailman.zeta.org.au (mailman.zeta.org.au [203.26.10.16]) by mx1.FreeBSD.org (Postfix) with ESMTP id 407E243E81 for ; Tue, 6 Aug 2002 05:55:45 -0700 (PDT) (envelope-from bde@zeta.org.au) Received: from bde.zeta.org.au (bde.zeta.org.au [203.2.228.102]) by mailman.zeta.org.au (8.9.3/8.8.7) with ESMTP id WAA24515; Tue, 6 Aug 2002 22:55:20 +1000 Date: Tue, 6 Aug 2002 23:00:04 +1000 (EST) From: Bruce Evans X-X-Sender: bde@gamplex.bde.org To: Luigi Rizzo Cc: Jake Burkholder , Peter Wemm , Terry Lambert , Subject: Re: how to create per-cpu variables in SMP kernels ? In-Reply-To: <20020806013231.A27897@iguana.icir.org> Message-ID: <20020806222506.N1391-100000@gamplex.bde.org> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII Sender: owner-freebsd-smp@FreeBSD.ORG Precedence: bulk List-ID: List-Archive: (Web Archive) List-Help: (List Instructions) List-Subscribe: List-Unsubscribe: X-Loop: FreeBSD.org On Tue, 6 Aug 2002, Luigi Rizzo wrote: > On Tue, Aug 06, 2002 at 02:38:17AM -0400, Jake Burkholder wrote: > ... > > This doesn't work because the page directory is per-process, not per-cpu. > > To implement this you would need a fixed page directory entry which pointed > > to a different page table page on each cpu, which mapped the different > > per-cpu pages to the same virtual address. If 2 processes which shared > > page directories were running concurrently on 2 cpus, they would both > > see the same per-cpu data (one of then would get the wrong struct pcpu). > > Basically the struct pcpu's cannot all be mapped to the same virtual > > address. > > so how does the use of %fs solve the problem ? %fs is per-cpu, so it can (and does) index a cpu-dependent segment descriptor despite the index value being cpu-independent. 
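In other words, a conceptual sketch (not literal kernel source): the base address programmed into each CPU's %fs descriptor in the GDT is different, so the same cpu-independent offset reaches a different struct pcpu on every CPU, independent of which page tables the current process happens to be using.

/*
 * Conceptual model only -- fs_base_of() is not a real kernel function,
 * it stands in for the base address held in CPU n's %fs descriptor.
 * Only that base differs per CPU; the offset of pc_curthread is the
 * same everywhere.
 */
struct pcpu *fs_base_of(int n);

/* what a "movl %fs:<offset of pc_curthread>,%eax" means when run on CPU n: */
struct thread *
conceptual_curthread(int n)
{
	return (fs_base_of(n)->pc_curthread);
}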
Bruce To Unsubscribe: send mail to majordomo@FreeBSD.org with "unsubscribe freebsd-smp" in the body of the message From owner-freebsd-smp Tue Aug 6 8:29:55 2002 Delivered-To: freebsd-smp@freebsd.org Received: from mx1.FreeBSD.org (mx1.FreeBSD.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 3404837B400 for ; Tue, 6 Aug 2002 08:29:53 -0700 (PDT) Received: from klima.physik.uni-mainz.de (klima.Physik.Uni-Mainz.DE [134.93.180.162]) by mx1.FreeBSD.org (Postfix) with ESMTP id 754F943E65 for ; Tue, 6 Aug 2002 08:29:52 -0700 (PDT) (envelope-from ohartman@klima.physik.uni-mainz.de) Received: from klima.Physik.Uni-Mainz.DE (klima.Physik.Uni-Mainz.DE [134.93.180.162]) by klima.physik.uni-mainz.de (8.12.5/8.12.5) with ESMTP id g76FTpd2000948 for ; Tue, 6 Aug 2002 17:29:51 +0200 (CEST) (envelope-from ohartman@klima.physik.uni-mainz.de) Date: Tue, 6 Aug 2002 17:29:51 +0200 (CEST) From: "Hartmann, O." To: freebsd-smp@freebsd.org Subject: Dual XEON P4 main-PCB and FreeBSD 4.6 (SuperMicro SUPER P4DL6 and Intel SHG2), experiences?? Message-ID: <20020806172901.O926-100000@klima.physik.uni-mainz.de> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII Sender: owner-freebsd-smp@FreeBSD.ORG Precedence: bulk List-ID: List-Archive: (Web Archive) List-Help: (List Instructions) List-Subscribe: List-Unsubscribe: X-Loop: FreeBSD.org Dear Sirs. We need to upgrade and change our server hardware, and I am looking for new high-performance server platforms for P4 XEON systems. I have two mainboards in focus: the Intel SHG2 and the SuperMicro SUPER P4DL6. The SuperMicro seems to be a very nice mainboard, but when I read the technical specs I realized that it is marked as a DDR200-only main PCB, while the Intel SHG2 is said to be capable of using DDR266 memory. In one of the famous German computer magazines (c't) I read a test in which the SuperMicro was run with PC2100-2033 memory, so does this imply that the SuperMicro can also use DDR266 memory? Does anyone out here have experience with either mainboard under FreeBSD 4.6-STABLE or FreeBSD 4.6.1? We have an AMI Enterprise 1600 RAID controller (PCI 64/66) that should be swapped over to the new machine-to-be, so I need some tips and hints. In the past FreeBSD has shown 'incredible' stability when using SCSI and SMP, and I hope to benefit from that stability when changing our hardware 'on the fly'. Thanks a lot for your help in advance, Oliver -- MfG O. 
Hartmann ohartman@klima.physik.uni-mainz.de ------------------------------------------------------------------ IT-Administration des Institutes fuer Physik der Atmosphaere (IPA) ------------------------------------------------------------------ Johannes Gutenberg Universitaet Mainz Becherweg 21 55099 Mainz Tel: +496131/3924662 (Maschinenraum) Tel: +496131/3924144 (Buero) FAX: +496131/3923532 To Unsubscribe: send mail to majordomo@FreeBSD.org with "unsubscribe freebsd-smp" in the body of the message From owner-freebsd-smp Tue Aug 6 11: 0:20 2002 Delivered-To: freebsd-smp@freebsd.org Received: from mx1.FreeBSD.org (mx1.FreeBSD.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id CE69537B400 for ; Tue, 6 Aug 2002 11:00:18 -0700 (PDT) Received: from rwcrmhc51.attbi.com (rwcrmhc51.attbi.com [204.127.198.38]) by mx1.FreeBSD.org (Postfix) with ESMTP id 5DC0D43E4A for ; Tue, 6 Aug 2002 11:00:18 -0700 (PDT) (envelope-from julian@elischer.org) Received: from InterJet.elischer.org ([12.232.206.8]) by rwcrmhc51.attbi.com (InterMail vM.4.01.03.27 201-229-121-127-20010626) with ESMTP id <20020806180018.UYIH19356.rwcrmhc51.attbi.com@InterJet.elischer.org>; Tue, 6 Aug 2002 18:00:18 +0000 Received: from localhost (localhost.elischer.org [127.0.0.1]) by InterJet.elischer.org (8.9.1a/8.9.1) with ESMTP id KAA69935; Tue, 6 Aug 2002 10:53:25 -0700 (PDT) Date: Tue, 6 Aug 2002 10:53:23 -0700 (PDT) From: Julian Elischer To: Terry Lambert Cc: Luigi Rizzo , Peter Wemm , smp@freebsd.org Subject: Re: how to create per-cpu variables in SMP kernels ? In-Reply-To: <3D4F7D1B.4D91400A@mindspring.com> Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII Sender: owner-freebsd-smp@FreeBSD.ORG Precedence: bulk List-ID: List-Archive: (Web Archive) List-Help: (List Instructions) List-Subscribe: List-Unsubscribe: X-Loop: FreeBSD.org On Tue, 6 Aug 2002, Terry Lambert wrote: > Luigi Rizzo wrote: > > NO. This can not work. > > The problem is that the per-CPU are is mapped into the same > location on each CPU -- and *totally inaccessible* to other CPUs. Terry, -current does not have multiple page directories, one per cpu (not any more). We use the %fs register, which is not used for anything else, to have a special 'per-cpu segment'. The per-cpu mappings were being used at one stage but not any more... each pcpu area lives at a different virtual address now. Basically, we do *all_pcpu[MAXCPU] where the index is achieved using the unused %fs register to make the indexing work. In some places in the kernel we actually iterate through the elements. If they were not at different addresses this would not be possible. 
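A short sketch of that kind of iteration, assuming the cpuhead list and the pc_allcpu linkage that -current declares in sys/pcpu.h; it only works because each struct pcpu sits at its own kernel virtual address:

	struct pcpu *pc;

	SLIST_FOREACH(pc, &cpuhead, pc_allcpu)
		printf("cpu%u: curthread %p\n", pc->pc_cpuid, pc->pc_curthread);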
To Unsubscribe: send mail to majordomo@FreeBSD.org with "unsubscribe freebsd-smp" in the body of the message From owner-freebsd-smp Tue Aug 6 15:18:56 2002 Delivered-To: freebsd-smp@freebsd.org Received: from mx1.FreeBSD.org (mx1.FreeBSD.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 227CD37B400 for ; Tue, 6 Aug 2002 15:18:50 -0700 (PDT) Received: from pintail.mail.pas.earthlink.net (pintail.mail.pas.earthlink.net [207.217.120.122]) by mx1.FreeBSD.org (Postfix) with ESMTP id B02EB43E65 for ; Tue, 6 Aug 2002 15:18:49 -0700 (PDT) (envelope-from tlambert2@mindspring.com) Received: from pool0191.cvx22-bradley.dialup.earthlink.net ([209.179.198.191] helo=mindspring.com) by pintail.mail.pas.earthlink.net with esmtp (Exim 3.33 #1) id 17cCez-0003Me-00; Tue, 06 Aug 2002 15:18:37 -0700 Message-ID: <3D504B0A.9FDB3A47@mindspring.com> Date: Tue, 06 Aug 2002 15:17:46 -0700 From: Terry Lambert X-Mailer: Mozilla 4.79 [en] (Win98; U) X-Accept-Language: en MIME-Version: 1.0 To: Julian Elischer Cc: Luigi Rizzo , Peter Wemm , smp@freebsd.org Subject: Re: how to create per-cpu variables in SMP kernels ? References: Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Sender: owner-freebsd-smp@FreeBSD.ORG Precedence: bulk List-ID: List-Archive: (Web Archive) List-Help: (List Instructions) List-Subscribe: List-Unsubscribe: X-Loop: FreeBSD.org Julian Elischer wrote: > On Tue, 6 Aug 2002, Terry Lambert wrote: > > Luigi Rizzo wrote: > > > > NO. This can not work. > > > > The problem is that the per-CPU are is mapped into the same > > location on each CPU -- and *totally inaccessible* to other CPUs. > > Terry, -current does not have multile page directories , one per cpu. > (not any more). we use teh %fs register which is not used for anything > else to have a special 'per-cpu segment'. The per-cpu mappings were > being used at one stage but not any more... each pcpu area lives at a > different virtual address now. > > basically, we do *all_pcpu[MAXCPU] where the index is achieved > using the unused %fs register to make teh indexing work. > > in some places in the kernel we actually iterate through the elements. > if they were not at different addreses this would not be possible. Julian, I'm not sure that Luigi is dealing with SMP on -current rather than SMP on -stable. However, looking at the HEAD branch of /sys/i386/i386/locore.s, I still see: ---------------------------------------------------------------------- #ifdef SMP /* * Define layout of per-cpu address space. * This is "constructed" in locore.s on the BSP and in mp_machdep.c * for each AP. DO NOT REORDER THESE WITHOUT UPDATING THE REST! */ .globl SMP_prvspace, lapic .set SMP_prvspace,(MPPTDI << PDRSHIFT) .set lapic,SMP_prvspace + (NPTEPG-1) * PAGE_SIZE #endif /* SMP */ ... #ifdef SMP .globl cpu0prvpage cpu0pp: .long 0 /* phys addr cpu0 private pg */ cpu0prvpage: .long 0 /* relocated version */ .globl SMPpt SMPptpa: .long 0 /* phys addr SMP page table */ SMPpt: .long 0 /* relocated version */ #endif /* SMP */ ... #ifdef SMP .globl KPTphys #endif ... #ifdef SMP /* Allocate cpu0's private data page */ ALLOCPAGES(1) movl %esi,R(cpu0pp) addl $KERNBASE, %esi movl %esi, R(cpu0prvpage) /* relocated to KVM space */ /* Allocate SMP page table page */ ALLOCPAGES(1) movl %esi,R(SMPptpa) addl $KERNBASE, %esi movl %esi, R(SMPpt) /* relocated to KVM space */ #endif /* SMP */ ---------------------------------------------------------------------- Which indicates that it still exists. 
Now I understand how %FS is being used; however, I object to it; I object to anything that moves away from a per-CPU resource for truly per-CPU things, and a shared resource for truly shared things. IMO, this is being abused for information that should be maintained in design state, rather than in memory state. The main problem here is that Luigi is talking about per-CPU per process stuff. If you guys keep going down this road, you are not going to be thinking about CPU cycles as if they were anonymous resources. It would be an incredible mistake, IMO, for Luigi to maintain per-CPU state off the %FS. His problem is that he wants to have CPU state that is accessible from other CPUs and is non-statistical, which puts it into a contention domain where locking is required, because a read of the data must return a precise value, rather than a statistic (which can be a snapshot). Doing this effectively breaks the ability to both maintain the work he does, and provide for a future ability to support per-CPU run queues with a non-blocking statistical interaction as the only real interaction. If this happens, then FreeBSD will forever be limited to 4 CPUs before it hits the point of diminishing returns, and Hyperthreading affinity can not be worked in hierarchically so that there are multiple preference sets (e.g. "I prefer to stay on the same CPU, but if I can't, I prefer to stay on a CPU on the same Hyperthreaded chip, but if I can't, then I will migrate elsewhere"). Being able to represent arbitrarily scoped preference arrangements is necessary to support NUMA and clustering with cluster migration, at some time in the future. I would really prefer that you guys not "legislate" against the ability to run on NUMA systems right now, before you've even thought about the problem, or the benefits. Perhaps you can talk to Chuck about it? -- Terry
To Unsubscribe: send mail to majordomo@FreeBSD.org with "unsubscribe freebsd-smp" in the body of the message
From owner-freebsd-smp Tue Aug 6 19: 0:39 2002 Delivered-To: freebsd-smp@freebsd.org Received: from mx1.FreeBSD.org (mx1.FreeBSD.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 8434B37B400 for ; Tue, 6 Aug 2002 19:00:33 -0700 (PDT) Received: from canning.wemm.org (canning.wemm.org [192.203.228.65]) by mx1.FreeBSD.org (Postfix) with ESMTP id 4355643E42 for ; Tue, 6 Aug 2002 19:00:33 -0700 (PDT) (envelope-from peter@wemm.org) Received: from wemm.org (localhost [127.0.0.1]) by canning.wemm.org (Postfix) with ESMTP id 1F0842A7D6; Tue, 6 Aug 2002 19:00:33 -0700 (PDT) (envelope-from peter@wemm.org) X-Mailer: exmh version 2.5 07/13/2001 with nmh-1.0.4 To: Terry Lambert Cc: Julian Elischer , Luigi Rizzo , smp@freebsd.org Subject: Re: how to create per-cpu variables in SMP kernels ? In-Reply-To: <3D504B0A.9FDB3A47@mindspring.com> Date: Tue, 06 Aug 2002 19:00:33 -0700 From: Peter Wemm Message-Id: <20020807020033.1F0842A7D6@canning.wemm.org> Sender: owner-freebsd-smp@FreeBSD.ORG Precedence: bulk List-ID: List-Archive: (Web Archive) List-Help: (List Instructions) List-Subscribe: List-Unsubscribe: X-Loop: FreeBSD.org
Terry Lambert wrote: > Julian Elischer wrote: > > On Tue, 6 Aug 2002, Terry Lambert wrote: > > > Luigi Rizzo wrote: > > > > > > NO. This can not work. > > > > > > The problem is that the per-CPU are is mapped into the same > > > location on each CPU -- and *totally inaccessible* to other CPUs. > > > > Terry, -current does not have multile page directories , one per cpu. > > (not any more).
we use teh %fs register which is not used for anything > > else to have a special 'per-cpu segment'. The per-cpu mappings were > > being used at one stage but not any more... each pcpu area lives at a > > different virtual address now. > > > > basically, we do *all_pcpu[MAXCPU] where the index is achieved > > using the unused %fs register to make teh indexing work. > > > > in some places in the kernel we actually iterate through the elements. > > if they were not at different addreses this would not be possible. > > Julian, I'm not sure that Luigi is dealing with SMP on -current > rather than SMP on -stable. > > However, looking at the HEAD branch of /sys/i386/i386/locore.s, > I still see: > > ---------------------------------------------------------------------- > #ifdef SMP > /* > * Define layout of per-cpu address space. > * This is "constructed" in locore.s on the BSP and in mp_machdep.c > * for each AP. DO NOT REORDER THESE WITHOUT UPDATING THE REST! > */ > .globl SMP_prvspace, lapic > .set SMP_prvspace,(MPPTDI << PDRSHIFT) > .set lapic,SMP_prvspace + (NPTEPG-1) * PAGE_SIZE > #endif /* SMP */ > ... > #ifdef SMP > .globl cpu0prvpage > cpu0pp: .long 0 /* phys addr cpu0 private pg */ > cpu0prvpage: .long 0 /* relocated version */ > > .globl SMPpt > SMPptpa: .long 0 /* phys addr SMP page table */ > SMPpt: .long 0 /* relocated version */ > #endif /* SMP */ > ... > #ifdef SMP > .globl KPTphys > #endif > ... > #ifdef SMP > /* Allocate cpu0's private data page */ > ALLOCPAGES(1) > movl %esi,R(cpu0pp) > addl $KERNBASE, %esi > movl %esi, R(cpu0prvpage) /* relocated to KVM space */ > > /* Allocate SMP page table page */ > ALLOCPAGES(1) > movl %esi,R(SMPptpa) > addl $KERNBASE, %esi > movl %esi, R(SMPpt) /* relocated to KVM space */ > #endif /* SMP */ > ---------------------------------------------------------------------- > > Which indicates that it still exists. And it is wrong. Although it says 'private', it isn't in fact per-cpu. We just abuse the old code to cheat on allocating KVM. All cpus use the same identical "private" space. We use %fs exclusively for per-cpu data, but for various bogus reasons we still have this historical wart around. Cheers, -Peter -- Peter Wemm - peter@wemm.org; peter@FreeBSD.org; peter@yahoo-inc.com "All of this is for nothing if we don't go to the stars" - JMS/B5 To Unsubscribe: send mail to majordomo@FreeBSD.org with "unsubscribe freebsd-smp" in the body of the message From owner-freebsd-smp Wed Aug 7 0:19: 0 2002 Delivered-To: freebsd-smp@freebsd.org Received: from mx1.FreeBSD.org (mx1.FreeBSD.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 7E0AE37B490 for ; Wed, 7 Aug 2002 00:18:46 -0700 (PDT) Received: from iguana.icir.org (iguana.icir.org [192.150.187.36]) by mx1.FreeBSD.org (Postfix) with ESMTP id 18D7643E77 for ; Wed, 7 Aug 2002 00:18:46 -0700 (PDT) (envelope-from rizzo@iguana.icir.org) Received: (from rizzo@localhost) by iguana.icir.org (8.11.6/8.11.3) id g777IWV37800; Wed, 7 Aug 2002 00:18:32 -0700 (PDT) (envelope-from rizzo) Date: Wed, 7 Aug 2002 00:18:32 -0700 From: Luigi Rizzo To: Terry Lambert Cc: Julian Elischer , Peter Wemm , smp@freebsd.org Subject: Re: how to create per-cpu variables in SMP kernels ? 
Message-ID: <20020807001832.C37532@iguana.icir.org> References: <3D504B0A.9FDB3A47@mindspring.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.2.5.1i In-Reply-To: <3D504B0A.9FDB3A47@mindspring.com>; from tlambert2@mindspring.com on Tue, Aug 06, 2002 at 03:17:46PM -0700 Sender: owner-freebsd-smp@FreeBSD.ORG Precedence: bulk List-ID: List-Archive: (Web Archive) List-Help: (List Instructions) List-Subscribe: List-Unsubscribe: X-Loop: FreeBSD.org
Hi, On Tue, Aug 06, 2002 at 03:17:46PM -0700, Terry Lambert wrote: ... > Julian, I'm not sure that Luigi is dealing with SMP on -current > rather than SMP on -stable. ... > The main problem here is that Luigi is talking about per-CPU per > process stuff. > > If you guys keep going down this road, you are not going to be > thinking about CPU cycles as if they were anonymous resources. > > It would be an incredible mistake, IMO, for Luigi to maintain > per-CPU state off the %FS. His problem is that he wants to have
Terry keeps stating what *I* am supposed to have in mind, but I am not sure we agree on that :) The motivation for my question was mostly to understand how the per-CPU data is stored and how expensive it is to access it. As for why I need it, there are at least two places. One is the scheduler -- when a new thread/process becomes ready for execution, I might need to preempt the "least important" among those currently in execution, and these things are not in any queue, but stored in the cur{thread|proc} variables of the individual CPUs. I do not need to add additional per-CPU variables here, nor to modify this information, just to read it. The second place might be the network stack, because there we collect all sorts of statistics on packets. If we ever decide to move this code under a finer-granularity locking scheme, then rather than replacing all the counter increments with a large number of atomic_add_*() calls, and depending on how expensive it is to access the per-CPU data, it might make sense to split these statistics on a per-CPU basis and let the programs doing the collection take a snapshot and add them up. Now I see that on all other architectures the per-CPU data is accessed through a dedicated register (see the snapshot below for the alpha, but it is similar for sparc and ia64).
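[Editor's note: a minimal sketch, in hypothetical C, of the second use case Luigi describes: on the hot path each CPU increments only its own copy of a statistic with a plain store, no lock and no atomic_add_*(), and a collector later walks all the copies and sums them, accepting that the result is only a snapshot. The structure and helper names are invented for illustration; this is not the network stack's actual code. The alpha PCPU_GET snapshot Luigi refers to follows right after this note.]
----------------------------------------------------------------------
/* Per-CPU statistics sketch (hypothetical names). */
#include <stdio.h>

#define MAXCPU  4

struct ipstat_pcpu {
        unsigned long   ips_total;      /* packets seen */
        unsigned long   ips_badsum;     /* checksum errors */
};

static struct ipstat_pcpu ipstat[MAXCPU];

/* Hot path: 'cpu' stands in for the dedicated-register/%fs lookup. */
static void
count_packet(int cpu, int badsum)
{
        ipstat[cpu].ips_total++;        /* plain increment, no atomic op */
        if (badsum)
                ipstat[cpu].ips_badsum++;
}

/* Collector: sum every CPU's slot; the total is only a snapshot. */
static void
collect(struct ipstat_pcpu *out)
{
        int i;

        out->ips_total = out->ips_badsum = 0;
        for (i = 0; i < MAXCPU; i++) {
                out->ips_total += ipstat[i].ips_total;
                out->ips_badsum += ipstat[i].ips_badsum;
        }
}

int
main(void)
{
        struct ipstat_pcpu snap;

        count_packet(0, 0);
        count_packet(1, 0);
        count_packet(1, 1);
        collect(&snap);
        printf("total %lu, bad checksums %lu\n", snap.ips_total, snap.ips_badsum);
        return (0);
}
----------------------------------------------------------------------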
alpha/include/pcpu.h register struct pcpu *pcpup __asm__("$8"); #define PCPU_GET(member) (pcpup->pc_ ## member) #define PCPU_PTR(member) (&pcpup->pc_ ## member) #define PCPU_SET(member,value) (pcpup->pc_ ## member = (value))
Unfortunately the i386 case looks less efficient, or at least more convoluted to handle, because we need to use %fs due to the lack of general registers (maybe one could write something like register struct pcpu *pcpup __asm__("%%fs:0"); if gcc were able to parse this syntax). cheers luigi
To Unsubscribe: send mail to majordomo@FreeBSD.org with "unsubscribe freebsd-smp" in the body of the message
From owner-freebsd-smp Wed Aug 7 20:34: 5 2002 Delivered-To: freebsd-smp@freebsd.org Received: from mx1.FreeBSD.org (mx1.FreeBSD.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id AFD7B37B400 for ; Wed, 7 Aug 2002 20:33:56 -0700 (PDT) Received: from tsti-ms2.etatung.com.tw (tc172092.adsl.tisnet.net.tw [139.223.172.92]) by mx1.FreeBSD.org (Postfix) with ESMTP id 4C08243E42 for ; Wed, 7 Aug 2002 20:33:54 -0700 (PDT) (envelope-from Albert.Huang@etatung.com.tw) Subject: about the compaq run dual cpu Date: Thu, 8 Aug 2002 11:33:41 +0800 Message-ID: <962ECD8BACB040458114056B1669F7D1077481@tsti-ms2.etatung.com.tw> MIME-Version: 1.0 Content-Type: multipart/related; type="multipart/alternative"; boundary="----_=_NextPart_001_01C23E8C.645BFD77" X-MS-Has-Attach: yes X-MS-TNEF-Correlator: Thread-Topic: about the compaq run dual cpu Content-Class: urn:content-classes:message X-MimeOLE: Produced By Microsoft Exchange V6.0.5762.3 Thread-Index: AcI+jEj/83HVDjG3T5mSeJtJj3efcQ== From: =?big5?B?tsCmQbe9?= To: Sender: owner-freebsd-smp@FreeBSD.ORG Precedence: bulk List-ID: List-Archive: (Web Archive) List-Help: (List Instructions) List-Subscribe: List-Unsubscribe: X-Loop: FreeBSD.org
Dear sir: My company has a Compaq ML370 G2 dual-CPU machine with a 5i Array Controller. Installing the FreeBSD 4.6 system on this model works fine, but when I recompile the kernel for dual CPUs and reboot, I get the error message "second CPU I/O error". Can the Compaq ML370 G2 model run FreeBSD with a dual-CPU kernel? Please tell me about it, thank you very much. Best Regards, Albert Huang By 2002/8/8 Email: albert.huang@etatung.com.tw or hzynet@ms7.hinet.net
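[Editor's note: on the gcc syntax question in Luigi's message above: a register variable cannot be bound to a segment-relative location like "%%fs:0", but the same effect is normally obtained with an inline-asm accessor that does a %fs-relative load of a field at a known offset into the per-CPU area. The macro below is a hedged sketch of that idiom, not a quote of the kernel's actual i386 pcpu.h, and the offset value is invented. It is kernel-only in spirit: in an ordinary user process %fs is not set up this way, so the load would fault.]
----------------------------------------------------------------------
/*
 * Hypothetical i386-style per-CPU accessor.  The "m" operand is built
 * from an absolute offset into the per-CPU area, and the %fs: prefix
 * in the template makes the load go through the per-CPU segment, so
 * each CPU reads its own copy of the field.
 */
#define PCPU_OFF_CPUID  0x30            /* invented offset, illustration only */

#define PCPU_GET_32(off) __extension__ ({                       \
        unsigned int __result;                                  \
        __asm__ __volatile__("movl %%fs:%1,%0"                  \
            : "=r" (__result)                                   \
            : "m" (*(unsigned int *)(off)));                    \
        __result;                                               \
})

/* Usage, in kernel context:  int cpu = PCPU_GET_32(PCPU_OFF_CPUID); */
----------------------------------------------------------------------
[The compiler should reduce each such access to a single instruction, which is why going through %fs stays cheap even though it cannot be written as a register variable.]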
To Unsubscribe: send mail to majordomo@FreeBSD.org with "unsubscribe freebsd-smp" in the body of the message
From owner-freebsd-smp Thu Aug 8 3:31:54 2002 Delivered-To: freebsd-smp@freebsd.org Received: from mx1.FreeBSD.org (mx1.FreeBSD.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id A1FD237B400 for ; Thu, 8 Aug 2002 03:31:41 -0700 (PDT) Received: from BehemoT.datanet.hu (BehemoT.datanet.hu [194.149.0.120]) by mx1.FreeBSD.org (Postfix) with ESMTP id ACD1643E3B for ; Thu, 8 Aug 2002 03:31:40 -0700 (PDT) (envelope-from mess@datanet.hu) Received: from localhost (localhost [127.0.0.1]) by BehemoT.datanet.hu (Postfix) with ESMTP id C65182C89F for ; Thu, 8 Aug 2002 12:31:37 +0200 (CEST) Date: Thu, 8 Aug 2002 12:31:37 +0200 (CEST) From: Kovacs Robi To: freebsd-smp@freebsd.org Subject: 2 X P4Xeon panics on 4.6.1-RELEASE-p10 SMP Message-ID: <0208081224090.31560-100000@BehemoT.datanet.hu> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII Sender: owner-freebsd-smp@FreeBSD.ORG Precedence: bulk List-ID: List-Archive: (Web Archive) List-Help: (List Instructions) List-Subscribe: List-Unsubscribe: X-Loop: FreeBSD.org
I can not use my 2 P4 Xeons with an SMP kernel on an SE7500CW2 Intel motherboard. Aren't they supported? > Programming 24 pins in IOAPIC #2 > AP #1 (PHY# 6) failed!
# mptable -dmesg =============================================================================== MPTable, version 2.0.15 ------------------------------------------------------------------------------- MP Floating Pointer Structure: location: BIOS physical address: 0x000f6670 signature: '_MP_' length: 16 bytes version: 1.4 checksum: 0x38 mode: Virtual Wire ------------------------------------------------------------------------------- MP Config Table Header: physical address: 0x0009ef70 signature: 'PCMP' base table length: 332 version: 1.4 checksum: 0xd9 OEM ID: ' ' Product ID: 'SE7500CW2' OEM table pointer: 0x00000000 OEM table size: 0 entry count: 33 local APIC address: 0xfee00000 extended table length: 184 extended table checksum: 250 ------------------------------------------------------------------------------- MP Config Base Table Entries: -- Processors: APIC ID Version State Family Model Step Flags 0 0x14 BSP, usable 15 2 4 0x3febfbff 6 0x14 AP, usable 15 2 4 0x3febfbff -- Bus: Bus ID Type 0 PCI 1 PCI 2 PCI 3 PCI 4 PCI 5 PCI 6 ISA -- I/O APICs: APIC ID Version State Address 2 0x20 usable 0xfec00000 3 0x20 usable 0xfec80000 4 0x20 usable 0xfec80400 -- I/O Ints: Type Polarity Trigger Bus ID IRQ APIC ID PIN# ExtINT active-hi edge 6 0 2 0 INT active-hi edge 6 1 2 1 INT active-hi edge 6 0 2 2 INT active-hi edge 6 3 2 3 INT active-hi edge 6 4 2 4 INT active-hi edge 6 5 2 5 INT active-hi edge 6 6 2 6 INT active-hi edge 6 7 2 7 INT active-hi edge 6 8 2 8 INT active-hi edge 6 9 2 9 INT active-hi edge 6 10 2 10 INT active-lo level 0 31:B 2 17 INT active-lo level 5 4:A 2 20 INT active-hi edge 6 13 2 13 INT active-hi edge 6 14 2 14 INT active-hi edge 6 15 2 15 INT active-lo level 2 2:A 3 4 INT active-lo level 5 3:A 2 21 INT active-lo level 5 5:A 2 23 -- Local Ints: Type Polarity Trigger Bus ID IRQ APIC ID PIN# ExtINT active-hi edge 6 0 255 0 NMI active-hi edge 6 0 255 1 ------------------------------------------------------------------------------- MP Config Extended Table Entries: -- System Address Space bus ID: 0 address type: I/O address address base: 0x0 address range: 0x10000 -- System Address Space bus ID: 0 address type: memory address address base: 0x80000000 address range: 0x7c000000 -- System Address Space bus ID: 0 address type: prefetch address address base: 0xfc000000 address range: 0x2000000 -- System Address Space bus ID: 0 address type: memory address address base: 0xfe000000 address range: 0xe00000 -- System Address Space bus ID: 0 address type: memory address address base: 0xfee01000 address range: 0x11ff000 -- System Address Space bus ID: 5 address type: memory address address base: 0xa0000 address range: 0x20000 -- System Address Space bus ID: 4 address type: memory address address base: 0xd0000 address range: 0x14000 -- System Address Space bus ID: 0 address type: memory address address base: 0x7ff80000 address range: 0x80000 -- Bus Heirarchy bus ID: 6 bus info: 0x01 parent bus ID: 0 -- Compatibility Bus Address bus ID: 0 address modifier: add predefined range: 0x00000000 -- Compatibility Bus Address bus ID: 0 address modifier: add predefined range: 0x00000001 ------------------------------------------------------------------------------- dmesg output: Copyright (c) 1992-2002 The FreeBSD Project. Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994 The Regents of the University of California. All rights reserved. 
FreeBSD 4.5-RELEASE #2: Tue Jan 29 22:44:12 GMT 2002 murray@builder.freebsdmall.com:/usr/src/sys/compile/BOOTMFS Timecounter "i8254" frequency 1193182 Hz Timecounter "TSC" frequency 1993540992 Hz CPU: Pentium 4 (1993.54-MHz 686-class CPU) Origin = "GenuineIntel" Id = 0xf24 Stepping = 4 Features=0x3febfbff,ACC> real memory = 2146959360 (2096640K bytes) config> intro [ANSI escape sequences from the visual kernel configuration screen omitted] avail memory = 2083651584 (2034816K bytes) Preloaded elf kernel "kernel" at 0xc07f4000. Preloaded mfs_root "/mfsroot" at 0xc07f4084. md0: Preloaded image 4423680 bytes at 0xc03bad94 md1: Malloc disk Using $PIR table, 17 entries at 0xc00fdeb0 npx0: on motherboard npx0: INT 16 interface pcib0: on motherboard pci0: on pcib0 pci0: (vendor=0x8086, dev=0x2541) at 0.1 pcib1: at device 2.0 on pci0 pci1: on pcib1 pci1: (vendor=0x8086, dev=0x1461) at 28.0 pcib2: at device 29.0 on pci1 pci2: on pcib2 asr0: mem 0xfc000000-0xfdffffff irq 12 at device 2.0 on pci2 asr0: major=154 asr0: ADAPTEC 2110S FW Rev. 380E, 1 channel, 256 CCBs, Protocol I2O pcib3: at device 2.1 on pci2 pci3: on pcib3 pci1: (vendor=0x8086, dev=0x1461) at 30.0 pcib4: at device 31.0 on pci1 pci4: on pcib4 pcib5: at device 30.0 on pci0 pci5: on pcib5 pci5: at 3.0 irq 11 fxp0: port 0x7400-0x743f mem 0xf8200000-0xf821ffff,0xf8241000-0xf8241fff irq 12 at device 4.0 on pci5 fxp0: Ethernet address 00:02:b3:b0:a3:b9 inphy0: on miibus0 inphy0: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, auto fxp1: port 0x7440-0x747f mem 0xf8220000-0xf823ffff,0xf8242000-0xf8242fff irq 11 at device 5.0 on pci5 fxp1: Ethernet address 00:02:b3:b0:a3:76 inphy1: on miibus1 inphy1: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, auto isab0: at device 31.0 on pci0 isa0: on isab0 atapci0: port 0x6c20-0x6c2f,0-0x3,0-0x7,0-0x3,0-0x7 irq 0 at device 31.1 on pci0 ata0: at 0x1f0 irq 14 on atapci0 ata1: at 0x170 irq 15 on atapci0 pci0: (vendor=0x8086, dev=0x2483) at 31.3 irq 11 orm0: