From: Alexander Motin <mav@FreeBSD.org>
Date: Mon, 12 Nov 2007 21:06:44 +0200
To: Marcel Moolenaar
Cc: Kostik Belousov, Julian Elischer, freebsd-arch@FreeBSD.org
Subject: Re: Kernel thread stack usage
Message-ID: <4738A444.8040708@FreeBSD.org>
In-Reply-To: <2FA48BC6-BCF3-4C16-B914-30A13C15B8AA@mac.com>

Marcel Moolenaar wrote:
> Yes.
> A good place would be cpu_switch in this case, because
> the processor flushes the dirty stacked registers onto the
> register stack only when it "feels" like it or when instructed
> to do so. In practice this means that while the stacks may
> have run into each other based on the pointers, the memory
> corruption often happens in cpu_switch, where we force
> the processor to flush the dirty stacked registers.
>
> In other words: a thread is, in the common case, not expected to
> encounter the corruption until the next switch-in, but could,
> in case of excessive use of either or both stacks, encounter
> it on function boundaries (function calls and/or returns).
>
> As a side-note: the implementation of kernel stack guard pages
> is just as meaningless for ia64. As a first improvement, you
> want guard pages both at the top and at the bottom, not just
> at the bottom. Secondly, you want to be able to protect the
> stacks from running into each other. However, putting a guard page
> somewhere in the middle may not be the right thing, because
> different threads may require different ratios...

If I understand you right, you are talking about a stack overflow/memory corruption detection mechanism, but I am speaking just about getting stack usage statistics.

When the Netgraph subsystem passes a packet from one node to another, it can either make a direct function call or use queues. The direct call is preferred for performance reasons, but numerous nested calls may cause a stack overflow. To optimize this behaviour I need some platform-independent mechanism to get the current stack usage and check whether it has reached some defined level, for example 50%. Such a method still leaves overflow possible, but it reduces its probability as much as possible.

On the i386 platform we have a register pointing to the current stack head. To simplify my example I have used the address of a local variable for that.
Probably for ia64 we should just use one more register, or an algorithm that takes into account the second stack growing upward.

-- 
Alexander Motin