From owner-freebsd-arch Wed Dec 12 16:00:41 2001
Delivered-To: freebsd-arch@freebsd.org
Received: from rwcrmhc52.attbi.com (rwcrmhc52.attbi.com [216.148.227.88])
	by hub.freebsd.org (Postfix) with ESMTP id EDBBA37B419
	for ; Wed, 12 Dec 2001 16:00:14 -0800 (PST)
Received: from InterJet.elischer.org ([12.232.206.8])
	by rwcrmhc52.attbi.com (InterMail vM.4.01.03.27 201-229-121-127-20010626)
	with ESMTP id <20011213000014.MKCY403.rwcrmhc52.attbi.com@InterJet.elischer.org>;
	Thu, 13 Dec 2001 00:00:14 +0000
Received: from localhost (localhost.elischer.org [127.0.0.1])
	by InterJet.elischer.org (8.9.1a/8.9.1) with ESMTP id PAA12091;
	Wed, 12 Dec 2001 15:49:19 -0800 (PST)
Date: Wed, 12 Dec 2001 15:49:17 -0800 (PST)
From: Julian Elischer
To: Alfred Perlstein
Cc: Julian Elischer, arch@freebsd.org
Subject: Re: Threads, KSEs etc. during exit.
In-Reply-To: <20011212172324.V92148@elvis.mu.org>
Message-ID:
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: owner-freebsd-arch@FreeBSD.ORG
Precedence: bulk
List-ID:
List-Archive: (Web Archive)
List-Help: (List Instructions)
List-Subscribe:
List-Unsubscribe:
X-Loop: FreeBSD.ORG

On Wed, 12 Dec 2001, Alfred Perlstein wrote:

> * Julian Elischer [011212 16:52] wrote:
> >
> > Here is an implementation detail I've hit that I'd like to
> > discuss because it has some ramifications for non-KSE code too.
> [snip]
> >
> > This is all ok except that pmap_dispose_thread(td) will free
> > the stack pages, so are we safe in returning? No.
>
> I think it's less important to get bogged down in the details;
> what would make more sense is to perform the cleanup as part
> of the next-to-run thread's work as it exits the scheduler.
>
> This could most likely be done in an MI fashion.
>
> The only problem is possibly recursing into one of those subsystems
> in the case of thread pre-emption within the kernel.
>
> Ok, so let's make it a bit simpler.
>
> 1) The exiting thread marks the structures as "in use".
> 2) It then queues them on the end of a "to be freed" list; this
>    needs a mutex.
> 3) It then enters the scheduler, which runs the next thread.
> 4) On the way out, in the context of another thread, it will atomically
>    mark the structures as "freeable" and then issue a wakeup/cv_signal
>    on the queue.
> 5) A dedicated per-cpu thread, "thread_reaper", will then wake up and
>    perform the cleanup.
>
> Let's not over-optimize it for now; getting it done correctly and then
> optimizing will probably get the job done.

That's one of the possibilities I am considering, but you don't want to
put too much extra testing etc. on the outgoing path of swtch(), as it
is called a hell of a lot.

It's also worth wondering whether it's enough to store a single "to be
freed" thread in the pcpu area or whether we need an open-ended scheme.
The answer to that depends on whether we could recurse before we had a
chance to free what we already have there.

The freeing requires some relatively complicated things, e.g. freeing
the stack pages back to the VM, possibly freeing the vm object
associated with them, and maybe placing the thread structure back in
the thread zone for re-allocation. It is also a question whether we
would ever want to free the memory back to the system after a process
with massive threading exits.

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-arch" in the body of the message