From owner-freebsd-stable@FreeBSD.ORG Tue May 27 14:05:55 2003
Date: Tue, 27 May 2003 14:05:51 -0700 (PDT)
From: Matthew Dillon <dillon@apollo.backplane.com>
Message-Id: <200305272105.h4RL5ppG067806@apollo.backplane.com>
To: Mike Harding
References: <20030521171941.364325314@netcom1.netcom.com>
	<20030524190051.R598@hub.org> <20030526123617.D56519@hub.org>
	<1053964809.7831.6.camel@netcom1.netcom.com>
	<20030526130556.G56519@hub.org>
	<1054060050.640.35.camel@netcom1.netcom.com>
cc: stable@freebsd.org
Subject: Re: system slowdown - vnode related
List-Id: Production branch of FreeBSD source code

:I'll try this if I can tickle the bug again.
:
:I may have just run out of freevnodes - I only have about 1-2000 free
:right now.  I was just surprised because I have never seen a reference
:to tuning this sysctl.
:
:- Mike H.

The vnode subsystem is *VERY* sensitive to running out of KVM, meaning
that setting too high a kern.maxvnodes value is virtually guaranteed to
lock up the system under certain circumstances.  If you can reliably
reproduce the lockup with maxvnodes set fairly low (e.g. less than
100,000) then it ought to be easier to track the deadlock down.

Historically speaking, systems did not have enough physical memory to
actually run out of vnodes... they would run out of physical memory
first, which would cause VM pages to be reused and their underlying
vnodes to be deallocated when the last page went away.  Hence the amount
of KVM being used to manage vnodes (vnode and inode structures) was kept
under control.  But today's Intel systems have far more physical memory
relative to available KVM, and it is possible for vnode management to
run out of KVM before the VM system runs out of physical memory.

The vnlru kernel thread is an attempt to control this problem, but it
has had only mixed success in complex vnode management situations like
unionfs, where an operation on one vnode may cause accesses to
additional underlying vnodes.  In other words, vnlru can potentially
shoot itself in the foot in such situations while trying to flush out
vnodes.

					-Matt
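
For anyone who wants to watch vnode usage while trying to reproduce the
problem, here is a minimal sketch (not part of the original mail) that
polls the relevant sysctls via sysctlbyname(3).  It assumes the
kern.maxvnodes, vfs.numvnodes and vfs.freevnodes OIDs are present and
integer-sized, as they were on FreeBSD systems of this era; adjust the
types if your kernel exports them differently.

    /* vnstat.c -- print current vnode usage relative to kern.maxvnodes */
    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Fetch one integer-valued sysctl by name, exiting on error. */
    static int
    get_int_sysctl(const char *name)
    {
            int value;
            size_t len = sizeof(value);

            if (sysctlbyname(name, &value, &len, NULL, 0) == -1) {
                    perror(name);
                    exit(1);
            }
            return (value);
    }

    int
    main(void)
    {
            int maxvnodes  = get_int_sysctl("kern.maxvnodes");
            int numvnodes  = get_int_sysctl("vfs.numvnodes");
            int freevnodes = get_int_sysctl("vfs.freevnodes");

            printf("maxvnodes  %d\n", maxvnodes);
            printf("numvnodes  %d\n", numvnodes);
            printf("freevnodes %d\n", freevnodes);
            printf("in use     %d (%.1f%% of max)\n",
                numvnodes - freevnodes,
                maxvnodes ? 100.0 * (numvnodes - freevnodes) / maxvnodes : 0.0);
            return (0);
    }

Running it periodically (e.g. from a shell loop) while exercising the
workload shows whether numvnodes is creeping up toward the maxvnodes
limit before the slowdown or lockup appears.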