From owner-freebsd-current@FreeBSD.ORG Mon May  3 23:32:30 2004
Delivered-To: freebsd-current@freebsd.org
Received: from green.homeunix.org (freefall.freebsd.org [216.136.204.21])
	by hub.freebsd.org (Postfix) with ESMTP id 5825816A4CE for ;
	Mon, 3 May 2004 23:32:30 -0700 (PDT)
Received: from localhost (green@localhost [127.0.0.1])
	by green.homeunix.org (8.12.11/8.12.11) with ESMTP id i446WTc8010688 for ;
	Tue, 4 May 2004 02:32:29 -0400 (EDT)
	(envelope-from green@green.homeunix.org)
Message-Id: <200405040632.i446WTc8010688@green.homeunix.org>
X-Mailer: exmh version 2.6.3 04/04/2003 with nmh-1.0.4
To: current@FreeBSD.org
From: Brian Fundakowski Feldman
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Date: Tue, 04 May 2004 02:32:29 -0400
Sender: green@green.homeunix.org
Subject: 5.x w/auto-maxusers has insane kern.maxvnodes
X-BeenThere: freebsd-current@freebsd.org
X-Mailman-Version: 2.1.1
Precedence: list
List-Id: Discussions about the use of FreeBSD-current
X-List-Received-Date: Tue, 04 May 2004 06:32:30 -0000

I have a 512MB system and had to adjust kern.maxvnodes (desiredvnodes) down
to something reasonable after discovering that it was the sole cause of too
much paging on my workstation.  The target number of vnodes was set to
33000, which would not be so bad if it did not also leave so many more UFS,
VM, and VFS objects, and the VM objects' associated inactive cache pages,
lying around.  I ended up saving a good 100MB of memory just by adjusting
kern.maxvnodes back down to something reasonable.  Here are the current
allocations (and some of the peak values):

ITEM            SIZE     LIMIT      USED     FREE  REQUESTS
FFS2 dinode:     256,        0,    12340,      95,   1298936
FFS1 dinode:     128,        0,      315,    3901,   2570969
FFS inode:       140,        0,    12655,   14589,   3869905
L VFS Cache:     291,        0,        5,     892,     51835
S VFS Cache:      68,        0,    13043,   23301,   4076311
VNODE:           260,        0,    32339,      16,     32339
VM OBJECT:       132,        0,    10834,   24806,   2681863

(The number of VM pages allocated specifically to vnodes is not easy to
determine, beyond the fact that I saved that much memory even without the
objects themselves having been reclaimed after uma_zfree().)

We really need to look into making the default desiredvnodes target more
sane before 5.x is -STABLE, or people switching from 4.x are going to be
very surprised to see paging increase substantially.

One more surprising thing is how many of these objects cannot be reclaimed
because they are UMA_ZONE_NOFREE or have no zfree function.  If they could
be reclaimed, I'd have an extra 10MB back right now in my specific case,
having just reduced the kern.maxvnodes setting and done a failed umount on
every partition to force the vnodes to be flushed.  The vnodes are always
kept on the free vnode list after being freed because they might be used
again without having flushed out all of their associated VFS information --
but they should always be in a state where the list can be rescanned so
that they can actually be reclaimed by UMA if it asks for them.  All of the
rest should need very little in the way of supporting uma_reclaim(), so why
are they not already like that?

One last good example I personally see of wastage for lack of a zfree
function is the page-table PV entries on i386:

PV ENTRY:         28,   938280,    59170,  120590, 199482221

Once again, why do those actually need to be non-reclaimable?  I hope you
guys can shed some light on this, and hopefully some of you have ideas on
how to make the maxusers auto-scaling more sane.
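
For reference, the knob itself is easy to poke at from userland.  Below is
a minimal sketch of reading and lowering kern.maxvnodes via sysctlbyname(3);
the target value is just a placeholder for illustration, not a
recommendation, and the shell equivalent is simply
"sysctl kern.maxvnodes=<n>" as root:

/*
 * Minimal userland sketch: read kern.maxvnodes and lower it with
 * sysctlbyname(3).  The target below is a placeholder, not a
 * recommendation; writing the new value requires root.
 */
#include <sys/types.h>
#include <sys/sysctl.h>

#include <err.h>
#include <stdio.h>

int
main(void)
{
	int cur, target = 25000;		/* placeholder target */
	size_t len = sizeof(cur);

	if (sysctlbyname("kern.maxvnodes", &cur, &len, NULL, 0) == -1)
		err(1, "sysctlbyname(read)");
	printf("kern.maxvnodes is currently %d\n", cur);

	if (target < cur) {
		if (sysctlbyname("kern.maxvnodes", NULL, NULL, &target,
		    sizeof(target)) == -1)
			err(1, "sysctlbyname(write)");
		printf("kern.maxvnodes lowered to %d\n", target);
	}
	return (0);
}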
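
As for the NOFREE question, here is a rough kernel-side sketch of what the
distinction looks like at zone-creation time; "foo" is a made-up example
zone, not any of the zones in the table above, and this is not a patch
against anything.  The point is only that a zone created with
UMA_ZONE_NOFREE keeps its slabs forever, so freed items can only go back
onto that zone's own free lists, while a zone created without the flag can
have its cached items drained back to the VM when uma_reclaim() runs under
memory pressure:

/*
 * Kernel-side sketch only -- "foo" is a hypothetical zone.
 */
#include <sys/param.h>
#include <sys/kernel.h>

#include <vm/uma.h>

struct foo {
	void	*f_data;
};

static uma_zone_t foo_nofree_zone;
static uma_zone_t foo_zone;

static void
foo_zones_init(void *dummy __unused)
{
	/*
	 * Slabs backing this zone are never handed back to the VM;
	 * uma_zfree()'d items can only ever be reused by this zone.
	 */
	foo_nofree_zone = uma_zcreate("foo nofree", sizeof(struct foo),
	    NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, UMA_ZONE_NOFREE);

	/*
	 * Without the flag, cached free items in this zone can be
	 * drained back to the system when uma_reclaim() is called
	 * (e.g. by the page daemon) under memory pressure.
	 */
	foo_zone = uma_zcreate("foo", sizeof(struct foo),
	    NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, 0);
}
SYSINIT(foo_zones, SI_SUB_VM, SI_ORDER_ANY, foo_zones_init, NULL);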
--
Brian Fundakowski Feldman               \'[ FreeBSD ]''''''''''\
  <> green@FreeBSD.org                   \  The Power to Serve! \
  Opinions expressed are my own.          \,,,,,,,,,,,,,,,,,,,,,,\