Date: Tue, 5 Jun 2007 20:45:16 -0400
From: Kris Kennaway <kris@obsecurity.org>
To: Kris Kennaway
Cc: freebsd-current@freebsd.org, Ivan Voras
Subject: Re: ZFS on 32-bit CPUs?
Message-ID: <20070606004515.GA50367@rot13.obsecurity.org>
References: <78878E5C-A219-42A6-AB9F-D4C4C7FC994E@gmail.com> <20070606003551.GA50194@rot13.obsecurity.org>
In-Reply-To: <20070606003551.GA50194@rot13.obsecurity.org>

On Tue, Jun 05, 2007 at 08:35:51PM -0400, Kris Kennaway wrote:
> On Wed, Jun 06, 2007 at 02:19:57AM +0200, Ivan Voras wrote:
> > Sean Hafeez wrote:
> > > Has anyone looked at the ZFS port and how it does on 32-bit CPUs
> > > vs 64-bit ones? I know under Solaris they do not recommend using
> > > a 32-bit CPU. In my case I was thinking about doing some testing
> > > on a Dual P3-850.
> >
> > It works, and there's never been doubt that it would work. The
> > main resource you need is memory. At least 1 GB is recommended,
> > but it should work with 512 MB (though people were reporting
> > panics unless they scaled ZFS and VFS parameters down). If you're
> > thinking of using it in production, you should read the threads on
> > this list regarding ZFS, especially those mentioning panics.
>
> It "works", but there are serious performance issues to do with how
> ZFS on FreeBSD handles caching of data. To get reasonable
> performance you will want to tune VM_KMEM_SIZE_MAX as high as you
> can get away with (how high depends on how much RAM you have).
> Roughly half of this will be used by the ARC (the ZFS buffer
> cache). This is typically less memory than the standard buffer
> cache would have available, so ZFS still loses out on caching,
> particularly on systems with a lot of RAM.
>
> You may also need to hack ZFS a bit. The following patch improves
> performance for me on amd64 (and avoids a deadlock). I have not
> tested whether it is sufficient or reasonable on i386 (only amd64);
> the KVA shortage there makes it hard to tune memory availability
> the way ZFS wants it.
>
> There is also a panic condition that may be triggered on SMP when
> you have INVARIANTS enabled. pjd and I don't yet understand the
> cause of this, but it appears to be spurious ("returning to
> userspace with 1 locks held" when no locks appear to actually be
> held, i.e. it seems to be some kind of leak in the stats).

Also on amd64 it helps to crank kern.maxvnodes way up if you have the
RAM for it (I use 400000 on my 2 GB system). With my patch it seems
to do a reasonable job of autotuning itself if you set it too high,
but there is a bit of performance loss from this if it kickstarts
vnlru too frequently. Watch vfs.numvnodes to see where it stabilizes
over time on your workload and then cap it a bit higher.
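For concreteness, those two knobs end up in loader.conf and
sysctl.conf. The vnode figure below is what I use on my 2 GB amd64
box; the kmem sizes are only an illustration of the idea, not a
recommendation for your hardware:

  # /boot/loader.conf -- vm.kmem_size_max is the boot-time tunable
  # behind the VM_KMEM_SIZE_MAX kernel option; set it as high as
  # your RAM lets you get away with (illustrative 2 GB values)
  vm.kmem_size="768M"
  vm.kmem_size_max="768M"

  # /etc/sysctl.conf -- vnode cap; watch vfs.numvnodes under load
  # first, then set this a bit above where it stabilizes
  kern.maxvnodes=400000

The kmem tunables only take effect at boot; kern.maxvnodes can also
be changed on a running system with "sysctl kern.maxvnodes=..." while
you experiment.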
On i386 this may be bad advice, since vnodes are also allocated out
of the kmem_map on i386 (on amd64 they use the direct-mapped area)
and will compete for space with everything else (i.e. with the
default kmem_map size you have to *lower* kern.maxvnodes from 100000
to 75000 to avoid ZFS running out of space). Running with maxvnodes
too low will seriously limit your performance by reducing caching,
though.

The bottom line is that ZFS on FreeBSD/i386 currently seems hard to
tune for performance, so if possible consider running it on amd64
instead. There is a lot of scope for someone to fix ZFS on FreeBSD
to be more sane about memory management (on all architectures), and
hopefully someone will be motivated to do that.

Kris
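P.S. The i386 counterpart of the sketch above goes the other way;
using the numbers from this message (again illustrative, not tested
recommendations):

  # /etc/sysctl.conf on i386 with the default kmem_map size: *lower*
  # the vnode cap (100000 -> 75000 here) so ZFS does not run the
  # kmem_map out of space
  kern.maxvnodes=75000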