From: Marco van Tol <marco@tolstoy.tols.org>
To: freebsd-fs@freebsd.org
Date: Fri, 3 Sep 2010 15:06:46 +0200
Message-ID: <20100903130646.GD19666@tolstoy.tols.org>
Subject: Re: just another sad story in zfs tuning city

On Fri, Sep 03, 2010 at 09:58:28PM +0900, Randy Bush wrote:
> i have a system that should move right along.  but it is as soggy as a
> tenugui on an august afternoon.
>
> 2*amd64 2.0GHz 4GB 750GB
> 8.1-STABLE Sun Aug  8 01:05:24 UTC 2010
>
> /dev/mirror/boota    8.3G   909M   6.7G    12%    /
> procfs               4.1k   4.1k     0B   100%    /proc
> tank                 729G    28k   729G     0%    /tank
> tank/data            729G    31k   729G     0%    /data
> tank/data/nfsen      877G   148G   729G    17%    /data/nfsen
> tank/data/rpki       729G   107M   729G     0%    /data/rpki
> tank/usr             737G   7.7G   729G     1%    /usr
> tank/usr/home        732G   2.6G   729G     0%    /usr/home
> tank/usr/usr         732G   2.6G   729G     0%    /usr/usr
> tank/var             730G   759M   729G     0%    /var
> tank/var/log         730G   415M   729G     0%    /var/log
> tank/var/spool       729G    68M   729G     0%    /var/spool
>
> work0.psg.com:/usr/home/rancid# cat /boot/loader.conf.local
> loader_logo=beastie
> zfs_load=YES
> vm.kmem_size=4G
> vfs.zfs.arc_max=64M
> vfs.zfs.prefetch_disable=1
> geom_mirror_load=YES
> kern.maxvnodes=50000
>
> i am tempted to just boot without the zfs memory hacks in loader conf.
> any warnings on doing so?  any other clues also gladly accepted.

In my view you have limited the page cache (or its ZFS equivalent, the ARC)
for your ZFS filesystems to 64MB out of 4GB of system memory.  I'm not sure
what your I/O load looks like, but that could explain the sluggishness? :-)

Sticking a wet finger in the air, and assuming /dev/mirror/boota holds a UFS
filesystem that is only used to boot from, I'd say give kmem_size 1.5x
physical memory, arc_max 1.0x physical, and arc_min 0.5x physical, so the
ARC can compete with the UFS page cache.  After that, applications can't
use more than 0.5x physical memory plus swap space.

Then again, I'm by far no ZFS expert, so consider this an experiment ;)

Marco

-- 
Now watch me snatch defeat from the jaws of victory
    - "Rigoletto" during a game on www.dailygammon.com
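P.S.  Concretely, on your 4GB box the ratios above would translate into
something like the loader.conf.local below.  Treat it as an untested
sketch of my suggestion, not verified advice -- the exact values are my
assumptions, and you should sanity-check them against your workload
before rebooting with them:

```
# Hypothetical /boot/loader.conf.local for a 4GB machine, applying the
# kmem_size 1.5x / arc_max 1.0x / arc_min 0.5x ratios suggested above.
# Untested sketch -- adjust to taste.
loader_logo=beastie
zfs_load=YES
geom_mirror_load=YES
vm.kmem_size="6G"            # ~1.5x physical memory
vfs.zfs.arc_max="4G"         # ~1.0x physical memory
vfs.zfs.arc_min="2G"         # ~0.5x physical memory
vfs.zfs.prefetch_disable=1
kern.maxvnodes=50000
```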