From: Miroslav Lachman <000.fbsd@quip.cz>
Date: Fri, 06 Nov 2009 23:41:12 +0100
To: Thomas Backman
Cc: freebsd-stable@freebsd.org, Ivan Voras
Subject: Re: Performance issues with 8.0 ZFS and sendfile/lighttpd

Thomas Backman wrote:
> On Nov 6, 2009, at 7:36 PM, Miroslav Lachman wrote:
>
>> Ivan Voras wrote:
>>> Miroslav Lachman wrote:
>>>> Ivan Voras wrote:
>>>>> Miroslav Lachman wrote:
>>>>
>>>> [..]
>>>>
>>>>>> I have a stranger issue with Lighttpd in a jail on top of ZFS.
>>>>>> Lighttpd is serving static content (MP3 downloads through a Flash
>>>>>> player). It runs fine for a relatively small number of parallel
>>>>>> clients, with bandwidth about 30 Mbps, but after some number of
>>>>>> clients is reached (about 50-60 parallel clients), the throughput
>>>>>> drops down to 6 Mbps.
>>>>>>
>>>>>> I can serve hundreds of clients on the same HW using Lighttpd not
>>>>>> in a jail and UFS2 with gjournal instead of ZFS, reaching 100 Mbps
>>>>>> (maybe more).
>>>>>>
>>>>>> I don't know if it is a ZFS or a jail issue.
>>>>>
>>>>> Do you have actual disk IO, or is the vast majority of your data
>>>>> served from the caches? (actually - the same question to the OP)
>>>>
>>>> I had a ZFS zpool as a mirror of two SATA II drives (500 GB), and at
>>>> peak iostat (or systat -vm or gstat) showed about 80 tps / 60% busy.
>>>>
>>>> In the case of UFS, I am using gmirrored 1 TB SATA II drives, working
>>>> nicely at 160 or more tps.
>>>>
>>>> Both setups use FreeBSD 7.x amd64 with a GENERIC kernel and 4 GB of
>>>> RAM.
>>>>
>>>> As the ZFS + Lighttpd in a jail setup was unreliable, I am no longer
>>>> using it, but if you want some more info for debugging, I can set it
>>>> up again.
>>>
>>> For what it's worth, I have just set up a little test on a production
>>> machine with three 500 GB SATA drives in RAIDZ, FreeBSD 7.2-RELEASE.
>>> The total data set is some 2 GB in 5000 files, but the machine has
>>> only 2 GB of RAM total, so there is some disk IO - about 40 IOPS per
>>> drive.
>>> I'm also using Apache-worker, not lighty, and siege to benchmark
>>> with 10 concurrent users.
>>>
>>> In this setup, the machine has no problems saturating a 100 Mbit/s
>>> link - it's not on a LAN, but the latency is close enough and I get
>>> ~11 MB/s.
>>
>> [...]
>> /boot/loader.conf:
>>
>> ## eLOM support
>> hw.bge.allow_asf="1"
>> ## gmirror RAID1
>> geom_mirror_load="YES"
>> ## ZFS tuning
>> vm.kmem_size="1280M"
>> vm.kmem_size_max="1280M"
>> kern.maxvnodes="400000"
>> vfs.zfs.prefetch_disable="1"
>> vfs.zfs.arc_min="16M"
>> vfs.zfs.arc_max="128M"
>
> I won't pretend to know much about this area, but your ZFS values here
> are very low. May I assume that they are remnants of the times when
> the ARC grew insanely large and caused a kernel panic?
> You're effectively forcing ZFS to not use more than 128 MB of cache,
> which doesn't sound like a great idea if you've got 2+ GB of RAM. I've
> had no trouble without any tuning whatsoever on 2 GB for a long time
> now. The kmem lines can probably be omitted if you're on amd64, too
> (the default value for kmem_size_max is about 307 GB on my machine).

Yes, those loader values are a year old, from when I installed this
machine. But I think the auto-tuning was committed after 7.2-RELEASE by
Kip Macy, so some of them are still needed - or am I wrong? (This is
7.2-RELEASE.)

I can grow arc_max, but as this machine is running about 6 jails (not
CPU or disk IO hungry), I still need some memory for processes, not
just for the filesystem.
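
For completeness, this is roughly how I would check what the ARC
actually uses before touching arc_max (a quick sketch; these are the
sysctl names as I know them on my 7.2 boxes, so please double-check on
yours):

  # configured bounds, as set in /boot/loader.conf
  sysctl vfs.zfs.arc_min vfs.zfs.arc_max
  # current ARC size in bytes
  sysctl kstat.zfs.misc.arcstats.size
  # cache effectiveness counters
  sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
  # kmem limits the ARC has to fit within
  sysctl vm.kmem_size vm.kmem_size_max

If arcstats.size sits pinned at arc_max while the miss counter keeps
climbing under load, the cache is probably too small for the working
set and growing arc_max should help.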