From: Miroslav Lachman <000.fbsd@quip.cz>
Date: Fri, 06 Nov 2009 19:36:25 +0100
To: Ivan Voras
Cc: freebsd-stable@freebsd.org
Subject: Re: Performance issues with 8.0 ZFS and sendfile/lighttpd
Message-ID: <4AF46CA9.1040904@quip.cz>

Ivan Voras wrote:
> Miroslav Lachman wrote:
>> Ivan Voras wrote:
>>> Miroslav Lachman wrote:
>>
>> [..]
>>
>>>> I have an even stranger issue with Lighttpd in a jail on top of
>>>> ZFS. Lighttpd is serving static content (mp3 downloads through a
>>>> flash player). It runs fine for a relatively small number of
>>>> parallel clients with bandwidth around 30 Mbps, but after some
>>>> number of clients is reached (about 50-60 parallel clients) the
>>>> throughput drops to 6 Mbps.
>>>>
>>>> I can serve hundreds of clients on the same HW using Lighttpd
>>>> outside of a jail and UFS2 with gjournal instead of ZFS, reaching
>>>> 100 Mbps (maybe more).
>>>>
>>>> I don't know if it is a ZFS or a jail issue.
>>>
>>> Do you have actual disk IO, or is the vast majority of your data
>>> served from the caches? (actually - the same question to the OP)
>>
>> I had a ZFS zpool as a mirror of two SATA II drives (500 GB), and at
>> peak iostat (or systat -vm or gstat) showed about 80 tps / 60% busy.
>>
>> In the case of UFS, I am using gmirrored 1 TB SATA II drives,
>> working nicely at 160 or more tps.
>>
>> Both setups are using FreeBSD 7.x amd64 with a GENERIC kernel and
>> 4 GB of RAM.
>>
>> As ZFS + Lighttpd in a jail was unreliable, I am no longer using it,
>> but if you want some more info for debugging, I can set it up again.
>
> For what it's worth, I have just set up a little test on a production
> machine with three 500 GB SATA drives in RAIDZ, FreeBSD 7.2-RELEASE.
> The total data set is some 2 GB in 5000 files, but the machine has
> only 2 GB of RAM total, so there is some disk IO - about 40 IOPS per
> drive. I'm also using Apache-worker, not lighty, and siege to
> benchmark with 10 concurrent users.
>
> In this setup, the machine has no problems saturating a 100 Mbit/s
> link - it's not on a LAN, but the latency is close enough and I get
> ~11 MB/s.

I tried it again to get some system statistics for you, so here it
comes. I do not understand why there is 10 MB/s of reads from the
disks when the network traffic has dropped to around 1 MB/s (8 Mbps).

root@cage ~/# iostat -w 20
      tty             ad4              ad6             cpu
 tin tout  KB/t tps  MB/s   KB/t tps  MB/s  us ni sy in id
   0   14 41.66  53  2.17  41.82  53  2.18   0  0  2  0 97
   0   18 50.92  96  4.77  54.82 114  6.12   0  0  3  1 96
   0    6 53.52 101  5.29  54.98 108  5.81   1  0  4  1 94
   0    6 54.82  98  5.26  55.89 108  5.89   0  0  3  1 96

root@cage ~/# ifstat -i bge1 10
        bge1
  KB/s in  KB/s out
    33.32   1174.34
    34.35   1181.33
    33.14   1172.27
    31.64   1118.60

root@cage ~/# zpool iostat 10
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank         382G  62.5G     73     31  3.30M   148K
tank         382G  62.5G    150     38  11.2M   138K
tank         382G  62.5G    148     33  11.3M  99.6K
tank         382G  62.5G    148     29  10.9M  93.2K
tank         382G  62.5G    137     25  10.4M  75.4K
tank         382G  62.5G    149     32  11.3M   122K

root@cage ~/# ~/bin/zfs_get_kernel_mem.sh
TEXT=13245157, 12.6316 MB
DATA=267506688, 255.114 MB
TOTAL=280751845, 267.746 MB

root@cage ~/# ~/bin/arcstat.pl 10
    Time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%      arcsz          c
15:34:38  705M   46M      6   46M    6     0    0   29M   18  137061376  134217728
15:34:48    1K   148     11   148   11     0    0    57   96  137495552  134217728
15:34:58    1K   151     11   151   11     0    0    59   96  136692736  134217728
15:35:08    1K   140     10   140   10     0    0    45   76  165005824  134217728
15:35:18    1K   150      9   150    9     0    0    54   91  141642240  134217728

root@cage ~/# ~/bin/arc_summary.pl
System Memory:
         Physical RAM:  4083 MB
         Free Memory :  0 MB

ARC Size:
         Current Size:             133 MB (arcsize)
         Target Size (Adaptive):   128 MB (c)
         Min Size (Hard Limit):     16 MB (zfs_arc_min)
         Max Size (Hard Limit):    128 MB (zfs_arc_max)

ARC Size Breakdown:
         Most Recently Used Cache Size:    97%  125 MB (p)
         Most Frequently Used Cache Size:   2%    2 MB (c-p)

ARC Efficency:
         Cache Access Total:   7052224705
         Cache Hit Ratio:  93%  6582803808  [Defined State for buffer]
         Cache Miss Ratio:  6%    469420897  [Undefined State for Buffer]
         REAL Hit Ratio:   93%  6582803808  [MRU/MFU Hits Only]

         Data Demand   Efficiency:  96%
         Data Prefetch Efficiency:  DISABLED (zfs_prefetch_disable)

         CACHE HITS BY CACHE LIST:
           Anon:                        --%  Counter Rolled.
           Most Recently Used:          13%  869219380 (mru)        [ Return Customer ]
           Most Frequently Used:        86%  5713584428 (mfu)       [ Frequent Customer ]
           Most Recently Used Ghost:     0%  25025402 (mru_ghost)   [ Return Customer Evicted, Now Back ]
           Most Frequently Used Ghost:   1%  103104325 (mfu_ghost)  [ Frequent Customer Evicted, Now Back ]

         CACHE HITS BY DATA TYPE:
           Demand Data:        80%  5331503088
           Prefetch Data:       0%  0
           Demand Metadata:    19%  1251300720
           Prefetch Metadata:   0%  0

         CACHE MISSES BY DATA TYPE:
           Demand Data:        38%  179172125
           Prefetch Data:       0%  0
           Demand Metadata:    61%  290248772
           Prefetch Metadata:   0%  0
---------------------------------------------

/boot/loader.conf:

## eLOM support
hw.bge.allow_asf="1"

## gmirror RAID1
geom_mirror_load="YES"

## ZFS tuning
vm.kmem_size="1280M"
vm.kmem_size_max="1280M"
kern.maxvnodes="400000"
vfs.zfs.prefetch_disable="1"
vfs.zfs.arc_min="16M"
vfs.zfs.arc_max="128M"
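For reference, whether those loader.conf limits actually took effect
can be double-checked at runtime. A minimal sketch, assuming the
sysctl/kstat names the FreeBSD 7.x ZFS port exposes (the same kstats
arc_summary.pl reads):

  # loader.conf tunables as the kernel sees them
  sysctl vm.kmem_size vfs.zfs.arc_min vfs.zfs.arc_max
  # current ARC size and its adaptive target
  sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c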
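And since the subject of this thread mentions sendfile, one test I
have not run yet: lighttpd (1.4.x at least) can be told to skip
sendfile(2) and fall back to plain writev(), which should show
whether it is the sendfile path on top of ZFS that collapses. A
sketch for lighttpd.conf, option name per the lighttpd 1.4
documentation, untested on this box:

  ## bypass sendfile(2); serve files via ordinary reads + writev()
  server.network-backend = "writev"

If the throughput no longer drops with this set, the problem would
be in the sendfile/ZFS interaction rather than in the jail.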
The network traffic is normally around 30 Mbps, but when the number
of parallel downloads reaches a certain level, the traffic drops to
6-8 Mbps while the number of parallel clients climbs even higher.

I can provide network and disk IO graphs if you are interested.

Miroslav Lachman