From: Dan Naumov <dan.naumov@gmail.com>
To: freebsd-fs@freebsd.org, freebsd-questions@freebsd.org, FreeBSD-STABLE Mailing List
Date: Sun, 24 Jan 2010 18:36:22 +0200
Subject: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

Note: Since my issue is slow performance right off the bat and not
performance degradation over time, I decided to start a separate discussion.

After installing a fresh pure-ZFS 8.0 system and building all my ports, I decided to do some benchmarking. At this point, about a dozen ports have been built and installed, and the system has been up for about 11 hours. No heavy background services have been running, only SSHD and NTPD:

==================================================================================

bonnie -s 8192:

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
         8192 23821 61.7 22311 19.2 13928 13.7 25029 49.6 44806 17.2 135.0  3.1

During the process, top looks like this:

last pid: 83554;  load averages: 0.31, 0.31, 0.37    up 0+10:59:01  17:24:19
33 processes: 2 running, 31 sleeping
CPU:  0.1% user, 0.0% nice, 14.1% system, 0.7% interrupt, 85.2% idle
Mem: 45M Active, 4188K Inact, 568M Wired, 144K Cache, 1345M Free
Swap: 3072M Total, 3072M Free

Oh wow, that looks low. Alright, let's run it again, just to be sure:

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
         8192 18235 46.7 23137 19.9 13927 13.6 24818 49.3 44919 17.3 134.3  2.1

OK, let's reboot the machine and see what kind of numbers we get on a fresh boot:

===============================================================

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
         8192 21041 53.5 22644 19.4 13724 12.8 25321 48.5 43110 14.0 143.2  3.3

Nope, no help from the reboot, still very low speed.
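As a sanity check on those bonnie figures, I could also measure raw sequential throughput with a plain dd pass, along these lines (just a sketch; the path and sizes are arbitrary examples, and a real run would need a file larger than RAM so the ZFS ARC cannot serve the reads from cache):

```shell
# Write a test file with large sequential blocks, then read it back.
# 64 MB here is only for illustration -- use a file bigger than RAM
# for a meaningful number, since ZFS will otherwise cache everything.
dd if=/dev/zero of=/tmp/zfs-seq-test.bin bs=1048576 count=64
dd if=/tmp/zfs-seq-test.bin of=/dev/null bs=1048576
rm /tmp/zfs-seq-test.bin
```

dd prints the elapsed time and bytes/sec for each pass, which gives a rough second opinion independent of bonnie.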
Here is my pool:

===============================================================

zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        tank         ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            ad10s1a  ONLINE       0     0     0
            ad8s1a   ONLINE       0     0     0

===============================================================

diskinfo -c -t /dev/ad10
/dev/ad10
        512             # sectorsize
        2000398934016   # mediasize in bytes (1.8T)
        3907029168      # mediasize in sectors
        3876021         # Cylinders according to firmware.
        16              # Heads according to firmware.
        63              # Sectors according to firmware.
        WD-WCAVY0301430 # Disk ident.

I/O command overhead:
        time to read 10MB block      0.164315 sec  =  0.008 msec/sector
        time to read 20480 sectors   3.030396 sec  =  0.148 msec/sector
        calculated command overhead                =  0.140 msec/sector

Seek times:
        Full stroke:      250 iter in   7.309334 sec =   29.237 msec
        Half stroke:      250 iter in   5.156117 sec =   20.624 msec
        Quarter stroke:   500 iter in   8.147588 sec =   16.295 msec
        Short forward:    400 iter in   2.544309 sec =    6.361 msec
        Short backward:   400 iter in   2.007679 sec =    5.019 msec
        Seq outer:       2048 iter in   0.392994 sec =    0.192 msec
        Seq inner:       2048 iter in   0.332582 sec =    0.162 msec

Transfer rates:
        outside:       102400 kbytes in   1.576734 sec =    64944 kbytes/sec
        middle:        102400 kbytes in   1.381803 sec =    74106 kbytes/sec
        inside:        102400 kbytes in   2.145432 sec =    47729 kbytes/sec

===============================================================

diskinfo -c -t /dev/ad8
/dev/ad8
        512             # sectorsize
        2000398934016   # mediasize in bytes (1.8T)
        3907029168      # mediasize in sectors
        3876021         # Cylinders according to firmware.
        16              # Heads according to firmware.
        63              # Sectors according to firmware.
        WD-WCAVY1611513 # Disk ident.
I/O command overhead:
        time to read 10MB block      0.176820 sec  =  0.009 msec/sector
        time to read 20480 sectors   2.966564 sec  =  0.145 msec/sector
        calculated command overhead                =  0.136 msec/sector

Seek times:
        Full stroke:      250 iter in   7.993339 sec =   31.973 msec
        Half stroke:      250 iter in   5.944923 sec =   23.780 msec
        Quarter stroke:   500 iter in   9.744406 sec =   19.489 msec
        Short forward:    400 iter in   2.511171 sec =    6.278 msec
        Short backward:   400 iter in   2.233714 sec =    5.584 msec
        Seq outer:       2048 iter in   0.427523 sec =    0.209 msec
        Seq inner:       2048 iter in   0.341185 sec =    0.167 msec

Transfer rates:
        outside:       102400 kbytes in   1.516305 sec =    67533 kbytes/sec
        middle:        102400 kbytes in   1.351877 sec =    75747 kbytes/sec
        inside:        102400 kbytes in   2.090069 sec =    48994 kbytes/sec

===============================================================

The exact same disks, on the exact same machine, are easily capable of 65+ MB/s throughput (tested with ATTO multiple times, with different block sizes) under Windows Server 2008 and NTFS. So what could be the cause of these very low bonnie numbers in my case? Should I try some other benchmark, and if so, with what parameters?

- Sincerely,
Dan Naumov
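PS: In case it is relevant, here is a quick way to dump the kmem/ARC sysctls that usually come up in ZFS performance threads on 8.0 (a sketch only; the fallback echo is just so the loop degrades gracefully on systems that lack these OIDs):

```shell
# Print the memory/ARC tunables commonly inspected when FreeBSD 8.x
# ZFS performance is in question. Unknown OIDs fall through to a note
# rather than aborting the loop.
for oid in vm.kmem_size vfs.zfs.arc_max vfs.zfs.arc_min vfs.zfs.prefetch_disable; do
    sysctl "$oid" 2>/dev/null || echo "$oid: not available on this system"
done
```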