From: "Steven Hartland" <killing@multiplay.co.uk>
To: "Petri Helenius", "Robert Watson"
Cc: Poul-Henning Kamp, Eric Anderson, freebsd-performance@freebsd.org
Date: Tue, 3 May 2005 15:51:50 +0100
Subject: Re: Very low disk performance on 5.x

Summary of results:

RAID0:
Changing vfs.read_max 8 -> 16 and MAXPHYS 128k -> 1M increased read
performance significantly, from 129MB/s to 199MB/s.
Max raw device speed here was 234MB/s.
FS -> raw device: 35MB/s, a 14.9% performance loss.

RAID5:
Changing vfs.read_max 8 -> 16 produced a small increase, 129MB/s to 135MB/s.
Increasing MAXPHYS 128k -> 1M prevented vfs.read_max from having any effect.
Max raw device speed here was 200MB/s.
FS -> raw device: 65MB/s, a 32.5% performance loss.

Note: This batch of tests was done on a uniprocessor kernel to keep
variation down to a minimum, so the numbers are not directly comparable
with my previous tests. All tests were performed with a 16k RAID stripe
across all 5 disks and a default newfs. Increasing or decreasing the
block size for the fs was also tried but only had negative effects.
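In case anyone wants to repeat the tests, this is roughly how the two
tunables are changed (paths and defaults from memory, so check them
against your own source tree):

# read-ahead: a normal sysctl, takes effect immediately and can be set
# in /etc/sysctl.conf to survive a reboot
sysctl vfs.read_max=16

# MAXPHYS: a compile-time constant in sys/sys/param.h, so bumping it
# from the stock 128k to 1M means rebuilding the kernel after changing
# the define to:
#define MAXPHYS         (1024 * 1024)   /* max raw I/O transfer size */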
Results:

**RAID0**

sysctl vfs.read_max=8
dd if=/mnt/testfile of=/dev/null bs=64k count=100000
100000+0 records in
100000+0 records out
6553600000 bytes transferred in 48.459904 secs (135237577 bytes/sec)

sysctl vfs.read_max=16
dd if=/mnt/testfile of=/dev/null bs=64k count=100000
100000+0 records in
100000+0 records out
6553600000 bytes transferred in 35.338873 secs (185450169 bytes/sec)

sysctl vfs.read_max=24
dd if=/mnt/testfile of=/dev/null bs=64k count=100000
100000+0 records in
100000+0 records out
6553600000 bytes transferred in 35.692228 secs (183614203 bytes/sec)

sysctl vfs.read_max=32
dd if=/mnt/testfile of=/dev/null bs=64k count=100000
100000+0 records in
100000+0 records out
6553600000 bytes transferred in 35.694294 secs (183603576 bytes/sec)

sysctl vfs.read_max=8 + MAXPHYS = 1M
dd if=/mnt/testfile of=/dev/null bs=64k count=100000
100000+0 records in
100000+0 records out
6553600000 bytes transferred in 40.546771 secs (161630626 bytes/sec)

sysctl vfs.read_max=16 + MAXPHYS = 1M
dd if=/mnt/testfile of=/dev/null bs=64k count=100000
100000+0 records in
100000+0 records out
6553600000 bytes transferred in 31.698017 secs (206751103 bytes/sec)

sysctl vfs.read_max=24 + MAXPHYS = 1M
dd if=/mnt/testfile of=/dev/null bs=64k count=100000
100000+0 records in
100000+0 records out
6553600000 bytes transferred in 31.409248 secs (208651924 bytes/sec)

dd if=/mnt/testfile of=/dev/null bs=32k count=200000
200000+0 records in
200000+0 records out
6553600000 bytes transferred in 31.981614 secs (204917739 bytes/sec)

dd if=/mnt/testfile of=/dev/null bs=128k count=50000
50000+0 records in
50000+0 records out
6553600000 bytes transferred in 31.277051 secs (209533821 bytes/sec)

dd if=/mnt/testfile of=/dev/null bs=256k count=25000
25000+0 records in
25000+0 records out
6553600000 bytes transferred in 33.271369 secs (196974161 bytes/sec)

dd if=/mnt/testfile of=/dev/null bs=512k count=12500
12500+0 records in
12500+0 records out
6553600000 bytes transferred in 35.499299 secs (184612096 bytes/sec)

dd if=/mnt/testfile of=/dev/null bs=1024k count=6250
6250+0 records in
6250+0 records out
6553600000 bytes transferred in 36.861447 secs (177790090 bytes/sec)

sysctl vfs.read_max=24 + MAXPHYS = 1M ( raw device )
dd if=/dev/da0 of=/dev/null bs=64k count=100000
100000+0 records in
100000+0 records out
6553600000 bytes transferred in 27.818829 secs (235581447 bytes/sec)

dd if=/dev/da0 of=/dev/null bs=1024k count=6250
6250+0 records in
6250+0 records out
6553600000 bytes transferred in 26.610258 secs (246280963 bytes/sec)

**RAID5**

sysctl vfs.read_max=8 ( raw device )
dd if=/dev/da0 of=/dev/null bs=64k count=100000
100000+0 records in
100000+0 records out
6553600000 bytes transferred in 31.110141 secs (210657997 bytes/sec)

dd if=/dev/da0 of=/dev/null bs=1024k count=6520
6520+0 records in
6520+0 records out
6836715520 bytes transferred in 31.147035 secs (219498116 bytes/sec)

sysctl vfs.read_max=8
dd if=/mnt/testfile of=/dev/null bs=64k count=100000
100000+0 records in
100000+0 records out
6553600000 bytes transferred in 48.380458 secs (135459652 bytes/sec)

sysctl vfs.read_max=16
dd if=/mnt/testfile of=/dev/null bs=64k count=100000
100000+0 records in
100000+0 records out
6553600000 bytes transferred in 46.318109 secs (141491096 bytes/sec)

sysctl vfs.read_max=24
dd if=/mnt/testfile of=/dev/null bs=64k count=100000
100000+0 records in
100000+0 records out
6553600000 bytes transferred in 46.305371 secs (141530018 bytes/sec)

sysctl vfs.read_max=8 + MAXPHYS = 1M
dd if=/mnt/testfile of=/dev/null bs=64k count=100000
100000+0 records in
100000+0 records out
6553600000 bytes transferred in 49.093645 secs (133491819 bytes/sec)

sysctl vfs.read_max=16 + MAXPHYS = 1M
dd if=/mnt/testfile of=/dev/null bs=64k count=100000
100000+0 records in
100000+0 records out
6553600000 bytes transferred in 48.347905 secs (135550857 bytes/sec)

sysctl vfs.read_max=24 + MAXPHYS = 1M
dd if=/mnt/testfile of=/dev/null bs=64k count=100000
100000+0 records in
100000+0 records out
6553600000 bytes transferred in 48.702546 secs (134563807 bytes/sec)

sysctl vfs.read_max=24 + MAXPHYS = 1M ( raw device )
dd if=/dev/da0 of=/dev/null bs=64k count=100000
100000+0 records in
100000+0 records out
6553600000 bytes transferred in 38.822286 secs (168810254 bytes/sec)

dd if=/dev/da0 of=/dev/null bs=1024k count=6250
6250+0 records in
6250+0 records out
6553600000 bytes transferred in 38.727828 secs (169221987 bytes/sec)

----- Original Message -----
From: "Petri Helenius"
>
> I noticed that changing vfs.read_max from the default 8 to 16 has a
> dramatic effect on sequential read performance. Increasing it further
> did not have measurable effect.
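P.S. For anyone checking the figures: the MB/s numbers in the summary are
just dd's bytes/sec divided by 1024*1024 and rounded, and the loss
percentages compare the filesystem read to the raw device read on the same
array. Taking the RAID5 case as a worked example:

210657997 / ( 1024 * 1024 )  =  ~200MB/s   ( raw device, bs=64k )
141491096 / ( 1024 * 1024 )  =  ~135MB/s   ( fs, vfs.read_max=16 )
( 200 - 135 ) / 200          =  32.5% loss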