From: Scott Long <scottl@samsco.org>
Date: Sun, 08 May 2005 07:05:11 -0600
To: Steven Hartland
cc: Eric Anderson, Poul-Henning Kamp, Robert Watson, Petri Helenius,
    freebsd-performance@freebsd.org
Subject: Re: Very low disk performance on 5.x

Steven Hartland wrote:
> Summary of results:
>
> RAID0:
> Changing vfs.read_max 8 -> 16 and MAXPHYS 128k -> 1M increased
> read performance significantly, from 129 MB/s to 199 MB/s.
> Max raw device speed here was 234 MB/s.
> FS vs. raw device: 35 MB/s slower, a 14.9% performance loss.
>
> RAID5:
> Changing vfs.read_max 8 -> 16 produced a small increase,
> 129 MB/s to 135 MB/s.
> Increasing MAXPHYS 128k -> 1M prevented vfs.read_max from having
> any effect.
> Max raw device speed here was 200 MB/s.
> FS vs. raw device: 65 MB/s slower, a 32.5% performance loss.
>
> Note: this batch of tests was done on a uniprocessor kernel to keep
> variation down to a minimum, so the results are not directly
> comparable with my previous tests. All tests were performed with a
> 16k RAID stripe across all 5 disks and a default newfs. Increasing
> or decreasing the block size for the fs was tried, but it only had
> negative effects.

Changing MAXPHYS is very dangerous, unfortunately. The root of the
problem is that kernel virtual memory (KVA) gets assigned to each I/O
buffer as it passes through the kernel. If we allow too much I/O
through at once then we have the very real possibility of exhausting
the kernel address space and causing a deadlock and/or panic. That is
why MAXPHYS is set so low. Your dd test is unlikely to trigger a
problem, but try doing a bunch of dd's in parallel and you likely
will. The solution is to re-engineer the way that I/O buffers pass
through the kernel and only assign KVA when needed (for doing software
parity calculations, for example). That way we could make MAXPHYS an
arbitrarily large number and not worry about exhausting KVA.
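For reference, the dd-style sequential read behind numbers like the
ones quoted above boils down to something like the minimal sketch
below: read in large chunks, time it, and report MB/s. The /dev/da0
path, the 1 MB block size, and the 1 GB read length are placeholder
assumptions, not the actual test setup.

#include <sys/types.h>
#include <sys/time.h>
#include <err.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define BLOCKSIZE (1024 * 1024)         /* 1 MB per read(), like dd bs=1m */

int
main(int argc, char **argv)
{
        const char *path = argc > 1 ? argv[1] : "/dev/da0"; /* placeholder device */
        long long total = 1024LL * 1024 * 1024;  /* stop after roughly 1 GB */
        long long done = 0;
        struct timeval t0, t1;
        double secs;
        ssize_t n;
        char *buf;
        int fd;

        if ((buf = malloc(BLOCKSIZE)) == NULL)
                err(1, "malloc");
        if ((fd = open(path, O_RDONLY)) == -1)
                err(1, "open %s", path);

        gettimeofday(&t0, NULL);
        while (done < total && (n = read(fd, buf, BLOCKSIZE)) > 0)
                done += n;
        gettimeofday(&t1, NULL);
        if (n == -1)
                err(1, "read");

        secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
        if (secs <= 0)
                secs = 1e-6;            /* avoid divide-by-zero on very fast runs */
        printf("%lld bytes in %.2f s = %.1f MB/s\n",
            done, secs, done / secs / (1024.0 * 1024.0));

        free(buf);
        close(fd);
        return (0);
}

Running it once against the raw device and once against a large file
on the filesystem gives the same FS-vs-raw comparison as the results
above.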
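To put rough numbers on the KVA concern, here is a back-of-the-envelope
sketch assuming MAXPHYS raised to 1 MB, a few hundred I/Os in flight
(many parallel dd's with deep controller queues), and roughly 1 GB of
KVA on a default i386 kernel; all figures are illustrative assumptions,
not measurements.

#include <stdio.h>

int
main(void)
{
        /* Illustrative assumptions, not measurements. */
        long long maxphys  = 1024 * 1024;  /* MAXPHYS raised from 128 kB to 1 MB */
        long long kva_i386 = 1LL << 30;    /* roughly 1 GB of KVA on a default i386 kernel */
        int inflight = 256;                /* hypothetical: many parallel dd's, deep queues */

        /*
         * Each in-flight buffer pins up to MAXPHYS bytes of KVA, and the
         * same address space also has to hold the buffer cache, mbufs,
         * and everything else the kernel maps.
         */
        long long pinned = (long long)inflight * maxphys;
        printf("%d in-flight 1 MB I/Os pin %lld MB of KVA (%.0f%% of ~1 GB)\n",
            inflight, pinned >> 20, 100.0 * pinned / kva_i386);
        return (0);
}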
I believe that there is some work in progress in this area, but it's a
large project since nearly every single storage driver would need to be
changed. Another possibility is to recognise that amd64 doesn't have
the same KVA restrictions as i386 and thus can be treated differently.
However, doing the KVA work is still attractive since it'll yield some
performance benefits too.

Scott