Date: Mon, 29 Aug 2011 21:32:44 +0400
From: Lev Serebryakov <lev@FreeBSD.org>
To: Ivan Voras
Cc: freebsd-fs@freebsd.org
Reply-To: lev@FreeBSD.org
Message-ID: <1742839983.20110829213244@serebryakov.spb.ru>
References: <1963980291.20110826232758@serebryakov.spb.ru> <201108262052.p7QKqpen039191@chez.mckusick.com> <758608837.20110827112116@serebryakov.spb.ru>
Subject: Re: Strange behaviour of UFS2+SU FS on FreeBSD 8-Stable: dreadful performance for old data, excellent for new.

Hello, Ivan.
You wrote on 27 August 2011, 21:02:44:

>> I'm going to investigate later why it is only ~180MiB/s, when
>> theoretically it should be about (90*4) 360MiB/s linear read, and whom
>> to blame: UFS or geom_raid5 or both :)

> Try this: http://ivoras.net/blog/tree/2010-11-19.ufs-read-ahead.html
> (or it could be a hardware issue - controller bottleneck or something
> like that).

It is stranger and more complex than a simple "180MiB/s" read. I have:

(1) Software RAID5 (geom_raid5) on 5xWD Green HDDs (yes, I know that
    seek is not very fast on these disks, but I'm discussing only linear
    access now). Stripe size is 128KiB. Theoretical maximum performance
    is about 4*90 = 360MiB/s.

(2) FS with 32KiB blocks (unfortunately, there is (WAS?) an old bug
    where the system locks up when 16KiB- and 64KiB-block-sized FSes
    are present in one system).

(3) vfs.read_max=32, which means 32*32KiB = 1024KiB = 8 RAID stripes.
    Enough for parallel requests.

Under these conditions, well-placed large (more than 1GiB) files (not
the legacy ones, which are very fragmented because they were written on
an almost-full FS) give anywhere from 120MiB/s up to 350MiB/s. Some
files tend to read faster, some slower, and the speed seems to vary even
for one file from run to run (yes, I flush the memory cache by reading
big files between bench runs). And, yes, 350MiB/s is not typical:
120-180MiB/s occurs much, much more often than the higher speeds.

Do you have any ideas how to debug this situation and make sure that
geom_raid5 is doing its best and is not the bottleneck? Maybe some other
UFS2 tunings or diagnostics?

I've tried such a configuration with software RAID5 (ICH9R, which is
certainly not a hardware implementation) on Windows, and it was much
more consistent (and almost always showed speeds near the theoretical
maximum).

Another question is how to measure and diagnose writing...

-- 
// Black Lion AKA Lev Serebryakov
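For what it's worth, a minimal, repeatable sequential-read bench can be
scripted with plain dd. This is only a sketch: the file path and size
below are placeholders, not from the setup above; on the real array you
would point FILE at an existing large file (well above RAM size, to
defeat the buffer cache) and skip the creation step.

```shell
#!/bin/sh
# Sketch: measure sequential read throughput of one large file with dd.
# FILE and SIZE_MB are placeholders; adjust for the system under test.
FILE=/tmp/bench.dat
SIZE_MB=64            # use >RAM size on the real system to avoid cache hits

# Create a throwaway test file (on the real FS, use an existing file instead).
# bs=1048576 is 1 MiB, written numerically so it works with BSD and GNU dd.
dd if=/dev/zero of="$FILE" bs=1048576 count="$SIZE_MB" 2>/dev/null

# Read it back with a large block size; dd reports throughput on stderr.
# The current read-ahead setting can be checked with: sysctl vfs.read_max
dd if="$FILE" of=/dev/null bs=1048576 2>&1 | tail -1

rm -f "$FILE"
```

Running this several times against the same file, and then against
different files, would separate run-to-run variance from per-file
(fragmentation/placement) variance.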