From: Adam Nowacki <nowakpl@platinum.linux.pl>
Date: Tue, 20 Nov 2012 15:43:01 +0100
To: freebsd-fs@freebsd.org
Subject: Re: ZFS FAQ (Was: SSD recommendations for ZFS cache/log)

On 2012-11-20 09:57, Steven Hartland wrote:
>> vfs.zfs.arc_min="10000M"
>> vfs.zfs.arc_max="10000M"
>> vfs.zfs.vdev.cache.size="16M"   # vdev cache helps a lot during scrubs
>> vfs.zfs.vdev.cache.bshift="14"  # grow all I/O requests to 16 KiB; smaller
>>                                 # requests have shown the same latency, so
>>                                 # we might as well get more "for free"
>> vfs.zfs.vdev.cache.max="16384"
>
> This has been disabled by default for a while; are you sure of the benefits?
>
> "Disable vdev cache (readahead) by default.
>
> The vdev cache is very underutilized (hit ratio 30%-70%) and may consume
> excessive memory on systems with many vdevs.
>
> Illumos-gate revision: 13346"

I'm not sure anymore - I'm getting very weird results with the vdev cache
both enabled and disabled. What I am sure of is that the 160 MB the vdev
cache uses in my case is about 1.5% of the 10 GB ARC, so it can be ignored
as insignificant.

The weird results (just after reboot, filesystems not mounted so the pool is
completely idle; begin a scrub, wait 30 seconds, stop the scrub, and see how
much got scrubbed):

run 1) 9990 MB
run 2) 1530 MB
run 3) 10400 MB
run 4) 1490 MB
run 5) 1540 MB
run 6) 1430 MB
run 7) 10600 MB

Is ZFS tossing a coin to decide whether the scrub should be slow or fast?
Heads - 333 MB/s, tails - 50 MB/s ...
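
For reference, each run above amounted to roughly the following (the pool
name "tank" is just a placeholder):

    zpool scrub tank       # start the scrub right after boot
    sleep 30               # let it run for 30 seconds
    zpool status tank      # note how much has been scanned so far
    zpool scrub -s tank    # cancel the scrub

The pool was imported but nothing mounted, so the scrub had the disks to
itself for the whole 30 seconds.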
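
If anyone wants to check whether the vdev cache actually gets hits on their
own system, the counters show up as sysctls on my box under the kstat names
below (adjust if your ZFS version exposes them differently):

    # vdev cache hit/miss counters
    sysctl kstat.zfs.misc.vdev_cache_stats.hits
    sysctl kstat.zfs.misc.vdev_cache_stats.misses
    sysctl kstat.zfs.misc.vdev_cache_stats.delegations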