From owner-freebsd-questions@FreeBSD.ORG Mon Apr 27 16:18:25 2009
Date: Mon, 27 Apr 2009 19:18:24 +0300
From: Ghirai
To: freebsd-questions@freebsd.org
Message-Id: <20090427191824.25e415e4.ghirai@ghirai.com>
Subject: quick vfs tuning

Hi,

I'm running a RAID1 setup with gmirror, and geli (AES-128) on top of that.

While searching for ways to improve read performance, I found some posts (on kerneltrap, I think) about vfs.read_max. The author suggested that increasing the default value of 8 to 16 resulted in increased read speed, and that increasing it further gave no noticeable gain. Results are below.
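[Editor's note: the sweep below was done by hand; a minimal script sketch of the same procedure is shown here. It assumes root on FreeBSD and a large test file (a.iso, as in the transcript); the mbps helper only does the bytes/secs -> MiB/s arithmetic and runs anywhere.]

```shell
#!/bin/sh
# Sketch of the vfs.read_max sweep from the transcript below.
# The sysctl/dd loop needs root on FreeBSD and a readable test file;
# it is skipped automatically when the file is absent.

FILE=${1:-a.iso}   # hypothetical file name, as in the transcript

# Convert a byte count and a duration in seconds to MiB/s, one decimal.
mbps() {
    awk -v b="$1" -v s="$2" 'BEGIN { printf "%.1f\n", b / s / 1048576 }'
}

if [ -r "$FILE" ]; then
    for n in 8 16 32 64 128 256; do
        sysctl vfs.read_max="$n"
        dd if="$FILE" of=/dev/null bs=3M
    done
fi

# Example: the first run in the transcript, 3554287616 bytes in 176.825898 s
mbps 3554287616 176.825898    # prints 19.2
```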
Starting with vfs.read_max=32:

triton# dd if=a.iso of=/dev/null bs=3M
1129+1 records in
1129+1 records out
3554287616 bytes transferred in 176.825898 secs (20100492 bytes/sec)

triton# sysctl vfs.read_max=64
vfs.read_max: 32 -> 64
triton# dd if=a.iso of=/dev/null bs=3M
1129+1 records in
1129+1 records out
3554287616 bytes transferred in 162.943189 secs (21813048 bytes/sec)

triton# sysctl vfs.read_max=128
vfs.read_max: 64 -> 128
triton# dd if=a.iso of=/dev/null bs=3M
1129+1 records in
1129+1 records out
3554287616 bytes transferred in 149.313994 secs (23804116 bytes/sec)

triton# sysctl vfs.read_max=256
vfs.read_max: 128 -> 256
triton# dd if=a.iso of=/dev/null bs=3M
1129+1 records in
1129+1 records out
3554287616 bytes transferred in 150.466241 secs (23621828 bytes/sec)

Here it seems to have hit a wall. Going down a bit to 192 gives almost exactly the same numbers, so 128 looks like the best value.

As I read it, vfs.read_max is the 'cluster read-ahead max block count'. Does it read the stuff ahead into some memory? If so, can that memory size be increased via sysctl? Does the improvement in performance have anything to do with my particular setup (gmirror+geli)?

I thought I'd share the results and maybe get a discussion going in this direction. The test was done on a pair of SATA300 HDs spinning at 7200rpm (which are seen as SATA150 by the OS for some reason; I couldn't fix it from the BIOS, so it must be the mobo), on 7.1-RELEASE, i386.

--
Regards,
Ghirai.