From: Willem Jan Withagen <wjw@digiware.nl>
To: fs@freebsd.org
Date: Fri, 20 Jan 2012 11:10:24 +0100
Subject: Question about ZFS with log and cache on SSD with GPT
Message-ID: <4F193D90.9020703@digiware.nl>

Hi,

I need to run a too-big MySQL database on a too-small development server, so I need to tweak what I have there....

The CPU (4-core HT Xeon @ 3 GHz) is more than powerful enough, since the query rate is low, but the amount of data is huge (50 GB). Memory (16 GB) could be better, but all slots are full. The server is not really swapping.

Now my question is more about the SSD configuration. (BTW: adding one SSD got the insert rate up from 100/sec to over 1000/sec, once the cache was loaded.)

The database is on a mirror of two 1 TB disks:

  ada0: ATA-8 SATA 3.x device
  ada0: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
  ada0: Command Queueing enabled

and there are two SSDs:

  ada2: ATA-8 SATA 2.x device
  ada2: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
  ada2: Command Queueing enabled

What I've currently done is partition all disks (the SSDs too) with GPT, like below:

batman# zpool iostat -v
                   capacity     operations    bandwidth
pool            alloc   free   read  write   read  write
-------------   -----  -----  -----  -----  -----  -----
zfsboot         50.0G  49.5G      1     13  46.0K   164K
  mirror        50.0G  49.5G      1     13  46.0K   164K
    gpt/boot4       -      -      0      5  23.0K   164K
    gpt/boot6       -      -      0      5  22.9K   164K
-------------   -----  -----  -----  -----  -----  -----
zfsdata         59.4G   765G     12     62   250K  1.30M
  mirror        59.4G   765G     12     62   250K  1.30M
    gpt/data4       -      -      5     15   127K  1.30M
    gpt/data6       -      -      5     15   127K  1.30M
logs                -      -      -      -      -      -
  gpt/log2        11M  1005M      0     22     12   653K
  gpt/log3      11.1M  1005M      0     22     12   652K
cache               -      -      -      -      -      -
  gpt/cache2    9.99G  26.3G     27     53  1.20M  5.30M
  gpt/cache3    9.85G  26.4G     28     54  1.24M  5.23M
-------------   -----  -----  -----  -----  -----  -----

The 4 and 6 in the labels are naming remnants from pre-AHCI times; those disks are now ada0 and ada1. So the hard disks have the "std" ZFS setup: a boot pool and a data pool.
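(For what it's worth, what the drives themselves report can be checked with diskinfo from the base system, e.g.:

  batman# diskinfo -v ada2

The sectorsize line there, and stripesize if your diskinfo shows it, is what I'd trust least on SSDs, since many of them just report 512 bytes regardless of the real flash page size.)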
The SSDs are partitioned and assigned to zfsdata with:

  gpart create -s GPT ada2
  gpart create -s GPT ada3
  gpart add -t freebsd-zfs -l log2 -s 1G ada2
  gpart add -t freebsd-zfs -l log3 -s 1G ada3
  gpart add -t freebsd-zfs -l cache2 ada2
  gpart add -t freebsd-zfs -l cache3 ada3
  zpool add zfsdata log /dev/gpt/log*
  zpool add zfsdata cache /dev/gpt/cache*

Now the question is: are the GPT partitions correctly aligned for optimal performance? The hard disks still have standard 512-byte sectors, so those should be alright? About the SSDs I have my doubts.....

The good thing is that v28 allows you to toy with log and cache devices without losing data, so I could redo the creation of cache and log relatively easily. I'd rather not redo the DB build, since that takes a few days. :(

Before loading the DB I did use some of the common tuning suggestions, like using different recordsizes for the db-logs and the InnoDB files.

Does anybody have suggestions and/or experience with this?

Thanx,
--WjW
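PS: To make the alignment question concrete: gpart can show where the partitions actually start, in 512-byte sectors:

  batman# gpart show ada2

If I remember right, without an explicit start offset gpart puts the first partition at the first free sector of a fresh GPT, which is sector 34. That is not a multiple of 8, so a 4 KiB flash page would straddle sectors. So I guess what I should be checking for is a start offset that is a multiple of 8, or a generous 2048 (= 1 MiB).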
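And since v28 lets me pull log and cache vdevs out again, redoing one SSD with aligned partitions should be something like the following sketch. Not tested yet; the -b 2048 (1 MiB) start is just an aligned offset I picked, and the 1G log size keeps the cache partition behind it aligned as well:

  # detach the SSD parts from the pool (the data stays on the mirror)
  zpool remove zfsdata gpt/log2 gpt/cache2
  # recreate the partitions on aligned boundaries
  gpart delete -i 1 ada2
  gpart delete -i 2 ada2
  gpart add -t freebsd-zfs -l log2 -b 2048 -s 1G ada2
  gpart add -t freebsd-zfs -l cache2 ada2
  # and hook them up again
  zpool add zfsdata log /dev/gpt/log2
  zpool add zfsdata cache /dev/gpt/cache2

(Same story for ada3 with log3/cache3.) I've also seen the gnop trick mentioned (gnop create -S 4096 /dev/gpt/log2, then add the resulting .nop device) to force ashift=12 on the new vdev, but I don't know whether that matters much for a slog.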
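For completeness, the recordsize tuning I mentioned was along these lines. The dataset names are made up for the example; the point is matching InnoDB's 16 KiB page size on the data files and leaving the sequentially written logs at the default:

  zfs create zfsdata/db
  zfs set recordsize=16k zfsdata/db          # InnoDB data files use 16 KiB pages
  zfs create zfsdata/db-logs
  zfs set recordsize=128k zfsdata/db-logs    # logs are sequential; the 128K default is fine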