Date:      Fri, 23 Jul 2010 12:42:28 +0100
From:      John Hawkes-Reed <hirez@libeljournal.com>
To:        freebsd-stable@freebsd.org
Cc:        Dan Langille <dan@langille.org>
Subject:   Re: Using GPT and glabel for ZFS arrays
Message-ID:  <4C498024.7050106@libeljournal.com>
In-Reply-To: <4C48E695.6030602@langille.org>
References:  <4C47B57F.5020309@langille.org> <4C48E695.6030602@langille.org>

Dan Langille wrote:
> Thanks to everyone for the helpful discussion; it's been very 
> educational. Based on the advice and suggestions, I'm going to adjust 
> my original plan as follows.

[ ... ]

Since I still have the medium-sized ZFS array on the bench, testing this 
GPT setup seemed like a good idea.

The hardware's a Supermicro X8DTL-iF motherboard + 12 GB of memory, 
2x 5502 Xeons, 3x Supermicro USASLP-L8I 3 Gb/s SAS controllers and 
24x Hitachi 2 TB drives.

Partitioning the drives with the command line
gpart add -s 1800G -t freebsd-zfs -l disk00 da0[1] gave the following 
results with bonnie-64 (Bonnie -r -s 5000|20000|50000)[2]:

    -------Sequential Output-------- ---Sequential Input-- --Random--
    -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
GB M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU  /sec %CPU
  5  97.7 92.8 387.2 40.1 341.8 45.7 178.7 81.6 972.4 54.7   335  1.5
20  98.0 87.0 434.9 45.2 320.9 42.5 141.4 87.4 758.0 53.5   178  1.6
50  98.0 92.0 435.7 46.0 325.4 44.7 143.4 93.1 788.6 57.1   140  1.5
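
For completeness, here's roughly how that looks scripted across the 
whole array. The loop, the pool name 'tank' and the single raidz2 vdev 
are illustrative guesses on my part, not the exact commands from this 
build:

  #!/bin/sh
  # Label all 24 drives the same way (da0-da23 -> gpt/disk00-disk23).
  # Assumes the drives carry no existing partition table; gpart create
  # will complain if one is already present.
  i=0
  while [ $i -lt 24 ]; do
      gpart create -s gpt da$i
      gpart add -s 1800G -t freebsd-zfs -l $(printf 'disk%02d' $i) da$i
      i=$((i + 1))
  done
  # Build the pool from the stable /dev/gpt/* label names rather than
  # the raw daN device nodes.
  zpool create tank raidz2 /dev/gpt/disk*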


Repartitioning with
gpart add -b 1024 -s 1800G -t freebsd-zfs -l disk00 da0[1] gave the 
following:

    -------Sequential Output-------- ---Sequential Input-- --Random--
    -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
GB M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU  /sec %CPU
  5  97.8 93.4 424.5 45.4 338.4 46.1 180.0 93.9 934.9 57.8   308  1.5
20  97.6 91.7 448.4 49.2 338.5 45.9 176.1 91.8 914.7 57.3   180  1.3
50  96.3 90.3 452.8 47.6 330.9 44.7 174.8 74.5 917.9 53.6   134  1.2

... So it would seem that taking the trouble to align the partitions 
does make a difference.
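
(To redo a single drive with the aligned layout and check the result, 
something like the following should work; it assumes the freebsd-zfs 
partition is index 1 and the disk isn't part of a live pool:)

  gpart delete -i 1 da0
  # -b 1024 starts the partition at sector 1024, i.e. 512 KiB in with
  # 512-byte sectors, which is a multiple of 4 KiB and the common
  # stripe sizes.
  gpart add -b 1024 -s 1800G -t freebsd-zfs -l disk00 da0
  gpart show da0   # the freebsd-zfs entry should now start at LBA 1024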


For an apples/oranges comparison, here's the output from the other box 
we built. The hardware's more or less the same, apart from the drive 
controller (an Areca-1280), and the OS was Solaris 10.latest:

    -------Sequential Output-------- ---Sequential Input-- --Random--
    -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
GB M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU  /sec %CPU
  5 116.8 75.0 524.7 65.4 156.3 20.3 161.6 99.2 2924.0 100.0 199999 300.0
20 139.9 95.4 503.5 51.7 106.6 13.4  97.6 62.0 133.0  8.8   346  4.2
50 147.4 95.8 465.8 50.1 106.1 13.5  97.9 62.5 143.8  8.7   195  4.1




[1] da0 - da23, obviously.
[2] Our working assumption here is that the first test is likely just 
stressing the memory bandwidth and the ZFS cache.
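
(One way to sanity-check that on the FreeBSD box would be to watch the 
ARC while the 5 GB run executes, e.g.

  sysctl kstat.zfs.misc.arcstats.size   # current ARC size in bytes

and see whether the whole working set fits in it.)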

-- 
JH-R


