Date:      Mon, 18 Mar 2013 20:13:47 +0100
From:      Davide D'Amico <davide.damico@contactlab.com>
To:        Steven Hartland <killing@multiplay.co.uk>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: FreeBSD 9.1 and ZFS v28 performance
Message-ID:  <1bfdea0efb95a7e06554dadf703d58e7@sys.tomatointeractive.it>
In-Reply-To: <42B9D942BA134E16AFDDB564858CA007@multiplay.co.uk>
References:  <514729BD.2000608@contactlab.com> <810E5C08C2D149DBAC94E30678234995@multiplay.co.uk> <51473D1D.3050306@contactlab.com> <1DD6360145924BE0ABF2D0979287F5F4@multiplay.co.uk> <51474F2F.5040003@contactlab.com> <E106A7DB08744581A08C610BD8A86560@multiplay.co.uk> <51475267.1050204@contactlab.com> <514757DD.9030705@contactlab.com> <42B9D942BA134E16AFDDB564858CA007@multiplay.co.uk>


> How does ZFS compare if you do it on 1 SSD, as per your
> second UFS test? I'm wondering whether the mfi cache is
> kicking in.

Well, it was a test :)

The MFI cache is enabled because I am using the mfid* devices as JBOD
(mfiutil create jbod mfid3 mfid4 mfid5 mfid6).
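
If the controller cache is the suspect here, mfiutil can show and toggle it
per volume. A minimal check (the volume name is taken from the jbod command
above):

# mfiutil cache mfid3            # show the current cache policy
# mfiutil cache mfid3 disable    # disable controller caching for a re-run
# mfiutil cache mfid3 enable     # restore it afterwards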

> 
> While running the tests, what sort of thing are you
> seeing from gstat? Any disks maxing out? If so,
> primarily read or write?
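
For per-disk saturation, gstat filtered to the pool members would show %busy
and the read/write split directly; a sketch (the provider names are
assumptions based on the jbod command above):

# gstat -p -f 'mfid[3-6]'
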
Here is the r/w pattern from zpool iostat 2:

               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
DATA        52.2G  1.03T    102      0  1.60M      0
DATA        52.2G  1.03T      7    105   128K   674K
DATA        52.2G  1.03T     40      0   655K      0
DATA        52.2G  1.03T     16      0   264K      0
DATA        52.2G  1.03T      7    154   120K   991K
DATA        52.2G  1.03T    125      0  1.95M      0
DATA        52.2G  1.03T     44    117   711K   718K
DATA        52.2G  1.03T     63      0  1015K      0
DATA        52.2G  1.03T     39      0   631K      0
DATA        52.2G  1.03T      1    152  24.0K  1006K
DATA        52.2G  1.03T      9      0   152K      0
DATA        52.2G  1.03T      2    100  40.0K   571K
DATA        52.2G  1.03T     41      0   663K      0
DATA        52.2G  1.03T     41      0   658K  89.9K
DATA        52.2G  1.03T      1    114  24.0K   741K
DATA        52.2G  1.03T      0      0      0      0
DATA        52.2G  1.03T      2    155  40.0K   977K
DATA        52.2G  1.03T      3      0  63.9K      0
DATA        52.2G  1.03T     28      0   456K      0
DATA        52.2G  1.03T     98    125  1.49M   863K
DATA        52.2G  1.03T    122      0  1.89M      0
DATA        52.2G  1.03T     70    123  1.10M   841K
DATA        52.2G  1.03T     21      0   352K      0
DATA        52.2G  1.03T      1      0  24.0K      0
DATA        52.2G  1.03T     10    160   168K  1.06M
DATA        52.2G  1.03T      6      0   112K      0
DATA        52.2G  1.03T      0    126  7.99K   908K
DATA        52.2G  1.03T     50      0   807K      0
DATA        52.2G  1.03T     19      0   320K  97.9K
DATA        52.2G  1.03T      4    122  66.9K   862K
DATA        52.2G  1.03T      6      0   104K      0
DATA        52.2G  1.03T      0    164      0  1.06M
DATA        52.2G  1.03T    128      0  2.01M      0
DATA        52.2G  1.03T      0      0      0      0
DATA        52.2G  1.03T      0    106      0   649K
DATA        52.2G  1.03T      5      0  95.9K      0
DATA        52.2G  1.03T      8    114   144K   711K
DATA        52.2G  1.03T     40      0   655K      0
DATA        52.2G  1.03T     47      0   759K      0
DATA        52.2G  1.03T     13     96   216K   551K
DATA        52.2G  1.03T      2      0  40.0K      0
DATA        52.2G  1.03T      0     97      0   402K

And the result from sysbench:
General statistics:
     total time:                          82.9567s
     total number of events:              1
     total time taken by event execution: 82.9545s
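
The exact sysbench invocation isn't repeated in this message; a typical
fileio run of this shape (the file size and test mode below are assumptions)
looks like:

# sysbench --test=fileio --file-total-size=8G --file-test-mode=rndrw prepare
# sysbench --test=fileio --file-total-size=8G --file-test-mode=rndrw run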


Using a single SSD:
# iostat mfid2 -x 2
        tty           mfid2             cpu
  tin  tout  KB/t tps  MB/s  us ni sy in id
    0    32 125.21  31  3.84   0  0  0  0 99
    0   170  0.00   0  0.00   1  0  0  0 99
    0    22  0.00   0  0.00   3  0  2  0 96
    0    22  0.00   0  0.00   3  0  1  0 96
    0    22 32.00   2  0.08   3  0  1  0 96
    0    22 32.00   0  0.02   3  0  1  0 96
    0    22  4.00   0  0.00   3  0  1  0 96
    0    22  0.00   0  0.00   3  0  1  0 96
    0    22  0.00   0  0.00   3  0  2  0 96
    0    22  0.00   0  0.00   3  0  1  0 96
    0    22  0.00   0  0.00   3  0  1  0 96
    0    22  0.00   0  0.00   3  0  1  0 96
    0    22  0.00   0  0.00   3  0  1  0 96
    0    22  0.00   0  0.00   3  0  1  0 96
    0    22  0.00   0  0.00   3  0  1  0 96
    0    22  0.00   0  0.00   3  0  2  0 96
    0    22 44.80  67  2.95   3  0  1  0 96
    0    22 87.58   9  0.81   3  0  2  0 96
    0    22 32.00   3  0.09   2  0  2  0 96
    0   585  0.00   0  0.00   3  0  1  0 96
    0    22  4.00   0  0.00   0  0  0  0 100

And the result from sysbench:
General statistics:
     total time:                          36.1146s
     total number of events:              1
     total time taken by event execution: 36.1123s

Those are the same results I get using SAS disks.

d.


