Date:      Tue, 19 Mar 2013 00:14:26 -0000
From:      "Steven Hartland" <killing@multiplay.co.uk>
To:        <davide.damico@contactlab.com>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: FreeBSD 9.1 and ZFS v28 performances
Message-ID:  <A3E2B710EF8342CCAF963106E3E747FB@multiplay.co.uk>
References:  <514729BD.2000608@contactlab.com> <810E5C08C2D149DBAC94E30678234995@multiplay.co.uk> <51473D1D.3050306@contactlab.com> <1DD6360145924BE0ABF2D0979287F5F4@multiplay.co.uk> <51474F2F.5040003@contactlab.com> <E106A7DB08744581A08C610BD8A86560@multiplay.co.uk> <51475267.1050204@contactlab.com> <514757DD.9030705@contactlab.com> <42B9D942BA134E16AFDDB564858CA007@multiplay.co.uk> <1bfdea0efb95a7e06554dadf703d58e7@sys.tomatointeractive.it> <897DB64CEBAF4F04AE9C76B3F686E497@multiplay.co.uk> <13317bbd289c4c828f134e2c2592a2d7@sys.tomatointeractive.it>

----- Original Message -----
From: "Davide D'Amico" <davide.damico@contactlab.com>
>>> And the result from sysbench:
>>> General statistics:
>>>     total time:                          82.9567s
>>>     total number of events:              1
>>>     total time taken by event execution: 82.9545s
>>
>> That's hardly doing any disk access at all, so it's odd that it would be
>> doubling your benchmark time.
>>
>>> Using a SSD:
>>> # iostat mfid2 -x 2
>>>        tty           mfid2             cpu
>>>  tin  tout  KB/t tps  MB/s  us ni sy in id
>>>    0    32 125.21  31  3.84   0  0  0  0 99
> [...]
>>>    0   585  0.00   0  0.00   3  0  1  0 96
>>>    0    22  4.00   0  0.00   0  0  0  0 100
>>> And the result from sysbench:
>>> General statistics:
>>>     total time:                          36.1146s
>>>     total number of events:              1
>>>     total time taken by event execution: 36.1123s
>>> That are the same results using SAS disks.
>>
>> So this is ZFS on the SSD, resulting in the same benchmark results as
>> UFS?
> This is UFS on SSD, which has the same behaviour as UFS on RAID10 HW
> on SAS drives.

I'd recommend doing the same test on the SSD with ZFS as well, as that would
give you a simple like-for-like comparison.
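
A minimal sketch of such a run, assuming the SSD is the mfid2 volume from
your UFS test and that it is no longer attached to DATA as the log device
(the exact sysbench fileio options aren't quoted in this thread, so the
ones below are only illustrative):

# zpool create -f SSDTEST mfid2
# cd /SSDTEST
# sysbench --test=fileio --file-total-size=8G prepare
# sysbench --test=fileio --file-total-size=8G --file-test-mode=rndrw run
# sysbench --test=fileio --file-total-size=8G cleanup
# zpool destroy SSDTEST

That keeps the controller, the disk and the benchmark identical and only
swaps the filesystem underneath.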

    Regards
    Steve

From owner-freebsd-fs@FreeBSD.ORG  Tue Mar 19 00:38:39 2013
Date:      Tue, 19 Mar 2013 01:38:37 +0100
From:      Damien Fleuriot <ml@my.gd>
To:        kpneal@pobox.com
Cc:        freebsd-fs@freebsd.org, Davide D'Amico <davide.damico@contactlab.com>
Subject:   Re: FreeBSD 9.1 and ZFS v28 performances
Message-ID:  <CAE63ME5P7YNz431Se1izeCHAz8sMFJe0gSjSH5ehpT=ov-cQJg@mail.gmail.com>
In-Reply-To: <20130318163833.GA11916@neutralgood.org>
References:  <514729BD.2000608@contactlab.com> <810E5C08C2D149DBAC94E30678234995@multiplay.co.uk> <20130318163833.GA11916@neutralgood.org>

On Mar 18, 2013 5:39 PM, <kpneal@pobox.com> wrote:
>
> On Mon, Mar 18, 2013 at 03:31:51PM -0000, Steven Hartland wrote:
> >
> > ----- Original Message -----
> > From: "Davide D'Amico" <davide.damico@contactlab.com>
> > To: <freebsd-fs@freebsd.org>
> > Sent: Monday, March 18, 2013 2:50 PM
> > Subject: FreeBSD 9.1 and ZFS v28 performances
> >
> >
> > > Hi all,
> > > I'm trying to use ZFS on a DELL R720 with 2x6-core, 32GB ram, H710
> > > controller (no JBOD) and 15K rpm SAS HD: I will use it for a mysql 5.6
> > > server, so I am trying to use ZFS to get L2ARC and ZIL benefits.
> > >
> > > I created a RAID10 and used zpool to create a pool on top:
> > >
> > > # zpool create DATA mfid3
> > > # zpool add DATA cache mfid1 log mfid2
> > >
> > > I have a question on zfs performances. Using:
> > >
> > > dd if=/dev/zero of=file.out bs=16k count=1M
> > >
> > > I cannot go faster than 400MB/s so I think I'm missing something; I
> > > tried removing zil, removing l2arc but everything is still the same.
>
> The ZIL only helps with synchronous writes. This is something apps typically
> must request specifically, and I would guess that dd does not do that.
> So the ZIL doesn't affect your test.
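
One way to make a dd test like the one above actually hit the ZIL, assuming
the pool layout quoted earlier, is to force synchronous semantics for the
duration of the run:

# zfs set sync=always DATA
# dd if=/dev/zero of=/DATA/file.out bs=16k count=1M
# zfs set sync=standard DATA

With sync=always every write is pushed through the log device, so the
separate ZIL (and its speed) starts to matter; with the default
sync=standard a plain dd stays asynchronous.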
>
> The L2ARC is a read cache. It does very little for writes. If the ZFS
> cache working set fits entirely in memory then the L2ARC does nothing for
> you. Since you are writing, the only thing needed from the ARC is metadata.
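
Whether the L2ARC is contributing anything during the benchmark is easy to
check from the ARC kstats (standard FreeBSD sysctl names):

# sysctl kstat.zfs.misc.arcstats.l2_hits
# sysctl kstat.zfs.misc.arcstats.l2_misses
# sysctl kstat.zfs.misc.arcstats.l2_size

If l2_hits stays at zero while the test runs, the cache device is idle.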
>
> > > mfiutil show volumes:
> > > mfi0 Volumes:
> > >   Id     Size    Level   Stripe  State   Cache   Name
> > >  mfid0 (  278G) RAID-1      64k OPTIMAL Disabled <BASE>
> > >  mfid1 (  118G) RAID-0      64k OPTIMAL Disabled <L2ARC0>
> > >  mfid2 (  118G) RAID-0      64k OPTIMAL Disabled <ZIL0>
> > >  mfid3 ( 1116G) RAID-10   64k OPTIMAL Disabled <DATA>
> > >
> > > zpool status:
> > >   pool: DATA
> > >   state: ONLINE
> > >   scan: none requested
> > > config:
> > >
> > > NAME        STATE     READ WRITE CKSUM
> > > DATA        ONLINE       0     0     0
> > >   mfid3     ONLINE       0     0     0
> > > logs
> > >   mfid2     ONLINE       0     0     0
> > > cache
> > >   mfid1     ONLINE       0     0     0
>
> Warning: your ZIL should probably be mirrored. If it isn't, and the drive
> fails, AND your machine takes a sudden dive (kernel panic, power outage,
> etc) then you will lose data.
>

How so?
Unless he loses the ZIL device itself, I can't see how he'd lose pending
transactions.
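
For what it's worth, turning the existing log into a mirror is a single
command once a second device is available (mfid4 below is purely
hypothetical):

# zpool attach DATA mfid2 mfid4

After that, losing one log device no longer puts the last few seconds of
synchronous writes at risk.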


> > DATA  primarycache          metadata               local
> > DATA  secondarycache        all                    default
>
> Is there a specific reason that you are making a point of not putting
> regular data in the ARC? If you do that, then reads of data will look in
> the L2ARC, which is a normal 15k drive, before hitting the main pool
> drives, which also consist of normal 15k drives. Adding an extra set of
> spinning rust before accessing your spinning rust doesn't sound helpful.
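
Undoing that is just a property change on the dataset shown above:

# zfs set primarycache=all DATA
# zfs get primarycache,secondarycache DATA

With primarycache=all the hot data blocks stay in RAM and only what falls
out of the ARC spills over to the L2ARC device.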
>
> > HEAD has some significant changes for the mfi driver specifically:-
> > http://svnweb.freebsd.org/base?view=revision&revision=247369
> >
> > This fixes lots of bugs but also enables full queue support on TBOLT
> > cards, so if your mfi is a TBOLT card you may see some speed-up in
> > random IO, not that this would affect your test here.
>
> I believe the H710 is a TBOLT card. It was released with the 12G servers
> like the R720.
>

That's a negatory; we've got R[4-7]10 servers here with H710 RAID cards.




> I don't believe the OP mentioned how many drives are in the RAID10. More
> drives ~== more parallelism ~== better performance. So I too am wondering
> how much performance is expected.
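
The drive count is visible from the controller itself; the same mfiutil
tool whose volume listing is quoted above can list the physical drives
behind each array:

# mfiutil show config
# mfiutil show drives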
>
> > While having a separate ZIL disk is good, your benefits may well be
> > limited if said disk is a traditional HD; better to look at enterprise
> > SSDs for this. The same and then some applies to your L2ARC disks.
>
> Before purchasing SSDs, check the H710 docs to make sure they are allowed.
> The 6/i in my R610 specifically says that if an SSD is used it must be the
> only drive. Your R720's H710 is much newer and thus may not have that
> restriction. Still, checking the documentation is cheap.
>
> --
> Kevin P. Neal                                http://www.pobox.com/~kpn/
>
> "It sounded pretty good, but it's hard to tell how it will work out
> in practice." -- Dennis Ritchie, ~1977, "Summary of a DEC 32-bit machine"
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"


