Date:      Mon, 25 Jan 2010 03:58:48 -0500
From:      Thomas Burgess <wonslung@gmail.com>
To:        Dan Naumov <dan.naumov@gmail.com>
Cc:        Jason Edwards <sub.mesa@gmail.com>, FreeBSD-STABLE Mailing List <freebsd-stable@freebsd.org>, Alexander Motin <mav@freebsd.org>, freebsd-fs@freebsd.org, freebsd-questions@freebsd.org
Subject:   Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk  performance
Message-ID:  <deb820501001250058ye0b798ayeccb2583a08558dd@mail.gmail.com>
In-Reply-To: <cf9b1ee01001250032s24bf9f55r7f83d88d0ce03645@mail.gmail.com>
References:  <883b2dc51001240905r4cfbf830i3b9b400969ac261b@mail.gmail.com> <1264368182.00211075.1264355402@10.7.7.3> <4B5CC167.5010604@FreeBSD.org> <cf9b1ee01001241614x2ccb818at7631a58cfb143153@mail.gmail.com> <alpine.GSO.2.01.1001242315000.17824@freddy.simplesystems.org> <cf9b1ee01001242334x75cf0a2ajcf5fe83aa88c4983@mail.gmail.com> <cf9b1ee01001250032s24bf9f55r7f83d88d0ce03645@mail.gmail.com>

It depends on the bandwidth of the bus that it is on and the controller
itself.

I like to use PCI-X with AOC-SAT2-MV8 cards, or PCI-E cards... that way you
get a lot more bandwidth.
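
For rough comparison, assuming the Sil3124 discussed below really does sit
on a plain 32-bit/33 MHz PCI slot: that bus tops out at about 133 MB/s
theoretical, shared by every device on it, while a 64-bit/133 MHz PCI-X
slot is good for roughly 1 GB/s and even a single PCI-E 1.0 lane gives
about 250 MB/s per direction. Moving the controller off plain PCI raises
the ceiling by a large factor.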

On Mon, Jan 25, 2010 at 3:32 AM, Dan Naumov <dan.naumov@gmail.com> wrote:

> On Mon, Jan 25, 2010 at 9:34 AM, Dan Naumov <dan.naumov@gmail.com> wrote:
> > On Mon, Jan 25, 2010 at 7:33 AM, Bob Friesenhahn
> > <bfriesen@simple.dallas.tx.us> wrote:
> >> On Mon, 25 Jan 2010, Dan Naumov wrote:
> >>>
> >>> I've checked with the manufacturer and it seems that the Sil3124 in
> >>> this NAS is indeed a PCI card. More info on the card in question is
> >>> available at
> >>> http://green-pcs.co.uk/2009/01/28/tranquil-bbs2-those-pci-cards/
> >>> I have the card described later on the page, the one with 4 SATA ports
> >>> and no eSATA. Alright, so it being PCI is probably a bottleneck in
> >>> some ways, but that still doesn't explain performance being THAT bad,
> >>> considering that the same hardware, same disks and same disk controller
> >>> push over 65 MB/s in both reads and writes in Win2008. And again, I am
> >>> pretty sure that I had "close to expected" results when I was
> >>
> >> The slow PCI bus and this card look like the bottleneck to me. Remember
> >> that your Win2008 tests were with just one disk, your zfs performance
> >> with just one disk was similar to Win2008, and your zfs performance
> >> with a mirror was just under 1/2 that.
> >>
> >> I don't think that your performance results are necessarily out of line
> >> for the hardware you are using.
> >>
> >> On an old Sun SPARC workstation with retrofitted 15K RPM drives on an
> >> Ultra-160 SCSI channel, I see a zfs mirror write performance of
> >> 67,317 KB/second and a read performance of 124,347 KB/second. The drives
> >> themselves are capable of performance in the 100 MB/second range.
> >> Similar to yourself, I see 1/2 the write performance due to bandwidth
> >> limitations.
> >>
> >> Bob
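
(Rough arithmetic behind the halving argument, assuming 28 MB/s is the
mirrored write figure and 50 MB/s the read figure: on a shared PCI bus good
for roughly 100 MB/s in practice, a two-way mirror write sends every block
to both disks, so ~28 MB/s of useful writes costs ~56 MB/s of bus traffic
before command overhead, while reads need only one copy per block and can
get closer to the bus limit. The SPARC numbers above show the same roughly
2:1 read-to-write ratio on a shared Ultra-160 channel.)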
> >
> > There is lots of very sweet irony in my particular situation.
> > Initially I was planning to use a single X25-M 80 GB SSD on the
> > motherboard SATA port for the actual OS installation, as well as to
> > dedicate 50 GB of it to become a designated L2ARC vdev for my ZFS
> > mirrors. The SSD attached to the motherboard port would be recognized
> > only as a SATA150 device for some reason, but I was still seeing
> > 150 MB/s throughput and sub-0.1 ms latencies on that disk, simply
> > because of how crazy good the X25-M's are. However, I ended up having
> > very bad issues with the Icydock 2.5" to 3.5" converter bracket I was
> > using to fit the SSD in the system: it would randomly drop write IO
> > under heavy load due to bad connectors. Having finally figured out why
> > my OS installations on the SSD kept going belly up while applying
> > updates, I decided to move the SSD to my desktop and use it there
> > instead, thinking that perhaps the SSD idea was crazy overkill for
> > what I need the system to do. Ironically, now that I am seeing how
> > horrible the performance is when operating on the mirror through this
> > PCI card, I realize that my idea was actually pretty bloody brilliant;
> > I just didn't really know why at the time.
> >
> > An L2ARC device on the motherboard port would really help me with
> > random read IO, but to work around the utterly poor write performance
> > I would also need a dedicated SLOG (separate ZIL) device. The catch is
> > that while L2ARC devices can be removed from a pool at will (should
> > the device up and die all of a sudden), dedicated ZIL devices cannot,
> > and currently a "missing" ZIL device will render the pool it belongs
> > to unable to import, making it inaccessible. There is some work
> > happening in Solaris to implement removing SLOGs from a pool, but that
> > work hasn't found its way into FreeBSD yet.
> >
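A minimal sketch of the asymmetry Dan describes, using hypothetical pool
and device names (tank, ada1, ada2); the zpool(8) commands themselves are
standard:

  # a cache (L2ARC) vdev can be added and later removed without harming the pool
  zpool add tank cache ada1
  zpool remove tank ada1

  # a dedicated log (SLOG) vdev is added the same way...
  zpool add tank log ada2
  # ...but on the pool versions shipped with FreeBSD 8.0 it cannot be removed,
  # and if that device later goes missing the pool will refuse to import.
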
> >
> > - Sincerely,
> > Dan Naumov
>
> OK, final question: if/when I go about adding more disks to the system
> and want redundancy, am I right in thinking that a ZFS pool made of a
> disk1+disk2 mirror plus a disk3+disk4 mirror (a la RAID10) would
> completely murder my write and read performance, pushing it even further
> below the current 28 MB/s / 50 MB/s I am seeing with 2 disks on that PCI
> controller, and that in order to have the least negative impact, I should
> simply run 2 independent mirrors in 2 independent pools (with the 5th
> disk slot in the NAS given to a non-redundant single disk running off the
> one available SATA port on the motherboard)?
>
> - Sincerely,
> Dan Naumov
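
For reference, a rough sketch of the two layouts being compared, again
with hypothetical pool and device names (the zpool syntax is standard, the
names are not):

  # one pool striped across two mirrors (RAID10-style):
  zpool create tank mirror ada1 ada2 mirror ada3 ada4

  # versus two independent two-disk mirrors in two independent pools:
  zpool create tank1 mirror ada1 ada2
  zpool create tank2 mirror ada3 ada4

Either way all four disks still sit behind the same shared PCI bus, so the
bus ceiling applies to their combined traffic; splitting into two pools
mostly isolates the workloads rather than buying back bandwidth.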


