Date:      Tue, 25 Feb 2014 11:16:23 -0800
From:      John-Mark Gurney <jmg@funkthat.com>
To:        Dmitry Sivachenko <trtrmitya@gmail.com>
Cc:        stable@freebsd.org, d@delphij.net
Subject:   Re: fsck dumps core
Message-ID:  <20140225191623.GR92037@funkthat.com>
In-Reply-To: <206E2401-F263-4D50-9E99-F7603828E206@gmail.com>
References:  <417919B7-C4D7-4003-9A71-64C4C9E73678@gmail.com> <530BC062.8070800@delphij.net> <206E2401-F263-4D50-9E99-F7603828E206@gmail.com>

Dmitry Sivachenko wrote this message on Tue, Feb 25, 2014 at 15:13 +0400:
> It is always the same story: I was looking for a software replacement for a DELL PERC raid controller, so I am testing different raidz variants.
> With low load, it is OK.
> Under heavy write load, after it eats all free RAM for ARC, the writing process gets stuck in the zio->i state, write performance drops to a few MB/sec
> (with 15-20 disks in raidz), and it takes dozens of seconds even to spawn a login shell.

Well, if you mean a single raidz w/ 15-20 disks, then of course your
performance will be bad, but I assume that you're doing 3-4 sets of
5-disk raidz, or maybe even 5-7 sets of 3-disk raidz...
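For example, the same 15 disks laid out as three 5-disk raidz vdevs in one
pool would look something like this (device names are illustrative, not
from your system):

```shell
# Three 5-disk raidz vdevs in a single pool.  Random IOPS scales with
# the number of vdevs, so this layout beats one wide 15-disk raidz for
# random I/O, at the cost of more disks spent on parity.
zpool create tank \
    raidz da0 da1 da2 da3 da4 \
    raidz da5 da6 da7 da8 da9 \
    raidz da10 da11 da12 da13 da14
```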

I'm sure you found this and know this, but...

I can't find the link right now, but each raidz vdev effectively becomes
"one disk" for random I/O: a vdev is only as fast as its slowest disk,
and you then only have x vdevs' worth of "disks"...  So, if you are using
7200RPM SATA drives w/ ~150 IOPS each, and only use one or two vdevs,
your perf will suck compared to the same RAID5 system, which has 3-5x
the IOPS...  Also, depending on your sync workload (e.g. NFS), adding an
SSD ZIL can be a big improvement...
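A rough sketch of that arithmetic (the 150-IOPS figure is the 7200RPM
estimate above; the one-disk-per-vdev rule is an approximation, not an
exact model):

```python
# Back-of-envelope estimate: each raidz vdev delivers roughly the random
# IOPS of its slowest member disk, so the pool scales with the number of
# vdevs, not the number of disks.

DISK_IOPS = 150  # typical 7200RPM SATA drive

def pool_random_iops(num_vdevs: int, slowest_disk_iops: int = DISK_IOPS) -> int:
    """Approximate random IOPS for a pool of raidz vdevs."""
    return num_vdevs * slowest_disk_iops

# 15 disks as one wide raidz vs. three 5-disk raidz vdevs:
print(pool_random_iops(1))  # ~150 random IOPS
print(pool_random_iops(3))  # ~450 random IOPS
```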

> These ZFS problems are heavily documented in mailing lists, time goes and nothing changes.

ZFS's raidz should be compared w/ raid3, not raid5, if you want to do
a more realistic comparison between fs's...

> avg@ states  "Empirical/anecdotal safe limit on pool utilization is said to be about 70-80%." -- isn't it too much price for fsck-less FS? :)
> http://markmail.org/message/mtws224umcy5afsa#query:+page:1+mid:xkcr53ll3ovcme5f+state:results

Even Solaris's ZFS guide says that:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#Storage_Pool_Performance_Considerations

> (my problems arise regardless of pool usage, even on almost empty partition).
> 
> So either I can't cook it (yes, I spent a lot of time reading FreeBSD's ZFS wiki and trying different settings), or ZFS is suitable only for low-load scenarios like root/var/home on zfs.

I know others are running high IOPS on ZFS... so, not sure what to say..

-- 
  John-Mark Gurney				Voice: +1 415 225 5579

     "All that I will do, has been done, All that I have, has not."


