Date:      Thu, 7 Mar 2013 18:21:45 +1100
From:      Peter Jeremy <peter@rulingia.com>
To:        Karl Denninger <karl@denninger.net>
Cc:        freebsd-stable@freebsd.org
Subject:   Re: ZFS "stalls" -- and maybe we should be talking about defaults?
Message-ID:  <20130307072145.GA2923@server.rulingia.com>
In-Reply-To: <513524B2.6020600@denninger.net>
References:  <513524B2.6020600@denninger.net>

On 2013-Mar-04 16:48:18 -0600, Karl Denninger <karl@denninger.net> wrote:
>The subject machine in question has 12GB of RAM and dual Xeon
>5500-series processors.  It also has an ARECA 1680ix in it with 2GB of
>local cache and the BBU for it.  The ZFS spindles are all exported as
>JBOD drives.  I set up four disks under GPT, added a single freebsd-zfs
>partition to each, labeled them, and the providers are then
>geli-encrypted and added to the pool.

What sort of disks?  SAS or SATA?
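
For reference, the sequence you describe would look roughly like this (a
sketch only; the device names, labels and the raidz layout are my guesses,
not taken from your message):

  # repeat for each disk (da0..da3 assumed)
  gpart create -s gpt da0
  gpart add -t freebsd-zfs -l disk0 da0
  geli init -s 4096 /dev/gpt/disk0      # pick your own geli options
  geli attach /dev/gpt/disk0
  # then build the pool from the .eli providers
  zpool create tank raidz gpt/disk0.eli gpt/disk1.eli \
      gpt/disk2.eli gpt/disk3.eli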

>also known good.  I began to get EXTENDED stalls with zero I/O going on,
>some lasting for 30 seconds or so.  The system was not frozen but
>anything that touched I/O would lock until it cleared.  Dedup is off,
>incidentally.

When the system has stalled (some commands for gathering this information
are sketched after the list):
- Do you see very low free memory?
- What happens to the various CPU utilisation figures?  Do they
  all go to zero?  Do you get high system or interrupt CPU usage
  (including usage pinned at one core's worth)?
- What happens to interrupt load?  Do you see any disk controller
  interrupts?
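
A rough way to capture that while a stall is in progress (nothing here is
specific to your setup; run from another session or the console):

  top -SH                  # per-thread view; note the STATE of stuck threads
  vmstat -i                # interrupt counters; run twice to see if the HBA
                           # is still interrupting
  sysctl vm.stats.vm.v_free_count       # free pages
  gstat -a                 # per-provider I/O; shows whether requests are
                           # reaching the disks at all
  procstat -kk -a > /tmp/stacks.txt     # kernel stacks; shows what blocked
                                        # I/O is waiting on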

Would you be able to build a kernel with WITNESS (and WITNESS_SKIPSPIN)
and see if you get any errors when the stalls happen?
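
Something along these lines should do it (WITNESS_TEST is just a
placeholder name; adjust the architecture directory and KERNCONF to suit):

  # /usr/src/sys/amd64/conf/WITNESS_TEST
  include GENERIC
  ident   WITNESS_TEST
  options WITNESS
  options WITNESS_SKIPSPIN

  cd /usr/src
  make buildkernel KERNCONF=WITNESS_TEST
  make installkernel KERNCONF=WITNESS_TEST
  shutdown -r now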

On 2013-Mar-05 14:09:36 -0800, Jeremy Chadwick <jdc@koitsu.org> wrote:
>On Tue, Mar 05, 2013 at 01:09:41PM +0200, Andriy Gapon wrote:
>> Completely unrelated to the main thread:
>>
>> on 05/03/2013 07:32 Jeremy Chadwick said the following:
>> > That said, I still do not recommend ZFS for a root filesystem
>> Why?
>Too long a history of problems with it and weird edge cases (keep
>reading); the last thing an administrator wants to deal with is a system
>where the root filesystem won't mount/can't be used.  It makes recovery
>or problem-solving (e.g. when the server is not physically accessible
>because of geographic distance) very difficult.

I've had lots of problems with a gmirrored UFS root as well.  The
biggest issue is that gmirror has no audit functionality so you
can't verify that both sides of a mirror really do have the same data.
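
The closest thing to an audit I can suggest is comparing the raw
components by hand, e.g. (a sketch only: it assumes a two-way mirror on
ada0/ada1 and a quiesced filesystem, and it skips the last sector where
gmirror keeps its metadata):

  SECSIZE=$(diskinfo ada0 | awk '{print $2}')
  MEDIASIZE=$(diskinfo ada0 | awk '{print $3}')
  COUNT=$(( (MEDIASIZE - SECSIZE) / SECSIZE ))
  dd if=/dev/ada0 bs=$SECSIZE count=$COUNT | sha256
  dd if=/dev/ada1 bs=$SECSIZE count=$COUNT | sha256
  # identical digests => both halves hold the same data
  # (bump bs to something larger if the sizes divide evenly; 512-byte
  #  reads over a whole disk are painfully slow)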

>My point/opinion: UFS for a root filesystem is guaranteed to work
>without any fiddling about and, barring drive failures or controller
>issues, is (again, my opinion) a lot more risk-free than ZFS-on-root.

AFAIK, you can't boot from anything other than a single disk (i.e. no
graid).

-- 
Peter Jeremy
