Date:      Tue, 5 Mar 2013 14:09:36 -0800
From:      Jeremy Chadwick <jdc@koitsu.org>
To:        Andriy Gapon <avg@FreeBSD.org>
Cc:        freebsd-stable@FreeBSD.org
Subject:   Re: ZFS "stalls" -- and maybe we should be talking about defaults?
Message-ID:  <20130305220936.GA54718@icarus.home.lan>
In-Reply-To: <5135D275.3050500@FreeBSD.org>
References:  <513524B2.6020600@denninger.net> <89680320E0FA4C0A99D522EA2037CE6E@multiplay.co.uk> <20130305050539.GA52821@anubis.morrow.me.uk> <20130305053249.GA38107@icarus.home.lan> <5135D275.3050500@FreeBSD.org>

On Tue, Mar 05, 2013 at 01:09:41PM +0200, Andriy Gapon wrote:
> Completely unrelated to the main thread:
> 
> on 05/03/2013 07:32 Jeremy Chadwick said the following:
> > That said, I still do not recommend ZFS for a root filesystem
> 
> Why?

Too long a history of problems with it and weird edge cases (keep
reading); the last thing an administrator wants to deal with is a system
whose root filesystem won't mount or can't be used.  That makes recovery
and troubleshooting very difficult, e.g. when the server is not
physically accessible given geographic distances.

Are there still issues booting from raidzX or stripes or root pools with
multiple vdevs?  What about with cache or log devices?

My point/opinion: UFS for a root filesystem is guaranteed to work
without any fiddling about and, barring drive failures or controller
issues, carries (again, my opinion) a lot less risk than ZFS-on-root.

I say that knowing lots of people use ZFS-on-root, which is great -- I
just wonder how many of them have tested all the crazy scenarios and
then tried to boot from things.
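For what it's worth, a quick sanity check before trusting a ZFS root is
to confirm which dataset the loader will mount and what is actually
mounted at / right now (a sketch only; the pool name "zroot" is an
example, substitute your own):

```shell
# Show which dataset the bootloader will try to use as root
# ("zroot" is an example pool name).
zpool get bootfs zroot

# Confirm what is actually mounted at / on the running system.
mount -p | awk '$2 == "/"'

# Check the pool reports healthy before rebooting into it.
zpool status -x zroot
```

None of this exercises the crazy failure scenarios, of course -- it only
tells you the happy path is wired up correctly.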

> > (this biting people still happens even today)
> 
> What exactly?

http://lists.freebsd.org/pipermail/freebsd-questions/2013-February/249363.html
http://lists.freebsd.org/pipermail/freebsd-questions/2013-February/249387.html
http://lists.freebsd.org/pipermail/freebsd-stable/2013-February/072398.html

The last one got solved:

http://lists.freebsd.org/pipermail/freebsd-stable/2013-February/072406.html
http://lists.freebsd.org/pipermail/freebsd-stable/2013-February/072408.html

I know for a fact you're aware of the zpool.cache ordeal (which may or
may not be the cause of the issue shown in the 2nd URL above), but my
point is that even now -- barring someone using a stable/9 ISO for
installation -- there still seem to be issues.
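For anyone bitten by the zpool.cache problem, the usual workaround is to
regenerate the cache file from a live/fixit environment so the loader can
find the pool at boot (a sketch only; "zroot" is an example pool name and
the paths assume a stock FreeBSD layout):

```shell
# From a live/fixit environment: import the pool under a temporary
# mountpoint ("zroot" is an example pool name).
zpool import -f -o altroot=/mnt zroot

# Rewrite the cache file in the location the loader reads at boot;
# setting cachefile= makes zpool write the cache to that path.
zpool set cachefile=/mnt/boot/zfs/zpool.cache zroot
```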

Threads of this nature on the mailing lists that go unanswered or never
reach closure are numerous, and that only adds to my concern.

> > - Disks are GPT and are *partitioned, and ZFS refers to the partitions
> >   not the raw disk -- this matters (honest, it really does; the ZFS
> >   code handles things differently with raw disks)
> 
> Not on FreeBSD as far I can see.

My statement comes from here (first line in particular):

http://lists.freebsd.org/pipermail/freebsd-questions/2013-January/248697.html

If this is wrong/false, then this furthers my point about kernel folks
who are in-the-know needing to chime in and help stop the
misinformation.  The rest of us are just end-users, often misinformed.
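Either way, it's easy to check what a given pool was actually built on,
since the vdev names in zpool output reveal it (a sketch; pool and device
names are examples):

```shell
# Vdev names show whether ZFS was handed partitions or raw disks:
#   "ada0"      = whole raw disk
#   "ada0p3"    = GPT partition
#   "gpt/disk0" = GPT-labeled partition
zpool status zroot

# Cross-check against the disk's GPT partition table.
gpart show ada0
```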

-- 
| Jeremy Chadwick                                   jdc@koitsu.org |
| UNIX Systems Administrator                http://jdc.koitsu.org/ |
| Mountain View, CA, US                                            |
| Making life hard for others since 1977.             PGP 4BD6C0CB |


