Date:      Sat, 19 Feb 2011 10:35:35 -0500
From:      Daniel Staal <DStaal@usa.net>
To:        Matthew Seaman <m.seaman@infracaninophile.co.uk>, freebsd-questions@freebsd.org
Subject:   Re: ZFS-only booting on FreeBSD
Message-ID:  <AF8BFB811828E5E7EFD857A5@mac-pro.magehandbook.com>
In-Reply-To: <4D5FD756.5020306@infracaninophile.co.uk>
References:  <97405dd7ad34c6cbecebfdda327d1e83.squirrel@www.magehandbook.com> <4D5FB121.6090102@infracaninophile.co.uk> <F2D539249AB2457E49ED2013@mac-pro.magehandbook.com> <4D5FD756.5020306@infracaninophile.co.uk>

--As of February 19, 2011 2:44:38 PM +0000, Matthew Seaman is alleged to 
have said:

> Umm... a sufficiently forgetful sysadmin can break *anything*.  This
> isn't really a fair test: forgetting to write the boot blocks onto a
> disk could similarly render a UFS based system unbootable.   That's why
> scripting this sort of stuff is a really good idea.   Any new sysadmin
> should of course be referred to the copious and accurate documentation
> detailing exactly the steps needed to replace a drive...
>
> ZFS is definitely advantageous in this respect, because the sysadmin has
> to do fewer steps to repair a failed drive, so there's less opportunity
> for anything to be missed out or got wrong.
>
> The best solution in this respect is one where you can simply unplug the
> dead drive and plug in the replacement.  You can do that with many
> hardware RAID systems, but you're going to have to pay a premium price
> for them.  Also, you lose out on the general day-to-day benefits of
> using ZFS.

--As for the rest, it is mine.

True, best case is hardware RAID for this specific problem.  What I'm 
looking at here is basically reducing the surprise: A ZFS pool being used 
as the boot drive has the 'surprising' behavior that if you replace a drive 
using the instructions from the man pages or a naive Google search, you 
will have a drive that *appears* to work, until some later point when you 
attempt to reboot your system.  (At which point you will need to start 
over.)  To avoid this you need to read local documentation and/or remember 
that something beyond the man pages needs to be done.
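
To make that concrete, here's a sketch of the difference, assuming a 
GPT-partitioned root pool named 'zroot' and a replacement disk showing up 
as da1 (all the names here are illustrative):

  # The 'obvious' step, straight from zpool(8) -- resilvers the data,
  # but writes no boot code, so this disk cannot be booted from:
  zpool replace zroot da0p2 da1p2

  # The complete procedure: partition the new disk and install the
  # ZFS-aware boot blocks as well:
  gpart create -s gpt da1
  gpart add -t freebsd-boot -s 128k da1
  gpart add -t freebsd-zfs da1
  gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da1
  zpool replace zroot da0p2 da1p2

Miss the gpart bootcode line and everything looks healthy until the 
machine tries to boot from that disk.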

With a normal UFS/etc. filesystem the standard failure recovery systems 
will point out that this is a boot drive, and handle it as necessary.  It 
will either work or it won't; it will never *appear* to work and then fail 
at some future point from an error made now.  It might take more steps to 
repair a specific drive, but all the steps are handled together.
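
For comparison, a sketch of rebuilding a UFS boot disk, again assuming GPT 
and illustrative names (the backup method will vary; dump/restore is just 
one common choice, and the dump path here is made up):

  gpart create -s gpt da0
  gpart add -t freebsd-boot -s 64k da0
  gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 da0
  gpart add -t freebsd-ufs da0
  newfs -U /dev/da0p2
  mount /dev/da0p2 /mnt
  # restore the root filesystem from the most recent dump:
  cd /mnt && restore -rf /backups/root.dump

You can't get partway through that and believe you're finished.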

Basically, if a ZFS boot drive fails, you are likely to get the following 
scenario:
1) 'What do I need to do to replace a disk in the ZFS pool?'
2) 'Oh, that's easy.'  Replaces disk.
3) System fails to boot at some later point.
4) 'Oh, right, you need to do this *as well* on the *boot* pool...'

Whereas if a UFS boot drive fails on an otherwise ZFS system, you'll get:
1) 'What's this drive?'
2) 'Oh, so how do I set that up again?'
3) Set up replacement boot drive.

The first situation hides the fact that it's a special case, whereas the 
second one doesn't.

To avoid the first scenario you need to make sure your sysadmins are 
following *local* (and probably out-of-band) docs, and aware of potential 
problems.  And awake.  ;)  The scenario in the second situation presents 
its problem as a unified package, and you can rely on normal levels of 
alertness to be able to handle it correctly.  (The sysadmin will realize it 
needs to be set up as a boot device because it's the boot device.  ;)  It 
may be complicated, but it's *obviously* complicated.)

I'm still not clear on whether a ZFS-only system will boot with a failed 
drive in the root ZFS pool.  Once booted, of course a decent ZFS setup 
should be able to recover from the failed drive.  But the question is if 
the FreeBSD boot process will handle the redundancy or not.  At this point 
I'm actually guessing it will, which of course only exacerbates the above 
surprise problem: 'The easy ZFS disk replacement procedure *did* work in 
the past, why did it cause a problem now?'  (And conceivably it could cause 
*major* data problems at that point, as ZFS will *grow* a pool quite 
easily, but *shrinking* one is a problem.)
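
That asymmetry is easy to stumble into at the command line; a sketch, with 
the same illustrative pool and device names as above:

  # Wrong: adds a new top-level vdev, permanently growing the pool.
  # (-f overrides the replication-level warning on a mirrored pool.)
  zpool add -f zroot da1

  # Right: swaps the new disk in for the failed one and resilvers:
  zpool replace zroot da0p2 da1p2

There's no 'zpool remove' for a top-level data vdev, so the first mistake 
can't be undone short of destroying and recreating the pool.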

Daniel T. Staal

---------------------------------------------------------------
This email copyright the author.  Unless otherwise noted, you
are expressly allowed to retransmit, quote, or otherwise use
the contents for non-commercial purposes.  This copyright will
expire 5 years after the author's death, or in 30 years,
whichever is longer, unless such a period is in excess of
local copyright law.
---------------------------------------------------------------


