Date:      Sat, 24 Jan 2009 13:15:17 -0800
From:      David Ehrmann <ehrmann@gmail.com>
To:        Wes Morgan <morganw@chemikals.org>
Cc:        freebsd-stable@freebsd.org
Subject:   Re: zfs drive keeps failing between export and import
Message-ID:  <497B84E5.3090304@gmail.com>
In-Reply-To: <alpine.BSF.2.00.0901240703270.66024@ibyngvyr.purzvxnyf.bet>
References:  <6e0e5340901151158n5108ba8ct6af8fb270b10b75b@mail.gmail.com> <E1LNmxP-0003vM-Lx@dilbert.ticketswitch.com> <6e0e5340901161521t30845197s9529fb5a55dbba13@mail.gmail.com> <6e0e5340901221324o33f1e2b1l53c842ebf9dad9a8@mail.gmail.com> <alpine.BSF.2.00.0901222204440.39246@ibyngvyr.purzvxnyf.bet> <6e0e5340901240026o39eb4554u6d8eb8c00ee2adbb@mail.gmail.com> <alpine.BSF.2.00.0901240703270.66024@ibyngvyr.purzvxnyf.bet>

Wes Morgan wrote:
> You might try creating the pool, saving the first 512k of each block 
> device to a file, then export the pool and repeat, then import (or try 
> to). Run zdb on each file and compare the output. From creation to 
> export to import they should only differ by the "state" in the top 
> level of the label nvlist. If the entire label is corrupted, then 
> likely it's a crypto problem.
>
> Although, it really sounds like you've been able to eliminate zfs as a 
> culprit.
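
A rough sketch of that procedure, for reference; the pool name and the
two geli providers below are just placeholders for my actual setup:

  # right after creating the pool, save the first 512k of each provider
  # (that covers the two front labels, L0 and L1)
  zpool create tank mirror /dev/da0.eli /dev/da1.eli
  dd if=/dev/da0.eli of=/tmp/da0.create bs=512k count=1
  dd if=/dev/da1.eli of=/tmp/da1.create bs=512k count=1

  # again after the export
  zpool export tank
  dd if=/dev/da0.eli of=/tmp/da0.export bs=512k count=1
  dd if=/dev/da1.eli of=/tmp/da1.export bs=512k count=1

  # and once more around the import attempt
  zpool import tank
  dd if=/dev/da0.eli of=/tmp/da0.import bs=512k count=1
  dd if=/dev/da1.eli of=/tmp/da1.import bs=512k count=1

  # zdb -l reads labels out of a plain file as well as a device,
  # so the saved snapshots can be compared directly
  zdb -l /tmp/da0.create > /tmp/da0.create.lbl
  zdb -l /tmp/da0.export > /tmp/da0.export.lbl
  diff /tmp/da0.create.lbl /tmp/da0.export.lbl
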
This is pretty much what I tried.  Between export, geli detach, and
geli attach, zdb -l went from reporting info on the pool to reporting
that no labels were found.  dd confirmed what zdb was saying, so I have
no reason to think zfs is acting up.  I just don't get why I haven't
been able to reproduce this with another zpool-less disk or with two md
disks.  Maybe the .eli device shows up before it's ready to use and zfs
caches something in the background.  Maybe it has something to do with
me using two disks; a race condition?  None of these are easy things to
track down.
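
For reference, the md reproduction attempt looks roughly like this (the
sizes, the pool name, and the passphrase-only geli setup are simplified
stand-ins for the real configuration):

  # two small swap-backed md devices standing in for the real disks
  mdconfig -a -t swap -s 256m     # typically prints md0
  mdconfig -a -t swap -s 256m     # typically prints md1

  # passphrase-only geli here; the real disks use my actual geli setup
  geli init /dev/md0
  geli init /dev/md1
  geli attach /dev/md0
  geli attach /dev/md1

  zpool create testpool mirror /dev/md0.eli /dev/md1.eli
  zpool export testpool

  # cycle the geli providers the same way the real disks get cycled
  geli detach md0.eli
  geli detach md1.eli
  geli attach /dev/md0
  geli attach /dev/md1

  # on the md devices the labels are still there and the import works;
  # on the real disks this is where zdb -l reports no labels
  zdb -l /dev/md0.eli
  zpool import testpool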

For now, the only two ideas I have are trying zfs on a single disk with
this configuration, and then trying it again on two disks once the RMA
is done.


