Date:      Thu, 11 Sep 2008 00:56:56 -0700
From:      Daniel Scheibli <daniel.scheibli@edelbyte.org>
To:        lulf@stud.ntnu.no
Cc:        freebsd-geom@freebsd.org
Subject:   Re: Interaction of geom_vinum & geom_eli
Message-ID:  <48C8CF48.1060808@edelbyte.org>
In-Reply-To: <20080908135741.GA2567@nobby.lan>
References:  <48C47AD0.50905@edelbyte.org> <20080908135741.GA2567@nobby.lan>


Ulf Lilleengen wrote:
> On Sun, Sep 07, 2008 at 06:07:28PM -0700, Daniel Scheibli wrote:
> [...]
>> My question is how does geom_vinum react on this?
>>
>> I suspect it will reconstruct the data from the parity written
>> to the other disks to service the request.
>>
>> But how is the disk - with the corrupt block - handled? Is the
>> entire disk marked as bad? Or does it only mark that single block?
>> Does it attempt to rewrite the corrupt data with the reconstructed
>> data?
>>
> Hi,
> 
> Gvinum will set the state of the drive to "down" (and you will get a
> "GEOM_VINUM: lost drive XXX" message). It will then, as you say, reconstruct
> the data if it's part of a RAID-5 plex. It will not, however, "salvage" the
> data on the drive like ZFS does, for instance.

Hi,

thanks for your reply, that's what I feared.

I tend to run a "checksum all data" script every time I do
a backup (to ensure that the backup worked, but also to check
that only the expected files changed since the last checksum run).
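
(The script itself is nothing fancy. Roughly the idea, as a Python
sketch with made-up paths - it walks the data, hashes every file and
compares the digests against the manifest written by the previous run:)

#!/usr/bin/env python
# Sketch of the "checksum all data" idea: hash every file under DATA_ROOT
# and compare the digests against the manifest from the previous run.
# DATA_ROOT and MANIFEST are placeholders, not my real paths.
import hashlib
import os

DATA_ROOT = "/data"
MANIFEST = "/var/backups/checksums.txt"

def sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Load the previous manifest (digest and path, two spaces in between).
old = {}
if os.path.exists(MANIFEST):
    with open(MANIFEST) as f:
        for line in f:
            digest, path = line.rstrip("\n").split("  ", 1)
            old[path] = digest

# Hash everything, report files whose checksum changed, write the new manifest.
new = {}
for dirpath, _dirs, files in os.walk(DATA_ROOT):
    for name in files:
        path = os.path.join(dirpath, name)
        new[path] = sha256(path)
        if path in old and old[path] != new[path]:
            print("CHANGED: %s" % path)

with open(MANIFEST, "w") as f:
    for path in sorted(new):
        f.write("%s  %s\n" % (new[path], path))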

If a single corrupt block results in the entire disk being
flagged "down", then I worry that I'm only one more corrupt
block (on any other disk) away from the entire plex being
considered broken.
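
(At the very least I would like to notice the first drive going "down"
quickly. Something along these lines is what I had in mind - a rough
sketch that just shells out to gvinum; I am assuming the drive lines of
"gvinum list" carry a "State: down" marker, the exact output format may
well differ:)

#!/usr/bin/env python
# Rough sketch: run "gvinum list" and complain about anything whose state
# is reported as down. The matching on "State: down" is an assumption
# about the output format; adjust it to what gvinum actually prints.
import re
import subprocess
import sys

out = subprocess.Popen(["gvinum", "list"],
                       stdout=subprocess.PIPE).communicate()[0]
if not isinstance(out, str):
    out = out.decode("utf-8", "replace")    # bytes on newer Pythons

down = [line.strip() for line in out.splitlines()
        if re.search(r"State:\s*down", line)]

if down:
    print("gvinum reports objects down:")
    for line in down:
        print("  " + line)
    sys.exit(1)
print("all gvinum objects up")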

Are there any future plans to rewrite the reconstructed
data back to the "failed" disk (in geom_vinum or geom_raid5),
or is this something where one should look towards
the ZFS port? Also, would it be of interest to have the
"escalation" mode configurable?



