Date:      Thu, 18 Dec 2008 18:57:53 +0100
From:      Ulf Lilleengen <ulf.lilleengen@gmail.com>
To:        Dimitri Aivaliotis <aglarond@gmail.com>
Cc:        freebsd-geom@freebsd.org
Subject:   Re: gvinum raid10 stale
Message-ID:  <20081218175752.GA10326@carrot.lan>
In-Reply-To: <55c107bf0812180320x502847efi53df5a7da68b73e1@mail.gmail.com>
References:  <55c107bf0812180320x502847efi53df5a7da68b73e1@mail.gmail.com>

On Thu, Dec 18, 2008 at 12:20:26PM +0100, Dimitri Aivaliotis wrote:
> Hi,
> 
> I created a raid10 using gvinum with the following config:
> 
> drive a device /dev/da2
> drive b device /dev/da3
> volume raid10
>    plex org striped 512k
*SNIP*
> 
> 
Why do you create 32 subdisks for each stripe? They are all on the same
drive, so as I see it they should not give you any performance increase. Just
having one subdisk per drive and mirroring the two would give the same
effect, and would also let you expand the size later.
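To illustrate, a minimal config along those lines might look like this (a
sketch only; it reuses the da2/da3 devices from your config, and the volume
and drive names are just placeholders):

drive a device /dev/da2
drive b device /dev/da3
volume mirror
  plex org concat
    sd length 0 drive a
  plex org concat
    sd length 0 drive b

One concat plex per drive, mirrored at the volume level; "length 0" makes
each subdisk use all remaining space on its drive.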

> I wanted to add two additional disks to this raid10, so I shutdown the
> server, inserted the disks and brought it back up.  When the system
> booted, it reported the filesystem as needing a check.  Doing a gvinum
> list, I saw that all subdisks were stale, so both plexes were down.
> After rebooting again (to remove the additional disks), the problem
> persisted.  My assumption that the new disks caused the old subdisks
> to be stale wasn't true, as I later noticed that a different server
> with the same config has a plex down as well because all subdisks on
> that plex are stale.  The servers are running 6.3-RELEASE-p1 and
> 6.2-RELEASE-p9, respectively.
> 
> (I wound up doing a 'gvinum setstate -f up raid10.p1.s<num>' 32 times
> to bring one plex back up on the server that had both down.)
> 
> My questions:
> 
> - Why would these subdisks be set stale?
I don't see how the subdisks could go stale after inserting the disks, unless
the device names changed and the new disks were assigned the old disks'
device numbers.

> - How can I recover the other plex, such that the data continues to be
> striped+mirrored correctly?
For the volume where you have one good plex, you can do:
gvinum start raid10 

This command will sync the bad plex from the good one.

For the volume where both plexes are down, you can try forcing the subdisks
of one plex into the up state and see if you are able to fsck/mount the
volume. If not, try the same procedure with the other plex. If one of them
checks and mounts cleanly, you can be fairly certain that plex is good, and
you can then sync the plexes.
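As a sketch, the forcing step could look like the loop below. It only prints
the commands (pipe it to sh(1) to actually run them); the raid10.p1.s<num>
subdisk names and the count of 32 are taken from your message, and the
/dev/gvinum/raid10 device path is an assumption based on the volume name:

```shell
# Dry run: print the commands that would force every subdisk of plex p1
# up, then check the filesystem read-only before trusting the plex.
for i in $(seq 0 31); do
    echo "gvinum setstate -f up raid10.p1.s$i"
done
# fsck -n checks only and changes nothing on disk
echo "fsck -n /dev/gvinum/raid10"
```

If fsck reports heavy damage, try the other plex instead before writing
anything.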

> - How can I extend this raid10 by adding two additional disks?
I assume you want to increase the size rather than add more mirrors, so you
can't: the plexes are striped, and growing a striped plex is only supported
in a new gvinum version that has not been committed yet.

-- 
Ulf Lilleengen


