Date:      Thu, 15 Jan 2009 09:33:52 +0000
From:      Ulf Lilleengen <ulf.lilleengen@gmail.com>
To:        Brian McCann <bjmccann@gmail.com>
Cc:        freebsd-questions <freebsd-questions@freebsd.org>, freebsd-geom@freebsd.org
Subject:   Re: gvinum & gjournal
Message-ID:  <20090115093352.GB1821@carrot>
In-Reply-To: <2b5f066d0901141323j7c9a194eo4606d9769279037e@mail.gmail.com>
References:  <2b5f066d0901141323j7c9a194eo4606d9769279037e@mail.gmail.com>

On Wed, Jan 14, 2009 at 04:23:30PM -0500, Brian McCann wrote:
> Hi all.  I'm cross-posting this since I figure I'll have better luck
> finding someone who's done this before...
> 
> I'm building a system that has 4 1.5TB Seagate SATA drives in it.
> I've setup gvinum and made mirrors for my OS partitions, and a raid5
> plex for a big data partition.  I'm trying to get gjournal to run on
> the raid5 volume...but it's doing stuff that isn't expected.  First,
> here's my gvinum config for the array:
> 
> ---snip---
> drive e0 device /dev/ad8s1g
> drive e1 device /dev/ad10s1g
> drive e2 device /dev/ad12s1g
> drive e3 device /dev/ad14s1g
> volume array1
>   plex org raid5 128k
>     sd drive e0
>     sd drive e1
>     sd drive e2
>     sd drive e3
> ---/snip---
> 
> Now...according to the Handbook, the volume it creates is essentially
> a disk drive.  So...I run the following gjournal commands to make the
> journal, and here's what I get:
> 
> ---snip---
> # gjournal label /dev/gvinum/array1
> GEOM_JOURNAL: Journal 4267655417: gvinum/plex/array1.p0 contains data.
> GEOM_JOURNAL: Journal 4267655417: gvinum/plex/array1.p0 contains journal.
> GEOM_JOURNAL: Journal gvinum/plex/array1.p0 clean.
> GEOM_JOURNAL: BIO_FLUSH not supported by gvinum/plex/array1.p0.
> # gjournal list
> Geom name: gjournal 4267655417
> ID: 4267655417
> Providers:
> 1. Name: gvinum/plex/array1.p0.journal
>    Mediasize: 4477282549248 (4.1T)
>    Sectorsize: 512
>    Mode: r0w0e0
> Consumers:
> 1. Name: gvinum/plex/array1.p0
>    Mediasize: 4478356291584 (4.1T)
>    Sectorsize: 512
>    Mode: r1w1e1
>    Jend: 4478356291072
>    Jstart: 4477282549248
>    Role: Data,Journal
> --/snip---
> 
> So...why is it even touching the plex p0?  I figured it would, just
> like on a disk, if I gave it da0, create da0.journal.  Moving on, if I
> try to newfs the journal, which is now
> "gvinum/plex/array1.p0.journal", I get:
> 
Hi,

I think it touches the plex because the .p0 provider carries the gjournal
metadata in the same way the volume does, so gjournal attaches to the plex
before it gets to the volume. One problem is that gjournal attaches to the
"wrong" provider, but it's also unfortunate that the plex provider is exposed
in the first place. This is fixed in a newer version of gvinum (the plex is
no longer exposed), if you're willing to try it.
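If you want to work around it on the gvinum you have now, something along
these lines might do it. This is only a sketch, untested on my side, and it
destroys any gjournal metadata already on the providers, so check it against
gjournal(8) and newfs(8) before running anything:

```shell
# Sketch only -- verify against gjournal(8) first; this wipes existing
# gjournal metadata. If the accidentally-created journal geom is still
# active, detach it first with "gjournal stop".

# Clear the stale gjournal metadata from the plex provider that
# gjournal attached to by mistake:
gjournal clear /dev/gvinum/plex/array1.p0

# Label the volume itself, so the journal sits on the volume:
gjournal label /dev/gvinum/array1

# newfs the resulting .journal provider; -J sets the gjournal flag
# on the new filesystem:
newfs -J /dev/gvinum/array1.journal

# gjournal-backed filesystems are normally mounted async
# (the journal already guarantees consistency):
mount -o async /dev/gvinum/array1.journal /mnt
```

Whether the clear/label ordering races with gjournal re-tasting the plex on
your version I can't say, so treat it as a starting point, not a recipe.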

-- 
Ulf Lilleengen
