Date:      Thu, 15 Jan 2009 17:57:05 +0000
From:      Ulf Lilleengen <ulf.lilleengen@gmail.com>
To:        Ivan Voras <ivoras@freebsd.org>
Cc:        Brian McCann <bjmccann@gmail.com>, freebsd-questions@freebsd.org, freebsd-geom@freebsd.org
Subject:   Re: gvinum & gjournal
Message-ID:  <20090115175704.GB1234@carrot>
In-Reply-To: <gkndaf$26q$1@ger.gmane.org>
References:  <2b5f066d0901141323j7c9a194eo4606d9769279037e@mail.gmail.com> <20090115093352.GB1821@carrot> <gkndaf$26q$1@ger.gmane.org>

On Thu, Jan 15, 2009 at 02:22:13PM +0100, Ivan Voras wrote:
> Ulf Lilleengen wrote:
> > On Wed, Jan 14, 2009 at 04:23:30PM -0500, Brian McCann wrote:
> >> Hi all.  I'm cross-posting this since I figure I'll have better luck
> >> finding someone who's done this before...
> >>
> >> I'm building a system that has 4 1.5TB Seagate SATA drives in it.
> >> I've set up gvinum and made mirrors for my OS partitions, and a raid5
> >> plex for a big data partition.  I'm trying to get gjournal to run on
> >> the raid5 volume...but it's doing things I didn't expect.  First,
> >> here's my gvinum config for the array:
> >>
> >> ---snip---
> >> drive e0 device /dev/ad8s1g
> >> drive e1 device /dev/ad10s1g
> >> drive e2 device /dev/ad12s1g
> >> drive e3 device /dev/ad14s1g
> >> volume array1
> >>   plex org raid5 128k
> >>     sd drive e0
> >>     sd drive e1
> >>     sd drive e2
> >>     sd drive e3
> >> ---/snip---
> >>
> >> Now...according to the handbook, the volume it creates is essentially
> >> a disk drive.  So...I run the following gjournal commands to make the
> >> journal, and here's what I get:
> >>
> >> ---snip---
> >> # gjournal label /dev/gvinum/array1
> >> GEOM_JOURNAL: Journal 4267655417: gvinum/plex/array1.p0 contains data.
> >> GEOM_JOURNAL: Journal 4267655417: gvinum/plex/array1.p0 contains journal.
> >> GEOM_JOURNAL: Journal gvinum/plex/array1.p0 clean.
> >> GEOM_JOURNAL: BIO_FLUSH not supported by gvinum/plex/array1.p0.
> >> # gjournal list
> >> Geom name: gjournal 4267655417
> >> ID: 4267655417
> >> Providers:
> >> 1. Name: gvinum/plex/array1.p0.journal
> >>    Mediasize: 4477282549248 (4.1T)
> >>    Sectorsize: 512
> >>    Mode: r0w0e0
> >> Consumers:
> >> 1. Name: gvinum/plex/array1.p0
> >>    Mediasize: 4478356291584 (4.1T)
> >>    Sectorsize: 512
> >>    Mode: r1w1e1
> >>    Jend: 4478356291072
> >>    Jstart: 4477282549248
> >>    Role: Data,Journal
> >> --/snip---
> >>
> >> So...why is it even touching the plex p0?  I figured that, just like
> >> with a disk, if I gave it da0 it would create da0.journal.  Moving on,
> >> if I try to newfs the journal, which is now
> >> "gvinum/plex/array1.p0.journal", I get:
> >>
> > Hi,
> > 
> > I think it touches it because the .p0 plex contains the gjournal
> > metadata in the same way the volume does, so gjournal attaches to the
> > plex before the volume. One problem is that gjournal attaches to the
> > "wrong" provider, but it's also silly that the plex is exposed as a
> > provider in the first place. This is fixed in a newer version of
> > gvinum (the plex is no longer exposed), if you're willing to try it.
> > 
> 
> A simpler fix is to use the "-h" ("hardcode provider name") switch
> with the "gjournal label" command (see the man page).
> 
Oh, nice feature. I recommend this then :)
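
Untested, but something along these lines should work here (the geom to
stop should be checked against your "gjournal list" output, and the last
step assumes you want a journaled UFS via "newfs -J"; see gjournal(8)
and newfs(8) before running any of it):

---snip---
# detach the journal that was tasted on the plex, then wipe its metadata
gjournal stop gvinum/plex/array1.p0.journal
gjournal clear /dev/gvinum/array1
# re-label with the provider name hardcoded into the metadata (-h),
# so gjournal only ever attaches to the volume, not the plex
gjournal label -h /dev/gvinum/array1
# create a UFS with gjournal support on the new .journal provider
newfs -J /dev/gvinum/array1.journal
---/snip---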

-- 
Ulf Lilleengen


