From: Ulf Lilleengen <ulf.lilleengen@gmail.com>
To: Brian McCann
Cc: freebsd-questions, freebsd-geom@freebsd.org
Date: Thu, 15 Jan 2009 09:33:52 +0000
Subject: Re: gvinum & gjournal
Message-ID: <20090115093352.GB1821@carrot>
In-Reply-To: <2b5f066d0901141323j7c9a194eo4606d9769279037e@mail.gmail.com>
User-Agent: Mutt/1.5.18 (2008-05-17)
List-Id: GEOM-specific discussions and implementations

On Wed, Jan 14, 2009 at 04:23:30PM -0500, Brian McCann wrote:
> Hi all.  I'm cross-posting this since I figure I'll have better luck
> finding someone who's done this before...
>
> I'm building a system that has four 1.5 TB Seagate SATA drives in it.
> I've set up gvinum and made mirrors for my OS partitions, and a raid5
> plex for a big data partition.
> I'm trying to get gjournal to run on
> the raid5 volume, but it's doing stuff that isn't expected.  First,
> here's my gvinum config for the array:
>
> ---snip---
> drive e0 device /dev/ad8s1g
> drive e1 device /dev/ad10s1g
> drive e2 device /dev/ad12s1g
> drive e3 device /dev/ad14s1g
> volume array1
> plex org raid5 128k
> sd drive e0
> sd drive e1
> sd drive e2
> sd drive e3
> ---/snip---
>
> Now, according to the handbook, the volume it creates is essentially
> a disk drive.  So I run the following gjournal commands to make the
> journal, and here's what I get:
>
> ---snip---
> # gjournal label /dev/gvinum/array1
> GEOM_JOURNAL: Journal 4267655417: gvinum/plex/array1.p0 contains data.
> GEOM_JOURNAL: Journal 4267655417: gvinum/plex/array1.p0 contains journal.
> GEOM_JOURNAL: Journal gvinum/plex/array1.p0 clean.
> GEOM_JOURNAL: BIO_FLUSH not supported by gvinum/plex/array1.p0.
> # gjournal list
> Geom name: gjournal 4267655417
> ID: 4267655417
> Providers:
> 1. Name: gvinum/plex/array1.p0.journal
>    Mediasize: 4477282549248 (4.1T)
>    Sectorsize: 512
>    Mode: r0w0e0
> Consumers:
> 1. Name: gvinum/plex/array1.p0
>    Mediasize: 4478356291584 (4.1T)
>    Sectorsize: 512
>    Mode: r1w1e1
>    Jend: 4478356291072
>    Jstart: 4477282549248
>    Role: Data,Journal
> ---/snip---
>
> So why is it even touching the plex p0?  I figured it would work just
> like on a disk: if I gave it da0, it would create da0.journal.  Moving
> on, if I try to newfs the journal, which is now
> "gvinum/plex/array1.p0.journal", I get:

Hi,

I think it touches the plex because the .p0 provider contains the gjournal
metadata in the same way that the volume does, so gjournal attaches to the
plex before it attaches to the volume.  One problem is that gjournal
attaches to the "wrong" provider, but it's also silly that the plex
provider is exposed in the first place.  A fix for this is in a newer
version of gvinum (where the plex is not exposed), if you're willing to
try it.

-- 
Ulf Lilleengen
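For reference, the usual gjournal workflow on a plain provider looks like
the sketch below (per gjournal(8); untested here, and the device and mount
point names are just taken from this thread for illustration -- on an
affected gvinum version the label step will still grab the plex as
described above):

```shell
# Load the journaling class if it is not compiled into the kernel.
gjournal load

# Write gjournal metadata to the volume; this creates the
# /dev/gvinum/array1.journal provider on success.
gjournal label /dev/gvinum/array1

# Create a UFS filesystem with the journaling flag on the .journal provider.
newfs -J /dev/gvinum/array1.journal

# Mount it; async is safe here because gjournal provides the consistency.
mount -o async /dev/gvinum/array1.journal /mnt/array1
```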