Date:      Thu, 15 Jan 2009 02:56:45 -0500
From:      Yoshihiro Ota <ota@j.email.ne.jp>
To:        "Brian McCann" <bjmccann@gmail.com>
Cc:        freebsd-geom@freebsd.org
Subject:   Re: gvinum & gjournal
Message-ID:  <20090115025645.21ad2185.ota@j.email.ne.jp>
In-Reply-To: <2b5f066d0901141323j7c9a194eo4606d9769279037e@mail.gmail.com>
References:  <2b5f066d0901141323j7c9a194eo4606d9769279037e@mail.gmail.com>

Try 'dd if=/dev/zero of=/dev/gvinum/array1 bs=1M count=10'.

Zeroing out the head of the disk sometimes helps clear stale metadata.
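
Note that gjournal, like most GEOM classes, keeps its metadata in the
provider's last sector, so wiping the head alone may not remove a stale
label.  A sketch of clearing it explicitly, assuming the label really is
on gvinum/array1:

---snip---
# gjournal clear -v /dev/gvinum/array1
---/snip---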

You may want to add a new disk or create a dedicated partition for the
journal area.  Journaling on top of raid5 sounds like overkill.
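
If you go that route, gjournal label takes the journal provider as a
second argument, and -s sets the journal size.  A minimal sketch,
assuming a spare partition named ad8s1h (a placeholder, not your layout):

---snip---
# gjournal label -v /dev/gvinum/array1 /dev/ad8s1h   <- ad8s1h is a placeholder
# newfs -J /dev/gvinum/array1.journal
---/snip---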

Hiro

On Wed, 14 Jan 2009 16:23:30 -0500
"Brian McCann" <bjmccann@gmail.com> wrote:

> Hi all.  I'm cross-posting this since I figure I'll have better luck
> finding someone who's done this before...
> 
> I'm building a system that has 4 1.5TB Seagate SATA drives in it.
> I've set up gvinum and made mirrors for my OS partitions, and a raid5
> plex for a big data partition.  I'm trying to get gjournal to run on
> the raid5 volume...but it's not behaving as expected.  First,
> here's my gvinum config for the array:
> 
> ---snip---
> drive e0 device /dev/ad8s1g
> drive e1 device /dev/ad10s1g
> drive e2 device /dev/ad12s1g
> drive e3 device /dev/ad14s1g
> volume array1
>   plex org raid5 128k
>     sd drive e0
>     sd drive e1
>     sd drive e2
>     sd drive e3
> ---/snip---
> 
> Now...according to the handbook, the volume it creates is essentially
> a disk drive.  So...I run the following gjournal commands to make the
> journal, and here's what I get:
> 
> ---snip---
> # gjournal label /dev/gvinum/array1
> GEOM_JOURNAL: Journal 4267655417: gvinum/plex/array1.p0 contains data.
> GEOM_JOURNAL: Journal 4267655417: gvinum/plex/array1.p0 contains journal.
> GEOM_JOURNAL: Journal gvinum/plex/array1.p0 clean.
> GEOM_JOURNAL: BIO_FLUSH not supported by gvinum/plex/array1.p0.
> # gjournal list
> Geom name: gjournal 4267655417
> ID: 4267655417
> Providers:
> 1. Name: gvinum/plex/array1.p0.journal
>    Mediasize: 4477282549248 (4.1T)
>    Sectorsize: 512
>    Mode: r0w0e0
> Consumers:
> 1. Name: gvinum/plex/array1.p0
>    Mediasize: 4478356291584 (4.1T)
>    Sectorsize: 512
>    Mode: r1w1e1
>    Jend: 4478356291072
>    Jstart: 4477282549248
>    Role: Data,Journal
> --/snip---
> 
> So...why is it even touching the plex p0?  I figured that, just like
> on a disk, if I gave it da0 it would create da0.journal.  Moving on, if I
> try to newfs the journal, which is now
> "gvinum/plex/array1.p0.journal", I get:
> 
> ---snip---
> # newfs -J /dev/gvinum/plex/array1.p0.journal
> /dev/gvinum/plex/array1.p0.journal: 4269869.4MB (8744692476 sectors) block size
> 16384, fragment size 2048
>         using 23236 cylinder groups of 183.77MB, 11761 blks, 23552 inodes.
> newfs: can't read old UFS1 superblock: end of file from block device:
> No such file or directory
> ---/snip---
> 
> Followed by a panic and reboot:
> 
> ---snip---
> Fatal trap 12: page fault while in kernel mode
> cpuid = 0; apic id = 00
> fault virtual address   = 0x0
> fault code              = supervisor read, page not present
> instruction pointer     = 0x20:0xc0d8d440
> stack pointer           = 0x28:0xd4e25c44
> frame pointer           = 0x28:0xd4e25cf4
> code segment            = base 0x0, limit 0xfffff, type 0x1b
>                         = DPL 0, pres 1, def32 1, gran 1
> processor eflags        = interrupt enabled, resume, IOPL = 0
> current process         = 47 (gv_p array1.p0)
> trap number             = 12
> panic: page fault
> cpuid = 0
> Uptime: 14m38s
> Cannot dump. No dump device defined.
> Automatic reboot in 15 seconds - press a key on the console to abort
> ---/snip---
> 
> Next...I destroyed/cleared/stopped/etc. the journal to start fresh and
> made a new one...it created the same thing
> (gvinum/plex/array1.p0.journal)...I then rebooted, loaded the gjournal
> module, and now I see gvinum/array1.journal as the provider, and the
> provider inside the plex is gone.  I then run my newfs (newfs -J
> /dev/gvinum/array1.journal), and I get:
> 
> ---snip---
> Fatal trap 12: page fault while in kernel mode
> cpuid = 0; apic id = 00
> fault virtual address   = 0x1c
> fault code              = supervisor read, page not present
> instruction pointer     = 0x20:0xc0d8eec5
> stack pointer           = 0x28:0xd4e2ecbc
> frame pointer           = 0x28:0xd4e2ecf4
> code segment            = base 0x0, limit 0xfffff, type 0x1b
>                         = DPL 0, pres 1, def32 1, gran 1
> processor eflags        = interrupt enabled, resume, IOPL = 0
> current process         = 50 (gv_v array1)
> trap number             = 12
> panic: page fault
> cpuid = 0
> Uptime: 8m18s
> Cannot dump. No dump device defined.
> Automatic reboot in 15 seconds - press a key on the console to abort
> 
> ---/snip---
> 
> Does anyone have any ideas here?  I assumed gjournal would play nice
> with any file system, but clearly it doesn't.  After I clear the
> journal off of /dev/gvinum/array1, I can newfs it (/dev/gvinum/array1)
> without the journal just fine...so the RAID5 itself is OK.
> 
> Thanks!
> --Brian
> 
> -- 
> _-=-_-=-_-=-_-=-_-=-_-=-_-=-_-=-_-=-_-=-_-=-_
> Brian McCann
> 
> "I don't have to take this abuse from you -- I've got hundreds of
> people waiting to abuse me."
>                 -- Bill Murray, "Ghostbusters"


