From: "Brian McCann" <bjmccann@gmail.com>
To: freebsd-questions@freebsd.org, freebsd-geom@freebsd.org
Date: Wed, 14 Jan 2009 16:23:30 -0500
Subject: gvinum & gjournal
Hi all.  I'm cross-posting this since I figure I'll have better luck
finding someone who's done this before...

I'm building a system that has 4 1.5TB Seagate SATA drives in it.  I've
set up gvinum and made mirrors for my OS partitions, and a raid5 plex
for a big data partition.  I'm trying to get gjournal to run on the
raid5 volume... but it's doing stuff that isn't expected.  First, here's
my gvinum config for the array:

---snip---
drive e0 device /dev/ad8s1g
drive e1 device /dev/ad10s1g
drive e2 device /dev/ad12s1g
drive e3 device /dev/ad14s1g
volume array1
  plex org raid5 128k
    sd drive e0
    sd drive e1
    sd drive e2
    sd drive e3
---/snip---

Now... according to the handbook, the volume it creates is essentially a
disk drive.  So I run the following gjournal commands to make the
journal, and here's what I get:

---snip---
# gjournal label /dev/gvinum/array1
GEOM_JOURNAL: Journal 4267655417: gvinum/plex/array1.p0 contains data.
GEOM_JOURNAL: Journal 4267655417: gvinum/plex/array1.p0 contains journal.
GEOM_JOURNAL: Journal gvinum/plex/array1.p0 clean.
GEOM_JOURNAL: BIO_FLUSH not supported by gvinum/plex/array1.p0.
# gjournal list
Geom name: gjournal 4267655417
ID: 4267655417
Providers:
1. Name: gvinum/plex/array1.p0.journal
   Mediasize: 4477282549248 (4.1T)
   Sectorsize: 512
   Mode: r0w0e0
Consumers:
1. Name: gvinum/plex/array1.p0
   Mediasize: 4478356291584 (4.1T)
   Sectorsize: 512
   Mode: r1w1e1
   Jend: 4478356291072
   Jstart: 4477282549248
   Role: Data,Journal
---/snip---

So... why is it even touching the plex p0?  I figured that, just like on
a disk, if I gave it da0 it would create da0.journal.
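For reference, here's the sequence I was expecting to work, following
the gjournal(8) examples for a plain disk (da0 below is just a stand-in
for the gvinum volume):

---snip---
# Load the journaling class, label the provider, then newfs the
# .journal device it creates and mount it with async enabled.
gjournal load
gjournal label da0
newfs -J /dev/da0.journal
mount -o async /dev/da0.journal /mnt
---/snip---

i.e., label the volume itself and get a gvinum/array1.journal provider
back -- not have it reach inside the volume and label the plex.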
Moving on, if I try to newfs the journal, which is now
"gvinum/plex/array1.p0.journal", I get:

---snip---
# newfs -J /dev/gvinum/plex/array1.p0.journal
/dev/gvinum/plex/array1.p0.journal: 4269869.4MB (8744692476 sectors) block size 16384, fragment size 2048
        using 23236 cylinder groups of 183.77MB, 11761 blks, 23552 inodes.
newfs: can't read old UFS1 superblock: end of file from block device: No such file or directory
---/snip---

Followed by a panic and reboot:

---snip---
Fatal trap 12: page fault while in kernel mode
cpuid = 0; apic id = 00
fault virtual address   = 0x0
fault code              = supervisor read, page not present
instruction pointer     = 0x20:0xc0d8d440
stack pointer           = 0x28:0xd4e25c44
frame pointer           = 0x28:0xd4e25cf4
code segment            = base 0x0, limit 0xfffff, type 0x1b
                        = DPL 0, pres 1, def32 1, gran 1
processor eflags        = interrupt enabled, resume, IOPL = 0
current process         = 47 (gv_p array1.p0)
trap number             = 12
panic: page fault
cpuid = 0
Uptime: 14m38s
Cannot dump. No dump device defined.
Automatic reboot in 15 seconds - press a key on the console to abort
---/snip---

Next... I destroyed/cleared/stopped/etc. the journal to start fresh and
made a new one... it created the same thing
(gvinum/plex/array1.p0.journal).  I then rebooted, loaded the gjournal
module, and I now see gvinum/array1.journal as the provider, and the
provider inside plex is gone.  I then run my newfs
(newfs -J /dev/gvinum/array1.journal), and I get:

---snip---
Fatal trap 12: page fault while in kernel mode
cpuid = 0; apic id = 00
fault virtual address   = 0x1c
fault code              = supervisor read, page not present
instruction pointer     = 0x20:0xc0d8eec5
stack pointer           = 0x28:0xd4e2ecbc
frame pointer           = 0x28:0xd4e2ecf4
code segment            = base 0x0, limit 0xfffff, type 0x1b
                        = DPL 0, pres 1, def32 1, gran 1
processor eflags        = interrupt enabled, resume, IOPL = 0
current process         = 50 (gv_v array1)
trap number             = 12
panic: page fault
cpuid = 0
Uptime: 8m18s
Cannot dump. No dump device defined.
Automatic reboot in 15 seconds - press a key on the console to abort
---/snip---

Does anyone have any ideas here?  I assumed gjournal would play nice
with any file system, but clearly not.  After I clear the journal off of
/dev/gvinum/array1, I can do a newfs on it (/dev/gvinum/array1) without
the journal just fine... so that tests that the RAID5 is OK.  Anyone
have any ideas?

Thanks!
--Brian

--
_-=-_-=-_-=-_-=-_-=-_-=-_-=-_-=-_-=-_-=-_-=-_
Brian McCann

"I don't have to take this abuse from you -- I've got hundreds of
people waiting to abuse me."
        -- Bill Murray, "Ghostbusters"