Date:      Wed, 10 Nov 2004 14:31:10 -0700
From:      secmgr <security@jim-liesl.org>
To:        msch@snafu.de
Cc:        freebsd-stable@freebsd.org
Subject:   Re: freebsd 5.3 have any problem with vinum ?
Message-ID:  <4192889E.8010506@jim-liesl.org>
In-Reply-To: <200411071042.03382.msch@snafu.de>
References:  <02f201c4ba91$f9f95db0$33017f80@psique> <200411061217.06061.msch@snafu.de> <1099805401.4420.4.camel@emperor> <200411071042.03382.msch@snafu.de>

Ok, your instructions worked like a charm.  So I'm running my nice
4-member SCSI gvinum raid5 array (with softupdates turned on), and it's
zipping along.  Now I need to test just how robust this is.  camcontrol
is too nice; I want to test a more real-world failure.  I'm running
dbench and just pull one of the drives.  My expectation is that I
should see a minor pause, and then the array would continue in some
slower, degraded mode.  What I get instead is a kernel trap 12 (boom!).
I reboot, and it will not mount the degraded set until I replace the
drive.
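
For what it's worth, while dbench is running I keep an eye on the array
state from another terminal; nothing fancy, just:

    gvinum list    # subdisk and plex state; after pulling a drive I'd
                   # expect the subdisk to go "down" and the raid5 plex
                   # to drop to "degraded"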

I turned off softupdates and had the same thing happen.  Is this a
bogus test?  Is it reasonable to expect that a SCSI drive failure
should have been tolerated without crashing?

(bunch of SCSI messages to the console)
sub-disk down
plex degraded
g_access failed: 6

trap 12
page fault while in kernel mode
cpuid = 1; apic id = 01
fault virtual address   = 0x18c
fault code              = supervisor write, page not present
instruction pointer     = 0x8:0xc043d72c
stack pointer           = 0x10:cbb17bf0
code segment            = base 0x0, limit 0xfff, type 0x1b
                        = DPL 0, pres 1, def32 1, gran 1
processor eflags        = interrupt enabled, resume, IOPL = 0
current process         = 22 (irq11: ahc1)
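
That's everything I managed to copy off the console.  Next time I'll
try to have crash dumps enabled so there's something more useful than
my transcription; I assume the usual rc.conf bits are enough (the swap
device name below is just a guess at my layout):

    # /etc/rc.conf
    dumpdev="/dev/da0s1b"    # panic dump goes to the swap slice (example name)
    dumpdir="/var/crash"

    # after rebooting, pull the dump out of swap
    savecore /var/crash /dev/da0s1b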


Matthias Schuendehuette wrote:

>gvinum> start <plexname>
>
>This (as far as I investigated :-)
>
>a) initializes a newly created RAID5-plex    or
>
>b) recalculates parity information on a degraded RAID5-plex with
>   a newly replaced subdisk.
>
>So, a 'gvinum start raid5.p0' initializes my RAID5-plex if newly 
>created. You can monitor the initialization process with subsequent 
>'gvinum list' commands.
>
>If you degrade a RAID5-plex with 'camcontrol stop <diskname>' (in case 
>of SCSI-Disks) and 'repair' it afterwards with 'camcontrol start 
><diskname>', the 'gvinum start raid5.p0' (my volume here is called 
>'raid5') command recalculates the parity and revives the subdisk which 
>was on disk <diskname>.
>  
>
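
For the archives, the sequence from the quoted mail that worked for me
was roughly the following (the volume is called 'raid5' as in Matthias'
example; da2 stands in for whichever disk was stopped):

    camcontrol stop da2      # simulate the disk failure
    camcontrol start da2     # "repair" the disk
    gvinum start raid5.p0    # revive the subdisk / rebuild parity
    gvinum list              # watch the rebuild progress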


