Date:      Sun, 25 Jul 2004 17:58:11 +0200
From:      Matthias Schuendehuette <msch@snafu.de>
To:        freebsd-current@freebsd.org
Cc:        "Niels Chr. Bank-Pedersen" <ncbp@bank-pedersen.dk>
Subject:   Re: Vinum status
Message-ID:  <200407251758.11912.msch@snafu.de>
In-Reply-To: <20040725000935.GA45839@wheel.dk>
References:  <20040724041844.41697.qmail@web14206.mail.yahoo.com> <20040724235233.GE78419@wantadilla.lemis.com> <20040725000935.GA45839@wheel.dk>

On Sunday 25 July 2004 02:09, Niels Chr. Bank-Pedersen wrote:
> If I'm not mistaken, neither "gmirror(8)" nor "graid5(8)" exists
> at the moment, so users of vinum raid5 or mirrored plexes have
> nowhere to go right now.

At least for RAID5, that's not true. I *have* a vinum RAID5 volume 
running with geom_vinum and it's basically working... Ahem, OK, sort 
of... I would expect the same for mirrored plexes, but I have none at 
the moment...
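
For reference, here is roughly how such a volume gets set up - a 
minimal sketch following the config syntax in vinum(8); the drive 
names (d1-d3), partitions and volume name (r5) are made up, adjust 
devices and stripe size to taste:

    # cat > raid5.conf <<EOF
    drive d1 device /dev/ad1s1h
    drive d2 device /dev/ad2s1h
    drive d3 device /dev/ad3s1h
    volume r5
      plex org raid5 512k
        sd length 0 drive d1
        sd length 0 drive d2
        sd length 0 drive d3
    EOF
    # gvinum create raid5.conf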

I use that RAID5 volume for /usr/obj for stress testing, and I got data 
corruption on that volume, so I cannot build world. The same build 
worked with 'classic' vinum RAID5 as well as with geom_vinum 
CONCAT plexes/volumes.
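
For the curious: the "stress test" is nothing fancy - newfs the 
volume, mount it on /usr/obj and build world. A sketch, again assuming 
the volume is named r5 (geom_vinum exposes volumes under /dev/gvinum/):

    # newfs /dev/gvinum/r5
    # mount /dev/gvinum/r5 /usr/obj
    # cd /usr/src && make buildworld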

What I found in /var/log/messages while building world on the 
geom_vinum RAID5 volume is:

Jul 22 21:27:13 current kernel: swap_pager: I/O error - pageout failed; blkno 66119,size 4096, error 12
Jul 22 21:27:13 current kernel: swap_pager: I/O error - pageout failed; blkno 612,size 4096, error 12
Jul 22 21:27:13 current kernel: swap_pager: I/O error - pageout failed; blkno 66120,size 12288, error 12
Jul 22 21:27:13 current kernel: swap_pager: I/O error - pageout failed; blkno 613,size 8192, error 12
[... repeatedly]

I noticed that geom_vinum RAID5 is about twice as fast as 'classic' 
vinum RAID5 for sequential writes and therefore has a higher CPU usage 
than before - but that should not prevent the swap_pager from 
working... (Error 12 is ENOMEM, by the way.)

Another problem occurs if geom_vinum.ko is loaded via loader.conf - it 
does not panic any more (as it did about two weeks ago), but it does 
not collect all subdisks of a RAID5 plex *at boot time*. If I start 
gvinum once the system is up (multiuser), it has no problems 
collecting *all* available subdisks...
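
To be explicit about the two variants - a sketch of both ways to load 
the module:

    # Variant 1: load at boot via /boot/loader.conf
    # (this is the case where subdisks go missing):
    geom_vinum_load="YES"

    # Variant 2: load manually in multiuser - all subdisks show up:
    kldload geom_vinum
    gvinum list     # check that all drives/plexes/subdisks are 'up'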

But, OK, that's alpha software... I'll try to find out where the 
problems are and report them to Lukas. I'm sure he will work on them 
once he has overcome the more private nuisances he has at the moment...
-- 
Ciao/BSD - Matthias

Matthias Schuendehuette	<msch [at] snafu.de>, Berlin (Germany)
PGP-Key at <pgp.mit.edu> and <wwwkeys.de.pgp.net> ID: 0xDDFB0A5F


