Date:      Mon, 01 Nov 2004 08:07:30 +0000
From:      Jake Scott <jake@poptart.org>
To:        freebsd-current@freebsd.org
Subject:   Re: Gvinum RAID5 performance
Message-ID:  <4185EEC2.7040106@poptart.org>
In-Reply-To: <002401c4bf9c$c4fee8e0$0201000a@riker>
References:  <002401c4bf9c$c4fee8e0$0201000a@riker>

My machine (FreeBSD 5.3-STABLE as of 30/10/2004, 13:00 GMT) hangs as 
soon as I try to write to a RAID5 volume mounted via gvinum.  If I use 
vinum instead, I don't see the problem - and I've completed both a 
parity rebuild and a parity check successfully.
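
For reference, the sequence that triggers it is roughly this (the volume 
name "r5vol" is only an example, not my actual config):

  kldload geom_vinum
  mount /dev/gvinum/r5vol /mnt
  dd if=/dev/zero of=/mnt/testfile bs=1M count=10   # machine hangs here

whereas with the classic implementation the same volume is writable, and 
both of these run to completion:

  vinum checkparity r5vol.p0
  vinum rebuildparity r5vol.p0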

Also - can anyone tell me whether it's OK to have both gvinum and vinum 
loaded at once, in order to perform operations that gvinum doesn't 
support yet?  And if so, is a "gvinum read" sufficient to tell gvinum 
about changes made by vinum?
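
(In case it matters, this is how I'm checking what's loaded at the 
moment - I realise it may not be a supported combination:

  kldstat | grep -i vinum    # vinum.ko and/or geom_vinum.ko
  gvinum list                # compare against "vinum list" output

and I'm assuming "gvinum read" re-reads the on-disk configuration the 
way the classic "read" does, but I haven't found that documented.)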


Thanks in advance

J>


freebsd@newmillennium.net.au wrote:

>I've just added 4 x Seagate 200GB IDE hard drives to my system, and have
>created a RAID5 setup (279KB stripe size) across all 4 new disks with
>the volume name 'newexport'. Each drive is the only device on its own
>IDE channel.
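>For reference, in vinum config-file terms the new volume corresponds to
>something like this (a sketch only - the device-to-drive mapping is
>taken from the listing below):
>
>  drive drive2 device /dev/ad6s1d
>  drive drive3 device /dev/ad4s1d
>  drive drive4 device /dev/ad8s1d
>  drive drive5 device /dev/ad10s1d
>  volume newexport
>    plex org raid5 279k
>      sd length 0 drive drive2
>      sd length 0 drive drive3
>      sd length 0 drive drive4
>      sd length 0 drive drive5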
>
>The current gvinum config is below:
>
>gvinum -> l
>6 drives:
>D drive5                State: up       /dev/ad10s1d    A: 0/190654 MB (0%)
>D drive4                State: up       /dev/ad8s1d     A: 0/190654 MB (0%)
>D drive2                State: up       /dev/ad6s1d     A: 0/190654 MB (0%)
>D drive3                State: up       /dev/ad4s1d     A: 0/190654 MB (0%)
>D drive1                State: up       /dev/ad2s1d     A: 0/114345 MB (0%)
>D drive0                State: up       /dev/ad0s1d     A: 0/114345 MB (0%)
>
>8 volumes:
>V newexport             State: up       Plexes:       1 Size:        558 GB
>V root                  State: up       Plexes:       2 Size:        160 MB
>V home                  State: up       Plexes:       2 Size:       8192 MB
>V usr                   State: up       Plexes:       2 Size:       8192 MB
>V var                   State: up       Plexes:       2 Size:        512 MB
>V pgsql                 State: up       Plexes:       2 Size:        512 MB
>V squid                 State: up       Plexes:       1 Size:       2048 MB
>V export                State: up       Plexes:       1 Size:        187 GB
>
>13 plexes:
>P newexport.p0       R5 State: up       Subdisks:     4 Size:        558 GB
>P root.p0             C State: up       Subdisks:     1 Size:        160 MB
>P root.p1             C State: up       Subdisks:     1 Size:        160 MB
>P home.p0             C State: up       Subdisks:     1 Size:       8192 MB
>P home.p1             C State: up       Subdisks:     1 Size:       8192 MB
>P usr.p0              C State: up       Subdisks:     1 Size:       8192 MB
>P usr.p1              C State: up       Subdisks:     1 Size:       8192 MB
>P var.p0              C State: up       Subdisks:     1 Size:        512 MB
>P var.p1              C State: up       Subdisks:     1 Size:        512 MB
>P pgsql.p0            C State: up       Subdisks:     1 Size:        512 MB
>P pgsql.p1            C State: up       Subdisks:     1 Size:        512 MB
>P squid.p0            S State: up       Subdisks:     2 Size:       2048 MB
>P export.p0           S State: up       Subdisks:     2 Size:        187 GB
>
>18 subdisks:
>S newexport.p0.s3       State: up       D: drive5       Size:        186 GB
>S newexport.p0.s2       State: up       D: drive4       Size:        186 GB
>S newexport.p0.s1       State: up       D: drive3       Size:        186 GB
>S newexport.p0.s0       State: up       D: drive2       Size:        186 GB
>S root.p0.s0            State: up       D: drive0       Size:        160 MB
>S root.p1.s0            State: up       D: drive1       Size:        160 MB
>S home.p0.s0            State: up       D: drive0       Size:       8192 MB
>S home.p1.s0            State: up       D: drive1       Size:       8192 MB
>S usr.p0.s0             State: up       D: drive0       Size:       8192 MB
>S usr.p1.s0             State: up       D: drive1       Size:       8192 MB
>S var.p0.s0             State: up       D: drive0       Size:        512 MB
>S var.p1.s0             State: up       D: drive1       Size:        512 MB
>S pgsql.p0.s0           State: up       D: drive0       Size:        512 MB
>S pgsql.p1.s0           State: up       D: drive1       Size:        512 MB
>S squid.p0.s0           State: up       D: drive0       Size:       1024 MB
>S squid.p0.s1           State: up       D: drive1       Size:       1024 MB
>S export.p0.s0          State: up       D: drive0       Size:         93 GB
>S export.p0.s1          State: up       D: drive1       Size:         93 GB
>
>
>So far so good.
>
>Now, running dd against the plex gives lower throughput than running it
>against one of the subdisks, even though the array is not running in
>degraded mode.
>
>picard# dd if=/dev/gvinum/sd/newexport.p0.s0 of=/dev/null bs=16M count=100
>100+0 records in
>100+0 records out
>1677721600 bytes transferred in 25.873485 secs (64843279 bytes/sec)
>picard# dd if=/dev/gvinum/plex/newexport.p0 of=/dev/null bs=16M count=100
>100+0 records in
>100+0 records out
>1677721600 bytes transferred in 28.513923 secs (58838680 bytes/sec)
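>
>(To see what the four spindles can deliver together, I also plan to read
>all four subdisks in parallel - untested so far, something like:
>
>  for s in s0 s1 s2 s3; do
>    dd if=/dev/gvinum/sd/newexport.p0.$s of=/dev/null bs=16M count=100 &
>  done
>  wait
>)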
>
>
>
>Also, something blows up when running newfs - the machine locks up, but
>I don't get a panic. It could be a hardware issue; I'll try reading and
>writing the raw drives later to verify that.
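>Probably something along these lines for the read side (the write side
>needs more care, since writing over the slices would clobber the vinum
>configuration):
>
>  dd if=/dev/ad4 of=/dev/null bs=1M count=1000
>  dd if=/dev/ad6 of=/dev/null bs=1M count=1000
>  dd if=/dev/ad8 of=/dev/null bs=1M count=1000
>  dd if=/dev/ad10 of=/dev/null bs=1M count=1000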
>
>picard# newfs -L export -U /dev/gvinum/newexport 
>/dev/gvinum/newexport: 571962.0MB (1171378176 sectors) block size 16384, fragment size 2048
>        using 3113 cylinder groups of 183.77MB, 11761 blks, 23552 inodes.
>        with soft updates
>super-block backups (for fsck -b #) at:
> 160, 376512, 752864, 1129216, 1505568, 1881920, 2258272, 2634624,
>3010976, 3387328, 3763680, 4140032, 4516384, 4892736, 5269088, 5645440,
>6021792, 6398144, 6774496, 7150848, 7527200, 7903552, 8279904,
> 8656256, 9032608, 9408960, 9785312, 10161664, 10538016, 10914368,
>11290720, 11667072, 12043424, 12419776, 12796128, 13172480, 13548832,
>13925184, 14301536, 14677888, 15054240, 15430592, 15806944,
> 16183296, 16559648, 16936000, 17312352, 17688704, 18065056, 18441408,
>18817760, 19194112, 19570464, 19946816, 20323168, 20699520, 21075872,
>21452224, 21828576, 22204928, 22581280, 22957632, 23333984,
> 23710336, 24086688, 24463040, 24839392, 25215744, 25592096, 25968448,
>26344800, 26721152, 27097504, 27473856, 27850208, 28226560, 28602912,
>28979264, 29355616, 29731968, 30108320, 30484672, 30861024,
> 31237376, 31613728, 31990080, 32366432, 32742784, 33119136, <machine consistently locks up at this point>
>
>
>On a final note, I'd like to implement the ability to grow RAID5 plexes
>- any suggestions on where to start?
>


