Date:      Wed, 06 Sep 2000 22:00:34 -0400
From:      Mike Tancsa <mike@sentex.net>
To:        Greg Lehey <grog@lemis.com>
Cc:        stable@FreeBSD.ORG
Subject:   Re: Vinum (was: RAID)
Message-ID:  <4.2.2.20000906212120.03316380@mail.sentex.net>
In-Reply-To: <20000907103313.C7718@wantadilla.lemis.com>
References:  <4.2.2.20000906000615.03226880@mail.sentex.net> <00090310375700.05988@www.runapplications.com> <4.2.2.20000904120154.07455bd0@mail.sentex.net> <20000905101653.A49732@wantadilla.lemis.com> <4.2.2.20000904204407.033e2920@mail.sentex.net> <20000905115126.A14470@myhakas.matti.ee> <4.3.2.7.0.20000905090105.053a13f0@marble.sentex.ca> <20000905155032.A27690@futuresouth.com> <4.3.2.7.0.20000905165313.037652c0@marble.sentex.ca> <20000906084835.B21113@wantadilla.lemis.com> <4.2.2.20000906000615.03226880@mail.sentex.net>

At 10:33 AM 9/7/2000 +0930, Greg Lehey wrote:
>I'd really like to see the output of rawio, simply because it's more
>repeatable from one platform to another.

I will give it a try.


> > As for testing, so far so good.  Actually, I was quite taken aback
> > by some of the results.  It really does seem a lot faster, certainly
> > from the limited testing I have done.  In one test, where I blast
> > email at the box as fast as I can from two outside hosts, it took
> > half the time to deliver mail to 13,000 user mailboxes compared to
> > the time it took on the 428 MegaRAID controller in a 3-disk striped
> > config with the same physical drives involved.
>
>You're saying Vinum is faster?


Yes, vinum was faster in the mail test.  I think the big hit has been the
issue of RAID5 on 3 disks.  For testing, I put in my scratch disks (five
1 gig Seagates I got at surplus) and got the following results:

1 gig Seagate on an ahc

               -------Sequential Output-------- ---Sequential Input-- --Random--
               -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
           400  4684 20.0  4342  5.8  2225  4.0  4466 28.0  4635  3.4 140.6  1.1
5 Seagates striped (vinum)
               -------Sequential Output-------- ---Sequential Input-- --Random--
               -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
           400 21745 91.5 21862 31.0  3749  8.5  9325 59.3 11582 13.1 348.5  3.8
          1000 21484 91.2 20910 29.0  3796  8.3  9429 59.7 11417 12.6 186.7  2.4
raid5      400  1446  6.2  1427  1.9  1191  2.1  6597 41.6  6942  5.4 372.3  3.2
raid10     400  9012 39.1  8692 12.1  3434  6.4  7971 50.3  8209  6.4 377.2  3.0
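
For anyone who wants to try the same thing at home, the striped volume above
corresponds to a vinum create file along these lines.  This is only a sketch,
not my exact config; the device names, subdisk sizes and stripe size here are
guesses, so adjust them to your drives before feeding it to "vinum create":

    drive d1 device /dev/da1e
    drive d2 device /dev/da2e
    drive d3 device /dev/da3e
    drive d4 device /dev/da4e
    drive d5 device /dev/da5e
    volume stripe
      plex org striped 512k
        sd length 800m drive d1
        sd length 800m drive d2
        sd length 800m drive d3
        sd length 800m drive d4
        sd length 800m drive d5

The raid5 case is the same thing with "plex org raid5 512k" instead, and the
usual way to build raid10 under vinum is a volume containing two striped
plexes, since mirroring happens at the plex level.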

1 gig Seagates on the amr428
               -------Sequential Output-------- ---Sequential Input-- --Random--
               -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
raid5      400 10204 43.4 10015 13.8  3106  5.3  7594 47.6  8142  6.1 218.1  1.6
          1000 10447 44.5 10593 14.6  3370  5.8  7628 48.0  8200  6.1 105.3  0.9


What I find interesting is that with RAID5 on 5 disks (SEAGATE ST31055W,
40.000MB/s; old 5400 RPM slowpokes, good for RAID5 wrecking/testing) I see
better read/write performance than on one disk.  But when using only 3 drives
in a RAID5, I see really crappy performance (which I realize now is
expected).  So, when I was looking at

da0 (ATLAS IV 80.000MB/s SCSI-3 device)
               -------Sequential Output-------- ---Sequential Input-- --Random--
               -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
           400 20072 84.3 19866 26.7  7477 13.6 14591 91.6 19343 14.9 288.2  2.3
          1000 19247 81.6 18716 25.3  7913 17.8 14787 93.4 19310 21.4 146.4  1.8

and then at the same drives on the amr in a 3-drive RAID5

               -------Sequential Output-------- ---Sequential Input-- --Random--
               -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
           400  8984 38.5  8881 12.6  4189  7.2  9420 59.1  9797  7.4 409.5  3.1
          1000  9275 39.6  9186 12.6  4341  7.5  9840 61.9 10229  7.7 203.7  1.8

the figures looked disappointing, to say the least.

I imagine I would see a fairly big improvement on the amr if I had 2 more of
these drives to test with... which, in the end, is what I will probably go
with: RAID5 + 1 hot spare.
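
(Rough arithmetic behind that expectation: in an N-disk RAID5 only N-1 disks
carry data, and anything smaller than a full stripe write turns into a
read-modify-write of both data and parity blocks.  Going from 3 drives to 5
takes you from 2 data disks per stripe to 4, so the best-case write rate
roughly doubles, and there are two more spindles to soak up the
read-modify-write traffic.)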




> > The other neat thing I have found so far, was that on a 256MB
> > machine, comparing the amr, mlx, da, ad to the vinum drive, the
> > vinum tests gave me the best even distribution of multiple processes
> > blasting on the disk.  Running 15 bonnie -s 100 at once, all the
>
>Hmm.  I wonder if this is a coincidence.  I don't think it's a plus
>for Vinum, anyway.  I suspect that somewhere in the system we have a
>problem with process balancing.  It's very obvious initializing Vinum
>subdisks, where the elapsed time for performing identical operations
>in parallel can vary by a factor of 2:1.

The list is rather large, but these particular results can be found at

http://www.simianscience.com/multi.txt

Some of them are way out of whack.  If I have time at the office tomorrow
morning, I will re-run the tests with rawio.
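
For reference, kicking off 15 simultaneous bonnie runs like the ones above is
just a shell loop along these lines (the mount point here is made up; point
-d at whatever volume is under test):

    #!/bin/sh
    # start 15 bonnie runs of 100 MB each in parallel and wait for them all
    for i in `jot 15`; do
            mkdir -p /mnt/test/run$i
            bonnie -s 100 -d /mnt/test/run$i &
    done
    wait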

BTW, one quick test I just did with the equipment as-is:
ad7: 19595MB <QUANTUM FIREBALLP LM20.5> [39813/16/63] at ata3-master using UDMA66
on a promise ata66
vs
da0: <QUANTUM ATLAS IV 9 WLS 0B0B> Fixed Direct Access SCSI-3 device
da0: 80.000MB/s transfers (40.000MHz, offset 31, 16bit), Tagged Queueing Enabled
da0: 8761MB (17942584 512 byte sectors: 255H 63S/T 1116C)

            Random read  Sequential read    Random write Sequential write
ID          K/sec  /sec    K/sec  /sec     K/sec  /sec     K/sec  /sec
ad7e       1573.7    98   1348.4    82    1804.0   112    2845.7   174
da0e       1889.2   117   1935.6   118    1611.4    99    1736.5   106


         ---Mike
--------------------------------------------------------------------
Mike Tancsa,                          	          tel +1 519 651 3400
Network Administration,     			  mike@sentex.net
Sentex Communications                 		  www.sentex.net
Cambridge, Ontario Canada			  www.sentex.net/mike






