Date:      Thu, 7 Jun 2001 15:04:38 -0700 
From:      Michael VanLoon <MichaelV@EDIFECS.COM>
To:        Michael VanLoon <MichaelV@EDIFECS.COM>, "'Achim Patzner'" <ap@bnc.net>
Cc:        hardware@freebsd.org
Subject:   RE: Casing wanted
Message-ID:  <36F7B20351634E4FBFFE6C6A216B30D54C26@ecx1.edifecs.com>

OK, so I can't do math when in a hurry... heh.  4 x 160 = 640, not 740MB/s...
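
For the record, here's the arithmetic done properly, as a trivial
back-of-the-envelope Python sketch (theoretical bus rates only, nothing
measured; the same numbers come up again in the quoted text below):

    # Theoretical aggregate SCSI bandwidth: Ultra3 LVD moves 160MB/s
    # per channel, and the setup discussed below has four channels.
    channels = 4
    mb_per_channel = 160
    print(channels * mb_per_channel, "MB/s")   # -> 640 MB/s, not 740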

> From: Michael VanLoon 
> Sent: Thursday, June 07, 2001 1:56 PM
> 
> Proof of concept is fine.  I wish you luck with building this server
> and with its long-term reliability.  So with that said, I'm not trying
> to convince you not to build it, but I figure I should address the
> points I already brought up.
> 
> > From: Achim Patzner [mailto:ap@bnc.net]
> > Sent: Thursday, June 07, 2001 1:24 AM
> > 
> > > Agreed it's quite expensive but... with hardware SCSI RAID you get:
> > > - You don't have all these squirrelly issues you just brought up
> > >   (cable lengths, etc.)
> > 
> > Which is about the only problem I encountered, and 3ware just told me
> > (after a direct hit on their long-term memory with a hard object)
> > that there are cables up to 70 cm.
> 
> Yes, but as far as I know these are outside the ATA100 spec and are
> not guaranteed to work in all cases.  It's kinda hit-and-miss.  If it
> works, you're good (unless it starts acting strange).  If it doesn't
> work, oh well.  Correct me if I'm wrong on this point -- I don't have
> an ATA100 spec in front of me.
> 
> > > - Automatic unattended hardware failover to hot-spare(s)
> > > - Automatic unattended background fill-in of data on failed-in
> > >   hot spares while server is live
> > > - Caching controller that does delayed elevator-sorted write-backs,
> > >   and read-ahead
> > 
> > This is nothing the 3ware controller won't do.
> 
> Well, that hasn't been proven to me.  3ware's specs and white papers
> are unfortunately light on the actual details of how complete their
> live hot-spare support is.  That doesn't prove they lack the
> capability, but it is worth noting that they never say more about it
> than a bullet point reading "hot-spare".
> 
> A good hardware SCSI RAID controller will fill in the hot-spare while
> the server is up and live.  Without the server even realizing anything
> happened (aside from reduced disk performance), it will back-fill the
> redundant data onto the new drive and bring it into the array.  Once
> the new drive is filled, it is a full member of the array and the
> machine is running exactly as it was before the failure.  All this
> without a reboot, and without any downtime.
> 
> One other thing I forgot to mention is dynamic expansion of arrays.
> If you need to add more drives, it will use the same technology to
> expand the size of the array without you having to move any data
> yourself (it will redistribute the data on the disks for optimal
> striping, but that is invisible to the user or OS).  Then if you're
> running NT you just tell it to expand the volume, and if you're
> running FreeBSD you just run growfs, as sketched below.  Or make a
> new partition there if that suits you.
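> 
> To make that concrete, the FreeBSD side would look roughly like this
> (the device name is made up for illustration, and growfs wants the
> filesystem unmounted while it runs):
> 
>     # The controller has already expanded the array, so the extra
>     # space appears at the end of the existing device; stretch the
>     # filesystem to fill it:
>     growfs /dev/da0s1e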
> 
> Without detailed white papers it's hard to claim the 3ware controller
> can do all this.  Maybe it can, but I don't see any proof of it in
> their documentation.
> 
> > > - Better reliability (yes SCSI drives really ARE built better)
> > 
> > No.  I had 15 IBM DDYS drives (of 35) fail after less than 12 months
> > and didn't lose a single Maxtor.
> 
> Everybody is going to have wildly varying accounts of this over such a
> small sample size.  I had 6 of 8 Western Digital IDE drives fail
> within 3 days of buying them.  But I realize this is just an anomaly
> and we got a bad batch.
> 
> Over the span of several years and hundreds or thousands of drives, I
> think you will find the results generally go in SCSI's favor.
> 
> > > - Higher performance (though yes, IDE performance is pretty good)
> > 
> > Hm.  I've seen people getting 95 MB/s through a 3ware RAID.  Don't
> > forget that it has a single channel per disk.
> > 
> > All in all this is a reason why they want this machine - they want
> > to compare performance...
> 
> Fine, I'm all for comparing performance.  Competition is good!
> 
> However, keep in mind that modern SCSI controllers are 160MB/s per
> channel.  With four channels that's 740MB/s.  And that's just standard
> Ultra3 LVD, not Fibre Channel.
> 
> > > - Depending on the controller from 15 to 60 drives per controller
> > 
> > Not really. I'm a strong believer in one channel per disk.
> 
> Well, you're looking at it from an IDE point of view.  IDE drives
> REQUIRE one drive per channel to get decent performance, because IDE
> is a much simpler protocol.  And four channels gives you fine fault
> tolerance, especially if you're using a technology like RAID-10 (also
> called 0+1), where you spread your mirror drives over multiple busses
> (a toy layout sketch follows below).
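> 
> As a rough illustration of that layout (Python, with purely
> hypothetical channel/slot numbering -- a real controller lays this
> out for you):
> 
>     # RAID-10 (0+1) sketch: each mirror pair spans two different
>     # SCSI channels, so losing an entire bus never takes out both
>     # copies of any stripe.  4 channels x 4 drives per channel.
>     channels, per_channel = 4, 4
>     mirrors = [((ch, slot), (ch + channels // 2, slot))
>                for ch in range(channels // 2)
>                for slot in range(per_channel)]
>     for a, b in mirrors:
>         print("mirror: ch%d/disk%d <-> ch%d/disk%d"
>               % (a[0], a[1], b[0], b[1]))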
> 
> With tagged command queuing, disconnect/reconnect, etc., SCSI drives
> can share a bus without stealing all the bandwidth.  A modern drive
> can't do any more than around 40MB/s sustained in optimal conditions
> anyway (and that drops significantly if any seeking is involved).
> 4 drives per channel = 160MB/s.  4 channels x 4 drives = 16 drives =
> 740MB/s.  That adds up to higher performance on my calculator.  Of
> course these numbers are totally theoretical, and you will get nowhere
> near that performance in the real world, on a production filesystem,
> whether it be on SCSI or IDE.
> 
> > > - Higher quality cases, hot-swap cartridges, etc. on the market
> > 
> > Definitely not.  The best hot-swap cartridge I've ever seen was an
> > IDE cartridge.  I thought someone mixed Darth Vader and the Cylons
> > and turned them into a status display.
> 
> Status displays don't equal high-quality hot-swap equipment.
> 
> > > So it's not like you're paying more for nothing.  There are some
> > > very substantial benefits, especially in the reliability/uptime
> > > department when a disk fails -- no need to bring the server down,
> > > or even be there when it swaps in a hot spare and starts using it.
> > 
> > Nothing I wouldn't get with IDE too...
> 
> Once again, the documentation is too light on the details to confirm
> this.  It may well be true, but 3ware's documentation doesn't give me
> enough to verify it.
>  
> Just a counter-point. :-)
> 
> I'm not trying to dissuade you from building this and verifying that
> it works.  However, if you want a true and fair comparison, you need
> to compare it open-mindedly with all that SCSI RAID has to offer and
> judge the pluses and minuses of each platform.
> 
> If you're really interested in comparing, I'd suggest starting your
> SCSI research here:
>
> http://www.adaptec.com/worldwide/product/prodfulldesc.html?prodkey=ASR-3400S&cat=%2fTechnology%2fRAID%2fRAID+for+Mid-Range+Servers
>
> There are lots of other good SCSI RAID controllers too; this is just
> one of several.




