Date: Wed, 2 May 2001 16:54:14 -0400
From: "Brad L. Chisholm" <blc@bsdwins.com>
To: freebsd-questions@FreeBSD.org
Subject: Optimal setup for large raid?

We are planning to create a large software raid volume, and I am
interested in input about what might make the best configuration.

We have 52 identical 9GB drives (Seagate ST19171W) spread across
4 SCSI controllers (Adaptec AHA-2944UW), with 13 drives per controller.
We want fault tolerance, but cannot afford to "waste" 50% of our space
on a mirrored (raid1) configuration.  Thus, we are considering some
sort of raid5 setup using vinum (possibly in combination with ccd).
We are running FreeBSD 4.3-RELEASE on a 550MHz P3 with 384MB of memory.

Possible configurations:

Configuration #1:  A single raid5 vinum volume consisting of all
52 drives.  (A rough sketch of what I have in mind is in the P.S.
below.)

Questions:
    A) Is there a performance penalty for having this many drives
       in a raid5 array?
    B) Should the plex be configured with sequential drives on
       different controllers?  That is, if drives 1-13 are on
       controller 1, 14-26 on controller 2, 27-39 on controller 3,
       and 40-52 on controller 4, should the drive ordering be:

           1,14,27,40,2,15,28,41,...
       or
           1,2,3,4,5,6,7,8,...?

Configuration #2:  Multiple raid5 vinum volumes (perhaps one per
controller), combined into a single volume by striping the raid5
volumes together.  (Basically a "raid50" setup.)

Questions:
    A) Is this possible with vinum?  From the documentation, it
       didn't appear to be, so we were considering using ccd to
       stripe the raid5 volumes together (also sketched in the
       P.S.).
    B) Would this perform better, worse, or about the same as #1?

Are there any other configurations that might prove superior?

The final volume will be used as an online backup area, and will
contain relatively few, large tar files.  Write performance will
likely be more important than read performance, although I realize
that using raid5 will impact write performance.  Any suggestions on
what might be the best stripe size to use?

Thanks in advance for any suggestions you might have.

-Brad
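
P.S.  In case it helps to see what I mean, here is an abbreviated
sketch of the vinum create file I am picturing for configuration #1.
Only the first drive on each controller is shown (the real file would
list all 52), the device names are placeholders for however the
drives actually probe, and the 512k stripe size is just a guess,
since that is one of the things I am asking about.  My reading of
vinum(8) is that "length 0" makes each subdisk use the free space
remaining on its drive, so please correct me if that is wrong.

    # One drive per controller shown; the full file would continue
    # with d2-d13 (controller 1), d15-d26 (controller 2), and so on.
    drive d1  device /dev/da0s1e    # controller 1, first drive
    drive d14 device /dev/da13s1e   # controller 2, first drive
    drive d27 device /dev/da26s1e   # controller 3, first drive
    drive d40 device /dev/da39s1e   # controller 4, first drive

    volume backup
      plex org raid5 512k
        # Subdisks listed in controller-interleaved order (question 1B).
        sd length 0 drive d1
        sd length 0 drive d14
        sd length 0 drive d27
        sd length 0 drive d40
        # ...continuing through all 52 drives.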
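
For configuration #2, assuming ccd will even accept vinum volumes as
components (which is part of what I am asking), I was picturing four
per-controller raid5 volumes named r5c1 through r5c4, striped together
with an /etc/ccd.conf entry along these lines (the interleave value is
another placeholder):

    # ccd       ileave  flags   component devices
    ccd0        128     none    /dev/vinum/r5c1 /dev/vinum/r5c2 /dev/vinum/r5c3 /dev/vinum/r5c4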