From: Michael Powell <nightrecon@hotmail.com>
To: freebsd-questions@freebsd.org
Subject: Re: Software raid VS hardware raid
Date: Tue, 29 Jan 2013 02:54:18 -0500

Artem Kuchin wrote:

> Hello!
>
> I have to make a decision on choosing a dedicated server.
> The problem I see is that while I can find very affordable and good
> options, they do not provide hardware RAID, or even if they do it is
> not the best hardware for FreeBSD. The server base configuration is
> 8 cores, 32 GB RAM, 2.8+ GHz. So, maybe someone has personal
> experience with both worlds and can tell if it really matters in such
> a configuration if I go for software RAID. What are the benefits and
> what are the negatives of software RAID? How much is the performance
> penalty? I am planning to use a mirror configuration of two SATA
> 7200 rpm 2 TB disks. Nothing fancy. The file system planned is UFS
> with journaling.

I can't say for sure exactly what's best for your needs, but please allow 
me to toss out some generic tidbits which may aid you in some way.

Historically, back when RAID was new, hardware controllers were the only 
way to go. Back then I would never have looked at software RAID for a 
server machine; it was best to offload as much work as possible away from 
the CPU to free it up for running the OS. What has changed is the amount 
of raw horsepower available from modern-day processors compared to when 
RAID first came out. On the multi-core monster CPUs of today, software 
RAID is a perfectly viable consideration because there are CPU cycles to 
spare, so the "performance penalty" is less now than it once was.

Having said that, there are several other considerations to keep in mind 
as well. The type of RAID required matters. If you want or need RAID 5/6, 
it is definitely better to go with hardware RAID because of the horsepower 
required to do the XOR parity generation; you would want RAID 5/6 running 
on a hardware controller and not on the CPU. On the other hand, RAID 0, 1, 
and 10 are fine candidates for software RAID.

One thing I've noticed that seems to get somewhat lost in this discussion 
is equating software-based RAID with not needing to spend money on an 
expensive RAID controller.
At first glance it does seem like quite a waste to spend hundreds of 
dollars on a really fast RAID controller and then turn all its 
functionality off and just use it JBOD style. But if you truly want 
performance, you still need the processing power of the hardware chip on 
the (expensive) controller. Most central to this is I/Os per second. This 
matters more to some workloads than others, with a database server 
probably at the top of the list, where I/Os per second is king. The better 
the chip on the controller card, the more I/Os per second.

Another thing, which matters less with regard to server hardware, is the 
third kind of RAID known as "fake" or "pseudo" RAID. This is mostly found 
on desktop PC motherboards and some low-end (cheap) hardware cards. There 
is a config in the BIOS to set up so-called "RAID", but that is only half 
of the matter - the other half is in the driver. FreeBSD does indeed have 
support for some of these "fake RAID" things, but I stay far, far away 
from them. Either go hardware or pure software only - the fakeraid is 
crap.

Another thing I'd warn you about is the drives themselves. Take a look:

http://wdc.custhelp.com/app/answers/detail/a_id/1397

Many people get lucky much of the time and don't experience problems with 
this, but using drives designed for desktop PCs with RAID can be prone to 
problems. Drives designed for servers are more expensive, but I've always 
felt it is better to put server drives in servers. :-)

In terms of a 'performance penalty', what you will find is that it gets 
shifted away from just losing a few CPU cycles and into other areas. If 
the drives are Advanced Format 4k-sector critters and they aren't properly 
aligned in the partitioning phase of setup, performance will take a hit. 
If the controller chip they are hooked up to is slow, then the entire 
drive subsystem will suffer.

Another thing you will find that will surface as a problem area is the 
shift away from the old-style DOS MBR scheme and towards GPT.
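The alignment point above is just arithmetic, and a tiny sketch makes it 
concrete (Python, with illustrative numbers; on FreeBSD the real work is 
done by gpart's alignment option at partition time):

```python
# Sketch: why partition alignment matters on Advanced Format (4k) drives.
# An unaligned partition start means every 4 KiB filesystem block straddles
# two physical sectors, turning a single write into read-modify-write.

PHYS_SECTOR = 4096   # physical sector size on an Advanced Format drive
LBA_SIZE = 512       # logical sector size the drive reports to the OS

def is_aligned(start_lba):
    """True if a partition starting at this LBA sits on a 4 KiB boundary."""
    return (start_lba * LBA_SIZE) % PHYS_SECTOR == 0

def align_up(start_lba, boundary=PHYS_SECTOR // LBA_SIZE):
    """Round a starting LBA up to the next 4 KiB boundary."""
    return -(-start_lba // boundary) * boundary

print(is_aligned(63))    # False -- the old DOS-era default start sector
print(is_aligned(2048))  # True  -- 1 MiB, the common modern default
print(align_up(63))      # 64 -- nearest safe start at or after LBA 63
```

That old default start of LBA 63 is exactly what bites people who carry 
MBR habits onto 4k drives; starting partitions on a 1 MiB boundary keeps 
every reasonable sector size happy.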
Software RAID (and indeed hardware controllers too) stores its metadata at 
the end of the drive, and that metadata needs to live "outside" the file 
system. The problem arises when both the software RAID and the GPT 
partitioning try to store metadata in the same location and collide. Just 
knowing about this in advance, and spending some quality reading time on 
it before trying to set up the box, will help greatly. Plenty has been 
written (even on this list) about this subject by people smarter than me, 
so the info you need is out there, though it can be confusing at first.

I guess what I'm trying to point out is that low performance with software 
RAID will stem from other things besides simply consuming a few CPU 
cycles. Today's CPUs have the cycles to spare.

I've been using gmirror for RAID 1 mirrors for a few years now and am 
happy with it. I have had a few old drives die and the servers stayed up 
and online, which allowed me to defer the actual drive replacement and not 
have to drop everything and fight a fire.

-Mike
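P.S. The metadata collision mentioned above also comes down to simple 
arithmetic. This sketch (Python, with a made-up disk size) shows why 
whole-disk gmirror and GPT fight over the same sector, and why nesting one 
inside the other avoids it:

```python
# Sketch: gmirror keeps one sector of metadata in the provider's LAST
# sector; GPT keeps its backup header in the disk's LAST LBA as well.
# Sector counts here are illustrative, not read from real hardware.

disk_sectors = 3_907_029_168             # e.g. a 2 TB drive (illustrative)

gmirror_metadata_lba = disk_sectors - 1   # last sector of the raw disk
gpt_backup_header_lba = disk_sectors - 1  # last LBA, per the GPT layout

print(gmirror_metadata_lba == gpt_backup_header_lba)  # True -- they clash

# The usual fix: put GPT inside the mirror device rather than on the raw
# disk. gmirror presents a provider one sector smaller than the disk, so
# the GPT backup header lands one sector earlier and no longer overlaps.
mirror_provider_sectors = disk_sectors - 1      # gmirror hides its sector
gpt_inside_mirror_lba = mirror_provider_sectors - 1

print(gpt_inside_mirror_lba == gmirror_metadata_lba)  # False -- no clash
```

The same reasoning is why mirroring partitions instead of whole disks also 
works: each layer then owns a distinct "last sector" of its own provider.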