From: Michael Powell
To: freebsd-questions@freebsd.org
Subject: Re: Max top end computer for Freebsd to run on
Date: Mon, 03 Jun 2013 06:39:57 -0400
Reply-To: nightrecon@hotmail.com

Al Plant wrote:

> James wrote:
>> Several modest servers applied well will take you further than one big
>> iron, and for less cost.
>
> James I agree.
I have witnessed the benefit of what you say. Putting
> your faith in one big server can be a problem if the box fails,
> especially from hardware failure.
>
> Keeping a spare server in a rack that can be switched into service
> quickly can save you if one dies, instead of losing time waiting for
> parts. Most failures are hardware if you're running FreeBSD. Even on
> most Linux boxes.

There are two approaches, and applying both together is what I favor. Scale
up (vertical) is a horsepower-per-box kind of thing. Scale out (horizontal)
adds more of the same kind of box in parallel. The resulting redundancy
will keep you up and online.

Sizing matters somewhat. Excess horsepower that sits unused is extra
money spent on one box that could have been applied to scale-out redundancy.
If you can size one machine to match your current and projected workload,
then if there are two or more of these and one fails, the remaining ones can
shoulder the load while you get the broken one back up.

Where the balance point is struck will depend on workload. Let's say
(hypothetically) one box as a web/database server can handle 1,000
connections/users per second within the desired latency and response time.
If a spike in demand suddenly comes, that box will slow to a crawl (or even
fall over) as it tries to keep up, since it lacks the extra horsepower
overhead that would otherwise be sitting idle. Scaling out (horizontally)
by adding more boxes will distribute this spike across multiple machines and
stay within the desired response/latency time, so together they can handle
2,000 when the need is present. Need another 1,000? Add another box, and
so on.

So the trick is to understand your workload. Don't go overboard on just one
huge high-power machine which sits mostly idle and takes you offline if it
fails. Spend the money on more moderately sized boxen.
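The sizing arithmetic above can be sketched roughly as follows. This is a
hypothetical illustration, not a real capacity planner: the per-box capacity
and peak-load figures are assumed numbers in the spirit of the 1,000
connections/sec example, and the function name is my own invention.

```python
import math

def boxes_needed(peak_load, per_box_capacity, spares=1):
    """Rough N+1 sizing: enough boxes to carry the peak load within the
    desired latency, plus spare capacity for failures and demand spikes.
    Measure your own workload before trusting any numbers like these."""
    working = math.ceil(peak_load / per_box_capacity)
    return working + spares

# One box handles ~1,000 connections/sec within the desired response time.
# A projected peak of 2,000/sec needs 2 working boxes plus 1 spare: 3 total.
print(boxes_needed(2000, 1000))   # 3
print(boxes_needed(2500, 1000))   # 3 working + 1 spare = 4
```

The `spares=1` default reflects the "keep a spare that can be switched into
service" advice; bump it to 2 if you also want headroom during updates.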
Me, I like to have at least 3 of everything (if I can), sized so that 2 of
them together can easily handle the desired load. The third one is for
redundancy and the 'what-if' spike in demand.

Another advantage here is you can take one offline for updates, then put it
back online and test it for problems. If there is no problem then you can
take one of the other two down and update it. This way you can do updates
without your service being offline. But the trick is still to understand
your specific workload first, then spread the money around accordingly.

-Mike