From: "Ted Mittelstaedt" <tedm@toybox.placo.com>
To: "Damian Wiest", <freebsd-questions@freebsd.org>
Date: Mon, 23 Oct 2006 23:06:56 -0700
Subject: Re: Small Redundant web/mail setup

----- Original Message -----
From: "Damian Wiest"
To:
Sent: Monday, October 23, 2006 1:00 PM
Subject: Re: Small Redundant web/mail setup

> On Wed, Oct 18, 2006 at 11:57:04PM -0700, Ted Mittelstaedt wrote:
> >
> > ----- Original Message -----
> > From: "Ian Lord"
> > To:
> > Sent: Wednesday, October 18, 2006 5:34 AM
> > Subject: Small Redundant web/mail setup
> >
> > > Hi,
> > >
> > > I need to set up a high-availability system for mail and web.
> > >
> > > I was thinking about the following setup:
> > >
> > > 4 servers total:
> >
> > Overkill, just asking for trouble.
> >
> > > Data servers:
> > > 1 server holding all the website data and mail messages. It
> > > would serve these files via NFS to the application servers.
> > > It would also run MySQL.
> > >
> > > A second server also sharing its content via NFS,
> > > replicating its data through rsync every ?? minutes. Its MySQL
> > > would run as a slave of the primary.
> > >
> > > Application servers:
> > > Both servers would be running Apache, PHP, sendmail and
> > > Postfix, and would serve content from the shared NFS drive.
> > >
> > > 1- Is this a viable solution? By that I mean, is this how the
> > > big ISPs are set up?
> >
> > No.
> >
> > The really big ISPs use proprietary commercial clustering solutions
> > that make multiple systems appear as one single system. We are talking
> > hundreds of thousands to millions of users. We are not talking 5000
> > users or fewer.
> >
> > You can easily serve 5K users on a single server. You just need to
> > get good hardware. In other words, costs start at $5000 and go up.
> >
> > A lot of people are under the misconception that they can get several
> > cheap $900 servers and assemble them into a redundant setup that is
> > highly reliable.
> >
> > The real secret is in getting expensive name-brand hardware that
> > doesn't go down. If you can afford that, you're fine. If you can't,
> > then you need to find a different table to play at.
> >
> > Ted
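(For concreteness: the replication layer Ian describes - an rsync push
plus a MySQL slave - usually amounts to no more than a cron job and a
couple of my.cnf lines. A minimal sketch; the hostnames data1/data2 and
the paths are made up, and the "??" interval from the original post is
arbitrarily set to 15 minutes here:

    # crontab on data1: push the web/mail tree to the standby
    */15 * * * * rsync -a --delete /data/ data2:/data/

    # /etc/my.cnf on data1 (master)
    [mysqld]
    server-id = 1
    log-bin   = mysql-bin

    # /etc/my.cnf on data2 (slave); then run CHANGE MASTER TO ...
    # and START SLAVE on data2 to begin replicating
    [mysqld]
    server-id = 2

Note that the rsync copies are point-in-time, not synchronous: anything
written between runs is lost on a failover.)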
> Isn't part of the point in running a redundant configuration that you
> can buy cheap(er) hardware?

No. The point of a redundant setup is to attain 100% uptime.

All hardware eventually dies; it is just a question of how good the
chances are. Cheaper hardware has a much higher chance of dying
unexpectedly, or of having incompatibilities or problems. More expensive
hardware has a lower chance.

A $600 machine that does not have a good 6 months of burn-in time on it
has, in my experience, about a 30% chance of failing unexpectedly. If
you put two of them together, the chances of both dying at the same time
are much lower, of course - but still higher than the chances of a
$5,000 machine dying after 24 hours of burn-in time. And once a machine
does die, it costs tech time to put things back together.
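To put rough numbers on that (assuming the failures are independent, and
taking the 30% figure at face value): the chance that both cheap boxes
fail during the same period is

    0.30 x 0.30 = 0.09

or about 9% - roughly one time in eleven, and that is before counting
the window where one box is dead and you are running with no spare.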
Ultimately, the pursuit of clustering as a cost-effective way of
increasing reliability is doomed. Clustering works great if what you are
intending to do with it is increase the power of the cluster beyond what
is attainable by a single machine. It also works great in
life-and-health situations where you cannot afford anything less than
99.999999999% uptime.

> A $600 machine should be powerful enough
> to handle that many users. Just make sure you are using RAID 1+0
> filesystems, keep replacement parts on hand and are performing regular
> backups.

Baloney.

> The real question to ask is what is the provider's SLA and
> how much does an hour of downtime cost the provider.
>
> In my experience, the only things to die on servers have been fans,
> disks (really the motors), and the occasional power supply. The only
> things a more expensive system may give you are additional power
> supplies, hot-swap drive bays and multiple CPUs. Other than the system
> board and possibly the processors, the server's components come from
> the same sources as your commodity hardware.

It's irrelevant. It may come as a surprise to you, but a Seagate
ST11950N purchased from someplace like Walmart or Costco is different
from a Seagate ST11950N that is shipped from Dell in a server, and this
is true of most other expensive computer components. The component
manufacturers build components with cheaper materials and to sloppier
tolerances for the retail/desktop market than for the server market.
For example, a builder like Dell may spec a 20,000-hour-MTBF
sleeve-bearing case fan from Panasonic for the desktop, and a
70,000-hour-MTBF Panasonic Panaflo hydro-wave-bearing fan for the
servers.

You really need to read up on hardware; there's tons of info on the
Internet. It is possible to spec your own system and build a clone that
is as reliable as a name-brand server - I've done it. But it won't cost
$600.

> I think the setup described above is viable, though I would consider
> running the database (with master-slave replication) and application
> services on the same server, assuming it can handle the load. Also,
> you can probably get away with using something like rsync to push
> changes to your WWW servers. I'm not sure about email, but you could
> NFS-export your mail directories from a central server to the two
> application servers. Just be aware of NFS' failure modes.
>
> So, I'd go with two user-facing systems and an administrative
> system that receives email and possibly hosts your code repository.
> If you can afford it, get systems with redundant power supplies and
> hot-swap drive bays.

That's not a $600 system.

> Depending on your userbase, you may want to
> consider a robotic tape library so you don't have to manually change
> tapes. I've heard some talk of people using raw disks for backups, but
> I don't have any experience with that type of setup.

The cost per megabyte for backup to hard disk is cheaper than to tape
nowadays.

Ted
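P.S. If you do go with disk-based backups, one cheap scheme is rotated
rsync snapshots, where unchanged files are stored as hard links so each
snapshot only costs the space of what changed. A rough sketch (the
paths and the 7-day retention are made up, not from this thread):

    #!/bin/sh
    # drop the oldest snapshot, then shift the rest up by one day
    rm -rf /backup/day.7
    for d in 6 5 4 3 2 1 0; do
        [ -d /backup/day.$d ] && mv /backup/day.$d /backup/day.$((d+1))
    done
    # take the new snapshot, hard-linking files unchanged since
    # yesterday (rsync warns but still works on the very first run,
    # when /backup/day.1 does not exist yet)
    rsync -a --delete --link-dest=/backup/day.1 /data/ /backup/day.0/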