From owner-freebsd-cluster@FreeBSD.ORG Wed Jun 24 23:05:55 2009
From: Freddie Cash <fjwcash@gmail.com>
Date: Wed, 24 Jun 2009 15:35:25 -0700
To: freebsd-cluster@freebsd.org
Cc: freebsd-fs@freebsd.org
Subject: Fail-over SAN setup: ZFS, NFS, and ...?

[Not exactly sure which ML this belongs on, as it's related to both
clustering and filesystems.  If there's a better spot, let me know and I'll
update the CC:/reply-to.]

We're in the planning stages of building a multi-site, fail-over SAN setup
that will provide redundant storage for a virtual machine setup.  The setup
will be like so:

  [Server Room 1]     .     [Server Room 2]
 -----------------    .    -------------------
                      .
 [storage server]     .     [storage server]
         |            .             |
         |            .             |
 [storage switch]     .     [storage switch]
         \-----------fibre-------/  |
                      .             |
                      .             |
                      .     [storage aggregator]
                      .             |
                      .             |
                      .      /---[switch]---\
                      .      |      |       |
                      .      |  [VM box]    |
                      .      |      |       |
                      .   [VM box]  |       |
                      .      |      |   [VM box]
                      .      |      |       |
                      .      [network switch]
                      .             |
                      .             |
                      .         [internet]

Server room 1 and server room 2 are on opposite ends of town (about 3 km
apart) with a dedicated, direct fibre link between them.  There will be a
set of VM boxes at each site that use the shared storage and act as
fail-over for each other.  In theory, only one server room would ever be
active at a time, although we may end up migrating VMs between the two
sites for maintenance purposes.

We've got the storage server side of things figured out (5U rackmounts with
24 drive bays, running FreeBSD 7.x and ZFS).  We've got the storage switches
picked out (HP ProCurve 2800 or 2900, depending on whether we go with 1 GbE
or 10 GbE fibre links between them).  We're stuck on the storage aggregator.

For a single-aggregator setup, we'd use FreeBSD 7.x with ZFS.  The storage
servers would each export a single zvol over iSCSI.  The storage aggregator
would use ZFS to create a pool from those exports, as a mirrored vdev.  To
expand the pool, we put in two more storage servers and add another mirrored
vdev to the pool.  No biggie.  The storage aggregator then uses NFS and/or
iSCSI to make the storage available to the VM boxes.  This is the easy part.
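Roughly, the commands we have in mind look like this (device names, the zvol
size, and the NFS export options are just placeholders, and the iSCSI target
configuration is left out since we haven't settled on a target implementation
yet):

    # On each storage server: pool the local disks, carve out one big zvol.
    zpool create local raidz2 da0 da1 da2 da3 da4 da5
    zfs create -V 10T local/export0
    # local/export0 is then published as an iSCSI LUN (e.g. via the
    # net/iscsi-target port); target configuration not shown here.

    # On the storage aggregator: the two LUNs (one per storage server)
    # show up as, say, da10 and da11, and become one mirrored vdev.
    zpool create tank mirror da10 da11

    # Later expansion: two more storage servers, two more LUNs, one more
    # mirrored vdev added to the same pool.
    zpool add tank mirror da12 da13

    # Hand the space to the VM boxes over NFS.
    zfs create tank/vms
    zfs set sharenfs="-maproot=root -network 10.0.0.0 -mask 255.255.255.0" tank/vms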
However, we'd like to remove the single point of failure that the storage
aggregator represents, and have a duplicate of it running at Server Room 1.
Right now, we could do this with a cold spare that rsyncs from the live box
every X hours/days.  We'd like this to be a live, fail-over spare, though.
And this is where we're stuck.

What can we use to do this?  CARP?  Heartbeat?  ggate?  Should we look at
Linux with DRBD, linux-ha, cluster-nfs, or similar?  Perhaps Red Hat Cluster
Suite?  (We'd prefer not to, as storage management then becomes a nightmare
again, requiring mdadm, LVM, and more.)  Would a cluster filesystem be
needed?  AFS or similar?
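To make the question concrete: if CARP turns out to be the right tool, the
sort of thing we're picturing for the aggregator pair is roughly the
following (the vhid, password, and addresses are made up, and the mechanism
that actually notices the CARP transition and triggers the pool import is
exactly the part we don't know how to do safely):

    # /etc/rc.conf on both aggregators (FreeBSD 7.x with carp(4) loaded);
    # the standby gets a higher advskew (e.g. 100) so it only takes over
    # when the master stops advertising.
    cloned_interfaces="carp0"
    ifconfig_carp0="vhid 1 advskew 100 pass s3kr1t 10.0.0.100/24"

    # Hypothetical promote script on the standby, run when carp0 becomes MASTER:
    #!/bin/sh
    zpool import -f tank        # force-import the pool built on the iSCSI LUNs
    /etc/rc.d/mountd reload     # re-read the ZFS exports (may not be needed)

The obvious worry is a split-brain where both aggregators think they are
MASTER and both import the pool at once, which is part of why we're asking.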
We have next to no experience with high-availability and fail-over
clustering.  Any pointers to things to read online, or tips, or even "don't
do that, you're insane" comments are greatly appreciated.  :)

Thanks.
--
Freddie Cash
fjwcash@gmail.com

From owner-freebsd-cluster@FreeBSD.ORG Thu Jun 25 00:04:11 2009
From: Elliot Finley <efinleywork@efinley.com>
Date: Wed, 24 Jun 2009 17:16:23 -0600
To: Freddie Cash
Cc: freebsd-fs@freebsd.org, freebsd-cluster@freebsd.org
Subject: Re: Fail-over SAN setup: ZFS, NFS, and ...?

Why not take a look at gluster?

Freddie Cash wrote:
> [original message quoted in full; snipped]
From owner-freebsd-cluster@FreeBSD.ORG Thu Jun 25 13:26:28 2009
From: Andrei Manescu <mandrei05@gmail.com>
Date: Thu, 25 Jun 2009 15:26:26 +0200
To: freebsd-cluster@freebsd.org
Subject: Re: freebsd-cluster Digest, Vol 124, Issue 1

Any docs on gluster?

> [freebsd-cluster Digest, Vol 124, Issue 1 quoted in full; snipped --
> it repeats the two messages above verbatim]