Date:      Tue, 10 Dec 2002 14:26:32 +0100
From:      Andy Sporner <sporner@nentec.de>
Cc:        freebsd-cluster@FreeBSD.ORG
Subject:   Re: sharing files within a cluster
Message-ID:  <3DF5EB88.9090409@nentec.de>
References:  <200212101257.gBACvv609153@splat.grant.org>

Michael Grant wrote:

>Bob Bishop wrote:
>  
>
>>For another alternative approach, have a look at 
>>http://www.complang.tuwien.ac.at/reisner/drbd/
>>Linux only at the moment, but lighter-weight than Coda and directly 
>>addresses this problem.
>>    
>>
>
>Good try, but no dice, this is a fail-over solution only.  You can
>only mount the disk read-write on one of the boxes at a time.
>
>However, there's a link on the page you pointed me at to GFS the
>Global File System by Sistina:
>
>http://www.sistina.com/products_gfs.htm
>
>This seems to be similar to Coda but for Linux.  I hate to say this
>but it does look like Linux has superior support for clustering.
>
>Michael Grant
>

However, what's wrong with that approach if you also fail over NFS?
I wrote the following to another recipient privately, but I am putting
it here too as a possible scenario...

I am also looking into porting the drbd code that Bob Bishop
mentioned earlier.  I am not a real fan of the idea of shared SCSI.
I have seen it work and have worked on such systems, but it has some
very significant drawbacks.  I like the idea of a network-distributed
collection of raw devices that can be brought together in a VINUM
kind of way so that redundancy is assured.  I have no problem with
NFS as long as the underlying backing store is redundant.  Then the
NFS server can fail over to anywhere in the cluster.  Take this example.

Nodes 1, 2, 3 and 4 each have a 2 GB slice of raw space that is network
exported (via DRBD).  These slices are visible on all of the cluster
nodes via the network interface.  (I don't know the Vinum terms, so I
use the Sequent (Veritas) ones.)  We create a subdisk on all or part of
each of these raw slices.  Now we create one plex by concatenating the
subdisks on nodes 1 and 3, and a corresponding plex from the subdisks
on nodes 2 and 4.  Now we create a mirrored volume by attaching both
plexes.
Now node 1 becomes the NFS server.  He has half of the volume local
and the other half remote.  He can serve any file to any other node in
the cluster and assure that the locking paradigms work correctly.  If
he should die, then any of the other nodes can recover the NFS server
and continue to operate.  For instance, if node 2 takes it over, it has
the mirrored half of what was on node 1 and the other half is available
over the network.
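
For the Vinum-literate, here is a rough sketch of the layout I have in
mind.  The device names /dev/nd1e through /dev/nd4e are made up to stand
for the locally visible, DRBD-exported slices from nodes 1 through 4;
the real names would depend on how a ported drbd presents them:

    # hypothetical vinum config: two concatenated plexes, mirrored
    drive n1 device /dev/nd1e
    drive n2 device /dev/nd2e
    drive n3 device /dev/nd3e
    drive n4 device /dev/nd4e
    volume nfsvol
      # first plex: concatenate the subdisks on nodes 1 and 3
      plex org concat
        sd length 2g drive n1
        sd length 2g drive n3
      # second plex: the mirror, built from nodes 2 and 4
      plex org concat
        sd length 2g drive n2
        sd length 2g drive n4

Feed something like that to 'vinum create', then newfs the volume and
mount it on whichever node is currently acting as the NFS server.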

The NFS server is then configured on a virtual IP address that fails
over around the cluster.  With a journaled filesystem underlying the
volume, it is coherent even if not all of the mirroring was completed
when node 1 took the fall.
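
Just to make the takeover concrete, on (say) node 2 it could look
roughly like the following.  The interface name, service address and
mount point are placeholders, not anything from a real setup:

    vinum start                       # surviving plex holds a full copy
    mount /dev/vinum/nfsvol /export   # journaled fs should mount coherent
    ifconfig fxp0 alias 10.0.0.100 netmask 255.255.255.255
    mountd                            # re-export per /etc/exports
    nfsd -u -t -n 4                   # serve NFS over UDP and TCP

That glosses over NFS lock state, of course, but it shows why the
redundant backing store is the only hard part.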




