Date:      Wed, 6 Mar 2002 08:52:00 -0700 (MST)
From:      Ronald G Minnich <rminnich@lanl.gov>
To:        Andy Sporner <sporner@nentec.de>
Cc:        Jason Fried <jfried@cluster.nix.selu.edu>, <freebsd-cluster@FreeBSD.ORG>
Subject:   RE: FreeBSD Cluster at SLU
Message-ID:  <Pine.LNX.4.33.0203060849090.7642-100000@snaresland.acl.lanl.gov>
In-Reply-To: <XFMail.020306164742.sporner@nentec.de>

On Wed, 6 Mar 2002, Andy Sporner wrote:

> Within reason I agree...  However having things in one place defeats
> the high availabilty on a cluster, but we may be talking about
> different things here.

no, this is actually where thinking about uptime gets funny.

People frequently confuse three different things:
- A system with Multiple Points of Failure (MPOF)
- A system with a Single Point of Failure (SPOF)
- A system with No SPOF

The trap is that a system with MPOF technically has no *single* point of
failure, so people often build systems with MPOF and mistakenly think they
have achieved a system with No SPOF. Wrong.

We're just trying to get to a system with a SPOF, and even that is harder
than it looks.
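
To put rough numbers on the difference, here is a minimal Python sketch,
assuming independent component failures and a hypothetical 99.9%-available
node (none of this is from the thread; it's just the standard arithmetic).
Chaining components in series (MPOF) drags availability down; redundancy
(toward No SPOF) pushes it up:

# Back-of-the-envelope sketch of why MPOF != No SPOF.
# Assumes each component fails independently with the given availability.

def series(availabilities):
    """All components must be up: each one is another point of failure."""
    a = 1.0
    for x in availabilities:
        a *= x
    return a

def parallel(availabilities):
    """System is up if any replica is up: redundancy masks failures."""
    down = 1.0
    for x in availabilities:
        down *= (1.0 - x)
    return 1.0 - down

node = 0.999  # a hypothetical 99.9%-available node

# MPOF: ten nodes that ALL must work -- worse than any single node.
print(series([node] * 10))    # ~0.990
# Redundancy for this one function: two replicas -- far better.
print(parallel([node] * 2))   # ~0.999999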


>  I am looking at making Unix machines more
> reliable to get to 99.999% uptime.

You can actually do this with one node. It's doing it with lots of nodes
that is hard.
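
For scale, "five nines" is a budget of about five minutes of downtime per
year. A quick sketch of the arithmetic (generic figures, nothing specific
to this cluster):

# What a 99.999% uptime target actually allows, as raw arithmetic.
minutes_per_year = 365.25 * 24 * 60        # ~525,960 minutes
budget = minutes_per_year * (1 - 0.99999)
print(budget)                              # ~5.26 minutes of downtime/year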

> If your configuration image is on one machine, then you have no backups.

See above.

> The cluster approach I designed
> has replication of configuration that covers this, so your "Cluster
> Monitor" node can fail over when that machine fails (should it...).

How large have you made your system to date? How many nodes? Have you
built it?
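
For reference, the general pattern the quoted paragraph describes
(replicate the configuration to a standby, fail over when heartbeats stop)
can be sketched in a few lines. This is a hypothetical illustration, not
Andy's actual design; every name and value in it is made up:

import time

HEARTBEAT_TIMEOUT = 5.0  # seconds; hypothetical value

class StandbyMonitor:
    """Minimal sketch of a standby "Cluster Monitor" (names hypothetical)."""

    def __init__(self):
        self.config = {}  # replicated copy of the primary's configuration
        self.last_heartbeat = time.monotonic()

    def on_config_update(self, config):
        # Primary pushes every configuration change, so the configuration
        # never lives on only one machine.
        self.config = dict(config)

    def on_heartbeat(self):
        self.last_heartbeat = time.monotonic()

    def primary_dead(self):
        return time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT

    def maybe_take_over(self):
        if self.primary_dead():
            # Fail over using the replicated configuration.
            print("primary lost; taking over with", len(self.config), "entries")

m = StandbyMonitor()
m.on_config_update({"vip": "10.0.0.1"})
m.maybe_take_over()  # no-op until HEARTBEAT_TIMEOUT passes with no heartbeat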

ron


