Date:      Wed, 19 Mar 2008 11:42:44 -0700
From:      Jeremy Chadwick <koitsu@freebsd.org>
To:        Chuck Robey <chuckr@chuckr.org>
Cc:        FreeBSD-Hackers <freebsd-hackers@freebsd.org>
Subject:   Re: remote operation or admin
Message-ID:  <20080319184244.GA29838@eos.sc1.parodius.com>
In-Reply-To: <47E1558A.2030107@chuckr.org>
References:  <47DF1045.6050202@chuckr.org> <20080318082816.GA74218@eos.sc1.parodius.com> <47E146F9.5060105@chuckr.org> <20080319172213.GA28075@eos.sc1.parodius.com> <47E1558A.2030107@chuckr.org>

On Wed, Mar 19, 2008 at 02:03:54PM -0400, Chuck Robey wrote:
> Well, I am, and I'm not, if you could answer me one question, then I would
> probably know for sure.  What is the difference between our SMP and the
> general idea of clustering, as typified by Beowulf?  I was under the
> impression I was talking about seeing the possibility of moving the two
> closer together, but maybe I'm confused in the meanings?

SMP as an implementation is mainly intended for a single system with
multiple processors (multiple physical CPUs or multiple cores; from the
scheduler's point of view they amount to the same thing).  It distributes
work (kernel threads, and the userland processes and threads scheduled on
top of them) across those processors, rather than only utilising a single
processor.
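
As a rough illustration (just a throwaway sketch I'm making up here, not
anything from the base system): the program below starts one worker
thread per core.  On an SMP kernel the scheduler spreads those threads
across all of the processors; on a uniprocessor box they simply
time-share the one CPU.  The thread count of 4 is invented.

    /* sketch: one CPU-bound worker thread per (assumed) core */
    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4                     /* pretend we have 4 cores */

    static void *
    worker(void *arg)
    {
            volatile unsigned long n = 0;

            while (n < 100000000UL)        /* burn CPU so there is real work */
                    n++;
            printf("thread %ld done\n", (long)arg);
            return (NULL);
    }

    int
    main(void)
    {
            pthread_t tid[NTHREADS];
            long i;

            for (i = 0; i < NTHREADS; i++)
                    pthread_create(&tid[i], NULL, worker, (void *)i);
            for (i = 0; i < NTHREADS; i++)
                    pthread_join(tid[i], NULL);
            return (0);
    }

Build it with "cc -pthread" and watch top(1); with SMP you'll see
several CPUs busy at once, and without it you won't.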

Clustering, by contrast, distributes a task across multiple physical
computers on a local network: a compile using gcc, certain disk I/O
workloads, or multiple userland threads (or kernel threads, I suppose,
if the kernel itself had clustering support).

The best real-world example of clustering I can offer is rendering
(mostly 3D, though you can "render" anything; I'm referring to 3D here).

A modeller builds a scene out of 3D objects, then applies textures,
lighting, raytracing parameters, vertex/bone animation, and anything
else, all on their single workstation.  Then the person wants to see
what it all looks like, either as a still frame (JPEG/PNG/TIFF) or as a
rendered animation (AVI/MPG/MJPEG).

Without any form of clustering, the workstation has to do all of the
processing/rendering work by its lonesome self.  This can take a very,
very long time; modellers aren't going to wait 2 hours for their work
to render, only to find they messed up some bone vertices halfway into
the animation.

With clustering, the workstation can send the rendering request out
onto the network to a series of what are called "slaves" (other
computers set up to handle such requests).  The workstation says "I
want this rendered, and I want all of you to do it."  Let's say there
are 200 machines in the cluster acting as slaves, and let's say all 200
of those machines are dual-core (so 400 CPUs total).  You then have 400
CPUs rendering your animation, versus just 2 on the workstation.
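
To make the arithmetic concrete, here's a made-up sketch of the kind of
bookkeeping a render master does before talking to its slaves: carve
the animation's frames into one chunk per CPU.  The frame count and CPU
count below are invented for illustration.

    /* sketch: split a frame range into one chunk per slave CPU */
    #include <stdio.h>

    int
    main(void)
    {
            int frames = 3000, cpus = 400;
            int per = frames / cpus, extra = frames % cpus;
            int start = 1, i;

            for (i = 0; i < cpus; i++) {
                    int count = per + (i < extra ? 1 : 0);

                    printf("cpu %3d: frames %4d-%4d\n",
                        i, start, start + count - 1);
                    start += count;
            }
            return (0);
    }

Each slave renders only its own slice and ships the finished frames
back to the workstation, which stitches them into the final animation.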

The same concept can apply to compiling (gcc saying "I want this C file
compiled" or whatever), or any other "distributed computing"
computation you desire.  It all depends on whether the software you
want to run actually supports clustering.
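
A crude way to picture the compile case (purely hypothetical host and
file names; real tools such as distcc handle this far more carefully,
e.g. preprocessing locally and copying the object files back):

    /* sketch: farm compiles out to remote hosts over ssh */
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <stdio.h>
    #include <unistd.h>

    static const char *hosts[] = { "slave1", "slave2", "slave3" };
    static const char *files[] = { "a.c", "b.c", "c.c", "d.c", "e.c" };

    int
    main(void)
    {
            size_t nhosts = sizeof(hosts) / sizeof(hosts[0]);
            size_t nfiles = sizeof(files) / sizeof(files[0]);
            size_t i;
            char cmd[256];

            for (i = 0; i < nfiles; i++) {
                    if (fork() == 0) {
                            /* child: run this compile on a remote box */
                            snprintf(cmd, sizeof(cmd), "cc -c %s", files[i]);
                            execlp("ssh", "ssh", hosts[i % nhosts], cmd,
                                (char *)NULL);
                            _exit(1);
                    }
            }
            while (wait(NULL) > 0)         /* wait for every remote compile */
                    ;
            return (0);
    }

In practice you'd also need the sources visible on the slaves (NFS, or
copying them over), which is exactly the sort of plumbing real cluster
software takes care of for you.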

Different clustering systems operate at different levels: some act as
"virtual environments", so the software running on top doesn't need to
know about the clustering at all (it "just works"); others require each
program to be fully cluster-aware.

Make sense?  :-)

-- 
| Jeremy Chadwick                                    jdc at parodius.com |
| Parodius Networking                           http://www.parodius.com/ |
| UNIX Systems Administrator                      Mountain View, CA, USA |
| Making life hard for others since 1977.                  PGP: 4BD6C0CB |



