Date:      Mon, 21 Apr 2008 18:24:45 +0200
From:      Erik Trulsson <ertr1013@student.uu.se>
To:        Jeremy Chadwick <koitsu@freebsd.org>
Cc:        Clayton Milos <clay@milos.co.za>, Kris Kennaway <kris@FreeBSD.ORG>, stable@FreeBSD.ORG, net@FreeBSD.ORG
Subject:   Re: nfs-server silent data corruption
Message-ID:  <20080421162445.GA32697@owl.midgard.homeip.net>
In-Reply-To: <20080421154333.GA96237@eos.sc1.parodius.com>
References:  <wpmyno2kqe.fsf@heho.snv.jussieu.fr> <20080421094718.GY25623@hub.freebsd.org> <wp63ubp8e0.fsf@heho.snv.jussieu.fr> <20080421154333.GA96237@eos.sc1.parodius.com>

On Mon, Apr 21, 2008 at 08:43:33AM -0700, Jeremy Chadwick wrote:
> On Mon, Apr 21, 2008 at 04:52:55PM +0200, Arno J. Klaassen wrote:
> > Kris Kennaway <kris@FreeBSD.ORG> writes:
> > > Uh, you're getting server-side data corruption, it could definitely be
> > > because of the memory you added.
> > 
> > yop, though I'm still not convinced the memory is bad (the very same
> > Kingston ECC as the 2*1G in use for about half a year already) :
> 
> Can you download and run memtest86 on this system, with the added 2G ECC
> installed?  memtest86 doesn't guarantee showing signs of memory problems,
> but in most cases it'll start spewing errors almost immediately.
> 
> One thing I did notice in the motherboard manual below is something
> called "Hammer Configuration".  It appears to default to 800MHz, but
> there's an "Auto" choice.  Does using Auto fix anything?
> 
> > I added it directly to the 2nd CPU (diagram on page 9 of
> >  http://www.tyan.com/manuals/m_s2895_101.pdf) and the problem
> > seems to be the interaction between nfe0 and powerd .... :
> 
> That board is the weirdest thing I've seen in years.
> 
> Two separate CPUs using a single (shared) memory controller,

No. Each CPU contains its own memory controller (just like all of AMD's
Opteron/Athlon64 CPUs do).


> two
> separate (and different!) nVidia chipsets,

More like one chipset consisting of several physical chips, which is
actually quite common.  The most common division is a
"northbridge/southbridge" split, but other arrangements are possible too.

The only unusual thing is that several chips are connected directly to
the CPUs, instead of having the CPUs talk to a single chip which in turn
talks to another chip, an arrangement that can easily create bottlenecks.


> a SMSC I/O controller
> probably used for serial and parallel I/O

Just like almost all other motherboards.

>, two separate nVidia NICs with
> Marvell PHYs (yet somehow you can bridge the two NICs and PHYs?)

What is so weird about that?  If you want to have more than one Ethernet
connection, then you normally have more than one NIC.
Bridging can easily (and commonly) be done over separate NICs.
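
For what it's worth, setting that up on FreeBSD is just the standard
if_bridge(4) configuration.  A minimal sketch (assuming the two onboard
NICs really do attach as nfe0 and nfe1 on this board):

  # create a bridge and add both onboard NICs as members
  ifconfig bridge0 create
  ifconfig bridge0 addm nfe0 addm nfe1 up

or, to make it persistent, something along these lines in /etc/rc.conf:

  cloned_interfaces="bridge0"
  ifconfig_bridge0="addm nfe0 addm nfe1 up"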


>, two
> separate PCI-e busses (each associated with a separate nVidia chipset),

Since each PCI-E slot or PCI-E device always sits on its own bus, I fail
to see anything strange about that.
(And it is actually very common for the PCI-E slots on a motherboard to
be connected to different chips.)
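
If you want to see that layout from the OS side, pciconf(8) is a quick
way to eyeball it; the selector printed for each device includes its bus
number, so devices hanging off different PCI-E root ports show up with
different bus numbers:

  # list every PCI device the kernel found, with vendor/device strings
  pciconf -lv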

> two separate PCI-X busses... the list continues.

Having more than one PCI-X bus used to be fairly common on server boards
for performance reasons.  Nowadays PCI-X is slowly being replaced by
PCI-E, so on the latest generation of server boards there is usually no
more than one PCI-X bus.


> 
> I know you don't need opinions at this point, but what a behemoth.  I
> can't imagine that thing running reliably.

I would rather say it is a quite elegant design for a high-end motherboard
intended for server/workstation installations.

It is a dual-socket Opteron board.  Each Opteron has its own memory
controller and uses HyperTransport to connect to other components.
Each of these Opterons has three HyperTransport links available.
One link from each CPU is needed to connect to the other CPU, leaving two
links per CPU available to connect to other chips.  From that starting
point it is a fairly obvious design.
To maximise the available bandwidth one would want to spread the chips
out over these links, which this motherboard does fairly well, using
three of the four available links.
(And it hangs the most important things off CPU0, so you can actually use
the board even if you have only one CPU installed.)
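
Counting the links out explicitly (using the numbers above):

  2 CPUs x 3 HT links each      = 6 links in total
  CPU0 <-> CPU1 coherent link   = uses 1 link on each CPU
  left over for I/O             = 2 links per CPU, 4 in total
  used by this board            = 3, with 1 to spare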

As for reliability, I see no particular reason for that board to be less
reliable than any other multi-CPU board.



-- 
<Insert your favourite quote here.>
Erik Trulsson
ertr1013@student.uu.se


