Date:      Thu, 8 Feb 2007 00:53:24 -0500
From:      Mike Meyer <mwm-keyword-freebsdquestions.8c5a2e@mired.org>
To:        Michael Vince <mv@thebeastie.org>
Cc:        Nicole Harrington <drumslayer2@yahoo.com>, freebsd-questions@freebsd.org, freebsd-amd64@freebsd.org
Subject:   Re: Dual Core Or Dual CPU - What's the real difference in performance?
Message-ID:  <17866.47828.219523.71972@bhuda.mired.org>
In-Reply-To: <45CAAB06.40907@thebeastie.org>
References:  <676973.69182.qm@web34510.mail.mud.yahoo.com> <45CAAB06.40907@thebeastie.org>

In <45CAAB06.40907@thebeastie.org>, Michael Vince <mv@thebeastie.org> typed:
> Nicole Harrington wrote:
> > Using FreeBSD, what is really the difference, besides
> >power and ability to shove in more memory, between
> >having two separate CPUs?
> Dual core or quad core CPU performance is far better than adding
> more single-core sockets, since the cores share access to the memory
> cache and cut down on memory latency/probing over AMD's
> HyperTransport bus.

Of course, it's not really that simple. For one thing, the Intel quad
core CPUs are two dual core chips in one package, and the two chips
don't share internal resources - like cache. So any data in cache is
only available to two of the four CPUs; if one of the other two CPUs
needs that data, it'll have to go over the external bus. The AMD quad
core package is similar - except they don't put the two chips in the
same package, but provide a proprietary high-speed interconnect
between them.

Also, shared access to the memory cache means - well, shared access to
the memory cache and the memory behind it. Shared access raises the
possibility of contention, which will slow things down. If all four
CPUs get a cache miss for different data at the same time, one of them
is in for a long wait. Yeah, this isn't very likely under most
loads. How likely is it under yours?

Generally, more processors means things will go faster until you run
out of threads. However, if there's some shared resource that is the
bottleneck for your load, and the resource doesn't support
simultaneous access by all the cores, more cores can slow things
down.

Of course, it's not really that simple. Some shared resources can be
managed so as to make things improve under most loads, even if they
don't support simultaneous access.

	<mike
-- 
Mike Meyer <mwm@mired.org>		http://www.mired.org/consulting.html
Independent Network/Unix/Perforce consultant, email for more information.
