Date:      Thu, 15 Dec 2011 07:36:18 -0600
From:      Michael Larabel <michael.larabel@phoronix.com>
To:        Stefan Esser <se@freebsd.org>
Cc:        FreeBSD Stable Mailing List <freebsd-stable@freebsd.org>, Current FreeBSD <freebsd-current@freebsd.org>, Michael Ross <gmx@ross.cx>, freebsd-performance@freebsd.org, "O. Hartmann" <ohartman@zedat.fu-berlin.de>, Jeremy Chadwick <freebsd@jdc.parodius.com>
Subject:   Re: Benchmark (Phoronix): FreeBSD 9.0-RC2 vs. Oracle Linux 6.1 Server
Message-ID:  <4EE9F7D2.4050607@phoronix.com>
In-Reply-To: <4EE9F546.6060503@freebsd.org>
References:  <4EE1EAFE.3070408@m5p.com> <CAJ-FndDniGH8QoT=kUxOQ%2BzdVhWF0Z0NKLU0PGS-Gt=BK6noWw@mail.gmail.com> <4EE2AE64.9060802@m5p.com> <4EE88343.2050302@m5p.com> <CAFHbX1%2B5PttyZuNnYot8emTn_AWkABdJCvnpo5rcRxVXj0ypJA@mail.gmail.com> <4EE933C6.4020209@zedat.fu-berlin.de> <CAPjTQNEJDE17TLH-mDrG_-_Qa9R5N3mSeXSYYWtqz_DFidzYQw@mail.gmail.com> <20111215024249.GA13557@icarus.home.lan> <4EE9A2A0.80607@zedat.fu-berlin.de> <op.v6iv3qe5g7njmm@michael-think> <4EE9C79B.7080607@phoronix.com> <4EE9F546.6060503@freebsd.org>

On 12/15/2011 07:25 AM, Stefan Esser wrote:
> Am 15.12.2011 11:10, schrieb Michael Larabel:
>> No, the same hardware was used for each OS.
>>
>> In terms of the software, the stock software stack for each OS was used.
> Just curious: Why did you choose ZFS on FreeBSD, when UFS2 (with
> journaling enabled) would seem the obvious choice, since it is closer
> in concept to ext4 and is what most FreeBSD users will run?

I was running some ZFS vs. UFS tests as well, and this system happened 
to have ZFS on it when I ran these other tests.

>
> Did you tune the ZFS ARC (e.g. vfs.zfs.arc_max="6G") for the tests?

The OS was left in its stock configuration.
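
For reference, the ARC cap Stefan mentions is a boot-time loader tunable
on FreeBSD rather than a runtime sysctl; a minimal sketch, reusing his
example value of 6G (an illustration, not a recommendation):

```
# /boot/loader.conf -- cap the ZFS ARC at 6 GB (Stefan's example value)
vfs.zfs.arc_max="6G"
```

A reboot is required for loader tunables to take effect.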

>
> And BTW: Did your measured run times account for the fact that Linux
> keeps much more dirty data in the buffer cache? (FreeBSD enforces a low
> limit on dirty buffers, since under realistic load already-cached data
> is much more likely to be reused, and thus more valuable, than freshly
> written data; aggressively caching dirty data would significantly reduce
> throughput and responsiveness under high load.) Given the hardware specs
> of the test system, I'd guess that Linux accepts at least 100 times as
> much dirty data in the buffer cache as FreeBSD (where the limit is at
> most in the tens of megabytes).
>
> If you did not, then your results do not represent a server load (which
> I'd expect to be the relevant case if you are testing against Oracle
> Linux 6.1 Server), where sustained performance is required. Tests that
> start on an idle system in a clean state, and that ignore background
> flushing of the buffer cache after the timed program has finished, are
> perhaps useful for a lightly loaded PC, but not for a system that runs
> under high load by default.
>
> I bet the picture would change if you compared the systems under higher
> load (which admittedly makes it much harder to get sensible numbers for
> the program under test), or with a reduced Linux buffer cache size, or
> with the FreeBSD dirty buffer limit raised accordingly (which ought to
> be possible via sysctl and/or boot-time tunables, e.g.
> "vfs.hidirtybuffers").
>
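
Stefan's suggested tuning could be sketched as follows (illustrative
value only; vfs.hidirtybuffers is the tunable he names, and an
appropriate value depends on RAM and workload):

```
# /etc/sysctl.conf (FreeBSD) -- raise the dirty-buffer ceiling
# (illustrative value, not a recommendation)
vfs.hidirtybuffers=16384
# For comparison, Linux caps dirty data as a fraction of RAM via
# vm.dirty_ratio and vm.dirty_background_ratio in its /etc/sysctl.conf.
```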
> And a last remark: Single benchmark runs do not provide reliable data.
> FreeBSD comes with "ministat" to check the significance of benchmark
> results. Each test should be repeated at least 5 times for meaningful
> averages with acceptable confidence level.

The Phoronix Test Suite runs most tests a minimum of three times; if the 
standard deviation exceeds 3.5%, the run count is dynamically increased, 
among other safeguards.
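
That dynamic run-count logic can be sketched roughly as follows (a
simplified illustration, not the Test Suite's actual code; the 3.5%
threshold and three-run minimum are the figures quoted above):

```python
import statistics

def needs_more_runs(results, threshold_pct=3.5, min_runs=3):
    """Return True if another benchmark run should be scheduled:
    either the minimum run count has not been reached yet, or the
    relative sample standard deviation still exceeds the threshold."""
    if len(results) < min_runs:
        return True
    rel_stddev = statistics.stdev(results) / statistics.mean(results) * 100
    return rel_stddev > threshold_pct

# Tightly clustered timings -> no extra runs needed
print(needs_more_runs([10.1, 10.0, 10.2]))  # False (rel. stddev ~1%)
# Noisy timings -> keep running
print(needs_more_runs([10.0, 12.5, 9.1]))   # True (rel. stddev ~17%)
```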

-- Michael

>
> Regards, Stefan
>



