Date:      Tue, 28 May 2013 17:04:46 +0200
From:      dennis berger <db@bsdsystems.de>
To:        dennis berger <db@nipsi.de>
Cc:        Paul Pathiakis <pathiaki2@yahoo.com>, Adrian Chadd <adrian@freebsd.org>, "O. Hartmann" <ohartman@zedat.fu-berlin.de>, "freebsd-performance@freebsd.org" <freebsd-performance@freebsd.org>
Subject:   Re: New Phoronix performance benchmarks between some Linuxes and *BSDs
Message-ID:  <47ED9A36-D61D-42AB-B146-2E03197CBF97@bsdsystems.de>
In-Reply-To: <F2325751-7571-44AB-8B84-C7BD76D4812F@nipsi.de>
References:  <20130528090822.6bfe8771@thor.walstatt.dyndns.org> <CAJ-VmokyRX5G%2B%2Bso=LJk5zEX56J5Q0R-Kiw7oqQJqKnLEMoZuw@mail.gmail.com> <1369746142.64078.YahooMailNeo@web141401.mail.bf1.yahoo.com> <F2325751-7571-44AB-8B84-C7BD76D4812F@nipsi.de>

Sorry, I missed the "variable file sizes" part.

So forget about my post.

On 28.05.2013 at 16:27, dennis berger wrote:

> Hi,
> For me it's unclear what 100 TPS means in that particular case, but it doesn't make sense at all, and I don't see such a low number in the postmark output here.
>
> I think I get around 4690 +- 435 transactions per second with 95% confidence.
>
> Guest and actual test system is FreeBSD 9.1/64-bit inside VirtualBox.
> Host system is Mac OS X on a four-year-old MacBook.
> Storage is a VDI file backed by an SSD (OCZ Vertex 2) with a 2 GB ZFS pool.
>
> When I run postmark with 25K transactions I get output like this
> (http://fsbench.filesystems.org/bench/postmark-1_5.c):
>
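For reference, an interactive postmark session like the one below is driven by a few commands at the pm> prompt. The parameter values here are illustrative assumptions, not necessarily the settings used in this run (though "set number 500" would match the "Creation alone: 500 files" line in the output):

```
set number 500
set transactions 25000
set location /pool/nase
run
quit
```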
> pm>run
> Creating files...Done
> Performing transactions..........Done
> Deleting files...Done
> Time:
> 	6 seconds total
> 	5 seconds of transactions (5000 per second)
>
> Files:
> 	13067 created (2177 per second)
> 		Creation alone: 500 files (500 per second)
> 		Mixed with transactions: 12567 files (2513 per second)
> 	12420 read (2484 per second)
> 	12469 appended (2493 per second)
> 	13067 deleted (2177 per second)
> 		Deletion alone: 634 files (634 per second)
> 		Mixed with transactions: 12433 files (2486 per second)
>
> Data:
> 	80.71 megabytes read (13.45 megabytes per second)
> 	84.59 megabytes written (14.10 megabytes per second)
>
> I did this 100 times on my notebook and summarized the results:
>
> root@freedb:/pool/nase # ministat -n *.txt
> x alltransactions.txt
> + appended-no.txt
> * created-no.txt
> % deleted-no.txt
> # reed-no.txt
>    N           Min           Max        Median           Avg        Stddev
> x 100          3571          5000          5000       4690.25     435.65125
> + 100          1781          2493          2493       2338.84      216.8531
> * 100          1633          2613          2613       2396.59     256.53752
> % 100          1633          2613          2613       2396.59     256.53752
> # 100          1774          2484          2484       2330.22      216.3084
>
>
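As a side note, ministat's avg and stddev columns can be sanity-checked with a short Student-t calculation. This is an illustrative sketch, not ministat's actual code; the t value of 1.984 is taken from standard tables for roughly 100 samples. (Strictly speaking, the +-435 figure quoted earlier is the standard deviation; the 95% confidence half-width for 100 samples is t * stddev / sqrt(100), i.e. roughly +-86.)

```python
import math
import statistics

def ci95(samples, t=1.984):
    """Return mean, sample standard deviation, and the half-width of an
    approximate 95% confidence interval (t=1.984 suits ~100 samples)."""
    n = len(samples)
    mean = statistics.mean(samples)
    sd = statistics.stdev(samples)     # sample standard deviation (n-1)
    half = t * sd / math.sqrt(n)       # confidence-interval half-width
    return mean, sd, half
```

Feeding the 100 per-run transaction rates into ci95() should reproduce roughly the 4690.25 avg and 435.65 stddev shown in the ministat table.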
> When I check "zpool iostat 1" I see:
>=20
> root@freedb:/pool/nase # zpool iostat 1
>               capacity     operations    bandwidth
> pool        alloc   free   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----
> pool        10.6M  1.97G      0      8     28   312K
> ----------  -----  -----  -----  -----  -----  -----
> pool        10.6M  1.97G      0     33      0  4.09M
> ----------  -----  -----  -----  -----  -----  -----
> pool        10.6M  1.97G      0      0      0      0
> ----------  -----  -----  -----  -----  -----  -----
> pool        10.6M  1.97G      0      0      0      0
> ----------  -----  -----  -----  -----  -----  -----
> pool        10.6M  1.97G      0      0      0      0
> ----------  -----  -----  -----  -----  -----  -----
> pool        19.6M  1.97G      0     89      0  4.52M
> ----------  -----  -----  -----  -----  -----  -----
>
>
> around 30-90 TPS bursts.
>
> Did they count this instead?
>
> -dennis
>
> On 28.05.2013 at 15:02, Paul Pathiakis wrote:
>
>> Outperform at "out of the box" testing. ;-)
>>
>> So, if I have a "desktop" distro like PCBSD, the only thing of relevance is putting up my own web server? (Yes, the benchmark showed PCBSD seriously kicking butt with Apache on static pages... but why would I care on a desktop OS?)
>>
>> Personally, I found the whole thing lacking coherency and relevancy on just about anything.
>>
>> Don't get me wrong, I do like the fact that this was done. However, there are compiler differences (it was noted many times that Clang was used and may have been a detriment, but the article doesn't go into how or why) and other issues.
>>
>> There was a benchmark on PostgreSQL, but I didn't see any *BSD results.
>>
>> Transactions to a disk? Does this measure the "bundling" effect of ZFS's "groups of transactions"? That means far fewer transactions are actually sent to disk. (Does anyone know where this can be found? That is, how does the whole "bundling of disk I/O" go from writing to memory, locking those writes, then sending all the info to the disk in one shot? This helps: http://blog.delphix.com/ahl/2012/zfs-fundamentals-transaction-groups/)
>>
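The transaction-group bundling asked about above can be illustrated with a toy model. This is only a sketch under stated assumptions (txg_flushes is a made-up helper, and the 5-second interval echoes ZFS's historical default txg timeout), not ZFS code: many logical operations collapse into a handful of periodic flushes, which is one reason a pool-level tool can report far fewer writes per second than the application performs.

```python
def txg_flushes(op_times, txg_interval=5.0):
    """Count how many transaction-group flushes a stream of operation
    timestamps (in seconds) collapses into, assuming writes accumulate
    in memory and are flushed once per interval."""
    return len({int(t // txg_interval) for t in op_times})

# 10,000 logical operations spread evenly over 10 seconds
ops = [i * 0.001 for i in range(10_000)]
```

Here the 10,000 logical operations land in only two transaction groups, so the disk sees two write bursts rather than 10,000 individual transactions.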
>> I was working at a company that had the intention of doing "electronic asset ingestion and tagging". Basically, take anything moved to the front-end web servers, copy it to disk, replicate it to other machines, etc. (maybe not in that order). The whole system was Java-based.
>>
>> This was 3 years ago. I believe I was using Debian 4 (it had just come out... I don't recall the names, etch, etc.) and I took a single machine and rebuilt it 12 times: openSUSE with ext2, ext3, xfs; Debian with ext2, ext3, xfs; CentOS with ext2, ext3, xfs; FreeBSD 8.1 with ZFS and UFS2 w/ SU.
>>
>> Well, the numbers came in, and this was all done on the same HP 180 1U server rebuilt that many times. I withheld the FreeBSD results, as the development was done on Debian and people were "Linux inclined". The requirement was 15,000 tpm per machine for I/O. Linux could only get to 3,500. People were pissed, and they were looking at 5 years and $20M in time and development. That's when I put the FreeBSD results in front of them: 75,200 tpm. Now, these were THEIR measurements and THEIR benchmarks (the engineering team's). The machine was doing nothing but running flat out on a horrible method of using directory structure to organize the asset tags (yeah, ugly). However, ZFS almost didn't care, compared to a traditional filesystem.
>>
>> So, what it comes down to is simple: you can benchmark anything you want with various "authoritative" benchmarks, but in the end, your benchmark on your data set (aka the real world in your world) is the only thing that matters.
>>
>> BTW, what happened in the situation I described? Despite huge cost savings and incredible performance... "We have to use Debian, as we never put any type of automation in place that would allow us to move from one OS to another." Yeah, I guess a Systems Architect (like me) is something that people tend to overlook. System automation to allow nimble transitions like that is totally overlooked.
>>
>> Benchmarks are "nice". However, tuning and understanding the underlying tech and what it's good for is priceless. Knowing there are memory-management issues, scheduling issues, and certain types of I/O on certain filesystems that cause them to sing or sob: these are the things that will make someone invaluable. No one should be a tech bigot. The mantra should be: "The best tech for the situation". No one should care if it's BSD, Linux, or Windoze if it's what works best in the situation.
>>
>> P
>>
>> PS - When I see how many people are clueless about how much tech is ripped off from BSD to make other vendors' products just work, and then they slap at BSD... it's pretty bad. GPLv3? Thank you... there are so many people adopting a "no GPL products in house" policy that there is a steady increase in BSD and ZFS. I can only hope GPLv4 becomes "if you use our stuff, we own all the machines and code that our stuff coexists on" :-)
>>
>> ________________________________
>> From: Adrian Chadd <adrian@freebsd.org>
>> To: O. Hartmann <ohartman@zedat.fu-berlin.de>
>> Cc: freebsd-performance@freebsd.org
>> Sent: Tuesday, May 28, 2013 5:03 AM
>> Subject: Re: New Phoronix performance benchmarks between some Linuxes and *BSDs
>>
>> outperform at what?
>>
>> adrian
>>
>> On 28 May 2013 00:08, O. Hartmann <ohartman@zedat.fu-berlin.de> wrote:
>>> Phoronix has emitted another of its "famous" performance tests
>>> comparing different flavours of Linux (their obvious favorite OS):
>>>
>>> http://www.phoronix.com/scan.php?page=article&item=bsd_linux_8way&num=1
>>>
>>> It is "impressive", too, to see that Phoronix did not benchmark
>>> gaming performance - this is done exclusively on the Linux
>>> distributions, I guess for lack of suitable graphics cards at
>>> Phoronix (although it should be possible to compare the nVidia blob's
>>> performance between the systems).
>>>
>>> Although I'm not much impressed by the way the benchmarks are
>>> orchestrated, Phoronix is the only platform known to me that provides
>>> such benchmarks on the most recent available operating systems from
>>> time to time.
>>>
>>> Also, the bad performance of ZFS compared to UFS2 seems to have a
>>> very harsh impact on systems where that memory- and performance-hogging
>>> ZFS isn't really needed.
>>>
>>> Surprising and really disappointing (especially for me personally) is
>>> the poor performance of the Rodinia benchmark on the BSDs, for which I
>>> will try to take a deeper look to understand the circumstances of the
>>> setups and what this scientific benchmark is supposed to do and
>>> measure.
>>>
>>> But the overall conclusion shown on Phoronix matches what I see at our
>>> department, which utilizes some Linux flavours (Ubuntu 12.04 or SUSE
>>> and, in the majority, older versions of CentOS), all of which outperform
>>> the several FreeBSD servers I maintain (FreeBSD 9.1-STABLE and FreeBSD
>>> 10.0-CURRENT, i.e. recent software compared to some older Linux kernels).
>>> _______________________________________________
>>> freebsd-performance@freebsd.org mailing list
>>> http://lists.freebsd.org/mailman/listinfo/freebsd-performance
>>> To unsubscribe, send any mail to "freebsd-performance-unsubscribe@freebsd.org"

Dipl.-Inform. (FH)
Dennis Berger

email:   db@bsdsystems.de
mobile: +491791231509
fon: +494054001817



