Date:      Sat, 5 Jul 2014 17:05:24 +0300
From:      Stefan Parvu <sparvu@systemdatarecorder.org>
To:        Roger Pau Monné <roger.pau@citrix.com>
Cc:        freebsd-fs@freebsd.org, FreeBSD Hackers <freebsd-hackers@freebsd.org>
Subject:   Re: Strange IO performance with UFS
Message-ID:  <20140705170524.4212b6fa0b1046a33e1fc69a@systemdatarecorder.org>
In-Reply-To: <53B7C616.1000702@citrix.com>
References:  <53B691EA.3070108@citrix.com> <53B69C73.7090806@citrix.com> <20140705001938.54a3873dd698080d93d840e2@systemdatarecorder.org> <53B7C616.1000702@citrix.com>


> This looks much better than what I saw in my benchmarks, how much
> memory does the system have?

This system has 64GB of RAM. If you increase the block size in fio you should see
better throughput, as you already found. Cool, glad you sorted it out.
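
For reference, something like the two fio jobs below (the directory and sizes are
only placeholders, adjust for your setup) should make the block-size effect easy
to reproduce; the flags are standard fio options:

  # small blocks: many syscalls, lower sequential throughput
  fio --name=seqwrite --directory=/mnt/test --rw=write --bs=4k --size=4g

  # larger blocks: fewer syscalls, throughput should go up
  fio --name=seqwrite --directory=/mnt/test --rw=write --bs=1m --size=4g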

As a side note, it was interesting for us to discover that system time usage differed
between Debian 7 and FreeBSD for our test workloads. Strangely, the Linux system stayed
around 3-4% system time no matter what block size or number of files we pushed over
hardware RAID 10, while showing high iowait time and a high run queue length (which on
Linux also counts the processes sitting in iowait).
 
FreeBSD, I think, does not count processes waiting on I/O (storage, network, etc.)
in the run queue length. Is this correct?
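
One way to compare the accounting on both sides is plain vmstat (it exists on both
systems; the column meanings below are how I understand them, so correct me if I'm
wrong):

  # Linux: 'r' = runnable, 'b' = blocked in uninterruptible (I/O) sleep;
  # the load average counts both, so heavy I/O pushes it up
  vmstat 1

  # FreeBSD: vmstat also prints r/b/w, but as far as I know the load
  # average only counts runnable threads, not those sleeping on I/O
  vmstat 1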

-- 
Stefan Parvu <sparvu@systemdatarecorder.org>


