Date:      Sun, 5 Aug 2012 16:44:58 +0100
From:      Steve O'Hara-Smith <>
To:        Wojciech Puchar <>
Subject:   Re: ZFS bonnie puzzlement
Message-ID:  <>
In-Reply-To: <>
References:  <> <>

On Sun, 5 Aug 2012 13:29:51 +0200 (CEST)
Wojciech Puchar <> wrote:

> > are showing me. Read performance OTOH is strange, zpool and systat both
> > reporting consistently an aggregated read speed of around 120MB/s during
> > the block read tests (which seems a bit slow for the drives - and indeed
> > systat reports the drives at less than 50% utilisation) but bonnie is
> > only reporting 35MB/s, I see similar discrepancies with simple dd block
> > reads to /dev/null, in which case my stopwatch agrees with dd.
> no it is not wrong.
> Do more tests (possibly your own doing heavy mixed workload) to
> understand well why you should not use this "last word in filesystems".

	First surprise: with only 4GB of RAM I had set primarycache=metadata;
changing that to primarycache=all brought the systat, zpool iostat and
bonnie figures into agreement - and made them all a bit better too. Lesson
from this - don't bother setting primarycache=metadata.
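For anyone wanting to reproduce the change, a minimal sketch of the property switch described above (the dataset name "tank" is an assumption - substitute your own pool or dataset):

```shell
# Check the current setting (assumed dataset name "tank"):
zfs get primarycache tank
# Let the ARC cache both data and metadata (this is the default):
zfs set primarycache=all tank
```

The property takes effect immediately, but the ARC only fills as data is re-read, so benchmark numbers settle after a warm-up pass.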

	With that puzzle gone testing and tuning becomes more useful:

	Enabling prefetch made a huge difference to the per-char sequential
read, but didn't really change anything else. Indeed, that test is now CPU
limited in bonnie - that'll do.
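A sketch of how prefetch is toggled on FreeBSD, assuming it had been disabled via the usual loader tunable (the default on small-memory machines of that era):

```shell
# In /boot/loader.conf, allow file-level prefetch (0 = prefetch enabled):
#   vfs.zfs.prefetch_disable="0"
# Verify the current value at runtime:
sysctl vfs.zfs.prefetch_disable
```

Prefetch mainly helps sequential reads, which matches the per-char read test being the only figure to move.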

	Rebooting with vfs.zfs.cache_flush_disable=1 made everything faster.
Block writes and reads maxed out the discs at around 110MB/s and 200MB/s
respectively - pretty close to the raw disc speed. Rewrite nearly doubled in
speed too.
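The tunable is a boot-time setting, so the reboot above corresponds to a loader.conf entry along these lines:

```shell
# /boot/loader.conf - stop ZFS issuing cache-flush commands to the drives:
vfs.zfs.cache_flush_disable="1"
```

Worth noting that skipping cache flushes trades safety for speed: with ordinary drives a power loss can lose writes the drives had only cached, so this is generally only safe with battery-backed or non-volatile write caches.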

	Next stop NFS tuning.

Steve O'Hara-Smith                          |   Directable Mirror Arrays
C:>WIN                                      | A better way to focus the sun
The computer obeys and wins.                |    licences available see
You lose and Bill collects.                 |
