Date:      Tue, 12 Apr 2011 13:33:02 +0200
From:      Lars Wilke <>
Subject:   ZFS performance strangeness
Message-ID:  <>


There are quite a few threads about ZFS and performance difficulties,
but I did not find anything that really helped :)
Therefore, any advice would be highly appreciated.
I started to use ZFS with 8.1R; the only tuning I did was setting


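For context, ZFS tuning on FreeBSD 8.x usually went through /boot/loader.conf; a hypothetical sketch of the kind of tunables involved (the names are real 8.x knobs, but the values are illustrative, not the poster's):

```shell
# /boot/loader.conf -- illustrative ZFS tunables on FreeBSD 8.x
# (values are hypothetical examples, not the poster's settings)
vfs.zfs.arc_max="16G"          # cap the ARC to leave RAM for other uses
vfs.zfs.prefetch_disable="0"   # keep file-level prefetch enabled
```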
The machines are Supermicro boards with 48 GB of ECC RAM and 15k RPM SAS
drives. Local read/write performance was and is great.
But exporting via NFS was a mixed bag in 8.1R.
Generally, r/w speed over NFS was OK, but large reads or writes took
ages. Most of the reads and writes were small, so I did not bother.

Now I upgraded one machine to 8.2R and I get very good write performance
over NFS, but read performance drops to a ridiculously low value, around
1-2 MB/s, while writes are around 100 MB/s. The network is dedicated
1 Gb Ethernet. The zpool uses RAIDZ1 over 7 drives, one vdev.
The filesystem has compression enabled; turning it off made no
difference, AFAICT.
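A pool matching that description could be created roughly like this (the pool, disk, and filesystem names are assumptions, not taken from the post):

```shell
# Hypothetical recreation of the described layout: one raidz1 vdev of 7 disks
zpool create tank raidz1 da0 da1 da2 da3 da4 da5 da6
# a filesystem with compression enabled, as in the setup above
zfs create -o compression=on tank/export
```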

Now I tried a few of the suggested tunables; my last try was this:


Still no luck. Writing is fast, reading is not, even with prefetching
enabled. The only thing I noticed is that reading, for example, 10 MB
is fast (on a freshly mounted fs), but when reading larger amounts, i.e.
a couple hundred MBs, performance drops and zpool iostat or iostat -x
show that there is not much activity on the zpool/HDDs.
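The prefetch setting and the per-disk activity can be checked like this while a slow read is in progress:

```shell
# Is ZFS file-level prefetch disabled? (1 = disabled, 0 = enabled)
sysctl vfs.zfs.prefetch_disable
# Watch per-vdev and per-disk activity during the slow read
zpool iostat -v 1
iostat -x 1
```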

It seems as if ZFS does not care that someone wants to read data; the idle
time of the reading process happily ticks up, getting higher and higher!?
When trying to access the file during this time, e.g. with ls -la on the
file, the process blocks and is sometimes difficult to kill.
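On FreeBSD, the wait state of such a blocked process can be inspected; a sketch, assuming the PID of the stuck dd or ls is known (the <pid> placeholder is mine):

```shell
# Where is the blocked process sleeping? (state and wchan columns)
ps -o pid,state,wchan,command -p <pid>
# Kernel stack of the stuck thread, often pinpointing the ZFS/NFS code path
procstat -kk <pid>
```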

I read and write with dd, and before the read tests I umount and mount
the NFS share again.

dd if=/dev/zero of=/mnt/bla bs=1M count=X
dd if=/mnt/bla of=/dev/null bs=1M count=Y
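The test cycle spelled out end to end (the server name and export path are assumptions):

```shell
# Remount to defeat the client-side cache, then time a write and a read
umount /mnt
mount -t nfs server:/tank/export /mnt          # hypothetical server/export
dd if=/dev/zero of=/mnt/bla bs=1M count=1000   # write test
umount /mnt && mount -t nfs server:/tank/export /mnt
dd if=/mnt/bla of=/dev/null bs=1M              # read test
```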

The mount is done with these options from two CentOS 5 boxes:


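The actual options did not survive above; for reference, a typical CentOS 5 NFSv3 client mount looks something like this (every option here is an assumption, not the poster's):

```shell
# Hypothetical CentOS 5 client mount -- options are illustrative only
mount -t nfs -o vers=3,tcp,rsize=32768,wsize=32768,hard,intr \
    server:/tank/export /mnt
```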