Date:      Tue, 11 Jun 2013 17:20:09 -0400 (EDT)
From:      Rick Macklem <rmacklem@uoguelph.ca>
To:        Attila Nagy <bra@fsn.hu>
Cc:        freebsd-fs@FreeBSD.org
Subject:   Re: An order of magnitude higher IOPS needed with ZFS than UFS
Message-ID:  <253074981.119060.1370985609747.JavaMail.root@erie.cs.uoguelph.ca>
In-Reply-To: <51B79023.5020109@fsn.hu>

Attila Nagy wrote:
> Hi,
> 
> I have two identical machines. They have 14 disks hooked up to an HP
> Smart Array (SA from now on) controller.
> Both machines have the same SA configuration and layout: the disks
> are organized into mirror pairs (HW RAID1).
> 
> On the first machine, these mirrors are formatted with UFS2+SU
> (default settings); on the second machine, they are used as separate
> zpools (please don't tell me that ZFS can do the same, I know).
> Atime is turned off; otherwise there are no modifications (no
> zpool/zfs properties or sysctl parameters).
> The file systems are loaded more or less evenly, serving files
> ranging from a few kB to a few MB.
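> 
> For concreteness, each pair ends up as its own single-vdev pool,
> roughly like this (pool names are made up; daN is the HW RAID1
> mirror the SA exposes):
> 
>   zpool create pool0 da0        # one zpool per SA mirror
>   zfs set atime=off pool0       # the only non-default setting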
> 
> The machines act as NFS servers, so there is one, maybe important,
> difference here: the UFS machine runs 8.3-RELEASE, while the ZFS one
> runs 9.1-STABLE@r248885.
> They get the same type of load, and according to nfsstat and netstat,
> the load doesn't explain the big difference that can be seen in disk
> IOs. In fact, the UFS host seems to be more loaded...
> 
> According to gstat on the UFS machine:
> dT: 60.001s  w: 60.000s  filter: da
>  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w  %busy Name
>     0     42     35    404    6.4      8    150  214.2   21.5| da0
>     0     30     21    215    6.1      9    168  225.2   15.9| da1
>     0     41     33    474    4.5      8    158  211.3   18.0| da2
>     0     39     30    425    4.6      9    163  235.0   17.1| da3
>     1     31     24    266    5.1      7     93  174.1   14.9| da4
>     0     29     22    273    5.9      7     84  200.7   15.9| da5
>     0     37     30    692    7.1      7    115  206.6   19.4| da6
> 
> and on the ZFS one:
> dT: 60.001s  w: 60.000s  filter: da
>  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w  %busy Name
>     0    228    201   1045   23.7     27    344   53.5   88.7| da0
>     5    185    167    855   21.1     19    238   44.9   73.8| da1
>    10    263    236   1298   34.9     27    454   53.3   99.9| da2
>    10    255    235   1341   28.3     20    239   64.8   92.9| da3
>    10    219    195    994   22.3     23    257   46.3   81.3| da4
>    10    248    221   1213   22.4     27    264   55.8   90.2| da5
>     9    231    213   1169   25.1     19    229   54.6   88.6| da6
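> 
> For the record, snapshots like the two above come from an invocation
> along these lines (the 60 second interval and "da" filter match the
> dT:/filter: header gstat prints):
> 
>   gstat -I 60s -f da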
> 
> I've seen a lot of cases where ZFS required more memory and CPU (and
> even IO) to handle the same load, but they were nowhere near this
> bad: here it's often a 10x increase.
> 
> Any ideas?
> 
ken@ recently committed a change to the new NFS server that adds file
handle affinity support to it. He reported that he had found that,
without file handle affinity, ZFS's sequential read heuristic broke
badly (or something like that; you can probably find the email thread,
or maybe he will chime in).
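
If you want to see whether FHA is active on a given server, the knobs
show up as sysctls. From memory (so verify with "sysctl -a | grep fha"
on your boxes):

  # old NFS server (e.g. your 8.3 box):
  sysctl vfs.nfsrv.fha.enable
  # new NFS server, in trees that have ken@'s commit:
  sysctl vfs.nfsd.fha.enable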

Anyhow, you could try switching the FreeBSD 9 system to the old NFS
server (assuming your clients are doing NFSv3 mounts) and see if that
has a significant effect. (In FreeBSD 9, the old server has file
handle affinity, but the new server does not.)
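
In rc.conf terms the switch is roughly this (from memory, so check
nfsd(8) and mountd(8) on 9.1; both daemons need to be restarted for
it to take effect):

  nfs_server_enable="YES"
  oldnfs_server_enable="YES"   # makes rc start nfsd and mountd
                               # with -o, i.e. the old NFS server

If your rc scripts lack oldnfs_server_enable, adding "-o" to
nfs_server_flags and mountd_flags by hand should do the same thing.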

rick

> BTW, the file systems are 77-78% full according to df (so ZFS holds
> more data, because UFS reserves 8% via -m 8).
> 
> Thanks,
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"


