Date:      Sat, 5 Dec 2020 08:51:08 -0500
From:      Paul Mather <paul@gromit.dlib.vt.edu>
To:        freebsd-questions@freebsd.org
Cc:        tech-lists@zyxst.net
Subject:   Re: effect of differing spindle speeds on prospective zfs vdevs
Message-ID:  <EA44E7A9-961F-4101-8FBF-8EE5E81F2E2A@gromit.dlib.vt.edu>
In-Reply-To: <mailman.77.1607169601.55244.freebsd-questions@freebsd.org>
References:  <mailman.77.1607169601.55244.freebsd-questions@freebsd.org>

On Fri, 4 Dec 2020 23:43:15 +0000, tech-lists <tech-lists@zyxst.net> wrote:

> Normally when making an array, I'd like to use disks all of the same speed,
> interface, make and model, but from different batches. In this case I've no
> choice, so we have multiple 1 TB disks, some 7.2k, some 5.4k. I've not mixed
> them like this before.
>
> What effect would this have on the final array? Slower than if all one or the other?
> No effect? I'm expecting the fastest access will be that of the slowest vdev.


I believe you are correct in intuiting that the performance of the pool will be influenced by the slowest devices.

ZFS supports a variety of pool organisations, each with differing I/O characteristics, so "making an array" could cover a multiplicity of possibilities.  E.g., a "JBOD" (striped) pool would have different I/O characteristics than a RAIDZ pool.  Read access would also differ from write access, and so the use case of the pool (read-intensive or write-intensive) would affect I/O speeds.  (And, furthermore, small random vs. large sequential I/O will have an impact.)
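For concreteness, those layouts might be created along these lines (a sketch only; "tank" and the da0..da3 device names are placeholders for your actual pool name and disks):

```shell
# Striped "JBOD" pool: aggregates capacity and IOPS of all disks,
# but provides no redundancy at all.
zpool create tank da0 da1 da2 da3

# RAIDZ pool: single-parity redundancy; random-write IOPS behave
# more like a single disk than like four.
zpool create tank raidz da0 da1 da2 da3

# Mirrored pairs: better random-read IOPS (reads can be served from
# either side of a mirror), at the cost of half the raw capacity.
zpool create tank mirror da0 da1 mirror da2 da3
```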

IIRC, the write IOPS of a RAIDZ vdev are limited to those of its slowest device.
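A rough back-of-the-envelope illustration of that limit (the per-disk IOPS figures below are assumptions — ballpark numbers often quoted for 7.2k and 5.4k RPM SATA disks — and the min() rule is only an approximation):

```shell
# Assumed typical random-write IOPS per disk (not measured values):
iops_7200=80
iops_5400=55

# A RAIDZ vdev's random-write IOPS roughly track its slowest member,
# so mixing drive speeds gates the vdev at the 5.4k disk's figure.
vdev_iops=$(( iops_7200 < iops_5400 ? iops_7200 : iops_5400 ))
echo "$vdev_iops"
```

So even with faster 7.2k disks in the vdev, the whole vdev writes at roughly the 5.4k disks' pace.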


> Similarly, some disks' block size is 512b logical/512b physical, others are
> 512b logical/4096b physical, still others are 4096/4096. Any effect of
> mixing hardware? I understand ZFS sets its own blocksize.


IIRC, ZFS fixes the ashift of a vdev when it is created, so you should set it to accommodate the 4096/4096 devices to avoid performance degradation.  I believe it defaults to that now, and should auto-detect anyway.  But, in a mixed setup of drives like you have, you should be using ashift=12.
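One way to force this on FreeBSD, as far as I recall, is the min_auto_ashift sysctl, set before creating the pool ("tank" below is a placeholder pool name):

```shell
# Raise the minimum ashift ZFS will choose at vdev creation time, so
# even disks that report 512b sectors get 4 KiB-aligned allocations:
sysctl vfs.zfs.min_auto_ashift=12

# After creating the pool, confirm which ashift was actually used:
zdb -C tank | grep ashift
```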

I believe having ashift=9 on your mixed-drive setup would be the single biggest cause of reduced performance.

Cheers,

Paul.


