Date:      Sat, 5 Dec 2020 19:16:33 +0000
From:      tech-lists <tech-lists@zyxst.net>
To:        freebsd-questions@freebsd.org
Subject:   Re: effect of differing spindle speeds on prospective zfs vdevs
Message-ID:  <X8vckXLRivTWBXAz@rpi4.local>
In-Reply-To: <EA44E7A9-961F-4101-8FBF-8EE5E81F2E2A@gromit.dlib.vt.edu>
References:  <mailman.77.1607169601.55244.freebsd-questions@freebsd.org> <EA44E7A9-961F-4101-8FBF-8EE5E81F2E2A@gromit.dlib.vt.edu>

Hi,

On Sat, Dec 05, 2020 at 08:51:08AM -0500, Paul Mather wrote:
> IIRC, ZFS pools have a single ashift for the entire pool, so you
> should set it to accommodate the 4096/4096 devices to avoid
> performance degradation.  I believe it defaults to that now, and
> should auto-detect anyway.  But, in a mixed setup of vdevs like you
> have, you should be using ashift=12.
>
> I believe having an ashift=9 on your mixed-drive setup would be the
> biggest factor in reducing performance.
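
If it helps anyone searching the archives later: as far as I know,
ashift is actually recorded per top-level vdev rather than once for
the whole pool, though the advice stands either way. On FreeBSD 12's
ZFS the knob for new vdevs is a sysctl, so (a sketch, not verified on
this box) before creating or attaching anything:

  # floor the ashift that ZFS will auto-select for new vdevs
  sysctl vfs.zfs.min_auto_ashift=12

Newer OpenZFS also takes it directly at creation time, e.g.
"zpool create -o ashift=12 ...".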

Part of my confusion about ashift is that I thought ashift=9 was for
512/512 logical/physical drives. Is that still the case?
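
(As I understand it, ashift is just log2 of the sector size ZFS
aligns its writes to:

  2^9  = 512  bytes  ->  ashift=9,  the 512/512 drives
  2^12 = 4096 bytes  ->  ashift=12, the 4096-native drives

which would be why 9 and 12 are the two values that keep coming up.)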

On a different machine, which has been running since FreeBSD 12 was
-CURRENT, one of the disks in the array went bang. zdb shows ashift=9
(as was the default when the pool was created). The only available
replacement was an otherwise identical disk, but 512 logical/4096
physical. zpool status mildly warns about the performance degradation
like this:

ada2    ONLINE       0     0     0  block size: 512B configured, 4096B native

  state: ONLINE
status: One or more devices are configured to use a non-native block size.
      Expect reduced performance.
action: Replace affected devices with devices that support the
      configured block size, or migrate data to a properly configured
      pool.
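
For anyone wanting to check their own pool: zdb can print the ashift
stored in the cached config, something like the following, with "tank"
standing in for the real pool name:

  # ashift is listed per top-level vdev in the cached pool config
  zdb -C tank | grep ashift

The swap itself was the usual zpool replace, roughly:

  # new disk sits at the same device node as the dead one
  zpool replace tank ada2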

The other part of my confusion is that I understood ZFS to set its own
block size on the fly.
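
I think what I was half-remembering is the per-dataset recordsize
property, which really is dynamic (records vary up to the configured
limit) and can be changed at any time, e.g. with a made-up dataset
name:

  zfs get recordsize tank/data

whereas ashift is baked into a vdev at creation and can only be
"changed" by rebuilding the vdev.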

(I guess there must be some performance degradation, but it's not yet
enough for me to notice. Or it might only be noticeable when the pool
is low on space.)
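
(If I have the mechanics right, the penalty is read-modify-write:
with ashift=9, ZFS can issue 512-byte-aligned I/O, and a drive with
4096-byte physical sectors then has to read the whole 4K sector,
splice in the 512 bytes, and write it back. I'm just keeping an eye
on how full the pool gets, e.g.:

  # CAP and FRAG are the columns to watch as the pool fills
  zpool list tank

again with "tank" as a stand-in name.)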
--
J.
