Date:      Sun, 11 Dec 2011 18:26:40 +0200
From:      Kostik Belousov <kostikbel@gmail.com>
To:        Maksim Yevmenkin <maksim.yevmenkin@gmail.com>
Cc:        current@freebsd.org
Subject:   Re: calling all fs experts
Message-ID:  <20111211162640.GE50300@deviant.kiev.zoral.com.ua>
In-Reply-To: <CAFPOs6oJvS9+jVgU611imCDGj6L9mEJpr5OhLa4OK4XB7NCd8Q@mail.gmail.com>
References:  <CAFPOs6oJvS9+jVgU611imCDGj6L9mEJpr5OhLa4OK4XB7NCd8Q@mail.gmail.com>

On Sat, Dec 10, 2011 at 05:42:01PM -0800, Maksim Yevmenkin wrote:
> Hello,
>
> I have a question for the fs wizards.
>
> Suppose I can persuade a modern spinning disk to do "large" reads (say
> 512K to 1M) at a time. Also, suppose the file system on such a modern
> spinning drive is used to store large files (tens to hundreds of
> megabytes). Is there any way I can tweak the file system parameters
> (block size, layout, etc.) to help it get as close to the "disk's
> sequential read rate" as possible? I understand that I will not be
> able to get a 100MB/sec single-client sequential read rate, but can I
> get a sustained 40-50MB/sec rate? Also, can I reduce the performance
> impact caused by "small reads", such as directory accesses, etc.?

If you wanted to get responses from experts only, sorry in advance.

The fs (AKA UFS) uses the clustering provided by the buffer cache. The
clustering code, located mainly in kern/vfs_cluster.c, coalesces a
sequence of reads or writes targeting consecutive blocks into a single
physical read or write of at most MAXPHYS bytes. The current definition
of MAXPHYS is 128KB.
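
One practical consequence: a single read(2) much larger than MAXPHYS is
split into several physical transfers anyway, so there is little point in
issuing application reads far beyond it. To see what rate you actually get
for a given request size, a trivial userland harness is enough; a minimal
sketch (the default file path and request size are just placeholders):

#include <sys/types.h>
#include <sys/time.h>

#include <err.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
	const char *path;
	struct timeval start, end;
	double secs;
	off_t total;
	ssize_t n;
	size_t bufsz;
	char *buf;
	int fd;

	path = argc > 1 ? argv[1] : "/tmp/bigfile";	/* placeholder */
	bufsz = argc > 2 ? (size_t)atol(argv[2]) : 128 * 1024;

	if ((buf = malloc(bufsz)) == NULL)
		err(1, "malloc");
	if ((fd = open(path, O_RDONLY)) == -1)
		err(1, "open %s", path);

	/* Read the whole file sequentially, timing the loop. */
	gettimeofday(&start, NULL);
	total = 0;
	while ((n = read(fd, buf, bufsz)) > 0)
		total += n;
	gettimeofday(&end, NULL);
	if (n == -1)
		err(1, "read");

	secs = (end.tv_sec - start.tv_sec) +
	    (end.tv_usec - start.tv_usec) / 1e6;
	printf("%jd bytes in %.3f s, %.1f MB/s\n",
	    (intmax_t)total, secs, total / secs / 1e6);

	free(buf);
	close(fd);
	return (0);
}

Run it against a file larger than RAM, or right after a remount, so the
buffer cache does not serve the reads and hide the disk.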

Clustering also allows the filesystem to improve the layout of files by
calling VOP_REALLOCBLKS() to redo the allocation, making the blocks of a
written sequence physically sequential when they are not.
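
Conceptually, the reallocation just looks at the physical blocks backing a
run of logical blocks and, when they are scattered, moves the run to one
contiguous range. A toy userland model of that decision follows; the block
numbers and the new base are made up, and the real work of course happens
against the cylinder-group bitmaps inside the kernel:

#include <stdio.h>

/* Is the run of physical blocks already contiguous? */
static int
run_is_contiguous(const long *pblk, int n)
{
	int i;

	for (i = 1; i < n; i++)
		if (pblk[i] != pblk[i - 1] + 1)
			return (0);
	return (1);
}

/* Toy "reallocation": point the run at a new contiguous range. */
static void
realloc_run(long *pblk, int n, long newbase)
{
	int i;

	for (i = 0; i < n; i++)
		pblk[i] = newbase + i;
}

int
main(void)
{
	long pblk[] = { 1000, 1001, 5000, 5001, 5002 };
	int n = sizeof(pblk) / sizeof(pblk[0]);

	if (!run_is_contiguous(pblk, n))
		realloc_run(pblk, n, 9000);	/* made-up new base */
	printf("first %ld last %ld\n", pblk[0], pblk[n - 1]);
	return (0);
}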

Even if a file is not laid out ideally, or the i/o pattern is random, most
scheduled writes are asynchronous, and for reads the system tries to
schedule read-ahead of a limited number of blocks. This allows the
lower layers, i.e. geom and the disk drivers, to optimize the i/o queue
and coalesce requests that are consecutive on disk but not adjacent in
the queue.
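
Down the stack, that optimization amounts to sorting the queue by disk
address and merging requests that turn out to be back-to-back. A toy model
of the merge (the request list is made up):

#include <stdio.h>
#include <stdlib.h>

/* A pending request: starting block and length in blocks. */
struct req {
	long	blkno;
	long	count;
};

static int
req_cmp(const void *a, const void *b)
{
	const struct req *ra = a, *rb = b;

	return (ra->blkno < rb->blkno ? -1 : ra->blkno > rb->blkno);
}

/*
 * Sort the queue by block number and merge requests that are
 * back-to-back on disk; returns the new queue length.
 */
static int
coalesce(struct req *q, int n)
{
	int i, out;

	qsort(q, n, sizeof(*q), req_cmp);
	for (out = 0, i = 1; i < n; i++) {
		if (q[i].blkno == q[out].blkno + q[out].count)
			q[out].count += q[i].count;	/* merge */
		else
			q[++out] = q[i];
	}
	return (out + 1);
}

int
main(void)
{
	struct req q[] = {
		{ 300, 8 }, { 100, 8 }, { 108, 8 }, { 116, 8 },
	};
	int i, n;

	n = coalesce(q, 4);
	for (i = 0; i < n; i++)
		printf("blk %ld len %ld\n", q[i].blkno, q[i].count);
	return (0);
}

Here the three scattered 8-block requests at 100, 108 and 116 come out as
one 24-block transfer, leaving only two physical i/os.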

BTW, some time ago I was interested in the effect of a semi-abandoned
patch on fragmentation in UFS, since the patch could have made the
fragmentation worse. I wrote a tool that calculated the percentage of
non-consecutive spots in the whole filesystem. Apparently, even under a
hard load consisting of writing a lot of files no more than a few
megabytes in size, UFS managed to keep the number of spots under 2-3% on
a sufficiently free volume.
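
The computation itself is trivial once you have the list of physical
blocks backing a file, however that list is obtained (the one below is
made up): count the transitions where the next block is not the previous
plus one.

#include <stdio.h>

/*
 * Percentage of non-consecutive "spots": block-to-block transitions
 * where the next physical block does not directly follow the previous
 * one.
 */
static double
frag_pct(const long *pblk, int n)
{
	int i, spots = 0;

	for (i = 1; i < n; i++)
		if (pblk[i] != pblk[i - 1] + 1)
			spots++;
	return (n > 1 ? 100.0 * spots / (n - 1) : 0.0);
}

int
main(void)
{
	long pblk[] = { 10, 11, 12, 40, 41, 90, 91, 92, 93 };

	printf("%.1f%% non-consecutive\n",
	    frag_pct(pblk, sizeof(pblk) / sizeof(pblk[0])));
	return (0);
}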
