Date:      Mon, 25 Feb 2013 11:00:10 -0600
From:      Kevin Day <toasty@dragondata.com>
To:        Andriy Gapon <avg@FreeBSD.org>
Cc:        FreeBSD Filesystems <freebsd-fs@FreeBSD.org>
Subject:   Re: Improving ZFS performance for large directories
Message-ID:  <237DCD81-5CAB-466B-8BF4-543D195FA545@dragondata.com>
In-Reply-To: <5124AC69.6010709@FreeBSD.org>
References:  <19DB8F4A-6788-44F6-9A2C-E01DEA01BED9@dragondata.com> <CAJjvXiE%2B8OMu_yvdRAsWugH7W=fhFW7bicOLLyjEn8YrgvCwiw@mail.gmail.com> <F4420A8C-FB92-4771-B261-6C47A736CF7F@dragondata.com> <20130201192416.GA76461@server.rulingia.com> <19E0C908-79F1-43F8-899C-6B60F998D4A5@dragondata.com> <5124AC69.6010709@FreeBSD.org>


On Feb 20, 2013, at 4:58 AM, Andriy Gapon <avg@FreeBSD.org> wrote:

> on 19/02/2013 22:10 Kevin Day said the following:
>> Timing doing an "ls" in large directories 20 times, the first is the slowest,
>> then all subsequent listings are roughly the same. There doesn't appear to be any
>> gain after 20 repetitions.
>
> I think that the above could be related to the below:
>
>> 	vfs.zfs.arc_meta_limit                  16398159872
>> 	vfs.zfs.arc_meta_used                   16398120264
>
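For reference, those two sysctl values show the metadata ARC sitting essentially flush against its cap. A quick back-of-the-envelope check (the numbers are copied verbatim from the quoted sysctl output; this is just arithmetic, not a diagnostic tool):

```shell
# Values quoted from the sysctl output above (bytes).
limit=16398159872
used=16398120264

# How much metadata headroom is left before the ARC meta cap is hit?
awk -v l="$limit" -v u="$used" 'BEGIN {
    printf "headroom: %d bytes (%.4f%% of limit in use)\n", l - u, 100 * u / l
}'
# headroom: 39608 bytes (99.9998% of limit in use)
```

Roughly 40 KB of slack against a ~16 GB limit, i.e. the cache is effectively full, which is consistent with metadata being evicted as fast as it is read in.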


Doing some more testing…

After a fresh reboot, without the SSD cache, an ls(1) in a large directory is pretty fast. After we've been running for an hour or so, the speed gets progressively worse. I can kill all other activity on the system, and it's still bad. I reboot, and it's back to normal.

On an idle system, I watched gstat(8); during the ls(1) the drives are basically at 100% busy while it's running, reading far more data than I'd think necessary to read a directory. top(1) shows that the "zfskern" kernel process is burning a lot of CPU during that time too. Is there a possibility we're hitting a bug or sub-optimal access pattern when arc_meta_limit is reached? Something akin to: if data that was just read doesn't get put into the metadata ARC, ZFS ends up re-reading the same blocks many times just to iterate through the directory?

I've been hesitating to increase the ARC size because we've only got 64GB of memory here and can't add any more. The processes running on the system need a fair chunk of RAM themselves, so I'm trying to figure out how we can either upgrade this motherboard to something newer or reduce our memory usage. I've got a feeling I'm going to need to do this, but since this is a non-commercial project it's kinda hard to spend that much money on it. :)
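For what it's worth, if more metadata caching does turn out to be the answer, the knobs on FreeBSD of this era are boot-time loader tunables. A sketch only; the values below are placeholders, not recommendations, and would have to be sized to leave enough of the 64GB for the applications:

```
# /boot/loader.conf -- ZFS ARC tunables (values are assumptions, sizes in bytes)
vfs.zfs.arc_max="51539607552"          # cap total ARC at ~48 GB
vfs.zfs.arc_meta_limit="24000000000"   # allow more of the ARC to hold metadata
```

Raising arc_meta_limit without raising arc_max just shifts the ARC's data/metadata balance, which may be enough on its own for a metadata-heavy workload like this.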

-- Kevin



