Date:      Sat, 1 Jan 2011 11:09:31 -0800
From:      Artem Belevich <fbsdlist@src.cx>
To:        Attila Nagy <bra@fsn.hu>
Cc:        freebsd-fs@freebsd.org, freebsd-stable@freebsd.org
Subject:   Re: New ZFSv28 patchset for 8-STABLE
Message-ID:  <AANLkTimGdnESX-wwD52Fh4wCfS4xZ-839g6Ste5Bwihu@mail.gmail.com>
In-Reply-To: <4D1F7008.3050506@fsn.hu>
References:  <4D0A09AF.3040005@FreeBSD.org> <4D1F7008.3050506@fsn.hu>

On Sat, Jan 1, 2011 at 10:18 AM, Attila Nagy <bra@fsn.hu> wrote:
> What I see:
> - increased CPU load
> - decreased L2 ARC hit rate, decreased SSD (ad[46]), therefore increased
> hard disk load (IOPS graph)
>
...
> Any ideas on what could cause these? I haven't upgraded the pool version and
> nothing was changed in the pool or in the file system.


The fact that the L2ARC is full does not mean that it contains the right
data. The initial L2ARC warm-up happens at a much higher rate than the
rate at which the L2ARC is updated once it has been filled. Even the
accelerated warm-up took almost a day in your case. For the L2ARC to
warm up properly you may have to wait quite a bit longer. My guess
is that it should slowly improve over the next few days as data flows
through the L2ARC and the bits that are hit more often take up residence
there. The larger your data set, the longer it will take for the L2ARC
to catch the right data.
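
You can watch the warm-up progress yourself. This is just a sketch, assuming
a FreeBSD system where the L2ARC counters are exposed as
kstat.zfs.misc.arcstats sysctls; the counter values below are made-up
examples standing in for live readings:

```shell
#!/bin/sh
# On a live FreeBSD box you would sample the cumulative counters, e.g.:
#   hits=$(sysctl -n kstat.zfs.misc.arcstats.l2_hits)
#   misses=$(sysctl -n kstat.zfs.misc.arcstats.l2_misses)
# Example values used here so the arithmetic is visible:
hits=123456
misses=654321

# Integer hit-rate percentage: 100 * hits / (hits + misses)
ratio=$(( 100 * hits / (hits + misses) ))
echo "L2ARC hit rate: ${ratio}%"
```

Sampling the deltas of those counters every few minutes (rather than the
totals since boot) will show the hit rate climbing as the cache warms up.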

Do you have similar graphs from the pre-patch system just after a reboot?
I suspect they may show similarly abysmal L2ARC hit rates initially,
too.

--Artem


