Date:      Sun, 28 Sep 2014 14:44:15 -0400
From:      FF <fusionfoto@gmail.com>
To:        Steven Hartland <killing@multiplay.co.uk>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: Unexpected zfs ARC behavior
Message-ID:  <CAD=tpecvhp7B1TSwUE5Oded3kDC3R%2B8PFf84EDNdp0or6cKTBg@mail.gmail.com>
In-Reply-To: <5B80B5A2CE304E5BB2BCD3BE5735D0D1@multiplay.co.uk>
References:  <CAD=tpeeibpFb542R-atN=v6qwyOBguKhT2AtevTqqwXR0J4iwA@mail.gmail.com> <CAD=tpeejUezjRWh17u_w7HcdzqgVWc768p3FcsCTheeoOrduPQ@mail.gmail.com> <CAD=tpefP7F6gdaU0mOqoOu7j5_h1OJLR6UJUGLU7LQEUiVt79A@mail.gmail.com> <5B80B5A2CE304E5BB2BCD3BE5735D0D1@multiplay.co.uk>

Ok. Thanks!

I guess we'll test the patch for the ARC sizing on another machine and see
how it goes.

I turned off LZ4 compression on the file store, thinking that might have some
impact on the L2ARC errors I was accruing. In about 12 hours only one checksum
error and one I/O error have been added, and they seem to correspond to a time
when the L2ARC was 100% full.
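
For reference, that amounted to something along these lines (the dataset name
below is just a placeholder for the actual file store), after which I'm keeping
an eye on the two error counters:

  # placeholder dataset; the real file store dataset will differ
  zfs set compression=off tank/filestore
  # re-check the two L2ARC error counters from the stats further down
  sysctl kstat.zfs.misc.arcstats.l2_cksum_bad kstat.zfs.misc.arcstats.l2_io_error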

Best.

On Sun, Sep 28, 2014 at 10:26 AM, Steven Hartland <killing@multiplay.co.uk>
wrote:

> I wouldn't expect that to be related tbh.
>
> ----- Original Message ----- From: "FF" <fusionfoto@gmail.com>
> To: <freebsd-fs@freebsd.org>
> Sent: Sunday, September 28, 2014 10:00 AM
> Subject: Re: Unexpected zfs ARC behavior
>
>
>> I'm not sure, but this may be addressed by:
>>
>> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=191510.
>>
>> On the same system, on a brand new SSD (Intel 320 series, supports TRIM,
>> 973 hours of total run time, and no errors reported by smartctl), the L2ARC
>> is accruing some errors, and I can't tell whether they are related or not.
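>>
>> (The SMART figures above came from something like "smartctl -a /dev/ada1";
>> the device name there is just a placeholder for whichever one the SSD is.)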
>>
>> sysctl -a |grep l2_
>> kstat.zfs.misc.arcstats.evict_l2_cached: 8717673244160
>> kstat.zfs.misc.arcstats.evict_l2_eligible: 2307662193664
>> kstat.zfs.misc.arcstats.evict_l2_ineligible: 3432677600768
>> kstat.zfs.misc.arcstats.l2_hits: 11990587
>> kstat.zfs.misc.arcstats.l2_misses: 66503164
>> kstat.zfs.misc.arcstats.l2_feeds: 3721611
>> kstat.zfs.misc.arcstats.l2_rw_clash: 153
>> kstat.zfs.misc.arcstats.l2_read_bytes: 1152794672128
>> kstat.zfs.misc.arcstats.l2_write_bytes: 27096209368064
>> kstat.zfs.misc.arcstats.l2_writes_sent: 1741090
>> kstat.zfs.misc.arcstats.l2_writes_done: 1741090
>> kstat.zfs.misc.arcstats.l2_writes_error: 0
>> kstat.zfs.misc.arcstats.l2_writes_hdr_miss: 2479
>> kstat.zfs.misc.arcstats.l2_evict_lock_retry: 1149
>> kstat.zfs.misc.arcstats.l2_evict_reading: 137
>> kstat.zfs.misc.arcstats.l2_free_on_write: 1000507
>> kstat.zfs.misc.arcstats.l2_abort_lowmem: 225
>> kstat.zfs.misc.arcstats.l2_cksum_bad: 8
>> kstat.zfs.misc.arcstats.l2_io_error: 6
>> kstat.zfs.misc.arcstats.l2_size: 35261600768
>> kstat.zfs.misc.arcstats.l2_asize: 33760987648
>> kstat.zfs.misc.arcstats.l2_hdr_size: 58288928
>> kstat.zfs.misc.arcstats.l2_compress_successes: 31952645
>> kstat.zfs.misc.arcstats.l2_compress_zeros: 903
>> kstat.zfs.misc.arcstats.l2_compress_failures: 436119
>> kstat.zfs.misc.arcstats.l2_write_trylock_fail: 2481253
>> kstat.zfs.misc.arcstats.l2_write_passed_headroom: 61954692
>> kstat.zfs.misc.arcstats.l2_write_spa_mismatch: 26664326
>> kstat.zfs.misc.arcstats.l2_write_in_l2: 19967700467
>> kstat.zfs.misc.arcstats.l2_write_io_in_progress: 94722
>> kstat.zfs.misc.arcstats.l2_write_not_cacheable: 11359575294
>> kstat.zfs.misc.arcstats.l2_write_full: 730445
>> kstat.zfs.misc.arcstats.l2_write_buffer_iter: 3721611
>> kstat.zfs.misc.arcstats.l2_write_pios: 1741090
>> kstat.zfs.misc.arcstats.l2_write_buffer_bytes_scanned: 3464594001047552
>> kstat.zfs.misc.arcstats.l2_write_buffer_list_iter: 229359623
>> kstat.zfs.misc.arcstats.l2_write_buffer_list_null_iter: 65007815
>>
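>> (For what it's worth, these counters work out to an L2ARC hit rate of
>> l2_hits / (l2_hits + l2_misses) = 11990587 / 78493751, i.e. roughly 15%.)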
>>
>> Anyway, thanks in advance for any help offered.
>>
>> Best.
>>
>>
>> On Sat, Sep 27, 2014 at 2:24 PM, FF <fusionfoto@gmail.com> wrote:
>>
>>
>>> Hi, forwarding from -questions.
>>>
>>> It looks like the image didn't make it across, so here it is:
>>>
>>> http://snag.gy/Ghb7X.jpg
>>>
>>> Thanks in advance for any pointers or suggestions, or confirmation that
>>> this behavior is somehow normal.
>>>
>>> --
>>>
>>>
>>> So on a somewhat loaded ZFS file server (all NFS) serving mostly VMs, with
>>> the load steadily increasing over the month (migrated from other
>>> servers)... the ARC size has unexpectedly dropped (please see the attached
>>> graphic, if it makes it through the mailing list server). The system peaks
>>> around 2,000 NFS IOPS and hasn't exhibited any slowdowns. The L2ARC has a
>>> very low hit rate, and prefetch has been turned off to increase L2ARC
>>> efficiency... but none of that should really matter as far as I can tell,
>>> since the L1ARC should try to use all the memory it can.
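>>>
>>> For reference, prefetch was turned off with something like the following
>>> (from memory, so treat the exact values as illustrative):
>>>
>>>   sysctl vfs.zfs.prefetch_disable=1   # disable file-level prefetch entirely
>>>   sysctl vfs.zfs.l2arc_noprefetch=1   # keep prefetched buffers out of the L2ARC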
>>>
>>>
>>>> Some tuning of zfs sysctls (mostly write_boost and write_max) was done to
>>>> increase them, but this has since been backed off to the defaults.
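>>>>
>>>> Concretely, those are vfs.zfs.l2arc_write_max and vfs.zfs.l2arc_write_boost;
>>>> backing them off looked roughly like this (8 MiB being the stock default,
>>>> if I recall correctly):
>>>>
>>>>   sysctl vfs.zfs.l2arc_write_max=8388608    # bytes fed to the L2ARC per interval
>>>>   sysctl vfs.zfs.l2arc_write_boost=8388608  # extra headroom while the ARC is still filling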
>>>>
>>>> vmstat -m reports that 24G is dedicated to opensolaris:
>>>>     solaris 1113433 24565820K       - 6185559839  16,32,64,128,256,512,1024,2048,4096
>>>>
>>>> And top reports memory free:
>>>>
>>>> last pid: 32075;  load averages:  0.16,  0.13,  0.14    up 36+21:44:59  09:16:49
>>>> 24 processes:  1 running, 23 sleeping
>>>> CPU:  0.0% user,  0.0% nice,  0.0% system,  0.6% interrupt, 99.4% idle
>>>> Mem: 25M Active, 1187M Inact, 26G Wired, 2048K Cache, 1536M Buf, 4160M Free
>>>> ARC: 14G Total, 1679M MFU, 12G MRU, 28M Anon, 156M Header, 21M Other
>>>>
>>>> zfs-stats -a :
>>>>
>>>> ------------------------------------------------------------------------
>>>> ZFS Subsystem Report                            Sat Sep 27 09:17:07 2014
>>>> ------------------------------------------------------------------------
>>>>
>>>> System Information:
>>>>
>>>>         Kernel Version:                         902001 (osreldate)
>>>>         Hardware Platform:                      amd64
>>>>         Processor Architecture:                 amd64
>>>>
>>>>         ZFS Storage pool Version:               5000
>>>>         ZFS Filesystem Version:                 5
>>>>
>>>> FreeBSD 9.2-RELEASE-p10 #0 r270148M: Mon Aug 18 23:14:36 EDT 2014 root
>>>>  9:17AM  up 36 days, 21:45, 2 users, load averages: 0.12, 0.12, 0.13
>>>>
>>>> ------------------------------------------------------------------------
>>>>
>>>> System Memory:
>>>>
>>>>         0.08%   26.22   MiB Active,     3.75%   1.16    GiB Inact
>>>>         82.36%  25.47   GiB Wired,      0.01%   2.00    MiB Cache
>>>>         13.80%  4.27    GiB Free,       0.00%   1.03    MiB Gap
>>>>
>>>>         Real Installed:                         32.00   GiB
>>>>         Real Available:                 99.63%  31.88   GiB
>>>>         Real Managed:                   97.01%  30.93   GiB
>>>>
>>>>         Logical Total:                          32.00   GiB
>>>>         Logical Used:                   83.03%  26.57   GiB
>>>>         Logical Free:                   16.97%  5.43    GiB
>>>>
>>>> Kernel Memory:                                  23.54   GiB
>>>>         Data:                           99.90%  23.52   GiB
>>>>         Text:                           0.10%   23.13   MiB
>>>>
>>>> Kernel Memory Map:                              29.76   GiB
>>>>         Size:                           76.40%  22.74   GiB
>>>>         Free:                           23.60%  7.02    GiB
>>>>
>>>> ------------------------------------------------------------------------
>>>>
>>>> ARC Summary: (HEALTHY)
>>>>         Memory Throttle Count:                  0
>>>>
>>>> ARC Misc:
>>>>         Deleted:                                90.09m
>>>>         Recycle Misses:                         2.44m
>>>>         Mutex Misses:                           794.67k
>>>>         Evict Skips:                            17.90m
>>>>
>>>> ARC Size:                               44.78%  13.40   GiB
>>>>         Target Size: (Adaptive)         44.78%  13.40   GiB
>>>>         Min Size (Hard Limit):          12.50%  3.74    GiB
>>>>         Max Size (High Water):          8:1     29.93   GiB
>>>>
>>>> ARC Size Breakdown:
>>>>         Recently Used Cache Size:       86.16%  11.55   GiB
>>>>         Frequently Used Cache Size:     13.84%  1.85    GiB
>>>>
>>>> ARC Hash Breakdown:
>>>>         Elements Max:                           786.71k
>>>>         Elements Current:               87.05%  684.85k
>>>>         Collisions:                             153.35m
>>>>         Chain Max:                              16
>>>>         Chains:                                 194.92k
>>>>
>>>> ------------------------------------------------------------------------
>>>>
>>>> ARC Efficiency:                                 506.24m
>>>>         Cache Hit Ratio:                87.56%  443.25m
>>>>         Cache Miss Ratio:               12.44%  62.99m
>>>>         Actual Hit Ratio:               80.06%  405.29m
>>>>
>>>>         Data Demand Efficiency:         93.92%  372.74m
>>>>         Data Prefetch Efficiency:       49.49%  69.76m
>>>>
>>>>         CACHE HITS BY CACHE LIST:
>>>>           Anonymously Used:             6.10%   27.05m
>>>>           Most Recently Used:           31.37%  139.05m
>>>>           Most Frequently Used:         60.07%  266.24m
>>>>           Most Recently Used Ghost:     0.80%   3.56m
>>>>           Most Frequently Used Ghost:   1.66%   7.35m
>>>>
>>>>         CACHE HITS BY DATA TYPE:
>>>>           Demand Data:                  78.98%  350.09m
>>>>           Prefetch Data:                7.79%   34.53m
>>>>           Demand Metadata:              12.15%  53.86m
>>>>           Prefetch Metadata:            1.08%   4.77m
>>>>
>>>>         CACHE MISSES BY DATA TYPE:
>>>>           Demand Data:                  35.95%  22.65m
>>>>           Prefetch Data:                55.93%  35.23m
>>>>           Demand Metadata:              4.49%   2.83m
>>>>           Prefetch Metadata:            3.63%   2.29m
>>>>
>>>> ------------------------------------------------------------------------
>>>> <cut>
>>>>
>>>> Any suggestions? Is this expected or acceptable behavior?
>>>>
>>>> Thanks in advance,
>>>>
>>>> --
>>>> FF
>>>>
>>>>
>>>
>>>
>>> --
>>> FF
>>>
>>>
>>
>>
>> --
>> FF
>> _______________________________________________
>> freebsd-fs@freebsd.org mailing list
>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>>
>>


-- 
FF


