Date:      Fri, 18 Oct 2013 15:57:16 +0100
From:      "Steven Hartland" <killing@multiplay.co.uk>
To:        "Vitalij Satanivskij" <satan@ukr.net>
Cc:        Vitalij Satanivskij <satan@ukr.net>, "Justin T. Gibbs" <gibbs@FreeBSD.org>, freebsd-current@freebsd.org, Borja Marcos <borjam@sarenet.es>, Dmitriy Makarov <supportme@ukr.net>
Subject:   Re: ZFS secondarycache on SSD problem on r255173
Message-ID:  <4459A6FAB7B8445C97CCB9EFF34FD4F0@multiplay.co.uk>
References:  <20131016080100.GA27758@hell.ukr.net> <3A44A8F6-8B62-4A23-819D-B91A3E6E5EF9@freebsd.org> <E5E6AB7C-C067-4B92-8A38-9DD811011D6F@FreeBSD.org> <7059AA6DCC0D46B8B1D33FC883C31643@multiplay.co.uk> <20131017061248.GA15980@hell.ukr.net> <326B470C65A04BC4BC83E118185B935F@multiplay.co.uk> <20131017073925.GA34958@hell.ukr.net> <2AFE1CBD9B124E3AB9E05A4E483CCE03@multiplay.co.uk> <20131018080148.GA75226@hell.ukr.net> <256B2E5A0BA44DCBB45BB3F3E820E190@multiplay.co.uk> <20131018144524.GA30018@hell.ukr.net>

Looking at the l2arc compression code I believe that metadata is always
compressed with lz4, even if compression is off on all datasets.
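
As I recall, the per-buffer eligibility check lives in dbuf.h. A
minimal sketch of the logic, paraphrased from memory of the code of
that era (treat the exact macro text as an assumption rather than a
verbatim quote from the tree):

/*
 * A dbuf is considered L2ARC-compressible when its dataset has
 * compression enabled OR when it holds metadata (unless the global
 * zfs_mdcomp_disable tunable is set). This would explain why
 * l2_compress_successes grows even with compression=off everywhere.
 */
#define DBUF_IS_L2COMPRESSIBLE(db)                                  \
        ((db)->db_objset->os_compress != ZIO_COMPRESS_OFF ||        \
        (dbuf_is_metadata(db) && zfs_mdcomp_disable == B_FALSE))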

This is backed up by what I'm seeing on my system here, which shows a
non-zero l2_compress_successes value even though I'm not using
compression at all.

I think we may well need the following patch to set the minimum block
size based on the vdev ashift and not SPA_MINBLOCKSIZE.

svn diff -x -p sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c
Index: sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c
===================================================================
--- sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c        (revision 256554)
+++ sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c        (working copy)
@@ -5147,7 +5147,7 @@ l2arc_compress_buf(l2arc_buf_hdr_t *l2hdr)
        len = l2hdr->b_asize;
        cdata = zio_data_buf_alloc(len);
        csize = zio_compress_data(ZIO_COMPRESS_LZ4, l2hdr->b_tmp_cdata,
-           cdata, l2hdr->b_asize, (size_t)SPA_MINBLOCKSIZE);
+           cdata, l2hdr->b_asize, (size_t)(1ULL << l2hdr->b_dev->l2ad_vdev->vdev_ashift));

        if (csize == 0) {
                /* zero block, indicate that there's nothing to write */
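
My reading of why this matters (an assumption on my part, not
something the code comments spell out): zio_compress_data() pads the
compressed result out to a multiple of the minblocksize it is given,
and that padded size becomes the buffer's asize on the cache device.
On an ashift=12 vdev the device only works in 4K units, so padding to
SPA_MINBLOCKSIZE (512) can leave buffers at sizes and offsets the
device cannot address cleanly, which would fit checksum errors that
only appeared after the ashift increase. A standalone sketch of the
arithmetic (illustrative only, not code from the tree):

#include <stdio.h>

/* round x up to the next multiple of align (align a power of two) */
#define P2ROUNDUP(x, align)     (-(-(x) & -(align)))

int
main(void)
{
        size_t c_len = 2500;    /* hypothetical compressed length */

        /* old code: pad to SPA_MINBLOCKSIZE (512) -> 2560 */
        printf("padded to 512:  %zu\n", P2ROUNDUP(c_len, (size_t)512));
        /* patched: pad to 1 << vdev_ashift, e.g. ashift=12 -> 4096 */
        printf("padded to 4096: %zu\n", P2ROUNDUP(c_len, (size_t)4096));
        return (0);
}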

Could you try this patch on your system, Vitalij, and see if it has any
effect on the l2_cksum_bad / l2_io_error counters?

    Regards
    Steve
----- Original Message ----- 
From: "Vitalij Satanivskij" <satan@ukr.net>
To: "Steven Hartland" <killing@multiplay.co.uk>
Cc: "Vitalij Satanivskij" <satan@ukr.net>; "Dmitriy Makarov" <supportme@ukr.net>; "Justin T. Gibbs" <gibbs@FreeBSD.org>; "Borja 
Marcos" <borjam@sarenet.es>; <freebsd-current@freebsd.org>
Sent: Friday, October 18, 2013 3:45 PM
Subject: Re: ZFS secondarycache on SSD problem on r255173


>
> Right now the stats are not entirely accurate because of another test.
>
> The test is simply this: all gpart information was destroyed on the SSDs
> and they are now used as raw cache devices. Just:
> 2013-10-18.11:30:49 zpool add disk1 cache /dev/ada1 /dev/ada2 /dev/ada3
>
> So at least the l2_size and l2_asize values are not current.
>
> But here it is:
>
> kstat.zfs.misc.arcstats.hits: 5178174063
> kstat.zfs.misc.arcstats.misses: 57690806
> kstat.zfs.misc.arcstats.demand_data_hits: 313995744
> kstat.zfs.misc.arcstats.demand_data_misses: 37414740
> kstat.zfs.misc.arcstats.demand_metadata_hits: 4719242892
> kstat.zfs.misc.arcstats.demand_metadata_misses: 9266394
> kstat.zfs.misc.arcstats.prefetch_data_hits: 1182495
> kstat.zfs.misc.arcstats.prefetch_data_misses: 9951733
> kstat.zfs.misc.arcstats.prefetch_metadata_hits: 143752935
> kstat.zfs.misc.arcstats.prefetch_metadata_misses: 1057939
> kstat.zfs.misc.arcstats.mru_hits: 118609738
> kstat.zfs.misc.arcstats.mru_ghost_hits: 1895486
> kstat.zfs.misc.arcstats.mfu_hits: 4914673425
> kstat.zfs.misc.arcstats.mfu_ghost_hits: 14537497
> kstat.zfs.misc.arcstats.allocated: 103796455
> kstat.zfs.misc.arcstats.deleted: 40168100
> kstat.zfs.misc.arcstats.stolen: 20832742
> kstat.zfs.misc.arcstats.recycle_miss: 15663428
> kstat.zfs.misc.arcstats.mutex_miss: 1456781
> kstat.zfs.misc.arcstats.evict_skip: 25960184
> kstat.zfs.misc.arcstats.evict_l2_cached: 891379153920
> kstat.zfs.misc.arcstats.evict_l2_eligible: 50578438144
> kstat.zfs.misc.arcstats.evict_l2_ineligible: 956055729664
> kstat.zfs.misc.arcstats.hash_elements: 8693451
> kstat.zfs.misc.arcstats.hash_elements_max: 14369414
> kstat.zfs.misc.arcstats.hash_collisions: 90967764
> kstat.zfs.misc.arcstats.hash_chains: 1891463
> kstat.zfs.misc.arcstats.hash_chain_max: 24
> kstat.zfs.misc.arcstats.p: 73170954752
> kstat.zfs.misc.arcstats.c: 85899345920
> kstat.zfs.misc.arcstats.c_min: 42949672960
> kstat.zfs.misc.arcstats.c_max: 85899345920
> kstat.zfs.misc.arcstats.size: 85899263104
> kstat.zfs.misc.arcstats.hdr_size: 1425948696
> kstat.zfs.misc.arcstats.data_size: 77769994240
> kstat.zfs.misc.arcstats.other_size: 6056233632
> kstat.zfs.misc.arcstats.l2_hits: 21725934
> kstat.zfs.misc.arcstats.l2_misses: 35876251
> kstat.zfs.misc.arcstats.l2_feeds: 130197
> kstat.zfs.misc.arcstats.l2_rw_clash: 110181
> kstat.zfs.misc.arcstats.l2_read_bytes: 391282009600
> kstat.zfs.misc.arcstats.l2_write_bytes: 1098703347712
> kstat.zfs.misc.arcstats.l2_writes_sent: 130037
> kstat.zfs.misc.arcstats.l2_writes_done: 130037
> kstat.zfs.misc.arcstats.l2_writes_error: 0
> kstat.zfs.misc.arcstats.l2_writes_hdr_miss: 375921
> kstat.zfs.misc.arcstats.l2_evict_lock_retry: 331
> kstat.zfs.misc.arcstats.l2_evict_reading: 43
> kstat.zfs.misc.arcstats.l2_free_on_write: 255730
> kstat.zfs.misc.arcstats.l2_abort_lowmem: 0
> kstat.zfs.misc.arcstats.l2_cksum_bad: 854359
> kstat.zfs.misc.arcstats.l2_io_error: 38254
> kstat.zfs.misc.arcstats.l2_size: 136696884736
> kstat.zfs.misc.arcstats.l2_asize: 131427690496
> kstat.zfs.misc.arcstats.l2_hdr_size: 742951208
> kstat.zfs.misc.arcstats.l2_compress_successes: 5565311
> kstat.zfs.misc.arcstats.l2_compress_zeros: 0
> kstat.zfs.misc.arcstats.l2_compress_failures: 0
> kstat.zfs.misc.arcstats.l2_write_trylock_fail: 325157131
> kstat.zfs.misc.arcstats.l2_write_passed_headroom: 4897854
> kstat.zfs.misc.arcstats.l2_write_spa_mismatch: 115704249
> kstat.zfs.misc.arcstats.l2_write_in_l2: 15114214372
> kstat.zfs.misc.arcstats.l2_write_io_in_progress: 63417
> kstat.zfs.misc.arcstats.l2_write_not_cacheable: 3291593934
> kstat.zfs.misc.arcstats.l2_write_full: 47672
> kstat.zfs.misc.arcstats.l2_write_buffer_iter: 130197
> kstat.zfs.misc.arcstats.l2_write_pios: 130037
> kstat.zfs.misc.arcstats.l2_write_buffer_bytes_scanned: 369077156457472
> kstat.zfs.misc.arcstats.l2_write_buffer_list_iter: 8015080
> kstat.zfs.misc.arcstats.l2_write_buffer_list_null_iter: 79825
> kstat.zfs.misc.arcstats.memory_throttle_count: 0
> kstat.zfs.misc.arcstats.duplicate_buffers: 0
> kstat.zfs.misc.arcstats.duplicate_buffers_size: 0
> kstat.zfs.misc.arcstats.duplicate_reads: 0
>
>
> The values of
> ---------------------------------
> kstat.zfs.misc.arcstats.l2_cksum_bad: 854359
> kstat.zfs.misc.arcstats.l2_io_error: 38254
> --------------------------------
>
> have not grown since the last cache reconfiguration; I will wait some time to see - maybe the problem disappears :)
>
>
>
>
>
>
> Steven Hartland wrote:
> SH> Hmm, so that rules out a TRIM-related issue. I wonder if the
> SH> increase in ashift has triggered a problem in compression.
> SH>
> SH> What are all the values reported by:
> SH> sysctl -a kstat.zfs.misc.arcstats
> SH>
> SH>     Regards
> SH>     Steve
> SH>
> SH> ----- Original Message ----- 
> SH> From: "Vitalij Satanivskij" <satan@ukr.net>
> SH> To: "Steven Hartland" <killing@multiplay.co.uk>
> SH> Cc: <satan@ukr.net>; "Justin T. Gibbs" <gibbs@FreeBSD.org>; <freebsd-current@freebsd.org>; "Borja Marcos" <borjam@sarenet.es>; "Dmitriy Makarov" <supportme@ukr.net>
> SH> Sent: Friday, October 18, 2013 9:01 AM
> SH> Subject: Re: ZFS secondarycache on SSD problem on r255173
> SH>
> SH>
> SH> > Hello.
> SH> >
> SH> > Yesterday the system was rebooted with vfs.zfs.trim.enabled=0
> SH> >
> SH> > System version: FreeBSD 10.0-BETA1 #6 r256669, without any changes in the code
> SH> >
> SH> > Uptime 10:51  up 16:41
> SH> >
> SH> > sysctl vfs.zfs.trim.enabled
> SH> > vfs.zfs.trim.enabled: 0
> SH> >
> SH> > Around 2 hours ago the error counters
> SH> > kstat.zfs.misc.arcstats.l2_cksum_bad: 854359
> SH> > kstat.zfs.misc.arcstats.l2_io_error: 38254
> SH> >
> SH> > began to grow from zero.
> SH> >
> SH> > After removing the cache
> SH> > 2013-10-18.10:37:10 zpool remove disk1 gpt/cache0 gpt/cache1 gpt/cache2
> SH> >
> SH> > and attaching it again
> SH> >
> SH> > 2013-10-18.10:38:28 zpool add disk1 cache gpt/cache0 gpt/cache1 gpt/cache2
> SH> >
> SH> > the counters stopped growing (of course they were not zeroed)
> SH> >
> SH> > before the cache removal, kstat.zfs.misc.arcstats.l2_asize was around 280GB
> SH> >
> SH> > the hardware size of the L2 cache is 3x164G
> SH> >
> SH> > =>       34  351651821  ada3  GPT  (168G)
> SH> >         34          6        - free -  (3.0K)
> SH> >         40    8388608     1  zil2  (4.0G)
> SH> >    8388648  343263200     2  cache2  (164G)
> SH> >  351651848          7        - free -  (3.5K)
> SH> >
> SH> >
> SH> > Any hypotheses on what else we can test/try?
> SH> >
> SH> >
> SH> >
> SH> > Steven Hartland wrote:
> SH> > SH> Correct.
> SH> > SH> ----- Original Message ----- 
> SH> > SH> From: "Vitalij Satanivskij" <satan@ukr.net>
> SH> > SH>
> SH> > SH>
> SH> > SH> > Just to be sure I understand you correctly, I need to test the following configuration:
> SH> > SH> >
> SH> > SH> > 1) System with the ashift patch, i.e. just the latest stable/10 revision
> SH> > SH> > 2) vfs.zfs.trim.enabled=0 in /boot/loader.conf
> SH> > SH> >
> SH> > SH> > So really the only difference from the default system configuration is that TRIM functionality is disabled?
> SH> > SH> >
> SH> > SH> >
> SH> > SH> >
> SH> > SH> > Steven Hartland wrote:
> SH> > SH> > SH> Still worth testing with the problem version installed but
> SH> > SH> > SH> with trim disabled to see if that clears the issues; if
> SH> > SH> > SH> nothing else it will confirm or deny whether TRIM is involved.

