Date:      Mon, 09 May 2016 14:25:54 +0000
From:      bugzilla-noreply@freebsd.org
To:        freebsd-bugs@FreeBSD.org
Subject:   [Bug 209396] ZFS primarycache attribute affects secondary cache as well
Message-ID:  <bug-209396-8@https.bugs.freebsd.org/bugzilla/>

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=209396

            Bug ID: 209396
           Summary: ZFS primarycache attribute affects secondary cache as
                    well
           Product: Base System
           Version: 10.3-RELEASE
          Hardware: Any
                OS: Any
            Status: New
          Severity: Affects Only Me
          Priority: ---
         Component: kern
          Assignee: freebsd-bugs@FreeBSD.org
          Reporter: noah.bergbauer@tum.de

# zpool create testpool gpt/test0 cache gpt/test1
# zpool list -v testpool
NAME          SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
testpool     1,98G   248K  1,98G         -     0%     0%  1.00x  ONLINE  -
  gpt/test0  1,98G   248K  1,98G         -     0%     0%
cache            -      -      -         -      -      -
  gpt/test1  2,00G    36K  2,00G         -     0%     0%
# zfs create -o compression=off testpool/testset
# zfs set mountpoint=/testset testpool/testset
# dd if=/dev/zero of=/testset/test.bin bs=1M count=1K
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 10.510661 secs (102157401 bytes/sec)
# zpool list -v testpool
NAME          SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
testpool     1,98G  1011M  1021M         -    33%    49%  1.00x  ONLINE  -
  gpt/test0  1,98G  1011M  1021M         -    33%    49%
cache            -      -      -         -      -      -
  gpt/test1  2,00G  1010M  1,01G         -     0%    49%


So far so good: the data was written both to the actual pool and to the cache
device. But what if we want to cache this huge file only in L2ARC and not in
memory?
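
For reference, both cache properties start out at their default of all on a
fresh dataset, so only primarycache should need changing:

# zfs get primarycache,secondarycache testpool/testset
NAME              PROPERTY        VALUE           SOURCE
testpool/testset  primarycache    all             default
testpool/testset  secondarycache  all             default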


# zfs set primarycache=metadata testpool/testset
# zfs get secondarycache testpool/testset
NAME              PROPERTY        VALUE           SOURCE
testpool/testset  secondarycache  all             default
# dd if=/testset/test.bin of=/dev/null bs=1M
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 0.182155 secs (5894663608 bytes/sec)


Still working as expected: This read was (obviously) serviced straight from RAM
because setting primarycache didn't immediately drop the cache. However,
touching the data with this read should cause it to be evicted from ARC.
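
One way to confirm the eviction (assuming the stock arcstats kstats that
FreeBSD exports via sysctl) is to watch the ARC size around the read; it
should drop by roughly the 1G of file data:

# sysctl kstat.zfs.misc.arcstats.size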

# zpool list -v testpool
NAME          SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
testpool     1,98G  1,00G  1006M         -    33%    50%  1.00x  ONLINE  -
  gpt/test0  1,98G  1,00G  1006M         -    33%    50%
cache            -      -      -         -      -      -
  gpt/test1  2,00G   556K  2,00G         -     0%     0%
# dd if=/testset/test.bin of=/dev/null bs=1M
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 8.518173 secs (126053066 bytes/sec)


The speed shows that this read came from disk, which is expected because RAM
caching is now disabled. What's not expected is that the data was removed from
the cache device as well. No matter the workload (sequential or random), ZFS
will no longer utilize L2ARC for this dataset even though secondarycache is
set to all. There is *some* IO still going on, so perhaps it is still caching
some metadata, as that is what primarycache is set to.
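
The L2ARC counters point the same way (again assuming the stock arcstats
names); after the property change, l2_size stays in the kilobyte range no
matter how often the file is read back, matching the 556K shown by zpool list:

# sysctl kstat.zfs.misc.arcstats.l2_size kstat.zfs.misc.arcstats.l2_write_bytes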



Note that I modified some sysctls to speed up cache warming for this test
(applied at runtime as shown below):
vfs.zfs.l2arc_norw: 0
vfs.zfs.l2arc_feed_again: 1
vfs.zfs.l2arc_noprefetch: 0
vfs.zfs.l2arc_feed_secs: 0
vfs.zfs.l2arc_write_max: 1000000000000
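
All of these are read/write sysctls on this system, so they can be changed at
runtime, e.g.:

# sysctl vfs.zfs.l2arc_noprefetch=0
# sysctl vfs.zfs.l2arc_write_max=1000000000000

(They can also be set as tunables in /boot/loader.conf if they should survive
a reboot.)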

-- 
You are receiving this mail because:
You are the assignee for the bug.


