Date:      Tue, 22 Sep 2015 20:38:25 +0300
From:      Dmitrijs <war@dim.lv>
To:        freebsd-questions@freebsd.org
Subject:   zfs performance degradation
Message-ID:  <56019211.2050307@dim.lv>

Good afternoon,

   I've encountered strange ZFS behavior: serious performance 
degradation over a few days. Right after setup, on a fresh ZFS pool (2 
HDDs in a mirror), I ran a test on a 30 GB file with dd, like
dd if=test.mkv of=/dev/null bs=64k
and got 150+ MB/s.

Today I get only about 90 MB/s. I have tested with different block 
sizes, many times; the speed seems stable within +-5%.

nas4free: divx# dd if=test.mkv of=/dev/null bs=64k
484486+1 records in
484486+1 records out
31751303111 bytes transferred in 349.423294 secs (90867734 bytes/sec)
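
As a cross-check I could also read straight from the raw providers, 
which bypasses ZFS and would show whether the disks themselves still 
deliver full sequential speed (I have not run this yet, just an idea):

   # read ~10 GB directly off each mirror member, read-only
   dd if=/dev/ada0 of=/dev/null bs=1m count=10240
   dd if=/dev/ada1 of=/dev/null bs=1m count=10240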



Computer/system details:

  nas4free: /mnt# uname -a
FreeBSD nas4free.local 10.2-RELEASE-p2 FreeBSD 10.2-RELEASE-p2 #0 
r287260M: Fri Aug 28 18:38:18 CEST 2015 
root@dev.nas4free.org:/usr/obj/nas4free/usr/src/sys/NAS4FREE-amd64 amd64

RAM: 4 GB
I've got two brand new HGST HDN724040ALE640 drives, 4 TB, 7200 rpm 
(ada0, ada1) for pool data4.
Another pool, data2, performs slightly better even on older/cheaper 
5400 rpm WD Green HDDs: up to 99 MB/s.
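
The individual drives could also be benchmarked with diskinfo to rule 
out a hardware problem (just a thought, output not included here):

   # naive seek/transfer benchmark of one data4 member
   diskinfo -t /dev/ada0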


  nas4free: /mnt# zpool status
   pool: data2
  state: ONLINE
   scan: none requested
config:

         NAME        STATE     READ WRITE CKSUM
         data2       ONLINE       0     0     0
           mirror-0  ONLINE       0     0     0
             ada2    ONLINE       0     0     0
             ada3    ONLINE       0     0     0

errors: No known data errors

   pool: data4
  state: ONLINE
   scan: none requested
config:

         NAME        STATE     READ WRITE CKSUM
         data4       ONLINE       0     0     0
           mirror-0  ONLINE       0     0     0
             ada0    ONLINE       0     0     0
             ada1    ONLINE       0     0     0

errors: No known data errors


While dd is running, gstat shows something like this:

dT: 1.002s  w: 1.000s
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    0    366    366  46648    1.1      0      0    0.0    39.6| ada0
    1    432    432  54841    1.0      0      0    0.0    45.1| ada1



So IOPS are quite high while %busy is quite low: it averages about 50%, 
with rare peaks to 85-90%.
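
If it helps, I could run zpool iostat alongside the dd to get the 
per-vdev numbers in ZFS's own terms, for example:

   # per-vdev bandwidth/ops for data4, refreshed every second
   zpool iostat -v data4 1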

Even top shows no significant load:

last pid: 61983; load averages: 0.44, 0.34, 0.37 up 11+07:51:31 16:44:56
40 processes: 1 running, 39 sleeping
CPU: 0.3% user, 0.0% nice, 6.4% system, 1.1% interrupt, 92.1% idle
Mem: 21M Active, 397M Inact, 2101M Wired, 56M Cache, 94M Buf, 1044M Free
ARC: 1024M Total, 232M MFU, 692M MRU, 160K Anon, 9201K Header, 91M Other
Swap: 4096M Total, 4096M Free
Not displaying idle processes.
  PID USERNAME  THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
61981 root        1  30    0 12364K  2084K zio->i  3   0:09  18.80% dd
61966 root        1  22    0 58392K  7144K select  3   0:24   3.86% proftpd
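
The ARC sits around 1 GB on this 4 GB machine; if it matters I can also 
post the limits, e.g.:

   # ARC limits and current size (FreeBSD 10.x sysctl names)
   sysctl vfs.zfs.arc_max vfs.zfs.arc_min
   sysctl kstat.zfs.misc.arcstats.size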



Zpool list:

  nas4free: /mnt# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP HEALTH  ALTROOT
data2  1.81T   578G  1.25T         -    11%    31%  1.00x ONLINE  -
data4  3.62T  2.85T   797G         -    36%    78%  1.00x ONLINE  -
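
If needed, I can also post the capacity and fragmentation properties 
directly, e.g.:

   # capacity / fragmentation / free space per pool
   zpool get capacity,fragmentation,free data2 data4
   zpool list -v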


Could this happen because the pool is 78% full? Does that mean I cannot 
fill the pool completely?
Can anyone please advise how I could fix this, or is this normal?
I've googled a lot about vmaxnodes and vminnodes, but the advice is 
mostly contradictory and hasn't helped.
I can provide additional system output on request.

best regards,
Dmitry



