Date:      Thu, 26 Nov 2015 14:19:18 +0500
From:      "Eugene M. Zheganin" <emz@norma.perm.ru>
To:        freebsd-stable <freebsd-stable@freebsd.org>
Subject:   high disk %busy, while almost nothing happens
Message-ID:  <5656CE96.9000103@norma.perm.ru>

Hi.

I'm using FreeBSD 10.1-STABLE as an application server. Last week I
noticed that the disks are always busy, while gstat shows that the
activity, measured in IOPS/reads/writes, is low from my point of view:


 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    8     56     50     520  160.6      6    286  157.4  100.2  gpt/zfsroot0
    8     56     51   1474  162.8      5    228  174.4   99.9  gpt/zfsroot1
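
To watch just these two providers, gstat can be filtered; the regex
and the one-second refresh below are only examples:

# gstat -f 'gpt/zfsroot' -I 1s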

These %busy numbers aren't changing much, and from my point of view
both disks are doing very little.

zpool iostat:

[root@gw0:~]# zpool iostat 1
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zfsroot      270G   186G     90    131  1,17M  1,38M
----------  -----  -----  -----  -----  -----  -----
zfsroot      270G   186G    113     93   988K   418K
----------  -----  -----  -----  -----  -----  -----
zfsroot      270G   186G    112      0   795K  93,8K
----------  -----  -----  -----  -----  -----  -----
zfsroot      270G   186G    109     55  1,28M   226K
----------  -----  -----  -----  -----  -----  -----
zfsroot      270G   186G    112    116  1,36M   852K
----------  -----  -----  -----  -----  -----  -----
zfsroot      270G   186G    105     47  1,44M  1,61M
----------  -----  -----  -----  -----  -----  -----
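If a per-disk breakdown is useful, the verbose variant shows each
vdev separately:

# zpool iostat -v zfsroot 1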

What can cause this?

The pool is indeed fragmented, but I have other servers with a
comparable amount of fragmentation and no signs of busyness while
reads/writes are that low.

# zpool list
NAME      SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zfsroot   456G   270G   186G         -    51%    59%  1.00x  ONLINE  -
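
If the per-metaslab picture matters, it can be dumped with zdb (stock
command, shown here just for completeness; it may need the pool's
cachefile to be in the default location):

# zdb -mm zfsroot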

Loader settings:

vfs.root.mountfrom="zfs:zfsroot"
vfs.zfs.arc_max="2048M"
vfs.zfs.zio.use_uma=1
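
Both tunables can be double-checked at runtime:

# sysctl vfs.zfs.arc_max vfs.zfs.zio.use_uma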


I've tried toggling vfs.zfs.zio.use_uma, but without any noticeable
effect. I've also tried adding separate log devices; that didn't help
either.
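
For reference, the log device was added and removed with the usual
commands; gpt/slog0 is just a placeholder for the actual partition:

# zpool add zfsroot log gpt/slog0
# zpool remove zfsroot gpt/slog0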

Thanks.
Eugene.


