Date:      Fri, 29 May 2015 14:12:55 +0200
From:      SimpleRezo Backup <simplerezo@gmail.com>
To:        freebsd-questions@freebsd.org
Subject:   High "flows" value in vmstat -z
Message-ID:  <CALVu1vZJ6HWMy5ZDNmwMHwtEacnkWg5_CRYdyuFQAiFcADWCCA@mail.gmail.com>

Hello,

We are monitoring a lot of variables on all the servers we manage under
FreeBSD.

One of them is reporting a high "flows" value in "vmstat -z" (USED close
to LIMIT and FAIL high):
$ vmstat -z | \grep -e ITEM -e flows
ITEM                   SIZE  LIMIT     USED     FREE      REQ FAIL SLEEP
flows:                   56, 348192,  328398,   19794,  601404, 578976,   0
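
(For context, this is roughly the check our monitoring runs against that
line; a sketch only, and the field positions simply follow the header
above:)

$ vmstat -z | awk -F '[:,] *' '/^flows:/ {
    # per the header: $2=SIZE $3=LIMIT $4=USED $5=FREE $6=REQ $7=FAIL
    printf "flows: used=%d/%d (%.0f%%), fail=%d\n", $4, $3, 100 * $4 / $3, $7
  }'
flows: used=328398/348192 (94%), fail=578976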

I can't find any details about this item... I think it's a replacement for
ip4flow and ip6flow in previous FreeBSD versions (the server is running
FreeBSD 10.1 with a custom kernel), but I cannot find information about
those items either.

Could someone explain to me what this item is and what could be wrong?
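
(In case it helps anyone answering: I am assuming the "flows" zone comes
from the FLOWTABLE kernel option, which may be enabled in our custom
kernel; this is what I have been checking so far, with the config path
below being only a placeholder:)

$ sysctl net.flowtable            # prints the whole branch, if compiled in
$ sysctl -a | grep -i flowtable
$ grep FLOWTABLE /usr/src/sys/amd64/conf/MYKERNEL   # MYKERNEL is a placeholder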

Here is some more information:
$ grep -i ether /var/run/dmesg.boot
bge0: <Broadcom NetXtreme Gigabit Ethernet, ASIC rev. 0x5720000> mem
0xd90a0000-0xd90affff,0xd90b0000-0xd90bffff,0xd90c0000-0xd90cffff irq 16 at
device 0.0 on pci1
bge0: Ethernet address: 74:86:7a:d0:0a:5c
bge1: <Broadcom NetXtreme Gigabit Ethernet, ASIC rev. 0x5720000> mem
0xd90d0000-0xd90dffff,0xd90e0000-0xd90effff,0xd90f0000-0xd90fffff irq 17 at
device 0.1 on pci1
bge1: Ethernet address: 74:86:7a:d0:0a:5e

$ netstat -i
Name    Mtu Network       Address              Ipkts Ierrs Idrop     Opkts Oerrs  Coll
eth0   1500 <Link#1>      74:86:7a:d0:0a:5c 168954708     0     0 143226889     0     0
eth0      - 192.168.14.0  cosedia            96887104     -     - 120785904     -     -
eth1   1500 <Link#2>      74:86:7a:d0:0a:5e 103193946     0     0  77942787     0     0
eth1      - 82.231.225.0  bne75-5-82-231-22   8459404     -     -  12833636     -     -
<...>

$ netstat -m
1026/3294/4320 mbufs in use (current/cache/total)
1024/1772/2796/1017080 mbuf clusters in use (current/cache/total/max)
1024/1759 mbuf+clusters out of packet secondary zone in use (current/cache)
0/70/70/508539 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/150678 9k jumbo clusters in use (current/cache/total/max)
0/0/0/84756 16k jumbo clusters in use (current/cache/total/max)
2304K/4647K/6952K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0 requests for sfbufs denied
0 requests for sfbufs delayed
5153 requests for I/O initiated by sendfile

Another thing: it may or may not be related, but because of some Apple
hardware on the network, this host is handling a lot of "arp moves".
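
(For scale, here is how I count them; the grep pattern matches the
kernel's "arp: ... moved from ... to ..." messages, and the sysctl shown
controls whether those moves get logged at all:)

$ grep -c 'moved from' /var/log/messages   # ARP moves logged since last rotation
$ sysctl net.link.ether.inet.log_arp_movements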

Regards

--
Clement Moulin
SimpleRezo


