Date:      Tue, 22 Oct 2013 15:42:07 -0400
From:      Zaphod Beeblebrox <zbeeble@gmail.com>
To:        FreeBSD Net <freebsd-net@freebsd.org>, freebsd-fs <freebsd-fs@freebsd.org>
Subject:   istgt causes massive jumbo nmbclusters loss
Message-ID:  <CACpH0MeJbuj=rwuUZWa6HZg+Ab8b5VME6jZ_uPrhbcBtg2yP6w@mail.gmail.com>

I have a server:

FreeBSD virtual.accountingreality.com 9.2-STABLE FreeBSD 9.2-STABLE #13
r256549M: Tue Oct 15 16:29:48 EDT 2013
root@virtual.accountingreality.com:/usr/obj/usr/src/sys/VRA  amd64

It has an em0 with jumbo frames enabled:

em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9014

It has (among other things): ZFS, NFS, iSCSI (via istgt) and Samba.

Every day or two, it loses its ability to talk to the network.  An ifconfig
down/up on em0 gives the message about not being able to allocate the
receive buffers...

With everything running but iSCSI specifically unused, everything seems
fine.  When I start hitting istgt, I see the denied stat for 9k mbufs rise
very rapidly (this amount took only a few seconds):

[1:47:347]root@virtual:/usr/local/etc/iet> netstat -m
1313/877/2190 mbufs in use (current/cache/total)
20/584/604/523514 mbuf clusters in use (current/cache/total/max)
20/364 mbuf+clusters out of packet secondary zone in use (current/cache)
239/359/598/261756 4k (page size) jumbo clusters in use (current/cache/total/max)
1023/376/1399/77557 9k jumbo clusters in use (current/cache/total/max)
0/0/0/43626 16k jumbo clusters in use (current/cache/total/max)
10531K/6207K/16738K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
0/50199/0 requests for jumbo clusters denied (4k/9k/16k)
0/0/0 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
0 calls to protocol drain routines

... the denied number keeps rising, and somewhere in the millions or more
the machine stops --- but even with the large number of denied 9k clusters,
the "9k jumbo clusters in use" line always indicates some available.
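
For anyone wanting to watch that counter while reproducing this, a small
sketch (the function name `jumbo9k_denied` is just one I made up; it assumes
the `netstat -m` line format shown above, where the middle slash-separated
field of the "jumbo clusters denied" line is the 9k count):

```shell
# Print the 9k jumbo cluster "denied" count from `netstat -m` output on stdin.
# The relevant line looks like:
#   0/50199/0 requests for jumbo clusters denied (4k/9k/16k)
# and the middle slash-separated field of the first column is the 9k counter.
jumbo9k_denied() {
    awk '/jumbo clusters denied/ { split($1, n, "/"); print n[2] }'
}
```

Then `netstat -m | jumbo9k_denied` prints just the 9k figure, which is the
one climbing here.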

... so is this a tuning issue or a bug?  I've also tried ietd --- it seems
it basically doesn't want to work with a ZFS zvol (it refuses to use it).
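
On the tuning side, the 9k pool limit in the netstat output above (77557)
is controlled by the kern.ipc.nmbjumbo9 sysctl.  A sketch of how one might
inspect and raise it --- the new value below is purely illustrative, and
raising the limit would only mask the problem if this is actually a leak:

```shell
# Inspect the current 9k jumbo cluster limit and usage (FreeBSD).
sysctl kern.ipc.nmbjumbo9
netstat -m | grep '9k jumbo'

# Raise the limit at runtime (example value only; sizing depends on RAM
# and workload).
sysctl kern.ipc.nmbjumbo9=155114

# Make it persistent across reboots.
echo 'kern.ipc.nmbjumbo9=155114' >> /etc/sysctl.conf
```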


