Date:      Thu, 19 Dec 2013 00:41:48 -0800
From:      Adrian Chadd <adrian@freebsd.org>
To:        Mark Felder <feld@freebsd.org>
Cc:        FreeBSD Net <freebsd-net@freebsd.org>, FreeBSD Stable Mailing List <freebsd-stable@freebsd.org>
Subject:   Re: 10.0-RC1: bad mbuf leak?
Message-ID:  <CAJ-VmonGE2=vmFOnCtLVLyNp0=F+NUd6OdU6=rROH_PWkyXSDA@mail.gmail.com>
In-Reply-To: <1387204500.12061.60192349.19EAE1B4@webmail.messagingengine.com>
References:  <1387204500.12061.60192349.19EAE1B4@webmail.messagingengine.com>

Hm, try reverting just the em code to that from a 10.0-BETA? Just in
case something changed there?
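
Something along these lines, assuming the tree is an svn checkout of
/usr/src; <beta-revision> is just a placeholder for whichever revision
your BETA was built from:

# cd /usr/src
# svn update -r <beta-revision> sys/dev/e1000
# make buildkernel && make installkernel
# shutdown -r now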



-a

On 16 December 2013 06:35, Mark Felder <feld@freebsd.org> wrote:
> Hi all,
>
> I think I'm experiencing a bad mbuf leak or something of the sort and I
> don't know how to diagnose this further.
>
> I have a machine at home that is mostly used for transcoding video for
> viewing on my TV via the multimedia/plexmediaserver port. This software
> runs in a jail and gets the actual files from my NAS via NFSv4. It's a
> pretty simple setup and sits idle unless I am watching TV.
>
> Did something network-related that could affect mbufs change between the
> 10.0-BETAs and 10.0-RC1? Ever since I upgraded this machine to
> RC1 it has been "crashing", which I diagnosed as actually being an mbuf
> exhaustion. Raising the mbufs brings it back to life, and it does
> mention the exhaustion on the system console.
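>
> ("Raising the mbufs" means bumping the limits at runtime; roughly
> something like this, with the exact values picked off the top of my
> head:)
>
> # sysctl kern.ipc.nmbclusters=2000000
> # sysctl kern.ipc.nmbufs=8000000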
>
> Last night, for example, I rebooted the machine and it has been sitting
> mostly idle. I wake up this morning to see this:
>
> # vmstat -z
>
> ITEM                   SIZE  LIMIT     USED     FREE      REQ FAIL SLEEP
> mbuf_packet:            256, 6511095,    1023,    1727,  8322474,      0,   0
> mbuf:                   256, 6511095, 2811247,    1563, 56000603, 121933,   0
> mbuf_cluster:          2048, 1017358,    2750,       0,     2750,   2740,   0
> mbuf_jumbo_page:       4096,  508679,       0,     152,  2831466,    137,   0
>
> # netstat -m
> 812270/3290/2815560 mbufs in use (current/cache/total)
> 1023/1727/2750/1017358 mbuf clusters in use (current/cache/total/max)
> 1023/1727 mbuf+clusters out of packet secondary zone in use (current/cache)
> 0/152/152/508679 4k (page size) jumbo clusters in use (current/cache/total/max)
> 0/0/0/150719 9k jumbo clusters in use (current/cache/total/max)
> 0/0/0/84779 16k jumbo clusters in use (current/cache/total/max)
> 705113K/4884K/709998K bytes allocated to network (current/cache/total)
> 121933/2740/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
> 0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
> 0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
> 137/0/0 requests for jumbo clusters denied (4k/9k/16k)
> 0 requests for sfbufs denied
> 0 requests for sfbufs delayed
> 0 requests for I/O initiated by sendfile
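>
> (To see how quickly it grows, the in-use and denied counters can be
> watched over time, e.g. roughly:)
>
> # while true; do date; netstat -m | egrep 'in use|denied'; sleep 600; done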
>
>
> The network interface is em(4).
>
> Things I've tried:
>
> - restarting all software/services including the jail
> - down/up the network interface
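>
> ("down/up" meaning literally this, with em0 assumed as the unit:)
>
> # ifconfig em0 down && ifconfig em0 up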
>
> The only thing that works is rebooting.
>
> Also, the only possible "strange" part of this setup is that the NFS
> mounts used by the jail are not direct. They're actually nullfs mounted
> into the jail as I want access to them outside of the jail as well. Not
> sure if nullfs+nfs could do something this bizarre.
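>
> (Concretely the layout is roughly equivalent to this; the paths are made
> up for illustration, not the real ones:)
>
> # mount -t nfs -o nfsv4 nas:/export/media /mnt/media
> # mount -t nullfs /mnt/media /jails/plex/mnt/media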
>
> If anyone has any hints on what I can do to track this down it would be
> appreciated.


