Date:      Wed, 08 Jan 2014 15:32:59 -0500
From:      Adam McDougall <mcdouga9@egr.msu.edu>
To:        freebsd-stable@freebsd.org
Subject:   Re: 10.0-RC1: bad mbuf leak?
Message-ID:  <52CDB5FB.90108@egr.msu.edu>
In-Reply-To: <1389033148.5084.67285353.3B31094A@webmail.messagingengine.com>
References:  <1387204500.12061.60192349.19EAE1B4@webmail.messagingengine.com> <CAJ-VmonGE2=vmFOnCtLVLyNp0=F+NUd6OdU6=rROH_PWkyXSDA@mail.gmail.com> <EE2A759D-B9BB-4176-BAC6-D6D3C45E2CD1@FreeBSD.org> <3A115E20-3ADB-49BA-885D-16189B97842B@FreeBSD.org> <20131225133356.GL71033@FreeBSD.org> <BAD36C0E-BC8D-4AB1-9E11-FE26E537DBA7@lurchi.franken.de> <20140104195505.GV71033@glebius.int.ru> <11BB3983-28F7-40EF-87DA-FD95BD297EA7@FreeBSD.org> <1389033148.5084.67285353.3B31094A@webmail.messagingengine.com>

On 01/06/2014 13:32, Mark Felder wrote:
> It's not looking promising. mbuf usage is really high again. I haven't
> hit the point where the system is unavailable on the network but it
> appears to be approaching.
> 
> root@skeletor:/usr/home/feld # netstat -m
> 4093391/3109/4096500 mbufs in use (current/cache/total)
> 1025/1725/2750/1017354 mbuf clusters in use (current/cache/total/max)
> 1025/1725 mbuf+clusters out of packet secondary zone in use (current/cache)
> 0/492/492/508677 4k (page size) jumbo clusters in use (current/cache/total/max)
> 0/0/0/150719 9k jumbo clusters in use (current/cache/total/max)
> 0/0/0/84779 16k jumbo clusters in use (current/cache/total/max)
> 1025397K/6195K/1031593K bytes allocated to network (current/cache/total)
> 0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
> 0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
> 0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
> 0/0/0 requests for jumbo clusters denied (4k/9k/16k)
> 0 requests for sfbufs denied
> 0 requests for sfbufs delayed
> 0 requests for I/O initiated by sendfile
> 
> root@skeletor:/usr/home/feld # vmstat -z | grep mbuf
> mbuf_packet:            256, 6511065,    1025,    1725, 9153363,   0,   0
> mbuf:                   256, 6511065, 4092367,    1383,74246554,   0,   0
> mbuf_cluster:          2048, 1017354,    2750,       0,    2750,   0,   0
> mbuf_jumbo_page:       4096, 508677,       0,     492, 2655317,   0,   0
> mbuf_jumbo_9k:         9216, 150719,       0,       0,       0,   0,   0
> mbuf_jumbo_16k:       16384,  84779,       0,       0,       0,   0,   0
> mbuf_ext_refcnt:          4,      0,       0,       0,       0,   0,   0
> 
> root@skeletor:/usr/home/feld # uptime
> 12:30PM  up 15:05, 1 user, load averages: 0.24, 0.23, 0.27
> 
> root@skeletor:/usr/home/feld # uname -a
> FreeBSD skeletor.feld.me 10.0-PRERELEASE FreeBSD 10.0-PRERELEASE #17 r260339M: Sun Jan  5 21:23:10 CST 2014

Can you try your NFS mounts directly from within the jails, or stop one
or more jails for a night and see whether usage stabilizes?  Is anything
else unusual in play besides the jails/nullfs, such as pf, ipfw, NAT, or
VIMAGE?  My systems running 10 seem fine, including the one running
poudriere builds, which uses jails and (I think) nullfs, but not NFS.
Do mbufs go up when you generate NFS traffic?
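
For that last question, one quick check is to sample the counters while
forcing NFS reads.  A rough, untested sketch (the file path is just a
placeholder; point it at a large file on one of your NFS mounts):

  #!/bin/sh
  # Sample mbuf counters once a minute while generating NFS read
  # traffic, so any growth can be correlated with NFS activity.
  NFSFILE=/path/to/large-file-on-nfs    # placeholder, adjust
  while true; do
      date
      netstat -m | head -2
      vmstat -z | grep -E '^mbuf'
      # Read ~100MB through the NFS mount and discard the data.
      dd if="$NFSFILE" of=/dev/null bs=1m count=100 2>/dev/null
      sleep 60
  done

If the mbuf "current" count climbs on each pass but stays flat with the
dd commented out, that would point pretty squarely at the NFS path.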


