Date:      Tue, 4 Mar 2003 19:01:15 -0500
From:      Hiten Pandya <hiten@unixdaemons.com>
To:        Petri Helenius <pete@he.iki.fi>
Cc:        Bosko Milekic <bmilekic@unixdaemons.com>, freebsd-current@FreeBSD.ORG
Subject:   Re: mbuf cache
Message-ID:  <20030305000115.GA3455@unixdaemons.com>
In-Reply-To: <0e3701c2e2a7$aaa2b180$932a40c1@PHE>
References:  <0ded01c2e295$cbef0940$932a40c1@PHE> <20030304164449.A10136@unixdaemons.com> <0e1b01c2e29c$d1fefdc0$932a40c1@PHE> <20030304173809.A10373@unixdaemons.com> <0e2b01c2e2a3$96fd3b40$932a40c1@PHE> <20030304182133.A10561@unixdaemons.com> <0e3701c2e2a7$aaa2b180$932a40c1@PHE>

Petri Helenius (Wed, Mar 05, 2003 at 01:42:05AM +0200) wrote:
> >
> >   This does look odd... maybe there's a leak somewhere... does "in use"
> >   go back down to a much lower number eventually?  What kind of test are
> >   you running?  "in pool" means that that's the number in the cache
> >   while "in use" means that that's the number out of the cache
> >   currently being used by the system; but if you're telling me that
> >   there's no way usage could be that high while you ran the netstat,
> >   either there's a serious leak somewhere or I got the stats wrong
> >   (anyone else notice irregular stats?)
> >
> I think I figured this out: the "em" driver is allocating an mbuf for each
> receive descriptor regardless of whether it's needed or not. Does this cause
> a performance issue if there are 8000 mbufs in use and we get 100k-150k
> frees and allocations a second (one for every packet)?
> 
> (I have the em driver configured for 4096 receive descriptors)

While you are there debugging mbuf issues, you might also want to try
this patch:

%%%
Index: sys/dev/em/if_em.c
===================================================================
RCS file: /home/ncvs/src/sys/dev/em/if_em.c,v
retrieving revision 1.19
diff -u -r1.19 if_em.c
--- sys/dev/em/if_em.c	19 Feb 2003 05:47:03 -0000	1.19
+++ sys/dev/em/if_em.c	4 Mar 2003 23:49:02 -0000
@@ -1802,15 +1802,10 @@
 	ifp = &adapter->interface_data.ac_if;
 
 	if (mp == NULL) {
-		MGETHDR(mp, M_DONTWAIT, MT_DATA);
+		mp = m_getcl(M_DONTWAIT, MT_DATA, M_PKTHDR);
 		if (mp == NULL) {
 			adapter->mbuf_alloc_failed++;
-			return(ENOBUFS);
-		}
-		MCLGET(mp, M_DONTWAIT);
-		if ((mp->m_flags & M_EXT) == 0) {
 			m_freem(mp);
-			adapter->mbuf_cluster_failed++;
 			return(ENOBUFS);
 		}
 		mp->m_len = mp->m_pkthdr.len = MCLBYTES;
%%%

This is sort of an optimization.  It reduces locking operations and results
in fewer routine calls overall.  It would be useful to see profiling and
performance results with this patch applied.
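
To make the difference concrete, here is a minimal sketch of the two
allocation patterns the patch swaps (the alloc_rx_mbuf_* names and the
standalone functions are mine, not from the driver; error handling is
condensed):

	#include <sys/param.h>
	#include <sys/systm.h>
	#include <sys/mbuf.h>

	/* Old pattern: allocate the mbuf, then the cluster, in two
	 * separate trips into the allocator. */
	static struct mbuf *
	alloc_rx_mbuf_old(void)
	{
		struct mbuf *mp;

		MGETHDR(mp, M_DONTWAIT, MT_DATA);	/* header mbuf */
		if (mp == NULL)
			return (NULL);
		MCLGET(mp, M_DONTWAIT);			/* attach 2K cluster */
		if ((mp->m_flags & M_EXT) == 0) {	/* no cluster */
			m_freem(mp);
			return (NULL);
		}
		mp->m_len = mp->m_pkthdr.len = MCLBYTES;
		return (mp);
	}

	/* New pattern: one m_getcl() call hands back an mbuf with the
	 * cluster already attached, so the allocator (and its locks)
	 * is entered only once per buffer. */
	static struct mbuf *
	alloc_rx_mbuf_new(void)
	{
		struct mbuf *mp;

		mp = m_getcl(M_DONTWAIT, MT_DATA, M_PKTHDR);
		if (mp == NULL)
			return (NULL);
		mp->m_len = mp->m_pkthdr.len = MCLBYTES;
		return (mp);
	}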

> I'm running a snort-like application with the interface receiving packets
> only.  It can either connect to a netgraph node or use bpf; both seem to
> have similar performance (most CPU is used elsewhere), as listed in the
> email I sent previously.

This code from the 'em' driver worries me a bit:

			if (em_get_buf(i, adapter, NULL) == ENOBUFS) {
				adapter->dropped_pkts++;
				em_get_buf(i, adapter, mp);
				if (adapter->fmp != NULL)
					m_freem(adapter->fmp);
				adapter->fmp = NULL;
				adapter->lmp = NULL;
			}
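
For reference, my reading of that path, with comments added (the
annotations are mine; I am assuming fmp/lmp are the pointers used to
chain a received packet that spans several descriptors):

			/* A fresh mbuf could not be allocated for this
			 * descriptor, so the packet is counted as dropped
			 * and the old mbuf is recycled back into the ring;
			 * note the second em_get_buf() return value is not
			 * checked. */
			if (em_get_buf(i, adapter, NULL) == ENOBUFS) {
				adapter->dropped_pkts++;
				em_get_buf(i, adapter, mp);
				/* Discard any partially assembled chain. */
				if (adapter->fmp != NULL)
					m_freem(adapter->fmp);
				adapter->fmp = NULL;
				adapter->lmp = NULL;
			}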

Cheers.

-- 
Hiten Pandya (hiten@unixdaemons.com, hiten@uk.FreeBSD.org)
http://www.unixdaemons.com/~hiten/
