Date:      Tue, 4 Mar 2003 19:08:51 -0500
From:      Bosko Milekic <bmilekic@unixdaemons.com>
To:        Petri Helenius <pete@he.iki.fi>
Cc:        freebsd-current@FreeBSD.ORG
Subject:   Re: mbuf cache
Message-ID:  <20030304190851.A10853@unixdaemons.com>
In-Reply-To: <0e3701c2e2a7$aaa2b180$932a40c1@PHE>; from pete@he.iki.fi on Wed, Mar 05, 2003 at 01:42:05AM +0200
References:  <0ded01c2e295$cbef0940$932a40c1@PHE> <20030304164449.A10136@unixdaemons.com> <0e1b01c2e29c$d1fefdc0$932a40c1@PHE> <20030304173809.A10373@unixdaemons.com> <0e2b01c2e2a3$96fd3b40$932a40c1@PHE> <20030304182133.A10561@unixdaemons.com> <0e3701c2e2a7$aaa2b180$932a40c1@PHE>


On Wed, Mar 05, 2003 at 01:42:05AM +0200, Petri Helenius wrote:
> >
> >   This does look odd... maybe there's a leak somewhere... does "in use"
> >   go back down to a much lower number eventually?  What kind of test are
> >   you running?  "in pool" is the number of mbufs sitting in the cache,
> >   while "in use" is the number out of the cache currently being used by
> >   the system.  If you're telling me there's no way usage could really be
> >   that high while you ran the netstat, then either there's a serious
> >   leak somewhere or I got the stats wrong (anyone else notice irregular
> >   stats?)
> >
> I think I figured this out: the "em" driver allocates an mbuf for each
> receive descriptor regardless of whether it's actually needed. Does this
> cause a performance issue if there are 8000 mbufs in use and we get
> 100k-150k frees and allocations a second (one for every packet)?
> 
> (I have the em driver configured for 4096 receive descriptors)

  Yeah, it kinda sucks, but I'm not sure how it works... when are the
  mbufs freed?  If they're all freed in one continuous for loop, that
  kinda sucks too.
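
  To illustrate what I mean: a driver that posts one cluster per receive
  descriptor typically has a refill loop roughly like the sketch below.
  This is NOT the actual if_em code -- the ring layout and the names
  (rx_ring, NUM_RX_DESC, rx_slot) are made up -- it's only meant to show
  why "in use" sits at about the number of descriptors.

      /* Hypothetical refill loop; not the real if_em structures. */
      #include <sys/param.h>
      #include <sys/mbuf.h>

      #define NUM_RX_DESC     4096

      struct rx_slot {
              struct mbuf     *m;     /* cluster posted to the NIC */
      };

      static struct rx_slot rx_ring[NUM_RX_DESC];

      static void
      rx_ring_refill(void)
      {
              struct mbuf *m;
              int i;

              for (i = 0; i < NUM_RX_DESC; i++) {
                      if (rx_ring[i].m != NULL)
                              continue;  /* descriptor already has a buffer */
                      /* One mbuf + cluster per descriptor, used or not. */
                      MGETHDR(m, M_DONTWAIT, MT_DATA);
                      if (m == NULL)
                              break;
                      MCLGET(m, M_DONTWAIT);
                      if ((m->m_flags & M_EXT) == 0) {
                              m_freem(m);
                              break;     /* retry on the next refill */
                      }
                      rx_ring[i].m = m;
              }
      }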
    
> >   Another thing I find odd about those stats is that you've set the high
> >   watermark to 8192, which means that in the next free, you should be
> >   moving buckets to the general cache... see if that's really
> >   happening...  The low watermark doesn't affect anything right now.
> 
> Nothing seems to be moving to the GEN pool.

  Lower the high watermark to like 512... wait for the next free...  if
  it's still not moving, but you see that the per-cpu caches are being
  used ("in use" is changing), please let me know ASAP.
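
  Just so we're on the same page about that check: conceptually, what I'd
  expect to happen on a free, once a per-CPU cache holds more free mbufs
  than the high watermark, is roughly the sketch below.  This is only an
  illustration of the idea, not the real mb_alloc code; the structure and
  function names here are made up.

      struct bucket {
              struct bucket   *next;
              int              nfree;    /* free mbufs in this bucket */
              /* ... the bucket's page of mbufs ... */
      };

      struct pcpu_cache {
              struct bucket   *buckets;
              int              nfree;    /* total free mbufs cached on this CPU */
      };

      struct gen_cache {
              struct bucket   *buckets;  /* global list; lock held in real code */
      };

      static int hiwat = 8192;           /* the high watermark you set */

      static void
      free_watermark_check(struct pcpu_cache *pcc, struct gen_cache *gen)
      {
              struct bucket *b;

              if (pcc->nfree <= hiwat)
                      return;            /* nothing to migrate */

              /* Detach one bucket from the per-CPU cache... */
              b = pcc->buckets;
              pcc->buckets = b->next;
              pcc->nfree -= b->nfree;

              /* ...and hand it to the general (GEN) cache. */
              b->next = gen->buckets;
              gen->buckets = b;
      }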

> >   Can you give me more details on the exact type of test you're running?
> >   Let's move this to just -current instead of both -current and -net,
> >   please (feel free to trim whichever one you want); getting 3 copies
> >   of the same message all the time is kinda annoying. :-(
> >
> I'm running a snort-like application with the interface receiving packets
> only. It can either connect to a netgraph node or use bpf; both seem to have
> similar performance (most of the CPU is used elsewhere), as listed in the
> email I sent previously.
> 
> Pete
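
  (For reference, the bpf side of a test like that boils down to a read
  loop roughly like the one below.  This is just an illustrative userland
  sketch, not your application; the "em0" interface name and the plain
  blocking read are assumptions.)

      #include <sys/types.h>
      #include <sys/ioctl.h>
      #include <sys/socket.h>
      #include <net/if.h>
      #include <net/bpf.h>
      #include <err.h>
      #include <fcntl.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>
      #include <unistd.h>

      int
      main(void)
      {
              struct ifreq ifr;
              char *buf, *p;
              u_int blen;
              ssize_t n;
              int fd;

              if ((fd = open("/dev/bpf0", O_RDONLY)) < 0)
                      err(1, "open");

              memset(&ifr, 0, sizeof(ifr));
              strlcpy(ifr.ifr_name, "em0", sizeof(ifr.ifr_name)); /* assumed */
              if (ioctl(fd, BIOCSETIF, &ifr) < 0)
                      err(1, "BIOCSETIF");
              if (ioctl(fd, BIOCGBLEN, &blen) < 0)
                      err(1, "BIOCGBLEN");
              if ((buf = malloc(blen)) == NULL)
                      err(1, "malloc");

              for (;;) {
                      if ((n = read(fd, buf, blen)) <= 0)
                              break;
                      /* One read can return several packets; walk them. */
                      for (p = buf; p < buf + n; ) {
                              struct bpf_hdr *bh = (struct bpf_hdr *)p;

                              printf("captured %u bytes\n",
                                  (unsigned)bh->bh_caplen);
                              p += BPF_WORDALIGN(bh->bh_hdrlen + bh->bh_caplen);
                      }
              }
              free(buf);
              close(fd);
              return (0);
      }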

-- 
Bosko Milekic * bmilekic@unixdaemons.com * bmilekic@FreeBSD.org

