Date:      Tue, 14 Sep 2004 22:27:52 -0400
From:      "Don Bowman" <don@sandvine.com>
To:        "Andrew Gallatin" <gallatin@cs.duke.edu>
Cc:        freebsd-net@freebsd.org
Subject:   RE: packet generator
Message-ID:  <A8535F8D62F3644997E91F4F66E341FC5872EC@exchange.sandvine.com>

From: Andrew Gallatin [mailto:gallatin@cs.duke.edu]
> Andrew Gallatin writes:
>
>  > xmit routine was called 683441 times.  This means that the queue was
>  > only a little over two packets deep on average, and vmstat shows idle
>  > time.  I've tried piping additional packets to nghook mx0:orphans
>  > input, but that does not seem to increase the queue depth.
>  >
>
> The problem here seems to be that rather than just slapping the
> packets onto the driver's queue, ng_source passes the mbuf down
> to more of netgraph, where there is at least one spinlock,
> and the driver's ifq lock is taken and released a zillion times
> by ether_output_frame(), etc.
>
> A quick hack (appended) to just slap the mbufs onto the if_snd queue
> gets me from ~410Kpps to 1020Kpps.  I also see very deep queues
> with this (because I'm slamming 4K pkts onto the queue at once..).
>
> This is nearly identical to the linux pktgen figure on the same
> hardware, which makes me feel comfortable that there is a lot of
> headroom in the driver/firmware API and I'm not botching something
> in the FreeBSD driver.
>
> BTW, did you see your 800Kpps on 4.x or 5.x?  If it was 4.x, what do
> you see on 5.x if you still have the same setup handy?
>
> Thanks,

800Kpps was on 4.7, on a dual 2.8GHz Xeon with 100MHz PCI-X, using em.
I will try 5.3.

--don
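
The "quick hack" Andrew mentions was appended to his original message and is
not reproduced in this excerpt.  As a rough sketch of the idea he describes,
handing the mbuf straight to the driver's if_snd queue instead of going back
through the netgraph/ether_output_frame() path, something along these lines
would be the general shape on 5.x.  The function name and the assumption that
the caller already has a pointer to the outgoing ifnet are illustrative, not
taken from ng_source.c:

    /*
     * Sketch only (not the patch from the original message): push an mbuf
     * straight onto the driver's if_snd queue and kick if_start(), skipping
     * ether_output_frame() and the rest of the netgraph path.  Assumes the
     * 5.x-era IF_HANDOFF() macro from <net/if_var.h>.
     */
    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/errno.h>
    #include <sys/mbuf.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <net/if_var.h>

    static int
    direct_if_snd_enqueue(struct ifnet *ifp, struct mbuf *m)
    {
            /*
             * IF_HANDOFF() locks if_snd, drops and frees the mbuf if the
             * queue is full (returning 0), otherwise enqueues it and calls
             * ifp->if_start() when the interface is not already active.
             */
            if (!IF_HANDOFF(&ifp->if_snd, m, ifp))
                    return (ENOBUFS);
            return (0);
    }

As far as I can tell, IF_HANDOFF() takes and releases the ifq lock once per
packet and only kicks if_start() when the interface isn't already busy, which
is presumably why bypassing the per-packet ether_output_frame() locking gave
the jump from ~410Kpps to ~1020Kpps noted above.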


