Date:      Sun, 29 Jun 2003 00:21:56 -0700
From:      "Craig Reyenga" <craig@craig.afraid.org>
To:        "David Gilbert" <dgilbert@velocet.ca>
Cc:        freebsd-performance@freebsd.org
Subject:   Re: Tuning Gigabit
Message-ID:  <001f01c33e0f$1f4716d0$0200000a@fireball>
References:  <20030628190036.0E06B37B405@hub.freebsd.org> <000f01c33dad$1595a0f0$e602a8c0@flatline> <16126.9805.829406.368426@canoe.velocet.net> <000901c33dd1$12268780$0200000a@fireball> <16126.19861.842507.318997@canoe.velocet.net>

From: "David Gilbert" <dgilbert@velocet.ca>
> >>>>> "Craig" == Craig Reyenga <craig@craig.afraid.org> writes:
>
> >> 300 megabit is about where 32-bit, 33 MHz PCI maxes out.
>
> Craig> Could you tell me a little more about your tests? What boards,
> Craig> and what configuration?
>
> Well... first of all, a 33 MHz 32-bit PCI bus can transfer 33M * 32
> bits ... which is just about 1 gigabit of _total_ PCI bus bandwidth.
> Consider that you're likely testing disk->RAM->NIC and you end up with
> 1/3 of that as throughput (minus bus overhead) so 300 megabit is a
> good number.

I should have mentioned that with the options I fed it, iperf only tests
line speed.  My 5400 RPM disks can't even saturate a 100 Mbit link :(
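
For reference, the arithmetic above works out roughly like this.  Just a
back-of-the-envelope sketch: the divide-by-three is the disk->RAM->NIC rule
of thumb from your mail, not something I've measured, and bus overhead
(arbitration, addressing and so on) isn't accounted for at all.

/* Back-of-the-envelope PCI numbers from the quoted text.
 * Assumptions: 33 MHz clock, 32-bit wide bus; the divide-by-three is
 * only the disk->RAM->NIC rule of thumb above, not a measured figure. */
#include <stdio.h>

int
main(void)
{
	double clock_hz   = 33.0e6;		/* PCI clock */
	double width_bits = 32.0;		/* bus width */
	double peak_bps   = clock_hz * width_bits;	/* ~1056 Mbit/s */

	printf("peak bus bandwidth:       %.0f Mbit/s\n", peak_bps / 1e6);
	printf("rough per-stream ceiling: %.0f Mbit/s\n",
	    peak_bps / 3.0 / 1e6);
	return (0);
}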

[snip]

> Now some boards I've tested (like the nvidia chipset) are strangely
> limited to 100 megabit.  I can't explain this.  It seems low no matter
> how you cut it.

As I mentioned in a previous email, this is horrible. Does this manifest
itself with disk controllers and other high-bandwidth devices?

>
> Our testing has been threefold:
>
> 1) Generating packets.  We test the machine's ability to generate both
>    large (1500, 3000 and 9000 byte) and small (64 byte) packets.  The
>    large-scale generation of packets is necessary for the other
>    tests.  So far, some packet flood utilities from the Linux hacker
>    camp are our most efficient small-packet generators.  netcat on
>    memory-cached objects or on /dev/zero generates our big packets.
>
> 2) Passing packets.  Primarily, we're interested in routing.  Our
>    benchmarks are passing packets, passing packets with 100k routes,
>    and passing packets with hundreds of ipf accounting rules.  We look
>    at both small- and large-packet performance.  Packet-passing
>    machines have at least two interfaces ... but sometimes 3 or 4 are
>    tested.  Polling is a major win in the small-packet passing race.
>
> 3) Receiving packets.  netcat is our friend again here.  Receiving
>    packets doesn't appear to be the same level of challenge as
>    generating or passing them.
>
> At any rate, we're clearly not testing file delivery.  We sometimes
> play with file delivery as a first test ... or for other testing
> reasons.  We've found several boards that corrupt packets when they
> pass more than 100 megabit of traffic.  We haven't explained that one
> yet.  Our tests centre on routing packets (because that's what we do
> with our high-performance FreeBSD boxes; all our other FreeBSD boxes
> "just work" at the level of performance they have).
>
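
For what it's worth, here is roughly what I picture for the small-packet
generation in (1) and the receive test in (3): a minimal UDP blaster you
could point at a netcat or iperf listener on the far side.  It's only a
sketch, not the flood utilities or netcat pipeline you actually use; the
10.0.0.2 address, the discard port and the 64-byte payload are made up
for illustration.

/* Minimal small-packet UDP blaster.  A sketch only, not the tools
 * described above.  Destination address/port and the 64-byte payload
 * are arbitrary; run it against a host on the test segment and watch
 * the interface counters (or a netcat/iperf listener) on the far end. */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <err.h>
#include <string.h>

int
main(void)
{
	char payload[64];		/* small-packet case */
	struct sockaddr_in dst;
	int s;

	memset(payload, 'x', sizeof(payload));
	memset(&dst, 0, sizeof(dst));
	dst.sin_family = AF_INET;
	dst.sin_port = htons(9);			/* discard port */
	dst.sin_addr.s_addr = inet_addr("10.0.0.2");	/* made-up target */

	if ((s = socket(AF_INET, SOCK_DGRAM, 0)) == -1)
		err(1, "socket");

	for (;;) {			/* flood until interrupted */
		if (sendto(s, payload, sizeof(payload), 0,
		    (struct sockaddr *)&dst, sizeof(dst)) == -1)
			warn("sendto");
	}
	/* NOTREACHED */
	return (0);
}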

I look forward to seeing a paper on this; it would certainly assist people
in hardware purchase decisions.
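
On the polling note in (2): I take it that's DEVICE_POLLING.  If I
remember right, the kernel needs "options DEVICE_POLLING" (and HZ bumped
up), and the switch is the kern.polling.enable sysctl.  A tiny sketch of
flipping it from C, assuming I have the sysctl name right:

/* Turn device polling on via sysctl.  Assumes a kernel built with
 * "options DEVICE_POLLING" and that the knob is kern.polling.enable
 * (as I recall it is on 4.x/5.x era kernels). */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <err.h>

int
main(void)
{
	int enable = 1;

	if (sysctlbyname("kern.polling.enable", NULL, NULL,
	    &enable, sizeof(enable)) == -1)
		err(1, "sysctlbyname(kern.polling.enable)");
	return (0);
}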

[snip]

-Craig


