Date:      Mon, 21 Jan 2013 23:35:41 -0500
From:      Kurt Lidl <lidl@pix.net>
To:        freebsd-sparc64@freebsd.org
Subject:   console stops with 9.1-RELEASE when under forwarding load
Message-ID:  <20130122043541.GA67894@pix.net>

I'm not sure if this is better directed at freebsd-sparc64@
or freebsd-net@, but I'll guess here...

Anyway: in all cases, I'm using an absolutely stock
FreeBSD 9.1-RELEASE installation.

I got several SunFire V120 machines recently and have been testing
them out to verify their operation.  They all started out identically
configured -- 1 GB of memory, 2x36GB disks, DVD-ROM, 650 MHz processor.
The V120 has two on-board "gem" network interfaces, and the machine
can take a single, 32-bit PCI card.

I've benchmarked the gem interfaces as being able to source or sink
about 90 Mbit/sec of TCP traffic.  This is comparable to the speed
of the "hme" interfaces that I've tested in my slower Netra-T1-105
machines.
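
A note on the methodology: the numbers here and below come from timing
a one-way TCP stream between two hosts.  Any traffic generator will
do; a minimal Python sketch of the idea looks like this (the host,
port, run length, and buffer size are arbitrary placeholders, not my
actual test values):

  #!/usr/bin/env python
  # Minimal one-way TCP source/sink for rough throughput numbers.
  # Run "blast.py sink" on the receiver, then "blast.py" on the sender.
  import socket, sys, time

  HOST, PORT, SECS = '10.0.0.2', 5001, 10   # placeholders

  def sink():
      srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
      srv.bind(('', PORT))
      srv.listen(1)
      conn, _ = srv.accept()
      total, start = 0, time.time()
      while True:
          buf = conn.recv(65536)        # receive and discard
          if not buf:
              break
          total += len(buf)
      elapsed = time.time() - start
      print('%.1f Mbit/sec' % (total * 8 / elapsed / 1e6))

  def source():
      s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      s.connect((HOST, PORT))
      payload = b'\0' * 65536           # 64 KB of zeros per write
      deadline = time.time() + SECS
      while time.time() < deadline:
          s.sendall(payload)
      s.close()

  sink() if sys.argv[1:] == ['sink'] else source()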

So.  I put an Intel 32-bit Gig-E interface (a "GT" desktop
card) into the machine, and it comes up like this:

em0: <Intel(R) PRO/1000 Legacy Network Connection 1.0.4> port 0xc00200-0xc0023f mem 0x20000-0x3ffff,0x40000-0x5ffff at device 5.0 on pci2
em0: Memory Access and/or Bus Master bits were not set!
em0: Ethernet address: 00:1b:21:<redacted>

That interface can source or sink TCP traffic at about
248 Mbit/sec.

Since I really want to make one of these machines my firewall/router,
I took a different, dual-port Intel Gig-E server adapter (a 64-bit
PCI card) and put it into one of the machines so I could look at
the forwarding performance.  It probes like this:

em0: <Intel(R) PRO/1000 Legacy Network Connection 1.0.4> port 0xc00200-0xc0023f mem 0x20000-0x3ffff,0x40000-0x7ffff at device 5.0 on pci2
em0: Memory Access and/or Bus Master bits were not set!
em0: Ethernet address: 00:04:23:<redacted>
em1: <Intel(R) PRO/1000 Legacy Network Connection 1.0.4> port 0xc00240-0xc0027f mem 0xc0000-0xdffff,0x100000-0x13ffff at device 5.1 on pci2
em1: Memory Access and/or Bus Master bits were not set!
em1: Ethernet address: 00:04:23:<redacted>

Now this card can source traffic at about 250 Mbit/sec and can sink
traffic at around 204 Mbit/sec.

But the real question is - how is the forwarding performance?

So I set up a test between some machines:

A --tcp data--> em0-sparc64-em1 --tcp data--> B
|                                             |
\---------<--------tcp acks-------<-----------/

So, A sends to interface em0 on the sparc64, the sparc64
forwards out em1 to host B, and the ack traffic flows out
a different interface from B to A.  (A and B are amd64
machines, with Gig-E interfaces that are considerably
faster than the sparc64 machines.)
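
(For anyone reproducing this: the usual way to enable forwarding on
FreeBSD is gateway_enable="YES" in /etc/rc.conf, which sets the
net.inet.ip.forwarding sysctl to 1 at boot.)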

This test works surprisingly well -- 270 Mbit/sec of forwarding
traffic, at around 29,500 packets/second.
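
As a sanity check, those two figures hang together: 270 Mbit/sec at
29,500 packets/sec works out to an average of about 1,144 bytes per
forwarded packet, which is plausible for mostly full-size data
segments with some smaller packets mixed in.  The arithmetic, as a
quick Python one-liner:

  rate_bits, pps = 270e6, 29500.0      # the figures observed above
  print(rate_bits / 8 / pps)           # -> ~1144 bytes per packet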

The problem comes when I change the test to send the TCP ack traffic
back through the sparc64 (so the ack traffic goes from B into em1,
then is forwarded out em0 to A), while the data flows the same way
as before.

The console of the sparc64 becomes completely unresponsive while
this test runs.  The 'netstat 1' that I've been running just
stops.  When the data finishes transmitting, the netstat output
gives one giant jump, counting all the packets that were sent during
the test as if they happened in a single second.

It's pretty clear that the process I'm running on the console isn't
receiving any cycles at all.  This is true for whatever I have
running on the console of the machine -- a shell, vmstat, iostat,
whatever.  It just hangs until the forwarding test is over.
Then the console input/output resumes normally.
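
One crude way to put a number on the stall is a loop that tries to
wake up every second and logs any gap; when the process finally gets
scheduled again, the recorded gap shows how long it was starved.
A minimal sketch in Python:

  #!/usr/bin/env python
  # Log any scheduling gap longer than two seconds.  Run on the
  # console during the forwarding test; read the output afterwards.
  import time

  prev = time.time()
  while True:
      time.sleep(1)
      now = time.time()
      if now - prev > 2:               # we went unscheduled
          print('starved %.1f sec, ending %s' % (now - prev, time.ctime(now)))
      prev = now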

Has anybody else seen this type of problem?

-Kurt
