Date:      Thu, 12 Jun 2008 11:03:34 -0400
From:      "George V. Neville-Neil" <gnn@neville-neil.com>
To:        security <security@jim-liesl.org>
Cc:        freebsd-net@freebsd.org, Steve Bertrand <steve@ibctech.ca>
Subject:   Re: Throughput rate testing configurations
Message-ID:  <m2prqmitkp.wl%gnn@neville-neil.com>
In-Reply-To: <4850028F.6090103@jim-liesl.org>
References:  <484F3E1B.9050104@ibctech.ca> <4850028F.6090103@jim-liesl.org>

At Wed, 11 Jun 2008 09:51:27 -0700,
security wrote:
> 
> Steve Bertrand wrote:
> > Hi everyone,
> >
> > I see what I believe to be less-than-adequate communication 
> > performance between many devices in parts of our network.
> >
> > Can someone recommend software (and, if possible, configuration 
> > tips) that I can use to test both throughput and pps reliably, 
> > initially/primarily in a simple host-sw-host configuration?
> >
> > Perhaps I'm asking too much, but I'd like to have something that can 
> > push the link to its absolute maximum capacity (for now, up to 1Gbps) 
> > for a long, sustained time, that I can just walk away from and let 
> > do its work, reviewing the reports later to see where it had to 
> > scale down due to errors.
> >
> > What I'm really trying to achieve is:
> >
> > - test the link between hosts alone
> > - throw in a switch
> > - test the link while r/w to disk
> > - test the link while r/w to GELI disk
> > - test the link with oddball MTU sizes
> >
> Iperf or netperf are probably what you're looking for.  Both try really 
> hard NOT to tweak other subsystems while they run, so if you want to 
> throw disk activity in, you'll need to run another tool, or roll your 
> own, to create it.  You probably don't want to run them for extended 
> periods on a production network.  Depending on the adapters at each 
> end, you may or may not be able to drive the link to saturation or 
> alter the frame size.  The Intel adapters I've seen allow jumbo frames 
> and generally give good performance (as opposed to, say, the Realtek 
> ones).  It's also useful to have a managed switch in between so you 
> can look at the counters on it.
> 
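If you go the iperf route, something like the following should cover
the basic host-sw-host case (a sketch, not tested here: the flags are
from iperf 2.x, and "receiver", em0, and the file path are
placeholders, so check iperf(1) on your build):

    # on one host
    iperf -s

    # on the other: one-hour TCP run, reporting every 10 seconds
    iperf -c receiver -t 3600 -i 10

    # rough pps test: small UDP datagrams at a high offered rate
    # (start the server side with "iperf -s -u" for this one)
    iperf -u -c receiver -b 900M -l 64 -t 3600

    # oddball MTU: set it on both ends (and the switch) beforehand
    ifconfig em0 mtu 9000

    # crude disk load in another terminal while the test runs
    dd if=/dev/zero of=/tmp/ddtest bs=1m count=4096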

I personally prefer netpipe because it tries odd-sized (non-power-of-2)
messages, which tends to bring edge cases to light.

/usr/ports/benchmarks/netpipe
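
A minimal run looks something like this, assuming the port installs
the usual NPtcp binary (check its usage output for the exact flags on
your version; "receiver" is a placeholder):

    # on the receiving host
    NPtcp

    # on the transmitting host; results go to np.out by default
    NPtcp -h receiver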

Later,
George


