Date:      Tue, 10 Jun 2003 10:27:26 -0500
From:      Eric Anderson <anderson@centtech.com>
To:        Terry Lambert <tlambert2@mindspring.com>
Cc:        freebsd-performance@freebsd.org
Subject:   Re: Slow disk write speeds over network
Message-ID:  <3EE5F8DE.30001@centtech.com>
References:  <20030609211526.58641.qmail@web14908.mail.yahoo.com> <3EE4FAED.6090603@centtech.com> <3EE595D2.B223CA19@mindspring.com>

Good news, but not done yet.. Keep reading:

Terry Lambert wrote:
[..snippity snip..]
> 
> Swap cables with another box.
> 
> BTW: 4 Gigabit cards in one box, with you going to local disk...
> you've got about 8 cards worth of traffic over your PCI bus.

I'm going to a RAID50 (hardware), and I know about the PCI bus limits - 
I'm not planning on filling all 4 Gig-E's at once continually..

> Unless this is a PCI-X based box, you are most likely livelocked;
> even if it's a PCI-X based box, you could still be livelocked.
> 
> You haven't said if you were using UDP or TCP for the mounts;
> you should definitely use TCP with FreeBSD NFS servers; it's
> also just generally a good idea, since UDP frags act as a fixed
> non-sliding window: NFS over UDP sucks.

Most clients are TCP, but some are still UDP (due to bugs in the NFS 
clients of certain unnamed Linux distros).
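
For the clients that can do it, forcing TCP is just a mount option. A 
sketch, with hypothetical server and path names:

```shell
# FreeBSD client: -T requests NFS over TCP (hostname/paths are made up)
mount_nfs -T nfsserver:/export/data /mnt/data

# Or persistently, via an /etc/fstab entry:
#   nfsserver:/export/data  /mnt/data  nfs  rw,tcp  0  0

# Linux clients whose NFS client supports it can ask for TCP explicitly:
#   mount -o tcp nfsserver:/export/data /mnt/data
```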

> Also, you haven't said whether you are using aliases on your
> network cards; aliases and NFS tend to interact badly.

Nope, no aliases.. I have one card on each network, with one IP per 
card, and entire /24 subnets of P4's trying to slam the NFS server for 
data all the time..

> Finally, you probably want to tweak some sysctl's, e.g.
> 
> 	net.inet.ip.check_interface=0
> 	net.inet.tcp.inflight_enable=1
> 	net.inet.tcp.inflight_debug=0
> 	net.inet.tcp.msl=3000
> 	net.inet.tcp.inflight_min=6100
> 	net.isr.enable=1

Ok - done.. some were defaults, and I couldn't find net.isr.enable.. 
Did I need to configure something in my kernel for it to show up?
Also, can you explain any of those tweaks?
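
For anyone following along, those knobs can be set at runtime with 
sysctl(8) and persisted in /etc/sysctl.conf; a sketch using the values 
from Terry's list:

```shell
# Set at runtime (as root); values are the ones suggested above
sysctl net.inet.ip.check_interface=0
sysctl net.inet.tcp.inflight_enable=1
sysctl net.inet.tcp.inflight_debug=0
sysctl net.inet.tcp.msl=3000
sysctl net.inet.tcp.inflight_min=6100
# net.isr.enable may not exist on every branch/kernel, hence the
# "couldn't find it" above

# Persist across reboots by adding the same lines to /etc/sysctl.conf:
#   net.inet.tcp.inflight_enable=1
#   net.inet.tcp.msl=3000
```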

> Given your overloading of your bus, that last one is probably
> the most important one: it enables direct dispatch.
> 
> You'll also want to enable DEVICE_POLLING in your kernel
> config file (assuming you have a good ethernet card whose
> driver supports it):
> 
> 	options DEVICE_POLLING
> 	options HZ=2000

Well, the LINT file says only a few cards support it - not sure whether 
to trust that, but I have Intel PRO/1000T Server Adapters, which should 
be good enough cards to support it.. I've also put 100Mbit cards in 
place of the GigE cards for now, to make sure I wasn't hitting a GigE 
problem or a negotiation problem..

> ...and yet more sysctl's for this:
> 
> 	kern.polling.enable=1
> 	kern.polling.user_frac=50	# 0..100; whatever works best
>
> If you've got a really terrible Gigabit Ethernet card, then
> you may be copying all your packets over again (e.g. m_pullup()),
> and that could be eating your bus, too.
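
Pulling Terry's polling suggestions together, a sketch of the kernel 
config additions plus the runtime knobs (this assumes the NIC driver 
actually supports polling - check LINT for your driver):

```shell
# Kernel config additions (then rebuild and install the kernel):
#   options DEVICE_POLLING
#   options HZ=2000

# At runtime, once booted on the polling-enabled kernel:
sysctl kern.polling.enable=1
sysctl kern.polling.user_frac=50   # 0..100; tune for your workload
```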


Ok, so the end result is that after playing around with sysctls, I've 
found that TCP transfers are doing 20MB/s over FTP, but my NFS is still 
around 1-2MB/s - still slow.. So we've cleared up some TCP issues, but 
NFS is still stinky..
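
A quick way to reproduce that comparison, with hypothetical mount 
points - the same write to the NFS mount vs. a local filesystem 
isolates NFS overhead from disk and TCP:

```shell
# Sequential write over NFS (hypothetical mount point)
dd if=/dev/zero of=/mnt/nfs/testfile bs=64k count=16384

# Same write to local disk for a baseline
dd if=/dev/zero of=/var/tmp/testfile bs=64k count=16384
```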

Any more ideas?

Eric




-- 
------------------------------------------------------------------
Eric Anderson	   Systems Administrator      Centaur Technology
Attitudes are contagious, is yours worth catching?
------------------------------------------------------------------


