Date:      Mon, 30 Nov 2009 16:50:04 +0800
From:      Adrian Chadd <adrian@freebsd.org>
To:        Eirik Øverby <ltning@anduin.net>
Cc:        pyunyh@gmail.com, weldon@excelsusphoto.com, freebsd-current@freebsd.org, Robert Watson <rwatson@freebsd.org>, Gavin Atkinson <gavin@freebsd.org>
Subject:   Re: FreeBSD 8.0 - network stack crashes?
Message-ID:  <d763ac660911300050p64ae41a4h1e01bdb649b02aac@mail.gmail.com>
In-Reply-To: <C3CC7F37-10BE-41DD-96E4-C952C6434ACC@anduin.net>
References:  <A1648B95-F36D-459D-BBC4-FFCA63FC1E4C@anduin.net> <20091129013026.GA1355@michelle.cdnetworks.com> <74BFE523-4BB3-4748-98BA-71FBD9829CD5@anduin.net> <alpine.BSF.2.00.0911291427240.80654@fledge.watson.org> <E9B13DDC-1B51-4EFD-95D2-544238BDF3A4@anduin.net> <d763ac660911292347i74caba25h9861a4d9feb63d77@mail.gmail.com> <C3CC7F37-10BE-41DD-96E4-C952C6434ACC@anduin.net>

2009/11/30 Eirik Øverby <ltning@anduin.net>:

>> That URL works for me. So how much traffic is this box handling during
>> peak times?
>
> Depends how you define load. It's a storage box (14TB ZFS) with a small handful of NFS clients pushing backup data to it .. So lots of traffic in bytes/sec, but not many clients.

Ok.

> If you're referring to the Send-Q and Recv-Q values, they are zero everywhere I can tell.

Hm, I was. Ok.

>> See if you have full socket buffers showing up in netstat -an. Have
>> you tweaked the socket/TCP send/receive sizes? I typically lock mine
>> down to something small (32k-64k for the most part) so I don't hit
>> mbuf exhaustion on very busy proxies.
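A quick way to spot full socket buffers is to filter `netstat -an` output for non-zero queue columns. A minimal sketch (the sample lines below are illustrative, not taken from the box in question; on a live system you'd pipe real `netstat -an -p tcp` output instead):

```shell
# Columns 2 and 3 of netstat's socket listing are Recv-Q and Send-Q.
# Sample lines stand in for live `netstat -an -p tcp` output here.
sample='tcp4       0  65536  10.0.0.1.2049    10.0.0.2.713     ESTABLISHED
tcp4       0      0  10.0.0.1.22      10.0.0.3.50112   ESTABLISHED'
# Print only sockets whose Recv-Q or Send-Q is non-zero, i.e. backed up.
printf '%s\n' "$sample" | awk '($2 + $3) > 0'
```

Only the first (backed-up) socket survives the filter; a box with healthy peers should print nothing.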

> I haven't touched any defaults except the mbuf clusters. What does your sysctl.conf look like?

I just set these:

net.inet.tcp.sendspace=65536
net.inet.tcp.recvspace=65536

I tweak a lot of other TCP stack stuff to deal with satellite
latencies; it's not relevant here.

I'd love to see where those mbufs are hiding and whether they're a
leak, or whether the NFS server is just pushing too much data out for
whatever reason. Actually, something I also set was this:

# Handle slightly more packets per interrupt tick
net.inet.ip.intr_queue_maxlen=512

It was defaulting to 50, which wasn't fast enough for small-packet loads.
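To see where the mbufs are hiding, `netstat -m` summarizes mbuf and cluster usage on FreeBSD. A small awk filter can pull out how close the box is to the cluster limit; a sketch (the sample line mimics `netstat -m` output format and is illustrative, not measured):

```shell
# Sample line in `netstat -m` format; on a live box pipe `netstat -m` instead.
sample='32768/1408/34176/65536 mbuf clusters in use (current/cache/total/max)'
# Split the current/cache/total/max tuple and report current vs. max,
# which shows how close the box is to mbuf cluster exhaustion.
printf '%s\n' "$sample" |
    awk '/mbuf clusters/ { split($1, n, "/"); printf "clusters: %d of %d\n", n[1], n[4] }'
# → clusters: 32768 of 65536
```

On a live system, `vmstat -z | grep -i mbuf` gives a per-zone breakdown, and a steadily climbing `sysctl net.inet.ip.intr_queue_drops` would indicate the input queue is still overflowing despite the larger intr_queue_maxlen.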



Adrian


