Date:      Thu, 23 Sep 2010 14:31:06 -0500
From:      Tom Judge <tom@tomjudge.com>
To:        pyunyh@gmail.com
Cc:        freebsd-net@freebsd.org, davidch@broadcom.com, yongari@freebsd.org
Subject:   Re: bce(4) - com_no_buffers (Again)
Message-ID:  <4C9BAAFA.7050901@tomjudge.com>
In-Reply-To: <20100923183954.GC15014@michelle.cdnetworks.com>
References:  <4C894A76.5040200@tomjudge.com>	<20100910002439.GO7203@michelle.cdnetworks.com>	<4C8E3D79.6090102@tomjudge.com>	<20100913184833.GF1229@michelle.cdnetworks.com>	<4C8E768E.7000003@tomjudge.com>	<20100913193322.GG1229@michelle.cdnetworks.com>	<4C8E8BD1.5090007@tomjudge.com>	<20100913205348.GJ1229@michelle.cdnetworks.com>	<4C9B6CBD.2030408@tomjudge.com> <20100923183954.GC15014@michelle.cdnetworks.com>

On 09/23/2010 01:39 PM, Pyun YongHyeon wrote:
> On Thu, Sep 23, 2010 at 10:05:33AM -0500, Tom Judge wrote:
>   
>> On 09/13/2010 03:53 PM, Pyun YongHyeon wrote:
>>     
>>> On Mon, Sep 13, 2010 at 03:38:41PM -0500, Tom Judge wrote:
>>>> On 09/13/2010 02:33 PM, Pyun YongHyeon wrote:
>>>>> On Mon, Sep 13, 2010 at 02:07:58PM -0500, Tom Judge wrote:
>>>>>>>> Without BCE_JUMBO_HDRSPLIT we see no errors.  With it we see a number
>>>>>>>> of errors, although the rate seems to be reduced compared to the
>>>>>>>> previous version of the driver.
>>>>>>>>
>>>>>>> It seems there are issues with header splitting, so it was disabled
>>>>>>> by default.  Header splitting reduces packet processing overhead in
>>>>>>> the upper layer, so it's normal to see better performance with header
>>>>>>> splitting.
>>>>>>>
>>>>>> The reason that we have had header splitting enabled in the past is that
>>>>>> historically there have been issues with memory fragmentation when using
>>>>>> 8k jumbo frames (resulting in 9k mbuf cluster allocations).
>>>>>>
>>>>> Yes, if you use jumbo frames, header splitting would help to reduce
>>>>> memory fragmentation as header splitting wouldn't allocate jumbo
>>>>> clusters.
>>>>>
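In case it helps make the trade-off concrete, here is a rough sketch of
the two RX buffer strategies being discussed.  This is hypothetical
illustration code, not the actual bce(4) receive path, and the function
names are made up:

/*
 * Hypothetical sketch of the two RX buffer allocation strategies.
 */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/mbuf.h>

/*
 * Without header splitting: one physically contiguous 9k cluster per
 * RX buffer.  This is the allocation that fails once physical memory
 * becomes fragmented (9216 bytes needs three contiguous pages).
 */
static struct mbuf *
rx_alloc_jumbo9(void)
{
        return (m_getjcl(M_NOWAIT, MT_DATA, M_PKTHDR, MJUM9BYTES));
}

/*
 * With header splitting: a standard 2k cluster for the headers chained
 * to page-sized clusters for the payload, so no single allocation ever
 * needs more than one contiguous page.
 */
static struct mbuf *
rx_alloc_split(int payload_len)
{
        struct mbuf *m_hdr, *m_tail, *m_pg;
        int left;

        m_hdr = m_getcl(M_NOWAIT, MT_DATA, M_PKTHDR);
        if (m_hdr == NULL)
                return (NULL);
        m_tail = m_hdr;
        for (left = payload_len; left > 0; left -= MJUMPAGESIZE) {
                m_pg = m_getjcl(M_NOWAIT, MT_DATA, 0, MJUMPAGESIZE);
                if (m_pg == NULL) {
                        m_freem(m_hdr);
                        return (NULL);
                }
                m_tail->m_next = m_pg;
                m_tail = m_pg;
        }
        return (m_hdr);
}

The m_getjcl() call with MJUM9BYTES is the one that becomes unreliable
under fragmentation, which is why header splitting sidesteps the problem.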
>>>> Under testing I have yet to see a memory fragmentation issue with this
>>>> driver.  I'll follow up if/when I find a problem with this again.
>>>>
>> So here we are again.  The system is locking up again because of 9k mbuf
>> cluster allocation failures.
>>
>> tj@pidge '14:12:25' '~'
>> $ netstat -m
>> 514/4781/5295 mbufs in use (current/cache/total)
>> 0/2708/2708/25600 mbuf clusters in use (current/cache/total/max)
>> 0/1750 mbuf+clusters out of packet secondary zone in use (current/cache)
>> 0/2904/2904/12800 4k (page size) jumbo clusters in use
>> (current/cache/total/max)
>> 513/3274/3787/6400 9k jumbo clusters in use (current/cache/total/max)
>>     
> The number of 9k clusters didn't reach the limit.
>
I think it is being limited by the availability of 9k contiguous memory
segments.
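For reference, the configured zone limit behind the "max" column can be
read programmatically as well.  A minimal userland sketch, assuming
kern.ipc.nmbjumbo9 is still the sysctl that backs the 9k zone limit:

/*
 * Print the configured 9k jumbo cluster limit.
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

int
main(void)
{
        int limit;
        size_t len = sizeof(limit);

        if (sysctlbyname("kern.ipc.nmbjumbo9", &limit, &len, NULL, 0) == -1) {
                perror("sysctlbyname");
                return (1);
        }
        printf("9k jumbo cluster limit: %d\n", limit);
        return (0);
}

Even when the in-use count sits well below that limit, each 9k cluster
still needs three physically contiguous pages, which is exactly what
becomes scarce once memory fragments.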
>> 0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
>> 4745K/47693K/52438K bytes allocated to network (current/cache/total)
>> 0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
>> 0/2692655/0 requests for jumbo clusters denied (4k/9k/16k)
>>     
> I see a large denied value for 9k jumbo clusters, but that could be
> normal under high network load.  Still, it should not lock up the
> controller.  Note that under these conditions (cluster allocation
> failure) the driver drops incoming frames, which means it does not
> pass received frames up the stack.  The end result can look like a
> lockup because the upper stack never sees the frames.  You can check
> the MAC statistics to see whether the driver is still running or
> not.
>   

The system lockup is more a visible side effect than a real one.  The
problem seems to be the time it takes applications to clear the input
queue on established connections.  The longer this takes, and the more
connections are active at a time, the worse the problem appears.  With
large numbers of connections, input frames can be dropped for up to a
few minutes while no suitable memory regions are free to be allocated.
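On the suggestion to check the MAC statistics: a quick way to confirm
the controller is still handing frames to the stack during one of these
episodes is to watch the interface input counters.  A minimal sketch
using the generic getifaddrs(3)/if_data interface rather than any
bce(4)-specific statistics sysctls (the default interface name bce0 is
just an example):

/*
 * Poll input counters for one interface once a second.
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <net/if.h>
#include <ifaddrs.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
        const char *ifname = (argc > 1) ? argv[1] : "bce0";
        struct ifaddrs *ifap, *ifa;
        struct if_data *ifd;

        for (;;) {
                if (getifaddrs(&ifap) == -1) {
                        perror("getifaddrs");
                        return (1);
                }
                for (ifa = ifap; ifa != NULL; ifa = ifa->ifa_next) {
                        if (strcmp(ifa->ifa_name, ifname) != 0 ||
                            ifa->ifa_addr == NULL ||
                            ifa->ifa_addr->sa_family != AF_LINK)
                                continue;
                        ifd = ifa->ifa_data;
                        printf("%s: ipackets %lu ierrors %lu iqdrops %lu\n",
                            ifname,
                            (unsigned long)ifd->ifi_ipackets,
                            (unsigned long)ifd->ifi_ierrors,
                            (unsigned long)ifd->ifi_iqdrops);
                }
                freeifaddrs(ifap);
                sleep(1);
        }
}

If ipackets keeps climbing while the applications appear stalled, the
controller is alive and the backlog really is on the buffer/application
side.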

>   
>> 0/0/0 sfbufs in use (current/peak/max)
>> 0 requests for sfbufs denied
>> 0 requests for sfbufs delayed
>> 0 requests for I/O initiated by sendfile
>> 0 calls to protocol drain routines
>>
>>>>>> I have a kernel with the following configuration in testing right now:
>>>>>>
>>>>>> * Flow control enabled.
>>>>>> * Jumbo header splitting turned off.
>>>>>>
>>>>>> Is there any way that we can fix flow control with jumbo header
>>>>>> splitting turned on?
>>>>>>
>>>>> Flow control has nothing to do with header splitting (i.e. flow
>>>>> control is always enabled).
>>>>>
>>>> Sorry, let me rephrase that:
>>>>
>>>> Is there a way to fix the RX buffer shortage issues (when header
>>>> splitting is turned on) so that they are guarded by flow control?  Maybe
>>>> change the low watermark for flow control when it's enabled?
>>>>
>>> I'm not sure how much it would help, but try changing the RX low
>>> watermark.  The default value is 32, which seems to be a reasonable
>>> value.  But it's only for 5709/5716 controllers, and Linux seems to
>>> use a different default value.
>>>
>> These are: NetXtreme II BCM5709 Gigabit Ethernet
>>
>> So my next task is to turn the watermark-related defines into sysctls
>> and turn on header splitting, so that I can try to tune them without
>> having to reboot.
>>
>> My next question is: is it possible to increase the size of the RX ring
>> without switching to RSS?
>>
> Yes, but I doubt it would help in this case, as you seem to be suffering
> from 9k jumbo cluster allocation failures.
>
With this solution I would be turning header splitting back on, so that
there would be no 9k allocation issues.
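
For the record, what I mean by turning the watermark defines into
sysctls is the usual per-device sysctl tree, along these lines.  This is
only a sketch: the softc layout, field name and default are placeholders,
and the real bce(4) define/register names are left out.

/*
 * Sketch of exposing an RX low watermark as dev.bce.<unit>.rx_lo_watermark.
 */
#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/bus.h>
#include <sys/sysctl.h>

struct bce_softc_sketch {
        device_t        dev;
        u_int           rx_lo_watermark;        /* placeholder field */
};

static void
bce_sketch_add_sysctls(struct bce_softc_sketch *sc)
{
        struct sysctl_ctx_list *ctx = device_get_sysctl_ctx(sc->dev);
        struct sysctl_oid_list *children =
            SYSCTL_CHILDREN(device_get_sysctl_tree(sc->dev));

        sc->rx_lo_watermark = 32;       /* current default per this thread */
        SYSCTL_ADD_UINT(ctx, children, OID_AUTO, "rx_lo_watermark",
            CTLFLAG_RW, &sc->rx_lo_watermark, 0,
            "RX low watermark used for flow control");
}

A plain read/write sysctl like this would only take effect the next time
the driver programs the watermark (for example at re-init); making it
take effect immediately would need a CTLTYPE_PROC handler that writes
the new value to the chip.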

Tom


-- 
TJU13-ARIN



