Date:      Mon, 13 Sep 2010 13:53:48 -0700
From:      Pyun YongHyeon <pyunyh@gmail.com>
To:        Tom Judge <tom@tomjudge.com>
Cc:        freebsd-net@freebsd.org, davidch@broadcom.com, yongari@freebsd.org
Subject:   Re: bce(4) - com_no_buffers (Again)
Message-ID:  <20100913205348.GJ1229@michelle.cdnetworks.com>
In-Reply-To: <4C8E8BD1.5090007@tomjudge.com>
References:  <4C894A76.5040200@tomjudge.com> <20100910002439.GO7203@michelle.cdnetworks.com> <4C8E3D79.6090102@tomjudge.com> <20100913184833.GF1229@michelle.cdnetworks.com> <4C8E768E.7000003@tomjudge.com> <20100913193322.GG1229@michelle.cdnetworks.com> <4C8E8BD1.5090007@tomjudge.com>

On Mon, Sep 13, 2010 at 03:38:41PM -0500, Tom Judge wrote:
> On 09/13/2010 02:33 PM, Pyun YongHyeon wrote:
> > On Mon, Sep 13, 2010 at 02:07:58PM -0500, Tom Judge wrote:
> >   
> >> On 09/13/2010 01:48 PM, Pyun YongHyeon wrote:
> >>     
> >>> On Mon, Sep 13, 2010 at 10:04:25AM -0500, Tom Judge wrote:
> >>
> >> <SNIP/>
> >>     
> >>>> Does this mean that these cards are going to perform badly? This was
> >>>> what I gathered from the previous thread.
> >>>>
> >>> I mean there is still a lot of room for improvement in the driver
> >>> for better performance. bce(4) controllers are among the best
> >>> controllers for servers, and the driver doesn't take full advantage
> >>> of them.
> >>>
> >>
> >> So far our experience with bce(4) on FreeBSD has been very
> >> disappointing.  Starting when Dell switched to bce(4)-based NICs
> >> (around the time 6.2 was released, with the introduction of the
> >> PowerEdge x9xx hardware), we have had problems with the driver in
> >> every release we have used: 6.2, 7.0 and 7.1.  Luckily, David has
> >> been helpful in fixing the issues.
> >>
> >> <SNIP/>
> >>>
> >>>> Without BCE_JUMBO_HDRSPLIT we see no errors.  With it we see a
> >>>> number of errors, although the rate seems to be reduced compared to
> >>>> the previous version of the driver.
> >>>>
> >>>>
> >>> It seems there are issues with header splitting, and it was disabled
> >>> by default. Header splitting reduces packet processing overhead in
> >>> the upper layers, so it's normal to see better performance with
> >>> header splitting.
> >>
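
(A side note, since this keeps coming up: header splitting in bce(4)
is a compile-time option, so toggling it means rebuilding the kernel
with or without this line in the kernel configuration file:

options         BCE_JUMBO_HDRSPLIT

As far as I know there is no sysctl or ifconfig knob for it at run
time.)
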
> >> The reason we have had header splitting enabled in the past is that
> >> historically there have been issues with memory fragmentation when
> >> using 8k jumbo frames (which require 9k mbuf clusters).
> >>
> >>     
> > Yes, if you use jumbo frames, header splitting would help to reduce
> > memory fragmentation, since it doesn't allocate jumbo clusters.
> >
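
To expand on that point, here is a minimal sketch (illustration only,
not the actual bce(4) RX path; rx_newbuf() is a made-up helper) of
why header splitting sidesteps the fragmentation problem:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/mbuf.h>

/*
 * Without header splitting, a full 8k-MTU frame needs one physically
 * contiguous 9k jumbo cluster, and those become hard to find once
 * memory fragments.  With header splitting the controller scatters
 * each frame into page-sized pieces, so plain MJUMPAGESIZE clusters
 * are enough and contiguity is never an issue.
 */
static struct mbuf *
rx_newbuf(int hdr_split)
{
        struct mbuf *m;
        int size;

        size = hdr_split ? MJUMPAGESIZE : MJUM9BYTES;
        m = m_getjcl(M_NOWAIT, MT_DATA, M_PKTHDR, size);
        if (m != NULL)
                m->m_len = m->m_pkthdr.len = size;
        return (m);
}
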
> 
> Under testing I have yet to see a memory fragmentation issue with this
> driver.  I'll follow up if/when I find a problem with this again.
> 
> >> I have a kernel with the following configuration in testing right now:
> >>
> >> * Flow control enabled.
> >> * Jumbo header splitting turned off.
> >>
> >>
> >> Is there any way that we can fix flow control with jumbo header
> >> splitting turned on?
> >>
> >>     
> > Flow control has nothing to do with header splitting (i.e. flow
> > control is always enabled). 
> >
> >   
> Sorry, let me rephrase that:
> 
> Is there a way to fix the RX buffer shortage issues (when header
> splitting is turned on) so that they are guarded by flow control?
> Maybe change the low watermark for flow control when it's enabled?
> 

I'm not sure how much it would help, but try changing the RX low
watermark. The default value is 32, which seems reasonable. But it's
only for 5709/5716 controllers, and Linux seems to use a different
default value.
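
If you want to experiment with other values without patching a
constant for every test, something like the following sketch could
expose it as a loader tunable next to the existing hw.bce.* knobs.
(hw.bce.rx_lo_water is a name I just made up; the driver doesn't
have it, and the RX context setup for the 5709/5716 would still need
to be changed to write the variable instead of the constant.)

#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/sysctl.h>

/* Hypothetical tunable; 32 matches the current hard-coded default. */
static int bce_rx_lo_water = 32;
TUNABLE_INT("hw.bce.rx_lo_water", &bce_rx_lo_water);
SYSCTL_INT(_hw_bce, OID_AUTO, rx_lo_water, CTLFLAG_RDTUN,
    &bce_rx_lo_water, 0, "RX low watermark (5709/5716 only)");

With that in place you could try, say, hw.bce.rx_lo_water=64 in
/boot/loader.conf between reboots instead of rebuilding the kernel.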


