Date:      Tue, 04 Dec 2012 22:57:47 -0500
From:      Karim Fodil-Lemelin <fodillemlinkarim@gmail.com>
To:        Andre Oppermann <oppermann@networx.ch>
Cc:        Barney Cordoba <barney_cordoba@yahoo.com>, Adrian Chadd <adrian@freebsd.org>, John Baldwin <jhb@freebsd.org>, freebsd-net@freebsd.org
Subject:   Re: Latency issues with buf_ring
Message-ID:  <50BEC63B.6020801@gmail.com>
In-Reply-To: <50BE56C8.1030804@networx.ch>
References:  <1353259441.19423.YahooMailClassic@web121605.mail.ne1.yahoo.com> <201212041108.17645.jhb@freebsd.org> <CAJ-Vmo=tFFkeK2uADMPuBrgX6wN_9TSjAgs0WKPCrEfyhkG6Pw@mail.gmail.com> <50BE56C8.1030804@networx.ch>

Hi,

On 04/12/2012 3:02 PM, Andre Oppermann wrote:
> On 04.12.2012 20:34, Adrian Chadd wrote:
>> .. and it's important to note that buf_ring itself doesn't have the
>> race condition; it's the general driver implementation that's racy.
>>
>> I have the same races in ath(4) with the watchdog programming. Exactly
>> the same issue.
>
> Our IF_* stack/driver boundary handoff isn't up to the task anymore.
>
> Also, the interactions are either poorly defined or poorly understood in many
> places.  I've had a few chats with yongari@ and am experimenting with
> a modernized interface in my branch.
>
> The reason I stumbled across it is that I'm extending the hardware
> offload feature set and found out that the stack and the drivers (and
> the drivers among themselves) are not really in sync with regard to
> behavior.
>
> For most if not all Ethernet drivers from 100 Mbit/s up, the TX DMA
> rings are so large that buffering at the IFQ level doesn't make sense
> anymore and only adds latency.  So the driver could simply put
> everything directly into the TX DMA ring and not even try to
> soft-queue.  If the TX DMA ring is full, ENOBUFS is returned instead
> of filling yet another queue.  However there are ALTQ interactions and
> other mechanisms which have to be considered too, making it a bit more
> involved.
I've also been running into this 'internalization' of drbr for quite 
some time now.

I have been toying with some ideas around a multi-queue capable ALTQ. 
Not unlike IFQ_*, the whole class_queue_t code in ALTQ could use some 
freshening up. One avenue I am looking into is using drbr queues (and 
their associated TX lock) as the back-end queue implementation for ALTQ. 
ALTQ(9) has a concept of driver-managed queues, and the approach tries 
to keep the same paradigm while adapting it for buf_ring. In that 
context, it doesn't feel natural to me that the drbr logic is handled 
so low inside the device drivers, which makes system-level modifications 
to ALTQ unnecessarily driver dependent.
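
Something along these lines is what I have in mind (very rough, nothing
like this exists yet; all of the clq_drbr_* names are invented, and the
ring and lock would be handed in by, or shared with, the driver):

struct clq_drbr {
        struct buf_ring *clq_ring;      /* per-class packet ring */
        struct mtx      *clq_txlock;    /* driver's TX queue lock */
};

static int
clq_drbr_enqueue(struct clq_drbr *q, struct mbuf *m)
{
        int error;

        /* buf_ring_enqueue() is multi-producer safe, so no lock here. */
        error = buf_ring_enqueue(q->clq_ring, m);
        if (error != 0)
                m_freem(m);     /* ring full: drop, as drbr_enqueue() does */
        return (error);
}

static struct mbuf *
clq_drbr_dequeue(struct clq_drbr *q)
{
        /* Single consumer: caller must hold the driver's TX lock. */
        mtx_assert(q->clq_txlock, MA_OWNED);
        return (buf_ring_dequeue_sc(q->clq_ring));
}
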

ALTQ also uses very coarse-grained locking (the IFQ_LOCK for 
everything), which doesn't make much sense on an SMP/multiqueue system, 
but that's another story.
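
Just to illustrate the direction I mean there: a lock and a ring per
queue instead of one IFQ_LOCK over everything (again purely hypothetical
names, and the ring size is arbitrary):

struct altq_subq {
        struct mtx       asq_mtx;       /* protects this queue only */
        struct buf_ring *asq_ring;      /* per-queue packet ring */
};

static void
altq_subq_init(struct altq_subq *asq)
{
        /*
         * One mutex per queue, so TX paths running on different CPUs
         * stop serializing on a single IFQ_LOCK.
         */
        mtx_init(&asq->asq_mtx, "altq_subq", NULL, MTX_DEF);
        asq->asq_ring = buf_ring_alloc(4096, M_DEVBUF, M_WAITOK,
            &asq->asq_mtx);
}
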
>
> I'm coming up with a draft and some benchmark results for an updated
> stack/driver boundary in the next weeks before xmas.
>
Sounds great, can't wait to read it while drinking that eggnog :)


