From owner-freebsd-arm@FreeBSD.ORG Tue Sep  4 19:30:30 2012
From: Ian Lepore
To: Warner Losh
Cc: freebsd-arm@freebsd.org, freebsd-mips@freebsd.org
Subject: Re: busdma buffer management enhancements - call for review and test
Date: Tue, 04 Sep 2012 13:30:24 -0600
Message-ID: <1346787024.1140.654.camel@revolution.hippie.lan>
In-Reply-To: <2C9793BE-70B0-4A18-B5D2-FBA8C63A4552@bsdimp.com>
References: <1346777897.1140.633.camel@revolution.hippie.lan>
	 <1346781390.1140.641.camel@revolution.hippie.lan>
	 <2C9793BE-70B0-4A18-B5D2-FBA8C63A4552@bsdimp.com>
Content-Type: text/plain; charset="us-ascii"
List-Id: Porting FreeBSD to the StrongARM Processor

On Tue, 2012-09-04 at 12:08 -0600, Warner Losh wrote:
> On Sep 4, 2012, at 11:56 AM, Ian Lepore wrote:
>
> > On Tue, 2012-09-04 at 12:40 -0500, Mark Tinguely wrote:
> >> On Tue, Sep 4, 2012 at 11:58 AM, Ian Lepore wrote:
> >> How about having a per processor cache line definition rather than
> >> using:
> >>
> >> +#define MIN_ZONE_BUFSIZE 32
> >>
> >> For those archs that have a 64 byte cache line.
> >>
> >> I like the separation of the regular and BUS_DMA_COHERENT or
> >> BUS_DMA_NOCACHE.  I never liked turning the cache off the entire page
> >> for a few DMA buffers.
> >>
> >> --Mark.
>
> > The code in busdma_bufalloc.c (where that constant appears) knows
> > nothing about architecture cache line sizes; that info has to be
> > passed in to the create call as the minimum_alignment parameter.
> >
> > The point of that constant is to enforce internal limits in the
> > implementation... if the caller passes in a minimum_alignment size
> > less than 32, it will still use 32 as the smallest buffer size in the
> > cache, purely because it only allocates enough slots in the array to
> > handle log2(PAGE_SIZE) - log2(MIN_ZONE_BUFSIZE) + 1 zones, but you
> > can't use log2() in a compile-time constant expression, so things are
> > a bit hand-tuned, and there's some code for sanity-enforcement.
>
> kassert then, because we'll have this problem all over again.  Would be
> nice if we could make those arrays dynamically sized though...
>
> Will read the rest of the patch later today...

It's not really a kassert kind of thing.  The allocator automatically
creates a uma zone for each power-of-two size between minimum_alignment
and PAGE_SIZE.  If the caller passes in a minimum_alignment of 1 (which
may be perfectly reasonable on some platforms) then we don't need caches
of buffers sized at 1, 2, 4, etc. bytes.  The 32 is just a reasonable
minimum size of buffer to keep in the cache.  The upper size limit is
enforced with a compile-time check for PAGE_SIZE greater than 64K and a
#error.
The alternative to this would be to have the caller pass in an upper
size limit for cached buffers as well as the lower limit, but we would
have to clip the passed-in value to PAGE_SIZE to keep meeting the
promises the allocator makes about the alignment, boundaries, and
contiguity of the buffers it serves up.

We could certainly add code to malloc the array at just the right size.
That would typically eliminate a few unused slots in the array, saving
maybe 50-100 bytes.

-- Ian