From: Ian Lepore
Date: Tue, 29 Jul 2014 02:36:41 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r269212 - head/sys/arm/arm
Message-Id: <201407290236.s6T2af9G074324@svn.freebsd.org>

Author: ian
Date: Tue Jul 29 02:36:41 2014
New Revision: 269212
URL: http://svnweb.freebsd.org/changeset/base/269212

Log:
  Memory belonging to an mbuf, or allocated by bus_dmamem_alloc(), never
  triggers a need to bounce due to cacheline alignment.  These buffers are
  always aligned to cacheline boundaries, and even when the DMA operation
  starts at an offset within the buffer or doesn't extend to the end of the
  buffer, it's safe to flush the complete cachelines that were only partially
  involved in the DMA.  This is because there's a very strict rule on these
  types of buffers that there will not be concurrent access by the CPU and
  one or more DMA transfers within the buffer.

  Reviewed by:	cognet

Modified:
  head/sys/arm/arm/busdma_machdep-v6.c

Modified: head/sys/arm/arm/busdma_machdep-v6.c
==============================================================================
--- head/sys/arm/arm/busdma_machdep-v6.c	Tue Jul 29 02:36:27 2014	(r269211)
+++ head/sys/arm/arm/busdma_machdep-v6.c	Tue Jul 29 02:36:41 2014	(r269212)
@@ -162,6 +162,8 @@ struct bus_dmamap {
 	void *callback_arg;
 	int flags;
 #define DMAMAP_COHERENT		(1 << 0)
+#define DMAMAP_DMAMEM_ALLOC	(1 << 1)
+#define DMAMAP_MBUF		(1 << 2)
 	STAILQ_ENTRY(bus_dmamap) links;
 	int sync_count;
 	struct sync_list slist[];
@@ -279,12 +281,22 @@ alignment_bounce(bus_dma_tag_t dmat, bus
 }
 
 /*
- * Return true if the buffer start or end does not fall on a cacheline boundary.
+ * Return true if the DMA should bounce because the start or end does not fall
+ * on a cacheline boundary (which would require a partial cacheline flush).
+ * COHERENT memory doesn't trigger cacheline flushes.  Memory allocated by
+ * bus_dmamem_alloc() is always aligned to cacheline boundaries, and there's a
+ * strict rule that such memory cannot be accessed by the CPU while DMA is in
+ * progress (or by multiple DMA engines at once), so that it's always safe to do
+ * full cacheline flushes even if that affects memory outside the range of a
+ * given DMA operation that doesn't involve the full allocated buffer.  If we're
+ * mapping an mbuf, that follows the same rules as a buffer we allocated.
  */
 static __inline int
-cacheline_bounce(bus_addr_t addr, bus_size_t size)
+cacheline_bounce(bus_dmamap_t map, bus_addr_t addr, bus_size_t size)
 {
 
+	if (map->flags & (DMAMAP_DMAMEM_ALLOC | DMAMAP_COHERENT | DMAMAP_MBUF))
+		return (0);
 	return ((addr | size) & arm_dcache_align_mask);
 }
 
@@ -302,8 +314,9 @@ static __inline int
 might_bounce(bus_dma_tag_t dmat, bus_dmamap_t map, bus_addr_t addr,
     bus_size_t size)
 {
 
-	return ((dmat->flags & BUS_DMA_COULD_BOUNCE) ||
-	    !((map->flags & DMAMAP_COHERENT) && cacheline_bounce(addr, size)));
+	return ((dmat->flags & BUS_DMA_EXCL_BOUNCE) ||
+	    alignment_bounce(dmat, addr) ||
+	    cacheline_bounce(map, addr, size));
 }
 
 /*
@@ -322,8 +335,7 @@ must_bounce(bus_dma_tag_t dmat, bus_dmam
     bus_size_t size)
 {
 
-	/* Coherent memory doesn't need to bounce due to cache alignment. */
-	if (!(map->flags & DMAMAP_COHERENT) && cacheline_bounce(paddr, size))
+	if (cacheline_bounce(map, paddr, size))
 		return (1);
 
 	/*
@@ -727,7 +739,9 @@ bus_dmamem_alloc(bus_dma_tag_t dmat, voi
 		return (ENOMEM);
 	}
 
+	(*mapp)->flags = DMAMAP_DMAMEM_ALLOC;
 	(*mapp)->sync_count = 0;
+
 	/* We may need bounce pages, even for allocated memory */
 	error = allocate_bz_and_pages(dmat, *mapp);
 	if (error != 0) {
@@ -1080,6 +1094,9 @@ _bus_dmamap_load_buffer(bus_dma_tag_t dm
 	if (segs == NULL)
 		segs = dmat->segments;
 
+	if (flags & BUS_DMA_LOAD_MBUF)
+		map->flags |= DMAMAP_MBUF;
+
 	map->pmap = pmap;
 
 	if (might_bounce(dmat, map, (bus_addr_t)buf, buflen)) {
@@ -1196,6 +1213,7 @@ _bus_dmamap_unload(bus_dma_tag_t dmat, b
 		map->pagesneeded = 0;
 	}
 	map->sync_count = 0;
+	map->flags &= ~DMAMAP_MBUF;
}
 
 #ifdef notyetbounceuser
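
For readers outside the kernel tree, here is a minimal standalone sketch (not
part of the commit) of the flag-based short-circuit the diff introduces in
cacheline_bounce().  The DMAMAP_* values match the diff; the map structure,
the 64-byte cacheline mask, and the example addresses are simplified
assumptions so the snippet builds and runs as an ordinary userland program,
not the kernel's actual types.

#include <stdio.h>

#define DMAMAP_COHERENT		(1 << 0)
#define DMAMAP_DMAMEM_ALLOC	(1 << 1)
#define DMAMAP_MBUF		(1 << 2)

#define ARM_DCACHE_ALIGN_MASK	63	/* assume 64-byte cachelines */

struct map {
	int flags;
};

/* Bounce only when a partial-cacheline flush would otherwise be required. */
static int
cacheline_bounce(const struct map *map, unsigned long addr, unsigned long size)
{
	/* Buffers with these flags may safely take whole-cacheline flushes. */
	if (map->flags & (DMAMAP_DMAMEM_ALLOC | DMAMAP_COHERENT | DMAMAP_MBUF))
		return (0);
	/* Otherwise bounce if the start or end is not cacheline aligned. */
	return (((addr | size) & ARM_DCACHE_ALIGN_MASK) != 0);
}

int
main(void)
{
	struct map plain = { 0 };
	struct map mbuf = { DMAMAP_MBUF };

	/* Misaligned transfer in an ordinary buffer: must bounce (prints 1). */
	printf("plain buffer, addr 0x1004, len 24: %d\n",
	    cacheline_bounce(&plain, 0x1004, 24));
	/* Same transfer inside an mbuf: never bounces for alignment (prints 0). */
	printf("mbuf,         addr 0x1004, len 24: %d\n",
	    cacheline_bounce(&mbuf, 0x1004, 24));
	return (0);
}

The point of the short-circuit is the contract described in the log message:
for mbufs and bus_dmamem_alloc() memory the CPU never touches the buffer while
DMA is in flight, so flushing the whole cachelines that straddle the transfer
cannot corrupt adjacent data, and the expensive bounce-page path is skipped.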