Date:      Wed, 21 Oct 2015 19:24:20 +0000 (UTC)
From:      Ian Lepore <ian@FreeBSD.org>
To:        src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject:   svn commit: r289716 - head/sys/mips/mips
Message-ID:  <201510211924.t9LJOKk1078834@repo.freebsd.org>

Author: ian
Date: Wed Oct 21 19:24:20 2015
New Revision: 289716
URL: https://svnweb.freebsd.org/changeset/base/289716

Log:
  Treat mbufs as cacheline-aligned.  Even when the transfer begins at an
  offset within the buffer to align the L3 headers, we know the buffer itself
  was allocated and sized on cacheline boundaries, so we don't need to
  preserve partial cachelines at the start and end of the buffer when
  doing busdma sync operations.

Modified:
  head/sys/mips/mips/busdma_machdep.c

Modified: head/sys/mips/mips/busdma_machdep.c
==============================================================================
--- head/sys/mips/mips/busdma_machdep.c	Wed Oct 21 19:16:13 2015	(r289715)
+++ head/sys/mips/mips/busdma_machdep.c	Wed Oct 21 19:24:20 2015	(r289716)
@@ -951,6 +951,8 @@ _bus_dmamap_load_buffer(bus_dma_tag_t dm
 
 	if (segs == NULL)
 		segs = dmat->segments;
+	if ((flags & BUS_DMA_LOAD_MBUF) != 0)
+		map->flags |= DMAMAP_CACHE_ALIGNED;
 
 	if ((dmat->flags & BUS_DMA_COULD_BOUNCE) != 0) {
 		_bus_dmamap_count_pages(dmat, map, pmap, buf, buflen, flags);
@@ -1071,10 +1073,16 @@ bus_dmamap_sync_buf(vm_offset_t buf, int
 	 * prevent data loss we save these chunks in a temporary buffer
 	 * before invalidation and restore them after it.
 	 *
-	 * If the aligned flag is set the buffer came from our allocator caches
-	 * which are always sized and aligned to cacheline boundaries, so we can
-	 * skip preserving nearby data if a transfer is unaligned (especially
-	 * it's likely to not end on a boundary).
+	 * If the aligned flag is set, the buffer is either an mbuf or came from
+	 * our allocator caches.  In both cases it is always sized and
+	 * aligned to cacheline boundaries, so we can skip preserving nearby
+	 * data if a transfer appears to overlap cachelines.  An mbuf in
+	 * particular will usually appear to be overlapped because of offsetting
+	 * within the buffer to align the L3 headers, but we know that the bytes
+	 * preceding that offset are part of the same mbuf memory and are not
+	 * unrelated adjacent data (and a rule of mbuf handling is that the CPU
+	 * is not allowed to touch the mbuf while DMA is in progress, including
+	 * header fields).
 	 */
 	if (aligned) {
 		size_cl = 0;


