Date:      Wed, 2 Dec 2020 16:54:24 +0000 (UTC)
From:      Michal Meloun <mmel@FreeBSD.org>
To:        src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject:   svn commit: r368279 - head/sys/dev/nvme
Message-ID:  <202012021654.0B2GsOP8000763@repo.freebsd.org>

Author: mmel
Date: Wed Dec  2 16:54:24 2020
New Revision: 368279
URL: https://svnweb.freebsd.org/changeset/base/368279

Log:
  NVME: Multiple busdma-related fixes.
  - In nvme_qpair_process_completions(), do the DMA sync before the completion
    buffer is read (see the first sketch below).
  - In nvme_qpair_submit_tracker(), skip the explicit wmb() on arm and arm64 as
    well: on these architectures bus_dmamap_sync() is sufficient to ensure that
    all CPU stores are visible to external (including DMA) observers (second
    sketch).
  - Allocate the completion buffer with BUS_DMA_COHERENT. On non-DMA-coherent
    systems, buffers continuously owned (and accessed) by DMA must be allocated
    with this flag. Note that BUS_DMA_COHERENT is a no-op on DMA-coherent
    systems (or on coherent buses in mixed systems); see the third sketch.
  
  MFC after:	4 weeks
  Reviewed by:	mav, imp
  Differential Revision: https://reviews.freebsd.org/D27446
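
The first fix moves the bus_dmamap_sync() call ahead of any read of the
completion ring, so that on a non-coherent platform stale cache lines are
invalidated before the CPU inspects what the device wrote via DMA. A minimal
sketch of the resulting ordering, using hypothetical names modeled on
struct nvme_qpair rather than the driver's exact code:

  #include <sys/param.h>
  #include <sys/systm.h>
  #include <machine/bus.h>
  #include <dev/nvme/nvme.h>

  struct my_qpair {                       /* illustrative stand-in */
      bus_dma_tag_t          dma_tag;
      bus_dmamap_t           queuemem_map;
      struct nvme_completion *cpl;        /* completion ring */
      uint32_t               cq_head;
      int                    phase;
  };

  static bool
  poll_one_completion(struct my_qpair *qpair)
  {
      struct nvme_completion cpl;

      /* Sync BEFORE touching the ring: pull in the device's DMA writes. */
      bus_dmamap_sync(qpair->dma_tag, qpair->queuemem_map,
          BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE);

      cpl = qpair->cpl[qpair->cq_head];
      if (NVME_STATUS_GET_P(le16toh(cpl.status)) != qpair->phase)
          return (false);                 /* nothing new posted yet */
      /* ... complete the tracker, advance cq_head, update the doorbell ... */
      return (true);
  }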
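
The second fix leans on the memory-ordering guarantees of bus_dmamap_sync():
on arm and arm64, as on powerpc, the PREWRITE sync already issues a barrier
strong enough to publish the CPU's stores to the device, so the extra wmb()
before the doorbell write is kept only for the remaining architectures. A
sketch of the submit path after the change (field names and the doorbell
register offset are illustrative, not the driver's exact code):

  /* Copy the command into the DMA-visible submission ring. */
  memcpy(&qpair->cmd[qpair->sq_tail], &tr->req->cmd,
      sizeof(struct nvme_command));

  /* Publish the stores to the device. */
  bus_dmamap_sync(qpair->dma_tag, qpair->queuemem_map,
      BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE);

  #if !defined(__powerpc__) && !defined(__aarch64__) && !defined(__arm__)
      /* Other architectures may still need an explicit store fence here. */
      wmb();
  #endif

  /* Ring the submission-queue tail doorbell. */
  bus_space_write_4(qpair->bus_tag, qpair->bus_handle,
      qpair->sq_tdbl_off, qpair->sq_tail);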
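
The third fix matters on systems without cache-coherent DMA: the queue memory
is owned by the device for its entire lifetime, so it must be mapped coherently
(typically uncached) at allocation time via BUS_DMA_COHERENT; on coherent
systems or buses the flag costs nothing. A sketch of the allocation call, with
the surrounding error handling kept illustrative:

  void *queuemem;

  if (bus_dmamem_alloc(qpair->dma_tag, &queuemem,
      BUS_DMA_COHERENT | BUS_DMA_NOWAIT, &qpair->queuemem_map) != 0) {
      printf("failed to allocate qpair memory\n");
      return (ENOMEM);
  }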

Modified:
  head/sys/dev/nvme/nvme_qpair.c

Modified: head/sys/dev/nvme/nvme_qpair.c
==============================================================================
--- head/sys/dev/nvme/nvme_qpair.c	Wed Dec  2 16:46:45 2020	(r368278)
+++ head/sys/dev/nvme/nvme_qpair.c	Wed Dec  2 16:54:24 2020	(r368279)
@@ -547,6 +547,8 @@ nvme_qpair_process_completions(struct nvme_qpair *qpai
 	if (!qpair->is_enabled)
 		return (false);
 
+	bus_dmamap_sync(qpair->dma_tag, qpair->queuemem_map,
+	    BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE);
 	/*
 	 * A panic can stop the CPU this routine is running on at any point.  If
 	 * we're called during a panic, complete the sq_head wrap protocol for
@@ -580,8 +582,6 @@ nvme_qpair_process_completions(struct nvme_qpair *qpai
 		}
 	}
 
-	bus_dmamap_sync(qpair->dma_tag, qpair->queuemem_map,
-	    BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE);
 	while (1) {
 		cpl = qpair->cpl[qpair->cq_head];
 
@@ -722,7 +722,7 @@ nvme_qpair_construct(struct nvme_qpair *qpair,
 	bus_dma_tag_set_domain(qpair->dma_tag, qpair->domain);
 
 	if (bus_dmamem_alloc(qpair->dma_tag, (void **)&queuemem,
-	    BUS_DMA_NOWAIT, &qpair->queuemem_map)) {
+	    BUS_DMA_COHERENT | BUS_DMA_NOWAIT, &qpair->queuemem_map)) {
 		nvme_printf(ctrlr, "failed to alloc qpair memory\n");
 		goto out;
 	}
@@ -982,7 +982,7 @@ nvme_qpair_submit_tracker(struct nvme_qpair *qpair, st
 
 	bus_dmamap_sync(qpair->dma_tag, qpair->queuemem_map,
 	    BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE);
-#ifndef __powerpc__
+#if !defined(__powerpc__) && !defined(__aarch64__) && !defined(__arm__)
 	/*
 	 * powerpc's bus_dmamap_sync() already includes a heavyweight sync, but
 	 * no other archs do.


