Date:      Thu, 1 Feb 2018 16:52:03 +0000 (UTC)
From:      Alexander Motin <mav@FreeBSD.org>
To:        src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-stable@freebsd.org, svn-src-stable-11@freebsd.org
Subject:   svn commit: r328691 - stable/11/sys/dev/nvme
Message-ID:  <201802011652.w11Gq3Zh031059@repo.freebsd.org>

Author: mav
Date: Thu Feb  1 16:52:03 2018
New Revision: 328691
URL: https://svnweb.freebsd.org/changeset/base/328691

Log:
  MFC r322994: Set the max transactions for NVMe drives better.
  
  Provided a better estimate for the number of transactions that can be
  pending at one time. This will be number of queues * number of
  trackers / 4, as suggested by Jim Harris. This gives a better estimate
  of the number of transactions that CAM should queue before applying
  back pressure. This should be revisited when we have real multi-queue
  support in CAM and the upper layers of the I/O stack.

Modified:
  stable/11/sys/dev/nvme/nvme_ctrlr.c
  stable/11/sys/dev/nvme/nvme_private.h
  stable/11/sys/dev/nvme/nvme_sim.c
Directory Properties:
  stable/11/   (props changed)

Modified: stable/11/sys/dev/nvme/nvme_ctrlr.c
==============================================================================
--- stable/11/sys/dev/nvme/nvme_ctrlr.c	Thu Feb  1 16:51:11 2018	(r328690)
+++ stable/11/sys/dev/nvme/nvme_ctrlr.c	Thu Feb  1 16:52:03 2018	(r328691)
@@ -146,6 +146,14 @@ nvme_ctrlr_construct_io_qpairs(struct nvme_controller 
 	num_trackers = min(num_trackers, (num_entries-1));
 
 	/*
+	 * Our best estimate for the maximum number of I/Os that we should
+	 * normally have in flight at one time. This should be viewed as a hint,
+	 * not a hard limit, and will need to be revisited when the upper layers
+	 * of the storage system grow multi-queue support.
+	 */
+	ctrlr->max_hw_pend_io = num_trackers * ctrlr->num_io_queues / 4;
+
+	/*
 	 * This was calculated previously when setting up interrupts, but
 	 *  a controller could theoretically support fewer I/O queues than
 	 *  MSI-X vectors.  So calculate again here just to be safe.

Modified: stable/11/sys/dev/nvme/nvme_private.h
==============================================================================
--- stable/11/sys/dev/nvme/nvme_private.h	Thu Feb  1 16:51:11 2018	(r328690)
+++ stable/11/sys/dev/nvme/nvme_private.h	Thu Feb  1 16:52:03 2018	(r328691)
@@ -263,6 +263,7 @@ struct nvme_controller {
 
 	uint32_t		num_io_queues;
 	uint32_t		num_cpus_per_ioq;
+	uint32_t		max_hw_pend_io;
 
 	/* Fields for tracking progress during controller initialization. */
 	struct intr_config_hook	config_hook;

Modified: stable/11/sys/dev/nvme/nvme_sim.c
==============================================================================
--- stable/11/sys/dev/nvme/nvme_sim.c	Thu Feb  1 16:51:11 2018	(r328690)
+++ stable/11/sys/dev/nvme/nvme_sim.c	Thu Feb  1 16:52:03 2018	(r328691)
@@ -261,7 +261,7 @@ nvme_sim_new_controller(struct nvme_controller *ctrlr)
 	int unit;
 	struct nvme_sim_softc *sc = NULL;
 
-	max_trans = ctrlr->num_io_queues;
+	max_trans = ctrlr->max_hw_pend_io;
 	unit = device_get_unit(ctrlr->dev);
 	devq = cam_simq_alloc(max_trans);
 	if (devq == NULL)
