From owner-svn-src-all@FreeBSD.ORG Tue May 6 01:15:43 2014
Message-Id: <201405060115.s461Fgm6080700@svn.freebsd.org>
From: Adrian Chadd
Date: Tue, 6 May 2014 01:15:42 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r265409 - head/sys/dev/ath
X-SVN-Group: head
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Author: adrian
Date: Tue May 6 01:15:42 2014
New Revision: 265409
URL: http://svnweb.freebsd.org/changeset/base/265409

Log:
  Modify the RX path to keep the previous RX descriptor around once it's
  used.

  It turns out that the RX DMA engine does the same last-descriptor
  link-pointer re-reading trick that the TX DMA engine does: the hardware
  re-reads the link pointer before it moves on to the next descriptor.
  Thus we can't recycle a descriptor as soon as we've processed it; the
  hardware may still need to re-read its link pointer before we overwrite
  it with a new one.  (A simplified sketch of the resulting holding-buffer
  rotation follows at the end of this message.)

  Tested:

  * AR5416, STA mode

  TODO:

  * More thorough AP and STA mode testing!
  * Test on other pre-AR9380 NICs, just to be sure.
  * Break out the RX descriptor grabbing bits from the RX completion
    bits, as is done in the RX EDMA code, so that the RX lock can be
    held during ath_rx_proc() but not across packet input.

Modified:
  head/sys/dev/ath/if_ath_rx.c
  head/sys/dev/ath/if_ath_sysctl.c
  head/sys/dev/ath/if_athvar.h

Modified: head/sys/dev/ath/if_ath_rx.c
==============================================================================
--- head/sys/dev/ath/if_ath_rx.c	Tue May 6 00:00:07 2014	(r265408)
+++ head/sys/dev/ath/if_ath_rx.c	Tue May 6 01:15:42 2014	(r265409)
@@ -245,6 +245,8 @@ ath_legacy_rxbuf_init(struct ath_softc *
 	struct mbuf *m;
 	struct ath_desc *ds;
 
+	/* XXX TODO: ATH_RX_LOCK_ASSERT(sc); */
+
 	m = bf->bf_m;
 	if (m == NULL) {
 		/*
@@ -974,6 +976,14 @@ rx_next:
 
 #define	ATH_RX_MAX		128
 
+/*
+ * XXX TODO: break out the "get buffers" from "call ath_rx_pkt()" like
+ * the EDMA code does.
+ *
+ * XXX TODO: then, do all of the RX list management stuff inside
+ * ATH_RX_LOCK() so we don't end up potentially racing.  The EDMA
+ * code is doing it right.
+ */
 static void
 ath_rx_proc(struct ath_softc *sc, int resched)
 {
@@ -995,6 +1005,7 @@ ath_rx_proc(struct ath_softc *sc, int re
 	u_int64_t tsf;
 	int npkts = 0;
 	int kickpcu = 0;
+	int ret;
 
 	/* XXX we must not hold the ATH_LOCK here */
 	ATH_UNLOCK_ASSERT(sc);
@@ -1094,8 +1105,26 @@ ath_rx_proc(struct ath_softc *sc, int re
 		if (ath_rx_pkt(sc, rs, status, tsf, nf, HAL_RX_QUEUE_HP, bf, m))
 			ngood++;
 rx_proc_next:
-		TAILQ_INSERT_TAIL(&sc->sc_rxbuf, bf, bf_list);
-	} while (ath_rxbuf_init(sc, bf) == 0);
+		/*
+		 * If there's a holding buffer, insert it onto the RX
+		 * list; the hardware is definitely no longer pointing
+		 * to it.
+		 */
+		ret = 0;
+		if (sc->sc_rxedma[HAL_RX_QUEUE_HP].m_holdbf != NULL) {
+			TAILQ_INSERT_TAIL(&sc->sc_rxbuf,
+			    sc->sc_rxedma[HAL_RX_QUEUE_HP].m_holdbf,
+			    bf_list);
+			ret = ath_rxbuf_init(sc,
+			    sc->sc_rxedma[HAL_RX_QUEUE_HP].m_holdbf);
+		}
+		/*
+		 * Next, throw our buffer into the holding entry.  The hardware
+		 * may use the descriptor to read the link pointer before
+		 * DMAing the next descriptor in to write out a packet.
+		 */
+		sc->sc_rxedma[HAL_RX_QUEUE_HP].m_holdbf = bf;
+	} while (ret == 0);
 
 	/* rx signal state monitoring */
 	ath_hal_rxmonitor(ah, &sc->sc_halstats, sc->sc_curchan);
@@ -1127,6 +1156,13 @@ rx_proc_next:
 		 * constantly write over the same frame, leading
 		 * the RX driver code here to get heavily confused.
 		 */
+		/*
+		 * XXX Has RX DMA stopped enough here to just call
+		 * ath_startrecv()?
+		 * XXX Do we need to use the holding buffer to restart
+		 * RX DMA by appending entries to the final
+		 * descriptor?  Quite likely.
+		 */
 #if 1
 		ath_startrecv(sc);
 #else
@@ -1217,6 +1253,58 @@ ath_legacy_flushrecv(struct ath_softc *s
 	ath_rx_proc(sc, 0);
 }
 
+static void
+ath_legacy_flush_rxpending(struct ath_softc *sc)
+{
+
+	/* XXX ATH_RX_LOCK_ASSERT(sc); */
+
+	if (sc->sc_rxedma[HAL_RX_QUEUE_LP].m_rxpending != NULL) {
+		m_freem(sc->sc_rxedma[HAL_RX_QUEUE_LP].m_rxpending);
+		sc->sc_rxedma[HAL_RX_QUEUE_LP].m_rxpending = NULL;
+	}
+	if (sc->sc_rxedma[HAL_RX_QUEUE_HP].m_rxpending != NULL) {
+		m_freem(sc->sc_rxedma[HAL_RX_QUEUE_HP].m_rxpending);
+		sc->sc_rxedma[HAL_RX_QUEUE_HP].m_rxpending = NULL;
+	}
+}
+
+static int
+ath_legacy_flush_rxholdbf(struct ath_softc *sc)
+{
+	struct ath_buf *bf;
+
+	/* XXX ATH_RX_LOCK_ASSERT(sc); */
+	/*
+	 * If there are RX holding buffers, free them here and return
+	 * them to the list.
+	 *
+	 * XXX should just verify that bf->bf_m is NULL, as it must
+	 * be at this point!
+	 */
+	bf = sc->sc_rxedma[HAL_RX_QUEUE_HP].m_holdbf;
+	if (bf != NULL) {
+		if (bf->bf_m != NULL)
+			m_freem(bf->bf_m);
+		bf->bf_m = NULL;
+		TAILQ_INSERT_TAIL(&sc->sc_rxbuf, bf, bf_list);
+		(void) ath_rxbuf_init(sc, bf);
+	}
+	sc->sc_rxedma[HAL_RX_QUEUE_HP].m_holdbf = NULL;
+
+	bf = sc->sc_rxedma[HAL_RX_QUEUE_LP].m_holdbf;
+	if (bf != NULL) {
+		if (bf->bf_m != NULL)
+			m_freem(bf->bf_m);
+		bf->bf_m = NULL;
+		TAILQ_INSERT_TAIL(&sc->sc_rxbuf, bf, bf_list);
+		(void) ath_rxbuf_init(sc, bf);
+	}
+	sc->sc_rxedma[HAL_RX_QUEUE_LP].m_holdbf = NULL;
+
+	return (0);
+}
+
 /*
  * Disable the receive h/w in preparation for a reset.
  */
@@ -1228,6 +1316,8 @@ ath_legacy_stoprecv(struct ath_softc *sc
 	    ((_pa) - (_sc)->sc_rxdma.dd_desc_paddr)))
 	struct ath_hal *ah = sc->sc_ah;
 
+	ATH_RX_LOCK(sc);
+
 	ath_hal_stoppcurecv(ah);	/* disable PCU */
 	ath_hal_setrxfilter(ah, 0);	/* clear recv filter */
 	ath_hal_stopdmarecv(ah);	/* disable DMA engine */
@@ -1261,22 +1351,23 @@ ath_legacy_stoprecv(struct ath_softc *sc
 		}
 	}
 #endif
-	/*
-	 * Free both high/low RX pending, just in case.
-	 */
-	if (sc->sc_rxedma[HAL_RX_QUEUE_LP].m_rxpending != NULL) {
-		m_freem(sc->sc_rxedma[HAL_RX_QUEUE_LP].m_rxpending);
-		sc->sc_rxedma[HAL_RX_QUEUE_LP].m_rxpending = NULL;
-	}
-	if (sc->sc_rxedma[HAL_RX_QUEUE_HP].m_rxpending != NULL) {
-		m_freem(sc->sc_rxedma[HAL_RX_QUEUE_HP].m_rxpending);
-		sc->sc_rxedma[HAL_RX_QUEUE_HP].m_rxpending = NULL;
-	}
+
+	(void) ath_legacy_flush_rxpending(sc);
+	(void) ath_legacy_flush_rxholdbf(sc);
+
 	sc->sc_rxlink = NULL;		/* just in case */
+
+	ATH_RX_UNLOCK(sc);
 #undef PA2DESC
 }
 
 /*
+ * XXX TODO: something was calling startrecv without calling
+ * stoprecv.  Let's figure out what/why.  It was showing up
+ * as an mbuf leak (rxpending) and an ath_buf leak (holdbf).
+ */
+
+/*
  * Enable the receive h/w following a reset.
  */
 static int
@@ -1285,9 +1376,18 @@ ath_legacy_startrecv(struct ath_softc *s
 	struct ath_hal *ah = sc->sc_ah;
 	struct ath_buf *bf;
 
+	ATH_RX_LOCK(sc);
+
+	/*
+	 * XXX should verify these are already all NULL!
+	 */
 	sc->sc_rxlink = NULL;
-	sc->sc_rxedma[HAL_RX_QUEUE_LP].m_rxpending = NULL;
-	sc->sc_rxedma[HAL_RX_QUEUE_HP].m_rxpending = NULL;
+	(void) ath_legacy_flush_rxpending(sc);
+	(void) ath_legacy_flush_rxholdbf(sc);
+
+	/*
+	 * Re-chain all of the buffers in the RX buffer list.
+	 */
 	TAILQ_FOREACH(bf, &sc->sc_rxbuf, bf_list) {
 		int error = ath_rxbuf_init(sc, bf);
 		if (error != 0) {
@@ -1303,6 +1403,8 @@ ath_legacy_startrecv(struct ath_softc *s
 	ath_hal_rxena(ah);		/* enable recv descriptors */
 	ath_mode_init(sc);		/* set filters, etc. */
 	ath_hal_startpcurecv(ah);	/* re-enable PCU/DMA engine */
+
+	ATH_RX_UNLOCK(sc);
 	return 0;
 }
 
Modified: head/sys/dev/ath/if_ath_sysctl.c
==============================================================================
--- head/sys/dev/ath/if_ath_sysctl.c	Tue May 6 00:00:07 2014	(r265408)
+++ head/sys/dev/ath/if_ath_sysctl.c	Tue May 6 01:15:42 2014	(r265409)
@@ -413,12 +413,14 @@ ath_sysctl_txagg(SYSCTL_HANDLER_ARGS)
 
 	ATH_RX_LOCK(sc);
 	for (i = 0; i < 2; i++) {
-		printf("%d: fifolen: %d/%d; head=%d; tail=%d\n",
+		printf("%d: fifolen: %d/%d; head=%d; tail=%d; m_pending=%p, m_holdbf=%p\n",
 		    i,
 		    sc->sc_rxedma[i].m_fifo_depth,
 		    sc->sc_rxedma[i].m_fifolen,
 		    sc->sc_rxedma[i].m_fifo_head,
-		    sc->sc_rxedma[i].m_fifo_tail);
+		    sc->sc_rxedma[i].m_fifo_tail,
+		    sc->sc_rxedma[i].m_rxpending,
+		    sc->sc_rxedma[i].m_holdbf);
 	}
 	i = 0;
 	TAILQ_FOREACH(bf, &sc->sc_rxbuf, bf_list) {

Modified: head/sys/dev/ath/if_athvar.h
==============================================================================
--- head/sys/dev/ath/if_athvar.h	Tue May 6 00:00:07 2014	(r265408)
+++ head/sys/dev/ath/if_athvar.h	Tue May 6 01:15:42 2014	(r265409)
@@ -510,6 +510,7 @@ struct ath_rx_edma {
 	int m_fifo_tail;
 	int m_fifo_depth;
 	struct mbuf *m_rxpending;
+	struct ath_buf *m_holdbf;
 };
 
 struct ath_tx_edma_fifo {
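
As a rough illustration of the approach described in the commit log, below is a
minimal, self-contained sketch of the holding-buffer rotation.  The types and
names (struct rxbuf, struct rxqueue, rx_complete()) are simplified stand-ins
made up for this example, not the ath(4) driver's actual structures; in the
real change the completed ath_buf is parked in
sc->sc_rxedma[HAL_RX_QUEUE_HP].m_holdbf and only re-inserted onto sc_rxbuf on
the following pass, as the diff above shows.

#include <stddef.h>

/*
 * Simplified, hypothetical types -- not the ath(4) driver's own.
 */
struct rxbuf {
	struct rxbuf	*next;	/* software free-list linkage */
	void		*desc;	/* hardware descriptor (holds the link pointer) */
};

struct rxqueue {
	struct rxbuf	*freelist;	/* buffers ready to be re-initialised */
	struct rxbuf	*holdbuf;	/* last completed buffer, still parked */
};

/*
 * Called once per completed RX descriptor, oldest first.
 */
static void
rx_complete(struct rxqueue *q, struct rxbuf *bf)
{

	/*
	 * The previously parked buffer is now safe to recycle: the DMA
	 * engine has moved past it on to 'bf', so it will no longer
	 * re-read that descriptor's link pointer.
	 */
	if (q->holdbuf != NULL) {
		q->holdbuf->next = q->freelist;
		q->freelist = q->holdbuf;
	}

	/*
	 * Park the buffer that just completed.  Its link pointer may
	 * still be re-read by the hardware before the next frame is
	 * DMAed in, so it must not be re-linked or re-initialised yet.
	 */
	q->holdbuf = bf;
}

The key property is that a descriptor only goes back onto the free list once
the hardware has demonstrably moved past it, i.e. once a later descriptor has
completed.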