From: Navdeep Parhar <np@FreeBSD.org>
Date: Mon, 29 Apr 2019 20:10:28 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org,
    svn-src-stable@freebsd.org, svn-src-stable-11@freebsd.org
Subject: svn commit: r346923 - in stable/11/sys/dev/cxgbe: . iw_cxgbe
Message-Id: <201904292010.x3TKAS0X094690@repo.freebsd.org>

Author: np
Date: Mon Apr 29 20:10:28 2019
New Revision: 346923
URL: https://svnweb.freebsd.org/changeset/base/346923

Log:
  MFC r339891, r340063, r342266, r342270, r342272, r342288-r342289

  r339891:
  cxgbe/iw_cxgbe: Install the socket upcall before calling soconnect to
  ensure that it always runs when soisconnected does.

  Submitted by: Krishnamraju Eraparaju @ Chelsio
  Sponsored by: Chelsio Communications

  r340063:
  cxgbe/iw_cxgbe: Suppress spurious "Unexpected streaming data ..."
  messages.

  Submitted by: Krishnamraju Eraparaju @ Chelsio
  Sponsored by: Chelsio Communications

  r342266:
  cxgbe/iw_cxgbe: Use DSGLs to write to card's memory when appropriate.
  Submitted by: Krishnamraju Eraparaju @ Chelsio
  Sponsored by: Chelsio Communications

  r342270:
  cxgbe/iw_cxgbe: Add a knob for testing that lets iWARP connections
  cycle through 4-tuples quickly.

  Submitted by: Krishnamraju Eraparaju @ Chelsio
  Sponsored by: Chelsio Communications

  r342272:
  cxgbe/iw_cxgbe: Use negative errno values when interfacing with
  linuxkpi/OFED.

  Submitted by: Krishnamraju Eraparaju @ Chelsio
  Sponsored by: Chelsio Communications

  r342288:
  cxgbe/iw_cxgbe: Do not terminate CTRx messages with \n.

  r342289:
  cxgbe/iw_cxgbe: Remove redundant CTRs from c4iw_alloc/c4iw_rdev_open.
  This information is readily available elsewhere.

  Sponsored by: Chelsio Communications
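A note on the r342272 change, for readers unfamiliar with the two
conventions: FreeBSD's in-kernel socket routines (sosetopt, sobind,
solisten, soconnect, soaccept) return 0 or a positive errno value,
while the linuxkpi/OFED code that iw_cxgbe plugs into expects the
Linux convention of 0 or a negative errno.  That is why the diff below
negates the return value at each call site.  A minimal sketch of the
pattern -- the helper name is hypothetical; sosetopt() and struct
sockopt are the real kernel interfaces:

    /* Hypothetical helper: returns 0 or -errno, as OFED callers expect. */
    static int
    example_set_nodelay(struct socket *so)
    {
        struct sockopt sopt;
        int on = 1;

        bzero(&sopt, sizeof(sopt));
        sopt.sopt_dir = SOPT_SET;
        sopt.sopt_level = IPPROTO_TCP;
        sopt.sopt_name = TCP_NODELAY;
        sopt.sopt_val = (caddr_t)&on;
        sopt.sopt_valsize = sizeof(on);
        sopt.sopt_td = NULL;

        /* sosetopt() itself gives 0 or +errno; negate for OFED. */
        return (-sosetopt(so, &sopt));
    }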
Modified:
  stable/11/sys/dev/cxgbe/iw_cxgbe/cm.c
  stable/11/sys/dev/cxgbe/iw_cxgbe/cq.c
  stable/11/sys/dev/cxgbe/iw_cxgbe/device.c
  stable/11/sys/dev/cxgbe/iw_cxgbe/iw_cxgbe.h
  stable/11/sys/dev/cxgbe/iw_cxgbe/mem.c
  stable/11/sys/dev/cxgbe/iw_cxgbe/provider.c
  stable/11/sys/dev/cxgbe/iw_cxgbe/qp.c
  stable/11/sys/dev/cxgbe/iw_cxgbe/t4.h
  stable/11/sys/dev/cxgbe/t4_main.c

Directory Properties:
  stable/11/   (props changed)

Modified: stable/11/sys/dev/cxgbe/iw_cxgbe/cm.c
==============================================================================
--- stable/11/sys/dev/cxgbe/iw_cxgbe/cm.c	Mon Apr 29 19:47:21 2019	(r346922)
+++ stable/11/sys/dev/cxgbe/iw_cxgbe/cm.c	Mon Apr 29 20:10:28 2019	(r346923)
@@ -172,7 +172,6 @@ static void process_newconn(struct c4iw_listen_ep *mas
 		free(__a, M_SONAME); \
 	} while (0)
 
-#ifdef KTR
 static char *states[] = {
 	"idle",
 	"listen",
@@ -188,7 +187,6 @@ static char *states[] = {
 	"dead",
 	NULL,
 };
-#endif
 
 static void deref_cm_id(struct c4iw_ep_common *epc)
 {
@@ -429,7 +427,7 @@ static void process_timeout(struct c4iw_ep *ep)
 		abort = 0;
 		break;
 	default:
-		CTR4(KTR_IW_CXGBE, "%s unexpected state ep %p tid %u state %u\n"
+		CTR4(KTR_IW_CXGBE, "%s unexpected state ep %p tid %u state %u"
 			, __func__, ep, ep->hwtid, ep->com.state);
 		abort = 0;
 	}
@@ -841,7 +839,7 @@ setiwsockopt(struct socket *so)
 	sopt.sopt_val = (caddr_t)&on;
 	sopt.sopt_valsize = sizeof on;
 	sopt.sopt_td = NULL;
-	rc = sosetopt(so, &sopt);
+	rc = -sosetopt(so, &sopt);
 	if (rc) {
 		log(LOG_ERR, "%s: can't set TCP_NODELAY on so %p (%d)\n",
 		    __func__, so, rc);
@@ -870,7 +868,9 @@ uninit_iwarp_socket(struct socket *so)
 static void
 process_data(struct c4iw_ep *ep)
 {
+	int ret = 0;
 	int disconnect = 0;
+	struct c4iw_qp_attributes attrs = {0};
 
 	CTR5(KTR_IW_CXGBE, "%s: so %p, ep %p, state %s, sbused %d", __func__,
 	    ep->com.so, ep, states[ep->com.state],
 	    sbused(&ep->com.so->so_rcv));
@@ -885,9 +885,16 @@ process_data(struct c4iw_ep *ep)
 		/* Refered in process_newconn() */
 		c4iw_put_ep(&ep->parent_ep->com);
 		break;
+	case FPDU_MODE:
+		MPASS(ep->com.qp != NULL);
+		attrs.next_state = C4IW_QP_STATE_TERMINATE;
+		ret = c4iw_modify_qp(ep->com.dev, ep->com.qp,
+				C4IW_QP_ATTR_NEXT_STATE, &attrs, 1);
+		if (ret != -EINPROGRESS)
+			disconnect = 1;
+		break;
 	default:
-		if (sbused(&ep->com.so->so_rcv))
-			log(LOG_ERR, "%s: Unexpected streaming data. ep %p, "
+		log(LOG_ERR, "%s: Unexpected streaming data. ep %p, "
 		    "state %d, so %p, so_state 0x%x, sbused %u\n", __func__,
 		    ep, ep->com.state, ep->com.so, ep->com.so->so_state,
 		    sbused(&ep->com.so->so_rcv));
@@ -952,6 +959,7 @@ process_newconn(struct c4iw_listen_ep *master_lep, str
 {
 	struct c4iw_listen_ep *real_lep = NULL;
 	struct c4iw_ep *new_ep = NULL;
+	struct sockaddr_in *remote = NULL;
 	int ret = 0;
 
 	MPASS(new_so != NULL);
@@ -996,6 +1004,20 @@ process_newconn(struct c4iw_listen_ep *master_lep, str
 	START_EP_TIMER(new_ep);
 	setiwsockopt(new_so);
+	ret = soaccept(new_so, (struct sockaddr **)&remote);
+	if (ret != 0) {
+		CTR4(KTR_IW_CXGBE,
+				"%s:listen sock:%p, new sock:%p, ret:%d",
+				__func__, master_lep->com.so, new_so, ret);
+		if (remote != NULL)
+			free(remote, M_SONAME);
+		uninit_iwarp_socket(new_so);
+		soclose(new_so);
+		c4iw_put_ep(&new_ep->com);
+		c4iw_put_ep(&real_lep->com);
+		return;
+	}
+	free(remote, M_SONAME);
 
 	/* MPA request might have been queued up on the socket already, so we
 	 * initialize the socket/upcall_handler under lock to prevent processing
@@ -1171,7 +1193,24 @@ process_socket_event(struct c4iw_ep *ep)
 	}
 
 	/* rx data */
-	process_data(ep);
+	if (sbused(&ep->com.so->so_rcv)) {
+		process_data(ep);
+		return;
+	}
+
+	/* Socket events for 'MPA Request Received' and 'Close Complete'
+	 * were already processed earlier in their previous events handlers.
+	 * Hence, these socket events are skipped.
+	 * And any other socket events must have been handled above.
+	 */
+	MPASS((ep->com.state == MPA_REQ_RCVD) || (ep->com.state == MORIBUND));
+
+	if ((ep->com.state != MPA_REQ_RCVD) && (ep->com.state != MORIBUND))
+		log(LOG_ERR, "%s: Unprocessed socket event so %p, "
+		    "so_state 0x%x, so_err %d, sb_state 0x%x, ep %p, ep_state %s\n",
+		    __func__, so, so->so_state, so->so_error, so->so_rcv.sb_state,
+		    ep, states[state]);
 }
 
 SYSCTL_NODE(_hw, OID_AUTO, iw_cxgbe, CTLFLAG_RD, 0, "iw_cxgbe driver parameters");
@@ -1232,6 +1271,18 @@ static int snd_win = 128 * 1024;
 SYSCTL_INT(_hw_iw_cxgbe, OID_AUTO, snd_win, CTLFLAG_RWTUN, &snd_win, 0,
     "TCP send window in bytes (default = 128KB)");
 
+int use_dsgl = 1;
+SYSCTL_INT(_hw_iw_cxgbe, OID_AUTO, use_dsgl, CTLFLAG_RWTUN, &use_dsgl, 0,
+		"Use DSGL for PBL/FastReg (default=1)");
+
+int inline_threshold = 128;
+SYSCTL_INT(_hw_iw_cxgbe, OID_AUTO, inline_threshold, CTLFLAG_RWTUN,
+		&inline_threshold, 0,
+		"inline vs dsgl threshold (default=128)");
+
+static int reuseaddr = 0;
+SYSCTL_INT(_hw_iw_cxgbe, OID_AUTO, reuseaddr, CTLFLAG_RWTUN, &reuseaddr, 0,
+		"Enable SO_REUSEADDR & SO_REUSEPORT socket options on all iWARP client connections (default = 0)");
+
 static void
 start_ep_timer(struct c4iw_ep *ep)
 {
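For reference, all three knobs above are CTLFLAG_RWTUN under the
hw.iw_cxgbe sysctl node, so they can be set as loader(8) tunables or
adjusted at runtime, e.g.:

    # sysctl hw.iw_cxgbe.use_dsgl=1
    # sysctl hw.iw_cxgbe.inline_threshold=128
    # sysctl hw.iw_cxgbe.reuseaddr=1

reuseaddr is the 4-tuple-cycling test knob from r342270; it applies
SO_REUSEADDR and SO_REUSEPORT to every iWARP client socket (see the
c4iw_sock_create() hunk further down) and defaults to off.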
@@ -1606,7 +1657,7 @@ send_abort(struct c4iw_ep *ep)
 	sopt.sopt_val = (caddr_t)&l;
 	sopt.sopt_valsize = sizeof l;
 	sopt.sopt_td = NULL;
-	rc = sosetopt(so, &sopt);
+	rc = -sosetopt(so, &sopt);
 	if (rc != 0) {
 		log(LOG_ERR, "%s: sosetopt(%p, linger = 0) failed with %d.\n",
 		    __func__, so, rc);
@@ -1624,6 +1675,7 @@ send_abort(struct c4iw_ep *ep)
 	 * handler(not yet implemented) of iw_cxgbe driver.
 	 */
 	release_ep_resources(ep);
+	ep->com.state = DEAD;
 	return (0);
 }
@@ -2263,7 +2315,7 @@ process_mpa_request(struct c4iw_ep *ep)
 			MPA_V2_IRD_ORD_MASK;
 		ep->ord = min_t(u32, ep->ord,
 			cur_max_read_depth(ep->com.dev));
-		CTR3(KTR_IW_CXGBE, "%s initiator ird %u ord %u\n",
+		CTR3(KTR_IW_CXGBE, "%s initiator ird %u ord %u",
 			__func__, ep->ird, ep->ord);
 		if (ntohs(mpa_v2_params->ird) & MPA_V2_PEER2PEER_MODEL)
 			if (peer2peer) {
@@ -2417,7 +2469,7 @@ int c4iw_accept_cr(struct iw_cm_id *cm_id, struct iw_c
 		ep->ird = 1;
 	}
 
-	CTR4(KTR_IW_CXGBE, "%s %d ird %d ord %d\n", __func__, __LINE__,
+	CTR4(KTR_IW_CXGBE, "%s %d ird %d ord %d", __func__, __LINE__,
 	    ep->ird, ep->ord);
 
 	ep->com.cm_id = cm_id;
@@ -2476,8 +2528,9 @@ static int
 c4iw_sock_create(struct sockaddr_storage *laddr, struct socket **so)
 {
 	int ret;
-	int size;
+	int size, on;
 	struct socket *sock = NULL;
+	struct sockopt sopt;
 
 	ret = sock_create_kern(laddr->ss_family,
 			SOCK_STREAM, IPPROTO_TCP, &sock);
@@ -2487,7 +2540,34 @@ c4iw_sock_create(struct sockaddr_storage *laddr, struc
 		return ret;
 	}
 
-	ret = sobind(sock, (struct sockaddr *)laddr, curthread);
+	if (reuseaddr) {
+		bzero(&sopt, sizeof(struct sockopt));
+		sopt.sopt_dir = SOPT_SET;
+		sopt.sopt_level = SOL_SOCKET;
+		sopt.sopt_name = SO_REUSEADDR;
+		on = 1;
+		sopt.sopt_val = &on;
+		sopt.sopt_valsize = sizeof(on);
+		ret = -sosetopt(sock, &sopt);
+		if (ret != 0) {
+			log(LOG_ERR, "%s: sosetopt(%p, SO_REUSEADDR) "
+			    "failed with %d.\n", __func__, sock, ret);
+		}
+		bzero(&sopt, sizeof(struct sockopt));
+		sopt.sopt_dir = SOPT_SET;
+		sopt.sopt_level = SOL_SOCKET;
+		sopt.sopt_name = SO_REUSEPORT;
+		on = 1;
+		sopt.sopt_val = &on;
+		sopt.sopt_valsize = sizeof(on);
+		ret = -sosetopt(sock, &sopt);
+		if (ret != 0) {
+			log(LOG_ERR, "%s: sosetopt(%p, SO_REUSEPORT) "
+			    "failed with %d.\n", __func__, sock, ret);
+		}
+	}
+
+	ret = -sobind(sock, (struct sockaddr *)laddr, curthread);
 	if (ret) {
 		CTR2(KTR_IW_CXGBE, "%s:Failed to bind socket. err %p",
 		    __func__, ret);
@@ -2592,22 +2672,24 @@ int c4iw_connect(struct iw_cm_id *cm_id, struct iw_cm_
 		goto fail;
 
 	setiwsockopt(ep->com.so);
+	init_iwarp_socket(ep->com.so, &ep->com);
 	err = -soconnect(ep->com.so, (struct sockaddr *)&ep->com.remote_addr,
 		ep->com.thread);
-	if (!err) {
-		init_iwarp_socket(ep->com.so, &ep->com);
-		goto out;
-	} else
+	if (err)
 		goto fail_free_so;
+	CTR2(KTR_IW_CXGBE, "%s:ccE, ep %p", __func__, ep);
+	return 0;
 
 fail_free_so:
+	uninit_iwarp_socket(ep->com.so);
+	ep->com.state = DEAD;
 	sock_release(ep->com.so);
fail:
 	deref_cm_id(&ep->com);
 	c4iw_put_ep(&ep->com);
 	ep = NULL;
out:
-	CTR2(KTR_IW_CXGBE, "%s:ccE ret:%d", __func__, err);
+	CTR2(KTR_IW_CXGBE, "%s:ccE Error %d", __func__, err);
 	return err;
 }
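The reordering in the c4iw_connect() hunk above is the heart of
r339891.  A sketch of the race it closes (illustrative only; the
helper is hypothetical, though soupcall_set(9) is what
init_iwarp_socket() uses internally): if soconnect() ran first, a fast
handshake could complete and soisconnected() could fire before the
upcall existed, so the "connected" notification was never delivered.

    static void
    example_connect_order(struct socket *so, struct sockaddr *sa,
        struct thread *td, void *arg)
    {
        /* Install the receive upcall before starting the handshake,
         * so that no socket state change can be missed. */
        SOCKBUF_LOCK(&so->so_rcv);
        soupcall_set(so, SO_RCV, c4iw_so_upcall, arg);
        SOCKBUF_UNLOCK(&so->so_rcv);

        (void)soconnect(so, sa, td);   /* any event is now observed */
    }

This is also why the failure path gained uninit_iwarp_socket(): after
a failed soconnect() the already-installed upcall has to be torn down
before sock_release().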
@@ -2669,7 +2751,7 @@ c4iw_create_listen(struct iw_cm_id *cm_id, int backlog
 		goto fail;
 	}
 
-	rc = solisten(lep->com.so, backlog, curthread);
+	rc = -solisten(lep->com.so, backlog, curthread);
 	if (rc) {
 		CTR3(KTR_IW_CXGBE, "%s:Failed to listen on sock:%p. err %d",
 		    __func__, lep->com.so, rc);

Modified: stable/11/sys/dev/cxgbe/iw_cxgbe/cq.c
==============================================================================
--- stable/11/sys/dev/cxgbe/iw_cxgbe/cq.c	Mon Apr 29 19:47:21 2019	(r346922)
+++ stable/11/sys/dev/cxgbe/iw_cxgbe/cq.c	Mon Apr 29 20:10:28 2019	(r346923)
@@ -669,7 +669,7 @@ proc_cqe:
 	BUG_ON(wq->sq.in_use <= 0 && wq->sq.in_use >= wq->sq.size);
 
 	wq->sq.cidx = (uint16_t)idx;
-	CTR2(KTR_IW_CXGBE, "%s completing sq idx %u\n",
+	CTR2(KTR_IW_CXGBE, "%s completing sq idx %u",
 	    __func__, wq->sq.cidx);
 	*cookie = wq->sq.sw_sq[wq->sq.cidx].wr_id;
 	t4_sq_consume(wq);

Modified: stable/11/sys/dev/cxgbe/iw_cxgbe/device.c
==============================================================================
--- stable/11/sys/dev/cxgbe/iw_cxgbe/device.c	Mon Apr 29 19:47:21 2019	(r346922)
+++ stable/11/sys/dev/cxgbe/iw_cxgbe/device.c	Mon Apr 29 20:10:28 2019	(r346923)
@@ -120,24 +120,7 @@ c4iw_rdev_open(struct c4iw_rdev *rdev)
 	rdev->qpmask = udb_density - 1;
 	rdev->cqshift = PAGE_SHIFT - sp->iq_s_qpp;
 	rdev->cqmask = ucq_density - 1;
-	CTR5(KTR_IW_CXGBE, "%s dev %s stag start 0x%0x size 0x%0x num stags %d",
-	    __func__, device_get_nameunit(sc->dev), sc->vres.stag.start,
-	    sc->vres.stag.size, c4iw_num_stags(rdev));
-	CTR5(KTR_IW_CXGBE, "%s pbl start 0x%0x size 0x%0x"
-	    " rq start 0x%0x size 0x%0x", __func__,
-	    sc->vres.pbl.start, sc->vres.pbl.size,
-	    sc->vres.rq.start, sc->vres.rq.size);
-	CTR5(KTR_IW_CXGBE, "%s:qp qid start %u size %u cq qid start %u size %u",
-	    __func__, sc->vres.qp.start, sc->vres.qp.size,
-	    sc->vres.cq.start, sc->vres.cq.size);
-	/*TODO
-	CTR5(KTR_IW_CXGBE, "%s udb %pR db_reg %p gts_reg %p"
-	"qpmask 0x%x cqmask 0x%x", __func__,
-	db_reg,gts_reg,rdev->qpmask, rdev->cqmask);
-	*/
-
 	if (c4iw_num_stags(rdev) == 0) {
 		rc = -EINVAL;
 		goto err1;
@@ -233,11 +216,6 @@ c4iw_alloc(struct adapter *sc)
 	iwsc->rdev.adap = sc;
 
 	/* init various hw-queue params based on lld info */
-	CTR3(KTR_IW_CXGBE, "%s: Ing. padding boundary is %d, "
-	    "egrsstatuspagesize = %d", __func__,
-	    sc->params.sge.pad_boundary,
-	    sc->params.sge.spg_len);
-
 	iwsc->rdev.hw_queue.t4_eq_status_entries =
 	    sc->params.sge.spg_len / EQ_ESIZE;
 	iwsc->rdev.hw_queue.t4_max_eq_size = 65520;
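Background on the CTRx cleanups scattered through these files
(r342288): CTRx() records go to the ktr(4) trace buffer, and each
record is printed on its own line by ktrdump(8) or ddb(4)'s "show
ktr", so a trailing \n in the format string only injects blank lines
into the trace output.  Hence the pattern throughout this commit:

    CTR2(KTR_IW_CXGBE, "%s completing sq idx %u\n", __func__, idx);

becomes

    CTR2(KTR_IW_CXGBE, "%s completing sq idx %u", __func__, idx);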
Modified: stable/11/sys/dev/cxgbe/iw_cxgbe/iw_cxgbe.h
==============================================================================
--- stable/11/sys/dev/cxgbe/iw_cxgbe/iw_cxgbe.h	Mon Apr 29 19:47:21 2019	(r346922)
+++ stable/11/sys/dev/cxgbe/iw_cxgbe/iw_cxgbe.h	Mon Apr 29 20:10:28 2019	(r346923)
@@ -68,6 +68,9 @@
 #define KTR_IW_CXGBE	KTR_SPARE3
 
 extern int c4iw_debug;
+extern int use_dsgl;
+extern int inline_threshold;
+
 #define PDBG(fmt, args...) \
 do { \
 	if (c4iw_debug) \

Modified: stable/11/sys/dev/cxgbe/iw_cxgbe/mem.c
==============================================================================
--- stable/11/sys/dev/cxgbe/iw_cxgbe/mem.c	Mon Apr 29 19:47:21 2019	(r346922)
+++ stable/11/sys/dev/cxgbe/iw_cxgbe/mem.c	Mon Apr 29 20:10:28 2019	(r346923)
@@ -43,9 +43,9 @@ __FBSDID("$FreeBSD$");
 #include
 #include "iw_cxgbe.h"
 
-int use_dsgl = 1;
 #define T4_ULPTX_MIN_IO 32
 #define C4IW_MAX_INLINE_SIZE 96
+#define T4_ULPTX_MAX_DMA 1024
 
 static int
 mr_exceeds_hw_limits(struct c4iw_dev *dev, u64 length)
@@ -55,10 +55,60 @@ mr_exceeds_hw_limits(struct c4iw_dev *dev, u64 length)
 }
 
 static int
-write_adapter_mem(struct c4iw_rdev *rdev, u32 addr, u32 len, void *data)
+_c4iw_write_mem_dma_aligned(struct c4iw_rdev *rdev, u32 addr, u32 len,
+		void *data, int wait)
 {
 	struct adapter *sc = rdev->adap;
 	struct ulp_mem_io *ulpmc;
+	struct ulptx_sgl *sgl;
+	u8 wr_len;
+	int ret = 0;
+	struct c4iw_wr_wait wr_wait;
+	struct wrqe *wr;
+
+	addr &= 0x7FFFFFF;
+
+	if (wait)
+		c4iw_init_wr_wait(&wr_wait);
+	wr_len = roundup(sizeof *ulpmc + sizeof *sgl, 16);
+
+	wr = alloc_wrqe(wr_len, &sc->sge.ctrlq[0]);
+	if (wr == NULL)
+		return -ENOMEM;
+	ulpmc = wrtod(wr);
+
+	memset(ulpmc, 0, wr_len);
+	INIT_ULPTX_WR(ulpmc, wr_len, 0, 0);
+	ulpmc->wr.wr_hi = cpu_to_be32(V_FW_WR_OP(FW_ULPTX_WR) |
+			(wait ? F_FW_WR_COMPL : 0));
+	ulpmc->wr.wr_lo = wait ? (u64)(unsigned long)&wr_wait : 0;
+	ulpmc->wr.wr_mid = cpu_to_be32(V_FW_WR_LEN16(DIV_ROUND_UP(wr_len, 16)));
+	ulpmc->cmd = cpu_to_be32(V_ULPTX_CMD(ULP_TX_MEM_WRITE) |
+			V_T5_ULP_MEMIO_ORDER(1) |
+			V_T5_ULP_MEMIO_FID(sc->sge.ofld_rxq[0].iq.abs_id));
+	ulpmc->dlen = cpu_to_be32(V_ULP_MEMIO_DATA_LEN(len>>5));
+	ulpmc->len16 = cpu_to_be32(DIV_ROUND_UP(wr_len-sizeof(ulpmc->wr), 16));
+	ulpmc->lock_addr = cpu_to_be32(V_ULP_MEMIO_ADDR(addr));
+
+	sgl = (struct ulptx_sgl *)(ulpmc + 1);
+	sgl->cmd_nsge = cpu_to_be32(V_ULPTX_CMD(ULP_TX_SC_DSGL) |
+			V_ULPTX_NSGE(1));
+	sgl->len0 = cpu_to_be32(len);
+	sgl->addr0 = cpu_to_be64((u64)data);
+
+	t4_wrq_tx(sc, wr);
+
+	if (wait)
+		ret = c4iw_wait_for_reply(rdev, &wr_wait, 0, 0, NULL, __func__);
+	return ret;
+}
+
+
+static int
+_c4iw_write_mem_inline(struct c4iw_rdev *rdev, u32 addr, u32 len, void *data)
+{
+	struct adapter *sc = rdev->adap;
+	struct ulp_mem_io *ulpmc;
 	struct ulptx_idata *ulpsc;
 	u8 wr_len, *to_dp, *from_dp;
 	int copy_len, num_wqe, i, ret = 0;
@@ -82,7 +132,7 @@ write_adapter_mem(struct c4iw_rdev *rdev, u32 addr, u3
 
 		wr = alloc_wrqe(wr_len, &sc->sge.ctrlq[0]);
 		if (wr == NULL)
-			return (0);
+			return -ENOMEM;
 		ulpmc = wrtod(wr);
 
 		memset(ulpmc, 0, wr_len);
@@ -91,7 +141,8 @@ write_adapter_mem(struct c4iw_rdev *rdev, u32 addr, u3
 		if (i == (num_wqe-1)) {
 			ulpmc->wr.wr_hi = cpu_to_be32(V_FW_WR_OP(FW_ULPTX_WR) |
 			    F_FW_WR_COMPL);
-			ulpmc->wr.wr_lo = (__force __be64)(unsigned long) &wr_wait;
+			ulpmc->wr.wr_lo =
+			    (__force __be64)(unsigned long) &wr_wait;
 		} else
 			ulpmc->wr.wr_hi = cpu_to_be32(V_FW_WR_OP(FW_ULPTX_WR));
 		ulpmc->wr.wr_mid = cpu_to_be32(
@@ -124,6 +175,69 @@ write_adapter_mem(struct c4iw_rdev *rdev, u32 addr, u3
 	ret = c4iw_wait_for_reply(rdev, &wr_wait, 0, 0, NULL, __func__);
 	return ret;
 }
+
+static int
+_c4iw_write_mem_dma(struct c4iw_rdev *rdev, u32 addr, u32 len, void *data)
+{
+	struct c4iw_dev *rhp = rdev_to_c4iw_dev(rdev);
+	u32 remain = len;
+	u32 dmalen;
+	int ret = 0;
+	dma_addr_t daddr;
+	dma_addr_t save;
+
+	daddr = dma_map_single(rhp->ibdev.dma_device, data, len,
+			DMA_TO_DEVICE);
+	if (dma_mapping_error(rhp->ibdev.dma_device, daddr))
+		return -1;
+	save = daddr;
+
+	while (remain > inline_threshold) {
+		if (remain < T4_ULPTX_MAX_DMA) {
+			if (remain & ~T4_ULPTX_MIN_IO)
+				dmalen = remain & ~(T4_ULPTX_MIN_IO-1);
+			else
+				dmalen = remain;
+		} else
+			dmalen = T4_ULPTX_MAX_DMA;
+		remain -= dmalen;
+		ret = _c4iw_write_mem_dma_aligned(rdev, addr, dmalen,
+				(void *)daddr, !remain);
+		if (ret)
+			goto out;
+		addr += dmalen >> 5;
+		data = (u64 *)data + dmalen;
+		daddr = daddr + dmalen;
+	}
+	if (remain)
+		ret = _c4iw_write_mem_inline(rdev, addr, remain, data);
+out:
+	dma_unmap_single(rhp->ibdev.dma_device, save, len, DMA_TO_DEVICE);
+	return ret;
+}
+
+/*
+ * write len bytes of data into addr (32B aligned address)
+ * If data is NULL, clear len byte of memory to zero.
+ */
+static int
+write_adapter_mem(struct c4iw_rdev *rdev, u32 addr, u32 len,
+		void *data)
+{
+	if (rdev->adap->params.ulptx_memwrite_dsgl && use_dsgl) {
+		if (len > inline_threshold) {
+			if (_c4iw_write_mem_dma(rdev, addr, len, data)) {
+				log(LOG_ERR, "%s: dma map "
+				    "failure (non fatal)\n", __func__);
+				return _c4iw_write_mem_inline(rdev, addr,
+						len, data);
+			} else
+				return 0;
+		} else
+			return _c4iw_write_mem_inline(rdev, addr, len, data);
+	} else
+		return _c4iw_write_mem_inline(rdev, addr, len, data);
+}
+
 /*
  * Build and write a TPT entry.
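To make the chunking and addressing in _c4iw_write_mem_dma() concrete:
ULP_TX memory-write addresses are expressed in 32-byte units, which is
why the address advances by dmalen >> 5 after each chunk, and why
chunks are trimmed to multiples of T4_ULPTX_MIN_IO (32 B) and capped
at T4_ULPTX_MAX_DMA (1024 B), with any sub-32-byte tail handed to the
inline path.  An illustrative, userland-compilable sketch of the loop
(it simplifies the driver's rounding guard slightly):

    #include <stdio.h>

    #define T4_ULPTX_MIN_IO  32
    #define T4_ULPTX_MAX_DMA 1024

    int
    main(void)
    {
        unsigned int remain = 1100;     /* example transfer size */
        unsigned int inline_threshold = 128, addr = 0, dmalen;

        while (remain > inline_threshold) {
            if (remain < T4_ULPTX_MAX_DMA)
                dmalen = remain & ~(T4_ULPTX_MIN_IO - 1);
            else
                dmalen = T4_ULPTX_MAX_DMA;
            remain -= dmalen;
            printf("DSGL chunk: %u bytes at 32B-unit addr %u\n",
                dmalen, addr);
            addr += dmalen >> 5;        /* addresses count 32B units */
        }
        if (remain)
            printf("inline tail: %u bytes\n", remain);
        return (0);
    }

For a 1100-byte write this emits one 1024-byte DSGL chunk and leaves a
76-byte tail for _c4iw_write_mem_inline(), matching the
len > inline_threshold dispatch in write_adapter_mem().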
+	 */
 
Modified: stable/11/sys/dev/cxgbe/iw_cxgbe/provider.c
==============================================================================
--- stable/11/sys/dev/cxgbe/iw_cxgbe/provider.c	Mon Apr 29 19:47:21 2019	(r346922)
+++ stable/11/sys/dev/cxgbe/iw_cxgbe/provider.c	Mon Apr 29 20:10:28 2019	(r346923)
@@ -44,7 +44,7 @@ __FBSDID("$FreeBSD$");
 #include "iw_cxgbe.h"
 #include "user.h"
 
-extern int use_dsgl;
+
 static int fastreg_support = 1;
 module_param(fastreg_support, int, 0644);
 MODULE_PARM_DESC(fastreg_support, "Advertise fastreg support (default = 1)");

Modified: stable/11/sys/dev/cxgbe/iw_cxgbe/qp.c
==============================================================================
--- stable/11/sys/dev/cxgbe/iw_cxgbe/qp.c	Mon Apr 29 19:47:21 2019	(r346922)
+++ stable/11/sys/dev/cxgbe/iw_cxgbe/qp.c	Mon Apr 29 20:10:28 2019	(r346923)
@@ -63,7 +63,7 @@ struct cpl_set_tcb_rpl;
 #include "iw_cxgbe.h"
 #include "user.h"
 
-extern int use_dsgl;
+
 static int creds(struct toepcb *toep, struct inpcb *inp, size_t wrsize);
 static int max_fr_immd = T4_MAX_FR_IMMD;//SYSCTL parameter later...
 
@@ -574,7 +574,7 @@ static void free_qp_work(struct work_struct *work)
 	ucontext = qhp->ucontext;
 	rhp = qhp->rhp;
 
-	CTR3(KTR_IW_CXGBE, "%s qhp %p ucontext %p\n", __func__,
+	CTR3(KTR_IW_CXGBE, "%s qhp %p ucontext %p", __func__,
 	    qhp, ucontext);
 	destroy_qp(&rhp->rdev, &qhp->wq,
 		   ucontext ? &ucontext->uctx : &rhp->rdev.uctx);
@@ -1473,6 +1473,22 @@ int c4iw_modify_qp(struct c4iw_dev *rhp, struct c4iw_q
 	if (qhp->attr.state == attrs->next_state)
 		goto out;
 
+	/* Return EINPROGRESS if QP is already in transition state.
+	 * Eg: CLOSING->IDLE transition or *->ERROR transition.
+	 * This can happen while connection is switching (due to rdma_fini)
+	 * from iWARP/RDDP to TOE mode and any inflight RDMA RX data will
+	 * reach TOE driver -> TCP stack -> iWARP driver. In this way
+	 * iWARP driver keeps receiving inflight RDMA RX data until socket
+	 * is closed or aborted. And if iWARP CM is in FPDU state, then
+	 * it tries to put QP in TERM state and disconnects endpoint.
+	 * But as QP is already in transition state, this event is ignored.
+	 */
+	if ((qhp->attr.state >= C4IW_QP_STATE_ERROR) &&
+	    (attrs->next_state == C4IW_QP_STATE_TERMINATE)) {
+		ret = -EINPROGRESS;
+		goto out;
+	}
+
 	switch (qhp->attr.state) {
 	case C4IW_QP_STATE_IDLE:
 		switch (attrs->next_state) {
@@ -1860,10 +1876,10 @@ c4iw_create_qp(struct ib_pd *pd, struct ib_qp_init_att
 	qhp->ibqp.qp_num = qhp->wq.sq.qid;
 	init_timer(&(qhp->timer));
 
-	CTR5(KTR_IW_CXGBE, "%s sq id %u size %u memsize %zu num_entries %u\n",
+	CTR5(KTR_IW_CXGBE, "%s sq id %u size %u memsize %zu num_entries %u",
 	    __func__, qhp->wq.sq.qid, qhp->wq.sq.size, qhp->wq.sq.memsize,
 	    attrs->cap.max_send_wr);
-	CTR5(KTR_IW_CXGBE, "%s rq id %u size %u memsize %zu num_entries %u\n",
+	CTR5(KTR_IW_CXGBE, "%s rq id %u size %u memsize %zu num_entries %u",
 	    __func__, qhp->wq.rq.qid, qhp->wq.rq.size, qhp->wq.rq.memsize,
 	    attrs->cap.max_recv_wr);
 	return &qhp->ibqp;
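How the qp.c guard above ties back to the cm.c change earlier in this
commit: once the QP is already in a tearing-down state (at or beyond
C4IW_QP_STATE_ERROR in the driver's state enum), a late TERMINATE
request is answered with -EINPROGRESS rather than applied, and the new
FPDU_MODE case in process_data() uses exactly that return value to
avoid issuing a second disconnect:

    attrs.next_state = C4IW_QP_STATE_TERMINATE;
    ret = c4iw_modify_qp(ep->com.dev, ep->com.qp,
        C4IW_QP_ATTR_NEXT_STATE, &attrs, 1);
    if (ret != -EINPROGRESS)
        disconnect = 1;     /* QP was not already transitioning */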
Modified: stable/11/sys/dev/cxgbe/iw_cxgbe/t4.h
==============================================================================
--- stable/11/sys/dev/cxgbe/iw_cxgbe/t4.h	Mon Apr 29 19:47:21 2019	(r346922)
+++ stable/11/sys/dev/cxgbe/iw_cxgbe/t4.h	Mon Apr 29 20:10:28 2019	(r346923)
@@ -488,13 +488,13 @@ t4_ring_sq_db(struct t4_wq *wq, u16 inc, union t4_wr *
 	/* Flush host queue memory writes. */
 	wmb();
 	if (wc && inc == 1 && wq->sq.bar2_qid == 0 && wqe) {
-		CTR2(KTR_IW_CXGBE, "%s: WC wq->sq.pidx = %d\n",
+		CTR2(KTR_IW_CXGBE, "%s: WC wq->sq.pidx = %d",
 		    __func__, wq->sq.pidx);
 		pio_copy((u64 __iomem *)
 		    ((u64)wq->sq.bar2_va + SGE_UDB_WCDOORBELL), (u64 *)wqe);
 	} else {
-		CTR2(KTR_IW_CXGBE, "%s: DB wq->sq.pidx = %d\n",
+		CTR2(KTR_IW_CXGBE, "%s: DB wq->sq.pidx = %d",
 		    __func__, wq->sq.pidx);
 		writel(V_PIDX_T5(inc) | V_QID(wq->sq.bar2_qid),
 		    (void __iomem *)((u64)wq->sq.bar2_va +
@@ -513,12 +513,12 @@ t4_ring_rq_db(struct t4_wq *wq, u16 inc, union t4_recv
 	/* Flush host queue memory writes. */
 	wmb();
 	if (wc && inc == 1 && wq->rq.bar2_qid == 0 && wqe) {
-		CTR2(KTR_IW_CXGBE, "%s: WC wq->rq.pidx = %d\n",
+		CTR2(KTR_IW_CXGBE, "%s: WC wq->rq.pidx = %d",
 		    __func__, wq->rq.pidx);
 		pio_copy((u64 __iomem *)((u64)wq->rq.bar2_va +
 		    SGE_UDB_WCDOORBELL), (u64 *)wqe);
 	} else {
-		CTR2(KTR_IW_CXGBE, "%s: DB wq->rq.pidx = %d\n",
+		CTR2(KTR_IW_CXGBE, "%s: DB wq->rq.pidx = %d",
 		    __func__, wq->rq.pidx);
 		writel(V_PIDX_T5(inc) | V_QID(wq->rq.bar2_qid),
 		    (void __iomem *)((u64)wq->rq.bar2_va +
@@ -602,7 +602,7 @@ static inline void t4_swcq_produce(struct t4_cq *cq)
 {
 	cq->sw_in_use++;
 	if (cq->sw_in_use == cq->size) {
-		CTR2(KTR_IW_CXGBE, "%s cxgb4 sw cq overflow cqid %u\n",
+		CTR2(KTR_IW_CXGBE, "%s cxgb4 sw cq overflow cqid %u",
 		    __func__, cq->cqid);
 		cq->error = 1;
 		BUG_ON(1);
@@ -674,7 +674,7 @@ static inline int t4_next_hw_cqe(struct t4_cq *cq, str
 static inline struct t4_cqe *t4_next_sw_cqe(struct t4_cq *cq)
 {
 	if (cq->sw_in_use == cq->size) {
-		CTR2(KTR_IW_CXGBE, "%s cxgb4 sw cq overflow cqid %u\n",
+		CTR2(KTR_IW_CXGBE, "%s cxgb4 sw cq overflow cqid %u",
 		    __func__, cq->cqid);
 		cq->error = 1;
 		BUG_ON(1);

Modified: stable/11/sys/dev/cxgbe/t4_main.c
==============================================================================
--- stable/11/sys/dev/cxgbe/t4_main.c	Mon Apr 29 19:47:21 2019	(r346922)
+++ stable/11/sys/dev/cxgbe/t4_main.c	Mon Apr 29 20:10:28 2019	(r346923)
@@ -3995,6 +3995,18 @@ get_params__post_init(struct adapter *sc)
 	else
 		sc->params.filter2_wr_support = 0;
 
+	/*
+	 * Find out whether we're allowed to use the ULPTX MEMWRITE DSGL.
+	 * This is queried separately for the same reason as other params above.
+	 */
+	param[0] = FW_PARAM_DEV(ULPTX_MEMWRITE_DSGL);
+	val[0] = 0;
+	rc = -t4_query_params(sc, sc->mbox, sc->pf, 0, 1, param, val);
+	if (rc == 0)
+		sc->params.ulptx_memwrite_dsgl = val[0] != 0;
+	else
+		sc->params.ulptx_memwrite_dsgl = false;
+
 	/* get capabilites */
 	bzero(&caps, sizeof(caps));
 	caps.op_to_write = htobe32(V_FW_CMD_OP(FW_CAPS_CONFIG_CMD) |