From owner-p4-projects@FreeBSD.ORG Sat Jun 27 12:36:47 2009
Return-Path:
Delivered-To: p4-projects@freebsd.org
Received: by hub.freebsd.org (Postfix, from userid 32767)
	id D87BC1065670; Sat, 27 Jun 2009 12:36:46 +0000 (UTC)
Delivered-To: perforce@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34])
	by hub.freebsd.org (Postfix) with ESMTP id 968CC106564A
	for ; Sat, 27 Jun 2009 12:36:46 +0000 (UTC)
	(envelope-from zec@freebsd.org)
Received: from xaqua.tel.fer.hr (xaqua.tel.fer.hr [161.53.19.25])
	by mx1.freebsd.org (Postfix) with ESMTP id 2B16C8FC12
	for ; Sat, 27 Jun 2009 12:36:45 +0000 (UTC)
	(envelope-from zec@freebsd.org)
Received: by xaqua.tel.fer.hr (Postfix, from userid 20006)
	id B7D4A9B645; Sat, 27 Jun 2009 14:14:34 +0200 (CEST)
X-Spam-Checker-Version: SpamAssassin 3.1.7 (2006-10-05) on xaqua.tel.fer.hr
X-Spam-Level:
X-Spam-Status: No, score=-1.4 required=5.0 tests=AWL autolearn=unavailable
	version=3.1.7
Received: from localhost (imunes.tel.fer.hr [161.53.19.8])
	by xaqua.tel.fer.hr (Postfix) with ESMTP id 6FE4A9B646;
	Sat, 27 Jun 2009 14:14:10 +0200 (CEST)
From: Marko Zec
To: Julian Elischer
Date: Sat, 27 Jun 2009 14:14:09 +0200
User-Agent: KMail/1.9.10
References: <200906261413.n5QED9j7023013@repoman.freebsd.org>
	<4A4541F5.1050301@elischer.org>
In-Reply-To: <4A4541F5.1050301@elischer.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 7bit
Content-Disposition: inline
Message-Id: <200906271414.09094.zec@freebsd.org>
Cc: Perforce Change Reviews
Subject: Re: PERFORCE change 165251 for review
X-BeenThere: p4-projects@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: p4 projects tree changes
X-List-Received-Date: Sat, 27 Jun 2009 12:36:47 -0000

On Friday 26 June 2009 23:47:33 Julian Elischer wrote:
> Marko Zec wrote:
> > http://perforce.freebsd.org/chv.cgi?CH=165251
> >
> > Change 165251 by zec@zec_amdx4
> > on 2009/06/26 14:13:00
> >
> > 	Allow for rpc.statd and rpc.lockd to be started, but
> > 	without doing any functional testing.  Introduce a lot of
> > 	curvnet recursions triggered by the above daemons that
> > 	have to be looked into and resolved.
> >
> > Affected files ...
> >
> > .. //depot/projects/vimage-commit2/src/sys/rpc/clnt_dg.c#6 edit
> > .. //depot/projects/vimage-commit2/src/sys/rpc/svc_dg.c#4 edit
> >
> > Differences ...
> >
> > ==== //depot/projects/vimage-commit2/src/sys/rpc/clnt_dg.c#6 (text+ko) ====
> >
> > @@ -56,6 +56,7 @@
> >  #include
> >  #include
> >  #include
> > +#include
> >
> >  #include
> >  #include
> >
> > @@ -197,11 +198,14 @@
> >  		return (NULL);
> >  	}
> >
> > +	CURVNET_SET(so->so_vnet);
> >  	if (!__rpc_socket2sockinfo(so, &si)) {
> >  		rpc_createerr.cf_stat = RPC_TLIERROR;
> >  		rpc_createerr.cf_error.re_errno = 0;
> > +		CURVNET_RESTORE();
> >  		return (NULL);
> >  	}
> > +	CURVNET_RESTORE();
> >
> >  	/*
> >  	 * Find the receive and the send size
> >
> > ==== //depot/projects/vimage-commit2/src/sys/rpc/svc_dg.c#4 (text+ko) ====
> >
> > @@ -56,6 +56,7 @@
> >  #include
> >  #include
> >  #include
> > +#include
> >
> >  #include
> >
> > @@ -101,8 +102,10 @@
> >  	struct sockaddr* sa;
> >  	int error;
> >
> > +	CURVNET_SET(so->so_vnet);
> >  	if (!__rpc_socket2sockinfo(so, &si)) {
> >  		printf(svc_dg_str, svc_dg_err1);
> > +		CURVNET_RESTORE();
> >  		return (NULL);
> >  	}
> >  	/*
> >
> > @@ -112,6 +115,7 @@
> >  	recvsize = __rpc_get_t_size(si.si_af, si.si_proto, (int)recvsize);
> >  	if ((sendsize == 0) || (recvsize == 0)) {
> >  		printf(svc_dg_str, svc_dg_err2);
> > +		CURVNET_RESTORE();
> >  		return (NULL);
> >  	}
> >
> > @@ -142,6 +146,7 @@
> >  	if (xprt) {
> >  		svc_xprt_free(xprt);
> >  	}
> > +	CURVNET_RESTORE();
> >  	return (NULL);
> >  }
>
> While leaving all your virtualization clues in place, can we make it so
> that the nfs code always works on vnet0?
>
> I put it to you that NFS itself should be virtualized as a separate
> major group from vnet.. but until that is done, use vnet0.
Well, in this particular case using CURVNET_SET(vnet0) instead of
CURVNET_SET(so->so_vnet) would be wrong, because so->so_vnet has already
been set, possibly to vnet0, possibly to another vnet.  I think we should
use some other mechanism to enforce (if we want to) that NFS export /
mount capabilities are available only to vnet0, possibly using priv().
I.e., allowing mount_nfs to be executed from within a non-default vnet
while hardcoding NFS to always operate on vnet0 would clearly be more
dangerous than the current model, which does no special casing as to
which vnet a particular NFS mount / export lives in.

Marko