Date: Mon, 27 Jun 2011 09:54:00 -0600 (MDT)
From: Dennis Glatting <freebsd@penx.com>
To: Damien Fleuriot
Cc: freebsd-questions@freebsd.org
Subject: Re: Using a "special" proxy for ports

On Mon, 27 Jun 2011, Damien Fleuriot wrote:

> On 6/27/11 4:27 PM, Dennis Glatting wrote:
>> On Mon, 27 Jun 2011, Damien Fleuriot wrote:
>>> On 6/27/11 4:52 AM, Dennis Glatting wrote:
>>>>
>>>> I have a requirement where I need to archive the ports used across
>>>> twenty hosts for a year or more. I've decided to do this using Squid,
>>>> taking advantage of Squid's cache when updating common ports across
>>>> those hosts.
>>>>
>>>> (BTW, at another site I used rsync to sync /usr/ports/distfiles across
>>>> the hosts to a local master site, then pointed _MASTER_SITES_DEFAULT
>>>> in make.conf at an FTP server on the local site. That method works
>>>> when the distfile is already cached; however, if the file isn't in the
>>>> cache and I simultaneously install the port across ten hosts, it is
>>>> fetched ten times. Sigh.)
>>>>
>>>> I have a Squid proxy installed that isn't meant for
>>>> every-day/every-user use and requires authentication. (Users either go
>>>> through another Squid proxy or direct.) The special Squid proxy works.
>>>> No surprise there. Authentication works. No surprise there.
>>>>
>>>> What I need is a method to embed a proxy specification for fetch into
>>>> make.conf. Setting the environment variable HTTP_PROXY from the login
>>>> shell /is not/ preferred because the account is used by different
>>>> administrators, I don't want the special proxy accidentally polluted
>>>> with non-port stuff, and it would only create confusion.
>>>>
>>>> Setting http_proxy in make.conf does not work. .netrc doesn't appear
>>>> to be a viable method (if it did, I could specify FETCH_ARGS in
>>>> make.conf).
>>>
>>> What about using an NFS share for /usr/ports/distfiles ?
>>
>> Many of these servers provide network/system services across a WAN. If
>> a link goes down or is congested, NFS may hang them all. NFS also poses
>> certain security challenges.
>
> What about using a SSHFS share for /usr/ports/distfiles ?

I don't know much about that file system and will have to look into it.
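From a quick read of the man pages, the mount itself looks simple enough;
roughly something like this, assuming the sysutils/fusefs-kmod and
sysutils/fusefs-sshfs ports are installed (the account and host names
below are placeholders, not anything I have actually set up):

    # load the FUSE kernel module shipped by sysutils/fusefs-kmod
    kldload fuse
    # mount the master's distfiles over SSH; "builder@distmaster" stands
    # in for whichever account/host actually holds the cache
    sshfs -o allow_other,reconnect \
        builder@distmaster:/usr/ports/distfiles /usr/ports/distfiles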
I have had problems with FUSE code as recently as last week
(specifically, with very large files).

How does SSHFS cope with multiple systems simultaneously downloading and
caching ports? I assume much the same as any file system where there is a
reasonable risk of content corruption (e.g., one of the downloads aborts,
leaving a partial file, or a lack of file locking lets multiple processes
write to the same file simultaneously, with unpredictable content).

Many of my servers provide network/system services over a dodgy AT&T MPLS
link. As such, the servers must be as autonomous as possible. With the
_MASTER_SITES_DEFAULT technique I used at another site, if my site-local
FTP server is unavailable then fetch does the normal thing (i.e., it
fails over to the next site in the list).

The compromise with a proxy technique is to disable the proxy
specification when there is a network problem. That works because I have
three independent Internet exit points across my WAN, tied together with
BGP using local preference.
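One way I'm considering to embed the proxy in make.conf, without touching
the login shell's HTTP_PROXY, is FETCH_ENV, which the ports framework
passes into fetch(1)'s environment. A sketch (the proxy host, port, and
credentials are placeholders, and the exact HTTP_PROXY_AUTH format should
be double-checked against fetch(3)):

    # /etc/make.conf
    # Route port distfile fetches through the special Squid instance.
    # Commenting these out falls back to direct fetching when the
    # proxy, or the link to it, is down.
    FETCH_ENV=  HTTP_PROXY=http://squid.example.net:3128 \
                HTTP_PROXY_AUTH=basic:*:portsuser:portspass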