Date:      Tue, 28 May 2013 22:26:09 +0200
From:      Matthias Andree <matthias.andree@gmx.de>
To:        freebsd-ports@freebsd.org
Subject:   Re: The vim port needs a refresh
Message-ID:  <51A512E1.1060406@gmx.de>
In-Reply-To: <20130528195137.312c58d9@bsd64.grem.de>
References:  <20130524212318.B967FE6739@smtp.hushmail.com> <op.wxsmqsh834t2sn@markf.office.supranet.net> <51A4ADCC.4070204@marino.st> <20130528151600.4eb6f028@gumby.homeunix.com> <20130528195137.312c58d9@bsd64.grem.de>

On 28.05.2013 19:51, Michael Gmelin wrote:
> On Tue, 28 May 2013 15:16:00 +0100
> RW <rwmaillists@googlemail.com> wrote:
> 
>> On Tue, 28 May 2013 15:14:52 +0200
>> John Marino wrote:
>>
>>> All
>>> patches only take 74 seconds to download[2], so there is no sympathy
>>> for your obvious single-data-point anecdote, 
>>
>> Well, at the point you provided one data point, there was only one
>> data point. And it was like pulling teeth to get you to eliminate the
>> alternative explanations. Was it really too much to ask that you
>> provide some actual evidence? 
>>
>>> you're clearly doing
>>> something wrong.  You need to stop complaining and start thinking about
>>> folks with slow connections[3] who also rebuild Vim frequently. 
>>
>> Don't make things up. I never said anything about frequent rebuilds;
>> the patches all get redownloaded on the next rebuild. 
> 
> The real issue is not the number of patches, but the fact that every
> single patch is downloaded by invoking the fetch(1) command, creating
> lots of overhead not limited to the fetch command itself. The ports
> system wasn't designed for such a number of distfiles in a single
> port, I guess.
> 
> I just timed fetching the patches through ports vs. fetching over
> HTTP/1.1 using ftp/curl vs. calling fetch directly. The Vim tarball was
> already downloaded, so this really measures just the patches
> (downloading 6 MB is barely noticeable on a fast line anyway). It's a
> slow machine on a fast line.
> 
> Fetch:
> [user@server /usr/ports/editors/vim]$ time sudo make fetch
> ....
> real    4m57.327s
> user    0m17.010s
> sys     0m39.588s
> 
> Curl:
> [user@server /tmp]$ longcurlcommandline 
> ....
> real    0m15.291s
> user    0m0.026s
> sys     0m0.272s
> 
> Fetch on the command line (after initial make fetch, so this is only
> measuring transmission of the files):
> cd /usr/ports/editors/distfiles
> time for name in 7.3.*; do
>   fetch http://artfiles.org/vim.org/patches/7.3/$name
> done
> ....
> real    1m25.329s
> user    0m0.660s
> sys     0m3.174s
> 
> So just the fact that we're invoking fetch for every file costs us about
> one minute - I assume the time lost is much bigger on a slow line with
> high latency. The remaining 3.5 minutes are spent somewhere in the
> ports infrastructure and clearly depend on the performance of the
> machine used. For comparison, I timed "make fetch" on a reasonably fast
> server (good I/O, fast datacenter connection); make fetch still took
> about 120 seconds(!).
> 
> So the bottom line is:
> - Using HTTP/1.1 and keep-alive could save a lot of time
> - The ports infrastructure creates a lot of overhead per patch file
> 
> Maybe there's something we can do to improve the situation.

Probably.

On the fetching side, we have:

- /usr/src/usr.sbin/portsnap/phttpget/phttpget.c & /usr/libexec/phttpget

- the possibility to specify multiple URLs on fetch(1)'s command line

- the xargs command to assemble command lines with a decent number of
URL arguments

Connection setup costs a considerable amount of time per download,
especially with FTP, so batching many URLs into fewer invocations
should pay off.
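The batching idea above could be sketched roughly like this (a hedged
sketch only: the mirror URL and patch numbers are placeholders, and
"echo" stands in for the real fetch so the example runs without network
access):

```shell
# Hypothetical mirror and patch list -- placeholders, not taken from the port.
MIRROR="http://artfiles.org/vim.org/patches/7.3"
urls=$(for n in 001 002 003; do printf '%s/7.3.%s\n' "$MIRROR" "$n"; done)

# One process per file -- roughly what the ports framework does today
# ("echo" stands in for the real fetch command):
per_file=$(echo "$urls" | while read -r u; do echo "fetch $u"; done)

# Batched: fetch(1) accepts several URLs per invocation, so xargs can
# amortize process startup (and, with a keep-alive-capable client,
# connection setup) across the whole list:
batched=$(echo "$urls" | xargs echo fetch)

echo "$per_file"
echo "$batched"
```

With 373 patch files instead of 3, the per-file variant forks 373
processes where the batched variant could get by with one or a handful.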


On the URL list generation side, we run an excessive number of external
commands from shell scripts; try "make -nC /usr/ports/editors/vim
fetch-url-list-int" to see the commands.  I suppose fewer external
commands and more make-internal processing could help quite a bit.
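The cost of spawning an external command per file can be illustrated at
the shell level (hedged sketch; the patch names and mirror below are
placeholders, not the port's real PATCHFILES list):

```shell
# Placeholder patch list and mirror -- assumptions for illustration only.
patches="7.3.001 7.3.002 7.3.003"
site="http://artfiles.org/vim.org/patches/7.3/"

# One external sed process per file -- the pattern the generated
# fetch-url-list-int commands rely on:
slow=$(for p in $patches; do echo "$p" | sed "s|^|$site|"; done)

# Shell-internal expansion, no fork/exec per file, same result:
fast=$(for p in $patches; do echo "${site}${p}"; done)

echo "$fast"
```

In the port infrastructure itself, the analogous change would presumably
lean on make-internal variable modifiers (e.g. bmake's :S substitution)
rather than echo/sed pipelines.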


