Date: Tue, 22 Feb 2005 12:29:09 +0800 (WST)
From: David Adam <zanchey@ucc.gu.uwa.edu.au>
To: Dejan Lesjak <dejan.lesjak@ijs.si>
Cc: noackjr@alumni.rice.edu
Subject: Re: X.Org 6.8.2 - (most probably) final patch
Message-ID: <Pine.LNX.4.58.0502221218570.4364@mussel.ucc.gu.uwa.edu.au>
In-Reply-To: <200502212133.19515.dejan.lesjak@ijs.si>
References: <4219D008.6030107@alumni.rice.edu> <200502212133.19515.dejan.lesjak@ijs.si>
Just weighing in on this...

On Mon, 21 Feb 2005, Dejan Lesjak wrote:
> This should also be better with gzipped vs bzipped tarballs. I don't have
> any preference here. I'd like to know what we should be more concerned
> about here - size of distfile or the speed of extraction. Please keep in
> mind though that some people pay internet access by used bandwidth (and
> it's not cheap in some cases). Here are both sizes for comparison:

In my (completely unqualified) opinion, the size of the distfile is a much
higher priority than the speed of extraction. Until recently, I was getting
my ports on 56k, and that extra 11 MB would hit hard; I am sure there are
still many people in the world in a similar situation. If I use my
University's connection, it's 6c/MB for international traffic, so the
smaller the better. Again, there are bound to be users who are affected by
this.

The Subversion team looked at this when they were designing their system,
and decided that disk space and CPU time were a lot more expendable than
bandwidth, which is why SVN works well over dial-up.

17 minutes ~is~ a long time for extraction (particularly over several
ports), but I think it's a problem best solved by split distfiles, not by
lower compression. Is there any way around it (turning off cleaning,
unbzip2ing, and modifying the Makefile/distinfo)?

Cheers,

David Adam
zanchey@ucc.gu.uwa.edu.au
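[Editorial note: a minimal sketch of the size-vs-extraction-speed comparison being debated above. It assumes GNU gzip and bzip2 are on the PATH; `sample.tar` is a hypothetical stand-in, not the actual X.Org distfile.]

```shell
set -e

# Create a stand-in payload; in practice this would be the real tarball.
head -c 1000000 /dev/zero > sample.tar

# Compress the same input both ways at maximum compression.
gzip  -9 -c sample.tar > sample.tar.gz
bzip2 -9 -c sample.tar > sample.tar.bz2

# Compare distfile sizes (the bandwidth cost)...
ls -l sample.tar.gz sample.tar.bz2

# ...against decompression time (the extraction cost).
time gunzip  -c sample.tar.gz  > /dev/null
time bunzip2 -c sample.tar.bz2 > /dev/null
```

On a real distfile, bzip2 typically produces the smaller archive while gunzip decompresses faster, which is the tradeoff discussed above.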