Date: Sun, 26 Jul 2009 14:24:31 -0400
From: John Almberg <jalmberg@identry.com>
To: Mel Flynn <mel.flynn+fbsd.questions@mailing.thruhere.net>
Cc: freebsd-questions@freebsd.org
Subject: Re: limit to number of files seen by ls?
Message-ID: <8A69BBD9-5F3C-44B8-96C0-586C1B5A386F@identry.com>
In-Reply-To: <200907260045.12045.mel.flynn%2Bfbsd.questions@mailing.thruhere.net>
References: <20090725222918.AC51DB7E0@kev.msw.wpafb.af.mil> <4A6C071A.3020800@infracaninophile.co.uk> <200907260045.12045.mel.flynn%2Bfbsd.questions@mailing.thruhere.net>
On Jul 26, 2009, at 4:45 AM, Mel Flynn wrote:

> On Saturday 25 July 2009 23:34:50 Matthew Seaman wrote:
>
>> It's fairly rare to run into this as a practical limitation during most
>> day-to-day use, and there are various tricks, like using xargs(1), to
>> extend the usable range. Even so, for really big applications that need
>> to process long lists of data, you'd have to code the whole thing to
>> input the list via a file or pipe.
>
> ls itself is not glob(3) aware, but there are programs that are, like scp.
> So the fastest solution in those cases is to single-quote the argument and
> let the program expand the glob. for loops are also a common workaround:
> ls */* == for f in */*; do ls $f; done
>
> The point of it all being that the cause of the OP's observed behavior is
> only indirectly related to the directory size. He will have the same
> problem if he divides the 4000 files over 4 directories and calls ls */*

H'mmm... I haven't come back on this question because I want my next
question to be an intelligent one, but I'm having a hard time understanding
what is going on.

I'm reading up on this, and as soon as I know enough to either understand
the issue or ask an intelligent question, I will do so...

Thanks for all the comments...

-- John
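
For concreteness, a minimal sketch of the workarounds discussed above,
assuming a directory large enough that a bare glob overflows the kernel's
argument-length limit (ARG_MAX); the remote host and path in the scp example
are placeholders, not taken from the thread:

    # Let find(1) and xargs(1) batch the file names instead of passing
    # one enormous argument list to a single execve():
    find . -maxdepth 1 -type f -print0 | xargs -0 ls -ld

    # Or expand the glob in the shell and invoke ls once per file:
    for f in */*; do ls -ld "$f"; done

    # Or single-quote the glob so scp passes it unexpanded and the
    # remote shell performs the expansion on its side:
    scp 'user@example.org:/some/dir/*' .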