Date:      Sun, 26 Jul 2009 00:45:11 -0800
From:      Mel Flynn <mel.flynn+fbsd.questions@mailing.thruhere.net>
To:        freebsd-questions@freebsd.org
Subject:   Re: limit to number of files seen by ls?
Message-ID:  <200907260045.12045.mel.flynn%2Bfbsd.questions@mailing.thruhere.net>
In-Reply-To: <4A6C071A.3020800@infracaninophile.co.uk>
References:  <20090725222918.AC51DB7E0@kev.msw.wpafb.af.mil> <4A6C071A.3020800@infracaninophile.co.uk>

On Saturday 25 July 2009 23:34:50 Matthew Seaman wrote:

> It's fairly rare to run into this as a practical
> limitation during most day to day use, and there are various tricks like
> using xargs(1) to extend the usable range.  Even so, for really big
> applications that need to process long lists of data, you'd have to code
> the whole thing to input the list via a file or pipe.

ls itself is not glob(3) aware; the shell expands the wildcard before ls ever 
runs. There are programs that do their own glob expansion, though, such as 
scp. In those cases the fastest solution is to single-quote the argument and 
let the program expand the glob itself. A for loop is also a common 
workaround:
ls */*  ==  for f in */*; do ls "$f"; done
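A minimal sketch of the workarounds above (the directory names and file count 
are illustrative; a real case would involve thousands of files):

```shell
#!/bin/sh
# Build a small tree to glob over (names here are made up for the demo).
demo=$(mktemp -d)
mkdir "$demo/a" "$demo/b"
touch "$demo/a/one" "$demo/a/two" "$demo/b/three"
cd "$demo"

# The shell expands */* into ls's argument list; with enough files
# this overflows the kernel's argument-size limit:
ls */*

# Workaround 1: loop over the glob, so each ls invocation receives
# only a single, short argument:
for f in */*; do ls "$f"; done

# Workaround 2: let find(1) stream the names and xargs(1) split them
# into as many ls invocations as the limit requires:
find . -type f -print0 | xargs -0 ls
```

The loop trades one exec for one-per-file; the find/xargs pipe batches names 
up to the limit, so it is usually the faster of the two for large trees.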

The point of all this is that the cause of the OP's observed behavior is only 
indirectly related to the directory size: the real limit is on the length of 
the expanded argument list. He would hit the same problem if he divided the 
4000 files over 4 directories and called ls */*.
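The limit in question is the kernel's ARG_MAX, the maximum combined size of 
the argument list and environment handed to execve(2). A quick way to inspect 
it (the exact value varies by system):

```shell
# ARG_MAX caps the total bytes of argv + environ passed to execve(2).
# The shell expands */* into a single argument list regardless of how
# the files are spread across directories, so the same cap applies.
getconf ARG_MAX
```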
-- 
Mel
