Date:      Fri, 17 Nov 2000 12:56:52 +0100
From:      Sebastiaan van Erk <sebster@sebster.com>
To:        Zero Sum <count@shalimar.net.au>
Cc:        freebsd-questions@freebsd.org
Subject:   Re: argument list too long
Message-ID:  <20001117125652.A91692@sebster.com>
In-Reply-To: <00111722151204.01989@shalimar.net.au>; from count@shalimar.net.au on Fri, Nov 17, 2000 at 10:15:12PM +1100
References:  <20001116122313.A69018@sebster.com> <20001116154822.B18037@fw.wintelcom.net> <20001117011559.B3656@sebster.com> <00111722151204.01989@shalimar.net.au>

Zero Sum wrote:

> Let's get a bit of truth here, no one is saying that getting arguments from
> a file is not a good idea.  In fact a fundamental examination shows that it
> was always meant to be used that way.  In unix everything is represented as
> a file.  Stdin, out and err, are 'just files'.  Devices are 'just files'.

Why not have a stdarg file, from which a program can read its arguments one
by one? Then the shell could implement tar `find /` as two concurrent
processes.
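
For what it's worth, something close to that already exists in GNU tar's
-T/--files-from option; a sketch, assuming GNU tar and a find that
supports -print0:

    # find streams pathnames as it discovers them; tar reads them one at
    # a time from stdin (-T -), so both processes run concurrently and no
    # argument list is ever built up:
    find /home -type f -print0 | tar --null -T - -cf /backup/home.tar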
 
> > Basically, it's IMPOSSIBLE to write a library to do what I want to do,
> > because there's no way that I can get an unlimited number of args to
> > a program that doesn't have a --args-from option. So this is a
> > fundamental limitation of the OS.
> >
> I am not sure that that is correct.  It would take some work on __init()
> though.  So, technically not a library.

Somehow you would have to bypass the argument limit. If you have a FIXED
program, how are you ever going to get it to read a LONG argument list?
If programs got their arguments from stdarg by default, then we would have
no more problems. (There is a different problem with stdarg of course,
namely the standard problem of direct access vs. sequential access.)

The thing I find weird is that most programs of the OS, such as rm, mv,
etc., take a list of arguments. But any shell script that does things like
'rm *' is then IN PRINCIPLE broken, because it could overflow your argument
list. By that very reasoning, these kinds of programs should have a
--from-file option. (Yes, I know you can split rm up into multiple jobs
with xargs. Fortunately the maximum path length is limited to a power of
two which is less than the argument list length.) But then, I believe,
even rm via xargs with more than -n 64 is strictly speaking wrong
(64 maximum-length paths of 1024 bytes already fill a 65536-byte argument
area).
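
To make the overflow concrete, a sketch (the exact numbers depend on the
kernel's ARG_MAX; filenames containing whitespace would need find -print0
with an xargs that supports -0):

    # the shell expands * itself, so with enough files the execve() of
    # /bin/rm fails with E2BIG ("Argument list too long") before rm runs:
    rm *

    # xargs splits the list into several rm invocations, here of at most
    # 64 arguments each, so no single execve() exceeds the limit:
    find . -name '*.tmp' | xargs -n 64 rm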
 
> And another point you miss.  I'll stick with " find | tar " because those
> processes can run concurrently while your " tar `find`" runs sequentially.

I didn't miss that point. I accept that find | tar is the better
construction. But by the same reasoning, a lot of constructions that I see
very often in shell scripts are broken. And by the same reasoning I want to
be able to do find | rm. (And yes, I know of find -delete. The point, again,
is what should be possible, not what is made possible by workarounds such
as -delete.)
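
For comparison, the two workarounds side by side (paths and patterns are
only examples):

    # find doing the deleting itself, via its -delete primary:
    find /var/tmp -name '*.core' -delete

    # the pipe construction, with xargs standing in for the direct
    # "find | rm" I would like to be able to write:
    find /var/tmp -name '*.core' | xargs rm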
 
> Just accept that it is a dead-end construction. Why have to wait for the
> find to finish before the tar starts?  Particularly if you are talking long
> finds and huge argument lists.  So nothing is broken and nothing needs
> fixing.

So things are still broken (as I tried to explain above), and I still think
they need fixing (a stdarg construction, or a standard --args-from-file
option, or some such).
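
A --args-from-file convention can at least be approximated today with
existing tools, by putting the list in a file and letting xargs feed it to
the program in safe-sized chunks (the file name is purely illustrative):

    # build the (arbitrarily long) list once ...
    find /home -name '*.bak' > /tmp/args
    # ... then hand it to rm in chunks that never exceed the exec() limit:
    xargs rm < /tmp/args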

> Unix is very good at running lots of small jobs very quickly and shares
> resources well.

And wouldn't it be great if it were EVEN better!

Greetings,
Sebastiaan

