Date:      Thu, 18 Jul 1996 22:47:27 -0500 (CDT)
From:      Joe Greco <jgreco@brasil.moneng.mei.com>
To:        jkh@time.cdrom.com (Jordan K. Hubbard)
Cc:        imp@village.org, jlemon@americantv.com, current@FreeBSD.ORG
Subject:   Re: various 'fetch' errors
Message-ID:  <199607190347.WAA08396@brasil.moneng.mei.com>
In-Reply-To: <584.837720822@time.cdrom.com> from "Jordan K. Hubbard" at Jul 18, 96 01:13:42 pm

> > Yes.  You are being naive.  :-) Let's say I wanted to fetch 10-20
> > things.  And I have microbandwidth to the rest of the world.  I want
> > them to happen sequentially rather than in parallel.  Let's also say I
> > have multiple users that want to do this (say 1-2 each and there are 5
> 
> One of the nice things about UNIX is that you can string existing
> tools together.  Putting all of the above into fetch strikes me as
> overkill in the extreme and you'll never see me add anything as
> ludicrous as this to it! :-)

Ludicrous?  I see.

While I might disagree with that description of the feature request, I
would agree that it would be better to leverage another tool than to
embed "cron/at/queue" functionality in fetch itself.

Except...  we don't have one.  Well, I suppose you could cobble one
together with enough shell or perl programming...  but maybe that
shouldn't count.  
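
For the sake of argument, here is the sort of thing I mean...  a
quick-and-dirty sh sketch (the spool path and the "fetchq" name are
invented, and the enqueue/drain race is ignored) that serializes
fetches behind a lock directory:

    #!/bin/sh
    # fetchq <url> -- append a URL to the queue, then drain the
    # queue one job at a time.  Sketch only.
    QUEUE=/var/spool/fetchq
    LOCK=$QUEUE/lock

    [ $# -gt 0 ] && echo "$1" >> $QUEUE/jobs

    # mkdir is atomic, so it makes a cheap lock; if somebody else
    # holds it, their drain loop picks up the job we just queued.
    mkdir $LOCK 2>/dev/null || exit 0

    while :; do
        url=`head -1 $QUEUE/jobs 2>/dev/null`
        [ -z "$url" ] && break
        fetch "$url"
        # drop the line we just ran
        sed 1d $QUEUE/jobs > $QUEUE/jobs.tmp
        mv $QUEUE/jobs.tmp $QUEUE/jobs
    done
    rmdir $LOCK

Multiply that by per-user spools, retry logic and scheduling, and it
stops being a one-liner...  which is exactly the point.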

I do think that cron is a good paradigm for this sort of thing.  With
extensions similar to SunOS's queuedefs, it would be adequate to fill
this need.  It also goes a long way toward handling resource contention
on slower machines...  because I limit "disk"-intensive crontabs to a
queue with one job allowed on Solaris, my disks don't go totally wacko
when a maintenance chore takes longer than I anticipated...  indeed, I
don't even have to worry about it.
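
For reference (this is from memory, so take the details as
approximate), a queuedefs line is just <queue>.<njobs>j<nice>n<nwait>w,
where j caps concurrent jobs, n is the nice increment, and w is how
long to wait before retrying.  Something like this would do it, with
the `d' queue being the hypothetical extension for disk-bound crontab
entries:

    # /etc/cron.d/queuedefs (Solaris; path and fields from memory)
    a.4j1n
    b.2j2n90w
    # hypothetical "disk" queue: one job at a time
    d.1j5n60w

With the job count set to 1, cron simply holds the next entry until
the current one finishes...  which is just the behavior you would want
for a pile of fetches over a thin pipe.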

... JG


