Date:      Thu, 28 Jan 1999 13:45:42 -0500 (EST)
From:      "John S. Dyson" <dyson@iquest.net>
To:        dillon@apollo.backplane.com (Matthew Dillon)
Cc:        dyson@iquest.net, wes@softweyr.com, toasty@home.dragondata.com, hackers@FreeBSD.ORG
Subject:   Re: High Load cron patches - comments?
Message-ID:  <199901281845.NAA21716@y.dyson.net>
In-Reply-To: <199901281804.KAA09766@apollo.backplane.com> from Matthew Dillon at "Jan 28, 99 10:04:41 am"

Matthew Dillon said:
> :Throttling fork rate is also a valuable tool, and maybe a hard limit is good
> :also.  It is all about how creative you are (or want to be) in your solution :-).
> 
>     Throttling the fork rate immediately leads to complaints.  The perception
>     of load is easily as important as the reality.  We had put fork rate 
>     limits on both sendmail and popper, and the result was hundreds of calls
>     to tech support :-(.  I even had load-based feedback mechanisms.  It was
>     a disaster.
>
I understand that.

> 
>     The issue is that the load is an interactive load, not a batch load -- it
>     is not acceptable to accept a connection and then pause for 5 minutes
>     before yielding a shell prompt, processing a popper request, or even
>     responding with an SMTP HELO.  Or handling a web request.  The machine *must*
>     be able to handle a temporary overload.  Even *mail delivery* is an 
>     interactive load -- users have come to expect their email to propagate in
>     5 minutes or less, and if it doesn't, we get complaints.
> 
If you are talking about 5 minutes, then there is a problem with the mechanism
chosen.  It is tricky to produce a numerically stable result with large
sample intervals -- this is where things need to be thought out carefully.
Given ad-hoc algorithms and a 10Hz sample rate, it is possible to mess things
up pretty badly (response-wise).  If you end up with even 5-second delays,
people might complain (I sure would); I hate for shell prompts to be delayed
like that.  With (nearly) runaway sendmail (and other service) processes,
I can see how responsiveness can get sluggish.
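
For example (just an illustrative sketch, not the actual kernel code), a
simple exponentially-weighted average keeps a sampled load figure stable;
the decay constant has to follow from the sample rate, which is exactly
where ad-hoc code tends to get the math wrong:

/*
 * Illustrative sketch only: an exponentially-weighted moving average of
 * a sampled load figure.  With a 10Hz sample rate, the decay constant
 * must be derived from the sample interval and the desired half-life,
 * or the estimate either lags badly or oscillates.
 */
#include <math.h>

static double load_est;		/* smoothed estimate */

/* Called once per sample tick (e.g. 10 times a second). */
void
load_sample(double instant_load, double sample_hz, double halflife_sec)
{
	/* alpha derived from the sample rate and desired half-life */
	double alpha = 1.0 - pow(0.5, 1.0 / (sample_hz * halflife_sec));

	load_est += alpha * (instant_load - load_est);
}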

Imagine having 100 sendmails fork off instantaneously!!!  That would certainly
cause interactive performance to glitch a little, wouldn't it?  How big is too
big?  Is 1000 sendmails too many, or 100, or 10?  What are the real limits
on the system (think memory bandwidth, processor bandwidth, or network
bandwidth)?  By limiting yourself to a fixed maximum number of processes
(especially in the case of sendmail), you aren't really limiting the actual
resource utilization (bandwidth)...  (At least, I hope you aren't running
out of memory :-)).  What needs to be limited is the "bandwidth".

I'll bet that the rate schemes you have tested set the "bandwidth" to a fixed
limit, and that is why you decided rate limiting was bad.  Well, that isn't
the way to do it -- it needs to be done on a sharing basis: dynamically
estimate the capabilities of the system, and limit based upon that in a
fair-share scheme.
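
Something along these lines (a sketch only -- the structure and the equal
split are assumptions, not existing code), where "capacity" would come from
a dynamic estimate like the smoothed load above rather than a constant:

/*
 * Minimal fair-share sketch: divide the currently estimated capacity
 * (e.g. forks/sec the machine can absorb) among the competing services,
 * instead of giving each service a fixed, hard-coded cap.
 */
struct service {
	const char *name;
	double	    share;	/* forks/sec this service may use right now */
};

void
recompute_shares(struct service *svc, int nsvc, double capacity)
{
	int i;

	for (i = 0; i < nsvc; i++)
		svc[i].share = capacity / nsvc;	/* equal shares; could be weighted */
}

As the capacity is re-estimated, the shares move with it, so a burst isn't
refused outright -- it just gets divided fairly.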

Simple schemes impose a fixed "capability" limit on the system; I propose
more complete schemes.  When you have the luxury of computing against a
fixed limit, the stability of the math isn't critical.  It seems to me that
correct rate limiting provides a sharing approach, not a limiting approach.
(The key is that limiting approaches often artificially limit too low, or
don't work at all.)

System resource limits:

fork rate:       forks per second
paging rate:     pages per second
CPU usage rate:  seconds of CPU time per second

The already existing scheduler code does a good job of the "CPU usage rate",
and that problem is simpler because it generally has a stable amount of
CPU available (100%).  "Forks per second possible" (which is a significant
load issue, because it ends up being a transient usage of CPU -- where
CPU load accounting isn't practical) and "pages per second possible" both
have to be estimated to manage the resource.  Hard limits on either end
up artificially limiting too low, or not limiting at all.
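
Here is roughly what I mean by estimating rather than hard-limiting (a
hypothetical sketch -- the names and thresholds are made up, not kernel
variables): probe the estimate upward while the system keeps up, and back
it off when paging pressure shows the transient cost of forking is hurting.

#define FORK_EST_MIN	  5.0
#define FORK_EST_MAX	500.0

static double fork_est = 50.0;	/* current estimate, forks/sec */

/* Called periodically with the observed paging rate. */
void
fork_est_update(double paging_rate, double paging_limit)
{
	if (paging_rate > paging_limit)
		fork_est *= 0.5;	/* back off quickly under pressure */
	else
		fork_est += 1.0;	/* probe upward slowly */

	if (fork_est < FORK_EST_MIN)
		fork_est = FORK_EST_MIN;
	if (fork_est > FORK_EST_MAX)
		fork_est = FORK_EST_MAX;
}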

Numerical methods and stability are your friends.  I suspect that this kind
of thinking is part of why the existing scheduler that handles "CPU usage
rate" works so well.

It does seem that a fork rate limit per process would be a reasonable
approach, though...  If a rate limit is imposed, then the CPU consumed by
forking -- which is difficult to account for -- can be bounded.
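
A per-process limit could look something like a small token bucket (again,
just a sketch, not existing code): the bucket refills at the allowed rate,
and a fork that finds it empty gets delayed, so one runaway daemon can't
soak up the unaccounted fork-time CPU.

#include <time.h>

struct fork_bucket {
	double	tokens;		/* forks currently allowed */
	double	rate;		/* refill rate, forks/sec */
	double	burst;		/* bucket depth */
	time_t	last;		/* last refill time */
};

/* Returns 1 if the fork may proceed now, 0 if it should be delayed. */
int
fork_allowed(struct fork_bucket *fb)
{
	time_t now = time(NULL);

	fb->tokens += fb->rate * (double)(now - fb->last);
	fb->last = now;
	if (fb->tokens > fb->burst)
		fb->tokens = fb->burst;
	if (fb->tokens < 1.0)
		return (0);
	fb->tokens -= 1.0;
	return (1);
}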

> 
>     The only thing that ever worked reliably were absolute limits.  The
>     internet is so bursty that a machine *must* be able to accept a high
>     load or overload situation for upwards of 10 or 15 minutes *without*
>     slapping limits on processes.
>
If your mechanism is working correctly, the load will be shared fairly --
the sum of CPU (or other resources) consumed is still the same.  Bad ad-hoc
sharing mechanisms often cause weird behavior.  Are you speaking of a simple
rate limit?

Note that the current VM page management isn't based upon a policy of
handing out pages to processes when the system thinks it is fair by limits,
but rather on a sharing scheme.  The same approach should be taken by any
resource management mechanism.  Some mechanisms are easier to deal with
than others.

-- 
John                  | Never try to teach a pig to sing,
dyson@iquest.net      | it makes one look stupid
jdyson@nc.com         | and it irritates the pig.



