Date:      Fri, 10 Dec 2010 10:22:59 -0500
From:      Michael Powell <nightrecon@hotmail.com>
To:        freebsd-questions@freebsd.org
Subject:   Re: What is loading my server so much?
Message-ID:  <idtgjk$bkr$1@dough.gmane.org>
References:  <4D00BDF8.6020206@shopzeus.com> <4D01CA99.9020706@infracaninophile.co.uk>

Matthew Seaman wrote:

> On 09/12/2010 11:31, Laszlo Nagy wrote:
>> Today something happened. Number of http processes went up to 200. As a
>> result, number of connections to database also went up to 200, and the
>> web server is now refusing clients with "Cannot connect to database"
>> messages (coming from PHP).
> 
> This is a classic scenario.  Some burst of traffic causes your apache to
> spawn more child processes than will all fit in RAM at one time.
> Consequently, the system starts to swap.  Swapping kills performance.
> This slows everything down so much that there are always requests
> waiting for apache to process, so apache will never find any idle
> children to kill off.  Result misery.
> 
> The answer is to limit the number of child processes apache will spawn.
> Decide how much of your available RAM you can devote to Apache.  Look at
> top(1) to find the maximum size apache processes grow to.  The ratio of
> those two sizes is the maximum number of apache processes your system
> can support.
> 
> Limiting the total number of apache processes sounds counter-intuitive.
> What happens when you get sufficient traffic that apache maxes out?  Web
> queries will generally be queued up until there's an apache child free
> to handle them.  Generally that will take from a few 10s of milliseconds
> on up -- although if you're regularly getting into a state where your
> webserver takes seconds to answer, then it's time to get more beefy
> hardware.
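The sizing arithmetic Matthew describes comes down to one division. A sketch with made-up numbers (your RAM budget and per-child RSS from top(1) will differ):

```shell
# Hypothetical numbers: divide the RAM you can devote to Apache by the
# largest per-child resident size seen in top(1) to get the process cap.
ram_for_apache_mb=2048     # assumed: RAM budget for Apache
per_child_rss_mb=40        # assumed: biggest apache RSS from top(1)
echo $(( ram_for_apache_mb / per_child_rss_mb ))
```

With those numbers you'd cap Apache at about 50 children.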
> 

The quintessential first try for this has historically been to set 
keepalives to "Off". Not a solution, just an interim stop-gap, but if it 
makes any discernible improvement (however small) it confirms the diagnosis.
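In httpd.conf that stop-gap is a single directive:

```apache
# Interim stop-gap only: don't let idle children sit holding
# keep-alive connections open while the box is thrashing.
KeepAlive Off
```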

What I did was switch to the event MPM and FastCGI. You get fewer processes, 
but each process spawns lots of threads. Each thread within a process can 
reuse a database connection previously established by a different thread, 
which saves connection set-up/tear-down cycles as well as RAM.
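Roughly, and assuming a modern Apache 2.4 with php-fpm (module paths and the socket address here are assumptions, adjust to your install), the shape of it looks like:

```apache
# Sketch only -- module and socket paths are assumptions.
LoadModule mpm_event_module   modules/mod_mpm_event.so
LoadModule proxy_module       modules/mod_proxy.so
LoadModule proxy_fcgi_module  modules/mod_proxy_fcgi.so

# Hand PHP off to a php-fpm pool over FastCGI instead of embedding
# an interpreter in every Apache child.
ProxyPassMatch "^/(.*\.php)$" "fcgi://127.0.0.1:9000/usr/local/www/$1"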

Then throw memcached (via libmemcached) into the mix, if possible. To 
properly utilize memcached your PHP has to have some code inserted so it 
will talk to it. This sets aside a pool of RAM for cached query results, 
similar in spirit to the connection pool you would find in use by a Java 
servlet container such as Tomcat or Resin. If you are unable to refactor 
the code, you can enable the MySQL query cache instead. Not as effective, 
but if you get a lot of queries that are the same SQL it saves cycles. 
How much your mileage varies depends on how many queries are repetitive; 
if each and every query is unique it won't do much.
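If you take the MySQL query cache route, it's a couple of lines in my.cnf (sizes here are purely illustrative, and note the query cache only helps when identical SQL text repeats):

```ini
# my.cnf -- illustrative sizes, tune to your workload
query_cache_type = 1        # cache eligible SELECT results
query_cache_size = 64M      # RAM set aside for cached results
```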

So a fictitious example would be: start with some small number of stand-by 
processes, such as 8, 16, or 32. Each process might have a thousand threads; 
when that thousand is reached, Apache opens another process with a thousand 
more. You would still want to set a max-child/max-thread count along the 
lines described above. However, this approach gets you more headroom before 
you hit the PHP data-access wall of hanging doom.
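That fictitious shape maps onto the event MPM knobs something like this (every number below is invented for illustration, not a recommendation):

```apache
<IfModule mpm_event_module>
    StartServers             8      # small number of stand-by processes
    ThreadsPerChild       1000      # each process carries many threads
    ThreadLimit           1000
    ServerLimit             32      # cap on processes
    MaxRequestWorkers    32000      # overall cap = ServerLimit * ThreadsPerChild
</IfModule>
```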

-Mike





