Date:      Mon, 27 Oct 2008 20:43:45 +0000
From:      Matthew Seaman <m.seaman@infracaninophile.co.uk>
To:        Francis Dubé <freebsd@optiksecurite.com>
Cc:        freebsd-questions@freebsd.org
Subject:   Re: collecting pv entries -- suggest increasing PMAP_SHPGPERPROC
Message-ID:  <49062801.9090805@infracaninophile.co.uk>
In-Reply-To: <49060AE0.3000301@optiksecurite.com>
References:  <49060AE0.3000301@optiksecurite.com>

Francis Dubé wrote:
> Hi everyone,
>
> I'm running a webserver on FreeBSD (6.2-RELEASE-p6) and I have this
> error in my logs:
>
> collecting pv entries -- suggest increasing PMAP_SHPGPERPROC
>
> I've read that this is mainly caused by Apache spawning too many
> processes. Everyone seems to suggest decreasing the MaxClients
> directive in Apache (set to 450 at the moment), but here's the
> problem... I need to increase it!  During peaks all the processes are
> in use, and we even have little drops sometimes because there aren't
> enough processes to serve the requests.  Our traffic is increasing
> slowly over time, so I'm afraid it'll become a real problem soon.  Any
> tips on how I could deal with this situation, on Apache's or FreeBSD's
> side?
>
> Here's the useful part of my conf:
>
> Apache/2.2.4, compiled with prefork mpm.
> httpd.conf :
> [...]
> <IfModule mpm_prefork_module>
>    ServerLimit         450
>    StartServers          5
>    MinSpareServers       5
>    MaxSpareServers      10
>    MaxClients          450
>    MaxRequestsPerChild   0
> </IfModule>
>
> KeepAlive On
> KeepAliveTimeout 15
> MaxKeepAliveRequests 500
> [...]
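
(Incidentally, PMAP_SHPGPERPROC itself is a compile-time kernel option,
so if you just want more headroom while you rework things, a hedged
sketch -- the default is 200, and the 400 below is only an example --
is to add a line like this to your kernel configuration file and
rebuild:

        options         PMAP_SHPGPERPROC=400

That buys space in the pv entry table; needing fewer Apache processes,
as discussed below, attacks the cause.)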

You don't say what sort of content you're serving, but if it is
PHP, Ruby-on-Rails, Apache mod_perl or similar dynamic content then
here's a very useful strategy.

Something like 25-75% of the HTTP queries on a dynamic web site will
typically be for static files: images, CSS, javascript, etc.  An
instance of Apache padded out with all the machinery to run all that
dynamic code is not the ideal server for the static stuff.  In fact,
if you install one of the special super-fast webservers optimised
for static content, you'll probably be able to answer all those
requests from a single thread of execution of a daemon substantially
slimmer than apache.  I like nginx for this purpose, but lighttpd
is another candidate, or you can even use a 2nd highly optimised
instance of apache with almost all of the loadable modules and other
stuff stripped out.
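
To give an idea of how little nginx needs for that role, here is a
minimal top-level nginx.conf sketch (the numbers are illustrative
rather than tuned recommendations):

        worker_processes  1;           # one worker easily keeps up with static files

        events {
            worker_connections  1024;  # simultaneous connections per worker
        }

        http {
            include            mime.types;
            sendfile           on;     # let the kernel push file data directly
            keepalive_timeout  15;
            # server { } blocks go here -- see the example further down
        }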

The tricky bit is managing to direct the HTTP requests to the
appropriate server.  With nginx I arrange for apache to bind to the
loopback interface and nginx handles the external network interface, but
the document root for both servers is the same directory tree.  Then
I'd filter off requests for, say, PHP pages using a snippet like so
in nginx.conf:

        location ~ \.php$ {
            proxy_pass   http://127.0.0.1;
        }
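
For concreteness, the surrounding pieces might look roughly like this
(the public address, hostname and document root are placeholders for
your own values):

        # httpd.conf -- Apache listens only on the loopback address:
        Listen 127.0.0.1:80

        # nginx.conf, inside http { } -- nginx owns the public interface
        # and shares Apache's document root:
        server {
            listen       192.0.2.10:80;
            server_name  www.example.com;
            root         /usr/local/www/data;

            location ~ \.php$ {
                proxy_pass        http://127.0.0.1;
                proxy_set_header  Host  $host;   # keep the virtual host intact
            }
        }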

So all the PHP gets passed through to Apache, and all of the other
content (assumed to be static files) is served directly by nginx[1].
It also helps if you set nginx to put an 'Expires:' header several
days or weeks in the future for all the static content -- that way
the client browser will cache it locally and it won't even need to
connect back to your server and try doing an 'if-modified-since' HTTP
GET on page refreshes.
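
A hedged example, in the same nginx server { } block (the extension
list and the one-week lifetime are just illustrations):

        location ~* \.(gif|jpe?g|png|css|js|ico)$ {
            expires  7d;   # sends both Expires: and Cache-Control: max-age
        }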

The principal effect of this is that Apache+PHP basically spends all
its time doing the heavy lifting it's optimised for, and doesn't get
distracted by all the little itty-bitty requests.  So you need fewer
apache child processes, which reduces memory pressure and to some
extent competition for CPU resources.

An alternative variation on this strategy is to use a reverse proxy
-- varnish is purpose-designed for this, but you could also use squid
in this role -- the idea being that static content can be served mostly
out of the proxy cache, and it's only the expensive-to-compute dynamic
content that always gets passed all the way back to the origin server.
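
A minimal sketch of that idea in Varnish's configuration language (note
this uses the VCL 4.0 syntax of later Varnish releases, so the details
will differ on other versions; the backend address and port are
placeholders, and Apache would have to listen there):

        vcl 4.0;

        # Apache, the origin server, assumed to be on localhost:8080
        backend default {
            .host = "127.0.0.1";
            .port = "8080";
        }

        sub vcl_recv {
            # Expensive dynamic pages always go back to the origin;
            # everything else is left to Varnish's default caching logic.
            if (req.url ~ "\.php$") {
                return (pass);
            }
        }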

You can also see the same strategy commonly used on Java-based sites,
with Apache being the small-and-lightning-fast component, shielding
a larger and slower instance of Tomcat from the rapacious demands of
the Internet surfing public.
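
The glue there is usually mod_jk or mod_proxy; a hedged sketch with
mod_proxy_http, where the /app context path and the Tomcat port are
only examples:

        # httpd.conf -- needs mod_proxy and mod_proxy_http loaded
        ProxyPass         /app  http://127.0.0.1:8080/app
        ProxyPassReverse  /app  http://127.0.0.1:8080/app

Requests outside /app are then answered by Apache directly.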

	Cheers,

	Matthew

[1] Setting 'index index.php' in nginx.conf means it will DTRT with
    directory URLs too.

-- 
Dr Matthew J Seaman MA, D.Phil.                   7 Priory Courtyard
                                                  Flat 3
PGP: http://www.infracaninophile.co.uk/pgpkey     Ramsgate
                                                  Kent, CT11 9PW

