Date:      Thu, 11 Oct 2001 14:42:54 +0200
From:      "Mitterwald, Holger" <holger.mitterwald@coi.de>
To:        Marcel Prisi <marcel-lists@virtua.ch>
Cc:        freebsd-isp@FreeBSD.ORG
Subject:   Re: PostgreSQL & shared memory
Message-ID:  <3BC593CE.FCD09FCA@coi.de>
References:  <006001c1516b$db2637b0$8d01a8c0@gastroleader.com>

Marcel Prisi wrote:
> 
> Hi all !
> 
> We just published a new website running on FreeBSD 4.4 prerelease 10 august
> / apache 1.3.20 / php 4.0.6 / PostgreSQL 7.1.2 .
> 
> In fact we moved the site from a Win2k/iis/sqlserver to BSD, and we have a
> HUGE amount of hits, and the machine is already getting overloaded.
> PostgreSQL seems like a resource hog. The machine is a dual PIII 866 MHz,
> 380 MB RAM, Adaptec 2100S.

What do you consider a huge amount of hits?
What percentage of your pages needs access to the database?

 
> I had to stop using both keepalive in apache and persistent connections in
> php in order to keep the machine from swapping and not responding anymore.
> The load is extremely high as some SQL queries are really hard ...

Disabling persistent connections to pgsql is, in my opinion, a bad idea.
PostgreSQL needs a lot of time to fork a new backend process for every
connection, and persistent connections avoid exactly that. With persistent
connections my pgsql accesses were about 4 times faster than without. In
PHP that simply means using pg_pconnect() instead of pg_connect().

> The first thing we'll do is install some more memory, and then a separate
> machine for PostgreSQL.
> 
> My question is : what are the settings I should add to be able to handle
> about 400-500 connections to PostgreSQL ? What about shared memory & FreeBSD
> kernel ?

My question is: what can you do to reduce the number of connections to
PostgreSQL?

First, try to eliminate as many accesses to the database as you can without
losing much functionality. Often it is better to generate a page once every
hour than on every single hit; at the very least, only the truly dynamic
part of the page should be rebuilt on each request. That saves A LOT of
time!
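
Just to illustrate the hourly idea - all paths and names below are made up,
adjust them to your setup - you can render the expensive page once from
cron and let apache serve the static result:

    # crontab entry for the www user, runs at the top of every hour:
    0 * * * *   /usr/local/bin/rebuild-toplist.sh

    # /usr/local/bin/rebuild-toplist.sh:
    #!/bin/sh
    # Render the PHP page once, with all its SQL queries, and publish the
    # result as static HTML; all other hits in that hour never touch pgsql.
    fetch -o /tmp/toplist.$$ http://localhost/toplist.php && \
        mv /tmp/toplist.$$ /usr/local/www/data/toplist.html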

Second, run vacuumdb regularly. PostgreSQL is a bit stupid here: it never
reclaims the space of updated or deleted rows on its own, so regular
vacuumdb runs really increase speed if you have many
updates/deletes/inserts in your database.
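
Something like this in the crontab of the pgsql user is enough (the
database name "mydb" is made up):

    # vacuum and analyze the whole database every night at 04:00
    0 4 * * *   /usr/local/bin/vacuumdb -z mydb >> /var/log/vacuumdb.log 2>&1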

Third, sit down at your development system (you do have one, don't you???),
put the "explain" command in front of every SQL statement and analyze the
output. There you can see where postgres does full table scans, and those
are the places where an index on the table helps. Don't forget to run
"vacuumdb -z" afterwards, otherwise the optimizer doesn't know anything
about your indices.
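
Roughly like this - table, column and index names are only examples:

    # look at the plan of a suspect query
    psql -c "EXPLAIN SELECT * FROM orders WHERE customer_id = 42;" mydb
    #   "Seq Scan on orders ..."  means a full table scan

    # index the column you filter on, then refresh the optimizer statistics
    psql -c "CREATE INDEX orders_customer_idx ON orders (customer_id);" mydb
    vacuumdb -z mydb

    # the same EXPLAIN should now show
    #   "Index Scan using orders_customer_idx on orders ..."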

If you have tables with many duplicate values (let's say you store
temperatures for a number of locations and you mainly search by location),
then put an index on the location column and cluster the table on it (see
"man cluster").
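
For the temperature example that would look roughly like this (made-up
names again; note that CLUSTER in 7.1 takes the index name first, and the
man page lists a few caveats worth reading):

    psql -c "CREATE INDEX readings_location_idx ON readings (location);" mydb
    psql -c "CLUSTER readings_location_idx ON readings;" mydb
    # CLUSTER is a one-shot reorganisation, so repeat it from time to time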


Tuning the database itself will gain you more speed than any kernel
parameter, CPU or RAM ever can. A speed-up by a factor of 1000 is realistic
on bigger databases!

If you can take the risk (after a crash the database may be inconsistent),
disable fsync for writes. That also saves a lot of time if you have many
updates/deletes/inserts! (I think it is the "-F" option of postgres.)
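
As far as I know there are two ways to do it on 7.1 (the data directory
below is the FreeBSD port default, adjust it to your installation):

    # pass -F through to the backends when the postmaster is started ...
    postmaster -o "-F" -D /usr/local/pgsql/data

    # ... or set it in postgresql.conf and restart the postmaster:
    echo "fsync = false" >> /usr/local/pgsql/data/postgresql.conf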

Postgres can be much faster than Oracle on some queries - if it is
properly tuned!

Hope this helps...

Best regards,
   Holger

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-isp" in the body of the message



