Date:      Sun, 06 Apr 2003 16:52:29 -0400
From:      Chuck Swiger <cswiger@mac.com>
To:        Support <support@netmint.com>
Cc:        freebsd-isp@freebsd.org
Subject:   Re: load testing and tuning a 4GB RAM server
Message-ID:  <3E90938D.2050307@mac.com>
In-Reply-To: <20030406145845.R18790-100000@netmint.com>
References:  <20030406145845.R18790-100000@netmint.com>

Support wrote:
[ ... ]
>> Also, increase NSWAPDEV to at least two, so you at least have the
>> possibility of adding more swap to a running system, or of adding some
>> so that you can take the primary swap area offline for some reason.
> 
> I probably will never need to increase swap without rebooting because
> there is no available disk space to do it.  Is there a reason to make it 2
> at the expense of losing KVA memory if you know that adding swap will
> entail a reboot (i.e. you can recompile the kernel first)?

Aren't you using a Dell PowerEdge and hot-swappable drives?  I also 
thought you mentioned you were using 15K drives for swap, which implies 
SCSI...probably 80-pin SCA form-factor, right?

If you knew that the difference in KVA memory would be significant to 
your usage, then you could evaluate whether saving a couple of swap 
device slots is worth the loss of flexibility.  Configuring a system 
without any tolerance for change is a little like Procrustes being too 
precise in measuring his own bed.  :-)
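
For what it's worth, the change is one line in the kernel config, and 
adding swap later is one command.  Something like this (option spelling 
as discussed in this thread; the value and the device name below are 
illustrative, not recommendations):

    # kernel config fragment
    options         NSWAPDEV=2      # allow up to two swap devices

    # later, on the running system:
    swapon /dev/da1s1b              # made-up device name; use your slice
    pstat -s                        # confirm both devices are active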

[ ... ]
> At the lower end 1500 to 2000 and possibly as high as 4000 established web
> connections at peak times at any given moment. And as little as 500 at
> off-peak times. The web traffic will split 25-30% dynamic PHP/Perl (75% of
> which will require DB interaction) and 70-75% pure file downloads. This
> will amount to millions of connections per day, so I think looking at it
> from the constant load point of view allowing for X many connections
> established is a better idea.

I'm not sure you measure "established" the same way I do.  Do you mean 
you expect there to be 500 to 4000 active apache children all processing 
transactions 24-7, or do you mean you expect to see anywhere up to 4000 
people using the site(s) at a time, clicking at whatever rate they'd use 
the site(s) during normal transactions?
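
The distinction matters for how you set apache's limits.  In apache 1.3 
the relevant knobs look something like this (values are placeholders, 
not recommendations-- and note that MaxClients is capped by 
HARD_SERVER_LIMIT, 256 by default, which is compiled in):

    # httpd.conf fragment -- illustrative values only
    MaxClients        512   # simultaneous children; raising this past 256
                            # means rebuilding with a bigger HARD_SERVER_LIMIT
    StartServers       64
    MinSpareServers    32
    MaxSpareServers   128
    KeepAlive          On
    KeepAliveTimeout   15   # idle keepalives still count as "established"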

>> pageviews (if you can distinguish)?  Are you going to be using SSL, lots
>> of virtual domains, any special apache modules?  Logfile analysis needs?
> 
> Yes to SSL (openssl, mod_ssl), yes to SSL virtual domains with their own
> IPs, yes to normal virtual domains with their own IPs.

As you noted, one can't do name-based virtual domains over SSL: each SSL 
site has to have its own unique IP.
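
In mod_ssl that comes out as one <VirtualHost> block per address, along 
these lines (the address, names, and paths are invented for 
illustration):

    <VirtualHost 192.0.2.10:443>
        ServerName            secure1.example.com
        SSLEngine             on
        SSLCertificateFile    /usr/local/etc/apache/ssl.crt/secure1.crt
        SSLCertificateKeyFile /usr/local/etc/apache/ssl.key/secure1.key
    </VirtualHost>
    # ...and another such block, on a different IP, for each SSL site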

How much of your traffic is going to be over SSL?  You might want to 
look into getting a HI/FN crypto-accelerator card, particularly if you 
have lots of small/short SSL sessions rather than a few longer ones.

[ ... ]
>> Also, what are you doing with the database; that is to say, which DB
> 
> MySQL for 90-95% and PostgreSQL for 5-10% of usage. The reason for going
> with 1 server instead of 2 is to create chunks of users per server and
> allow them to use unix sockets. As soon as load is too high, we just get
> another web/db server. Not sure what kind of usage the databases will see,
> most likely 80-85% reads and 15-20% writes.

You really want to run only one type of production database per machine; 
you're risking VM thrashing otherwise.
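
The unix-socket part is easy to sanity-check from the shell, by the way: 
"localhost" in MySQL means the unix socket rather than TCP, and psql 
defaults to the local socket when you omit -h.  (The socket path below 
is the usual default and the database name is made up-- check your 
builds.)

    mysql -S /tmp/mysql.sock -e 'SELECT 1'
    psql -c 'SELECT 1' somedb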

> [...clip...]
>> up user files.  That's aside from the fact that you really don't want to
>> keep database files on RAID-5 storage in the first place, either-- keep
>> the DB on RAID-1 or RAID-1,0 if you can: use some of that 15K swapspace
>> if you need to.
>  
> Chuck, I understand what you're saying. Unfortunately, the decision to go
> with RAID 5 is financial.  Is there concrete evidence that RAID 5 is
> absolutely terrible for read/write access?

Sure.  There was a recent thread about iozone or bonnie on -stable where 
someone was surprised to discover that writes to a normal (un-RAIDed) 
drive are considerably faster than writes to a RAID-5 array.  The reason 
is the small-write penalty: to update one block, RAID-5 has to read the 
old data and old parity, then write the new data and new parity-- four 
disk operations where a plain drive does one.  Or check what your 
databases recommend in terms of disk layout for the DB files; they 
should discuss interactions/tuning with RAID.

Besides, it's not clear that you need to spend more money: with the 
amount of RAM you've got, you should be able to avoid swapping often. 
Having really fast swapspace access for VM probably isn't as valuable as 
having really fast I/O for the databases.
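
You can check that assumption while you load-test:

    pstat -s        # swap devices, and how much is actually in use
    vmstat 5        # watch the page-in/page-out columns under load

If those stay near zero at peak, fast swap buys you nothing.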

> It's been holding up pretty well in tests.

You might have something like a Dell/Adaptec PERC? RAID controller with 
128MB or so of I/O buffer memory which can also do the RAID-5 XOR 
calculations?  That will help, but even so RAID-5 write performance goes 
from adequate to poor as the I/O load increases.  Also, have you been 
testing I/O while also hitting a database (or two) at the same time?
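
If not, a crude sketch-- everything here (paths, sizes, table name) is a 
placeholder:

    #!/bin/sh
    # Big sequential write against the array while a query loop runs;
    # compare dd's reported throughput against a run with the DB idle.
    dd if=/dev/zero of=/raid/ddtest bs=64k count=32768 &   # ~2GB of writes
    i=0
    while [ $i -lt 1000 ]; do
        mysql -e 'SELECT COUNT(*) FROM test.orders' >/dev/null
        i=$((i+1))
    done
    wait    # let dd finish and report its timing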

-- 
-Chuck



