Date:      Tue, 20 Mar 2001 11:11:44 -0600
From:      "Michael C . Wu" <keichii@iteration.net>
To:        dillon@freebsd.org, grog@freebsd.org, fs@freebsd.org, hackers@freebsd.org
Subject:   tuning a VERY heavily (30.0) loaded server
Message-ID:  <20010320111144.A51924@peorth.iteration.net>

[Lengthy email, please bear with me; it is quite interesting.
This box averages a 30.0 load with no problems.]

system stats at 
http://zoo.ee.ntu.edu.tw/~keichii/

Hello Everyone,

I have a friend who admins a very heavily loaded BBS server.
(In Taiwan, BBS'es are still very popular, because they 
are the primary form of scholastic communication in colleges/universities.
And FreeBSD runs on most of the university systems in Taiwan ;) )

This box is quite a FreeBSD advocate in itself, as you will see.

It runs a self-written Perl SMTP daemon (Sendmail and Postfix both croak).
SMTPd pipes the mail to "bbsmail", which delivers it to
BBS users.  SMTPd averages about 

BBSd averages about 3000 users at any given time of day;
peak usage is about 4300 users before the box dies.
Each user averages 4-5KB/sec of bandwidth.
BBSd is an in-house modification of a popular BBS daemon in Taiwan.
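For a rough sense of scale, the per-user figures above imply an aggregate bandwidth in the tens of MB/sec at peak. A back-of-envelope sketch (using the top of our observed 4-5KB/sec range; this is only an estimate):

```shell
# Back-of-envelope aggregate bandwidth at peak load.
# Figures are taken from the numbers above; this is only an estimate.
users=4300      # observed peak concurrent users
kbps=5          # upper end of the 4-5 KB/sec per-user average
total=$((users * kbps))
echo "${total} KB/sec aggregate"    # about 21 MB/sec
```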

There is an innd backend to BBSd that gets a full feed of tw.bbs.*
and many other local newsgroups.

Average file size is about 4K.  /home/bbsusers* is on a vinum
striped volume with 3 Ultra160 9GB 10000RPM drives on sym0 at a
stripe size of 256K.  Greg: I know this should be a prime number;
can we safely use stripe sizes under 150K? CPU time is not a problem.

The other parts of the system rest on 3 more Ultra160 9GB 10K RPM
drives on ahc0, at a stripe size of 256K.
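For concreteness, a vinum configuration for the /home/bbsusers volume along these lines would look roughly like the following (drive names, device names, and the volume name are illustrative, not our actual config):

```shell
# Hypothetical sketch of a 3-drive stripe at 256K stripe size.
# Device names (da0..da2) and the volume name are assumptions.
vinum create <<'EOF'
drive d0 device /dev/da0s1e
drive d1 device /dev/da1s1e
drive d2 device /dev/da2s1e
volume bbshome
  plex org striped 256k
    sd length 0 drive d0
    sd length 0 drive d1
    sd length 0 drive d2
EOF
```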

Physical memory is 2.5GB.  We have tried MFS, and it croaks/crashes
at midnight, our peak load time.  We have tried md0, and it croaks
before peak time.
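For reference, when we say "we do MFS" we mean roughly the following (the size and mount point here are illustrative, not our actual setup):

```shell
# Mount a memory filesystem; -s gives the size in 512-byte sectors,
# so 2097152 sectors = 1GB.  Size and mount point are assumptions.
mount_mfs -s 2097152 /dev/da0s1b /tmp
```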

Dual PIII-750 CPUs.

Due to the structure of BBSes, we cannot split the load across
different servers.  We also think that we probably cannot
get more performance out of hardware upgrades that we can afford.
(i.e. please don't tell us to buy a Starfire 4500 :-) We are all volunteer
workers on El Cheapo university budgets.)

We average around 30.0 server load with no noticeable delays
for users.  Peak load is up to 50.0.  Average process count
is around 4000 to 5000.

We have followed Alfred's advice to do sysctl -w vfs.vmioenable=1.
It allows us to survive the peak load a little longer than before.
We are also putting our logs of sockstat, iostat 5, vmstat 5,
netstat 5, dmesg, and uname -a at the following URL.

http://zoo.ee.ntu.edu.tw/~keichii/

*DRUM ROLL*
What do you think we can do to make this server survive the 
  peak load of around 5000 users? :)

* How should we set up our IPFW?

* What should be the optimal newfs and tunefs 
  configurations for our filesystems?

* What should we try as vinum stripe sizes?

* What is possibly the bottleneck that gives us a load of 30.0?
  (since we are neither CPU-bound nor memory-bound)

* Are there any VM tweaks that we can do?

* Anything else we can do?
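To make the newfs/tunefs question concrete: with a ~4K average file size, we imagine the answer looks something like the following, but we have not validated any of it (the flag values are guesses and the device name is illustrative):

```shell
# Smaller block/frag sizes and denser inodes for ~4K average files;
# all values here are guesses we would like sanity-checked.
newfs -b 8192 -f 1024 -i 4096 /dev/vinum/bbshome

# Enable soft updates on the filesystem (while it is unmounted).
tunefs -n enable /dev/vinum/bbshome
```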

Thanks,
Michael
-- 
+-----------------------------------------------------------+
| keichii@iteration.net         | keichii@freebsd.org       |
| http://iteration.net/~keichii | Yes, BSD is a conspiracy. |
+-----------------------------------------------------------+
