Date:      Tue, 04 May 1999 10:09:58 -0500
From:      Andy Angrick <angrick@netdirect.net>
To:        Graeme Tait <graeme@echidna.com>
Cc:        freebsd-isp@freebsd.org
Subject:   Re: Apache Stress testing
Message-ID:  <1.5.4.32.19990504150958.00bfb424@netdirect.net>

At 10:54 AM 5/4/99 -0700, you wrote:
>Andy Angrick wrote:
>> 
>> Does anyone know if there is a program available to stress-test an Apache
>> webserver? Pound it with requests to see how much it can handle and how
>> much of a toll it takes on the server. If there isn't anything like this
>> available, maybe it would be worth developing one if anyone else is
>> interested. It could be set up with maybe 10 HTML pages to choose from,
>> then fork a bunch of processes to retrieve them at random. It could be
>> configured for how many processes to spawn, the length of the test, etc.
>> I could write it, but I would probably need some help.
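
Something like this skeleton is what I had in mind. Just a rough sketch:
the host name and page list are placeholders, and most error handling is
skipped.

/* stress.c - forking HTTP/1.0 fetcher sketch.  Build: cc -o stress stress.c */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <netdb.h>

static const char *host = "www.example.com";                /* placeholder */
static const char *pages[] = { "/", "/a.html", "/b.html" }; /* sample list */
#define NPAGES (sizeof(pages) / sizeof(pages[0]))

/* One GET over a fresh connection; returns bytes read, or -1 on error. */
static int fetch_once(const char *path)
{
    struct addrinfo hints, *res;
    char req[512], buf[4096];
    int s, n, total = 0;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_INET;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, "80", &hints, &res) != 0)
        return -1;
    s = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (s < 0 || connect(s, res->ai_addr, res->ai_addrlen) < 0) {
        if (s >= 0)
            close(s);
        freeaddrinfo(res);
        return -1;
    }
    freeaddrinfo(res);
    snprintf(req, sizeof(req), "GET %s HTTP/1.0\r\nHost: %s\r\n\r\n",
             path, host);
    write(s, req, strlen(req));
    while ((n = read(s, buf, sizeof(buf))) > 0)   /* drain the response */
        total += n;
    close(s);
    return total;
}

int main(int argc, char **argv)
{
    int nproc = (argc > 1) ? atoi(argv[1]) : 10;  /* processes to spawn */
    int secs  = (argc > 2) ? atoi(argv[2]) : 30;  /* length of test     */
    int i;

    for (i = 0; i < nproc; i++) {
        if (fork() == 0) {                        /* child: fetch loop  */
            time_t stop = time(NULL) + secs;
            srandom(getpid());                    /* per-child seed     */
            while (time(NULL) < stop)
                fetch_once(pages[random() % NPAGES]);
            _exit(0);
        }
    }
    while (wait(NULL) > 0)                        /* reap the children  */
        ;
    return 0;
}

Each child just loops fetching random pages until the test length runs
out, so the number of processes and the duration come straight from the
command line.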
>
>
>I would like to have such a tool, but it wouldn't be much use to me unless it 
>realistically simulated a typical Internet server environment.
>
>In particular, most requests on the Internet come from people with relatively 
>slow connections. If a typical connection is open for say 5 seconds to 
>request and transmit data, and you have 100 hits/second, then you will have on 
>the order of 500 connections open (ignoring persistent connections, and 
>connections with delayed closure). If you are on a fast intranet where the 
>same amount of data per connection is transmitted in milliseconds, the number 
>of connections open (and with Apache, the number of server children) will be 
>much less. I guess there are also a bunch of subtleties I don't much 
>understand with TCP/IP networking that are relevant here (e.g., the 
>"FIN_WAIT_2 problem").
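
Quick back-of-envelope on that; the page size and link speeds below are
just guesses to make the point.

/* conns.c - open connections ~ hit rate x connection lifetime */
#include <stdio.h>

int main(void)
{
    double rate  = 100.0;           /* hits per second            */
    double page  = 15.0 * 1024;     /* bytes per response (guess) */
    double links[] = { 3600.0,      /* ~28.8k modem, bytes/sec    */
                       1250000.0 }; /* ~10 Mbit/s intranet        */
    int i;

    for (i = 0; i < 2; i++) {
        double lifetime = page / links[i];   /* transfer time     */
        printf("%8.0f B/s: %5.2f s/conn -> ~%.0f open\n",
               links[i], lifetime, rate * lifetime);
    }
    return 0;
}

Same hit rate either way, but the slow link holds a few hundred
connections open where the intranet holds one or two.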
>
>Another point: if you are fetching the same pages repeatedly in a simulation, 
>the files will be cached in memory, so you won't see any potential disk 
>bottleneck. Whether a real server could use memory caching to significantly 
>reduce disk accesses would depend on the amount of RAM available for caching, 
>the total size of the web files, access patterns, etc.



Agreed. It would have to have some way of simulating longer connection
times, like coming from a 28.8 modem or whatever. Maybe even a randomly
selected "stay online" time. As far as the cache goes, there would need to
be a list of pages to fetch randomly. Maybe it could first crawl the site to
get a listing of as many pages as it can, then randomly try to fetch all of
them.
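
For the modem simulation, the read loop in the skeleton above could be
throttled, something like this (the chunk size and delay are rough
numbers for a 28.8 modem, and drain_slowly is just a name I made up):

/* Drop-in replacement for the response-draining loop in fetch_once():
 * read in small chunks and sleep between them, capping the drain rate
 * at roughly 3600 bytes/sec, about what a 28.8 modem delivers.
 */
#include <unistd.h>

static int drain_slowly(int s)
{
    char buf[512];                /* small chunks, like a slow link */
    int n, total = 0;

    while ((n = read(s, buf, sizeof(buf))) > 0) {
        total += n;
        usleep(140000);           /* 512 B / 0.14 s ~ 3650 B/s      */
    }
    return total;
}

A random usleep() interval per child would give the randomly selected
stay-online time, and the crawl could just dump its URL list to a file
for the children to pick from.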

-Andy


>-- 
>Graeme Tait - Echidna