From owner-freebsd-isp Tue May 4 8:10:13 1999
Delivered-To: freebsd-isp@freebsd.org
Received: from mothership.hostresource.com (unknown [216.37.30.11])
	by hub.freebsd.org (Postfix) with ESMTP id 97FCB1562F
	for ; Tue, 4 May 1999 08:09:49 -0700 (PDT)
	(envelope-from angrick@netdirect.net)
Received: from fdc7.fdcredit.com ([216.37.30.62])
	by mothership.hostresource.com (8.8.8/8.8.8) with SMTP id KAA22602;
	Tue, 4 May 1999 10:08:25 -0500 (EST)
	(envelope-from angrick@netdirect.net)
Message-Id: <1.5.4.32.19990504150958.00bfb424@netdirect.net>
X-Sender: angrick@netdirect.net
X-Mailer: Windows Eudora Light Version 1.5.4 (32)
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Date: Tue, 04 May 1999 10:09:58 -0500
To: Graeme Tait
From: Andy Angrick
Subject: Re: Apache Stress testing
Cc: freebsd-isp@freebsd.org
Sender: owner-freebsd-isp@FreeBSD.ORG
Precedence: bulk
X-Loop: FreeBSD.org

At 10:54 AM 5/4/99 -0700, you wrote:
>Andy Angrick wrote:
>>
>> Does anyone know if there is a program available to stress test an Apache
>> webserver? Pound it with requests to see how much it can handle and how
>> much of a toll it takes on the server? If there isn't anything like this
>> available, maybe it would be worth developing, if anyone else would be
>> interested in it. It could be set up so that there are maybe 10 HTML pages
>> to choose from at random, then fork a bunch of processes to randomly
>> retrieve those pages. It could be configurable as to how many processes to
>> spawn, the length of the test, etc. I could write it, but I would probably
>> need some help.
>
>
>I would like to have such a tool, but it wouldn't be much use to me unless it
>realistically simulated a typical Internet server environment.
>
>In particular, most requests on the Internet come from people with
>relatively slow connections.
>If a typical connection is open for, say, 5 seconds to request and transmit
>data, and you have 100 hits/second, then you will have on the order of 500
>connections open (ignoring persistent connections, and connections with
>delayed closure). If you are on a fast intranet where the same amount of
>data per connection is transmitted in milliseconds, the number of
>connections open (and with Apache, the number of server children) will be
>much smaller. I guess there are also a bunch of subtleties I don't much
>understand with TCP/IP networking that are relevant here (e.g., the
>"FIN_WAIT_2 problem").
>
>Another point: if you are fetching the same pages repeatedly in a
>simulation, the files will be cached in memory, so you won't be seeing any
>potential disk bottleneck. Whether a real server would be able to use
>memory caching to significantly reduce disk accesses would depend on the
>amount of RAM available for caching, the total size of the web files,
>the access patterns, etc.

Agreed. It would have to have some way of simulating longer connection
times, like a client coming in from a 28.8 modem or whatever, maybe even
with a randomly selected stay-online time. As far as the cache goes, there
would need to be a list of pages to fetch at random. Maybe it could first
crawl the site to get a listing of as many pages as it can, then randomly
try to fetch all of them.

-Andy

>
>
>--
>Graeme Tait - Echidna
>
>
>

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-isp" in the body of the message
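[Editor's note: the tool sketched in this thread could look something like the
following. This is a modern Python illustration, not code from the thread; the
page list, payload size, worker count, throttle rate, and the toy local server
are all invented for the example. Forked worker processes fetch random pages,
and each read is throttled to roughly 28.8k-modem speed so connections stay
open far longer than they would on a fast intranet, as Graeme describes.]

```python
import multiprocessing
import random
import socket
import threading
import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

PAGES = ["/index.html", "/page1.html", "/page2.html"]  # invented page list
MODEM_BPS = 28_800 // 8  # ~3600 bytes/sec, a 28.8k modem's rough throughput

def fetch_slowly(host, port, path):
    """GET one page, reading the reply in modem-sized chunks with pauses,
    so the connection stays open much longer than a fast LAN fetch would."""
    reply = b""
    s = socket.create_connection((host, port), timeout=30)
    s.sendall(f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n".encode())
    while True:
        chunk = s.recv(MODEM_BPS)  # at most one "second" of modem data...
        if not chunk:
            break
        reply += chunk
        time.sleep(0.05)  # ...then pause (use 1.0 for a true 28.8k pace)
    s.close()
    return reply

def worker(host, port, duration, seed):
    """One simulated slow client: fetch random pages until time runs out."""
    rng = random.Random(seed)
    deadline = time.time() + duration
    while time.time() < deadline:
        fetch_slowly(host, port, rng.choice(PAGES))

# Toy local server standing in for the Apache box under test.
class _Page(BaseHTTPRequestHandler):
    def do_GET(self):
        payload = b"x" * 2048
        self.send_response(200)
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # silence per-request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), _Page)
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = server.server_address

if __name__ == "__main__":
    # Fork a handful of slow clients and let them pound away briefly.
    clients = [multiprocessing.Process(target=worker, args=(host, port, 1.0, i))
               for i in range(4)]
    for c in clients:
        c.start()
    for c in clients:
        c.join()
```

A real run would point `host`/`port` at the server being tested and raise the
worker count and duration; a crawl step to build `PAGES` from the live site,
as Andy suggests, would avoid the everything-in-cache problem.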