Date:      Mon, 11 Mar 1996 08:56:26 +0100 (MET)
From:      Luigi Rizzo <luigi@labinfo.iet.unipi.it>
To:        jkh@time.cdrom.com (Jordan K. Hubbard)
Cc:        hackers@FreeBSD.ORG
Subject:   Re: Another wcarchive report..
Message-ID:  <199603110756.IAA17954@labinfo.iet.unipi.it>
In-Reply-To: <1529.826498452@time.cdrom.com> from "Jordan K. Hubbard" at Mar 10, 96 02:53:53 pm

> I thought people would be amused to see the load at the >1000 user
> scenario:
> 
...
> Service class anonymous            - 1012 users (1250 maximum)
> 
> jkh@wcarchive-> top
> load averages: 12.18,  9.51,  7.99                                    14:48:20
> 1128 processes:9 running, 1118 sleeping, 1 zombie
> Cpu states: 13.5% user,  0.0% nice, 27.3% system, 19.9% interrupt, 39.4% idle
> Memory: 307M Active, 2356K Inact, 50M Wired, 129M Cache, 676K Free
> Swap:   819M Total, 804M Free, 2% Inuse  
> 
>   PID USERNAME PRI NICE   SIZE   RES STATE   TIME   WCPU    CPU COMMAND
>    82 root       2    0   180K  268K sleep 300:12  2.71%  2.71% syslogd
> 23487 root       2    0   464K  320K sleep   0:15  1.83%  1.83% ls
> 22605 dave       2    0  7324K 7636K sleep   0:31  1.18%  1.18% perl
> 28761 jkh       85    0  1464K 1596K run     0:00  2.19%  0.65% top
> 28176 root       2    0   700K  460K sleep   0:00  0.54%  0.53% ftpd
> 28775 root       2    0   592K  448K sleep   0:00  3.83%  0.53% ls
> 28772 root       2    0   700K  440K sleep   0:00  2.31%  0.42% ftpd
> 28773 root       2    0   680K  428K sleep   0:00  2.31%  0.42% ftpd
> 28784 root       2    0   680K  424K sleep   0:00  8.59%  0.42% ftpd
...

wcarchive (and similarly busy systems) is undoubtedly an interesting
patient to study.

I am wondering: on such a busy system I would expect performance to be
limited mostly by disk seeks. Could the relatively low load average
actually be bounded by the number of disks -- i.e. 1-2 processes per
disk are runnable while the others are just waiting for their I/O
requests to complete? That would also explain the "39.4% idle".
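
As a back-of-envelope check (all figures here are my assumptions about
a typical mid-90s SCSI disk, not measurements from wcarchive): at
roughly 80 random seeks per second and one 8 KB transfer per seek, a
single spindle sustains only about 640 KB/s of seek-bound traffic, so
a handful of spindles saturates long before the CPU does:

    /* Back-of-envelope: seek-bound throughput per spindle.
     * All figures are assumptions, not wcarchive measurements. */
    #include <stdio.h>

    int main(void)
    {
        double seeks = 80.0;   /* assumed random seeks/sec per disk */
        double blksz = 8192.0; /* assumed bytes moved per seek */
        int ndisks = 6;        /* assumed spindle count */

        printf("per disk: %.0f KB/s, %d disks: %.0f KB/s\n",
               seeks * blksz / 1024, ndisks,
               ndisks * seeks * blksz / 1024);
        return 0;
    }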

It would be interesting to know the effective aggregate disk bandwidth
(not the peak value, that would just make us green with envy :) and
see whether that is the actual bottleneck.
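
A minimal sketch of how the seek-bound rate of one spindle could be
measured (the block size and read count below are placeholders I made
up; run it on a file much larger than the buffer cache):

    /* seektest.c -- estimate effective random-read bandwidth by
     * reading 8 KB blocks at random offsets of a large file.
     * Build with: cc -o seektest seektest.c */
    #include <sys/types.h>
    #include <sys/time.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define BLKSZ  8192
    #define NREADS 1000

    int main(int argc, char **argv)
    {
        char buf[BLKSZ];
        struct timeval t0, t1;
        off_t filesz;
        double secs;
        int fd, i;

        if (argc != 2 || (fd = open(argv[1], O_RDONLY)) < 0) {
            fprintf(stderr, "usage: %s bigfile\n", argv[0]);
            exit(1);
        }
        filesz = lseek(fd, 0, SEEK_END);

        gettimeofday(&t0, NULL);
        for (i = 0; i < NREADS; i++) {
            /* block-aligned random offset: each read should cost
               one seek if the file does not fit in the cache */
            off_t off = (random() % (filesz / BLKSZ)) * BLKSZ;
            lseek(fd, off, SEEK_SET);
            read(fd, buf, BLKSZ);
        }
        gettimeofday(&t1, NULL);

        secs = (t1.tv_sec - t0.tv_sec) +
               (t1.tv_usec - t0.tv_usec) / 1e6;
        printf("%d reads in %.2f s: %.0f KB/s, %.0f seeks/s\n",
               NREADS, secs,
               NREADS * (BLKSZ / 1024.0) / secs, NREADS / secs);
        close(fd);
        return 0;
    }

Summing that figure over the spindles (while the real load runs) and
comparing it with the FTP traffic actually served would show how close
the disks are to their seek limit.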

Also, is there any special tuning of parameters (say, reserving some
memory for caching directories, modifying ftpd to do some readahead
depending on the actual speed of the connection, etc.)?
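
On the readahead idea, here is a sketch of the kind of hint an ftpd
could issue (madvise(MADV_WILLNEED) is a standard BSD call, but
scaling the window to the client's connection speed is my assumption,
not something the stock ftpd does):

    /* Map the file and hint the VM system that the next window of
     * data will be needed soon, so the kernel can fetch it while
     * the previous window drains into the (slow) socket. */
    #include <sys/types.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        struct stat st;
        char *base;
        int fd;
        /* hypothetical policy: faster link -> larger window */
        size_t window = 64 * 1024;

        if (argc != 2 || (fd = open(argv[1], O_RDONLY)) < 0) {
            fprintf(stderr, "usage: %s file\n", argv[0]);
            exit(1);
        }
        fstat(fd, &st);
        base = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (base == MAP_FAILED) {
            perror("mmap");
            exit(1);
        }
        if (window > (size_t)st.st_size)
            window = st.st_size;
        /* hint only: the kernel may start the reads asynchronously */
        madvise(base, window, MADV_WILLNEED);

        /* ... the transfer loop would go here, advancing the
           hinted window as the socket accepts more data ... */
        munmap(base, st.st_size);
        close(fd);
        return 0;
    }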

	Luigi
====================================================================
Luigi Rizzo                     Dip. di Ingegneria dell'Informazione
email: luigi@iet.unipi.it       Universita' di Pisa
tel: +39-50-568533              via Diotisalvi 2, 56126 PISA (Italy)
fax: +39-50-568522              http://www.iet.unipi.it/~luigi/
====================================================================


