From owner-freebsd-hackers Wed Feb 15 19:04:17 1995
Return-Path: hackers-owner
Received: (from root@localhost) by freefall.cdrom.com (8.6.9/8.6.6)
          id TAA00317 for hackers-outgoing; Wed, 15 Feb 1995 19:04:17 -0800
Received: from p5.spnet.com (elh.com [204.156.130.1]) by freefall.cdrom.com
          (8.6.9/8.6.6) with ESMTP id TAA00304 for ;
          Wed, 15 Feb 1995 19:04:13 -0800
Received: from localhost (localhost [127.0.0.1]) by p5.spnet.com (8.6.9/8.6.6)
          with SMTP id TAA05841; Wed, 15 Feb 1995 19:02:13 GMT
Message-Id: <199502151902.TAA05841@p5.spnet.com>
X-Authentication-Warning: p5.spnet.com: Host localhost didn't use HELO protocol
To: davidg@Root.COM, hackers@FreeBSD.org
cc: elh@p5.spnet.com
Subject: Re: 950210-SNAP, VM Free
Date: Wed, 15 Feb 1995 19:02:13 +0000
From: Ed Hudson
Sender: hackers-owner@FreeBSD.org
Precedence: bulk

> From: David Greenman
>
>    The thing to compare this to would be a 2.0 system. I think you'll
> find that it is always better. I believe the non-optimal performance
> you're seeing is caused by our algorithm for deciding how much file data
> to cache. It tries very hard (too hard) not to thrash the VM system when
> large amounts of file I/O are done. We will likely change the balance in
> the future, but at the moment this is very difficult to do without
> unusual side effects. One thing we can do right away, however, is
> increase the minimum size of the cache - it currently can shrink to less
> than 10% of memory (and only half of this is for file data - the other
> half is for meta/directory data). This should probably be increased to
> 15% or 20%. Try out the attached patch, which changes it to 20%.
>
> -DG
>
> Index: machdep.c
> ===================================================================
> RCS file: /home/ncvs/src/sys/i386/i386/machdep.c,v
> retrieving revision 1.110

After applying your patch and rebooting with the new kernel, I still see
the same problem, though with an intermittent improvement:

medium compile, *before, with patch:
        112.4u 24.8s 2:52.70 79.5% 872+1055k  325+1532io 27pf+11w

medium compile, after, with patch:
    (a) 112.1u 24.2s 3:10.10 71.7% 869+1053k 1614+1528io  0pf+0w
    (b) 112.9u 25.2s 3:55.78 58.6% 865+1048k 3930+1557io  7pf+9w

medium compile, after, without patch:
    (c) 113.0u 25.9s 4:07.75 56.0% 862+1040k 5571+1564io 14pf+0w

small compile, *before, with patch:
         31.2u  6.4s 0:44.95 83.7% 919+888k   143+445io  68pf+0w

small compile, after, with patch:
         31.3u  6.6s 1:04.63 58.7% 917+885k  1214+478io   2pf+0w
         30.9u  6.8s 1:01.46 61.5% 914+887k   974+468io   0pf+0w

small compile, after, without patch:
         31.4u  6.6s 1:00.21 63.1% 909+884k   954+477io   0pf+0w

(Note how the block-input counts - the figure before "io" - jump from a
few hundred in the 'before' runs to several thousand in the 'after'
runs: the compiles are going back to disk for data that ought to still
be cached.)

If I may speculate: the big compile that creates the state called
'after' above runs ld over a lot of half-megabyte files that eat up the
cache, and the cache isn't good at throwing them out again. When the
almost miraculous compile (a) occurred, the whole system suddenly sped
up - almost as if something had at last been reclaimed. A second run
produced (b), which has essentially the same performance as the
pre-patch kernel (c).

Also, I'm not certain that the worst case here is any worse than a 2.0R
kernel; I think I can try that out.

Are there any commands I can use to examine the state of the disk/buffer
cache?

thanks again,

        -elh
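
For concreteness, here is a minimal sketch of the 20% floor arithmetic
DG describes above. It is illustrative only - the actual machdep.c diff
is truncated in the quote, and the names used here (physmem_bytes,
cache_floor_bytes) are hypothetical rather than the kernel's own:

/*
 * Sketch of the buffer-cache floor described in the quoted message:
 * the cache may shrink under memory pressure, but never below a fixed
 * fraction of physical memory.  The patch raises that floor from ~10%
 * to 20%, with half of it for file data and the other half for
 * meta/directory data.  Hypothetical names throughout.
 */
#include <stdio.h>

static unsigned long
cache_floor_bytes(unsigned long physmem_bytes)
{
        return physmem_bytes / 5;       /* 20%; old floor was ~physmem/10 */
}

int
main(void)
{
        unsigned long physmem = 16UL * 1024 * 1024;     /* a typical 16 MB box */
        unsigned long floor = cache_floor_bytes(physmem);

        printf("cache floor:    %lu KB\n", floor / 1024);
        printf("file data half: %lu KB\n", floor / 2 / 1024);
        printf("meta/dir half:  %lu KB\n", floor / 2 / 1024);
        return 0;
}

On a 16 MB machine, for example, the floor would move from roughly
1.6 MB to about 3.2 MB, of which about 1.6 MB could hold file data.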