Date:      Fri, 16 Oct 1998 00:46:29 -0400 (EDT)
From:      "John W. DeBoskey" <jwd@unx.sas.com>
To:        mike@smith.net.au (Mike Smith)
Cc:        freebsd-current@FreeBSD.ORG
Subject:   Re: -current NFS problem
Message-ID:  <199810160446.AAA23559@bb01f39.unx.sas.com>
In-Reply-To: From Mike Smith at "Oct 14, 98 09:04:20 am"

> > Perhaps this could be the problem with NFS "hanging" certain people all
> > the time? (not the pine thing) The system spending way too much time
> > inside the kernel transmitting NFS packets....
> 
> No.  Lack of ACCESS caching makes us slow and eats the network (because 
> we are very good at generating/sending/receiving them).
> 
> If there's someone out there that wants to work with the very best NFS 
> people in the business to sort out our problems, please let me know.  
> NetApp are keen to see our issues resolved (it will mean less angst 
> for them in the long run, as they have many FreeBSD-using customers).
> 
> Right now, we are accumulating a bad NFS reputation. 8(
> 
... deleted for brevity...

Hi,

   I have 50 266 MHz PCs clustered around 3 NetApp F630 filers, used
as compile servers for an in-house distributed make facility.

   85% of the traffic to the NetApps 'was' due to ACCESS calls. I
do not recommend it for public consumption, but the following patch
reduces the ACCESS overhead to less than 20%.

diff nfs_vnops.c /nfs/locutus/usr/src/sys/nfs
273c273
<       int v3 = NFS_ISV3(vp);
---
>       int v3 = 0; /* NFS_ISV3(vp); */

   i.e., tell the nfs_access() function that we're always version 2,
even though we're actually connected via version 3.  The side effect of
this patch is that file access errors are delayed, but this is not a
real problem in our environment.

FWIW: 28 PCs connected to 1 NetApp F630 delivered the following
      performance (not a controlled test, just an overnight job):

>>>STAT: 3149 .o     : avg time 00:02 : avg speed    520/minute :   0 failed.

      i.e., about 6 minutes to compile 3149 files.

      Our usage of the filers is sequential reads of .c/.h files,
followed by a sequential write of the .o file.  We do not do mmap'd or
random-access I/O, and thus have seen none of the problems reported
on -current.

Just my 0.02
John

ps: BTW, I cannot amd-mount the filers due to a problem with GARBAGE_ARGS
    being returned from clnt_call() in amfs_host.c's fetch_fhandle(),
    regardless of whether I force a V2 or V3 mount protocol.  This is a
    'random' error which occurs on approximately 1 out of 10 mounts.

    We wrote a small shell script to mount/umount the filers and
    interfaced it with amd via the type:=program statement (still being
    tested).  So far, this has yet to fail...
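    For anyone wanting to try the same workaround, an amd type:=program
    map entry looks roughly like this -- the script names and paths are
    made up for illustration, and the mount/unmount strings pass the
    program name again as its argv[0], per amd's convention:

```
# Hypothetical amd map entry; filer name, scripts, and paths are examples.
filer  type:=program;fs:=/n/filer;\
       mount:="/usr/local/sbin/filer-mount filer-mount ${fs}";\
       unmount:="/usr/local/sbin/filer-umount filer-umount ${fs}"
```

    The wrapper scripts just run mount_nfs/umount directly, which is how
    this sidesteps the clnt_call() path inside amd itself.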



