Date:      Fri, 30 Jan 1998 22:14:02 -0500 (EST)
From:      "John W. DeBoskey" <jwd@unx.sas.com>
To:        freebsd-current@FreeBSD.ORG
Cc:        jwd@unx.sas.com (John W. DeBoskey)
Subject:   NFS v3 3.0-Current performance questions
Message-ID:  <199801310314.AA16700@iluvatar.unx.sas.com>

Hello,

   I have a series of NFS v3 performance-related questions that I
would like to present.

   The setup: six 266MHz Pentium II machines with 128MB of RAM each,
              connected to a Network Appliance file server via NFS v3.
              Running 3.0-980128-SNAP.

   Problem: This system runs a distributed make process which accesses
            (at minimum) 2564 .h files spread across 20+ directories
            located on the file server.

            I would like to cache as much as possible (ideally all) of
            the directory information for these files, as well as the
            file contents themselves.


   Questions:

      Is it possible to tune the amount of directory information cached
   by the NFS v3 client?

      Is it possible to tune the amount of file data block contents that
   are cached? From nfsstat, note the number of BioR hits and misses
   from simply running a job which continuously cats the files (of
   course, I may be misinterpreting the output). A remount sketch
   follows the cache statistics below.

Cache Info:
Attr Hits    Misses Lkup Hits    Misses BioR Hits    Misses BioW Hits    Misses
    69997      9172     43654      8700     10006     12674         0         0
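
   For reference, the mount-side knobs I know of are the mount_nfs(8)
   transfer-size and read-ahead flags. A minimal sketch (flag names per
   mount_nfs(8); the values, the filer:/vol/src export, and the /na
   mount point are placeholders to be verified against 3.0-SNAP):

   # Remount the filer export with NFSv3, larger read/write transfer
   # sizes, more read-ahead (-a), and bigger readdir requests (-I).
   umount /na
   mount_nfs -3 -r 32768 -w 32768 -a 4 -I 8192 filer:/vol/src /na

   The -I readdirsize bump in particular should help the
   directory-heavy part of this workload, if I understand the knobs
   correctly.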

   FYI: the time to cat all the files to /dev/null, over 3 iterations:

   6.78s real    0.05s user    1.42s system 
   6.82s real    0.03s user    1.41s system 
   6.78s real    0.01s user    1.46s system 

   ( I'd like to cut this by at least 50%  :-)
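
   A loop along these lines reproduces the measurement (a hedged
   reconstruction; the glob stands in for the real 2564 headers in
   their 20+ directories):

   #!/bin/sh
   # Time three passes of cat'ing every header on the NFS mount to
   # /dev/null; /na/src/*/*.h is a placeholder path.
   for i in 1 2 3; do
       /usr/bin/time sh -c 'cat /na/src/*/*.h > /dev/null'
   done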

      Using the default (NBUF=0) setting causes nbuf to acquire the
   value 3078 on each system. I have set NBUF=8196 with no real
   performance gain, so I don't think this is the right direction.
   Comments? I believe I can get a big gain if I can simply reduce the
   number of BioR misses. Again, comments?
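
   For concreteness, the NBUF change was applied via the usual 3.0
   kernel rebuild procedure (the config name is a placeholder, and the
   option quoting is from memory; check LINT for the exact syntax):

   #!/bin/sh
   # Add   options "NBUF=8196"   to the kernel config, then rebuild.
   cd /usr/src/sys/i386/conf
   config MYKERNEL                # MYKERNEL is a placeholder name
   cd ../../compile/MYKERNEL
   make depend && make && make install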

      From a performance testing standpoint, it would be nice if we could
   add a 'clear the counters' option to nfsstat so that root could reset
   the stat numbers to zero. Comments?
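
   Until then, a workable stand-in is to snapshot the counters around a
   run and diff them (a sketch; the make invocation is a placeholder
   for the real distributed build):

   #!/bin/sh
   # Poor man's counter reset: capture nfsstat output before and after
   # the workload and compare the two snapshots.
   nfsstat > /tmp/nfs.before
   make all                       # placeholder workload
   nfsstat > /tmp/nfs.after
   diff /tmp/nfs.before /tmp/nfs.after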
      

   I do not believe I need to add memory to these boxes either. Note the
   number of free VM pages in the following vmstat output.

$ vmstat -s
   127855 cpu context switches
   877149 device interrupts
    52881 software interrupts
    11625 traps
    91484 system calls
        0 swap pager pageins
        0 swap pager pages paged in
        0 swap pager pageouts
        0 swap pager pages paged out
      243 vnode pager pageins
     1165 vnode pager pages paged in
        0 vnode pager pageouts
        0 vnode pager pages paged out
        0 page daemon wakeups
        0 pages examined by the page daemon
        0 pages reactivated
     4287 copy-on-write faults
     3173 zero fill pages zeroed
        5 intransit blocking page faults
    12564 total VM faults taken
    20366 pages freed
        0 pages freed by daemon
     5237 pages freed by exiting processes
      296 pages active
     1657 pages inactive
     1243 pages in VM cache
     3427 pages wired down
    25252 pages free
     4096 bytes per page
    79741 total name lookups
          cache hits (81% pos + 0% neg) system 0% per-directory
          deletions 0%, falsehits 0%, toolong 0%
$


   Any and all comments are welcome.

Thanks,
John

-- 
jwd@unx.sas.com       (w) John W. De Boskey          (919) 677-8000 x6915


