Date:      Fri, 31 Mar 2000 11:58:11 -0500 (EST)
From:      Andrew Gallatin <gallatin@cs.duke.edu>
To:        nsayer@kfu.com
Cc:        freebsd-emulation@FreeBSD.ORG, dillon@FreeBSD.ORG
Subject:   VMware locks FreeBSD 4.0 solid
Message-ID:  <14564.53252.962047.551231@grasshopper.cs.duke.edu>
In-Reply-To: <38E3FB31.3DD4D170@sftw.com>
References:  <38E3FB31.3DD4D170@sftw.com>


Nick Sayer writes:
 > 
 > Has anyone else seen horrible, dramatic problems with NT4 as a guest
 > under a FreeBSD host using vmware v2? I would sort of prefer NT, since
 > the few times a difference between the two matters, having NT is
 > preferable.
 > 
 > Oh, and it's NT4 workstation, SP 6a.

I don't think it's specific to NT.  Rather, I think vmware will lock
FreeBSD solid if the FreeBSD host is under serious memory pressure.
I'm hoping that Matt might be able to shed some light on it for us.

I'm running VMware 2.0 & FreeBSD 4.0-RELEASE.  The host is a 450MHz
PIII with 512MB of RAM; vmware is configured to use 64MB.

If I run a synthetic program to apply memory pressure to the system, I
can lock the machine solid within minutes.
(app is ftp://ftp.cs.duke.edu/pub/gallatin/misc/hunt.c) 

Breaking into the debugger, I see this:

db> ps
  pid   proc     addr    uid  ppid  pgrp  flag stat wmesg   wchan   cmd
  359 d45f7520 d6dd1000 1387   321   359 004006  3  vmwait c02ce678 ahunt.x86
  321 d45f7380 d6dd5000 1387   320   321 2004082  3  opause d6dd5108 tcsh
  320 d45f71e0 d6dd8000    0   142   320 084080  2                  rlogind
  316 d45f76c0 d6dcd000 1387   313   316 000186  3  piperd d6d537a0 vmware
  315 d45f7040 d6ddb000 1387   313   315 000106  3   inode c1880c00 vmware
  314 d45f7a00 d6dc6000 1387   313   314 000186  3  piperd d6d53de0 vmware
  313 d45f7d40 d6dbf000 1387   216   313 004186  3  piperd d6d53c00 vmware
  256 d45f7860 d6dc9000 1387   255   256 004082  3   ttyin c188ca28 tcsh
  255 d45f7ba0 d6dc1000    0   142   255 004080  3  select c02cc1ac rlogind
  235 d45f8220 d6da2000 1387   234   235 004106  3  vmwait c02ce678 systat
  234 d45f7ee0 d6db6000 1387   216   234 004106  3  vmwait c02ce678 xterm
  216 d45f9c20 d6d6d000 1387   215   216 2004082  3  opause d6d6d108 tcsh
  215 d45f83c0 d6d9f000    0   142   215 004080  3  select c02cc1ac rlogind
  208 d45fa5e0 d6d4b000    0     1   208 084082  2                  getty
  203 d45f8080 d6da5000    0     1   203 000080  3  select c02cc1ac sshd1
  144 d45f8560 d6d9b000    0     1   144 080480  2                  cron
  142 d45f8d80 d6d8a000    0     1   142 000080  3  select c02cc1ac inetd
  125 d45f8700 d6d96000    0     1   120 000080  3  nfsidl c02ce4ec nfsiod
  124 d45f88a0 d6d93000    0     1   120 000080  3  nfsidl c02ce4e8 nfsiod
  123 d45f8a40 d6d90000    0     1   120 000080  3  nfsidl c02ce4e4 nfsiod
  122 d45f8be0 d6d8d000    0     1   120 000080  3  nfsidl c02ce4e0 nfsiod
  118 d45f95a0 d6d7a000    0     1   118 000080  3  select c02cc1ac rpc.statd
  117 d45f8f20 d6d86000    0     1   112 000080  3    nfsd c188ce00 nfsd
  116 d45f90c0 d6d83000    0     1   112 000080  3    nfsd c1874000 nfsd
  115 d45f9260 d6d80000    0     1   112 000080  3    nfsd c1874200 nfsd
  114 d45f9400 d6d7d000    0     1   112 000080  3    nfsd c1874400 nfsd
  110 d45f9740 d6d77000    0     1   110 000080  3  select c02cc1ac mountd
  104 d45f98e0 d6d73000    0     1   104 080480  2                  ypbind
  102 d45f9a80 d6d70000    1     1   102 000180  3  select c02cc1ac portmap
   99 d45f9dc0 d6d6a000    0     1    99 000004  3  vmwait c02ce678 ntpd
   93 d45f9f60 d6d5c000    0     1    93 080080  2                  syslogd
   29 d45fa2a0 d6d55000    0     1    29 2000080  3   pause d6d55108 adjkerntz
   22 d45fa100 d6d58000    0     1    22 000004  3  vmwait c02ce678 mount_mfs
    5 d45fa780 d4607000    0     0     0 000204  3  syncer c02cc148 syncer
    4 d45fa920 d4605000    0     0     0 100204  3  psleep c02b6cb8 bufdaemon
    3 d45faac0 d4603000    0     0     0 000204  3  psleep c02c24e0 vmdaemon
    2 d45fac60 d4601000    0     0     0 100204  3   biord cbf00540 pagedaemon
    1 d45fae00 d45ff000    0     0     1 004284  3    wait d45fae00 init
    0 c02cb540 c0333000    0     0     0 000204  3  vmwait c02ce678 swapper

db> show page
cnt.v_free_count: 345
cnt.v_cache_count: 303
cnt.v_inactive_count: 7286
cnt.v_active_count: 100686
cnt.v_wire_count: 19981
cnt.v_free_reserved: 345
cnt.v_free_min: 986
cnt.v_free_target: 3303
cnt.v_cache_min: 3303
cnt.v_inactive_target: 4954
db> call dumpsys()


From the dump, I see that the kernel stack of the vmware thread
blocked on "inode" (pid 315) looks like:

(kgdb) proc 0xd45f7040
(kgdb) bt
#0  mi_switch () at ../../kern/kern_synch.c:859
#1  0xc0156aa9 in tsleep (ident=0xc1880c00, priority=8, 
    wmesg=0xc02669a2 "inode", timo=0) at ../../kern/kern_synch.c:468
#2  0xc014ef44 in acquire (lkp=0xc1880c00, extflags=16777280, wanted=1792)
    at ../../kern/kern_lock.c:147
#3  0xc014eff4 in lockmgr (lkp=0xc1880c00, flags=16973889, 
    interlkp=0xd6db3bcc, p=0xd45f7040) at ../../kern/kern_lock.c:227
#4  0xc017a7e4 in vop_stdlock (ap=0xd6ddce40) at ../../kern/vfs_default.c:231
#5  0xc01f4c29 in ufs_vnoperate (ap=0xd6ddce40)
    at ../../ufs/ufs/ufs_vnops.c:2283
#6  0xc01844ab in vn_lock (vp=0xd6db3b60, flags=16973889, p=0xd45f7040)
    at vnode_if.h:840
#7  0xc017d407 in vget (vp=0xd6db3b60, flags=16908353, p=0xd45f7040)
    at ../../kern/vfs_subr.c:1390
#8  0xc0202fac in vnode_pager_lock (object=0xd6daf780)
    at ../../vm/vnode_pager.c:978
#9  0xc01f6b03 in vm_fault (map=0xd45fc5c0, vaddr=713601024, 
    fault_type=3 '\003', fault_flags=8) at ../../vm/vm_fault.c:253
#10 0xc023cc42 in trap_pfault (frame=0xd6ddcfa8, usermode=1, eva=713601024)
    at ../../i386/i386/trap.c:797
#11 0xc023c737 in trap (frame={tf_fs = 137297967, tf_es = 47, 
      tf_ds = -1078001617, tf_edi = 713601024, tf_esi = 137406384, 
      tf_ebp = -1077941076, tf_isp = -690106412, tf_ebx = 8, 
      tf_edx = 713601024, tf_ecx = 1024, tf_eax = 4096, tf_trapno = 12, 
      tf_err = 6, tf_eip = 675258647, tf_cs = 31, tf_eflags = 78342, 
      tf_esp = -1077941084, tf_ss = 47}) at ../../i386/i386/trap.c:346

(kgdb) frame 2
#2  0xc014ef44 in acquire (lkp=0xc1880c00, extflags=16777280, wanted=1792)
    at ../../kern/kern_lock.c:147
147     in ../../kern/kern_lock.c
(kgdb) p *lkp
$2 = {
  lk_interlock = {
    lock_data = 0
  }, 
  lk_flags = 2098240, 
  lk_sharecount = 0, 
  lk_waitcount = 1, 
  lk_exclusivecount = 1, 
  lk_prio = 8, 
  lk_wmesg = 0xc02669a2 "inode", 
  lk_timo = 0, 
  lk_lockholder = 2
}


And the pagedaemon looks like: 

(kgdb) proc 0xd45fac60
(kgdb) bt
#0  mi_switch () at ../../kern/kern_synch.c:859
#1  0xc0156aa9 in tsleep (ident=0xcbf00540, priority=16, 
    wmesg=0xc025b589 "biord", timo=0) at ../../kern/kern_synch.c:468
#2  0xc0177873 in biowait (bp=0xcbf00540) at ../../kern/vfs_bio.c:2654
#3  0xc017510b in bread (vp=0xd6db3b60, blkno=808, size=8192, cred=0x0, 
    bpp=0xd4602cbc) at ../../kern/vfs_bio.c:515
#4  0xc01e4f15 in ffs_balloc (ap=0xd4602d7c) at ../../ufs/ffs/ffs_balloc.c:327
#5  0xc01edbc1 in ffs_write (ap=0xd4602dcc) at vnode_if.h:1035
#6  0xc0202efa in vnode_pager_generic_putpages (vp=0xd6db3b60, m=0xd4602edc, 
    bytecount=8192, flags=0, rtvals=0xd4602e70) at vnode_if.h:363
#7  0xc01ee1ce in ffs_putpages (ap=0xd4602e34)
    at ../../ufs/ufs/ufs_readwrite.c:677
#8  0xc0202d56 in vnode_pager_putpages (object=0xd6daf780, m=0xd4602edc, 
    count=2, sync=0, rtvals=0xd4602e70) at vnode_if.h:1126
#9  0xc01ffeba in vm_pageout_flush (mc=0xd4602edc, count=2, flags=0)
    at ../../vm/vm_pager.h:145
#10 0xc01ffe1d in vm_pageout_clean (m=0xc0ae7ff0) at ../../vm/vm_pageout.c:338
#11 0xc020073e in vm_pageout_scan () at ../../vm/vm_pageout.c:914
#12 0xc0201034 in vm_pageout () at ../../vm/vm_pageout.c:1350
#13 0xc0231740 in fork_trampoline ()

Hmm.. why is it stuck there?  Note that lk_lockholder in the inode
lock above is pid 2 -- the pagedaemon itself.  So vmware is faulting
on a page of that vnode and waiting for a lock the pagedaemon holds,
while the pagedaemon is asleep in biord waiting for a read to finish.
If that read never completes, everything queued up behind the inode
lock stays wedged.

------------------------------------------------------------------------------
Andrew Gallatin, Sr Systems Programmer	http://www.cs.duke.edu/~gallatin
Duke University				Email: gallatin@cs.duke.edu
Department of Computer Science		Phone: (919) 660-6590

