Date:      Fri, 3 Jul 2009 10:24:39 -0300
From:      c0re dumped <ez.c0re@gmail.com>
To:        freebsd-i386@freebsd.org
Subject:   Problem with vm.pmap.shpgperproc and vm.pmap.pv_entry_max
Message-ID:  <6dd8736a0907030624t27595199w15ef047c9a83e382@mail.gmail.com>
In-Reply-To: <6dd8736a0907030618o722d8252x59479543fef23cc4@mail.gmail.com>
References:  <6dd8736a0907030618o722d8252x59479543fef23cc4@mail.gmail.com>

So, I never had problems with this server, but recently it started
giving me the following message *every* minute:

Jul  3 10:04:00 squid kernel: Approaching the limit on PV entries,
consider increasing either the vm.pmap.shpgperproc or the
vm.pmap.pv_entry_max tunable.
Jul  3 10:05:00 squid kernel: Approaching the limit on PV entries,
consider increasing either the vm.pmap.shpgperproc or the
vm.pmap.pv_entry_max tunable.
Jul  3 10:06:00 squid kernel: Approaching the limit on PV entries,
consider increasing either the vm.pmap.shpgperproc or the
vm.pmap.pv_entry_max tunable.
Jul  3 10:07:01 squid kernel: Approaching the limit on PV entries,
consider increasing either the vm.pmap.shpgperproc or the
vm.pmap.pv_entry_max tunable.
Jul  3 10:08:01 squid kernel: Approaching the limit on PV entries,
consider increasing either the vm.pmap.shpgperproc or the
vm.pmap.pv_entry_max tunable.
Jul  3 10:09:01 squid kernel: Approaching the limit on PV entries,
consider increasing either the vm.pmap.shpgperproc or the
vm.pmap.pv_entry_max tunable.
Jul  3 10:10:01 squid kernel: Approaching the limit on PV entries,
consider increasing either the vm.pmap.shpgperproc or the
vm.pmap.pv_entry_max tunable.
Jul  3 10:11:01 squid kernel: Approaching the limit on PV entries,
consider increasing either the vm.pmap.shpgperproc or the
vm.pmap.pv_entry_max tunable.

This server is running Squid + DansGuardian. The users are complaining
about slow browsing, and they are driving me crazy!

Has anyone faced this problem before?

Some info:

# uname -a
FreeBSD squid 7.2-RELEASE FreeBSD 7.2-RELEASE #0: Fri May  1 08:49:13
UTC 2009     root@walker.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC
i386

# sysctl vm
vm.vmtotal:
System wide totals computed every five seconds: (values in kilobytes)
=================================================
Processes:              (RUNQ: 1 Disk Wait: 1 Page Wait: 0 Sleep: 230)
Virtual Memory:         (Total: 19174412K, Active 9902152K)
Real Memory:            (Total: 1908080K Active 1715908K)
Shared Virtual Memory:  (Total: 647372K Active: 10724K)
Shared Real Memory:     (Total: 68092K Active: 4436K)
Free Memory Pages:      88372K

vm.loadavg: { 0.96 0.96 1.13 }
vm.v_free_min: 4896
vm.v_free_target: 20635
vm.v_free_reserved: 1051
vm.v_inactive_target: 30952
vm.v_cache_min: 20635
vm.v_cache_max: 41270
vm.v_pageout_free_min: 34
vm.pageout_algorithm: 0
vm.swap_enabled: 1
vm.kmem_size_scale: 3
vm.kmem_size_max: 335544320
vm.kmem_size_min: 0
vm.kmem_size: 335544320
vm.nswapdev: 1
vm.dmmax: 32
vm.swap_async_max: 4
vm.zone_count: 84
vm.swap_idle_threshold2: 10
vm.swap_idle_threshold1: 2
vm.exec_map_entries: 16
vm.stats.misc.zero_page_count: 0
vm.stats.misc.cnt_prezero: 0
vm.stats.vm.v_kthreadpages: 0
vm.stats.vm.v_rforkpages: 0
vm.stats.vm.v_vforkpages: 340091
vm.stats.vm.v_forkpages: 3604123
vm.stats.vm.v_kthreads: 53
vm.stats.vm.v_rforks: 0
vm.stats.vm.v_vforks: 2251
vm.stats.vm.v_forks: 19295
vm.stats.vm.v_interrupt_free_min: 2
vm.stats.vm.v_pageout_free_min: 34
vm.stats.vm.v_cache_max: 41270
vm.stats.vm.v_cache_min: 20635
vm.stats.vm.v_cache_count: 5734
vm.stats.vm.v_inactive_count: 242259
vm.stats.vm.v_inactive_target: 30952
vm.stats.vm.v_active_count: 445958
vm.stats.vm.v_wire_count: 58879
vm.stats.vm.v_free_count: 16335
vm.stats.vm.v_free_min: 4896
vm.stats.vm.v_free_target: 20635
vm.stats.vm.v_free_reserved: 1051
vm.stats.vm.v_page_count: 769244
vm.stats.vm.v_page_size: 4096
vm.stats.vm.v_tfree: 12442098
vm.stats.vm.v_pfree: 1657776
vm.stats.vm.v_dfree: 0
vm.stats.vm.v_tcached: 253415
vm.stats.vm.v_pdpages: 254373
vm.stats.vm.v_pdwakeups: 14
vm.stats.vm.v_reactivated: 414
vm.stats.vm.v_intrans: 1912
vm.stats.vm.v_vnodepgsout: 0
vm.stats.vm.v_vnodepgsin: 6593
vm.stats.vm.v_vnodeout: 0
vm.stats.vm.v_vnodein: 891
vm.stats.vm.v_swappgsout: 0
vm.stats.vm.v_swappgsin: 0
vm.stats.vm.v_swapout: 0
vm.stats.vm.v_swapin: 0
vm.stats.vm.v_ozfod: 56314
vm.stats.vm.v_zfod: 2016628
vm.stats.vm.v_cow_optim: 1959
vm.stats.vm.v_cow_faults: 584331
vm.stats.vm.v_vm_faults: 3661086
vm.stats.sys.v_soft: 23280645
vm.stats.sys.v_intr: 18528397
vm.stats.sys.v_syscall: 1990471112
vm.stats.sys.v_trap: 8079878
vm.stats.sys.v_swtch: 105613021
vm.stats.object.bypasses: 14893
vm.stats.object.collapses: 55259
vm.v_free_severe: 2973
vm.max_proc_mmap: 49344
vm.old_msync: 0
vm.msync_flush_flags: 3
vm.boot_pages: 48
vm.max_wired: 255475
vm.pageout_lock_miss: 0
vm.disable_swapspace_pageouts: 0
vm.defer_swapspace_pageouts: 0
vm.swap_idle_enabled: 0
vm.pageout_stats_interval: 5
vm.pageout_full_stats_interval: 20
vm.pageout_stats_max: 20635
vm.max_launder: 32
vm.phys_segs:
SEGMENT 0:

start:     0x1000
end:       0x9a000
free list: 0xc0cca168

SEGMENT 1:

start:     0x100000
end:       0x400000
free list: 0xc0cca168

SEGMENT 2:

start:     0x1025000
end:       0xbc968000
free list: 0xc0cca060

vm.phys_free:
FREE LIST 0:

 ORDER (SIZE)  |  NUMBER
               |  POOL 0  |  POOL 1
 --            --  --     --  --     --
 10 (  4096K)  |       0  |       0
  9 (  2048K)  |       0  |       0
  8 (  1024K)  |       0  |       0
  7 (   512K)  |       0  |       0
  6 (   256K)  |       0  |       0
  5 (   128K)  |       0  |       0
  4 (    64K)  |       0  |       0
  3 (    32K)  |       0  |       0
  2 (    16K)  |       0  |       0
  1 (     8K)  |       0  |       0
  0 (     4K)  |      24  |    3562

FREE LIST 1:

 ORDER (SIZE)  |  NUMBER
               |  POOL 0  |  POOL 1
 --            --  --     --  --     --
 10 (  4096K)  |       0  |       0
  9 (  2048K)  |       0  |       0
  8 (  1024K)  |       0  |       0
  7 (   512K)  |       0  |       0
  6 (   256K)  |       0  |       0
  5 (   128K)  |       0  |       2
  4 (    64K)  |       0  |       3
  3 (    32K)  |       6  |      11
  2 (    16K)  |       6  |      21
  1 (     8K)  |      14  |      35
  0 (     4K)  |      20  |      70

vm.reserv.reclaimed: 187
vm.reserv.partpopq:
LEVEL     SIZE  NUMBER

  -1:  71756K,     19

vm.reserv.freed: 35575
vm.reserv.broken: 94
vm.idlezero_enable: 0
vm.kvm_free: 310374400
vm.kvm_size: 1073737728
vm.pmap.pmap_collect_active: 0
vm.pmap.pmap_collect_inactive: 0
vm.pmap.pv_entry_spare: 50408
vm.pmap.pv_entry_allocs: 38854797
vm.pmap.pv_entry_frees: 37052501
vm.pmap.pc_chunk_tryfail: 0
vm.pmap.pc_chunk_frees: 130705
vm.pmap.pc_chunk_allocs: 136219
vm.pmap.pc_chunk_count: 5514
vm.pmap.pv_entry_count: 1802296
vm.pmap.pde.promotions: 0
vm.pmap.pde.p_failures: 0
vm.pmap.pde.mappings: 0
vm.pmap.pde.demotions: 0
vm.pmap.shpgperproc: 200
vm.pmap.pv_entry_max: 2002224
vm.pmap.pg_ps_enabled: 0

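From the dump above, vm.pmap.pv_entry_count (1802296) is already at
roughly 90% of vm.pmap.pv_entry_max (2002224), which, if I'm reading
the i386 pmap code correctly, is about the point where the kernel
starts printing that warning. The quick way to keep an eye on it:

# sysctl vm.pmap.pv_entry_count vm.pmap.pv_entry_max
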
Both vm.pmap.shpgperproc and vm.pmap.pv_entry_max are at their
default values. I read here
(http://lists.freebsd.org/pipermail/freebsd-hackers/2003-May/000695.html)
that it's not a good idea to increase these values arbitrarily.
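
If it does come to raising them, my understanding is that both are
loader tunables, so the change would go in /boot/loader.conf and take
effect on the next boot. The values below are only an illustration,
not a recommendation:

# /boot/loader.conf
# default shpgperproc is 200; pv_entry_max is derived from it at boot
vm.pmap.shpgperproc="400"
# or set the absolute limit directly:
#vm.pmap.pv_entry_max="4004448"

That said, I'd rather understand why the PV entry count keeps climbing
before touching either of them.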

Thanks

Fábio



--

"To err is human, to blame it on somebody else shows management potential."


