Date:      Fri, 28 Jan 2011 12:30:55 +0100
From:      Damien Fleuriot <ml@my.gd>
To:        Bartosz Stec <bartosz.stec@it4pro.pl>
Cc:        freebsd-stable@freebsd.org
Subject:   Re: top shows only part of available physmem
Message-ID:  <4D42A8EF.7060302@my.gd>
In-Reply-To: <4D429C71.6000100@it4pro.pl>
References:  <4D401192.3030400@it4pro.pl>	<201101261235.56856.jhb@freebsd.org>	<20110126180402.GA17271@tolstoy.tols.org>	<201101261344.50756.jhb@freebsd.org>	<4D40C355.6070306@it4pro.pl>	<20110127032142.GA19946@icarus.home.lan>	<4D417931.1060009@it4pro.pl> <4D429C71.6000100@it4pro.pl>



On 1/28/11 11:37 AM, Bartosz Stec wrote:
> 
>>>>>>>>>> Guys,
>>>>>>>>>>
>>>>>>>>>> could someone explain this to me?
>>>>>>>>>>
>>>>>>>>>>       # sysctl hw.realmem
>>>>>>>>>>       hw.realmem: 2139029504
>>>>>>>>>>
>>>>>>>>>> top line shows:
>>>>>>>>>>
>>>>>>>>>>       Mem: 32M Active, 35M Inact, 899M Wired, 8392K Cache,
>>>>>>>>>> 199M Buf, 58M Free
>>>>>>>>>>
>>>>>>>>>> 32+35+899+8+199+58 = 1231MB
>>>>>>>>>>
>>>>>>>>>> Shouldn't that sum up to all the available RAM? Or maybe I'm
>>>>>>>>>> reading it wrong?
>>>>>>>>>> This machine does indeed have 2GB of RAM on board, as shown in
>>>>>>>>>> the BIOS.
>>>>>>>>>> i386  FreeBSD 8.2-PRERELEASE #16: Mon Jan 17 22:28:53 CET 2011
>>>>>>>>>> Cheers.
>>>>>>>>> First, don't include 'buf', as it isn't a separate set of RAM; it
>>>>>>>>> is only a range of the virtual address space in the kernel.  It
>>>>>>>>> used to be relevant when the buffer cache was separate from the
>>>>>>>>> VM page cache, but now it is mostly irrelevant (arguably it
>>>>>>>>> should just be dropped from top's output).
>>>>>>>> Thanks for the explanation. So 1231MB - 199MB Buf leaves us with
>>>>>>>> about 1GB of memory instead of 2GB.
>>>>>>>>
>>>>>>>>> However, look at what hw.physmem says (and the realmem and
>>>>>>>>> availmem lines in
>>>>>>>>> dmesg).  realmem is actually not that useful as it is not a
>>>>>>>>> count of the
>>>>>>>>> amount of memory, but the address of the highest memory page
>>>>>>>>> available.  There
>>>>>>>>> can be less memory available than that due to "holes" in the
>>>>>>>>> address space for
>>>>>>>>> PCI memory BARs, etc.
>>>>>>>>>
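Side note: you can put a number on those holes with the two sysctls
mentioned above; with the values quoted just below, 2139029504 -
2125893632 comes to only about 13MB, so holes alone can't explain a
1GB gap. A quick /bin/sh sanity check, using nothing but those sysctls:

    #!/bin/sh
    # hw.realmem = highest physical page address; hw.physmem = RAM the
    # kernel can actually use.  The difference is holes/reserved space.
    real=$(sysctl -n hw.realmem)
    phys=$(sysctl -n hw.physmem)
    echo "holes/reserved: $(( (real - phys) / 1024 / 1024 )) MB"
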
>>>>>>>> OK, here you go:
>>>>>>>> # sysctl hw | grep mem
>>>>>>>>
>>>>>>>>      hw.physmem: 2125893632
>>>>>>>>      hw.usermem: 1212100608
>>>>>>>>      hw.realmem: 2139029504
>>>>>>>>      hw.pci.host_mem_start: 2147483648
>>>>>>> Humm, you should still have 2GB of RAM then.  All the memory you
>>>>>>> set aside
>>>>>>> for ARC should be counted in the 'wired' count, so I'm not sure
>>>>>>> why you see
>>>>>>> 1GB of RAM rather than 2GB.
>>>>>> For what it's worth (these seem to be the same values top shows),
>>>>>> the sysctls I use to make cacti graphs of memory usage are (counts
>>>>>> are in pages):
>>>>>>
>>>>>> vm.stats.vm.v_page_size
>>>>>>
>>>>>> vm.stats.vm.v_wire_count
>>>>>> vm.stats.vm.v_active_count
>>>>>> vm.stats.vm.v_inactive_count
>>>>>> vm.stats.vm.v_cache_count
>>>>>> vm.stats.vm.v_free_count
>>>>>>
>>>>>> Using the output of those sysctls I always get a cacti graph which
>>>>>> appears to account for all memory, with a flat surface in a
>>>>>> stacked graph.
>>>>> These sysctls are exactly what top uses.  There is also a
>>>>> 'v_page_count'
>>>>> which is a total count of pages.
>>>>>
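A quick way to check whether those five counters actually add up to
v_page_count on a live box (plain /bin/sh, only the sysctls named
above):

    #!/bin/sh
    # Sum the per-state page counters and compare against the total.
    sum=0
    for c in wire active inactive cache free; do
            sum=$(( sum + $(sysctl -n vm.stats.vm.v_${c}_count) ))
    done
    total=$(sysctl -n vm.stats.vm.v_page_count)
    pgsz=$(sysctl -n vm.stats.vm.v_page_size)
    echo "accounted: $(( sum * pgsz / 1048576 ))MB of $(( total * pgsz / 1048576 ))MB"

If the first number falls well short of the second, the missing memory
sits in pages that none of those queues cover.
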
>>>> So here's additional sysctl output from now:
>>>>
>>>>     fbsd# sysctl hw | grep mem
>>>>     hw.physmem: 2125893632
>>>>     hw.usermem: 1392594944
>>>>     hw.realmem: 2139029504
>>>>     hw.pci.host_mem_start: 2147483648
>>>>
>>>>     fbsd# sysctl vm.stats.vm
>>>>     vm.stats.vm.v_kthreadpages: 0
>>>>     vm.stats.vm.v_rforkpages: 0
>>>>     vm.stats.vm.v_vforkpages: 1422927
>>>>     vm.stats.vm.v_forkpages: 4606557
>>>>     vm.stats.vm.v_kthreads: 40
>>>>     vm.stats.vm.v_rforks: 0
>>>>     vm.stats.vm.v_vforks: 9917
>>>>     vm.stats.vm.v_forks: 30429
>>>>     vm.stats.vm.v_interrupt_free_min: 2
>>>>     vm.stats.vm.v_pageout_free_min: 34
>>>>     vm.stats.vm.v_cache_max: 27506
>>>>     vm.stats.vm.v_cache_min: 13753
>>>>     vm.stats.vm.v_cache_count: 20312
>>>>     vm.stats.vm.v_inactive_count: 18591
>>>>     vm.stats.vm.v_inactive_target: 20629
>>>>     vm.stats.vm.v_active_count: 1096
>>>>     vm.stats.vm.v_wire_count: 179027
>>>>     vm.stats.vm.v_free_count: 6193
>>>>     vm.stats.vm.v_free_min: 3260
>>>>     vm.stats.vm.v_free_target: 13753
>>>>     vm.stats.vm.v_free_reserved: 713
>>>>     vm.stats.vm.v_page_count: 509752
>>>>     vm.stats.vm.v_page_size: 4096
>>>>     vm.stats.vm.v_tfree: 196418851
>>>>     vm.stats.vm.v_pfree: 2837177
>>>>     vm.stats.vm.v_dfree: 0
>>>>     vm.stats.vm.v_tcached: 1305893
>>>>     vm.stats.vm.v_pdpages: 3527455
>>>>     vm.stats.vm.v_pdwakeups: 187
>>>>     vm.stats.vm.v_reactivated: 83786
>>>>     vm.stats.vm.v_intrans: 3053
>>>>     vm.stats.vm.v_vnodepgsout: 134384
>>>>     vm.stats.vm.v_vnodepgsin: 29213
>>>>     vm.stats.vm.v_vnodeout: 96249
>>>>     vm.stats.vm.v_vnodein: 29213
>>>>     vm.stats.vm.v_swappgsout: 19730
>>>>     vm.stats.vm.v_swappgsin: 8573
>>>>     vm.stats.vm.v_swapout: 5287
>>>>     vm.stats.vm.v_swapin: 2975
>>>>     vm.stats.vm.v_ozfod: 83338
>>>>     vm.stats.vm.v_zfod: 2462557
>>>>     vm.stats.vm.v_cow_optim: 330
>>>>     vm.stats.vm.v_cow_faults: 1239253
>>>>     vm.stats.vm.v_vm_faults: 5898471
>>>>
>>>>     fbsd# sysctl vm.vmtotal
>>>>     vm.vmtotal:
>>>>     System wide totals computed every five seconds: (values in
>>>> kilobytes)
>>>>     ===============================================
>>>>     Processes:              (RUNQ: 1 Disk Wait: 0 Page Wait: 0
>>>> Sleep: 60)
>>>>     Virtual Memory:         (Total: 4971660K Active: 699312K)
>>>>     Real Memory:            (Total: 540776K Active: 29756K)
>>>>     Shared Virtual Memory:  (Total: 41148K Active: 19468K)
>>>>     Shared Real Memory:     (Total: 4964K Active: 3048K)
>>>>     Free Memory Pages:      105308K
>>>>
>>>>
>>>>     /usr/bin/top line: Mem: 4664K Active, 73M Inact, 700M Wired, 79M
>>>>     Cache, 199M Buf, 23M Free
>>>>     Sum (without Buf): 879.5 MB
>>>>
>>>>     So what are we looking at? Wrong sysctl/top output, or does
>>>>     FreeBSD actually not use all the available RAM for some reason?
>>>>     Could it be a hardware problem? Should I provide some
>>>>     additional data?
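For what it's worth, the five page counters you posted sum to 225219
pages (~880MB), while v_page_count is 509752 (~1991MB), so about 1.1GB
is unaccounted for even within the VM counters themselves. A one-liner
to watch that gap over time (assuming 4096-byte pages, per your
v_page_size):

    sysctl -n vm.stats.vm.v_page_count vm.stats.vm.v_wire_count \
        vm.stats.vm.v_active_count vm.stats.vm.v_inactive_count \
        vm.stats.vm.v_cache_count vm.stats.vm.v_free_count |
        awk 'NR==1 { t = $1 } NR>1 { s += $1 }
             END { printf "unaccounted: %d MB\n", (t - s) * 4096 / 1048576 }'
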
>>> Does the behaviour become more expected if you remove ZFS from the
>>> picture?  Please try this (yes really).
>>>
>> About an hour ago I had to hard reset this machine because it stopped
>> responding (but still answered pings) after a massive slowdown seen by
>> the Samba users.
>> Now top shows the following:
>> Mem: 78M Active, 83M Inact, 639M Wired, 120K Cache, 199M Buf, 1139M Free.
>>
>> What I am afraid of is that this PC slowly eats its own memory and
>> finally starves itself to death: it has happened twice in 2 weeks,
>> and it seems the world+kernel rebuild of Mon Jan 17 22:28:53 CET 2011
>> could be the cause. For some strange reason I believe that Jeremy
>> Chadwick could be right in pointing at ZFS. The way this machine stops
>> responding without any info in the logs makes me believe that it has
>> simply lost the ability to do I/O to the HDD (the system is ZFS-only).
>>
> Day 2 after reboot:
> Mem: 100M Active, 415M Inact, 969M Wired, 83M Cache, 199M Buf, 21M Free
> Sum: 1588MB
> A quarter of the total RAM has already disappeared.
> Does anyone know what could be happening here, or should I hire a
> voodoo shaman to expel the memory-eating ghost from the machine ;)?
> 


Can you provide the following sysctls (ignore my values, obviously)
again, now that some of your memory has magicked itself away?

hw.physmem: 4243976192
hw.usermem: 3417485312
hw.realmem: 5100273664
vfs.zfs.arc_min: 134217728
vfs.zfs.arc_max: 2147483648


And check out the ZFS ARC stats script here:
http://bitbucket.org/koie/arc_summary/changeset/dbe14d2cf52b/

Run it and see what results you get for ZFS's memory usage.
What's of interest is the current size of your ZFS ARC.
With a bit of luck, it might account for the memory you're missing.
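If fetching the script is a hassle, the live ARC size is also exposed
directly through sysctl (assuming the arcstats kstats are present on
your 8.2 box):

    #!/bin/sh
    # Current ARC size next to its configured bounds, in MB.
    for oid in kstat.zfs.misc.arcstats.size vfs.zfs.arc_min vfs.zfs.arc_max; do
            printf '%-30s %6d MB\n' "$oid" $(( $(sysctl -n "$oid") / 1048576 ))
    done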


