Date:      Fri, 1 Oct 2004 20:34:37 -0700 (PDT)
From:      "Bruce R. Montague" <brucem@mail.cruzio.com>
To:        freebsd-hackers@freebsd.org
Cc:        durham@jcdurham.com
Subject:   Re: Sudden Reboots
Message-ID:  <200410020334.i923YbYB000383@mail.cruzio.com>


 Hi, re:

> The odd thing was that it was happening at virtually
> the same time every morning....
> [...]
> Then, they both just *stopped doing it by themselves* with no apparent
> correlation to anything installed software-wise. Neither server has had any
> problem for over a year now.

* What was the external power situation, grounding,
static situation, or other "noise"?  Was the UPS or
power-conditioning OK? Any large radars nearby? :) 
Radars have actually been known to matter. I once 
knew a system that died like this and it turned out
to be because it was mounted three floors above a
loading dock... a ROM pin or somesuch was doing a
great job as a vibration detector, whenever trucks
backed into the dock hard.

Which brings up the question: what's the cheapest/best
way these days to actually monitor high-res
sags/spikes/surges on the line into a box? Decades ago
it was a Dranetz meter; I see they're still around:
  www.dranetz-bmi.com

Does anyone have any such "line-monitor" unit that
they particularly recommend as a good low-end buy?   


* Handwaving general remark about VM space overhead...
Early virtual memory systems rapidly ran into the   
problem that all of physical memory became consumed
by page tables. The solution was to page the page
tables (which is why modern architectures support
hierarchies of page tables). As systems become larger
this solution typically becomes less-and-less
effective, because each page in every _virtual_
address space requires a page table entry. If you
have many large address spaces, this requires many
page table entries total (this acts as pressure to
make pages larger). The page tables become large
data structures; managing them (keeping parts in
memory when needed) can become a bottleneck.  If you
have other restrictions (the page tables have to fit
in an address space segment, say, a kernel data
segment), the virtual space allocated for this data
structure can become exhausted. A kernel usually
needs to have page tables that can map every page
of physical memory, so for this page table, the more
physical memory present, the larger the table.

Page tables are used because they allow a page table
entry to be accessed via a simple addition based
on most of the virtual address. This is fast.
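A toy sketch of both points above: a two-level table where the
second level is allocated lazily (so sparse address spaces don't
pay for unused table pages), and where each lookup is just
shifts, masks, and indexed loads. All names and the 10/10/12
bit split are hypothetical, not any real MMU's format or
FreeBSD's pmap:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical two-level page table for a 32-bit VA with 4 KiB pages:
 * 10-bit directory index, 10-bit table index, 12-bit page offset.
 * Second-level tables are allocated on first touch, so only the parts
 * of a sparse address space actually in use cost memory -- the point
 * of "paging the page tables". */
#define PAGE_SHIFT 12
#define LEVEL_BITS 10
#define LEVEL_SIZE (1u << LEVEL_BITS)

typedef struct {
	uint32_t *tables[LEVEL_SIZE];	/* second-level tables; NULL if absent */
} pagedir_t;

/* Map the page containing `va` to physical frame base `pframe`
 * (must be page-aligned; error handling omitted). */
static void
pt_map(pagedir_t *pd, uint32_t va, uint32_t pframe)
{
	uint32_t di = (va >> (PAGE_SHIFT + LEVEL_BITS)) & (LEVEL_SIZE - 1);
	uint32_t ti = (va >> PAGE_SHIFT) & (LEVEL_SIZE - 1);

	if (pd->tables[di] == NULL)
		pd->tables[di] = calloc(LEVEL_SIZE, sizeof(uint32_t));
	pd->tables[di][ti] = pframe | 1;	/* low bit = "valid" */
}

/* Translate `va`; returns 0 if unmapped.  Note the walk is nothing but
 * shift/mask/add -- the "simple addition" that makes page tables fast. */
static uint32_t
pt_lookup(const pagedir_t *pd, uint32_t va)
{
	uint32_t di = (va >> (PAGE_SHIFT + LEVEL_BITS)) & (LEVEL_SIZE - 1);
	uint32_t ti = (va >> PAGE_SHIFT) & (LEVEL_SIZE - 1);

	if (pd->tables[di] == NULL || !(pd->tables[di][ti] & 1))
		return 0;
	return (pd->tables[di][ti] & ~1u) | (va & ((1u << PAGE_SHIFT) - 1));
}
```

Note the kernel-size pressure mentioned above: even with lazy
allocation, a process touching its whole space needs one entry
per virtual page, and the kernel still needs enough entries to
map every physical page.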

As address spaces grow above 32-bits, the potential
size of the page tables becomes more important. For
very large address spaces some form of "single-level
store" or "inverted page table" scheme is often
proposed. Instead of having a page table entry for
each page of virtual address space, these systems
have the equivalent of a page table entry for each
page of _physical_ memory. All addresses are effectively
disk-block+offset addresses; the virtual memory
hardware does an associative search to locate the
physical block in memory that corresponds to the
disk-block. This requires more expensive hardware
than a simple addition, but such systems only require
a page table entry for every page of physical memory.
These systems have been built since the early days, but
are typically not competitive with VM systems that
require simple addition. (I think the IBM AS/400 is
the only widely-used commercial hardware using this
approach.) At some point address space growth, cheap
associative lookup memories, and required page table
size may make this approach competitive.
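For contrast, here is the inverted scheme as a toy: one entry
per *physical* frame, found by associative search (a linear scan
here; real designs use a hash anchor table in hardware or
firmware). The layout and names are invented for illustration,
not the AS/400's actual format:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical inverted page table: the table is indexed by physical
 * frame number, and translation searches it for a matching
 * (address-space id, virtual page number) pair.  Table size is fixed
 * by physical memory, not by how big the virtual spaces are. */
#define NFRAMES    16
#define PAGE_SHIFT 12

struct ipt_entry {
	int      valid;
	uint32_t asid;	/* address-space id */
	uint32_t vpn;	/* virtual page number */
};

static struct ipt_entry ipt[NFRAMES];	/* index == physical frame number */

/* Record that physical frame `pfn` now holds page (asid, vpn). */
static void
ipt_map(uint32_t pfn, uint32_t asid, uint32_t vpn)
{
	ipt[pfn] = (struct ipt_entry){ .valid = 1, .asid = asid, .vpn = vpn };
}

/* Associative lookup: which frame holds this virtual page?
 * Returns -1 if not resident -- in a single-level store, that means
 * going to the disk block behind the address. */
static int
ipt_lookup(uint32_t asid, uint32_t va)
{
	uint32_t vpn = va >> PAGE_SHIFT;

	for (uint32_t pfn = 0; pfn < NFRAMES; pfn++)
		if (ipt[pfn].valid && ipt[pfn].asid == asid &&
		    ipt[pfn].vpn == vpn)
			return (int)pfn;
	return -1;
}
```

The trade noted above is visible here: the table is small and
bounded by physical memory, but each translation is a search
rather than an indexed load, so it only wins if the associative
lookup is cheap.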


 - bruce


