Date:      Tue, 25 Jun 2002 16:43:31 -0700
From:      Terry Lambert <tlambert2@mindspring.com>
To:        Matthew Dillon <dillon@apollo.backplane.com>
Cc:        Alfred Perlstein <bright@mu.org>, Patrick Thomas <root@utility.clubscholarship.com>, freebsd-hackers@FreeBSD.ORG
Subject:   Re: tunings for many httpds...
Message-ID:  <3D190023.4BA9D75F@mindspring.com>
References:  <20020624151650.I68572-100000@utility.clubscholarship.com> <3D17D27A.11E82B2B@mindspring.com> <20020625022238.GH53232@elvis.mu.org> <3D17DBC1.351A8A35@mindspring.com> <20020625072509.GJ53232@elvis.mu.org> <3D18CDB2.151978F3@mindspring.com> <20020625210633.GQ53232@elvis.mu.org> <200206252209.g5PM9J79010543@apollo.backplane.com>

Matthew Dillon wrote:
>     Even more importantly it would be nice if we could share compatible
>     pmap pages, then we would have no need for 4MB pages... 50 mappings
>     of the same shared memory segment would wind up using the same pmap
>     pages as if only one mapping had been made.  Such a feature would work
>     for SysV shared memory and for mmap()s.  I've looked at doing this
>     off and on for two years but do not have a sufficient chunk of time
>     available yet.

For the KVA, this should be one of the side effects of doing
preestablished mappings for the entire KVA space.  Mapping the
entirety of the KVA would cost about 1M of memory in page
tables with 4K pages, or effectively none with 4M pages.  The
issue that 4M pages would introduce here, though, is the
inability to page kernel memory on 4K boundaries.  Personally,
I think I would rather eat the 1M of page tables for the 1G of
KVA.
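
To make the arithmetic concrete, here's a quick userland sketch
(assuming the usual i386 figures of 4K pages and 4-byte PTEs;
illustrative only, not kernel code):

    /*
     * Back-of-the-envelope page table cost: one 4-byte PTE per
     * 4K page, a 1024:1 ratio.  Illustrative userland code only.
     */
    #include <stdio.h>

    int
    main(void)
    {
            unsigned long kva = 1UL << 30;  /* 1G of KVA */

            /* 1G / 4K pages = 256K PTEs; at 4 bytes each, 1M. */
            printf("%lu bytes of page tables\n", kva / 4096 * 4);
            return (0);
    }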

Mapping all of physical memory would take physical/1024 bytes
of page tables; for 4G, that works out to 4M.  I'm not sure
whether these mappings could really be shared.  If they could,
it would eliminate a lot of issues, like reverse notification
in the case of a swap-out of a page shared between processes,
etc.
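
Putting rough numbers on the sharing win from the 50-mapping
case quoted above (one 4K page table page holds 1024 4-byte
PTEs and so maps 4M of address space; the 64M segment size
below is my own assumption, picked only for illustration):

    /*
     * Rough numbers for the quoted 50-mapping scenario.  One 4K
     * page table page maps 4M of address space.  The 64M segment
     * size is a made-up example.
     */
    #include <stdio.h>

    int
    main(void)
    {
            unsigned long ptpages = (64UL << 20) / (4UL << 20);
            unsigned long nmaps = 50;

            /* 16 page table pages shared, vs. 800 without sharing. */
            printf("shared: %lu, unshared: %lu\n", ptpages,
                ptpages * nmaps);
            return (0);
    }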

I think both of these schemes would end up introducing
additional problems, though.  One would be what to do when your
memory does not come close to filling your address space (4M
out of 32M or 64M is a substantial chunk to spend mapping an
address space that cannot, in practice, be used).  Another
would be that you would no longer have unmapped kernel memory.
That would make it hard to implement guard pages (on the plus
side, there would be no more "trap 12" panics 8-) 8-)).
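
For anyone who hasn't run into guard pages: the idea is an
intentionally unmapped (or inaccessible) page placed after a
buffer so that an overrun faults immediately instead of
silently corrupting its neighbor.  A minimal userland analogy
using mmap(2)/mprotect(2) with PROT_NONE (the kernel case is
the same idea applied to KVA):

    /*
     * Userland analogy for a guard page: make the page after a
     * buffer inaccessible so an overrun faults at once.  Assumes
     * 4K pages, as elsewhere in this mail.
     */
    #include <sys/types.h>
    #include <sys/mman.h>

    int
    main(void)
    {
            size_t pgsz = 4096;
            char *p = mmap(NULL, 2 * pgsz, PROT_READ | PROT_WRITE,
                MAP_ANON | MAP_PRIVATE, -1, 0);

            if (p == MAP_FAILED)
                    return (1);
            mprotect(p + pgsz, pgsz, PROT_NONE);  /* the guard page */
            p[pgsz - 1] = 1;  /* last usable byte: fine */
            /* p[pgsz] = 1;      one past: SIGSEGV, not corruption */
            return (0);
    }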

Things tend to change considerably when physical RAM closes in
on the physical address space in size; historically, all the
assumptions have been that this would not be the case.  While
there's some benefit to re-examining some of these assumptions,
going to a 64-bit address space with the IA64 and Hammer
architectures is just going to reset the assumptions back down.

I think that some of the stuff you've done already with regard
to preallocation policy on close approach (the machdep.c
changes discussed earlier in this thread) is close to the
limits of what can be done reasonably without damaging the
system in the "physical_address_space/physical_RAM >> 1"
cases.

When I suggested pre-creating mappings for all of the KVA
space (and *only* the KVA space), I was really talking about
unifying allocators, dropping areas of code that would
otherwise require locking to operate correctly, and avoiding
committing memory to a particular zone.  That is, I don't
think mapping sharing can be done generally to avoid
duplication -- even if the segments end up attached at the
same address in multiple processes.

-- Terry
