Date:      Fri, 27 Jun 2014 09:28:10 -0400
From:      Shawn Webb <lattera@gmail.com>
To:        Alan Cox <alc@rice.edu>
Cc:        freebsd-hackers@freebsd.org, kib@freebsd.org, Oliver Pinter <oliver.pntr@gmail.com>
Subject:   Re: Help With ASLR
Message-ID:  <20140627132810.GA8396@pwnie.vrt.sourcefire.com>
In-Reply-To: <53ACE5B4.8070700@rice.edu>
References:  <20140626232727.GB1825@pwnie.vrt.sourcefire.com> <53ACE5B4.8070700@rice.edu>



On Jun 26, 2014 10:32 PM -0500, Alan Cox wrote:
> On 06/26/2014 18:27, Shawn Webb wrote:
> > Hey All,
> >
> > I've exchanged a few helpful emails with kib@. He has glanced over the
> > ASLR patch we have against 11-CURRENT (which is also attached to this
> > email as a reference point). He has suggested some changes to the VM
> > subsystem that I'm having trouble wrapping my head around.
> >
> > Essentially, he suggests that instead of doing the randomization in
> > various places like the ELF loader and sys_mmap, we should modify
> > vm_map_find() in sys/vm/vm_map.c to apply the randomization there.
> >
> > Here's his suggestion in full (I hope he doesn't mind me directly
> > quoting him):
> >
> > The mapping address randomization can only be performed in
> > vm_map_find().  This is the place where usermode mapping flags are
> > already parsed and the real placement decision is made, with all the
> > contextual information collected; fixed requests are not passed
> > there.  Doing the 'randomization' in vm_mmap() means that the mmap
> > command must be parsed twice, which the presented patch fails to do
> > in any sane way.  Only mappings in usermode maps should be
> > randomized.
> >
> > The current patch does not randomize the mapping location at all; it
> > only randomizes the hint address passed from userspace, which is
> > completely useless.
>
> Kostik, I'm not sure that I understand what you're saying here.  The
> randomly generated offset is added to the variable "addr" after this
> block of code has been executed:
>
>         } else {
>                 /*
>                  * XXX for non-fixed mappings where no hint is provided or
>                  * the hint would fall in the potential heap space,
>                  * place it after the end of the largest possible heap.
>                  *
>                  * There should really be a pmap call to determine a reasonable
>                  * location.
>                  */
>                 PROC_LOCK(td->td_proc);
>                 if (addr == 0 ||
>                     (addr >= round_page((vm_offset_t)vms->vm_taddr) &&
>                     addr < round_page((vm_offset_t)vms->vm_daddr +
>                     lim_max(td->td_proc, RLIMIT_DATA))))
>                         addr = round_page((vm_offset_t)vms->vm_daddr +
>                             lim_max(td->td_proc, RLIMIT_DATA));
>                 PROC_UNLOCK(td->td_proc);
>         }
>
> So, the point at which the eventual call to vm_map_findspace() starts
> looking for free space is determined by the randomization, and thus the
> chosen address will be influenced by the randomization.  The first
> mmap() that I do in one execution of the program will be unlikely to
> have the same address in the next execution.
>
> That said, I think that this block of code would be a better place for
> the pax_aslr_mmap() call than where it currently resides.  Moving it here
> would address the problem with MAP_32BIT mentioned below.
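
Just to make sure I follow: you mean perturbing addr right at the end of
that heap-avoidance block in vm_mmap(), before anything is handed down
to vm_map_find()?  Very roughly like the untested sketch below, where
pax_aslr_active()/pax_aslr_mmap() and their arguments are only
placeholders for whatever the interface ends up looking like:

                        addr = round_page((vm_offset_t)vms->vm_daddr +
                            lim_max(td->td_proc, RLIMIT_DATA));
                PROC_UNLOCK(td->td_proc);
                /*
                 * Hypothetical placement: perturb the default hint here,
                 * after the heap-avoidance logic, so that MAP_32BIT and
                 * the alignment requests can still be honoured further
                 * down the call path.
                 */
                if (pax_aslr_active(td))
                        pax_aslr_mmap(td, &addr, flags);
        }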
>
>
> > ...  It also fails to correct the address to honour
> > the placement flags like MAP_32BIT or MAP_ALIGNMENT_MASK.
>
>
> I think that MAP_32BIT is broken, but not MAP_ALIGNMENT_MASK.  Whatever
> address the patch causes to be passed into vm_map_find(), we will round
> it up as necessary to achieve the requested alignment.
>
> In some sense, the real effect of the map alignment directives,
> including automatic alignment for superpages, is going to be to chop off
> address bits that Shawn has worked to randomize.  More on this below.
> See [*].
>
> > ...  What must
> > be done is for vm_map_find() to request from vm_map_findspace() an
> > address hole of the size of the requested mapping + (number of
> > address bits to randomize << PAGE_SHIFT).  Then, the rng value should
> > be obtained and the final location for the mapping calculated as the
> > returned address + (rng << PAGE_SHIFT).
>
>
> This seems to be trying to implement a different and more complex scheme
> than what Shawn is trying to implement.  Specifically, suppose that we
> have a program that creates 5 mappings with mmap(2).  Call them M1, M2,
> M3, M4, and M5, respectively.  Shawn is perfectly happy for M1 to always
> come before M2 in the address space, M2 to always come before M3, and so
> on.  He's not trying to permute their order in the address space from
> one execution of the program to the next.  He's only trying to change
> their addresses from one execution to the next.  The impression that I
> got during the BSDCan presentation was that this is what other operating
> systems were doing, and it was considered to strike a reasonable balance
> between run-time overhead and the security benefits.
>
>
> >
> > If no address space hole exists which can satisfy the enlarged
> > request, then either a direct fallback to the initial length should
> > be performed (I prefer this), or the length should be decreased
> > exponentially down to the initial size and the findspace procedure
> > repeated.
> >
> > Also, the vm_map_find() knows about the superpages hint for the
> > mapping being performed, which makes it possible to avoid breaking
> > superpages.  When a superpage-aligned mapping is requested,
> > SUPERPAGE_SHIFT (available from pagesizes[1]) should be used instead
> > of PAGE_SHIFT in the formula above, and probably a different number
> > of address bits, at page table level 2, should be selected for
> > randomization.
>
>
> [*]  I agree with this last observation.  I mentioned this possibility
> to Shawn after his talk.  On machines where pagesizes[1] is defined,
> i.e., non-zero, it makes little sense to randomize the low-order
> address bits that will be "rounded away" by aligning to a pagesizes[1]
> boundary.
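
If I'm reading the combined suggestion right (the padded findspace
request, the fallback to the plain length, and the superpage-aware
shift), it comes out to something very roughly like the fragment below,
living in vm_map_find() just before the actual insertion.  Everything
named pax_aslr_* or N_RAND_BITS is a placeholder, and I'm paraphrasing
the 11-CURRENT vm_map_find()/vm_map_findspace() internals from memory,
so please treat this as a sketch of my understanding rather than a
proposed diff:

        vm_offset_t max_off, rand_off;
        int shift;

        /*
         * Only randomize user maps; leave the kernel map (KLDs,
         * boot-time mappings) untouched.
         */
        if (!map->system_map && pax_aslr_active(curthread)) {
                /*
                 * When the mapping asks for superpage alignment,
                 * randomize in units of pagesizes[1] so that no low
                 * bits are generated which the alignment would round
                 * away anyway.
                 */
                shift = (find_space == VMFS_OPTIMAL_SPACE &&
                    pagesizes[1] != 0) ?
                    flsl(pagesizes[1]) - 1 : PAGE_SHIFT;
                max_off = (((vm_offset_t)1 << N_RAND_BITS) - 1) << shift;
                rand_off = ((vm_offset_t)pax_aslr_rand() &
                    (((vm_offset_t)1 << N_RAND_BITS) - 1)) << shift;
        } else
                max_off = rand_off = 0;

        /*
         * Ask for a hole big enough for the mapping plus the largest
         * possible displacement.  If no such hole exists, fall back to
         * the plain length (the fallback kib prefers) and give up on
         * randomizing this particular mapping.
         */
        if (vm_map_findspace(map, start, length + max_off, addr) != 0) {
                if (vm_map_findspace(map, start, length, addr) != 0)
                        return (KERN_NO_SPACE);
                rand_off = 0;
        }
        *addr += rand_off;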
>
>
> >
> > === end of kib@ suggestions ===
> >
> > I have a few concerns, though:
> >
> > 1) vm_map_find is also used when loading certain bits of data in the
> > kernel (KLDs, for example);
> > 2) It's not possible to tell in vm_map_find if we're creating a mapping
> > for the stack, the execbase, or any other suitable mmap call. We apply
> > ASLR differently based on those three aspects;
> > 3) vm_map_find is also used in initializing the VM system at boot.
> > What impact would this have?
> >
> > I would really appreciate any feedback. Thank you in advance for any
> > help or guidance.
>
> Shawn, while I'm here, I'll mention that what you were doing with stacks
> in pax_aslr_mmap() seemed odd.  Can you explain?
>
>

Thank you for your input, Alan. From what it sounds like, you're
suggesting we not implement the ASLR functionality in vm_map_find() and
instead stick with what we have. Please correct me if I'm wrong.

Did our changes break MAP_32BIT or was MAP_32BIT broken all along? If
our ASLR patch breaks it, what would be the best way to fix it?
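
If the breakage is on our side, would clamping the delta we add when
MAP_32BIT is requested be a sane direction?  Something along these
lines, where addr/size/delta are the obvious locals in our
randomization path and the 2 GB ceiling is my recollection of what
MAP_32BIT implies:

        /*
         * Hypothetical fix-up: never let the randomized hint push a
         * MAP_32BIT request past the low 2 GB.
         */
        vm_offset_t limit32 = (vm_offset_t)1 << 31;

        if ((flags & MAP_32BIT) != 0 && addr + size + delta > limit32)
                delta = 0;      /* or re-roll a smaller delta */
        addr += delta;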

I'm a little unsure of your question regarding stacks in
pax_aslr_mmap(). I don't believe we're doing anything with stacks in
that function. We do have the function pax_aslr_stack(), but we're
working on providing a more robust stack randomization (which isn't in
the patchset I provided).

Thanks,

Shawn



