Date:      Sun, 29 Jul 2012 02:30:05 GMT
From:      Stephen Montgomery-Smith <stephen@missouri.edu>
To:        freebsd-bugs@FreeBSD.org
Subject:   Re: bin/170206: complex arcsinh, log, etc.
Message-ID:  <201207290230.q6T2U5tG060118@freefall.freebsd.org>

The following reply was made to PR bin/170206; it has been noted by GNATS.

From: Stephen Montgomery-Smith <stephen@missouri.edu>
To: Bruce Evans <brde@optusnet.com.au>
Cc: freebsd-bugs@freebsd.org, FreeBSD-gnats-submit@freebsd.org,
        Stephen Montgomery-Smith <stephen@freebsd.org>
Subject: Re: bin/170206: complex arcsinh, log, etc.
Date: Sat, 28 Jul 2012 21:27:20 -0500

 On 07/28/2012 06:12 PM, Stephen Montgomery-Smith wrote:
 > On 07/28/2012 11:15 AM, Stephen Montgomery-Smith wrote:
 >> On 07/28/2012 10:46 AM, Stephen Montgomery-Smith wrote:
 >>> OK.  This clog really seems to work.
 >>>
 >>> x*x + y*y - 1 is computed with a ULP less than 0.8.  The rest of the
 >>> errors seem to be due to the implementation of log1p.  The ULP of the
 >>> final answer seems to be never bigger than a little over 2.
 >>>
 >>>
 >>
 >>
 >> Also, I don't think the problem is due to the implementation of log1p.
 >> If you do an error analysis of log(1+x) where x is about exp(-1)-1, and
 >> x is correct to within 0.8 ULP, I suspect that about 2.5 ULP is the best
 >> you can do for the final answer:
 >>
 >> relative_error(log(1+x)) = fabs(1/((1+x) log(1+x))) * relative_error(x)
 >>                    = 1.58 * relative_error(x)
 >>
 >> Given that log1p has itself a ULP of about 1, and relative error in x is
 >> 0.8, and considering x=exp(-1)-1, this gives a ULP at around 1.58*0.8+1
 >> = 2.3.  And that is what I observed.
 >>
 >> (Here "=" means approximately equal to.)
 >
 > And I should add that I just realized that ULP isn't quite the same as
 > relative error, so an extra factor of up to 2 could make its way into
 > the calculations.
 
 In fact, I think I messed it up a bunch.
 
 So let D(f(x)) denote the absolute error in f(x).
 
 D(f(x)) = f'(x) Dx.
 
 So
 
 D(log(1+x)) = Dx/(1+x).
 
 Suppose x is a bit bigger than exp(-1)-1 = -0.63, which has ilogb = -1.
 If the ULP error in calculating x is around 0.8, then
 Dx = 0.8 * 2^(-d-1),
 where d is the number of explicit bits in the mantissa (52 for doubles).
 
 So D(log(1+x)) = Dx/0.37.
 Also, log(1+x) is a little bit bigger than -1, so ilogb(log(1+x)) = -1.
 
 ULP(log(1+x)) = Dx/0.37 * 2^(d+1) = 0.8/0.37 = 2.2
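 
 (Spelled out as a quick C check, assuming IEEE double so that d = 52
 explicit mantissa bits.  This just redoes the arithmetic above; it is
 not a test of the library code:
 
 #include <float.h>
 #include <math.h>
 #include <stdio.h>
 
 int
 main(void)
 {
 	int d = DBL_MANT_DIG - 1;		/* 52 explicit mantissa bits */
 	double x = -0.63;			/* a bit bigger than exp(-1)-1 */
 	double Dx = 0.8 * ldexp(1.0, -d - 1);	/* 0.8 ULP, since ilogb(x) = -1 */
 	double Dlog = Dx / (1 + x);		/* D(log(1+x)) = Dx/(1+x) */
 
 	printf("ilogb(x) = %d, ilogb(log1p(x)) = %d\n",
 	    ilogb(x), ilogb(log1p(x)));
 	printf("error in log(1+x): %.2f ULP\n", Dlog * ldexp(1.0, d + 1));
 	return (0);
 }
 
 It prints ilogb values of -1 for both, and about 2.16 ULP.)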
 
 Now add 1 for the ULP error of log1p itself, and this analysis only 
 gives a bound of 3.2 ULP.  So the observed bound is actually better 
 than expected.  Even if the ULP error of log1p were as good as 
 possible (0.5), the best bound this analysis gives is 2.7.  We still 
 do a bit better than that.
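 
 In case it is useful, the kind of "observed" ULP figure quoted above
 can be measured with something along the lines of the sketch below,
 which compares double log1p against long double log1pl as a reference
 (the ulp_error helper name is just for the sketch).  This is only a
 rough illustration: it assumes long double really has extra precision
 (true on i386/amd64, not on every arch), and I am not claiming this
 exact harness produced the numbers above:
 
 #include <float.h>
 #include <math.h>
 #include <stdio.h>
 #include <stdlib.h>
 
 /* Error of "got" in ULPs of the reference value "want". */
 static double
 ulp_error(double got, long double want)
 {
 	long double ulp;
 
 	ulp = ldexpl(1.0L, ilogbl(want) - (DBL_MANT_DIG - 1));
 	return ((double)(fabsl((long double)got - want) / ulp));
 }
 
 int
 main(void)
 {
 	double maxerr = 0, x;
 	int i;
 
 	srandom(1);
 	for (i = 0; i < 1000000; i++) {
 		/* Random x near exp(-1)-1, where the analysis above applies. */
 		x = -0.63 + ((double)random() / 0x7fffffff - 0.5) * 0.001;
 		maxerr = fmax(maxerr, ulp_error(log1p(x), log1pl(x)));
 	}
 	printf("max observed log1p error: %.3f ULP\n", maxerr);
 	return (0);
 }
 
 The same harness works for the real part of clog by swapping in the
 function under test and a higher-precision reference for it.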
 
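 PS: regarding the quoted remark that x*x + y*y - 1 can be computed to
 within about 0.8 ULP: one standard way to get that kind of accuracy is
 to compute each square exactly as a hi+lo pair and then add the pieces
 in a careful order.  The sketch below (the helper name is made up for
 illustration) uses fma() for the exact products; the clog code under
 discussion may well do it differently, e.g. with Dekker splitting, and
 I am not claiming any particular ULP bound for this version:
 
 #include <math.h>
 
 /*
  * x*x + y*y - 1, assuming |y| <= |x| and x*x + y*y near 1, so that
  * 0.5 <= x*x <= 2 and the subtraction x2hi - 1 is exact (Sterbenz).
  */
 static double
 xxpyym1(double x, double y)
 {
 	double x2hi, x2lo, y2hi, y2lo;
 
 	x2hi = x * x;
 	x2lo = fma(x, x, -x2hi);	/* x*x == x2hi + x2lo exactly */
 	y2hi = y * y;
 	y2lo = fma(y, y, -y2hi);	/* y*y == y2hi + y2lo exactly */
 	return (((x2hi - 1) + y2hi) + (y2lo + x2lo));
 }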


