Date:      Fri, 31 May 2013 02:52:13 +1000 (EST)
From:      Bruce Evans <brde@optusnet.com.au>
To:        Warner Losh <imp@bsdimp.com>
Cc:        Stephen Montgomery-Smith <stephen@missouri.edu>, pfg@FreeBSD.org, freebsd-numerics@FreeBSD.org, David Schultz <das@FreeBSD.org>, freebsd-standards@FreeBSD.org
Subject:   Re: standards/175811: libstdc++ needs complex support in order use C99
Message-ID:  <20130531015915.N65390@besplex.bde.org>
In-Reply-To: <A3633CF7-B0D3-4E09-88FC-1D40197C652C@bsdimp.com>
References:  <201302040328.r143SUd3039504@freefall.freebsd.org> <510F306A.6090009@missouri.edu> <C5BD0238-121D-4D8B-924A-230C07222666@FreeBSD.org> <20130530064635.GA91597@zim.MIT.EDU> <A3633CF7-B0D3-4E09-88FC-1D40197C652C@bsdimp.com>

On Thu, 30 May 2013, Warner Losh wrote:

> I'm all for getting everything we can into the tree that produces an answer that's not perfect, but close. What's the error that would be generated with the naive implementation of
>
> long double tgammal(long double f) { return tgamma(f); }

On x86, 11 low bits wrong, for an error of 2048 ulps, in addition to any
errors in tgamma().  tgamma() on i386 inherits errors of 9 peta-ulps
(all 53 bits wrong) from i387 trig functions, but is OK on small args on
i386 and better on large args on amd64.

On sparc64, 60 low bits wrong, for an error of 1 exa-ulp, in addition
to any errors in tgamma(); the latter are the same as on amd64.  Sparc64
users of long double precision pay for it with a loss of performance of
a factor of several hundred, so they should be unhappy not to get the
extra bits when they ask for them (but the above inaccurate version
doesn't give them what they asked for).

On arches with long double == double, no difference.

On i386 with the default rounding precision of double, little difference.

> But assuming that, for some reason, produces errors larger than difference in precision between double and long double due to extreme non-linearity of these functions, having only a couple of stragglers is a far better position to be in than we are today.

Such extra errors normally don't happen.  In fact, my accuracy tests for
double functions essentially upcast the results of the double functions
and compare the resulting bits with the corresponding results of the long
double functions.  Nonlinearities tend to happen only at zeros and poles
of functions, where they are due to bugs, and for NaNs, where they are
due to implementation-defined behaviour.  It is difficult to even
determine the location of zeros and poles for some functions, and most
of the complexity in libm is to use especially careful calculations near
them when they are known.

Bruce


