Date:      Sat, 1 Oct 2005 00:03:01 +1000 (EST)
From:      Bruce Evans <bde@zeta.org.au>
To:        Steve Kargl <sgk@troutmask.apl.washington.edu>
Cc:        freebsd-standards@freebsd.org
Subject:   Re: complex.h math functions
Message-ID:  <20050930221846.T34595@delplex.bde.org>
In-Reply-To: <20050929195552.GA14982@troutmask.apl.washington.edu>
References:  <20050929195552.GA14982@troutmask.apl.washington.edu>


On Thu, 29 Sep 2005, Steve Kargl wrote:

> Is it permissible to implement the complex.h math
> functions in terms of functions in math.h?  For

There are no namespace problems AFAIK, and the C standard doesn't
require any particular quality of implementation.

> example, if z = x + I * y, then
>
> cos(z) = cos(x) * cosh(y) - I sin(x) * sinh(y)
>
> This can be (naively?) implemented as
>
> double complex
> ccos(double complex z)
> {
>   double x, y;
>   x = creal(z);
>   y = cimag(z);
>   return (cosh(y) * (cos(x) - I * sin(x) * tanh(y)));
> }

The original formula gives a much better implementation.  It avoids
serious bugs at points where tanh(y) == +-Inf; it may avoid bugs at
points where y is a NaN (*); and tanh(y) takes more work to compute
than sinh(y), while multiplying it by cosh(y) adds 1 extra
multiplication, which doesn't take much longer (relatively) but loses
precision (relatively more).
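
For comparison, an untested sketch of a ccos() that applies the original
formula directly (the _sketch name is only for illustration; exceptional
args are not handled specially here, which matters below):

#include <complex.h>
#include <math.h>

/*
 * Sketch only: ccos(x + I*y) = cos(x)*cosh(y) - I*sin(x)*sinh(y).
 * No special handling of Infs, NaNs or signed zeros.
 */
double complex
ccos_sketch(double complex z)
{
        double x = creal(z);
        double y = cimag(z);

        return (cos(x) * cosh(y) - I * (sin(x) * sinh(y)));
}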

(*) The best treatment of exceptional args is unclear, but C99 specifies
a treatment in detail in at least n869.txt.  This spec probably rules out
simply using formulae as above -- exceptional args need to be handled
specially unless you can be sure that applying the formulae results in
the specified NaNs, etc.  From n869.txt:

%                ccos(z)     = ccosh(iz)
% ...

Some functions, including ccos(), are specified in terms of others like
this.
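
So one way to write ccos() is as a thin wrapper, something like the sketch
below (the name is mine; note that building I*z by hand as -Im(z) + I*Re(z)
does not necessarily preserve the signs of zero components, so a real
implementation has to be more careful):

#include <complex.h>

/*
 * Sketch only: ccos(z) = ccosh(I*z), with I*z constructed as
 * (-cimag(z)) + I*creal(z).  Signed zeros need more care than this.
 */
double complex
ccos_via_ccosh(double complex z)
{
        return (ccosh(-cimag(z) + I * creal(z)));
}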

%        G.5.2.4  The ccosh functions
%
%        [#1]
%
%          -- ccosh(conj(z)) = conj(ccosh(z)) and ccosh is even.

A strict reading says that we must ensure that if a formula is used, the
rounding must always go in such a way that these symmetries are preserved
_exactly_.  Rounding to nearest may give this, but this is not clear.
Rounding towards Inf obviously wouldn't give this.
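
For example, a check along the following lines (with made-up test points)
would have to hold exactly, not just to within an ulp, once a ccosh()
exists:

#include <assert.h>
#include <complex.h>
#include <stdio.h>

/*
 * Check ccosh(conj(z)) == conj(ccosh(z)).  A full check would also
 * compare the signs of zero parts; == is enough for a first look.
 */
static void
check_conj_symmetry(double complex z)
{
        double complex a = ccosh(conj(z));
        double complex b = conj(ccosh(z));

        assert(creal(a) == creal(b) && cimag(a) == cimag(b));
}

int
main(void)
{
        check_conj_symmetry(0.5 + I * 0.25);
        check_conj_symmetry(-2.0 + I * 3.0);
        printf("ok\n");
        return (0);
}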

In the following, "I" is spelled "i" and "*" (multiplication) is omitted.

%
%          -- ccosh(+0+i0) returns 1+i0.

The formula should give this automatically provided the real functions
aren't broken.

Handling of -0 and -i0 (in various combinations with +0's) doesn't seem
to be specified for this function, although it is for others.  The sign
of the real 0 doesn't matter because the result is 1.  I suppose the
sign for ccosh(-i0) should track the sign of the arg in the same way
as for similar real functions.  Since ccosh(0) == 1, the sign of the
real 0 can't affect the sign of the imaginary part of the result,
unlike for some other functions.

I just realized that many of the complications here are for type-generic
functions.  0.0 is quite different from (0.0 + I * 0.0), since the former
has type double but the latter has type double complex.  When combined
with +0 being slightly different from -0, this gives zillions of cases.
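
A small illustration of both points: signbit() (from math.h) tells +0 from
-0 even though they compare equal, and 0.0 + I * 0.0 has a different type
and size than 0.0.  (Note also that building a complex value with a
negative zero part via x + I*y doesn't even work directly, since
-0.0 + 0.0 is +0.0 in the default rounding mode.)

#include <complex.h>
#include <math.h>
#include <stdio.h>

int
main(void)
{
        double pz = +0.0, nz = -0.0;            /* equal, but sign bits differ */
        double complex z = 0.0 + I * 0.0;       /* type double complex, not double */

        printf("pz == nz: %d\n", pz == nz);                     /* prints 1 */
        printf("signbit:  %d %d\n", signbit(pz) != 0, signbit(nz) != 0);
        printf("sizes:    %zu vs %zu\n", sizeof(pz), sizeof(z));
        return (0);
}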

%
%          -- ccosh(+0+i∞) returns NaN±i0 (where the sign of the
%             imaginary part of the result is unspecified) and raises
%             the invalid exception.

(The plain text of n869.txt drops the infinity symbols, which makes
several of these cases look like bugs; they are written out here.)  With
an infinite imaginary part the NaN real part is natural, since cos(∞) is
a NaN, but the ±i0 imaginary part is not what the plain formula gives
(sinh(+0)*sin(∞) is 0*NaN == NaN), so this case needs special handling.

%
%          -- ccosh(+∞+i0) returns +∞+i0.

Only the +∞ case needs to be listed here; the results for -∞ and for -i0
follow from ccosh being even and from the conj() symmetry above, so the
sign of the imaginary zero in the result does track the args.

%
%          -- ccosh(+∞+i∞) returns +∞+iNaN and raises the invalid
%             exception.
%
%          -- ccosh(x+i∞) returns NaN+iNaN and raises the invalid
%             exception, for finite nonzero x.

With the infinities restored these are consistent: cos(∞) and sin(∞) are
NaNs and raise the invalid exception, and the formula gives the NaN parts
automatically; only the +∞ real part in the first case needs special
handling, since the formula would give ∞*NaN == NaN there.

%
%          -- ccosh(+∞+iy) returns +∞cis(y), for finite nonzero y.

cis(y) is (cos(y) + I * sin(y)).  With a real part of +∞, cosh(x) and
sinh(x) are both +Inf, so the result is infinite in each part with the
signs of cos(y) and sin(y); the formula gives this automatically.

%
%          -- ccosh(+0+iNaN) returns NaN±i0 (where the sign of the
%             imaginary part of the result is unspecified).

This requires not propagating the NaN in the imaginary part of the arg to
the imaginary part of the result when the real part of the arg is
precisely 0: sinh(+0) is 0, but the formula's 0*NaN is a NaN, not the
required 0.

%
%          -- ccosh(+∞+iNaN) returns +∞+iNaN.

With an infinite real part the NaN does propagate to the imaginary part
(sinh(+∞)*NaN is a NaN), but the +∞ real part needs special handling,
since the formula's cosh(+∞)*cos(NaN) is ∞*NaN == NaN.  The non-ASCII
character '\xb1' in the spec for ccosh(+0+iNaN) is Latin-1 '±', i.e.,
the sign of the imaginary zero is unspecified.

%
%          -- ccosh(x+iNaN) returns NaN+iNaN and optionally raises
%             the invalid exception, for finite nonzero x.

Nonzero x causes the NaN to be propagated to the real part.  Formulae
tend to do this automatically; it is the previous 2 cases that using
the formula for ccosh() breaks.

%
%          -- ccosh(NaN+i0) returns NaN±i0 (where the sign of the
%             imaginary part of the result is unspecified).
%
%          -- ccosh(NaN+iy) returns NaN+iNaN and optionally raises
%             the invalid exception, for all nonzero numbers y.

Similarly if only the real part of the arg is a NaN.

%
%          -- ccosh(NaN+iNaN) returns NaN+iNaN.

This requirement is obviously right and is hard to avoid satisfying
using formulae.
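
To make the above concrete, here is a rough, untested sketch (the name and
the exact case split are mine, for illustration only) of how the
exceptional args might be special-cased before falling through to the
formula; whether every case and sign matches the final spec would still
need checking:

#include <complex.h>
#include <math.h>

/*
 * Sketch only: ccosh(x + I*y) == cosh(x)*cos(y) + I*sinh(x)*sin(y) for
 * finite args, with the exceptional cases handled explicitly along the
 * lines of the spec quoted above.  Untested and not complete.
 */
double complex
ccosh_sketch(double complex z)
{
        double x = creal(z);
        double y = cimag(z);

        /* Ordinary case: the plain formula. */
        if (isfinite(x) && isfinite(y))
                return (cosh(x) * cos(y) + I * (sinh(x) * sin(y)));

        /*
         * ccosh(+-0 + I*(Inf or NaN)) == NaN +- I*0.  y - y gives the
         * NaN and raises invalid in the Inf case; the plain formula
         * would give a NaN imaginary part (0*NaN) instead of the 0.
         */
        if (x == 0.0)
                return ((y - y) + I * copysign(0.0, x));

        /*
         * ccosh(+-Inf + I*0) == +Inf + I*0, with the sign of the zero
         * tracking the args; the plain formula would give Inf*0 == NaN.
         */
        if (isinf(x) && y == 0.0)
                return (INFINITY + I * (copysign(1.0, x) * y));

        /*
         * ccosh(+-Inf + I*y), finite nonzero y: the plain formula
         * already gives +Inf cis(+-y), so just use it.
         */
        if (isinf(x) && isfinite(y))
                return (cosh(x) * cos(y) + I * (sinh(x) * sin(y)));

        /*
         * ccosh(+-Inf + I*(Inf or NaN)) == +Inf + I*NaN (invalid is
         * raised via Inf - Inf when y is an Inf).
         */
        if (isinf(x))
                return (INFINITY + I * (y - y));

        /*
         * Remaining cases: x is a NaN, or x is finite nonzero with y an
         * Inf or NaN.  ccosh(NaN + I*0) == NaN +- I*0; everything else
         * is NaN + I*NaN, with x - x and y - y producing the NaNs.
         */
        if (y == 0.0)
                return ((x - x) + I * copysign(0.0, y));
        return (((x - x) + (y - y)) + I * ((x - x) + (y - y)));
}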

About accuracy of complex functions:

It is unreasonable to require or expect the result to be as accurate as
possible for the real and imaginary parts independently, since even
ordinary operations don't have this property.  E.g., consider the simple
polynomial function f(z) = z*z.  For z = x+I*y, f(z) = (x*x - y*y) +
2*I*x*y.  Here 2*I*x*y is almost as accurate as x and y, but x*x - y*y
only has 1 or 2 bits of accuracy when x is as close as possible to y
(but different).  It might be useful for cpow(z, 2) to be much more
accurate than z*z, but this would be hard to implement.  Standard
polynomial approximation methods just won't give enough accuracy.
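
For instance (a made-up demonstration, nothing more), with y the double
adjacent to x, the naively computed real part of z*z can be compared with
the same quantity computed with the cancellation factored out:

#include <math.h>
#include <stdio.h>

int
main(void)
{
        double x = 1.0 / 3.0;
        double y = nextafter(x, 1.0);   /* the closest double above x */

        /*
         * x*x - y*y as written loses almost all of its bits to
         * cancellation of the independently rounded squares; the
         * algebraically equal (x - y) * (x + y) does not.
         */
        printf("naive    = %.17g\n", x * x - y * y);
        printf("factored = %.17g\n", (x - y) * (x + y));
        return (0);
}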

OTOH, for cexp() and probably for most related functions, equal
(relative) accuracy for the real and imaginary parts occurs automatically
using standard formulae (since cexp(z) depends on Re(z) and Im(z)
independently):

     cexp(z) = exp(x) * (cos(y) + I * sin(y))

The multiplication expands the roundoff errors for the real and imaginary
parts independently, so the final relative error for each is < 2 ulps if
the errors for exp(), cos() and sin() are always < 1 ulp.  I'm not happy
with an error of 2 ulps of course, but don't know any more accurate way
of implementing cexp() short of doing everything in more precision.
Standard polynomial approximations with complex args might be more
accurate relative to |cexp(z)| = exp(x), but couldn't be very accurate
for the real and imaginary parts independently.  This corresponds to the
arg reduction requirements for cos(y) and sin(y) being very different.
Remez methods also seem to be completely unsuitable for complex args
since, according to my limited understanding of them, they depend on
delicate cancellations which are only likely to occur when their complex
arg is in a discrete set of directions.
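
For what it's worth, a sketch of cexp() along those lines (the name is
mine; exceptional args and overflow in exp() are ignored here):

#include <complex.h>
#include <math.h>

/*
 * Sketch only: cexp(x + I*y) = exp(x)*cos(y) + I*exp(x)*sin(y).  Each
 * part inherits the errors of exp() and cos()/sin() plus one rounded
 * multiplication, so < 2 ulps if the real functions are < 1 ulp.
 */
double complex
cexp_sketch(double complex z)
{
        double x = creal(z);
        double y = cimag(z);
        double r = exp(x);

        return (r * cos(y) + I * (r * sin(y)));
}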

For ccos(z) implemented as:

     ccos(z) = cos(x) * cosh(y) - I sin(x) * sinh(y)

the error is again < 2 ulps for each of the real and imaginary parts
provided it is < 1 ulp for all the real functions.

Bruce


