Date:      Thu, 01 Nov 2001 10:20:14 +1100
From:      Peter Jeremy <peter.jeremy@alcatel.com.au>
To:        Bakul Shah <bakul@bitblocks.com>
Cc:        Poul-Henning Kamp <phk@critter.freebsd.dk>, Peter Wemm <peter@wemm.org>, arch@FreeBSD.ORG
Subject:   Re: 64 bit times revisited..
Message-ID:  <20011101102014.D94635@gsmx07.alcatel.com.au>
In-Reply-To: <200110311947.OAA05182@devonshire.cnchost.com>; from bakul@bitblocks.com on Wed, Oct 31, 2001 at 11:47:28AM -0800
References:  <20011031163741.C85128@gsmx07.alcatel.com.au> <200110311947.OAA05182@devonshire.cnchost.com>

On 2001-Oct-31 11:47:28 -0800, Bakul Shah <bakul@bitblocks.com> wrote:
>> >On my 500MHz PIII it takes about 4.6ns to divide a 64 bit
>> >number by a 64 bit 10^9.
...
>Dang!  My number is bogus:-(  The best number I get is about
>130ns on a 533MHz PIII.

Your best number is not particularly relevant.  As you've noticed, the
timing is data-dependent, so you need to average the time over random
data (I was using 100 random 64-bit numbers courtesy of random(3)).
Your average time is probably closer to my 211nsec.  (Also, I included
negative quotients - which your program doesn't allow and which I
suspect are slower).
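
For concreteness, something like the sketch below measures it.  This
is not the exact program I used - the iteration counts and the use of
clock_gettime(2) are assumptions - but it shows the shape of the
test: averaging a signed 64-bit division by 10^9 over random
operands, negative ones included (loop overhead is ignored, so this
only gives a ballpark figure):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define NVALS 100      /* random operands, as in the test above */
    #define REPS  100000   /* passes over the operand array */

    int
    main(void)
    {
            int64_t vals[NVALS];
            volatile int64_t q;
            struct timespec t0, t1;
            double elapsed;
            int i, j;

            srandom((unsigned)time(NULL));
            for (i = 0; i < NVALS; i++) {
                    /*
                     * random(3) returns 31 bits, so glue two calls
                     * together; flip the sign half the time so that
                     * negative quotients are exercised as well.
                     */
                    vals[i] = ((int64_t)random() << 31) | random();
                    if (random() & 1)
                            vals[i] = -vals[i];
            }

            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (j = 0; j < REPS; j++)
                    for (i = 0; i < NVALS; i++)
                            q = vals[i] / 1000000000LL;  /* 64/64 divide */
            clock_gettime(CLOCK_MONOTONIC, &t1);
            (void)q;

            elapsed = (t1.tv_sec - t0.tv_sec) * 1e9 +
                (t1.tv_nsec - t0.tv_nsec);
            printf("%.1f ns/division\n", elapsed / ((double)REPS * NVALS));
            return (0);
    }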

There are problems with any approach that involves division:
1) Division is _slow_.  It's much harder to build a fast divider than
   a fast multiplier and dividers don't scale to higher precisions.
2) Some architectures don't have hardware integer division at all.
   Alpha is one.  I'm not sure about IA-64 or Sparc64.
3) The solution has to work _today_ on embedded processors.  My guess
   is that a 486 is around 2 orders of magnitude slower than a PIII.

>> 1) Fixed point seconds with M bits to the left of the binary
>>    point and N bits to the right.
>> 2) Integral number of (fractional second units).  (As used on
>>    the IBM System 360 and later).
>
>My vote is for 2), where the fractional unit is 10^-9 seconds
>and I called it `nstime64_t'.

The problem with fractional units is extending them when you run out
of range and/or resolution.  Either you extend them as a binary
fraction (from memory, the S/360 uses a 64-bit TOD which counts in
units of 2^-12 usec), or you switch to a smaller decimal unit.  The
former means you no longer have a clean metric fraction.  The latter
makes it very painful to convert between the "old" and "new" formats.
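
To put a number on the first problem: one such TOD unit is 1000/4096
= 0.244140625 nsec, so there is no integer scale factor to
nanoseconds.  A conversion sketch (the function name is mine, and it
ignores overflow of the intermediate product near the top of the
range):

    #include <stdint.h>

    /*
     * Convert S/360-style TOD units (2^-12 usec each) to nanoseconds.
     * 1000/4096 is not an integer, so the result truncates whenever
     * (tod * 1000) is not a multiple of 4096.
     */
    static uint64_t
    tod_to_nsec(uint64_t tod)
    {
            return ((tod * UINT64_C(1000)) >> 12);
    }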

nsec resolution is inadequate _today_ for things like NTP PLLs.  It's
even inadequate for today's CPU speeds.  Converting your nstime64_t to
struct timespec needs an expensive 64-bit division returning both
quotient and remainder (the Alpha has tricks to manage reasonably fast
division by a constant, but you don't get a remainder).  Converting it
to a struct timeval needs a further 32-bit division.
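
In code, the timespec conversion would look something like this
sketch (nstime64_t as Bakul described it; the function name is mine):

    #include <stdint.h>
    #include <time.h>

    typedef int64_t nstime64_t;     /* nanoseconds since the epoch */

    static void
    nstime_to_timespec(nstime64_t t, struct timespec *ts)
    {
            ts->tv_sec = (time_t)(t / 1000000000);  /* 64-bit quotient... */
            ts->tv_nsec = (long)(t % 1000000000);   /* ...plus a remainder */
            /*
             * Negative t would additionally need tv_nsec normalized
             * into [0, 10^9) - yet more work on top of the divide.
             */
    }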

OTOH, a fixed-point binary scheme makes it easy to extract the seconds
- at worst a shift to align the binary point to a word boundary.
Converting to either struct timeval or struct timespec needs a single,
unsigned, widening multiply.  And I don't think there's even any need
for using all 64 fraction bits - the error in using a 32x32->64
multiply averages out to ~0.25nsec.  This can be done efficiently even
on a 386.
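
A sketch of that conversion (the struct layout and names are mine,
assuming a 64-bit seconds field and only the top 32 fraction bits, as
described above):

    #include <stdint.h>
    #include <time.h>

    struct fixtime {
            int64_t  sec;   /* whole seconds */
            uint32_t frac;  /* binary fraction of a second, 2^-32 s units */
    };

    static void
    fixtime_to_timespec(const struct fixtime *ft, struct timespec *ts)
    {
            ts->tv_sec = (time_t)ft->sec;
            /*
             * frac/2^32 seconds in nsec: one unsigned 32x32->64
             * widening multiply plus a shift - no division anywhere.
             * The result is truncated, giving the small sub-nsec
             * error mentioned above.
             */
            ts->tv_nsec = (long)(((uint64_t)ft->frac * 1000000000U) >> 32);
    }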

>- converting to 64 bit time_t in a piecemeal fashion is not
>  worth it at this point.
>  - The sky is not falling down until 2038!

Y2K should have taught us that software winds up being used for far
longer than was originally intended - and one of the major costs was
finding this software.  2038 is biting some applications now - and the
number will only rise with time.  The quicker we solve the problem,
the less rework will be necessary.

>- I am advocating a 64 bit type with a nanosecond resolution
>  and 584 years of span as a reasonable compromise type:

nsec resolution is inadequate today.  It'll be a joke in 10 years, let
alone by 2038.  IMHO, it makes sense for a timestamp to be (1<<N) bits
in size - anything else will probably result in the compiler padding
it to this size anyway.  64 bits cannot provide both the required
range and resolution, so the next logical size is 128 bits - and 128
bits would seem to provide both adequate range (+/-292e9 years) and
resolution (5e-20 sec) for all practical purposes in the foreseeable
future.
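
For a 64.64 split those figures fall out directly: 2^63 sec ~= 9.2e18
sec ~= 2.9e11 years of signed range, and 2^-64 sec ~= 5.4e-20 sec of
resolution.  As a struct (field names mine):

    #include <stdint.h>

    struct time128 {
            int64_t  sec;   /* +/- 2^63 sec ~= +/- 2.92e11 years */
            uint64_t frac;  /* 1 unit = 2^-64 sec ~= 5.42e-20 sec */
    };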

>  Note this is just a proposal and I have not thought through
>  all of its implications (for example, on struct tm).  It
>  may be that we need a more radical solution such as a 128 bit
>  type as PHK & others advocate, or a long double, or something.

I quite like PHK's 64.64 proposal.

>- The issue of what the kernel uses internally is separate.
>  As long as the kernel's representation can be mapped to
>  time_t (or its descendent) we are fine.

Agreed.  And a binary kernel timestamp is the simplest way to
achieve this.

>- what a filesystem uses for timestamping files is related

Agreed, but this is a different bikeshed - on-disk timestamps do not
need to be the same as either kernel or userland timestamps.  The
range and resolution requirements are not necessarily the same as
for kernel or userland timestamps, and the resolution requirements
may differ for the 3 different inode timestamps.  The only real
relationship between the on-disk timestamps and kernel/userland
timestamps is that the kernel can efficiently convert between them.

Peter
