Date:      Fri, 09 Jan 1998 10:06:08 -0500
From:      "Louis A. Mamakos" <louie@TransSys.COM>
To:        Tim Tsai <tim@futuresouth.com>
Cc:        Greg Lehey <grog@lemis.com>, David Kelly <dkelly@hiwaay.net>, FreeBSD Hackers <Hackers@FreeBSD.ORG>
Subject:   Re: GPS for xntpd Stratum 1 servers 
Message-ID:  <199801091506.KAA22405@whizzo.TransSys.COM>
In-Reply-To: Your message of "Fri, 09 Jan 1998 01:07:37 CST." <19980109010737.63918@futuresouth.com> 
References:  <michaelh@cet.co.jp> <199801090340.VAA13302@nospam.hiwaay.net>  <19980108232535.39313@futuresouth.com> <19980109172927.06125@lemis.com>  <19980109010737.63918@futuresouth.com>


> > >   It'd be easier to use a couple of RS232<->RS422/RS485 converters.  At
> > > the typical GPS baud rate (4800/9600 baud) you should be able to run the
> > > wire hundreds of meters if not more (RS422 spec escapes me at the moment).
> > > The converters run for about $30-$100 a piece.
> > 
> > What sort of time accuracy are you hoping for here?  To transmit a
> > short datagram (say, 16 bytes) at 9.6 kb/s will take you 16 ms.  
> 
>   Since I am no expert on NTP I will refrain from further comments on
> that.  I kinda doubt the accuracy is dependent on the transmission latency
> though (I'd think that a long but deterministic transmission time is
> better than short but unpredictable transmission time), but what do I
> know.  Also, dependable transmission time over RS232 would be better than
> unpredictable ethernet transmission time in this application, no?

Actually no.  What's going on is that you're using an external clock reference
to discipline the logical clock in your computer.  

This logical clock is implemented using the "real" clock in the kernel;
NTP adjusts the frequency of that clock to effect a change in phase
(that is, it slightly changes the rate at which the clock advances in
order to advance or retard the time).  It does this on a periodic
basis, generating clock offset samples which are applied to the local
clock model.
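
In other words, something like this sketch (not the code xntpd
actually uses): the daemon hands the kernel a small correction and the
kernel slews the clock rate until the correction has been absorbed,
rather than stepping the time.  The offset value here is hypothetical.

/*
 * Sketch only: apply a measured clock offset as a gradual slew using
 * adjtime(2).  The kernel runs the clock slightly fast or slow until
 * the requested delta has been amortized away.
 */
#include <sys/time.h>
#include <math.h>
#include <stdio.h>

static int
slew_clock(double offset_sec)   /* reference time minus local time */
{
    struct timeval delta;

    /* Normalize so tv_usec is always in [0, 1000000). */
    delta.tv_sec  = (long)floor(offset_sec);
    delta.tv_usec = (long)((offset_sec - floor(offset_sec)) * 1e6);

    if (adjtime(&delta, NULL) < 0) {
        perror("adjtime");
        return (-1);
    }
    return (0);
}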

You can think of the local clock model as a phase-locked loop, where
the error signal is generated by these successive offset measurements,
and the characteristics of the PLL filter are varied as well.  The
"stiffness"
of the PLL increases as you get the clock running closer and closer to
the correct frequency; this causes the clock to exhibit less
short-term jitter, but makes it "harder" to change the frequency of
the clock.  Conversely, when the clock is further off the right
frequency, the PLL filter is looser so that you can more easily move
it closer to where you'd like it.
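
As a toy illustration (this is not the actual NTP local-clock code,
and the gains are made up), you can picture a loop filter where a time
constant plays the role of the stiffness: the larger it is, the less a
single noisy sample moves the clock.

/*
 * Toy PLL loop filter: each offset sample nudges both the phase and
 * the frequency estimate.  A larger time constant "tau" means smaller
 * gains, i.e. a stiffer loop.
 */
struct pll {
    double freq;    /* frequency correction, seconds per second  */
    double tau;     /* loop time constant ("stiffness"), seconds */
};

/* Returns the phase correction to apply over this polling interval. */
static double
pll_update(struct pll *p, double offset, double interval)
{
    double phase_gain = 1.0 / p->tau;
    double freq_gain  = 1.0 / (p->tau * p->tau);

    /* Integral term: fold part of the offset into the frequency. */
    p->freq += freq_gain * offset * interval;

    /* Proportional term: slew out part of the offset directly. */
    return (phase_gain * offset + p->freq * interval);
}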

The point of all this is that it's very important that the corrections
used by the local clock algorithm have as little jitter as possible, so
you can get the PLL to "tighten up" its control loop.  The preferred
way of doing this is to arrange for the 1-PPS (pulse-per-second)
signal from the external reference clock to capture the current offset
when it fires; typically the 1PPS signal is connected to a control
line (like DCD or CTS) which generates an interrupt when it transitions.
A line discipline or other kernel-level interrupt handler captures the
current system timestamp, and this is queued to be handled by the
daemon process at its leisure.
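
FreeBSD's PPS API (timepps.h) packages this mechanism up.  A rough
sketch of the consumer side, assuming that API is available; the
device name and the lack of error handling are purely illustrative:

/*
 * Sketch only: the serial driver timestamps the DCD transition in its
 * interrupt handler; the daemon collects the captured timestamps
 * whenever it gets around to it.
 */
#include <sys/timepps.h>
#include <fcntl.h>
#include <stdio.h>

int
main(void)
{
    int fd = open("/dev/cuaa0", O_RDWR);    /* hypothetical port */
    pps_handle_t handle;
    pps_params_t params;
    pps_info_t info;
    struct timespec timeout = { 3, 0 };

    if (fd < 0 || time_pps_create(fd, &handle) < 0)
        return (1);

    time_pps_getparams(handle, &params);
    params.mode |= PPS_CAPTUREASSERT;       /* timestamp the rising edge */
    time_pps_setparams(handle, &params);

    for (;;) {
        if (time_pps_fetch(handle, PPS_TSFMT_TSPEC, &info, &timeout) < 0)
            break;
        printf("PPS #%lu at %ld.%09ld\n",
            (unsigned long)info.assert_sequence,
            (long)info.assert_timestamp.tv_sec,
            info.assert_timestamp.tv_nsec);
    }
    return (0);
}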

Early implementations of NTP which used "loaner" Cesium clocks had no
RS-232 connection; just the 1PPS signal.  Essentially, you use the
high-precision clock to mark the beginning of each second, and arrange
for some other low-precision technique (like using NTP itself) to
label the second that was marked.

A whole different matter is the characteristics of the packet
exchanges between peer NTP implementations.  Over an ethernet, with
the filtering algorithms used to process a set of clock offset and
delay samples, you should easily be able to synchronize clocks to
within 10ms.  The limiting factors in a LAN environment tend to be
OS performance in servicing arriving packets and timestamping them in
a timely fashion, as well as a quality clock implementation in the
hardware.  The filtering algorithms will discard offset/delay samples
which are out of line due to instantaneous network congestion, etc.

It's this filtering algorithm which computes the "dispersion", a
measure of the quality of the path and clock: the jitter and noise on
the offset/delay samples.
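
A much-simplified version of the idea (not the exact NTPv3 clock
filter): keep a window of recent offset/delay samples, believe the one
with the lowest round-trip delay, since it is the least likely to have
been stretched by queueing, and report the scatter of the rest around
it as the dispersion.

#include <math.h>
#include <stddef.h>

struct sample {
    double offset;      /* measured clock offset, seconds     */
    double delay;       /* measured round-trip delay, seconds */
};

/* Pick the offset to feed the PLL and estimate the dispersion. */
static double
clock_filter(const struct sample *s, size_t n, double *dispersion)
{
    size_t i, best = 0;
    double d, sum = 0.0;

    for (i = 1; i < n; i++)
        if (s[i].delay < s[best].delay)
            best = i;

    for (i = 0; i < n; i++) {
        d = s[i].offset - s[best].offset;
        sum += d * d;
    }
    *dispersion = sqrt(sum / n);

    return (s[best].offset);
}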

There's some code that I submitted quite a while ago, and which is
integrated into (at least) 3.0-current, which adds a socket option to
timestamp packets when they are queued to socket buffers.  I modified
xntpd to enable this socket option and use that timestamp rather than
relying on the arrival timestamp taken when the SIGIO handler is
invoked.  I collected statistics on the difference between the two;
"normally" they are within a few hundred microseconds of each other on
a lightly loaded system.  Occasionally, however, there are huge
excursions, where the SIGIO handler invocation is delayed by 6 or 9
milliseconds; this is difficult to explain.
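
Assuming the option in question is the SO_TIMESTAMP/SCM_TIMESTAMP pair
that FreeBSD ships with, the application side looks roughly like this
(buffer sizes and error handling trimmed down for illustration):

/*
 * Sketch only: with SO_TIMESTAMP enabled, the kernel records when a
 * datagram was queued to the socket buffer and recvmsg(2) returns
 * that time as an SCM_TIMESTAMP control message.
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/uio.h>
#include <string.h>
#include <stdio.h>

static void
recv_with_timestamp(int sock)
{
    char data[512];
    char ctl[CMSG_SPACE(sizeof(struct timeval))];
    struct iovec iov = { data, sizeof(data) };
    struct msghdr msg;
    struct cmsghdr *cm;
    struct timeval tv;
    int on = 1;

    setsockopt(sock, SOL_SOCKET, SO_TIMESTAMP, &on, sizeof(on));

    memset(&msg, 0, sizeof(msg));
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctl;
    msg.msg_controllen = sizeof(ctl);

    if (recvmsg(sock, &msg, 0) < 0)
        return;

    for (cm = CMSG_FIRSTHDR(&msg); cm != NULL; cm = CMSG_NXTHDR(&msg, cm)) {
        if (cm->cmsg_level == SOL_SOCKET && cm->cmsg_type == SCM_TIMESTAMP) {
            memcpy(&tv, CMSG_DATA(cm), sizeof(tv));
            printf("kernel receive timestamp: %ld.%06ld\n",
                (long)tv.tv_sec, (long)tv.tv_usec);
        }
    }
}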

There's quite a bit of stuff going on inside xntpd; most of the
interesting bits are the filters used to process offset/delay samples
and the whole local clock model implementation (some of which lives in
the kernel on FreeBSD).  If you want more info, the NTP RFC actually
discusses all this - make sure you get the PostScript version of the
document, as the math is much easier to read than in the plain ASCII
text version.

louie