Date:      Wed, 22 May 1996 21:44:06 +1000
From:      Bruce Evans <bde@zeta.org.au>
To:        freebsd-current@freebsd.org, syssgm@devetir.qld.gov.au
Subject:   Re: Wildly inaccurate clock calibration.
Message-ID:  <199605221144.VAA25788@godzilla.zeta.org.au>

>>>May 16 17:01:33 stupid /kernel: 63814 Hz differs from default of 1193182 Hz by more than 1%

>Ok, for my first stab in the dark, I lowered the interrupt frequency (as
>in the patch below).  This yielded very reasonable calibration values from
>1193766 to 1193782 (difference of 16Hz, or 0.0013% variation) over 15 tests.
>Perhaps it is working now, or perhaps it has some more subtle systematic
>error.  I can't tell yet for sure.

This seems like the right fix.  The interrupt frequency of 20000 must have
been a little too high.   I would have expected it to work though, since
no interrupts are involved.  calibrate_clocks() just needs to read both
clocks within somewhat less than the period of the fastest clock (50us
in this case).
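As a sketch of the arithmetic that read-both-clocks approach depends on (a hypothetical helper, not the actual calibrate_clocks() code): the i8254 counts down and reloads at its programmed maximum count, so two reads taken within one period differ by at most one wrap, which is what makes the elapsed-tick count unambiguous:

```c
#include <assert.h>

/*
 * Hypothetical helper: elapsed i8254 ticks between two latched reads.
 * The counter counts *down* and reloads at max_count; if both reads
 * happen within one period, at most one reload can occur between them.
 */
static unsigned
ticks_between(unsigned start, unsigned end, unsigned max_count)
{
	/* if the counter reloaded between the reads, add one full period */
	return (start >= end) ? start - end : start - end + max_count;
}
```

For example, with a maximum count of 11932, a read of 11932 followed by 5000 is 6932 elapsed ticks, while 100 followed by 11900 is 132 ticks across one reload.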

I see a range of 1192166 to 1192175 on a 486/33.  The slow 386 is doing
well to have less than twice as much variation.  Most of the variation
for a short test is caused by the latencies between the first and last
change of the RTC and when these changes are detected.  The maximum
latency is a little larger for the 386.

Please check that TIMER0_LATCH_COUNT is large enough for the slow 386.
It must be < 62.5 so that microtime() works with a clock interrupt
frequency of 16000 for pcaudio.  16000 is probably too large for the slow
386.  This is best fixed by not using pcaudio, but TIMER0_LATCH_COUNT
needs to work in all cases.  Perhaps it should be a variable.
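Assuming (as the context suggests) that the 62.5 figure is the 16000 Hz interrupt period expressed in microseconds, the constraint can be checked directly; the helper name here is hypothetical, not kernel code:

```c
#include <assert.h>

/*
 * Clock interrupt period in microseconds for a given interrupt
 * frequency.  The 16000 Hz pcaudio rate gives a 62.5us period, which
 * is where the "< 62.5" bound above comes from (hypothetical helper).
 */
static double
intr_period_us(unsigned freq)
{
	return 1e6 / freq;
}
```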

>If the set_timer_freq() call is necessary, and must be lower for slow 386
>CPUs, it could be set based on the probed CPU type.

It can be set to hz (= 100) now.  I made it large to exercise the
timer overflow handling and to trap bogus counts that might result from
reading something like 0xff from the hardware registers.  Bogus counts
don't seem to be a problem.

>What observable system feature will change if I accept 1193782Hz vs the
>default of 1193182Hz (600Hz difference)?  Will I be able to check "date"
>vs wall clock and find 43sec difference over 1 day?  Is this stuff used
>yet?

It is only used if you configure the kernel with option
CLK_USE_I8254_CALIBRATION or run `sysctl -w machdep.i8254_freq=1193782'.
Changing machdep.i8254_freq by 600 changes the timer maximum count by
600.0/hz (rounded to nearest) = 6.  I think there are no rounding errors
in this case, so the difference should indeed be about 43 seconds/day.
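The conversion can be sketched as a rounded division (a hypothetical helper mirroring the arithmetic described above, not the kernel's actual code):

```c
#include <assert.h>

/*
 * i8254 timer maximum count for a clock interrupt rate of hz, rounded
 * to nearest as described in the text (hypothetical helper).
 */
static unsigned
timer_max_count(unsigned i8254_freq, unsigned hz)
{
	return (i8254_freq + hz / 2) / hz;
}
```

With hz = 100 this gives 11932 for the default 1193182 Hz and 11938 for 1193782 Hz, a difference of 6 counts; 600/1193182 of a day is about 43 seconds.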
You want this iff the RTC is more accurate than the i8254.  You can
probably get more accuracy by giving an accurate frequency in the
sysctl command.  Accuracy is currently limited to about 50 parts in
1193182 by the rounding in the conversion to a maximum count.  This
is worth improving iff it is larger than the inaccuracy due to
temperature changes etc.
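To see where the ~50-part limit comes from: the stored maximum count only resolves the frequency to multiples of hz, so the effective frequency is quantized in 100 Hz steps and can be off by up to hz/2 = 50 Hz out of 1193182 (hypothetical helper, a back-of-the-envelope sketch rather than kernel code):

```c
#include <assert.h>

/*
 * Effective timer frequency after rounding to an integer maximum
 * count: the kernel ends up timing with max_count * hz, i.e. the
 * given frequency quantized to the nearest multiple of hz
 * (hypothetical helper).
 */
static unsigned
effective_freq(unsigned i8254_freq, unsigned hz)
{
	unsigned max_count = (i8254_freq + hz / 2) / hz;
	return max_count * hz;
}
```

At hz = 100 the default 1193182 Hz becomes an effective 1193200 Hz, an 18 Hz error; the worst case is 50 Hz, i.e. about 50 parts in 1193182.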

Bruce
