A month and a half ago, I reported on a simple experiment to measure the performance of a timer from the teaching labs. I started the timer running at a particular time, and over the next couple of weeks checked in regularly with the Official US Time display at the NIST website, recording the delay between the timer reading and the NIST clock.
As a follow-up experiment, I did the same thing with two more runs: a Good Cook brand digital timer picked up for $10 in the local supermarket, and the same Fisher Scientific stopwatch/timer from the first experiment, this time stashed in the mini-fridge in the lab across the hall between measurements. The results look like this:
The red points are the data from the original test of the stopwatch, the blue points are the timer in the fridge, and the green points are the supermarket timer. The vertical axis is the delay between each timer and the NIST clock (all three ran slow), and the horizontal axis is the elapsed time between starting the timer and a given measurement.
As you can see, this more or less agrees with expectations. The timer sold as a piece of laboratory equipment is the best of the lot, with the cheap kitchen timer slightly less accurate. Putting the stopwatch in an environment significantly colder than room temperature (the average temperature recorded by an indoor-outdoor thermometer with its outdoor probe sitting in the mini-fridge was about -3 °C) significantly degrades its performance, as you would expect for a timer based on a physical artifact (presumably a quartz crystal).
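For the curious, here's the textbook shape of that temperature dependence. The 32.768 kHz tuning-fork crystals typical of cheap timers run fastest at a "turnover" temperature near room temperature and fall off parabolically on either side. The coefficient below is a generic datasheet-style value, not anything measured for these particular timers, so treat this as a sketch:

```python
# Sketch of the typical frequency-vs-temperature curve for a 32.768 kHz
# tuning-fork quartz crystal. BETA and T0 are generic datasheet-style values
# (roughly 0.034 ppm/degC^2, turnover near 25 degC), NOT measurements of
# the timers in this experiment.
BETA = 0.034e-6   # fractional frequency change per degC^2
T0 = 25.0         # turnover temperature, degC

def fractional_rate_error(temp_c):
    """Fractional frequency offset (delta f / f) at a given temperature."""
    return -BETA * (temp_c - T0) ** 2

for t in (-3, 5, 15, 25, 35):
    print(f"{t:4d} C: {fractional_rate_error(t):+.2e}")
```

At the -3 °C of the mini-fridge, that generic curve predicts an extra fractional error of about 2.7×10^-5, which is at least the right ballpark for the difference between the warm and cold slopes below.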
All three data sets are beautifully linear, with slopes of (1.016 ± 0.00033)×10^-5 (for the stopwatch), (3.336 ± 0.0013)×10^-5 (for the cold stopwatch), and (1.538 ± 0.0017)×10^-5 (for the Good Cook timer), in seconds of lag per second of elapsed time. The uncertainty for the more recent runs is a little larger, possibly because I was a little more casual about recording these, but also because I didn't take data for as long in the second experiment.
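For anyone who wants to reproduce the fits, the procedure is just an ordinary least-squares line through (elapsed time, delay) pairs. A minimal sketch, with made-up numbers standing in for the logged readings:

```python
import numpy as np

# Hypothetical readings: (elapsed time since start [s], delay behind NIST [s]).
# These numbers are made up for illustration; the real data came from the
# logged timer-vs-NIST comparisons.
elapsed = np.array([0.0, 86400.0, 172800.0, 345600.0, 604800.0, 1209600.0])
delay = np.array([0.0, 0.9, 1.75, 3.5, 6.2, 12.3])

# Ordinary least-squares fit of a line: delay = slope * elapsed + intercept.
# cov=True returns the covariance matrix, whose diagonal gives the variances
# of the fitted parameters.
(slope, intercept), cov = np.polyfit(elapsed, delay, 1, cov=True)
slope_err = np.sqrt(cov[0, 0])

print(f"slope = ({slope:.3e} +/- {slope_err:.1e}) s/s")
print(f"i.e. the timer loses about {slope * 86400:.2f} s per day")
```

A slope of 1.0×10^-5 works out to a bit under a second of lag per day.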
The dramatic difference between the cold stopwatch and the same watch at room temperature gives you a good idea of what clockmakers have to contend with when making precision timepieces. If you were relying on one of these as a navigational instrument, it would do vastly better in the summer than in the winter (or in warmer climates than colder ones), which could be a major issue for someone running a shipping business. That's why John Harrison is justly celebrated, and why the Longitude Prize he was chasing was such a big deal.
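To put numbers on that: the Earth turns through one degree of longitude every four minutes, so every second of unaccounted clock error is about half a kilometer of position error at the equator. A rough back-of-the-envelope using the slopes above (the six-week voyage length is my assumption for illustration):

```python
# Rough illustration of the navigation stakes: how far off a longitude fix
# drifts if a chronometer's rate changes with temperature and nobody corrects
# for it. The voyage length is an arbitrary assumption for the example.
SECONDS_PER_DAY = 86400
DEG_LON_PER_SECOND = 1 / 240          # Earth turns 1 degree every 4 minutes
KM_PER_DEG_AT_EQUATOR = 111.32

warm_rate = 1.016e-5                  # fractional drift at room temperature
cold_rate = 3.336e-5                  # fractional drift in the fridge
voyage_days = 42                      # assumed six-week crossing

extra_seconds = (cold_rate - warm_rate) * SECONDS_PER_DAY * voyage_days
error_km = extra_seconds * DEG_LON_PER_SECOND * KM_PER_DEG_AT_EQUATOR
print(f"Unmodeled temperature shift: {extra_seconds:.0f} s of clock error,")
print(f"about {error_km:.0f} km of longitude error at the equator")
```

Roughly 40 km of position error from nothing but a change in cabin temperature, which is exactly the kind of thing Harrison's temperature-compensated designs were built to beat.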
The next obvious extension of this would be to borrow a couple more stopwatches from the teaching labs and test Fisher's quality control. I think I've accomplished more or less what I hoped to at this point, though, so I'll move on to other things.
Most portable timers/stop-watches/etc. are based on the piezoelectric properties of quartz crystals as the time reference. The resonant frequency of a piece of quartz is related to its dimensions and mass, as well as the vibration mode being exploited and the "cut" of the slab. Temperature affects this, but in unexpected ways. Depending upon the "cut" of the slab of quartz from the raw crystal (e.g., the displacement of the slab being cut from the X-Y-Z axes), it's possible to have an "S"-shaped temperature coefficient. Ideally, by tweaking the cut, one can obtain a near-zero temperature coefficient over a small range near room temperature (or over a small range at an elevated temperature, which makes putting the crystal in a temperature-controlled oven much easier).
Your mission, if you choose to accept it, is to find out how the error in the timers/stop-watches varies given a range of temperatures different from room temperature.
Dave
So you've just demonstrated what everyone knows, which is that time runs slower the closer you get to 0 K?
So it's not relativistic time dilation.