As I've mentioned before, I'm scheduled to teach a class on "A Brief History of Timekeeping" next winter term as part of the Scholars Research Seminar program. Even though I have a hundred other things to do, I continue to think about this a lot.
One of the goals of the course is to introduce students to the idea of doing research. The program was primarily conceived as a humanities/social sciences sort of thing, so most of the discussion I've seen has been framed in terms of library research. Of course, as a physicist, I very rarely need to look things up in the library; when I think about research, I think about measuring stuff. So, I'm thinking about ways to incorporate some timekeeping measurements into the course-- asking students to either do something that enables a precise measurement of time, or to do something that evaluates a timekeeping method.
To that end, I did a measurement over the last few weeks to see how plausible this was. The materials were really simple: a cheap timer/stopwatch from Fisher Scientific, borrowed from the teaching lab stockroom, and the Internet. Specifically, the time readout on the NIST webpage.
NIST helpfully provides a display on their webpage of the official US time, synced to NIST atomic clocks. I started the timer at exactly 1 pm on May 2, and then compared the reading on the timer to the NIST time at various intervals over the next few weeks. As you can see from the picture, as of the last measurement on May 25, the stopwatch was running slow by 20 seconds.
All told, I made six measurements, shown on the following graph:
This plot shows the number of seconds the timer was behind the NIST time as a function of the elapsed time according to the NIST clock. The points are my measurements of the delay, the solid line is a linear fit to the data.
There are a couple of really nice things about this measurement. First, even though the measurement apparatus is ridiculously simple-- I estimated the delay time by holding the timer up next to the screen, as seen in the picture above-- you get a really precise measurement. The slope of the line in the graph is 0.000010 seconds per second, or about one part in 100,000. This works out because the measurement is extended over three weeks, and there are 86,400 seconds in a day, so even a crude reading of a 20-second lag becomes a very precise rate.
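The fit itself is a one-liner. Here's a minimal sketch of the analysis, using made-up illustrative numbers (not the actual data from the post) chosen to match the overall behavior of a timer that ends up about 20 seconds slow after three weeks:

```python
import numpy as np

# Hypothetical check times and lags (illustrative, not the post's real data):
# elapsed NIST time in seconds, and how far the stopwatch lagged behind.
elapsed = np.array([0, 3, 7, 12, 18, 23]) * 86400.0  # six checks over ~3 weeks
lag = np.array([0.0, 2.6, 6.1, 10.4, 15.6, 19.9])    # seconds behind NIST

# Linear fit: the slope is the fractional rate error, in seconds lost
# per second of elapsed time.
slope, intercept = np.polyfit(elapsed, lag, 1)

print(f"fractional rate error: {slope:.2e} s/s")
print(f"drift: {slope * 86400:.2f} s/day")
```

With numbers like these the slope comes out around 1 part in 100,000, and the scatter of individual points about the fit line tells you how good the crude hold-it-up-to-the-screen readings really are.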
In addition to giving a good sense of the sort of precision that can relatively easily be obtained in time measurements, this is a nice lead-in to the discussion of atomic clocks, and specifically an analogy for why the fountain clock technique is so useful-- it allows a much longer time between measurements, which makes it possible to do incredibly precise measurements of time and frequency.
It's also interesting to look at this in terms of the advancement of technology. This is a fairly cheap timer, and it loses about 0.9 s per day. Amazingly, that's comparable to the famous watches of John Harrison from the 1760s. The very best performance of one of Harrison's watches was accurate to within 0.08 s/day; a second test found a somewhat more reasonable 0.83 s per day, which was still three times better than the official standard for the longitude prize (which Harrison was screwed out of for political reasons, as recounted in Dava Sobel's Longitude). That was an astonishing achievement at the time-- and now, these are cheap throwaway stopwatches.
Anyway, I was very pleased with the outcome of this test. I might check a couple of other timers, and different types of timers, but I think this sort of thing could definitely work as an assignment for the timekeeping class: ask the students to measure the performance of some sort of timer with a seconds readout, and see how different things stack up. Another possibility would be to check the performance as a function of environmental factors-- if I threw this timer into the fridge, how would that affect its operation?
Hmmm..... Check back in a few weeks, and we'll see what we see.
I wonder if you could get your hands on a pendulum-style clock. It would be interesting to compare vastly different technologies.
My brother, who was an engineer at NASA, had an electronic wristwatch that gained time on his wrist at body temperature, and lost time on his nightstand in a 68 °F air-conditioned condo (or vice versa). He found that if he took it off at night for a fixed period of time, it was within 20 seconds per year compared to the NBS time at Kennedy Space Center. His office had a wall clock with elliptical gears - time really did creep by at midday.
If you divide your timer readings by 1.000010, what's the largest error compared to NBS - i.e. what's the precision?
Chad, you could get your students doing experiments with simple pendulums or constructing simple sundials.
If you're willing to possibly sacrifice the stop watch, you may be able to crack it open and find the oscillator used and look up its data sheet for spec'ed stability.
32.768kHz is a common frequency used in clocks that I have seen. 10 ppm seems to be quite reasonable.
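That 10 ppm spec lines up nicely with the measured drift. As a quick back-of-the-envelope check (the arithmetic, not anything from the post):

```python
# A 10 ppm rate error, a typical spec for a cheap 32.768 kHz quartz
# crystal, translates directly into seconds of drift per day.
ppm = 10e-6
seconds_per_day = 86400
drift = ppm * seconds_per_day
print(drift)  # 0.864 s/day -- right in line with the measured ~0.9 s/day
```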
I wonder if you could get your hands on a pendulum-style clock. It would be interesting to compare vastly different technologies.
It might be possible, but probably not cheaply. At least not for a clock that really runs off the pendulum-- lots of clocks with pendula these days are really quartz clocks driving a pendulum, rather than a pendulum serving as the basis for a clock. It's definitely something to look into, though.
Chad, you could get your students doing experiments with simple pendulums or constructing simple sundials.
That's also on my mental list of things to suggest for students to try. I might go for a "take a picture of the shadow of something at the same time every day" kind of sundial, and I should look into what it would take to make a small Foucault pendulum. If I had more time, I would try to set up a webcam to take a picture at noon every day, and try to build up an analemma, but that seems likely to be aggravating, so I'll probably just grab images from the Internet for that part of things.
Amongst watch collectors, there is a small subset who are interested in high precision watches and who obsess about how to monitor their watches. They have refined your method. You might like to see
http://forums.watchuseek.com/f9/methods-determining-accuracy-watch-3827…
Also, there is some nice but readily accessible physics in measuring the effects of temperature on a quartz timer.
Experimental ideas:
We have a combination indoor/outdoor thermometer and clock, where the clock runs off of the NIST radio signal. Excellent reference point, but only good to the minute.
I have compared my GPS (uses satellite timing data) and the clock on my cell phone (from the phone company system) and they click over the minute at the same time but the phone (like the clock) only displays to the minute.
Use video to compare sync of these devices?
My iPad has gained several minutes, so it clearly does not sync when only talking to the wireless world. Students could compare the clocks in all sorts of devices they own.
Ummm, you think you know when "noon" is? Since you appear to be at about 73.9 deg W, you are only about 4 minutes off when on STANDARD time, but your solar noon is around 1 PM in the summer. Measuring local 'noon' with precision -- and any large fixed rod will do as a gnomon, you don't need a fancy sun dial -- would be a nice challenge. You also don't have to map out the entire analemma to see the effects of the equation of time. There is a huge swing in the difference between mean (standard) time and true solar time in the fall. And, with longer shadows in the fall, the measurements are easier to do. Also, it helps if the rod is angled due north (not magnetic north), but that is not as critical as having a level surface for the measurements.
I recommend "Seize the Daylight" as a text on the standardization problem.
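The equation-of-time swing mentioned above is easy to get a feel for numerically. Here's a sketch using a common trigonometric approximation (good to within a fraction of a minute; the function name and structure are my own, not from the comment):

```python
import math

def equation_of_time(day_of_year):
    """Approximate equation of time in minutes (apparent solar time
    minus mean solar time), via a standard trigonometric approximation."""
    b = math.radians(360.0 / 365.0 * (day_of_year - 81))
    return 9.87 * math.sin(2 * b) - 7.53 * math.cos(b) - 1.5 * math.sin(b)

# In early November (around day 307) the sundial runs roughly
# 16 minutes ahead of the clock -- the big fall swing the comment
# describes, on top of the fixed longitude offset within the time zone.
print(round(equation_of_time(307), 1))
```

Sweeping `day_of_year` over a full year traces out the north-south-symmetric part of the analemma: the sun swings from about 14 minutes behind the clock in mid-February to about 16 minutes ahead in early November.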
Sounds like someone has too much time on his hands.
heyooo