If 3σ results are wrong half the time, does that mean 6σ results are wrong all the time?
The social networks are a-buzz over the claim of a significant detection by the OPERA experiment of a neutrino pulse propagating superluminally over a 750 km baseline from CERN to the Gran Sasso lab.
You heard the claim - a neutrino pulse generated by 400 GeV protons from the old Super Proton Synchrotron.
Every 6 secs a kicker magnet bumps two 10.5μs wide proton pulses, separated by 10ms.
These crash into a 2m graphite target (that is 7 ns travel time through target);
the mesons (π and K predominantly) are focused into a 1 km vacuum tunnel where they decay, with the decay products collimated into a narrow beam down the tunnel, aimed right at the heart of Italy (isn't that an act of war or something?); some of the resultant neutrinos blow through the Gran Sasso lab chamber, where a few of them scatter and give a signal.
The experiment is optimized for 17 GeV ν production, with the primary science goal of looking at mu-tau oscillations.
The 10,500 ns wide pulse of neutrinos is cleanly detected at Gran Sasso; that is not in question. The claim is that the neutrinos are, on average, 60 ns early - ie that they are traveling very slightly faster than light. Light travel time over the distance is about 2.5 ms, or 2.5 million nanosecs.
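For what it's worth, the round numbers check out - a quick Python sanity check, using the baseline distance quoted further down:

```python
# Sanity check: light travel time over the claimed CERN-Gran Sasso baseline.
baseline_m = 730_534.61      # m, the distance quoted in the OPERA paper
c = 299_792_458.0            # m/s, defined value of the speed of light

t_light_s = baseline_m / c
print(f"{t_light_s * 1e3:.3f} ms = {t_light_s * 1e9:,.0f} ns")
```

which gives roughly 2.44 ms, i.e. about 2.5 million ns as quoted.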
So, I thought I'd better read the bloomin' thing...
The claim is based on a statistical analysis of the distribution of the proton beam, vs the distribution of the neutrino arrival times, and is sensitive to the leading and trailing edges of the signal.
The authors claim a combined random and systematic error budget of 10 ns, and therefore a tentative 6σ detection of faster than light transport of the neutrino pulse.
The light travel time down the decay tunnel is about 3 μs, or about 50 times longer than the anomalous early arrival; the mesons enter the tunnel at speeds extremely close to the speed of light.
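In numbers (a quick Python check of the tunnel timing against the anomaly):

```python
# Travel time down the ~1 km decay tunnel at essentially c,
# compared with the 60 ns early-arrival anomaly.
c = 299_792_458.0              # m/s
tunnel_time_s = 1000.0 / c     # time to traverse 1 km at c
ratio = tunnel_time_s / 60e-9  # tunnel time vs the 60 ns anomaly
print(f"tunnel time {tunnel_time_s * 1e6:.2f} us, {ratio:.0f}x the anomaly")
```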
There are several interesting points in the blog posts below, including the comments:
One is obvious: the claim strongly contradicts the speed limits on neutrinos from the Kamiokande detections of neutrinos from SN 1987a.
Those were electron antineutrinos, not muon/tau neutrinos, and at about 1,000 times lower energy.
Note there were controversies about an earlier neutrino burst detected at Mont Blanc, but even if real, the time difference is inconsistent with the propagation speed claimed here.
Secondly, there is concern about the local clocks, which are used to synch back to the GPS master clocks.
Butterworth makes an interesting point: the actual detectors sit at a particular location in the neutrino beam, which is about 3 km wide by the time it reaches Gran Sasso. Since there is a tiny bit of curvature on the beam front, this could bias the arrival time.
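A rough Python sketch of how big that curvature bias could be - the 1.5 km off-axis offset is my assumption (half the quoted beam width), not a number from the paper:

```python
# Back-of-envelope wavefront-curvature bias: extra path length for a
# detector sitting a distance r off the beam axis, source a distance L away.
c = 299_792_458.0
L = 730_000.0    # m, approximate baseline
r = 1_500.0      # m, assumed off-axis offset (half the ~3 km beam width)

extra_path_m = r**2 / (2 * L)      # small-angle (sagitta) approximation
delay_ns = extra_path_m / c * 1e9
print(f"{extra_path_m:.2f} m extra path, {delay_ns:.1f} ns delay")
```

So even at the edge of the beam the effect is a few ns at most - real, but small compared with the 60 ns claim.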
Finally, Eric asks whether they really used the geodesic distance, rather than the chord distance, which would be embarrassing, but the distance difference is close to the claimed time difference.
The claimed distance is 730534.61 ± 0.20 m
60 ns corresponds to a distance error of about 18 m - (oops, my bad, the "20 mm" originally here was a typo; the "100 times the 20 cm" is correct of course)
or about 100 times the claimed distance precision.
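For the record, the arithmetic (a quick Python check):

```python
# Distance covered in 60 ns at the speed of light, compared with
# the quoted 0.20 m baseline precision.
c = 299_792_458.0
dist_m = 60e-9 * c            # distance light covers in 60 ns
precision_m = 0.20            # claimed baseline measurement precision
print(f"{dist_m:.1f} m, i.e. ~{dist_m / precision_m:.0f}x the precision")
```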
Note the very impressive relative changes in the distances due to geology are irrelevant if the baseline distance is erroneous because of a confounding of geodesic distance with line-of-sight distance, unlikely as that may be.
The statistics are based on about 16,000 neutrino interactions generated from about 10^20 proton events.
So, what do I make of it?
Well, along with 99.87% of physicists, I am very skeptical.
It is almost certainly due to either a "silly error" - like the question about geodesic vs direct distance, or it is due to a subtle systematic error that the team overlooked but will be beaten out by the hordes of people reading over the paper this week.
A very, very faint possibility is that either relativity is wrong; or, muon neutrinos are weakly tachyonic; or, the neutrino tunneling between flavours is evidence of some funky stringy higher dimensional tunneling, and the geometry is weakly non-3D.
All of those are interesting, very interesting, but unlikely.
If the effect is real, then the NuMI/MINOS experiment in the US, at Fermilab and Soudan, ought to be able to see something.
An even more elegant experiment could also be done, by coupling the neutrino detector at Gran Sasso to the kick magnet timer at the SPS in CERN.
After about 100,000 or so pulses, the OPERA detection ought to shut off the SPS injection before the beam is fired...
of course the world would then also probably vanish in a puff of logic, or the Sun would go unstable, or a huge earthquake would destroy Gran Sasso just in time to save us and sustain the Causality Protection Conjecture.
In the mean time, I confidently predict that hundreds of theorists will come up with thousands of possible explanations, most or all of which will be completely wrong.
These will be comfortably outnumbered by the not-even-wrong speculations in social media.
All of these will be entertaining for a while, then become tedious, and then be quietly resolved.
The most interesting application of this would of course be signaling; there, however, private enterprise is way ahead of us, and clearly some very clever hedge fund with deep pockets has already co-located neutrino detectors at the exchanges to front-run everyone else.
It's just engineering, really.
Explains a lot...
Ethan at Starts with a Bang
Chad makes a dog's breakfast of it at Uncertain Principles
Sean's take at Cosmic Variance
Excellent perspective by Jon Butterworth at the Grauniad's Life and Physics
PS: More Sean at CV
I think that they used the straight line distance
http://operaweb.lngs.infn.it/Opera/publicnotes/note132.pdf
Part of me really hopes that Lorentz invariance is broken by flavor. That would just be so cool.
From what I understand, they measured the distance in a few ways, so this seems unlikely to be the problem.
I do think it is likely there's just an error, as boring as it is. Bph, I think everyone would like these results to be correct, because that would be awesome. Stuff like this comes around fairly often though, and it is almost always nothing.
My favorite quote on this was in the NYT article:
"Another CERN theorist, Alvaro DeRejula, said the claim was âflabbergasting.â âIf it is true, then we truly havenât understood anything about anything."
Poor Alvaro DeRejula must be getting a ribbing about that.
I think one of us is confused about the distance covered in 60 ns at c.
In rough numbers, I get 6e-8s * 3e8m/s = 18 m (not 18 mm as you state in your post... typo?)
My beer money is on a geography error, possibly related to orbital variations due to the lumpy gravitational field.
I don't know of any other attempts to use GPS to measure large absolute distances. Relative distances are routine (e.g. plate tectonics), but I can't think of any absolute distance applications.
If they put a radio telescope at each site, could they use interferometry to get a whole number of wavelengths distance measurement?
That would be 20 m, not 20 mm.
0.2 m = 20 cm
20 mm = 2 cm
20 m = 2,000 cm
I looked at the clock synchronization stuff, and I think they're okay. I can't speak for the various instrumental delays, though.
See comments at the relevant "Starts With a Bang" thread, many have proposed ways the measurements could be in error.
BTW, if some neutrinos are nevertheless FTL and others not (question: what energy range and what type would be superluminal?), we can call them newtrinos as an unexpected, effectively new subcategory of particle. Or maybe, whichever neutrinos are superluminal can be called tachinos (I think I have coined that, based on an Internet search). (PS: we have a problem defining their energy of course. I wonder who is thinking about that. Use of imaginary mass would give negative energy due to expansion of series, which is not what we need.)
Note also: I had some discussions at a picnic yesterday loaded with capable types like a Jefferson Lab (VA) nuclear physicist with whom I've done projects. He noted that maybe photons themselves don't even travel at the physical limit c, because of a tiny mass (or, as I thought, interaction with a background "syrupyness" of space, such as that caused by dark energy and/or quantum effects, that might actually slow down photons more than neutrinos). No, it doesn't violate the semantics of "speed of light" because we would treat "c" as a physical limiting parameter, defined in the absence of other factors, as what determines actual time dilation, causal factors, etc., even if light is inhibited from going that fast.
PS: re "if 3Ï results are wrong half the time, does that mean 6Ï results are wrong all the time?" - if f 3Ï results are wrong half the time, it means for sure by definition that people aren't getting the characterization of the error spread right ... (Kind of like, you take all the times a forecaster said "70% chance of rain" and if it rained only 40% of those times, he's a lousy forecaster. I wrote some pieces about that showing that statements can be collectively wrong even if no individual statement is false ("70% chance of rain today" - you can't falsify *that one statement* can you?)
(Possible variant for new name: tachtrino, but that seems a mouthful.)
Notable as this is, a confirmation by MINOS would be far more interesting since it would be strong evidence that the problem is theoretical in origin. My money would then be on a problem with GR itself.
"If it is true, then we truly haven't understood anything about anything."
I am afraid this is true. If the result is correct (and this is still a big IF) then we are all crackpots and the only difference is between those who speak the jargon and post their stuff to arxiv and the others who post to vixra.
It is embarrassing that only a tiny minority was seriously looking at previous hints that m^2 < 0 could be the case and almost everybody ignored the inconclusive result of MINOS. The OPERA experiment should not have been an accident, an experiment done on the side!
While we discussed how many angels can dance on a superstring, the real physics 'beyond the standard model' was perhaps sitting there, waiting for us during several lost years.
Neil Bates wrote:
"He noted that maybe photons themselves don't even travel at the physical limit c"
That's an interesting suggestion and might solve Dark Energy, at least from the SNe perspective.
We'll see the next few days/weeks
This gap will remain and it is up to the scientists to catch up with the imagination http://bit.ly/qsvnNe
Yes, of course, you're absolutely right. Those airplanes outside aren't really flying through the sky -- they are being held up by angels. All those cars on the roads are not really being directed by their GPS units -- it's little angels whispering directions. The Cassini probe around Saturn wasn't sent via rockets and the clever use of planets as billiard balls -- it was space angels.
All of modern civilization, its scientific and engineering accomplishments -- they don't exist. Or, if they do, it's just by lucky coincidences and angel dust.
so the current generation(s) of physicist can find solace in the fact that at least Newtonian physics is still ok.
alright ...
Thanks for pointing out that they're statistically pulling a 60ns measurement out of a 10500ns pulse. That really seems like the weak link in my layperson's opinion. Is anyone here qualified to evaluate that part of the paper?
The proton pulse that produces the neutrinos isn't smooth, and of course they only capture a tiny fraction of the neutrinos, so they actually measure the beam current, then somehow use that to find a most likely flight time... that just seems like it could go wrong in so many ways, both statistical and systematic.
My best guess is that there are about 300 people "here" who are qualified to evaluate the statistical argument...
Have you ever been on an airplane when the PA comes on and the crew ask "is there a doctor on board", and suddenly half-dozen people hunch down and look very distracted while pointedly not listening...
it is a bit like that.
Geodesy: duh, I just realised the OPERA people mean geodesic in the GR sense, most likely, not in the geophysics sense, in which case that is not an issue... had to be too simple.
Gubser already has a paper out http://arxiv.org/abs/1109.5687
- courtesy of Sean C.
For SN 1987A, there is also another quantity which comes into effect, viz. the one-way Shapiro delay due to the gravitational potential of all matter along the line of sight (although it doesn't change the conclusions).
Sirs, all of the experts on this are super brilliant, including those who have commented above. However, I offer the following comments.
Since the neutrinos do not interact with matter, their speed as measured in any medium would be the same, whereas this may not be true for light, which raises the following questions:
1. What was the medium in which the light was traveling, and what was the pressure of the medium?
2. What was the colour of the light (i.e. wavelength/frequency)?
The light velocity is different in different mediums, and the refractive index of any medium is the ratio of the velocity of light in vacuum to the velocity of light in that medium.
Thus the speed of neutrinos as measured on earth should be higher than that of light on the earth - the comparison should perhaps be with the speed in outer space (preferably in intergalactic space).
Thus a refractive index of the medium higher than one by even a millionth part would make the light slower than the neutrinos by about 0.003 ns per kilometer of travel.
With light speed as 300,000 km per 10^9 ns in vacuum, the time gap between light and neutrino travel would be
[10^9 / (3 × 10^5)] × (1/10^6) = 0.003 ns per km.
For the refractive index of air, 1.000275, it can be around 0.815 ns per km of travel.
Thus for a travel of 10 km the time gap would be 8.15 ns, or for 100 km of travel around 80+ ns.
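The per-kilometre delay in this comment is easy to check - a small Python sketch of dt = (n - 1)·L/c; the one-part-per-million case reproduces the quoted 0.003 ns/km, while standard air (n ≈ 1.000275) comes out nearer 0.92 ns/km than 0.815:

```python
# Extra light travel time per kilometre of a medium with refractive
# index n, relative to vacuum: dt = (n - 1) * L / c.
c = 299_792_458.0  # m/s

def extra_ns_per_km(n):
    """Delay in ns accumulated over 1 km of medium, vs vacuum."""
    return (n - 1.0) * 1000.0 / c * 1e9

print(extra_ns_per_km(1.000001))   # index high by one part per million
print(extra_ns_per_km(1.000275))   # standard-ish air
```

Either way, over the 10 km of rock-free path at each end, this is nanoseconds at most, not tens of nanoseconds.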
Before anyone continues to speculate over superluminal velocities, I would like to point out something interesting...
Determination of the CNGS global geodesy
http://operaweb.lngs.infn.it/Opera/publicnotes/note132.pdf
To get the exact point on the surface of earth, we need to know the position
of the GPS satellites for a given time and their distance from us. No problem so far,
GPS is able to provide us with that information.
The authors used the Bernese software which is available under
http://www.bernese.unibe.ch/
There are some kinds of problems which will be further points of investigation
if my first suspicion is incorrect.
One problem: you will normally compile Bernese yourself, and it is a known problem
that FORTRAN compilers differ greatly in their quality on numerical
problems. The second problem is that FORTRAN constants are stored in REAL
precision (6-7 digits) if you do not declare them otherwise. Anyway...
Let's imagine a programmer made the marginal error of not using the correct
reference ellipsoid value of 6 378 137 m, but the abbreviated version
6 378 000 m.
What will be the influence on the GPS precision? Practically *NOTHING*, because
the satellites are arranged symmetrically around the earth, and position accuracy is very insensitive to
height changes if the distance between sender and receiver is long enough.
For example, to get an 18 m change for a distance of 730 km you need a height difference of 5 km!
So the values are in fact accurate concerning the position, and no one will see a problem;
the software is reliable.
But what if you want to know the *distance* between two points on earth? Having a slightly
smaller radius has the effect that the calculated distance is smaller than the actual distance.
How much?
730534.610 m * ( (6 378 137 / 6 378 000) - 1.0 ) = 15.7 m
15.7 m / 299 792 458 m/s = 52.3 ns
Opera difference: 53.1 and 67.1 ns
Strange coincidence, isn't it?
Posted by: TSK | October 3, 2011 6:37 PM
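TSK's ellipsoid scenario is easy to reproduce - a minimal Python sketch of the scale-error arithmetic in that comment:

```python
# TSK's scenario: if distances were computed with the abbreviated Earth
# radius 6 378 000 m instead of the WGS84 semi-major axis 6 378 137 m,
# the baseline would come out short by the same fractional amount.
c = 299_792_458.0
baseline_m = 730_534.610
a_wgs84, a_short = 6_378_137.0, 6_378_000.0

shortfall_m = baseline_m * (a_wgs84 / a_short - 1.0)
early_ns = shortfall_m / c * 1e9
print(f"{shortfall_m:.1f} m short, {early_ns:.1f} ns apparent early arrival")
```

As noted, a ~52 ns apparent early arrival from a pure bookkeeping error would be uncomfortably close to the OPERA anomaly - suggestive, though of course no evidence that this is what actually happened.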