Many-Worlds and Decoherence: There Are No Other Universes

I seem to have been sucked into a universe in which I'm talking about the Many-Worlds Interpretation all the time, and Neil B keeps dropping subtle hints, so let me return to the whole question of decoherence and Many-Worlds. The following explanation is a recap of the argument of Chapter 4 of the book-in-progress, which will cover the same ground, with cute dog dialogue added.

The central question here is what sorts of things count as producing a "new universe" in Many-Worlds. The scare quotes are because I've come around to the opinion that the whole "parallel universe" language does more harm than good for giving people an idea of what's really going on. Hopefully, I'll make it clear why as we go on.

Anyway, to be concrete about it, let's consider a really simple quantum system that may or may not involve the creation of a "new universe": we have a single photon, hitting a 50-50 beamsplitter. There are two ways to talk about the photon after the beamsplitter: as a wavefunction with two components corresponding to the two different possible paths, or as a particle that has taken one of the two paths. Quantum mechanics tells us that this is properly described by a wavefunction in a superposition of the two paths, but everyday experience tells us that we only ever detect the particle on one path. Somehow, the quantum superposition has to evolve into the classical mixture. How does that happen?

We can't give a sensible answer to this question without having some way to distinguish between the two possibilities. So the first question we have to ask is, how do we know which situation we have?

Well, the signature of quantum behavior is interference, so the only way to really tell which case we've got is to do an interference experiment, bringing the two paths back together. There are lots of ways of doing that, but let's think about a Mach-Zehnder Interferometer:

[Figure 4-2: A Mach-Zehnder interferometer: two beamsplitters, two mirrors, and two detectors.]

We take a couple of mirrors and steer the two possible photon paths back together, and use a second beamsplitter to combine those two paths on two detectors. If we're dealing with waves, this should result in some interference that will depend on the relative lengths of the two paths. The probability of finding the photon at Detector 1 will range from 0% to 100%, and the probability of finding the photon at Detector 2 will range from 100% to 0%, in a complementary manner.

If we're dealing with a particle-like photon that definitely took one of the two paths, on the other hand, the probability of finding it at Detector 1 is 50%, and the probability of finding it at Detector 2 is 50%, no matter what you do with the path lengths. In this case, there's no interference-- there's a 50% chance of the photon taking each of the two paths, and then a 50% chance of being directed to each of the two detectors.
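To put rough numbers on that (with φ the phase difference set by the relative path length; the exact sign conventions depend on the beamsplitters, and I won't belabor them), the two cases look like:

\[
\text{wave picture:}\quad P(D_1) = \cos^2(\varphi/2), \qquad P(D_2) = \sin^2(\varphi/2)
\]
\[
\text{particle picture:}\quad P(D_1) = P(D_2) = \tfrac{1}{2} \quad \text{for any } \varphi
\]

Sweep φ and the wave picture traces out fringes between 0% and 100%; the particle picture stays flat at 50%.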

So, if we want to know whether we're dealing with wavefunctions or definite particle trajectories, we need to do an interference experiment. But we're talking about single photons, here-- how do you get an interference pattern out of a single photon? A single photon will produce a single "click," which is the canonical term for a detection event, even though nobody really uses detectors that make clicking noises any more. The photon will be detected at one detector or the other, and that's all the information you get.

The only way to detect interference of a single photon is to repeat the experiment many times, each time sending only one photon in. You record which detector "clicked" for each photon, and slowly build up a measurement of the probability of finding it at each detector. You can also vary the relative path length, repeating the experiment many times at various different lengths, and in this way you'll trace out the probability distribution as a function of mirror position. If you're dealing with wavefunctions, you'll see an interference pattern ranging between 0% and 100% probability for each detector, and if you're dealing with particles, you'll find a constant 50% for each detector.
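If you like seeing that spelled out, here's a bare-bones simulation sketch (Python, with made-up shot counts): each photon gives exactly one click, drawn with the wave-picture probability, and only the accumulated counts trace out the fringe pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

def measured_fraction_at_d1(phase, shots=10_000):
    """Send 'shots' single photons through the interferometer at a fixed
    relative phase and return the fraction of clicks at Detector 1."""
    p_d1 = np.cos(phase / 2) ** 2          # wave-picture click probability
    clicks = rng.random(shots) < p_d1      # each photon: one click, D1 or D2
    return clicks.mean()

# Sweep the path-length difference (phase) to build up the fringe pattern
for phase in np.linspace(0, 2 * np.pi, 9):
    print(f"phase = {phase:4.2f} rad -> P(D1) ~ {measured_fraction_at_d1(phase):.2f}")
```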

So what do you find if you do this? Well, if you've set your interferometer up properly, with short path lengths and stable mirror mounts and all that technical stuff, you should see an interference pattern. So, a beamsplitter gives us a wavefunction in a superposition of two states.

How does this superposition turn into a photon that looks like it took one definite path, though? To see that, let's think about making our interferometer huge:

[Figure 4-3: The same Mach-Zehnder interferometer with very long arms, indicated by wavy lines.]

The wavy lines indicate a really long distance, say, a hundred kilometers, passing through air the whole way. What do we see then?

Well, if we're talking about a long distance in a turbid medium, there's going to be a phase shift. If you think in terms of waves, there are going to be interactions along the way that slow down or speed up the waves on one path or the other. This will cause a shift in the interference pattern, depending on exactly what happened along the way. Those shifts are really tiny, but they add up. If you're talking about a short interferometer in a controlled laboratory setting, there won't be enough of a shift to do much, but if you're talking about a really long interferometer, passing through many kilometers of atmosphere, it'll build up to something pretty significant.

That phase shift changes the interference pattern. If the probability of finding the photon at Detector 1 is 100% with no interactions, it could be, say, 25% with the right sort of interactions. Or 50%. Or 75%. Or 0%. The exact probability depends on exactly what happened to the piece of the wavefunction that traveled on each path.

And here's the thing: that shift is also random. What you get depends on exactly what went on when you sent a particular photon in. A little gust of wind might result in a slightly higher air density, leading to a bigger phase shift. Another gust might lower the density, leading to a smaller phase shift. Every time you run the experiment, the shift will be slightly different.

So what happens to your interference pattern? Well, it goes away. The first photon may have a 100% chance of turning up at Detector 1, but the second will have a 25% chance, the third a 73.2% chance, the fourth a 3.6% chance, and so on. As you repeat the experiment many times these all smear together, and you end up finding that half of the photons end up on Detector 1, and the other half end up on Detector 2. The interaction between the photon and the air destroys your interference pattern. This process gets the name "decoherence," because it's destroying the "coherence" between the two pieces of the wavefunction, which is the technical term for "the property that allows us to see an interference pattern."
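Here's the same toy picture with the decoherence bolted on. The uniform random phase kick below is an invented stand-in for all that messy atmospheric physics, not a real model of it, but it shows the essential effect: averaged over many shots, the fringe flattens out to 50/50.

```python
import numpy as np

rng = np.random.default_rng(1)

shots = 100_000
phi_0 = 0.0                                       # interferometer aligned for 100% at Detector 1
kicks = rng.uniform(0, 2 * np.pi, shots)          # random extra phase from the air, shot by shot

p_d1_per_shot = np.cos((phi_0 + kicks) / 2) ** 2  # each shot still has a definite fringe value...
print(f"P(D1) averaged over shots: {p_d1_per_shot.mean():.3f}")   # ...but the average is ~0.5
```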

All right, then. Interactions with the environment destroy the coherence, so that's what takes us out of the wavefunction situation and into the particle situation. So, photons that have interacted with a big environment become like classical particles and don't interfere any more, right?

Wrong. The photons always behave like waves, and they always interfere. The exact probability of detecting a given photon at Detector 1 or Detector 2 depends on both paths. The only thing the interactions with the environment do is to obscure the interference by making it impossible to build up a pattern through repeated experiments. If you could somehow keep track of all the interactions, say by measuring the precise state and trajectory of every air molecule along each path, you could recover the pattern by post-selection: simply choose to count only those photons for which the environmental conditions gave the same phase shift. Add them together, and you'll see an interference pattern just as you did with the short interferometer.
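And here's the post-selection trick in the same toy picture: if you could somehow log the phase kick on every shot, sorting the clicks by that record brings the fringes right back.

```python
import numpy as np

rng = np.random.default_rng(2)

shots = 200_000
kicks = rng.uniform(0, 2 * np.pi, shots)                 # pretend we logged each shot's phase kick
clicks_d1 = rng.random(shots) < np.cos(kicks / 2) ** 2   # one click per photon, D1 or D2

# Keep only shots whose recorded kick falls in a narrow bin: the fringe reappears
for center in (0.0, np.pi / 2, np.pi):
    in_bin = np.abs(kicks - center) < 0.1
    print(f"kick near {center:4.2f} rad: P(D1) = {clicks_d1[in_bin].mean():.2f}")
```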

The cumulative effect of the interactions is to make the photons look as if they were classical particles, taking one definite path or the other. They're always quantum objects, though, and they always interfere. Decoherence just keeps you from seeing the pattern.

This is why I say that the standard Many-Worlds language about "separate universes" is pernicious and misleading. What's going on here is not really a photon splitting into two photons in "separate universes," one taking each path; it's a single photon wavefunction in a superposition state with random phases between the two pieces.

Why do we talk about decoherence as if it produced "separate universes"? It's really a matter of mathematical convenience. If you really wanted to be perverse, and keep track of absolutely everything, the proper description is a really huge wavefunction that includes pieces for both photon paths, and also pieces for all of the possible outcomes of all of the possible interactions for each piece of the photon wavefunction as it travels along the path. You'd run out of ink and paper pretty quickly if you tried to write all of that down.

Since the end result is indistinguishable from a situation in which you have particles that took one of two definite paths, it's much easier to think of it that way. And since those two paths no longer seem to exert any influence on one another-- the probability is 50% for each detector, no matter what you do to the relative lengths-- it's as if those two possibilities exist in "separate universes," with no communication between them.

In reality, though, there are no separate universes. There's a single wavefunction, in a superposition of many states, with the number of states involved increasing exponentially all the time. The sheer complexity of it prevents us from seeing the clean and obvious interference effects that are the signature of quantum behavior, but that's really only a practical limitation.

Questions of the form "At what point does such-and-so situation cause the creation of a new universe?" are thus really asking "At what point does such-and-so situation stop leading to detectable interference between branches of the wavefunction?" The answer is, pretty much, "Whenever the random phase shifts between those branches build up to the point where they're large enough to obscure the interference." Which is both kind of circular and highly dependent on the specifics of the situation in question, but it's the best I can do.


Michio Kaku is going to be mad at you

It seems to me you're failing to describe the key part of the MWI, which is that the observer also is a quantum system that follows the laws of quantum mechanics. It's not the case, according to MWI, that the wave function somehow collapses at either detector 1 or detector 2. Just as the photon travels both paths, the observer sees both results, and Schrödinger's cat is both alive and dead. Of course, the observer and the cat don't perceive that. Their projection along one world line is either-or. But that perception has to be only one slice of the wave function, i.e., relative to only one worldline, if quantum mechanics is fully descriptive of them as well as of photons.

It seems to me you're failing to describe the key part of the MWI, which is that the observer also is a quantum system that follows the laws of quantum mechanics.

This is the entire point of the decoherence argument; once you include an environment and an apparatus as quantum mechanical entities, you start running into these kinds of problems. Chad's argument that the interference is still there but not observable is just the argument that we can't monitor all of the environmental degrees of freedom. When you trace those out and consider only the reduced density matrix of the system, you're left with what looks like a classical statistical ensemble.
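(If anyone wants a toy numerical version of that statement, here's a sketch in Python with a single two-state system standing in for the whole environment: as the environmental records of the two paths become distinguishable, the off-diagonal elements of the reduced density matrix, which carry the interference, go to zero.)

```python
import numpy as np

def reduced_density_matrix(overlap):
    """Photon path entangled with a toy environment; 'overlap' = <E1|E2>,
    i.e. how indistinguishable the two environmental records are."""
    e1 = np.array([1.0, 0.0])
    e2 = np.array([overlap, np.sqrt(1 - overlap**2)])
    path1, path2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    psi = (np.kron(path1, e1) + np.kron(path2, e2)) / np.sqrt(2)
    rho_full = np.outer(psi, psi).reshape(2, 2, 2, 2)   # indices: path, env, path', env'
    return np.trace(rho_full, axis1=1, axis2=3)         # trace out the environment

for ov in (1.0, 0.5, 0.0):
    rho = reduced_density_matrix(ov)
    print(f"<E1|E2> = {ov:3.1f}: off-diagonal element = {rho[0, 1]:.2f}")
```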

Particularly with the advent of quantum information research, I'm surprised the decoherence approach to these fundamental issues in QM doesn't seem to get more love. The more I learn about it, the more I like it. This is getting a chapter in the book, then?

It seems to me you're failing to describe the key part of the MWI, which is that the observer also is a quantum system that follows the laws of quantum mechanics. It's not the case, according to MWI, that the wave function somehow collapses at either detector 1 or detector 2. Just as the photon travels both paths, the observer sees both results, and Schrödinger's cat is both alive and dead. Of course, the observer and the cat don't perceive that. Their projection along one world line is either-or. But that perception has to be only one slice of the wave function, i.e., relative to only one worldline, if quantum mechanics is fully descriptive of them as well as of photons.

Yeah, that's the next step along in the process. It was late, though, and I got sick of typing.

The key thing is that the electrical signals from the detectors, and the brain-states of the observers, and all the rest are in superpositions just like the photons themselves. They're also massively entangled, so the wavefunction of the universe includes a term for photon-along-path-1-that-triggered-detector-1-and-was-observed-by-Grad-Student, and one for photon-along-path-2-that-triggered-detector-2-and-was-observed-by-Grad-Student, and all the rest. These things look like classically distinct "universes" because there's no way to do an experiment to demonstrate interference between them.
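Written out schematically, and suppressing the huge pile of environmental degrees of freedom that makes the real thing unmanageable, that entangled state looks something like

\[
|\Psi\rangle \approx \tfrac{1}{\sqrt{2}}\Big(
|\text{path 1}\rangle\,|D_1\ \text{fired}\rangle\,|\text{Grad Student saw }D_1\rangle
\;+\;
|\text{path 2}\rangle\,|D_2\ \text{fired}\rangle\,|\text{Grad Student saw }D_2\rangle
\Big)
\]

with each term being one of the "branches," and decoherence making the cross terms between them unobservable in practice.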

Of course, there's also some romance in the idea of things being different in another world. This or that random event turned out differently. This or that choice made differently. You know, like the Star Trek episode where Q shows Picard how boring his life would be if he had made one single life choice differently.

The romance probably keeps a lot of people thinking in those terms too. I know it's about the only part of Many Worlds that doesn't drive me batty.

Thanks Chad for a detailed post that deeply addresses the questions I was asking, and I'm flattered that you used my example of a beamsplit photon to make your points. I think it is good to bring up what happens with air in the way, since what we hear of so much is almost always "idealized" on purpose to get those "dirty bits of life" out of the way. So if they matter, then bring 'em on!

Here's my critique, along lines I have made before all over the place. You write,
The cumulative effect of the interactions is to make the photons look as if they were classical particles, taking one definite path or the other. They're always quantum objects, though, and they always interfere. Decoherence just keeps you from seeing the pattern.

OK, this is why I don't trust "decoherence" to explain "collapse." The problem we are trying to figure out is, why do at least "I" see the photon wave exclusively excite, to final consequence, a particular atom in a particular detector A or B? (REM also, it isn't just that the hit ends up in A or B but not both (i.e., A XOR B as they say in IT logic.) The "hit" can be recorded at some particular spot as on a CCD screen and it "could have been elsewhere" etc. so we still wonder, why "there" on a given screen.) The decoherence argument basically depends on messing up the ability of the photons to show interference long-term (which BTW only matters in the long term, right?) That still doesn't tell us why a vague wave doesn't just hang over or fill up both A and B and stay that way, period. Interfere or not, the states would just be all on top of each other which was the problem we tried to get out of.

Consider that if waves exposed to air etc. come together at recombiner R, they can have all sorts of relative phase. Well ... Say the MZ was calibrated for all in-phase to go out A (as if bright fringe.) Well, random phase submissions mean that various proportions of WF go out through A or B channel (just from amplitude addition given the phase shift at reflection etc.) If the waves came in 180° out of phase, then equal amplitudes go out to A and B instead of 100% towards A. So ironically, now both A and B are "exposed" to WF impingement and not a guaranteed hit in A as before. Remember that we have to presume the wave went in from both directions to make the calculation of the output amplitudes (if we can). It isn't like just splitting and then getting a hit without R, where you can retrodict to say it was "this way" only.

Then, both A and B are exposed to photon WF. That was always given as a superposition of excitation in A and in B. Yes, you can say the waves and detectors are "entangled" as hit in A means no wave left in B (right?) But that only makes sense after the fact of "collapse" already being a feature of our world (a person seeing one or the other, please don't deny that!) Look at how entanglement is written; it isn't just a description of local WFs but a relational claim. I don't have time for ASCII simulation of good characters so I will just say, as in the case of *two photon* entangled polarization we might have RH found at A means LH must be found at B, and vice versa. There is no actual "given" of which CP either photon has before it hits a detector, there is only the relationship. But talking in terms of relating one "result" to the other only makes sense if something out there makes a collapse-type event happen "by hand" first. Otherwise, why not just have the superposed states just staying that way forever? (BTW I am suspicious whether entanglement of different parts of one photon WF is much like that for two photons anyway - is it?)

Interference really has nothing to do with resolving the mess into either a real collapse or a "separated" system that can't cross-communicate etc. (And even if you don't say "many worlds", I myself have a particular experience after I start an experiment - it sure as hell isn't "me" that sees that other outcome, who does?) The waves should just stay waves anyway if that is what they fundamentally are, and nothing special diddles with them to change that. Think: if waves just followed the WM such as the Schrödinger Eq. etc, then they wouldn't end up showing that "classical behavior", interference or no, "to begin with."

I put "classical" in quotes because in the explanation above "classical" meant classical mechanics with particles, not classical wave mechanics! Ironically, if things really stayed classical in that sense, then of course things will happen just as I said they would if collapse didn't literally intervene: waves would just spread around and interact in like manner to say classical E&M. There would be no "photons" or hits or etc. anywhere. The WFs would just pile on top of each other and there'd be both the dead and alive cat together, etc. Nothing in wave mechanics per se gets you anything other than the whole thing just piled on, period.

See, that's why I consider the argument from decoherence to be a circular argument (unintentional and in good faith of course. That happens a lot.) The task for the deco enthusiast is to take "waves" as given by purely wave-mechanical considerations and explain "collapses" from that. The whole point is to avoid collapse having to be "put in by hand" or even gravity etc. Hence, you can't take what we already find happening that we call "classical behavior" (in which the collapse already happens) and feed that back into the attempt to explain that in the first place!

This is ultimately a matter of logical validity, not physics. I kept trying to explain this over and over to Lawrence B Crowell at Cosmic Variance in the thread "Quantum Hyperion." (http://blogs.discovermagazine.com/cosmicvariance/2008/10/23/quantum-hyp… ) I just couldn't get through. I still appreciate the time and effort he and others put into discussing the issue.

Furthermore, what if:
1. We have a vacuum and don't have decoherence messing up the interference.
2. We don't care about interference. Let's make the whole thing even simpler by just splitting the photon towards two distantly separated detectors (in a vacuum!) No additional mirrors or recombiner. Leave interference aside this time. Then we wonder, why does the "hit" (and what is that really, anyway?) happen at one end of the far-apart leg and not the other? Even simpler than that, don't bother to split, just wonder where an expanding shell of WF is going to "land."
3. What does Renninger negative result (detector does not record a hit) do to the WF? It seems the WF must be "reallocated" so as not to be where we now know it can't be, yet it still hasn't collapsed yet at another detector. And, how do unreliable detectors affect the WF, etc.

I also suggest to interested parties, to check my blog (finally a new post!) The new post, "Quantum Stupefaction" pwns Many-Worlds! condenses what I said at Parallel Universes and Morality about Quantum Suicide and my new twist "Quantum Stupefaction" (snark.) Yet it really does bring up problematic implications of MW theory, at least in the most literal interpretation.

Well, shorter myself:

Talk of superposition or entanglement of detected/excited states at A and not at B, with WFs correspondingly not at the other place, is premature because: Unless "collapse" was already a feature of reality, there wouldn't be any such thing or meaning to "exciting one detector" (and not the other) in the first place. The photon WF would just excite both detectors simultaneously (if the WF has amplitudes at those places) and that would literally be the story, period - forever and so on.

Neil B keeps dropping subtle hints
Subtle... ha!

Neil, have you ever seen the Everett FAQ?

It's very hard to tell what your question is. If I skimmed correctly, the answer is that the photon's WF becomes entangled with the detectors' WF, which in turn becomes entangled with the environment's WF (including the human observers). The different components of the entangled WF hardly interfere, and when they do, the effect is practically random. In effect, the different components do not interact at all, thus you get something like "split-brain" syndrome, where one side observes one thing, and the other side observes something else.

Another answer:
Why is there any meaning to "exciting one detector"? I'm not sure what you mean there. I would say it's because the "excited detector" is an eigenstate of its wavefunction. Eigenstates form a complete and orthogonal basis, so of course you can express any state as a superposition of eigenstates.
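In symbols, that's just the usual expansion of a state in an orthonormal basis:

\[
|\psi\rangle = \sum_n c_n\,|n\rangle, \qquad c_n = \langle n|\psi\rangle .
\]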

Well, I don't see why it would be hard to see what my question is. I don't have time to deeply peruse the Everett FAQ but my criticism is a very basic point of logic, that one shouldn't use circular arguments and assume consequences based on back-door use of what one is attempting to prove. AFAIK, coherence and decoherence and interference etc. just don't have the implications that MW/D enthusiasts think they do if we just take wave mechanics as such. They only seem to, ironically, if one takes interpretations based on "collapse" already being a given - IOW, talking of "The exact probability of detecting a given photon at Detector 1 or Detector 2 depends on both paths." (I called them Detector A and B for easier designation.)

Like I said, unless "collapse" is put in by hand there wouldn't be such a thing as "the probability of detecting ..." - that isn't in the nature of waves to even do. Sure, we say the WF amp's squared give "probability." Yet that is true and makes sense only if we are *given* collapse first as an outside imposition. "Probability" isn't even part of the nature or meaning of waves if that isn't imposed. There wouldn't be "random" or not because again, that only makes sense if something happens to waves to concentrate them, to collapse them.

Apologists for MW/D are using concepts like "chance of" and "more random" that don't come out of wave evolution as such. Those concepts can't explain collapse. They only make sense if you already take it for granted. That's what makes decoherence an illegitimate "circular argument."

I don't know how to make it shorter and/or clearer, except to say: there is no reason why interference or decoherence etc. should lead to anything other than just classical wave evolution, which preserves the original problem of the localization into "hits" etc. After all, if I set up a ripple tank and shoot waves around, all I will get is ripples bouncing around, maybe interfering with each other and maybe not, but no collapses or separated worlds or "apparent collapses" etc. (A person's BS detector should go off when he or she hears of "apparent _____") Absent a special intervention, that's what should happen to wave functions as well. The criticism is not hard to get. At heart it's about basic logic, not physics.

OK, this is why I don't trust "decoherence" to explain "collapse." The problem we are trying to figure out is, why do at least "I" see the photon wave exclusively excite, to final consequence, a particular atom in a particular detector A or B? (REM also, it isn't just that the hit ends up in A or B but not both (i.e., A XOR B as they say in IT logic.) The "hit" can be recorded at some particular spot as on a CCD screen and it "could have been elsewhere" etc. so we still wonder, why "there" on a given screen.) The decoherence argument basically depends on messing up the ability of the photons to show interference long-term (which BTW only matters in the long term, right?) That still doesn't tell us why a vague wave doesn't just hang over or fill up both A and B and stay that way, period. Interfere or not, the states would just be all on top of each other which was the problem we tried to get out of.

Read comments #2 and #4.
You only perceive a single outcome because your apparatus and your mind become entangled with the system being measured. The wavefunction includes all of the possible outcomes, but each one is entangled with the detector state corresponding to that particular outcome, which is in turn entangled with the brain state corresponding to you observing that detector state, and so on.

You don't see multiple outcomes at once because conservation of energy still applies. You have one photon's worth of energy, which is enough to make one detector fire, not both. It's got to be in one place or the other, not both, or else conservation of energy is violated.

You're making all sorts of assertions and conclusions based on overreliance on a picture of wavefunctions behaving exactly like classical waves, which they're not. I simply don't have the free time or mental energy at this point to go through your gigantic example and parse out all the details. Maybe some time when I don't have a shrieking infant in the background, but probably not.

As for easier-to-answer questions:

Furthermore, what if:

1. We have a vacuum and don't have decoherence messing up the interference.

Then you continue to see interference patterns. See, for example, LIGO.

2. We don't care about interference. Let's make the whole thing even simpler by just splitting the photon towards two distantly separated detectors (in a vacuum!) No additional mirrors or recombiner. Leave interference aside this time. Then we wonder, why does the "hit" (and what is that really, anyway?) happen at one end of the far-apart leg and not the other? Even simpler than that, don't bother to split, just wonder where an expanding shell of WF is going to "land."

There's no experiment, here. Or, rather, there's nothing that has a prayer of elucidating quantum behavior. If you want to do something to show that there are wavefunctions extending through large areas of space, you need to do some sort of interference experiment. That's the only way to detect quantum coherence.

If you're hung up on why you see a particular photon being detected at Detector 1 and not Detector 2, I can't help you. Quantum mechanics doesn't predict that, and doesn't pretend to. Quantum mechanics predicts probabilities only, not specific outcomes. If you want to think of that as a deficiency in the theory, you wouldn't be the only one, but that's all we've got at the moment. You will see the photon at Detector 1 with some probability, at Detector 2 with some probability, and never at both, because of energy conservation.

3. What does Renninger negative result (detector does not record a hit) do to the WF? It seems the WF must be "reallocated" so as not to be where we now know it can't be, yet it still hasn't collapsed yet at another detector. And, how do unreliable detectors affect the WF, etc.

A measurement is a measurement, whether it's positive or negative. In a Many-Worlds picture, you would have a wavefunction containing a piece corresponding to one detector not recording a hit, and another corresponding to the other detector not recording a hit. If you're using the negative result to let you know that you have a photon at some other position-- say, at the input of another optics experiment-- then your negative result will always be correlated with the detection of something or another at the other position. If you don't do anything with the photon whose position you now know, then quantum mechanics doesn't really have anything to say about it.
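In the ordinary textbook bookkeeping (the practical recipe, whichever interpretation you prefer to attach to it), the negative result just updates the state by dropping the branch you've ruled out, for example

\[
\tfrac{1}{\sqrt{2}}\big(|\text{headed for }D_1\rangle + |\text{headed for }D_2\rangle\big)
\;\longrightarrow\;
|\text{headed for }D_1\rangle
\qquad \text{once } D_2 \text{ has stayed silent.}
\]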

And, if you have unreliable detectors, you can add a third part corresponding to neither detector recording a hit. What "really" happened to the photon in that case? The question doesn't really make any sense. Quantum mechanics predicts the probabilities of measurement outcomes, and that's it. I suppose you probably have some piece of the wavefunction corresponding to that one photon continuing to propagate outward at the speed of light forever, but that just vanishes into the vast tangle of other environmental effects that you're not measuring in the first place.

I'm headed out of town tomorrow, and will have only sporadic Internet access for the next week or so. I'm highly unlikely to respond to any further comments in this thread, or any other.

I don't have time to deeply peruse the Everett FAQ but my criticism is a very basic point of logic, that one shouldn't use circular arguments and assume consequences based on back-door use of what one is attempting to prove.

Well, I don't have time to deeply peruse long ramblings about your problems with the MW interpretation, and furthermore, understand them. If you've got some very basic point of logic, you should be able to express it briefly and clearly. Especially since you've apparently been trying to express the same point for so long.

Would it make you happier if we had a "single observation interpretation"? This interpretation, which I just made up, states that there will be a single true observation and subsequent true wavefunction collapse some time in the distant future. This true collapse is what allows "probabilities" to exist in QM. In the mean time, we have something exactly like the MW interpretation.

Miller, if you don't have time to peruse my "ramblings" then you don't have a valid way to critique my particular attempted rebuttal of MW/D. I am not making a corresponding mistake, since I just said I wouldn't peruse that particular example of MW boosterism - I have read about MW/D elsewhere. As for basic point of logic, as someone involved in "skepticism" you should appreciate that an argument is invalidated by being circular (you need to take time to see the case, by actually reading my comment which is still shorter than the original post.)

One simple point for now: MW distributes an effectively complete WF amplitude at more than one place, etc. Hey, if there's an entire electron "here" then there isn't also an entire same electron "there" - OK? You can't turn the ~0.7 amplitude at detector A into a "one" and the ~0.7 amplitude at detector B into a "one" as well. It doesn't go anywhere to hide under some physics-psychobabble that the extra integrated squared-amplitude quantity just slips off into the twilight zone.

No, it's got to be either unity at A and zero at B, or vice versa - the same old same old collapse problem. Sorry, MW is not internally consistent IMHO, and besides it involves believing in things (the "other branches") we can't observe. I thought skeptics didn't like that sort of stuff, it sounds like angels or God etc.

As for deco being useful or demonstrated to some extent, well: the part that is "tested/useful" by definition can't be the part about another branch also existing, which is "unobservable" (that's the whole point) - and that's the notion I'm criticizing, so it can't be defended with results etc. It too is just "an interpretation" (One that cheats.) And the tested/useful part either obeys the rules of QM that we knew back from the old "collapse problem" days, or else it represents a really new theory - and then, it wouldn't be just an interpretation of what to do about collapse per se. Can't have it both ways.

BTW here's a paper by Italo Vecchi criticizing decoherence:

http://arxiv.org/abs/quant-ph/0001021v1

He has some more along the same lines, and Roger Penrose has criticized it as well.

MW is kind of just taking the Feynman paths literally, it's not angels or god. The electron here and the electron there aren't really the same electron since they are in different worlds of the many worlds. One electron doesn't have to go anywhere since it always was in a different world. What goes somewhere is the connections between the many worlds.

http://www.valdostamuseum.org/hamsmith/Sets2Quarks6.html

Miller, if you don't have time to peruse my "ramblings" then you don't have a valid way to critique my particular attempted rebuttal of MW/D. I am not making a corresponding mistake, since I just said I wouldn't peruse that particular example of MW boosterism

Okay, point taken. However, I think it is to your advantage, as well as everyone else's, if you would be shorter and clearer. You're already putting effort into it, so just focus it more.

What do I think is wrong? You're referring to setups that I don't understand. You're overusing acronyms and abbreviations. You are overusing "intuitive" language. You say that MW asserts this and that, when I am not so sure that it really does. You talk of circular arguments without ever a clear statement of what premise is being used to prove itself and how. You just continue on without ever referencing any of our responses, so we do not know where our disagreement is.

As far as I know, I completely understood your argument, and responded to it correctly, but all I get is more, well, rambling, pretty much. So I guess I didn't understand your argument, and at this rate, I may never. If you like, I'll just concede already.

Miller, some of my stance is as a "challenger" who just doesn't think the case "for" has been made well enough to begin with. Hence I don't focus as much as maybe I should on direct and succinct rebuttal. One note of caution for all of us is, we are tending to lump MW and decoherence ideas together and they aren't just plain the same, however much relation some claim. I don't accept decoherence arguments as showing why collapse/appearance of collapse ("C" for short) happens without a special and mysterious imposition of collapse dynamics. IOW, the claim that wave evolution takes care of it by itself, if I get the drift of the attempt.

One "show me" challenge is, why you think concentration or even "apparent" concentration of the WF into a small region comes about from decoherence. Chad gives the impression that loss of interference should do this, to which I say: so what if there's loss of interference. Interference or not, never forced classical waves to shrink down to a small region. There's no reason for that to *concentrate* the WF into the space of one or the other detectors. So I say a good case was never made to begin with. Well, the MW dodge is that localization doesn't "really" happen, because the other possibility (being at the other detector) is also actualized. I say that is absurd and below I give a clear reason.

Let's say we start with an amplitude distribution like below:

Detector A ~ 0.7
Detector B ~ 0.7

In traditional QM, measurement makes that distribution "collapse" into either

Detector 1: 1
Detector 2: 0

or (exclusively, as "XOR" to the logic nerds)

Detector 1: 0
Detector 2: 1

We don't know why Case I vs. Case II, which is the collapse. It's problematic for two reasons: 1) The wave must suddenly shrink up. 2) We have no idea why it shrinks to D1 instead of D2.

The impression given by MW enthusiasts is: no problem, because both are actualized in separate realms or whatever they are:

Detector 1: 1
Detector 2: 1

Well, one problem IMHO is that localizing a particle in two small spaces instead of one, when it had been all spread out, still doesn't really solve the issue of concentration/localization of the WF anyway! However there's no more preference problem. But look: MW cheated by multiplying the total amount of WF. We now have "more particle" than before. Saying the rest of it is separated and doesn't communicate etc. is mumbo-jumbo, you made more electron or photon than you started with and you shouldn't be able to.

If you still insist we can have a "split", I ask: OK, so how many worlds are created in the split? The temptation above is to say "two" since the chance is 50/50. But suppose the amplitudes were 0.8 and 0.6; then the probabilities are 64% and 36%. So now what, we have 64 "worlds" one way and 36 "worlds" the other way, or 16 and 9, or ... to get the right proportion of chance for observation? What number of versions is appropriate? What if it's an irrational proportion? If you have infinite branchings, then how can you define "proportion" given such infinite sets?

Even if you say, it really isn't a matter of n specific separate worlds, how then does the proportion manifest if you somehow put "all" the particle into both detectors to avoid collapse into only one of them?

Sorry I couldn't be shorter - but look, do you really think anyone can ably rebut a major theoretical construct with a few short jabs? But I think I did at least succeed in chipping away at it.

Thank you Neil, this was a considerable improvement. Though I suppose it was in fact longer, it felt shorter because you were making your point clearer.

The biggest point where I would disagree with your argument is this:

Detector 1: 1
Detector 2: 1

I do not believe that this is how MW would describe the result. The wavefunction would remain at (.7, .7), not suddenly change to (1,1). The whole idea is that nothing special truly happens during an observation; the universal wavefunction obeys the same equations at all times. Therefore, MW does not multiply the entire wavefunction at all. But once there is significant decoherence, it becomes impossible to create an experiment that can distinguish between a fraction of the universe and a whole universe.

How many universes result from this situation? No matter what the amplitudes were, it would split into two components, each of which would appear to be a whole universe. So there are two universes, which are not necessarily equally likely. But it's more accurate to say that there's only really one universe at all times.

How do we account for the fact that one of these "universes" can be more "likely" than the other if Many Worlds Theory has no probabilities in it? I think we need to add in a new assumption to MW: the probability of living in one "branch" as opposed to another is equal to amplitude-squared. That's what you wanted to hear, right, that MW has an extra unstated assumption?
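For the 0.8 and 0.6 amplitudes from Neil's example, that added assumption just reads

\[
P(\text{Detector 1 branch}) = 0.8^2 = 0.64, \qquad
P(\text{Detector 2 branch}) = 0.6^2 = 0.36,
\]

which is where the 64% and 36% come from, with no need to count discrete copies of worlds.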

Why does the wavefunction get localized? At least in this case, it starts out localized in two different locations--one going towards detector A, and the other going towards detector B. It does not ever get more localized than that. After detection occurs, the two components simply don't interact in a testable way.

So does all that make sense? I'd hate to have spent all that time complaining about rambling, only to start rambling myself. It happens to all of us sometimes. :)

I've never been able to understand how the MWI squares the conflicting ideas that a) the different worlds are separate and b) they have a statistical correlation that means the totality of outcomes looks like probability.

I think this is a logical show-stopper but am open to argument.

Also, I've mentioned elsewhere that I'm interested in interference patterns in QM. I want to find out if the sequence of 'hits' that make up the pattern are random, or can they be modeled as a limit cycle. Unfortunately, no-one seems to have collected the data necessary to tell either way.

Eddie, the MW *doesn't* square the conflicting ideas together, as I explained (sometimes in a sprawling ineffectual way and sometimes pithily to the point, I suppose.)

Another capsule reply to MW/Decoherence and Chad's example here. Chad makes the following statement which is totally unwarranted:

The cumulative effect of the interactions is to make the photons look as if they were classical particles, taking one definite path or the other. They're always quantum objects, though, and they always interfere. Decoherence just keeps you from seeing the pattern.

No, no, the interactions and the decoherence do *not* make the photons look like classical particles. All it should logically do is scramble the nice orderly relationship between waves. Those waves still ought to be spread out over a wide area - what's to make them take one way or the other? Ironically, the only way to say there's now a chance of hitting D1 or D2 is if you do say the wave is in both places. The collapse problem is the wave all concentrating into one detector or the other. It's the detectors themselves and their response that's the problem, not what comes out of the beam recombiner.

There is no reason that messing up the interference patterns between parts of a wide-spread distribution of "waves" would make them concentrate down to a little point. You can't get away with a sloppy conflation of "classical" meaning classical waves and yet make a sly substitution into it also meaning classical particles. That's the whole problem to begin with, that remains unexplained.

The problem is not complex. First, we really do not know what a wave function represents. I hear its square is the probability of an event. In fact, Einstein was the first to say the square of the wave function was a distribution. Modern quantum physicists turned Einstein's idea into a theory of statistics about reality, and from that we have all these strange ideas combined with advanced mathematics. Physics is a physical science. It is visual, not mathematical. We use mathematics to describe the physical universe. The day modern quantum physicists said we could never visualize the atom and replaced our eyes with mathematical expressions was the day physics stopped being a physical science and became more about mathematics.

Note: The principal asked the teacher, "Where is your student?" The teacher said, "I don't know, but I can ask around." The teacher called each class trying to find her student. After several calls, the teacher found the student had been in the office at 1:00 pm. The principal asked where the student was before and after 1:00 pm. The teacher said, "I don't know where the student was before 1:00 pm or after 1:00 pm." The call is like taking a measurement. The student could have been anywhere before the measurement. Case closed, until the principal asked the student where he or she was before and after 1:00 pm. If the student says, "I don't know where I was," or, "I was everywhere before the teacher called," then the student is being smart. We don't need that kind of smart in physics.

By Jamahl A. Peavey (not verified) on 23 Nov 2008 #permalink

Jamahl, many astute thinkers would say the problem is one of "interpretation" and understanding the "what" that is presumably there, not "complexity" as such.

To follow up on my comment: even after the interference is messed up in the MZ above, you still have some of the WF coming out of one channel (face) of the recombiner (cube at top right) and another portion coming out of the other face (since that's what the evolution of the WF predicts up to that time - REM we haven't even gotten to the detectors D1 and D2 yet.) So the problem is, with part of the WF at D1 and part at D2, we still don't know why a "hit" ends up at one D and not the other. And if you try to say, "it's both" to avoid the asymmetry, then you cheat by taking partial distributions at D1 and D2 and making them entire at both places - yes, that *does* violate conservation laws.

BTW, Chad, "interference" is not the signature feature of QM. We already had that in classical wave theory. The signature feature of QM is the particle/wave duality, the observation of collapse/interconversion between the aspects.

Also, what about decay of a structureless muon at a given moment? The muon is not interacting in a way that affects its internals (there aren't any internals anyway AFAWK, so how the hell it can even turn into something else is absurd). There's no interference, or lack of it, in something that has certainly already left the muon (as per the photon experiments), and yet at a certain moment, bang, there it goes. And by golly, there's only one of them and only one decay.

Try this link also, for a quantum measurement paradox that may be relevant:

http://www.lepp.cornell.edu/spr/2000-11/msg0029236.html

Neil, you're beginning to piss me off.

I am out of town, visiting family, and do not have time to deal with you. I said as much at the end of comment #10.

I would appreciate it if you would refrain from misrepresenting my positions at a time when I am unable to respond.

My next act in this thread will be to close the comments. If you want to argue with people who are present to argue with you, fine. Leave me out of it.

Miller: "So does all that make sense? " - No.

First, there is a severe problem with saying that your chance of "being in" one "world" (however literal, isn't it the same problem regardless?) is equivalent to the amp. sqd. even if there are only two worlds. It is a direct contradiction to how probability works. First, if there is effectively one "you" that is sent into one or the other, then something could determine the chance of that. But then, it no longer makes sense to the MW concept. Really, in one "universe" the other "you" is an unconscious zombie that doesn't count because the "real you" was sent into the other of the two? Believe me, no MW enthusiast (kool-aid drinker ... ;-) ) believes that, because it destroys the equivalency.

No, the only way to do it is with frequentist probability applied to multiplied worlds/branches of equal standing. That requires having different proportions of worlds so you have more expectation at least of ending up in the more likely outcome. (Actually, the whole thing was silly to begin with anyway because you are turning into all of the branches, not as if one same person was running along the most likely directions ....) OK, I heard your attempt to evade the consequences of splitting for ranges of probability, what do other MW enthusiasts say?

Also, the non-metaphysical part of the OP's explanation is wrong. He says that the key is the significance of interference, or its being made irregular, for example. The key false statement is this:
The cumulative effect of the interactions is to make the photons look as if they were classical particles, taking one definite path or the other. They're always quantum objects, though, and they always interfere. Decoherence just keeps you from seeing the pattern.

*** Money point: As I have said, messing up clean interference patterns is irrelevant to making an extended WF "hit" someplace. Messing up interference patterns of classical waves doesn't do that, does it? Weren't we supposed to be keeping the quantum wave dynamics without introducing an auxiliary, "mysterious collapse" dynamics? Ironically, it's the other way around: only because a genuine collapse does happen in that imposed way, that we're ever able to talk about "statistics" and interference "patterns" and the photon taking "this path" etc. to begin with.

Also, it is not appropriate to say that the photon found at D1 or D2 took one definite path or the other. Don't we still believe it went through both legs of the MZ before reaching recombiner R? After all, the whole argument explaining why the interference pattern was messed up depends on that! And the wave mechanics entails that the wave comes out of both sides of R, unless one leg just happened to be canceled out. Changed statistics has nothing to do with taking that away. You get "one way" information only if you do something like split the photon and never recombine, or try to measure which mirror it bounced off by recoil (which collapses and localizes it there, preventing further interference in the sort of case that actually does show transition to "classical" reality as a particle.)

Furthermore, what if we set up the MZ so there isn't any air etc. to mess up the interference pattern? We might arrange a constant phase shift to get 64% in D1 and 36% in D2. Well, now the clean interference is there. There is no "decoherence", and we still have the collapse issue of why the particle sometimes concentrates at D1 and other times at D2. It's the same problem over, but now without the feature that supposedly "explains" the "appearance" of WF collapse.

Finally, peruse my comment #24 and the bit about muon decay.

OK Chad, sorry again - I just didn't think in terms of whether an OP could easily get back to comments. Do please understand that I am taking your example as representing "typical" decoherence arguments since they resemble what I've seen before. Also that critique is otherwise part of normal discourse, and that I am arguing most directly with other commenters here. As for misrepresentation: it is hard for me to know if I've done so without a specific rebuttal - so if you can't easily do that I will lay off.

Other readers - please continue if desired at my own blog where I will set up a post to discuss MW/decoherence, and without fixating on a specific presenter/ation of same.

Neil, you've reverted back to your previous unclear writing style.

When I asked whether I was making any sense, I wasn't asking whether you agreed with me. I was asking whether I stated my position clearly enough that you knew what it was. I must not have stated it clearly, because your response barely refers to what I said, and in fact misrepresents me. I am feeling unneeded in this discussion. Carry on then.

Miller, maybe you are still going to see this, so just for you: I still don't see what is so hard to understand. About the chance of seeing one result or the other: there isn't a single person who has a chance of ending up in one universe or the other. It isn't like you throw dice to see which room you go into. What supposedly splits (however imagined) is you along with the rest of the universe. So if the chance is supposed to be 70:30 instead of 50:50, then having a split into just two universes won't cut it. We'd need 70 of one and 30 of the other, etc.

As for general critique of the common case for decoherence as avenue to effective collapse: I am just saying that I can't imagine how randomizing of the phase of waves and meddling in predictable interference can take an extended wave and compress it into a small space, like the atom that absorbs the photon or electron, etc. I shouldn't be stubbornly certain, I just can't imagine how or why. But entanglement issues make it even more subtle, and that's an extra twist.

Please don't tell me that either of those points is hard to understand.

On this topic, I rather enjoy the Stanford Encyclopedia of Philosophy, easily found online. Perhaps Chad can tell which parts he finds agreeable, and which parts he warns us against (as, for example, a smart dog would warn us about the "smiling" cat)?

Many-Worlds Interpretation of Quantum Mechanics
First published Sun Mar 24, 2002

The Many-Worlds Interpretation (MWI) is an approach to quantum mechanics according to which, in addition to the world we are aware of directly, there are many other similar worlds which exist in parallel at the same space and time. The existence of the other worlds makes it possible to remove randomness and action at a distance from quantum theory and thus from all physics.

1. Introduction
2. Definitions
2.1 What is "A World"?
2.2 Who am "I"?
3. Correspondence Between the Formalism and Our Experience
3.1 The Quantum State of an Object
3.2 The Quantum State that Corresponds to a World
3.3 The Quantum State of the Universe
3.4 FAPP
3.5 The Measure of Existence
4. Probability in the MWI
5. Tests of the MWI
6. Objections to the MWI
6.1 Ockham's Razor
6.2 The Problem of the Preferred Basis
6.3 Derivation of the Probability Postulate from the Formalism of the MWI
6.4 Social Behavior of a Believer in the MWI
7. Why the MWI?
Bibliography
Other Internet Resources
Related Entries

1. Introduction

The fundamental idea of the MWI, going back to Everett 1957, is that there are myriads of worlds in the Universe in addition to the world we are aware of. In particular, every time a quantum experiment with different outcomes with non-zero probability is performed, all outcomes are obtained, each in a different world, even if we are aware only of the world with the outcome we have seen. In fact, quantum experiments take place everywhere and very often, not just in physics laboratories: even the irregular blinking of an old fluorescent bulb is a quantum experiment.

There are numerous variations and reinterpretations of the original Everett proposal, most of which are briefly discussed in the entry on Everett's relative state formulation of quantum mechanics. Here, a particular approach to the MWI (which differs from the popular "actual splitting worlds" approach in De Witt 1970) will be presented in detail, followed by a discussion relevant for many variants of the MWI.

The MWI consists of two parts:

(i) A mathematical theory which yields evolution in time of the quantum state of the (single) Universe.
(ii) A prescription which sets up a correspondence between the quantum state of the Universe and our experiences.
Part (i) is essentially summarized by the Schrödinger equation or its relativistic generalization. It is a rigorous mathematical theory and is not problematic philosophically. Part (ii) involves "our experiences" which do not have a rigorous definition. An additional difficulty in setting up (ii) follows from the fact that human languages were developed at a time when people did not suspect the existence of parallel worlds. This, however, is only a semantic problem.[1]

2. Definitions

2.1 What is "A World"?

A world is the totality of (macroscopic) objects: stars, cities, people, grains of sand, etc. in a definite classically described state.

This definition is based on the common attitude to the concept of world shared by human beings.

Another concept (considered in some approaches as the basic one, e.g., in Saunders 1995) is a relative, or perspectival, world defined for every physical system and every one of its states (provided it is a state of non-zero probability): I will call it a centered world. This concept is useful when a world is centered on a perceptual state of a sentient being. In this world, all objects which the sentient being perceives have definite states, but objects that are not under her observation might be in a superposition of different (classical) states. The advantage of a centered world is that it does not split due to a quantum phenomenon in a distant galaxy, while the advantage of our definition is that we can consider a world without specifying a center, and in particular our usual language is just as useful for describing worlds at times when there were no sentient beings.

The concept of "world" in the MWI belongs to part (ii) of the theory, i.e., it is not a rigorously defined mathematical entity, but a term defined by us (sentient beings) in describing our experience. When we refer to the "definite classically described state" of, say, a cat, it means that the position and the state (alive, dead, smiling, etc.) of the cat is maximally specified according to our ability to distinguish between the alternatives and that this specification corresponds to a classical picture, e.g., no superpositions of dead and alive cats are allowed in a single world.[2]

The concept of a world in the MWI is based on the layman's conception of a world; however, several features are different... [truncated]

Hi Chad and everyone.

Anthony Leggett provides a detailed and devastating critique of decoherence theory in his article "Testing the limits of quantum mechanics: motivation, state of play, prospects" (J. Phys.: Condens. Matter 14 R415-R451, 2002), which I warmly recommend and from which I quote:

"This argument, with the conclusion that observation of QIMDS [quantum interference between macroscopically distinct states] will be in practice totally impossible, must appear literally thousands of times in the literature of the last few decades on the quantum measurement problem, to the extent that it was at one time the almost universally accepted orthodoxy [apparently it still is].
...
Let us now try to assess the decoherence argument. Actually, the most economical tactic at this point would be to go directly to the results of the next section, namely that it is experimentally refuted! However, it is interesting to spend a moment enquiring why it was reasonable to anticipate this in advance of the actual experiments. In fact, the argument contains several major loopholes ...".

The brackets are mine.

Cheers,

IV

Anthony Leggett provides a detailed and devastating critique of decoherence theory in his article "Testing the limits of quantum mechanics: motivation, state of play, prospects" (J. Phys.: Condens. Matter 14 R415-R451, 2002), which I warmly recommend and from which I quote:

I read the Leggett paper some time back, but didn't keep a copy, so it's taken me a while to get back to this. Now that I've looked at it again, I think you're badly misrepresenting his claim.

What Leggett is arguing against is not the idea of decoherence per se, or decoherence playing a role in the quantum measurement process. He's arguing against a much narrower claim, namely the idea that it would be utterly impossible to see quantum effects in macroscopic systems due to decoherence effects.

This claim has, in fact, been experimentally disproven, in that people have been able to observe quantum interference in fairly large systems-- the cleanest example being the experiments done in the Zeilinger group on interference and diffraction of fullerene molecules.

This does not discredit the whole idea of decoherence, though. Leggett describes a number of experimental demonstrations of quantum interference in fairly large systems, and in every case, the experimenters have worked very hard to avoid decoherence, usually by working at cryogenic temperatures.

It may be a matter of taste, but I regard the Stony Brook and Delft experiments (see http://physicsworld.com/cws/article/print/525) as the most relevant examples to date. Besides, Leggett is quite explicit in what he argues. If, beyond its previous erroneous claims, decoherence theory boils down to the trivial statement that detecting macroscopic superpositions is a challenging task, then imo it's hardly worth discussing as a scientific theory.

It may be a matter of taste, but I regard the Stony Brook and Delft experiments (see http://physicsworld.com/cws/article/print/525) as the most relevant examples to date.

I like the fullerene experiment better, because it's easier to explain what's going on. The measurement is a straightforward counting of molecules at various positions, as opposed to the spectroscopic technique used in the superconducting-ring experiments.

Besides, Leggett is quite explicit in what he argues. If, beyond its previous erroneous claims, decoherence theory boils down to the trivial statement that detecting macroscopic superpositions is a challenging task, then imo it's hardly worth discussing as a scientific theory.

If that's your standard, none of the alternatives are any more scientific. So the whole conversation is really a waste of time.

This is your blog, so you obviously make the rules here. Whether DT's circular arguments can be regarded as "scientific" is imo a reasonably worthy topic. And, as far as I am concerned, civilized exchange on a relevant issue is rarely a waste of time.

I'm not sure what you think is "circular" about the idea of decoherence. I don't see anywhere in the theory that it assumes its conclusions in order to prove them. The concept is fairly uncontroversial, as such things go.

If you've got some concrete physics objection to the idea, that might be worth discussing, but you seem to be opposed to it on obscure aesthetic grounds, in which case there's nothing to discuss.

Obscure aesthetic grounds?
Let's say that all the DT arguments I have inspected rely on some unphysical "no-recoil" assumption, either hidden or explicit, in order to achieve that diagonalization of the density matrix. First, the "right" pointer basis is selected by the author. Then an "ad hoc", basis-dependent dissipative mechanism is introduced to wipe away the off-diagonal elements. A physical system picking its own reference frame: that's what I call a circular argument.
It is true that interference patterns which might reveal macroscopic superpositions are in general difficult to track. That's hardly surprising, and it does not require grotesque unphysical assumptions to be explained or described. As Leggett states in his paper, there is no reason to believe that such difficulties cannot be overcome, even for massive objects, such as fleshy observers like you and me.

I'm not that upset by the ad hoc nature of a lot of attempts to calculate decoherence rates. The whole point of the thing is that the specific interactions that give rise to decoherence are difficult to track exactly. If you could keep track of them all, there wouldn't be decoherence in the first place. And of course, if you can minimize or control those interactions, you can see quantum effects with very large systems.
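To illustrate what I mean, here's a quick toy model (the coupling and the numbers are purely illustrative choices of mine, not a calculation from Leggett or the decoherence literature): one two-state system starts in an equal superposition, each of k "environment" particles picks up partial which-state information, and then we trace out-- that is, decline to keep track of-- the environment.

    import numpy as np
    from functools import reduce

    def reduced_system_rho(k, theta):
        """System qubit starts in (|0>+|1>)/sqrt(2).  Each of k environment
        qubits ends up in e0 = |0> if the system is |0>, and in
        e1 = cos(theta)|0> + sin(theta)|1> if the system is |1>.
        Returns the 2x2 reduced density matrix of the system after the
        environment has been traced out (i.e. not kept track of)."""
        e0 = np.array([1.0, 0.0])
        e1 = np.array([np.cos(theta), np.sin(theta)])
        branch0 = reduce(np.kron, [np.array([1.0, 0.0])] + [e0] * k) / np.sqrt(2.0)
        branch1 = reduce(np.kron, [np.array([0.0, 1.0])] + [e1] * k) / np.sqrt(2.0)
        psi = branch0 + branch1
        rho = np.outer(psi, psi.conj()).reshape(2, 2**k, 2, 2**k)
        return np.trace(rho, axis1=1, axis2=3)   # partial trace over the environment

    for k in (0, 1, 4, 8):
        rho_s = reduced_system_rho(k, theta=np.pi / 3)
        print(k, round(float(abs(rho_s[0, 1])), 4))
    # the coherence falls off like cos(theta)**k / 2, while the diagonal
    # populations stay at 1/2 each

Each untracked environment particle multiplies the off-diagonal element of the system's density matrix by an overlap factor, so the observable interference dies off exponentially with the number of interactions, while the populations never change. Keep those interactions few, or engineer them away, and the coherence survives-- which is what the interference experiments on large systems have to do.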

I'm even less bothered by the selection of "pointer" bases, because that's inescapable in quantum mechanics. The mathematical basis in which you express the wavefunction will inevitably depend on the details of the measurement you are making. Similarly, the way in which you express the dissipative mechanisms will be different in different bases. I really don't see any alternative to that-- there is no privileged frame of reference, or basis of measurement. That's the very core of modern physics.

What would you suggest as a more plausible way of dealing with issues of measurement than decoherence? I'm not aware of any approach that wouldn't be subject to the exact same problems.

First, thanks for your polite reply. If DT were only an empirical approach to experimental issues, I would hardly have any reason to object to it. Its claims are actually far broader, as witnessed e.g. by the essays in "Decoherence and the Appearance of a Classical World in Quantum Theory". Such wider claims rely on arguments that I regard as flawed (see e.g. my previous post).

Answering your question in a nutshell: the measurement problem faces us with deep semantic issues, as is somewhat acknowledged in the excerpt of "Many-Worlds Interpretation of Quantum Mechanics" posted by Jonathan above. The remark that "human languages were developed at a time when people did not suspect the existence of parallel worlds" is relevant even for someone who, like me, rejects the basic ontological framework of the MWI (the "problem of Being", i.e. the deconstruction of the notion of existence, is a mainstay of modern Continental thought). The "preferred basis" problem mirrors the crucial issue of our semantic framework shaping our perceptions and measurement outcomes. The most relevant question for me is then: "What semantic approach is most adequate and fruitful for modelling and predicting measurement outcomes in a QM setting?"

In this respect I regard Rovelli's Relational Quantum Mechanics as quite promising. First, it is an epistemic theory, in which semantic issues can be properly formulated and discussed. Second, it yields a simple and experimentally relevant solution to the problem of spooky entanglement non-locality, which has dogged QM for a long time [1]. It also allows us to clarify in an operative way the problematic notions of observer and free will, which can then be defined relationally, without resorting to mysterious "sentient beings". I have posted extensively elsewhere about these topics (see e.g. [2]).

[1] "Relational EPR", http://arxiv.org/abs/quant-ph/0604064
[2] http://sci.tech-archive.net/Archive/sci.physics/2006-04/msg01733.html

I just happened on this thread and would like to add my 2c even if no one reads it anymore :P

Chad: "You only perceive a single outcome because your apparatus and your mind become entangled with the system being measured."

Yes, but that doesn't solve anything. The question still remains: why is the wavefunction representing your brain entangled with one particular result and not the other? Why is your brain entangled with "particle hitting detector A" rather than "particle hitting detector B"? What decides which one it is? What decides which particular outcome is entangled with your brain in the universe you are aware of? What assigns which outcome will be entangled with each universe?
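To spell out what I mean in symbols (my own schematic notation, not Chad's), after the measurement interaction the combined state is supposed to look something like

    \[
    |\Psi\rangle \;=\; \alpha\,|\text{detector A fires}\rangle\,|\text{brain sees A}\rangle
    \;+\; \beta\,|\text{detector B fires}\rangle\,|\text{brain sees B}\rangle ,
    \qquad |\alpha|^2 + |\beta|^2 = 1 .
    \]

Both terms are still sitting there in the wavefunction, so pointing at the entanglement doesn't tell me why I find myself in one term rather than the other.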

Besides, MW should simply be subjected to Ockham's razor: the ensemble interpretation is just as good from a practical point of view, and at the same time it is infinitely less confusing and infinitely more efficient in its use of abstraction. Dispensing universes at every interaction is the ultimate multiplication of entities beyond necessity.