The Problem with Irreducible Complexity (revised post from Blogger)

As I mentioned yesterday, I'm going to repost a few of my critiques of the bad math of the IDists, so that they'll be here at ScienceBlogs. Here's the first: Behe and irreducible complexity. This isn't quite the original Blogger post; I've made a few clarifications and formatting fixes, but the content remains essentially the same. You can find the original post in my Blogger information theory index. The original publication date was March 13, 2006.

Today, I thought I'd take on another of the intelligent design sacred cows: irreducible complexity. This is the cornerstone of some of the really bad arguments used by people like Michael Behe.

To quote Behe himself:

By irreducibly complex I mean a single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning. An irreducibly complex system cannot be produced directly (that is, by continuously improving the initial function, which continues to work by the same mechanism) by slight, successive modifications of a precursor system, because any precursor to an irreducibly complex system that is missing a part is by definition nonfunctional. An irreducibly complex biological system, if there is such a thing, would be a powerful challenge to Darwinian evolution. Since natural selection can only choose systems that are already working, then if a biological system cannot be produced gradually it would have to arise as an integrated unit, in one fell swoop, for natural selection to have any thing to act on.

Now, to be clear and honest upfront: Behe does not claim that this is a mathematical argument. But that doesn't mean that I don't get to use math to shred it.

There are a ton of problems with the whole IC argument, but I'm going to take a different tack, and say that even if those other flaws weren't there, it's still a meaningless argument. Because from a mathematical point of view, there's a critical, fundamental problem with the entire idea of irreducible complexity: you can't prove that something is irreducibly complex.

This is a result of some work done by Greg Chaitin in Algorithmic Complexity Theory. A fairly nifty version of this can be found on Greg's page.

The fundamental result is: given a system S, you cannot in general show that there is no smaller/simpler system that performs the same task as S.

As usual for algorithmic information theory, the proof is in terms of computer programs, but it works beyond that; you can think of the programs as the instructions to build and/or operate an arbitrary device.

First, suppose that we have a computing system φ, which we'll treat as a function. So φ(x) = the result of running program x on φ. x is both a program and its input data coded into a single string, so x=(c,d), where c is code, and d is data.

Now, suppose we have a formal axiomatic system, which describes the basic rules that φ operates under. We can call this FAS.

If it's possible to tell whether you have a minimal program using the axiomatic system, then you can write a program that examines other programs, and determines if they're minimal. Even better: you can write a program that will generate a list of every possible minimal program, sorted by size.


Let's jump aside for just a second to show how you can generate a list of every possible minimal program. Here's a sketch of the program:
  1. First, write a program which generates every possible string of one character, then every possible string of two characters, etc., and outputs them in sequence.
  2. Connect the output of that program to another program, which checks each string that it receives as input to see if it's a syntactically valid program for φ. If it is, it outputs it. If it isn't, it just discards it.
  3. At this point, we've got a program which is generating every possible program for φ. Now, remember that we said that using FAS, we can write a program that tests an input program to determine if it's minimal. So, we use that program to test our inputs, to see if they're minimal. If they are, we output them; if they aren't, we discard them.
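The three steps above can be sketched in Python. This is a toy model, not a real construction: `is_valid_program` and `fas_proves_minimal` are stand-in names for the syntactic checker and for the hypothetical FAS-based minimality prover that the argument assumes exists.

```python
from itertools import count, product

ALPHABET = "01"  # toy program alphabet for the machine phi

def all_strings():
    """Step 1: yield every string over ALPHABET, length 1, then 2, ..."""
    for n in count(1):
        for chars in product(ALPHABET, repeat=n):
            yield "".join(chars)

def is_valid_program(s):
    """Step 2 (stand-in): syntactic check for phi. In this toy model,
    every string parses as a program."""
    return True

def fas_proves_minimal(s):
    """Step 3 (stand-in): 'FAS proves s is minimal' -- the hypothetical
    proof-checker the argument assumes. The post's proof shows no such
    checker can work in general, so this is deliberately unimplemented."""
    raise NotImplementedError

def minimal_programs():
    """Yield provably-minimal programs in order of size."""
    for s in all_strings():
        if is_valid_program(s) and fas_proves_minimal(s):
            yield s
```

The enumeration order matters: because `all_strings` goes by increasing length, anything this generator emits comes out sorted by size.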

Now, let's take a second, and write out the program in mathematical terms:

Remember that φ is a function modeling our computing system, FAS is the formal axiomatic system. We can describe φ as a function from a combination of program and data to an output: φ(c,d)=result.

In this case, c is the program above; d is FAS. So φ(c,FAS)=a list of minimal programs.


Now, back to the main track.

Using the program that we sketched above, given any particular length, we can easily generate programs larger than that length.

Take our program, c, and our formal axiomatic system, FAS, and compute their length. Call that l(c,FAS). If we know l(c,FAS), we can run φ(c,FAS) until it generates a string longer than l(c,FAS).

Ok. Now, write a program c' for φ that runs φ(c,FAS) until it finds a program K, where the length of the output of φ(K) is larger than l(c,FAS) + length(c'). c' then outputs the same thing as φ(K).

This is the tricky part. What does this program do? It runs a program which generates a sequence of provably minimal programs. It runs those provably minimal programs until it finds one larger than itself plus all of its data. Then it runs that and emits the output.

So - c' outputs the same result as a supposedly minimal program K, where K is larger than c' and its data. But since c' is a program which emits the same result as K, but is smaller, then K cannot be minimal.
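Here's a rough Python sketch of what c' does, assuming the enumerator sketched earlier. Everything here is hypothetical scaffolding: `enumerate_minimal` stands for the minimal-program generator, `phi` for the computing system, and `own_length` for l(c,FAS) + length(c'). In this sketch the comparison is on the length of K itself, the form the incompressibility argument (Berry-paradox style) turns on.

```python
def c_prime(enumerate_minimal, phi, own_length):
    """Sketch of c': walk the 'provably minimal' programs in size order
    until we hit one, K, longer than our own description length, then
    echo K's output."""
    for K in enumerate_minimal():
        if len(K) > own_length:
            # c' reproduces K's output while being shorter than K,
            # contradicting K's proof of minimality.
            return phi(K)
```

A toy run, with a fake enumerator and a fake machine, shows the shape of the contradiction: the first "minimal" program longer than the threshold is the one whose output gets duplicated.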

No matter what you do - no matter what kind of formal system you've developed for showing that something is minimal, you're screwed. Gödel just came in and threw a wrench into the works. There is absolutely no way that you can show that any system is minimal - the idea of doing it is intrinsically contradictory.

Evil, huh?

But the point of it is quite deep. It's not just a mathematical game. We can't tell when something complicated is minimal. Even if we knew every relevant fact about all of the chemistry and physics that affects things, even if the world were perfectly deterministic, we can't tell when something is as simple as it can possibly be.

So irreducible complexity is useless in an argument, because we can't know when something is irreducibly complex.


You haven't fixed the problems with this argument that were explained to you by several people the last time you posted it. To summarize:

(1) "Irreducibly complex" in Behe's sense is not "minimal" in your sense.

This was explained to you by "Gorobei" in the 5th comment and by "keiths" in the 14th comment.

(2) Being able to show that a particular program is minimal is not the same as having a procedure for determining for every single program P whether P is minimal.

This was explained to you by "viked" in the 7th comment, "Gorobei" in the 8th comment, and "Anonymous" in the 16th comment.

(3) Claiming that there is empirical evidence for irreducible complexity in a system is not the same thing as claiming to have a deductive proof for irreducible complexity in a system.

This was explained to you by "Macht" in the 18th comment.

By Chris Grant (not verified) on 13 Jun 2006 #permalink

Chris:

I thought I addressed the complaints in the comments last time; I'll try again in the comments here. The fundamental point is that the concept of irreducibility is itself flawed. It has nothing to do with how you demonstrate irreducibility; it's the concept of irreducibility itself that's the problem.

(1) Behe never gives much in the way of a precise definition of IC, but as far as he defines it, I do not see any reason why it's fundamentally different than the mathematical notion of minimal: both mean that you can't remove anything and get the same result. Dembski quite clearly means the information theoretic notion of minimality when he references IC type arguments in his explanatory filter/specified complexity arguments.

(2) The fundamental point of the information theoretic argument is that there's a threshold beyond which no proof of minimality is valid. It doesn't matter how good the proof looks to you. It doesn't matter how good your allegedly IC system is. Minimality tests fail not because of any flaws in the system, not because there's anything wrong with the reasoning in your supposed proof of minimality - but because fundamentally, the notion of minimal complexity is itself flawed: minimal complexity itself lacks meaning.

(3) If you look at the proof I show, there is no problem with the minimality proof of K. And there is no way of knowing if the allegedly irreducible system you're looking at is beyond the K threshold. The fundamental concept of irreducible systems does not work: It doesn't matter that you have what looks to be a perfectly valid proof of the irreducibility of a system if the concept of irreducibility itself is flawed. Empirical evidence that a system has a fundamentally meaningless property doesn't make that property meaningful. Once you cross the complexity threshold where the complexity of your system exceeds that of K in the proof, no proof of minimality has any meaning, because minimality itself has no meaning; and to make matters worse, you can't determine where that point is!

It seems that there is a difference between minimality and what Behe seems to want IC to mean (which is not to say that IC actually makes sense/is valid). It appears to me that minimality is a static concept related to a program's functioning which is invariant to any model you have of how the program was developed.

IC (if it were to be defined rigorously) would have to be a dynamic property embedded in some model of program/object generation - i.e. some partial order that would say that program X is/could be a child of program Y (given generation method G). Then saying that program X is IC would mean that X's output is meaningful, while any program Y for which X is a successor to Y (under a specific G, or maybe that there exists a G...) is not meaningful.

My point in my comment in the original post was that science doesn't deal with proof. So the fact that "you can't prove that something is irreducible complex" is irrelevant to science. I can't prove "insert any scientific theory here" but as long as it makes testable predictions, is consistent with other theories, etc. the fact that I can't prove it is true really doesn't matter.

steve:

I think you're partially right. The main catch is that Behe does have a kind of minimality constraint. That is, he only allows things to *grow*. That's a key piece of his argument: you can only produce an IC system by *adding* features - and if you can't get there by adding features, then it can't evolve.

So while I think you're right that he's arguing for something like a transition system, where X is irreducible iff any program Y for which X is a successor is non-functional, I think that he also places an additional ordering constraint on it: that Y can only be a predecessor of X if Y is smaller than X.

To be fair to Behe, his original definition ie 'if you remove any part it fails' is testable in that you can tell if something is irreducibly complex by sequentially removing parts and seeing if the system still functions. This has absolutely nothing to do with whether or not the system would have evolved though, and you are right that it is no proof the function could not be performed by a simpler system.
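That knockout test can be sketched in a few lines of Python. This is a toy model: `works` is a hypothetical black-box test of whether a given collection of parts still functions.

```python
def is_ic_by_knockout(parts, works):
    """Knockout reading of Behe's original definition: a system is IC
    if the whole thing works, but removing any single part breaks it.
    `works` is a hypothetical predicate on a list of parts."""
    if not works(parts):
        return False  # the whole system doesn't even function
    return all(not works([p for p in parts if p != removed])
               for removed in parts)
```

For example, if function requires exactly parts {a, b, c}, then {a, b, c} is IC under this test, while {a, b, c, d} is not, since dropping d leaves a working system. As the comment notes, passing this test says nothing about whether a simpler system could do the same job by a different route.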

Macht:

Sorry, but I think you're wrong there. While we can't, for example, prove beyond any reasonable doubt that gravity behaves the way that relativity says it does, that's not the kind of concept that underlies IC.

For something like IC, if you can't prove that something is IC, then it's a useless argument. If the argument is specifically that a system has the property that it can't be created incrementally, then saying that you can't prove things like whether a system has that property means that you don't have an argument.

If IC is provable - then the information theory argument tears it down.

If it isn't provable, then it's got no value whatsoever. IC is an argument that evolution couldn't explain certain systems, because they have this specific property that makes them non-evolvable. If we can't show whether or not any system has this non-evolvable property, then it isn't an argument at all.

I don't exactly see what distinction you are making. It can't be that IC is a property, since other areas of science posit specific properties without proving them (e.g., properties of atoms). If it is the word "can't" that you think makes the difference, I don't think that is significant either. It certainly hasn't been proven that nothing can go faster than the speed of light but we have no problem using the word "can't" in that context.

Maybe you mean something else though.

Macht:

My point is that, with respect to irreducible complexity as a theory, if IC isn't a provable property, then the theory has no meaning whatsoever.

As a theory, what IC says is: If there exists an observed system S with property IC, then evolution can't have developed S. But if you can't prove IC, then the IC statement becomes "If there exists an observed system S with a property that I can't detect, then evolution can't have developed S." That's a totally empty statement.

You can't have it both ways. Behe wants to use IC as an argument that "proves" evolution can't explain features of life. If it can't "prove" that, then it becomes a worthless statement: "I can't imagine how this system could evolve, therefore it couldn't have evolved". On the other hand, if it purports to prove (or demonstrate, if you prefer that term) that evolution couldn't develop certain features, then the irreducibility of those features must be provable (or demonstrable). But if you try to prove/demonstrate the ICness of a system, then you fall into the IT irreducibility trap.

IC is meaningless as a theory, because it's built on a fundamentally flawed concept. All that the proof I showed does is demonstrate why that concept (irreducibility) is flawed.

Behe's original definition of IC: "A single system which is composed of several interacting parts that contribute to the basic function, and where the removal of any one of the parts causes the system to effectively cease functioning."

He admitted that this approach is defective, which it obviously is.

His later definition: An irreducibly complex evolutionary pathway is one that contains one or more unselected steps (that is, one or more necessary-but-unselected mutations). The degree of irreducible complexity is the number of unselected steps in the pathway.

Here IC is a property of an evolutionary pathway rather than a system, which renders it virtually useless.

(Definitions taken from here.)

By secondclass (not verified) on 13 Jun 2006 #permalink

I'll try my hand at explaining:

IDer: "Evolution can't evolve flarschnikit features."
Mark: "There's no way to know if a feature is flarschnikit. Here's why. (Programs) All the features they've declared to be flarschnikit are based solely on an inability to think of something more flarschnikity."

Another angle I like to bring up myself:

1. IC is a negative premise. (This thingy is not reducible)
2. The idea that evolution can't evolve IC is a negative premise.
3. ID is an affirmative conclusion based on those negative premises.
4. Affirmative conclusions from negative premises are a no-no in logic. Especially if you don't know all the possibilities.

There seems to be a serious flaw in the above proof: if you go through running every minimal program to see how long its output is, as soon as you run across *one* non-halting program, c' gets stuck and never gets past trying to decide how long its output is. Therefore it may not output anything, let alone output the same thing as K. You can't fix this by examining only programs that halt, because then you'd need a way of deciding which ones those are.

Another flaw: the fact that the OUTPUT of K is longer than the PROGRAM c' doesn't prove that the PROGRAM K is longer than the PROGRAM c'. The output of K can easily be larger than K itself.

Ultimately, though, even if your proof can be fixed, it still isn't applicable to IC. I think you're slipping a quantifier. You can't *always* decide minimality in finite time, but that doesn't mean you can't *ever* decide it in finite time. Some systems could be provably minimal, others could be provably non-minimal: there are definitely *some* systems whose minimality is undecidable, but not necessarily *all* of them.

You're going from
~EP: Ax: P(x)=1 if x is minimal, 0 otherwise
(No P can decide minimality of ALL programs)
to
Ax: ~EP: P(x)=1 if x is minimal, 0 otherwise
(No program can decide the minimality of ANY other program)

which is invalid.

IC still has other problems, of course. But this isn't really one of them.

By Anonymous (not verified) on 13 Jun 2006 #permalink

anon:

You're making two mistakes. First, you're scrambling the output of a program and the program itself. And second, you're assuming that this is a proof that a specific IC system (K) is not minimal.

(1) The program vs the output.

The "IC" system is a program that generates an output. The process of generating an output is the computation representation of the task performed by the IC system.

The process described above doesn't run any of the programs it's examining for IC. It's examining them with a formal axiomatic system to determine if they're minimal. (If they're provably minimal, there's a formal axiomatic system which can prove it, by definition.) It doesn't run anything until it finds a "minimal system" larger than itself plus its data. That's an allegedly IC system.

It doesn't matter how large the output of K is. The point is that K is provably minimal; but something smaller than K can do the same thing as K (because "doing the same thing" means "generating the same output"). So K isn't minimal.

(2) Proving something about a specific K vs proving a concept.

If you look at the proof, we don't know what K is. In fact, we can't know what K is. There is nothing wrong with the proof that K is minimal. The problem is that the concept of minimality itself is flawed. So it doesn't matter how good the proof of alleged minimality/ICness for a particular system is. The concept is fundamentally flawed: any allegedly IC system might be K for some formal axiomatic system.

Echoing BronzeDog somewhat, my take on IC is that it's like looking through the wrong end of a telescope. If evolution has a direction (not necessarily "forward," let's not go there), then IC is looking in exactly the opposite direction.

I read the concept as this: Given a state S and all of its possible immediate predecessor states, Pm through Pn, S is IC iff none of the predecessor states has the same function as S.

This condition is not applicable to evolution. Co-option demonstrates that the predecessor's function need not be the same as the successor's, and neutral mutation demonstrates that even if non-functional, predecessor states need only be non-lethal. To prove something IC, Behe would have to demonstrate that all of the possible predecessor states were lethal. To disprove that something is IC, it is only necessary to show that there is at least one predecessor state that is non-lethal, which is a pretty low bar.

ArtK:

You're absolutely right about that - there's a lot wrong with the IC idea. While I focused on one narrow mathematical point, that's just one tiny little corner of what's wrong with the IC concept.

IC as an anti-evolution argument is predicated on the fundamental notion that evolution behaves in a particular way, which has very little to do with how evolution actually behaves.

The IC argument assumes that there's exactly one way that evolution produces features: via a strictly additive constructive process. No co-option; no reduction; no elimination; addition only.

So according to the IC argument you can't produce a 3-part device by removing a part from a 4-part device. You can't create a working device by modifying a different working device. But of course, that's nonsense.

The process described above doesn't run any of the programs it's examining for IC. It's examining them with a formal axiomatic system to determine if they're minimal. (If they're provably minimal, there's a formal axiomatic system which can prove it, by definition.)

Uh, your program requires not only that minimality be always provable, but also non-minimality. I.e., "Is the program string s minimal under FAS?" is decidable, not just partially decidable (provable + disprovable, not just disprovable). Now if non-minimality were provable there'd be no problem; provability of minimality would then imply decidability of minimality. But it's not obvious that non-minimality is provable.
This is partially fixable: if minimality is provable but not decidable, you can still write a program to enumerate minimal programs until it finds one longer than any given length. But the problem is that either way your proof gives a very weak result, namely that the set of minimal programs is not recursively enumerable. In other words, it is not possible to prove a program is minimal for every minimal program. This is not at all the same thing as for every minimal program being unable to prove that the program is minimal.
Your proof reminds me of the proof that the halting property of Turing machines is not decidable. This doesn't mean that the halting property lacks meaning; it's trivially provable for those combinations of programs and inputs that have it, and the set of Turing machines & inputs that halt is recursively enumerable. Like the set of minimal programs, the set of non-halting programs & inputs is not recursively enumerable, but it is manifestly possible to find some elements of the latter set.
Similarly, you have not proved that it is impossible to prove a system has Irreducible Complexity. All you have done is proved that it can't be done for every system.

By Andrew Wade (not verified) on 13 Jun 2006 #permalink

Andrew:

You're missing the key bit to that proof.

In the proof about the Halting problem, you're constructing a specific program for which your halting-detector fails. That is, you've identified a property that a program needs to possess to "outwit" a halting-detector, and constructed a program that possesses that property. We can find programs for which we can determine whether they halt or not, because we can look for specific properties of the program that guarantee that they will halt. Whether we can determine whether or not a particular program will halt is dependent on the properties of that program.

In the case of minimality, we're constructing a program which makes the proof of minimality of some arbitrary, unknown provably minimal program invalid.

That's a huge, important difference. We aren't constructing a specific non-minimal system which possesses a property that tricks the minimality identification process into failing. What we're doing is constructing a system which undermines the entire concept of proving minimality. The specific system whose minimality proof gets undermined has no special properties at all. The only reason that it's the one hit by the system is because it was in the right place in the entirely arbitrary ordering of provably minimal systems.

So there is no way of identifying a system whose minimality proof is actually invalid! Because the invalidity of the proof isn't a property of the program, and it's not a property of the proof. It's a property of the concept of minimality.

I think you are kind of playing fast and loose with the words "prove" and "demonstrate." Scientific demonstration isn't the same as proof. If I say that I've demonstrated that some atomic particle has some property, I'm not saying I proved it, I'm saying that I've hypothesized it and tested it over and over. My demonstration of this property could turn out to be wrong tomorrow if a better theory comes along.

All you've said in this post is that you can't prove that a system is minimal. You haven't showed that you can't hypothesize IC for some system and test it.

Going back to your post, you say:

"The fundamental result is: given a system S, you cannot in general show that there is no smaller/simpler system that performs the same task as S."

This just isn't a problem if we are doing science (as opposed to math). You can just hypothesize that there is no simpler system and test it. If it stands up to testing, helps you make useful predictions, etc. then scientists will use the theory. If not, then they will forget it.

So, the claim is that any claim to demonstrate minimality is invalid, right? Let's look at a concrete example. Wouldn't the program that consists of only the STOP symbol be a minimal program to output the empty string? (Or if that doesn't work, say a two symbol program "output 1" and STOP). How does your proof undermine their minimality? I can't see how there could be a shorter program with the same output - maybe I'm missing the intuition.

Macht:

I think you're deliberately missing the point - playing a very Dembski-ish game by playing with meta-issues in order to obscure the point that the entire core of an argument is fundamentally flawed.

I've run out of different ways to say it. I'll just try to restate it in the simplest way I can.

(1) Behe claims to "prove" that certain systems can't evolve, because they have the property that they are irreducibly complex.

(2) If you cannot prove that something is IC, then Behe's argument degenerates to "Certain systems can't evolve because Behe says they can't evolve".

If you can prove that something is IC, then Behe's argument hits the information theoretic problem with minimality. If you can't prove that something is IC, then Behe's argument becomes nothing but an unsupported bare assertion.

steve:

If you look at the proof, there is a critical threshold, based on the size of the formal axiomatic system (FAS) that proves minimality. The problem with minimality kicks in as soon as the complexity of the system becomes such that the length of the meta-program plus the size of the FAS is smaller than the information-theoretic complexity of the system.

Tiny systems - like a single-instruction program - are probably smaller than that threshold. (I'll explain the probably bit in a moment.) But we can't know where that threshold is, because finding the threshold requires finding the minimum size of a FAS that can be used to show minimality. And that's the minimality problem biting itself.

The probably thing up there is because of one important thing that we do in information theory. The IT complexity of a program is dependent on the size of the program plus the IT complexity of the machine that it runs on. So a single instruction program can be a very complex beast if it's a single instruction for a complex machine. To show a trivial example of that, you can imagine a machine which is basically a von Neumann computer (essentially the mathematical formalism for real computers), but which has one instruction that causes it to output the first 300 digits of the square root of a number. Then computing 300 digits of an irrational number takes only one instruction - but in IT terms, it's a very complex operation.
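A toy sketch of that idea: a hypothetical machine with a single `SQRT300` opcode. The "program" is one instruction, but all the information-theoretic work is buried in the machine itself (the opcode name and machine are inventions for illustration).

```python
from decimal import Decimal, getcontext

def fancy_machine(program):
    """A toy machine whose built-in SQRT300 instruction computes
    300 digits of a square root. The program is trivially short;
    the machine is where the complexity lives."""
    op, arg = program
    if op == "SQRT300":
        getcontext().prec = 300  # 300 significant digits
        return str(Decimal(arg).sqrt())
    raise ValueError("unknown instruction")

# One-instruction 'program' on the fancy machine:
digits = fancy_machine(("SQRT300", 2))
```

On an ordinary machine, producing the same 300 digits would take a substantial program; counting the machine's complexity into the total is what keeps the two accountings consistent.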

Wow, this was educational.

First, since I learned a natural consequence of Gödel's theorems. (I've learned to accept the theorems and use them beneficially. But they were not exactly intuitive the first time around. This application is.)

Second, since I learned that IC is as empty as CSI. IC has been special since it isn't really a one-one correspondence to ID. There have also been plenty of cases where allegedly IC systems have been falsified. So it is satisfying to find that IC is as vacuous as general creationist concepts.

Third, since Bronze and Art also gave new insights.

I find it comical that this creationist concept fails because it algorithmically asks for a perfect solution that isn't there, meanwhile evolution itself succeeds splendidly while it algorithmically has the decency to ask for good enough (surviving) solutions.

By Torbjörn Larsson (not verified) on 14 Jun 2006 #permalink

I don't think you understand Behe's argument. He is only talking about evolutionary pairs differing by a single mutation.

The point is this: when a complex system such as blood clotting evolves by a mutation the resulting system is inferior and will lead to extinction of the new species. Mutations are symmetric in time so if it cannot evolve forward then it did not evolve from a precursor.

Proving that there is an unrelated system which does the same thing is irrelevant. You are only allowed to study 2 systems linked by a simple mutation, and that mutation must be a real genetic event.

In the case of minimality, we're constructing a program which makes the proof of minimality of some arbitrary, unknown provably minimal program invalid.

Where I've been getting tangled up is in the conditions necessary for constructing such a program. I now see that it's enough for an infinite number of programs to be provably minimal under FAS for such a program to be constructed. Where I was going wrong was thinking that the program was constructed under the assumption that all minimal programs are provably minimal.

... So there is no way of identifying a system whose minimality proof is actually invalid!

Yes there is: c' finds such a system. All we have to do is construct and run a slight modification of c', that outputs K instead of φ(K) to tell us what that system is.

Because the invalidity of the proof isn't a property of the program, and it's not a property of the proof. It's a property of the concept of minimality.

You've got bigger problems: the minimality proof isn't invalid. (c' is constructed to find a system that is provably minimal if your original argument is to make sense: i.e. a valid proof exists). What is happening is that you're proving something that isn't true; FAS is inconsistent. (This is the contradiction part of proof by contradiction). If FAS is consistent, then it must not be possible to construct c'.
So why is it not possible to construct c'? Certainly one possibility is that no minimality proofs exist at all. But that's not the only possibility. c' is based on the assumption that there is at least one provably minimal program longer than l(c,FAS) + length(c'). This is trivially true if the number of provably minimal programs is infinite, ... but not necessarily true if the number of provably minimal programs is finite.
So ultimately, what you've proved is that there is at most a finite number of programs that are provably minimal under FAS, and provided an upper bound on their size. While a very interesting result, it's not a problem for the concept of minimality; perhaps the rest of the minimal programs are simply not provably minimal under FAS, that is, FAS is incomplete. The incompleteness of FAS is not something that Godel hasn't already proved.

Which gets back to earlier comments:

Once you cross the complexity threshold where the complexity of your system exceeds that of K in the proof, no proof of minimality has any meaning, because minimality itself has no meaning; and to make matters worse, you can't determine where that point is!

If FAS is consistent, once you cross the threshold of K you won't be able to find any proofs of minimality for systems.

... There is nothing wrong with the proof that K is minimal. ...

... which means that FAS is inconsistent. You had better not be able to find K in the first place. Which means (if FAS is consistent), there are no provably minimal programs at all beyond some threshold.
Where I disagree is with the claim that not being able to find that threshold poses a problem: if you can find a proof of minimality, you are below the threshold. If you're longer than l(c,FAS) + length(c'), you're over the threshold. If both are true, then FAS must be inconsistent and you've got bigger problems.
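[Editor's aside: for readers trying to follow the c' construction being debated here, its logical shape is a Berry-paradox search over the theorems of FAS. The Python sketch below is purely structural; `enumerate_minimality_proofs`, `mock_proofs`, and the toy interpreter are invented stand-ins, since no short snippet can actually enumerate the theorems of a real axiomatic system.]

```python
def c_prime(enumerate_minimality_proofs, run, bound):
    """Berry-paradox search: return the output of the first program
    that the enumeration certifies as minimal AND that is longer than
    `bound`. For a real FAS, finding such a K is the contradiction:
    c_prime itself (plus an encoding of FAS) would then be a shorter
    description of run(K), contradicting K's minimality."""
    for program in enumerate_minimality_proofs():
        if len(program) > bound:
            return run(program)
    return None  # reachable only if the enumeration is finite

# Hypothetical stand-ins, purely to exercise the control flow: a "FAS"
# that certifies two short programs as minimal, and a trivial "interpreter".
def mock_proofs():
    yield "x"
    yield "x*x"

print(c_prime(mock_proofs, lambda p: p.upper(), 2))  # X*X
```

The point of the real construction is exactly that this search must never succeed past the threshold: if FAS is consistent, no provably minimal program longer than l(c,FAS) + length(c') can exist.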

BTW, in your proof, when you say,

... where the length of the output of φ(K) is larger than l(c,FAS) + length(c').

I think that should be:

... where the length of K is larger than l(c,FAS) + length(c').

By Andrew Wade (not verified) on 14 Jun 2006 #permalink

SObrien:

I understand Behe's argument. It's a thoroughly bogus argument. I'm just pointing out one mathematical problem with it here; I'm not trying to address all of its problems at once.

The point is this: when a complex system such as blood clotting evolves by a mutation the resulting system is inferior and will lead to extinction of the new species. Mutations are symmetric in time so if it cannot evolve forward then it did not evolve from a precursor.

The problem with this argument is, as I explained in another comment, that it is a strictly additive constructive argument. That is, it assumes that the only way to evolve to a given system is by adding parts in single steps.

The basic IC argument is predicated on several distinct things. Among them:
(1) I can show that for a given system, it has the minimum number of parts necessary to perform its task. This is what the IT argument refutes.

(2) The only way that an IC system could evolve is by a process of adding parts: that is, if my "IC" system has 4 parts, then if it evolved, it must have evolved from a system that had 3 parts in a single step that added one part. This is the part that you're focusing on. It's a separate part of the IC argument. It's *also* bogus, but for a different reason. Real systems evolve in many different ways - evolution is not a directed additive process. Evolution can add parts, copy parts, remove parts, modify parts, and co-opt parts. If parts can be removed, modified, or co-opted, then the argument that IC can't evolve fails.

Suppose you have a 3-piece system that you claim is minimal and couldn't have evolved, because if you remove any of the three pieces, it wouldn't work.

What if there's a 5-piece system performing some other task that can be made to perform the same task as your 3-piece "IC" system via a single mutation? IC doesn't address that. Now, what if the 5-piece system can evolve to a 4-piece system in a single mutation? And then the 4-piece to a 3-piece via a single mutation?

That's evolution of an IC system via co-option and reduction.

The IC argument fails if co-option and reduction are possibilities. And both co-option and reduction are real phenomena that have been observed. (For example, see Tara's recent piece here.)
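[Editor's aside: the co-option-and-reduction scenario can be made concrete with a toy model. Everything below is invented for illustration — the part names and the set of functional systems are made up, not taken from any real biological system. A system is a set of parts; single mutations add or remove one part; the 3-part system is "IC" in the naive sense, yet reachable from a working 5-part system by successive removals.]

```python
# Hypothetical fitness landscape: which part-sets perform *some* useful
# function. Chosen purely to illustrate the shape of the argument.
FUNCTIONAL = {
    frozenset("abc"),    # the "IC" target: remove any part and it fails
    frozenset("abcd"),   # reduced intermediate, still functional
    frozenset("abcde"),  # 5-part system performing some other task
}

def is_ic(system):
    """Naive IC test: the system works, but no one-part deletion does."""
    return (system in FUNCTIONAL and
            all(system - {p} not in FUNCTIONAL for p in system))

# A reduction path: every intermediate is functional, each step is a
# single part-removal, and the endpoint still passes the naive IC test.
path = [frozenset("abcde"), frozenset("abcd"), frozenset("abc")]
print(all(s in FUNCTIONAL for s in path), is_ic(frozenset("abc")))  # True True
```

The naive IC test looks only at deletions from the final system, so it never sees the larger precursors the path came through.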

Andrew:

This is getting beyond the scope of what I think is easily discussed in blog comments. We're moving from the information theory argument which is an application of a Godel-like proof to the fundamental Godel issues of incompleteness. The answer to your objection lies in the completeness properties of the axiomatic system, which are themselves intrinsically problematic.

I'd suggest looking at one of Greg Chaitin's books, as I think he's a lot better at explaining this than I am. "The Limits of Mathematics" is the book where I first saw this proof; the original post also includes a link to Greg's website where he has his version of this proof. Greg does address the issues you're asking about. (And he does it with a kind of clarity and flair that I can only dream of.)

(1) OK, so given the possibility of a proof below the threshold (and since Andrew said having a proof is sufficient to show that you're below the threshold), it seems your proof demonstrates that if IC were feasible to prove, it'd probably only work for simple systems - but it does not demonstrate that no system exists for which IC is provable.

(2) More importantly, given Behe's restriction to additive mutations, I think he's not requiring full minimality over all smaller programs (which your proof assumes), just minimality over the subset that are potential precursors. This is a much weaker notion of minimality. Moreover, I think being over the threshold for general minimality needn't imply that it is similarly impossible to prove minimality over the restricted subset. Proving restricted minimality should be a much easier problem. I think his evolutionary error might get him out of his IT problem.

Steve:

Having a proof isn't sufficient to show you're below the threshold. That's only true if you have an axiomatic system which is both complete and consistent. But per Godel, no axiomatic system is complete and consistent. If you have an incomplete axiomatic system, then the axiomatic system isn't powerful enough to do minimality proofs. If the axiomatic system is powerful enough to do minimality proofs (i.e., it's complete), then it's by definition inconsistent. (See Chaitin for why you need a complete axiomatic system.)

For the other part - yeah, if you accept the invalid restriction that undermines Behe's entire argument, then you don't need the information-theory stuff to show that it's invalid. But if you do that, you're making the argument moot anyway, because you're building your argument on a false axiom.

Ok folks, please stop the emails saying that I copped out on Andrew :-)

Andrew's argument, summarized by me: (I'm using blockquote for formatting; this is not a literal quote of Andrew, but my attempt to summarize; please correct me if I'm summarizing badly):

If you have a consistent axiomatic system, then it won't return any minimal programs beyond the threshold of the length of the axiomatic system plus the meta-program. So the minimality results that the consistent system generates are correct.

The problems:

  1. We don't know what the threshold is. Regardless of whether the axiomatic system is complete or consistent, it can't know where that threshold is. To know that threshold is impossible meta-knowledge.
  2. An axiomatic system that reasons about programs in an effective computing system encodes the semantics of that computing system in its rules; if the computing system is an effective system, then the axiomatic system is, by definition, complete.
  3. If the computing system is not an ECS, then there are reasonable systems that are arbitrarily excluded from the minimality space; a proof of minimality in a non-ECS does not prove that you have a minimal system.
  4. The systems of biology - that is, DNA and the processes that it drives in a real biological system - have been shown to be an ECS. (DNA is Turing complete - see some of the work IBM has done experimenting with DNA computers.)

I agree that restricting to mutation by addition is an incorrect assumption... but wouldn't the restriction to the subset of predecessor programs from all the kinds of mutations we've seen (addition, subtraction, cooptation, etc.) still be a weaker condition than full-blown minimality over every program, thus avoiding the IT problem?

Steve:

The answer to your question is, it depends.

The idea of "restriction to the subset of predecessor programs from all the kinds of mutations we've seen (addition, subtraction, cooptation, etc.)" isn't quite well-defined enough.

You can define "predecessor programs" in a number of different ways. Do you require "predecessor programs" to be a single point-mutation difference from a previous step? Must every "predecessor program" be a fully-functional system performing some function necessary to the survival of the organism?

Those sound like weaselly questions, but they really aren't. Real evolution is an amazing process, which has a lot of complexity to it that we don't normally think of. If we allow the full range of evolutionary modification steps, in the way that they really occur - then the predecessor requirement is not a problem. But one of the tricks that frequently gets played by people like Behe and Dembski is to create artificial restrictions on the process that do not have any basis in reality.

Let me give one example, to try to make clear what I mean. In a real biological system, we frequently see patterns like duplication and cooption, where one mutation produces a duplicate of some element of the genetic code, which is useless but harmless; and then generations later, a mutation coopts one of the duplicates to perform a new function. So you have steps in the process that do not have any intrinsic value to the organism. They're neutral: they provide no immediate value to the organism, but they also provide no immediate harm. But they open up the possibility of later changes. A lot of ID/IC types would exclude this scenario, because it contains intermediate steps that have no function. They focus on single systems, rather than the organism as a whole, and say "what function did the predecessor of this supposedly IC system perform?"; and if there's no answer to that question, then they assert that it doesn't count as a predecessor.

If you make those kinds of restrictions on the transitions, then you wind up with a situation where the IT problem doesn't really come into play, because you've excluded so many paths. But those paths are real - and by excluding them, you've created an artificial system that doesn't model reality. If you allow the full range of mutation and adaptation - then you can find paths to essentially any system; and then the IT problem comes into force.
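[Editor's aside: the duplicate-then-co-opt pattern described above can be sketched the same way. Again this is a made-up toy - the gene names and the fitness function are invented for illustration. A genome is a list of genes; duplication is neutral (the copy adds nothing but costs nothing); a later mutation turns the spare copy into a new function.]

```python
# Invented fitness function: count distinct useful functions present,
# so an exact duplicate of an existing gene is selectively neutral.
USEFUL = {"pump", "clot", "sense"}

def fitness(genome):
    return len(set(genome) & USEFUL)

ancestor   = ["pump", "clot"]
duplicated = ["pump", "clot", "clot"]   # step 1: duplication - neutral
coopted    = ["pump", "clot", "sense"]  # step 2: duplicate mutates - gain

print(fitness(ancestor), fitness(duplicated), fitness(coopted))  # 2 2 3
```

The intermediate step has no new function of its own, which is exactly the kind of step the "what function did the predecessor perform?" challenge would rule out.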

If you have a consistent axiomatic system, then it won't return any minimal programs beyond the threshold of the length of the axiomatic system plus the meta-program. So the minimality results that the consistent system generates are correct.

That's an accurate summation.

An axiomatic system that reasons about programs in an effective computing system encodes the semantics of that computing system in its rules; if the computing system is an effective system, then the axiomatic system is, by definition, complete.

Ah, this I think is the heart of the disagreement: I don't think the axiomatic system will/can be complete.
I think it is time for me to find one of Greg Chaitin's books; we're starting to get into technical issues that I'm not really familiar with.

By Andrew Wade (not verified) on 14 Jun 2006 #permalink

SoBrian,

The point is this: when a complex system such as blood clotting evolves by a mutation the resulting system is inferior and will lead to extinction of the new species.

Mark has already addressed the symmetry of mutations in time: it's just not so. Perhaps Behe is making the common error of assuming the only type of mutations are point-wise mutations? But there is another problem with this argument. If the original mutation resulted in a superior system, then reversing the mutation will result in an inferior system, which will go extinct. But this does not mean that the original (inferior) system could not have existed in the first place; after all, it didn't have the superior system to compete against. And it does not mean that no mutations will result in an even better system; I do not know what confusion could have led to that conclusion.

By Andrew Wade (not verified) on 14 Jun 2006 #permalink

Mark,

I'm not "deliberately missing" anything. That kind of talk is really unhelpful and doesn't get us anywhere. I've understood your points since the first time you wrote them and I genuinely think you are wrong.

"1) Behe claims to "prove" that certain systems can't involve, because they have the property that they are irreducibly complex."

I really don't think he does. Where does he claim this?

"(2) If you cannot prove that something is IC, then Behe's argument degenerates to "Certain systems can't evolve because Behe says they can't evolve"."

No, it then makes his argument "Certain systems can't evolve because Behe hypothesizes (as opposed to proven) that they are IC." (Which happens to have been his argument all along.) Behe makes a hypothesis and scientists test that hypothesis and if it holds up, it holds up. If it doesn't hold up, it doesn't.

I agree with Andrew: you've proved that the minimality filter c can't be complete (and in fact can't prove the minimality of anything past a certain size/complexity threshold), but you haven't proved that minimality can't be defined at all. In fact, minimality is very simple to define, even though it's computationally undecidable *in general* in any given formal system.

c can be accurate in the sense that any program verified as minimal *is* in fact minimal, but by the above proof it can't prove the minimality of any system larger than some threshold size; therefore, it must miss infinitely many programs that are unprovably minimal.

It doesn't matter if we know where the threshold is, or if the axiomatic system knows where the threshold is. We know there must be some minimal programs that aren't provably minimal within the axiomatic system (infinitely many, in fact).

But that doesn't undermine the validity of the proofs that the system *is* capable of. The provably minimal programs are still a proper subset of all minimal programs. And the minimality or non-minimality of any given program *is* well defined - there are only finitely many shorter programs, each of which is either equivalent or non-equivalent.

It's just sometimes impossible to determine which in finite time.
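[Editor's aside: the claim that minimality is well defined - finitely many shorter programs, each equivalent or not - can be made concrete in a toy setting. The sketch below is my own illustrative construction: "programs" are expressions in a tiny total language, so every program halts and equivalence on a fixed finite input domain is decidable by brute force. In a Turing-complete language this loop would not terminate in general, which is where the undecidability comes from.]

```python
from itertools import product

# Toy total language: arithmetic expressions over the variable x, built
# from single characters. Every valid program halts on every input.
ALPHABET = "x0123456789+*"
INPUTS = range(6)

def run(prog):
    """Evaluate a program on each input; None if it isn't a valid program."""
    try:
        code = compile(prog, "<toy>", "eval")
        return tuple(eval(code, {"x": i}) for i in INPUTS)
    except Exception:
        return None

def is_minimal(prog):
    """True iff no strictly shorter program computes the same function.
    There are only finitely many shorter strings, so this terminates."""
    target = run(prog)
    for n in range(1, len(prog)):
        for chars in product(ALPHABET, repeat=n):
            if run("".join(chars)) == target:
                return False
    return True

print(is_minimal("x+0"), is_minimal("x*x"))  # False True
```

Here minimality is both well defined and decidable only because the toy language is total and the input domain is finite; neither holds for real programs, which is the whole point of the thread.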

P.S. Your original post says that "the length of the output of φ(K) is larger than l(c,FAS) + length(c')." where I think you mean the length of K itself is larger, which confused me before. You might want to correct that. I don't know why signing in via TypeKey made my post show up as anonymous - it doesn't do that on other ScienceBlogs.

P.P.S. Like Andrew, I'm also going to be heading to the library to see if they have any of Chaitin's books, to pursue this further.

By Anonymous (not verified) on 14 Jun 2006 #permalink

Macht:

If Behe is *not* arguing that the existence of IC systems proves evolution is impossible, then just what do you claim he is arguing?

If his argument *is* that the existence of IC systems proves evolution is impossible, but you can't ever prove that a system is IC - then what value does his argument have?

We can't *prove* relativity in the mathematical sense. But there are specific predictions of relativity where we can *prove* that reality matches relativity's predictions; if it's wrong, there will be specific things we can do that will *prove* that relativity is not an accurate description of reality. That's what makes relativity valuable as a scientific theory.

What value does Behe's whole IC argument have if it can't ever be proven or disproven?

Finally, what is Behe's entire book about, if not proving that evolution can't explain life?

"If Behe is *not* arguing that the existence of IC systems proves evolution is impossible, then just what do you claim he is arguing?"

Ummm, I thought we were talking about proving that something is IC, not proving that evolution is impossible? As I said in my last post, Behe is putting forth a *hypothesis* (not a proof) that he thinks shows that NS+RM isn't sufficient for certain biological features to come about. But I would still like to know where you got the idea from that Behe is trying to prove anything. You haven't answered that.

"If his argument *is* that the existence of IC systems proves evolution is impossible,..."

It isn't, as far as I can tell.

"But there are specific predictions of relativity where we can *prove* that reality matches relativities predictions; if it's wrong, there will be specific things we can do that will *prove* that relativity is not an accurate description of reality. That's what makes relativity valuable as a scientific theory."

See, this is why I said you were playing "fast and loose" with the word "prove." You are essentially describing falsificationism. But it is well known in the philosophy of science that this proves theories neither right nor wrong. Scientists come to accept or reject these theories in light of new evidence or better theories, but this isn't proof. Here is why. Your logic looks something like this:

If some theory T is true, we would expect to see X.
We see X.
Therefore, theory T is true (that is, proven).

This is, of course, not a valid argument (affirming the consequent).

The other part of your logic is:

If some theory T is true, we would expect to see X.
We don't see X.
Therefore, T isn't true.

This is, of course, Popper's famous modus tollens argument for falsificationism. It is valid, but unfortunately it isn't sound. If we don't see X, it may be true that T isn't true, but it could also be that part of our experiment went wrong, causing us not to see X; or it could be that some of our background information leads us to think T predicts X when it really doesn't; or any number of other things. So, in this case, we haven't proven anything, either.
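[Editor's aside: the two argument forms above can be checked mechanically by enumerating truth assignments - a quick sketch of why affirming the consequent is invalid while modus tollens is valid.]

```python
from itertools import product

# All truth assignments for T (theory true) and X (prediction observed).
assignments = list(product([False, True], repeat=2))

def implies(p, q):
    return (not p) or q

# Affirming the consequent: from (T -> X) and X, conclude T.
# Invalid: an assignment exists where both premises hold but T is false.
ac_counterexamples = [(t, x) for t, x in assignments
                      if implies(t, x) and x and not t]

# Modus tollens: from (T -> X) and not-X, conclude not-T.
# Valid: no assignment makes both premises true while T is true.
mt_counterexamples = [(t, x) for t, x in assignments
                      if implies(t, x) and not x and t]

print(ac_counterexamples)  # [(False, True)] - premises hold, T false
print(mt_counterexamples)  # [] - no counterexample exists
```

Note this only checks the *validity* of the forms; Macht's soundness objection (bad experiments, bad auxiliary assumptions) is about the truth of the premises, which no truth table can settle.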

"What value does Behe's whole IC argument have if it can't ever be proven or disproven?"

In my very first post I said "proof is for mathematics and alcohol." What I wrote above explains why science doesn't deal with proof. So the fact that Behe's argument can't be proven or disproven is irrelevant to whether it is valuable as science or not.

"Finally, what is Behe's entire book about, if not proving that evolution can't explain life?"

As I've said, I read it as a hypothesis or theory about whether evolution can explain life. As I've also said, the way to go about dealing with a theory or hypothesis is to test it.

Macht:

This is exactly why I say you're just playing word games and deliberately avoiding the real issues. This isn't an argument about falsificationism, Popperism, the philosophy of science, or the things that can go wrong in experimental observations.

My entire point, from the start, has been that Behe's argument is based on the invalid premise that he can show that some things are irreducibly complex.

By shifting words, and saying things like "Behe's argument isn't about trying to prove that evolution can't produce IC systems", but instead "Behe is trying to propose a hypothesis about whether or not evolution can explain life", you're just deliberately obfuscating and avoiding the point.

You can say that "Behe is trying to make an argument about whether or not evolution can explain life. Specifically, his argument is that there are some things that can't evolve." Or you can say "Behe is trying to propose a hypothesis about whether or not evolution can explain life. Specifically, his hypothesis is that there are some things that can't evolve". In any essential way, how does the second statement differ in meaning from the first? Substituting "argue" for "prove", or "hypothesis" for "argument" is just word games.

He's trying to put together a logical argument where if his axioms are true, then his conclusion is true. You can call that an argument, or you can call that a proof. It *does not matter* which words you choose. The basic point is: either he *does* have an argument where the conclusion follows from a set of true axioms via valid logical reasoning; or he does *not*.

One of his axioms is "I can show that some biological systems are irreducibly complex". If he can't do that, then the argument fails. Call it argument or hypothesis or proof, call it hypothesis or call it theory; it doesn't matter.

If "irreducibly complex" is a meaningless term where you can never show whether or not anything is irreducibly complex, then any argument based on being able to show that something is irreducibly complex is nonsense.

Macht:

I think the point is that Behe's argument is purely a logical one, not an empirical one. If IC systems exist, it is indeed a problem for evolution. However, a supposedly IC system is only a problem for evolution if it is definitely IC. If it only looks like an IC system, it's no better than saying, "I can't explain this, so evolution can't have done it." I think that most people intuitively knew that there's no real way to tell if a system is IC. Mark's entry is just pointing out that their disbelief can be formalized.

There's no way of determining empirically whether a system could have evolved. If there is an essentially infinite number of paths to a result, you can't simply say that because the obvious paths are wrong, there is no way to reach that result. Behe realizes this and posits IC systems as a special case: "Let's pretend that there are evolutionary results to which all possible evolutionary paths are invalid. If that's so, evolution fails." That's a perfectly fine piece of logical reasoning, but it doesn't make for much of a book career.

Behe's major failure is that he spends the rest of his time trying to come up with examples of such systems. The point is, all he can do is come up with systems that LOOK like they're IC. He could spend his entire life listing the possible evolutionary pathways to one of his IC systems and pointing out how each of those pathways fails, but there's no reason to believe that he has examined any significant portion of the possible pathways (and good reasons to think that he never could). No useful science comes out of it, so IC is nothing more than a logic game. Criticizing Mark for staying in logic land, over something that can clearly never leave logic land, is hardly fair.

By Troublesome Frog (not verified) on 14 Jun 2006 #permalink

Mark,

I don't know how to respond to you because you've basically ignored every point I made and then just threw a blanket "you're obfuscating" retort on it. I point by point showed why your premises are wrong but you haven't responded to them. I'm not sure what else I can say unless you actually address what I've argued.

I clearly explained the difference between a proof and how science works but you just gloss over it without saying why (again, "you're obfuscating" or "you're playing word games" isn't a reason).

I talked about falsificationism and verification because *YOU* brought it up in your relativity example. I clearly showed how these *AREN'T* proofs because they are either invalid or unsound.

You *STILL* haven't explained where you are getting these arguments from Behe's work. You are just claiming that he makes them. For example, where does Behe claim that one of his axioms is "I can show that some biological systems are irreducibly complex"? And if you answer my question again with another question like "Well if he isn't making that argument, what argument is he making?" I'm going to scream, since I've repeatedly said what argument he is making.

If IC systems exist, it is indeed a problem for evolution.

Solved problem. There are a number of well known examples of structures for which "the basic function" has changed. What is IC for one function isn't necessarily IC for another. Also, while removing one part from an IC system will cause it to cease functioning (by definition), removing that part from its precursor will not necessarily have that effect: the precursor may have functioned slightly differently, and may well have been more complex. IC just isn't a useful concept.

By Andrew Wade (not verified) on 15 Jun 2006 #permalink

Macht:

I think that you're the one who consistently ignores every point I make, and replies with endless obfuscation and definitional games rather than addressing anything real.

I keep asking, what is Behe's actual claim, if it isn't that there are systems that have the property that they're IC, and evolution can't produce them?

And you keep dancing around the issue; substituting terms, arguing about whether proof is meaningful in science, and insisting that Behe isn't arguing that he can show that there are IC systems that, by virtue of being IC, can't evolve. But all of your statements of what Behe actually says are just ways of rephrasing, from "Behe claims that certain systems can't evolve because they're irreducibly complex", to "Certain systems can't evolve because Behe hypothesizes (as opposed to proven) that they are IC."

To me - that's word games. It's just substituting the word "hypothesizes" for "proves" or "demonstrates" in a way that lets you hedge and dance around the point, without ever addressing the fundamental issue that Behe's argument has no value whatsoever.

I don't care what words you use. The fundamental point remains: Behe's argument relies on the assertion that some systems can't have evolved because they are irreducibly complex. If that assertion/hypothesis/whatever word you like is nothing but a blind assertion, and you can in fact never verify that something is, indeed, IC, then it's a totally worthless argument.

You can hedge all you want; switch words around all you want; play philosophy-of-science games all you want. But if Behe makes an argument that is, in your words, "Certain systems can't evolve because Behe hypothesizes (as opposed to proven) that they are IC." - that is a worthless argument. Because either: IC is a meaningful property, or it isn't. If IC is meaningful, then he can provide evidence that something is IC - and since IC is a logical/mathematical property, he needs to show it in a mathematical way. If IC isn't a meaningful property, and he can't do anything but just assert/hypothesize that things have this IC property, but he can't ever demonstrate/prove/whatever that anything has the property IC, or that in fact the property IC is actually meaningful in any way - then it's a totally worthless argument.

Mark,

The problem is that you seem to think that "prove," "hypothesize," "show," "demonstrate," and "assert" are all synonyms. They aren't. What you did in your original post is a proof. Scientists don't do that. They hypothesize theories and then test them. After a hypothesis has been well tested, then it is asserted to be true. I'm almost positive that you've taken enough math courses to give you some idea of the difference between a "proof" and a "demonstration." I'm also sure that you understand that demonstrations don't constitute proof.

I understand that *YOU* don't care what words I use. That seems to be the entire problem.

Macht:

I'd say that the problem is that you are focusing on distinctions between words, rather than looking at the concepts that those words are talking about.

*It doesn't matter* what words you use to describe Behe's IC nonsense. The important point is that either he's talking about a meaningful concept which he can demonstrate in a meaningful way; or he's *not* talking about a meaningful concept that he can demonstrate in a meaningful way.

I'm arguing that his theory is based on a fundamentally meaningless concept.

You're throwing around the distinction between proofs and tests, hypotheses and theory, etc. But if the fundamental concept is invalid, then proof versus test, hypothesis versus theory, and all of the other word games you've introduced are all distinctions without a difference.

If you want to play word games, that's fine. Go ahead. But you're ignoring the point that my original post is about, and the post that I've been arguing through this whole miserable thread: that IC as a theory is nothing but a facade over a meaningless concept.

"If his argument *is* that the existence of IC systems proves evolution is impossible,..."

It isn't, as far as I can tell.

Neither would be quite correct:

A. Yes, if you read Darwin's Black Box you see that I say the following, "Even if a system is irreducibly complex and could not have been produced directly, however one cannot definitely rule out the possibility of an indirect circuitous route. As the complexity of an interacting system increases though, the likelihood of such an indirect route drops precipitously."

So here I was arguing well, there's a big problem for Darwinian theory. These things can't be produced directly, but nonetheless you can't rule out an indirect route, but nonetheless building a structure by changing its mechanism and changing its components multiple times is very implausible and the likelihood of such a thing, the more complex it gets, the less likely it appears. So the point is that I was careful in my book to qualify my argument at numerous points, and Professor Miller ignores those qualifications.

So Darwinian theory can explain IC structures, but has a great deal of difficulty doing so. That's some nice weaseling there, as it means his argument cannot be defeated by a single counterexample. And he restricts IC to molecular systems, for which we have a much poorer evolutionary record than for macroscopic systems. Poorer, but not non-existent; Darwinian evolution leaves clues in the genes, and in the molecular structure too, by which evolutionary history can sometimes be deduced. Even qualified, his argument has been shown to be wrong.

By Andrew Wade (not verified) on 15 Jun 2006 #permalink

But you haven't shown that IC is a "fundamentally meaningless concept." All you've done is show that you can't prove that a system is minimal. That's not the same as saying it is meaningless. That isn't even the same as saying that a system can't be minimal. A system could be minimal despite us not being able to prove that it is. This is exactly why I'm making the distinctions I'm making. A scientist would have no use for proving that something is IC, he would hypothesize that something is IC and then test to see if that hypothesis holds up.

Macht:

Still the same old wordgames. I don't give a damn if you rephrase it from "proving that something is IC" to "hypothesize something is IC and then test to see if the hypothesis holds up". The fact remains that no test can ever demonstrate that a real system is IC. A scientist can't test to see if a hypothesis of IC holds up. It's a non-testable hypothesis, because the underlying concept is invalid.

No wordgames can change the fact that IC is a facade with nothing behind it. It's scientifically useless, because no test can ever confirm its predictions.

You are avoiding that point in favor of endless games of vocabulary shifting and meta-reasoning about word choices. IC is mathematically meaningless and scientifically worthless - because no test can ever confirm its existence.

Mark,

You have in no way shown that "the underlying concept is invalid." All you've shown, as I've already said, is that you can't prove a system is minimal. You haven't shown that a minimal system is a meaningless idea and you haven't even shown that some system can't be minimal. The fact that I can't prove a system to be minimal doesn't mean it isn't minimal.

SOBrien says:

"The point is this: when a complex system such as blood clotting evolves by a mutation the resulting system is inferior and will lead to extinction of the new species. Mutations are symmetric in time so if it cannot evolve forward then it did not evolve from a precursor."

Besides being beside the point, as Mark explains, this is also a good illustration of the IC argument.

But first another point. You say erroneously that a mutation should make the system worse. First, there are neutral mutations. Second, we are discussing evolution here. The system is not optimised when it first clots blood. It continues to evolve towards better fitness. In this case it could be optimised clotting. (As good as possible without clogging arteries at the slightest provocation.)

Now to the illustration. The blood clotting system isn't IC or even minimal as stated, since dolphins and fishes have variations of the same clotting system with fewer components. If you need references I think you'll find them on talkorigins or in The Panda's Thumb discussion.

By Torbjörn Larsson (not verified) on 15 Jun 2006 #permalink

Macht:

Again, you quite deliberately miss the point.

What value does a so-called theory have if it relies on an untestable concept?

From the point of view of Behe's IC theory: what value does IC have if you can never actually confirm that any IC system ever exists anywhere in biology?

Behe claims that the existence of IC systems is a problem for evolution. If we don't know if the problem actually exists, and we can never confirm that it exists, then is it really a problem at all?

IC comes down to handwaving: *Behe says* that there are IC systems. Therefore, *Behe says* that there's a problem with the theory of evolution. There's no evidence for the existence of the problem beyond the fact that *Behe says* that there is. He can't actually demonstrate its existence.

I can say that Behe must not have written the book that he claims to have written, because that book has the property of frobitzness, and religious people can't write things that are frobitz. If I can't tell you how I know that his book is frobitz, and I can't tell you how to test if his book is frobitz, and I can't tell you how to test if religious people can write things that are frobitz, then do I have any claim to a valid argument based on the frobitzness of his book?

"What value does a so-called theory have if it relies on an untestable concept?"

Your post here doesn't show that the concept is untestable. Your post (and I'm not sure why I'm repeating this again) only shows that you can't prove it. (Oh yeah, I forgot, this is just a word game.) This is what I said above when I was talking about falsification and confirmation of relativity theory. You said that we can't prove it in a mathematical sense and then I showed how it can't be proved (or disproved) in any sense, since any proof will either be invalid or unsound. If you want to claim that IC is untestable, fine. But your post here doesn't show that it is untestable. Besides, there are plenty of people who have looked at IC and have ripped it to shreds because the evidence is against it (aka, they've tested it and rejected it). So, it is testable.

"*Behe says* that there are IC systems. "

Yes.

"Therefore, *Behe says* that there's a problem with the theory of evolution."

Yes.

"There's no evidence for the existence of the problem beyond the fact that *Behe says* that there is. "

Okay, I can deal with that. If there is no evidence, then that means it's bad science and it should be rejected. This really has nothing to do with your post, though. In fact I said this a long, long time ago in one of my first comments in my thread:

"This just isn't a problem if we are doing science (as opposed to math). You can just hypothesize that there is no simpler system and test it. If it stands up to testing, helps you make useful predictions, etc. then scientists will use the theory. If not, then they will forget it."

And, again, I have to say that you *SHOULD* know the difference between demonstrate and prove, if my assumption is correct that you are fairly well versed in math. That you are confusing these two is just a poor, beginner's mistake.

You said:

"I can say that Behe must not have written the book that he claims to have written, because that book has the property of frobitzness, and religious people can't write things that are frobitz. If I can't tell you how I know that his book is frobitz, and I can't tell you how to test if his book is frobitz, and I can't tell you how to test if religious people can write things that are frobitz, then do I have any claim to a valid argument based on the frobitzness of his book?"

See, these are all valid objections to Behe. Obviously, if I put forth a hypothesis and have no way to test the hypothesis, then science will reject it. But, again, this has nothing to do with your original post. Testing something is not the same as proving it. I've said this numerous times already, too.

Obviously Macht doesn't get it. If you can't test for the property in principle, you can't make observations.

I'm no philosopher but I'm sure falsifiability is stronger than Macht makes it out to be. Special relativity would be disproved if we observe speeds faster than light; evolution would be disproved if we observe the fossil of a rabbit in pre-Cambrian strata. (Ie the theory predicts 'not X', we find 'X', theory must be wrong.) But since we can't make a test for IC from the hypotheses, we can't observe it, so we can't falsify it. (Not surprisingly, since Mark shows the property is invalidly defined, ie it can't exist.)

"The systems of biology - that is, DNA and the processes that it drives in a real biological system - have been shown to be an ECS. (DNA is turing complete - see some of the work IBM has done experimenting with DNA computers.)"

Again, wow! The discussion isn't a waste. I would love to hear more, and how 'Information is Conserved' (D*mbski) holds up if DNA is Turing complete, but evolution, say the RM+NS part, isn't. (Or is it?)

By Torbjörn Larsson (not verified) on 15 Jun 2006 #permalink

Torbjörn,

"(Ie the theory predicts 'not X', we find 'X', theory must be wrong.)"

I explained why this isn't the case. Perhaps what is wrong is our belief that the theory predicts "not X." Perhaps it doesn't predict that at all. Perhaps what is wrong is "X." Most modern-day scientific "observations" are actually experiments. Experiments always have experimental error, so you can't rule out that the experiment is wrong. And you can't rule out that somebody made a mistake in setting up the experiment.

So, no, if theory T predicts "not X" and you see X, you haven't disproved T. It probably isn't a theory you would want to pursue anymore, but it isn't disproven.

It would seem that IC is arguing for organized complexity as opposed to complex randomness. Doesn't organization imply reducibility because organization is basically repeated patterns? If something was irreducible wouldn't it be completely random apart from being mathematically impossible (Gödel's incompleteness theorem)?

By Christian Albert (not verified) on 15 Jun 2006 #permalink

And you can't rule out that somebody made a mistake in setting up the experiment.

So, no, if theory T predicts "not X" and you see X, you haven't disproved T.

It sounds like Macht is banking on all evolutionary experiments and observations being the result of dumb luck mistakes and bad experimental protocols.

bronze:

Yeah, I think you're pretty much right. Macht is playing a game of reductionism where nothing can ever be proven or disproven, and we can therefore not ever know anything.

If you're willing to accept that kind of silly reductionism, then sure, Behe's garbage is as valid as Einstein - i.e., both become nothing but a pile of unverifiable, untestable rubbish.

"Macht is playing a game of reductionism where nothing can ever be proven or disproven, and we can therefore not ever know anything."

Well, seeing as how I think we can prove/disprove some things and I also think that we can know things that we haven't proven, you are wrong on both counts.

...I also think that we can know things that we haven't proven...

Sounds like faith, which is pretty much Behe's basis for determining which things are IC.

Of course, even if that were a legitimate way of knowing, IC still wouldn't be a problem. Evolution already has simple ways of making IC structures, and probably lots of clever methods we haven't even conceived of yet.

"It sounds like Macht is banking on all of evolutionary experiments and observations being the result of dumb luck mistakes and bad experimental protocols."

Seeing as how I haven't argued against evolutionary theory here (or anywhere, for that matter) and have just been arguing about whether Mark's proof has any relevance to IC, this is also incorrect.

Macht, you wrote:

No, it then makes his argument "Certain systems can't evolve because Behe hypothesizes (as opposed to proven) that they are IC." (Which happens to have been his argument all along.) Behe makes a hypothesis and scientists test that hypothesis and if it holds up, it holds up. If it doesn't hold up, it doesn't.

Certain systems can't evolve because Behe hypothesizes they are IC? I'm pretty sure that's not what you actually want to assert. It's no wonder to me that Mark thinks you are being deliberately obtuse.

I'm guessing that the assertion you want to make is more along the lines of, "Behe hypothesizes, but does not prove, that certain systems can't evolve because they are IC." Mark's post describes a theorem that asserts, in essence, that no procedure for testing an arbitrary system for the property of IC exists. Macht, how can scientists test Behe's hypothesis when it's not possible, even in principle, to tell IC systems from non-IC systems?

By Canuckistani (not verified) on 15 Jun 2006 #permalink

Macht says:

""(Ie the theory predicts 'not X', we find 'X', theory must be wrong.)"

I explained why this isn't the case."

No, you were discussing a weaker case.

Regarding the triviality about experimental verification, when we attest an observation or theory beyond reasonable doubt we apply an arbitrary, area-contingent, but robust limit.

For example, to verify a phenomenon in physics one must have observations with more than 5-sigma certainty. What you are discussing is some arbitrary philosophical standard of unreasonable doubt, not science as it is used and understood by practitioners.

By Torbjörn Larsson (not verified) on 15 Jun 2006 #permalink

"Evolution can't explain unicorns!"

"But there aren't any unicorns to explain. At least none that have been shown to exist, yet."

Macht,
Also, you are confused about the common usage of observation and experiment. A series of individual observations is an experiment. You use the observations to establish the experimental error. That is the certainty I spoke about in my earlier comment.

Mistakes and bad experiments (with undiscovered systematic errors, for example) happen, but these are sooner or later found by peer review, replicated experiments or too large a discrepancy with theory.

By Torbjörn Larsson (not verified) on 15 Jun 2006 #permalink

"Certain systems can't evolve because Behe hypothesizes they are IC? I'm pretty sure that's not what you actually want to assert."

Yeah, that's not what I wanted to say. Stupid mistake on my part. You are correct about what I was trying to say.

Christian Albert:

It would seem that IC is arguing for organized complexity as opposed to complex randomness. Doesn't organization imply reducibility because organization is basically repeated patterns? If something was irreducible wouldn't it be completely random apart from being mathematically impossible (Gödel's incompleteness theorem)?

Behe doesn't mean incompressible by irreducible. He means mutual interdependence between all the parts. Which assumes that there is one, unchanging function to the system, because whether the parts are interdependent depends on which function they are performing. He doesn't like it when others point out this assumption; you can see him do the razzle-dazzle in the transcript I linked to when the attorney did that. He also assumes that (Darwinian) evolution is mainly some sort of linear additive process (just keep adding on parts). (Mark noticed this.) I haven't read the whole transcript so I don't know if the attorney ever called him on this assumption and forced him to make it explicit. If Behe is honest, he states his assumptions upfront. I haven't seen enough of his work to know if he does this, but I doubt it.

By Andrew Wade (not verified) on 15 Jun 2006 #permalink

Well Macht, I think we've finally managed to figure out where you're coming from.

Well, seeing as how I think we can prove/disprove some things and I also think that we can know things that we haven't proven, you are wrong on both counts.

So, just what is it that you purport we can *know* without any proof? I'm willing to bet that I know the answer if you're honest, but I want to see if you're willing to come out and admit it.

"So, just what is it that you purport we can *know* without any proof?"

Physics, chemistry, biology, etc. There is lots of evidence, and there are well-supported theories, in those disciplines, but they don't really deal in proof. What is this, like, the 20th time I've said this?

Somewhat OT:

FWIW, there is at least one non-religious truth that can be known even in the absence of proof. You can play Godel games in plain English a lot easier than you can in number theory. For example:

Canuckistani cannot prove this statement is true.

It's obviously true, but you'll have to prove it yourself, since you can't take my word for it.

In fact, a portion of Chaitin's work in algorithmic complexity amounts to encoding the Berry paradox into formal number theory.

By Canuckistani (not verified) on 15 Jun 2006 #permalink

Andrew,

There is a long comment that discusses this at
http://www.pandasthumb.org/archives/2006/06/laudan_demarcat.html#commen…

The crux is:

"The long term evolution of most features of life has not been what Behe, or indeed most people, would call direct. And even short term evolution can be indirect in Behe's terms. So it is surprising to read, on page 40, Behe's argument against indirect evolution of IC systems. Here is the crux of it:

Even if a system is irreducibly complex (and thus cannot have been produced directly), however, one can not definitively rule out the possibility of an indirect, circuitous route. As the complexity of an interacting system increases, though, the likelihood of such an indirect route drops precipitously. (page 40) [Darwin's Black Box]

He simply asserts that evolution of irreducible complexity by an indirect route is so improbable as to be virtually out of the question, except in simple cases. He makes no special connection between indirect evolution and IC. He offers no evidence. He just asserts that it is too improbable."

So Mark's elegant disembowelment of the testability of this concept clarifies Behe's dishonest treatment further.

By Torbjörn Larsson (not verified) on 16 Jun 2006 #permalink

Torbjörn Larsson,

Falsifiability is a subtle matter. Many of the theories I was taught in physics were known to be wrong. Quantum Chromodynamics? Disagrees with experiment on a couple of values. (In something like the sixth decimal place, sure, but still not explicable by experimental error.) Also fails completely at very small scales and assumes a flat space-time. Nuclear theories (the liquid drop model, the shell model, the alpha particle in a well model)? All false. I believe some of them were even known to be false at the time they were introduced. General Relativity? Doesn't know how to handle quantum superpositions. Newtonian gravity? Wrong. All "theories in trouble" as the IDers would have it. But not abandoned, because they are useful. They predict things that are true and don't predict things that are false in some area better (more accurately, or more simply) than all their competitors. Each has a great deal of predictive power (albeit somewhat flawed). It's not so much that falsifiability allows us to know when to discard a theory as that falsifiability is necessary for a theory to be at all useful in the first place.

By Andrew Wade (not verified) on 16 Jun 2006 #permalink

I'm sorry, but I don't see the connection between my last comment and yours. I was responding to your comment on Behe's assumption of interdependence, or "direct" evolution as the Panda's commenter called it.

If you are discussing my answer to Macht, I agree that the question of falsifiability is subtle. I haven't studied Popper.

Four things. First, I think my comment to Macht is correct.

Second, the examples of failures you point out seem to me to be mainly theories outside their area of relevance or their singularities. All theories have both, and those don't falsify them.

Third, I'm not sure how strong falsifiability is, but it is subtle as you say. We can look at string theory as an illustration of this.

Some say it hasn't been tested.

But the theory is concocted by parts and methods that haven't yet been solidly mathematically connected with old theories. Still, it has been tested as correct for trivial properties and for nontrivial ones such as correspondence with semiclassical black hole properties. Further it explained channel properties in some particle experiments before an analogous flux tube method in QCD was invented. Or so they tell me. (And I don't know enough to see if either the string or the QCD explanation was contorted or natural.)

So some others say it has been tested against falsifiable predictions, both theoretical and experimental.

Finally, when asked what would falsify evolution, the answer on talkorigins from a biologist was a rabbit in pre-Cambrian strata. I think I can say that supraluminal speeds would falsify relativity, not move the area of relevance. For example, gauge theories become unstable in such circumstances ( http://arxiv.org/PS_cache/hep-th/pdf/0606/0606091.pdf )

In summary, I don't know exactly how strong falsifiability is, but I agree that it seems to have uses. If you follow my earlier link, the discussion has mentioned that demarcation criteria seem to be contingent. Since that is coherent with my view of science as something whose workings and possible results are hard to describe, I can live with that.

By Torbjörn Larsson (not verified) on 16 Jun 2006 #permalink

I'm sorry, but I don't see the connection between my last comment and yours.

Sorry, yes there is no connection and I screwed up in not giving any context at all to my comment. I have nothing further to add to your last comment; it expresses the comprehensive nature of Behe's wrongness so well. I was indeed discussing your answer to Macht.

Four things. First, I think my comment to Macht is correct.

Yes. However, the implications of a theory being falsified are rather different than is often assumed. I used your comment to go off on a tangent of my own that I think is quite important; unfortunately I did not make that clear. A pre-Cambrian rabbit (or rather many pre-Cambrian rabbits, one being possibly a fake) will falsify RM+NS, but by itself it will not cause the theory to be abandoned.

Second, the examples of failures you point out seem to me to be mainly theories outside their area of relevance or their singularities. All theories have both, and those don't falsify them.

That's a bit circular; the failure points of these theories play a part in determining what their area of relevance is. And even inside the area of relevance, there generally are some discrepancies with observation. The discrepancies are slight, but Newtonian gravity does not correctly predict the motions of the planets, its original domain. The errors of the nuclear theories are not slight, often being more than an order of magnitude for decay rates. (And more than that, the weaknesses of the nuclear theories on theoretical grounds are fairly well understood; slight tweaks will not save them.)

Third, I'm not sure how strong falsifiability is, but it is subtle as you say. We can look at string theory as an illustration of this.

String theory is an illuminative example. Because string theory is not in fact falsifiable; it is much like ID in this regard (Individual string theories may well be falsifiable, and it sounds like there has been very limited but nonetheless exciting progress in making them so.). But string theorists don't expect their theories to be accepted until they are falsifiable.
Opinions may vary on whether string theory or ID is a promising idea (and they do), but the string theorists have some hope of creating useful theories if there is something to string theory. Not so for the current bunch of clowns promoting ID, who seem either incapable of or uninterested in creating something substantive.
There is one other observation you made that bears on falsifiability: "[string theory] hasn't yet been solidly mathematically connected with old theories." Because if/when string theory ever supplants the old theories it will have to explain why the old theories appear to be true. Because while the old theories may be false, they reflect a real pattern in the data. Similarly, the theory of Evolution reflects a real pattern in the data even if pre-Cambrian rabbits are found, and any replacement to RM+NS is going to have to explain that pattern. RM+NS is falsifiable, but the theory is always going to be pretty good at explaining the fossil record. The epicycles of Ptolemy's successors have been abandoned, but the regularities they described have not disappeared. ID has done a fairly good job of fooling people into thinking some of the regularities described by the tree of life aren't there, but they are.

I think I can say that supraluminal speeds would falsify relativity, not move the area of relevance. For example, gauge theories become unstable in such circumstances ( http://arxiv.org/PS_cache/hep-th/pdf/0606/0606091.pdf )

Well, the area of relevance would be subluminal speeds, no? I don't disagree that relativity would be falsified (so long as supraluminal speeds are appropriately qualified), but it would nonetheless still have a wide area of applicability.

In summary, I don't know exactly how strong falsifiability is, but I agree that it seems to have uses. If you follow my earlier link, the discussion has mentioned that demarcation criteria seem to be contingent. Since that is coherent with my view of science as something whose workings and possible results are hard to describe, I can live with that.

Agreed.

By Andrew Wade (not verified) on 17 Jun 2006 #permalink

"A pre-Cambrian rabbit (or rather many pre-Cambrian rabbits, one being possibly a fake) will falsify RM+NS, but by itself it will not cause the theory to be abandoned."

As I said, science, its theory and its demarcation is contingent. This was the opinion of a biologist - if we observe these rabbits, common descent with modification (ie all evolutionary theories) must be wrong.

"That's a bit circular, the failure points of these theories plays a part is determining what their area of relevance is."

The failure points (singularities, area of relevance) are predicted by the theories, not observed - no circularity that I can see. They help falsification.

"And even inside the area of relevance, there generally are some discrepancies with observation."

Experimental error is known and controlled. For your first example, once relativity was known the area of relevance for Newtonian gravitation (not gravity) was predicted. The theory isn't falsified but continues to be used as a convenient approximation within its area of relevance.

The situation you describe for nuclear theories sounds like they aren't the final explanation, ie the theories aren't complete, but are used until a better theory comes about. This seems like a genuine problem for falsification fantasts - incomplete theories are allowed. (Ad hocs can't be a big problem.)

"Because string theory is not in fact falsifiable;"

This is wrong. It is even contested whether string theory has already been tested in a falsifiable way. The reason that it is wrong is that string theory has falsifiable predictions before, or at latest at, the Planck scale.

I have noted that philosophers distinguish between falsifiability and testability. Testability is logical falsifiability plus practical observability. String theory is logically falsifiable, but the practical observability is contested. Especially since some scientists prefer to allow tests against old theory.

"it is much like ID in this regard"

A contemporary discussion, which is supported by Mark's observations in this post and elsewhere, is found under the link on demarcation I gave above and centers around the vacuity of ID. Ie while string theory is well defined, connects with old theory and thus old observations, and gives predictions like AdS/CFT, ID does none of these things.

"Because if/when string theory ever supplants the old theories"

Note that QM and thermodynamics et cetera aren't going to be supplanted by string theory. String theory will have to make a good connection to them.

"Well, the area of relevance would be subluminal speeds, no?"

Not if relativity predicts subluminal speeds, and it does. Newtonian gravity never made any prediction about light and causality - that was its problem.

By Anonymous (not verified) on 18 Jun 2006 #permalink

Sorry, I have no idea why my last comment was anonymous and why i didn't see that at the time.

By Torbjörn Larsson (not verified) on 20 Jun 2006 #permalink

Rereading my last comment, it is inconsistent of sorts, since it makes a broader use of falsification.

Newtonian gravity wasn't falsified, merely constricted, since the new theory could explain it. This is a contingent use of falsification of course. I agree that when observations of Mercury had problems, the old theory wasn't falsified outright, but it was understood that it wasn't complete. Compare with phlogiston, which could be falsified outright.

I think Nick Matzke noted an analogous problem on the thread on Laudan and demarcation: "I think that this kind of critique of falsificationism, although very common, is always extremely tendentious because anyone who ever advocated falsifiability recognized that the whole point of it was that many claims could be shown to be false and excluded from the body of science because they were false. The problem that falsificationists were legitimately pointing to was that some claims are not even wrong, which sometimes (as with the God/ID did it claim) is more pernicious than being wrong."

By Torbjörn Larsson (not verified) on 20 Jun 2006 #permalink

The problem is that you seem to think that "prove," "hypothesize," "show," "demonstrate," and "assert" are all synonyms. They aren't.

Sigh. It's sad how badly the point was missed here. Mark, not being an imbecile, obviously does not think that. What Macht failed to understand is that Behe's argument -- that there are systems that cannot evolve -- can go through (ignoring all the other problems with it) if he can prove that there are IC systems. It's not enough to hypothesize or assert that there are IC systems, because that is simply an opinion, with no logical force. And what could possibly count as a "demonstration" that a system is IC, other than a proof that it is? "system S is IC" is not the sort of proposition that is amenable to empirical demonstration.

What you did in your original post is a proof. Scientists don't do that.

What ridiculous poppycock. Scientists deal with both empirical claims, established through observation, and analytical claims, established through logical or mathematical proof. But it seems that Macht would have scientists kicked out of the academy for applying modus ponens. Here's a clue: every time you add two numbers, you perform a proof. How else do you obtain the result? It's not simply a hypothesis or assertion, nor is it an empirical demonstration. a: first addend = 1 (premise). b: second addend = 2 (premise). c: 1 + 2 = 3 (theorem of arithmetic). d: first addend + second addend = 3 (a, b, c, Leibniz's Law). Q.E.D.
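For what it's worth, that little derivation really is machine-checkable; here's a sketch of it in Lean 4 (where `rfl` proves equations that hold by computation, and `rw` does the substitution step):

```lean
-- 1 + 2 reduces to 3 by computation, so reflexivity of equality
-- (`rfl`) is a complete proof.
example : 1 + 2 = 3 := rfl

-- The premise/conclusion structure from the comment above: from
-- a = 1 and b = 2, conclude a + b = 3 by substitution (Leibniz's
-- Law, rendered as the `rw` tactic).
example (a b : Nat) (ha : a = 1) (hb : b = 2) : a + b = 3 := by
  rw [ha, hb]
```

The point stands either way: ordinary arithmetic is proof, and scientists do it constantly.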

By Anonymous (not verified) on 06 Feb 2007 #permalink

can go through

Arrgh! Make that "can only go through".

By Anonymous (not verified) on 06 Feb 2007 #permalink

Macht, if any of the suggestions you make were true, IC can show nothing in regard to evolution. If Behe is "hypothesising" that some things MIGHT be IC, and thus MIGHT be a problem for evolution, then what makes IC any better than, say, "I hypothesise that some biological systems may be magical, and this might be a problem for evolution."

A bold assertion that something might exist, if there's no way to tell if it does or doesn't, or even to accumulate evidence either way, is MEANINGLESS.

Okay, someone just linked over here from the Pharyngula comment section, and I read over the section again.

Mark, you know I love you, and have been a long-time reader, but I think you're a bit off-base here.

Disclaimer: Anything Behe said is probably wrong (ignoring the fact that he's always wrong, of course). My definitions are my own.

Now, an IC system would simply be a system where, if you knocked out any piece, it stops working. It is not necessarily minimal! There may very well be a smaller system that *does* work. But all the immediately smaller systems don't. In other words, an IC system is a local minimum in complexity.

The fact that a system is IC doesn't say anything about its evolvability, of course. It is completely possible for evolution to build up a larger system and then whittle it down to an IC one, or for evolution to create a system with one task and then coopt it for a different task where it is IC. Anyone who maintains that this is a problem for evolution is an idiot who thinks that mutations are purely additive.

I'm just sayin' that IC *is* a well-defined and empirically testable quality.
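To make the contrast concrete, here's a toy sketch in Python (entirely my own model; the part names and the `works` predicate are invented for illustration, not anything from Behe) of the difference between the single-knockout test and the global-minimality question Mark's post is about:

```python
from itertools import combinations

def is_ic(parts, works):
    """The knockout test: the system works, but removing any single
    part breaks it -- a *local* minimum, and empirically testable."""
    parts = frozenset(parts)
    return works(parts) and all(not works(parts - {p}) for p in parts)

def is_globally_minimal(parts, universe, works):
    """Mark's notion: no system with strictly fewer parts (drawn from
    some universe of available parts) performs the function.  This is
    exponential in the size of the universe -- and for real biology
    the universe of possible parts isn't enumerable at all."""
    parts = frozenset(parts)
    if not works(parts):
        return False
    return not any(
        works(frozenset(sub))
        for r in range(len(parts))          # only strictly smaller systems
        for sub in combinations(universe, r)
    )

# Toy "function": the system works if it has both a gear and an axle,
# OR if it contains the self-contained part 'z'.
works = lambda s: {'gear', 'axle'} <= s or 'z' in s

print(is_ic({'gear', 'axle'}, works))      # True: knocking out either part breaks it
print(is_globally_minimal({'gear', 'axle'},
                          {'gear', 'axle', 'z'}, works))  # False: {'z'} is smaller and works
```

The knockout test is cheap and empirical; the global question quantifies over every smaller system, which is exactly what can't be done for real biological systems.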

By Xanthir, FCD (not verified) on 07 Jun 2007 #permalink