Complexity from Simplicity; or, Why Casey Luskin Needs a Math Class

One of my fellow ScienceBloggers, [Karmen at Chaotic Utopia](http://scienceblogs.com/chaoticutopia/2006/11/puzzling_at_a_simpleminde…) pointed out a spectacularly stupid statement in [Casey Luskin's critique of Carl Zimmer][lutkin] (*another* fellow SBer) at the Discovery Institute's "Center for Science and Culture". Now, normally I might not pile on to a tear-down of Casey (not because he doesn't deserve it, but because my SciBlings often do such a good job that I have nothing to add); but this time, he's crossed much too far into *my* territory, and I can't let that pass without at least a brief comment.

[lutkin]: http://www.evolutionnews.org/2006/11/evolution_nat_geo_1.html

So, here's the dumb statement:

>The article called evolution a "simple" process. In our experience, does a "simple" process generate
>the type of vast complexity found throughout biology?

Yes, one of the leading IDist writers on the net believes that in reality, simple processes don't generate complex results.

Karmen pointed out fractals as a beautiful example of the generation of complexity from simplicity. I'd like to point out something that, while lacking the artistic beauty of a well-chosen fractal, is an *even simpler* and possibly more profound example.

As long-time readers of GM/BM know, I'm fascinated by [cellular automata (CAs).][alpaca] CAs are an incredibly simple idea which can generate the most spectacularly complex behaviors out of pure triviality.

[alpaca]: http://scienceblogs.com/goodmath/2006/10/a_metalanguage_for_pathologica…

For the simplest example of this, line up a bunch of little tiny machines in a row. Each machine has an LED on top. The LED can be either on, or off. Once every second, all of the CAs simultaneously look at their neighbors to the left and to the right, and decide whether to turn their LED on or off based on whether their neighbors' lights are on or off. Here's a table describing one
possible set of rules for the decision about whether to turn the LED on or off.

| Current State | Left Neighbor | Right Neighbor | New State |
|---------------|---------------|----------------|-----------|
| On            | On            | On             | Off       |
| On            | On            | Off            | On        |
| On            | Off           | On             | On        |
| On            | Off           | Off            | On        |
| Off           | On            | On             | On        |
| Off           | On            | Off            | Off       |
| Off           | Off           | On             | On        |
| Off           | Off           | Off            | Off       |

So, for example, if a cell is on, its left neighbor is on, and its right neighbor is off, then for the next second, the cell will keep its light on. If a cell has its light on, and both its left and right neighbors have *their* lights on, then it will turn its light *off* for the next second.

This is known as the *rule 110* cellular automaton according to Stephen Wolfram's taxonomy of
one-dimensional CAs in [A New Kind of Science][wolfram]. Rule 110 is interesting for several reasons.

First, it's so trivial. Even writing it as a table like the one up there makes it look more
complicated than it should - it's an incredibly simple thing. And yet, it creates an amazing amount
of complexity. If you run rule 110 and stack the successive rows of lights chronologically - so that the initial row of lights is on top, the state that it went to after one second is below it, and so on - you can create a two dimensional image, like this one:

*[Image: CA_rule110s.png - the rows of a rule 110 run stacked chronologically, forming a pattern of nested triangles]*

That image is *bizarre*! It forms a pattern of triangles which, on the one hand, exhibits a lot of structure - large triangles are surrounded by smaller triangles, which are surrounded by smaller ones still, and the larger triangles often appear in almost linear patterns. On the other hand, the exact place where large triangles show up is chaotic and impossible to predict without *running* the automaton. Incredible complexity is being spawned by utter triviality.
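If you want to see this for yourself, here's a short Python sketch (my own illustration, not from the original post) that applies the rule table above to a fixed-width row of "lights" and prints each generation as a line of ASCII; the nested triangles emerge within a few dozen generations:

```python
# Rule 110 transition table: (left, current, right) -> new state.
# This is exactly the On/Off table from the post, with On=1 and Off=0.
RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(row):
    """Advance every cell one generation; cells beyond the edges count as off."""
    padded = [0] + row + [0]
    return [RULE_110[(padded[i - 1], padded[i], padded[i + 1])]
            for i in range(1, len(padded) - 1)]

if __name__ == "__main__":
    width = 64
    row = [0] * width
    row[-2] = 1  # a single lit LED near the right edge
    for _ in range(30):
        print("".join("#" if c else " " for c in row))
        row = step(row)
```

Rule 110 grows its pattern leftward from a single lit cell, which is why the seed sits near the right edge here.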

How complex can rule 110 CAs get? [110 is turing complete.][110] *Any* computational process, *anything* which can be computed by any mechanical device, can be computed using nothing but a single row of rule 110 CAs.

[110]: http://www.complex-systems.com/Archive/hierarchy/abstract.cgi?vol=15&is…

Another example? The most famous CA of them all is [Conway's "game of life"][life]. I've written about
Life before, because it's such an interesting system. Life is also turing complete - and someone has actually implemented a two-symbol [turing machine as a grid of Life cells][life-turing]. What's particularly interesting about that, from the point of view of Casey's statement about how a simple process like evolution doesn't generate complexity, is that the Life turing machine is built from a set of components *that were not designed by a human being*. The components were discovered by running a
genetic algorithm on relatively small Life grids and looking for stable patterns. All of the
parts of the Life turing machine - the clocks, the gears for moving the tapes, the tape cells, the tape read/write head - all of the complex machinery of the turing machine - were generated
randomly from an extremely simple set of rules.

[life]: http://www.ericweisstein.com/encyclopedias/life/
[life-turing]: http://www.cs.ualberta.ca/~bulitko/F02/papers/tm_words.pdf
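For anyone who wants to play with Life directly, here's a minimal sketch of its rules in Python (my own illustration, not from the post): live cells are stored as a set of coordinates, and one function computes the next generation.

```python
from itertools import product

def life_step(live):
    """One generation of Conway's Life.

    `live` is a set of (x, y) coordinates of live cells. A live cell with
    2 or 3 live neighbors survives; a dead cell with exactly 3 is born.
    """
    # Count live neighbors for every cell adjacent to some live cell.
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                key = (x + dx, y + dy)
                counts[key] = counts.get(key, 0) + 1
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": three cells in a row oscillate with period 2.
blinker = {(0, 0), (1, 0), (2, 0)}
```

That's the entire rule set. Run it on a glider - five cells arranged just so - and the pattern copies itself one cell diagonally every four generations: a self-propagating structure falling out of two lines of rules.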


"Life" was a formative part of my scientific understanding - it was face-slappingly apparent how complex systems could evolve from simple ones. I even use it as an example to this day on system evolution - even if, sadly, now I use it as an example in, say, designing project processes.

By DragonScholar (not verified) on 21 Nov 2006 #permalink

Complexity arising naturally out of simplicity is so incredibly observable and cool that the question is not just "how can they dismiss it?", but also "why would they dismiss it?" Some of the ID crowd may be nerds, but none of them are geeks. That is to say, they may look dweebish, but they seem to have no real fascination with the spectacularity of the commonplace (and true); no Metamagical Themas for them.

My favorite example of "complex behavior from simple things" are Turing Machines themselves, in particular, the Busy Beaver Turing machines.

Presuming that someone knows what a Turing Machine is, you can ask the simple sounding question "what is the maximum number of ones that an n-state Turing machine can write on an empty tape before halting?". It's fairly easy to determine that a one state Turing machine can write a single one, then halt. A two state machine can write 4. A three state machine can write six. A bit of work will show that a four state machine can write 13.

It is currently still unknown what the number is for a five state machine. A result by Marxen and Buntrock gives a five state machine that can write 4098 ones, but it is not yet known to be maximal. It executes for over 47 million cycles before halting.

A six state machine? It can write 1.29 * 10^865 ones. It runs for over 3 * 10^1730 cycles.

The best resource I know on the subject is http://www.drb.insel.de/~heiner/BB/. A blog post of mine on the subject, along with a tiny Turing Machine simulator that can run the proposed five state busy beaver machine is at http://brainwagon.org/archives/2006/09/19/2160/.
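The busy beaver question above is easy to experiment with. Here's a minimal Turing machine simulator (a sketch of my own, not from the comment) run on the standard 2-state busy beaver champion:

```python
def run_turing_machine(program, state="A", max_steps=10_000_000):
    """Run a Turing machine on an initially blank (all-zero) tape.

    `program` maps (state, symbol) -> (write, move, next_state), where
    move is +1 (right) or -1 (left) and "H" is the halt state.
    Returns (number of ones on the tape, steps executed).
    """
    tape = {}  # sparse tape: position -> symbol, default 0
    pos = 0
    steps = 0
    while state != "H" and steps < max_steps:
        write, move, state = program[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        steps += 1
    return sum(tape.values()), steps

# The 2-state busy beaver champion.
bb2 = {
    ("A", 0): (1, +1, "B"), ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"), ("B", 1): (1, +1, "H"),
}
```

`run_turing_machine(bb2)` returns `(4, 6)`: four ones written, six steps to halt - and that's already the best any 2-state machine can do.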

Even cone snails know that these fancy complex patterns can be built up from the merest simplicity.

Since this is the right type of blog to ask questions on this, I'm going to cut and paste from my earlier comments to get to those questions:

"Chaos, fractals and turing machines gets their full properties from the simple but subtle mechanism of iterations (recursions). And I can't think of anything more recursive than replicating life, filling much the same fitness and genetic landscape as the former population. One of those properties is a fixed point. Check: no mutations - OK.

According to computer scientists a turing capable process is the most efficient computing system there is, capable of processing or creating all algorithmic constructs. I think some of them believe that is as much as nature can do. (Since abstract math methods like infinities aren't feasible for a finite process.)

With this in mind it is no big reach to think provisionally that evolutionary algorithms can do everything naturally possible within biophysical constraints. (Not that I think any biologist thought otherwise. :-) Even such simple cellular automata like Conway's Game of Life are turing capable.

[A comment on complexity, a blanket, appearing from simple iterations, knitting] got me thinking of other biological analogies like that multicellular organisms are iterations of simple cell types, cell machinery is iterations of processes controlled by genes, genetic DNA is iterations of base pairs. Simple iterations making complexity on all levels. Not that we are discussing turing completeness here but constrained complexity, I think, but nevertheless complexity from humble origins."

So:
- Are such processes as chaos, turbulence, turing machines, related by iterative foldings (of phase space, for example)? I assume fractals are not exactly foldings.

- Are evolution and evolutionary algorithms turing complete, assuming for example that variation and selection change states (properties in a population) depending on 'input' from the fitness landscape?

By Torbjörn Larsson (not verified) on 21 Nov 2006 #permalink

I'm amazed by all the complexity unfolding from nothing more than 1,2,3,4... I wonder if that is specified, though :)

I meant:
"Are such processes as chaos, turbulence, turing machines, related by by iterative foldings" - Are complexity in such such processes as chaos, turbulence, turing machines, related by by iterative foldings.

By Torbjörn Larsson (not verified) on 21 Nov 2006 #permalink

Though I have to agree in any case, I wonder if the author does not mean "that type of complexity does not occur from simple processes"? The subject is explored in Wolfram's NKS book, and he gives an incredibly rich selection of examples suggesting that any complex behavior can occur, often most efficiently, in simple processes; but this is not strictly shown, and many people have objected to this opinion.

The question I am talking about is basically:

given that simple processes develop different, and massively diverse complexities, which of these is true?
1. all complex behavior can be modelled in simple processes
2. significantly different behavior can come from complex processes (as opposed to some simple process)

...and I have no idea if it is applicable, I did not see the quote in context. It does at least provide a possible interpretation. It is hard to imagine someone interested in sciences or math would not know about such a basic idea of complexity.

By Robbie Muffin (not verified) on 21 Nov 2006 #permalink

1. all complex behavior can be modelled in simple processes

Depends on how good the model is. The human brain is a complex klugey biological construct, well adapted to its environment. Good models would require something approaching it in complexity, I would think. That doesn't mean it couldn't emerge from a conceptually simple process (replication, variation, selection). "Conceptually simple" could still be complex, tho.

These CA examples are pretty and undeniably complex, but are they useful? (do they exhibit adaptation and thus the appearance of design?) Can we come up with examples of synthetic complex emergent behaviour in a real-world applications?

Torbjön:

First, let me clarify one point. A turing machine (or equivalent) is not the most efficient computing system. Turing equivalence says nothing about efficiency - in fact, a turing machine can be exponentially slower than a lambda calculus reduction system like the G machine; and a Minsky machine can be exponentially slower than a turing machine! The important fact about Turing machines and their equivalents is *capability*, not efficiency. Anything which *can be computed* by any possible computing device can be computed by a Turing machine - but no guarantee on how long it will take.

Moving on to the questions... You're asking some tough ones!

Sorry, accidentally posted that before I was done!

I'm not entirely sure if I understand your questions; as I was saying before, you're asking some tough ones. I'll do my best, and you can ask again if I didn't interpret your questions correctly!

- Chaos, turbulence, etc... I think that they're different domains. A chaotic process is computable, given sufficiently precise inputs for the initial values. Turbulence is a bit less clear; I'm not sure whether the full behavior of turbulence is deterministic. What we typically talk about in computability is deterministic computation - even when we use a non-deterministic machine, we expect a deterministic result. Honestly, I don't know if turbulent systems - fluid flow, for example - are really fully deterministic in the real world, or if things like quantum effects can "bubble up" enough to affect things like precise boundaries in turbulence. Fractals are back in the range of chaos: possibly *very* sensitive to initial conditions, but computable.

Evolutionary algorithms... A computing *device* is Turing complete, not an *algorithm*. That might sound like nitpicking, but the point is that the "language" generated by the evolutionary algorithm is what's turing complete, not the process of evolution itself. So for any evolutionary algorithm system, the question is whether or not the code that it generates is going to be executed on a Turing-capable machine. If you're working with an evolutionary algorithm system that generates code for a sub-turing machine, it's impossible for it to "evolve" turing completeness; the evolutionary algorithm system doesn't get to modify its "universe".

Physical reality clearly permits turing complete systems. And biological systems - including the biochemical machinery including DNA - are turing complete. So evolutionary algorithms running using living things (or parts of living things) as components are clearly capable of generating a program for *any* computable function, given enough time.

Does that answer your questions?

Cool, I didn't know rule 110 had been proved Turing Complete (knew about life though). Sounds to me like an ideal candidate for a new esoteric programming language.

By malpollyon (not verified) on 21 Nov 2006 #permalink

MarkCC: Determinism in your average, everyday sort of turbulence seems likely. The Kolmogorov scale is generally at or well above the mean free path in the fluid, and at that point there's not really enough energy left to push electrons around and do other quantum mechanical sorts of things. Maybe in turbulent plasmas there might be enough extra energy left over to drive turbulent dissipation down into the non-deterministic regimes, but I doubt it since it's usually all taken up by radiative effects that are largely independent of the flow itself.

An example from physics is the coupled spring. Let's say you have a stationary point on the ceiling, from which hangs a series combination of a spring, a weight and another spring. Couple the last to a simple harmonic oscillator. The weight moves in a very complex fashion, in a simple system.

By complex_field (not verified) on 22 Nov 2006 #permalink
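The coupled-spring example above is easy to reproduce numerically. Here's a sketch (with unit masses and spring constants chosen arbitrarily for illustration, not taken from the comment) using velocity-Verlet integration of two masses joined by three springs between fixed ends; starting with only the first mass displaced, the two normal modes superpose and the energy slowly sloshes back and forth between the masses:

```python
def simulate(x1=1.0, x2=0.0, v1=0.0, v2=0.0, k=1.0, dt=0.01, steps=5000):
    """Two unit masses, three unit springs: wall -- m1 -- m2 -- wall.

    Positions are displacements from rest. Returns the list of
    (x1, x2) pairs traced out over the run.
    """
    def accel(x1, x2):
        # Force on each mass from its two adjacent springs.
        return (-k * x1 + k * (x2 - x1),
                -k * (x2 - x1) - k * x2)

    path = []
    a1, a2 = accel(x1, x2)
    for _ in range(steps):
        # Velocity-Verlet: well suited to oscillatory systems.
        x1 += v1 * dt + 0.5 * a1 * dt * dt
        x2 += v2 * dt + 0.5 * a2 * dt * dt
        na1, na2 = accel(x1, x2)
        v1 += 0.5 * (a1 + na1) * dt
        v2 += 0.5 * (a2 + na2) * dt
        a1, a2 = na1, na2
        path.append((x1, x2))
    return path
```

Because the two mode frequencies are incommensurate (1 and √3 here), the trajectory of either mass never exactly repeats - a "very complex fashion, in a simple system," just as the comment says.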

"Physical reality clearly permits turing complete systems. And biological systems - including the biochemical machinery including DNA - are turing complete."

I don't think this is true. A central notion in both Turing machines and the concept of algorithm is stepping: a computation process is not continuous. So I don't believe it makes sense to talk about a biochemical process being Turing complete; computing and chemistry have a completely different nature in that regard. Of course a DNA model in a computer could be Turing complete; but that would be a model, not the actual process. The map is not the territory...

Stef:

DNA *is* Turing complete. Some researchers at IBM have been experimenting with using DNA as a nanotech computer. They have definitely shown it to be Turing complete. For example, here's a link to a paper that shows DNA computation is capable of generating any recursively enumerable language. Only Turing complete systems can generate all RE languages. (In fact, there's an alternative way of characterizing computational capability based on languages, and RE languages are the equivalent of Turing machines in that hierarchy.)

Malpollyon:

The problem with using 110 as a basis for an esolang is that it actually requires an infinite input string.

It's a pretty interesting concept for a computation system, but basically, you need to have an input that propagates itself down the computation. It's a fascinating effect. At some point, I'm going to do a series about CAs, and I'll describe how 110 does computation when I get to that.

"DNA *is* Turing complete. Some researchers at IBM have been experimenting with using DNA as a nanotech computer. "

Ok, you can use DNA to make a computer. But that's not my point. I'm just saying that "Turing completeness" can only be the property of a mathematical object. Like being "odd" or "even" is a property of a number; you would not say your cat is odd (well maybe :) or you would not say your house is derivable. Same here: a chemical process can not be Turing complete, by definition.

So I'm just pointing to a semantical confusion which can stretch far away; see for example the page http://en.wikipedia.org/wiki/Rule_110 where one can read (talking about rule 110 being Turing complete):

"This is an interesting result because Rule 110 is an extremely simple system, simple enough to suggest that naturally occurring physical systems may also be capable of universality--and hence questions about them will often be undecidable. This means they may not be amenable to closed-form mathematical solutions."

Nonsense:

1) A "natural system" being "capable of universality" just does not mean anything at all. First, a natural phenomenon is not a 'system' unless modeled. At that point, the model can possibly be proved to be Turing complete all right since we're talking about math objects. But the natural phenomenon has nothing to do with that. It is not a mathematical object. So to be precise, what we are really talking about is how a mathematical representation of a natural phenomenon relate to calculability.

2) "hence questions about them will often be undecidable". Here we are at the heart of confusion. Is a question about DNA likely to be undecidable because you can use DNA to make a computer ? Undecidability is a property of a sentence in a formal system. The questions one can ask about DNA are probably stated in English rather than A-C-G-T encoding, right ? so the usage of the concept of undecidability in that statement is utterly inappropriate.

3) the last sentence "this means they may not be amenable to closed-form mathematical solutions" I can't fathom either. I believe the person who wrote this does not know himself what it is supposed to mean. If it is that we have no hope of coming up with a complete, definitive and consistent mathematical description of some natural phenomena, well, we know that already; and not only for some phenomena, but for all of them. This is the very nature of science.

Stef

Not to mention the lowly logistic finite difference equation and other maps of the unit interval to itself. OTOH, it is also possible to get simplicity from randomness as in the Central Limit Theorem.
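The logistic map just mentioned is perhaps the quickest demonstration of the whole thread's theme. Here's a sketch (my own, with r = 4, where the map is fully chaotic) showing its sensitive dependence on initial conditions:

```python
def logistic_orbit(x0, r=4.0, n=60):
    """Iterate the logistic map x -> r*x*(1-x), returning the whole orbit."""
    orbit = [x0]
    for _ in range(n):
        orbit.append(r * orbit[-1] * (1 - orbit[-1]))
    return orbit

# Two starting points differing by one part in ten billion...
a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-10)
# ...diverge visibly within a few dozen iterations.
divergence = [abs(x - y) for x, y in zip(a, b)]
```

One multiplication and one subtraction per step, and yet the two orbits become completely uncorrelated: simplicity generating unpredictability.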

[Stef] By the first part of your argument, no physical device can be considered Turing complete, which seems pedantic to the point of obtuseness: yes, there is a sense in which the laptop on which I'm typing this is not a universal computer, but most of us would still be happy to call it Turing complete because the alternative is to get so bogged down in Platonic idealism that it isn't possible to say anything useful at all.

If we can get over that, then it is not so far-fetched to consider DNA Turing complete. Cell chemistry is only continuous from one modelling perspective. Certainly there are state changes involved that are effectively discrete, and the mechanics of molecular biology are built around those. I don't see anything absurd in describing such systems in computational terms. You can argue that that is just another model, but life does demonstrably work.

While the quoted Wikipedia suggestion that natural systems may be undecidable is pretty hand-waving, that doesn't automatically render the idea meaningless. Undecidability is about meta-questions, and those need not be phrased in the system language (though it should be possible to do so if you are sufficiently bloody minded). Such questions may be undecidable independent of whether the model is an approximation.

Mark:
"A turing machine (or equivalent) is not the most efficient computing system."
Sorry, I was posting on sugar fumes. I didn't try to clarify my vague notions, and here I forgot to distinguish between capability and efficiency. Of course CS, contrary to most other math, use efficiency.

"different domains"
Fair enough.

"Honestly, I don't know if turbulent systems - fluid flow for example - is really fully deterministic in the real world, or if things like quantum effects can "bubble up" enough to affect things like precise boundaries in turbulence."
IIRC it is in principle like that: energy dissipates on all scales, so large whorls drive small whorls. But see billb's scale discussion on why it probably doesn't work all the way down. And for 'default' incompressible ideal fluids we have the millennium prize attempts to show determinism in the Navier-Stokes equations.

"the point is that the "language" generated by the evolutionary algorithm is what's turing complete"
Erhm! Yes, of course. Did I tell you I was posting on sugar fumes?

"Does that answer your questions?"
It certainly does! Thanks for taking your time to unravel the tangled threads of my comment!

By Torbjörn Larsson (not verified) on 22 Nov 2006 #permalink

"Of course CS, contrary to most other math, use efficiency" - Of course CS, contrary to most other math, use efficiency as well.

By Torbjörn Larsson (not verified) on 22 Nov 2006 #permalink

[matt] I would accept a statement such as "my laptop is Turing complete" because it is a case of the usual broadening of concepts we need in order to communicate simply; but this should not compromise precision.

"If we can get over that, [... so ok let's get over that...] then it is not so far-fetched to consider DNA Turing complete."

IMHO it is: to me this is broadening the concept too much and having it lose its meaning.

"Certainly there are state changes involved that are effectively discrete, and the mechanics of molecular biology are built around those."

I'm not a biologist, although I did study it a bit, and I disagree here. The mechanics of molecular biology also include thermodynamical aspects which blur the discrete nature of individual transitions; chemistry does not reduce to quantum mechanics.

"I don't see anything absurd in describing such systems in computational terms. You can argue that that is just another model, but life does demonstrably work".

Well I don't want to sound pedantic again, but here you do presuppose that life is a kind of computation. In my view (which I don't claim is better than yours) life can not be reduced to information theory nor to mechanics so it does not even "work" at all... but this is a whole other debate and I guess we should not abuse this blog with it :)

"Undecidability is about meta-questions, and those need not be phrased in the system language (though it should be possible to do so if you are sufficiently bloody minded). Such questions may be undecidable independent of whether the model is an approximation"

Here I strongly disagree. Undecidability is commonly associated with meta-questions because of the way Gödel demonstrated his theorem: encoding a meta-question as a formal sentence and thus coming up with a logical paradox of the likes of Russell's barber. But undecidability as such is not about meta-questions at all: it is a property of a sentence *within* a formal system, with no implication of a meta-level whatsoever. In formal systems there is no need for a meaning to be associated with sentences; only their forms (hence "formal") matter, so the notion of "question" may not even arise. So if you want to study the decidability of a proposition (question) then you definitely need to formulate it, in your words, "in the system language". Only relative to that "system language" does undecidability have a meaning. The very same proposition/question may be undecidable in one system and decidable in another system (for example in the system you make by extending the former one with the proposition itself considered as an axiom).

Stef:
I'm not sure what you are arguing. My own reason to ask about turing capability was that creationists often try to come up with barriers for evolution. Apparently there are none in the computational sense. If it suits your thinking better: at least by the time we reach evolution on the level of organs, there are physical constraints that form the adaptations.

blurs the discrete nature of individual transitions

I think this may be a red herring, like the one matt points out between idealized and real turing machines - read/write memory makes the memory space large enough. Digital systems are capable of emulating analog systems, so conversely, even though discrete state changes may be necessary to discuss turing capability, I would think fuzzified ones could be enough to get it. Stochastic and reversible state changes, and likewise collective states, are probably also no barrier.

Undecidability

Another red herring, I think. Incompleteness of formal systems AFAIK makes them extendable to match anything nature may throw at them. (Within the constraints of the model, of course.) That it also means that we can't predict what nature will throw at us is what makes life exciting.

Here I take it to mean that an organism may in principle (within constraints) eventually meet any selective pressure, not known beforehand, and likewise may in principle (within constraints) eventually match that selection. Ie if a predator stumbles on adaptations that allows becoming faster, so can the prey.

Perhaps that explains why life is still here after 3.5 Ga.

By Torbjörn Larsson (not verified) on 23 Nov 2006 #permalink

"what nature will throw" - what nature may throw

Let's unconfuse undecidability and uncertainty...

By Torbjörn Larsson (not verified) on 23 Nov 2006 #permalink

"Stef:
I'm not sure what you are arguing."

never mind then :) I can't get much clearer I'm afraid.

regards

[stef] I'm not sure how it is possible to assert an intangible otherness of life without trespassing on the realm of religion -- a fair enough way to nullify debate, but not otherwise very useful.

Are you really arguing that semiconductor technology can be an acceptable substrate for Turing completeness -- or a reasonable approximation thereof -- while biochemical systems cannot? Either way the emergent digital consequences are built on a probabilistic foundation. All the processes of life in which DNA is involved depend on its discrete behaviour. That discreteness may not characterize things perfectly, but the approximation is good enough to support billions of years of terrestrial existence -- which is a damn sight more than you can say for any man-made computer.

As for undecidability, your assertions regarding its manifestations are mired in specificity. Yes, a given undecidable statement will be particular to its formal system, but adding the statement as an axiom does nothing to solve the fundamental problem of undecidability in the system: the whole Gödelian point is that the new system will be just as incomplete as the old. Undecidability is not specific, it is systemic. A question of decidability -- such as the halting problem -- may be demonstrated by particular examples, but the real issue is fundamental, and cannot be resolved by sidestepping any number of individual instances.

It is precisely the general case that matters, otherwise there would be no problem. And that is why it's all about meta-questions, not because any particular proof might recruit Russell's paradox. Which in turn means that we can talk about the problem in any language we choose, be it English or DNA.

I'm just saying that "Turing completeness" can only be the property of a mathematical object.

Everything that exists is a mathematical object.

By Caledonian (not verified) on 24 Nov 2006 #permalink

Sorry guys, I can't follow here.

I tried my best to explain what seems to me a very simple mathematical point; but with the current responses from different people all running along roughly the same lines ("red herring", "nullify debate" and "everything that exists is a mathematical object") I don't find any way to add something constructive.

basically all I had to say is in my previous posts.

..so I guess I just failed to convey my point of view :)

regards,

I figured I could summarize that "very simple mathematical point" before leaving the thread. Here it is:

"Turing machine" is a precisely defined mathematical concept; we should take care of using it only where and when it is relevant. else, well... we're doing bad math.

good math, bad math.

[matt] I'm not sure how it is possible to assert an intangible otherness of life without trespassing on the realm of religion -- a fair enough way to nullify debate, but not otherwise very useful.

It isn't obligatory to meet the boundary of religion and good science with an impenetrable barrier. That a boundary exists as a result of religion reflects a lack of faith that one's religion is more than traditions inherited from one's ancestors. Alternatively, it's a hobbling of the investigation of reality by saying this can be real and that cannot, without a thorough investigation of all possibilities.

back to the subject at hand....

I don't understand how a computer/silicon-based system is a valid turing machine/model of computation but a DNA/carbon-based system is not. 60+ years ago, a computer was a person who did computations - another carbon-based system of sorts. Not as fast or reliable a system as one's PC, but still it would be turing complete.

That a system can make errors doesn't invalidate its function as a computational model - a computer probably won't be very reliable if there's enough ionizing radiation around.

At a low level, a conventional computer doesn't have discrete steps. It looks like it from the outside, but the traces on the circuit board or in the CPU are messy curved voltage/current patterns. I see discrete steps because the system was designed to have a repeating control pattern.

DNA is messy too, but that doesn't make it non-computational. The molecules that the researchers built are surely designed the same way - there are repeating control patterns that can be abstracted into discrete steps.

By Z. Sesqui (not verified) on 24 Nov 2006 #permalink

[Stef] Your point is indeed simple and clearly stated. On the rest we must agree to differ. I'd like to apologise for the offensive tone of my last comment, anyway: it was entirely uncalled-for. Sorry.

Stef:
OK, now I understand. I have to agree with matt, you stated your point clearly. I missed the context by sloppy reading of your first comments. Sorry about that!

By Torbjörn Larsson (not verified) on 25 Nov 2006 #permalink

Stef wrote:
chemistry does not reduce to quantum mechanics.

What would be an example of this?